This is a simple guide that demonstrates how to use TileDB on HDFS. HDFS is a distributed, Java-based filesystem for storing large amounts of data, and it is the underlying distributed storage layer of the Hadoop stack.
TileDB integrates with HDFS through the libhdfs library (the HDFS C API). The HDFS backend is enabled by default, and libhdfs is loaded at runtime based on the following environment variables:
- `HADOOP_HOME`: The root of your installed Hadoop distribution. TileDB will search the path `${HADOOP_HOME}/lib/native` for the `libhdfs` shared library.
- `JAVA_HOME`: The location of the Java SDK installation. The `JAVA_HOME` variable may point to the root of the Java SDK, to the JRE itself, or to the directory containing the `libjvm` library.
- `CLASSPATH`: The Java classpath including the Hadoop jar files. The correct classpath can be generated with the Hadoop utility: `${HADOOP_HOME}/bin/hadoop classpath --glob`.
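As a sketch, the three standard variables (`HADOOP_HOME`, `JAVA_HOME`, `CLASSPATH`) might be exported as follows before launching a TileDB program; the installation paths below are illustrative and must be adjusted to your system:

```shell
# Illustrative paths -- adjust to your Hadoop and Java installations.
export HADOOP_HOME=/opt/hadoop
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk

# Generate the full Hadoop classpath, expanding wildcards into jar paths.
export CLASSPATH=$("${HADOOP_HOME}"/bin/hadoop classpath --glob)
```

Using `hadoop classpath --glob` (rather than hand-listing jars) keeps the classpath correct across Hadoop upgrades.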
If the libhdfs library cannot be found, or if it cannot locate its required library dependencies at runtime, an error will be returned.
To use HDFS with TileDB, simply change the array URI to an HDFS path.
For instance, if you are running a local HDFS namenode on port 9000:
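The URI would look like the following (the array name `my_array` is a hypothetical placeholder):

```
hdfs://localhost:9000/my_array
```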
If you want to use the namenode specified in your HDFS configuration files, then change the prefix to:
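The empty-authority form `hdfs:///` (no host or port) tells libhdfs to resolve the default namenode from your Hadoop configuration (`fs.defaultFS`). A sketch, again with a hypothetical array name:

```
hdfs:///my_array
```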
Most HDFS configuration variables are defined in Hadoop-specific XML files. TileDB additionally allows the following settings to be overridden at runtime through its configuration parameters:
- `vfs.hdfs.username`: Optional runtime username to use when connecting to the HDFS cluster.
- `vfs.hdfs.name_node_uri`: Optional namenode URI to use (TileDB will fall back to the namenode defined in your HDFS configuration files if this is unset).
- `vfs.hdfs.kerb_ticket_cache_path`: Path to the Kerberos ticket cache to use when connecting to a secured HDFS cluster.
The Configuration page explains how to set configuration parameters in TileDB.