This is a simple guide that demonstrates how to use TileDB on Azure Blob Storage. After configuring TileDB to work with Azure, your TileDB programs will function properly without any API changes! Instead of using local file system paths when referencing files (e.g., arrays, groups, VFS files), you must format your URIs to start with azure://. For instance, if you wish to create (and subsequently write/read) an array on Azure, you use a URI of the form azure://<storage-container>/<your-array-name> for the array name.
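As a minimal sketch (the container and array names below are hypothetical), the only change relative to local storage is how the URI is formed:

```python
# Hypothetical container and array names for illustration.
container = "my-storage-container"
array_name = "my-array"

# Local path vs. Azure URI: the TileDB API calls are identical;
# only the URI scheme changes.
local_uri = f"/data/{array_name}"
azure_uri = f"azure://{container}/{array_name}"

print(azure_uri)  # azure://my-storage-container/my-array
```

This `azure_uri` would then be passed to any TileDB call that accepts an array or group URI, exactly as a local path would be.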
1. Sign in to the Azure portal, creating a new account if necessary.
2. On the Azure portal, click on the Storage accounts service, then click the +Add button to navigate to the Create storage account form.
3. Complete the form and create the storage account. You may use BlobStorage as the account type.
4. Once your storage account has been created, navigate to its landing page. On the left-hand side, select Access keys. Note the Storage account name and one of the auto-generated keys.
Set the following keys in a configuration object (see Configuration), using the storage account name and key from the last step. If your storage account name is foo, your endpoint is foo.blob.core.windows.net.
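A minimal sketch of such a configuration, using the TileDB Azure configuration parameter names (the account name and key values are placeholders; never hard-code real keys in source):

```python
# TileDB configuration parameters for Azure (placeholder values).
config = {
    "vfs.azure.storage_account_name": "foo",
    "vfs.azure.storage_account_key": "<your-access-key>",
    "vfs.azure.blob_endpoint": "foo.blob.core.windows.net",
}

# With the TileDB Python API, this would typically be applied as:
#   import tiledb
#   ctx = tiledb.Ctx(tiledb.Config(config))
```

Every TileDB operation performed through a context created this way will then resolve azure:// URIs against this storage account.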
So far we have explained that TileDB arrays and groups are stored as directories. There is no directory concept on Azure Blob Storage (similar to other popular object stores). However, Azure uses the character / in object URIs, which allows the same conceptual organization as a directory hierarchy in local storage. At a physical level, TileDB stores on Azure all the files it would create locally as objects. For instance, for array azure://container/path/to/array, TileDB creates array schema object azure://container/path/to/array/__array_schema.tdb, fragment metadata object azure://container/path/to/array/<fragment>/__fragment_metadata.tdb, and similarly all the other files/objects. Since there is no notion of a "directory" on Azure, nothing special is persisted for directories, e.g., azure://container/path/to/array/<fragment>/ does not exist as an object.
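The mapping from an array URI to the object names above can be sketched as follows (the fragment name is a hypothetical placeholder; real fragment names are generated by TileDB):

```python
array_uri = "azure://container/path/to/array"
fragment = "<fragment>"  # placeholder for a TileDB-generated fragment name

# Objects that actually exist on Azure for this array:
schema_object = f"{array_uri}/__array_schema.tdb"
fragment_metadata = f"{array_uri}/{fragment}/__fragment_metadata.tdb"

# The "directory" prefixes themselves are never created as objects:
# f"{array_uri}/{fragment}/" exists only as a common prefix shared
# by the object names underneath it.
```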
TileDB writes the various fragment files as append-only objects using the block-list upload API of the Azure Storage cpplite SDK. In addition to enabling appends, this API renders the TileDB writes to Azure particularly amenable to optimizations via parallelization. Since TileDB updates arrays only by writing (appending to) new files (i.e., it never updates a file in-place), TileDB does not need to download entire objects, update them, and re-upload them to Azure. This leads to excellent write performance.
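The block-list upload pattern can be illustrated with a conceptual sketch (this is plain Python standing in for the Azure SDK, not its actual API): writes stage independent blocks, which can be uploaded in parallel, and a final commit assembles them into one object without ever rewriting data in place.

```python
# Conceptual sketch of block-list ("append-only") upload.
class BlockListUpload:
    def __init__(self):
        self.blocks = []          # staged blocks, in append order
        self.committed = b""      # the final object contents

    def stage_block(self, data: bytes) -> int:
        # Each staged block is independent, so multiple blocks can be
        # uploaded concurrently before the commit.
        self.blocks.append(data)
        return len(self.blocks) - 1

    def commit(self) -> None:
        # The commit fixes the block order; no existing data is modified.
        self.committed = b"".join(self.blocks)

upload = BlockListUpload()
upload.stage_block(b"tile-1 bytes")
upload.stage_block(b"tile-2 bytes")
upload.commit()
```

Because TileDB only ever appends new blocks in this fashion, it avoids the download-modify-reupload cycle described above.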
TileDB reads utilize the range GET blob request API of the Azure Storage SDK, which retrieves only the requested (contiguous) bytes from a file/object, rather than downloading the entire file from the cloud. This results in extremely fast subarray reads, especially because of the array tiling. Recall that a tile (which groups cell values that are stored contiguously in the file) is the atomic unit of IO. The range GET API enables reading each tile from Azure in a single request. Finally, TileDB performs all reads in parallel using multiple threads, the number of which is a tunable configuration parameter.
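The read path can be sketched conceptually as follows: a local bytes buffer stands in for the remote object, `fetch_range` stands in for a range GET blob request, and a thread pool issues one ranged request per needed tile (the tile size and tile indices are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

# A 4 KiB "object" standing in for a file stored on Azure.
obj = bytes(range(256)) * 16
TILE_SIZE = 1024  # hypothetical tile size

def fetch_range(offset: int, length: int) -> bytes:
    # Stand-in for a range GET: only these bytes cross the network.
    return obj[offset:offset + length]

# A subarray read needs only tiles 1 and 3; fetch them in parallel,
# one ranged request per tile, without downloading the whole object.
wanted_tiles = [1, 3]
with ThreadPoolExecutor(max_workers=4) as pool:
    tiles = list(pool.map(
        lambda t: fetch_range(t * TILE_SIZE, TILE_SIZE), wanted_tiles))
```

Only 2 KiB of the 4 KiB object is transferred here, which is the essence of why tiling plus ranged reads makes subarray queries fast.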