Azure Blob Storage
This is a simple guide that demonstrates how to use TileDB on Azure Blob Storage. After configuring TileDB to work with Azure, your TileDB programs will function properly without any API change! Instead of using local file system paths for referencing files (e.g., arrays, groups, VFS files), you must format your URIs to start with azure://. For instance, if you wish to create (and subsequently write and read) an array on Azure, use a URI of the format azure://<storage-container>/<your-array-name> for the array name.
TileDB does not support storage accounts with a hierarchical namespace enabled.
Sign in to the Azure portal, creating a new account if necessary.
On the Azure portal, click on the Storage accounts service. Select the +Add button to navigate to the Create a storage account form.
Complete the form and create the storage account. You may use a Standard or Premium Block Blob account type.
In your application, set the "vfs.azure.storage_account_name" config option or the AZURE_STORAGE_ACCOUNT environment variable to the name of your storage account.
Alternatively, you can directly set the endpoint to connect to.
TileDB supports authenticating to Azure through Microsoft Entra ID, access keys, and shared access signature (SAS) tokens.
Microsoft Entra ID is the recommended way to authenticate to Azure and provides superior security and fine-grained access compared to shared keys. It is enabled by default and you do not need to specifically configure TileDB to use it. Credentials are obtained automatically from the following sources in order:
Managed identities for Azure compute resources
Only system-assigned managed identities are currently supported.
Workload identities for Kubernetes
When the Azure backend is initialized, it attempts to obtain credentials from the sources above in order. If no credentials can be obtained, TileDB falls back to anonymous authentication.
Manually selecting which authentication method to use is not currently supported.
Microsoft Entra ID will not be used if any of the following conditions apply:
The vfs.azure.storage_account_key or vfs.azure.storage_sas_token configuration options are specified.
The AZURE_STORAGE_KEY or AZURE_STORAGE_SAS_TOKEN environment variables are specified.
A custom endpoint is specified that is not using HTTPS.
TileDB does not currently support the following features when connecting to Azure with Microsoft Entra ID:
Selecting a specific credential source without trying to authenticate with the others.
Authenticating with a service principal specified in config options instead of environment variables.
Authenticating with a user-assigned managed identity.
Make sure to assign the right roles to the identity used with TileDB. The general Reader and Contributor roles do not provide access to data inside the storage accounts. You must assign the Storage Blob Data Reader or Storage Blob Data Contributor role in order to read or write data, respectively.
Authentication with shared keys is considered insecure; we recommend using Microsoft Entra ID instead.
Once your storage account has been created, navigate to its landing page. From the left menu, select the Access keys option. Copy the Storage account name and one of the auto-generated keys.
Set the following options in a configuration object (see Configuration) or via environment variables, using the storage account name and key from the previous step:

"vfs.azure.storage_account_name" (environment variable: AZURE_STORAGE_ACCOUNT; default: "")
"vfs.azure.storage_account_key" (environment variable: AZURE_STORAGE_KEY; default: "")
Navigate to the new storage account landing page. From the left menu, select the "Shared Access Signature" option. Keep the checked defaults, and under Allowed resource types select Container. Set an appropriate expiration date (note that SAS tokens cannot be revoked). Click Generate SAS and connection string. Copy the SAS token (the second entry) and use it in the TileDB config or environment variable:
"vfs.azure.storage_sas_token" (environment variable: AZURE_STORAGE_SAS_TOKEN; default: "")
So far, we explained that TileDB arrays and groups are stored as directories. There is no directory concept on Azure Blob Storage (as in other popular object stores). However, Azure uses the character / in object URIs, which allows the same conceptual organization as a directory hierarchy in local storage. At the physical level, TileDB stores as objects on Azure all the files it would create locally. For instance, for array azure://container/path/to/array, TileDB creates the array schema object azure://container/path/to/array/__array_schema.tdb, the fragment metadata object azure://container/path/to/array/<fragment>/__fragment_metadata.tdb, and similarly all the other files/objects. Since there is no notion of a "directory" on Azure, nothing special is persisted for directories; e.g., azure://container/path/to/array/<fragment>/ does not exist as an object.
TileDB writes the various fragment files as append-only objects using the block-list upload API of the Azure SDK for C++. In addition to enabling appends, this API renders the TileDB writes to Azure particularly amenable to optimizations via parallelization. Since TileDB updates arrays only by writing (appending to) new files (i.e., it never updates a file in-place), TileDB does not need to download entire objects, update them, and re-upload them to Azure. This leads to excellent write performance.
TileDB reads utilize the range GET blob request API of the Azure SDK, which retrieves only the requested (contiguous) bytes from a file/object, rather than downloading the entire file from the cloud. This results in extremely fast subarray reads, especially because of the array tiling. Recall that a tile (which groups cell values that are stored contiguously in the file) is the atomic unit of IO. The range GET API enables reading each tile from Azure in a single request. Finally, TileDB performs all reads in parallel using multiple threads, which is a tunable configuration parameter.
By default, the blob endpoint is set to https://foo.blob.core.windows.net, where foo is the storage account name set by the "vfs.azure.storage_account_name" config option or the AZURE_STORAGE_ACCOUNT environment variable. You can use the "vfs.azure.blob_endpoint" config parameter to override the default blob endpoint.
"vfs.azure.blob_endpoint" (default: "")
If the custom endpoint contains a SAS token, the "vfs.azure.storage_sas_token" option must not be specified.