This is a simple guide that demonstrates how to use TileDB on Google Cloud Storage (GCS). After setting up TileDB to work with GCS, your TileDB programs will function properly without any API change! Instead of using local file system paths when creating/accessing groups, arrays, and VFS files, use URIs that start with `gcs://`. For instance, if you wish to create (and subsequently write/read) an array on GCS, you use the URI `gcs://<your-bucket>/<your-array-name>` for the array name.
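For instance, a minimal Python sketch (the bucket and array names are placeholders you would replace with your own):

```python
import numpy as np
import tiledb

# Hypothetical bucket and array names; replace with your own.
uri = "gcs://<your-bucket>/<your-array-name>"

# Create a simple 1D dense array on GCS; the code is identical to the
# local case except for the gcs:// URI scheme.
dim = tiledb.Dim(name="d", domain=(0, 9), tile=10, dtype=np.int32)
schema = tiledb.ArraySchema(
    domain=tiledb.Domain(dim),
    attrs=[tiledb.Attr(name="a", dtype=np.float64)],
)
tiledb.Array.create(uri, schema)

# Write and read back as usual.
with tiledb.open(uri, mode="w") as A:
    A[:] = np.arange(10, dtype=np.float64)
with tiledb.open(uri, mode="r") as A:
    print(A[:]["a"])
```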
TileDB supports authenticating to Google Cloud using Application Default Credentials. Authentication happens automatically if your application is running on Google Cloud, or if you have authenticated in your local environment with the `gcloud auth application-default login` command. In other cases, you can set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to point to a credentials file, such as a user-provided service account key.
For more control, you can manually provide the contents of credentials files as string config options. TileDB supports the following types of credentials:

Config option | Description |
---|---|
`vfs.gcs.service_account_key` | JSON string with a user-provided service account key |
`vfs.gcs.workload_identity_configuration` | JSON string with the configuration to obtain workload identity credentials |
If any of the above options are specified, Application Default Credentials will not be considered. If multiple options are specified, the one earlier in the table will be used.
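As an illustration, passing a service account key manually might look like the following sketch (the key file path is hypothetical):

```python
import tiledb

# Load a user-provided service account key (hypothetical path) and pass
# its JSON content directly as a config string.
with open("/path/to/service-account-key.json") as f:
    key_json = f.read()

config = tiledb.Config({"vfs.gcs.service_account_key": key_json})
ctx = tiledb.Ctx(config)
```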
You can connect to Google Cloud while impersonating a service account by setting the `vfs.gcs.impersonate_service_account` config option to either the name of a single service account or, for delegated impersonation, a comma-separated sequence of service accounts. The impersonation will be performed using the credentials configured by one of the methods above.
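For example, a sketch with a placeholder service account name:

```python
import tiledb

# Impersonate a single service account; for delegated impersonation,
# pass a comma-separated chain instead, e.g. "sa1@...,sa2@...".
config = tiledb.Config({
    "vfs.gcs.impersonate_service_account":
        "target-sa@my-project.iam.gserviceaccount.com"
})
ctx = tiledb.Ctx(config)
```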
The following config options are additionally available:

Config option | Description |
---|---|
`vfs.gcs.project_id` | The name of the project in which to create new buckets. Not required unless you are going to use the VFS to create buckets. |
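As a sketch, creating a bucket through the VFS might look like this (the project and bucket names are placeholders, and credentials are assumed to be configured as above):

```python
import tiledb

# vfs.gcs.project_id is only needed because we create a bucket below.
config = tiledb.Config({"vfs.gcs.project_id": "my-project"})
vfs = tiledb.VFS(config=config)
vfs.create_bucket("gcs://my-new-bucket")
```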
So far we explained that TileDB arrays and groups are stored as directories. There is no directory concept on GCS and other similar object stores. However, GCS uses the character `/` in object URIs, which allows the same conceptual organization as a directory hierarchy in local storage. At a physical level, TileDB stores on GCS all the files it would create locally as objects. For instance, for array `gcs://bucket/path/to/array`, TileDB creates array schema object `gcs://bucket/path/to/array/__array_schema.tdb`, fragment metadata object `gcs://bucket/path/to/array/<fragment>/__fragment_metadata.tdb`, and similarly all the other files/objects. Since there is no notion of a "directory" on GCS, nothing special is persisted on GCS for directories, e.g., `gcs://bucket/path/to/array/<fragment>/` does not exist as an object.
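You can observe this physical layout by listing the objects under an array URI with the VFS; a sketch (with a placeholder URI, and assuming credentials are configured):

```python
import tiledb

vfs = tiledb.VFS()
# Lists the objects under the array URI; "directories" are purely a
# naming convention based on the / character in object names.
for obj in vfs.ls("gcs://bucket/path/to/array"):
    print(obj)
```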
TileDB writes the various fragment files as append-only objects using the insert object API of the Google Cloud C++ SDK. In addition to enabling appends, this API renders the TileDB writes to GCS particularly amenable to optimizations via parallelization. Since TileDB updates arrays only by writing (appending to) new files (i.e., it never updates a file in-place), TileDB does not need to download entire objects, update them, and re-upload them to GCS. This leads to excellent write performance.
TileDB reads utilize the range GET request API of the GCS SDK, which retrieves only the requested (contiguous) bytes from a file/object, rather than downloading the entire file from the cloud. This results in extremely fast subarray reads, especially because of the array tiling. Recall that a tile (which groups cell values that are stored contiguously in the file) is the atomic unit of IO. The range GET API enables reading each tile from GCS in a single request. Finally, TileDB performs all reads in parallel using multiple threads, the number of which is a tunable configuration parameter.
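A sketch of tuning this parallelism via TileDB's general concurrency options (the values shown are illustrative, not recommendations):

```python
import tiledb

# Illustrative values; the defaults are usually sensible.
config = tiledb.Config({
    # Number of threads used for IO operations.
    "sm.io_concurrency_level": "16",
    # Minimum number of bytes per parallel VFS request.
    "vfs.min_parallel_size": str(4 * 1024 * 1024),
})
ctx = tiledb.Ctx(config)
```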
While TileDB provides a native GCS backend implementation using the Google Cloud C++ SDK, it is also possible to use GCS via the GCS-S3 compatibility API with our S3 backend. Doing so requires setting several configuration parameters:

Config option | GCS setting |
---|---|
`"vfs.s3.endpoint_override"` | `"storage.googleapis.com"` |
`"vfs.s3.region"` | `"auto"` |
`"vfs.s3.use_multipart_upload"` | `"false"` |
`"vfs.s3.aws_access_key_id"`, `"vfs.s3.aws_secret_access_key"` | Override here, or set as usual using AWS settings or environment variables. |
Note: `vfs.s3.use_multipart_upload=true` may work with recent GCS updates, but has not yet been tested/evaluated by TileDB.
Full example for GCS via S3 compatibility in Python:
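A sketch of such an example, assuming HMAC interoperability credentials generated in the Google Cloud console (the key values and bucket name are placeholders):

```python
import tiledb

# Point the S3 backend at the GCS S3-compatibility endpoint. The HMAC
# key ID/secret below are placeholders; generate real HMAC credentials
# in the Google Cloud console (Cloud Storage > Settings >
# Interoperability).
config = tiledb.Config({
    "vfs.s3.endpoint_override": "storage.googleapis.com",
    "vfs.s3.region": "auto",
    "vfs.s3.use_multipart_upload": "false",
    "vfs.s3.aws_access_key_id": "<your-hmac-access-key>",
    "vfs.s3.aws_secret_access_key": "<your-hmac-secret>",
})
ctx = tiledb.Ctx(config)

# Access the bucket through s3:// URIs, as with any S3-compatible store.
vfs = tiledb.VFS(ctx=ctx)
print(vfs.is_bucket("s3://<your-bucket>"))
```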