Google Cloud Storage

This is a simple guide that demonstrates how to use TileDB on Google Cloud Storage (GCS). After setting up TileDB to work with GCS, your TileDB programs will function properly without any API changes. Instead of using local file system paths when creating/accessing groups, arrays, and VFS files, use URIs that start with gcs://. For instance, to create (and subsequently write/read) an array on GCS, use the URI gcs://<your-bucket>/<your-array-name> as the array name.

GCS Setup

  1. Navigate to the Google Cloud Console.
  2. Create a service account.
  3. Set your authentication environment variable. The simplest option is to set the credentials path as an environment variable:
     ```
     export GOOGLE_APPLICATION_CREDENTIALS=/path/to/gcs-creds.json
     ```
  4. Configure your GCP project id by setting the vfs.gcs.project_id key in a configuration object (see Configuration).
TileDB will automatically authenticate using the configuration file referenced by the environment variable set in step 3. Bucket operations will be performed within the namespace of your configured project id.

Physical Organization

So far, we have explained that TileDB arrays and groups are stored as directories. There is no directory concept on GCS and other similar object stores. However, GCS uses the character / in object URIs, which allows the same conceptual organization as a directory hierarchy in local storage. At the physical level, TileDB stores all the files it would create locally as GCS objects. For instance, for array gcs://bucket/path/to/array, TileDB creates the array schema object gcs://bucket/path/to/array/__array_schema.tdb, the fragment metadata object gcs://bucket/path/to/array/<fragment>/__fragment_metadata.tdb, and similarly all the other files/objects. Since there is no notion of a “directory” on GCS, nothing special is persisted on GCS for directories; e.g., gcs://bucket/path/to/array/<fragment>/ does not exist as an object.


TileDB writes the various fragment files as append-only objects using the insert object API of the Google Cloud C++ SDK. In addition to enabling appends, this API makes TileDB writes to GCS particularly amenable to optimization via parallelization. Since TileDB updates arrays only by writing new files (i.e., it never updates a file in place), it does not need to download entire objects, update them, and re-upload them to GCS. This leads to excellent write performance.
TileDB reads utilize the range GET request API of the GCS SDK, which retrieves only the requested (contiguous) bytes from a file/object, rather than downloading the entire file from the cloud. This results in extremely fast subarray reads, especially because of the array tiling. Recall that a tile (which groups cell values that are stored contiguously in the file) is the atomic unit of I/O. The range GET API enables reading each tile from GCS in a single request. Finally, TileDB performs all reads in parallel using multiple threads, and the degree of parallelism is a tunable configuration parameter.

Using GCS via the S3 compatibility API

While TileDB provides a native GCS backend implementation using the Google Cloud C++ SDK, it is also possible to use GCS via the GCS-S3 compatibility API using our S3 backend. Doing so requires setting several configuration parameters:
| Config option | GCS setting |
| --- | --- |
| "vfs.s3.endpoint_override" | "storage.googleapis.com" |
| "vfs.s3.region" | "auto" |
| "vfs.s3.use_multipart_upload" | "false" |
| "vfs.s3.aws_access_key_id", "vfs.s3.aws_secret_access_key" | Override here, or set as usual using AWS settings or environment variables. |

Note: vfs.s3.use_multipart_upload=true may work with recent GCS updates, but has not yet been tested/evaluated by TileDB.
Full example for GCS via S3 compatibility in Python:
```python
import tiledb
import numpy as np

# update this
# uri = "s3://your-bucket/array-path"
uri = "s3://isaiah-test-bucket/test-11apr2023"

# read credentials from 'creds.nogit' file in the current
# directory, newline separated:
# "key\nsecret"
creds_path = "creds.nogit"
key, secret = [x.strip() for x in open(creds_path).readlines()]

# GCS configuration via the S3 compatibility API
config = tiledb.Config()
config["vfs.s3.endpoint_override"] = "storage.googleapis.com"
config["vfs.s3.aws_access_key_id"] = key
config["vfs.s3.aws_secret_access_key"] = secret
config["vfs.s3.region"] = "auto"
config["vfs.s3.use_multipart_upload"] = "false"

# context
ctx = tiledb.Ctx(config=config)

# create a sample array if it does not exist
vfs = tiledb.VFS(ctx=ctx)
if not vfs.is_dir(uri):
    print("trying to write:", uri)
    a = np.arange(5)
    schema = tiledb.schema_like(a, ctx=ctx)
    tiledb.Array.create(uri, schema)
    with tiledb.DenseArray(uri, "w", ctx=ctx) as T:
        T[:] = a

print("reading back from:", uri)
with tiledb.DenseArray(uri, ctx=ctx) as T:
    print(T[:])
```