Below we provide a table with all the TileDB configuration parameters, along with their description and default values. See Configuration for information on how to set them.
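For instance, with the Python API (a sketch assuming the `tiledb` package is installed), a parameter from the table below can be set on a `tiledb.Config` object and applied through a context:

```python
import tiledb

# Create a configuration object and override one parameter from the table.
cfg = tiledb.Config()
cfg["sm.tile_cache_size"] = "50000000"  # tile cache size in bytes

# A context created with this config applies it to all subsequent operations.
ctx = tiledb.Ctx(cfg)
```

Values are stored as strings; the library parses them to the appropriate type for each parameter.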
This is applicable only if
The factor by which the size of the dense fragment resulting from consolidating a set of fragments (containing at least one dense fragment) can be amplified. This is important when the union of the non-empty domains of the fragments to be consolidated has many empty cells, which the consolidated fragment must fill with the special fill value (since the resulting fragment is dense).
The size (in bytes) of the attribute buffers used during consolidation.
The maximum number of fragments to consolidate in a single step.
The minimum number of fragments to consolidate in a single step.
The size ratio of two (“adjacent”) fragments must exceed this value for them to be considered for consolidation in a single step.
The number of consolidation steps to be performed when executing the consolidation algorithm.
The consolidation mode, one of
The vacuuming mode, one of
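As a sketch of how the consolidation parameters above fit together (Python API; the array URI is a placeholder, and the parameter names follow the `sm.consolidation.*` naming used by TileDB):

```python
import tiledb

cfg = tiledb.Config({
    "sm.consolidation.steps": "2",              # number of consolidation steps
    "sm.consolidation.step_min_frags": "2",     # min fragments per step
    "sm.consolidation.step_max_frags": "8",     # max fragments per step
    "sm.consolidation.step_size_ratio": "0.5",  # adjacency size-ratio threshold
    "sm.consolidation.amplification": "1.5",    # allowed dense amplification factor
    "sm.consolidation.buffer_size": "50000000", # attribute buffer size (bytes)
})

# "my_array" is a placeholder URI.
tiledb.consolidate("my_array", config=cfg)
```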
Determines whether or not TileDB will install signal handlers.
The memory budget for tiles of fixed-sized attributes (or offsets for var-sized attributes) to be fetched during reads.
The memory budget for tiles of var-sized attributes to be fetched during reads.
The memory budget used by the read algorithm to force-partition the query range when sorting is much slower than the partitioning overhead.
Upper bound on the number of threads allocated for compute-bound tasks (default: the number of cores).
Upper bound on the number of threads allocated for IO-bound tasks (default: the number of cores).
The number of threads allocated for the TBB thread pool. Note: this is a whole-program setting. Usually this should not be modified from the default. See also the documentation for TBB's
The tile cache size in bytes.
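A sketch of tuning the read memory budgets and thread upper bounds together, assuming the `sm.memory_budget*`, `sm.tile_cache_size`, and `sm.*_concurrency_level` parameter names correspond to the entries described above:

```python
import tiledb

cfg = tiledb.Config({
    "sm.memory_budget": "5000000000",       # fixed-size attribute tiles (bytes)
    "sm.memory_budget_var": "10000000000",  # var-size attribute tiles (bytes)
    "sm.tile_cache_size": "10000000",       # tile cache (bytes)
    "sm.compute_concurrency_level": "4",    # compute-bound thread upper bound
    "sm.io_concurrency_level": "4",         # IO-bound thread upper bound
})
ctx = tiledb.Ctx(cfg)
```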
The maximum number of parallel operations on objects with
If set to
Permissions to use for posix file system with file creation.
Permissions to use for posix file system with directory creation.
The minimum number of bytes between two VFS read batches.
The minimum number of bytes in a VFS read operation.
The minimum number of bytes in a parallel VFS operation, except parallel S3 writes, which are controlled by parameter
The maximum byte size to read-ahead from the backend.
The total maximum size of the read-ahead cache, which uses an LRU eviction policy.
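The VFS read-tuning knobs above can be set together; a sketch (the values are illustrative, the `vfs.*` names follow TileDB's naming):

```python
import tiledb

cfg = tiledb.Config({
    "vfs.min_batch_gap": "512000",           # min bytes between two read batches
    "vfs.min_batch_size": "20000000",        # min bytes per VFS read operation
    "vfs.min_parallel_size": "10000000",     # min bytes per parallel VFS operation
    "vfs.read_ahead_size": "102400",         # max read-ahead bytes per request
    "vfs.read_ahead_cache_size": "10000000", # total LRU read-ahead cache (bytes)
})
ctx = tiledb.Ctx(cfg)
```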
The maximum tries for a connection. Any
The scale factor for exponential backoff when connecting to S3. Any
The connection timeout in ms. Any
The S3 endpoint, if S3 is enabled.
The maximum number of S3 backend parallel operations.
The part size (in bytes) used in S3 multipart writes. Any
The S3 proxy host.
The S3 proxy password.
The S3 proxy port.
The S3 proxy scheme.
The S3 proxy username.
The S3 region.
The AWS access key id (
The AWS access secret (
The AWS session token to use.
The Amazon Resource Name (ARN) of the role to assume.
A unique identifier that might be required when you assume a role in another account.
The duration, in minutes, of the role session.
An identifier for the assumed role session.
The AWS SDK logging level (OFF, DEBUG, TRACE).
The request timeout in ms. Any
The S3 scheme.
Determines whether to use virtual addressing or not.
The S3 use of multi-part upload requests (
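A sketch of an S3-backed configuration using the `vfs.s3.*` parameters described above (credential values are placeholders; credentials can also come from the usual AWS sources such as environment variables or instance profiles):

```python
import tiledb

cfg = tiledb.Config({
    "vfs.s3.region": "us-east-1",
    "vfs.s3.scheme": "https",
    "vfs.s3.use_virtual_addressing": "true",
    "vfs.s3.multipart_part_size": "5242880",          # 5 MB multipart parts
    "vfs.s3.aws_access_key_id": "<access-key-id>",    # placeholder
    "vfs.s3.aws_secret_access_key": "<secret-key>",   # placeholder
})
ctx = tiledb.Ctx(cfg)
# Arrays can then be addressed with s3:// URIs, e.g. "s3://my-bucket/my_array".
```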
The path to a cURL-compatible certificate file.
The path to a cURL-compatible certificate directory.
Enable certificate verification for HTTPS connections.
Set the GCS project id.
The part size (in bytes) used in GCS multipart writes. Any
The maximum number of GCS backend parallel operations.
Determines if the GCS backend can use chunked part uploads.
Set the Azure Storage Account name.
Set the Azure Storage Account key.
Overrides the default Azure Storage Blob endpoint. If empty, the endpoint will be constructed from the storage account name. This should not include an
The block size (in bytes) used in Azure blob block list writes. Any
Determines if the blob endpoint should use HTTP or HTTPS.
The maximum number of Azure backend parallel operations.
Determines if the Azure backend can use chunked block uploads.
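The GCS and Azure parameters above follow the same pattern; a sketch with placeholder values (GCS credentials typically come from Google's application-default mechanisms rather than the config):

```python
import tiledb

cfg = tiledb.Config({
    # GCS: the project id parameter described above.
    "vfs.gcs.project_id": "my-project",             # placeholder
    # Azure: the storage account name/key parameters described above.
    "vfs.azure.storage_account_name": "myaccount",  # placeholder
    "vfs.azure.storage_account_key": "<key>",       # placeholder
})
ctx = tiledb.Ctx(cfg)
```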
Path to the Kerberos ticket cache when connecting to an HDFS cluster.
Optional namenode URI to use (TileDB will use
Username to use when connecting to the HDFS cluster.
URL for REST server to use for remote arrays.
Serialization format to use for remote array requests (CAPNP or JSON).
Username for login to REST server.
Password for login to REST server.
Authentication token for REST server (used instead of username/password).
If true, incomplete queries received from the server are automatically resubmitted before returning control to the user.
Have cURL ignore SSL peer and host validation for the REST server.
Compression used in HTTP requests.
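A sketch of configuring remote array access through the REST parameters above (the namespace, array name, and token are placeholders):

```python
import tiledb

cfg = tiledb.Config({
    "rest.server_address": "https://api.tiledb.com",  # REST server URL
    "rest.token": "<api-token>",                      # placeholder token
})
ctx = tiledb.Ctx(cfg)

# Remote arrays are addressed with tiledb:// URIs, e.g.:
# with tiledb.open("tiledb://my_namespace/my_array", ctx=ctx) as A:
#     ...
```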
Prefix of environment variables used for reading configuration parameters.
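The mapping convention can be sketched as follows, assuming the default `TILEDB_` prefix with parameter names upper-cased and dots replaced by underscores (`param_to_env_var` is an illustrative helper, not part of the TileDB API):

```python
# Illustrative helper: map a TileDB parameter name to its environment
# variable, assuming the prefix is prepended, dots become underscores,
# and the name is upper-cased.
def param_to_env_var(param: str, prefix: str = "TILEDB_") -> str:
    return prefix + param.replace(".", "_").upper()

print(param_to_env_var("sm.tile_cache_size"))  # TILEDB_SM_TILE_CACHE_SIZE
```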