Architecture
This page describes the architecture of our TileDB Cloud SaaS offering.
Currently, TileDB Cloud (SaaS) runs on AWS, but in the future it will be deployed on other cloud providers. The principles around multiple cloud regions and cloud storage described in the architecture below are directly extendible to other settings (on the cloud or on premises).
Do you wish to run TileDB Cloud under your full control on premises or on the cloud? See .
The following figure outlines the TileDB Cloud architecture, which comprises the following components:
Automatic Redirection
Orchestration
UI Console
System State
REST Workers
Jupyter Notebooks
We explain each of those components below.
TileDB Cloud maintains compute clusters in multiple cloud regions, geographically distributed across the globe. The reason is that users may store their data in cloud buckets located in different regions, and it is always faster and more economical to send the compute to the data: doing so eliminates egress costs, reduces latency, and increases network speeds. However, users may not know in which region the array they are accessing is located.
To facilitate sending the compute to the appropriate region, TileDB Cloud supports automatic redirection using the Cloudflare Workers service. This provides a scalable, serverless way to look up the region of the array being accessed (via a fast key-value store that is always in sync with the System State) and issue a 302 temporary redirect to the HTTP request. TileDB Open Source and the TileDB Cloud client honor the redirection and send the request to the TileDB Cloud service in the proper region (see Orchestration).
If your array lives in a cloud region unsupported by TileDB Cloud, the request is sent to us-east-1. We plan a future improvement to redirect to the nearest region instead.
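The lookup-and-redirect flow described above can be sketched in a few lines. The region map, endpoint hostnames, and function below are illustrative stand-ins for the Cloudflare Workers key-value store, not the actual service code:

```python
# Illustrative sketch of the region-lookup and 302-redirect logic performed
# at the edge. The data structures and endpoint names are hypothetical.

# Hypothetical key-value store mapping array URIs to regions,
# kept in sync with the System State.
REGION_MAP = {
    "demo/quickstart_dense": "eu-west-2",
    "demo/quickstart_sparse": "us-west-2",
}

SUPPORTED_REGIONS = {"us-east-1", "us-west-2", "eu-west-2", "ap-southeast-1"}
DEFAULT_REGION = "us-east-1"  # fallback when the array's region is unsupported

def redirect_for(array_uri: str):
    """Return an HTTP status and Location header for an array request."""
    region = REGION_MAP.get(array_uri, DEFAULT_REGION)
    if region not in SUPPORTED_REGIONS:
        region = DEFAULT_REGION
    # 302 temporary redirect to the regional endpoint (hostname is made up).
    return 302, f"https://api.{region}.example-tiledb.com/v1/arrays/{array_uri}"
```

The client simply follows the `Location` header, so the compute lands in the same region as the data without the user having to know where the array lives.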
Currently, automatic redirection is enabled by default, and this behavior can be controlled with a configuration parameter. The user can also always dispatch any query directly to a specific region.
In every cloud region, TileDB Cloud maintains a Kubernetes cluster that carries out all tasks, autoscaling and load balancing to match capacity with demand based upon several factors. We use Kubernetes' built-in metrics and monitoring toolchain to monitor pod memory usage and maintain an accurate picture of real-world workloads at all times.
Currently supported regions:
us-east-1
us-west-2
eu-west-2
ap-southeast-1
In each region we use a variety of EC2 compute instance types, predominantly from the m5, c5, and r5 classes.
The TileDB Cloud user interface console (https://cloud.tiledb.com) is a web app written in React that uses the REST Workers API with the same procedures and protocols as the clients. Many of the same routes are also used directly by the clients, such as TileDB-Cloud-Py and TileDB-Cloud-R. The console web app autoscales based on load, but currently it runs only inside the us-east-1 cluster.
TileDB Cloud maintains persistent state about user records, arrays, UDFs, billing, activity, and more in an always-encrypted MariaDB instance maintained in the us-east-1 region. In addition, this state is replicated and synced at all times to a read-only MariaDB instance maintained in every other supported region, in order to reduce latency for queries executed in those regions.
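A common routing pattern for such a primary/replica setup is to send all writes to the primary and serve reads from the local region's replica. The sketch below illustrates that design under assumed endpoint hostnames; it is not TileDB Cloud's actual code:

```python
# Illustrative query routing for a primary/read-replica database layout:
# writes go to the primary in us-east-1, reads go to the local region's
# read-only replica. Hostnames below are hypothetical placeholders.

PRIMARY = "mariadb.us-east-1.internal"

REPLICAS = {
    "us-east-1": "mariadb.us-east-1.internal",  # primary also serves local reads
    "us-west-2": "mariadb-ro.us-west-2.internal",
    "eu-west-2": "mariadb-ro.eu-west-2.internal",
    "ap-southeast-1": "mariadb-ro.ap-southeast-1.internal",
}

def endpoint_for(local_region: str, is_write: bool) -> str:
    """Pick the database endpoint for a query issued in `local_region`."""
    if is_write:
        return PRIMARY  # all writes must go through the single primary
    # Reads prefer the low-latency local replica, falling back to the primary.
    return REPLICAS.get(local_region, PRIMARY)
```

This keeps read latency low in every region while preserving a single source of truth for writes.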
TileDB Cloud's architecture is centered around a REST API Service. The service is a Go-based application that provides all of the base functionality used in TileDB Cloud, such as user management, authentication and access control, billing and monetization (via integration with Stripe), UDF execution, and serverless SQL orchestration. The REST Service is deployed in Kubernetes with a stateless design that allows for distributed orchestration and execution without the need for centralized coordination or locking.
The REST Service monitors resource usage and does its own bookkeeping in order to determine whether it can service a request or should inform the client to retry later. By allowing the client to manage retries, and thanks to the high availability of the REST service architecture, TileDB Cloud is able to gracefully load balance and distribute work across multiple instances.
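Client-managed retries of this kind are typically implemented with exponential backoff. The sketch below illustrates the pattern; the `send` callable and the use of HTTP 503 as the "retry later" signal are assumptions for illustration, not details of the TileDB Cloud client:

```python
import time

# Illustrative client-side retry loop with exponential backoff: when the
# server signals "retry later" (assumed here to be HTTP 503), the client
# waits and resends rather than the server queueing the work centrally.

RETRY_LATER = 503  # server is at capacity; back off and try again

def with_retries(send, max_attempts=5, base_delay=0.1, sleep=time.sleep):
    """Call `send()` until it succeeds or attempts are exhausted."""
    for attempt in range(max_attempts):
        status, body = send()
        if status != RETRY_LATER:
            return status, body
        # Exponential backoff: 0.1s, 0.2s, 0.4s, ... before the next attempt.
        sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"service still busy after {max_attempts} attempts")

# Simulated server that is busy for the first two requests.
responses = iter([(RETRY_LATER, ""), (RETRY_LATER, ""), (200, "ok")])
status, body = with_retries(lambda: next(responses), sleep=lambda s: None)
```

Because each stateless REST instance can answer any retried request, this scheme spreads load across instances without centralized coordination.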
The REST service handles the following types of serverless tasks, building upon the TileDB Open Source library:
TileDB Cloud offers hosted Jupyter notebooks by using Jupyter Docker Stacks for the base conda environments, and JupyterHub / Zero to JupyterHub with Kubernetes for the runtime environment. The notebooks are spawned inside Kubernetes using KubeSpawner to offer an isolated environment for each user, with their own dedicated and persisted storage.
Currently, Jupyter notebooks can be spawned in the us-east-1 region, but soon TileDB Cloud will support multiple regions for notebooks.
TileDB Cloud runs over standard HTTP connectivity, using TCP ports 80 and 443. Connections made on port 80 are automatically redirected to HTTPS over port 443.
TileDB Cloud provides OpenID Connect support that can be used with any OpenID Connect compatible service. TileDB Cloud provides a fixed set of IP addresses used for the outbound requests made as part of the OpenID Connect sequence.
eu-west-2: 13.41.67.254, 18.134.194.194, 18.135.61.196
us-west-2: 35.81.95.218, 54.185.206.57, 54.189.31.204
us-east-1: 52.21.38.106, 54.87.160.2, 52.70.6.129
ap-southeast-1: 13.213.235.67, 54.255.255.186, 52.76.199.70
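If your identity provider sits behind a firewall, these addresses can be allowlisted. The snippet below is a small illustration of checking an inbound source address against the published set using Python's standard `ipaddress` module; the address data is copied from the list above:

```python
import ipaddress

# Published TileDB Cloud outbound IP addresses for OpenID Connect
# requests, grouped by region (copied from the list above).
OIDC_SOURCE_IPS = {
    "eu-west-2": ["13.41.67.254", "18.134.194.194", "18.135.61.196"],
    "us-west-2": ["35.81.95.218", "54.185.206.57", "54.189.31.204"],
    "us-east-1": ["52.21.38.106", "54.87.160.2", "52.70.6.129"],
    "ap-southeast-1": ["13.213.235.67", "54.255.255.186", "52.76.199.70"],
}

# Flatten into a set of parsed addresses for fast membership checks.
ALLOWED = {
    ipaddress.ip_address(ip)
    for ips in OIDC_SOURCE_IPS.values()
    for ip in ips
}

def is_tiledb_cloud_oidc_source(source_ip: str) -> bool:
    """Return True if `source_ip` is one of the published OIDC egress IPs."""
    return ipaddress.ip_address(source_ip) in ALLOWED
```

Parsing with `ipaddress.ip_address` (rather than comparing raw strings) normalizes equivalent textual forms and rejects malformed input early.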
See Corporate SSO with TileDB Cloud SaaS if you are interested in enabling OIDC support for TileDB Cloud SaaS in your own environment.