Installation

Introduction

TileDB Cloud Enterprise edition is available for installation in Kubernetes clusters through a helm chart, which installs all components of TileDB Cloud. The instructions below walk you through fetching the helm chart, getting access to the private docker registry, and setting up the installation.

Accounts

In order to use the Enterprise edition you will need access to the private docker registry and the private helm registry. Please contact your TileDB, Inc. account representative for credentials to these services.

Installation

Prerequisites

Kubernetes

A Kubernetes cluster is required for installation. Setting up a Kubernetes cluster is outside the scope of this document; please contact your account representative if you need assistance with this.

Kubernetes Components

The minimum supported Kubernetes version is v1.14.0. If your cluster is older than this you will need to upgrade.
You will also need the following components configured in your cluster:
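The version requirement above is easy to check mechanically against the server version that `kubectl version` reports; a minimal sketch of the comparison (pure Python, illustrative only):

```python
# Minimum supported Kubernetes version, per the requirement above.
MIN_VERSION = (1, 14, 0)

def version_ok(version_string):
    """Return True if a version string such as "v1.14.3" meets the minimum."""
    parts = tuple(int(p) for p in version_string.lstrip("v").split("."))
    # Pad to three components so "v1.14" compares like "v1.14.0".
    parts = parts + (0,) * (3 - len(parts))
    return parts >= MIN_VERSION

print(version_ok("v1.14.3"))  # True
print(version_ok("v1.13.9"))  # False
```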

Helm

Helm charts are used to install the TileDB Cloud Enterprise services in the Kubernetes cluster. You will need helm v3 installed on your local machine to perform the installation. Helm v3 does not require any components inside the Kubernetes cluster.

MariaDB

MariaDB 10.3 or newer is required. It is used for persistent storage of user account details, organizations, tasks and more. While MySQL should be compatible, only MariaDB 10.3 or newer is officially supported.
It is strongly recommended to enable SSL and at-rest encryption with MariaDB.

Add Helm Repository

To get started you will need to add the TileDB helm chart repository. This repository requires authentication; please use the username and password provided to you by your account representative.
# TileDB Chart is for the TileDB Cloud service itself
helm repo add tiledb https://charts.tiledb.com --username <provided by TileDB>

Create Kubernetes Namespace

TileDB Cloud will be installed into a dedicated namespace, tiledb-cloud:
kubectl create namespace tiledb-cloud

Create Custom Values.yaml

Before you install TileDB Cloud Enterprise it is important to set up and customize your installation. This involves creating a custom values file for helm. Below is a sample file you can save and edit.
Save this file as values.yaml. There are several required changes; all sections which require changes are prefixed with a comment of # REQUIRED:. Examples of the changes needed include setting your docker registry authentication details and updating the domain names you would like to deploy TileDB Cloud to.
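Several fields in the file ask for strong random values (the token signing secrets, session key, JupyterHub proxy token, oauth2 client secret, and crypto key). The file's comments recommend `openssl rand -hex 32`; the Python standard library equivalent, if you prefer to generate them all at once (variable names here are illustrative only):

```python
import secrets

# Stdlib equivalent of `openssl rand -hex 32`:
# 32 random bytes rendered as 64 lowercase hex characters.
token_signing_secret = secrets.token_hex(32)
session_key = secrets.token_hex(32)
jupyterhub_proxy_token = secrets.token_hex(32)
oauth2_client_secret = secrets.token_hex(32)
jupyterhub_crypto_key = secrets.token_hex(32)

print(token_signing_secret)
```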
values.yaml
# Default values for tiledb-cloud-enterprise.
# This is a YAML-formatted file.

# Should hosted notebooks be enabled? If you would like to disable them set this to false
notebooks:
  enabled: true

# REQUIRED: Set the docker registry image credentials to pull TileDB Cloud docker images
# The password should be provided to you by your account representative
imageCredentials:
  password: ""

##################################
# TileDB Cloud REST API settings #
##################################
tiledb-cloud-rest:
  # Service Account to run deployment under
  # Change this if you have different RBAC requirements
  serviceAccountName: default

  # The autoscaling of the service can be adjusted if required
  # The following settings are the recommended defaults
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 300
    targetCPUUtilizationPercentage: 80
    targetMemoryUtilizationPercentage: 50

  # .spec.volumes
  #volumes:
  #  - name: test
  #    emptyDir: {}
  #  - name: nfs-volume
  #    nfs:
  #      server: nfs.example.com
  #      path: /nfs/

  # .spec.containers[*].volumeMounts
  # A volume with the same name declared here
  # must exist in volumes.
  #volumeMounts:
  #  - name: test
  #    mountPath: /test
  #    readOnly: true
  #  - name: nfs-volume
  #    mountPath: /nfs_data

  # key:value pairs defined below are configured
  # as ENV variables on all rest pod containers
  #extraEnvs:
  #  - KEY1: value1
  #  - KEY2: value2

  # Configure ingress, be sure to set the url to where you want to expose the api
  ingress:
    annotations:
      # Configure any needed annotations. For instance if you are using a different ingress besides nginx set that here
      kubernetes.io/ingress.class: nginx
    url:
      # REQUIRED: Change this to the hostname you'd like the API service to be at
      - api.tiledb.example.com
    # optional TLS
    tls: []
    #  - secretName: chart-example-tls
    #    hosts:
    #      - chart-example.local

  restConfig:
    # REQUIRED: Set the private dockerhub registry credentials, these are the same as the `imageCredentials` above
    ContainerRegistry:
      DockerhubPassword: ""

    # REQUIRED: Set the initial passwords for the internal users of Rest
    # Replace "secret" with a strong password
    # This config can be removed after the first run of Rest
    ComputationUserInitialPassword: "secret"
    PrometheusUserInitialPassword: "secret"
    CronUserInitialPassword: "secret"
    UIUserInitialPassword: "secret"
    DebugUserInitialPassword: "secret"

    # REQUIRED: Set the signing secret(s) for api tokens, this should be a secure value
    # We recommend creating a random value with `openssl rand -hex 32`
    # This is a list of token signing secrets. The zeroth element of the list is used
    # for signing; the rest are used for validation.
    # This mechanism provides a way to rotate signing secrets.
    # In case there are active tokens signed with a key and this key is removed from
    # the list, the tokens are invalidated.
    TokenSigningSecrets:
      - "Secret"

    # REQUIRED: This is needed for the TileDB Jupyterlab Prompt User Options extension
    CorsAllowedOrigins:
      - "https://jupyterhub.tiledb.example.com"

    # REQUIRED: Define supported storage types and locations, if you want to use NFS
    # enable "local"
    StorageLocationsSupported:
      - "s3"
      #- "local"
      #- "hdfs"
      #- "azure"
      #- "gcs"

    ArraySettings:
      # When enabled, AWS credentials will be auto-discovered
      # from the Environment, config file, EC2 metadata etc.
      AllowS3NoCredentials: false

    Email:
      # Should users be required to confirm their email addresses?
      # By default email confirmation is disabled as this requires a working SMTP setup
      DisableConfirmation: False
      # REQUIRED: The UI Server address is used for sending a link in the reset password email
      UIServerAddress: "https://console.tiledb.example.com"
      # Email Accounts
      Accounts:
        Noreply: "[email protected]"

    # REQUIRED: Configure the main database. It is recommended to host a MariaDB or MySQL instance outside of the kubernetes cluster
    Databases:
      # `main` is a required database configuration
      main:
        Driver: mysql
        Host: "{{ .Release.Name }}-mariadb.{{ .Release.Namespace }}.svc.cluster.local"
        Port: 3306
        Schema: tiledb_rest
        Username: tiledb_user
        Password: password

    # Set log level, 1=Panic, 2=Fatal, 3=Error, 4=Warning, 5=Info, 6=Debug
    LogVerbosity: 4

    # Configure any default TileDB Embedded settings using key: value mapping
    # Example setting to override s3 endpoint
    # TileDBEmbedded:
    #   Config:
    #     "vfs.s3.endpoint_override": "s3.wasabisys.com"

    # LDAP settings. Enable and configure if you wish to allow LDAP for user account login
    # Ldap:
    #   Enable: false
    #   EnableTLS: false
    #   Hosts:
    #     - ldap.example.com
    #   Port: 389
    #   HostsTLS:
    #     - ldap.example.com
    #   PortTLS: 389
    #   BaseDN: DC=ldaplab,DC=local
    #   UserDN: CN=tiledb,CN=Users,DC=ldaplab,DC=local
    #   # Can be set via config or env variable (TILEDB_REST_LDAP_PASSWORD)
    #   # Setting via ENV is recommended.
    #   #PASSWORD: ""
    #   CommonNames:
    #     - Users
    #     - IT
    #     - Managers
    #   # OPENLDAP
    #   # Attributes:
    #   #   email: mail
    #   #   name: givenName
    #   #   username: uid
    #   Attributes:
    #     email: mail
    #     name: name
    #     username: userPrincipalName

    # Configure TLS settings if you wish to use TLS inside k8s
    #Certificate:
      # Absolute path to certificate
      #CertFile: ""
      # Absolute path to private key
      #PrivateKey: ""
      # TLS Minimum Version options
      # 0x0301 #VersionTLS10
      # 0x0302 #VersionTLS11
      # 0x0303 #VersionTLS12
      # 0x0304 #VersionTLS13
      #MinVersion: 0x0304
      # TLS 1.0 - 1.2 cipher suites. Leaving empty will enable all
      #TLS10TLS12CipherSuites:
      #  - 0x0005 #TLS_RSA_WITH_RC4_128_SHA
      #  - 0x000a #TLS_RSA_WITH_3DES_EDE_CBC_SHA
      #  - 0x002f #TLS_RSA_WITH_AES_128_CBC_SHA
      #  - 0x0035 #TLS_RSA_WITH_AES_256_CBC_SHA
      #  - 0x003c #TLS_RSA_WITH_AES_128_CBC_SHA256
      #  - 0x009c #TLS_RSA_WITH_AES_128_GCM_SHA256
      #  - 0x009d #TLS_RSA_WITH_AES_256_GCM_SHA384
      #  - 0xc007 #TLS_ECDHE_ECDSA_WITH_RC4_128_SHA
      #  - 0xc009 #TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
      #  - 0xc00a #TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
      #  - 0xc011 #TLS_ECDHE_RSA_WITH_RC4_128_SHA
      #  - 0xc012 #TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
      #  - 0xc013 #TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
      #  - 0xc014 #TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
      #  - 0xc023 #TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
      #  - 0xc027 #TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
      #  - 0xc02f #TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
      #  - 0xc02b #TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
      #  - 0xc030 #TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
      #  - 0xc02c #TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
      #  - 0xcca8 #TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
      #  - 0xcca9 #TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
      # TLS 1.3 cipher suites. Leaving empty will enable all
      #TLS13CipherSuites:
      #  - 0x1301 #TLS_AES_128_GCM_SHA256
      #  - 0x1302 #TLS_AES_256_GCM_SHA384
      #  - 0x1303 #TLS_CHACHA20_POLY1305_SHA256
      #PreferServerCipherSuites: false
      # CurveID is the type of a TLS identifier for an elliptic curve
      # Leaving empty will enable all
      #CurveID:
      #  - 23 #CurveP256 CurveID
      #  - 24 #CurveP384 CurveID
      #  - 25 #CurveP521 CurveID
      #  - 29 #X25519

    # Okta Service details
    # SSO:
    #   Okta:
    #     Domain: "domain-name.okta.com"

# It is not recommended to run the database inside k8s for production use, but it is helpful for testing
mariadb:
  # Set to true if you wish to deploy a database inside k8s for testing
  enabled: false
  image:
    repository: bitnami/mariadb
    tag: 10.5.8
    pullPolicy: IfNotPresent
  auth:
    # Auth parameters must match restConfig.Databases.main above
    database: tiledb_rest
    username: tiledb_user
    password: password
    rootPassword: changeme
  primary:
    # Enable persistence if you wish to save the database; again, running in k8s is not recommended for production use
    persistence:
      enabled: false
    # Set security context to user id of mysqld user in tiledb-mariadb-server
    podSecurityContext:
      enabled: true
      fsGroup: 999
    containerSecurityContext:
      enabled: true
      runAsUser: 999

####################################
# TileDB Cloud UI Console settings #
####################################
tiledb-cloud-ui:
  # Service Account to run deployment under
  # Change this if you have different RBAC requirements
  serviceAccountName: default

  # The autoscaling of the service can be adjusted if required
  # The following settings are the recommended defaults
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 300
    targetCPUUtilizationPercentage: 80
    targetMemoryUtilizationPercentage: 50

  # REQUIRED: set the url of the jupyterhub server
  config:
    # REQUIRED: Set a secret here with `openssl rand -hex 32`
    SessionKey: "secret"
    RestServer:
      # REQUIRED: This needs to be set to
      # the same value as restConfig.UIUserInitialPassword
      Password: "secret"
    JupyterhubURL: "https://jupyterhub.tiledb.example.com"
    # SSOOkta:
    #   Domain: "domain-name.okta.com"
    #   ClientID: "client_id"
    #   ClientSecret: "secret"

  # REQUIRED: Configure ingress, be sure to set the hostname to where you want to expose the UI
  ingress:
    enabled: true
    annotations:
      # Configure any needed annotations. For instance if you are using a different ingress besides nginx set that here
      kubernetes.io/ingress.class: nginx
    # REQUIRED: Set URL for web console
    url:
      - console.tiledb.example.com
    # optional TLS
    tls: []

#########################################
# TileDB Cloud Hosted Notebook Settings #
#########################################
jupyterhub:
  proxy:
    # REQUIRED: Set a signing secret here with `openssl rand -hex 32`
    secretToken: "Secret"
  # The pre-puller is used to ensure the docker images for notebooks are prepulled to each node
  # This can improve notebook startup time, but adds additional storage requirements to the nodes
  # If you wish to use dedicated k8s node groups for notebooks, see:
  # https://zero-to-jupyterhub.readthedocs.io/en/0.8.2/optimization.html?highlight=labels#using-a-dedicated-node-pool-for-users
  prePuller:
    hook:
      enabled: false
    continuous:
      # NOTE: if used with a Cluster Autoscaler, also add user-placeholders
      enabled: false

  scheduling:
    # You can keep at least one warm instance for users by enabling the userPlaceholder
    userPlaceholder:
      enabled: false
      replicas: 1
    # Disable podPriority, it is only useful if userPlaceholders are enabled
    podPriority:
      enabled: false

  singleuser:
    # REQUIRED: Set the private registry credentials, these are the same as the `imageCredentials` above
    imagePullSecret:
      password: ""
    startTimeout: 900
    # Set the size of the user's persisted disk space in notebooks
    storage:
      capacity: 2G
      # JupyterHub expects the Kubernetes Storage Class to be configured
      # with "volumeBindingMode: Immediate" and "reclaimPolicy: Retain".
      # If your default Storage Class does not support this, you can
      # create a new one and configure it below.
      #dynamic:
      #  storageClass: "jupyterhub"

  hub:
    # REQUIRED: Set the private registry credentials, these are the same as the `imageCredentials` above
    imagePullSecret:
      password: ""

    # Uncomment for any extra settings
    # extraConfig:
    #   # Uncomment to disable SSL validation. Useful when testing deployments
    #   ssl_config: |
    #     c.Spawner.env_keep.append("TILEDB_REST_IGNORE_SSL_VALIDATION")
    #   # Uncomment to modify the securityContext of JupyterHub pods
    #   securityContext: |
    #     c.Spawner.extra_container_config = {
    #       "securityContext": {
    #         "runAsGroup": 100,
    #         "runAsUser": 1000,
    #         "allowPrivilegeEscalation": False,
    #         "capabilities": {
    #           "drop": ["ALL"]
    #         }
    #       }
    #     }

    # REQUIRED: Set the domain for the REST API and the oauth2 service
    # it is likely you just need to replace `example.com` with your own internal domain
    # This should match the tiledb-cloud-rest settings above and the hydra settings below
    extraEnv:
      OAUTH2_AUTHORIZE_URL: "https://oauth2.tiledb.example.com/oauth2/auth"
      OAUTH2_USERDATA_URL: "https://oauth2.tiledb.example.com/userinfo"
      TILEDB_REST_HOST: "https://api.tiledb.example.com"
      # Uncomment to disable SSL validation. Useful when testing deployments
      # TILEDB_REST_IGNORE_SSL_VALIDATION: "true"

  ingress:
    enabled: true
    # REQUIRED: set the ingress domain for hosted notebooks
    hosts:
      - "jupyterhub.tiledb.example.com"
    annotations:
      # Configure any needed annotations. For instance if you are using a different ingress besides nginx set that here
      kubernetes.io/ingress.class: "nginx"
    tls:
      # REQUIRED: set the TLS information for hosted notebooks
      - hosts:
          - jupyterhub.tiledb.example.com
        secretName: jupyterhub-tls

  auth:
    type: custom
    custom:
      className: 'oauthenticator.tiledb.TileDBCloud'
      config:
        # REQUIRED: Set the oauth2 secret, this should be a secure value
        # We recommend creating a random value with `openssl rand -hex 32`
        client_secret: "Secret"
        # REQUIRED: Set the domain for the jupyterhub and the oauth2 service
        # it is likely you just need to replace `example.com` with your own internal domain
        # This should match the ingress settings above and the hydra settings below
        oauth_callback_url: "https://jupyterhub.tiledb.example.com/hub/oauth_callback"
        token_url: "https://oauth2.tiledb.example.com/oauth2/token"
        auth_url: "https://oauth2.tiledb.example.com/oauth2/auth"
        userdata_url: "https://oauth2.tiledb.example.com/userinfo"
    state:
      # REQUIRED: Set the jupyterhub auth secret for persistence, this should be a secure value
      # We recommend creating a random value with `openssl rand -hex 32`
      cryptoKey: "Secret"

########################################
# TileDB Cloud Oauth2 Service Settings #
########################################
hydra:
  hydra:
    # REQUIRED: Set the domain for the jupyterhub
    # it is likely you just need to replace `example.com` with your own internal domain
    # This should match the ingress settings above and the hydra settings below
    dangerousAllowInsecureRedirectUrls:
      - http://jupyterhub.tiledb.example.com/hub/oauth_callback
    config:
      # Optionally set the internal k8s cluster IP address space to allow non-ssl connections from
      # This defaults to all private IP spaces
      # tls:
      #   allow_termination_from:
      #     # Set to cluster IP
      #     - 172.20.0.0/12
      secrets:
        # REQUIRED: Set the oauth2 secret, this should be a secure value
        # We recommend creating a random value with `openssl rand -hex 32`
        system: secret
        cookie: secret
      # REQUIRED: Set the MariaDB Database connection, this defaults to the in-k8s development settings.
      # You will need to set this to the same connection parameters as the tiledb-cloud-rest section
      dsn: "mysql://tiledb_user:password@tcp(tiledb-cloud-mariadb.tiledb-cloud.svc.cluster.local:3306)/tiledb_rest"
      urls:
        self:
          # REQUIRED: Update the domain for the oauth2 service and the web console ui
          # It is likely you can just replace `example.com` with your own internal domain
          issuer: "https://oauth2.tiledb.example.com/"
          public: "https://oauth2.tiledb.example.com/"
        login: "https://console.tiledb.example.com/oauth2/login"
        consent: "https://console.tiledb.example.com/oauth2/consent"

  # Configure ingress for oauth2 service
  ingress:
    public:
      annotations:
        # Configure any needed annotations. For instance if you are using a different ingress besides nginx set that here
        kubernetes.io/ingress.class: nginx
      hosts:
        # REQUIRED: set the ingress domain for oauth2 service
        - host: "oauth2.tiledb.example.com"
          paths: ["/"]
      tls:
        # REQUIRED: set the TLS information for oauth2 service
        - hosts:
            - "oauth2.tiledb.example.com"
          secretName: hydra-tls

######################
# Ingress Controller #
######################
ingress-nginx:
  # This is provided for ease of testing, it is recommended to establish your own ingress which fits your environment
  enabled: false
  ## nginx configuration
  ## Ref: https://github.com/kubernetes/ingress/blob/master/controllers/nginx/configuration.md
  ##
  controller:
    name: controller
    autoscaling:
      enabled: true
      minReplicas: 2

    config:
      use-proxy-protocol: "true"
      log-format-escape-json: "true"
      log-format-upstream: '{ "time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr", "x-forward-for": "$proxy_add_x_forwarded_for", "request_id": "$req_id", "remote_user": "$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status": $status, "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri", "request_query": "$args", "request_length": $request_length, "duration": $request_time, "method": "$request_method", "http_referrer": "$http_referer", "http_user_agent": "$http_user_agent" }'
      # Set timeouts to 1 hour
      proxy-send-timeout: "3600"
      proxy-read-timeout: "3600"
      send-timeout: "3600"
      client-max-body-size: "3076m"
      proxy-body-size: "3076m"
      proxy-buffering: "off"
      proxy-request-buffering: "off"
      proxy-http-version: "1.1"

    ingressClass: nginx

    ## Allows customization of the external service
    ## the ingress will be bound to via DNS
    publishService:
      enabled: true

    service:
      annotations:
        # Set any needed annotations. The default ones we have set are for aws ELB nginx
        service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
        # Set aws-load-balancer-internal to allow traffic from inside
        # the vpc only; the -internal makes it not accessible to the internet
        service.beta.kubernetes.io/aws-load-balancer-internal: '0.0.0.0/0'
        # Set timeout to 1 hour
        service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'

      ## Set external traffic policy to: "Local" to preserve source IP on
      ## providers supporting it
      ## Ref: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
      externalTrafficPolicy: "Local"

      type: LoadBalancer
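One cross-cutting detail worth double-checking: the hydra `dsn` must agree field-for-field with `restConfig.Databases.main` (and with the `mariadb.auth` block if you enable the in-cluster database). A small sketch of how the pieces compose into the DSN, using the development defaults shown above:

```python
# Parameters that must match restConfig.Databases.main (development defaults shown).
db = {
    "user": "tiledb_user",
    "password": "password",
    "host": "tiledb-cloud-mariadb.tiledb-cloud.svc.cluster.local",
    "port": 3306,
    "schema": "tiledb_rest",
}

# hydra expects a URL-style MySQL DSN with a tcp(host:port) address.
dsn = "mysql://{user}:{password}@tcp({host}:{port})/{schema}".format(**db)
print(dsn)
```

If you change the database credentials in one section, regenerate the DSN from the same values rather than editing it by hand.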

Installing TileDB Cloud

Once you have created the values.yaml file, you can install TileDB Cloud by running the following helm command:
helm install \
  --namespace tiledb-cloud \
  --values values.yaml \
  tiledb-cloud \
  tiledb/tiledb-cloud-enterprise

Validating Installation

After you have installed TileDB Cloud you can verify the installation works by performing the following procedure.

Creating an Account

The first step is to log in to the web UI. The URL depends on your installation; in the values.yaml you should have replaced console.tiledb.example.com with the domain you will access it on. Navigate there in your web browser and create an account.
This step verifies that both the TileDB Cloud UI and TileDB Cloud REST components are working.

Creating your First Array

Now that you have an account, we will create your first array. This will confirm that creating, writing and reading arrays are functioning, and will give you an array and a task to view in the UI.
For this section we will use a Python script that creates, writes to and reads from an array. Please note there are two sections where you need to adjust the configuration for your TileDB Cloud instance and set the array storage location.

Prerequisites

This section requires the TileDB-Py API to be installed. You can get it from pip or conda. Once you have TileDB-Py, copy the following script to check_installation.py and modify the first few lines as required.
Python
import numpy as np
import sys
import tiledb

# username/password for TileDB Cloud instance
# Note you could also use an api token, which is generally preferred, however
# for simplicity of the example we'll use a username/password combo here
username = ""
password = ""
# Where should the array be stored? This can be an object store,
# or a path inside the rest server where a nfs server is mounted
storage_path = "file:///nfs/tiledb_arrays/example"
array_uri = "tiledb://{}/{}/quickstart_sparse".format(username, storage_path)

# Set the host to your TileDB Cloud host
host = "http://api.tiledb.example.com"

ctx = tiledb.Ctx({"rest.username": username, "rest.password": password, "rest.server_address": host})


def create_array():
    # The array will be 4x4 with dimensions "rows" and "cols", with domain [1,4].
    dom = tiledb.Domain(
        tiledb.Dim(name="rows", domain=(1, 4), tile=4, dtype=np.int32, ctx=ctx),
        tiledb.Dim(name="cols", domain=(1, 4), tile=4, dtype=np.int32, ctx=ctx),
        ctx=ctx,
    )

    # The array will be sparse with a single attribute "a" so each (i,j) cell can store an integer.
    schema = tiledb.ArraySchema(
        domain=dom, sparse=True, attrs=[tiledb.Attr(name="a", dtype=np.int32, ctx=ctx)],
        ctx=ctx,
    )

    # Create the (empty) array on disk.
    tiledb.SparseArray.create(array_uri, schema)


def write_array():
    # Open the array and write to it.
    with tiledb.SparseArray(array_uri, mode="w", ctx=ctx) as A:
        # Write some simple data to cells (1, 1), (2, 4) and (2, 3).
        I, J = [1, 2, 2], [1, 4, 3]
        data = np.array(([1, 2, 3]))
        A[I, J] = data


def read_array():
    # Open the array and read from it.
    with tiledb.SparseArray(array_uri, mode="r", ctx=ctx) as A:
        # Slice only rows 1, 2 and cols 2, 3, 4.
        data = A[1:3, 2:5]
        a_vals = data["a"]
        for i, coord in enumerate(zip(data["rows"], data["cols"])):
            print("Cell (%d, %d) has data %d" % (coord[0], coord[1], a_vals[i]))


create_array()
write_array()
read_array()
Run this script with:
python check_installation.py
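For reference, the script writes three cells but the read slice A[1:3, 2:5] covers only rows 1-2 and columns 2-4, so only two of them come back; a plain-Python sketch of that selection (no TileDB installation needed):

```python
# Cells written by write_array(): (row, col) -> value
cells = {(1, 1): 1, (2, 4): 2, (2, 3): 3}

# read_array() slices A[1:3, 2:5], i.e. rows 1-2 and cols 2-4 inclusive.
selected = {
    rc: v for rc, v in cells.items()
    if 1 <= rc[0] <= 2 and 2 <= rc[1] <= 4
}
print(selected)  # cell (1, 1) is excluded because col 1 < 2
```

So you should see the script report data for cells (2, 4) and (2, 3).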
If this script ran and printed the output, then your installation is working correctly for creating, writing and reading TileDB arrays.

Viewing Array in the Web Console

The newly created array, quickstart_sparse, should now be viewable in the web console. If you navigate to the arrays page you will see it listed.

Upgrades

When new releases of TileDB Cloud Enterprise are announced, you can easily upgrade your installation by first updating the helm repository:
helm repo update tiledb
After the repository is updated you can run the helm upgrade:
helm upgrade --install \
  --namespace tiledb-cloud \
  --values values.yaml \
  tiledb-cloud \
  tiledb/tiledb-cloud-enterprise