In some environments, shared NFS drives may contain data you want to access from inside a notebook environment or from a batch task graph. To support this, we provide the ability to mount any volume into notebook environments.
Creating a Volume for NFS
The first step is to create a volume in the cluster that defines the NFS mount. Save the content below to a file called nfs-pv.yaml and modify it to replace the sections marked as REQUIRED.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jupyter-nfs-shared-volume
spec:
  # Ensure policy is set to Retain
  persistentVolumeReclaimPolicy: Retain
  # REQUIRED
  # Set to approximate capacity of NFS mount
  capacity:
    storage: 100Gi
  accessModes:
    # Define the read/write policies
    # use ReadWriteMany for read/write access
    - ReadOnlyMany
  nfs:
    # REQUIRED
    # Set to path inside NFS mount point to be made accessible
    path: /data
    # REQUIRED
    # Set to NFS server
    server: nfs.example.com
  # Apply any standard NFS mount parameters
  mountOptions:
    # Set to vers=3 for NFSv3
    - vers=4
    # Set to 0 or remove if you want NFSv4.0 or NFSv3
    - minorversion=1
    - noac
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jupyter-nfs-shared-volume
spec:
  # Must match the PersistentVolume name above
  volumeName: jupyter-nfs-shared-volume
  storageClassName: ""
  accessModes:
    # Define the read/write policies
    # use ReadWriteMany for read/write access
    - ReadOnlyMany
  resources:
    requests:
      # REQUIRED
      # Set to approximate size of existing nfs volume
      storage: 100Gi
Apply this with kubectl.
kubectl -n tiledb-cloud apply -f nfs-pv.yaml
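Before moving on, you can verify that the claim has bound to the volume. A minimal check, assuming the tiledb-cloud namespace used above:
# PersistentVolumes are cluster-scoped; the claim lives in the tiledb-cloud namespace
kubectl get pv jupyter-nfs-shared-volume
kubectl -n tiledb-cloud get pvc jupyter-nfs-shared-volume
Both commands should report a STATUS of Bound. If the claim stays Pending, double-check that the volumeName in the claim matches the PersistentVolume name.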
Using the NFS Mount in Jupyter
Next, copy the following content to a file called nfs-jupyter.yaml and modify the sections marked as REQUIRED.
jupyterhub:
singleuser:
storage:
extraVolumes:
- name: nfs-example
persistentVolumeClaim:
claimName: jupyter-nfs-shared-volume
extraVolumeMounts:
- name: nfs-example
# REQUIRED
# Set the path inside the notebook environments where the nfs folder should be mounted
mountPath: /opt/nfs-example
Apply this configuration to the Helm chart by performing an upgrade with the additional values file.
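For example, a sketch assuming a release named tiledb-cloud installed in the tiledb-cloud namespace with your existing settings in values.yaml (substitute your actual release name, chart, and values files):
# <chart> is a placeholder for the chart used by your installation
helm upgrade tiledb-cloud <chart> \
  --namespace tiledb-cloud \
  --values values.yaml \
  --values nfs-jupyter.yaml
After the upgrade, newly started notebook servers will have the NFS share available at the mountPath configured above (/opt/nfs-example in this example).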