In some environments, shared NFS drives may contain data you want to access from inside a notebook environment or from a batch task graph. To support this, we provide the ability to mount any volume into notebook environments.
Creating a Volume for NFS
The first step is to create a volume in the cluster that defines the NFS mount. Save the manifest below to a file called nfs-pv.yaml, then modify the sections marked REQUIRED.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jupyter-nfs-shared-volume
spec:
  # Ensure policy is set to Retain
  persistentVolumeReclaimPolicy: Retain
  # REQUIRED
  # Set to approximate capacity of NFS mount
  capacity:
    storage: 100Gi
  accessModes:
    # Define the read/write policies
    # use ReadWriteMany for read/write access
    - ReadOnlyMany
  nfs:
    # REQUIRED
    # Set to path inside NFS mount point to be made accessible
    path: /data
    # REQUIRED
    # Set to NFS server
    server: nfs.example.com
  # Apply any standard NFS mount parameters
  mountOptions:
    # Set to vers=3 for NFSv3
    - vers=4
    # Set to 0 or remove if you want NFSv4.0 or NFSv3
    - minorversion=1
    - noac
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jupyter-nfs-shared-volume
spec:
  # Must match the name of the PersistentVolume defined above
  volumeName: jupyter-nfs-shared-volume
  storageClassName: ""
  accessModes:
    # Define the read/write policies
    # use ReadWriteMany for read/write access
    - ReadOnlyMany
  resources:
    requests:
      # REQUIRED
      # Set to approximate size of existing nfs volume
      storage: 100Gi
Apply this with kubectl.
kubectl -n tiledb-cloud apply -f nfs-pv.yaml
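After applying, you can optionally confirm that the claim bound to the volume. This is a standard kubectl check (the resource names match those used in nfs-pv.yaml above); the claim's STATUS column should read Bound before proceeding.

```shell
# PersistentVolumes are cluster-scoped; the claim lives in tiledb-cloud.
kubectl get pv jupyter-nfs-shared-volume
kubectl -n tiledb-cloud get pvc jupyter-nfs-shared-volume
```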
Using the NFS Mount in Jupyter
Next, copy the following file to nfs-jupyter.yaml and modify the sections marked REQUIRED.
jupyterhub:
  singleuser:
    storage:
      extraVolumes:
        - name: nfs-example
          persistentVolumeClaim:
            claimName: jupyter-nfs-shared-volume
      extraVolumeMounts:
        - name: nfs-example
          # REQUIRED
          # Set the path inside the notebook environments where the nfs folder should be mounted
          mountPath: /opt/nfs-example
Apply this file to the Helm chart by performing an upgrade and including the additional values file.
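As a sketch, the upgrade might look like the following. The release name and chart reference are placeholders; substitute the ones used in your existing deployment, and keep any values files you already pass to the chart.

```shell
# <release-name> and <chart> are placeholders for your existing
# TileDB Cloud Helm release and chart reference.
# --reuse-values keeps the values from the previous release and
# layers nfs-jupyter.yaml on top.
helm upgrade <release-name> <chart> \
  --namespace tiledb-cloud \
  --reuse-values \
  -f nfs-jupyter.yaml
```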