RWX support for Diamanti Volumes
An RWX PVC can be used simultaneously by many Pods in the same Kubernetes namespace for read and write operations. Supporting concurrent access to Diamanti persistent volumes by multiple applications requires the following:
- Enable the ReadWriteMany (RWX) access mode on Diamanti volumes
- Export volumes with the RWX access mode to applications running on multiple nodes
NFS server to export the Diamanti volume:
Diamanti leverages the NFS protocol, which supports exporting a volume to multiple clients for concurrent access. When a PVC is created with the RWX access mode, an NFS server is automatically created behind the scenes to provide access to the storage. All applications that use the PVC mount the NFS share and gain read-write access to the volume. For applications to mount the NFS volume as a local drive, the volume must be exported on an endpoint that does not change on failover. The NFS server is therefore created along with a Kubernetes Service with a Cluster IP.
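As a rough illustration, consuming the share on a node is conceptually equivalent to a plain NFS mount against the Service's stable Cluster IP. The IP, export path, and mount point below are assumptions for illustration only; in practice the storage driver performs the mount on behalf of the pods:

# Conceptual equivalent of what happens on a node consuming the RWX volume
# (10.0.0.15, /export, and /mnt/pvc1 are hypothetical values)
$ mount -t nfs 10.0.0.15:/export /mnt/pvc1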
The following is a sample persistent volume claim specification with the RWX access mode:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: best-effort
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10G
Create the persistent volume claim with the RWX access mode:
$ kubectl create -f pvc.yaml
persistentvolumeclaim/pvc1 created
A PVC with the RWX access mode will not be created until the required endpoint has been created.
An NFS server deployment is automatically created when a PVC with the RWX access mode is created. This deployment is responsible for exporting the PVC on the endpoint created earlier, so that multiple client pods can later mount and use the PVC.
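The endpoint is the Kubernetes Service mentioned above. Based on the diamanti.com/endpoint label visible in the dctl output later in this section, the Service appears to be named after the volume; the output below is an illustrative sketch only (the Cluster IP and port are assumptions, 2049 being the standard NFS port):

$ kubectl get svc pvc-d61d3693-275b-4c4b-9838-036bc5301a3b
NAME                                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
pvc-d61d3693-275b-4c4b-9838-036bc5301a3b   ClusterIP   10.0.0.15    <none>        2049/TCP   64s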
Verify the NFS server deployment and pod created for exporting the volume:
$ kubectl get deployments.apps pvc-d61d3693-275b-4c4b-9838-036bc5301a3b
NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
pvc-d61d3693-275b-4c4b-9838-036bc5301a3b   1/1     1            1           64s
The NFS server pods are always labeled with app:<volume-name>; this label can be used to find the NFS server pod for a particular PVC with the RWX access mode, as shown below:
$ kubectl get pods -l app=pvc-d61d3693-275b-4c4b-9838-036bc5301a3b -o wide
NAME                                                        READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
pvc-d61d3693-275b-4c4b-9838-036bc5301a3b-776fb547b8-n5lf4   1/1     Running   0          7m57s   172.30.0.2   swapnil-n3   <none>           <none>
Verify the Diamanti volume status. It should be associated with the NFS server pod:
$ dctl volume get pvc-d61d3693-275b-4c4b-9838-036bc5301a3b
NAME                                       SIZE      NODE                      LABELS                                                                                                                                                                                                                                    PHASE   STATUS     ATTACHED-TO   DEVICE-PATH    PARENT   AGE
pvc-d61d3693-275b-4c4b-9838-036bc5301a3b   10.03GB   [swapnil-n3 swapnil-n2]   diamanti.com/access-mode=rwx,diamanti.com/endpoint=pvc-d61d3693-275b-4c4b-9838-036bc5301a3b.default,diamanti.com/pod-name=default/pvc-d61d3693-275b-4c4b-9838-036bc5301a3b-776fb547b8-n5lf4   -       Attached   swapnil-n3    /dev/nvme0n1            0d:0h:11m
Verify the PVC and PV creation:
$ kubectl get pvc
NAME   STATUS   VOLUME                                     CAPACITY    ACCESS MODES   STORAGECLASS   AGE
pvc1   Bound    pvc-d61d3693-275b-4c4b-9838-036bc5301a3b   9765625Ki   RWX            best-effort    13m

$ kubectl get pv
NAME                                       CAPACITY    ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS   REASON   AGE
pvc-d61d3693-275b-4c4b-9838-036bc5301a3b   9765625Ki   RWX            Delete           Bound    default/pvc1   best-effort             14m
Create application Pods to use the volume created with the RWX access mode:
$ cat client1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: client1
spec:
  containers:
  - image: busybox
    name: busy-container1
    command: ["/bin/sleep","7d"]
    volumeMounts:
    - mountPath: "/data"
      name: date-pvc
  restartPolicy: Never
  volumes:
  - name: date-pvc
    persistentVolumeClaim:
      claimName: pvc1

$ cat client2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: client2
spec:
  containers:
  - image: busybox
    name: busy-container1
    command: ["/bin/sleep","7d"]
    volumeMounts:
    - mountPath: "/data1"
      name: date-pvc
  restartPolicy: Never
  volumes:
  - name: date-pvc
    persistentVolumeClaim:
      claimName: pvc1

$ kubectl create -f client1.yaml
pod/client1 created
$ kubectl create -f client2.yaml
pod/client2 created

$ kubectl get pods | grep client
client1   1/1   Running   0   11s
client2   1/1   Running   0   8s
Verify read and write access by the applications:
$ kubectl exec client1 -ti -- ls /data/
index.html  lost+found
$ kubectl exec client1 -ti -- touch /data/client1_file
$ kubectl exec client1 -ti -- ls /data/
client1_file  index.html  lost+found
$ kubectl exec client2 -ti -- ls /data1/
client1_file  index.html  lost+found
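As a further sanity check, writes should propagate in both directions: a file created by client2 should be visible to client1 as well. The commands below are illustrative (client2_file is a hypothetical file name, and the expected output is a sketch):

$ kubectl exec client2 -ti -- touch /data1/client2_file
$ kubectl exec client1 -ti -- ls /data/
client1_file  client2_file  index.html  lost+found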
Note
The FlexVolume driver is deprecated and has been completely removed.