Provision Volumes
Cluster administrators can provision both dynamic and static persistent volumes (PVs) in a Diamanti cluster. Static persistent volumes define real storage that is available to cluster users. Dynamic persistent volumes, in contrast, are provisioned by the system when storage classes are used by cluster users. Dynamic persistent volumes do not require users to create a Diamanti volume in advance to match a persistent volume claim.
Provisioning static persistent volumes involves creating a Diamanti volume and creating a persistent volume claim to request the storage. Provisioning dynamic persistent volumes involves creating a persistent volume claim that identifies a corresponding storage class.
Provision Dynamic Persistent Volumes
Cluster and storage administrators can provision dynamic persistent volumes on Diamanti D-Series appliances by specifying a persistent volume claim.
Ensure that an appropriate storage class is available for the dynamic persistent volume.
This storage class needs to be created and configured for dynamic provisioning to occur.
You can determine the available storage classes using the following Kubernetes command:
$ kubectl get sc
Note that storage administrators can create new storage classes, as needed. For example:
$ kubectl get sc
NAME                    PROVISIONER
best-effort (default)   dcx.csi.diamanti.com
high                    dcx.csi.diamanti.com
medium                  dcx.csi.diamanti.com
All three default storage classes (best-effort, high, and medium) are two-way mirrored by default. See Use Storage Classes for more information about storage classes.
Create a persistent volume claim that identifies the appropriate storage class. The following shows a sample specification with the storage class identified in the storageClassName field:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100G
  storageClassName: high
Kubernetes automatically creates a persistent volume with a name using the following format: pvc-<UUID>. Similarly, the Diamanti software creates a volume based on the storage class specified in the persistent volume claim.
Note that the persistent volume claim specification does not include a volume name; dynamic volumes always specify the storage class name without a volume name.
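Assuming the claim specification above is saved as dynamic-pvc.yaml (an illustrative file name), the claim can be created and the automatically provisioned volume inspected as follows:

```shell
# Create the claim; Kubernetes responds with a standard confirmation
$ kubectl create -f dynamic-pvc.yaml
persistentvolumeclaim/dynamic-pv-claim created

# List the claim to confirm that it is bound to a pvc-<UUID> volume
$ kubectl get pvc dynamic-pv-claim
```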
Use the persistent volume claim within pod definitions.
Note
See Use the PVC in Pod Definitions for more information.
Use Storage Classes
A storage class is used when a PersistentVolume belonging to the class needs to be dynamically provisioned.
Each storage class definition contains fields specifying the provisioner (the plug-in used for provisioning volumes), the reclaim policy, and the following parameters:
| Option | Description |
|---|---|
| mirrorCount | The number of mirrors to create for the volume. Valid values are 1, 2, or 3. |
| fsType | The filesystem type used when formatting the volume (for example, ext4). |
| driver | The driver that performs the attach, detach, mount, and unmount operations on the volume. |
| perfTier | The performance tier for the volume: best-effort, medium, or high. |
| encryption | Set to "true" to create a secure (encrypted) volume. By default, encryption is not enabled if this parameter is not specified. |
The following shows an example storage class definition:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: high
provisioner: dcx.csi.diamanti.com
parameters:
  fsType: ext4
  mirrorCount: "1"
  perfTier: high
reclaimPolicy: Delete
By default, three storage classes are available after a cluster is created. You can display the available storage classes using the following command:
kubectl get storageclass
For example:
$ kubectl get storageclass
NAME PROVISIONER AGE
best-effort (default) dcx.csi.diamanti.com 14d
high dcx.csi.diamanti.com 14d
medium dcx.csi.diamanti.com 14d
Note: Storage administrators can create new storage classes, as needed. For example, if an administrator needs to provision a dynamic volume with a mirror count of three, the administrator will need to create a new storage class with the required mirror count. Similarly, storage administrators can create custom storage classes with different filesystem type and performance tier settings.
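For example, a storage class providing three-way mirroring might look like the following sketch; the class name high-mirror3 is illustrative, while the provisioner and parameter names match the default classes shown above:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: high-mirror3        # illustrative name
provisioner: dcx.csi.diamanti.com
parameters:
  fsType: ext4
  mirrorCount: "3"          # three mirrors instead of the default
  perfTier: high
reclaimPolicy: Delete
```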
Provision Static Persistent Volumes
Cluster and storage administrators can provision static persistent volumes on Diamanti D-Series appliances.
Create the volume using the following command:
dctl volume create <volume-name> --size <size> -m <number> --sel <label> -l <label>
For example:
$ dctl volume create static-volume --size 100G
This command creates a 100G volume with the name static-volume. The Diamanti software automatically creates a Kubernetes persistent volume.
Check the persistent volume that was automatically created using the following command:
$ kubectl get -o yaml pv <volume-name>
For example:
$ kubectl get -o yaml pv static-volume
apiVersion: v1
kind: PersistentVolume
metadata:
  finalizers:
  - kubernetes.io/pv-protection
  name: static-volume
  selfLink: /api/v1/persistentvolumes/static-volume
  uid: 401c9f64-695d-4021-ad9a-19438cb44aa0
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 97656250Ki
  csi:
    driver: dcx.csi.diamanti.com
    fsType: ext4
    volumeAttributes:
      name: static-volume
      perfTier: best-effort
    volumeHandle: static-volume
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
status:
  phase: Available
Create a persistent volume claim to request the persistent volume storage.
Use the volumeName option to bind the persistent volume claim to the Diamanti D-Series volume. The following shows a sample specification using the volume name that you created in the previous step:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100G
  storageClassName: ""
  volumeName: static-volume
Note that the storage class name (storageClassName) is left empty. This is required for static volumes. Dynamic volumes, in contrast, always specify the storage class name without a volume name.
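Assuming the claim specification above is saved as static-pvc.yaml (an illustrative file name), create the claim and verify that it binds to the volume:

```shell
$ kubectl create -f static-pvc.yaml
persistentvolumeclaim/static-pv-claim created

# The claim should report a Bound status against static-volume
$ kubectl get pvc static-pv-claim
```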
Use the persistent volume claim within pod definitions.
See Use the PVC in Pod Definitions for more information.
Use the PVC in Pod Definitions
After provisioning a static or dynamic persistent volume, you can use a persistent volume claim within pod definition files.
The following shows an example pod definition file with a persistent volume claim. Substitute the name of your persistent volume claim for the <pvc-name> placeholder in the claimName field.
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-replication-controller
  labels:
    app: mysql-1
    name: mysql-1
spec:
  replicas: 1
  selector:
    app: mysql-1
  template:
    metadata:
      labels:
        app: mysql-1
    spec:
      containers:
      - image: mysql
        name: mysql
        args: ["--datadir=/var/lib/mysql/mysql"]
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "yes"
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: volume-1
      volumes:
      - name: volume-1
        persistentVolumeClaim:
          claimName: <pvc-name>
Using the persistent volume claims created earlier in this guide, you could choose to use static-pv-claim or dynamic-pv-claim to specify a claim against a static or dynamic persistent volume, respectively.
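Assuming the definition above is saved as mysql-rc.yaml (an illustrative file name) with an existing claim name substituted for the placeholder, the controller can be created and its pod checked as follows:

```shell
$ kubectl create -f mysql-rc.yaml
replicationcontroller/mysql-replication-controller created

# Verify that the pod started and mounted the volume
$ kubectl get pods -l app=mysql-1
```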
RWX Support for Diamanti Volumes
An RWX PVC can be used simultaneously by many pods in the same Kubernetes namespace for read and write operations. Supporting concurrent access to Diamanti persistent volumes by multiple applications requires the following:
● Enable the ReadWriteMany (RWX) access mode on Diamanti volumes
● Export volumes with the RWX access mode to applications running on multiple nodes
NFS server to export the Diamanti volume:
Diamanti leverages the NFS protocol, which supports exporting a volume to multiple users for concurrent access. For a PVC created with the RWX access mode, an NFS server is automatically created behind the scenes to provide access to the storage. All applications that use the PVC mount the NFS share and gain read-write access to the volume. For applications to mount the NFS volume as a local drive, the volume needs to be exported on an endpoint that does not change with failover. The NFS server is therefore created along with a Kubernetes service with a cluster IP.
The following shows a sample specification of a persistent volume claim with the RWX access mode:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: best-effort
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10G
Create persistent volume claim with RWX access mode:
$ kubectl create -f pvc.yaml
persistentvolumeclaim/pvc1 created
The PVC with the RWX access mode is not created until the required endpoint is created.
Verify the PVC and PV creation:
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc1 Bound pvc-838dcb49-9059-48f9-8969-e97e1ea92889 10Gi RWX best-effort 5s
$ kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-838dcb49-9059-48f9-8969-e97e1ea92889 10Gi RWX Delete Bound default/pvc1 best-effort 7s
Create Application Pods to use the volume created with RWX access mode:
$ cat client1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: client1
spec:
  containers:
  - image: busybox
    name: busy-container1
    command: ["/bin/sleep","7d"]
    volumeMounts:
    - mountPath: "/data"
      name: date-pvc
  restartPolicy: Never
  volumes:
  - name: date-pvc
    persistentVolumeClaim:
      claimName: pvc1
$ cat client2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: client2
spec:
  containers:
  - image: busybox
    name: busy-container1
    command: ["/bin/sleep","7d"]
    volumeMounts:
    - mountPath: "/data1"
      name: date-pvc
  restartPolicy: Never
  volumes:
  - name: date-pvc
    persistentVolumeClaim:
      claimName: pvc1
$ kubectl create -f client1.yaml
pod/client1 created
$ kubectl create -f client2.yaml
pod/client2 created
$ kubectl get pods | grep client
client1 1/1 Running 0 11s
client2 1/1 Running 0 8s
Verify ReadWrite by application:
$ kubectl exec client1 -ti -- ls /data/
index.html lost+found
$ kubectl exec client1 -ti -- touch /data/client1_file
$ kubectl exec client1 -ti -- ls /data/
client1_file index.html lost+found
$ kubectl exec client2 -ti -- ls /data1/
client1_file index.html lost+found