Backup and Restore
You can configure the Backup Controller to create a backup of a specific Persistent Volume Claim (PVC), all PVCs within a specific namespace, or all PVCs in the cluster. The Persistent Volumes (PVs) associated with these PVCs are then backed up to the specified NFS target.
Limitations
Note that external KMS (Key Management Service) integration is not supported with the Diamanti Secure Volumes feature. The cluster-wide secure volumes encryption master key is managed within the Kubernetes cluster itself. Therefore, backup of secure volumes across clusters using the Diamanti Backup Controller is not supported.
Create the Backup Controller
The Backup Controller can be created on demand to take an instant backup, or it can be created with a Kubernetes cronjob to schedule periodic backups. The Backup Controller manages the entire backup operation lifecycle for the specified PVCs. You can configure the Backup Controller to limit the number of backups that run concurrently.
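The following is a minimal sketch of an on-demand backup wrapped in a plain Kubernetes Job, derived from the cronjob example shown later in this section. The virtual IP and NFS server values are placeholders, and the remaining controller arguments are described under Backup Controller Arguments:
apiVersion: batch/v1
kind: Job
metadata:
  name: backupjob-ondemand
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: backupcontroller-runner
      restartPolicy: Never
      containers:
      - name: backupjob-test-pvc
        image: diamanti/backupcontroller:v3.4.x-1
        args:
        - -virtualIP=x.x.x.x      # cluster virtual IP (placeholder)
        - -sourcePVC=test-pvc
        - -pvcNamespace=default
        - -backupPlugin=tar
        - -pluginArgs={"server":"x.x.x.x","path":"/dws_nas_scratch/backupdir","mountOptions":"nfsvers=3"}
        volumeMounts:
        - name: cluster-config
          mountPath: /etc/diamanti
          readOnly: true
      volumes:
      - name: cluster-config
        hostPath:
          path: /etc/diamanti
          type: Directory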
The Backup Controller performs the following operations:
If the volume is provisioned by the CSI driver, a VolumeSnapshot creation request is issued to the Kubernetes API server. This request creates a VolumeSnapshotContents object in Kubernetes, which represents an actual snapshot of the volume on the Diamanti platform.
Ensures that the maximum number of snapshots in the cluster does not exceed the value specified in the specification (the default value is 16). If the maximum number of snapshots has been reached, the Backup Controller deletes the oldest snapshot before creating the new one. The Diamanti platform supports a maximum of 16 snapshots per volume.
Creates a volume from a snapshot. This represents the Linked Clone Volume (LCV) for the snapshot on the Diamanti platform. A backup represents the state of volume data when the snapshot was created. This data might not fully represent the filesystem data if the filesystem has not flushed all its buffers.
Creates a backup agent pod to back up the data from the LCV to the NFS target server path. The tar backup plug-in creates an archive that contains the volume data.
Terminates the backup agent pod and deletes the LCV created for the backup after the backup operation is completed.
Note
The backup operation is performed using the Diamanti NFS plug-in. Ensure that the NFS server IP address is reachable (pingable) from the cluster nodes.
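For example, you can confirm reachability from a cluster node before configuring the backup (the IP address below is a placeholder):
$ ping -c 3 172.16.0.25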
Run the Backup Controller
This section provides an example of the steps required to run the Backup Controller. The backup is started using the test-pvc PersistentVolumeClaim. The PersistentVolume is dynamically provisioned.
Create the PersistentVolumeClaim
The following shows an example specification:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1G
Use the following commands to create and verify the PVC:
$ kubectl create -f test-pvc.yaml
persistentvolumeclaim "test-pvc" created

$ kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    pvc-46001517-e850-11e8-8831-54ab3a29175b   1G         RWO            best-effort    29s

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
pvc-46001517-e850-11e8-8831-54ab3a29175b   1G         RWO            Delete           Bound    default/test-pvc   best-effort             33s
The system dynamically creates and binds the PersistentVolume.
Create the test pod
The following is an example of a pod definition file that uses the test-pvc PVC. The pod writes data to the volume every 30 seconds.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  annotations:
    diamanti.com/endpoint: none
spec:
  restartPolicy: Never
  containers:
  - name: test-pod
    image: centos:7.5.1804
    command:
    - "/bin/sh"
    - "-c"
    - "while true; do date >> /data/pod-out.txt; cd /data; sync; sync; sleep 30; done"
    volumeMounts:
    - name: test-pvc
      mountPath: /data
  volumes:
  - name: test-pvc
    persistentVolumeClaim:
      claimName: test-pvc
Use the following command to create the test pod:
$ kubectl create -f pod.yaml
pod "test-pod" created
Verify the status of the pod using the following command:
$ kubectl get pods
NAME       READY   STATUS              RESTARTS   AGE
test-pod   0/1     ContainerCreating   0          3s

$ kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
test-pod   1/1     Running   0          5s
Create the snapshot promoter storage classes
The sample specification files are available at:
/usr/share/diamanti/manifests/examples/backup/backup-storageclasses/
To create a storage class with fsType ext3, use the following command:
$ kubectl create -f backup-ext3-sc.yaml
storageclass.storage.k8s.io/backup-ext3-sc created
The following shows a sample storage class definition file to create an ext3 storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: backup-ext3-sc
provisioner: dcx.csi.diamanti.com
reclaimPolicy: Delete
parameters:
  mirrorCount: "1"
  perfTier: "high"
  fsType: "ext3"
To create a storage class with fsType ext4, use the following command:
$ kubectl create -f backup-ext4-sc.yaml
storageclass.storage.k8s.io/backup-ext4-sc created
Here’s a sample storage class definition file to create an ext4 storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: backup-ext4-sc
provisioner: dcx.csi.diamanti.com
reclaimPolicy: Delete
parameters:
  mirrorCount: "1"
  perfTier: "high"
  fsType: "ext4"
allowVolumeExpansion: true
To create a storage class with fsType xfs, use the following command:
$ kubectl create -f backup-xfs-sc.yaml
storageclass.storage.k8s.io/backup-xfs-sc created
The following shows a sample storage class definition file to create an xfs storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: backup-xfs-sc
provisioner: dcx.csi.diamanti.com
reclaimPolicy: Delete
parameters:
  mirrorCount: "1"
  perfTier: "high"
  fsType: "xfs"
allowVolumeExpansion: true
To create all three storage classes, use the following command:
$ kubectl create -f backup-storageclasses/
storageclass.storage.k8s.io/backup-ext3-sc created
storageclass.storage.k8s.io/backup-ext4-sc created
storageclass.storage.k8s.io/backup-xfs-sc created
Note
Do not change the storage class names in the specification files; the Backup Controller depends on these exact names.
To verify the storage class status, use the following command:
$ kubectl get sc
NAME                       PROVISIONER            AGE
backup-ext3-sc             dcx.csi.diamanti.com   6m
backup-ext4-sc             dcx.csi.diamanti.com   6m
backup-xfs-sc              dcx.csi.diamanti.com   6m
best-effort (default)      dcx.csi.diamanti.com   5d18h
high                       dcx.csi.diamanti.com   5d18h
medium                     dcx.csi.diamanti.com   5d18h
snapshot-promoter          dcx.csi.diamanti.com   5d18h
snapshot-promoter-backup   dcx.csi.diamanti.com   5d18h
Use a Kubernetes cronjob to create a Backup Controller that backs up the volume periodically.
The sample specification files are available at:
/usr/share/diamanti/manifests/examples/backup/backupcronjob.yaml
The following example shows how to create the cronjob:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backupcronjob
spec:
  schedule: "13 * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1
  jobTemplate:
    spec:
      backoffLimit: 0
      template:
        spec:
          serviceAccountName: backupcontroller-runner
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                  - key: diamanti.com/app
                    operator: In
                    values: ["backupcontroller"]
                # Ensures only one backup controller runs in the cluster
                topologyKey: beta.kubernetes.io/os
          containers:
          - name: backupjob-test-pvc
            image: diamanti/backupcontroller:v3.4.x-1
            args:
            - -virtualIP=x.x.x.x
            - -sourcePVC=test-pvc
            - -pvcNamespace=default
            - -backupPlugin=tar
            - -pluginArgs={"server":"x.x.x.x","path":"/dws_nas_scratch/backupdir","mountOptions":"nfsvers=3"}
            - -pluginOptions=["-cvp","--selinux","--acls","--xattrs"]
            - -compressed=true
            - -maxNumSnapshots=5
            - -numDaysToKeep=5
            - -activeVolumesOnly=true
            - -cpuResource=100m
            - -memoryResource=100Mi
            - -maxConcurrentJobs=10
            - -snapshotPromoter={"ext3":"backup-ext3-sc","ext4":"backup-ext4-sc","xfs":"backup-xfs-sc"}
            volumeMounts:
            - name: cluster-config
              mountPath: /etc/diamanti
              readOnly: true
          volumes:
          - name: cluster-config
            hostPath:
              path: /etc/diamanti
              type: Directory
          restartPolicy: Never
See Backup Controller Arguments for more information about the controller arguments. See cronjob Parameters for details about the available cronjob parameters.
Create a backup for the test-pvc PVC using the following command:
$ kubectl create -f backupcronjob.yaml
cronjob.batch "backupcronjob" created
Check the status of the cronjob.
In the following example, the schedule is set to run at the 13th minute of every hour:
$ kubectl get cronjob
NAME            SCHEDULE     SUSPEND   ACTIVE   LAST SCHEDULE   AGE
backupcronjob   13 * * * *   False     0        <none>          11s
Check whether the cronjob is active using the following command:
$ kubectl get cronjob
NAME            SCHEDULE     SUSPEND   ACTIVE   LAST SCHEDULE   AGE
backupcronjob   13 * * * *   False     1        6s              53s
Verify that the Backup Controller has started using the following command:
$ kubectl get pods | grep backup
backupcronjob-1542229980-vcr57   1/1   Running   0   6s
The following shows the snapshot created by the Backup Controller for the test-pvc PVC:
$ kubectl get volumesnapshots.snapshot.storage.k8s.io
NAME                                   AGE
default-test-pvc-snapshot-1542229986   9s

$ kubectl get volumesnapshotcontents.snapshot.storage.k8s.io
NAME                                           AGE
k8ssnap-147d3229-e852-11e8-8d57-0a58ac140033   12s
Notice that the PVC created by the snapshot-promoter storage class, snapshotted-pvc-default-test-pvc, is associated with the snapshotted volume.
$ kubectl get pvc | grep snapshotted
NAME                               STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS        AGE
snapshotted-pvc-default-test-pvc   Pending                                      snapshot-promoter   14s
The pvc-15a7ae64-e852-11e8-8831-54ab3a29175b PV represents the snapshotted volume, as shown in the following:
$ kubectl get pv | grep snapshotted
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM
pvc-15a7ae64-e852-11e8-8831-54ab3a29175b   1G         RWO            Delete           Bound    default/snapshotted-pvc-default-test-pvc
STORAGECLASS        REASON   AGE
snapshot-promoter            5s
The backup-pod-default-test-pvc pod is the backup agent started by the Backup Controller to back up the data from the snapshotted volume to the NFS target, as shown in the following:
$ kubectl get pods | grep backup-pod
backup-pod-default-test-pvc   0/1   ContainerCreating   0   1s
Verify the status of the backup agent pod using the following command:
$ kubectl get pods | grep backup-pod
backup-pod-default-test-pvc   0/1   Completed   0   4s

After the backup agent pod completes its operation, the Backup Controller transitions to Completed status, as shown in the following:
$ kubectl get pods | grep backup
backupcronjob-1542229980-vcr57   1/1   Completed   0   1m
(Optional) If you no longer need to back up the volume, delete the cronjob using the following command:
$ kubectl delete cronjob backupcronjob
cronjob.batch "backupcronjob" deleted
(Optional) If you no longer need the storage classes, delete the storage classes using the following command:
$ kubectl delete -f backup-storageclasses/
storageclass.storage.k8s.io "backup-ext3-sc" deleted
storageclass.storage.k8s.io "backup-ext4-sc" deleted
storageclass.storage.k8s.io "backup-xfs-sc" deleted
Verify the backed-up data on the NFS server.
Since the backup was performed at the 13th minute, the data in the volume shows timestamps up to 21:12:29, the last write before the backup was taken.
$ ls -ltr
total 4
-rw-r--r-- 1 root root 439 Nov 14 13:13 tar-backup-default-test-pvc-1542230004.tar.gz

# zcat tar-backup-default-test-pvc-1542230004.tar.gz
./PaxHeaders.7/data0000644000000000000000000000037413373106773012554 xustar0000000000000000
30 mtime=1542229499.137002325
30 atime=1542229499.010003593
30 ctime=1542229499.137002325
76 RHT.security.selinux=system_u:object_r:svirt_sandbox_file_t:s0:c760,c895
86 SCHILY.xattr.security.selinux=system_u:object_r:svirt_sandbox_file_t:s0:c760,c895
data/pod-out.txt0000644000000000000000000000072013373107675014130 0ustar00rootroot00000000000000
Wed Nov 14 21:04:59 UTC 2018
Wed Nov 14 21:05:29 UTC 2018
Wed Nov 14 21:05:59 UTC 2018
Wed Nov 14 21:06:29 UTC 2018
Wed Nov 14 21:06:59 UTC 2018
Wed Nov 14 21:07:29 UTC 2018
Wed Nov 14 21:07:59 UTC 2018
Wed Nov 14 21:08:29 UTC 2018
Wed Nov 14 21:08:59 UTC 2018
Wed Nov 14 21:09:29 UTC 2018
Wed Nov 14 21:09:59 UTC 2018
Wed Nov 14 21:10:29 UTC 2018
Wed Nov 14 21:10:59 UTC 2018
Wed Nov 14 21:11:29 UTC 2018
Wed Nov 14 21:11:59 UTC 2018
Wed Nov 14 21:12:29 UTC 2018
Backup Controller Arguments
The Backup Controller accepts the following arguments:
Argument | Description
---|---
virtualIP | The virtual IP address of the cluster.
sourcePVC | The name of the PVC whose associated volume is backed up.
pvcNamespace | The namespace of the PVCs to be backed up.
backupPlugin | The backup plug-in type. The "tar" backup plug-in creates an archive that contains the volume data.
pluginArgs | The plug-in arguments required for the backup jobs. The default file name is based on the namespace and the name of the PVC. The tar plug-in adds a timestamp suffix to the default directory name and also creates unique archive files.
pluginOptions | The backup plug-in options.
compressed | Enable or disable compression.
maxNumSnapshots | The maximum number of snapshots that can exist for the PVC. The default value is 16.
numDaysToKeep | The maximum number of days a backup can exist on the NFS target.
activeVolumesOnly | Back up active volumes only.
cpuResource | The CPU resource request for the backup agent.
memoryResource | The memory resource request for the backup agent.
maxConcurrentJobs | The maximum number of concurrent backup agents. This is applicable when the controller backs up multiple PVCs.
snapshotPromoter | The map of file systems to storage classes used to create volumes from snapshots, specified as key-value pairs "fsType":"storage class name".
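For example, the following argument values (as they would appear in the controller's args list) target an NFS export over NFSv3 and map each supported filesystem type to the storage classes created earlier in this section. The server IP address and export path are placeholders:
-pluginArgs={"server":"172.16.0.25","path":"/dws_nas_scratch/backupdir","mountOptions":"nfsvers=3"}
-snapshotPromoter={"ext3":"backup-ext3-sc","ext4":"backup-ext4-sc","xfs":"backup-xfs-sc"}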
cronjob Parameters
The cronjob accepts the following parameters:
Argument | Description
---|---
schedule | The schedule to run the job.
concurrencyPolicy | Set to Forbid so that concurrent jobs do not run at the same time.
successfulJobsHistoryLimit | The number of completed jobs to retain.
backoffLimit | The number of times a job should be retried before it is considered failed.
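If an immediate backup is needed outside the schedule, a one-off run can be triggered from the existing cronjob using a standard Kubernetes command (the job name is arbitrary):
$ kubectl create job backupcronjob-manual-1 --from=cronjob/backupcronjob
job.batch/backupcronjob-manual-1 created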
Backup Controller Helm Chart
This section describes how to use a Backup Controller Helm chart to back up PVCs using the volume snapshot feature.
Before you use the Backup Controller Helm Chart
The requirements to use the Backup Controller Helm chart are as follows:
Diamanti Software v2.4.0 (or higher)
Helm installed on your local machine and configured to work with the cluster
An NFS server accessible from the cluster
Explore the Configuration
Helm charts store configured settings in a values.yaml file. The following example shows the contents of this file:
# Default values for backup-controller.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
cronJob:
  enabled: true # Set it to false if you want it to be on-demand
  schedule: "0 0 * * *"
image:
  repository: diamanti/backupcontroller
  tag: v3.4.x-1
  pullPolicy: IfNotPresent
nameOverride: ""
fullnameOverride: ""
rbac:
  serviceAccountName: backupcontroller-runner
args:
  virtualIP: "172.16.19.99"
  plugin: tar
  nfsServer: "172.16.0.25"
  share: "/dws_nas_scratch/backup"
  options: "nfsvers=3"
  #route: "172.10.1.2"
  maxNumSnapshots: 5
  compressed: "true"
  numDaysToKeep: 14
  activeVolumesOnly: "true"
  cpuResource: 1000m
  memoryResource: 1Gi
  maxConcurrentJobs: 6
  namespace: ""
  sourcePVC: ""
resources:
  # We usually recommend not to specify default resources and to leave this as a
  # conscious choice for the user. This also increases chances charts run on
  # environments with little resources, such as Minikube. If you do want to specify
  # resources, uncomment the following lines, adjust them as necessary, and remove the
  # curly braces after 'resources:'.
  limits:
    cpu: 1000m
    memory: 1024Mi
  requests:
    cpu: 500m
    memory: 256Mi
Understand the cronjob Section
Use the following section of the values.yaml file to configure the backup job as either an immediate or a scheduled backup:
cronJob:
  enabled: true # Set it to false if you want it to be on-demand
  schedule: "0 0 * * *"
To configure the backup job as an immediate backup, set the enabled variable to false. To create a scheduled backup, set the enabled variable to true and configure the schedule. Note that the schedule variable uses cron formatting, as shown in the following:
* * * * *
| | | | |
| | | | +---- Day of the Week (range: 0-6, 0 = Sunday)
| | | +------ Month of the Year (range: 1-12)
| | +-------- Day of the Month (range: 1-31)
| +---------- Hour (range: 0-23)
+------------ Minute (range: 0-59)
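A few example schedule values:
schedule: "0 0 * * *"     # run once a day at midnight
schedule: "13 * * * *"    # run at minute 13 of every hour (as in the earlier cronjob example)
schedule: "*/30 * * * *"  # run every 30 minutes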
When a scheduled backup job is created, the Backup Controller creates a cronjob that controls the backups until the job is canceled. For more information, see https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/.
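To pause a scheduled backup job without deleting it, you can suspend the cronjob using standard Kubernetes tooling:
$ kubectl patch cronjob backupcronjob -p '{"spec":{"suspend":true}}'
cronjob.batch/backupcronjob patched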
Explore the Args Section
The following table describes the arguments that you can specify in the args section of the values.yaml file:
Argument | Description
---|---
virtualIP | The virtual IP address of the cluster.
plugin | The backup plug-in type. Currently, only tar is supported.
nfsServer | The FQDN or IP address of the NFS server. Note: If the NFS server IP address is in the same subnet as the cluster management interfaces, backups use the management network. This can have a negative impact on cluster stability, so this type of backup configuration should be avoided.
share | The share path configured on the NFS server. This needs to be set exactly as configured in the /etc/exports file on the NFS server. Contact the NFS server administrator for this information.
options | Sets options used by the mount command. NFS versions 3 and 4 are supported. Check with the NFS server administrator to determine the correct version to use.
route | (Optional) The list of IP addresses/subnets, for cases in which the NFS server is associated with multiple IP addresses. The Diamanti NFS plug-in adds routes for the specified IP addresses and subnets through the host network gateway (default) on the node on which the backup pod is created.
maxNumSnapshots | The maximum number of snapshots to retain. Snapshots consume space on NVMe drives, so be careful with the number of snapshots retained.
compressed | Specifies whether to compress the backup file, either true or false.
numDaysToKeep | The number of days backups are kept on the NFS server for this backup job. Backups older than the configured retention period are expired and deleted from the NFS server.
activeVolumesOnly | Specifies that only volumes currently in use are backed up, either true or false.
cpuResource | The CPU resource reservation for backup agents.
memoryResource | The memory resource reservation for backup agents.
maxConcurrentJobs | The number of concurrent backup jobs that can run.
namespace | The namespace of the PVCs being backed up. To back up a specific PVC, set the namespace for that PVC. If this is configured but no sourcePVC is configured, all PVCs in the namespace are backed up. If this is not set, all PVCs in all namespaces are backed up.
sourcePVC | Only used when backing up a specific PVC. If left blank, all PVCs in the configured namespace are backed up.
Install the Helm Chart
To install the Helm chart, change to the directory that holds the Chart.yaml file and run the helm install command. To use a custom values file, specify it with the -f flag.
The sample helm chart is available at:
/usr/share/diamanti/ui/helm/charts/backup-controller-3.3.1.tgz
$ helm3 install --generate-name backup-controller
NAME: backup-controller-1665141530
LAST DEPLOYED: Fri Oct 7 11:18:50 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
It is recommended that you use different values.yaml files for different backup jobs. Copy the values.yaml file and name it using the following format:
values-<backup job name>.yaml
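You can then pass the job-specific values file to helm install; the file name below is illustrative:
$ helm3 install --generate-name -f values-mybackupjob.yaml backup-controller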
Restore a Backup
A volume containing data backed up on an NFS server is restored using a restore pod.
Restore the Backup Using a Pod (Manual Method)
This section shows how to restore data backed up on an NFS server using a restore pod.
Create a new PersistentVolumeClaim (PVC) to provision a new volume for the restore pod. Use the following specification to create the PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restore-test-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20G
Verify that the PVC has been created using the following command:
$ kubectl get pvc restore-test-pvc
NAME               STATUS   VOLUME                                     CAPACITY     ACCESS MODES   STORAGECLASS   AGE
restore-test-pvc   Bound    pvc-5a6ba1ec-4146-44ae-810d-1826bca17454   19531250Ki   RWO            csi-sc         2s
Create the restore pod
The following is an example of a pod definition file that uses the restore-test-pvc PVC. In this file, specify the NFS server IP address, the path on the NFS server that contains the backup files, and the name of the archive file to restore.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 2G
  accessModes:
  - ReadWriteOnce
  csi:
    driver: dcx.nfs.csi.diamanti.com
    volumeHandle: data-id
    volumeAttributes:
      name: nfs-pv
      type: nfs
      server: 192.168.211.23
      share: "/exports"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  volumeName: nfs-pv
  storageClassName: ""
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2G
---
apiVersion: v1
kind: Pod
metadata:
  name: restore-pod
  annotations:
    diamanti.com/endpoint: none
spec:
  containers:
  - name: tar-backup
    image: centos:7.5.1804
    command: ["tar"]
    args:
    - "-xvpz"
    - "--file=/backup-dir/tar-backup-default-pvcfio-volc-ahigh-1-1-1635847373.tar.gz"
    - "--strip-components=1"
    - "-C"
    - "/data"
    - "--numeric-owner"
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /data
      name: restore-vol
    - mountPath: /backup-dir
      name: backup-vol
  restartPolicy: OnFailure
  volumes:
  - name: restore-vol
    persistentVolumeClaim:
      claimName: restore-test-pvc
  - name: backup-vol
    persistentVolumeClaim:
      claimName: nfs-pvc
Create the restore pod using the following command:
$ kubectl create -f tar-restore.yaml
persistentvolume/nfs-pv created
persistentvolumeclaim/nfs-pvc created
pod/restore-pod created
The following table describes the arguments that you can specify in the restore pod definition file above:
Argument | Description
---|---
server | The IP address of the NFS server.
share | The path on the NFS server that contains the backup files.
--file | The name of the archive file to restore.
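If you do not know the archive file name, you can list the backup directory on the NFS server; the path below matches the earlier pluginArgs example and is a placeholder:
$ ls /dws_nas_scratch/backupdir
tar-backup-default-pvcfio-volc-ahigh-1-1-1635847373.tar.gz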
Check the status of the pod using the following command:
$ kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
restore-pod   1/1     Running   0          9s
Wait until the pod completes. When the pod enters the Completed state, the data has been restored successfully to the volume created by the restore-test-pvc PVC. Verify the status of the pod using the following command:
$ kubectl get pod
NAME          READY   STATUS      RESTARTS   AGE
restore-pod   0/1     Completed   0          30s
Delete the restore pod so that you can verify that the restored data is correct:
$ kubectl delete -f tar-restore.yaml
persistentvolume "nfs-pv" deleted
persistentvolumeclaim "nfs-pvc" deleted
pod "restore-pod" deleted
The volume backing the restore-test-pvc PVC now returns to the Available state. To get the device path, attach the volume manually:
$ dctl volume attach pvc-5a6ba1ec-4146-44ae-810d-1826bca17454
NAME                                       SIZE      NODE          LABELS   PHASE   STATUS      ATTACHED-TO   DEVICE-PATH   PARENT   AGE
pvc-5a6ba1ec-4146-44ae-810d-1826bca17454   20.01GB   [appserv92]   <none>   -       Available                                        7s
Verify that the volume is attached using the following command:
$ dctl volume list
pvc-5a6ba1ec-4146-44ae-810d-1826bca17454   20.01GB   [appserv92]   diamanti.com/pod-name=default/pvc-5a6ba1ec-4146-44ae-810d-1826bca17454-attached-manually   -   Attached   sptocp4   /dev/nvme2n2   48s
Attaching the volume provides the device path; in this example, it is /dev/nvme2n2. To verify the restore, log in to the node and mount the device:
$ sudo mount /dev/nvme2n2 /mnt
The device is mounted at the /mnt directory. The data should now match the data backed up on the NFS server:
$ cd /mnt/
$ ls -l
total 1769496
-rw-r--r-- 1 root root       143 Jun 14 03:45 fio.out
-rw-r--r-- 1 root root 113246208 Jun 14 04:12 job0.0.0
-rw-r--r-- 1 root root 113246208 Jun 14 04:12 job0.1.0
-rw-r--r-- 1 root root 113246208 Jun 14 04:12 job0.10.0
-rw-r--r-- 1 root root 113246208 Jun 14 04:12 job0.11.0
-rw-r--r-- 1 root root 113246208 Jun 14 04:12 job0.12.0
-rw-r--r-- 1 root root 113246208 Jun 14 04:12 job0.13.0
-rw-r--r-- 1 root root 113246208 Jun 14 04:12 job0.14.0
-rw-r--r-- 1 root root 113246208 Jun 14 04:12 job0.15.0
-rw-r--r-- 1 root root 113246208 Jun 14 04:12 job0.2.0
-rw-r--r-- 1 root root 113246208 Jun 14 04:12 job0.3.0
-rw-r--r-- 1 root root 113246208 Jun 14 04:12 job0.4.0
-rw-r--r-- 1 root root 113246208 Jun 14 04:12 job0.5.0
-rw-r--r-- 1 root root 113246208 Jun 14 04:12 job0.6.0
-rw-r--r-- 1 root root 113246208 Jun 14 04:12 job0.7.0
-rw-r--r-- 1 root root 113246208 Jun 14 04:12 job0.8.0
-rw-r--r-- 1 root root 113246208 Jun 14 04:12 job0.9.0
drwx------ 2 root root     16384 Jun 14 03:45 lost+found

Note
Restoring a backup using the Helm chart is not supported.
Backup Controller Log
The following is an example of backup controller logs from a successful backup:
$ kubectl logs backupcronjob-1542229980-vcr57
I1114 21:13:06.239995   1 backupcontroller.go:228] Starting Backup Controller to backup the volume associated with PVC default/test-pvc
I1114 21:13:06.248050   1 backupcontroller.go:323] [default/test-pvc] Backup Controller starting backup
I1114 21:13:06.290555   1 backupcontroller.go:456] [default/test-pvc] Creating VolumeSnapshot default-test-pvc-snapshot-1542229986
I1114 21:13:07.299002   1 backupcontroller.go:505] [default/test-pvc] Successfully created VolumeSnapshotData k8ssnap-147d3229-e852-11e8-8d57-0a58ac140033 for VolumeSnapshot default-test-pvc-snapshot-1542229986
I1114 21:13:07.299044   1 backupcontroller.go:509] [default/test-pvc] Wait for snapshot k8ssnap-147d3229-e852-11e8-8d57-0a58ac140033 creation on Diamanti platform
I1114 21:13:08.305411   1 backupcontroller.go:523] [default/test-pvc] Successfully created snapshot k8ssnap-147d3229-e852-11e8-8d57-0a58ac140033 on Diamanti platform
I1114 21:13:08.305463   1 backupcontroller.go:552] [default/test-pvc] Creating PVC snapshotted-pvc-default-test-pvc to create volume from snapshot default-test-pvc-snapshot-1542229986
E1114 21:13:09.312718   1 backupcontroller.go:575] [default/test-pvc] Waiting for PV creation for PVC snapshotted-pvc-default-test-pvc
E1114 21:13:10.315160   1 backupcontroller.go:575] [default/test-pvc] Waiting for PV creation for PVC snapshotted-pvc-default-test-pvc
E1114 21:13:11.317454   1 backupcontroller.go:575] [default/test-pvc] Waiting for PV creation for PVC snapshotted-pvc-default-test-pvc
E1114 21:13:12.319849   1 backupcontroller.go:575] [default/test-pvc] Waiting for PV creation for PVC snapshotted-pvc-default-test-pvc
E1114 21:13:13.322450   1 backupcontroller.go:575] [default/test-pvc] Waiting for PV creation for PVC snapshotted-pvc-default-test-pvc
E1114 21:13:14.325206   1 backupcontroller.go:575] [default/test-pvc] Waiting for PV creation for PVC snapshotted-pvc-default-test-pvc
E1114 21:13:15.329606   1 backupcontroller.go:575] [default/test-pvc] Waiting for PV creation for PVC snapshotted-pvc-default-test-pvc
E1114 21:13:16.335942   1 backupcontroller.go:575] [default/test-pvc] Waiting for PV creation for PVC snapshotted-pvc-default-test-pvc
E1114 21:13:17.338749   1 backupcontroller.go:575] [default/test-pvc] Waiting for PV creation for PVC snapshotted-pvc-default-test-pvc
E1114 21:13:18.341565   1 backupcontroller.go:575] [default/test-pvc] Waiting for PV creation for PVC snapshotted-pvc-default-test-pvc
E1114 21:13:19.344073   1 backupcontroller.go:575] [default/test-pvc] Waiting for PV creation for PVC snapshotted-pvc-default-test-pvc
E1114 21:13:20.346534   1 backupcontroller.go:575] [default/test-pvc] Waiting for PV creation for PVC snapshotted-pvc-default-test-pvc
E1114 21:13:21.348684   1 backupcontroller.go:575] [default/test-pvc] Waiting for PV creation for PVC snapshotted-pvc-default-test-pvc
E1114 21:13:22.350862   1 backupcontroller.go:575] [default/test-pvc] Waiting for PV creation for PVC snapshotted-pvc-default-test-pvc
I1114 21:13:23.353504   1 backupcontroller.go:578] [default/test-pvc] Successfully created PV for PVC snapshotted-pvc-default-test-pvc: pvc-15a7ae64-e852-11e8-8831-54ab3a29175b
I1114 21:13:23.353540   1 backupcontroller.go:582] [default/test-pvc] Wait for volume creation on Diamanti platform pvc-15a7ae64-e852-11e8-8831-54ab3a29175b
I1114 21:13:24.359481   1 backupcontroller.go:596] [default/test-pvc] Successfully created volume pvc-15a7ae64-e852-11e8-8831-54ab3a29175b from snapshot k8ssnap-147d3229-e852-11e8-8d57-0a58ac140033 on Diamanti platform
I1114 21:13:24.378947   1 backupcontroller.go:842] [default/test-pvc] Starting pod backup-pod-default-test-pvc to run backup
I1114 21:13:29.428944   1 backupcontroller.go:904] [default/test-pvc] Backup completed at 2018-11-14T21:13:29Z
I1114 21:13:29.446741   1 backupcontroller.go:417] [default/test-pvc] Backup agent completed
I1114 21:13:29.446771   1 backupcontroller.go:420] [default/test-pvc] Deleting PVC snapshotted-pvc-default-test-pvc created from snapshot default-test-pvc-snapshot-1542229986
I1114 21:13:29.448958   1 backupcontroller.go:342] [default/test-pvc] Backup Controller completed backup
I1114 21:13:36.248171   1 backupcontroller.go:316] Backup Controller completed