Followed the documented backup procedure for Apigee hybrid Cassandra using CSI:
https://cloud.google.com/apigee/docs/hybrid/v1.9/cassandra-csi-backup-restore
1. kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
xxx-sc (yyyyy) xxx.csi.aws.xxx Retain WaitForFirstConsumer false xxx
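Since CSI backup depends on volume snapshots, it may also be worth confirming up front that the snapshot CRDs and a VolumeSnapshotClass exist in the cluster. A minimal sketch (assuming the standard external-snapshotter CRD names):

```shell
# Sketch: confirm the cluster can take CSI snapshots at all.
# Skip gracefully if kubectl or a cluster is unavailable (keeps the sketch copy/paste safe).
if ! command -v kubectl >/dev/null 2>&1 || ! kubectl cluster-info >/dev/null 2>&1; then
  echo "kubectl or cluster not available; skipping checks"
else
  # The VolumeSnapshot CRD must be installed for snapshots to be created.
  kubectl get crd volumesnapshots.snapshot.storage.k8s.io
  # At least one VolumeSnapshotClass matching the CSI driver should exist.
  kubectl get volumesnapshotclass
fi
CHECK=snapshot-support   # marker for the check performed above
```

If either command returns nothing, no VolumeSnapshot will ever appear regardless of the backup job's status.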
2. kubectl create job -n apigee --from=cronjob/apigee-cassandra-backup <backup-pod-name>
This command creates a one-off backup job from the cronjob; the backup pod is named after the job.
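A sketch of step 2 that also locates the resulting pod (the job name manual-backup-1 is just an example, not from the document):

```shell
NS=apigee
JOB=manual-backup-1   # hypothetical job name; substitute your own

# Skip gracefully if kubectl or a cluster is unavailable.
if ! command -v kubectl >/dev/null 2>&1 || ! kubectl cluster-info >/dev/null 2>&1; then
  echo "kubectl or cluster not available; skipping"
else
  # Create the one-off backup job from the cronjob template.
  kubectl create job -n "$NS" --from=cronjob/apigee-cassandra-backup "$JOB"
  # Kubernetes labels a job's pods with job-name=<job>, so the backup
  # pod can be found with a label selector:
  kubectl get pods -n "$NS" -l job-name="$JOB"
fi
```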
3. The override file is set up as below:
cassandra:
  hostNetwork: false
  replicaCount: 3
  storage:
    storageclass: standard-rwo
    capacity: 100Gi
  image:
    pullPolicy: Always
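One thing worth noting: the overrides name the class standard-rwo, while step 1 lists a different class with an AWS CSI provisioner. Snapshots only work for CSI-provisioned volumes, so a quick sketch to confirm that the class named in the overrides actually exists and is CSI-backed:

```shell
SC=standard-rwo   # storage class named in the overrides above

# Skip gracefully if kubectl or a cluster is unavailable.
if ! command -v kubectl >/dev/null 2>&1 || ! kubectl cluster-info >/dev/null 2>&1; then
  echo "kubectl or cluster not available; skipping"
else
  # Fails if the class does not exist; the PROVISIONER column should
  # name a CSI driver for snapshots to be possible.
  kubectl get sc "$SC"
fi
```

If the Cassandra PVs were provisioned from a class other than the one the backup expects, that mismatch could explain missing snapshots.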
4. kubectl get cronjob -n apigee
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
apigee-cassandra-backup xx yy zz ss m False 0 <none> nnnn
This lists the expected output with the correct schedule.
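The cronjob listing only shows the schedule, not whether any run actually completed. A sketch of checking the job history and the cronjob's recent events:

```shell
NS=apigee

# Skip gracefully if kubectl or a cluster is unavailable.
if ! command -v kubectl >/dev/null 2>&1 || ! kubectl cluster-info >/dev/null 2>&1; then
  echo "kubectl or cluster not available; skipping"
else
  # Jobs spawned by the cronjob (plus any manual one) and their COMPLETIONS:
  kubectl get jobs -n "$NS"
  # Events on the cronjob can surface scheduling or pod-creation problems:
  kubectl describe cronjob apigee-cassandra-backup -n "$NS"
fi
```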
5. kubectl get volumesnapshot -n apigee (returns the following)
No resources found in apigee namespace.
Are there any additional steps to follow, or any other debug commands, to validate the backup? FYI, all Cassandra pods are in the Running state, as are the other apigee pods.
kubectl get pv will show the storage class used by the Cassandra volumes.
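A sketch of narrowing that output to just the columns that matter here, i.e. which storage class each Cassandra PV was actually provisioned from (the claim names are typical for Apigee hybrid, not guaranteed):

```shell
# Columns to extract: PV name, its storage class, and the bound claim.
COLS='NAME:.metadata.name,STORAGECLASS:.spec.storageClassName,CLAIM:.spec.claimRef.name'

# Skip gracefully if kubectl or a cluster is unavailable.
if ! command -v kubectl >/dev/null 2>&1 || ! kubectl cluster-info >/dev/null 2>&1; then
  echo "kubectl or cluster not available; skipping"
else
  # The Cassandra claims are typically named like
  # cassandra-data-apigee-cassandra-default-N.
  kubectl get pv -o custom-columns="$COLS"
fi
```

If the STORAGECLASS shown here differs from the one the snapshot class targets, snapshots can silently fail to be created.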
@dhtx kubectl get pv shows the expected output, indicating the SC-to-PV binding status. But in that case, does it mean the backup is working fine?
But the listed command still shows the same output as below. Any reason why?
kubectl get volumesnapshot -n apigee (returns the following)
No resources found in apigee namespace.
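When the namespaced VolumeSnapshot list is empty, the snapshot pipeline itself is worth checking. A sketch (VolumeSnapshotContent is cluster-scoped, so no namespace flag):

```shell
NS=apigee

# Skip gracefully if kubectl or a cluster is unavailable.
if ! command -v kubectl >/dev/null 2>&1 || ! kubectl cluster-info >/dev/null 2>&1; then
  echo "kubectl or cluster not available; skipping"
else
  # Cluster-scoped snapshot contents, in case a snapshot exists but its
  # namespaced VolumeSnapshot was never bound or was deleted:
  kubectl get volumesnapshotcontent
  # Recent events often surface snapshot-controller or CSI driver errors:
  kubectl get events -n "$NS" --sort-by=.lastTimestamp | tail -n 20
fi
```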
The next thing to check would be the backup job logs, with the command below. If the backup succeeded, there should be one or more VolumeSnapshot objects created.
kubectl logs -n apigee <backup pod name>
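A sketch of pulling those logs either by pod name or directly from the job, when the pod name isn't handy (the job name is the hypothetical one from step 2):

```shell
NS=apigee
JOB=manual-backup-1   # hypothetical job name; substitute your own

# Skip gracefully if kubectl or a cluster is unavailable.
if ! command -v kubectl >/dev/null 2>&1 || ! kubectl cluster-info >/dev/null 2>&1; then
  echo "kubectl or cluster not available; skipping"
else
  # Logs straight from the job (kubectl picks one of its pods):
  kubectl logs -n "$NS" "job/$JOB"
  # Or via the job-name label, showing the last lines only:
  kubectl logs -n "$NS" -l job-name="$JOB" --tail=100
fi
```

Errors in these logs (e.g. snapshot creation failures) would explain why no VolumeSnapshot objects appear.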