Persistent Storage
cegedim.cloud now provides a multi-tenant Ceph Storage Platform as a CSI provider with the following specifications:
Data is replicated 4 times and evenly distributed (using the Ceph CRUSH map) across 2 datacenters, so that under disaster scenarios 2 replicas of the data are always available.
Each Kubernetes cluster, as a Ceph client, has its own data pool on the Ceph server and consumes the service with its own pool-scoped credentials.
Only CSI Ceph RBD is provided for the moment.
Further information on Ceph CSI can be found in the official Ceph CSI documentation.
Component      Version
Ceph Cluster   17.2.5
CSI Ceph       3.9.0
Storage Class   Description
cgdm-rwo        Uses CSI Ceph RBD to provision ReadWriteOnce persistent volumes

Replication: x4. With 4 replicas spread evenly across 2 datacenters, the storage remains available when 1 AZ is DOWN or when 1 DC is DOWN.
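As an illustration, a PersistentVolumeClaim requesting a volume from this storage class could look like the sketch below; the claim name and the requested size are examples only:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data              # example name
spec:
  accessModes:
    - ReadWriteOnce              # cgdm-rwo provisions RWO volumes
  storageClassName: cgdm-rwo
  resources:
    requests:
      storage: 10Gi              # example size

Once applied with kubectl apply -f, the claim should be bound to a Ceph RBD-backed PersistentVolume.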
The capabilities tracked for each storage class are:
Provisioning new PV
Remount existing PV
Compatible with all K8S applications
Multi-mount (RWX)
Resizable
Snapshot
Fault Tolerance: loss of 1 AZ
Fault Tolerance: loss of 1 DC
Compatible with K8S 1.22+
Compatible with K8S 1.22-
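Regarding the Resizable capability listed above: the cgdm-rwo storage class is created with ALLOWVOLUMEEXPANSION set to true (see the kubectl get sc output below), so an existing claim can be grown by raising its storage request. The claim name and target size below are examples only:

$ kubectl patch pvc my-app-data -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

Depending on the driver version, the filesystem expansion may happen online or only once the pod is restarted; shrinking a volume is not supported.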
cegedim.cloud uses the External Snapshotter to snapshot and restore PVCs in your Kubernetes clusters.
All information about this application can be found in the external-snapshotter project documentation.
As a best practice, we recommend naming the VolumeSnapshotClass after the StorageClass. Execute the commands below to check:
$ kubectl get sc
NAME                 PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
cgdm-rwo (default)   rbd.csi.ceph.com   Delete          Immediate           true                   57d
$ kubectl get volumesnapshotclass
NAME       DRIVER             DELETIONPOLICY   AGE
cgdm-rwo   rbd.csi.ceph.com   Delete           36d
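As a sketch, assuming an existing claim named my-app-data (a hypothetical name), a snapshot and a restore could be declared along these lines with the cgdm-rwo VolumeSnapshotClass shown above:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-app-data-snap                    # example name
spec:
  volumeSnapshotClassName: cgdm-rwo
  source:
    persistentVolumeClaimName: my-app-data  # PVC to snapshot
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data-restored                # example name
spec:
  storageClassName: cgdm-rwo
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                         # at least the size of the source volume
  dataSource:
    name: my-app-data-snap                  # restore from the snapshot above
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io

The restored claim must be created in the same namespace as the snapshot, and the snapshot must be ReadyToUse before the restored volume is provisioned.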
To list all CSI-backed storage classes available in a Kubernetes cluster, perform the following:
$ kubectl get sc
NAME                 PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
cgdm-rwo (default)   rbd.csi.ceph.com   Delete          Immediate           true                   42d
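The CSI drivers themselves (rather than the storage classes that consume them) are exposed as CSIDriver objects; on a cluster using the class above, listing them is expected to show rbd.csi.ceph.com:

$ kubectl get csidrivers
$ kubectl describe csidriver rbd.csi.ceph.com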
Here is a mapping between Storage Class and CSI:
Storage Class   CSI
cgdm-rwo        Ceph RBD
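This mapping can be checked on a live cluster by reading the provisioner field of the storage class, which is the Ceph RBD CSI driver seen in the outputs above:

$ kubectl get sc cgdm-rwo -o jsonpath='{.provisioner}'
rbd.csi.ceph.com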