Persistent Storage
Introduction
cegedim.cloud now provides a multi-tenant Ceph storage platform as a CSI provider, with the following specifications:
Data is replicated 4 times and evenly distributed (using the Ceph CRUSH map) across 2 datacenters, so that under disaster scenarios 2 replicas of the data always remain available.
Each Kubernetes cluster, as a Ceph client, has its own data pool on the Ceph cluster and consumes services with its own pool-scoped credentials.
Only CSI Ceph RBD is provided for the moment.
Further information on Ceph CSI can be found here:
Versions
Ceph Cluster: 17.2.5
CSI Ceph: 3.9.0
Storage Class
cgdm-rwo: uses CSI Ceph RBD to provision ReadWriteOnce persistent volumes (see the example below).
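For illustration, a minimal PersistentVolumeClaim bound to the cgdm-rwo storage class could look like the sketch below (the claim name and requested size are placeholders to adapt to your workload):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-example            # placeholder name
spec:
  accessModes:
    - ReadWriteOnce             # cgdm-rwo only provides ReadWriteOnce volumes
  storageClassName: cgdm-rwo
  resources:
    requests:
      storage: 10Gi             # placeholder size
EOF

Since the storage class uses Immediate volume binding, the underlying Ceph RBD image is provisioned as soon as the claim is created.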
High Availability
Replication: x4
Fault Tolerance (1 AZ is DOWN): ✅
Fault Tolerance (1 DC is DOWN): ✅
CSI features
Provisioning new PV: ✅
Remount existing PV: ✅
Compatible with all K8S applications: ✅
Multi-mount (RWX): ❌
Resizable: ✅ (see the resize example after this list)
Snapshot: ✅
Fault Tolerance (loss of 1 AZ): ✅
Fault Tolerance (loss of 1 DC): ✅
Compatible with K8S 1.22+: ✅
Compatible with K8S 1.22-: ✅
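Because cgdm-rwo volumes are resizable and the storage class allows volume expansion, an existing PVC can be grown by patching its storage request. A minimal sketch, assuming a claim named data-example (a placeholder) that should grow to 20Gi:

$ kubectl patch pvc data-example -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

Note that Kubernetes volume expansion only allows increasing the requested size; shrinking a PVC is not supported.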
Snapshot and Restore PVC in Kubernetes
cegedim.cloud uses External Snapshotter to snapshot and restore PVCs in your Kubernetes clusters.
All information about this application can be found here:
How to know if I have an active snapshot class on my cluster
As a best practice, we recommend naming the VolumeSnapshotClass after the StorageClass. Simply run the commands below to check:
$ kubectl get sc
NAME                 PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
cgdm-rwo (default)   rbd.csi.ceph.com   Delete          Immediate           true                   57d
$ kubectl get volumesnapshotclass
NAME       DRIVER             DELETIONPOLICY   AGE
cgdm-rwo   rbd.csi.ceph.com   Delete           36d
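With the cgdm-rwo VolumeSnapshotClass shown above, a PVC can be snapshotted and then restored into a new PVC. The sketch below assumes a source claim named data-example; all names and the size are placeholders:

$ kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-example-snap            # placeholder name
spec:
  volumeSnapshotClassName: cgdm-rwo
  source:
    persistentVolumeClaimName: data-example
EOF
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-example-restored        # placeholder name
spec:
  storageClassName: cgdm-rwo
  dataSource:
    name: data-example-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                  # must be at least the size of the source PVC
EOF

Once the VolumeSnapshot reports readyToUse: true, the restored PVC can be mounted like any other cgdm-rwo volume.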
How to list the available CSI drivers in my cluster
To list all the CSI provisioners available in a Kubernetes cluster, list its storage classes:
$ kubectl get sc
NAME                 PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
cgdm-rwo (default)   rbd.csi.ceph.com   Delete          Immediate           true                   42d
Here is the mapping between Storage Class and CSI:
cgdm-rwo: Ceph RBD
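To double-check which CSI driver backs a given storage class, its provisioner field can be inspected directly, for example:

$ kubectl get sc cgdm-rwo -o jsonpath='{.provisioner}'
rbd.csi.ceph.com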