Persistent Storage
Introduction
cegedim.cloud now provides a multi-tenant Ceph storage platform as a CSI provider, with the following specifications:

- Data is replicated 4 times and evenly distributed (using the Ceph CRUSH map) across 2 datacenters, so that under disaster scenarios 2 replicas of the data are always available.
- Each Kubernetes cluster, as a Ceph client, has its own data pool on the Ceph server and consumes services with its own pool-scoped credential.
- Only CSI Ceph RBD is provided for the moment.
Further information on Ceph CSI can be found here:
Versions
Component | Version |
---|---|
Ceph Cluster | 17.2.5 |
CSI Ceph | 3.9.0 |
Storage Class
Name | Description |
---|---|
cgdm-rwo | Provisions ReadWriteOnce volumes using CSI Ceph RBD |
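A PersistentVolumeClaim using this storage class might look like the following minimal sketch (the claim name and requested size are illustrative assumptions, not values from this documentation):

```yaml
# Hypothetical example: request a ReadWriteOnce volume provisioned
# by CSI Ceph RBD through the cgdm-rwo storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data          # illustrative name
spec:
  accessModes:
    - ReadWriteOnce          # matches the "rwo" in cgdm-rwo
  storageClassName: cgdm-rwo
  resources:
    requests:
      storage: 10Gi          # illustrative size
```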
High Availability
 | EB | ET |
---|---|---|
Replication | x4 | x4 |
Fault Tolerance: 1 AZ is DOWN | ✓ | ✓ |
Fault Tolerance: 1 DC is DOWN | ✓ | ✓ |
CSI features
 | CSI ceph-rbd |
---|---|
Provisioning new PV | ✓ |
Remount existing PV | ✓ |
Compatible with all K8S applications | ✓ |
Multi-mount (RWX) | ✗ |
Resizable | ✓ |
Snapshot | ✓ |
Fault Tolerance: loss of 1 AZ | ✓ |
Fault Tolerance: loss of 1 DC | ✓ |
Compatible with K8S 1.22+ | ✓ |
Compatible with K8S 1.22- | ✗ |
Snapshot and Restore PVC in Kubernetes
cegedim.cloud uses External Snapshotter to snapshot and restore PVCs in your Kubernetes clusters.
All information about this application can be found here:
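A typical snapshot-and-restore flow can be sketched with the two manifests below. All names, the snapshot class, and the size are illustrative assumptions; adapt them to your own PVC and to the VolumeSnapshotClass actually present on your cluster:

```yaml
# Hypothetical example: snapshot an existing PVC, then restore it
# into a new PVC via the dataSource field.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot                      # illustrative name
spec:
  volumeSnapshotClassName: cgdm-rwo      # assumed snapshot class name
  source:
    persistentVolumeClaimName: my-app-data   # the PVC to snapshot
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data-restored             # illustrative name
spec:
  storageClassName: cgdm-rwo
  dataSource:
    name: my-snapshot                    # restore from the snapshot above
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                      # must be >= the original PVC size
```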
How to know if I have active snapshotclass on my cluster
As a best practice, we recommend naming the VolumeSnapshotClass after the StorageClass. Simply execute the command below to check:
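A minimal check could be the following (this assumes the External Snapshotter CRDs are installed, and requires access to a live cluster):

```shell
# List VolumeSnapshotClass objects; with the recommended naming,
# you should see an entry matching your storage class (e.g. cgdm-rwo).
kubectl get volumesnapshotclass
```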
How to list available CSI in my cluster
To list all CSI drivers available in a Kubernetes cluster, perform the following:
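One way to do this, assuming access to a live cluster, is to query the CSIDriver objects and the StorageClass provisioners:

```shell
# CSIDriver objects list every CSI driver registered in the cluster.
kubectl get csidrivers

# The PROVISIONER column of each StorageClass shows which CSI driver backs it.
kubectl get storageclass
```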
Here is a mapping between Storage Class and CSI:
Storage Classes | CSI |
---|---|
cgdm-rwo | Ceph RBD |