Persistent Storage

Introduction

cegedim.cloud now provides a multi-tenant Ceph Storage Platform as a CSI provider, with the following specifications:

  • Data is replicated 4 times and evenly distributed (using the Ceph CRUSH map) across 2 datacenters, ensuring that under disaster scenarios 2 replicas of the data are always available.

  • Each Kubernetes cluster, as a Ceph client, has its own data pool on the Ceph server and consumes services with its own pool-scoped credential.

  • Only CSI Ceph RBD is provided for the moment.

Further information on Ceph CSI can be found here: https://github.com/ceph/ceph-csi

Versions

| Component    | Version |
| ------------ | ------- |
| Ceph Cluster | 19.2.2  |
| CSI Ceph     | 3.14    |

cegedim.cloud performs an annual RFC (Request for Change) to keep both the Ceph server and the client (CSI) up to date with the latest stable releases.

Storage Class

| Name     | Description                                                      |
| -------- | ---------------------------------------------------------------- |
| cgdm-rwo | Uses CSI Ceph RBD to provision ReadWriteOnce persistent volumes |
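
For illustration, here is a minimal PersistentVolumeClaim sketch using the cgdm-rwo storage class (the claim name, namespace, and size are placeholders):

```bash
# Create a ReadWriteOnce volume backed by Ceph RBD
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data        # placeholder name
  namespace: my-namespace  # placeholder namespace
spec:
  accessModes:
    - ReadWriteOnce        # cgdm-rwo provisions RWO volumes only
  storageClassName: cgdm-rwo
  resources:
    requests:
      storage: 10Gi        # placeholder size
EOF
```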

High Availability

|                               | EB | ET |
| ----------------------------- | -- | -- |
| Replication                   | x4 | x4 |
| Fault Tolerance: 1 AZ is DOWN | ✓  | ✓  |
| Fault Tolerance: 1 DC is DOWN | ✓  | ✓  |

CSI features

| Feature                              | CSI ceph-rbd |
| ------------------------------------ | ------------ |
| Provisioning new PV                  | ✓            |
| Remount existing PV                  | ✓            |
| Compatible with all K8S applications | ✓            |
| Multi-mount (RWX)                    | ✗            |
| Resizable                            | ✓            |
| Snapshot                             | ✓            |
| Fault Tolerance: loss of 1 AZ        | ✓            |
| Fault Tolerance: loss of 1 DC        | ✓            |
| Compatible with K8S 1.22+            | ✓            |
| Compatible with K8S 1.22-            | ✗            |
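
Because rbd volumes are resizable, expanding a volume is a matter of raising the PVC's storage request. A minimal sketch, reusing the placeholder claim from the example above and assuming the cgdm-rwo class allows volume expansion:

```bash
# Raise the requested size; the CSI driver expands the RBD image online
kubectl patch pvc my-app-data -n my-namespace --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# Watch the claim until the new capacity is reported
kubectl get pvc my-app-data -n my-namespace -w
```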

Enabling Ceph Storage
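
The enablement procedure depends on your cluster setup; once Ceph storage is enabled, the cgdm-rwo storage class should be visible on the cluster:

```bash
kubectl get storageclass
```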

Usage Recommendations

cegedim.cloud recommends careful consideration when planning to use CSI Ceph for your storage needs:

  • Database Workloads: For production database requirements, we recommend using cegedim.cloud's official managed database PaaS offerings (PostgreSQL, MariaDB, Redis, etc.) instead of CSI Ceph. These managed services are specifically optimized, monitored, and supported for database workloads.

  • Critical Applications: For critical application data, thorough testing in pre-production environments is essential before deploying to production with CSI Ceph storage.

  • Best Use Cases: CSI Ceph is well-suited for:

    • Application state storage

    • Configuration and cache data

    • File storage for non-critical workloads

    • Development and testing environments

Testing your specific workload with CSI Ceph in a non-production environment will help ensure it meets your performance and reliability requirements before production deployment.

Snapshot and Restore PVC in Kubernetes

cegedim.cloud uses External Snapshotter to snapshot and restore PVCs in your Kubernetes clusters.

All information about this application can be found here: https://github.com/kubernetes-csi/external-snapshotter
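
For illustration, here is a minimal snapshot-and-restore sketch (names are placeholders; the snapshot class cgdm-rwo follows the naming recommendation below):

```bash
# Take a snapshot of an existing PVC
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-app-data-snap              # placeholder name
  namespace: my-namespace
spec:
  volumeSnapshotClassName: cgdm-rwo   # assumes the class is named after the storage class
  source:
    persistentVolumeClaimName: my-app-data
EOF

# Restore the snapshot into a new PVC
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data-restored
  namespace: my-namespace
spec:
  storageClassName: cgdm-rwo
  dataSource:
    name: my-app-data-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                   # must be at least the snapshot's size
EOF
```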

How to know if I have an active snapshot class on my cluster

As a best practice, we recommend naming the snapshot class after the storage class. To check, list the VolumeSnapshotClass objects on the cluster:
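
```bash
kubectl get volumesnapshotclass
```

If snapshots are enabled, the output should include an entry named after the storage class, e.g. cgdm-rwo.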

How to list the available CSI drivers in my cluster

To list all the CSI drivers available in a Kubernetes cluster, run:
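
```bash
kubectl get csidrivers
```

On a cluster with Ceph storage enabled, the output should include the rbd.csi.ceph.com driver.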

Here is a mapping between Storage Class and CSI:

| Storage Classes | CSI      |
| --------------- | -------- |
| cgdm-rwo        | Ceph RBD |
