# Persistent Storage

## Introduction <a href="#kubernetespersistentstorage-introduction" id="kubernetespersistentstorage-introduction"></a>

**cegedim.cloud** now provides a multi-tenant Ceph Storage Platform as a CSI provider with the following specifications:

* Data is replicated 4 times and evenly distributed (using the Ceph CRUSH map) across 2 datacenters, ensuring that 2 replicas of the data remain available even in disaster scenarios.
* Each Kubernetes cluster acts as a Ceph client with its own data pool on the Ceph server, and consumes services using its own pool-scoped credentials.
* Only CSI Ceph RBD is provided for the moment.

Further information on Ceph CSI can be found here:

{% embed url="https://docs.ceph.com/" %}

### Versions <a href="#kubernetespersistentstorage-versions" id="kubernetespersistentstorage-versions"></a>

<table><thead><tr><th width="177">Component</th><th>Version</th></tr></thead><tbody><tr><td>Ceph Cluster</td><td>19.2.2</td></tr><tr><td>CSI Ceph</td><td>3.14</td></tr></tbody></table>

{% hint style="info" %}
**cegedim.cloud** performs an annual RFC (Request for Change) to keep both the Ceph server and client (CSI) up to date with the latest stable releases.
{% endhint %}

### Storage Class <a href="#kubernetespersistentstorage-storageclass" id="kubernetespersistentstorage-storageclass"></a>

<table><thead><tr><th width="176">Name</th><th>Description</th></tr></thead><tbody><tr><td>cgdm-rwo</td><td>use <strong>CSI Ceph rbd</strong> to provision <code>ReadWriteOnce</code> persistent volumes</td></tr></tbody></table>
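For example, a PersistentVolumeClaim using the `cgdm-rwo` storage class could look like the following sketch (the claim name and requested size are illustrative):

```yaml
# Illustrative PVC backed by the cgdm-rwo storage class.
# The name and size below are examples, not required values.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce        # the only access mode supported by cgdm-rwo (Ceph RBD)
  storageClassName: cgdm-rwo
  resources:
    requests:
      storage: 10Gi
```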

### High Availability <a href="#kubernetespersistentstorage-highavailability" id="kubernetespersistentstorage-highavailability"></a>

<table><thead><tr><th width="280"></th><th>EB</th><th>ET</th></tr></thead><tbody><tr><td>Replication</td><td>x4</td><td>x4</td></tr><tr><td>Fault Tolerance: 1 AZ is DOWN</td><td><span data-gb-custom-inline data-tag="emoji" data-code="2705">✅</span></td><td><span data-gb-custom-inline data-tag="emoji" data-code="2705">✅</span></td></tr><tr><td>Fault Tolerance: 1 DC is DOWN</td><td><span data-gb-custom-inline data-tag="emoji" data-code="2705">✅</span></td><td><span data-gb-custom-inline data-tag="emoji" data-code="2705">✅</span></td></tr></tbody></table>

### CSI features <a href="#kubernetespersistentstorage-csifeatures" id="kubernetespersistentstorage-csifeatures"></a>

|                                      | CSI ceph-rbd         |
| ------------------------------------ | -------------------- |
| Provisioning new PV                  | :white\_check\_mark: |
| Remount existing PV                  | :white\_check\_mark: |
| Compatible with all K8S applications | :white\_check\_mark: |
| Multi-mount (RWX)                    | :x:                  |
| Resizable                            | :white\_check\_mark: |
| Snapshot                             | :white\_check\_mark: |
| Fault Tolerance: loss of 1 AZ        | :white\_check\_mark: |
| Fault Tolerance: loss of 1 DC        | :white\_check\_mark: |
| Compatible with K8S 1.22 and later   | :white\_check\_mark: |
| Compatible with K8S before 1.22      | :white\_check\_mark: |
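Because the storage class is created with volume expansion allowed (`ALLOWVOLUMEEXPANSION: true` in the `kubectl get sc` output below), an existing PVC can be grown in place by raising its storage request. A minimal sketch, assuming an existing 10Gi claim named `app-data` (both illustrative); note that Kubernetes only supports expansion, not shrinking:

```yaml
# Expand an existing PVC by increasing spec.resources.requests.storage
# and re-applying the manifest. Shrinking is not supported.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data           # existing claim; name is illustrative
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cgdm-rwo
  resources:
    requests:
      storage: 20Gi        # raised from 10Gi; apply with kubectl apply -f
```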

## Enabling Ceph Storage <a href="#kubernetespersistentstorage-enablingcephstorage" id="kubernetespersistentstorage-enablingcephstorage"></a>

{% hint style="warning" %}
CSI Ceph persistent storage is **not enabled by default** on Kubernetes clusters. To enable this feature, please submit an ITCare request ticket specifying the cluster name and your storage requirements.
{% endhint %}

## Usage Recommendations <a href="#kubernetespersistentstorage-usagerecommendations" id="kubernetespersistentstorage-usagerecommendations"></a>

{% hint style="info" %}
**cegedim.cloud** recommends careful consideration when planning to use CSI Ceph for your storage needs:

* **Database Workloads**: For production database requirements, we recommend using **cegedim.cloud**'s official managed database PaaS offerings (PostgreSQL, MariaDB, Redis, etc.) instead of CSI Ceph. These managed services are specifically optimized, monitored, and supported for database workloads.
* **Critical Applications**: For critical application data, thorough testing in pre-production environments is essential before deploying to production with CSI Ceph storage.
* **Best Use Cases**: CSI Ceph is well-suited for:
  * Application state storage
  * Configuration and cache data
  * File storage for non-critical workloads
  * Development and testing environments

Testing your specific workload with CSI Ceph in a non-production environment will help ensure it meets your performance and reliability requirements before production deployment.
{% endhint %}

## Snapshot and Restore PVC in Kubernetes <a href="#kubernetespersistentstorage-snapshotandrestorepvcinkubernetes" id="kubernetespersistentstorage-snapshotandrestorepvcinkubernetes"></a>

**cegedim.cloud** uses the Kubernetes External Snapshotter to snapshot and restore PVCs in your Kubernetes clusters.

All information about this application can be found here:

{% embed url="https://github.com/kubernetes-csi/external-snapshotter" %}

### How to know if I have active snapshotclass on my cluster <a href="#kubernetespersistentstorage-howtoknowifihaveactivesnapshotclassonmycluster" id="kubernetespersistentstorage-howtoknowifihaveactivesnapshotclassonmycluster"></a>

As a best practice, we recommend naming the VolumeSnapshotClass after the StorageClass. Run the commands below to check:

{% code lineNumbers="true" fullWidth="true" %}

```bash
$ kubectl get sc
NAME                 PROVISIONER           RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
cgdm-rwo (default)   rbd.csi.ceph.com      Delete          Immediate           true                   57d

$ kubectl get volumesnapshotclass
NAME       DRIVER                DELETIONPOLICY   AGE
cgdm-rwo   rbd.csi.ceph.com      Delete           36d
```

{% endcode %}
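With a `cgdm-rwo` VolumeSnapshotClass in place as shown above, a PVC can be snapshotted and then restored into a new claim with manifests along these lines (all names and sizes are illustrative):

```yaml
# 1. Take a snapshot of an existing PVC.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snap
spec:
  volumeSnapshotClassName: cgdm-rwo
  source:
    persistentVolumeClaimName: app-data    # existing PVC to snapshot
---
# 2. Restore the snapshot into a new PVC via dataSource.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-restored
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cgdm-rwo
  dataSource:
    name: app-data-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  resources:
    requests:
      storage: 10Gi      # must be at least the size of the source PVC
```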

## How to list available CSI in my cluster <a href="#kubernetespersistentstorage-howtolistavailablecsiinmycluster" id="kubernetespersistentstorage-howtolistavailablecsiinmycluster"></a>

To list all CSI drivers available in a Kubernetes cluster, perform the following:

{% code lineNumbers="true" fullWidth="true" %}

```bash
$ kubectl get sc
NAME                  PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
cgdm-rwo (default)    rbd.csi.ceph.com         Delete          Immediate              true                   42d
```

{% endcode %}

Here is a mapping between Storage Class and CSI:

<table><thead><tr><th width="234">Storage Classes</th><th>CSI</th></tr></thead><tbody><tr><td>cgdm-rwo</td><td>Ceph RBD</td></tr></tbody></table>
