# K8s - Features

**cegedim.cloud** provides two possible K8s cluster topologies:

* **Standard**: workloads are deployed in a single data center, but are protected against a data center disaster by using a secondary data center as failover.
* **High Availability**: workloads are deployed across two data centers. If workloads run with multiple replicas and are well distributed, no service interruption occurs when a data center goes down.

## Topologies <a href="#kubernetesarchitecture-topologies" id="kubernetesarchitecture-topologies"></a>

### Compute topology <a href="#kubernetesarchitecture-computetopology" id="kubernetesarchitecture-computetopology"></a>

**cegedim.cloud** provides a compute topology based on:

* Region: a pair of data centers
* Area: infrastructure network isolation between tenants
* Availability Zone: isolated Compute and Storage infrastructure inside an area

<figure><picture><source srcset="https://835168969-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F2FXoHyOBZPpJv3UALn4V%2Fuploads%2Fgit-blob-2eb9a68ef9cce9ebf12c67870bd0da9e11469c89%2Fengdark%20(4).png?alt=media" media="(prefers-color-scheme: dark)"><img src="https://835168969-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F2FXoHyOBZPpJv3UALn4V%2Fuploads%2Fgit-blob-18248a553fb9241c7147dbe32cb7d1e359876ad3%2Flighteng%20(6).png?alt=media" alt="" width="563"></picture><figcaption><p>Compute topology</p></figcaption></figure>

### Kubernetes clusters topologies <a href="#kubernetesarchitecture-kubernetesclusterstopologies" id="kubernetesarchitecture-kubernetesclusterstopologies"></a>

Kubernetes clusters can be deployed using two topologies:

<table data-full-width="true"><thead><tr><th>Topology</th><th>Datacenters of Masters</th><th>Datacenters of Workers</th><th>Worker Availability Zones</th><th>Cluster Availability Zones</th><th data-type="checkbox">Disaster Recovery Protection</th><th>Recovery Time Objective (RTO)</th></tr></thead><tbody><tr><td>Standard</td><td>1</td><td>1</td><td>2</td><td>2</td><td>true</td><td>4h</td></tr><tr><td>High Availability</td><td>3</td><td>2</td><td>3</td><td>4</td><td>true</td><td>0 - 15 min</td></tr></tbody></table>

Choose the topology that best matches your RTO requirements and cost constraints.

### Availability of topologies

<table data-full-width="true"><thead><tr><th>Topology</th><th data-type="checkbox">EB-EMEA</th><th data-type="checkbox">EB-HDS</th><th data-type="checkbox">ET-EMEA</th><th data-type="checkbox">ET-HDS</th></tr></thead><tbody><tr><td>Standard</td><td>true</td><td>true</td><td>true</td><td>true</td></tr><tr><td>High Availability</td><td>true</td><td>false</td><td>true</td><td>false</td></tr></tbody></table>

For more details about the High Availability topology, see [#title-text](https://academy.cegedim.cloud/compute/k8s-get-started/high-availability#title-text "mention").

### Topology Keys <a href="#kubernetesarchitecture-topologykeys" id="kubernetesarchitecture-topologykeys"></a>

**cegedim.cloud** uses standard topology keys:

<table><thead><tr><th width="454">Key</th><th>Component</th></tr></thead><tbody><tr><td>topology.kubernetes.io/region</td><td>Region</td></tr><tr><td>topology.kubernetes.io/zone</td><td>Availability Zone</td></tr><tr><td>kubernetes.io/hostname</td><td>FQDN of node</td></tr></tbody></table>
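
These keys can be referenced in standard Kubernetes scheduling constraints. As a minimal sketch (the workload name, labels, and image are hypothetical), a Deployment could spread its replicas across Availability Zones like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical workload name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          # spread replicas across Availability Zones using the standard key
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: my-app
      containers:
        - name: my-app
          image: registry.example/my-app:1.0   # hypothetical image
```

With `maxSkew: 1`, the scheduler keeps the replica count per zone within one of each other, which is what allows a multi-replica workload to survive the loss of a zone or data center.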

## Components and Versions <a href="#kubernetesarchitecture-componentsandversions" id="kubernetesarchitecture-componentsandversions"></a>

**cegedim.cloud** uses **RKE2** (Rancher Kubernetes Engine 2) as the Kubernetes distribution. RKE2 is a fully conformant Kubernetes distribution that focuses on security and compliance within the U.S. Federal Government sector.

Here is the list of the components and tools that are deployed in a standard delivered cluster:

| Features                | Versions                                          |
| ----------------------- | ------------------------------------------------- |
| Kubernetes Distribution | RKE2                                              |
| Rancher                 | 2.12                                              |
| Kubernetes              | 1.33                                              |
| Ingress controllers     | ingress-nginx 1.12.1, traefik 3.3.4, istio 1.24.1 |
| Prometheus              | 2.53.1                                            |
| Grafana                 | 11.1.0                                            |
| Helm                    | 3.17.0                                            |
| CSI Ceph                | 3.14.0                                            |
| Node OS                 | Ubuntu 24.04                                      |

## Network Architecture <a href="#kubernetesarchitecture-networkarchitecture" id="kubernetesarchitecture-networkarchitecture"></a>

{% hint style="info" %}
The following network architecture is described using **Nginx ingress controller**. The configuration is slightly different with Traefik or Istio, but the overall architecture and concepts remain the same.
{% endhint %}

The figures below illustrate the network components:

### Outbound flow <a href="#kubernetesarchitecture-outboundflow" id="kubernetesarchitecture-outboundflow"></a>

<figure><img src="https://835168969-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F2FXoHyOBZPpJv3UALn4V%2Fuploads%2Fgit-blob-9357d430e42b6f04a40c5d0a6dfac66e15aad4b1%2Fk8s-network-outbound.png?alt=media" alt=""><figcaption></figcaption></figure>

* Two pods in namespaces belonging to the same Rancher Project can communicate freely.
* Two pods in namespaces belonging to two different Rancher Projects cannot communicate unless the user defines a dedicated Network Policy.
* Pods from the Rancher Project named System can communicate with pods from all other Rancher Projects.
* Pods can only send requests to servers in the same VLAN, unless a specific network opening rule is configured between the two VLANs.
* Pods cannot send requests to the Internet unless a proxy is set up inside the pod or a specific network opening rule is configured for the related VLAN.
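
To open traffic between two Rancher Projects, a user-defined Network Policy can be applied in the receiving namespaces. A minimal sketch, assuming the namespace, policy name, and project ID value are hypothetical (Rancher labels namespaces with `field.cattle.io/projectId`):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-other-project   # hypothetical policy name
  namespace: team-a                # namespace in the receiving Rancher Project
spec:
  podSelector: {}                  # apply to all pods in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              # project label carried by namespaces of the calling Project
              # (hypothetical value; check the actual label on your namespaces)
              field.cattle.io/projectId: p-abc12
```

Such a policy only widens ingress for the namespaces it is applied to; traffic between other Projects remains isolated.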

### Inbound flow <a href="#kubernetesarchitecture-inboundflow" id="kubernetesarchitecture-inboundflow"></a>

<figure><img src="https://835168969-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F2FXoHyOBZPpJv3UALn4V%2Fuploads%2Fgit-blob-aed2ce31df1c1c197f4e1a8c4026f9747ef9263d%2Fk8s-network-inbound.png?alt=media" alt=""><figcaption></figcaption></figure>

* Requests to the kube api-server can be reverse-proxied through the Rancher URL.
* Workloads hosted by pods are not directly accessible from outside the K8s cluster: they are exposed through the ingress layer for the HTTP protocol, or through a NodePort service with a dedicated Load Balancer for the TCP protocol.
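
For a TCP workload, the exposure therefore goes through a NodePort service that the external Load Balancer targets. A minimal sketch (service name and ports are hypothetical, using a PostgreSQL-style port as illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-tcp-app            # hypothetical service name
spec:
  type: NodePort
  selector:
    app: my-tcp-app           # pods carrying this label receive the traffic
  ports:
    - protocol: TCP
      port: 5432              # in-cluster service port (hypothetical)
      targetPort: 5432        # container port (hypothetical)
      nodePort: 30432         # node port the external Load Balancer targets (30000-32767 range)
```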

### Ingress Controller : nginx <a href="#kubernetesarchitecture-ingresscontroller-nginx" id="kubernetesarchitecture-ingresscontroller-nginx"></a>

nginx is the ingress controller deployed to expose your workloads. You can find the relevant documentation on the official GitHub repository.

{% embed url="https://github.com/nginxinc/kubernetes-ingress" %}

Two ingress controllers are deployed:

* One exposed to the internal Cegedim network:
  * Workload name: nginx-int-ingress-nginx-controller
  * Listens on every ingress node on port 80
  * Ingress class: "nginx" (the default, so no ingress class needs to be specified)
* One exposed to the Internet:
  * Workload name: nginx-ext-ingress-nginx-controller
  * Listens on every ingress node on port 8081
  * Ingress class: "nginx-ext"
    * specified using the annotation kubernetes.io/ingress.class: "nginx-ext"
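
As a sketch of exposing a workload through the Internet-facing controller (resource names, namespace, and host are hypothetical), an Ingress selecting the nginx-ext class could look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ext                     # hypothetical Ingress name
  namespace: my-namespace              # hypothetical namespace
spec:
  ingressClassName: nginx-ext          # modern equivalent of the kubernetes.io/ingress.class annotation
  rules:
    - host: my-app.mycluster.ccs.cegedim.cloud   # hypothetical host under the cluster wildcard DNS
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app           # hypothetical Service backing the workload
                port:
                  number: 80
```

Omitting `ingressClassName` (and the annotation) routes the Ingress to the default internal "nginx" class instead.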

### Load Balancing, DNS and Certificates <a href="#kubernetesarchitecture-loadbalancing-dnsandcertificates" id="kubernetesarchitecture-loadbalancing-dnsandcertificates"></a>

A K8s cluster comes with:

* An Elastic Secured Endpoint, managed by F5 appliances, exposing the K8s workloads to the Cegedim internal network (reachable once you are connected to the Cegedim LAN, either physically or through VPN)
* A \*.\<yourclustername>.ccs.cegedim.cloud DNS record resolving to this endpoint
* A \*.\<yourclustername>.ccs.cegedim.cloud SSL certificate configured on it

### Requesting specific configuration <a href="#kubernetesarchitecture-requestingspecificconfiguration" id="kubernetesarchitecture-requestingspecificconfiguration"></a>

You can use ITCare to request a specific configuration:

* Exposing your workloads to the Internet or private link
* Using a specific FQDN to deploy your workload
* Using a specific certificate to deploy your workload
* Using Traefik as Ingress Provider instead of nginx
* Adding other Ingress Providers
* Accessing resources outside of the cluster

## Cluster Customization Options <a href="#kubernetesarchitecture-clustercustomizationoptions" id="kubernetesarchitecture-clustercustomizationoptions"></a>

When creating a new Kubernetes cluster, **cegedim.cloud** provides you with the flexibility to customize key networking components according to your specific requirements and workload characteristics.

### CNI Provider Selection <a href="#kubernetesarchitecture-cniproviderselection" id="kubernetesarchitecture-cniproviderselection"></a>

You can select from the following Container Network Interface (CNI) providers when provisioning your cluster:

| CNI Provider | Description                                                                               | Maximum Nodes |
| ------------ | ----------------------------------------------------------------------------------------- | ------------- |
| **Canal**    | Combination of Calico and Flannel, providing policy enforcement and simplified networking | 200 nodes     |
| **Calico**   | Advanced networking and network policy solution with high scalability                     | 2,000 nodes   |
| **Cilium**   | eBPF-based networking, observability, and security with high performance                  | 2,000 nodes   |

{% hint style="info" %}
The CNI provider selection should be based on your cluster size requirements and specific networking needs. For clusters requiring more than 200 nodes, Calico or Cilium is recommended.
{% endhint %}

{% hint style="warning" %}
When using **Cilium** as the CNI provider, **kube-proxy** is not deployed. Cilium replaces kube-proxy functionality with its eBPF-based implementation, providing enhanced performance and efficiency.
{% endhint %}

#### When to Choose Each CNI Provider

**Canal (Calico + Flannel)**

* Standard deployments requiring up to 200 nodes
* Proven stability with simplified networking
* FIPS 140-2 compliance requirements
* Default choice for most use cases

**Calico**

* Large-scale deployments (200-2,000 nodes)
* Advanced network policy requirements
* High scalability requirements

**Cilium**

* High-performance workloads requiring maximum throughput
* Advanced observability and monitoring needs
* eBPF-based networking and security features
* Large-scale deployments (200-2,000 nodes) with enhanced performance
* Clusters where eliminating kube-proxy overhead is beneficial

### Ingress Provider Selection <a href="#kubernetesarchitecture-ingressproviderselection" id="kubernetesarchitecture-ingressproviderselection"></a>

You can choose your preferred Ingress controller to manage external access to services within your cluster:

| Ingress Provider | Description                                                                                           |
| ---------------- | ----------------------------------------------------------------------------------------------------- |
| **Nginx**        | Industry-standard ingress controller with robust features and wide community support (default option) |
| **Traefik**      | Modern cloud-native ingress controller with automatic service discovery                               |
| **Istio**        | Service mesh providing advanced traffic management, security, and observability capabilities          |

{% hint style="info" %}
To request a specific CNI or Ingress provider during cluster creation, please specify your requirements through ITCare when ordering your Kubernetes cluster.
{% endhint %}

## Cluster Hardening <a href="#kubernetesarchitecture-clusterhardening" id="kubernetesarchitecture-clusterhardening"></a>

For more information regarding the hardening of Kubernetes, please follow this page [hardening](https://academy.cegedim.cloud/compute/containers-k8s/k8s-features/hardening "mention").

## Persistent Storage <a href="#kubernetesarchitecture-persistantstorage" id="kubernetesarchitecture-persistantstorage"></a>

For more information regarding the persistent storage solutions available for **cegedim.cloud**'s Kubernetes clusters, please follow this page [persistent-storage](https://academy.cegedim.cloud/compute/containers-k8s/k8s-features/persistent-storage "mention").
