K8s - Features

cegedim.cloud provides two possible topologies for a K8s cluster:

  • Standard: workloads are deployed in a single data center, but are protected against a data center disaster by using a secondary data center as failover.

  • High Availability: workloads are deployed across two data centers. If multiple replicas are used and the workload is well distributed, no service interruption occurs when a data center goes down.

Topologies

Compute topology

cegedim.cloud provides a compute topology based on:

  • Region: a pair of data centers

  • Area: infrastructure network isolation between tenants

  • Availability Zones: inside an area, isolated infrastructure for Compute and Storage


Kubernetes clusters topologies

Kubernetes clusters can be deployed using two topologies:

| Topology | Datacenters of Masters | Datacenters of Workers | Worker Availability Zones | Cluster Availability Zones | Disaster Recovery Protection | Recovery Time Objective (RTO) |
|---|---|---|---|---|---|---|
| Standard | 1 | 1 | 2 | 2 | Yes | 4h |
| High Availability | 3 | 2 | 3 | 4 | Yes | 0 - 15 min |

Based on your requirements in terms of RTO and costs, you can choose the best topology for your needs.

Availability of topologies

| Topology | EB-EMEA | EB-HDS | ET-EMEA | ET-HDS |
|---|---|---|---|---|
| Standard | | | | |
| High Availability | | | | |

For more details about the High Availability topology, please refer to the page How to configure deployments to leverage HA capabilities.

Topology Keys

cegedim.cloud uses standard topology keys:

| Key | Component |
|---|---|
| topology.kubernetes.io/region | Region |
| topology.kubernetes.io/zone | Availability Zone |
| kubernetes.io/hostname | FQDN of node |
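
These topology keys can be used in scheduling constraints to spread replicas across Availability Zones, which is how workloads take advantage of the High Availability topology. Below is a minimal, hypothetical sketch (the application name, image and replica count are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Spread replicas evenly across the cluster's Availability Zones
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: my-app
      containers:
        - name: my-app
          image: nginx:1.27     # placeholder image
```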

Components and Versions

cegedim.cloud uses RKE2 (Rancher Kubernetes Engine 2) as the Kubernetes distribution. RKE2 is a fully conformant Kubernetes distribution that focuses on security and compliance within the U.S. Federal Government sector.

Here is the list of the components and tools that are deployed in a standard delivered cluster:

| Features | Versions |
|---|---|
| Kubernetes Distribution | RKE2 |
| Rancher | 2.11 |
| Kubernetes | 1.32 |
| Ingress controllers | ingress-nginx 1.12.1, traefik 3.3.4, istio 1.24.1 |
| Prometheus | 2.53.1 |
| Grafana | 11.1.0 |
| Helm | 3.17.0 |
| CSI Ceph | 3.14.0 |
| Node OS | Ubuntu 24.04 |

Network Architecture

The following network architecture is described using the Nginx ingress controller. The configuration is slightly different with Traefik or Istio, but the overall architecture and concepts remain the same.

Here is a figure with all network components explained:

Outbound flow

  • Two pods in namespaces that belong to the same Rancher Project can communicate freely with each other.

  • Two pods in namespaces that belong to two different Rancher Projects cannot communicate unless the user defines a dedicated Network Policy for this need (a sketch follows this list).

  • Pods from the Rancher Project named System can communicate with pods from other Rancher Projects.

  • Pods can only send requests to servers in the same VLAN, unless a specific network opening rule is configured between the two VLANs.

  • Pods cannot send requests to the Internet unless a proxy is set up inside the pod or a specific network opening rule is configured for the related VLAN.
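
As an illustration of the cross-project case above, the sketch below allows pods from one namespace to reach pods in a namespace that belongs to a different Rancher Project. The namespace names and labels are hypothetical:

```yaml
# Hypothetical policy: allow pods in namespace "frontend" (another Rancher Project)
# to reach pods labeled app=backend in namespace "backend".
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: frontend
```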

Inbound flow

  • Requests to the kube-apiserver can be reverse-proxied through the Rancher URL.

  • Workloads hosted by pods are not directly accessible from outside the K8s cluster; they are exposed via the ingress layer for HTTP, or via a NodePort service behind the corresponding Load Balancer for TCP (a sketch follows below).
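
For the TCP case, a minimal sketch of a NodePort service is shown below; the service name, namespace and ports are placeholders, and the Load Balancer in front of it is configured separately:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service          # hypothetical name
  namespace: my-namespace       # hypothetical namespace
spec:
  type: NodePort
  selector:
    app: my-tcp-app             # pods selected by this service
  ports:
    - protocol: TCP
      port: 5432                # port exposed inside the cluster
      targetPort: 5432          # container port
      nodePort: 30432           # port opened on every node (30000-32767), targeted by the Load Balancer
```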

Ingress Controller: nginx

nginx is the ingress controller deployed to expose your workloads. You can find the relevant documentation on the official GitHub repository.

Two ingress controllers are deployed:

  • One exposing workloads to the internal Cegedim network

    • Workload name: nginx-int-ingress-nginx-controller

    • Listens on every ingress node on port 80

    • Ingress class: "nginx" (default, so no ingress class needs to be specified)

  • One exposing workloads to the internet

    • Workload name: nginx-ext-ingress-nginx-controller

    • Listens on every ingress node on port 8081

    • Ingress class: "nginx-ext"

      • using the annotation: kubernetes.io/ingress.class: "nginx-ext" (an example Ingress follows this list)
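
As an example, an Ingress resource targeting the internet-facing controller only has to reference the nginx-ext class through the annotation above. This is a minimal sketch; the application name, hostname and backend service are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-public-app                        # placeholder name
  annotations:
    kubernetes.io/ingress.class: "nginx-ext" # route through the internet-facing controller
spec:
  rules:
    - host: my-public-app.example.com        # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-public-app          # placeholder backend service
                port:
                  number: 80
```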

Load Balancing, DNS and Certificates

A K8s cluster comes with:

  • An Elastic Secured Endpoint, managed by F5 appliances, exposing the K8s workload to the cegedim internal network (once you're connected to Cegedim LAN, either physically or through VPN)

  • A *.<yourclustername>.ccs.cegedim.cloud DNS resolution to this endpoint

  • A *.<yourclustername>.ccs.cegedim.cloud SSL certificate configured
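
In practice, any hostname under the cluster domain resolves to this endpoint and is covered by the wildcard certificate, so an internal Ingress only needs to declare such a host. A minimal sketch (the application name and backend service are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                  # placeholder name; default "nginx" class, exposed on the internal network
spec:
  rules:
    - host: my-app.<yourclustername>.ccs.cegedim.cloud   # matched by the wildcard DNS record and certificate
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app    # placeholder backend service
                port:
                  number: 80
```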

Requesting specific configuration

You can use ITCare if you need a specific configuration:

  • Exposing your workloads to the Internet or private link

  • Using a specific FQDN to deploy your workload

  • Using a specific certificate to deploy your workload

  • Using Traefik as Ingress Provider instead of nginx

  • Adding other Ingress Providers

  • Accessing resources outside of cluster

Cluster Customization Options

When creating a new Kubernetes cluster, cegedim.cloud provides you with the flexibility to customize key networking components according to your specific requirements and workload characteristics.

CNI Provider Selection

You can select from the following Container Network Interface (CNI) providers when provisioning your cluster:

| CNI Provider | Description | Maximum Nodes |
|---|---|---|
| Canal | Combination of Calico and Flannel, providing policy enforcement and simplified networking | 200 nodes |
| Calico | Advanced networking and network policy solution with high scalability | 2,000 nodes |
| Cilium | eBPF-based networking, observability, and security with high performance | 2,000 nodes |

The CNI provider selection should be based on your cluster size requirements and specific networking needs. For clusters requiring more than 200 nodes, Calico or Cilium is recommended.

When to Choose Each CNI Provider

Canal (Calico + Flannel)

  • Standard deployments requiring up to 200 nodes

  • Proven stability with simplified networking

  • FIPS 140-2 compliance requirements

  • Default choice for most use cases

Calico

  • Large-scale deployments (200-2,000 nodes)

  • Advanced network policy requirements

  • High scalability requirements

Cilium

  • High-performance workloads requiring maximum throughput

  • Advanced observability and monitoring needs

  • eBPF-based networking and security features

  • Large-scale deployments (200-2,000 nodes) with enhanced performance

  • Clusters where eliminating kube-proxy overhead is beneficial

Ingress Provider Selection

You can choose your preferred Ingress controller to manage external access to services within your cluster:

| Ingress Provider | Description |
|---|---|
| Nginx | Industry-standard ingress controller with robust features and wide community support (default option) |
| Traefik | Modern cloud-native ingress controller with automatic service discovery |
| Istio | Service mesh providing advanced traffic management, security, and observability capabilities |

To request a specific CNI or Ingress provider during cluster creation, please specify your requirements through ITCare when ordering your Kubernetes cluster.

Cluster Hardening

For more information regarding the hardening of Kubernetes, please refer to the Hardening page.

Persistent Storage

For more information regarding the persistent storage solutions available for cegedim.cloud's Kubernetes clusters, please refer to the Persistent Storage page.
