K8s - Features
There are 3 possible topologies for K8s clusters provided by cegedim.cloud:

- Standalone: workloads are deployed in a single data center, with no disaster recovery plan.
- Standard: workloads are still deployed in a single data center, but are protected against a data center disaster by using a secondary data center as failover.
- High Availability: workloads are deployed across two data centers; with a well-distributed multi-replica deployment, services can run without interruption even during a data center disaster.
cegedim.cloud provides a compute topology based on:

- Region: a pair of data centers
- Area: infrastructure network isolation between tenants
- Availability Zone: isolated infrastructure for Compute and Storage inside an Area
Kubernetes clusters can be deployed using 2 topologies:

| Topology | Master node(s) | Data center(s) | Minimum worker nodes | Availability Zone(s) | RTO |
| --- | --- | --- | --- | --- | --- |
| Standard | 1 | 1 | 2 | 2 | 4h |
| High Availability | 3 | 2 | 3 | 4 | 0 - 5 min |
Based on your requirements in terms of RTO and costs, you can choose the best topology for your needs.
cegedim.cloud uses the standard topology keys:

| Topology key | Meaning |
| --- | --- |
| topology.kubernetes.io/region | Region |
| topology.kubernetes.io/zone | Availability Zone |
| kubernetes.io/hostname | FQDN of the node |
Since Kubernetes 1.20, failure-domain.beta.kubernetes.io/zone is deprecated, but it remains available where it pre-exists. Only topology.kubernetes.io/zone will be officially maintained.
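These labels can be used in scheduling rules. As an illustration, here is a minimal sketch of a Deployment that spreads its replicas across Availability Zones with topology spread constraints (the my-app name and the image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                    # hypothetical name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      topologySpreadConstraints:
        # Spread the replicas evenly across Availability Zones
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: my-app
      containers:
        - name: my-app
          image: nginx:1.25       # placeholder image
```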
Here is the list of components and tools deployed in a standard delivered cluster:

| Component | Version |
| --- | --- |
| Rancher | 2.8.3 |
| Kubernetes | 1.28 |
| Ingress controllers | ingress-nginx 1.10.0, traefik 2.11.2, istio 1.20.3 |
| Prometheus | 2.42.0 |
| Grafana | 9.1.5 |
| Helm | 3.13.3 |
| CSI Ceph | 3.11.0 |
| Node OS | Ubuntu 22.04 |
| CNI - canal (Calico + Flannel) | 3.26.3 |
| Docker | 24.0.9 |
Here is a figure explaining all the network components:
- Two pods in namespaces that belong to the same Rancher Project can fully communicate with each other.
- Two pods in namespaces that belong to two different Rancher Projects cannot communicate, unless a user defines a Network Policy dedicated to this need.
- Pods from the Rancher Project named System can communicate with pods from all other Rancher Projects.
- Pods can only send requests to servers in the same VLAN, unless a specific network opening rule is configured between the two VLANs.
- Pods cannot send requests to the Internet, unless a proxy is set up inside the pod or a specific network opening rule is configured for the related VLAN.
- Requests to the kube-apiserver can be reverse-proxied through the Rancher URL.
- Workloads hosted by pods are not directly accessible from outside the K8s cluster; they are reached via the ingress layer for the HTTP protocol, or via a NodePort service and its associated load balancer for the TCP protocol.
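As an illustration, here is a minimal sketch of a Network Policy opening ingress traffic between namespaces belonging to two different Rancher Projects (the team-a and team-b namespace names are hypothetical; adapt the selectors to your own namespaces and labels):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-team-b           # hypothetical name
  namespace: team-a                 # hypothetical target namespace
spec:
  podSelector: {}                   # applies to all pods in team-a
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              # hypothetical source namespace, matched by its automatic name label
              kubernetes.io/metadata.name: team-b
```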
nginx is the ingress controller deployed to expose your workloads. You can find the relevant documentation on the official GitHub repository.
Two ingress controllers are deployed:

- One exposing to the internal Cegedim network:
  - nginx ingress controller
  - listening on every worker node on port 80
  - this is the default ingress class (no ingress class needs to be specified)
- One exposing to the Internet:
  - nginx external ingress controller, deployed on request
  - listening on every worker node on port 8081
  - the ingress class is: nginx-ext
  - used via the annotation: kubernetes.io/ingress.class: "nginx-ext"
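As an illustration, here is a minimal sketch of an Ingress routed through the Internet-facing controller (the host, namespace and service names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ext                          # hypothetical name
  namespace: my-namespace                   # hypothetical namespace
  annotations:
    # Route through the external nginx controller
    kubernetes.io/ingress.class: "nginx-ext"
spec:
  rules:
    - host: my-app.example.com              # hypothetical public FQDN
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app                # hypothetical service
                port:
                  number: 80
```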
A K8s cluster comes with:

- An Elastic Secured Endpoint, managed by F5 appliances, exposing the K8s workloads to the Cegedim internal network (once you are connected to the Cegedim LAN, either physically or through VPN)
- A *.<yourclustername>.ccs.cegedim.cloud DNS record resolving to this endpoint
- A *.<yourclustername>.ccs.cegedim.cloud SSL certificate configured
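Thanks to the wildcard DNS record and certificate, exposing a workload on the internal network only requires an Ingress on the default controller. A minimal sketch (the my-app names are hypothetical; replace <yourclustername> with your actual cluster name):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                    # hypothetical name
  namespace: my-namespace         # hypothetical namespace
spec:
  # No ingress class specified: the internal nginx controller is the default
  rules:
    - host: my-app.<yourclustername>.ccs.cegedim.cloud   # covered by the wildcard DNS and certificate
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app      # hypothetical service
                port:
                  number: 80
```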
You can use ITCare if you need a specific configuration:

- Exposing your workloads to the Internet or a private link
- Using a specific FQDN to deploy your workload
- Using a specific certificate to deploy your workload
- Using Traefik as ingress provider instead of nginx
- Adding other ingress providers
- Accessing resources outside of the cluster
For more information regarding the hardening of Kubernetes, please follow this page: Hardening.
For more information regarding the persistent storage solutions available for cegedim.cloud's Kubernetes clusters, please follow this page: Persistent Storage.
For more details about the High Availability topology, please follow this page.