Apache Kafka
Apache Kafka is an event streaming platform.
Apache Kafka combines three key capabilities, enabling you to implement end-to-end event streaming use cases with a single, proven solution:
To publish (write) and subscribe to (read) streams of events, including continuous import/export of your data from other systems (see the client sketch after this list).
To store event streams permanently and reliably, for as long as you like.
To process event streams as they occur or retrospectively.
And all this functionality is delivered in a distributed, highly scalable, elastic, fault-tolerant and secure way.
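As an illustration of the first two capabilities, the minimal Java sketch below publishes one event and then reads it back. The broker address, topic name and consumer group are placeholders rather than values from a delivered cluster, and on a delivered cluster the SASL_SSL settings described later on this page would also be required.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaQuickstart {
    public static void main(String[] args) {
        // Placeholder values: replace with the bootstrap servers and topic of your cluster.
        String bootstrap = "broker-1.example.internal:9092";
        String topic = "orders";

        // Publish (write) a single event.
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", bootstrap);
        producerProps.put("key.serializer", StringSerializer.class.getName());
        producerProps.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>(topic, "order-42", "created"));
        }

        // Subscribe to (read) the same stream of events.
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", bootstrap);
        consumerProps.put("group.id", "quickstart-group"); // hypothetical consumer group
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("key.deserializer", StringDeserializer.class.getName());
        consumerProps.put("value.deserializer", StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(Collections.singletonList(topic));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("key=%s value=%s%n", record.key(), record.value());
            }
        }
    }
}
```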
Apache Kafka is deployed on-premises in cegedim.cloud data centers.
cegedim.cloud guarantees the same level of service as the Compute offer: instance deployment, operational maintenance, flexibility, security and monitoring are all provided by our experts.
The deployment of a minimum 3-broker cluster in version 3.6.0 is available as self-service in ITCare. This topology is production-ready with:
a minimum of 3 brokers distributed over several availability zones
3 dedicated controllers distributed over several availability zones
Delivered clusters are secured for both inter-broker and client-to-broker communications with SASL_SSL (see the configuration sketch after this list):
SSL for the transport layer
SASL SCRAM-SHA-256 for authentication and authorization
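As a sketch of what a client configuration might look like under these settings, the Java snippet below assembles SASL_SSL / SCRAM-SHA-256 client properties. The bootstrap address, username, password and truststore path are placeholders; the actual values are provided when the cluster is delivered.

```java
import java.util.Properties;

public class SecureClientConfig {
    // Builds client properties for a SASL_SSL + SCRAM-SHA-256 connection.
    // All values below are placeholders; actual endpoints, credentials and
    // truststore details come with the delivered cluster.
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-1.example.internal:9093");
        props.put("security.protocol", "SASL_SSL");    // SSL for the transport layer
        props.put("sasl.mechanism", "SCRAM-SHA-256");  // SASL SCRAM for authentication
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                + "username=\"app-user\" password=\"change-me\";");
        // Only needed if the broker certificates are not signed by a CA
        // already trusted by the client JVM.
        props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
        props.put("ssl.truststore.password", "change-me");
        return props;
    }
}
```

These properties would be merged into the producer and consumer configurations shown earlier on this page.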
Sizing can be configured to suit your needs.
Brokers: 3+
Controllers: 3
CPU (per broker): 2 - 16 vCPU
RAM (per broker): 4 - 384 GB
Supported version(s): 3.6.0
Monitoring: ✅ Option
24x7 Monitoring: ✅ Option
Backup: ✅ Option
Data replication (DRP): ✅ Option
Availability: 99.8%
Multi-AZ deployment: ✅
Self-service: ✅
For more information, please visit Apache Kafka - Features.
Billing is processed monthly and is based on the number of nodes, plus supplementary costs for storage, backup, and 24x7 monitoring.
At least 6 Linux virtual machines will be billed: 3 Apache Kafka broker nodes and 3 Apache Kafka controller nodes.
Costs for a Kafka cluster are available via your Service Delivery Manager.