To get started, go to ITCare and search for the Global Service that will host your new Apache Kafka cluster.
Use the top search bar to find your Global Service and click on it to display its information page.
Once in your Global Service, click on the Create Resource button, then select Apache Kafka and the required version.
Fill in the form:
Name of the future cluster
Storage required on each broker
Management options (backup, monitoring, 24/7, remote site replication)
Click Next once all fields have been completed.
In the next step, enter the password for the super user account that will be provided to you, then click Next.
Passwords are not saved by cegedim.cloud. Be sure to save your password!
Review the summary before submitting the form.
Once the deployment is ready, you'll be notified by e-mail.
Start a cluster
At the top of the cluster page, click on the Manage button, then on Start and confirm.
An e-mail notification will be sent when the service is activated.
Stop a cluster
At the top of the cluster page, click on the Manage button, then on Stop.
Enter an RFC number for tracking (optional), then click on Submit.
Shutting down a cluster will stop all virtual machines attached to the cluster, and monitoring will be disabled.
An e-mail notification will be sent when the cluster is shut down.
Resize nodes
At the top of the cluster page, click on the Manage button, then on Resize nodes.
Select the nodes you wish to resize and choose the new size (CPU/RAM).
An e-mail notification will be sent when all nodes have been resized.
Delete a cluster
At the top of the cluster page, click on the Manage button, then on Delete. This will stop and delete all virtual machines.
Please note that this action is not recoverable!
Enter an RFC number for tracking (optional), then click Submit.
An e-mail notification will be sent when the cluster is deleted.
How to manage Apache Kafka?
To interact with your secure cluster using Kafka scripts, you first need to download the Apache Kafka archive from the official website.
Ideally, you should download the exact version corresponding to your cluster.
Once downloaded and extracted on your Linux server, you will find the Kafka shell scripts under the bin directory.
These scripts let you manage topics, users, ACLs, consumer groups and configurations.
This guide will not go into the details of every script, but it will help you get started with simple commands.
Authentication
To connect to a secured Kafka cluster, you need to configure a keystore and a property file.
Create keystore
Create the keystore with the provided certificate:
keytool -keystore kafka.client.truststore.jks -alias ca-cert-cluster1 -import -file ca-cert -storepass <redacted> -keypass <redacted> -noprompt
-alias: alias of the certificate inside the keystore
-file: name of the file containing the provided certificate
-storepass and -keypass: passwords protecting your keystore; they should be identical
To list the content of your keystore, use this command:
keytool -list -v -keystore kafka.client.truststore.jks
Property file
With the keystore created, you now need a property file:
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="password";
ssl.truststore.location=/path/to/kafka.client.truststore.jks
ssl.truststore.password=keystore-password
username: the Kafka super user provided to you by e-mail
password: the password for that user, which you provided at provisioning
ssl.truststore.location: the path to the keystore created previously
ssl.truststore.password: the password to unlock your keystore (the storepass/keypass used above)
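The property file above can be written in one step; the username, password and truststore values are placeholders to replace with your own:

```shell
# Placeholders: replace user, password and the truststore path/password with your values
cat > client.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="password";
ssl.truststore.location=/path/to/kafka.client.truststore.jks
ssl.truststore.password=keystore-password
EOF
grep -c '=' client.properties   # each of the 5 lines defines one property
```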
Command line
With these elements, you can now use any Kafka shell script by adding the following parameter:
--command-config client.properties
Manage topics
Create a topic
# Use an env variable for short commands; the bootstrap-server list is available in ITCare
export BROKERS=broker1.hosting.cegedim.cloud:9094,broker2.hosting.cegedim.cloud:9094,broker3.hosting.cegedim.cloud:9094
kafka-topics.sh --bootstrap-server $BROKERS --create --replication-factor 3 --partitions 3 --topic my-topic --command-config client.properties
Created topic my-topic.
List topics
kafka-topics.sh --bootstrap-server $BROKERS --list --command-config client.properties
my-topic
Describe a topic
kafka-topics.sh --bootstrap-server $BROKERS --describe --topic my-topic --command-config client.properties
Topic: my-topic TopicId: 84yqCErzTG27J4wv44dkPQ PartitionCount: 4 ReplicationFactor: 3 Configs: cleanup.policy=delete
Topic: my-topic Partition: 0 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
Topic: my-topic Partition: 1 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2
Topic: my-topic Partition: 2 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
Topic: my-topic Partition: 3 Leader: 2 Replicas: 2,1,3 Isr: 2,1,3
Delete a topic
kafka-topics.sh --bootstrap-server $BROKERS --delete --topic my-topic --command-config client.properties
Add partitions to topic
kafka-topics.sh --bootstrap-server $BROKERS --alter --topic my-topic --partitions 16 --command-config client.properties
List under-replicated partitions for all topics
kafka-topics.sh --bootstrap-server $BROKERS --describe --under-replicated-partitions --command-config client.properties
List ACLs for a topic
kafka-acls.sh --bootstrap-server $BROKERS --topic my-topic --list --command-config client.properties
Manage users
Create a Kafka user
kafka-configs.sh --bootstrap-server $BROKERS --alter --add-config 'SCRAM-SHA-256=[password=secret]' --entity-type users --entity-name username --command-config client.properties
List Kafka users
kafka-configs.sh --bootstrap-server $BROKERS --describe --entity-type users --command-config client.properties
Delete a Kafka user
kafka-configs.sh --bootstrap-server $BROKERS --alter --delete-config 'SCRAM-SHA-256' --entity-type users --entity-name username --command-config client.properties
List all ACLs
kafka-acls.sh --bootstrap-server $BROKERS --list --command-config client.properties
List ACL for Kafka user
kafka-acls.sh --bootstrap-server $BROKERS --principal User:admin --list --command-config client.properties
Set ACL for Kafka user
kafka-acls.sh --bootstrap-server $BROKERS --add --allow-principal User:alice --producer --topic my-topic --command-config client.properties
Remove ACL for Kafka user
kafka-acls.sh --bootstrap-server $BROKERS --remove --allow-principal User:bob --consumer --topic my-topic --group my-consumer-group --command-config client.properties
Produce
Start a producer
kafka-console-producer.sh --bootstrap-server $BROKERS --topic my-topic --producer.config client.properties
>
Consume
Start a consumer
kafka-console-consumer.sh --bootstrap-server $BROKERS --topic my-topic --consumer.config client.properties --group consu
List all consumer groups
kafka-consumer-groups.sh --list --bootstrap-server $BROKERS --command-config client.properties
Describe consumer group
kafka-consumer-groups.sh --bootstrap-server $BROKERS --describe --group consu --command-config client.properties
Delete a consumer group
kafka-consumer-groups.sh --bootstrap-server $BROKERS --group my-group --group my-other-group --delete --command-config client.properties
Kcat
kcat (formerly kafkacat) is a generic non-JVM producer and consumer for Apache Kafka >= 0.8.
Version 1.5.0 or above must be used to support SASL_SSL authentication.
More information regarding kcat is available on the Confluent website:
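As a sketch, the client.properties settings from earlier translate to librdkafka properties, which kcat can read from a file with -F (assuming kcat >= 1.6; the values below are the same placeholders as before):

```shell
# Placeholders: same credentials as in client.properties; note that librdkafka
# uses 'sasl.mechanisms' (plural) and a plain PEM CA file instead of a JKS truststore
cat > kcat.conf <<'EOF'
security.protocol=SASL_SSL
sasl.mechanisms=SCRAM-SHA-256
sasl.username=user
sasl.password=password
ssl.ca.location=/path/to/ca-cert
EOF
# Consume from my-topic:   kcat -F kcat.conf -b $BROKERS -C -t my-topic
# Produce to my-topic:     kcat -F kcat.conf -b $BROKERS -P -t my-topic
```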
Kafka clients
Please refer to this documentation to create a Kafka client in the language of your choice:
Kafka connectors
Please refer to this documentation to learn more about Kafka connectors: