Apache Kafka - Get started

Deploy a cluster

To get started, go to ITCare and search for your target global service where you'll create your new Apache Kafka cluster.

Search for your Global Service in the top search bar and click on it to display its information page.

Once in your Global Service, click on the Create Resource button, then select Apache Kafka and the required version.

Fill in the form:

  • Name of the future cluster

  • Number of brokers (3+)

  • Sizing

  • Storage required on each broker

  • Target location

  • Target network

  • Management options (backup, monitoring, 24/7, remote site replication)

Click Next once all fields have been completed.

In the next step, enter the password to be set for the super user account, then click Next.

Review the summary before submitting the form.

Provisioning can take up to 2 hours, depending on the current automation load.

Once the deployment is ready, you'll be notified by e-mail.

Start a cluster

At the top of the cluster page, click on the Manage button, then on Start and confirm.

Starting the cluster starts all the virtual machines attached to it.

An e-mail notification will be sent when the service is activated.

Stop a cluster

At the top of the cluster page, click on the Manage button, then on Stop.

Enter an RFC number for tracking (optional), then click Submit.

An e-mail notification will be sent when the cluster is shut down.

Resize nodes

At the top of the cluster page, click on the Manage button, then on Resize nodes.

Select the nodes you wish to resize and choose the new size (CPU/RAM).

Each node will be resized and restarted sequentially.

An e-mail notification will be sent when all nodes have been resized.

Delete a cluster

At the top of the cluster page, click on the Manage button, then on Delete. This will stop and delete all virtual machines.

Enter an RFC number for tracking (optional), then click Submit.

An e-mail notification will be sent when the cluster is deleted.

How to manage Apache Kafka?

To interact with your secure cluster using Kafka scripts, you first need to download the Apache Kafka archive from the official website.

Ideally, you should download the exact version corresponding to your cluster.

Once the archive is extracted on your Linux server, you will find the Kafka shell scripts under the bin/ directory.

These scripts allow you to:

  • Produce and consume

  • Manage users

  • Manage topics

  • Manage ACLs

  • Manage entity configurations (topics, brokers, users)

This guide does not go into the details of every script, but it will help you get started with simple commands.

Authentication

To connect to a secured Kafka cluster, you need to configure a keystore and a property file.

Create keystore

Create the keystore with the provided certificate (an example command follows the list below):

  • alias: the alias of the certificate inside the keystore

  • file: the name of the certificate file containing the provided certificate

  • storepass and keypass: the password protecting your keystore; both should be identical
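
A minimal sketch with keytool, assuming the certificate was delivered as kafka-cert.pem and naming the keystore keystore.jks; all file names, the alias and the password are placeholders to adapt:

  # import the provided certificate into a new keystore (placeholder names and password)
  keytool -importcert -alias kafka-cert -file kafka-cert.pem \
    -keystore keystore.jks -storepass <password> -keypass <password> -noprompt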

To list the content of your keystore, use this command:
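
For example (keystore name and password are the placeholders used above):

  # display the certificates stored in the keystore
  keytool -list -v -keystore keystore.jks -storepass <password>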

Property file

With the keystore created, you now need a property file (an example is shown after the list below):

  • username: the Kafka super user provided to you by e-mail

  • password: the password for that user, which you provided at provisioning

  • ssl.truststore.location: the path to the keystore created previously

  • ssl.truststore.password: the password to unlock your keystore (the storepass/keypass used above)
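
A minimal sketch of such a property file, here called client.properties; the SASL mechanism (SCRAM-SHA-512) is an assumption and every value is a placeholder to adapt to your cluster:

  # client.properties - placeholder values, adjust to your cluster
  security.protocol=SASL_SSL
  # the mechanism depends on your cluster setup (SCRAM-SHA-512 assumed here)
  sasl.mechanism=SCRAM-SHA-512
  sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="<super-user>" \
    password="<password>";
  ssl.truststore.location=/path/to/keystore.jks
  ssl.truststore.password=<password>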

Command line

With these elements, you can now use any Kafka shell script with the following parameters:
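
The admin scripts take the property file through --command-config, while the console producer and consumer take --producer.config and --consumer.config respectively. In the examples below, <broker>:<port> stands for your cluster's bootstrap address and client.properties for the file created above (both are placeholders):

  --bootstrap-server <broker>:<port> --command-config client.properties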

Manage topics

Create a topic
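
For example, to create a topic named my-topic (topic name, partition and replication counts are placeholders):

  bin/kafka-topics.sh --bootstrap-server <broker>:<port> --command-config client.properties \
    --create --topic my-topic --partitions 3 --replication-factor 3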

List topics
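
For example, reusing the placeholders above:

  bin/kafka-topics.sh --bootstrap-server <broker>:<port> --command-config client.properties --list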

Describe a topic
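
For example, for a topic named my-topic (placeholder):

  bin/kafka-topics.sh --bootstrap-server <broker>:<port> --command-config client.properties \
    --describe --topic my-topic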

Delete a topic
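
For example (my-topic is a placeholder):

  bin/kafka-topics.sh --bootstrap-server <broker>:<port> --command-config client.properties \
    --delete --topic my-topic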

Add partitions to topic
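
For example, to increase my-topic to 6 partitions (values are placeholders; the partition count can only be increased):

  bin/kafka-topics.sh --bootstrap-server <broker>:<port> --command-config client.properties \
    --alter --topic my-topic --partitions 6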

List under-replicated partitions for all topics
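
For example, reusing the placeholders above:

  bin/kafka-topics.sh --bootstrap-server <broker>:<port> --command-config client.properties \
    --describe --under-replicated-partitions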

List ACLs for a topic
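
For example, for a topic named my-topic (placeholder):

  bin/kafka-acls.sh --bootstrap-server <broker>:<port> --command-config client.properties \
    --list --topic my-topic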

Manage users

Create a Kafka user
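
A sketch assuming the cluster authenticates users with SCRAM credentials (Kafka 2.7 or later); the user name and password are placeholders:

  bin/kafka-configs.sh --bootstrap-server <broker>:<port> --command-config client.properties \
    --alter --add-config 'SCRAM-SHA-512=[password=<user-password>]' \
    --entity-type users --entity-name my-user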

List Kafka users
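
For example, still assuming SCRAM credentials:

  bin/kafka-configs.sh --bootstrap-server <broker>:<port> --command-config client.properties \
    --describe --entity-type users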

Delete a Kafka user
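
A sketch that removes the SCRAM credential of a placeholder user my-user, which effectively deletes the user under the SCRAM assumption above:

  bin/kafka-configs.sh --bootstrap-server <broker>:<port> --command-config client.properties \
    --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name my-user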

List all ACLs
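
For example, reusing the placeholders above:

  bin/kafka-acls.sh --bootstrap-server <broker>:<port> --command-config client.properties --list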

List ACL for Kafka user
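
For example, for a placeholder user my-user:

  bin/kafka-acls.sh --bootstrap-server <broker>:<port> --command-config client.properties \
    --list --principal User:my-user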

Set ACL for Kafka user
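
For example, to allow a placeholder user my-user to read from and write to my-topic (adjust the operations to your needs):

  bin/kafka-acls.sh --bootstrap-server <broker>:<port> --command-config client.properties \
    --add --allow-principal User:my-user --operation Read --operation Write --topic my-topic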

Remove ACL for Kafka user
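
For example, to remove the Read permission granted above (same placeholders; --force skips the confirmation prompt):

  bin/kafka-acls.sh --bootstrap-server <broker>:<port> --command-config client.properties \
    --remove --allow-principal User:my-user --operation Read --topic my-topic --force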

Produce

Start a producer
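
For example, to produce messages typed on standard input to my-topic (placeholder); note that the console producer takes --producer.config instead of --command-config:

  bin/kafka-console-producer.sh --bootstrap-server <broker>:<port> \
    --producer.config client.properties --topic my-topic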

Consume

Start a consumer
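
For example, to consume my-topic from the beginning (placeholder topic); the console consumer takes --consumer.config:

  bin/kafka-console-consumer.sh --bootstrap-server <broker>:<port> \
    --consumer.config client.properties --topic my-topic --from-beginning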

List all consumer groups
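
For example, reusing the placeholders above:

  bin/kafka-consumer-groups.sh --bootstrap-server <broker>:<port> --command-config client.properties --list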

Describe consumer group
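
For example, for a placeholder group my-group:

  bin/kafka-consumer-groups.sh --bootstrap-server <broker>:<port> --command-config client.properties \
    --describe --group my-group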

Delete a consumer group
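
For example (my-group is a placeholder; the group must have no active members to be deleted):

  bin/kafka-consumer-groups.sh --bootstrap-server <broker>:<port> --command-config client.properties \
    --delete --group my-group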

Kcat

kcat (formerly kafkacat) is a generic non-JVM producer and consumer for Apache Kafka >= 0.8.

Version 1.5.0 and above must be used to support SASL_SSL authentication.
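
A minimal sketch that lists cluster metadata with kcat (older releases install the binary as kafkacat); the SASL mechanism mirrors the assumption made in client.properties and all values are placeholders:

  kcat -b <broker>:<port> \
    -X security.protocol=SASL_SSL \
    -X sasl.mechanisms=SCRAM-SHA-512 \
    -X sasl.username=<super-user> \
    -X sasl.password=<password> \
    -X ssl.ca.location=/path/to/kafka-cert.pem \
    -L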

More information regarding kcat is available on the Confluent website.

Kafka clients

Please refer to this documentation to create a Kafka client in the language you need.

Kafka connectors

Please refer to this documentation to learn more about Kafka connectors.

Kafka add broker node

At the top of the cluster page, click on the Manage button, then on Add broker node. Select the disk size and the availability zone.

Enter an RFC number for tracking (optional), then click Submit.

This feature is only available for Kafka 3.6 or later. Only one node can be added at a time. The node's name, RAM and CPU are predefined to keep the cluster consistent.

An e-mail notification will be sent when the node has been created.
