


Kafka Producer Example

In this Apache Kafka tutorial you will learn how to install Apache Kafka on a Mac using Homebrew. To install Kafka on a Linux machine, refer to this guide.

Kafka & Zookeeper Installation

$ brew install kafka

This command will automatically install Zookeeper as a dependency.

Kafka installation takes a minute or so. If you are working in cluster mode, you need to install it on all the nodes.

How to start Kafka & Zookeeper?

You don't need to run these commands right now, but you can see how to start each service in the installation output log, reproduced below.

To start Zookeeper now and restart it at login, run:

$ brew services start zookeeper

Or, if you don't want/need a background service you can just run:

$ zkServer start

To start Kafka now and restart it at login, run:

$ brew services start kafka

Or, if you don't want/need a background service you can just run:

$ zookeeper-server-start /usr/local/etc/kafka/ & kafka-server-start /usr/local/etc/kafka/

Zookeeper & Kafka Server Configuration

You can open the Zookeeper properties file to see the default configuration; there is not much to explain here. You can see the port number where the client (Kafka in this case) will connect, the directory where snapshots will be stored, and the maximum number of connections per IP address.

$ vi /usr/local/etc/kafka/

# the directory where the snapshot is stored.
# the port at which the clients will connect
# disable the per-ip limit on the number of connections since this is a non-production config
Similarly, you can see the default Kafka server properties. You just need to change the listener setting here to localhost (standalone mode) or to the IP address of the node in cluster mode.

$ vi /usr/local/etc/kafka/

  • Server basics - Here you define the broker id, a unique integer value for each broker.

  • Socket server settings - Here you define the listener hostname and port; by default the line is commented out. For this example the hostname will be localhost, but in a cluster you need to use the respective IP address of each node, e.g. listeners=PLAINTEXT://localhost:9092

  • Log basics - Here you define the log directory, the number of log partitions per topic and the number of recovery threads per data directory.

  • Internal topic settings - Here you can change the topic replication factor, which is 1 by default; in a production environment it is usually > 1.

  • Log flush policy - By default everything is commented out.

  • Log retention policy - The default retention period is 168 hours.

  • Zookeeper - The default port number is the same one you saw during installation: 2181.

  • Group coordinator settings - This is the rebalance delay in milliseconds when a new member joins as a consumer. Kafka topics are usually multi-subscriber, i.e. a topic can have 0, 1 or more consumers.
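Putting the settings above together, a minimal standalone configuration might look like the following sketch. The file name (commonly server.properties), the log directory and the exact defaults vary by Kafka version, so verify each value against your own file:

```properties
# Server basics: unique integer id for each broker
broker.id=0
# Socket server settings: uncomment and set the listener
listeners=PLAINTEXT://localhost:9092
# Log basics
log.dirs=/usr/local/var/lib/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
# Internal topic settings: use a value > 1 in production
offsets.topic.replication.factor=1
# Log retention policy: 168 hours = 7 days
log.retention.hours=168
# Zookeeper connection (same port seen during installation)
zookeeper.connect=localhost:2181
# Group coordinator settings: rebalance delay for newly joining consumers
group.initial.rebalance.delay.ms=0
```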

Starting Zookeeper & Kafka

To start Zookeeper and Kafka, you can start them together as below, or run each command separately, i.e. start Zookeeper first and then start Kafka.

$ zookeeper-server-start /usr/local/etc/kafka/ & kafka-server-start /usr/local/etc/kafka/

This will print a long list of INFO, WARN and ERROR messages. You can scroll back up and look for WARNs and ERRORs, if any. In the log you can see the broker id and the other properties set up by default in the Kafka properties file, as explained earlier.

Let this process run, don't kill it.
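The command above gives only the directory path. On a typical Homebrew install the two properties files are named zookeeper.properties and server.properties — an assumption here, so list your /usr/local/etc/kafka/ directory to confirm — and with those names the fully spelled-out commands would look like:

```shell
# File names are assumed; check your /usr/local/etc/kafka/ directory.
# Start Zookeeper in the background of this shell, then Kafka in the foreground.
zookeeper-server-start /usr/local/etc/kafka/zookeeper.properties &
kafka-server-start /usr/local/etc/kafka/server.properties
```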

Create a Topic and Start Kafka Producer

To create a topic and start a producer, run this command:

$ kafka-console-producer --broker-list localhost:9092 --topic topic1

Here my topic name is "topic1" and this terminal acts as the producer. You can send messages from this terminal.
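The console producer relies on the broker auto-creating the topic (auto.create.topics.enable is true by default). If you prefer to create the topic explicitly first, the kafka-topics tool can do it; the partition and replication values below are example choices for a single-broker, non-production setup:

```shell
# Create "topic1" explicitly on the local broker.
# On older Kafka versions, use --zookeeper localhost:2181 instead of --bootstrap-server.
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic topic1 --partitions 1 --replication-factor 1
```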

Start Kafka Consumer

Now, to start the Kafka consumer, run this command:

$ kafka-console-consumer --bootstrap-server localhost:9092 --topic topic1 --from-beginning

The bootstrap server is simply the broker to connect to. For this example it's localhost with the default port 9092.

In a side-by-side view, the screen on the left is the producer and the screen on the right is the consumer. You can see how messages are transferred from one terminal to the other.
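If the screenshot does not render for you, an example session looks roughly like this; the messages are made up for illustration:

```shell
# producer terminal
$ kafka-console-producer --broker-list localhost:9092 --topic topic1
> hello kafka
> second message

# consumer terminal (prints each message as it arrives)
$ kafka-console-consumer --bootstrap-server localhost:9092 --topic topic1 --from-beginning
hello kafka
second message
```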

Thank you. If you have any questions, please mention them in the comments section below.



©2020 by Data Nebulae