Apache Kafka Simple Producer Example
1. Introduction
This is an in-depth article about a simple Apache Kafka producer example. Apache Kafka is an open-source Apache project that was initially created at LinkedIn. The Kafka framework is written in Java and Scala. It supports publish-subscribe messaging, is fault-tolerant, and scales to high-volume messaging workloads. ZooKeeper is the component that manages the Apache Kafka server. Kafka provides reliability, scalability, high performance, distributed logging, and durability.
2. Apache Kafka – Simple Producer
2.1 Prerequisites
Java 7 or 8 is required on a Linux, Windows, or Mac operating system. Maven 3.6.1 is required for building the Kafka producer application.
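If you want to confirm which Java and Maven versions are installed, you can run the commands below. The exact output will vary with your environment.
Version Check
java -version
mvn -version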
2.2 Download
Java 8 can be downloaded from the Oracle website. The latest Kafka releases are available from the Apache Kafka website.
2.3 Setup
You can set the JAVA_HOME and PATH environment variables as shown below:
Setup
JAVA_HOME="/desktop/jdk1.8.0_73"
export JAVA_HOME
PATH=$JAVA_HOME/bin:$PATH
export PATH
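To keep these settings across terminal sessions, you can optionally append the same exports to your shell profile. The profile file (~/.bash_profile here) and the JDK path are just the ones used in this example and may differ on your machine.
Optional: Persist Environment Variables
echo 'export JAVA_HOME="/desktop/jdk1.8.0_73"' >> ~/.bash_profile
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> ~/.bash_profile
source ~/.bash_profile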
2.4 How to download and install Apache Kafka
Apache Kafka’s latest releases are available from the Apache Kafka website. After downloading, the archive can be extracted to a folder, as shown below.
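For example, for the kafka_2.12-2.8.0 release used in this article, the archive can be downloaded and extracted from the command line as shown below. The URL follows the Apache archive layout and may differ for newer releases.
Download and Extract
curl -O https://archive.apache.org/dist/kafka/2.8.0/kafka_2.12-2.8.0.tgz
tar -xzf kafka_2.12-2.8.0.tgz
cd kafka_2.12-2.8.0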
To start ZooKeeper, you can use the command below:
ZooKeeper Run Command
bin/zookeeper-server-start.sh config/zookeeper.properties
The output of the above command is shown below:
ZooKeeper Output
apples-MacBook-Air:kafka_2.12-2.8.0 bhagvan.kommadi$ bin/zookeeper-server-start.sh config/zookeeper.properties
[2021-05-10 14:15:56,896] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2021-05-10 14:15:56,905] WARN config/zookeeper.properties is relative. Prepend ./ to indicate that you're sure! (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2021-05-10 14:15:56,945] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2021-05-10 14:15:56,945] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2021-05-10 14:15:56,956] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2021-05-10 14:15:56,959] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2021-05-10 14:15:56,960] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2021-05-10 14:15:56,960] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2021-05-10 14:15:56,985] INFO Log4j 1.2 jmx support found and enabled. (org.apache.zookeeper.jmx.ManagedUtil)
[2021-05-10 14:15:57,020] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2021-05-10 14:15:57,021] WARN config/zookeeper.properties is relative. Prepend ./ to indicate that you're sure! (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2021-05-10 14:15:57,024] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2021-05-10 14:15:57,024] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2021-05-10 14:15:57,024] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2021-05-10 14:15:57,046] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2021-05-10 14:15:57,127] INFO Server environment:zookeeper.version=3.5.9-83df9301aa5c2a5d284a9940177808c01bc35cef, built on 01/06/2021 20:03 GMT (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-10 14:15:57,127] INFO Server environment:host.name=localhost (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-10 14:15:57,129] INFO Server environment:java.version=1.8.0_101 (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-10 14:15:57,129] INFO Server environment:java.vendor=Oracle Corporation (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-10 14:15:57,130] INFO Server environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_101.jdk/Contents/Home/jre (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-10 14:15:57,130] INFO Server
environment:java.class.path=/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/activation-1.1.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/aopalliance-repackaged-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/argparse4j-0.7.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/audience-annotations-0.5.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/commons-cli-1.4.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/commons-lang3-3.8.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-api-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-basic-auth-extension-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-file-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-json-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-mirror-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-mirror-client-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-runtime-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-transforms-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/hk2-api-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/hk2-locator-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/hk2-utils-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-annotations-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-core-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-databind-2.10.5.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-dataformat-csv-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-datatype-jdk8-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-jaxrs-base-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-jaxrs-json-provider-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-module-jaxb-annotations-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-module-paranamer-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-module-scala_2.12-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.activation-api-1.2.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.annotation-api-1.3.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.inject-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.validation-api-2.0.2.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/javassist-3.27.0-GA.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/javax.servlet-api-3.1.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jaxb-api-2.3.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-client-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-common-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-container-servlet-2.31.jar:/
Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-container-servlet-core-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-hk2-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-media-jaxb-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-server-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-client-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-continuation-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-http-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-io-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-security-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-server-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-servlet-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-servlets-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-util-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-util-ajax-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jline-3.12.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jopt-simple-5.0.4.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-clients-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-log4j-appender-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-metadata-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-raft-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-shell-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-streams-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-streams-examples-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-streams-scala_2.12-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-streams-test-utils-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-tools-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka_2.12-2.8.0-sources.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka_2.12-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/log4j-1.2.17.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/lz4-java-1.7.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/maven-artifact-3.6.3.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/metrics-core-2.2.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-buffer-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-codec-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-common-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-handler-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-resolver-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-transport-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-transport-native-epoll-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-transport-native-un
ix-common-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/paranamer-2.8.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/plexus-utils-3.2.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/reflections-0.9.12.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/rocksdbjni-5.18.4.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-collection-compat_2.12-2.3.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-java8-compat_2.12-0.9.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-library-2.12.13.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-logging_2.12-3.9.2.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-reflect-2.12.13.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/slf4j-api-1.7.30.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/slf4j-log4j12-1.7.30.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/snappy-java-1.1.8.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/zookeeper-3.5.9.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/zookeeper-jute-3.5.9.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/zstd-jni-1.4.9-1.jar (org.apache.zookeeper.server.ZooKeeperServer) [2021-05-10 14:15:57,152] INFO Server environment:java.library.path=/Users/bhagvan.kommadi/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:. (org.apache.zookeeper.server.ZooKeeperServer) [2021-05-10 14:15:57,152] INFO Server environment:java.io.tmpdir=/var/folders/cr/0y892lq14qv7r24yl0gh0_dm0000gp/T/ (org.apache.zookeeper.server.ZooKeeperServer) [2021-05-10 14:15:57,153] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) [2021-05-10 14:15:57,153] INFO Server environment:os.name=Mac OS X (org.apache.zookeeper.server.ZooKeeperServer) [2021-05-10 14:15:57,153] INFO Server environment:os.arch=x86_64 (org.apache.zookeeper.server.ZooKeeperServer) [2021-05-10 14:15:57,153] INFO Server environment:os.version=10.16 (org.apache.zookeeper.server.ZooKeeperServer) [2021-05-10 14:15:57,153] INFO Server environment:user.name=bhagvan.kommadi (org.apache.zookeeper.server.ZooKeeperServer) [2021-05-10 14:15:57,153] INFO Server environment:user.home=/Users/bhagvan.kommadi (org.apache.zookeeper.server.ZooKeeperServer) [2021-05-10 14:15:57,156] INFO Server environment:user.dir=/Users/bhagvan.kommadi/Desktop/kafka_2.12-2.8.0 (org.apache.zookeeper.server.ZooKeeperServer) [2021-05-10 14:15:57,156] INFO Server environment:os.memory.free=499MB (org.apache.zookeeper.server.ZooKeeperServer) [2021-05-10 14:15:57,156] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) [2021-05-10 14:15:57,156] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) [2021-05-10 14:15:57,171] INFO minSessionTimeout set to 6000 (org.apache.zookeeper.server.ZooKeeperServer) [2021-05-10 14:15:57,171] INFO maxSessionTimeout set to 60000 (org.apache.zookeeper.server.ZooKeeperServer) [2021-05-10 14:15:57,177] INFO Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /tmp/zookeeper/version-2 snapdir /tmp/zookeeper/version-2 (org.apache.zookeeper.server.ZooKeeperServer) [2021-05-10 14:15:57,249] INFO Using 
org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
[2021-05-10 14:15:57,280] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2021-05-10 14:15:57,361] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2021-05-10 14:15:57,458] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
[2021-05-10 14:15:58,056] INFO Snapshotting: 0x0 to /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2021-05-10 14:15:58,081] INFO Snapshotting: 0x0 to /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2021-05-10 14:15:58,644] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
[2021-05-10 14:15:58,676] INFO Using checkIntervalMs=60000 maxPerMinute=10000 (org.apache.zookeeper.server.ContainerManager)
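The output above shows ZooKeeper listening on port 2181. If you want to confirm this from another terminal before starting the broker, you can optionally run a quick port check such as the one below (this is only a sanity check, not part of the Kafka distribution).
Optional: ZooKeeper Port Check
nc -z localhost 2181 && echo "ZooKeeper is listening on 2181"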
You can now start the Apache Kafka server using the command below:
Apache Kafka Server Run Command
bin/kafka-server-start.sh config/server.properties
The output of the above command is shown below:
Apache Kafka Server Output
apples-MacBook-Air:kafka_2.12-2.8.0 bhagvan.kommadi$ bin/kafka-server-start.sh config/server.properties
[2021-05-10 14:17:28,975] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2021-05-10 14:17:33,397] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2021-05-10 14:17:34,043] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2021-05-10 14:17:34,108] INFO starting (kafka.server.KafkaServer)
[2021-05-10 14:17:34,112] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2021-05-10 14:17:34,319] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2021-05-10 14:17:34,382] INFO Client environment:zookeeper.version=3.5.9-83df9301aa5c2a5d284a9940177808c01bc35cef, built on 01/06/2021 20:03 GMT (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,382] INFO Client environment:host.name=localhost (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,382] INFO Client environment:java.version=1.8.0_101 (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,382] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,383] INFO Client environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_101.jdk/Contents/Home/jre (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,383] INFO Client environment:java.class.path=/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/activation-1.1.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/aopalliance-repackaged-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/argparse4j-0.7.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/audience-annotations-0.5.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/commons-cli-1.4.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/commons-lang3-3.8.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-api-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-basic-auth-extension-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-file-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-json-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-mirror-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-mirror-client-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-runtime-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-transforms-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/hk2-api-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/hk2-locator-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/hk2-utils-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-annotations-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-core-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-databind-2.10.5.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-dataformat-csv-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-datatype-jdk8-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-jaxrs-base-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-jaxrs-json-provider-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-module-jaxb-annotations-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-module-paranamer-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-module-scala_2.12-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.activation-api-1.2.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.annotation-api-1.3.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.inject-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.validation-api-2.0.2.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/javassist-3.27.0-GA.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/javax.servlet-api-3.1.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jaxb-api-2.3.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-client-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-common-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../li
bs/jersey-container-servlet-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-container-servlet-core-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-hk2-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-media-jaxb-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-server-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-client-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-continuation-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-http-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-io-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-security-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-server-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-servlet-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-servlets-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-util-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-util-ajax-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jline-3.12.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jopt-simple-5.0.4.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-clients-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-log4j-appender-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-metadata-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-raft-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-shell-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-streams-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-streams-examples-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-streams-scala_2.12-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-streams-test-utils-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-tools-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka_2.12-2.8.0-sources.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka_2.12-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/log4j-1.2.17.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/lz4-java-1.7.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/maven-artifact-3.6.3.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/metrics-core-2.2.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-buffer-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-codec-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-common-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-handler-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-resolver-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-transport-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-transport-native-epoll-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0
/bin/../libs/netty-transport-native-unix-common-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/paranamer-2.8.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/plexus-utils-3.2.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/reflections-0.9.12.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/rocksdbjni-5.18.4.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-collection-compat_2.12-2.3.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-java8-compat_2.12-0.9.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-library-2.12.13.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-logging_2.12-3.9.2.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-reflect-2.12.13.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/slf4j-api-1.7.30.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/slf4j-log4j12-1.7.30.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/snappy-java-1.1.8.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/zookeeper-3.5.9.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/zookeeper-jute-3.5.9.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/zstd-jni-1.4.9-1.jar (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,389] INFO Client environment:java.library.path=/Users/bhagvan.kommadi/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:. (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,391] INFO Client environment:java.io.tmpdir=/var/folders/cr/0y892lq14qv7r24yl0gh0_dm0000gp/T/ (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,392] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,392] INFO Client environment:os.name=Mac OS X (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,393] INFO Client environment:os.arch=x86_64 (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,398] INFO Client environment:os.version=10.16 (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,400] INFO Client environment:user.name=bhagvan.kommadi (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,401] INFO Client environment:user.home=/Users/bhagvan.kommadi (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,401] INFO Client environment:user.dir=/Users/bhagvan.kommadi/Desktop/kafka_2.12-2.8.0 (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,401] INFO Client environment:os.memory.free=973MB (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,402] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,402] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,535] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@799f10e1 (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,568] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2021-05-10 14:17:34,590] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[2021-05-10 14:17:34,596] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2021-05-10 14:17:34,606] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2021-05-10 14:17:34,728] INFO Socket connection established, initiating session, client: /0:0:0:0:0:0:0:1:51339, server: localhost/0:0:0:0:0:0:0:1:2181 (org.apache.zookeeper.ClientCnxn)
[2021-05-10 14:17:34,902] INFO Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x10000234a530000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
[2021-05-10 14:17:34,954] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2021-05-10 14:17:35,424] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
[2021-05-10 14:17:35,456] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
[2021-05-10 14:17:35,457] INFO Cleared cache (kafka.server.FinalizedFeatureCache)
[2021-05-10 14:17:36,213] INFO Cluster ID = amutkB48RcCbNccRQjgW3g (kafka.server.KafkaServer)
[2021-05-10 14:17:36,225] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2021-05-10 14:17:36,521] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.heartbeat.interval.ms = 2000
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
broker.session.timeout.ms = 9000
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.listener.names = null
controller.quorum.append.linger.ms = 25
controller.quorum.election.backoff.max.ms = 1000
controller.quorum.election.timeout.ms = 1000
controller.quorum.fetch.timeout.ms = 2000
controller.quorum.request.timeout.ms = 2000
controller.quorum.retry.backoff.ms = 20
controller.quorum.voters = []
controller.quota.window.num = 11
controller.quota.window.size.seconds = 1
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delegation.token.secret.key = null
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.max.bytes = 57671680
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 1800000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
host.name =
initial.broker.registration.timeout.ms = 60000
inter.broker.listener.name = null
inter.broker.protocol.version = 2.8-IV1
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = null
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.max.compaction.lag.ms = 9223372036854775807
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /tmp/kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.8-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connection.creation.rate = 2147483647
max.connections = 2147483647
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1048588
metadata.log.dir = null
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
node.id = -1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
process.roles = []
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 30000
replica.selector.class = null
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.controller.protocol = GSSAPI
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
security.providers = null
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = DEFAULT
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.clientCnxnSocket = null
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 18000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 18000
zookeeper.set.acl = false
zookeeper.ssl.cipher.suites = null
zookeeper.ssl.client.enable = false
zookeeper.ssl.crl.enable = false
zookeeper.ssl.enabled.protocols = null
zookeeper.ssl.endpoint.identification.algorithm = HTTPS
zookeeper.ssl.keystore.location = null
zookeeper.ssl.keystore.password = null
zookeeper.ssl.keystore.type = null
zookeeper.ssl.ocsp.enable = false
zookeeper.ssl.protocol = TLSv1.2
zookeeper.ssl.truststore.location = null
zookeeper.ssl.truststore.password = null
zookeeper.ssl.truststore.type = null
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2021-05-10 14:17:36,552] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.heartbeat.interval.ms = 2000
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
broker.session.timeout.ms = 9000
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.listener.names = null
controller.quorum.append.linger.ms = 25
controller.quorum.election.backoff.max.ms = 1000
controller.quorum.election.timeout.ms = 1000
controller.quorum.fetch.timeout.ms = 2000
controller.quorum.request.timeout.ms = 2000
controller.quorum.retry.backoff.ms = 20
controller.quorum.voters = []
controller.quota.window.num = 11
controller.quota.window.size.seconds = 1
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delegation.token.secret.key = null
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.max.bytes = 57671680
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 1800000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
host.name =
initial.broker.registration.timeout.ms = 60000
inter.broker.listener.name = null
inter.broker.protocol.version = 2.8-IV1
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = null
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.max.compaction.lag.ms = 9223372036854775807
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /tmp/kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.8-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connection.creation.rate = 2147483647
max.connections = 2147483647
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1048588
metadata.log.dir = null
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
node.id = -1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
process.roles = []
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 30000
replica.selector.class = null
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.controller.protocol = GSSAPI
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
security.providers = null
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = DEFAULT
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.clientCnxnSocket = null
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 18000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 18000
zookeeper.set.acl = false
zookeeper.ssl.cipher.suites = null
zookeeper.ssl.client.enable = false
zookeeper.ssl.crl.enable = false
zookeeper.ssl.enabled.protocols = null
zookeeper.ssl.endpoint.identification.algorithm = HTTPS
zookeeper.ssl.keystore.location = null
zookeeper.ssl.keystore.password = null
zookeeper.ssl.keystore.type = null
zookeeper.ssl.ocsp.enable = false
zookeeper.ssl.protocol = TLSv1.2
zookeeper.ssl.truststore.location = null
zookeeper.ssl.truststore.password = null
zookeeper.ssl.truststore.type = null
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2021-05-10 14:17:37,429] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-05-10 14:17:37,431] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-05-10 14:17:37,436] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-05-10 14:17:37,441] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-05-10 14:17:37,515] INFO Log directory /tmp/kafka-logs not found, creating it. (kafka.log.LogManager)
[2021-05-10 14:17:38,257] INFO Loading logs from log dirs ArrayBuffer(/tmp/kafka-logs) (kafka.log.LogManager)
[2021-05-10 14:17:38,415] INFO Attempting recovery for all logs in /tmp/kafka-logs since no clean shutdown file was found (kafka.log.LogManager)
[2021-05-10 14:17:38,941] INFO Loaded 0 logs in 658ms. (kafka.log.LogManager)
[2021-05-10 14:17:38,945] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2021-05-10 14:17:38,964] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2021-05-10 14:17:46,163] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2021-05-10 14:17:46,337] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2021-05-10 14:17:47,115] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
[2021-05-10 14:17:47,786] INFO [broker-0-to-controller-send-thread]: Starting (kafka.server.BrokerToControllerRequestThread)
[2021-05-10 14:17:48,320] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-10 14:17:48,327] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-10 14:17:48,330] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-10 14:17:48,333] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-10 14:17:48,466] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2021-05-10 14:17:48,922] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
[2021-05-10 14:17:49,027] INFO Stat of the created znode at /brokers/ids/0 is: 25,25,1620636468991,1620636468991,1,0,0,72057745608736768,202,0,25
(kafka.zk.KafkaZkClient)
[2021-05-10 14:17:49,030] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT://localhost:9092, czxid (broker epoch): 25 (kafka.zk.KafkaZkClient)
[2021-05-10 14:17:49,383] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-10 14:17:49,391] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
[2021-05-10 14:17:49,423] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-10 14:17:49,425] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-10 14:17:49,567] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
[2021-05-10 14:17:49,652] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2021-05-10 14:17:49,660] INFO Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Features{}, epoch=0). (kafka.server.FinalizedFeatureCache)
[2021-05-10 14:17:49,682] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2021-05-10 14:17:50,362] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2021-05-10 14:17:50,363] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2021-05-10 14:17:50,388] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2021-05-10 14:17:50,390] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2021-05-10 14:17:51,027] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-10 14:17:51,192] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2021-05-10 14:17:51,247] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Starting socket server acceptors and processors (kafka.network.SocketServer)
[2021-05-10 14:17:51,293] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Started data-plane acceptor and processor(s) for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
[2021-05-10 14:17:51,294] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Started socket server acceptors and processors (kafka.network.SocketServer)
[2021-05-10 14:17:51,495] INFO Kafka version: 2.8.0 (org.apache.kafka.common.utils.AppInfoParser)
[2021-05-10 14:17:51,495] INFO Kafka commitId: ebb1d6e21cc92130 (org.apache.kafka.common.utils.AppInfoParser)
[2021-05-10 14:17:51,495] INFO Kafka startTimeMs: 1620636471294 (org.apache.kafka.common.utils.AppInfoParser)
[2021-05-10 14:17:51,505] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
[2021-05-10 14:17:51,520] INFO [broker-0-to-controller-send-thread]: Recorded new controller, from now on will use broker localhost:9092 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread)
/bin/../libs/netty-transport-native-unix-common-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/paranamer-2.8.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/plexus-utils-3.2.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/reflections-0.9.12.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/rocksdbjni-5.18.4.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-collection-compat_2.12-2.3.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-java8-compat_2.12-0.9.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-library-2.12.13.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-logging_2.12-3.9.2.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-reflect-2.12.13.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/slf4j-api-1.7.30.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/slf4j-log4j12-1.7.30.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/snappy-java-1.1.8.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/zookeeper-3.5.9.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/zookeeper-jute-3.5.9.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/zstd-jni-1.4.9-1.jar (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,389] INFO Client environment:java.library.path=/Users/bhagvan.kommadi/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:. (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,391] INFO Client environment:java.io.tmpdir=/var/folders/cr/0y892lq14qv7r24yl0gh0_dm0000gp/T/ (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,392] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,392] INFO Client environment:os.name=Mac OS X (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,393] INFO Client environment:os.arch=x86_64 (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,398] INFO Client environment:os.version=10.16 (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,400] INFO Client environment:user.name=bhagvan.kommadi (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,401] INFO Client environment:user.home=/Users/bhagvan.kommadi (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,401] INFO Client environment:user.dir=/Users/bhagvan.kommadi/Desktop/kafka_2.12-2.8.0 (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,401] INFO Client environment:os.memory.free=973MB (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,402] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,402] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,535] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@799f10e1 (org.apache.zookeeper.ZooKeeper)
[2021-05-10 14:17:34,568] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2021-05-10 14:17:34,590] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[2021-05-10 14:17:34,596] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2021-05-10 14:17:34,606] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2021-05-10 14:17:34,728] INFO Socket connection established, initiating session, client: /0:0:0:0:0:0:0:1:51339, server: localhost/0:0:0:0:0:0:0:1:2181 (org.apache.zookeeper.ClientCnxn)
[2021-05-10 14:17:34,902] INFO Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x10000234a530000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
[2021-05-10 14:17:34,954] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2021-05-10 14:17:35,424] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
[2021-05-10 14:17:35,456] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
[2021-05-10 14:17:35,457] INFO Cleared cache (kafka.server.FinalizedFeatureCache)
[2021-05-10 14:17:36,213] INFO Cluster ID = amutkB48RcCbNccRQjgW3g (kafka.server.KafkaServer)
[2021-05-10 14:17:36,225] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2021-05-10 14:17:36,521] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.heartbeat.interval.ms = 2000
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
broker.session.timeout.ms = 9000
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.listener.names = null
controller.quorum.append.linger.ms = 25
controller.quorum.election.backoff.max.ms = 1000
controller.quorum.election.timeout.ms = 1000
controller.quorum.fetch.timeout.ms = 2000
controller.quorum.request.timeout.ms = 2000
controller.quorum.retry.backoff.ms = 20
controller.quorum.voters = []
controller.quota.window.num = 11
controller.quota.window.size.seconds = 1
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delegation.token.secret.key = null
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.max.bytes = 57671680
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 1800000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
host.name =
initial.broker.registration.timeout.ms = 60000
inter.broker.listener.name = null
inter.broker.protocol.version = 2.8-IV1
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = null
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.max.compaction.lag.ms = 9223372036854775807
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /tmp/kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.8-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connection.creation.rate = 2147483647
max.connections = 2147483647
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1048588
metadata.log.dir = null
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
node.id = -1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
process.roles = []
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 30000
replica.selector.class = null
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.controller.protocol = GSSAPI
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
security.providers = null
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = DEFAULT
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.clientCnxnSocket = null
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 18000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 18000
zookeeper.set.acl = false
zookeeper.ssl.cipher.suites = null
zookeeper.ssl.client.enable = false
zookeeper.ssl.crl.enable = false
zookeeper.ssl.enabled.protocols = null
zookeeper.ssl.endpoint.identification.algorithm = HTTPS
zookeeper.ssl.keystore.location = null
zookeeper.ssl.keystore.password = null
zookeeper.ssl.keystore.type = null
zookeeper.ssl.ocsp.enable = false
zookeeper.ssl.protocol = TLSv1.2
zookeeper.ssl.truststore.location = null
zookeeper.ssl.truststore.password = null
zookeeper.ssl.truststore.type = null
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2021-05-10 14:17:36,552] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.heartbeat.interval.ms = 2000
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
broker.session.timeout.ms = 9000
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.listener.names = null
controller.quorum.append.linger.ms = 25
controller.quorum.election.backoff.max.ms = 1000
controller.quorum.election.timeout.ms = 1000
controller.quorum.fetch.timeout.ms = 2000
controller.quorum.request.timeout.ms = 2000
controller.quorum.retry.backoff.ms = 20
controller.quorum.voters = []
controller.quota.window.num = 11
controller.quota.window.size.seconds = 1
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delegation.token.secret.key = null
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.max.bytes = 57671680
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 1800000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
host.name =
initial.broker.registration.timeout.ms = 60000
inter.broker.listener.name = null
inter.broker.protocol.version = 2.8-IV1
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = null
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.max.compaction.lag.ms = 9223372036854775807
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /tmp/kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.8-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connection.creation.rate = 2147483647
max.connections = 2147483647
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1048588
metadata.log.dir = null
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
node.id = -1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
process.roles = []
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 30000
replica.selector.class = null
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.controller.protocol = GSSAPI
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
security.providers = null
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = DEFAULT
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.clientCnxnSocket = null
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 18000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 18000
zookeeper.set.acl = false
zookeeper.ssl.cipher.suites = null
zookeeper.ssl.client.enable = false
zookeeper.ssl.crl.enable = false
zookeeper.ssl.enabled.protocols = null
zookeeper.ssl.endpoint.identification.algorithm = HTTPS
zookeeper.ssl.keystore.location = null
zookeeper.ssl.keystore.password = null
zookeeper.ssl.keystore.type = null
zookeeper.ssl.ocsp.enable = false
zookeeper.ssl.protocol = TLSv1.2
zookeeper.ssl.truststore.location = null
zookeeper.ssl.truststore.password = null
zookeeper.ssl.truststore.type = null
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2021-05-10 14:17:37,429] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-05-10 14:17:37,431] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-05-10 14:17:37,436] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-05-10 14:17:37,441] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-05-10 14:17:37,515] INFO Log directory /tmp/kafka-logs not found, creating it. (kafka.log.LogManager)
[2021-05-10 14:17:38,257] INFO Loading logs from log dirs ArrayBuffer(/tmp/kafka-logs) (kafka.log.LogManager)
[2021-05-10 14:17:38,415] INFO Attempting recovery for all logs in /tmp/kafka-logs since no clean shutdown file was found (kafka.log.LogManager)
[2021-05-10 14:17:38,941] INFO Loaded 0 logs in 658ms. (kafka.log.LogManager)
[2021-05-10 14:17:38,945] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2021-05-10 14:17:38,964] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2021-05-10 14:17:46,163] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2021-05-10 14:17:46,337] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2021-05-10 14:17:47,115] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
[2021-05-10 14:17:47,786] INFO [broker-0-to-controller-send-thread]: Starting (kafka.server.BrokerToControllerRequestThread)
[2021-05-10 14:17:48,320] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-10 14:17:48,327] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-10 14:17:48,330] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-10 14:17:48,333] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-10 14:17:48,466] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2021-05-10 14:17:48,922] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
[2021-05-10 14:17:49,027] INFO Stat of the created znode at /brokers/ids/0 is: 25,25,1620636468991,1620636468991,1,0,0,72057745608736768,202,0,25
(kafka.zk.KafkaZkClient)
[2021-05-10 14:17:49,030] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT://localhost:9092, czxid (broker epoch): 25 (kafka.zk.KafkaZkClient)
[2021-05-10 14:17:49,383] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-10 14:17:49,391] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
[2021-05-10 14:17:49,423] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-10 14:17:49,425] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-10 14:17:49,567] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
[2021-05-10 14:17:49,652] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2021-05-10 14:17:49,660] INFO Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Features{}, epoch=0). (kafka.server.FinalizedFeatureCache)
[2021-05-10 14:17:49,682] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2021-05-10 14:17:50,362] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2021-05-10 14:17:50,363] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2021-05-10 14:17:50,388] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2021-05-10 14:17:50,390] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2021-05-10 14:17:51,027] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-10 14:17:51,192] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2021-05-10 14:17:51,247] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Starting socket server acceptors and processors (kafka.network.SocketServer)
[2021-05-10 14:17:51,293] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Started data-plane acceptor and processor(s) for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
[2021-05-10 14:17:51,294] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Started socket server acceptors and processors (kafka.network.SocketServer)
[2021-05-10 14:17:51,495] INFO Kafka version: 2.8.0 (org.apache.kafka.common.utils.AppInfoParser)
[2021-05-10 14:17:51,495] INFO Kafka commitId: ebb1d6e21cc92130 (org.apache.kafka.common.utils.AppInfoParser)
[2021-05-10 14:17:51,495] INFO Kafka startTimeMs: 1620636471294 (org.apache.kafka.common.utils.AppInfoParser)
[2021-05-10 14:17:51,505] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
[2021-05-10 14:17:51,520] INFO [broker-0-to-controller-send-thread]: Recorded new controller, from now on will use broker localhost:9092 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread)
2.5 Simple Producer
The Apache Kafka Producer API provides the Producer class, which connects to the Kafka server. The class has a send method that publishes messages to one or more topics. Producers can work in sync or async mode: a sync producer sends each message immediately, while an async producer sends messages in the background and is used for higher-volume processing. Since release 0.9, a callback can be registered on the send method, which is typically used for error handling, as sketched below.
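For illustration, here is a minimal sketch of an asynchronous send with a callback. It is not part of the original example; the broker address localhost:9092, the topic name ExampleTopic, and the key/value strings are placeholder assumptions.
Async Send with Callback (sketch)
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
public class AsyncCallbackSketch {
    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        properties.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = new KafkaProducer<>(properties);
        // send() returns immediately; the callback fires once the broker acknowledges or an error occurs
        producer.send(new ProducerRecord<>("ExampleTopic", "key-1", "value-1"),
                (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace(); // error-handler registration
                    } else {
                        System.out.println("Sent to partition " + metadata.partition()
                                + " at offset " + metadata.offset());
                    }
                });
        producer.close(); // flushes pending records and waits for outstanding sends
    }
}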
The configuration settings for the Producer API are listed below; a configuration sketch using these keys follows the table.
| Configuration Setting | Description |
|---|---|
| client.id | Identifies the producer application. |
| producer.type | Either sync or async. |
| acks | Controls the criteria under which producer requests are considered complete. |
| retries | Number of retries; if a producer request fails, it is retried automatically up to this value. |
| bootstrap.servers | Bootstrapping list of brokers. |
| linger.ms | Adds a small delay to wait for more records to build up into a batch. |
| key.serializer | Serializer class for the record key. |
| value.serializer | Serializer class for the record value. |
| batch.size | Buffer (batch) size in bytes. |
| buffer.memory | Total amount of memory available to the producer for buffering. |
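As a convenience, the same keys can be supplied through the ProducerConfig constants of the kafka-clients library, which avoids typos in property names. The sketch below is only an alternative way to build the Properties object used later in this article; the client.id value is an illustrative assumption, the rest mirror the example's values.
Producer Configuration via ProducerConfig (sketch)
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
public class ProducerConfigSketch {
    // Builds the producer Properties using typed constants instead of raw string keys
    public static Properties producerProperties() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // bootstrap.servers
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "simple-producer");        // client.id (illustrative)
        props.put(ProducerConfig.ACKS_CONFIG, "all");                         // acks
        props.put(ProducerConfig.RETRIES_CONFIG, 0);                          // retries
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);                   // batch.size
        props.put(ProducerConfig.LINGER_MS_CONFIG, 1);                        // linger.ms
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);             // buffer.memory
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        return props;
    }
}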
The ProducerRecord class is used to create records. A record holds the topic name, an optional partition number, and a key/value pair. Below is the code for a sample producer that creates a topic and sends messages to it.
Apache Kafka Producer Example
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
public class SimpleProducerExample {
    public static void main(String[] args) throws Exception {
        if (args.length == 0) {
            System.out.println("Enter the topic to be created");
            return;
        }
        String exampleTopicName = args[0];
        // Producer configuration (see the settings table above)
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "localhost:9092");
        properties.put("acks", "all");
        properties.put("retries", 0);
        properties.put("batch.size", 16384);
        properties.put("linger.ms", 1);
        properties.put("buffer.memory", 33554432);
        properties.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = new KafkaProducer<>(properties);
        // Send ten records; the topic is auto-created on first use (auto.create.topics.enable=true)
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<>(exampleTopicName,
                    Integer.toString(i), Integer.toString(i)));
        }
        System.out.println("Message is sent");
        producer.close();
    }
}
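If you need confirmation that each record was written before continuing, you can block on the Future returned by send(). The fragment below is a hedged variation of the loop in the example above; it reuses that example's producer, exampleTopicName, and throws Exception declaration and additionally imports RecordMetadata.
Synchronous Send Variation (sketch)
import org.apache.kafka.clients.producer.RecordMetadata;
// Synchronous variant of the send loop: get() blocks until the broker acknowledges the record
for (int i = 0; i < 10; i++) {
    RecordMetadata metadata = producer.send(new ProducerRecord<>(exampleTopicName,
            Integer.toString(i), Integer.toString(i))).get();
    System.out.println("Record " + i + " written to partition " + metadata.partition()
            + " at offset " + metadata.offset());
}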
The command to compile the above code is shown below:
Apache Kafka Producer Compilation Command
javac -cp "/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/libs/*" SimpleProducerExample.java
The output of the above command is shown below:
Apache Kafka Producer Compilation Output
apples-MacBook-Air:apachekafkaproducer bhagvan.kommadi$ javac -cp "/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/libs/*" SimpleProducerExample.java apples-MacBook-Air:apachekafkaproducer bhagvan.kommadi$
The command for executing the sample producer is shown below:
Apache Kafka Producer Execution Command
java -cp "/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/libs/*:." SimpleProducerExample ExampleTopic
The output of the above command is shown below:
Apache Kafka Producer Execution Output
apples-MacBook-Air:apachekafkaproducer bhagvan.kommadi$ java -cp "/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/libs/*:." SimpleProducerExample ExampleTopic Message is sent
The output of the Kafka server, showing that the example topic was created, is shown below:
Apache Kafka Server Output
[2021-05-10 14:17:49,682] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2021-05-10 14:17:50,362] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2021-05-10 14:17:50,363] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2021-05-10 14:17:50,388] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2021-05-10 14:17:50,390] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2021-05-10 14:17:51,027] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-10 14:17:51,192] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2021-05-10 14:17:51,247] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Starting socket server acceptors and processors (kafka.network.SocketServer)
[2021-05-10 14:17:51,293] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Started data-plane acceptor and processor(s) for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
[2021-05-10 14:17:51,294] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Started socket server acceptors and processors (kafka.network.SocketServer)
[2021-05-10 14:17:51,495] INFO Kafka version: 2.8.0 (org.apache.kafka.common.utils.AppInfoParser)
[2021-05-10 14:17:51,495] INFO Kafka commitId: ebb1d6e21cc92130 (org.apache.kafka.common.utils.AppInfoParser)
[2021-05-10 14:17:51,495] INFO Kafka startTimeMs: 1620636471294 (org.apache.kafka.common.utils.AppInfoParser)
[2021-05-10 14:17:51,505] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
[2021-05-10 14:17:51,520] INFO [broker-0-to-controller-send-thread]: Recorded new controller, from now on will use broker localhost:9092 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2021-05-10 14:29:38,738] INFO Creating topic ExampleTopic with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient)
[2021-05-10 14:29:39,261] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(ExampleTopic-0) (kafka.server.ReplicaFetcherManager)
[2021-05-10 14:29:39,448] INFO [Log partition=ExampleTopic-0, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-10 14:29:39,454] INFO Created log for partition ExampleTopic-0 in /tmp/kafka-logs/ExampleTopic-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-10 14:29:39,456] INFO [Partition ExampleTopic-0 broker=0] No checkpointed highwatermark is found for partition ExampleTopic-0 (kafka.cluster.Partition)
[2021-05-10 14:29:39,458] INFO [Partition ExampleTopic-0 broker=0] Log loaded for partition ExampleTopic-0 with initial high watermark 0 (kafka.cluster.Partition)
3. Download the Source Code
You can download the full source code of this example here: Apache Kafka Simple Producer Example