kafkacat
Produce and consume Kafka messages
SYNOPSIS
kafkacat [options]
Producer Mode:
kafkacat -P [producer-options] -t <topic>
Consumer Mode:
kafkacat -C [consumer-options] -t <topic>
Metadata List Mode:
kafkacat -L [list-options]
PARAMETERS
-b <broker,...,broker>
Comma-separated list of Kafka broker host:port pairs to connect to (bootstrap servers). Example: localhost:9092,otherhost:9093.
-t <topic>
Specify the Kafka topic name for producing or consuming messages.
-p <partition>
Specify a partition ID. For producers, -1 (the default) lets the configured partitioner choose a partition; for consumers, -1 (the default) consumes from all partitions.
-o <offset>
Consumer start offset. Accepts an absolute offset, a negative value (relative to the end), beginning (or -2), end (or -1), stored, or s@<timestamp>/e@<timestamp> for time-based offsets. Default is beginning.
-C
Enable consumer mode. Messages are consumed from the specified topic/partition.
-P
Enable producer mode. Messages are read from stdin and produced to the specified topic.
-L
Enable metadata list mode. Lists brokers, topics, and partitions available in the cluster.
-X <property=value>
Set a librdkafka configuration property. Can be specified multiple times. Example: -X compression.codec=snappy, -X security.protocol=SASL_SSL. Refer to librdkafka documentation for available properties.
-q
Quiet mode: Suppress debug and error messages to stderr.
-v
Verbose mode: Print verbose errors and debug messages to stderr.
-d <debug-contexts>
Enable librdkafka debugging contexts (comma-separated; e.g., broker,topic,protocol, or all). Useful for troubleshooting.
-Z
In consumer mode, print NULL values and keys as "NULL" instead of as empty strings; in producer mode, send empty input as NULL messages rather than empty messages.
-D <delimiter>
Message delimiter (default: newline). In producer mode, input read from stdin is split into messages on this delimiter; in consumer mode, it is printed after each message.
-e
In consumer mode, exit after the last message has been consumed from the assigned partitions.
-f <format_string>
For consumer mode, format the output message using a printf-like format string. E.g., '%t/%p/%o/%k/%s\n' for topic/partition/offset/key/payload.
-J
For consumer mode, output messages as JSON, including metadata like topic, partition, offset, timestamp, key, and payload.
-K <delimiter>
Key delimiter. In producer mode, each input line is split on this delimiter into a message key and value. In consumer mode, the key is printed before the value, separated by this delimiter.
-H <header=value>
In producer mode, add a static header to all produced messages. Can be specified multiple times.
-k <key>
In producer mode, use the specified key for all messages, overriding any key read from input.
-s key=<serdes>, -s value=<serdes>
In consumer mode (newer kcat versions), deserialize message keys or values with the given serdes, e.g. -s value=avro together with -r <schema-registry-url>. A bare -s <serdes> applies to both key and value.
-z <compression_codec>
In producer mode, specify the compression codec (e.g., none, gzip, snappy, lz4, zstd). Overrides compression.codec property.
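As a sketch of how several of these flags combine (the broker address localhost:9092 and topic my_topic are placeholder values, and a running cluster is assumed):

```shell
# Consume my_topic from the beginning, print topic, partition,
# offset, key and payload for each message, and exit once the
# end of the assigned partitions is reached (-e).
kafkacat -C \
  -b localhost:9092 \
  -t my_topic \
  -o beginning \
  -e \
  -f 'topic=%t partition=%p offset=%o key=%k payload=%s\n'
```

The -f tokens (%t, %p, %o, %k, %s) correspond to the fields described above; -e makes the command suitable for batch scripts rather than leaving a consumer running.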
DESCRIPTION
kafkacat is a versatile and lightweight command-line utility for interacting with Apache Kafka clusters, often referred to as 'Kafka's netcat' or 'Kafka's cat'. Built on librdkafka (a C client library), it offers a non-JVM alternative for producing, consuming, and inspecting Kafka topics. It's ideal for scripting, debugging, and performing ad-hoc operations without the overhead of Java-based tools. kafkacat can read messages from stdin and write them to a Kafka topic, or consume messages from a topic and print them to stdout. It also provides robust capabilities for listing cluster metadata, including brokers, topics, and partitions, and supports extensive Kafka client configuration via the -X option.
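Because kafkacat reads stdin in producer mode and writes stdout in consumer mode, the two modes compose with ordinary shell pipes. For example, a sketch of copying every message from one topic to another (broker address and topic names are placeholders):

```shell
# Drain source_topic from the beginning, exiting at end of
# partition (-e), and feed each message into producer mode
# for dest_topic on the same cluster.
kafkacat -C -b localhost:9092 -t source_topic -o beginning -e | \
  kafkacat -P -b localhost:9092 -t dest_topic
```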
CAVEATS
kafkacat is efficient and lightweight, but it is not a full-fledged Kafka client for every use case. It offers no exactly-once semantics or transactional produces, and although newer versions provide a balanced consumer-group mode (-G), it is primarily designed for scripting-oriented interaction and debugging rather than as a replacement for robust application-level Kafka clients. High-volume operations can still stress network and disk resources.
COMMON USAGE PATTERNS
Produce a message:
echo 'Hello Kafka!' | kafkacat -P -b localhost:9092 -t my_topic
Consume from beginning of a topic:
kafkacat -C -b localhost:9092 -t my_topic -o beginning
List cluster metadata:
kafkacat -L -b localhost:9092
Produce a keyed JSON message with headers (the part before the first ':' becomes the key, the rest becomes the value):
echo 'id_123:{"data":"test"}' | kafkacat -P -b localhost:9092 -t json_topic -K ':' -H 'event=test_event' -H 'source=kafkacat'
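Combining -J with a JSON processor such as jq is a common debugging pattern. A sketch, assuming jq is installed and using placeholder broker and topic names:

```shell
# Consume as JSON envelopes and print just "offset: payload"
# for each message; the envelope field names (.offset, .payload)
# are those produced by -J.
kafkacat -C -b localhost:9092 -t my_topic -J -o beginning -e | \
  jq -r '"\(.offset): \(.payload)"'
```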
LIBRDKAFKA CONFIGURATION OPTIONS
The -X option is exceptionally powerful, allowing users to configure nearly any aspect of the underlying librdkafka client. This includes security settings (SSL, SASL), client IDs, request timeouts, and more. For a comprehensive list of configurable properties, refer to the official librdkafka documentation or the kafkacat man page. Example: -X 'sasl.mechanisms=PLAIN' -X 'sasl.username=user' -X 'sasl.password=pass'.
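For instance, connecting to a TLS- and SASL-protected cluster might look like the following sketch (the broker address, credentials, and CA path are placeholders; the property names are standard librdkafka settings):

```shell
# List cluster metadata over SASL_SSL. Each -X sets one
# librdkafka property for this invocation.
kafkacat -L \
  -b broker.example.com:9093 \
  -X security.protocol=SASL_SSL \
  -X sasl.mechanisms=PLAIN \
  -X sasl.username=user \
  -X sasl.password=pass \
  -X ssl.ca.location=/etc/ssl/certs/ca-bundle.crt
```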
HISTORY
kafkacat was created by Magnus Edenhill, the lead developer of librdkafka (the C/C++ client for Apache Kafka). It was developed to provide a simple, robust, and fast command-line interface to Kafka without the need for the Java Virtual Machine (JVM). Its first public release was around 2014, and in 2021 the project was renamed kcat, although the kafkacat name remains in wide use. It has since become a widely adopted tool in the Kafka ecosystem for quick data manipulation, debugging, and integration into shell scripts.