Success! kcat is now built

Usage: ./kcat <options> [file1 file2 .. | topic1 topic2 ..]

kcat - Apache Kafka producer and consumer tool

https://github.com/edenhill/kcat

Copyright (c) 2014-2021, Magnus Edenhill

Version 1.7.0 (JSON, Avro, Transactions, IncrementalAssign, JSONVerbatim, librdkafka 1.7.0 builtin.features=snappy,ssl,sasl,regex,lz4,sasl_plain,sasl_scram,plugins,sasl_oauthbearer)

General options:
  -C | -P | -L | -Q  Mode: Consume, Produce, Metadata List, Query mode
  -G <group-id>      Mode: High-level KafkaConsumer (Kafka >=0.9 balanced
                     consumer groups). Expects a list of topics to subscribe to
  -t <topic>         Topic to consume from, produce to, or list
  -p <partition>     Partition
  -b <brokers,..>    Bootstrap broker(s) (host[:port])
  -D <delim>         Message delimiter string:
                     a-z | \r | \n | \t | \xNN ..
                     Default: \n
  -K <delim>         Key delimiter (same format as -D)
  -c <cnt>           Limit message count
  -m <seconds>       Metadata (etc.) request timeout. This limits how long
                     kcat will block while waiting for the initial metadata to
                     be retrieved from the Kafka cluster. It also sets the
                     timeout for the producer's transaction commits, init,
                     aborts, etc. Default: 5 seconds.
  -F <config-file>   Read configuration properties from a file in
                     "property=value" format. The KCAT_CONFIG=path environment
                     variable can also be used, but -F takes precedence. The
                     default configuration file is $HOME/.config/kcat.conf
                     (see the example after this list).
  -X list            List available librdkafka configuration properties
  -X prop=val        Set a librdkafka configuration property. Properties
                     prefixed with "topic." are applied as topic properties.
  -X schema.registry.prop=val
                     Set a libserdes configuration property for the
                     Avro/Schema-Registry client.
  -X dump            Dump configuration and exit.
  -d <dbg1,...>      Enable librdkafka debugging:
                     all,generic,broker,topic,metadata,feature,queue,msg,protocol,cgrp,security,fetch,interceptor,plugin,consumer,admin,eos,mock,assignor,conf
  -q                 Be quiet (verbosity set to 0)
  -v                 Increase verbosity
  -E                 Do not exit on non-fatal error
  -V                 Print version
  -h                 Print usage help
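
For example, a minimal -F configuration file might look like the following sketch. The property names are standard librdkafka properties; the broker address and credentials are placeholders for illustration:

  $ cat $HOME/.config/kcat.conf
  bootstrap.servers=localhost:9092
  security.protocol=SASL_SSL
  sasl.mechanisms=PLAIN
  sasl.username=myuser
  sasl.password=mypassword

The same properties can instead be passed inline, e.g. -X security.protocol=SASL_SSL -X sasl.mechanisms=PLAIN, which is convenient for one-off runs.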

Producer options:
  -z snappy|gzip|lz4 Message compression. Default: none
  -p -1              Use the random partitioner
  -D <delim>         Delimiter to split input into messages
  -K <delim>         Delimiter to split input key and message
  -k <str>           Use a fixed key for all messages. If combined with -K,
                     per-message keys take precedence.
  -H <header=value>  Add message headers (may be specified multiple times)
  -l                 Send messages from a file separated by delimiter, as
                     with stdin (only one file allowed)
  -T                 Output sent messages to stdout, acting like tee
  -c <cnt>           Exit after producing this number of messages
  -Z                 Send empty messages as NULL messages
  file1 file2..      Read messages from files. With -l, only one file is
                     permitted; otherwise the entire file contents will be
                     sent as one single message.
  -X transactional.id=..
                     Enable transactions and send all messages in a single
                     transaction, which is committed when stdin is closed or
                     the input file(s) are fully read. If kcat is terminated
                     through Ctrl-C (etc.) the transaction will be aborted
                     (see the example after this list).
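
As a sketch of the transactional produce described above (the broker address, topic name, and transactional id are placeholders):

  $ echo "user1:hello" | kcat -P -b localhost:9092 -t mytopic -K: -X transactional.id=kcat-demo

Here -K: splits each stdin line into key and value at the ":". The transaction commits when stdin closes; interrupting kcat with Ctrl-C aborts it, so consumers reading with isolation.level=read_committed never see the message.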

Consumer options:
  -o <offset>        Offset to start consuming from:
                     beginning | end | stored |
                     <value>   (absolute offset) |
                     -<value>  (relative offset from end)
                     s@<value> (timestamp in ms to start at)
                     e@<value> (timestamp in ms to stop at (not included))
  -e                 Exit successfully when the last message is received
  -f <fmt..>         Output formatting string, see below.
                     Takes precedence over -D and -K.
  -J                 Output with JSON envelope
  -s key=<serdes>    Deserialize non-NULL keys using <serdes>
  -s value=<serdes>  Deserialize non-NULL values using <serdes>
  -s <serdes>        Deserialize non-NULL keys and values using <serdes>
                     (see the example after this list).
                     Available deserializers (<serdes>):
                       <pack-str> - a combination of:
                         <: little-endian
                         >: big-endian (recommended)
                         b: signed 8-bit integer
                         B: unsigned 8-bit integer
                         h: signed 16-bit integer
                         H: unsigned 16-bit integer
                         i: signed 32-bit integer
                         I: unsigned 32-bit integer
                         q: signed 64-bit integer
                         Q: unsigned 64-bit integer
                         c: ASCII character
                         s: remaining data is string
                         $: match end-of-input (a parse error is raised if
                            any bytes remain). Omitting this token skips any
                            data remaining after the pack-str is exhausted.
                       avro - Avro-formatted with schema in Schema-Registry
                              (requires -r)
                     E.g.: -s key=i -s value=avro - key is a 32-bit integer,
                           value is Avro
                     or:   -s avro - both key and value are Avro-serialized
  -r <url>           Schema registry URL (when the avro deserializer is used
                     with -s)
  -D <delim>         Delimiter to separate messages on output
  -K <delim>         Print message keys, prefixing the message with the
                     specified delimiter
  -O                 Print message offset using the -K delimiter
  -c <cnt>           Exit after consuming this number of messages
  -Z                 Print NULL values and keys as "NULL" instead of empty.
                     For JSON (-J) the null string is always null.
  -u                 Unbuffered output
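
For instance, to consume the last five messages of a topic and decode each value as a big-endian 32-bit signed integer (broker and topic are placeholders; the pack-str is quoted so the shell does not treat ">" as a redirection):

  $ kcat -C -b localhost:9092 -t mytopic -o -5 -e -s value='>i$'

The trailing "$" in the pack-str makes kcat raise a deserialization error if a value carries any bytes beyond the 4-byte integer.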

Metadata options (-L):
  -t <topic>         Topic to query (optional)

Query options (-Q):
  -t <t>:<p>:<ts>    Get the offset for topic <t>, partition <p>, at
                     timestamp <ts>. The timestamp is given in milliseconds
                     since epoch UTC. Requires broker >= 0.10.0.0 and
                     librdkafka >= 0.9.3. Multiple -t .. are allowed, but a
                     partition must only occur once.
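
As a concrete sketch (broker and topic are placeholders), this looks up the first offset at or after 2021-01-01 00:00:00 UTC, i.e. 1609459200000 ms since epoch, in partition 0:

  $ kcat -Q -b localhost:9092 -t mytopic:0:1609459200000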

Format string tokens:
  %s                 Message payload
  %S                 Message payload length (or -1 for NULL)
  %R                 Message payload length (or -1 for NULL) serialized as a
                     binary big-endian 32-bit signed integer
  %k                 Message key
  %K                 Message key length (or -1 for NULL)
  %T                 Message timestamp (milliseconds since epoch UTC)
  %h                 Message headers (n=v CSV)
  %t                 Topic
  %p                 Partition
  %o                 Message offset
  \n \r \t           Newline, carriage return, tab
  \xXX \xNNN         Any ASCII character

Example:
  -f 'Topic %t [%p] at offset %o: key %k: %s\n'
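
A full command using that format string might look like this (broker and topic are placeholders):

  $ kcat -C -b localhost:9092 -t mytopic -e -f 'Topic %t [%p] at offset %o: key %k: %s\n'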

JSON message envelope (on one line) when consuming with -J:
  { "topic": str, "partition": int, "offset": int,
    "tstype": "create|logappend|unknown",
    "ts": int,                                   // timestamp in milliseconds since epoch
    "broker": int,
    "headers": { "<name>": str, .. },            // optional
    "key": str|json, "payload": str|json,
    "key_error": str, "payload_error": str,      // optional
    "key_schema_id": int, "value_schema_id": int // optional
  }

Notes:
  • key_error and payload_error are only included if deserialization fails.
  • key_schema_id and value_schema_id are included for successfully deserialized Avro messages.
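
Because each envelope is one JSON object per line, the output pipes cleanly into jq (assuming jq is installed; broker and topic are placeholders):

  $ kcat -C -b localhost:9092 -t mytopic -J -e | jq -r '.payload'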

Consumer mode (writes messages to stdout):
  kcat -b <broker> -t <topic> -p <partition>
or:
  kcat -C -b ...

High-level KafkaConsumer mode:
  kcat -b <broker> -G <group-id> topic1 top2 ^aregex\d+

Producer mode (reads messages from stdin):
  ... | kcat -b <broker> -t <topic> -p <partition>
or:
  kcat -P -b ...

Metadata listing:
  kcat -L -b <broker> [-t <topic>]

Query offset by timestamp:
  kcat -Q -b <broker> -t <topic>:<partition>:<timestamp>
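
Since the modes read and write plain streams, they also compose over a pipe. For example, a rough one-shot copy of a topic between two clusters (both broker addresses and topic names are placeholders):

  $ kcat -C -b src-broker:9092 -t src-topic -e | kcat -P -b dst-broker:9092 -t dst-topic

Note that this sketch copies payloads only; to preserve keys you would also need to emit them on the consumer side (e.g. with -f '%k:%s\n') and split them again on the producer side with -K:.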
