[MQ] Kafka

Overview: Kafka is a distributed message queue (event streaming platform).

Installation Guide

Installation on Windows

Step 1: Install the JDK

  • After installing the JDK:
  • Under "System variables", find JAVA_HOME (create it if it does not exist) and set its value to the JDK installation path (e.g. C:\Program Files\Java\jdk-17).
  • Edit the Path variable and add %JAVA_HOME%\bin.

Verify the installation:

shell
java --version

Step 2: Download the Kafka package

  • Download the package from:

https://kafka.apache.org/downloads
https://www.apache.org/dyn/closer.cgi?path=/kafka/2.3.0/kafka_2.12-2.3.0.tgz

shell
wget https://archive.apache.org/dist/kafka/2.3.0/kafka_2.12-2.3.0.tgz
# wget https://dlcdn.apache.org/kafka/3.8.1/kafka_2.12-3.8.1.tgz
  • Extract it to a directory of your choice

For example:

txt
D:\Program\kafka
D:\Program\kafka\kafka_2.12-2.3.0
  ├── bin
  ├── config
  └── libs

Step 3: Configure Kafka

  • Edit the configuration file config/server.properties
  • Set log.dirs to the directory where Kafka stores its data (log segments), for example:
properties
log.dirs=/tmp/kafka-logs
#log.dirs=D:/Program/kafka/logs

The author keeps the original (default) path here.
Make sure zookeeper.connect points to the Zookeeper address; the default is localhost:2181.

Step 4: Start Zookeeper

  • Kafka (in this version) depends on Zookeeper, so Zookeeper must be started first.

Run the following in the Kafka directory:

shell
D:
cd D:\Program\kafka\kafka_2.12-2.3.0

.\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties

Sample log output:

log
D:\Program\kafka\kafka_2.12-2.3.0>.\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties
[2025-02-18 17:04:56,046] INFO Reading configuration from: .\config\zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2025-02-18 17:04:56,050] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2025-02-18 17:04:56,050] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2025-02-18 17:04:56,050] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2025-02-18 17:04:56,051] WARN Either no config or no quorum defined in config, running  in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2025-02-18 17:04:56,071] INFO Reading configuration from: .\config\zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2025-02-18 17:04:56,072] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2025-02-18 17:04:56,086] INFO Server environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT (org.apache.zookeeper.server.ZooKeeperServer)
[2025-02-18 17:04:56,086] INFO Server environment:host.name=111111.xxxx.com (org.apache.zookeeper.server.ZooKeeperServer)
[2025-02-18 17:04:56,086] INFO Server environment:java.version=1.8.0_261 (org.apache.zookeeper.server.ZooKeeperServer)
[2025-02-18 17:04:56,086] INFO Server environment:java.vendor=Oracle Corporation (org.apache.zookeeper.server.ZooKeeperServer)
[2025-02-18 17:04:56,086] INFO Server environment:java.home=D:\Program\Java\jdk1.8.0_261\jre (org.apache.zookeeper.server.ZooKeeperServer)
[2025-02-18 17:04:56,086] INFO Server environment:java.class.path=... (full classpath omitted) (org.apache.zookeeper.server.ZooKeeperServer)
[2025-02-18 17:04:56,088] INFO Server environment:java.library.path=... (full path omitted) (org.apache.zookeeper.server.ZooKeeperServer)
[2025-02-18 17:04:56,089] INFO Server environment:java.io.tmpdir=C:\Users\111111\AppData\Local\Temp\ (org.apache.zookeeper.server.ZooKeeperServer)
[2025-02-18 17:04:56,089] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer)
[2025-02-18 17:04:56,089] INFO Server environment:os.name=Windows 10 (org.apache.zookeeper.server.ZooKeeperServer)
[2025-02-18 17:04:56,089] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
[2025-02-18 17:04:56,089] INFO Server environment:os.version=10.0 (org.apache.zookeeper.server.ZooKeeperServer)
[2025-02-18 17:04:56,089] INFO Server environment:user.name=111111 (org.apache.zookeeper.server.ZooKeeperServer)
[2025-02-18 17:04:56,089] INFO Server environment:user.home=C:\Users\111111 (org.apache.zookeeper.server.ZooKeeperServer)
[2025-02-18 17:04:56,090] INFO Server environment:user.dir=D:\Program\kafka\kafka_2.12-2.3.0 (org.apache.zookeeper.server.ZooKeeperServer)
[2025-02-18 17:04:56,110] INFO tickTime set to 3000 (org.apache.zookeeper.server.ZooKeeperServer)
[2025-02-18 17:04:56,110] INFO minSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
[2025-02-18 17:04:56,110] INFO maxSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
[2025-02-18 17:04:56,141] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
[2025-02-18 17:04:56,146] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)

Step 5: Start Kafka

  • Open a new command-prompt window, change to the Kafka directory, and start Kafka (a quick client-side check is sketched after the command):
shell
D:
cd D:\Program\kafka\kafka_2.12-2.3.0
.\bin\windows\kafka-server-start.bat .\config\server.properties
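
Once the broker is up, you can confirm it is reachable from a client. The following is a minimal sketch (not part of the original guide; the class name is illustrative) using the Kafka AdminClient against the default localhost:9092 listener:

java
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class BrokerCheckDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Default listener of the single local broker started above.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // With the single-node setup this should print one node (id 0).
            System.out.println("Brokers: " + admin.describeCluster().nodes().get());
        }
    }
}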

Step 6: Create a topic

  • Open another command-prompt window, change to the Kafka directory, and create a topic (a programmatic equivalent using the AdminClient is sketched after the command):
shell
D:
cd D:\Program\kafka\kafka_2.12-2.3.0

.\bin\windows\kafka-topics.bat --create --topic flink_monitor_test --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
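
Topics can also be created programmatically. This is a minimal sketch, assuming the same broker address and topic name as the command above (the class name is illustrative):

java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 1 partition, replication factor 1 -- same as the .bat example.
            NewTopic topic = new NewTopic("flink_monitor_test", 1, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get();
            // List the topics to confirm creation.
            System.out.println(admin.listTopics().names().get());
        }
    }
}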

Step 7: Produce and consume messages

  • Start a console producer
shell
D:
cd D:\Program\kafka\kafka_2.12-2.3.0

.\bin\windows\kafka-console-producer.bat --topic flink_monitor_test --broker-list localhost:9092
> hello
> nihao
  • Start a console consumer
shell
D:
cd D:\Program\kafka\kafka_2.12-2.3.0

.\bin\windows\kafka-console-consumer.bat --topic flink_monitor_test --bootstrap-server localhost:9092 --from-beginning
hello
nihao

The messages shown in this window were sent by the producer.
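
The same produce/consume round trip can be done from Java with the kafka-clients library. This is a minimal sketch (not from the original guide; the class name is illustrative, and the dependency version should match the broker line installed above, e.g. 2.3.0):

java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProduceConsumeDemo {
    private static final String TOPIC = "flink_monitor_test";
    private static final String BROKER = "localhost:9092";

    public static void main(String[] args) {
        // Producer: send the same two messages as the console example.
        Properties p = new Properties();
        p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BROKER);
        p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            producer.send(new ProducerRecord<>(TOPIC, "hello"));
            producer.send(new ProducerRecord<>(TOPIC, "nihao"));
            producer.flush();
        }

        // Consumer: read from the beginning, like --from-beginning.
        Properties c = new Properties();
        c.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BROKER);
        c.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        c.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        c.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        c.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
            consumer.subscribe(Collections.singletonList(TOPIC));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}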

  • Stop Kafka and Zookeeper

Press Ctrl+C in each window to stop Kafka and Zookeeper.

Step X: Common commands

List all topics

shell
> D:
> cd D:\Program\kafka\kafka_2.12-2.3.0

> .\bin\windows\kafka-topics.bat --list --bootstrap-server localhost:9092
__consumer_offsets
flink_monitor_test


Or:
> .\bin\windows\kafka-topics.bat --list --zookeeper localhost:2181
__consumer_offsets
flink_monitor_test

View details of a specific topic

  • To view detailed information about a specific topic (partition count, replication factor, etc.):
shell
> D:
> cd D:\Program\kafka\kafka_2.12-2.3.0

> .\bin\windows\kafka-topics.bat --describe --topic <your-topic> --bootstrap-server localhost:9092
Topic:flink_monitor_test        PartitionCount:1        ReplicationFactor:1     Configs:segment.bytes=1073741824
        Topic: flink_monitor_test       Partition: 0    Leader: 0       Replicas: 0     Isr: 0

List all consumer groups

shell
D:
cd D:\Program\kafka\kafka_2.12-2.3.0

.\bin\windows\kafka-consumer-groups.bat --bootstrap-server 127.0.0.1:9092 --list

View details of a specific consumer group

shell
D:
cd D:\Program\kafka\kafka_2.12-2.3.0

.\bin\windows\kafka-consumer-groups.bat --bootstrap-server 127.0.0.1:9092 --group xxx --describe

Delete a consumer group

shell
D:
cd D:\Program\kafka\kafka_2.12-2.3.0

.\bin\windows\kafka-consumer-groups.bat --bootstrap-server 127.0.0.1:9092 --delete --group group_1
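
The consumer-group commands above also have AdminClient equivalents. This is a minimal sketch (group names xxx and group_1 are placeholders, as in the commands; the class name is illustrative):

java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupListing;

public class ConsumerGroupDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Equivalent of kafka-consumer-groups.bat --list
            for (ConsumerGroupListing g : admin.listConsumerGroups().all().get()) {
                System.out.println(g.groupId());
            }
            // Equivalent of --describe --group xxx (placeholder group id)
            System.out.println(admin.describeConsumerGroups(Collections.singleton("xxx")).all().get());
            // Equivalent of --delete --group group_1 (uncomment to actually delete)
            // admin.deleteConsumerGroups(Collections.singleton("group_1")).all().get();
        }
    }
}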

FAQ for Kafka

Q: What error messages appear when a client pushes messages to a topic while Kafka is not running?

[NetworkClient] processDisconnection:763 || [Producer clientId=producer-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.

log
...
Connected to the target VM, address: '127.0.0.1:65221', transport: 'socket'
[TID: N/A] [xxx-app-test] [system] [2025/02/19 20:52:10.270] [WARN ] [kafka-producer-network-thread | producer-1] [NetworkClient] processDisconnection:763 || [Producer clientId=producer-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
[TID: N/A] [xxx-app-test] [system] [2025/02/19 20:52:10.281] [INFO ] [main] [LogTest] main:25 || 这是一条信息日志
ERROR StatusConsoleListener Failed to send log message to Kafka!jsonMessage:{"timestamp":1739969530281,"level":"INFO","logger":"com.xxx.app.entry.LogTest","message":"这是一条信息日志","threadName":"main","errorMessage":null,"contextData":{"serverIp":"xx.xx.xx.xx","applicationName":"TestApp"}}
ERROR StatusConsoleListener Failed to send log message to Kafka!jsonMessage:{"timestamp":1739969530270,"level":"WARN","logger":"org.apache.kafka.clients.NetworkClient","message":"[Producer clientId=producer-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.","threadName":"kafka-producer-network-thread | producer-1","errorMessage":null,"contextData":{}}
  java.lang.RuntimeException: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Topic flink_monitor_log_test not present in metadata after 3000 ms.
	at com.xxx.app.entry.KafkaAppender.sendToKafka(KafkaAppender.java:219)
	at com.xxx.app.entry.KafkaAppender.append(KafkaAppender.java:168)
	at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:161)
	at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:134)
	at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:125)
	at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:89)
	at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:683)
	at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:641)
	at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:624)
	at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:560)
	at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:82)
	at org.apache.logging.log4j.core.Logger.log(Logger.java:163)
	at org.apache.logging.log4j.spi.AbstractLogger.tryLogMessage(AbstractLogger.java:2168)
	at org.apache.logging.log4j.spi.AbstractLogger.logMessageTrackRecursion(AbstractLogger.java:2122)
	at org.apache.logging.log4j.spi.AbstractLogger.logMessageSafely(AbstractLogger.java:2105)
	at org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:1985)
	at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1838)
	at org.apache.logging.slf4j.Log4jLogger.info(Log4jLogger.java:180)
	at com.xxx.app.entry.LogTest.main(LogTest.java:25)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Topic flink_monitor_log_test not present in metadata after 3000 ms.
	at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1307)
	at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:962)
	at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:862)
	at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:750)
	at com.xxx.app.entry.KafkaAppender.sendToKafka(KafkaAppender.java:206)
	... 18 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Topic flink_monitor_log_test not present in metadata after 3000 ms.
java.lang.RuntimeException: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Topic flink_monitor_log_test not present in metadata after 3000 ms.
	at com.xxx.app.entry.KafkaAppender.sendToKafka(KafkaAppender.java:219)
	at com.xxx.app.entry.KafkaAppender.append(KafkaAppender.java:168)
	at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:161)
	at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:134)
	at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:125)
	at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:89)
	at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:683)
	at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:641)
	at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:624)
	at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:531)
	at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:63)
	at org.apache.logging.log4j.core.Logger.logMessage(Logger.java:155)
	at org.apache.logging.slf4j.Log4jLogger.log(Log4jLogger.java:378)
	at org.apache.kafka.common.utils.LogContext$LocationAwareKafkaLogger.writeLog(LogContext.java:434)
	at org.apache.kafka.common.utils.LogContext$LocationAwareKafkaLogger.warn(LogContext.java:287)
	at org.apache.kafka.clients.NetworkClient.processDisconnection(NetworkClient.java:763)
	at org.apache.kafka.clients.NetworkClient.handleDisconnections(NetworkClient.java:899)
	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
	at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:324)
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Topic flink_monitor_log_test not present in metadata after 3000 ms.
	at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1307)
	at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:962)
	at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:862)
	at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:750)
	at com.xxx.app.entry.KafkaAppender.sendToKafka(KafkaAppender.java:206)
	... 20 more
...
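
In this situation each send blocks while waiting for topic metadata until max.block.ms expires and then fails with the TimeoutException shown above. The sketch below is a hedged example of tightening the relevant producer timeouts so the failure surfaces quickly; the values and class name are illustrative and not taken from the original KafkaAppender:

java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class FailFastProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // How long send() may wait for topic metadata; a small value makes
        // the "not present in metadata after ... ms" failure surface quickly.
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 3000);
        // Bound request and total delivery time so a down broker fails fast.
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 3000);
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 5000);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("flink_monitor_log_test", "ping"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            // With the broker down this is a TimeoutException,
                            // as in the stack trace above.
                            System.err.println("Send failed: " + exception);
                        } else {
                            System.out.println("Sent to partition " + metadata.partition());
                        }
                    });
            producer.flush();
        }
    }
}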

Q: Which Kafka GUI clients are available?

1. Offset Explorer (formerly Kafka Tool)

  • Features: browse topics, partitions, consumer groups, and message contents in a Kafka cluster through an intuitive UI.
  • Highlights
    • Can connect via Zookeeper or directly to Kafka brokers.
    • Shows message offsets, partition details, and more.
    • Supports consuming messages in real time.
  • Download: the Offset Explorer (Kafka Tool) official site.

2. Kafka King [open source / recommended]

  • Features: an open-source Kafka GUI client supporting topic management, consumer-group monitoring, message-lag statistics, and more.
  • Highlights
    • Built with Python and the Flet framework; the UI is modern and practical.
    • Supports batch operations such as creating and deleting topics.
    • Free and open source; a good fit for developers and operations staff.
  • Download: GitHub or Gitee.


User interface

  • Clusters
  • Brokers (nodes)
  • Topics
  • Producers
  • Consumers
  • Consumer groups
  • Inspection (health check)

3. PrettyZoo

  • Features: primarily a Zookeeper management tool; since this Kafka setup depends on Zookeeper, it can also be used to manage Kafka indirectly.
  • Highlights
    • Polished graphical interface built on Apache Curator and JavaFX.
    • Can browse the Kafka metadata stored in Zookeeper.
    • Simple to use; suited to users who need to debug Kafka and Zookeeper in depth.
  • Download: the PrettyZoo official site.

4. Kafka Assistant [free / easy to use / recommended]

  • Features: Kafka cluster management, topic management, message browsing, and more.
  • Highlights
    • Runs on the local machine; well suited to local development and testing.
    • A paid edition with more features is also available.
  • Download: the Kafka Assistant official site.

User interface

  • Login / home page
  • Brokers
  • Topics
  • Groups
  • ACLs
  • Streams

5. Conduktor

  • Features: an enterprise-grade Kafka GUI supporting cluster management, topic management, message production and consumption, and more.
  • Highlights
    • Rich monitoring and debugging features.
    • Multi-cluster management, suitable for production environments.
    • Available in free and paid editions.
  • Download: the Conduktor official site.

6. Kafdrop

  • Features: a web-based Kafka GUI for viewing topics, partitions, consumer groups, and more.
  • Highlights
    • Free and open source; can be deployed locally or on a server.
    • Simple web interface that is easy to use.
  • Download: GitHub.

Summary

Each of these tools has its strengths; choose a Kafka GUI client based on your needs:

  • For a simple, easy-to-use tool: Offset Explorer or Kafka King.
  • For deep Zookeeper debugging: PrettyZoo.
  • For enterprise-grade features: Conduktor or Kafka Assistant.
