[Kafka Basics] Highly Available Kafka Clusters: A Detailed Deployment Guide for Versions Below 2.8

Apache Kafka is a distributed streaming platform widely used to build real-time data pipelines and streaming applications. This article walks through deploying a pre-2.8 Kafka version (2.7.1 here) on a three-node cluster.

1 Environment Preparation

1.1 Server Information

| Hostname | IP Address    |
|----------|---------------|
| node4    | 192.168.10.33 |
| node5    | 192.168.10.34 |
| node6    | 192.168.10.35 |

1.2 System Information

  • OS: Linux (CentOS 7 in this guide)
  • Java: JDK 1.8 or later (the transcript below uses JDK 11)
  • Disk: at least 50 GB recommended (adjust to actual needs)
  • Memory: at least 4 GB recommended (adjust to actual load)
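A quick pre-flight check can catch undersized machines before installation. This is just a sketch: the 4096 MB and 50 GB thresholds mirror the recommendations above, and `free`/`df` (GNU coreutils) are assumed to be available, which is standard on CentOS 7:

```shell
# Warn when a measured value is below a recommended minimum.
check_min() {  # usage: check_min <actual> <minimum> <label>
  if [ "$1" -lt "$2" ]; then
    echo "WARN: $3 below recommended minimum ($1 < $2)"
  else
    echo "OK: $3 ($1 >= $2)"
  fi
}

mem_mb=$(free -m | awk '/^Mem:/ {print $2}')                  # total memory in MB
check_min "$mem_mb" 4096 "memory_mb"

disk_gb=$(df -BG --output=avail / | tail -n1 | tr -dc '0-9')  # free GB on /
check_min "$disk_gb" 50 "root_disk_free_gb"
```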

2 Deployment Steps

2.1 Base System Configuration

# Run the following on all nodes
# Stop the firewall (in production, configure security-group rules instead)
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

# Configure hostname resolution
cat >> /etc/hosts <<EOF
192.168.10.33 node4
192.168.10.34 node5
192.168.10.35 node6
EOF

# Verify the Java environment (note: `java --version` requires JDK 9+;
# on JDK 8 use `java -version`)
[root@node4 home]# java --version
java 11.0.25 2024-10-15 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.25+9-LTS-256)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.25+9-LTS-256, mixed mode)

2.2 Download and Unpack the Packages

Kafka download: https://archive.apache.org/dist/kafka/2.7.1/
ZooKeeper download: https://archive.apache.org/dist/zookeeper/

# Run on all nodes
# Create the installation directory
mkdir -p /export/home/kafka_zk
# Upload the packages to /export/home, then unpack
cd /export/home
tar -zxvf kafka_2.13-2.7.1.tgz -C kafka_zk/
tar -zxvf apache-zookeeper-3.6.3-bin.tar.gz -C kafka_zk/

# Create data and log directories
mkdir -p /export/home/kafka_zk/apache-zookeeper-3.6.3-bin/{data,logs}
mkdir -p /export/home/kafka_zk/kafka_2.13-2.7.1/logs

2.3 Configure ZooKeeper

Kafka versions below 2.8 depend on ZooKeeper for cluster coordination:

# Run on all nodes, from the ZooKeeper installation directory
cd /export/home/kafka_zk/apache-zookeeper-3.6.3-bin
# The tarball ships conf/zoo_sample.cfg only; back up conf/zoo.cfg if one
# already exists, then write the configuration below
[ -f conf/zoo.cfg ] && cp conf/zoo.cfg conf/zoo.cfg_bak
cat >conf/zoo.cfg<<EOF
# Basic parameters
tickTime=2000
initLimit=15
syncLimit=5
maxClientCnxns=100

# Data storage
dataDir=/export/home/kafka_zk/apache-zookeeper-3.6.3-bin/data
dataLogDir=/export/home/kafka_zk/apache-zookeeper-3.6.3-bin/logs
# Recommended: keep data and transaction logs on separate disks:
# dataDir=/data1/zookeeper/data
# dataLogDir=/data2/zookeeper/logs

# Network
clientPort=2181
clientPortAddress=0.0.0.0
admin.serverPort=8080
admin.enableServer=true

# Cluster members (three nodes)
server.1=192.168.10.33:2888:3888
server.2=192.168.10.34:2888:3888
server.3=192.168.10.35:2888:3888
# 2888 carries follower-to-leader data sync; 3888 carries leader election

# Tuning
autopurge.snapRetainCount=5
autopurge.purgeInterval=48
preAllocSize=65536
snapCount=100000
maxSessionTimeout=60000
minSessionTimeout=4000

# Leader election
electionAlg=3
standaloneEnabled=false
reconfigEnabled=false

# Network tuning
leaderServes=no
syncEnabled=true
forceSync=yes
globalOutstandingLimit=1000
EOF
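The timing settings above translate into concrete windows: `initLimit` and `syncLimit` are counted in ticks of `tickTime` milliseconds. A quick sanity check of what this configuration allows:

```shell
# tickTime is the base unit (ms); initLimit and syncLimit are counted in ticks.
tickTime=2000; initLimit=15; syncLimit=5
echo "followers get $(( tickTime * initLimit / 1000 ))s to connect and sync at startup"
echo "followers may lag at most $(( tickTime * syncLimit / 1000 ))s before being dropped"
```

With `tickTime=2000`, that is a 30-second startup window and a 10-second sync window, which is comfortable for a LAN-local three-node ensemble.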

2.4 Create the ZooKeeper myid Files

2.4.1 Node 1 (node4)

echo "1" > /export/home/kafka_zk/apache-zookeeper-3.6.3-bin/data/myid

2.4.2 Node 2 (node5)

echo "2" > /export/home/kafka_zk/apache-zookeeper-3.6.3-bin/data/myid

2.4.3 Node 3 (node6)

echo "3" > /export/home/kafka_zk/apache-zookeeper-3.6.3-bin/data/myid
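Since the hostnames here follow a fixed node4/node5/node6 pattern, the myid can also be derived from the hostname instead of being typed on each node. A small sketch that assumes exactly that naming convention (adjust the offset for other schemes):

```shell
# Map node4 -> 1, node5 -> 2, node6 -> 3 (the server.N id in zoo.cfg is the
# trailing hostname digit minus an offset of 3).
myid_for_host() {
  n=$(echo "$1" | grep -o '[0-9]*$')   # trailing digits of the hostname
  echo $(( n - 3 ))
}

# On each node, a single command then writes the right file:
# myid_for_host "$(hostname -s)" > /export/home/kafka_zk/apache-zookeeper-3.6.3-bin/data/myid
myid_for_host node4   # prints 1
```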

2.5 Configure Kafka

2.5.1 Node 1 (node4)

# Run from the Kafka installation directory: back up config/server.properties,
# then write the configuration below
cd /export/home/kafka_zk/kafka_2.13-2.7.1
cp config/server.properties config/server.properties_bak
cat >config/server.properties<<EOF
# Basic settings
broker.id=1
listeners=PLAINTEXT://192.168.10.33:9092
advertised.listeners=PLAINTEXT://192.168.10.33:9092
log.dirs=/export/home/kafka_zk/kafka_2.13-2.7.1/logs
num.partitions=8
default.replication.factor=3
min.insync.replicas=2
zookeeper.connect=192.168.10.33:2181,192.168.10.34:2181,192.168.10.35:2181

# Network
num.network.threads=8
num.io.threads=16
socket.send.buffer.bytes=1024000
socket.receive.buffer.bytes=1024000
socket.request.max.bytes=104857600

# Log management
log.segment.bytes=1073741824
log.retention.hours=168
log.retention.check.interval.ms=300000
log.cleanup.policy=delete
log.roll.jitter.ms=30000

# Replication and ISR management
unclean.leader.election.enable=false
replica.lag.time.max.ms=30000
# Kept in line with message.max.bytes (10 MB) so large messages replicate smoothly
replica.fetch.max.bytes=10485760
replica.fetch.wait.max.ms=500

# Producer/consumer defaults
message.max.bytes=10485760
fetch.max.bytes=10485760
# (max.in.flight.requests.per.connection is a producer client setting, not a
# broker setting, so it is not listed here)
compression.type=producer

# Controlled shutdown
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000

# Monitoring (these keys are read only if Confluent's metrics reporter jar is
# on the classpath; they must point at the Kafka brokers, not at ZooKeeper)
confluent.metrics.reporter.bootstrap.servers=192.168.10.33:9092,192.168.10.34:9092,192.168.10.35:9092
confluent.metrics.reporter.topic.replicas=3
EOF
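It's worth spelling out what `default.replication.factor=3` combined with `min.insync.replicas=2` buys: a producer using `acks=all` keeps writing as long as no more than replication-factor minus min-ISR brokers are down. A one-line check:

```shell
# Failures an acks=all producer can tolerate without writes being rejected:
rf=3; min_isr=2
echo "tolerated broker failures: $(( rf - min_isr ))"
# With two brokers down the ISR shrinks below min.insync.replicas, and
# acks=all writes fail with NotEnoughReplicas until a broker returns.
```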

2.5.2 Node 2 (node5)

# Run from the Kafka installation directory: back up config/server.properties,
# then write the configuration below
cd /export/home/kafka_zk/kafka_2.13-2.7.1
cp config/server.properties config/server.properties_bak
cat >config/server.properties<<EOF
# Basic settings
broker.id=2
listeners=PLAINTEXT://192.168.10.34:9092
advertised.listeners=PLAINTEXT://192.168.10.34:9092
log.dirs=/export/home/kafka_zk/kafka_2.13-2.7.1/logs
num.partitions=8
default.replication.factor=3
min.insync.replicas=2
zookeeper.connect=192.168.10.33:2181,192.168.10.34:2181,192.168.10.35:2181

# Network
num.network.threads=8
num.io.threads=16
socket.send.buffer.bytes=1024000
socket.receive.buffer.bytes=1024000
socket.request.max.bytes=104857600

# Log management
log.segment.bytes=1073741824
log.retention.hours=168
log.retention.check.interval.ms=300000
log.cleanup.policy=delete
log.roll.jitter.ms=30000

# Replication and ISR management
unclean.leader.election.enable=false
replica.lag.time.max.ms=30000
# Kept in line with message.max.bytes (10 MB) so large messages replicate smoothly
replica.fetch.max.bytes=10485760
replica.fetch.wait.max.ms=500

# Producer/consumer defaults
message.max.bytes=10485760
fetch.max.bytes=10485760
# (max.in.flight.requests.per.connection is a producer client setting, not a
# broker setting, so it is not listed here)
compression.type=producer

# Controlled shutdown
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000

# Monitoring (these keys are read only if Confluent's metrics reporter jar is
# on the classpath; they must point at the Kafka brokers, not at ZooKeeper)
confluent.metrics.reporter.bootstrap.servers=192.168.10.33:9092,192.168.10.34:9092,192.168.10.35:9092
confluent.metrics.reporter.topic.replicas=3
EOF

2.5.3 Node 3 (node6)

# Run from the Kafka installation directory: back up config/server.properties,
# then write the configuration below
cd /export/home/kafka_zk/kafka_2.13-2.7.1
cp config/server.properties config/server.properties_bak
cat >config/server.properties<<EOF
# Basic settings
broker.id=3
listeners=PLAINTEXT://192.168.10.35:9092
advertised.listeners=PLAINTEXT://192.168.10.35:9092
log.dirs=/export/home/kafka_zk/kafka_2.13-2.7.1/logs
num.partitions=8
default.replication.factor=3
min.insync.replicas=2
zookeeper.connect=192.168.10.33:2181,192.168.10.34:2181,192.168.10.35:2181

# Network
num.network.threads=8
num.io.threads=16
socket.send.buffer.bytes=1024000
socket.receive.buffer.bytes=1024000
socket.request.max.bytes=104857600

# Log management
log.segment.bytes=1073741824
log.retention.hours=168
log.retention.check.interval.ms=300000
log.cleanup.policy=delete
log.roll.jitter.ms=30000

# Replication and ISR management
unclean.leader.election.enable=false
replica.lag.time.max.ms=30000
# Kept in line with message.max.bytes (10 MB) so large messages replicate smoothly
replica.fetch.max.bytes=10485760
replica.fetch.wait.max.ms=500

# Producer/consumer defaults
message.max.bytes=10485760
fetch.max.bytes=10485760
# (max.in.flight.requests.per.connection is a producer client setting, not a
# broker setting, so it is not listed here)
compression.type=producer

# Controlled shutdown
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000

# Monitoring (these keys are read only if Confluent's metrics reporter jar is
# on the classpath; they must point at the Kafka brokers, not at ZooKeeper)
confluent.metrics.reporter.bootstrap.servers=192.168.10.33:9092,192.168.10.34:9092,192.168.10.35:9092
confluent.metrics.reporter.topic.replicas=3
EOF
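The three server.properties files above differ only in `broker.id` and the listener IP, so they can be generated from one loop to avoid copies drifting apart. This sketch writes them to a staging directory and shows only the varying keys plus `zookeeper.connect` (paths and IPs are the ones used in this guide):

```shell
# Generate one properties fragment per broker; only broker.id and the
# listener address vary, everything else is shared.
outdir=$(mktemp -d)
i=1
for ip in 192.168.10.33 192.168.10.34 192.168.10.35; do
  cat > "$outdir/server.properties.broker$i" <<EOF
broker.id=$i
listeners=PLAINTEXT://$ip:9092
advertised.listeners=PLAINTEXT://$ip:9092
zookeeper.connect=192.168.10.33:2181,192.168.10.34:2181,192.168.10.35:2181
EOF
  i=$(( i + 1 ))
done
ls "$outdir"
```

Each generated fragment would then be merged with the shared settings and copied to the matching node.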

3 Starting the Services

3.1 Start the ZooKeeper Cluster

# Run on all nodes
/export/home/kafka_zk/apache-zookeeper-3.6.3-bin/bin/zkServer.sh start

3.1.1 Verify the ZooKeeper Cluster State

# Exactly one node should report Mode: leader; the others Mode: follower
for node in node4 node5 node6; do
  echo "=== $node ==="
  echo srvr | nc $node 2181 | grep -E "Mode|Zxid"
done

3.2 Start the Kafka Cluster

# Run on all nodes
/export/home/kafka_zk/kafka_2.13-2.7.1/bin/kafka-server-start.sh -daemon /export/home/kafka_zk/kafka_2.13-2.7.1/config/server.properties

3.2.1 Verify the Kafka Cluster State

# Run the following on any node to verify the cluster
# List the registered broker IDs
/export/home/kafka_zk/kafka_2.13-2.7.1/bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids

# Create a test topic
/export/home/kafka_zk/kafka_2.13-2.7.1/bin/kafka-topics.sh --create --topic test --partitions 3 --replication-factor 3 --bootstrap-server node4:9092

# Describe the topic
/export/home/kafka_zk/kafka_2.13-2.7.1/bin/kafka-topics.sh --describe --topic test --bootstrap-server node4:9092

# Produce messages
/export/home/kafka_zk/kafka_2.13-2.7.1/bin/kafka-console-producer.sh --topic test --bootstrap-server node4:9092

# Consume messages (in another terminal); with three partitions, ordering is
# only guaranteed within a partition, as the transcript below shows
/export/home/kafka_zk/kafka_2.13-2.7.1/bin/kafka-console-consumer.sh --topic test --from-beginning --bootstrap-server node4:9092

[root@node4 kafka_2.13-2.7.1]# /export/home/kafka_zk/kafka_2.13-2.7.1/bin/kafka-topics.sh --create --topic test --partitions 3 --replication-factor 3 --bootstrap-server node4:9092
Created topic test.
[root@node4 kafka_2.13-2.7.1]# /export/home/kafka_zk/kafka_2.13-2.7.1/bin/kafka-topics.sh --describe --topic test --bootstrap-server node4:9092
Topic: test     PartitionCount: 3       ReplicationFactor: 3    Configs: compression.type=producer,min.insync.replicas=2,segment.jitter.ms=30000,cleanup.policy=delete,segment.bytes=1073741824,max.message.bytes=10485760,unclean.leader.election.enable=false
        Topic: test     Partition: 0    Leader: 1       Replicas: 1,2,3 Isr: 1,2,3
        Topic: test     Partition: 1    Leader: 2       Replicas: 2,3,1 Isr: 2,3,1
        Topic: test     Partition: 2    Leader: 3       Replicas: 3,1,2 Isr: 3,1,2
[root@node4 kafka_2.13-2.7.1]# /export/home/kafka_zk/kafka_2.13-2.7.1/bin/kafka-console-producer.sh --topic test --bootstrap-server node4:9092
>test123
>test234
>test345
>test456
>^C[root@node4 kafka_2.13-2.7.1]# /export/home/kafka_zk/kafka_2.13-2.7.1/bin/kafka-console-consumer.sh --topic test --from-beginning --bootstrap-server node4:9092
test123
test345
test234
test456

4 Cluster Management

4.1 Common Commands

4.1.1 List All Topics

/export/home/kafka_zk/kafka_2.13-2.7.1/bin/kafka-topics.sh --list --bootstrap-server node4:9092

[root@node4 kafka_2.13-2.7.1]# /export/home/kafka_zk/kafka_2.13-2.7.1/bin/kafka-topics.sh --list --bootstrap-server node4:9092
__consumer_offsets
test
[root@node4 kafka_2.13-2.7.1]# 

4.1.2 Describe a Topic

/export/home/kafka_zk/kafka_2.13-2.7.1/bin/kafka-topics.sh --describe --topic test --bootstrap-server node4:9092

[root@node4 kafka_2.13-2.7.1]# /export/home/kafka_zk/kafka_2.13-2.7.1/bin/kafka-topics.sh --describe --topic test --bootstrap-server node4:9092
Topic: test     PartitionCount: 3       ReplicationFactor: 3    Configs: compression.type=producer,min.insync.replicas=2,segment.jitter.ms=30000,cleanup.policy=delete,segment.bytes=1073741824,max.message.bytes=10485760,unclean.leader.election.enable=false
        Topic: test     Partition: 0    Leader: 1       Replicas: 1,2,3 Isr: 1,2,3
        Topic: test     Partition: 1    Leader: 2       Replicas: 2,3,1 Isr: 2,3,1
        Topic: test     Partition: 2    Leader: 3       Replicas: 3,1,2 Isr: 3,1,2
[root@node4 kafka_2.13-2.7.1]# 

4.1.3 Delete a Topic

(delete.topic.enable defaults to true in this Kafka version, so no extra setting is needed)

/export/home/kafka_zk/kafka_2.13-2.7.1/bin/kafka-topics.sh --delete --topic test --bootstrap-server node4:9092

[root@node4 kafka_2.13-2.7.1]# /export/home/kafka_zk/kafka_2.13-2.7.1/bin/kafka-topics.sh --delete --topic test --bootstrap-server node4:9092
[root@node4 kafka_2.13-2.7.1]# /export/home/kafka_zk/kafka_2.13-2.7.1/bin/kafka-topics.sh --describe --topic test --bootstrap-server node4:9092
Error while executing topic command : Topic 'test' does not exist as expected
[2025-04-04 22:56:05,262] ERROR java.lang.IllegalArgumentException: Topic 'test' does not exist as expected
        at kafka.admin.TopicCommand$.kafka$admin$TopicCommand$$ensureTopicExists(TopicCommand.scala:539)
        at kafka.admin.TopicCommand$AdminClientTopicService.describeTopic(TopicCommand.scala:316)
        at kafka.admin.TopicCommand$.main(TopicCommand.scala:70)
        at kafka.admin.TopicCommand.main(TopicCommand.scala)
 (kafka.admin.TopicCommand$)
[root@node4 kafka_2.13-2.7.1]# 

4.1.4 List Consumer Groups

/export/home/kafka_zk/kafka_2.13-2.7.1/bin/kafka-consumer-groups.sh --list --bootstrap-server node4:9092

4.2 Service Management

Stop Kafka before ZooKeeper; start them in the opposite order.

4.2.1 Stop Kafka

/export/home/kafka_zk/kafka_2.13-2.7.1/bin/kafka-server-stop.sh

4.2.2 Stop ZooKeeper

/export/home/kafka_zk/apache-zookeeper-3.6.3-bin/bin/zkServer.sh stop
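For unattended restarts, both services can run under systemd instead of being started by hand. This is a sketch, not part of the original deployment: the unit contents are written to a staging directory here; on real nodes they would go to /etc/systemd/system, followed by `systemctl daemon-reload` and `systemctl enable --now zookeeper kafka`. `zkServer.sh start-foreground` keeps ZooKeeper in the foreground so systemd can supervise it.

```shell
stage=$(mktemp -d)   # stand-in for /etc/systemd/system on a real node

# ZooKeeper must be up before Kafka, hence the ordering directives below.
cat > "$stage/zookeeper.service" <<'EOF'
[Unit]
Description=Apache ZooKeeper
After=network.target

[Service]
ExecStart=/export/home/kafka_zk/apache-zookeeper-3.6.3-bin/bin/zkServer.sh start-foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

cat > "$stage/kafka.service" <<'EOF'
[Unit]
Description=Apache Kafka
Requires=zookeeper.service
After=zookeeper.service

[Service]
ExecStart=/export/home/kafka_zk/kafka_2.13-2.7.1/bin/kafka-server-start.sh /export/home/kafka_zk/kafka_2.13-2.7.1/config/server.properties
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

ls "$stage"
```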

5 Troubleshooting

ZooKeeper fails to start:

  • Check that each myid file matches its server.N entry in zoo.cfg
  • Check that the firewall is disabled (or ports 2181/2888/3888 are open)
  • Check permissions on the data directory

Kafka cannot connect to ZooKeeper:

  • Check that ZooKeeper is running
  • Check the zookeeper.connect setting in server.properties

Inter-node communication problems:

  • Check the /etc/hosts entries
  • Check network connectivity
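Most of the checklist above comes down to "can every node reach every other node's ports". A small probe using bash's built-in /dev/tcp, so no `nc` is required (the hostnames assume the /etc/hosts entries from section 2.1):

```shell
# Try a TCP connect with a 2-second timeout; report open/unreachable.
probe() {
  if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 unreachable"
  fi
}

for h in node4 node5 node6; do
  probe "$h" 2181   # ZooKeeper client port
  probe "$h" 9092   # Kafka broker port
done
```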

6 Summary

This article walked through the full process of deploying a pre-2.8 Kafka version on a three-node cluster. With the configuration and verification steps above you get a highly available Kafka cluster. For production, consider these further improvements:

  • Monitoring (e.g. Prometheus + Grafana)
  • Security (SSL/SASL authentication)
  • Performance tuning (JVM parameters sized to the hardware)
  • Log retention policy tuning