Kafka Cluster Setup

1. Cluster Plan

| IP              | Alias  | Services                 |
| --------------- | ------ | ------------------------ |
| 192.168.159.100 | test01 | ZooKeeper, Kafka broker1 |
| 192.168.159.101 | test02 | ZooKeeper, Kafka broker2 |
| 192.168.159.102 | test03 | ZooKeeper, Kafka broker3 |

2. Prerequisites

1. Set up the ZooKeeper service

Although Kafka ships with a bundled ZooKeeper, other services may also need ZooKeeper, so running a separate ZooKeeper cluster is recommended. For how to deploy a ZooKeeper cluster, see the ZooKeeper cluster deployment document.
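Before installing Kafka it is worth confirming that the ZooKeeper ensemble is actually up. A sketch of such a check (the zkServer.sh path and passwordless ssh between the nodes are assumptions, not part of the original steps):

```shell
# Check ZooKeeper status on every node: one should report "leader", the others "follower".
# The installation path below is an assumption; adjust it to your environment.
for zk in test01 test02 test03; do
  echo "---- $zk ----"
  ssh "$zk" "/usr/local/zookeeper/bin/zkServer.sh status"
done
```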

3. Edit the Configuration Files

1. Edit the /etc/hosts file

```shell
192.168.159.100 test01
192.168.159.101 test02
192.168.159.102 test03
```
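After editing /etc/hosts on every node, it is worth verifying that each alias resolves (this check is an addition to the original steps):

```shell
# Each alias should resolve to the matching IP on every node.
for host in test01 test02 test03; do
  getent hosts "$host" || echo "WARN: $host does not resolve"
done
```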
2. Edit config/server.properties under the Kafka installation directory

```shell
# The ID this broker registers in ZooKeeper; must be unique for every broker in the cluster
broker.id=100
# Listener address and port; adjust to each node's own IP (this example is for test01)
listeners=PLAINTEXT://192.168.159.100:9092
# Number of network threads
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
# Directory for Kafka's data (log segments), not application logs
log.dirs=/usr/local/kafka/logs
# Default number of partitions per topic
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state".
# For anything other than development testing, a value greater than 1 (such as 3) is recommended to ensure availability.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# How often to check whether log segments are eligible for deletion
log.retention.check.interval.ms=300000
# ZooKeeper connection string
zookeeper.connect=192.168.159.100:2181,192.168.159.101:2181,192.168.159.102:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000

group.initial.rebalance.delay.ms=0
```
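Every broker needs its own broker.id and listener IP, so the file must differ slightly per node. A sketch of how to adjust it on test02 (the sed commands and values are illustrative, assuming the config path used in this guide):

```shell
# Run on test02; on test03 use broker.id=102 and 192.168.159.102 instead.
CONF=/usr/local/kafka/config/server.properties
sed -i 's/^broker.id=.*/broker.id=101/' "$CONF"
sed -i 's|^listeners=.*|listeners=PLAINTEXT://192.168.159.101:9092|' "$CONF"
```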
3. Configure environment variables

```shell
vim /etc/profile

#KAFKA_HOME
export KAFKA_HOME=/usr/local/kafka/
export PATH=$PATH:$KAFKA_HOME/bin

source /etc/profile
```

4. Create the logs directory

```shell
mkdir /usr/local/kafka/logs
```
5. Set up the cluster start/stop script

Note: when stopping the Kafka cluster, always wait until every Kafka node's process has fully stopped before stopping the ZooKeeper cluster. ZooKeeper stores the Kafka cluster's metadata; if ZooKeeper is stopped first, the Kafka brokers can no longer complete a clean shutdown, and the only option left is to kill the Kafka processes manually.

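One way to confirm that all Kafka processes are gone before touching ZooKeeper (a sketch; passwordless ssh and jps being on each node's PATH are assumptions):

```shell
# Each node should report "Kafka stopped" before you stop ZooKeeper.
for i in test01 test02 test03; do
  if ssh "$i" "jps | grep -q Kafka"; then
    echo "$i: Kafka still running"
  else
    echo "$i: Kafka stopped"
  fi
done
```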
```shell
#! /bin/bash

case $1 in
"start"){
    for i in test01 test02 test03
    do
        echo " -------- starting Kafka on $i --------"
        ssh $i "/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties"
    done
};;
"stop"){
    for i in test01 test02 test03
    do
        echo " -------- stopping Kafka on $i --------"
        ssh $i "/usr/local/kafka/bin/kafka-server-stop.sh"
    done
};;
esac
```
```shell
# Add execute permission
chmod +x kf.sh
# Start the cluster
./kf.sh start
# Stop the cluster
./kf.sh stop
```
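Once the cluster is up, a simple smoke test is to create a topic replicated across all three brokers and inspect its placement (an addition to the original guide; the topic name is arbitrary, and the --bootstrap-server form assumes Kafka 2.2 or later):

```shell
# Create a topic with one replica on each broker, then check leader/ISR placement.
kafka-topics.sh --bootstrap-server test01:9092 --create --topic smoke-test \
  --partitions 3 --replication-factor 3
kafka-topics.sh --bootstrap-server test01:9092 --describe --topic smoke-test
```

If every partition in the describe output lists three in-sync replicas, the cluster is healthy.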