Kafka Cluster Setup

Cluster Setup

1. Cluster Plan

IP: 192.168.159.100   Hostname: test01   Services: ZooKeeper, Kafka broker1
IP: 192.168.159.101   Hostname: test02   Services: ZooKeeper, Kafka broker2
IP: 192.168.159.102   Hostname: test03   Services: ZooKeeper, Kafka broker3

2. Prerequisites

1. Set up the ZooKeeper service

Kafka ships with an embedded ZooKeeper, but since other services also need ZooKeeper, it is recommended to run a standalone ZooKeeper cluster. For how to deploy one, see the ZooKeeper cluster deployment document.
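Before continuing, it is worth confirming that ZooKeeper is healthy on every node. A minimal check, assuming the standard zkServer.sh script is on each host's PATH (and, for the four-letter command, that srvr is whitelisted on ZooKeeper 3.5+):

shell
# Run on each ZooKeeper node; across the cluster expect one "leader" and two "follower"
zkServer.sh status
# Or probe a node remotely
echo srvr | nc 192.168.159.100 2181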

3. Modify the configuration files

1. Edit the /etc/hosts file (on all three nodes)
shell
192.168.159.100 test01
192.168.159.101 test02
192.168.159.102 test03
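A quick sanity check that name resolution works from every host:

shell
# Each alias should resolve via /etc/hosts and respond
ping -c 1 test01
ping -c 1 test02
ping -c 1 test03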
2. Edit config/server.properties under the Kafka installation directory
shell
# Broker ID registered in ZooKeeper; must be unique for each node (this file is for test01)
broker.id=100
# Listener address and port; set it to the current node's own IP and adjust to your environment
listeners=PLAINTEXT://192.168.159.100:9092
# Number of threads handling network requests
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
# Directory where Kafka stores its log segments (message data)
log.dirs=/usr/local/kafka/logs
# Default number of partitions per topic
num.partitions=1

# Number of threads per data directory used for log recovery at startup and flushing at shutdown
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 (such as 3) is recommended to ensure availability.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168


# The interval at which log segments are checked to see whether they can be deleted according to the retention policies
log.retention.check.interval.ms=300000
# ZooKeeper connection string (comma-separated host:port list)
zookeeper.connect=192.168.159.100:2181,192.168.159.101:2181,192.168.159.102:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000


# Delay, in ms, of the initial consumer group rebalance after the first member joins
group.initial.rebalance.delay.ms=0
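The file above is written for test01. On test02 and test03 only broker.id and listeners need to differ; a sketch of the per-node values, assuming the broker IDs simply follow the last octet of each host's IP:

shell
# test02 (192.168.159.101)
broker.id=101
listeners=PLAINTEXT://192.168.159.101:9092

# test03 (192.168.159.102)
broker.id=102
listeners=PLAINTEXT://192.168.159.102:9092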
3. Configure environment variables (on all three nodes)
shell
vim /etc/profile

# KAFKA_HOME
export KAFKA_HOME=/usr/local/kafka
export PATH=$PATH:$KAFKA_HOME/bin

source /etc/profile
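A quick way to confirm the variables took effect (kafka-topics.sh --version is available in Kafka 2.x and later):

shell
echo $KAFKA_HOME
# Should print the Kafka version if the bin scripts are on the PATH
kafka-topics.sh --version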
4. Create the logs directory
shell
mkdir -p /usr/local/kafka/logs
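Steps 2 through 4 must be applied on test02 and test03 as well. If Kafka was unpacked only on test01, one way to sync the installation (a sketch assuming passwordless SSH between the nodes, which the start/stop script below also relies on):

shell
# Copy the installation to the other nodes, then adjust broker.id and listeners on each
scp -r /usr/local/kafka test02:/usr/local/
scp -r /usr/local/kafka test03:/usr/local/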
5. Cluster start/stop script

Note: when stopping the cluster, wait until every Kafka broker process has fully exited before stopping the ZooKeeper cluster. ZooKeeper stores the Kafka cluster's metadata; if ZooKeeper is stopped first, the Kafka brokers can no longer complete a clean shutdown and the processes have to be killed manually.

shell
#!/bin/bash
# kf.sh: start or stop Kafka on all nodes over SSH (requires passwordless SSH to test01-test03)
case $1 in
"start"){
    for i in test01 test02 test03
    do
        echo " -------- Starting Kafka on $i --------"
        ssh $i "/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties"
    done
};;
"stop"){
    for i in test01 test02 test03
    do
        echo " -------- Stopping Kafka on $i --------"
        ssh $i "/usr/local/kafka/bin/kafka-server-stop.sh"
    done
};;
esac
shell
# Add execute permission
chmod +x kf.sh
# Start the cluster
./kf.sh start
# Stop the cluster
./kf.sh stop
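Once started, a way to verify the cluster is healthy (assuming Kafka 2.2+, where kafka-topics.sh accepts --bootstrap-server; the topic name test is just an example):

shell
# Each node should list a "Kafka" process
jps
# Create a topic replicated across all three brokers
kafka-topics.sh --bootstrap-server test01:9092 --create --topic test --partitions 3 --replication-factor 3
# Every partition should report a leader and three in-sync replicas
kafka-topics.sh --bootstrap-server test01:9092 --describe --topic test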