一、Environment introduction
ZooKeeper download: https://zookeeper.apache.org/releases.html
Kafka download: https://kafka.apache.org/downloads
192.168.142.129 apache-zookeeper-3.8.4-bin.tar.gz kafka_2.13-3.6.0.tgz
192.168.142.130 apache-zookeeper-3.8.4-bin.tar.gz kafka_2.13-3.6.0.tgz
192.168.142.131 apache-zookeeper-3.8.4-bin.tar.gz kafka_2.13-3.6.0.tgz
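Both ZooKeeper and Kafka run on the JVM, so a Java runtime (Java 8 or newer) should be installed on all three hosts before unpacking the tarballs; a quick check:
java -version   # should report a Java 8+ runtime on every node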
二、Kafka standalone usage
1. Start the services
# The standalone setup uses the ZooKeeper instance bundled with Kafka
tar zxf kafka_2.13-3.6.0.tgz -C /usr/local/
cd /usr/local/kafka_2.13-3.6.0/
nohup bin/zookeeper-server-start.sh config/zookeeper.properties &
nohup bin/kafka-server-start.sh config/server.properties &
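Optionally, verify that both processes came up before moving on (this assumes the default ports, 2181 for ZooKeeper and 9092 for Kafka):
ss -tlnp | grep -E ':2181|:9092'   # both ports should be listening
jps                                # should list QuorumPeerMain and Kafka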
2. Create a topic
bin/kafka-topics.sh --create --topic test --bootstrap-server localhost:9092
bin/kafka-topics.sh --describe --topic test --bootstrap-server localhost:9092 # describe the topic
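Created this way, the topic uses the broker defaults for partition count and replication factor. They can also be set explicitly at creation time; the topic name test2 below is only an illustration (on a single broker the replication factor cannot exceed 1):
bin/kafka-topics.sh --create --topic test2 --partitions 3 --replication-factor 1 --bootstrap-server localhost:9092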
3. Start a producer and a consumer
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test # start a console producer (--bootstrap-server localhost:9092 is the preferred flag in newer releases)
>1111
>1234
>qwss
>hello
# Open another SSH session to watch the messages being consumed
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test # start a console consumer
# A consumer started after the producer has exited does not see the earlier messages by default; add --from-beginning to read them
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
# Read from an exact position: skip the first four messages by starting at offset 4 of partition 0
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --partition 0 --offset 4 --topic test
-------------------- Consumer groups -----------------------
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --consumer-property group.id=testgroup --topic test
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --consumer-property group.id=testgroup2 --topic test
# If several consumers are started in the same group, only one of them receives the messages (consumers in a group split the topic's partitions); two different groups both receive the messages, independently of each other
# Inspect the consumer group
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group testgroup
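The same kafka-consumer-groups.sh tool can also rewind a group's committed offsets. A minimal sketch, reusing the testgroup/test names from above (the group's consumers must be stopped first, otherwise the reset is rejected):
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group testgroup --topic test --reset-offsets --to-earliest --execute
# Use --dry-run instead of --execute to preview the new offsets without applying them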
4. Stop the services
cd /usr/local/kafka_2.13-3.6.0/
bin/kafka-server-stop.sh
bin/zookeeper-server-stop.sh
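The broker is stopped first and the bundled ZooKeeper second; to confirm both JVMs have actually exited:
jps | grep -E 'Kafka|QuorumPeerMain'   # should print nothing once both are down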
三、Kafka cluster
1. ZooKeeper cluster deployment
tar zxf apache-zookeeper-3.8.4-bin.tar.gz -C /usr/local/
cd /usr/local/apache-zookeeper-3.8.4-bin/conf
cp zoo_sample.cfg zoo.cfg
mkdir /usr/local/apache-zookeeper-3.8.4-bin/data
cat zoo.cfg | grep -v '^#'
-----------------------# ZooKeeper cluster configuration----------------------------
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/apache-zookeeper-3.8.4-bin/data
clientPort=2181
server.1=192.168.142.129:2888:3888
server.2=192.168.142.130:2888:3888
server.3=192.168.142.131:2888:3888
cd /usr/local/apache-zookeeper-3.8.4-bin/data
echo '1' > myid # the value must match this host's server.N entry above (1 on 129, 2 on 130, 3 on 131)
cd /usr/local/apache-zookeeper-3.8.4-bin
bin/zkServer.sh --config conf start
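Once all three nodes have been started, check the election result on each host; one node should report leader and the other two follower:
bin/zkServer.sh --config conf status   # prints "Mode: leader" or "Mode: follower"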
2. Kafka cluster deployment
tar zxf kafka_2.13-3.6.0.tgz -C /usr/local/
cd /usr/local/kafka_2.13-3.6.0/config
cat server.properties | grep -v '^#' | grep -v '^$'
---------------------------# Kafka cluster configuration------------------------------
broker.id=0 # 0 on host 129, 1 on host 130, 2 on host 131
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=2
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.142.129:2181,192.168.142.130:2181,192.168.142.131:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
cd /usr/local/kafka_2.13-3.6.0
bin/kafka-server-start.sh -daemon config/server.properties # -daemon starts the broker in the background
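To confirm that all three brokers registered (broker ids 0, 1 and 2 from the config above), the zookeeper-shell tool that ships with Kafka can query ZooKeeper directly; this assumes the ensemble addresses used earlier:
bin/zookeeper-shell.sh 192.168.142.129:2181 ls /brokers/ids   # expected: [0, 1, 2]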
3. Topic creation
# Create a topic and inspect it
bin/kafka-topics.sh --bootstrap-server 192.168.142.129:9092 --create --replication-factor 2 --partitions 4 --topic testTopic
bin/kafka-topics.sh --bootstrap-server 192.168.142.129:9092 --list
bin/kafka-topics.sh --bootstrap-server 192.168.142.129:9092 --describe --topic testTopic
After the Kafka service on host 130 is stopped, the --describe output changes: partitions whose leader was on 130 elect a new leader from their remaining in-sync replicas.
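To observe this failover, stop the broker on 130 and re-run the describe command; the --under-replicated-partitions option limits the output to partitions that are currently missing a replica (both commands reuse the existing testTopic):
# on 192.168.142.130
bin/kafka-server-stop.sh
# on any other node
bin/kafka-topics.sh --bootstrap-server 192.168.142.129:9092 --describe --topic testTopic
bin/kafka-topics.sh --bootstrap-server 192.168.142.129:9092 --describe --under-replicated-partitions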