Kafka and ZooKeeper cluster setup

Kafka and ZooKeeper cluster preparation

Node  IP               Role
z1    192.168.101.131  zookeeper
z2    192.168.101.132  zookeeper
z3    192.168.101.133  zookeeper
k1    192.168.101.141  kafka
k2    192.168.101.142  kafka
k3    192.168.101.143  kafka

(1) Node configuration

Hosts configuration

Edit the /etc/hosts file on a node and distribute it to every node (a distribution sketch follows the commands below):

echo "192.168.101.131   z1" >> /etc/hosts
echo "192.168.101.132   z2" >> /etc/hosts
echo "192.168.101.133   z3" >> /etc/hosts
echo "192.168.101.141   k1" >> /etc/hosts
echo "192.168.101.142   k2" >> /etc/hosts
echo "192.168.101.143   k3" >> /etc/hosts

(2) ZooKeeper

  1. Configure the Java environment (JDK 1.8 or later)
    yum install java, or set up Java manually

  2. Install ZooKeeper: yum install -y zookeeper

  3. Edit the configuration file

    On z1, z2 and z3, open the ZooKeeper configuration file, e.g. /opt/zookeeper/conf/zoo.cfg

    Add the following settings (in the server.N lines, 2888 is the follower/quorum port and 3888 the leader-election port):

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/var/lib/zookeeper/data
# the port at which the clients will connect
clientPort=2181

server.1=z1:2888:3888
server.2=z2:2888:3888
server.3=z3:2888:3888
  4. Add myid
    In the dataDir configured above, create a file named myid and write 1 on z1, 2 on z2, and 3 on z3
    z1: mkdir -p /var/lib/zookeeper/data/ && echo 1 > /var/lib/zookeeper/data/myid
    z2: mkdir -p /var/lib/zookeeper/data/ && echo 2 > /var/lib/zookeeper/data/myid
    z3: mkdir -p /var/lib/zookeeper/data/ && echo 3 > /var/lib/zookeeper/data/myid
  5. Start ZooKeeper
    On z1, z2 and z3, start the service: /opt/zookeeper/bin/zkServer.sh start /opt/zookeeper/conf/zoo.cfg
  6. Check node status: /opt/zookeeper/bin/zkServer.sh status (see the sketch below for checking all nodes at once)
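
A minimal status-check sketch, assuming passwordless root SSH to z1-z3; one node should report Mode: leader and the other two Mode: follower:

for h in z1 z2 z3; do
  echo "== $h =="                                    # label the output per node
  ssh root@$h /opt/zookeeper/bin/zkServer.sh status
done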

(3) Kafka (3.7.2)

Install by downloading the release tarball.

  1. Download and extract
    wget -P /usr/local/ https://mirrors.tuna.tsinghua.edu.cn/apache/kafka/3.7.2/kafka_2.12-3.7.2.tgz
    tar -zxvf /usr/local/kafka_2.12-3.7.2.tgz -C /usr/local
  2. Create a symlink
    ln -s /usr/local/kafka_2.12-3.7.2 /usr/local/kafka
  3. Set directory ownership (adjust user and group to your environment)
    mkdir -p /data/kafka-logs
    chown <user>:<group> -R /data/kafka-logs
    chown <user>:<group> -R /usr/local/kafka_2.12-3.7.2
  4. Configure environment variables (optional; puts the Kafka executables on PATH)
    echo "export KAFKA_HOME=/usr/local/kafka" >> /etc/profile
    echo "export PATH=\$PATH:\$KAFKA_HOME/bin" >> /etc/profile
    source /etc/profile
After this, /etc/profile should end with:
# /etc/profile
...
export KAFKA_HOME=/usr/local/kafka
export PATH=$PATH:$KAFKA_HOME/bin
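
As a quick sanity check that the new PATH is active (the --version flag simply prints the Kafka version):

kafka-topics.sh --version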
  5. Modify the Kafka configuration
    Edit config/server.properties on each node and change the values of broker.id and zookeeper.connect
  • Node k1
    sed -i 's/broker\.id.*/broker.id=1/' /usr/local/kafka/config/server.properties
    sed -i 's/zookeeper.connect=.*/zookeeper.connect=z1:2181,z2:2181,z3:2181/' /usr/local/kafka/config/server.properties

  • Node k2
    sed -i 's/broker\.id.*/broker.id=2/' /usr/local/kafka/config/server.properties
    sed -i 's/zookeeper.connect=.*/zookeeper.connect=z1:2181,z2:2181,z3:2181/' /usr/local/kafka/config/server.properties

  • Node k3
    sed -i 's/broker\.id.*/broker.id=3/' /usr/local/kafka/config/server.properties
    sed -i 's/zookeeper.connect=.*/zookeeper.connect=z1:2181,z2:2181,z3:2181/' /usr/local/kafka/config/server.properties

After this, the only difference between the nodes is broker.id; for example, k1's configuration looks like:

...
broker.id=1
...
zookeeper.connect=z1:2181,z2:2181,z3:2181
...
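
Note that the /data/kafka-logs directory created in step 3 is only used if log.dirs points at it; the shipped default is /tmp/kafka-logs. A sketch of that change, to be run on every broker:

sed -i 's|^log\.dirs=.*|log.dirs=/data/kafka-logs|' /usr/local/kafka/config/server.properties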
  6. Start Kafka
  • Start in the background: sh /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties

Start in the foreground (for testing): sh /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties

Stop Kafka: sh /usr/local/kafka/bin/kafka-server-stop.sh
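
Once all three brokers are running, their registration can be verified with the zookeeper-shell.sh tool shipped with Kafka; with broker.id set as above, the returned list should contain 1, 2 and 3:

/usr/local/kafka/bin/zookeeper-shell.sh z1:2181 ls /brokers/ids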

(4) Topics

Create a topic
kafka-topics.sh --bootstrap-server k1:9092 --create --topic topic-a --partitions 3 --replication-factor 2

List topics
kafka-topics.sh --bootstrap-server k1:9092 --list

Describe a topic
kafka-topics.sh --bootstrap-server k1:9092 --describe --topic topic-a

Sample output (note that this particular topic was created with a replication factor of 3):
Topic: topic-a  TopicId: OXC_ux10Taq314OyL-dU9Q PartitionCount: 3       ReplicationFactor: 3    Configs: 
        Topic: topic-a  Partition: 0    Leader: 143     Replicas: 143,142,141   Isr: 143,142,141
        Topic: topic-a  Partition: 1    Leader: 142     Replicas: 142,141,143   Isr: 142,141,143
        Topic: topic-a  Partition: 2    Leader: 141     Replicas: 141,143,142   Isr: 141
Each partition has one leader, which handles all read and write requests for that partition.
Each partition here has three replicas spread across different brokers; the Isr column lists the replicas currently in sync with the leader (partition 2 above has fallen back to a single in-sync replica).

Delete a topic
kafka-topics.sh --bootstrap-server k1:9092 --delete --topic topic-a

Alter a topic (the partition count can only be increased, never decreased)
kafka-topics.sh --bootstrap-server k1:9092 --alter --topic topic-a --partitions 4

(5) Producing and consuming

  • Producer command
    kafka-console-producer.sh --bootstrap-server k1:9092 --topic topic-a
  • Consumer command (--from-beginning consumes from the earliest offset)
    kafka-console-consumer.sh --bootstrap-server k1:9092 --topic topic-a --from-beginning
  • Consume a specific partition, including historical data
    kafka-console-consumer.sh --bootstrap-server k1:9092 --topic topic-a --from-beginning --partition 2
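
A minimal end-to-end smoke test, assuming topic-a from section (4) still exists; --max-messages makes the consumer exit after reading one record:

echo "hello kafka" | kafka-console-producer.sh --bootstrap-server k1:9092 --topic topic-a
kafka-console-consumer.sh --bootstrap-server k1:9092 --topic topic-a --from-beginning --max-messages 1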