ZooKeeper (installation only, for now)

Installation

Steps

Installed while learning Kafka.

Apache ZooKeeper

Download: https://archive.apache.org/dist/zookeeper/zookeeper-3.5.7/apache-zookeeper-3.5.7-bin.tar.gz

Extract

tar -zxvf apache-zookeeper-3.5.7-bin.tar.gz -C /export/server/
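If you are setting up from scratch, here is a minimal download-and-rename sketch. It assumes the node has internet access and that the extracted apache-zookeeper-3.5.7-bin directory is renamed to zookeeper-3.5.7 so it matches the paths used in the rest of these notes:

```bash
# Download the tarball from the archive (URL from above)
wget https://archive.apache.org/dist/zookeeper/zookeeper-3.5.7/apache-zookeeper-3.5.7-bin.tar.gz
# Extract into /export/server/
tar -zxvf apache-zookeeper-3.5.7-bin.tar.gz -C /export/server/
# Rename so later paths like /export/server/zookeeper-3.5.7/zkdata line up (assumption)
mv /export/server/apache-zookeeper-3.5.7-bin /export/server/zookeeper-3.5.7
```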

Modify the configuration

cd conf
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg

# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/export/server/zookeeper-3.5.7/zkdata


# server.N here must match the number written into myid below
# 2888 is the port followers use to talk to the leader; 3888 is the leader-election port
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888

:wq

In the dataDir you configured, create a file named myid (it must be called exactly myid!).

Write a number into it: any number works, as long as each of the three nodes gets a different one, matching the N in that node's server.N line in zoo.cfg.

[root@hadoop1 zkdata]# echo 1 > myid
[root@hadoop2 zkdata]# echo 2 > myid
[root@hadoop3 zkdata]# echo 3 > myid
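A minimal sketch that creates the data directory and myid on all three nodes in one pass, assuming passwordless ssh to hadoop1/2/3 and the paths used above:

```bash
# Hypothetical helper loop: myid is taken from the node's position in the list,
# matching server.1/2/3 in zoo.cfg
id=1
for host in hadoop1 hadoop2 hadoop3; do
    ssh "$host" "mkdir -p /export/server/zookeeper-3.5.7/zkdata && echo $id > /export/server/zookeeper-3.5.7/zkdata/myid"
    id=$((id + 1))
done
```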

Distribute the installation directory to the other nodes.
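A minimal distribution sketch, assuming rsync and passwordless ssh are available on these hosts; zkdata is excluded so each node keeps the myid it was given in the previous step:

```bash
# Hypothetical sync from hadoop1 to the other two nodes
for host in hadoop2 hadoop3; do
    rsync -av --exclude 'zkdata' /export/server/zookeeper-3.5.7/ "$host":/export/server/zookeeper-3.5.7/
done
```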

What the zoo.cfg settings mean

# The number of milliseconds of each tick
# (the basic heartbeat unit)
tickTime=2000

# The number of ticks that the initial 
# synchronization phase can take 
# (if a follower has not connected and synced within this window, it fails)
initLimit=10

# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5

# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/export/server/zookeeper-3.5.7/zkdata

# the port at which the clients will connect
clientPort=2181

# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3

# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
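For reference, initLimit and syncLimit are counted in ticks, so with tickTime=2000 the initial synchronization phase may take up to 10 × 2000 ms = 20 s, and a request must be acknowledged within 5 × 2000 ms = 10 s.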

Start it up

Start hadoop1

[root@hadoop1 zookeeper-3.5.7]# bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /export/server/zookeeper-3.5.7/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@hadoop1 zookeeper-3.5.7]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /export/server/zookeeper-3.5.7/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Error contacting service. It is probably not running.

The status check looks like a failure? That is only because the other two nodes have not been started yet: a 3-node ensemble needs a majority (at least 2 servers) to elect a leader, so a lone node cannot form a working ensemble.

Start hadoop2 and hadoop3 as well

bin/zkServer.sh start

Now the status on each node:

[root@hadoop1 zookeeper-3.5.7]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /export/server/zookeeper-3.5.7/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

[root@hadoop2 zookeeper-3.5.7]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /export/server/zookeeper-3.5.7/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader        (hadoop2 became the leader)

[root@hadoop3 zookeeper-3.5.7]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /export/server/zookeeper-3.5.7/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
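As a quick sanity check, you can also connect with the bundled CLI (the host and port here just follow this setup):

```bash
# Open a client session against one of the nodes and list the root znode
bin/zkCli.sh -server hadoop1:2181
# inside the interactive shell:
#   ls /
#   quit
```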

Batch start/stop script

#!/bin/bash
# Start/stop/check ZooKeeper on hadoop1-3 over ssh
case $1 in
start)
    for i in hadoop1 hadoop2 hadoop3
    do
        echo ----------------  zookeeper $i start ----------------
        ssh $i "/export/server/zookeeper-3.5.7/bin/zkServer.sh start"
    done
    ;;
stop)
    for i in hadoop1 hadoop2 hadoop3
    do
        echo ----------------  zookeeper $i stop ----------------
        ssh $i "/export/server/zookeeper-3.5.7/bin/zkServer.sh stop"
    done
    ;;
status)
    for i in hadoop1 hadoop2 hadoop3
    do
        echo ----------------  zookeeper $i status ----------------
        ssh $i "/export/server/zookeeper-3.5.7/bin/zkServer.sh status"
    done
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    ;;
esac

chmod 777 zk
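Usage sketch (the script name zk comes from the chmod line above; run it from wherever it was saved):

```bash
./zk start     # start the whole ensemble
./zk status    # show Mode: leader/follower on each node
./zk stop      # stop the whole ensemble
```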
