Installation
Steps
Installed while learning Kafka.
Download URL: https://archive.apache.org/dist/zookeeper/zookeeper-3.5.7/apache-zookeeper-3.5.7-bin.tar.gz
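Fetched with wget, for example (run from wherever the tarballs are kept):
wget https://archive.apache.org/dist/zookeeper/zookeeper-3.5.7/apache-zookeeper-3.5.7-bin.tar.gz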
Extract
tar -zxvf apache-zookeeper-3.5.7-bin.tar.gz -C /export/server/
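Note: this bin tarball unpacks to apache-zookeeper-3.5.7-bin, while the paths used later in these notes are /export/server/zookeeper-3.5.7, so presumably the directory was renamed, e.g.:
mv /export/server/apache-zookeeper-3.5.7-bin /export/server/zookeeper-3.5.7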
Edit the config
cd conf
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/export/server/zookeeper-3.5.7/zkdata
# the number after "server." here has to match the myid on that node (see below)
# 2888 is the port followers use to talk to the leader; 3888 is the leader-election port
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888
:wq
Create a file named myid in the dataDir you configured (it MUST be named myid!!!)
Write a single number into it; it has to match the N in that host's server.N line, so each of the three nodes gets a different value
[root@hadoop1 zkdata]# echo 1 > myid
[root@hadoop2 zkdata]# echo 2 > myid
[root@hadoop3 zkdata]# echo 3 > myid
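A quick sanity check (assuming passwordless SSH between the nodes) is to confirm each myid matches that host's server.N entry:
for i in hadoop1 hadoop2 hadoop3
do
    echo -n "$i myid: "
    ssh $i "cat /export/server/zookeeper-3.5.7/zkdata/myid"
done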
fenfa (distribute the ZooKeeper directory to the other nodes, e.g. with the sketch below)
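A minimal distribution sketch: fenfa here presumably refers to a local sync script (e.g. xsync); a plain rsync loop does the same, assuming passwordless SSH and the same /export/server layout on every node:
for i in hadoop2 hadoop3
do
    rsync -av /export/server/zookeeper-3.5.7/ $i:/export/server/zookeeper-3.5.7/
done
# myid must stay unique per node, so re-check zkdata/myid on each host after syncing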
zoo.cfg explained
# The number of milliseconds of each tick
# milliseconds per tick, i.e. the heartbeat interval
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
# number of ticks the initial sync phase may take; if a follower has not connected to the leader within this window, it fails
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
# i.e. how far a follower may lag (in ticks) before the leader drops it
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# i.e. where ZooKeeper keeps its snapshots; in this setup the myid file lives here too
dataDir=/export/server/zookeeper-3.5.7/zkdata
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
# note: this limit is per client IP; raise it if more concurrent clients are needed
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
# i.e. autopurge automatically cleans up old snapshots and transaction logs
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
# i.e. how many recent snapshots autopurge keeps in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
# i.e. how often (in hours) the purge task runs; 0 disables auto purge
#autopurge.purgeInterval=1
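With these defaults the timeouts work out as: initLimit = 10 ticks × 2000 ms = 20 s for a follower to finish its initial sync with the leader, and syncLimit = 5 ticks × 2000 ms = 10 s for a request/ack round trip before the follower is considered dead.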
Start it up
Start hadoop1
[root@hadoop1 zookeeper-3.5.7]# bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /export/server/zookeeper-3.5.7/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@hadoop1 zookeeper-3.5.7]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /export/server/zookeeper-3.5.7/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Error contacting service. It is probably not running.
The status check fails?! That's because the other two nodes aren't started yet: without more than half of the ensemble up, no leader can be elected.
Start hadoop2/3 as well
bin/zkServer.sh start
Now hadoop1's status:
[root@hadoop1 zookeeper-3.5.7]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /export/server/zookeeper-3.5.7/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
[root@hadoop2 zookeeper-3.5.7]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /export/server/zookeeper-3.5.7/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader    (hadoop2 became the leader)
[root@hadoop3 zookeeper-3.5.7]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /export/server/zookeeper-3.5.7/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
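Optionally, connect with the CLI client from any node to double-check the ensemble is serving requests:
bin/zkCli.sh -server hadoop1:2181
# inside the shell, for example:
ls /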
Batch start/stop script
#!/bin/bash
case $1 in
start)
    for i in hadoop1 hadoop2 hadoop3
    do
        echo ---------------- zookeeper $i start ----------------
        ssh $i "/export/server/zookeeper-3.5.7/bin/zkServer.sh start"
    done
    ;;
stop)
    for i in hadoop1 hadoop2 hadoop3
    do
        echo ---------------- zookeeper $i stop ----------------
        ssh $i "/export/server/zookeeper-3.5.7/bin/zkServer.sh stop"
    done
    ;;
status)
    for i in hadoop1 hadoop2 hadoop3
    do
        echo ---------------- zookeeper $i status ----------------
        ssh $i "/export/server/zookeeper-3.5.7/bin/zkServer.sh status"
    done
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    ;;
esac
chmod 777 zk
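Assuming the script is saved as zk in a directory on PATH (e.g. ~/bin), usage looks like:
zk start
zk status
zk stop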