ZooKeeper Cluster Installation and Script-Based Management

Before installing, stop and disable the firewall on every server!

systemctl stop firewalld        # stop the firewall now

systemctl disable firewalld     # do not start the firewall on boot

1. Upload the tarball to /opt/modules

2. Extract it to /opt/installs

tar -zxvf zookeeper-3.4.10.tar.gz -C /opt/installs/

3. Rename the extracted directory

mv zookeeper-3.4.10/ zookeeper

4. Edit the configuration file

Go into /opt/installs/zookeeper/conf and rename zoo_sample.cfg:

mv zoo_sample.cfg zoo.cfg

Edit the file so it reads:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/opt/installs/zookeeper/zkData
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=bigdata01:2888:3888
server.2=bigdata02:2888:3888
server.3=bigdata03:2888:3888

Remember to create the zkData directory under /opt/installs/zookeeper, and inside it a myid file whose content is this server's id (1 on bigdata01, matching the server.1 line above).
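On bigdata01 this comes down to two commands (the directory must match the dataDir set in zoo.cfg; only the myid content differs on the other nodes):

```shell
# Create the dataDir from zoo.cfg and write this server's id.
# On bigdata01 the id is 1, matching the server.1 line.
mkdir -p /opt/installs/zookeeper/zkData
echo 1 > /opt/installs/zookeeper/zkData/myid
```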

5. Configure the environment variables (append to /etc/profile):

export ZOOKEEPER_HOME=/opt/installs/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin

Reload the environment:

source /etc/profile

Next, set up the second and third nodes (xsync.sh and xcall.sh are the helper scripts used in this setup for syncing files to, and running commands on, every node):

xsync.sh /opt/installs/zookeeper

xsync.sh /etc/profile

xcall.sh source /etc/profile

On bigdata02, change myid to 2; on bigdata03, change myid to 3.
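The id assignment simply follows each host's position in the server list, so it can be scripted. A sketch of that mapping, with the actual ssh call left commented out because it requires the live cluster:

```shell
# Map each host to its myid: bigdata01 -> 1, bigdata02 -> 2, bigdata03 -> 3,
# mirroring the server.N lines in zoo.cfg.
HOSTS=( bigdata01 bigdata02 bigdata03 )
for i in "${!HOSTS[@]}"; do
    ID=$(( i + 1 ))
    echo "${HOSTS[$i]} -> myid ${ID}"
    # ssh "${HOSTS[$i]}" "echo ${ID} > /opt/installs/zookeeper/zkData/myid"
done
```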

6. Start zkServer on every node:

zkServer.sh start

Check the status:

zkServer.sh status

In a healthy three-node ensemble, one server reports Mode: leader and the other two report Mode: follower.

Because ZooKeeper runs on several nodes, operating them one by one is tedious, so we can write a script to manage the whole cluster.

Create zk.sh under /usr/local/bin:

#!/bin/bash

# Read the subcommand to run on every node
COMMAND=$1
if [ -z "$COMMAND" ]; then
    echo "please input your option in [start | stop | status]"
    exit 1
fi
if [ "$COMMAND" != "start" ] && [ "$COMMAND" != "stop" ] && [ "$COMMAND" != "status" ]; then
    echo "please input your option in [start | stop | status]"
    exit 1
fi

# All servers in the ensemble; $HOST and $COMMAND are expanded
# locally before the here-document is sent to the remote shell.
HOSTS=( bigdata01 bigdata02 bigdata03 )
for HOST in "${HOSTS[@]}"
do
    ssh -T "$HOST" << TERMINATOR
    echo "---------- $HOST ----------"
    zkServer.sh $COMMAND 2> /dev/null | grep -iv SSL
    exit
TERMINATOR
done

Make it executable: chmod u+x zk.sh

Usage:

zk.sh start

zk.sh stop

zk.sh status
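Once the cluster is up, liveness can also be checked without ssh via ZooKeeper's four-letter commands on the client port (enabled by default in 3.4.x): a healthy server answers ruok with imok. A minimal sketch, assuming nc (netcat) is installed on the machine you run it from:

```shell
# Send "ruok" to each node's client port; a healthy node replies "imok".
for HOST in bigdata01 bigdata02 bigdata03; do
    REPLY=$(echo ruok | nc -w 2 "$HOST" 2181 2>/dev/null)
    echo "$HOST: ${REPLY:-no response}"
done
```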
