ZooKeeper Cluster Deployment (Containers)

I. ZooKeeper Basic Concepts

ZooKeeper is an open-source, distributed coordination service for distributed applications (it manages distributed services).

ZooKeeper's main features include:

  • Configuration management
  • Distributed locks
  • Cluster management

II. ZooKeeper Cluster Deployment

1. Prerequisites

1.1 Disable the firewall and SELinux

bash
systemctl disable firewalld --now
setenforce 0                                                          # permissive for the current boot
sed -i -r 's/SELINUX=[ep].*/SELINUX=disabled/g' /etc/selinux/config   # persist across reboots
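
The `[ep]` in the sed pattern matches either `SELINUX=enforcing` or `SELINUX=permissive`. You can sanity-check the rewrite on a scratch copy before touching the real config (a minimal sketch; `/tmp/selinux-config-test` is a hypothetical path, safe to run anywhere):

```shell
# Try the rewrite on a throwaway file first; SELINUXTYPE is left untouched
# because the pattern requires "SELINUX=" followed by e or p.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux-config-test
sed -i -r 's/SELINUX=[ep].*/SELINUX=disabled/g' /tmp/selinux-config-test
grep '^SELINUX=' /tmp/selinux-config-test   # → SELINUX=disabled
```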

1.2 Install Docker

(1) Install Docker

bash
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum makecache

# yum-utils provides the yum-config-manager utility
yum install -y yum-utils

# Add the Aliyun Docker CE repository with yum-config-manager
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 -y

(2) Configure Docker registry mirrors (for mainland China)

bash
mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://vm1wbfhf.mirror.aliyuncs.com",
    "http://f1361db2.m.daocloud.io",
    "https://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://mirror.baidubce.com",
    "https://ustc-edu-cn.mirror.aliyuncs.com",
    "https://registry.cn-hangzhou.aliyuncs.com",
    "https://ccr.ccs.tencentyun.com",
    "https://hub.daocloud.io",
    "https://docker.shootchat.top",
    "https://do.nark.eu.org",
    "https://dockerproxy.com",
    "https://docker.m.daocloud.io",
    "https://dockerhub.timeweb.cloud"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

(3) Start Docker and enable it at boot

bash
systemctl enable docker --now
systemctl status docker

1.3 Install docker-compose

bash
DOCKER_COMPOSE_VERSION="v2.27.0"
sudo curl -L "https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
  
chmod +x /usr/local/bin/docker-compose
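
The `uname` substitutions pick the release asset that matches the platform. You can print the URL the download will hit before running the curl (a small sketch):

```shell
# Print the release URL that curl fetches; uname -s/-m select the
# platform-specific binary (e.g. docker-compose-Linux-x86_64).
DOCKER_COMPOSE_VERSION="v2.27.0"
echo "https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)"
```

Afterwards, `docker-compose version` should report v2.27.0.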

2. ZooKeeper Pseudo-Cluster Deployment (optional)

A pseudo-cluster runs every node of the cluster on the same server.

2.1 Create the directory and add the docker-compose file

bash
mkdir /data/software/zookeeper-cluster -p
cd /data/software/zookeeper-cluster
vim docker-compose.yml

version: '3.4'

services:
  zk1:
    image: zookeeper:3.4.14
    restart: always
    hostname: zk1
    container_name: zk1
    ports:
    - 2181:2181
    - 2888:2888
    - 3888:3888
    volumes:
    - "./data/zk1-data:/data"
    - "./datalog/zk1-datalog:/datalog"
    - "./logs/zk1-logs:/logs"
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
      TZ: Asia/Shanghai
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 2181"]
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      - zookeeper-net

  zk2:
    image: zookeeper:3.4.14
    restart: always
    hostname: zk2
    container_name: zk2
    ports:
    - 22181:2181
    - 22888:2888
    - 23888:3888
    volumes:
    - "./data/zk2-data:/data"
    - "./datalog/zk2-datalog:/datalog"
    - "./logs/zk2-logs:/logs"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
      TZ: Asia/Shanghai
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 2181"]
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      - zookeeper-net

  zk3:
    image: zookeeper:3.4.14
    restart: always
    hostname: zk3
    container_name: zk3
    ports:
    - 32181:2181
    - 32888:2888
    - 33888:3888
    volumes:
    - "./data/zk3-data:/data"
    - "./datalog/zk3-datalog:/datalog"
    - "./logs/zk3-logs:/logs"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
      TZ: Asia/Shanghai
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 2181"]
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      - zookeeper-net
networks:
  zookeeper-net:
    driver: bridge
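
Each container's `ZOO_MY_ID` must match one `server.N` entry in `ZOO_SERVERS`; the official zookeeper image writes the ID into `/data/myid` at startup. A quick sketch of the correspondence, parsed straight from the string used above:

```shell
# Each server.N entry names one ensemble member; the container whose
# ZOO_MY_ID is N must be the one reachable at that entry's hostname.
ZOO_SERVERS="server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888"
for entry in $ZOO_SERVERS; do
  id=${entry%%=*};  id=${id#server.}      # "server.1" -> "1"
  host=${entry#*=}; host=${host%%:*}      # "zk1:2888:3888" -> "zk1"
  echo "container ${host} must set ZOO_MY_ID=${id}"
done
```

A mismatch between the two leaves the node unable to join the quorum.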

2.2 Start the ZK pseudo-cluster:

bash
cd /data/software/zookeeper-cluster
docker-compose up -d
docker-compose logs -f 

3. ZooKeeper Cluster Deployment (optional)

3.1 Cluster environment

No.  IP address     Hostname
1    16.32.15.116   zk1
2    16.32.15.200   zk2
3    16.32.15.201   zk3

3.2 Operations on host zk1

(1) Create the directory and add the docker-compose file

bash
mkdir /data/software/zookeeper -p
cd /data/software/zookeeper
vim docker-compose.yml

version: '3.4'

services:
  zk1:                                  # differs per node [ zk2 | zk3 ]
    image: zookeeper:3.4.14
    restart: always
    hostname: zk1                       # differs per node [ zk2 | zk3 ]
    container_name: zk1                 # differs per node [ zk2 | zk3 ]
    network_mode: "host"
    volumes:
      - "./data:/data"
      - "./datalog:/datalog"
      - "./logs:/logs"
    environment:
      ZOO_MY_ID: 1                      # differs per node [ 2 | 3 ]
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
      TZ: Asia/Shanghai
      JVMFLAGS: "-Xmx1024m -Xms512m"
    extra_hosts:                        # map each hostname to its host machine's IP
      - "zk1:16.32.15.116"
      - "zk2:16.32.15.200"
      - "zk3:16.32.15.201"
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 2181"]
      interval: 10s
      timeout: 5s
      retries: 3
    mem_limit: 2g                         # hard memory limit
    mem_reservation: 1500m                # soft memory limit
    logging:                              # cap container log size
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10"
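
The `JVMFLAGS` heap cap should stay well below `mem_limit`, because the JVM also needs off-heap and native memory; a container that hits the hard limit gets OOM-killed. With the values above, roughly half the limit is left as headroom (quick arithmetic check, numbers taken from the compose file):

```shell
# 2g hard limit vs. 1024m max heap, from the compose file above.
mem_limit_mb=2048
xmx_mb=1024
headroom_mb=$(( mem_limit_mb - xmx_mb ))
echo "heap ${xmx_mb}m inside a ${mem_limit_mb}m limit leaves ${headroom_mb}m for off-heap use"
```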

(2) Start the zk1 container:

bash
cd /data/software/zookeeper
docker-compose up -d
docker-compose logs -f 

3.3 Operations on host zk2

(1) Create the directory and add the docker-compose file

bash
mkdir /data/software/zookeeper -p
cd /data/software/zookeeper
vim docker-compose.yml

version: '3.4'

services:
  zk2:
    image: zookeeper:3.4.14
    restart: always
    hostname: zk2
    container_name: zk2
    network_mode: "host"
    volumes:
      - "./data:/data"
      - "./datalog:/datalog"
      - "./logs:/logs"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
      TZ: Asia/Shanghai
      JVMFLAGS: "-Xmx1024m -Xms512m"
    extra_hosts:                        # map each hostname to its host machine's IP
      - "zk1:16.32.15.116"
      - "zk2:16.32.15.200"
      - "zk3:16.32.15.201"
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 2181"]
      interval: 10s
      timeout: 5s
      retries: 3
    mem_limit: 2g                         # hard memory limit
    mem_reservation: 1500m                # soft memory limit
    logging:                              # cap container log size
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10"

(2) Start the zk2 container:

bash
cd /data/software/zookeeper
docker-compose up -d
docker-compose logs -f 

3.4 Operations on host zk3

(1) Create the directory and add the docker-compose file

bash
mkdir /data/software/zookeeper -p
cd /data/software/zookeeper
vim docker-compose.yml

version: '3.4'

services:
  zk3:
    image: zookeeper:3.4.14
    restart: always
    hostname: zk3
    container_name: zk3
    network_mode: "host"
    volumes:
      - "./data:/data"
      - "./datalog:/datalog"
      - "./logs:/logs"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
      TZ: Asia/Shanghai
      JVMFLAGS: "-Xmx1024m -Xms512m"
    extra_hosts:                        # map each hostname to its host machine's IP
      - "zk1:16.32.15.116"
      - "zk2:16.32.15.200"
      - "zk3:16.32.15.201"
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 2181"]
      interval: 10s
      timeout: 5s
      retries: 3
    mem_limit: 2g                         # hard memory limit
    mem_reservation: 1500m                # soft memory limit
    logging:                              # cap container log size
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10"       

(2) Start the zk3 container:

bash
cd /data/software/zookeeper
docker-compose up -d
docker-compose logs -f 

III. ZooKeeper Cluster Verification

1. Check each node's role

bash
yum -y install nc
zkList=(16.32.15.116 16.32.15.200 16.32.15.201)
for zkhost in "${zkList[@]}"; do
  zkMode=$(echo stat | nc ${zkhost} 2181 | grep Mode)
  echo "[${zkhost}] ${zkMode}"
done
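
The loop sends ZooKeeper's `stat` four-letter command over `nc` and keeps only the `Mode:` line; one node should report `leader` and the rest `follower`. On canned output (no live server needed) the filtering looks like:

```shell
# A trimmed example of what `echo stat | nc <host> 2181` returns;
# grep keeps just the role line.
sample_stat='Zookeeper version: 3.4.14
Latency min/avg/max: 0/0/0
Mode: leader
Node count: 4'
echo "$sample_stat" | grep Mode   # → Mode: leader
```

Note that on ZooKeeper 3.5+ the four-letter commands must be whitelisted via `4lw.commands.whitelist`; 3.4.x allows them by default.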

2. Data-sync test

bash
# Create a znode on host zk1
docker exec -it zk1 bin/zkCli.sh
create /test "QIN TEST 666...."

# Read it back on host zk2
docker exec -it zk2 bin/zkCli.sh
get /test

3. Leader-election test

  1. Find the current leader's IP address

bash
zkList=(16.32.15.116 16.32.15.200 16.32.15.201)
for zkhost in "${zkList[@]}"; do
  zkMode=$(echo stat | nc ${zkhost} 2181 | grep Mode)
  echo "[${zkhost}] ${zkMode}"
done

  2. Stop the leader to simulate a server failure

bash
# On the leader host (zk2 in this example)
cd /data/software/zookeeper
docker-compose down

  3. Confirm that a new leader has been elected

bash
zkList=(16.32.15.116 16.32.15.200 16.32.15.201)
for zkhost in "${zkList[@]}"; do
  zkMode=$(echo stat | nc ${zkhost} 2181 | grep Mode)
  echo "[${zkhost}] ${zkMode}"
done
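
Stopping the leader works here because the two surviving nodes still form a majority: an ensemble needs floor(n/2)+1 nodes up to serve writes. A sketch of the arithmetic:

```shell
# An n-node ensemble needs floor(n/2)+1 nodes up, so it tolerates
# floor((n-1)/2) failures; even sizes add no extra fault tolerance.
for n in 3 4 5; do
  echo "nodes=$n quorum=$(( n / 2 + 1 )) tolerated_failures=$(( (n - 1) / 2 ))"
done
```

This is why ensembles are usually sized 3, 5, or 7.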