ZooKeeper Cluster Deployment (Containers)


I. ZooKeeper Basics

ZooKeeper is an open-source coordination service for distributed applications (it manages distributed services).

The main features ZooKeeper provides include:

  • Configuration management
  • Distributed locks
  • Cluster management

II. ZooKeeper Cluster Deployment

1. Prerequisites

1.1 Disable the firewall and SELinux

bash
systemctl disable firewalld --now
setenforce 0
sed -i -r 's/SELINUX=[ep].*/SELINUX=disabled/g' /etc/selinux/config
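A quick way to see what the sed rule does is to run it against a throwaway copy instead of the real /etc/selinux/config; the `[ep]` character class matches both "enforcing" and "permissive", so either value gets rewritten to disabled. This sketch is safe to run on any machine:

```shell
# Demonstrate the substitution on a temporary file rather than the real config.
tmp=$(mktemp)
echo 'SELINUX=enforcing' > "$tmp"
sed -i -r 's/SELINUX=[ep].*/SELINUX=disabled/g' "$tmp"
cat "$tmp"    # prints SELINUX=disabled
rm -f "$tmp"
```

Note that `setenforce 0` only switches SELinux to permissive for the running session; the config edit is what makes it stick across reboots.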

1.2 Install Docker

(1) Install Docker

bash
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum makecache

# yum-utils provides the yum-config-manager utility
yum install -y yum-utils

# Add the Aliyun Docker CE repository via yum-config-manager
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 -y

(2) Configure Docker registry mirrors

bash
mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://vm1wbfhf.mirror.aliyuncs.com",
    "http://f1361db2.m.daocloud.io",
    "https://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://mirror.baidubce.com",
    "https://ustc-edu-cn.mirror.aliyuncs.com",
    "https://registry.cn-hangzhou.aliyuncs.com",
    "https://ccr.ccs.tencentyun.com",
    "https://hub.daocloud.io",
    "https://docker.shootchat.top",
    "https://do.nark.eu.org",
    "https://dockerproxy.com",
    "https://docker.m.daocloud.io",
    "https://dockerhub.timeweb.cloud"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
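A malformed daemon.json prevents the Docker daemon from starting, so it is worth validating the syntax before restarting Docker. This is a sketch that checks a trimmed example file with Python's stdlib JSON tool (it assumes `python3` is available; point it at /etc/docker/daemon.json in real use):

```shell
# Validate JSON syntax with python3 -m json.tool; a parse error exits nonzero.
tmp=$(mktemp)
cat <<'EOF' > "$tmp"
{
  "registry-mirrors": ["https://docker.m.daocloud.io"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool "$tmp" > /dev/null && echo "daemon.json syntax OK"
rm -f "$tmp"
```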

(3) Start Docker and enable it at boot

bash
systemctl enable docker --now
systemctl status docker

1.3 Install docker-compose

bash
DOCKER_COMPOSE_VERSION="v2.27.0"
sudo curl -L "https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose

chmod +x /usr/local/bin/docker-compose
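The download URL builds the GitHub release asset name from the host's OS and CPU architecture, which is why the same command works on different machines. You can preview what it expands to before downloading:

```shell
# Show the release asset name this host would request
# (e.g. docker-compose-Linux-x86_64 on a typical x86_64 Linux server).
asset="docker-compose-$(uname -s)-$(uname -m)"
echo "$asset"
```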

2. ZooKeeper Pseudo-Cluster Deployment (Optional)

A ZooKeeper pseudo-cluster runs all of the cluster's nodes on a single server.

2.1 Create the directory and add the docker-compose file

bash
mkdir -p /data/software/zookeeper-cluster
cd /data/software/zookeeper-cluster
vim docker-compose.yml

version: '3.4'

services:
  zk1:
    image: zookeeper:3.4.14
    restart: always
    hostname: zk1
    container_name: zk1
    ports:
    - 2181:2181
    - 2888:2888
    - 3888:3888
    volumes:
    - "./data/zk1-data:/data"
    - "./datalog/zk1-datalog:/datalog"
    - "./logs/zk1-logs:/logs"
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
      TZ: Asia/Shanghai
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 2181"]
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      - zookeeper-net

  zk2:
    image: zookeeper:3.4.14
    restart: always
    hostname: zk2
    container_name: zk2
    ports:
    - 22181:2181
    - 22888:2888
    - 23888:3888
    volumes:
    - "./data/zk2-data:/data"
    - "./datalog/zk2-datalog:/datalog"
    - "./logs/zk2-logs:/logs"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
      TZ: Asia/Shanghai
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 2181"]
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      - zookeeper-net

  zk3:
    image: zookeeper:3.4.14
    restart: always
    hostname: zk3
    container_name: zk3
    ports:
    - 32181:2181
    - 32888:2888
    - 33888:3888
    volumes:
    - "./data/zk3-data:/data"
    - "./datalog/zk3-datalog:/datalog"
    - "./logs/zk3-logs:/logs"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
      TZ: Asia/Shanghai
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 2181"]
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      - zookeeper-net
networks:
  zookeeper-net:
    driver: bridge
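Because all three nodes share one host IP, clients tell them apart by the mapped client ports (2181, 22181, 32181). A hypothetical connection string for this pseudo-cluster would therefore look like the following (`HOST_IP` is a placeholder for your server's address):

```shell
# Build the client connection string for the pseudo-cluster: one host IP,
# three distinct mapped client ports.
HOST_IP="127.0.0.1"
ZK_CONNECT="${HOST_IP}:2181,${HOST_IP}:22181,${HOST_IP}:32181"
echo "$ZK_CONNECT"    # prints 127.0.0.1:2181,127.0.0.1:22181,127.0.0.1:32181
```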

2.2 Start the ZooKeeper cluster:

bash
cd /data/software/zookeeper-cluster
docker-compose up -d
docker-compose logs -f

3. ZooKeeper Multi-Host Cluster Deployment (Optional)

3.1 Cluster environment

No.  IP address     Hostname
1    16.32.15.116   zk1
2    16.32.15.200   zk2
3    16.32.15.201   zk3

3.2 Operations on the zk1 host

(1) Create the directory and add the docker-compose file

bash
mkdir -p /data/software/zookeeper
cd /data/software/zookeeper
vim docker-compose.yml

version: '3.4'

services:
  zk1:                                  # differs per node [ zk2 | zk3 ]
    image: zookeeper:3.4.14
    restart: always
    hostname: zk1                       # differs per node [ zk2 | zk3 ]
    container_name: zk1                 # differs per node [ zk2 | zk3 ]
    network_mode: "host"
    volumes:
      - "./data:/data"
      - "./datalog:/datalog"
      - "./logs:/logs"
    environment:
      ZOO_MY_ID: 1                      # differs per node [ 2 | 3 ]
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
      TZ: Asia/Shanghai
      JVMFLAGS: "-Xmx1024m -Xms512m"
    extra_hosts:                        # map node hostnames to the host IP addresses
      - "zk1:16.32.15.116"
      - "zk2:16.32.15.200"
      - "zk3:16.32.15.201"
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 2181"]
      interval: 10s
      timeout: 5s
      retries: 3
    mem_limit: 2g                         # hard memory limit
    mem_reservation: 1500m                # soft memory limit
    logging:                              # cap container log size
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10"

(2) Start the zk1 container:

bash
cd /data/software/zookeeper
docker-compose up -d
docker-compose logs -f

3.3 Operations on the zk2 host

(1) Create the directory and add the docker-compose file

bash
mkdir -p /data/software/zookeeper
cd /data/software/zookeeper
vim docker-compose.yml

version: '3.4'

services:
  zk2:
    image: zookeeper:3.4.14
    restart: always
    hostname: zk2
    container_name: zk2
    network_mode: "host"
    volumes:
      - "./data:/data"
      - "./datalog:/datalog"
      - "./logs:/logs"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
      TZ: Asia/Shanghai
      JVMFLAGS: "-Xmx1024m -Xms512m"
    extra_hosts:                        # map node hostnames to the host IP addresses
      - "zk1:16.32.15.116"
      - "zk2:16.32.15.200"
      - "zk3:16.32.15.201"
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 2181"]
      interval: 10s
      timeout: 5s
      retries: 3
    mem_limit: 2g                         # hard memory limit
    mem_reservation: 1500m                # soft memory limit
    logging:                              # cap container log size
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10"

(2) Start the zk2 container:

bash
cd /data/software/zookeeper
docker-compose up -d
docker-compose logs -f

3.4 Operations on the zk3 host

(1) Create the directory and add the docker-compose file

bash
mkdir -p /data/software/zookeeper
cd /data/software/zookeeper
vim docker-compose.yml

version: '3.4'

services:
  zk3:
    image: zookeeper:3.4.14
    restart: always
    hostname: zk3
    container_name: zk3
    network_mode: "host"
    volumes:
      - "./data:/data"
      - "./datalog:/datalog"
      - "./logs:/logs"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
      TZ: Asia/Shanghai
      JVMFLAGS: "-Xmx1024m -Xms512m"
    extra_hosts:                        # map node hostnames to the host IP addresses
      - "zk1:16.32.15.116"
      - "zk2:16.32.15.200"
      - "zk3:16.32.15.201"
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 2181"]
      interval: 10s
      timeout: 5s
      retries: 3
    mem_limit: 2g                         # hard memory limit
    mem_reservation: 1500m                # soft memory limit
    logging:                              # cap container log size
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10"

(2) Start the zk3 container:

bash
cd /data/software/zookeeper
docker-compose up -d
docker-compose logs -f

III. ZooKeeper Cluster Verification

1. Check cluster roles

bash
yum -y install nc
zkList=(16.32.15.116 16.32.15.200 16.32.15.201)
for zkhost in ${zkList[@]}; do
  zkMode=$(echo stat | nc ${zkhost} 2181 | grep Mode)
  echo [${zkhost}] ${zkMode}
done
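The loop above sends ZooKeeper's `stat` four-letter command to each node and extracts the `Mode:` line, which reports whether the node is a leader or a follower. The same extraction can be exercised against canned `stat` output, with no live cluster needed (the sample output below is illustrative):

```shell
# Simulate the Mode extraction against a canned stat response.
statOutput=$'Zookeeper version: 3.4.14\nLatency min/avg/max: 0/0/19\nMode: follower\nNode count: 5'
zkMode=$(echo "$statOutput" | grep Mode)
echo "[16.32.15.116] ${zkMode}"    # prints [16.32.15.116] Mode: follower
```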

2. Data synchronization test

bash
# create data on the zk1 host
docker exec -it zk1 bin/zkCli.sh
create /test "QIN TEST 666...."

# read the data back on the zk2 host
docker exec -it zk2 bin/zkCli.sh
get /test

3. Leader election test

  1. Find the current leader's IP address

bash
zkList=(16.32.15.116 16.32.15.200 16.32.15.201)
for zkhost in ${zkList[@]}; do
  zkMode=$(echo stat | nc ${zkhost} 2181 | grep Mode)
  echo [${zkhost}] ${zkMode}
done

  2. Stop the leader to simulate a server failure

bash
# run on the leader host (zk2 in this example)
cd /data/software/zookeeper
docker-compose down

  3. Check that a new leader has been elected

bash
zkList=(16.32.15.116 16.32.15.200 16.32.15.201)
for zkhost in ${zkList[@]}; do
  zkMode=$(echo stat | nc ${zkhost} 2181 | grep Mode)
  echo [${zkhost}] ${zkMode}
done