Nowadays, business systems seem to depend on Redis caching almost everywhere; you can find Redis in all kinds of systems. For the sake of operational stability, Redis, just like a MySQL database, should be deployed in a highly available way.
I. The Various High-Availability Options for Redis
Common Redis high-availability options include the following:
- Redis Replication (master-replica replication): Redis replication provides data backup and read/write splitting. With a master node and one or more replica nodes configured, the master asynchronously copies its data to the replicas. When the master fails, a replica can be promoted to become the new master, achieving failover. Replication suits read-heavy workloads with high availability requirements.
- Redis Sentinel (sentinel mode): Sentinel is one of the officially recommended ways to make Redis highly available. One or more Sentinel processes monitor the state of the Redis master. When the master fails, Sentinel performs an automatic failover and promotes one of the replicas to be the new master. Sentinel also monitors the replicas and handles their recovery. Sentinel mode suits scenarios whose availability requirements are not extremely demanding.
- Redis Cluster (cluster mode): Redis Cluster is the official high-availability and distributed solution. By joining multiple Redis instances into one cluster, Redis Cluster provides automatic data sharding and high availability. Data is distributed across the nodes, which communicate with each other using a gossip protocol. When a node fails, Redis Cluster can automatically move its data to the remaining healthy nodes. Cluster mode suits scenarios with high availability and scalability requirements. Note that in cluster mode data can only be stored in db0.
- Third-party middleware/solutions: Besides the official options, third-party middleware such as Codis and Twemproxy can also be used to make Redis highly available. These projects add features and extensibility such as proxying, load balancing, and failure recovery.
When choosing a high-availability scheme, consider the system's availability needs, data-consistency requirements, network topology, and so on. Also make sure to test and monitor appropriately to keep the Redis deployment stable and highly available.
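For example, with Sentinel an application does not hard-code the master's address; it asks any Sentinel for the current master and reconnects after a failover. A minimal client-side sketch (the host, port, and master group name mymaster are placeholders here; the actual values are configured later in this article):
# Ask a Sentinel which node is currently the master of the group "mymaster"
redis-cli -h <sentinel-host> -p <sentinel-port> SENTINEL get-master-addr-by-name mymaster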
II. Install Redis with Docker Compose and Configure Sentinel Mode (Redis Sentinel)
1. Environment preparation
A cluster architecture normally uses an odd number of servers, so a proper cluster deployment needs at least 3 Linux servers. Our production environment only has two Linux servers, but we can use Docker to run multiple Redis services on them (Redis master 1, Redis replica 2, Redis replica 3):
- 192.168.0.210 (Redis master 1, Redis replica 2)
- 192.168.0.195 (Redis replica 3)
2. Prepare the Redis directories
- Prepare the Redis storage directories by running the following commands on the two servers (a shorter loop form is sketched below, after the directory descriptions).
On 192.168.0.210, create the directories and set permissions for Redis master 1:
mkdir -p /opt/container/redis/master/data /opt/container/redis/master/conf /opt/container/redis/master/logs /opt/container/redis/sentinel/data /opt/container/redis/sentinel/conf /opt/container/redis/sentinel/logs
chmod -R 777 /opt/container/redis/master/data /opt/container/redis/master/conf /opt/container/redis/master/logs /opt/container/redis/sentinel/data /opt/container/redis/sentinel/conf /opt/container/redis/sentinel/logs
On 192.168.0.210, create the directories and set permissions for Redis replica 2:
mkdir -p /opt/container/redis/slave1/data /opt/container/redis/slave1/conf /opt/container/redis/slave1/logs /opt/container/redis/sentinel1/data /opt/container/redis/sentinel1/conf /opt/container/redis/sentinel1/logs
chmod -R 777 /opt/container/redis/slave1/data /opt/container/redis/slave1/conf /opt/container/redis/slave1/logs /opt/container/redis/sentinel1/data /opt/container/redis/sentinel1/conf /opt/container/redis/sentinel1/logs
On 192.168.0.195, create the directories and set permissions for Redis replica 3:
mkdir -p /opt/container/redis/slave2/data /opt/container/redis/slave2/conf /opt/container/redis/slave2/logs /opt/container/redis/sentinel2/data /opt/container/redis/sentinel2/conf /opt/container/redis/sentinel2/logs
chmod -R 777 /opt/container/redis/slave2/data /opt/container/redis/slave2/conf /opt/container/redis/slave2/logs /opt/container/redis/sentinel2/data /opt/container/redis/sentinel2/conf /opt/container/redis/sentinel2/logs
/opt/container/redis/**/data holds the Redis data files
/opt/container/redis/**/conf holds the Redis configuration files
/opt/container/redis/**/logs holds the Redis log files
/opt/container/redis/sentinel*/data holds the Sentinel data files
/opt/container/redis/sentinel*/conf holds the Sentinel configuration files
/opt/container/redis/sentinel*/logs holds the Sentinel log files
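If you prefer, the per-node directories on each host can also be created with a single loop; a sketch for 192.168.0.210, assuming bash (note that the chmod here is broader than the per-directory commands above):
# Create data/conf/logs for the master, replica 1 and their two Sentinels in one pass
for node in master slave1 sentinel sentinel1; do
  mkdir -p /opt/container/redis/$node/{data,conf,logs}
done
chmod -R 777 /opt/container/redis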
3. docker-compose.yml files for the Redis servers
- docker-compose-redis.yml file for deploying the two Redis services (Redis master 1 and Redis replica 2) and their Sentinels on 192.168.0.210
version: '3'
services:
  ## Redis master configuration
  redisMaster:
    image: redis:latest
    restart: always
    container_name: redis-master
    command: redis-server /usr/local/etc/redis/redis.conf
    ## Map host port 26381 to container port 26381
    ports:
      - "26381:26381"
    volumes:
      ## Data directory; make sure it has been created beforehand
      - /opt/container/redis/master/data:/data
      - /opt/container/redis/master/conf/redis.conf:/usr/local/etc/redis/redis.conf
      - /opt/container/redis/master/logs:/logs
      - "/etc/localtime:/etc/localtime"
      - "/usr/share/zoneinfo/Asia/Shanghai:/etc/timezone"
  ## Redis replica configuration
  redisSlave1:
    image: redis:latest
    restart: always
    container_name: redis-slave-1
    command: redis-server /usr/local/etc/redis/redis.conf
    ## Map host port 26382 to container port 26382
    ports:
      - "26382:26382"
    volumes:
      ## Data directory; make sure it has been created beforehand
      - /opt/container/redis/slave1/data:/data
      - /opt/container/redis/slave1/conf/redis.conf:/usr/local/etc/redis/redis.conf
      - /opt/container/redis/slave1/logs:/logs
      - "/etc/localtime:/etc/localtime"
      - "/usr/share/zoneinfo/Asia/Shanghai:/etc/timezone"
  sentinel:
    image: redis:latest
    restart: always
    container_name: redis-sentinel
    ports:
      - 36381:36381
    command: redis-sentinel /opt/redis/sentinel/sentinel.conf
    volumes:
      - /opt/container/redis/sentinel/data:/data
      - /opt/container/redis/sentinel/conf/sentinel.conf:/opt/redis/sentinel/sentinel.conf
      - /opt/container/redis/sentinel/logs:/logs
      - "/etc/localtime:/etc/localtime"
      - "/usr/share/zoneinfo/Asia/Shanghai:/etc/timezone"
  sentinel1:
    image: redis:latest
    restart: always
    container_name: redis-sentinel-1
    ports:
      - 36382:36382
    command: redis-sentinel /opt/redis/sentinel/sentinel1.conf
    volumes:
      - /opt/container/redis/sentinel1/data:/data
      - /opt/container/redis/sentinel1/conf/sentinel1.conf:/opt/redis/sentinel/sentinel1.conf
      - /opt/container/redis/sentinel1/logs:/logs
      - "/etc/localtime:/etc/localtime"
      - "/usr/share/zoneinfo/Asia/Shanghai:/etc/timezone"
- docker-compose-redis-slave.yml file for deploying the Redis service (Redis replica 3) and its Sentinel on 192.168.0.195
version: '3'
services:
  ## Redis replica 2 configuration
  redisSlave2:
    image: redis:latest
    restart: always
    container_name: redis-slave-2
    command: redis-server /usr/local/etc/redis/redis.conf
    ## Map host port 26383 to container port 26383
    ports:
      - "26383:26383"
    volumes:
      ## Data directory; make sure it has been created beforehand
      - /opt/container/redis/slave2/data:/data
      - /opt/container/redis/slave2/conf/redis.conf:/usr/local/etc/redis/redis.conf
      - /opt/container/redis/slave2/logs:/logs
      - "/etc/localtime:/etc/localtime"
      - "/usr/share/zoneinfo/Asia/Shanghai:/etc/timezone"
  sentinel2:
    image: redis:latest
    restart: always
    container_name: redis-sentinel-2
    ports:
      - 36383:36383
    command: redis-sentinel /opt/redis/sentinel/sentinel2.conf
    volumes:
      - /opt/container/redis/sentinel2/data:/data
      - /opt/container/redis/sentinel2/conf/sentinel2.conf:/opt/redis/sentinel/sentinel2.conf
      - /opt/container/redis/sentinel2/logs:/logs
      - "/etc/localtime:/etc/localtime"
      - "/usr/share/zoneinfo/Asia/Shanghai:/etc/timezone"
4. Prepare the Redis configuration files
- Redis master configuration on 192.168.0.210: port 26381 and the Redis password. Upload redis.conf to /opt/container/redis/master/conf.
appendonly yes
port 26381
appendfilename appendonly.aof
appendfsync everysec
auto-aof-rewrite-min-size 10M
auto-aof-rewrite-percentage 100
requirepass "your-password"
masterauth "your-password"
replica-read-only no
- Redis replica configuration on 192.168.0.210: port 26382 and the password. Upload redis.conf to /opt/container/redis/slave1/conf.
appendonly yes
port 26382
appendfilename appendonly.aof
appendfsync everysec
auto-aof-rewrite-min-size 10M
auto-aof-rewrite-percentage 100
requirepass "your-password"
replicaof 192.168.0.210 26381
masterauth "your-password"
replica-read-only no
- Redis replica configuration on 192.168.0.195: port 26383 and the password. Upload redis.conf to /opt/container/redis/slave2/conf.
appendonly yes
port 26383
appendfilename appendonly.aof
appendfsync everysec
auto-aof-rewrite-min-size 10M
auto-aof-rewrite-percentage 100
requirepass "your-password"
replicaof 192.168.0.210 26381
masterauth "your-password"
replica-read-only no
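Once the containers are up (step 6 below), a quick way to confirm that a replica picked up its replicaof setting is to query it directly; a sketch against the replica on port 26382 (the password is a placeholder):
# role should be "slave", master_host/master_port should point at 192.168.0.210:26381 and master_link_status should be "up"
redis-cli -h 192.168.0.210 -p 26382 -a "your-password" info replication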
5. Prepare the Sentinel configuration files, one Sentinel configuration per Redis service
- Sentinel configuration for the master's Sentinel on 192.168.0.210: the sentinel instance runs on port 36381 and needs the Redis password. Upload sentinel.conf to /opt/container/redis/sentinel/conf.
# Port on which this Sentinel instance runs
port 36381
daemonize no
pidfile /var/run/redis-sentinel.pid
dir /tmp
sentinel monitor mymaster 192.168.0.210 26381 2
sentinel auth-pass mymaster "your-password"
# If the master does not answer within this many milliseconds, this Sentinel subjectively considers it down (default 30 seconds)
sentinel down-after-milliseconds mymaster 30000
# Maximum number of replicas that may resynchronize with the new master at the same time during a failover; the smaller this number, the longer the failover takes
sentinel parallel-syncs mymaster 1
# Failover timeout
sentinel failover-timeout mymaster 180000
sentinel deny-scripts-reconfig yes
- Sentinel configuration for the replica's Sentinel on 192.168.0.210: the sentinel1 instance runs on port 36382 and needs the Redis password. Upload sentinel1.conf to /opt/container/redis/sentinel1/conf.
# Port on which this Sentinel instance runs
port 36382
daemonize no
pidfile /var/run/redis-sentinel1.pid
dir /tmp
sentinel monitor mymaster 192.168.0.210 26381 2
sentinel auth-pass mymaster "your-password"
# If the master does not answer within this many milliseconds, this Sentinel subjectively considers it down (default 30 seconds)
sentinel down-after-milliseconds mymaster 30000
# Maximum number of replicas that may resynchronize with the new master at the same time during a failover; the smaller this number, the longer the failover takes
sentinel parallel-syncs mymaster 1
# Failover timeout
sentinel failover-timeout mymaster 180000
sentinel deny-scripts-reconfig yes
- Sentinel configuration for the replica's Sentinel on 192.168.0.195: the sentinel2 instance runs on port 36383 and needs the Redis password. Upload sentinel2.conf to /opt/container/redis/sentinel2/conf. The content mirrors the other two Sentinel files; only the port and pidfile differ.
# Port on which this Sentinel instance runs
port 36383
daemonize no
pidfile /var/run/redis-sentinel2.pid
dir /tmp
sentinel monitor mymaster 192.168.0.210 26381 2
sentinel auth-pass mymaster "your-password"
# If the master does not answer within this many milliseconds, this Sentinel subjectively considers it down (default 30 seconds)
sentinel down-after-milliseconds mymaster 30000
# Maximum number of replicas that may resynchronize with the new master at the same time during a failover; the smaller this number, the longer the failover takes
sentinel parallel-syncs mymaster 1
# Failover timeout
sentinel failover-timeout mymaster 180000
sentinel deny-scripts-reconfig yes
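Once the Sentinels are running (started in the next step), any one of them can be asked whether the configured quorum of 2 is actually reachable; a sketch against the first Sentinel:
# An OK reply means enough Sentinels agree and a failover for mymaster would be authorized
redis-cli -h 192.168.0.210 -p 36381 SENTINEL ckquorum mymaster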
6. Run the docker-compose startup command on both servers
Upload docker-compose-redis.yml to /opt/software (any directory of your choosing), then change into that directory and run the startup command:
docker-compose -f docker-compose-redis.yml up -d
[root@localhost software]# docker-compose -f docker-compose-redis.yml up -d
[+] Running 10/10
⠿ sentinel1 Pulled 15.7s
⠿ redisSlave1 Pulled 15.7s
⠿ sentinel Pulled 3.4s
⠿ a2abf6c4d29d Already exists 0.0s
⠿ c7a4e4382001 Pull complete 0.4s
⠿ 4044b9ba67c9 Pull complete 1.0s
⠿ c8388a79482f Pull complete 2.3s
⠿ 413c8bb60be2 Pull complete 2.3s
⠿ 1abfd3011519 Pull complete 2.4s
⠿ redisMaster Pulled 15.7s
WARN[0015] Found orphan containers ([mysql nginx]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
[+] Running 4/4
⠿ Container redis-sentinel-1 Started 0.5s
⠿ Container redis-slave-1 Started 0.6s
⠿ Container redis-master Started 0.6s
⠿ Container redis-sentinel Started 0.5s
Running docker ps shows that Redis and the Redis Sentinels have been installed and started successfully (a log-checking tip follows the output).
[root@localhost software]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f609322cabaa redis:latest "docker-entrypoint.s..." 59 seconds ago Up 58 seconds 6379/tcp, 0.0.0.0:26381->26381/tcp, :::26381->26381/tcp redis-master
18b75828b5b7 redis:latest "docker-entrypoint.s..." 59 seconds ago Up 58 seconds 6379/tcp, 0.0.0.0:36381->36381/tcp, :::36381->36381/tcp redis-sentinel
f0f9a037c7ae redis:latest "docker-entrypoint.s..." 59 seconds ago Up 58 seconds 6379/tcp, 0.0.0.0:26382->26382/tcp, :::26382->26382/tcp redis-slave-1
e51d3b0bc696 redis:latest "docker-entrypoint.s..." 59 seconds ago Up 58 seconds 6379/tcp, 0.0.0.0:36382->36382/tcp, :::36382->36382/tcp redis-sentinel-1
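If a container starts but Sentinel does not behave as expected (for example it cannot read its configuration file), the container logs usually explain why. A quick check, using the container names from the compose file above:
# Follow the Sentinel container's log output; the Redis containers can be checked the same way (e.g. docker logs redis-master)
docker logs -f redis-sentinel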
7. Check the Redis Sentinel status with commands
- Enter the Docker container
docker exec -it f609322cabaa bash
- Change into the Redis binary directory
cd /usr/local/bin
- Run the info Replication command to view the master's information. We can see connected_slaves is 2; slave0's IP shows as 172.18.0.1 because it is reported with the Docker bridge address.
./redis-cli -h 127.0.0.1 -p 26381 -a "your-password" info Replication
# Replication
role:master
connected_slaves:2
slave0:ip=172.18.0.1,port=26382,state=online,offset=700123,lag=0
slave1:ip=192.168.0.195,port=26383,state=online,offset=700123,lag=0
master_failover_state:no-failover
master_replid:9ee56f68d25b71158544f6cfafc677822c401ec3
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:700123
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:700123
- In the same way, enter the Sentinel container and run the INFO Sentinel command to view the Sentinel information. We can see two Redis replicas and three Sentinels (further SENTINEL subcommands are sketched after the output).
root@fba6d91e10f6:/usr/local/bin# ./redis-cli -h 127.0.0.1 -p 36381 INFO Sentinel
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=192.168.0.210:26381,slaves=2,sentinels=3
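Besides INFO Sentinel, the SENTINEL subcommands show more detail about what this Sentinel has discovered; a sketch from the same container (SENTINEL replicas needs a reasonably recent Redis; older versions use SENTINEL slaves instead):
# List the replicas of mymaster known to this Sentinel
./redis-cli -h 127.0.0.1 -p 36381 SENTINEL replicas mymaster
# List the other Sentinels monitoring mymaster
./redis-cli -h 127.0.0.1 -p 36381 SENTINEL sentinels mymaster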
8. Test Sentinel master/replica failover
- Manually stop the master Redis service
docker stop 5541698b65a1
- Enter a Sentinel container and run the INFO Sentinel command again. The master Redis address has switched to 192.168.0.195:26383.
root@fba6d91e10f6:/usr/local/bin# ./redis-cli -h 127.0.0.1 -p 36381 INFO Sentinel
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=192.168.0.195:26383,slaves=3,sentinels=3
- Enter the Redis container on 192.168.0.195 and run info Replication. This Redis node has become the master and has one online replica, slave0:ip=192.168.0.210,port=26382 (a sketch for bringing the old master back follows the output).
./redis-cli -h 127.0.0.1 -p 26383 -a "your-password" info Replication
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.0.210,port=26382,state=online,offset=894509,lag=0
master_failover_state:no-failover
master_replid:e9ebebdff0a5f7b7622c6c5fbfed7b1e44d84ae2
master_replid2:9ee56f68d25b71158544f6cfafc677822c401ec3
master_repl_offset:894509
second_repl_offset:884279
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:846
repl_backlog_histlen:893664
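If the old master container is started again, the Sentinels should reconfigure it as a replica of the new master instead of letting it take back its old role. A sketch to verify this (the container ID is the one stopped above; the password is a placeholder):
# Restart the stopped master container
docker start 5541698b65a1
# After a few seconds its role should be "slave" and master_host should point at 192.168.0.195
redis-cli -h 192.168.0.210 -p 26381 -a "your-password" info replication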
III. Configure and Test Redis Cluster Mode (Redis Cluster)
Redis Sentinel mode is still essentially master-replica replication, only with Sentinels added for automatic failover. Redis Cluster mode is a true clustered deployment: it shards data across the nodes, although it can only store data in db0.
1. Environment preparation
We reuse the Linux servers above to install the Redis cluster: on 192.168.0.210 we install three Redis services (redisCluster1, redisCluster2, redisCluster3); on 192.168.0.195 we install three Redis services (redisCluster4, redisCluster5, redisCluster6).
2. Prepare the Redis directories
- Prepare the Redis storage directories by running the following commands on the two servers.
On 192.168.0.210, create the cluster directories and set permissions:
mkdir -p /opt/container/redis/cluster1/data /opt/container/redis/cluster1/conf /opt/container/redis/cluster1/logs /opt/container/redis/cluster2/data /opt/container/redis/cluster2/conf /opt/container/redis/cluster2/logs /opt/container/redis/cluster3/data /opt/container/redis/cluster3/conf /opt/container/redis/cluster3/logs
chmod -R 777 /opt/container/redis/cluster1/data /opt/container/redis/cluster1/conf /opt/container/redis/cluster1/logs /opt/container/redis/cluster2/data /opt/container/redis/cluster2/conf /opt/container/redis/cluster2/logs /opt/container/redis/cluster3/data /opt/container/redis/cluster3/conf /opt/container/redis/cluster3/logs
On 192.168.0.195, create the cluster directories and set permissions:
mkdir -p /opt/container/redis/cluster4/data /opt/container/redis/cluster4/conf /opt/container/redis/cluster4/logs /opt/container/redis/cluster5/data /opt/container/redis/cluster5/conf /opt/container/redis/cluster5/logs /opt/container/redis/cluster6/data /opt/container/redis/cluster6/conf /opt/container/redis/cluster6/logs
chmod -R 777 /opt/container/redis/cluster4/data /opt/container/redis/cluster4/conf /opt/container/redis/cluster4/logs /opt/container/redis/cluster5/data /opt/container/redis/cluster5/conf /opt/container/redis/cluster5/logs /opt/container/redis/cluster6/data /opt/container/redis/cluster6/conf /opt/container/redis/cluster6/logs
- Redis configuration for 192.168.0.210 / 192.168.0.195: ports 12381 to 12386 and the password. Upload a redis.conf to each node's conf directory; the files differ only in the port, the announce-port/bus-port, and the cluster-announce-ip of the host. The file for the first node looks like this:
port 12381
cluster-enabled yes # enable cluster mode
cluster-config-file nodes-1.conf
cluster-node-timeout 5000
cluster-announce-ip 192.168.0.210
cluster-announce-port 12381
cluster-announce-bus-port 22381
bind 0.0.0.0
protected-mode no
appendonly yes
# To set a password, add the following:
# (password for accessing Redis)
requirepass your-password
# (password used between cluster nodes; must match the one above)
masterauth your-password
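Since the six node files differ only in a handful of values, they can also be generated with a small loop instead of editing each one by hand; a sketch for 192.168.0.210 (ports 12381 to 12383; on 192.168.0.195 change the IP and use ports 12384 to 12386, which write into cluster4 to cluster6; "your-password" is a placeholder):
# Generate redis.conf for cluster1..cluster3
for port in 12381 12382 12383; do
  n=$((port - 12380))
  cat > /opt/container/redis/cluster$n/conf/redis.conf <<EOF
port $port
cluster-enabled yes
cluster-config-file nodes-$n.conf
cluster-node-timeout 5000
cluster-announce-ip 192.168.0.210
cluster-announce-port $port
cluster-announce-bus-port $((port + 10000))
bind 0.0.0.0
protected-mode no
appendonly yes
requirepass your-password
masterauth your-password
EOF
done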
3. docker-compose.yml files for the Redis cluster servers
- docker-compose-redis-cluster.yml file for deploying three Redis services on 192.168.0.210
version: '3'
services:
  ## Redis node 1 configuration
  redisCluster1:
    image: redis:latest
    restart: always
    network_mode: "host"
    container_name: redis-cluster1
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      ## Data directory; make sure it has been created beforehand
      - /opt/container/redis/cluster1/data:/data
      - /opt/container/redis/cluster1/conf/redis.conf:/usr/local/etc/redis/redis.conf
      - /opt/container/redis/cluster1/logs:/logs
      - "/etc/localtime:/etc/localtime"
      - "/usr/share/zoneinfo/Asia/Shanghai:/etc/timezone"
  ## Redis node 2 configuration
  redisCluster2:
    image: redis:latest
    restart: always
    network_mode: "host"
    container_name: redis-cluster2
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      ## Data directory; make sure it has been created beforehand
      - /opt/container/redis/cluster2/data:/data
      - /opt/container/redis/cluster2/conf/redis.conf:/usr/local/etc/redis/redis.conf
      - /opt/container/redis/cluster2/logs:/logs
      - "/etc/localtime:/etc/localtime"
      - "/usr/share/zoneinfo/Asia/Shanghai:/etc/timezone"
  ## Redis node 3 configuration
  redisCluster3:
    image: redis:latest
    restart: always
    network_mode: "host"
    container_name: redis-cluster3
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      ## Data directory; make sure it has been created beforehand
      - /opt/container/redis/cluster3/data:/data
      - /opt/container/redis/cluster3/conf/redis.conf:/usr/local/etc/redis/redis.conf
      - /opt/container/redis/cluster3/logs:/logs
      - "/etc/localtime:/etc/localtime"
      - "/usr/share/zoneinfo/Asia/Shanghai:/etc/timezone"
- docker-compose-redis-cluster.yml file for deploying three Redis services on 192.168.0.195
version: '3'
services:
  ## Redis node 4 configuration
  redisCluster4:
    image: redis:latest
    restart: always
    network_mode: "host"
    container_name: redis-cluster4
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      ## Data directory; make sure it has been created beforehand
      - /opt/container/redis/cluster4/data:/data
      - /opt/container/redis/cluster4/conf/redis.conf:/usr/local/etc/redis/redis.conf
      - /opt/container/redis/cluster4/logs:/logs
      - "/etc/localtime:/etc/localtime"
      - "/usr/share/zoneinfo/Asia/Shanghai:/etc/timezone"
  ## Redis node 5 configuration
  redisCluster5:
    image: redis:latest
    restart: always
    network_mode: "host"
    container_name: redis-cluster5
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      ## Data directory; make sure it has been created beforehand
      - /opt/container/redis/cluster5/data:/data
      - /opt/container/redis/cluster5/conf/redis.conf:/usr/local/etc/redis/redis.conf
      - /opt/container/redis/cluster5/logs:/logs
      - "/etc/localtime:/etc/localtime"
      - "/usr/share/zoneinfo/Asia/Shanghai:/etc/timezone"
  ## Redis node 6 configuration
  redisCluster6:
    image: redis:latest
    restart: always
    network_mode: "host"
    container_name: redis-cluster6
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      ## Data directory; make sure it has been created beforehand
      - /opt/container/redis/cluster6/data:/data
      - /opt/container/redis/cluster6/conf/redis.conf:/usr/local/etc/redis/redis.conf
      - /opt/container/redis/cluster6/logs:/logs
      - "/etc/localtime:/etc/localtime"
      - "/usr/share/zoneinfo/Asia/Shanghai:/etc/timezone"
4. Run the docker-compose startup command on both servers
Upload docker-compose-redis-cluster.yml to /opt/software (any directory of your choosing), then change into that directory and run the startup command:
docker-compose -f docker-compose-redis-cluster.yml up -d
[+] Running 3/3
⠿ Container redis-cluster3 Started 0.5s
⠿ Container redis-cluster2 Started 0.4s
⠿ Container redis-cluster1 Started 0.5s
Running docker ps shows that the Redis cluster nodes have been installed and started successfully.
[root@localhost software]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
67a48962160d redis:latest "docker-entrypoint.s..." 10 seconds ago Up 9 seconds 6379/tcp, 0.0.0.0:52381->52381/tcp, :::52381->52381/tcp redis-cluster1
b10f669691b3 redis:latest "docker-entrypoint.s..." 10 seconds ago Up 9 seconds 6379/tcp, 0.0.0.0:52383->52383/tcp, :::52383->52383/tcp redis-cluster3
d3899c9c01f6 redis:latest "docker-entrypoint.s..." 10 seconds ago Up 9 seconds 6379/tcp, 0.0.0.0:52382->52382/tcp, :::52382->52382/tcp redis-cluster2
5. Create the Redis cluster and check it
- Log in to one of the Docker containers
docker exec -it 67a48962160d bash
- Create the whole Redis cluster with redis-cli (a follow-up check with redis-cli --cluster check is sketched after the output)
cd /usr/local/bin
root@localhost:/usr/local/bin# ./redis-cli --cluster create 192.168.0.210:12381 192.168.0.210:12382 192.168.0.210:12383 192.168.0.195:12384 192.168.0.195:12385 192.168.0.195:12386 --cluster-replicas 1 -a "your-password"
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.0.195:12386 to 192.168.0.210:12381
Adding replica 192.168.0.210:12383 to 192.168.0.195:12384
Adding replica 192.168.0.195:12385 to 192.168.0.210:12382
M: 061d7b1bf2f93df7cbf261e47a7981800d636e63 192.168.0.210:12381
slots:[0-5460] (5461 slots) master
M: 6cfbf9677ab802483ddc7cbb715fe770c8de884a 192.168.0.210:12382
slots:[10923-16383] (5461 slots) master
S: 5afc2e7d2da8f9d7f7ad6e99d5ad04ffbf5bdfe5 192.168.0.210:12383
replicates d54562615048044b43e368db71789829d76fa263
M: d54562615048044b43e368db71789829d76fa263 192.168.0.195:12384
slots:[5461-10922] (5462 slots) master
S: 9137f05da40ce173a975fa4a5e86e65b9d3fe4e3 192.168.0.195:12385
replicates 6cfbf9677ab802483ddc7cbb715fe770c8de884a
S: de04b0b6d207bd8653f2cb738cb47443c427810e 192.168.0.195:12386
replicates 061d7b1bf2f93df7cbf261e47a7981800d636e63
Can I set the above configuration? (type 'yes' to accept): yes
Nodes configuration updated
Assign a different config epoch to each node
Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
Performing Cluster Check (using node 192.168.0.210:12381)
M: 061d7b1bf2f93df7cbf261e47a7981800d636e63 192.168.0.210:12381
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 5afc2e7d2da8f9d7f7ad6e99d5ad04ffbf5bdfe5 192.168.0.210:12383
slots: (0 slots) slave
replicates d54562615048044b43e368db71789829d76fa263
S: 9137f05da40ce173a975fa4a5e86e65b9d3fe4e3 192.168.0.195:12385
slots: (0 slots) slave
replicates 6cfbf9677ab802483ddc7cbb715fe770c8de884a
M: d54562615048044b43e368db71789829d76fa263 192.168.0.195:12384
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: de04b0b6d207bd8653f2cb738cb47443c427810e 192.168.0.195:12386
slots: (0 slots) slave
replicates 061d7b1bf2f93df7cbf261e47a7981800d636e63
M: 6cfbf9677ab802483ddc7cbb715fe770c8de884a 192.168.0.210:12382
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
Check for open slots...
Check slots coverage...
[OK] All 16384 slots covered.
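The slot layout and master/replica mapping can be re-checked at any time with redis-cli's cluster support; a sketch (any node address works; the password is a placeholder):
# Verify slot coverage and the master/replica assignment from one node
./redis-cli --cluster check 192.168.0.210:12381 -a "your-password"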
- Log in to Redis and check the cluster information (a quick cross-slot write test is sketched after the output)
View the cluster information:
cluster info
View the node list:
cluster nodes
[root@localhost software]# docker exec -it 407e3847371a bash
root@localhost:/data# cd /usr/local/bin
root@localhost:/usr/local/bin# redis-cli -c -h 127.0.0.1 -p 12383 -a "your-password"
127.0.0.1:12383> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:4
cluster_stats_messages_ping_sent:173
cluster_stats_messages_pong_sent:163
cluster_stats_messages_meet_sent:1
cluster_stats_messages_sent:337
cluster_stats_messages_ping_received:163
cluster_stats_messages_pong_received:174
cluster_stats_messages_received:337
127.0.0.1:12383> cluster nodes
061d7b1bf2f93df7cbf261e47a7981800d636e63 192.168.0.210:12381@22381 master - 0 1696952366541 1 connected 0-5460
de04b0b6d207bd8653f2cb738cb47443c427810e 192.168.0.195:12386@22386 slave 061d7b1bf2f93df7cbf261e47a7981800d636e63 0 1696952366541 1 connected
5afc2e7d2da8f9d7f7ad6e99d5ad04ffbf5bdfe5 192.168.0.210:12383@22383 slave d54562615048044b43e368db71789829d76fa263 0 1696952366039 4 connected
9137f05da40ce173a975fa4a5e86e65b9d3fe4e3 192.168.0.195:12385@22385 slave 6cfbf9677ab802483ddc7cbb715fe770c8de884a 0 1696952367043 2 connected
6cfbf9677ab802483ddc7cbb715fe770c8de884a 192.168.0.210:12382@22382 master - 0 1696952365537 2 connected 10923-16383
d54562615048044b43e368db71789829d76fa263 192.168.0.195:12383@22384 myself,master - 0 1696952366000 4 connected 5461-10922
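To see the sharding in action, you can write a key through the same -c (cluster-aware) session: the key is hashed to one of the 16384 slots and, if that slot lives on another master, the client follows the MOVED redirection automatically. A sketch (the key name is arbitrary):
127.0.0.1:12383> cluster keyslot user:1001
127.0.0.1:12383> set user:1001 "hello"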
When building Redis Cluster this way, note that the docker-compose configuration must set network_mode to "host", and no port mapping is needed. If cluster creation keeps waiting for the nodes to join, configure the firewall to open the Redis cluster ports.
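As a sketch, if the servers run firewalld, the data ports and the corresponding cluster bus ports (data port + 10000) can be opened like this:
# Open the Redis data ports and the cluster bus ports, then reload the firewall rules
firewall-cmd --permanent --add-port=12381-12386/tcp --add-port=22381-22386/tcp
firewall-cmd --reload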