Lab Overview
1. Redis Installation and Deployment Lab
The core of this lab is compiling Redis from source and registering it as a system service, which provides the base runtime environment for the replication, sentinel, and cluster labs that follow.
Steps and key points
- Dependencies: install the build prerequisites (make, gcc, initscripts) so the source tree can compile.
- Source build: download the Redis 7.4.8 tarball, unpack it, and run make && make install; edit install_server.sh (commenting out its systemd check) so the installer works on a systemd host.
- Service setup: run install_server.sh to configure the Redis service, choosing the port (6379), config file path (/etc/redis/6379.conf), log path, data directory, and so on; Redis ends up registered as a system service manageable with systemctl (start, stop, status).
- Verification: check with netstat that port 6379 is listening, confirming the service started normally.
Lab goal
Master the Redis source-install workflow, handle service adaptation on systemd systems, and stand up a manageable base Redis instance.
2. Redis Master-Replica Replication Lab
Replication is the basis for data redundancy and read/write splitting in Redis. This lab configures master and replica nodes so that the master takes writes while the replicas sync its data and stay read-only.
Steps and key points
- Master setup: on the master (redis-node1, IP 172.25.254.10), disable protected mode (protected-mode no), lift the IP bind restriction (bind * -::*), and restart the service.
- Replica setup: on the two replicas (redis-node2/3, IPs .20/.30), likewise disable protected mode and the bind restriction, add a replicaof <master-IP> <port> line pointing at the master, and restart.
- Status checks:
  - On the master, info replication shows 2 online replicas.
  - On each replica, info replication shows the master's IP, port, and sync state (master_link_status:up).
- Sync test: write a key on the master (set name lee) and read it back on the replicas; attempting a write on a replica fails with a READONLY error, confirming the read/write split.
Lab goal
Understand the core logic of Redis replication, master the configuration of both roles, and verify one-way data sync and read-only replicas, laying the groundwork for high availability (sentinel) and cluster.
3. Redis Sentinel Lab
Sentinel is the high-availability scheme built on top of replication; its core job is automatic failover when the master fails. This lab deploys sentinel processes to watch the master-replica group and verifies the switchover.
Steps and key points
- Sentinel configuration:
  - On the master, copy sentinel.conf to /etc/redis/ and edit it: disable protected mode, set the sentinel port (26379), monitor the master (sentinel monitor mymaster <master-IP> 6379 2, where 2 is the quorum of votes needed to declare the master down), and set the down-after timeout (10 s) and failover timeout (3 min).
  - Copy the sentinel config from the master to both replicas.
- Start the sentinels: every node (master and replicas) starts a sentinel process (redis-sentinel /etc/redis/sentinel.conf); the sentinels discover the replication topology on their own.
- Failover test:
  - Shut down the master's Redis by hand (SHUTDOWN); the sentinel log records the "subjectively down -> objectively down -> elect new master" sequence.
  - The sentinels promote one replica (here redis-node2, IP .20) to master, and the other replica (redis-node3) re-syncs from the new master.
  - After the old master (redis-node1) is brought back, it automatically becomes a replica of the new master and syncs its data.
- Verification: on the new master, info replication shows both the old master and the other replica attached as its replicas, confirming the failover succeeded.
Lab goal
Master sentinel configuration, understand its three core functions (monitoring, notification, automatic failover), and verify automatic HA switchover of a Redis master-replica group.
4. Redis Cluster Lab
Redis Cluster is the distributed scheme that shards data to get past single-node bottlenecks. This lab covers the full lifecycle, from creation through scale-out to scale-in, and verifies sharded storage and availability.
Steps and key points
- Base configuration: on all 6 nodes (IPs .10/.20/.30/.40/.50/.60), enable cluster mode (cluster-enabled yes), point at a cluster config file, set the node timeout, and restart.
- Cluster creation: run redis-cli --cluster create with the six IP:port pairs; --cluster-replicas 1 gives each master one replica. The 16384 hash slots are assigned automatically: masters .10/.20/.30 take 0-5460, 5461-10922, and 10923-16383, while replicas .40/.50/.60 attach to masters .30/.10/.20 respectively.
- Verification: redis-cli --cluster info shows the slot allocation, node count, and master/replica pairing; redis-cli --cluster check confirms that all 16384 slots are covered.
- Scale-out:
  - Add a new master (IP .70): join it with add-node; at this point it holds no slots.
  - Assign slots: use reshard to move 4096 slots from the existing masters to .70.
  - Add a new replica (IP .80): add-node --cluster-slave --cluster-master-id <node-id of .70> makes .80 a replica of .70; verify the even slot spread after the expansion.
- Scale-in:
  - Reclaim slots: reshard the 4096 slots held by .70 back to master .10.
  - Remove nodes: del-node deletes .70 (master) and then .80 (replica).
  - Verify: check that the slots are fully covered again and the node count is back to 6 (3 masters, 3 replicas).
Lab goal
Understand the cluster's hash-slot sharding model, master the core commands for creating, growing, and shrinking a cluster, and verify distributed storage with dynamic resharding for large-scale data.
Installing Redis
Official download: http://download.redis.io/releases/
Installing from rpm
[root@redis-node1 ~]# dnf install redis -y
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use "rhc" or "subscription-manager" to register.
Last metadata expiration check: 0:13:51 ago on Sun 08 Mar 2026 02:10:45 PM CST.
Dependencies resolved.
==========================================================================================
Package Arch Version Repository Size
==========================================================================================
Installing:
redis x86_64 6.2.17-1.el9_5 AppStream 1.3 M
Transaction Summary
==========================================================================================
Install 1 Package
Total size: 1.3 M
Installed size: 4.7 M
Downloading Packages:
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Running scriptlet: redis-6.2.17-1.el9_5.x86_64 1/1
Installing : redis-6.2.17-1.el9_5.x86_64 1/1
Running scriptlet: redis-6.2.17-1.el9_5.x86_64 1/1
Verifying : redis-6.2.17-1.el9_5.x86_64 1/1
Installed products updated.
Installed:
redis-6.2.17-1.el9_5.x86_64
Complete!
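With the rpm install, the packaged systemd unit is used directly, so the service can be enabled the usual way (sketch; assumes the unit name redis as shipped in the RHEL 9 AppStream package):

```
[root@redis-node1 ~]# systemctl enable --now redis
[root@redis-node1 ~]# redis-cli ping
PONG
```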
Installing from source
#Install the build dependencies
[root@redis-node1 ~]# dnf install make gcc initscripts -y
#Do not mix the rpm install and the source install on the same host
[root@redis-node1 ~]# wget https://download.redis.io/releases/redis-7.4.8.tar.gz
[root@redis-node1 ~]# tar zxf redis-7.4.8.tar.gz
[root@redis-node1 ~]# cd redis-7.4.8/
[root@redis-node1 redis-7.4.8]# make && make install
[root@redis-node1 redis-7.4.8]# cd utils/
[root@redis-node1 utils]# vim install_server.sh
[root@redis-node1 utils]# ./install_server.sh
Welcome to the redis service installer
This script will help you easily set up a running redis server
Please select the redis port for this instance: [6379]
Selecting default: 6379
Please select the redis config file name [/etc/redis/6379.conf]
Selected default - /etc/redis/6379.conf
Please select the redis log file name [/var/log/redis_6379.log]
Selected default - /var/log/redis_6379.log
Please select the data directory for this instance [/var/lib/redis/6379]
Selected default - /var/lib/redis/6379
Please select the redis executable path [/usr/local/bin/redis-server]
Selected config:
Port : 6379
Config file : /etc/redis/6379.conf
Log file : /var/log/redis_6379.log
Data dir : /var/lib/redis/6379
Executable : /usr/local/bin/redis-server
Cli Executable : /usr/local/bin/redis-cli
Is this ok? Then press ENTER to go on or Ctrl-C to abort.
Copied /tmp/6379.conf => /etc/init.d/redis_6379
Installing service...
Successfully added to chkconfig!
Successfully added to runlevels 345!
Starting Redis server...
Installation successful!
[root@redis-node1 utils]# service redis_6379 status
Redis is running (7890)
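For reference, the edit made in install_server.sh above targets its systemd guard. In the 7.x tree the block looks roughly like this (excerpt; it may differ slightly by version), and commenting it out lets the sysv-style installer proceed on a systemd host:

```sh
#bail if this system is managed by systemd
#_pid_1_exe="$(readlink -f /proc/1/exe)"
#if [ "${_pid_1_exe##*/}" = systemd ]
#then
#    echo "This systems seems to use systemd."
#    echo "Please take a look at the provided example service unit files in this directory, and adapt and install them. Sorry!"
#    exit 1
#fi
```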
[root@redis-node1 utils]# systemctl daemon-reload
[root@redis-node1 utils]# systemctl status redis_6379.service
○ redis_6379.service - LSB: start and stop redis_6379
Loaded: loaded (/etc/rc.d/init.d/redis_6379; generated)
Active: inactive (dead)
Docs: man:systemd-sysv-generator(8)
[root@redis-node1 utils]# systemctl start redis_6379.service
[root@redis-node1 utils]# systemctl status redis_6379.service
● redis_6379.service - LSB: start and stop redis_6379
Loaded: loaded (/etc/rc.d/init.d/redis_6379; generated)
Active: active (exited) since Sun 2026-03-08 15:44:35 CST; 3s ago
Docs: man:systemd-sysv-generator(8)
Process: 7934 ExecStart=/etc/rc.d/init.d/redis_6379 start (code=exited, status=0/SUCCESS)
CPU: 1ms
Mar 08 15:44:35 redis-node1 systemd[1]: Starting LSB: start and stop redis_6379...
Mar 08 15:44:35 redis-node1 redis_6379[7934]: /var/run/redis_6379.pid exists, process is already running or crashed
Mar 08 15:44:35 redis-node1 systemd[1]: Started LSB: start and stop redis_6379.
[root@redis-node1 utils]# netstat -antlpe | grep redis
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 0 44148 7890/redis-server 1
tcp6 0 0 ::1:6379 :::* LISTEN 0 44149 7890/redis-server 1
#Connect with the CLI
[root@redis-node1 ~]# redis-cli
127.0.0.1:6379>
Redis Master-Replica Replication
Configuring the Redis master
[root@redis-node1 ~]# vim /etc/redis/6379.conf
#bind 127.0.0.1 -::1
bind * -::*
protected-mode no
[root@redis-node1 ~]# systemctl restart redis_6379.service
Configuring the Redis replicas
#On redis-node2
[root@redis-node2 ~]# vim /etc/redis/6379.conf
#bind 127.0.0.1 -::1
bind * -::*
protected-mode no
replicaof 172.25.254.10 6379
[root@redis-node2 ~]# systemctl restart redis_6379.service
#On redis-node3
[root@redis-node3 ~]# vim /etc/redis/6379.conf
#bind 127.0.0.1 -::1
bind * -::*
protected-mode no
replicaof 172.25.254.10 6379
[root@redis-node3 ~]# systemctl restart redis_6379.service
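The same replication relationship can also be established at runtime without a restart (sketch; unlike the config-file entry, this setting does not survive a restart unless followed by CONFIG REWRITE):

```
[root@redis-node2 ~]# redis-cli
127.0.0.1:6379> REPLICAOF 172.25.254.10 6379
OK
```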
Checking status and testing
Checking status
[root@redis-node1 ~]# redis-cli
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=172.25.254.30,port=6379,state=online,offset=0,lag=0
slave1:ip=172.25.254.20,port=6379,state=online,offset=0,lag=0
master_failover_state:no-failover
master_replid:13c564f42f50338bb5038fb68a23036a0b3920f3
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:0
[root@redis-node2 ~]# redis-cli
127.0.0.1:6379> info replication
# Replication
role:slave
master_host:172.25.254.10
master_port:6379
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_read_repl_offset:70
slave_repl_offset:70
slave_priority:100
slave_read_only:1
replica_announced:1
connected_slaves:0
master_failover_state:no-failover
master_replid:13c564f42f50338bb5038fb68a23036a0b3920f3
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:70
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:70
[root@redis-node3 ~]# redis-cli
127.0.0.1:6379> info replication
# Replication
role:slave
master_host:172.25.254.10
master_port:6379
master_link_status:up
master_last_io_seconds_ago:2
master_sync_in_progress:0
slave_read_repl_offset:98
slave_repl_offset:98
slave_priority:100
slave_read_only:1
replica_announced:1
connected_slaves:0
master_failover_state:no-failover
master_replid:13c564f42f50338bb5038fb68a23036a0b3920f3
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:98
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:98
Testing data sync
[root@redis-node1 ~]# redis-cli
127.0.0.1:6379> set name lee
OK
127.0.0.1:6379> get name
"lee"
[root@redis-node2 ~]# redis-cli
127.0.0.1:6379> get name
"lee"
[root@redis-node3 ~]# redis-cli
127.0.0.1:6379> get name
"lee"
#Data cannot be written on a replica
[root@redis-node2 ~]# redis-cli
127.0.0.1:6379> get name
"lee"
127.0.0.1:6379> set test 123
(error) READONLY You can't write against a read only replica.
Configuring Redis Sentinel
#On the Redis master
[root@redis-node1 ~]# cd redis-7.4.8/
[root@redis-node1 redis-7.4.8]# cp -p sentinel.conf /etc/redis/
[root@redis-node1 redis-7.4.8]# vim /etc/redis/sentinel.conf
protected-mode no #disable protected mode
port 26379 #listening port
daemonize no #do not detach into the background
pidfile /var/run/redis-sentinel.pid #pid file for the sentinel process
loglevel notice #log level
sentinel monitor mymaster 172.25.254.10 6379 2 #have sentinel monitor the master; 2 means 2 votes (quorum) are needed to declare it down
sentinel down-after-milliseconds mymaster 10000 #master outage threshold: unreachable for 10 s marks the master as down
sentinel parallel-syncs mymaster 1 #number of replicas that may sync from the new master at the same time after a failover
sentinel failover-timeout mymaster 180000 #overall failover timeout: 3 minutes
#Disable protected-mode on the replicas
[root@redis-node2 ~]# vim /etc/redis/6379.conf
protected-mode no
[root@redis-node2 ~]# systemctl restart redis_6379.service
[root@redis-node3 ~]# vim /etc/redis/6379.conf
protected-mode no
[root@redis-node3 ~]# systemctl restart redis_6379.service
#Copy sentinel.conf from the master to the replicas
[root@redis-node1 ~]# scp /etc/redis/sentinel.conf root@172.25.254.20:/etc/redis/
sentinel.conf
[root@redis-node1 ~]# scp /etc/redis/sentinel.conf root@172.25.254.30:/etc/redis/
sentinel.conf
# On node2 and node3, respectively
vim /etc/redis/sentinel.conf
sentinel myid 42dcfae1f4deefeef4ba03dcf38832703701212a #change to a different value on each node
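Since sentinel.conf was copied verbatim, every node starts with the master's myid. Instead of hand-editing the 40-character hex id, the copied line can simply be deleted; sentinel writes a fresh unique id back into the file on its next start. A minimal sketch of that edit, demonstrated on a throwaway file:

```shell
# Sketch: remove the copied "sentinel myid" line so each sentinel
# regenerates its own unique id on startup.
tmpconf=$(mktemp)
cat > "$tmpconf" <<'EOF'
protected-mode no
port 26379
sentinel myid 42dcfae1f4deefeef4ba03dcf38832703701212a
sentinel monitor mymaster 172.25.254.10 6379 2
EOF
sed -i '/^sentinel myid /d' "$tmpconf"   # drop the copied id line
grep -c '^sentinel ' "$tmpconf"          # → 1 (only the monitor line remains)
```

On a real node the equivalent would be `sed -i '/^sentinel myid /d' /etc/redis/sentinel.conf`, run before the sentinel's first start.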
#Start a sentinel on every node
[root@redis-node1 ~]# redis-sentinel /etc/redis/sentinel.conf
2118:X 14 Mar 2026 10:51:14.694 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
2118:X 14 Mar 2026 10:51:14.694 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
2118:X 14 Mar 2026 10:51:14.694 * Redis version=7.4.8, bits=64, commit=00000000, modified=1, pid=2118, just started
2118:X 14 Mar 2026 10:51:14.694 * Configuration loaded
2118:X 14 Mar 2026 10:51:14.694 * Increased maximum number of open files to 10032 (it was originally set to 1024).
2118:X 14 Mar 2026 10:51:14.694 * monotonic clock: POSIX clock_gettime
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis Community Edition
.-`` .-```. ```\/ _.,_ ''-._ 7.4.8 (00000000/1) 64 bit
( ' , .-` | `, ) Running in sentinel mode
|`-._`-...-` __...-.``-._|'` _.-'| Port: 26379
| `-._ `._ / _.-' | PID: 2118
`-._ `-._ `-./ _.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' | https://redis.io
`-._ `-._`-.__.-'_.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'
`-._ `-.__.-' _.-'
`-._ _.-'
`-.__.-'
2118:X 14 Mar 2026 10:51:14.695 * Sentinel ID is 42dcfae1f4deefeef4ba03dcf38832703701212a
2118:X 14 Mar 2026 10:51:14.695 # +monitor master mymaster 172.25.254.10 6379 quorum 2
2118:X 14 Mar 2026 10:51:19.205 * +sentinel sentinel 42dcfae1f4deefeef4ba03dcf38832703701212b 172.25.254.20 26379 @ mymaster 172.25.254.10 6379
2118:X 14 Mar 2026 10:51:19.207 * Sentinel new configuration saved on disk
2118:X 14 Mar 2026 10:51:22.067 * +sentinel-address-switch master mymaster 172.25.254.10 6379 ip 172.25.254.30 port 26379 for 42dcfae1f4deefeef4ba03dcf38832703701212b
2118:X 14 Mar 2026 10:51:22.069 * Sentinel new configuration saved on disk
2118:X 14 Mar 2026 10:51:22.885 * +sentinel-address-switch master mymaster 172.25.254.10 6379 ip 172.25.254.20 port 26379 for 42dcfae1f4deefeef4ba03dcf38832703701212b
2118:X 14 Mar 2026 10:51:22.886 * Sentinel new configuration saved on disk
2118:X 14 Mar 2026 10:51:22.920 * +sentinel-address-switch master mymaster 172.25.254.10 6379 ip 172.25.254.30 port 26379 for 42dcfae1f4deefeef4ba03dcf38832703701212b
2118:X 14 Mar 2026 10:51:22.921 * Sentinel new configuration saved on disk
2118:X 14 Mar 2026 10:51:23.268 * +sentinel-address-switch master mymaster 172.25.254.10 6379 ip 172.25.254.20 port 26379 for 42dcfae1f4deefeef4ba03dcf38832703701212b
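Once the sentinels are up, the topology they agreed on can be queried from any node with the standard SENTINEL subcommands (sketch):

```
[root@redis-node1 ~]# redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster
1) "172.25.254.10"
2) "6379"
[root@redis-node1 ~]# redis-cli -p 26379 SENTINEL replicas mymaster
```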
Testing failover
#Shut down Redis on .10
[root@redis-node1 6379]# redis-cli
127.0.0.1:6379> SHUTDOWN
not connected> quit
#Failover messages
2118:X 14 Mar 2026 10:52:25.871 * +slave slave 172.25.254.10:6379 172.25.254.10 6379 @ mymaster 172.25.254.20 6379 #the master has been switched to .20
#Check status on .30
[root@redis-node3 ~]# redis-cli
127.0.0.1:6379> info replication
# Replication
role:slave
master_host:172.25.254.20
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_read_repl_offset:41337
slave_repl_offset:41337
slave_priority:100
slave_read_only:1
replica_announced:1
connected_slaves:0
master_failover_state:no-failover
master_replid:7cef6fecc37f62b401f6944e21647d9a411a58f9
master_replid2:057f3ef728d7c28003890756a4f03688775c99da
master_repl_offset:41337
second_repl_offset:28823
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:16878
repl_backlog_histlen:24460
#Bring Redis on .10 back up
[root@redis-node1 ~]# /etc/init.d/redis_6379 start
Starting Redis server...
#Check status on .20
[root@redis-node2 ~]# redis-cli
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=172.25.254.30,port=6379,state=online,offset=69178,lag=0
slave1:ip=172.25.254.10,port=6379,state=online,offset=69037,lag=1
master_failover_state:no-failover
master_replid:7cef6fecc37f62b401f6944e21647d9a411a58f9
master_replid2:057f3ef728d7c28003890756a4f03688775c99da
master_repl_offset:69178
second_repl_offset:28823
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:16878
repl_backlog_histlen:52301
Redis Cluster
Edit the config file on every node
[root@redis-node1 ~]# vim /etc/redis/6379.conf
masterauth "123456" #password used for master-replica auth inside the cluster
cluster-enabled yes #enable cluster mode
cluster-config-file nodes-6379.conf #cluster config file for this node
cluster-node-timeout 15000 #node timeout for cluster membership, in ms
[root@redis-node1 ~]# /etc/init.d/redis_6379 stop
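A note on the masterauth line above: it only supplies the password a replica sends when syncing from its master. For authentication to actually be enforced, each node would normally also set the server password (assumption here: the same password on every node, so any node can replicate from any other after a failover):

```
requirepass "123456"    # password clients (and syncing replicas) must present
masterauth  "123456"    # password this node uses when replicating from a master
```

With requirepass set, redis-cli commands against the cluster also need `-a 123456`.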
Creating the cluster
[root@redis-node1 ~]# redis-cli --cluster create 172.25.254.10:6379 172.25.254.20:6379 172.25.254.30:6379 172.25.254.40:6379 172.25.254.50:6379 172.25.254.60:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.25.254.50:6379 to 172.25.254.10:6379
Adding replica 172.25.254.60:6379 to 172.25.254.20:6379
Adding replica 172.25.254.40:6379 to 172.25.254.30:6379
M: 8db833f3c3bc6b8f93e87111f13f56d366f833a0 172.25.254.10:6379
slots:[0-5460] (5461 slots) master
M: ca599940209f55c07d06951480703bb0a5d8873a 172.25.254.20:6379
slots:[5461-10922] (5462 slots) master
M: d9300173b75149d3056f0ee3edec063f8ec66e9a 172.25.254.30:6379
slots:[10923-16383] (5461 slots) master
S: 32d797eb30094b77edb896abcc0b0fc91ccdb4fd 172.25.254.40:6379
replicates d9300173b75149d3056f0ee3edec063f8ec66e9a
S: ba6ef067c63d30c213493eb48d43427015018898 172.25.254.50:6379
replicates 8db833f3c3bc6b8f93e87111f13f56d366f833a0
S: c939a04358edc1ce7a1c1a44561d77fb402025fd 172.25.254.60:6379
replicates ca599940209f55c07d06951480703bb0a5d8873a
Can I set the above configuration? (type 'yes' to accept): yes #type yes here
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..
>>> Performing Cluster Check (using node 172.25.254.10:6379)
M: 8db833f3c3bc6b8f93e87111f13f56d366f833a0 172.25.254.10:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: c939a04358edc1ce7a1c1a44561d77fb402025fd 172.25.254.60:6379
slots: (0 slots) slave
replicates ca599940209f55c07d06951480703bb0a5d8873a
M: d9300173b75149d3056f0ee3edec063f8ec66e9a 172.25.254.30:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: ca599940209f55c07d06951480703bb0a5d8873a 172.25.254.20:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: ba6ef067c63d30c213493eb48d43427015018898 172.25.254.50:6379
slots: (0 slots) slave
replicates 8db833f3c3bc6b8f93e87111f13f56d366f833a0
S: 32d797eb30094b77edb896abcc0b0fc91ccdb4fd 172.25.254.40:6379
slots: (0 slots) slave
replicates d9300173b75149d3056f0ee3edec063f8ec66e9a
[OK] All nodes agree about slots configuration.
#Check cluster status
[root@redis-node1 ~]# redis-cli --cluster info 172.25.254.10:6379
172.25.254.10:6379 (8db833f3...) -> 0 keys | 5461 slots | 1 slaves.
172.25.254.30:6379 (d9300173...) -> 0 keys | 5461 slots | 1 slaves.
172.25.254.20:6379 (ca599940...) -> 0 keys | 5462 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
#Show cluster info
[root@redis-node1 ~]# redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:168
cluster_stats_messages_pong_sent:163
cluster_stats_messages_sent:331
cluster_stats_messages_ping_received:158
cluster_stats_messages_pong_received:168
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:331
total_cluster_links_buffer_limit_exceeded:0
#Check the current cluster
[root@redis-node1 ~]# redis-cli --cluster check 172.25.254.10:6379
172.25.254.10:6379 (8db833f3...) -> 0 keys | 5461 slots | 1 slaves.
172.25.254.30:6379 (d9300173...) -> 0 keys | 5461 slots | 1 slaves.
172.25.254.20:6379 (ca599940...) -> 0 keys | 5462 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 172.25.254.10:6379)
M: 8db833f3c3bc6b8f93e87111f13f56d366f833a0 172.25.254.10:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: c939a04358edc1ce7a1c1a44561d77fb402025fd 172.25.254.60:6379
slots: (0 slots) slave
replicates ca599940209f55c07d06951480703bb0a5d8873a
M: d9300173b75149d3056f0ee3edec063f8ec66e9a 172.25.254.30:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: ca599940209f55c07d06951480703bb0a5d8873a 172.25.254.20:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: ba6ef067c63d30c213493eb48d43427015018898 172.25.254.50:6379
slots: (0 slots) slave
replicates 8db833f3c3bc6b8f93e87111f13f56d366f833a0
S: 32d797eb30094b77edb896abcc0b0fc91ccdb4fd 172.25.254.40:6379
slots: (0 slots) slave
replicates d9300173b75149d3056f0ee3edec063f8ec66e9a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
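Keys are mapped onto the 16384 slots as HASH_SLOT = CRC16(key) mod 16384, which is what produces the ranges shown above. Which slot a key lands in, and the redirection between masters, can be checked from any node (sketch; -c makes redis-cli follow redirects, and the slot number shown is illustrative):

```
[root@redis-node1 ~]# redis-cli -c -h 172.25.254.10
172.25.254.10:6379> CLUSTER KEYSLOT name
(integer) 5798
172.25.254.10:6379> set name lee
-> Redirected to slot [5798] located at 172.25.254.20:6379
OK
```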
Scaling the cluster out
#Add a master
[root@redis-node1 ~]# redis-cli --cluster add-node 172.25.254.70:6379 172.25.254.10:6379
[root@redis-node1 ~]# redis-cli --cluster check 172.25.254.10:6379
172.25.254.10:6379 (8db833f3...) -> 0 keys | 5461 slots | 1 slaves.
172.25.254.70:6379 (dfabfe07...) -> 0 keys | 0 slots | 0 slaves.
172.25.254.30:6379 (d9300173...) -> 0 keys | 5461 slots | 1 slaves.
172.25.254.20:6379 (ca599940...) -> 1 keys | 5462 slots | 1 slaves.
[OK] 1 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 172.25.254.10:6379)
M: 8db833f3c3bc6b8f93e87111f13f56d366f833a0 172.25.254.10:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: dfabfe07170ac9b5d20a5a7a70c836877bd64504 172.25.254.70:6379
slots: (0 slots) master
S: c939a04358edc1ce7a1c1a44561d77fb402025fd 172.25.254.60:6379
slots: (0 slots) slave
replicates ca599940209f55c07d06951480703bb0a5d8873a
M: d9300173b75149d3056f0ee3edec063f8ec66e9a 172.25.254.30:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: ca599940209f55c07d06951480703bb0a5d8873a 172.25.254.20:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: ba6ef067c63d30c213493eb48d43427015018898 172.25.254.50:6379
slots: (0 slots) slave
replicates 8db833f3c3bc6b8f93e87111f13f56d366f833a0
S: 32d797eb30094b77edb896abcc0b0fc91ccdb4fd 172.25.254.40:6379
slots: (0 slots) slave
replicates d9300173b75149d3056f0ee3edec063f8ec66e9a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
#Assign slots to the newly added node
[root@redis-node1 ~]# redis-cli --cluster reshard 172.25.254.10:6379
>>> Performing Cluster Check (using node 172.25.254.10:6379)
M: 8db833f3c3bc6b8f93e87111f13f56d366f833a0 172.25.254.10:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: dfabfe07170ac9b5d20a5a7a70c836877bd64504 172.25.254.70:6379
slots: (0 slots) master
S: c939a04358edc1ce7a1c1a44561d77fb402025fd 172.25.254.60:6379
slots: (0 slots) slave
replicates ca599940209f55c07d06951480703bb0a5d8873a
M: d9300173b75149d3056f0ee3edec063f8ec66e9a 172.25.254.30:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: ca599940209f55c07d06951480703bb0a5d8873a 172.25.254.20:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: ba6ef067c63d30c213493eb48d43427015018898 172.25.254.50:6379
slots: (0 slots) slave
replicates 8db833f3c3bc6b8f93e87111f13f56d366f833a0
S: 32d797eb30094b77edb896abcc0b0fc91ccdb4fd 172.25.254.40:6379
slots: (0 slots) slave
replicates d9300173b75149d3056f0ee3edec063f8ec66e9a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096 #number of slots to assign
What is the receiving node ID? dfabfe07170ac9b5d20a5a7a70c836877bd64504
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1: all #where the slots come from
Ready to move 4096 slots.
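The interactive prompts above can also be answered on the command line, which is handy for scripting (sketch; flags as in the redis-cli 7.x reshard options):

```
[root@redis-node1 ~]# redis-cli --cluster reshard 172.25.254.10:6379 \
      --cluster-from all \
      --cluster-to dfabfe07170ac9b5d20a5a7a70c836877bd64504 \
      --cluster-slots 4096 --cluster-yes
```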
#Add a replica for the new master
[root@redis-node1 ~]# redis-cli --cluster add-node 172.25.254.80:6379 172.25.254.10:6379 --cluster-slave --cluster-master-id dfabfe07170ac9b5d20a5a7a70c836877bd64504
[root@redis-node1 ~]# redis-cli --cluster check 172.25.254.10:6379
172.25.254.10:6379 (8db833f3...) -> 0 keys | 4096 slots | 1 slaves.
172.25.254.70:6379 (dfabfe07...) -> 1 keys | 4096 slots | 1 slaves.
172.25.254.30:6379 (d9300173...) -> 0 keys | 4096 slots | 1 slaves.
172.25.254.20:6379 (ca599940...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 1 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 172.25.254.10:6379)
M: 8db833f3c3bc6b8f93e87111f13f56d366f833a0 172.25.254.10:6379
slots:[1365-5460] (4096 slots) master
1 additional replica(s)
S: 1176ee294e6b5071ca57e93374d04ac22028daed 172.25.254.80:6379
slots: (0 slots) slave
replicates dfabfe07170ac9b5d20a5a7a70c836877bd64504
M: dfabfe07170ac9b5d20a5a7a70c836877bd64504 172.25.254.70:6379
slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
1 additional replica(s)
S: c939a04358edc1ce7a1c1a44561d77fb402025fd 172.25.254.60:6379
slots: (0 slots) slave
replicates ca599940209f55c07d06951480703bb0a5d8873a
M: d9300173b75149d3056f0ee3edec063f8ec66e9a 172.25.254.30:6379
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
M: ca599940209f55c07d06951480703bb0a5d8873a 172.25.254.20:6379
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
S: ba6ef067c63d30c213493eb48d43427015018898 172.25.254.50:6379
slots: (0 slots) slave
replicates 8db833f3c3bc6b8f93e87111f13f56d366f833a0
S: 32d797eb30094b77edb896abcc0b0fc91ccdb4fd 172.25.254.40:6379
slots: (0 slots) slave
replicates d9300173b75149d3056f0ee3edec063f8ec66e9a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Scaling the cluster in
#Reclaim the cluster slots back to host .10
[root@redis-node1 ~]# redis-cli --cluster reshard 172.25.254.10:6379
>>> Performing Cluster Check (using node 172.25.254.10:6379)
M: 8db833f3c3bc6b8f93e87111f13f56d366f833a0 172.25.254.10:6379
slots:[1365-5460] (4096 slots) master
1 additional replica(s)
S: 1176ee294e6b5071ca57e93374d04ac22028daed 172.25.254.80:6379
slots: (0 slots) slave
replicates dfabfe07170ac9b5d20a5a7a70c836877bd64504
M: dfabfe07170ac9b5d20a5a7a70c836877bd64504 172.25.254.70:6379
slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
1 additional replica(s)
S: c939a04358edc1ce7a1c1a44561d77fb402025fd 172.25.254.60:6379
slots: (0 slots) slave
replicates ca599940209f55c07d06951480703bb0a5d8873a
M: d9300173b75149d3056f0ee3edec063f8ec66e9a 172.25.254.30:6379
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
M: ca599940209f55c07d06951480703bb0a5d8873a 172.25.254.20:6379
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
S: ba6ef067c63d30c213493eb48d43427015018898 172.25.254.50:6379
slots: (0 slots) slave
replicates 8db833f3c3bc6b8f93e87111f13f56d366f833a0
S: 32d797eb30094b77edb896abcc0b0fc91ccdb4fd 172.25.254.40:6379
slots: (0 slots) slave
replicates d9300173b75149d3056f0ee3edec063f8ec66e9a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? 8db833f3c3bc6b8f93e87111f13f56d366f833a0 #node-id of .10
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1: dfabfe07170ac9b5d20a5a7a70c836877bd64504 #node-id of .70
Source node #2: done
#Remove nodes .70 and .80
[root@redis-node1 ~]# redis-cli --cluster del-node 172.25.254.10:6379 dfabfe07170ac9b5d20a5a7a70c836877bd64504
>>> Removing node dfabfe07170ac9b5d20a5a7a70c836877bd64504 from cluster 172.25.254.10:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
[root@redis-node1 ~]# redis-cli --cluster del-node 172.25.254.10:6379 1176ee294e6b5071ca57e93374d04ac22028daed
>>> Removing node 1176ee294e6b5071ca57e93374d04ac22028daed from cluster 172.25.254.10:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
[root@redis-node1 ~]# redis-cli --cluster check 172.25.254.10:6379
172.25.254.10:6379 (8db833f3...) -> 1 keys | 8192 slots | 1 slaves.
172.25.254.30:6379 (d9300173...) -> 0 keys | 4096 slots | 1 slaves.
172.25.254.20:6379 (ca599940...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 1 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 172.25.254.10:6379)
M: 8db833f3c3bc6b8f93e87111f13f56d366f833a0 172.25.254.10:6379
slots:[0-6826],[10923-12287] (8192 slots) master
1 additional replica(s)
S: c939a04358edc1ce7a1c1a44561d77fb402025fd 172.25.254.60:6379
slots: (0 slots) slave
replicates ca599940209f55c07d06951480703bb0a5d8873a
M: d9300173b75149d3056f0ee3edec063f8ec66e9a 172.25.254.30:6379
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
M: ca599940209f55c07d06951480703bb0a5d8873a 172.25.254.20:6379
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
S: ba6ef067c63d30c213493eb48d43427015018898 172.25.254.50:6379
slots: (0 slots) slave
replicates 8db833f3c3bc6b8f93e87111f13f56d366f833a0
S: 32d797eb30094b77edb896abcc0b0fc91ccdb4fd 172.25.254.40:6379
slots: (0 slots) slave
replicates d9300173b75149d3056f0ee3edec063f8ec66e9a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.