Table of Contents
I. Introduction to Redis
II. Installing and Deploying Redis
III. Redis Master-Replica Replication
IV. Redis Sentinel Mode (High Availability)
V. Redis Cluster
I. Introduction to Redis
Definition: Redis is an open-source, high-performance in-memory key-value database. It supports persistence, combining fast reads and writes with data safety.
Advantages:
Fast: memory-based, implemented in C
Single-threaded
Persistence
Multiple data structures
Client support for many programming languages
Simple: a small, compact codebase
Master-replica replication
Supports high availability and distributed deployment
Why is a single thread so fast?
Pure in-memory operations
Non-blocking I/O (multiplexing)
No thread-switching or lock-contention overhead
II. Installing and Deploying Redis
Prepare three hosts:
redis-node1: 172.25.254.10
redis-node2: 172.25.254.20
redis-node3: 172.25.254.30
1. Install dependencies
bash
[root@redis-node1 ~]# dnf install make gcc initscripts -y
[root@redis-node2 ~]# dnf install make gcc initscripts -y
[root@redis-node3 ~]# dnf install make gcc initscripts -y
2. Build from source on redis-node1
bash
[root@redis-node1 ~]# wget https://download.redis.io/releases/redis-7.4.8.tar.gz
[root@redis-node1 ~]# ls
anaconda-ks.cfg redis-7.4.8.tar.gz
[root@redis-node1 ~]# tar zxf redis-7.4.8.tar.gz
[root@redis-node1 ~]# cd redis-7.4.8/
[root@redis-node1 redis-7.4.8]# ls
00-RELEASENOTES deps MANIFESTO runtest SECURITY.md TLS.md
BUGS INSTALL README.md runtest-cluster sentinel.conf utils
CODE_OF_CONDUCT.md LICENSE.txt redis.conf runtest-moduleapi src
CONTRIBUTING.md Makefile REDISCONTRIBUTIONS.txt runtest-sentinel tests
# Run the build
[root@redis-node1 redis-7.4.8]# make
[root@redis-node1 redis-7.4.8]# make install
# Start redis
[root@redis-node1 redis-7.4.8]# cd utils/
[root@redis-node1 utils]# ls
build-static-symbols.tcl graphs reply_schema_linter.js
cluster_fail_time.tcl hyperloglog req-res-log-validator.py
corrupt_rdb.c install_server.sh req-res-validator
create-cluster lru speed-regression.tcl
generate-command-code.py redis-copy.rb srandmember
generate-commands-json.py redis_init_script systemd-redis_multiple_servers@.service
generate-fmtargs.py redis_init_script.tpl systemd-redis_server.service
generate-module-api-doc.rb redis-sha1.rb tracking_collisions.c
gen-test-certs.sh releasetools whatisdoing.sh
[root@redis-node1 utils]# ./install_server.sh
Welcome to the redis service installer
This script will help you easily set up a running redis server
This systems seems to use systemd. # the script detects systemd and bails out
Please take a look at the provided example service unit files in this directory, and adapt and install them. Sorry!
[root@redis-node1 utils]# vim install_server.sh
##bail if this system is managed by systemd
#_pid_1_exe="$(readlink -f /proc/1/exe)"
#if [ "${_pid_1_exe##*/}" = systemd ]
#then
# echo "This systems seems to use systemd."
# echo "Please take a look at the provided example service unit files in this directory, and adapt and install them. Sorry!"
# exit 1
#fi
# With the systemd check commented out, run the installer again
[root@redis-node1 utils]# ./install_server.sh
Welcome to the redis service installer
This script will help you easily set up a running redis server
Please select the redis port for this instance: [6379]
Selecting default: 6379
Please select the redis config file name [/etc/redis/6379.conf] /etc/redis/redis.conf
Please select the redis log file name [/var/log/redis_6379.log]
Selected default - /var/log/redis_6379.log
Please select the data directory for this instance [/var/lib/redis/6379]
Selected default - /var/lib/redis/6379
Please select the redis executable path [/usr/local/bin/redis-server]
Selected config:
Port : 6379
Config file : /etc/redis/redis.conf
Log file : /var/log/redis_6379.log
Data dir : /var/lib/redis/6379
Executable : /usr/local/bin/redis-server
Cli Executable : /usr/local/bin/redis-cli
Is this ok? Then press ENTER to go on or Ctrl-C to abort.
Copied /tmp/6379.conf => /etc/init.d/redis_6379
Installing service...
Successfully added to chkconfig!
Successfully added to runlevels 345!
Starting Redis server...
Installation successful!
# Either of the following start methods also works; the message below appears only because the installer already started Redis
[root@redis-node1 utils]# /etc/init.d/redis_6379 start
/var/run/redis_6379.pid exists, process is already running or crashed
# Enable start at boot
[root@redis-node1 utils]# chkconfig redis_6379 on
# Managing the service through systemd
[root@redis-node1 utils]# systemctl daemon-reload
[root@redis-node1 utils]# systemctl start redis_6379.service
[root@redis-node1 utils]# systemctl status redis_6379.service
● redis_6379.service - LSB: start and stop redis_6379
Loaded: loaded (/etc/rc.d/init.d/redis_6379; generated)
Active: active (running) since Wed 2026-03-11 10:57:47 CST; 1min 50s ago
Docs: man:systemd-sysv-generator(8)
Tasks: 6 (limit: 10858)
Memory: 7.1M
CPU: 386ms
CGroup: /system.slice/redis_6379.service
└─31452 "/usr/local/bin/redis-server 127.0.0.1:6379"
[root@redis-node1 utils]# netstat -antlupe | grep redis
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 0 113437 31452/redis-server
tcp6 0 0 ::1:6379 :::* LISTEN 0 113438 31452/redis-server
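Instead of patching install_server.sh, Redis also ships example systemd units in utils/ (systemd-redis_server.service in the listing above). A minimal sketch of a native unit, assuming the source-built paths used here; the shipped example is more complete, so prefer adapting it:

```ini
# /etc/systemd/system/redis.service -- minimal sketch; adapt paths and users to your setup
[Unit]
Description=Redis data structure server
After=network-online.target

[Service]
# --daemonize no keeps redis-server in the foreground so systemd can supervise it
ExecStart=/usr/local/bin/redis-server /etc/redis/redis.conf --daemonize no
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl daemon-reload && systemctl enable --now redis.service`. Only one of the init script and the unit should manage the server, or they will fight over the pid file.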
3. Deploy the other hosts (node2 and node3)
bash
# The other hosts need the same setup; copy the tarball over with scp
[root@redis-node1 ~]# ls
anaconda-ks.cfg redis-7.4.8 redis-7.4.8.tar.gz
[root@redis-node1 ~]# scp redis-7.4.8.tar.gz root@172.25.254.20:/root/
Warning: Permanently added '172.25.254.20' (ED25519) to the list of known hosts.
redis-7.4.8.tar.gz 100% 3456KB 202.4MB/s 00:00
[root@redis-node1 ~]# scp redis-7.4.8.tar.gz root@172.25.254.30:/root/
Warning: Permanently added '172.25.254.30' (ED25519) to the list of known hosts.
redis-7.4.8.tar.gz
# Build on node2 and node3
[root@redis-node2 ~]# ls
anaconda-ks.cfg redis-7.4.8.tar.gz
[root@redis-node2 ~]# tar zxf redis-7.4.8.tar.gz
[root@redis-node2 ~]# ls
anaconda-ks.cfg redis-7.4.8 redis-7.4.8.tar.gz
[root@redis-node2 ~]# cd redis-7.4.8/
[root@redis-node2 redis-7.4.8]# make && make install
[root@redis-node3 ~]# ls
anaconda-ks.cfg redis-7.4.8.tar.gz
[root@redis-node3 ~]# tar zxf redis-7.4.8.tar.gz
[root@redis-node3 ~]# cd redis-7.4.8/
[root@redis-node3 redis-7.4.8]# make && make install
# node2 shown below; node3 follows the same steps
[root@redis-node2 redis-7.4.8]# cd utils/
[root@redis-node2 utils]# vim install_server.sh
##bail if this system is managed by systemd
#_pid_1_exe="$(readlink -f /proc/1/exe)"
#if [ "${_pid_1_exe##*/}" = systemd ]
#then
# echo "This systems seems to use systemd."
# echo "Please take a look at the provided example service unit files in this directory, and adapt and install them. Sorry!"
# exit 1
#fi
[root@redis-node2 utils]# ./install_server.sh
# Likewise, the service can be managed through systemd
[root@redis-node2 utils]# systemctl daemon-reload
[root@redis-node2 utils]# systemctl start redis_6379.service
[root@redis-node2 utils]# systemctl status redis_6379.service
● redis_6379.service - LSB: start and stop redis_6379
Loaded: loaded (/etc/rc.d/init.d/redis_6379; generated)
Active: active (exited) since Wed 2026-03-11 11:26:28 CST; 9s ago
Docs: man:systemd-sysv-generator(8)
Process: 37360 ExecStart=/etc/rc.d/init.d/redis_6379 start (code=exited, status=0/SUCCESS)
CPU: 1ms
3月 11 11:26:28 redis-node2 systemd[1]: Starting LSB: start and stop redis_6379...
3月 11 11:26:28 redis-node2 redis_6379[37360]: /var/run/redis_6379.pid exists, process is already running or crashed
3月 11 11:26:28 redis-node2 systemd[1]: Started LSB: start and stop redis_6379.
[root@redis-node2 utils]# netstat -antlupe | grep redis
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 0 77623 37185/redis-server
tcp6 0 0 ::1:6379 :::* LISTEN 0 77624 37185/redis-server
4. Basic operations
bash
# View configuration
127.0.0.1:6379> config get bind
1) "bind"
2) "127.0.0.1 -::1"
127.0.0.1:6379> config get *
bash
# Write and read data
127.0.0.1:6379> set name mozi
OK
127.0.0.1:6379> get name
"mozi"
127.0.0.1:6379> set name mozi ex 3
OK
127.0.0.1:6379> get name
"mozi"
127.0.0.1:6379> get name
"mozi"
127.0.0.1:6379> get name
(nil)
# Without an expiration a key persists indefinitely, saved in the memory snapshot /var/lib/redis/6379/dump.rdb
127.0.0.1:6379> set name mozi
OK
127.0.0.1:6379> keys *
1) "name"
bash
# Select a database; Redis provides 16 databases, numbered 0-15
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> get name
(nil)
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> select 16
(error) ERR DB index is out of range
bash
# Move a key to another database
127.0.0.1:6379> set name mozi
OK
127.0.0.1:6379> move name 1
(integer) 1
127.0.0.1:6379> get name
(nil)
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> get name
"mozi"
bash
# Rename a key
127.0.0.1:6379[1]> rename name id
OK
127.0.0.1:6379[1]> get name
(nil)
127.0.0.1:6379[1]> get id
"mozi"
bash
# Set a key's expiration
127.0.0.1:6379[1]> set name mozi ex 20
OK
127.0.0.1:6379[1]> get name
"mozi"
127.0.0.1:6379[1]> expire name 3
(integer) 1
127.0.0.1:6379[1]> get name
"mozi"
127.0.0.1:6379[1]> get name
(nil)
127.0.0.1:6379[1]>
bash
# Delete a key
127.0.0.1:6379[1]> set name mozi
OK
127.0.0.1:6379[1]> get name
"mozi"
127.0.0.1:6379[1]> del name
(integer) 1
127.0.0.1:6379[1]> get name
(nil)
bash
# Remove a key's expiration; PERSIST returns 0 when the key does not exist or has no TTL
127.0.0.1:6379[1]> persist name
(integer) 0
bash
# Check whether a key exists
127.0.0.1:6379[1]> exists name
(integer) 1
127.0.0.1:6379[1]> exists mozi
(integer) 0
bash
# Flush the current database
127.0.0.1:6379[1]> flushdb
OK
127.0.0.1:6379[1]> get name
(nil)
bash
# Flush all databases
127.0.0.1:6379[1]> FLUSHALL
OK
III. Redis Master-Replica Replication
redis-node1 master 172.25.254.10
redis-node2 slave 172.25.254.20
redis-node3 slave 172.25.254.30
1. Configure the Redis master node
bash
# Configure on redis-node1
[root@redis-node1 ~]# vim /etc/redis/redis.conf
#bind 127.0.0.1 -::1
bind * -::*
protected-mode no
[root@redis-node1 ~]# systemctl restart redis_6379.service
[root@redis-node1 ~]# netstat -antlupe | grep redis
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 0 169932 47861/redis-server
tcp6 0 0 :::6379 :::* LISTEN 0 169933 47861/redis-server
2. Configure the Redis replica nodes
bash
# On redis-node2
[root@redis-node2 utils]# vim /etc/redis/redis.conf
#bind 127.0.0.1 -::1
bind * -::*
protected-mode no
replicaof 172.25.254.10 6379
[root@redis-node2 utils]# systemctl restart redis_6379.service
# On redis-node3
[root@redis-node3 utils]# vim /etc/redis/redis.conf
#bind 127.0.0.1 -::1
bind * -::*
protected-mode no
replicaof 172.25.254.10 6379
[root@redis-node3 utils]# systemctl restart redis_6379.service
3. Check replication status
bash
# Test preparation: name is empty at this point
[root@redis-node2 ~]# redis-cli
127.0.0.1:6379> get name
(nil)
# Clean environment; start the test
[root@redis-node1 ~]# redis-cli
127.0.0.1:6379> set name lee
OK
127.0.0.1:6379> get name
"lee"
# The key name can now be read directly on redis-node2
127.0.0.1:6379> get name
"lee"
# Writing on redis-node2 fails because it is a read-only replica
127.0.0.1:6379> set test 123
(error) READONLY You can't write against a read only replica.
# Check on redis-node1
[root@redis-node1 ~]# redis-cli
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=172.25.254.20,port=6379,state=online,offset=2533,lag=1
slave1:ip=172.25.254.30,port=6379,state=online,offset=2533,lag=0
master_failover_state:no-failover
master_replid:49e561be22aa55f13aa99a837d2119fa59e2f1fa
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:2533
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:2533
# redis-node2 and redis-node3 show the same output
[root@redis-node2 ~]# redis-cli
127.0.0.1:6379> info replication
# Replication
role:slave
master_host:172.25.254.10
master_port:6379
master_link_status:up
master_last_io_seconds_ago:6
master_sync_in_progress:0
slave_read_repl_offset:2631
slave_repl_offset:2631
slave_priority:100
slave_read_only:1
replica_announced:1
connected_slaves:0
master_failover_state:no-failover
master_replid:49e561be22aa55f13aa99a837d2119fa59e2f1fa
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:2631
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:2631
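In scripts, individual fields of `info replication` are easy to extract with awk. A minimal, self-contained sketch that parses a captured sample (abridged from the output above); against a live server you would feed it `redis-cli info replication` instead:

```shell
# Sample INFO replication output, abridged from the session above
info='role:master
connected_slaves:2
master_repl_offset:2533'

# Print the value of one INFO field (lines are "name:value")
get_field() {
  printf '%s\n' "$info" | awk -F: -v f="$1" '$1 == f { print $2 }'
}

get_field role               # prints: master
get_field connected_slaves   # prints: 2
```

Note that live INFO output uses CRLF line endings, so a robust script would strip `\r` first (e.g. with `tr -d '\r'`).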
4. Test data synchronization
bash
[root@redis-node1 ~]# redis-cli
127.0.0.1:6379> set name lee
OK
127.0.0.1:6379> get name
"lee"
[root@redis-node2 ~]# redis-cli
127.0.0.1:6379> get name
"lee"
# Replicas reject writes; data can only be written on the master
[root@redis-node3 ~]# redis-cli
127.0.0.1:6379> get name
"lee"
127.0.0.1:6379> set test 123
(error) READONLY You can't write against a read only replica.
IV. Redis Sentinel Mode (High Availability)
Problem:
In production, if the network between the master and its replicas fails, Sentinel will kick the master out and promote a replica.
When the network recovers, the old master sees that the topology has changed and demotes itself to a replica.
On becoming a replica it discards the data written to it during the partition, so that data is lost.
Solution:
While accepting writes, the master keeps checking its replicas: it allows a write only when at least 2 replicas are connected;
with fewer than 2 replicas connected, writes are refused.
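The safeguard described above corresponds to two standard directives in the master's redis.conf (the values shown match the two-replica rule above):

```conf
min-replicas-to-write 2   # refuse writes unless at least 2 replicas are connected...
min-replicas-max-lag 10   # ...and their replication lag is no more than 10 seconds
```

With these set, a partitioned master stops accepting writes, so it cannot accumulate data that a later failover would discard.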
1. Configure Redis Sentinel
bash
# On the Redis master node
[root@redis-node1 ~]# cd redis-7.4.8/
[root@redis-node1 redis-7.4.8]# cp -p sentinel.conf /etc/redis
[root@redis-node1 ~]# vim /etc/redis/sentinel.conf
protected-mode no #disable protected mode
port 26379 #listening port
daemonize no #do not run as a daemon
pidfile /var/run/redis-sentinel.pid #sentinel pid file
loglevel notice #log level
sentinel monitor mymaster 172.25.254.10 6379 2 #monitor the master; the trailing 2 is the quorum: 2 sentinels must agree the master is down
sentinel down-after-milliseconds mymaster 10000 #master considered down after 10 s unreachable
sentinel parallel-syncs mymaster 1 #number of replicas that resync with the new master simultaneously after a failover
sentinel failover-timeout mymaster 180000 #overall failover timeout: 3 minutes
# Keep a pristine copy; Sentinel rewrites its config file at runtime, so this backup will come in handy later
[root@redis-node1 ~]# cp -p /etc/redis/sentinel.conf /mnt/
# Disable protected-mode on the replicas (node2, node3)
[root@redis-node2 ~]# vim /etc/redis/redis.conf
protected-mode no
[root@redis-node2 ~]# systemctl restart redis_6379.service
[root@redis-node3 utils]# vim /etc/redis/redis.conf
protected-mode no
[root@redis-node3 utils]# systemctl restart redis_6379.service
# Copy sentinel.conf from the master to the replicas
[root@redis-node1 ~]# scp /etc/redis/sentinel.conf root@172.25.254.20:/etc/redis/
sentinel.conf 100% 14KB 20.7MB/s 00:00
[root@redis-node1 ~]# scp /etc/redis/sentinel.conf root@172.25.254.30:/etc/redis/
sentinel.conf 100% 14KB 24.2MB/s 00:00
[root@redis-node2 ~]# cp /etc/redis/sentinel.conf /mnt/
[root@redis-node3 ~]# cp /etc/redis/sentinel.conf /mnt/
# Start Sentinel on all nodes (redis-node1 shown)
[root@redis-node1 ~]# redis-sentinel /etc/redis/sentinel.conf
86305:X 11 Mar 2026 17:06:56.527 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
86305:X 11 Mar 2026 17:06:56.527 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
86305:X 11 Mar 2026 17:06:56.527 * Redis version=7.4.8, bits=64, commit=00000000, modified=0, pid=86305, just started
86305:X 11 Mar 2026 17:06:56.527 * Configuration loaded
86305:X 11 Mar 2026 17:06:56.527 * Increased maximum number of open files to 10032 (it was originally set to 1024).
86305:X 11 Mar 2026 17:06:56.527 * monotonic clock: POSIX clock_gettime
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis Community Edition
.-`` .-```. ```\/ _.,_ ''-._ 7.4.8 (00000000/0) 64 bit
( ' , .-` | `, ) Running in sentinel mode
|`-._`-...-` __...-.``-._|'` _.-'| Port: 26379
| `-._ `._ / _.-' | PID: 86305
`-._ `-._ `-./ _.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' | https://redis.io
`-._ `-._`-.__.-'_.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'
`-._ `-.__.-' _.-'
`-._ _.-'
`-.__.-'
86305:X 11 Mar 2026 17:06:56.529 * Sentinel new configuration saved on disk
86305:X 11 Mar 2026 17:06:56.529 * Sentinel ID is a4ef233006ec86e90b8902d935cef73dae37f29a
86305:X 11 Mar 2026 17:06:56.529 # +monitor master mymaster 172.25.254.10 6379 quorum 2
86305:X 11 Mar 2026 17:06:56.532 * +slave slave 172.25.254.20:6379 172.25.254.20 6379 @ mymaster 172.25.254.10 6379
86305:X 11 Mar 2026 17:06:56.532 * Sentinel new configuration saved on disk
86305:X 11 Mar 2026 17:06:56.532 * +slave slave 172.25.254.30:6379 172.25.254.30 6379 @ mymaster 172.25.254.10 6379
86305:X 11 Mar 2026 17:06:56.533 * Sentinel new configuration saved on disk
86305:X 11 Mar 2026 17:07:16.056 * +sentinel sentinel 516ade363f7d4c83d7aa3a10f356b9a9d8938aeb 172.25.254.20 26379 @ mymaster 172.25.254.10 6379
86305:X 11 Mar 2026 17:07:16.057 * Sentinel new configuration saved on disk
86305:X 11 Mar 2026 17:07:29.498 * +sentinel sentinel 38130b59086656355d97bc69ab20d2dd9420af07 172.25.254.30 26379 @ mymaster 172.25.254.10 6379
86305:X 11 Mar 2026 17:07:29.500 * Sentinel new configuration saved on disk
2. Test failover
bash
[root@redis-node1 ~]# redis-cli
127.0.0.1:6379> shutdown
(1.00s)
not connected>
# Failover log (the +switch-master line at the end shows the master moving to 172.25.254.30)
51625:X 11 Mar 2026 17:07:13.973 * Sentinel new configuration saved on disk
51625:X 11 Mar 2026 17:07:13.973 * Sentinel ID is 516ade363f7d4c83d7aa3a10f356b9a9d8938aeb
51625:X 11 Mar 2026 17:07:13.973 # +monitor master mymaster 172.25.254.10 6379 quorum 2
51625:X 11 Mar 2026 17:07:13.975 * +slave slave 172.25.254.20:6379 172.25.254.20 6379 @ mymaster 172.25.254.10 6379
51625:X 11 Mar 2026 17:07:13.976 * Sentinel new configuration saved on disk
51625:X 11 Mar 2026 17:07:13.976 * +slave slave 172.25.254.30:6379 172.25.254.30 6379 @ mymaster 172.25.254.10 6379
51625:X 11 Mar 2026 17:07:13.977 * Sentinel new configuration saved on disk
51625:X 11 Mar 2026 17:07:15.028 * +sentinel sentinel a4ef233006ec86e90b8902d935cef73dae37f29a 172.25.254.10 26379 @ mymaster 172.25.254.10 6379
51625:X 11 Mar 2026 17:07:15.030 * Sentinel new configuration saved on disk
51625:X 11 Mar 2026 17:07:29.499 * +sentinel sentinel 38130b59086656355d97bc69ab20d2dd9420af07 172.25.254.30 26379 @ mymaster 172.25.254.10 6379
51625:X 11 Mar 2026 17:07:29.502 * Sentinel new configuration saved on disk
51625:X 11 Mar 2026 17:18:03.079 * Sentinel new configuration saved on disk
51625:X 11 Mar 2026 17:18:03.079 # +new-epoch 1
51625:X 11 Mar 2026 17:18:03.081 * Sentinel new configuration saved on disk
51625:X 11 Mar 2026 17:18:03.081 # +vote-for-leader a4ef233006ec86e90b8902d935cef73dae37f29a 1
51625:X 11 Mar 2026 17:18:03.106 # +sdown master mymaster 172.25.254.10 6379
51625:X 11 Mar 2026 17:18:03.211 # +odown master mymaster 172.25.254.10 6379 #quorum 3/2
51625:X 11 Mar 2026 17:18:03.211 * Next failover delay: I will not start a failover before Wed Mar 11 17:24:03 2026
51625:X 11 Mar 2026 17:18:03.804 # +config-update-from sentinel a4ef233006ec86e90b8902d935cef73dae37f29a 172.25.254.10 26379 @ mymaster 172.25.254.10 6379
51625:X 11 Mar 2026 17:18:03.804 # +switch-master mymaster 172.25.254.10 6379 172.25.254.30 6379
51625:X 11 Mar 2026 17:18:03.804 * +slave slave 172.25.254.20:6379 172.25.254.20 6379 @ mymaster 172.25.254.30 6379
51625:X 11 Mar 2026 17:18:03.804 * +slave slave 172.25.254.10:6379 172.25.254.10 6379 @ mymaster 172.25.254.30 6379
51625:X 11 Mar 2026 17:18:03.811 * Sentinel new configuration saved on disk
51625:X 11 Mar 2026 17:18:13.844 # +sdown slave 172.25.254.10:6379 172.25.254.10 6379 @ mymaster 172.25.254.30 6379
# Verify on redis-node2 and redis-node3:
[root@redis-node2 ~]# redis-cli
127.0.0.1:6379> info replication
# Replication
role:slave
master_host:172.25.254.30
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_read_repl_offset:153143
slave_repl_offset:153143
slave_priority:100
slave_read_only:1
replica_announced:1
connected_slaves:0
master_failover_state:no-failover
master_replid:41d95f05974b5e80bab5b18bec5da7759acc5d48
master_replid2:49e561be22aa55f13aa99a837d2119fa59e2f1fa
master_repl_offset:153143
second_repl_offset:138077
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:4550
repl_backlog_histlen:148594
[root@redis-node3 ~]# redis-cli
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=172.25.254.20,port=6379,state=online,offset=158247,lag=0
master_failover_state:no-failover
master_replid:41d95f05974b5e80bab5b18bec5da7759acc5d48
master_replid2:49e561be22aa55f13aa99a837d2119fa59e2f1fa
master_repl_offset:158388
second_repl_offset:138077
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:4620
repl_backlog_histlen:153769
# Bring redis-node1 back:
[root@redis-node1 ~]# /etc/init.d/redis_6379 start
Starting Redis server...
# Now check the replication info on redis-node3, the current master
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=172.25.254.20,port=6379,state=online,offset=222690,lag=0
slave1:ip=172.25.254.10,port=6379,state=online,offset=222549,lag=1 # redis-node1 has come back as a replica
master_failover_state:no-failover
master_replid:41d95f05974b5e80bab5b18bec5da7759acc5d48
master_replid2:49e561be22aa55f13aa99a837d2119fa59e2f1fa
master_repl_offset:222690
second_repl_offset:138077
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:4620
repl_backlog_histlen:218071
V. Redis Cluster
Sentinel solves the Redis high-availability problem: when the master fails, a replica is promoted automatically, so the service keeps running. It does not solve the single-node write bottleneck: write throughput is still limited by a single machine's memory size, concurrency limits, NIC speed, and so on.
Redis 3.0 introduced the masterless Redis Cluster architecture. In this decentralized cluster, every node stores its own data plus the state of the entire cluster, and every node maintains connections to all other nodes.
Prerequisites for creating a cluster
1. Every Redis node uses the same hardware configuration, the same password, and the same Redis version.
2. Every node must enable these parameters:
cluster-enabled yes #must be on; with cluster mode enabled, the redis process title shows [cluster]
cluster-config-file nodes-6379.conf #created and maintained automatically by Redis Cluster; never edit it by hand
3. All Redis servers must hold no data.
4. Start each node first as a standalone Redis with no keys.
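Redis Cluster maps every key to one of 16384 hash slots as HASH_SLOT = CRC16(key) mod 16384, using the CRC-16/XMODEM variant. A self-contained bash sketch of that computation (hash tags in `{}` are not handled here):

```shell
# CRC-16/XMODEM: init 0x0000, polynomial 0x1021, no reflection, no final XOR
crc16() {
  local data=$1 crc=0 i c j
  for ((i = 0; i < ${#data}; i++)); do
    printf -v c '%d' "'${data:i:1}"   # ASCII code of the current character
    (( crc ^= c << 8 ))
    for ((j = 0; j < 8; j++)); do
      if (( crc & 0x8000 )); then
        (( crc = ((crc << 1) ^ 0x1021) & 0xFFFF ))
      else
        (( crc = (crc << 1) & 0xFFFF ))
      fi
    done
  done
  echo "$crc"
}

key="123456789"
slot=$(( $(crc16 "$key") % 16384 ))
echo "$slot"   # 12739 (the standard CRC-16/XMODEM check value 0x31C3)
```

On a live cluster the server-side equivalent is `redis-cli cluster keyslot <key>`; keys that must land in the same slot can share a hash tag, e.g. `user:{42}:name` and `user:{42}:age`.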
1. Environment preparation
bash
[root@redis-node1 ~]# for i in 10 20 30 40 50 60 ; do ssh -l root 172.25.254.$i dnf install make gcc initscripts -y ;done
[root@redis-node1 ~]# wget https://download.redis.io/releases/redis-7.4.8.tar.gz
# Detailed steps for the above are in the installation section
[root@redis-node1 redis]# netstat -antlupe | grep redis
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 0 102786 47992/redis-server
tcp6 0 0 :::6379 :::* LISTEN 0 102787 47992/redis-server
[root@redis-node2 redis]# netstat -antlupe | grep redis
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 0 102786 47992/redis-server
tcp6 0 0 :::6379 :::* LISTEN 0 102787 47992/redis-server
[root@redis-node3 redis]# netstat -antlupe | grep redis
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 0 102786 47992/redis-server
tcp6 0 0 :::6379 :::* LISTEN 0 102787 47992/redis-server
[root@redis-node4 redis]# netstat -antlupe | grep redis
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 0 102786 47992/redis-server
tcp6 0 0 :::6379 :::* LISTEN 0 102787 47992/redis-server
[root@redis-node5 redis]# netstat -antlupe | grep redis
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 0 102786 47992/redis-server
tcp6 0 0 :::6379 :::* LISTEN 0 102787 47992/redis-server
[root@redis-node6 redis]# netstat -antlupe | grep redis
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 0 102786 47992/redis-server
tcp6 0 0 :::6379 :::* LISTEN 0 102787 47992/redis-server
2. Edit the configuration file on all nodes
bash
# Every node needs this configuration
[root@redis-node1 ~]# vim /etc/redis/6379.conf
masterauth "123456" # password for master-replica authentication within the cluster
requirepass "123456" # Redis password; after connecting with redis-cli, authenticate with "auth <password>"
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 15000
# Remember to restart afterwards
[root@redis-node1 ~]# /etc/init.d/redis_6379 restart
Stopping ...
Redis stopped
Starting Redis server...
# Copy to the other nodes
[root@redis-node1 ~]# for i in 10 20 30 40 50 60 ; do scp /etc/redis/6379.conf root@172.25.254.$i:/etc/redis/6379.conf;done
6379.conf 100% 107KB 93.4MB/s 00:00
6379.conf 100% 107KB 8.1MB/s 00:00
6379.conf 100% 107KB 28.0MB/s 00:00
6379.conf 100% 107KB 26.6MB/s 00:00
6379.conf 100% 107KB 17.1MB/s 00:00
6379.conf 100% 107KB 32.7MB/s 00:00
# Restart all nodes
[root@redis-node1 ~]# for i in 10 20 30 40 50 60 ; do ssh root@172.25.254.$i /etc/init.d/redis_6379 restart;done
3. Create the cluster
bash
[root@redis-node1 ~]# redis-cli --cluster create 172.25.254.10:6379 172.25.254.20:6379 172.25.254.30:6379 172.25.254.40:6379 172.25.254.50:6379 172.25.254.60:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.25.254.50:6379 to 172.25.254.10:6379
Adding replica 172.25.254.60:6379 to 172.25.254.20:6379
Adding replica 172.25.254.40:6379 to 172.25.254.30:6379
M: 8db833f3c3bc6b8f93e87111f13f56d366f833a0 172.25.254.10:6379
slots:[0-5460] (5461 slots) master
M: ca599940209f55c07d06951480703bb0a5d8873a 172.25.254.20:6379
slots:[5461-10922] (5462 slots) master
M: d9300173b75149d3056f0ee3edec063f8ec66e9a 172.25.254.30:6379
slots:[10923-16383] (5461 slots) master
S: 32d797eb30094b77edb896abcc0b0fc91ccdb4fd 172.25.254.40:6379
replicates d9300173b75149d3056f0ee3edec063f8ec66e9a
S: ba6ef067c63d30c213493eb48d43427015018898 172.25.254.50:6379
replicates 8db833f3c3bc6b8f93e87111f13f56d366f833a0
S: c939a04358edc1ce7a1c1a44561d77fb402025fd 172.25.254.60:6379
replicates ca599940209f55c07d06951480703bb0a5d8873a
Can I set the above configuration? (type 'yes' to accept): yes # type yes here
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..
>>> Performing Cluster Check (using node 172.25.254.10:6379)
M: 8db833f3c3bc6b8f93e87111f13f56d366f833a0 172.25.254.10:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: c939a04358edc1ce7a1c1a44561d77fb402025fd 172.25.254.60:6379
slots: (0 slots) slave
replicates ca599940209f55c07d06951480703bb0a5d8873a
M: d9300173b75149d3056f0ee3edec063f8ec66e9a 172.25.254.30:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: ca599940209f55c07d06951480703bb0a5d8873a 172.25.254.20:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: ba6ef067c63d30c213493eb48d43427015018898 172.25.254.50:6379
slots: (0 slots) slave
replicates 8db833f3c3bc6b8f93e87111f13f56d366f833a0
S: 32d797eb30094b77edb896abcc0b0fc91ccdb4fd 172.25.254.40:6379
slots: (0 slots) slave
replicates d9300173b75149d3056f0ee3edec063f8ec66e9a
[OK] All nodes agree about slots configuration.
# View cluster status
[root@redis-node1 ~]# redis-cli --cluster info 172.25.254.10:6379
172.25.254.10:6379 (8db833f3...) -> 0 keys | 5461 slots | 1 slaves.
172.25.254.30:6379 (d9300173...) -> 0 keys | 5461 slots | 1 slaves.
172.25.254.20:6379 (ca599940...) -> 0 keys | 5462 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
# View cluster info
[root@redis-node1 ~]# redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:168
cluster_stats_messages_pong_sent:163
cluster_stats_messages_sent:331
cluster_stats_messages_ping_received:158
cluster_stats_messages_pong_received:168
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:331
total_cluster_links_buffer_limit_exceeded:0
# Check the current cluster
[root@redis-node1 ~]# redis-cli --cluster check 172.25.254.10:6379
172.25.254.10:6379 (8db833f3...) -> 0 keys | 5461 slots | 1 slaves.
172.25.254.30:6379 (d9300173...) -> 0 keys | 5461 slots | 1 slaves.
172.25.254.20:6379 (ca599940...) -> 0 keys | 5462 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 172.25.254.10:6379)
M: 8db833f3c3bc6b8f93e87111f13f56d366f833a0 172.25.254.10:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: c939a04358edc1ce7a1c1a44561d77fb402025fd 172.25.254.60:6379
slots: (0 slots) slave
replicates ca599940209f55c07d06951480703bb0a5d8873a
M: d9300173b75149d3056f0ee3edec063f8ec66e9a 172.25.254.30:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: ca599940209f55c07d06951480703bb0a5d8873a 172.25.254.20:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: ba6ef067c63d30c213493eb48d43427015018898 172.25.254.50:6379
slots: (0 slots) slave
replicates 8db833f3c3bc6b8f93e87111f13f56d366f833a0
S: 32d797eb30094b77edb896abcc0b0fc91ccdb4fd 172.25.254.40:6379
slots: (0 slots) slave
replicates d9300173b75149d3056f0ee3edec063f8ec66e9a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
4. Scaling the cluster out
bash
# Add a master
[root@redis-node1 ~]# redis-cli --cluster add-node 172.25.254.70:6379 172.25.254.10:6379
[root@redis-node1 ~]# redis-cli --cluster check 172.25.254.10:6379
172.25.254.10:6379 (8db833f3...) -> 0 keys | 5461 slots | 1 slaves.
172.25.254.70:6379 (dfabfe07...) -> 0 keys | 0 slots | 0 slaves.
172.25.254.30:6379 (d9300173...) -> 0 keys | 5461 slots | 1 slaves.
172.25.254.20:6379 (ca599940...) -> 1 keys | 5462 slots | 1 slaves.
[OK] 1 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 172.25.254.10:6379)
M: 8db833f3c3bc6b8f93e87111f13f56d366f833a0 172.25.254.10:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: dfabfe07170ac9b5d20a5a7a70c836877bd64504 172.25.254.70:6379
slots: (0 slots) master
S: c939a04358edc1ce7a1c1a44561d77fb402025fd 172.25.254.60:6379
slots: (0 slots) slave
replicates ca599940209f55c07d06951480703bb0a5d8873a
M: d9300173b75149d3056f0ee3edec063f8ec66e9a 172.25.254.30:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: ca599940209f55c07d06951480703bb0a5d8873a 172.25.254.20:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: ba6ef067c63d30c213493eb48d43427015018898 172.25.254.50:6379
slots: (0 slots) slave
replicates 8db833f3c3bc6b8f93e87111f13f56d366f833a0
S: 32d797eb30094b77edb896abcc0b0fc91ccdb4fd 172.25.254.40:6379
slots: (0 slots) slave
replicates d9300173b75149d3056f0ee3edec063f8ec66e9a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# Assign slots to the newly added host
[root@redis-node1 ~]# redis-cli --cluster reshard 172.25.254.10:6379
>>> Performing Cluster Check (using node 172.25.254.10:6379)
M: 8db833f3c3bc6b8f93e87111f13f56d366f833a0 172.25.254.10:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: dfabfe07170ac9b5d20a5a7a70c836877bd64504 172.25.254.70:6379
slots: (0 slots) master
S: c939a04358edc1ce7a1c1a44561d77fb402025fd 172.25.254.60:6379
slots: (0 slots) slave
replicates ca599940209f55c07d06951480703bb0a5d8873a
M: d9300173b75149d3056f0ee3edec063f8ec66e9a 172.25.254.30:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: ca599940209f55c07d06951480703bb0a5d8873a 172.25.254.20:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: ba6ef067c63d30c213493eb48d43427015018898 172.25.254.50:6379
slots: (0 slots) slave
replicates 8db833f3c3bc6b8f93e87111f13f56d366f833a0
S: 32d797eb30094b77edb896abcc0b0fc91ccdb4fd 172.25.254.40:6379
slots: (0 slots) slave
replicates d9300173b75149d3056f0ee3edec063f8ec66e9a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096 # number of slots to assign
What is the receiving node ID? dfabfe07170ac9b5d20a5a7a70c836877bd64504
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1: all # take slots from all existing masters
Ready to move 4096 slots.
# Add a replica for the new master
[root@redis-node1 ~]# redis-cli --cluster add-node 172.25.254.80:6379 172.25.254.10:6379 --cluster-slave --cluster-master-id dfabfe07170ac9b5d20a5a7a70c836877bd64504
[root@redis-node1 ~]# redis-cli --cluster check 172.25.254.10:6379
172.25.254.10:6379 (8db833f3...) -> 0 keys | 4096 slots | 1 slaves.
172.25.254.70:6379 (dfabfe07...) -> 1 keys | 4096 slots | 1 slaves.
172.25.254.30:6379 (d9300173...) -> 0 keys | 4096 slots | 1 slaves.
172.25.254.20:6379 (ca599940...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 1 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 172.25.254.10:6379)
M: 8db833f3c3bc6b8f93e87111f13f56d366f833a0 172.25.254.10:6379
slots:[1365-5460] (4096 slots) master
1 additional replica(s)
S: 1176ee294e6b5071ca57e93374d04ac22028daed 172.25.254.80:6379
slots: (0 slots) slave
replicates dfabfe07170ac9b5d20a5a7a70c836877bd64504
M: dfabfe07170ac9b5d20a5a7a70c836877bd64504 172.25.254.70:6379
slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
1 additional replica(s)
S: c939a04358edc1ce7a1c1a44561d77fb402025fd 172.25.254.60:6379
slots: (0 slots) slave
replicates ca599940209f55c07d06951480703bb0a5d8873a
M: d9300173b75149d3056f0ee3edec063f8ec66e9a 172.25.254.30:6379
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
M: ca599940209f55c07d06951480703bb0a5d8873a 172.25.254.20:6379
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
S: ba6ef067c63d30c213493eb48d43427015018898 172.25.254.50:6379
slots: (0 slots) slave
replicates 8db833f3c3bc6b8f93e87111f13f56d366f833a0
S: 32d797eb30094b77edb896abcc0b0fc91ccdb4fd 172.25.254.40:6379
slots: (0 slots) slave
replicates d9300173b75149d3056f0ee3edec063f8ec66e9a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
5. Scaling the cluster in
bash
# Reclaim the cluster slots onto host 172.25.254.10
[root@redis-node1 ~]# redis-cli --cluster reshard 172.25.254.10:6379
>>> Performing Cluster Check (using node 172.25.254.10:6379)
M: 8db833f3c3bc6b8f93e87111f13f56d366f833a0 172.25.254.10:6379
slots:[1365-5460] (4096 slots) master
1 additional replica(s)
S: 1176ee294e6b5071ca57e93374d04ac22028daed 172.25.254.80:6379
slots: (0 slots) slave
replicates dfabfe07170ac9b5d20a5a7a70c836877bd64504
M: dfabfe07170ac9b5d20a5a7a70c836877bd64504 172.25.254.70:6379
slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
1 additional replica(s)
S: c939a04358edc1ce7a1c1a44561d77fb402025fd 172.25.254.60:6379
slots: (0 slots) slave
replicates ca599940209f55c07d06951480703bb0a5d8873a
M: d9300173b75149d3056f0ee3edec063f8ec66e9a 172.25.254.30:6379
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
M: ca599940209f55c07d06951480703bb0a5d8873a 172.25.254.20:6379
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
S: ba6ef067c63d30c213493eb48d43427015018898 172.25.254.50:6379
slots: (0 slots) slave
replicates 8db833f3c3bc6b8f93e87111f13f56d366f833a0
S: 32d797eb30094b77edb896abcc0b0fc91ccdb4fd 172.25.254.40:6379
slots: (0 slots) slave
replicates d9300173b75149d3056f0ee3edec063f8ec66e9a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? 8db833f3c3bc6b8f93e87111f13f56d366f833a0 # ID of node 172.25.254.10
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1: dfabfe07170ac9b5d20a5a7a70c836877bd64504 # ID of node 172.25.254.70
Source node #2: done
# Remove nodes 172.25.254.70 and 172.25.254.80
[root@redis-node1 ~]# redis-cli --cluster del-node 172.25.254.10:6379 dfabfe07170ac9b5d20a5a7a70c836877bd64504
>>> Removing node dfabfe07170ac9b5d20a5a7a70c836877bd64504 from cluster 172.25.254.10:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
[root@redis-node1 ~]# redis-cli --cluster del-node 172.25.254.10:6379 1176ee294e6b5071ca57e93374d04ac22028daed
>>> Removing node 1176ee294e6b5071ca57e93374d04ac22028daed from cluster 172.25.254.10:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
[root@redis-node1 ~]# redis-cli --cluster check 172.25.254.10:6379
172.25.254.10:6379 (8db833f3...) -> 1 keys | 8192 slots | 1 slaves.
172.25.254.30:6379 (d9300173...) -> 0 keys | 4096 slots | 1 slaves.
172.25.254.20:6379 (ca599940...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 1 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 172.25.254.10:6379)
M: 8db833f3c3bc6b8f93e87111f13f56d366f833a0 172.25.254.10:6379
slots:[0-6826],[10923-12287] (8192 slots) master
1 additional replica(s)
S: c939a04358edc1ce7a1c1a44561d77fb402025fd 172.25.254.60:6379
slots: (0 slots) slave
replicates ca599940209f55c07d06951480703bb0a5d8873a
M: d9300173b75149d3056f0ee3edec063f8ec66e9a 172.25.254.30:6379
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
M: ca599940209f55c07d06951480703bb0a5d8873a 172.25.254.20:6379
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
S: ba6ef067c63d30c213493eb48d43427015018898 172.25.254.50:6379
slots: (0 slots) slave
replicates 8db833f3c3bc6b8f93e87111f13f56d366f833a0
S: 32d797eb30094b77edb896abcc0b0fc91ccdb4fd 172.25.254.40:6379
slots: (0 slots) slave
replicates d9300173b75149d3056f0ee3edec063f8ec66e9a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.