Redis 15 - Cluster

Clusters

In the broad sense, any distributed system made up of multiple servers can be called a cluster.

In the narrow sense, we mean a Redis cluster: a distributed-storage solution that spreads data across multiple nodes, each node storing a portion of the data, thereby increasing the system's total capacity.

The key idea is to bring in multiple machines and have each store part of the data. How to distribute that data across the machines in a reasonably even way is exactly the problem Redis Cluster solves.

Sharding

Sharding splits the data into multiple parts, one shard per part; each shard can then be served by a small cluster of nodes of its own.

Hash Modulo

This method borrows from hash tables: a hash function converts the key into an integer, and that integer taken modulo the number of shards yields the index of the shard that stores the key.

Suppose there are three shards, numbered 0, 1, and 2. A key is first converted to an integer by a hash function (md5, for example), and that integer modulo 3 (the shard count) gives 0, 1, or 2.

The algorithm is simple to implement and easy to understand.

However, once the cluster needs to scale out, the cost becomes significant.

Suppose the three shards must grow to four. The formula becomes hash(key) % 4, so most previously stored keys now map to a different shard, which forces a large amount of data migration.
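A minimal sketch of the problem, using cksum as a stand-in for a real hash function and made-up key names: watch how many keys change shards when the divisor moves from 3 to 4.

```bash
# Compare each key's shard under 3 shards vs. 4 shards.
# cksum is only an illustrative stand-in for a real hash function.
for key in user1 user2 user3 user4 user5 user6; do
  h=$(echo -n "$key" | cksum | cut -d' ' -f1)
  echo "$key: 3 shards -> shard $((h % 3)), 4 shards -> shard $((h % 4))"
done
```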

Consistent Hashing

Consistent hashing organizes the entire hash space (e.g., 0 ~ 2^32 - 1) into a ring (hash ring):

  • Each node (a cache server, say) is mapped by a hash function to some position on the ring.
  • Each data item (the key of a key-value pair, say) is also hashed to a position on the ring.
  • A data item is stored on the first node encountered moving clockwise from its position.

Suppose the hash space is 0 ~ 359 (simulating a 360-degree circle).

We have three servers:

  • A maps to 0
  • B maps to 120
  • C maps to 240

If a key user123 hashes to 150, it is assigned to C (the first node clockwise from 150).

When scaling out, only the keys between the new node and its counter-clockwise neighbor need to move to the new node; everything else stays where it is.

To avoid the data skew caused by uneven node placement, each physical node is given several virtual nodes. The virtual nodes are also mapped onto the hash ring, but they all point back to the same physical node. For example: server A → A#1, A#2, A#3 spread across different positions on the ring.
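A minimal sketch of the clockwise lookup, assuming the three fixed node positions from the example above (cksum again stands in for a real hash; virtual nodes would simply add more entries to the ring):

```bash
# Ring positions -> node names, matching the A/B/C example above.
declare -A ring=( [0]="A" [120]="B" [240]="C" )

place_key() {
  local key=$1
  # Hash the key onto the 0-359 ring (cksum is only illustrative).
  local pos=$(( $(echo -n "$key" | cksum | cut -d' ' -f1) % 360 ))
  local owner="" best=360 min=360 min_node=""
  for p in "${!ring[@]}"; do
    (( p < min ))              && { min=$p; min_node=${ring[$p]}; }
    (( p >= pos && p < best )) && { best=$p; owner=${ring[$p]}; }
  done
  # Nothing clockwise before wrapping: fall back to the smallest position.
  [ -z "$owner" ] && owner=$min_node
  echo "$key -> position $pos -> node $owner"
}

place_key user123
```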

Hash Slot Partitioning

This is the algorithm Redis Cluster actually uses.

A hash slot is an abstract "bucket" or interval; the system predefines a fixed number of slots. For example:

Redis Cluster uses 16384 hash slots (numbered 0 ~ 16383).

These slots are then mapped onto different nodes. Every piece of data is hashed to determine its slot, which in turn determines the node it belongs to.

The basic flow of hash slots:

  1. Predefine the number of hash slots: e.g., 16384.

  2. Hash the key: Redis runs the CRC16 algorithm over the key and takes the result modulo 16384:

    slot = CRC16(key) % 16384

  3. Map slots to nodes: based on the slot count and the number of machines, the cluster assigns the 16384 slots to the nodes, evenly or as needed.

  • For example, with 3 nodes the assignment might be:
    • Node A: 0 ~ 5461, 5462 slots
    • Node B: 5462 ~ 10923, 5462 slots
    • Node C: 10924 ~ 16383, 5460 slots
  4. Locate the node for a key: once the client computes the key's slot number, it can go straight to the node responsible for that slot (a minimal lookup sketch follows this list).
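A minimal sketch of step 4, hard-coding the three slot ranges from the example above:

```bash
# Map a slot number to its owner under the example A/B/C layout.
slot=12706
if   (( slot <= 5461 ));  then echo "slot $slot -> Node A"
elif (( slot <= 10923 )); then echo "slot $slot -> Node B"
else                           echo "slot $slot -> Node C"
fi
```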

Each shard keeps a bitmap recording which slots it holds: a bit set to 1 means the shard owns that slot, 0 means it does not.

A shard's slots are not necessarily contiguous; discontiguous assignments are allowed too.

If a node is added, the expansion could look like this:

  • Node A: 0 ~ 4095, 4096 slots
  • Node B: 5462 ~ 9557, 4096 slots
  • Node C: 10924 ~ 15019, 4096 slots
  • Node D: 4096 ~ 5461 + 9558 ~ 10923 + 15020 ~ 16383, 4096 slots

This way the new node draws a roughly equal share of slots from each existing node, without causing data skew.

Does this mean a Redis cluster can have as many as 16384 shards?

Not really. If each shard held only a single slot, keeping data evenly distributed across the cluster would be hard to guarantee.

In practice, the author of Redis recommends that a cluster have no more than 1000 shards.

Moreover, a cluster on the order of 16,000 nodes would have serious availability problems of its own: the more complex a system is, the higher its probability of failure.

Why 16384 slots?

The Redis author's answer:

  • Normal heartbeat packets carry the full configuration of a node, that can be replaced in an idempotent way with the old in order to update an old config. This means they contain the slots configuration for a node, in raw form, that uses 2k of space with 16k slots, but would use a prohibitive 8k of space using 65k slots.
  • At the same time, it is unlikely that Redis Cluster would scale to more than 1000 master nodes because of other design tradeoffs.

So 16k was in the right range to ensure enough slots per master with a max of 1000 masters, but a small enough number to propagate the slot configuration as a raw bitmap easily. Note that in small clusters, the bitmap would be hard to compress, because when N is small, the bitmap would have slots/N bits set. That is a large percentage of bits set.

In short:

  • Nodes communicate via heartbeat packets, and each heartbeat carries the set of slots its sender holds, represented as a bitmap. For 16384 (16k) slots the bitmap is 16384 / 8 = 2048 bytes, i.e., 2 KB. With more slots, say 65536, the bitmap grows to 8 KB. An extra 8 KB is nothing for memory, but for heartbeats exchanged constantly over the network it is a non-trivial cost.
  • On the other hand, a Redis cluster is generally not recommended to exceed 1000 shards, so 16k slots are plenty for at most 1000 masters while keeping the slot-configuration bitmap reasonably small.
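The sizes are easy to verify, at one bit per slot:

```bash
# One bit per slot, eight bits per byte.
echo "$((16384 / 8)) bytes"   # 2048 bytes = 2 KB for 16384 slots
echo "$((65536 / 8)) bytes"   # 8192 bytes = 8 KB for 65536 slots
```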

Building a Redis Cluster with Docker

We will build an 11-node cluster: nine working nodes plus two standbys. The nine working nodes form three shards, each shard with one master and two replicas.

Generate the eleven nodes' configuration with a shell script. In the ./redis-cluster directory, create generate.sh with the following content:

```bash
# Nodes 1-9 announce IPs 172.30.0.101 - 172.30.0.109.
for port in $(seq 1 9); do
  mkdir -p redis${port}/
  cat << EOF > redis${port}/redis.conf
port 6379
bind 0.0.0.0
protected-mode no
appendonly yes
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.30.0.10${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
EOF
done

# Note: the cluster-announce-ip pattern changes for nodes 10 and 11.
for port in $(seq 10 11); do
  mkdir -p redis${port}/
  cat << EOF > redis${port}/redis.conf
port 6379
bind 0.0.0.0
protected-mode no
appendonly yes
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.30.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
EOF
done

```

Run the script:

```bash
bash generate.sh

```

Then create a docker-compose.yml file with the following content:

```yml
version: '3.7'
networks:
  mynet:
    ipam:
      config:
        - subnet: 172.30.0.0/24
services:
  redis1:
    image: 'redis:5.0.9'
    container_name: redis1
    restart: always
    volumes:
      - ./redis1/:/etc/redis/
    ports:
      - 6371:6379
      - 16371:16379
    command:
      redis-server /etc/redis/redis.conf
    networks:
      mynet:
        ipv4_address: 172.30.0.101

  redis2:
    image: 'redis:5.0.9'
    container_name: redis2
    restart: always
    volumes:
      - ./redis2/:/etc/redis/
    ports:
      - 6372:6379
      - 16372:16379
    command:
      redis-server /etc/redis/redis.conf
    networks:
      mynet:
        ipv4_address: 172.30.0.102

  redis3:
    image: 'redis:5.0.9'
    container_name: redis3
    restart: always
    volumes:
      - ./redis3/:/etc/redis/
    ports:
      - 6373:6379
      - 16373:16379
    command:
      redis-server /etc/redis/redis.conf
    networks:
      mynet:
        ipv4_address: 172.30.0.103

  redis4:
    image: 'redis:5.0.9'
    container_name: redis4
    restart: always
    volumes:
      - ./redis4/:/etc/redis/
    ports:
      - 6374:6379
      - 16374:16379
    command:
      redis-server /etc/redis/redis.conf
    networks:
      mynet:
        ipv4_address: 172.30.0.104

  redis5:
    image: 'redis:5.0.9'
    container_name: redis5
    restart: always
    volumes:
      - ./redis5/:/etc/redis/
    ports:
      - 6375:6379
      - 16375:16379
    command:
      redis-server /etc/redis/redis.conf
    networks:
      mynet:
        ipv4_address: 172.30.0.105

  redis6:
    image: 'redis:5.0.9'
    container_name: redis6
    restart: always
    volumes:
      - ./redis6/:/etc/redis/
    ports:
      - 6376:6379
      - 16376:16379
    command:
      redis-server /etc/redis/redis.conf
    networks:
      mynet:
        ipv4_address: 172.30.0.106

  redis7:
    image: 'redis:5.0.9'
    container_name: redis7
    restart: always
    volumes:
      - ./redis7/:/etc/redis/
    ports:
      - 6377:6379
      - 16377:16379
    command:
      redis-server /etc/redis/redis.conf
    networks:
      mynet:
        ipv4_address: 172.30.0.107

  redis8:
    image: 'redis:5.0.9'
    container_name: redis8
    restart: always
    volumes:
      - ./redis8/:/etc/redis/
    ports:
      - 6378:6379
      - 16378:16379
    command:
      redis-server /etc/redis/redis.conf
    networks:
      mynet:
        ipv4_address: 172.30.0.108

  redis9:
    image: 'redis:5.0.9'
    container_name: redis9
    restart: always
    volumes:
      - ./redis9/:/etc/redis/
    ports:
      - 6379:6379
      - 16379:16379
    command:
      redis-server /etc/redis/redis.conf
    networks:
      mynet:
        ipv4_address: 172.30.0.109

  redis10:
    image: 'redis:5.0.9'
    container_name: redis10
    restart: always
    volumes:
      - ./redis10/:/etc/redis/
    ports:
      - 6380:6379
      - 16380:16379
    command:
      redis-server /etc/redis/redis.conf
    networks:
      mynet:
        ipv4_address: 172.30.0.110

  redis11:
    image: 'redis:5.0.9'
    container_name: redis11
    restart: always
    volumes:
      - ./redis11/:/etc/redis/
    ports:
      - 6381:6379
      - 16381:16379
    command:
      redis-server /etc/redis/redis.conf
    networks:
      mynet:
        ipv4_address: 172.30.0.111

```

Start all eleven nodes:

```bash
# start the cluster
docker-compose up -d

# view the startup logs
docker-compose logs

# check node status
docker ps -a
ps -ef | grep redis
netstat -anp | grep docker

# stop the cluster
docker-compose down

```

Once the containers are up, the eleven nodes are still each running on their own; no cluster relationships exist yet. The following command builds them automatically:

```bash
redis-cli --cluster create 172.30.0.101:6379 172.30.0.102:6379 172.30.0.103:6379 172.30.0.104:6379 172.30.0.105:6379 172.30.0.106:6379 172.30.0.107:6379 172.30.0.108:6379 172.30.0.109:6379 --cluster-replicas 2
```

  • --cluster-replicas 2: each master gets two replicas. From this value Redis works out that the cluster has three shards, i.e., three masters, each with two replicas.
  • Which nodes become masters and which become replicas is left entirely to Redis.

On success it prints:

```
>>> Performing hash slots allocation on 9 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.30.0.105:6379 to 172.30.0.101:6379
Adding replica 172.30.0.106:6379 to 172.30.0.101:6379
Adding replica 172.30.0.107:6379 to 172.30.0.102:6379
Adding replica 172.30.0.108:6379 to 172.30.0.102:6379
Adding replica 172.30.0.109:6379 to 172.30.0.103:6379
Adding replica 172.30.0.104:6379 to 172.30.0.103:6379
M: c5f8b7455d58394bdc924076aa67337fee0e8e78 172.30.0.101:6379
   slots:[0-5460] (5461 slots) master
M: 7a01d02d441ffa8565b105669d8c91ad67a16dc0 172.30.0.102:6379
   slots:[5461-10922] (5462 slots) master
M: 64f51e287aaf56510e468960a0f33869e95d63d2 172.30.0.103:6379
   slots:[10923-16383] (5461 slots) master
S: 12ba01fef886bdf28ab06723b86c253a5ea552e7 172.30.0.104:6379
   replicates 64f51e287aaf56510e468960a0f33869e95d63d2
S: 7ef3832ed5af43faf51c4e930f1240adf054424d 172.30.0.105:6379
   replicates c5f8b7455d58394bdc924076aa67337fee0e8e78
S: ad5f851b17668442fbdcb274d7a711f9d6093c01 172.30.0.106:6379
   replicates c5f8b7455d58394bdc924076aa67337fee0e8e78
S: 5847897a62e673c09a62fd44493d2343abe007ec 172.30.0.107:6379
   replicates 7a01d02d441ffa8565b105669d8c91ad67a16dc0
S: 95ab1c0e516cdba9703cbf1411349d2708e599af 172.30.0.108:6379
   replicates 7a01d02d441ffa8565b105669d8c91ad67a16dc0
S: e14cd04c2c8db1c2f2f201c594676c72a49fa983 172.30.0.109:6379
   replicates 64f51e287aaf56510e468960a0f33869e95d63d2
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 172.30.0.101:6379)
M: c5f8b7455d58394bdc924076aa67337fee0e8e78 172.30.0.101:6379
   slots:[0-5460] (5461 slots) master
   2 additional replica(s)
M: 64f51e287aaf56510e468960a0f33869e95d63d2 172.30.0.103:6379
   slots:[10923-16383] (5461 slots) master
   2 additional replica(s)
S: ad5f851b17668442fbdcb274d7a711f9d6093c01 172.30.0.106:6379
   slots: (0 slots) slave
   replicates c5f8b7455d58394bdc924076aa67337fee0e8e78
S: 12ba01fef886bdf28ab06723b86c253a5ea552e7 172.30.0.104:6379
   slots: (0 slots) slave
   replicates 64f51e287aaf56510e468960a0f33869e95d63d2
S: e14cd04c2c8db1c2f2f201c594676c72a49fa983 172.30.0.109:6379
   slots: (0 slots) slave
   replicates 64f51e287aaf56510e468960a0f33869e95d63d2
S: 95ab1c0e516cdba9703cbf1411349d2708e599af 172.30.0.108:6379
   slots: (0 slots) slave
   replicates 7a01d02d441ffa8565b105669d8c91ad67a16dc0
S: 5847897a62e673c09a62fd44493d2343abe007ec 172.30.0.107:6379
   slots: (0 slots) slave
   replicates 7a01d02d441ffa8565b105669d8c91ad67a16dc0
S: 7ef3832ed5af43faf51c4e930f1240adf054424d 172.30.0.105:6379
   slots: (0 slots) slave
   replicates c5f8b7455d58394bdc924076aa67337fee0e8e78
M: 7a01d02d441ffa8565b105669d8c91ad67a16dc0 172.30.0.102:6379
   slots:[5461-10922] (5462 slots) master
   2 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

```

At this point the Redis cluster is built, and the nodes have established cluster relationships with one another.

You can connect to any one of the nodes in either of two ways:

```bash
# connect directly by IP and port
redis-cli -h 172.30.0.101 -p 6379

# or via the port mapping configured earlier
redis-cli -p 6371

```

The cluster nodes command shows information about the cluster's nodes:

```bash
cluster nodes
```

```
64f51e287aaf56510e468960a0f33869e95d63d2 172.30.0.103:6379@16379 master - 0 1747217049208 3 connected 10923-16383
ad5f851b17668442fbdcb274d7a711f9d6093c01 172.30.0.106:6379@16379 slave c5f8b7455d58394bdc924076aa67337fee0e8e78 0 1747217048000 6 connected
12ba01fef886bdf28ab06723b86c253a5ea552e7 172.30.0.104:6379@16379 slave 64f51e287aaf56510e468960a0f33869e95d63d2 0 1747217048204 4 connected
e14cd04c2c8db1c2f2f201c594676c72a49fa983 172.30.0.109:6379@16379 slave 64f51e287aaf56510e468960a0f33869e95d63d2 0 1747217048606 9 connected
c5f8b7455d58394bdc924076aa67337fee0e8e78 172.30.0.101:6379@16379 myself,master - 0 1747217045000 1 connected 0-5460
95ab1c0e516cdba9703cbf1411349d2708e599af 172.30.0.108:6379@16379 slave 7a01d02d441ffa8565b105669d8c91ad67a16dc0 0 1747217047000 8 connected
5847897a62e673c09a62fd44493d2343abe007ec 172.30.0.107:6379@16379 slave 7a01d02d441ffa8565b105669d8c91ad67a16dc0 0 1747217047000 7 connected
7ef3832ed5af43faf51c4e930f1240adf054424d 172.30.0.105:6379@16379 slave c5f8b7455d58394bdc924076aa67337fee0e8e78 0 1747217048000 5 connected
7a01d02d441ffa8565b105669d8c91ad67a16dc0 172.30.0.102:6379@16379 master - 0 1747217047000 2 connected 5461-10922

```

If we now run set k1 111, it returns:

```
127.0.0.1:6371> set k1 111
(error) MOVED 12706 172.30.0.103:6379
127.0.0.1:6371> 

```

The key k1 hashes to slot 12706, and that slot belongs to the master at 172.30.0.103:6379 and its two replicas. So set k1 111 cannot run on 172.30.0.101:6379; it has to be executed on the master 172.30.0.103:6379.
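You can ask any node which slot a key maps to with the CLUSTER KEYSLOT command:

```bash
# Ask the cluster which slot k1 hashes to.
redis-cli -h 172.30.0.101 -p 6379 cluster keyslot k1
# (integer) 12706
```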

If you still want to issue commands from whichever node you happen to be on, connect with the -c flag:

```bash
redis-cli -h 172.30.0.103 -p 6379 -c
redis-cli -p 6371 -c

```

Running set k1 111 again now produces:

```
127.0.0.1:6371> set k1 111
-> Redirected to slot [12706] located at 172.30.0.103:6379
OK
172.30.0.103:6379> 

```

The command succeeds: Redis redirects it to the master at 172.30.0.103:6379, and switches the client connection over to that node as well.

A get command is redirected the same way:

```
127.0.0.1:6371> get k1
-> Redirected to slot [12706] located at 172.30.0.103:6379
"111"
172.30.0.103:6379> 

```

Even so, not every command can be run from an arbitrary node. Batch commands such as mget and mset fail like this:

```
127.0.0.1:6371> mget k1 k2
(error) CROSSSLOT Keys in request don't hash to the same slot
127.0.0.1:6371> 

```

The keys in the batch hash to slots managed by different shards, so there is no single node to redirect to.

This is not a dead end, though: hash tags solve it, as shown below.
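With a hash tag, only the part of the key inside {} is hashed, so keys that share a tag are guaranteed to land in the same slot. A small example (the key names are made up for illustration):

```bash
# Only the {user1000} tag is hashed, so both keys share one slot
# and multi-key commands against them work in cluster mode.
redis-cli -c -h 172.30.0.101 -p 6379 mset {user1000}:k1 111 {user1000}:k2 222
redis-cli -c -h 172.30.0.101 -p 6379 mget {user1000}:k1 {user1000}:k2
```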

Failure Handling

When the master of one of the three shards goes down, the cluster automatically plays a role similar to Redis Sentinel: it promotes one of that master's replicas to be the new master.

```bash
docker stop redis1
```

```
127.0.0.1:6373> cluster nodes
95ab1c0e516cdba9703cbf1411349d2708e599af 172.30.0.108:6379@16379 slave 7a01d02d441ffa8565b105669d8c91ad67a16dc0 0 1747218236599 8 connected
7a01d02d441ffa8565b105669d8c91ad67a16dc0 172.30.0.102:6379@16379 master - 0 1747218237000 2 connected 5461-10922
7ef3832ed5af43faf51c4e930f1240adf054424d 172.30.0.105:6379@16379 slave ad5f851b17668442fbdcb274d7a711f9d6093c01 0 1747218237000 10 connected
e14cd04c2c8db1c2f2f201c594676c72a49fa983 172.30.0.109:6379@16379 slave 64f51e287aaf56510e468960a0f33869e95d63d2 0 1747218236000 9 connected
12ba01fef886bdf28ab06723b86c253a5ea552e7 172.30.0.104:6379@16379 slave 64f51e287aaf56510e468960a0f33869e95d63d2 0 1747218237802 4 connected
ad5f851b17668442fbdcb274d7a711f9d6093c01 172.30.0.106:6379@16379 master - 0 1747218237000 10 connected 0-5460
64f51e287aaf56510e468960a0f33869e95d63d2 172.30.0.103:6379@16379 myself,master - 0 1747218235000 3 connected 10923-16383
5847897a62e673c09a62fd44493d2343abe007ec 172.30.0.107:6379@16379 slave 7a01d02d441ffa8565b105669d8c91ad67a16dc0 0 1747218237000 7 connected
c5f8b7455d58394bdc924076aa67337fee0e8e78 172.30.0.101:6379@16379 master,fail - 1747218217249 1747218215000 1 connected
127.0.0.1:6373> 

```

The redis1 node is marked fail, and one of its replicas, 172.30.0.106:6379, has been promoted to master.

After redis1 comes back up, we see the following:

```bash
docker start redis1
```

```
95ab1c0e516cdba9703cbf1411349d2708e599af 172.30.0.108:6379@16379 slave 7a01d02d441ffa8565b105669d8c91ad67a16dc0 0 1747218365000 8 connected
7a01d02d441ffa8565b105669d8c91ad67a16dc0 172.30.0.102:6379@16379 master - 0 1747218365595 2 connected 5461-10922
7ef3832ed5af43faf51c4e930f1240adf054424d 172.30.0.105:6379@16379 slave ad5f851b17668442fbdcb274d7a711f9d6093c01 0 1747218366000 10 connected
e14cd04c2c8db1c2f2f201c594676c72a49fa983 172.30.0.109:6379@16379 slave 64f51e287aaf56510e468960a0f33869e95d63d2 0 1747218366096 9 connected
12ba01fef886bdf28ab06723b86c253a5ea552e7 172.30.0.104:6379@16379 slave 64f51e287aaf56510e468960a0f33869e95d63d2 0 1747218364594 4 connected
ad5f851b17668442fbdcb274d7a711f9d6093c01 172.30.0.106:6379@16379 master - 0 1747218365000 10 connected 0-5460
64f51e287aaf56510e468960a0f33869e95d63d2 172.30.0.103:6379@16379 myself,master - 0 1747218364000 3 connected 10923-16383
5847897a62e673c09a62fd44493d2343abe007ec 172.30.0.107:6379@16379 slave 7a01d02d441ffa8565b105669d8c91ad67a16dc0 0 1747218366598 7 connected
c5f8b7455d58394bdc924076aa67337fee0e8e78 172.30.0.101:6379@16379 slave ad5f851b17668442fbdcb274d7a711f9d6093c01 0 1747218365095 10 connected

```

Notice that redis1 does not get its master role back; it rejoins as a replica under 106.

Failure Detection

  1. Node A sends node B a ping packet, and B answers with a pong. Apart from the message type field, ping and pong packets are identical: both carry the sender's view of the cluster configuration (its node id, which shard it belongs to, whether it is a master or a replica, who its master is, the bitmap of slots it holds, ...).
  2. Every second, each node pings a few randomly chosen nodes rather than all of them. This keeps heartbeat traffic down when the cluster is large (with 9 nodes, a full mesh would already mean 9 * 8 = 72 heartbeats per round, and that number grows on the order of N^2).
  3. If A pings B and B fails to respond in time, A tries to re-establish its TCP connection to B. If that also fails, A marks B as PFAIL (roughly, subjectively down).
  4. After judging B as PFAIL, A uses Redis's built-in Gossip protocol to check B's status with other nodes. (Each node maintains its own "down list", and because every node has a different vantage point, these lists need not agree.)
  5. If A finds that many other nodes also consider B PFAIL, and their number exceeds half of the cluster, A marks B as FAIL (objectively down) and spreads that message to the other nodes, which then mark B as FAIL as well.

Failover

In the example above, B has failed, and A has broadcast the news of B's FAIL state to the other nodes in the cluster.

  • If B is a replica, no failover is needed.
  • If B is a master, the failover is triggered by B's replicas (say, C and D).
  • Failover means promoting a replica to master so it can keep serving the Redis cluster.

The flow is as follows:

  1. A replica first checks whether it is eligible to run. If it has been out of contact with its master for too long (its data is assumed to have drifted too far from the master's), past a threshold, it loses its eligibility.
  2. Eligible replicas, say C and D, then sleep for a while: sleep time = 500ms base + a random 0-500ms + rank * 1000ms. The larger a replica's replication offset, the better (smaller) its rank, so the freshest replica wakes first (see the sketch after this list).
  3. When C's sleep ends, it solicits votes from all the other nodes in the cluster, but only masters are entitled to vote.
  4. Each master casts its single vote for C. Once C's votes exceed half the number of masters, C is promoted to master. (C runs slaveof no one on itself and has D run slaveof C.)
  5. C then broadcasts its promotion to the rest of the cluster, and every node updates its stored view of the cluster topology.
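A minimal sketch of the election delay in step 2 (the rank value here is made up for illustration):

```bash
# rank 0 = the replica with the largest replication offset (freshest data).
rank=1
delay=$(( 500 + RANDOM % 501 + rank * 1000 ))
echo "this replica waits ${delay}ms before soliciting votes"
```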

The failure of one or a few nodes can sometimes take the whole cluster down (the cluster enters the fail state). This happens in three situations (a quick way to check for it is shown after this list):

  • A shard loses its master and every one of its replicas.
  • A shard loses its master and has no replicas at all.
  • More than half of all master nodes are down.
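One way to observe this, assuming the Docker setup from this article: with the default cluster-require-full-coverage yes, cluster_state in CLUSTER INFO flips from ok to fail once slot coverage is lost.

```bash
# "ok" while all 16384 slots are served; "fail" once coverage is lost.
redis-cli -h 172.30.0.102 -p 6379 cluster info | grep cluster_state
```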

Cluster Expansion

Expanding a cluster is a risky and costly operation, and in day-to-day work it is performed only rarely.

We will add 110 and 111 to the cluster: 110 as a new master, 111 as a replica of 110.

110 添加到集群:

```bash
redis-cli --cluster add-node 172.30.0.110:6379 172.30.0.101:6379

```

The first argument is the node being added; the second can be any node already in the cluster.

The command prints:

```
>>> Adding node 172.30.0.110:6379 to cluster 172.30.0.101:6379
>>> Performing Cluster Check (using node 172.30.0.101:6379)
S: c5f8b7455d58394bdc924076aa67337fee0e8e78 172.30.0.101:6379
   slots: (0 slots) slave
   replicates ad5f851b17668442fbdcb274d7a711f9d6093c01
M: 7a01d02d441ffa8565b105669d8c91ad67a16dc0 172.30.0.102:6379
   slots:[5461-10922] (5462 slots) master
   2 additional replica(s)
M: ad5f851b17668442fbdcb274d7a711f9d6093c01 172.30.0.106:6379
   slots:[0-5460] (5461 slots) master
   2 additional replica(s)
S: 5847897a62e673c09a62fd44493d2343abe007ec 172.30.0.107:6379
   slots: (0 slots) slave
   replicates 7a01d02d441ffa8565b105669d8c91ad67a16dc0
S: 95ab1c0e516cdba9703cbf1411349d2708e599af 172.30.0.108:6379
   slots: (0 slots) slave
   replicates 7a01d02d441ffa8565b105669d8c91ad67a16dc0
S: 7ef3832ed5af43faf51c4e930f1240adf054424d 172.30.0.105:6379
   slots: (0 slots) slave
   replicates ad5f851b17668442fbdcb274d7a711f9d6093c01
S: e14cd04c2c8db1c2f2f201c594676c72a49fa983 172.30.0.109:6379
   slots: (0 slots) slave
   replicates 64f51e287aaf56510e468960a0f33869e95d63d2
M: 64f51e287aaf56510e468960a0f33869e95d63d2 172.30.0.103:6379
   slots:[10923-16383] (5461 slots) master
   2 additional replica(s)
S: 12ba01fef886bdf28ab06723b86c253a5ea552e7 172.30.0.104:6379
   slots: (0 slots) slave
   replicates 64f51e287aaf56510e468960a0f33869e95d63d2
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Getting functions from cluster
>>> Failed retrieving Functions from the cluster, skip this step as Redis version do not support function command (error = 'ERR unknown command `FUNCTION`, with args beginning with: `DUMP`, ')
>>> Send CLUSTER MEET to node 172.30.0.110:6379 to make it join the cluster.
[OK] New node added correctly.

```

Checking cluster nodes again:

```
95ab1c0e516cdba9703cbf1411349d2708e599af 172.30.0.108:6379@16379 slave 7a01d02d441ffa8565b105669d8c91ad67a16dc0 0 1747223369580 8 connected
7a01d02d441ffa8565b105669d8c91ad67a16dc0 172.30.0.102:6379@16379 master - 0 1747223369580 2 connected 5461-10922
7ef3832ed5af43faf51c4e930f1240adf054424d 172.30.0.105:6379@16379 slave ad5f851b17668442fbdcb274d7a711f9d6093c01 0 1747223368000 10 connected
e14cd04c2c8db1c2f2f201c594676c72a49fa983 172.30.0.109:6379@16379 slave 64f51e287aaf56510e468960a0f33869e95d63d2 0 1747223370080 9 connected
12ba01fef886bdf28ab06723b86c253a5ea552e7 172.30.0.104:6379@16379 slave 64f51e287aaf56510e468960a0f33869e95d63d2 0 1747223369479 4 connected
ad5f851b17668442fbdcb274d7a711f9d6093c01 172.30.0.106:6379@16379 master - 0 1747223368477 10 connected 0-5460
64f51e287aaf56510e468960a0f33869e95d63d2 172.30.0.103:6379@16379 myself,master - 0 1747223368000 3 connected 10923-16383
5847897a62e673c09a62fd44493d2343abe007ec 172.30.0.107:6379@16379 slave 7a01d02d441ffa8565b105669d8c91ad67a16dc0 0 1747223368577 7 connected
49a3447fe4156a2590f1cc2924aad53af436100b 172.30.0.110:6379@16379 master - 0 1747223369078 0 connected
c5f8b7455d58394bdc924076aa67337fee0e8e78 172.30.0.101:6379@16379 slave ad5f851b17668442fbdcb274d7a711f9d6093c01 0 1747223369000 10 connected

```

Node 110 is now a master, which means a new shard has been added, but no slots have been assigned to it yet.

Now assign slots to 110:

```bash
redis-cli --cluster reshard 172.30.0.101:6379

```

The command runs an interactive dialog:

```
>>> Performing Cluster Check (using node 172.30.0.101:6379)
S: c5f8b7455d58394bdc924076aa67337fee0e8e78 172.30.0.101:6379
   slots: (0 slots) slave
   replicates ad5f851b17668442fbdcb274d7a711f9d6093c01
M: 7a01d02d441ffa8565b105669d8c91ad67a16dc0 172.30.0.102:6379
   slots:[5461-10922] (5462 slots) master
   2 additional replica(s)
M: ad5f851b17668442fbdcb274d7a711f9d6093c01 172.30.0.106:6379
   slots:[0-5460] (5461 slots) master
   2 additional replica(s)
S: 5847897a62e673c09a62fd44493d2343abe007ec 172.30.0.107:6379
   slots: (0 slots) slave
   replicates 7a01d02d441ffa8565b105669d8c91ad67a16dc0
S: 95ab1c0e516cdba9703cbf1411349d2708e599af 172.30.0.108:6379
   slots: (0 slots) slave
   replicates 7a01d02d441ffa8565b105669d8c91ad67a16dc0
S: 7ef3832ed5af43faf51c4e930f1240adf054424d 172.30.0.105:6379
   slots: (0 slots) slave
   replicates ad5f851b17668442fbdcb274d7a711f9d6093c01
M: 49a3447fe4156a2590f1cc2924aad53af436100b 172.30.0.110:6379
   slots: (0 slots) master
S: e14cd04c2c8db1c2f2f201c594676c72a49fa983 172.30.0.109:6379
   slots: (0 slots) slave
   replicates 64f51e287aaf56510e468960a0f33869e95d63d2
M: 64f51e287aaf56510e468960a0f33869e95d63d2 172.30.0.103:6379
   slots:[10923-16383] (5461 slots) master
   2 additional replica(s)
S: 12ba01fef886bdf28ab06723b86c253a5ea552e7 172.30.0.104:6379
   slots: (0 slots) slave
   replicates 64f51e287aaf56510e468960a0f33869e95d63d2
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? 49a3447fe4156a2590f1cc2924aad53af436100b
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all

Ready to move 4096 slots.
  Source nodes:
    M: 7a01d02d441ffa8565b105669d8c91ad67a16dc0 172.30.0.102:6379
       slots:[5461-10922] (5462 slots) master
       2 additional replica(s)
    M: ad5f851b17668442fbdcb274d7a711f9d6093c01 172.30.0.106:6379
       slots:[0-5460] (5461 slots) master
       2 additional replica(s)
    M: 64f51e287aaf56510e468960a0f33869e95d63d2 172.30.0.103:6379
       slots:[10923-16383] (5461 slots) master
       2 additional replica(s)
  Destination node:
    M: 49a3447fe4156a2590f1cc2924aad53af436100b 172.30.0.110:6379
       slots: (0 slots) master
  Resharding plan:
    Moving slot 5461 from 7a01d02d441ffa8565b105669d8c91ad67a16dc0
    ......
    Moving slot 12287 from 64f51e287aaf56510e468960a0f33869e95d63d2
Do you want to proceed with the proposed reshard plan (yes/no)? yes
......
Moving slot 12285 from 172.30.0.103:6379 to 172.30.0.110:6379: 

```

A few key points in the dialog (a non-interactive alternative is sketched after this list):

  • How many slots do you want to move (from 1 to 16384)? 4096: how many slots to give 110. There are 16384 slots in total and now four shards, so an even split is 4096 slots per shard.
  • What is the receiving node ID? 49a3447fe4156a2590f1cc2924aad53af436100b: which node receives the slots; enter that node's ID, here 110's.
  • Please enter all the source node IDs. Type 'all' to use all the nodes as source nodes for the hash slots. Type 'done' once you entered all the source nodes IDs. Source node #1: all: two modes here. all pulls a share of slots from every other master; alternatively, enter specific source node IDs one at a time and finish with done.
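As an aside, redis-cli also offers a rebalance subcommand that computes an even plan without the interactive prompts; a hedged alternative to the dialog above, not what this walkthrough used:

```bash
# --cluster-use-empty-masters lets slot-less masters such as 110 receive slots.
redis-cli --cluster rebalance 172.30.0.101:6379 --cluster-use-empty-masters
```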

Checking cluster nodes once more:

```
95ab1c0e516cdba9703cbf1411349d2708e599af 172.30.0.108:6379@16379 slave 7a01d02d441ffa8565b105669d8c91ad67a16dc0 0 1747223707551 8 connected
7a01d02d441ffa8565b105669d8c91ad67a16dc0 172.30.0.102:6379@16379 master - 0 1747223707551 2 connected 6827-10922
7ef3832ed5af43faf51c4e930f1240adf054424d 172.30.0.105:6379@16379 slave ad5f851b17668442fbdcb274d7a711f9d6093c01 0 1747223707000 10 connected
e14cd04c2c8db1c2f2f201c594676c72a49fa983 172.30.0.109:6379@16379 slave 64f51e287aaf56510e468960a0f33869e95d63d2 0 1747223707750 9 connected
12ba01fef886bdf28ab06723b86c253a5ea552e7 172.30.0.104:6379@16379 slave 49a3447fe4156a2590f1cc2924aad53af436100b 0 1747223708252 11 connected
ad5f851b17668442fbdcb274d7a711f9d6093c01 172.30.0.106:6379@16379 master - 0 1747223706000 10 connected 1365-5460
64f51e287aaf56510e468960a0f33869e95d63d2 172.30.0.103:6379@16379 myself,master - 0 1747223707000 3 connected 12288-16383
5847897a62e673c09a62fd44493d2343abe007ec 172.30.0.107:6379@16379 slave 7a01d02d441ffa8565b105669d8c91ad67a16dc0 0 1747223706000 7 connected
49a3447fe4156a2590f1cc2924aad53af436100b 172.30.0.110:6379@16379 master - 0 1747223708051 11 connected 0-1364 5461-6826 10923-12287
c5f8b7455d58394bdc924076aa67337fee0e8e78 172.30.0.101:6379@16379 slave ad5f851b17668442fbdcb274d7a711f9d6093c01 0 1747223707551 10 connected

```

Node 110 now holds its 4096 slots (0-1364, 5461-6826, and 10923-12287).
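You can also have redis-cli verify the slot layout and coverage from any node:

```bash
# Verifies that nodes agree on slot assignments and all 16384 slots are covered.
redis-cli --cluster check 172.30.0.101:6379
```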

111 加入到 110 旗下的从节点中:

```bash
redis-cli --cluster add-node 172.30.0.111:6379 172.30.0.101:6379 --cluster-slave --cluster-master-id [nodeId of the 172.30.0.110 node]

redis-cli --cluster add-node 172.30.0.111:6379 172.30.0.101:6379 --cluster-slave --cluster-master-id 49a3447fe4156a2590f1cc2924aad53af436100b
```

```
>>> Adding node 172.30.0.111:6379 to cluster 172.30.0.101:6379
>>> Performing Cluster Check (using node 172.30.0.101:6379)
S: c5f8b7455d58394bdc924076aa67337fee0e8e78 172.30.0.101:6379
   slots: (0 slots) slave
   replicates ad5f851b17668442fbdcb274d7a711f9d6093c01
M: 7a01d02d441ffa8565b105669d8c91ad67a16dc0 172.30.0.102:6379
   slots:[6827-10922] (4096 slots) master
   2 additional replica(s)
M: ad5f851b17668442fbdcb274d7a711f9d6093c01 172.30.0.106:6379
   slots:[1365-5460] (4096 slots) master
   2 additional replica(s)
S: 5847897a62e673c09a62fd44493d2343abe007ec 172.30.0.107:6379
   slots: (0 slots) slave
   replicates 7a01d02d441ffa8565b105669d8c91ad67a16dc0
S: 95ab1c0e516cdba9703cbf1411349d2708e599af 172.30.0.108:6379
   slots: (0 slots) slave
   replicates 7a01d02d441ffa8565b105669d8c91ad67a16dc0
S: 7ef3832ed5af43faf51c4e930f1240adf054424d 172.30.0.105:6379
   slots: (0 slots) slave
   replicates ad5f851b17668442fbdcb274d7a711f9d6093c01
M: 49a3447fe4156a2590f1cc2924aad53af436100b 172.30.0.110:6379
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
   1 additional replica(s)
S: e14cd04c2c8db1c2f2f201c594676c72a49fa983 172.30.0.109:6379
   slots: (0 slots) slave
   replicates 64f51e287aaf56510e468960a0f33869e95d63d2
M: 64f51e287aaf56510e468960a0f33869e95d63d2 172.30.0.103:6379
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: 12ba01fef886bdf28ab06723b86c253a5ea552e7 172.30.0.104:6379
   slots: (0 slots) slave
   replicates 49a3447fe4156a2590f1cc2924aad53af436100b
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.30.0.111:6379 to make it join the cluster.
Waiting for the cluster to join
..
>>> Configure node as replica of 172.30.0.110:6379.
[OK] New node added correctly.

```

And check cluster nodes one last time:

```
7df32f4548725bbd3fcbdbd267f7eec62b716170 172.30.0.111:6379@16379 slave 49a3447fe4156a2590f1cc2924aad53af436100b 0 1747224300599 11 connected
95ab1c0e516cdba9703cbf1411349d2708e599af 172.30.0.108:6379@16379 slave 7a01d02d441ffa8565b105669d8c91ad67a16dc0 0 1747224300599 8 connected
7a01d02d441ffa8565b105669d8c91ad67a16dc0 172.30.0.102:6379@16379 master - 0 1747224300000 2 connected 6827-10922
7ef3832ed5af43faf51c4e930f1240adf054424d 172.30.0.105:6379@16379 slave ad5f851b17668442fbdcb274d7a711f9d6093c01 0 1747224300000 10 connected
e14cd04c2c8db1c2f2f201c594676c72a49fa983 172.30.0.109:6379@16379 slave 64f51e287aaf56510e468960a0f33869e95d63d2 0 1747224301701 9 connected
12ba01fef886bdf28ab06723b86c253a5ea552e7 172.30.0.104:6379@16379 slave 49a3447fe4156a2590f1cc2924aad53af436100b 0 1747224300698 11 connected
ad5f851b17668442fbdcb274d7a711f9d6093c01 172.30.0.106:6379@16379 master - 0 1747224300000 10 connected 1365-5460
64f51e287aaf56510e468960a0f33869e95d63d2 172.30.0.103:6379@16379 myself,master - 0 1747224299000 3 connected 12288-16383
5847897a62e673c09a62fd44493d2343abe007ec 172.30.0.107:6379@16379 slave 7a01d02d441ffa8565b105669d8c91ad67a16dc0 0 1747224300000 7 connected
49a3447fe4156a2590f1cc2924aad53af436100b 172.30.0.110:6379@16379 master - 0 1747224300599 11 connected 0-1364 5461-6826 10923-12287
c5f8b7455d58394bdc924076aa67337fee0e8e78 172.30.0.101:6379@16379 slave ad5f851b17668442fbdcb274d7a711f9d6093c01 0 1747224300599 10 connected
```

Node 111 (7df32f45...) now shows up as a replica of 110 (49a3447f...), completing the expansion.