Fixing a Redis cluster on Kubernetes that cannot connect to its master

Problem description

I previously deployed a single node on k8s, created six Redis pods on it, and built a Redis cluster from them. It ran fine.

Recently I added a slave node, and some of the Redis pods got scheduled onto it. After that the cluster would not come up. The logs showed the following errors:

127.0.0.1:6379> get test
(error) CLUSTERDOWN The cluster is down


127.0.0.1:6379> cluster info
cluster_state:fail
cluster_slots_assigned:16384
cluster_slots_ok:0
cluster_slots_pfail:16384
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:14
cluster_my_epoch:14
cluster_stats_messages_ping_sent:4
cluster_stats_messages_sent:4
cluster_stats_messages_received:0
total_cluster_links_buffer_limit_exceeded:0


$ kubectl logs redis-app-5
... ...
1:S 19 Nov 2024 01:58:13.251 * Connecting to MASTER 172.16.43.44:6379
1:S 19 Nov 2024 01:58:13.251 * MASTER <-> REPLICA sync started
1:S 19 Nov 2024 01:58:13.251 * Cluster state changed: ok
1:S 19 Nov 2024 01:58:20.754 # Cluster state changed: fail
1:S 19 Nov 2024 01:59:14.979 # Timeout connecting to the MASTER...
1:S 19 Nov 2024 01:59:14.979 * Reconnecting to MASTER 172.16.43.44:6379 after failure
1:S 19 Nov 2024 01:59:14.979 * MASTER <-> REPLICA sync started
1:S 19 Nov 2024 02:00:15.422 # Timeout connecting to the MASTER...
1:S 19 Nov 2024 02:00:15.422 * Reconnecting to MASTER 172.16.43.44:6379 after failure
1:S 19 Nov 2024 02:00:15.422 * MASTER <-> REPLICA sync started
1:S 19 Nov 2024 02:01:16.357 # Timeout connecting to the MASTER...
1:S 19 Nov 2024 02:01:16.357 * Reconnecting to MASTER 172.16.43.44:6379 after failure
1:S 19 Nov 2024 02:01:16.357 * MASTER <-> REPLICA sync started

Problem analysis

What happened is that the Redis pods were restarted and their IP addresses may have changed, but the cluster configuration still referred to the pre-restart addresses, so the cluster failed to come up.
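The stale addresses are easy to confirm: each Redis pod persists the cluster topology in its nodes.conf, and the IPs recorded there no longer match the pods' current IPs after a reschedule. A minimal sketch of reading out the recorded address (the nodes.conf line below is a made-up sample in the standard CLUSTER NODES format; on a real pod you would `kubectl exec` in and read the file from the Redis data directory):

```shell
# One line of nodes.conf: <node-id> <ip:port@cport> <flags> ... (sample data)
nodes_conf='57d9f345d23e7bf7dd2f331e14d9d7143aa9617f 172.16.43.44:6379@16379 master - 0 1700000000000 1 connected 0-5460'

# Field 2 is ip:port@cport; split on ':' to get the IP the cluster still remembers.
recorded_ip=$(echo "$nodes_conf" | awk '{split($2, a, ":"); print a[1]}')
echo "$recorded_ip"
```

Comparing this recorded IP against the pod's current IP (from `kubectl get pods -o wide`) shows the mismatch directly.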

My fix:

  1. First, check each Redis pod's current IP

    $  kubectl describe pod redis-app | grep IP
                      cni.projectcalico.org/podIP: 172.16.178.201/32
                      cni.projectcalico.org/podIPs: 172.16.178.201/32
    IP:               172.16.178.201
    IPs:
      IP:           172.16.178.201
                      cni.projectcalico.org/podIP: 172.16.178.202/32
                      cni.projectcalico.org/podIPs: 172.16.178.202/32
    IP:               172.16.178.202
    IPs:
      IP:           172.16.178.202
                      cni.projectcalico.org/podIP: 172.16.43.1/32
                      cni.projectcalico.org/podIPs: 172.16.43.1/32
    IP:               172.16.43.1
    IPs:
      IP:           172.16.43.1
                      cni.projectcalico.org/podIP: 172.16.178.203/32
                      cni.projectcalico.org/podIPs: 172.16.178.203/32
    IP:               172.16.178.203
    IPs:
      IP:           172.16.178.203
                      cni.projectcalico.org/podIP: 172.16.43.63/32
                      cni.projectcalico.org/podIPs: 172.16.43.63/32
    IP:               172.16.43.63
    IPs:
      IP:           172.16.43.63
                      cni.projectcalico.org/podIP: 172.16.178.204/32
                      cni.projectcalico.org/podIPs: 172.16.178.204/32
    IP:               172.16.178.204
    IPs:
      IP:           172.16.178.204
    
    
    
    $ kubectl get pods -o wide
    NAME                                 READY   STATUS    RESTARTS         AGE     IP               NODE       NOMINATED NODE   READINESS GATES
    redis-app-0                          1/1     Running   0                2m34s   172.16.178.201   kevin-s1   <none>           <none>
    redis-app-1                          1/1     Running   0                2m32s   172.16.178.202   kevin-s1   <none>           <none>
    redis-app-2                          1/1     Running   0                2m30s   172.16.43.1      kevin-pc   <none>           <none>
    redis-app-3                          1/1     Running   0                2m26s   172.16.178.203   kevin-s1   <none>           <none>
    redis-app-4                          1/1     Running   0                2m24s   172.16.43.63     kevin-pc   <none>           <none>
    redis-app-5                          1/1     Running   0                2m19s   172.16.178.204   kevin-s1   <none>           <none>
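    To avoid copying IPs out of the table by hand, the name-to-IP mapping can be pulled out with awk. The sketch below runs over a captured two-line sample of the `kubectl get pods -o wide` output above; live, you would pipe kubectl straight into awk:

    ```shell
    # Sample of the `kubectl get pods -o wide` body (columns: NAME READY STATUS RESTARTS AGE IP NODE)
    pods='redis-app-0   1/1   Running   0   2m34s   172.16.178.201   kevin-s1
    redis-app-1   1/1   Running   0   2m32s   172.16.178.202   kevin-s1'

    # Column 1 is the pod name, column 6 the pod IP.
    pairs=$(echo "$pods" | awk '{print $1 "=" $6}')
    echo "$pairs"
    ```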
    
  2. Then rebuild the cluster with inem0o/redis-trib, using the pods' current IPs

    $ sudo docker run --rm -ti inem0o/redis-trib create --replicas 1 172.16.178.201:6379 172.16.178.202:6379 172.16.43.1:6379 172.16.43.63:6379 172.16.178.204:6379 172.16.178.203:6379
    
  3. During cluster creation you may hit the following error

    $ sudo docker run --rm -ti inem0o/redis-trib create --replicas 1 172.16.178.201:6379 172.16.178.202:6379 172.16.43.1:6379 172.16.43.63:6379 172.16.178.204:6379 172.16.178.203:6379

    Creating cluster
    [ERR] Node 172.16.178.201:6379 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.

This happens because the Redis pods still hold their old cluster state. Log into each affected pod and reset it with redis-cli:

$ kubectl exec -it redis-app-0 -- redis-cli -h 172.16.178.201 -p 6379
172.16.178.201:6379> CLUSTER RESET
OK
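With six pods this gets tedious by hand, so a loop can issue the reset to every pod. The sketch below only prints the commands it would run (a dry run; the pod names follow the StatefulSet naming above, and you would remove the quotes-and-echo wrapper to actually execute them):

```shell
# Generate a CLUSTER RESET command for each StatefulSet pod (dry run).
# Remove the surrounding echo/quoting to actually execute each command.
cmds=$(for i in 0 1 2 3 4 5; do
  echo "kubectl exec -it redis-app-$i -- redis-cli -p 6379 CLUSTER RESET"
done)
echo "$cmds"
```

Note that CLUSTER RESET refuses to run on a node that still holds keys; such a node needs a FLUSHALL first.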

Once every affected pod has been reset, run the command above again and the cluster is rebuilt successfully:

$ sudo docker run --rm -ti inem0o/redis-trib create --replicas 1 172.16.178.201:6379 172.16.178.202:6379 172.16.43.1:6379 172.16.43.63:6379 172.16.178.204:6379 172.16.178.203:6379
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
172.16.178.201:6379
172.16.178.202:6379
172.16.43.1:6379
Adding replica 172.16.43.63:6379 to 172.16.178.201:6379
Adding replica 172.16.178.204:6379 to 172.16.178.202:6379
Adding replica 172.16.178.203:6379 to 172.16.43.1:6379
M: 57d9f345d23e7bf7dd2f331e14d9d7143aa9617f 172.16.178.201:6379
   slots:0-5460 (5461 slots) master
M: f5d617c0ed655dd6afa32c5d4ec6260713668639 172.16.178.202:6379
   slots:5461-10922 (5462 slots) master
M: 808de7e00f10fe17a5582cd76a533159a25006d8 172.16.43.1:6379
   slots:10923-16383 (5461 slots) master
S: 44ac042b99b9b73051b05d1be3d98cf475f67f0a 172.16.43.63:6379
   replicates 57d9f345d23e7bf7dd2f331e14d9d7143aa9617f
S: 8db8f89b7b28d0ce098de275340e3c4679fd342d 172.16.178.204:6379
   replicates f5d617c0ed655dd6afa32c5d4ec6260713668639
S: 2f5860e62f03ea17d398bbe447a6f1d428ae8698 172.16.178.203:6379
   replicates 808de7e00f10fe17a5582cd76a533159a25006d8
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.
>>> Performing Cluster Check (using node 172.16.178.201:6379)
M: 57d9f345d23e7bf7dd2f331e14d9d7143aa9617f 172.16.178.201:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 44ac042b99b9b73051b05d1be3d98cf475f67f0a 172.16.43.63:6379@16379
   slots: (0 slots) slave
   replicates 57d9f345d23e7bf7dd2f331e14d9d7143aa9617f
M: f5d617c0ed655dd6afa32c5d4ec6260713668639 172.16.178.202:6379@16379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 8db8f89b7b28d0ce098de275340e3c4679fd342d 172.16.178.204:6379@16379
   slots: (0 slots) slave
   replicates f5d617c0ed655dd6afa32c5d4ec6260713668639
S: 2f5860e62f03ea17d398bbe447a6f1d428ae8698 172.16.178.203:6379@16379
   slots: (0 slots) slave
   replicates 808de7e00f10fe17a5582cd76a533159a25006d8
M: 808de7e00f10fe17a5582cd76a533159a25006d8 172.16.43.1:6379@16379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Check the cluster state:

$ kubectl exec -it redis-app-3 -- redis-cli
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:14
cluster_my_epoch:3
cluster_stats_messages_ping_sent:39
cluster_stats_messages_pong_sent:40
cluster_stats_messages_meet_sent:1
cluster_stats_messages_sent:80
cluster_stats_messages_ping_received:40
cluster_stats_messages_pong_received:36
cluster_stats_messages_received:76
total_cluster_links_buffer_limit_exceeded:0

That resolves the problem.

=========================================================================

Update, 2024-11-20

=========================================================================

When a node is rebooted, the pods on it restart too, and a Redis pod's IP is assigned at startup, so a reboot can bring the cluster down again. An alternative is to build the cluster with each Redis pod's DNS name instead of its IP. The DNS name has this format:

<statefulset-name>-<ordinal>.<service-name>.<namespace>.svc.cluster.local

<statefulset-name>, <service-name>, and <namespace> can be read from redis-stateful.yaml.

<ordinal> is the instance index.

For example, given the following redis-stateful.yaml:

$ cat redis-stateful.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-app
spec:
  serviceName: redis-service
  replicas: 6
  selector:
    matchLabels:
      app: redis
      appCluster: redis-cluster
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      containers:
      - name: redis
        image: redis
        imagePullPolicy: IfNotPresent
        command: ["redis-server"]
        args:
          - "/etc/redis/redis.conf"
          - "--protected-mode"
          - "no"
        ports:
            - name: redis
              containerPort: 6379
              protocol: "TCP"
            - name: cluster
              containerPort: 16379
              protocol: "TCP"
        volumeMounts:
          - name: "redis-conf"
            mountPath: "/etc/redis"
          - name: "redis-data"
            mountPath: "/var/lib/redis"
      volumes:
      - name: "redis-conf"
        configMap:
          name: "redis-conf"
          items:
            - key: "redis.conf"
              path: "redis.conf"
  volumeClaimTemplates:
  - metadata:
      name: redis-data
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: "redis"
      resources:
        requests:
          storage: 1Gi

Here

<statefulset-name> is redis-app

<service-name> is redis-service

<namespace> defaults to default (you can configure your own namespace)

Since six replicas are started, <ordinal> runs from 0 to 5.

With that information, the cluster-creation command is:

$ kubectl exec -it redis-app-0 -n default -- redis-cli --cluster create \
redis-app-1.redis-service.default.svc.cluster.local:6379 \
redis-app-2.redis-service.default.svc.cluster.local:6379 \
redis-app-3.redis-service.default.svc.cluster.local:6379 \
redis-app-4.redis-service.default.svc.cluster.local:6379 \
redis-app-5.redis-service.default.svc.cluster.local:6379 \
redis-app-0.redis-service.default.svc.cluster.local:6379 --cluster-replicas 1
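The endpoint list itself follows mechanically from the StatefulSet name, Service name, namespace, and replica count, so it can be generated instead of typed. A small sketch using the names from the YAML above:

```shell
# Build the cluster endpoints from the StatefulSet metadata above.
statefulset=redis-app
service=redis-service
namespace=default
replicas=6

endpoints=""
for i in $(seq 0 $((replicas - 1))); do
  endpoints="$endpoints $statefulset-$i.$service.$namespace.svc.cluster.local:6379"
done
echo "$endpoints"
```

The resulting list can be passed straight to `redis-cli --cluster create ... --cluster-replicas 1`.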

With that, the cluster is built successfully, and after a node restart the cluster members can still reach each other through their DNS names.

127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:15
cluster_my_epoch:15
cluster_stats_messages_ping_sent:1204
cluster_stats_messages_pong_sent:1195
cluster_stats_messages_meet_sent:1
cluster_stats_messages_sent:2400
cluster_stats_messages_ping_received:1195
cluster_stats_messages_pong_received:1200
cluster_stats_messages_received:2395
total_cluster_links_buffer_limit_exceeded:0