Redis Source Code Analysis: Cluster Failover



To store more data, Redis scales horizontally with a sharded ("sliced") cluster. Starting from Redis 3.0, the official Redis Cluster solution provides this kind of sharding. This article walks the source code to analyze how failover is handled.

1. Failure Detection

1.1 pfail (possible failure)

When node A gets no reply to its PING from node B within the configured window (the timeout is set by the cluster-node-timeout option), A marks B as pfail.

Nodes are checked in clusterCron. The delay is the smaller of (now - time the last PING was sent) and (now - time data was last received); if it exceeds cluster_node_timeout, the node is judged pfail.

//cluster.c#clusterCron
...
mstime_t delay = now - node->ping_sent;
mstime_t data_delay = now - node->data_received;
if (data_delay < delay) delay = data_delay;
if (delay > server.cluster_node_timeout) {
    /* Timeout reached. Set the node as possibly failing if it is
     * not already in this state. */
    if (!(node->flags & (CLUSTER_NODE_PFAIL|CLUSTER_NODE_FAIL))) {
        serverLog(LL_DEBUG,"*** NODE %.40s possibly failing",
            node->name);
        node->flags |= CLUSTER_NODE_PFAIL;
        update_state = 1;
    }
}

In addition, every 10th run of clusterCron samples 5 random nodes, picks the one that has gone longest without sending a pong reply, and sends it a PING.

//cluster.c#clusterCron
...
if (!(iteration % 10)) {
    int j;
    /* Check a few random nodes and ping the one with the oldest
     * pong_received time. */
    for (j = 0; j < 5; j++) {
        de = dictGetRandomKey(server.cluster->nodes);
        clusterNode *this = dictGetVal(de);
        /* Don't ping nodes disconnected or with a ping currently active. */
        if (this->link == NULL || this->ping_sent != 0) continue;
        if (this->flags & (CLUSTER_NODE_MYSELF|CLUSTER_NODE_HANDSHAKE))
            continue;
        if (min_pong_node == NULL || min_pong > this->pong_received) {
            min_pong_node = this;
            min_pong = this->pong_received;
        }
    }
    if (min_pong_node) {
        serverLog(LL_DEBUG,"Pinging node %.40s", min_pong_node->name);
        clusterSendPing(min_pong_node->link, CLUSTERMSG_TYPE_PING);
    }
}

1.2 fail

Once a majority of the masters judge B as pfail, B is marked fail. Only a node in the fail state triggers a master/replica switchover.

1) Masters make the decision because, in cluster mode, only slot-owning masters serve reads and writes and maintain key cluster metadata such as slot assignments.

2) Requiring more than half of the slot-owning masters guards against the cluster being split apart by a network partition, as the sketch below illustrates.
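
As a quick illustration of the second point, here is a minimal, self-contained sketch (illustrative only, not Redis source; the cluster size and the split are invented). In a cluster of 6 slot-owning masters partitioned 4 against 2, only the majority side can reach the quorum of (6/2)+1 = 4, so only one side of the split can ever declare a node fail:

#include <stdio.h>

/* Illustrative sketch of the quorum rule used by markNodeAsFailingIfNeeded. */
static int needed_quorum(int masters) { return masters / 2 + 1; }

int main(void) {
    int masters = 6;            /* hypothetical cluster: 6 slot-owning masters */
    int side_a = 4, side_b = 2; /* a network partition splits them 4 vs 2 */
    printf("needed quorum: %d\n", needed_quorum(masters));   /* (6/2)+1 = 4 */
    printf("majority side can mark FAIL: %s\n",
           side_a >= needed_quorum(masters) ? "yes" : "no"); /* yes */
    printf("minority side can mark FAIL: %s\n",
           side_b >= needed_quorum(masters) ? "yes" : "no"); /* no */
    return 0;
}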

Nodes gossip with one another, eventually reaching clusterProcessGossipSection, which contains the check below. If the sender is a master and it flags the gossiped node as fail or pfail, the receiver records a failure report and calls markNodeAsFailingIfNeeded to decide whether the node should be marked fail.

//cluster.c#clusterProcessGossipSection
if (sender && nodeIsMaster(sender) && node != myself) {
    if (flags & (CLUSTER_NODE_FAIL|CLUSTER_NODE_PFAIL)) {
        if (clusterNodeAddFailureReport(node,sender)) {
            serverLog(LL_VERBOSE,
                "Node %.40s reported node %.40s as not reachable.",
                sender->name, node->name);
        }
        markNodeAsFailingIfNeeded(node);
    } else {
        if (clusterNodeDelFailureReport(node,sender)) {
            serverLog(LL_VERBOSE,
                "Node %.40s reported node %.40s is back online.",
                sender->name, node->name);
        }
    }
}

The number of nodes judging the target as failed must reach (N/2 + 1). markNodeAsFailingIfNeeded counts the pfail reports stored in the node's fail_reports (adding itself if the current node is a master); once a majority agrees, the node is flagged fail. Finally, if the current node is a master, it broadcasts the verdict by sending a CLUSTERMSG_TYPE_FAIL message to all other nodes.

//cluster.c#markNodeAsFailingIfNeeded
void markNodeAsFailingIfNeeded(clusterNode *node) {
    int failures;
    int needed_quorum = (server.cluster->size / 2) + 1;

    if (!nodeTimedOut(node)) return; /* We can reach it. */
    if (nodeFailed(node)) return; /* Already FAILing. */

    failures = clusterNodeFailureReportsCount(node);
    /* Also count myself as a voter if I'm a master. */
    if (nodeIsMaster(myself)) failures++;
    if (failures < needed_quorum) return; /* No weak agreement from masters. */

    serverLog(LL_NOTICE,
        "Marking node %.40s as failing (quorum reached).", node->name);

    /* Mark the node as failing. */
    node->flags &= ~CLUSTER_NODE_PFAIL;
    node->flags |= CLUSTER_NODE_FAIL;
    node->fail_time = mstime();

    /* Broadcast the failing node name to everybody, forcing all the other
     * reachable nodes to flag the node as FAIL. */
    if (nodeIsMaster(myself)) clusterSendFail(node->name);
    clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE|CLUSTER_TODO_SAVE_CONFIG);
}

A node's fail_reports list records other nodes' pfail judgments about it. Each call to clusterNodeFailureReportsCount first prunes the list: a report older than 2× cluster_node_timeout is considered stale and is removed, which protects against spurious failure reports. The flip side is that if reports from more than half of the slot-owning masters cannot be gathered within cluster_node_timeout * 2, older reports keep expiring, the failed node can never be marked objectively down, and the failover never takes place.

//cluster.c#clusterNodeCleanupFailureReports
void clusterNodeCleanupFailureReports(clusterNode *node) {
    list *l = node->fail_reports;
    listNode *ln;
    listIter li;
    clusterNodeFailReport *fr;
    mstime_t maxtime = server.cluster_node_timeout *
                     CLUSTER_FAIL_REPORT_VALIDITY_MULT;
    mstime_t now = mstime();

    listRewind(l,&li);
    while ((ln = listNext(&li)) != NULL) {
        fr = ln->value;
        if (now - fr->time > maxtime) listDelNode(l,ln);
    }
}
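
To see how this 2× cluster_node_timeout validity window can defeat quorum, here is a small standalone sketch (illustrative only, not Redis source; the timestamps are invented). With the default 15 s node timeout, a report that arrived 31 s ago has already expired, so two masters whose reports arrive 31 s apart never count toward the quorum together:

#include <stdio.h>

int main(void) {
    long long node_timeout = 15000;            /* ms, the default */
    long long validity = node_timeout * 2;     /* CLUSTER_FAIL_REPORT_VALIDITY_MULT = 2 */
    long long now = 100000;                    /* an arbitrary "current" time, in ms */
    long long report_times[] = {69000, 95000}; /* hypothetical report arrival times */
    int valid = 0;
    for (int i = 0; i < 2; i++)
        if (now - report_times[i] <= validity) valid++;
    printf("valid reports: %d of 2\n", valid); /* 1: the 31 s old report expired */
    return 0;
}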

When the other nodes receive the broadcast, they look up the node by the nodename carried in the message; once found, that node is flagged fail and the fail time is recorded.

//cluster.c#clusterProcessPacket
...
failing = clusterLookupNode(hdr->data.fail.about.nodename);
if (failing &&
    !(failing->flags & (CLUSTER_NODE_FAIL|CLUSTER_NODE_MYSELF)))
{
    serverLog(LL_NOTICE,
        "FAIL message received from %.40s about %.40s",
        hdr->sender, hdr->data.fail.about.nodename);
    failing->flags |= CLUSTER_NODE_FAIL;
    failing->fail_time = now;
    failing->flags &= ~CLUSTER_NODE_PFAIL;
    clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|
                         CLUSTER_TODO_UPDATE_STATE);
}

2. Failure Recovery

When the failed node is a master in the fail state, one of its replicas must be promoted in its place to keep the cluster highly available. All replicas of the downed master share this recovery duty: when a replica's periodic task notices that the master it replicates is fail, the recovery flow is triggered.

The recovery flow:

1) Eligibility check

2) Schedule the election

3) Start the election

4) Vote

5) Replace the master

2.1 Eligibility Check

Each replica checks the time of its last interaction with the master (or how long it has been disconnected) to decide whether it may replace the failed master. If that age exceeds server.cluster_node_timeout * server.cluster_slave_validity_factor + server.repl_ping_slave_period * 1000, the replica is not eligible for failover.

cluster_slave_validity_factor defaults to 10, repl_ping_slave_period defaults to 10 (seconds), and cluster_node_timeout defaults to 15 s, so by default a replica whose last interaction with its master is more than 160 s old is disqualified from failover.

//cluster.c#clusterHandleSlaveFailover
if (server.repl_state == REPL_STATE_CONNECTED) {
    data_age = (mstime_t)(server.unixtime - server.master->lastinteraction)
               * 1000;
} else {
    data_age = (mstime_t)(server.unixtime - server.repl_down_since) * 1000;
}

/* Subtract the node timeout from the data age: the replica could not
 * receive anything while its master was already unreachable. */
if (data_age > server.cluster_node_timeout)
    data_age -= server.cluster_node_timeout;

if (server.cluster_slave_validity_factor &&
    data_age >
    (((mstime_t)server.repl_ping_slave_period * 1000) +
     (server.cluster_node_timeout * server.cluster_slave_validity_factor)))
{
    if (!manual_failover) {
        clusterLogCantFailover(CLUSTER_CANT_FAILOVER_DATA_AGE);
        return;
    }
}
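
As a sanity check on the 160 s figure, here is a minimal standalone sketch of the same arithmetic (illustrative only, not Redis source; the three values are just the documented defaults):

#include <stdio.h>

int main(void) {
    long long cluster_node_timeout = 15000; /* default 15 s, in ms */
    int slave_validity_factor = 10;         /* default */
    int repl_ping_slave_period = 10;        /* default, in seconds */

    /* The eligibility threshold from clusterHandleSlaveFailover. */
    long long max_data_age = (long long)repl_ping_slave_period * 1000 +
                             cluster_node_timeout * slave_validity_factor;
    printf("max data age: %lld ms (%lld s)\n",
           max_data_age, max_data_age / 1000); /* 160000 ms = 160 s */
    return 0;
}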

2.2 Scheduling the Election

Once a replica qualifies, it updates the time at which the failover election is allowed to start; the remaining steps run only after that time arrives. This deliberate delay gives different replicas different election start times, encoding a priority: the larger a replica's replication offset, the smaller its lag, and the higher its priority to replace the failed master.

auth_age is the current time minus failover_auth_time (0 on the first pass). auth_retry_time works out to 4× cluster_node_timeout (auth_timeout is twice the node timeout, floored at 2000 ms, and the retry time is twice that). failover_auth_time is then set to now + 500 ms + random() % 500 ms. The node also computes its rank among the master's replicas by replication offset and adds rank * 1000 ms to failover_auth_time, so replicas with larger offsets get to act earlier.

For a manual failover, failover_auth_time is simply set to the current time.

Finally, the node broadcasts its replication offset to the master's other replicas.

//cluster.c#clusterHandleSlaveFailover
mstime_t auth_age = mstime() - server.cluster->failover_auth_time;
...
auth_timeout = server.cluster_node_timeout*2;
if (auth_timeout < 2000) auth_timeout = 2000;
auth_retry_time = auth_timeout*2;
...
if (auth_age > auth_retry_time) {
    server.cluster->failover_auth_time = mstime() +
        500 + /* Fixed delay of 500 milliseconds, let FAIL msg propagate. */
        random() % 500; /* Random delay between 0 and 500 milliseconds. */
    server.cluster->failover_auth_count = 0;
    server.cluster->failover_auth_sent = 0;
    server.cluster->failover_auth_rank = clusterGetSlaveRank();
    /* We add another delay that is proportional to the slave rank.
     * Specifically 1 second * rank. This way slaves that have a probably
     * less updated replication offset, are penalized. */
    server.cluster->failover_auth_time +=
        server.cluster->failover_auth_rank * 1000;
    /* However if this is a manual failover, no delay is needed. */
    if (server.cluster->mf_end) {
        server.cluster->failover_auth_time = mstime();
        server.cluster->failover_auth_rank = 0;
        clusterDoBeforeSleep(CLUSTER_TODO_HANDLE_FAILOVER);
    }
    serverLog(LL_WARNING,
        "Start of election delayed for %lld milliseconds "
        "(rank #%d, offset %lld).",
        server.cluster->failover_auth_time - mstime(),
        server.cluster->failover_auth_rank,
        replicationGetSlaveOffset());
    /* Now that we have a scheduled election, broadcast our offset
     * to all the other slaves so that they'll updated their offsets
     * if our offset is better. */
    clusterBroadcastPong(CLUSTER_BROADCAST_LOCAL_SLAVES);
    return;
}

clusterGetSlaveRank reads the current node's replication offset, then walks the master's other replicas and increments the rank for every sibling whose offset is larger. Rank 0 therefore means the most up-to-date replica.

//cluster.c#clusterGetSlaveRank
int clusterGetSlaveRank(void) {
    long long myoffset;
    int j, rank = 0;
    clusterNode *master;

    serverAssert(nodeIsSlave(myself));
    master = myself->slaveof;
    if (master == NULL) return 0; /* Never called by slaves without master. */

    myoffset = replicationGetSlaveOffset();
    for (j = 0; j < master->numslaves; j++)
        if (master->slaves[j] != myself &&
            !nodeCantFailover(master->slaves[j]) &&
            master->slaves[j]->repl_offset > myoffset) rank++;
    return rank;
}
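
To make the rank-to-delay mapping concrete, here is a minimal standalone sketch (illustrative only, not Redis source; the offsets are invented) that ranks three hypothetical replicas the same way and derives each one's election delay from the formula above:

#include <stdio.h>
#include <stdlib.h>

/* Rank = number of sibling replicas with a strictly larger replication
 * offset, mirroring the loop in clusterGetSlaveRank. */
static int rank_of(long long mine, const long long *offsets, int n) {
    int rank = 0;
    for (int i = 0; i < n; i++)
        if (offsets[i] > mine) rank++; /* comparing self to self adds nothing */
    return rank;
}

int main(void) {
    long long offsets[] = {5000, 4200, 4900}; /* hypothetical replica offsets */
    int n = 3;
    for (int i = 0; i < n; i++) {
        int rank = rank_of(offsets[i], offsets, n);
        /* Delay formula from clusterHandleSlaveFailover:
         * 500 ms fixed + up to 500 ms random + 1000 ms per rank. */
        long long delay = 500 + rand() % 500 + (long long)rank * 1000;
        printf("replica %d: offset=%lld rank=%d delay=%lld ms\n",
               i, offsets[i], rank, delay);
    }
    return 0;
}

Replica 0 (offset 5000) gets rank 0 and the shortest delay, so the most up-to-date replica is the most likely to win the election.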

While waiting for the delay to expire, the rank is recomputed. If the node's rank has slipped, the corresponding extra delay is added.

//cluster.c#clusterHandleSlaveFailover
if (server.cluster->failover_auth_sent == 0 &&
    server.cluster->mf_end == 0)
{
    int newrank = clusterGetSlaveRank();
    if (newrank > server.cluster->failover_auth_rank) {
        long long added_delay =
            (newrank - server.cluster->failover_auth_rank) * 1000;
        server.cluster->failover_auth_time += added_delay;
        server.cluster->failover_auth_rank = newrank;
        serverLog(LL_WARNING,
            "Replica rank updated to #%d, added %lld milliseconds of delay.",
            newrank, added_delay);
    }
}

If the current time is still earlier than failover_auth_time, the function simply returns.

//cluster.c#clusterHandleSlaveFailover
if (mstime() < server.cluster->failover_auth_time) {
    clusterLogCantFailover(CLUSTER_CANT_FAILOVER_WAITING_DELAY);
    return;
}

2.3 Starting the Election

When a replica reaches failover_auth_time, it starts the election, which does two things:

1) Increment the configuration epoch.

2) Broadcast the election request (FAILOVER_AUTH_REQUEST) cluster-wide and record that it was sent, ensuring the replica asks for votes at most once per configuration epoch.

//cluster.c#clusterHandleSlaveFailover
if (server.cluster->failover_auth_sent == 0) {
    server.cluster->currentEpoch++;
    server.cluster->failover_auth_epoch = server.cluster->currentEpoch;
    serverLog(LL_WARNING,"Starting a failover election for epoch %llu.",
        (unsigned long long) server.cluster->currentEpoch);
    clusterRequestFailoverAuth();
    server.cluster->failover_auth_sent = 1;
    clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|
                         CLUSTER_TODO_UPDATE_STATE|
                         CLUSTER_TODO_FSYNC_CONFIG);
    return; /* Wait for replies. */
}

2.4 Voting

1) Only slot-owning masters take part in the failover vote.

2) If the request's configuration epoch is smaller than the voter's current epoch, no vote is cast.

3) There is exactly one vote per configuration epoch; once a master has voted in an epoch, election requests from other replicas in that same epoch are ignored.

4) If the candidate does not collect enough votes within cluster_node_timeout * 2 of starting, the round is abandoned; correspondingly, a master will not vote again for a replica of the same master within that window.

//cluster.c#clusterSendFailoverAuthIfNeeded
if (nodeIsSlave(myself) || myself->numslots == 0) return;
if (requestCurrentEpoch < server.cluster->currentEpoch) {
    return;
}
if (server.cluster->lastVoteEpoch == server.cluster->currentEpoch) {
    return;
}
if (mstime() - node->slaveof->voted_time < server.cluster_node_timeout * 2) {
    return;
}

If all of these checks pass, the master votes for the replica: it records the epoch it voted in, records the vote time, and replies to the replica.

//cluster.c#clusterSendFailoverAuthIfNeeded
server.cluster->lastVoteEpoch = server.cluster->currentEpoch;
node->slaveof->voted_time = mstime();
clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|CLUSTER_TODO_FSYNC_CONFIG);
clusterSendFailoverAuth(node);

When the replica receives the reply, it checks that the responder is a master, that it owns slots, and that the epoch is recent enough. If so, failover_auth_count is incremented.

//cluster.c#clusterProcessPacket
if (nodeIsMaster(sender) && sender->numslots > 0 &&
    senderCurrentEpoch >= server.cluster->failover_auth_epoch)
{
    server.cluster->failover_auth_count++;
    /* Maybe we reached a quorum here, set a flag to make sure
     * we check ASAP. */
    clusterDoBeforeSleep(CLUSTER_TODO_HANDLE_FAILOVER);
}

Once failover_auth_count passes half of the masters (the quorum of (N/2)+1), the replica may replace its master; it calls clusterFailoverReplaceYourMaster.

//cluster.c#clusterHandleSlaveFailover
int needed_quorum = (server.cluster->size / 2) + 1;
...
if (server.cluster->failover_auth_count >= needed_quorum) {
    /* We have the quorum, we can finally failover the master. */
    serverLog(LL_WARNING,
        "Failover election won: I'm the new master.");
    /* Update my configEpoch to the epoch of the election. */
    if (myself->configEpoch < server.cluster->failover_auth_epoch) {
        myself->configEpoch = server.cluster->failover_auth_epoch;
        serverLog(LL_WARNING,
            "configEpoch set to %llu after successful failover",
            (unsigned long long) myself->configEpoch);
    }
    /* Take responsibility for the cluster slots. */
    clusterFailoverReplaceYourMaster();
} else {
    clusterLogCantFailover(CLUSTER_CANT_FAILOVER_WAITING_VOTES);
}

2.5 Replacing the Master

Once the replica has gathered enough votes, the replacement is triggered:

1) The replica stops replicating and becomes a master.

2) It runs clusterDelSlot to revoke the slots the failed master owned and assigns them to itself.

3) It broadcasts its PONG message cluster-wide, announcing that it has become a master and has taken over the failed master's slots.

1) After a few sanity checks, the node flips its own entry in the cluster state to master: it adjusts the flags and removes itself from the old master's replica list (cluster.c#clusterSetNodeAsMaster), then detaches from the old master on the replication side, tearing down the connection state (replication.c#replicationUnsetMaster).

//cluster.c#clusterFailoverReplaceYourMaster
clusterNode *oldmaster = myself->slaveof;
if (nodeIsMaster(myself) || oldmaster == NULL) return;
/* 1) Turn this node into a master. */
clusterSetNodeAsMaster(myself);
replicationUnsetMaster();

2) The slots the old master was responsible for are then re-assigned to the current node.

//cluster.c#clusterFailoverReplaceYourMaster
for (j = 0; j < CLUSTER_SLOTS; j++) {
    if (clusterNodeGetSlotBit(oldmaster,j)) {
        clusterDelSlot(j);
        clusterAddSlot(myself,j);
    }
}

3) The cluster state is re-evaluated and the configuration is saved.

//cluster.c#clusterFailoverReplaceYourMaster
clusterUpdateState();
clusterSaveConfigOrDie(1);

4) A PONG is broadcast to every node in the cluster.

//cluster.c#clusterFailoverReplaceYourMaster
clusterBroadcastPong(CLUSTER_BROADCAST_ALL);

5) If this was a manual failover, the manual-failover flags are reset.

//cluster.c#clusterFailoverReplaceYourMaster
resetManualFailover();

3. Manual Failover

A manual failover begins when a replica (you run the command on the replica you want promoted) receives CLUSTER FAILOVER [FORCE | TAKEOVER]. With FORCE, the replica does not check that its replication offset matches the master's. With TAKEOVER, no cluster-wide consensus is required at all: the replica promotes itself immediately. With no argument you get the ordinary manual failover, which is what the rest of this section covers.

1) On receiving the command, the replica sends an MFSTART packet to its master, announcing that a manual switchover is about to begin. In the code below, clusterSendMFStart sends the MFSTART message to the master.

//cluster.c#clusterCommand
server.cluster->mf_end = mstime() + CLUSTER_MF_TIMEOUT;
if (takeover) {
    clusterBumpConfigEpochWithoutConsensus();
    clusterFailoverReplaceYourMaster();
} else if (force) {
    serverLog(LL_WARNING,"Forced failover user request accepted.");
    server.cluster->mf_can_start = 1;
} else {
    serverLog(LL_WARNING,"Manual failover user request accepted.");
    clusterSendMFStart(myself->slaveof);
}

2) When the master receives the message, it pauses execution of all client commands. From then on, the PING packets the master sends in the periodic clusterCron carry a special flag in their header.

//cluster.c#clusterProcessPacket
if (!sender || sender->slaveof != myself) return 1;
resetManualFailover();
server.cluster->mf_end = now + CLUSTER_MF_TIMEOUT;
server.cluster->mf_slave = sender;
pauseClients(now+(CLUSTER_MF_TIMEOUT*CLUSTER_MF_PAUSE_MULT));

The redisServer struct has two relevant fields, clients_paused and clients_pause_end_time. Pausing all client commands means setting clients_paused to 1 and clients_pause_end_time to now + 2 × CLUSTER_MF_TIMEOUT (5 s by default), i.e. clients are paused for 10 s by default.

//networking.c#pauseClients
void pauseClients(mstime_t end) {
    if (!server.clients_paused || end > server.clients_pause_end_time)
        server.clients_pause_end_time = end;
    server.clients_paused = 1;
}
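
A minimal sketch of that pause arithmetic (illustrative only, not Redis source; the constants mirror the defaults CLUSTER_MF_TIMEOUT = 5000 ms and CLUSTER_MF_PAUSE_MULT = 2):

#include <stdio.h>

#define CLUSTER_MF_TIMEOUT 5000 /* ms, default */
#define CLUSTER_MF_PAUSE_MULT 2

int main(void) {
    long long now = 0; /* pretend "now" is t=0 for readability */
    long long pause_end = now + CLUSTER_MF_TIMEOUT * CLUSTER_MF_PAUSE_MULT;
    printf("clients paused until t=%lld ms (%lld s)\n",
           pause_end, pause_end / 1000); /* 10000 ms = 10 s */
    return 0;
}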

While reading client input, command processing is skipped if clients are in the paused state.

//networking.c#processInputBuffer
while(c->qb_pos < sdslen(c->querybuf)) {
	/* Return if clients are paused. */
	if (!(c->flags & CLIENT_SLAVE) && clientsArePaused()) break;
	...
	   if (processCommand(c) == C_OK)
	   ...
}

In clusterCron, if a manual failover is in progress, the master sends a PING to the replica.

//cluster.c#clusterCron
if (server.cluster->mf_end &&
    nodeIsMaster(myself) &&
    server.cluster->mf_slave == node &&
    node->link)
{
    clusterSendPing(node->link, CLUSTERMSG_TYPE_PING);
    continue;
}

The PING header carries the replication offset, and during a manual failover the master also sets the CLUSTERMSG_FLAG0_PAUSED bit in mflags.

//cluster.c#clusterBuildMessageHdr
if (nodeIsSlave(myself))
    offset = replicationGetSlaveOffset();
else
    offset = server.master_repl_offset;
hdr->offset = htonu64(offset);
/* Set the message flags. */
if (nodeIsMaster(myself) && server.cluster->mf_end)
    hdr->mflags[0] |= CLUSTERMSG_FLAG0_PAUSED;

3) When the replica receives the master's PING and sees the special flag, it reads the master's replication offset from the header.

//cluster.c#clusterProcessPacket
if (sender && !nodeInHandshake(sender)) {
    ...
    if (server.cluster->mf_end &&
        nodeIsSlave(myself) &&
        myself->slaveof == sender &&
        hdr->mflags[0] & CLUSTERMSG_FLAG0_PAUSED &&
        server.cluster->mf_master_offset == 0)
    {
        server.cluster->mf_master_offset = sender->repl_offset;
    }
}

4) In its periodic clusterCron (which calls clusterHandleManualFailover), the replica checks whether its own replication offset now equals the master's. When they match, the switchover is allowed to begin (mf_can_start is set to 1). clusterHandleSlaveFailover contains a few manual-failover-specific branches, but the switchover itself proceeds much like the automatic one.

//cluster.c#clusterCron -> cluster.c#clusterHandleManualFailover
void clusterHandleManualFailover(void) {
    /* Return ASAP if no manual failover is in progress. */
    if (server.cluster->mf_end == 0) return;

    /* If mf_can_start is non-zero, the failover was already triggered so the
     * next steps are performed by clusterHandleSlaveFailover(). */
    if (server.cluster->mf_can_start) return;

    if (server.cluster->mf_master_offset == 0) return; /* Wait for offset... */

    if (server.cluster->mf_master_offset == replicationGetSlaveOffset()) {
        /* Our replication offset matches the master replication offset
         * announced after clients were paused. We can start the failover. */
        server.cluster->mf_can_start = 1;
    }
}

5) After the switch completes, the old master releases the paused clients and answers their commands with MOVED redirects (its slots have been migrated away), pointing them at the new master. As this flow shows, a manual switchover loses no data and drops no commands; clients only see a brief pause during the switch.

Note: the code below is only my guess at how this step might be implemented.

After the old master receives the PONG broadcast by the new master, it ends up calling clusterUpdateSlotsConfigWith.

//cluster.c#clusterProcessPacket
if (sender && nodeIsMaster(sender) && dirty_slots)
    clusterUpdateSlotsConfigWith(sender,senderConfigEpoch,hdr->myslots);

Inside clusterUpdateSlotsConfigWith, the node can determine that a new master has appeared and that its current master owns zero slots, so it calls clusterSetMaster.

//cluster.c#clusterUpdateSlotsConfigWith
...
if (newmaster && curmaster->numslots == 0) {
    clusterSetMaster(sender);
}

clusterSetMaster re-points the node at the new master and marks itself as a replica. It then calls resetManualFailover, which clears the manual-failover flags and releases the paused clients; when their commands finally execute, the node finds it no longer owns the slots and redirects them.

void clusterSetMaster(clusterNode *n) {
    serverAssert(n != myself);
    serverAssert(myself->numslots == 0);

    if (nodeIsMaster(myself)) {
        myself->flags &= ~(CLUSTER_NODE_MASTER|CLUSTER_NODE_MIGRATE_TO);
        myself->flags |= CLUSTER_NODE_SLAVE;
        clusterCloseAllSlots();
    } else {
        if (myself->slaveof)
            clusterNodeRemoveSlave(myself->slaveof,myself);
    }
    myself->slaveof = n;
    clusterNodeAddSlave(n,myself);
    replicationSetMaster(n->ip, n->port);
    resetManualFailover();
}

4. References

1) 《Redis开发与运维》 (Redis Development and Operations)

2) 《Redis5源码分析》 (Redis 5 Source Code Analysis)

3) The Redis 5.0 source code
