Background
A cross-border e-commerce platform runs 500+ microservices on Kubernetes (v1.23.17) in production. One day the data center lost utility power and the UPS failed to take over in time, causing:
- All 3 nodes of the etcd cluster (v3.5.4) lost power abnormally
- The control-plane nodes could not start kube-apiserver
- Business Pods went into large-scale CrashLoopBackOff
Symptoms
kube-apiserver log errors:
journalctl -u kube-apiserver | grep -C 5 'etcd'
Key lines from the output:
error while dialing: dial tcp 172.21.8.101:2379: connect: connection refused
storage backend: etcd3 client is not responding
Manually checking etcd status:
ETCDCTL_API=3 etcdctl --endpoints=https://172.21.8.101:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key endpoint status
Returned error:
{"code":14,"message":"transport: authentication handshake failed: x509: certificate has expired or is not yet valid"}
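Before concluding anything from this handshake error, the etcd server certificate can be inspected directly with openssl; an x509 "expired or is not yet valid" error can also come from a wrong system clock after a hard power loss. The following is a small sketch (not part of the original runbook) using the same kubeadm certificate path as the etcdctl command above:

```shell
# check_cert: print a certificate's validity window and report whether it
# is currently valid according to this machine's clock.
check_cert() {
    cert=$1
    # Show the notBefore/notAfter window
    openssl x509 -noout -dates -in "$cert" || return 1
    # -checkend 0 exits 0 only if the certificate is not yet expired
    if openssl x509 -checkend 0 -noout -in "$cert" >/dev/null; then
        echo "certificate is still valid"
    else
        echo "certificate has EXPIRED (or the system clock is wrong)"
    fi
}

# Path below is the kubeadm default used by the etcdctl commands above
if [ -f /etc/kubernetes/pki/etcd/server.crt ]; then
    check_cert /etc/kubernetes/pki/etcd/server.crt
fi
```

If the validity window looks correct, comparing `date` output against a known-good clock is the next step, since NTP sync may not yet have recovered after the outage.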
Failure Analysis
- Certificate damage: the abnormal etcd termination left certificate state inconsistent (hence the x509 handshake failure above)
- Data corruption: interrupted WAL writes caused data-page checksum failures
- Loss of quorum: with all three nodes down at once, the cluster could not re-establish a quorum
Recovery Plan
Steps:
Phase 1: Data Recovery
- Locate the latest valid snapshot (taken 5 hours before the outage):
# Find a valid backup (the ops team takes two snapshots per day)
ls -lh /backup/etcd/
-rw-r----- 1 root root 2.1G Mar 15 04:00 etcd-snapshot-20240315.db
-rw-r----- 1 root root 2.1G Mar 15 16:00 etcd-snapshot-20240315-2.db
# Verify the integrity of the 16:00 snapshot (the one from 5 hours before the outage)
ETCDCTL_API=3 etcdctl --write-out=table snapshot status \
/backup/etcd/etcd-snapshot-20240315-2.db
- Restore the full etcd cluster:
# Stop etcd on all nodes
systemctl stop etcd
# Remove the corrupted data directory
rm -rf /var/lib/etcd/member/
# Run the restore on every node; --name and --initial-advertise-peer-urls
# must be set to each node's own values (etcd1 shown here)
ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd/etcd-snapshot-20240315-2.db \
--name etcd1 \
--data-dir /var/lib/etcd/new \
--initial-cluster "etcd1=https://172.21.8.101:2380,etcd2=https://172.21.8.102:2380,etcd3=https://172.21.8.103:2380" \
--initial-cluster-token k8s-etcd-cluster \
--initial-advertise-peer-urls https://172.21.8.101:2380
# Move the restored data into place
mv /var/lib/etcd/new/member /var/lib/etcd/
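Since the restore flags differ per node, a dry-run helper (a sketch, not part of the original runbook) that prints the exact command for each member lets the operator review every flag before anything destructive is executed:

```shell
# print_restore_cmds: emit the per-node etcdctl restore invocation.
# Node names and IPs are the ones from the cluster above; nothing is executed.
print_restore_cmds() {
    snapshot=/backup/etcd/etcd-snapshot-20240315-2.db
    cluster="etcd1=https://172.21.8.101:2380,etcd2=https://172.21.8.102:2380,etcd3=https://172.21.8.103:2380"
    for node in etcd1=172.21.8.101 etcd2=172.21.8.102 etcd3=172.21.8.103; do
        name=${node%%=*}   # member name (before the '=')
        ip=${node#*=}      # member IP (after the '=')
        echo "ETCDCTL_API=3 etcdctl snapshot restore $snapshot \\"
        echo "  --name $name \\"
        echo "  --data-dir /var/lib/etcd/new \\"
        echo "  --initial-cluster \"$cluster\" \\"
        echo "  --initial-cluster-token k8s-etcd-cluster \\"
        echo "  --initial-advertise-peer-urls https://$ip:2380"
    done
}

print_restore_cmds
```

Note that `etcdctl snapshot restore` validates that `--name` appears in `--initial-cluster` and that the advertised peer URL matches that entry, so copy-pasting the etcd1 command unmodified onto etcd2 or etcd3 would fail.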
Phase 2: Cluster Restart
# Start etcd on all nodes and follow the logs
systemctl start etcd && journalctl -u etcd -f
# Verify cluster health (pass the same --endpoints/--cacert/--cert/--key flags as above)
ETCDCTL_API=3 etcdctl endpoint health --cluster
Phase 3: Kubernetes Component Recovery
# Restart the control-plane components
systemctl restart kube-apiserver kube-controller-manager kube-scheduler
# Verify node and Pod status
kubectl get nodes -o wide
kubectl get pods --all-namespaces
Data-Consistency Safeguards
How the 5-hour data gap was handled:
- Business-layer compensation:
  - Recover transaction data from the MySQL binlog (last_commit=0x4a3f2c)
  - Replay Redis AOF logs to restore session state
  - Redeliver un-ACKed messages from the message queue
- Infrastructure hardening:
# Snapshot every 30 minutes (each etcd snapshot is a full copy, despite the
# "incr" filename; also note that % must be escaped in crontab entries)
crontab -e
*/30 * * * * etcdctl snapshot save /backup/etcd/incr-snapshot-$(date +\%Y\%m\%d-\%H\%M).db
- Power protection:
  - Deploy an APC Smart-UPS 3000VA
  - Configure NUT (Network UPS Tools) monitoring:
apt install nut -y
vim /etc/nut/upsmon.conf
MONITOR [email protected] 1 monuser secret master
Recovery Results
- Full business recovery took 2 hours 38 minutes
- Order data loss was 0.12% (recovered through the compensation mechanisms above)
- etcd cluster P99 write latency dropped 15% (thanks to defragmentation)
Lessons Learned
- Backup strategy: follow the 3-2-1 rule (3 copies, 2 media types, 1 offline)
- Putting the 3-2-1 rule into practice:
# Multi-media backup example (local disk + object storage + tape)
aws s3 cp /backup/etcd/ s3://k8s-prod-backup/etcd/ --recursive --storage-class DEEP_ARCHIVE
ltfs -o devname=/dev/nst0 /mnt/tape && cp /backup/etcd/*.db /mnt/tape/
  - 3 copies: local SSD, AWS S3 Glacier, LTO-8 tape
  - 2 media types: electronic (cloud storage) + physical (tape)
  - 1 offline: tapes swapped weekly by hand and moved to an explosion-proof safe
- Backup lifecycle management:
# Automatically prune old backups (retention policy)
find /backup/etcd/ -name "*.db" -mtime +30 -exec rm -vf {} \;
aws s3 ls s3://k8s-prod-backup/etcd/ | awk '{print $4}' | sort -r | tail -n +30 | xargs -I {} aws s3 rm s3://k8s-prod-backup/etcd/{}
  - Hot backups: kept 7 days
  - Cold backups: kept 30 days
  - Archive backups: kept 5 years
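The `aws s3 ls | tail | xargs rm` pruning above is racy and depends on lexically sortable filenames; the same 7/30-day/5-year tiers can instead be enforced server-side with an S3 lifecycle configuration. A sketch, assuming the `k8s-prod-backup` bucket and `etcd/` prefix from the examples above:

```shell
# Generate an S3 lifecycle policy matching the retention tiers above:
# 7 days hot, then GLACIER; 30 days, then DEEP_ARCHIVE; expire at 5 years.
cat > /tmp/etcd-backup-lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "etcd-backup-retention",
      "Filter": {"Prefix": "etcd/"},
      "Status": "Enabled",
      "Transitions": [
        {"Days": 7,  "StorageClass": "GLACIER"},
        {"Days": 30, "StorageClass": "DEEP_ARCHIVE"}
      ],
      "Expiration": {"Days": 1825}
    }
  ]
}
EOF

# Applying it needs AWS credentials, so only attempt when the CLI exists
if command -v aws >/dev/null 2>&1; then
    aws s3api put-bucket-lifecycle-configuration \
        --bucket k8s-prod-backup \
        --lifecycle-configuration file:///tmp/etcd-backup-lifecycle.json \
        || echo "lifecycle apply failed (missing credentials?)"
fi
```

Once applied, S3 transitions and expires objects itself, so the only client-side pruning left is the local-disk `find` above.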
- Power protection: size UPS capacity at 150% of the actual load
- Data validation: after every backup, verify the snapshot with
etcdctl snapshot status
- Monitoring coverage: add an alert on the etcd_disk_wal_fsync_duration_seconds metric
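A Prometheus alerting rule for that WAL-fsync metric might look like the following sketch (the rule name, 10ms P99 threshold, `for` duration, and file path are assumptions, not from the incident; the metric name is the one etcd actually exports):

```shell
# Write a Prometheus rule that fires when etcd WAL fsyncs get slow,
# which is an early warning for the disk problems seen in this incident.
cat > /tmp/etcd-wal-alert.yaml <<'EOF'
groups:
  - name: etcd-disk
    rules:
      - alert: EtcdWalFsyncSlow
        expr: |
          histogram_quantile(0.99,
            rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m])) > 0.01
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "etcd WAL fsync P99 above 10ms (disk pressure)"
EOF

# Validate before deploying, if promtool is installed:
#   promtool check rules /tmp/etcd-wal-alert.yaml
```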
# Key operations recorded during the actual recovery
[operator@k8s-master01 ~]$ etcdctl snapshot restore ...
Members:[{ID:1a2b3c4d Name:etcd1 PeerURLs:[https://172.21.8.101:2380]}...]
Restored cluster ID: 7d89f1a3b5c6d7e
[operator@k8s-master01 ~]# systemctl status etcd
● etcd.service - etcd key-value store
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled;)
Active: active (running) since Fri 2024-03-15 21:15:03 CST; 5s ago