k8s etcd backup and restore

Stop the api-server (all k8s master nodes)
   # Run on every master node:
   # Stop the api-server by moving its static pod manifest out of the manifests directory

   mkdir -p /root/tpm_api_conf
   mv /etc/kubernetes/manifests/kube-apiserver.yaml /root/tpm_api_conf/
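Before touching etcd it is worth confirming the kubelet has actually torn the static pod down. A minimal sketch of such a check, assuming crictl is installed and talking to the node's default container runtime; adjust the name filter if your pods are named differently.

   # Wait until no kube-apiserver container is left running on this node
   while crictl ps --name kube-apiserver -q | grep -q .; do
     echo "kube-apiserver still running, waiting..."
     sleep 2
   done
   echo "kube-apiserver stopped on $(hostname)"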
etcd backup (run on any one node of the cluster)
   # Take an etcd snapshot

   ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints='https://192.168.1.30:2379' --cacert="/etc/ssl/etcd/ssl/ca.pem" --cert="/etc/ssl/etcd/ssl/admin-ks-master01.pem" --key="/etc/ssl/etcd/ssl/admin-ks-master01-key.pem" snapshot save snapshot_20230928.db
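For routine protection the same snapshot command can be run on a schedule with old copies pruned. A minimal sketch of such a wrapper, assuming the certificate paths above; the script path /root/etcd_backup.sh, the backup directory /data/etcd_backup and the 7-day retention are hypothetical choices.

   # /root/etcd_backup.sh -- nightly etcd snapshot with 7-day retention (hypothetical helper script)
   BACKUP_DIR=/data/etcd_backup
   mkdir -p "${BACKUP_DIR}"
   ETCDCTL_API=3 /usr/local/bin/etcdctl \
     --endpoints='https://192.168.1.30:2379' \
     --cacert="/etc/ssl/etcd/ssl/ca.pem" \
     --cert="/etc/ssl/etcd/ssl/admin-ks-master01.pem" \
     --key="/etc/ssl/etcd/ssl/admin-ks-master01-key.pem" \
     snapshot save "${BACKUP_DIR}/snapshot_$(date +%Y%m%d_%H%M%S).db"
   # Remove snapshots older than 7 days
   find "${BACKUP_DIR}" -name 'snapshot_*.db' -mtime +7 -delete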
Check the snapshot status
   # Inspect the snapshot metadata (hash, revision, total keys, total size)

   ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table snapshot status snapshot_20230928.db
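If the backup runs unattended, the JSON output of the same command can feed a quick sanity check that aborts when the snapshot looks empty. A minimal sketch, assuming python3 is available on the node; the totalKey field name may vary across etcd versions, and the 1000-key threshold is an arbitrary example.

   # Fail if the snapshot holds suspiciously few keys
   KEYS=$(ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=json snapshot status snapshot_20230928.db \
     | python3 -c 'import json,sys; print(json.load(sys.stdin)["totalKey"])')
   if [ "${KEYS}" -lt 1000 ]; then
     echo "snapshot looks too small (${KEYS} keys), aborting" >&2
     exit 1
   fi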
Stop the etcd service and back up the data directory (all etcd nodes)
   # Stop etcd:

   systemctl stop etcd
   mv /var/lib/etcd/ /root/etcd_bak
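Before restoring, confirm that every member has really stopped and that the old data directory is out of the way, since a restore into a non-empty /var/lib/etcd will fail. A minimal sketch of such a check, assuming passwordless ssh as root from your working node to the three members listed above.

   # Check that etcd is inactive and the data directory has been moved on every member
   for ip in 192.168.1.30 192.168.1.31 192.168.1.32; do
     echo "== ${ip} =="
     ssh root@"${ip}" 'systemctl is-active etcd; ls -ld /var/lib/etcd 2>/dev/null || echo "data dir moved"'
   done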
Restore data on each etcd node
   # Copy /root/snapshot_20230928.db to every etcd node first, then run the matching restore command on each node.
   # Node 30 (192.168.1.30):

   ETCDCTL_API=3 /usr/local/bin/etcdctl snapshot restore /root/snapshot_20230928.db \
    --name etcd-ks-master01 \
    --cert="/etc/ssl/etcd/ssl/admin-ks-master01.pem" \
    --key="/etc/ssl/etcd/ssl/admin-ks-master01-key.pem" \
    --cacert="/etc/ssl/etcd/ssl/ca.pem" \
    --endpoints="https://127.0.0.1:2379" \
    --initial-advertise-peer-urls="https://192.168.1.30:2380" \
    --initial-cluster="etcd-ks-master01=https://192.168.1.30:2380,etcd-ks-master02=https://192.168.1.31:2380,etcd-ks-master03=https://192.168.1.32:2380" \
    --data-dir=/var/lib/etcd

   # Node 31 (192.168.1.31):

   ETCDCTL_API=3 /usr/local/bin/etcdctl snapshot restore /root/snapshot_20230928.db \
    --name etcd-ks-master02 \
    --cert="/etc/ssl/etcd/ssl/admin-ks-master02.pem" \
    --key="/etc/ssl/etcd/ssl/admin-ks-master02-key.pem" \
    --cacert="/etc/ssl/etcd/ssl/ca.pem" \
    --endpoints="https://127.0.0.1:2379" \
    --initial-advertise-peer-urls="https://192.168.1.31:2380" \
    --initial-cluster="etcd-ks-master01=https://192.168.1.30:2380,etcd-ks-master02=https://192.168.1.31:2380,etcd-ks-master03=https://192.168.1.32:2380" \
    --data-dir=/var/lib/etcd

   # Node 32 (192.168.1.32):

   ETCDCTL_API=3 /usr/local/bin/etcdctl snapshot restore /root/snapshot_20230928.db \
    --name etcd-ks-master03 \
    --cert="/etc/ssl/etcd/ssl/admin-ks-master03.pem" \
    --key="/etc/ssl/etcd/ssl/admin-ks-master03-key.pem" \
    --cacert="/etc/ssl/etcd/ssl/ca.pem" \
    --endpoints="https://127.0.0.1:2379" \
    --initial-advertise-peer-urls="https://192.168.1.32:2380" \
    --initial-cluster="etcd-ks-master01=https://192.168.1.30:2380,etcd-ks-master02=https://192.168.1.31:2380,etcd-ks-master03=https://192.168.1.32:2380" \
    --data-dir=/var/lib/etcd
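The three restore commands differ only in the member name, certificate pair and peer URL, so a small variable-driven wrapper can cut down on copy-paste mistakes. A minimal sketch of such a hypothetical script, run locally on each member, assuming the same certificate layout as above.

   # restore_etcd.sh <member-name> <node-ip> -- hypothetical wrapper, run locally on each member
   NAME="$1"   # e.g. etcd-ks-master01
   IP="$2"     # e.g. 192.168.1.30
   ETCDCTL_API=3 /usr/local/bin/etcdctl snapshot restore /root/snapshot_20230928.db \
    --name "${NAME}" \
    --cert="/etc/ssl/etcd/ssl/admin-${NAME#etcd-}.pem" \
    --key="/etc/ssl/etcd/ssl/admin-${NAME#etcd-}-key.pem" \
    --cacert="/etc/ssl/etcd/ssl/ca.pem" \
    --initial-advertise-peer-urls="https://${IP}:2380" \
    --initial-cluster="etcd-ks-master01=https://192.168.1.30:2380,etcd-ks-master02=https://192.168.1.31:2380,etcd-ks-master03=https://192.168.1.32:2380" \
    --data-dir=/var/lib/etcd
   # Usage: bash restore_etcd.sh etcd-ks-master01 192.168.1.30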
Fix the data directory ownership and start etcd (all etcd nodes)
   # Run on every etcd node:
   # Restore the ownership expected by the etcd service
   chown -R etcd:root /var/lib/etcd/

   # Start etcd
   systemctl start etcd
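Once all three members are up, the cluster should elect a leader and report healthy endpoints. A minimal sketch of a post-restore check, assuming it runs on ks-master01 with the same admin certificates used for the backup; the table output flag may require a recent etcdctl.

   # Verify cluster health and membership after the restore
   ETCDCTL_API=3 /usr/local/bin/etcdctl \
     --endpoints='https://192.168.1.30:2379,https://192.168.1.31:2379,https://192.168.1.32:2379' \
     --cacert="/etc/ssl/etcd/ssl/ca.pem" \
     --cert="/etc/ssl/etcd/ssl/admin-ks-master01.pem" \
     --key="/etc/ssl/etcd/ssl/admin-ks-master01-key.pem" \
     endpoint health --write-out=table
   ETCDCTL_API=3 /usr/local/bin/etcdctl \
     --endpoints='https://192.168.1.30:2379' \
     --cacert="/etc/ssl/etcd/ssl/ca.pem" \
     --cert="/etc/ssl/etcd/ssl/admin-ks-master01.pem" \
     --key="/etc/ssl/etcd/ssl/admin-ks-master01-key.pem" \
     member list --write-out=table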
Start the api-server (all k8s master nodes)
   # After etcd is up, move the api-server manifest back
   mv /root/tpm_api_conf/kube-apiserver.yaml /etc/kubernetes/manifests/kube-apiserver.yaml
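The kubelet will pick the manifest up and recreate the kube-apiserver static pod within a few seconds. A minimal final verification, assuming kubectl is configured with admin credentials on the master node.

   # Confirm the control plane is serving requests and the state matches the snapshot
   kubectl get nodes
   kubectl get pods -A -o wide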