Recovering a failed etcd node

Check the component status; etcd-2 is reporting unhealthy:

[root@k8s-master1 ~]# kubectl get cs
NAME                 STATUS      MESSAGE                                   ERROR
controller-manager   Healthy     ok
scheduler            Healthy     ok
etcd-1               Healthy     {"health":"true"}
etcd-0               Healthy     {"health":"true"}
etcd-2               Unhealthy   HTTP probe failed with statuscode: 503

Confirm with etcdctl by checking the health of every endpoint:

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.233.91:2379,https://192.168.233.93:2379,https://192.168.233.94:2379" endpoint health --write-out=table

1. Remove the failed etcd node from the cluster.

List the members to find the ID of the failed node:

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.233.91:2379,https://192.168.233.93:2379,https://192.168.233.94:2379" --write-out=table member list

The output lists member IDs such as cf4f326398a30bd2 and 86ec40d44e54cf0a. Remove the failed member by its ID:

/opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.233.91:2379,https://192.168.233.93:2379,https://192.168.233.94:2379" member remove <failed-node-id>

2. On the failed node, clear the stale data directory:

rm -rf /var/lib/etcd/default.etcd/member/

Then edit the etcd configuration file and change the initial cluster state from "new" to "existing":

vim /opt/etcd/cfg/etcd

Before: ETCD_INITIAL_CLUSTER_STATE="new"
After:  ETCD_INITIAL_CLUSTER_STATE="existing"

3. Add the node back to the etcd cluster:

/opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.233.91:2379,https://192.168.233.93:2379,https://192.168.233.94:2379" member add etcd-2 --peer-urls=https://192.168.233.94:2380

4. Restart etcd on the failed node (see the sketch below).
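Step 4 above does not spell out the restart, so here is a minimal sketch, assuming the failed node (192.168.233.94) runs etcd as a systemd service named "etcd" (typical for a binary install under /opt/etcd); adjust the unit name if your setup differs. After the restart, re-run the endpoint health check from the beginning of the post to confirm all three members report healthy.

# On the failed node; assumes etcd is managed by a systemd unit named "etcd"
systemctl daemon-reload
systemctl restart etcd
systemctl status etcd

# From any node, verify that all three endpoints are healthy again
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.233.91:2379,https://192.168.233.93:2379,https://192.168.233.94:2379" endpoint health --write-out=table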
