k8s-node2 NotReady: how to fix a NotReady node?
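
Before removing and rejoining the node, it is worth checking why it went NotReady; very often the kubelet on the node has stopped or cannot reach the API server. A minimal check, run on the master and on k8s-node2 respectively (this assumes the kubelet is managed by systemd, as with a standard kubeadm install):

bash
[root@k8s-master ~]# kubectl describe node k8s-node2        # check the Conditions section for the NotReady reason
[root@k8s-node2 ~]# systemctl status kubelet                # is the kubelet running on the node?
[root@k8s-node2 ~]# journalctl -u kubelet --no-pager -n 50  # recent kubelet logs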

Remove k8s-node2 from the cluster

bash
[root@k8s-master ~]# kubectl delete node k8s-node2
node "k8s-node2" deleted

Reset k8s-node2

Log in to k8s-node2 and run the following command to reset the node, so that it leaves the cluster and returns to its initial state:

bash
[root@k8s-node2 ~]# sudo kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1227 04:47:55.563584    1677 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
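
As the reset output notes, kubeadm reset does not clean up everything. A minimal cleanup sketch on k8s-node2, covering only the pieces the output mentions and only if your cluster actually uses them (the ipvsadm step applies only when kube-proxy runs in IPVS mode):

bash
[root@k8s-node2 ~]# rm -rf /etc/cni/net.d                                                      # leftover CNI configuration
[root@k8s-node2 ~]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X  # flush rules left by kube-proxy/CNI
[root@k8s-node2 ~]# ipvsadm --clear                                                            # only if kube-proxy uses IPVS
[root@k8s-node2 ~]# rm -rf $HOME/.kube/config                                                  # stale kubeconfig, if present on this node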

Rejoin the node

On k8s-master, run the following command to generate the kubeadm join command needed to rejoin the cluster:

bash
[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.58.231:6443 --token dnht7t.ym4jms2ctru89j2z --discovery-token-ca-cert-hash sha256:82bc8471036711f1c3d81b733082935177e773396e8bb9a5d15f2a0bf95b137e 
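
Then run the printed kubeadm join command on k8s-node2 (the token and CA cert hash come from the output above):

bash
[root@k8s-node2 ~]# kubeadm join 192.168.58.231:6443 --token dnht7t.ym4jms2ctru89j2z \
    --discovery-token-ca-cert-hash sha256:82bc8471036711f1c3d81b733082935177e773396e8bb9a5d15f2a0bf95b137e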

Check the node status

bash
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE    VERSION
k8s-master   Ready    control-plane,master   44h    v1.21.10
k8s-node1    Ready    <none>                 44h    v1.21.10
k8s-node2    Ready    <none>                 108s   v1.21.10
[root@k8s-master ~]# kubectl get pod -A 
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-697d846cf4-79hpj   1/1     Running   1          44h
kube-system   calico-node-58ss2                          1/1     Running   1          44h
kube-system   calico-node-gc547                          1/1     Running   1          44h
kube-system   calico-node-hdhxf                          1/1     Running   1          44h
kube-system   coredns-6f6b8cc4f6-5nbb6                   1/1     Running   1          44h
kube-system   coredns-6f6b8cc4f6-q9rhc                   1/1     Running   1          44h
kube-system   etcd-k8s-master                            1/1     Running   1          44h
kube-system   kube-apiserver-k8s-master                  1/1     Running   1          44h
kube-system   kube-controller-manager-k8s-master         1/1     Running   1          44h
kube-system   kube-proxy-7hp6l                           1/1     Running   1          44h
kube-system   kube-proxy-ddhnb                           1/1     Running   1          44h
kube-system   kube-proxy-dwcgd                           1/1     Running   1          44h
kube-system   kube-scheduler-k8s-master                  1/1     Running   1          44h
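
To confirm that workloads are actually being scheduled onto the rejoined node (at minimum the calico-node and kube-proxy DaemonSet pods should be running there), you can filter pods by node name from the master:

bash
[root@k8s-master ~]# kubectl get pods -A -o wide --field-selector spec.nodeName=k8s-node2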