Background
The colleague who built the cluster did not plan the network, so one master ended up with IP 192.168.7.173 while the other cluster nodes are on 192.168.0.x or 192.168.1.x. The network now needs to be reorganized so that management tasks such as NIC bonding and rate limiting are easier to configure.
The control plane has 3 master nodes. This post is an after-the-fact record.
Note: it was not as simple as expected.
Approach
- Remove the master1 node from etcd
- Remove the master1 node from the cluster with kubectl
- Change the IP address of the master1 node
  (Because there is no reverse proxy such as haproxy or nginx in front of the masters, some places in the cluster use master1's node IP, or master1's IP:6443, as the entry point to the cluster. This is where most of the pitfalls are; problems were dealt with as they came up.)
- Run kubeadm reset on the master1 node and clean up the CNI plugin configuration
- Update the hosts file on all nodes
- Run kubeadm join to add the node back to the cluster
- Verify each component & troubleshoot
Implementation
System version
```bash
root@dev-k8s-master01:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.6 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.6 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
```
Remove master1 from etcd
Use etcdctl to remove from the etcd cluster the master node whose IP is going to change.
```bash
export ETCDCTL_API=3
# List the cluster members
etcdctl --endpoints=https://192.168.7.173:2379,https://192.168.1.17:2379,https://192.168.1.38:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key member list --write-out=table
# Note: "member list" does not show which member is the current etcd leader
# Check endpoint status to find the etcd leader
etcdctl --endpoints=https://192.168.7.173:2379,https://192.168.1.17:2379,https://192.168.1.38:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key endpoint status --write-out=table
```
The output looks like this:
```bash
root@dev-k8s-master03:~# etcdctl --endpoints=https://192.168.1.15:2379,https://192.168.1.17:2379,https://192.168.1.38:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key member list --write-out=table
+------------------+---------+------------------+---------------------------+---------------------------+------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER |
+------------------+---------+------------------+---------------------------+---------------------------+------------+
| 802c824d9f96584b | started | dev-k8s-master03 | https://192.168.1.38:2380 | https://192.168.1.38:2379 | false |
| c3e9ff62e7bd6a70 | started | dev-k8s-master01 | https://192.168.1.15:2380 | https://192.168.1.15:2379 | false |
| ef1d4aa461844a8a | started | dev-k8s-master02 | https://192.168.1.17:2380 | https://192.168.1.17:2379 | false |
+------------------+---------+------------------+---------------------------+---------------------------+------------+
root@dev-k8s-master03:~# etcdctl --endpoints=https://192.168.1.15:2379,https://192.168.1.17:2379,https://192.168.1.38:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key endpoint status --write-out=table
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.1.15:2379 | c3e9ff62e7bd6a70 | 3.5.0 | 718 MB | false | false | 235 | 548780443 | 548780443 | |
| https://192.168.1.17:2379 | ef1d4aa461844a8a | 3.5.0 | 718 MB | false | false | 235 | 548780444 | 548780444 | |
| https://192.168.1.38:2379 | 802c824d9f96584b | 3.5.0 | 718 MB | true | false | 235 | 548780444 | 548780444 | |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
```
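The removal itself was not captured at the time; it is done with `etcdctl member remove`, run from one of the surviving masters, using master1's member ID as reported by `member list`. A minimal sketch (the ID placeholder below is illustrative, not the real one from this cluster):

```bash
export ETCDCTL_API=3
# <MASTER1_MEMBER_ID> is the hexadecimal ID shown for dev-k8s-master01 in "member list"
etcdctl --endpoints=https://192.168.1.17:2379,https://192.168.1.38:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member remove <MASTER1_MEMBER_ID>
# Afterwards, "member list" should show only the two remaining members
```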
Remove master1 from the cluster with kubectl
```bash
# Check the current node list
kubectl get node -o wide
# Cordon master1, then remove it from the cluster (node names here match the prompts above)
kubectl cordon dev-k8s-master01
kubectl delete node dev-k8s-master01
```
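If workloads are still scheduled on the node, it is common to drain it before deleting; a hedged extra step that was not part of the original procedure:

```bash
# Optional: evict pods first (a control-plane-only node usually runs little besides DaemonSets)
kubectl drain dev-k8s-master01 --ignore-daemonsets --delete-emptydir-data
```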
kubeadm reset
```bash
root@dev-k8s-master01:~# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0524 17:45:02.703143 6814 reset.go:101] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://192.168.7.173:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 192.168.7.173:6443: connect: no route to host
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0524 17:45:22.610779 6814 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
```
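As the output above points out, `kubeadm reset` leaves the old kubeconfig behind; removing it on master1 avoids pointing kubectl at the dead endpoint later (a small addition, not in the original notes):

```bash
rm -f $HOME/.kube/config   # the new admin.conf is copied back after the node rejoins
```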
Clean up the CNI plugin & iptables rules
```bash
root@dev-k8s-master01:~# mv /etc/cni/net.d /tmp/
root@dev-k8s-master01:~# iptables-save > /tmp/iptables.bak
root@dev-k8s-master01:~# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -t raw -F && iptables -t security -F && iptables -X && su
# Verify
iptables -vnL
```
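The actual IP change and the "update hosts on all nodes" step from the plan are not shown in the original notes. On Ubuntu 18.04 the address is normally changed with netplan; a sketch, assuming interface ens18 and the new address 192.168.1.15/21 (both taken from the kubeadm join log below), with the netplan file name, gateway, and DNS as placeholders to adjust:

```bash
# Edit the netplan config for ens18 (file name varies per host)
cat >/etc/netplan/01-netcfg.yaml <<'EOF'
network:
  version: 2
  ethernets:
    ens18:
      addresses: [192.168.1.15/21]
      gateway4: 192.168.0.1          # placeholder: use the real gateway
      nameservers:
        addresses: [192.168.0.1]     # placeholder: use the real DNS servers
EOF
netplan apply

# Plan step "update the hosts file on all nodes": replace the old address everywhere
sed -i 's/192.168.7.173/192.168.1.15/g' /etc/hosts
```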
Generate the certificate key & rejoin the cluster
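The original notes do not show where the token and certificate key in the join command come from; they are normally generated on a surviving control-plane node. A sketch of the usual kubeadm 1.22 commands:

```bash
# On a surviving master: re-upload the control-plane certificates and print a fresh certificate key
kubeadm init phase upload-certs --upload-certs
# Print a join command with a new bootstrap token; append --control-plane and
# --certificate-key <key from the command above> for a control-plane join
kubeadm token create --print-join-command
```

With those values in hand, the join on master1 was attempted as follows: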
```bash
root@dev-k8s-master01:~# kubeadm join 192.168.1.38:6443 --token ngq4b9.vylcwrghfiayv8au --discovery-token-ca-cert-hash sha256:b4c2xxxxx2ab6cf397255ff13c179e --control-plane --certificate-key e2d52bxxxab84edd --v=5
I0524 17:57:55.749888 10646 join.go:405] [preflight] found NodeName empty; using OS hostname as NodeName
I0524 17:57:55.749934 10646 join.go:409] [preflight] found advertiseAddress empty; using default interface's IP address as advertiseAddress
I0524 17:57:55.749974 10646 initconfiguration.go:116] detected and using CRI socket: /var/run/dockershim.sock
I0524 17:57:55.750479 10646 interface.go:431] Looking for default routes with IPv4 addresses
I0524 17:57:55.750548 10646 interface.go:436] Default route transits interface "ens18"
I0524 17:57:55.750709 10646 interface.go:208] Interface ens18 is up
I0524 17:57:55.750784 10646 interface.go:256] Interface "ens18" has 2 addresses :[192.168.1.15/21 fe80::68bf:2bff:feee:6c6e/64].
I0524 17:57:55.750809 10646 interface.go:223] Checking addr 192.168.1.15/21.
I0524 17:57:55.750822 10646 interface.go:230] IP found 192.168.1.15
I0524 17:57:55.750851 10646 interface.go:262] Found valid IPv4 address 192.168.1.15 for interface "ens18".
I0524 17:57:55.750871 10646 interface.go:442] Found active IP 192.168.1.15
[preflight] Running pre-flight checks
I0524 17:57:55.750986 10646 preflight.go:92] [preflight] Running general checks
I0524 17:57:55.751036 10646 checks.go:245] validating the existence and emptiness of directory /etc/kubernetes/manifests
I0524 17:57:55.751096 10646 checks.go:282] validating the existence of file /etc/kubernetes/kubelet.conf
I0524 17:57:55.751107 10646 checks.go:282] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0524 17:57:55.751119 10646 checks.go:106] validating the container runtime
I0524 17:57:55.812507 10646 checks.go:132] validating if the "docker" service is enabled and active
I0524 17:57:55.831211 10646 checks.go:331] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0524 17:57:55.831268 10646 checks.go:331] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0524 17:57:55.831297 10646 checks.go:649] validating whether swap is enabled or not
I0524 17:57:55.831322 10646 checks.go:372] validating the presence of executable conntrack
I0524 17:57:55.831337 10646 checks.go:372] validating the presence of executable ip
I0524 17:57:55.831351 10646 checks.go:372] validating the presence of executable iptables
I0524 17:57:55.831368 10646 checks.go:372] validating the presence of executable mount
I0524 17:57:55.831382 10646 checks.go:372] validating the presence of executable nsenter
I0524 17:57:55.831392 10646 checks.go:372] validating the presence of executable ebtables
I0524 17:57:55.831403 10646 checks.go:372] validating the presence of executable ethtool
I0524 17:57:55.831419 10646 checks.go:372] validating the presence of executable socat
I0524 17:57:55.831430 10646 checks.go:372] validating the presence of executable tc
I0524 17:57:55.831442 10646 checks.go:372] validating the presence of executable touch
I0524 17:57:55.831455 10646 checks.go:520] running all checks
I0524 17:57:55.900002 10646 checks.go:403] checking whether the given node name is valid and reachable using net.LookupHost
I0524 17:57:55.900158 10646 checks.go:618] validating kubelet version
I0524 17:57:55.953966 10646 checks.go:132] validating if the "kubelet" service is enabled and active
I0524 17:57:55.970602 10646 checks.go:205] validating availability of port 10250
I0524 17:57:55.970805 10646 checks.go:432] validating if the connectivity type is via proxy or direct
I0524 17:57:55.970857 10646 join.go:475] [preflight] Discovering cluster-info
I0524 17:57:55.970891 10646 token.go:80] [discovery] Created cluster-info discovery client, requesting info from "192.168.1.38:6443"
I0524 17:57:55.978324 10646 token.go:118] [discovery] Requesting info from "192.168.1.38:6443" again to validate TLS against the pinned public key
I0524 17:57:55.983254 10646 token.go:135] [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.38:6443"
I0524 17:57:55.983268 10646 discovery.go:52] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I0524 17:57:55.983276 10646 join.go:489] [preflight] Fetching init configuration
I0524 17:57:55.983281 10646 join.go:534] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I0524 17:57:55.991118 10646 interface.go:431] Looking for default routes with IPv4 addresses
I0524 17:57:55.991128 10646 interface.go:436] Default route transits interface "ens18"
I0524 17:57:55.991236 10646 interface.go:208] Interface ens18 is up
I0524 17:57:55.991269 10646 interface.go:256] Interface "ens18" has 2 addresses :[192.168.1.15/21 fe80::68bf:2bff:feee:6c6e/64].
I0524 17:57:55.991284 10646 interface.go:223] Checking addr 192.168.1.15/21.
I0524 17:57:55.991288 10646 interface.go:230] IP found 192.168.1.15
I0524 17:57:55.991292 10646 interface.go:262] Found valid IPv4 address 192.168.1.15 for interface "ens18".
I0524 17:57:55.991295 10646 interface.go:442] Found active IP 192.168.1.15
I0524 17:57:55.994214 10646 preflight.go:103] [preflight] Running configuration dependant checks
[preflight] Running pre-flight checks before initializing the new control plane instance
I0524 17:57:55.994260 10646 checks.go:577] validating Kubernetes and kubeadm version
I0524 17:57:55.994304 10646 checks.go:170] validating if the firewall is enabled and active
I0524 17:57:56.001072 10646 checks.go:205] validating availability of port 6443
I0524 17:57:56.001119 10646 checks.go:205] validating availability of port 10259
I0524 17:57:56.001134 10646 checks.go:205] validating availability of port 10257
I0524 17:57:56.001149 10646 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0524 17:57:56.001162 10646 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0524 17:57:56.001170 10646 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0524 17:57:56.001174 10646 checks.go:282] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0524 17:57:56.001178 10646 checks.go:432] validating if the connectivity type is via proxy or direct
I0524 17:57:56.001195 10646 checks.go:471] validating http connectivity to first IP address in the CIDR
I0524 17:57:56.001208 10646 checks.go:471] validating http connectivity to first IP address in the CIDR
I0524 17:57:56.001217 10646 checks.go:205] validating availability of port 2379
I0524 17:57:56.001235 10646 checks.go:205] validating availability of port 2380
I0524 17:57:56.001247 10646 checks.go:245] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0524 17:57:56.001349 10646 checks.go:838] using image pull policy: IfNotPresent
I0524 17:57:56.018435 10646 checks.go:847] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.4
I0524 17:57:56.033614 10646 checks.go:847] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.4
I0524 17:57:56.049090 10646 checks.go:847] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.4
I0524 17:57:56.064531 10646 checks.go:847] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.4
I0524 17:57:56.082128 10646 checks.go:847] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
I0524 17:57:56.097267 10646 checks.go:847] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
I0524 17:57:56.113455 10646 checks.go:847] image exists: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0524 17:57:56.118314 10646 certs.go:46] creating PKI assets
I0524 17:57:56.118376 10646 certs.go:487] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [dev-k8s-master01 localhost] and IPs [192.168.1.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [dev-k8s-master01 localhost] and IPs [192.168.1.15 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
I0524 17:57:57.010895 10646 certs.go:487] validating certificate period for ca certificate
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [dev-k8s-master01 dev-k8s-master02 dev-k8s-master03 k8s-dev-master.ex-ai.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.15 192.168.7.173 192.168.1.38 192.168.0.163 192.168.0.90]
I0524 17:57:57.339536 10646 certs.go:487] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
I0524 17:57:57.385838 10646 certs.go:77] creating new public/private key files for signing service account users
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0524 17:57:58.009494 10646 manifests.go:99] [control-plane] getting StaticPodSpecs
I0524 17:57:58.009751 10646 certs.go:487] validating certificate period for CA certificate
I0524 17:57:58.009827 10646 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0524 17:57:58.009841 10646 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0524 17:57:58.009847 10646 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0524 17:57:58.009854 10646 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0524 17:57:58.009862 10646 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0524 17:57:58.009869 10646 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0524 17:57:58.016124 10646 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0524 17:57:58.016145 10646 manifests.go:99] [control-plane] getting StaticPodSpecs
I0524 17:57:58.016322 10646 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0524 17:57:58.016335 10646 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0524 17:57:58.016341 10646 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0524 17:57:58.016348 10646 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0524 17:57:58.016356 10646 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0524 17:57:58.016363 10646 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0524 17:57:58.016370 10646 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0524 17:57:58.016377 10646 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0524 17:57:58.016980 10646 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0524 17:57:58.016998 10646 manifests.go:99] [control-plane] getting StaticPodSpecs
I0524 17:57:58.017171 10646 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0524 17:57:58.017507 10646 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[check-etcd] Checking that the etcd cluster is healthy
I0524 17:57:58.018268 10646 local.go:71] [etcd] Checking etcd cluster health
I0524 17:57:58.018282 10646 local.go:74] creating etcd client that connects to etcd pods
I0524 17:57:58.018291 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:02.138320 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:05.187304 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:08.258177 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:11.356253 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:14.366123 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:17.447385 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:20.513218 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:23.606232 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:26.699159 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:29.800693 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:32.813094 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:35.901599 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:38.967241 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:42.054078 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:45.107540 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:48.180650 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:51.291295 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:54.317130 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:58:57.388214 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:00.476226 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:03.569171 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:06.669514 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:09.685159 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:12.757184 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:15.875321 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:18.892172 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:22.030165 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:25.085195 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0524 17:59:28.139599 10646 etcd.go:166] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
Get "https://192.168.7.173:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Detcd%2Ctier%3Dcontrol-plane": dial tcp 192.168.7.173:6443: connect: no route to host
could not retrieve the list of etcd endpoints
k8s.io/kubernetes/cmd/kubeadm/app/util/etcd.getRawEtcdEndpointsFromPodAnnotation
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/util/etcd/etcd.go:155
k8s.io/kubernetes/cmd/kubeadm/app/util/etcd.getEtcdEndpointsWithBackoff
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/util/etcd/etcd.go:131
k8s.io/kubernetes/cmd/kubeadm/app/util/etcd.getEtcdEndpoints
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/util/etcd/etcd.go:127
k8s.io/kubernetes/cmd/kubeadm/app/util/etcd.NewFromCluster
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/util/etcd/etcd.go:98
k8s.io/kubernetes/cmd/kubeadm/app/phases/etcd.CheckLocalEtcdClusterStatus
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/etcd/local.go:75
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join.runCheckEtcdPhase
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join/checketcd.go:69
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdJoin.func1
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:174
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:852
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897
k8s.io/kubernetes/cmd/kubeadm/app.Run
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:225
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1371
error execution phase check-etcd
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdJoin.func1
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:174
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:852
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897
k8s.io/kubernetes/cmd/kubeadm/app.Run
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
_output/local/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:225
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1371
```
Error 1
Get "https://192.168.7.173:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Detcd%2Ctier%3Dcontrol-plane": dial tcp 192.168.7.173:6443: connect: no route to host
Fix: during the check-etcd phase, kubeadm still reads controlPlaneEndpoint from the kubeadm-config ConfigMap, which pointed at the old address 192.168.7.173. Edit that ConfigMap (in the kube-system namespace) on a surviving master:
```bash
root@dev-k8s-master03:~# kubectl edit cm kubeadm-config -n kube-system
# change controlPlaneEndpoint from the old address to:
controlPlaneEndpoint: 192.168.1.38:6443
```
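After the ConfigMap points at a reachable API server, the same `kubeadm join` command can be re-run and should get past the check-etcd phase. A quick hedged check once it finishes:

```bash
# The rejoined master and the etcd members should now all be on the 192.168.0.x / 192.168.1.x range
kubectl get nodes -o wide
kubectl -n kube-system get pods -l component=etcd -o wide
```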
More to be added when time permits.