【Kubernetes】A Summary of Problems Encountered Installing a Three-Node K8s Cluster with kubeadm

**OS:** CentOS 7.9 (kernel upgraded to 3.10.0-1160.119.1)

**Docker CE version:** 26.1.4

**Kubernetes version:** 1.28.1

It has been a bit over two years since I last installed a K8s cluster; that one was still on 1.24.12, while current releases have already moved on to 1.33.12 through 1.36.1. I had planned to run some experiments on the old cluster, but it refused to start because of some inexplicable errors (not expired certificates), so I simply tore everything down: deleted the old environment, reinstalled the virtual machines, and rebuilt the K8s cluster from scratch.

As everyone knows, CentOS 7 is no longer maintained upstream, so the official online yum repositories are gone. You can switch to the Alibaba Cloud mirrors, but installing a few components still hit some minor snags, and I had to hunt down a few packages on the web, such as these:

```text
[ 1 ] socat-1.7.3.2-2.el7.x86_64.rpm
[ 2 ] conntrack-tools-1.4.4-7.el7.x86_64.rpm
[ 3 ] cri-dockerd-0.3.14-3.el7.x86_64.rpm
```
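
For reference, a minimal sketch of installing these locally downloaded RPMs, assuming they all sit in the current directory; the cri-docker unit name assumed below is the one shipped by the upstream cri-dockerd RPM and may differ on your system:

```bash
# Install the locally downloaded dependencies; yum resolves any remaining
# dependencies from the configured (e.g. Aliyun) repositories.
yum localinstall -y \
  socat-1.7.3.2-2.el7.x86_64.rpm \
  conntrack-tools-1.4.4-7.el7.x86_64.rpm \
  cri-dockerd-0.3.14-3.el7.x86_64.rpm

# cri-dockerd ships a systemd unit; enable and start it before running kubeadm.
systemctl enable --now cri-docker
```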

The most frustrating part of the whole deployment was that the kubelet service simply would not start. Running journalctl -xeu kubelet showed some errors, but because of how the output is rendered in CRT I could never see the complete messages; piecing fragments together, I eventually spotted the useful error below:

```text
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed pulling image \"registry.k8s.io/pause:3.9\": Error response from daemon: Head \"https://europe-west3-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.9\": dial tcp 74.125.135.82:443: co

Searching online, the advice was to modify some file on every node of the cluster, but that file simply did not exist on my nodes. It turned out the fix is much simpler: kubelet just cannot find an image tagged registry.k8s.io/pause:3.9, so I re-tagged the pause image already pulled from the Alibaba Cloud mirror. Just that easy!

```bash
[root@k8s-master ~]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.28.15   9dc6939e7c57   18 months ago   125MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.28.15   9d3465f8477c   18 months ago   59.3MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.28.15   10541d8af03f   18 months ago   121MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.28.15   ba6d7f8bc25b   18 months ago   81.8MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.28.1    5c801295c21d   2 years ago     126MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.28.1    6cdbabde3874   2 years ago     73.1MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.28.1    821b3dfea27b   2 years ago     122MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.28.1    b462ce0c8b1f   2 years ago     60.1MB
registry.aliyuncs.com/google_containers/etcd                      3.5.9-0    73deb9a3f702   2 years ago     294MB
registry.aliyuncs.com/google_containers/coredns                   v1.10.1    ead0a4a53df8   3 years ago     53.6MB
registry.aliyuncs.com/google_containers/pause                     3.9        e6f181688397   3 years ago     744kB
[root@k8s-master ~]# 
[root@k8s-master ~]# 
[root@k8s-master ~]# docker tag registry.aliyuncs.com/google_containers/pause:3.9 registry.k8s.io/pause:3.9
[root@k8s-master ~]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.28.15   9dc6939e7c57   18 months ago   125MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.28.15   10541d8af03f   18 months ago   121MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.28.15   ba6d7f8bc25b   18 months ago   81.8MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.28.15   9d3465f8477c   18 months ago   59.3MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.28.1    5c801295c21d   2 years ago     126MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.28.1    6cdbabde3874   2 years ago     73.1MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.28.1    821b3dfea27b   2 years ago     122MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.28.1    b462ce0c8b1f   2 years ago     60.1MB
registry.aliyuncs.com/google_containers/etcd                      3.5.9-0    73deb9a3f702   2 years ago     294MB
registry.aliyuncs.com/google_containers/coredns                   v1.10.1    ead0a4a53df8   3 years ago     53.6MB
registry.aliyuncs.com/google_containers/pause                     3.9        e6f181688397   3 years ago     744kB
registry.k8s.io/pause                                             3.9        e6f181688397   3 years ago     744kB
```
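
Note that the re-tag has to be done on every node that will run Pods, since each node's runtime pulls the sandbox image locally. The "file to modify" that those online posts refer to is most likely the cri-dockerd systemd unit: cri-dockerd accepts a --pod-infra-container-image flag, so pointing it at the Aliyun pause image is an alternative fix. A rough sketch, assuming the default unit path from the upstream RPM:

```bash
# Append the sandbox-image flag right after the cri-dockerd binary on the
# ExecStart line, then reload systemd and restart the service.
sed -i 's|^ExecStart=.*cri-dockerd|& --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9|' \
  /usr/lib/systemd/system/cri-docker.service
systemctl daemon-reload
systemctl restart cri-docker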

Running kubeadm init again after that succeeded.

```bash
[root@k8s-master ~]# kubeadm init   --apiserver-advertise-address=192.168.223.201   --image-repository=registry.aliyuncs.com/google_containers   --kubernetes-version=v1.28.1   --service-cidr=10.96.0.0/12   --pod-network-cidr=10.244.0.0/16   --cri-socket unix:///var/run/cri-dockerd.sock   --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.28.1
[preflight] Running pre-flight checks
        [WARNING Port-6443]: Port 6443 is in use
        [WARNING Port-10259]: Port 10259 is in use
        [WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [WARNING Port-10250]: Port 10250 is in use
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0513 16:36:48.370228    4888 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.9" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.609911 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: qj3fqz.gsjtjsxyupomoatw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.223.201:6443 --token qj3fqz.gsjtjsxyupomoatw \
        --discovery-token-ca-cert-hash sha256:577f38f2bce4d90a4a3b8cdd48101d5cc859fb8a65a91e7ae7941add34be4f18
```
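
One detail worth noting: because the nodes use cri-dockerd alongside Docker Engine, the join command on the workers also needs the --cri-socket flag, otherwise kubeadm may complain about multiple CRI endpoints. A sketch, reusing the token and hash printed above:

```bash
# Run on each worker node.
kubeadm join 192.168.223.201:6443 \
  --token qj3fqz.gsjtjsxyupomoatw \
  --discovery-token-ca-cert-hash sha256:577f38f2bce4d90a4a3b8cdd48101d5cc859fb8a65a91e7ae7941add34be4f18 \
  --cri-socket unix:///var/run/cri-dockerd.sock
```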

Below is the error output from the earlier kubeadm init attempt:

```bash
[root@k8s-master kubernetes]# kubeadm init \
>   --apiserver-advertise-address=192.168.223.201 \
>   --image-repository=registry.aliyuncs.com/google_containers \
>   --kubernetes-version=v1.28.1 \
>   --service-cidr=10.96.0.0/12 \
>   --pod-network-cidr=10.244.0.0/16 \
>   --cri-socket unix:///var/run/cri-dockerd.sock \
>   --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.28.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0513 16:12:55.126377    4012 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.9" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.223.201]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.223.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.223.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
```
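
The warnings in the later, successful run about ports 6443/10250 being in use and the manifests already existing are leftovers from this failed attempt. If you prefer a clean slate instead of relying on --ignore-preflight-errors=all, a reset along these lines should work (a sketch, run on the node being re-initialized):

```bash
# Wipe the half-initialized control plane before retrying kubeadm init.
kubeadm reset -f --cri-socket unix:///var/run/cri-dockerd.sock

# Clean up state that kubeadm reset does not remove.
rm -rf /etc/cni/net.d /root/.kube/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
```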

The last problem is one I had actually hit, and solved, a few years ago. But yesterday I wasn't paying attention and made the same mistake again: two lines of configuration in calico.yaml were not indented to match the original file's formatting.

The symptom was that the cluster nodes stayed NotReady and the two CoreDNS Pod replicas stayed Pending, as shown below:

```bash
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES           AGE   VERSION
k8s-master   NotReady   control-plane   15h   v1.28.1
k8s-node01   NotReady   <none>          15h   v1.28.1
k8s-node02   NotReady   <none>          15h   v1.28.1

[root@k8s-master ~]# watch kubectl get pods --all-namespaces
Every 2.0s: kubectl get pods --all-namespaces                                                                                                                                             Thu May 14 07:46:11 2026

NAMESPACE     NAME                                 READY   STATUS    RESTARTS      AGE
kube-system   coredns-66f779496c-4kmmc             0/1     Pending   0             15h
kube-system   coredns-66f779496c-v64d8             0/1     Pending   0             15h
kube-system   etcd-k8s-master                      1/1     Running   1 (21m ago)   15h
kube-system   kube-apiserver-k8s-master            1/1     Running   1 (21m ago)   15h
kube-system   kube-controller-manager-k8s-master   1/1     Running   1 (21m ago)   15h
kube-system   kube-proxy-mptxg                     1/1     Running   1 (25m ago)   15h
kube-system   kube-proxy-qc587                     1/1     Running   1 (21m ago)   15h
kube-system   kube-proxy-vwjd4                     1/1     Running   1 (24m ago)   15h
kube-system   kube-scheduler-k8s-master            1/1     Running   1 (21m ago)   15h
```
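
Before I realized the Calico manifest had failed to apply, the node condition already pointed at the missing CNI plugin. A quick check along these lines usually shows the reason (the pod name below is just the one from the listing above):

```bash
# The node's Ready condition explains why it is NotReady, typically
# "container runtime network not ready ... cni plugin not initialized".
kubectl describe node k8s-master | grep -iA 3 "ready"

# CoreDNS stays Pending because the scheduler will not place it on a NotReady node;
# the Events section at the end of the describe output spells this out.
kubectl describe pod -n kube-system coredns-66f779496c-4kmmc
```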

In fact, when I deployed Calico, the last line of the output had already told me what was wrong, but I kept ignoring it:

```bash
[root@k8s-master ~]# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
error: error parsing calico.yaml: error converting YAML to JSON: yaml: line 204: did not find expected '-' indicator
```
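
The parse error even reports the offending line number in calico.yaml, so it can be inspected directly; the line range below is only illustrative, use the number from your own error message:

```bash
# Show the lines around the reported position, with line numbers.
cat -n calico.yaml | sed -n '198,210p'
```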

Now let's see just how silly the mistake really was!

The correct configuration; note CALICO_IPV4POOL_CIDR and its value:

```yaml
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"

The broken configuration: these two lines were originally commented out with #, and after removing the # I couldn't help myself and indented them two extra spaces to the right...

```yaml
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
              - name: CALICO_IPV4POOL_CIDR
                value: "10.244.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"

Delete the calico.yaml deployment and apply it again, and this time it succeeds:

```bash
[root@k8s-master ~]# kubectl delete -f calico.yaml 
poddisruptionbudget.policy "calico-kube-controllers" deleted
serviceaccount "calico-kube-controllers" deleted
serviceaccount "calico-node" deleted
configmap "calico-config" deleted
customresourcedefinition.apiextensions.k8s.io "bgpconfigurations.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "bgppeers.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "blockaffinities.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "caliconodestatuses.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "clusterinformations.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "felixconfigurations.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "globalnetworkpolicies.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "globalnetworksets.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "hostendpoints.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "ipamblocks.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "ipamconfigs.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "ipamhandles.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "ippools.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "ipreservations.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "kubecontrollersconfigurations.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "networkpolicies.crd.projectcalico.org" deleted
customresourcedefinition.apiextensions.k8s.io "networksets.crd.projectcalico.org" deleted
clusterrole.rbac.authorization.k8s.io "calico-kube-controllers" deleted
clusterrole.rbac.authorization.k8s.io "calico-node" deleted
clusterrolebinding.rbac.authorization.k8s.io "calico-kube-controllers" deleted
clusterrolebinding.rbac.authorization.k8s.io "calico-node" deleted
Error from server (NotFound): error when deleting "calico.yaml": daemonsets.apps "calico-node" not found
Error from server (NotFound): error when deleting "calico.yaml": deployments.apps "calico-kube-controllers" not found
[root@k8s-master ~]# kubectl apply -f calico.yaml 
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
```

As the calico-node and coredns Pods initialized and started running, the cluster nodes finally turned Ready.

```bash
[root@k8s-master ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS     RESTARTS      AGE
kube-system   calico-kube-controllers-658d97c59c-g9ct4   0/1     Pending    0             24s
kube-system   calico-node-4r75j                          0/1     Init:1/3   0             24s
kube-system   calico-node-5wx74                          0/1     Init:1/3   0             24s
kube-system   calico-node-ndztg                          0/1     Init:2/3   0             24s
kube-system   coredns-66f779496c-4kmmc                   0/1     Pending    0             15h
kube-system   coredns-66f779496c-v64d8                   0/1     Pending    0             15h
kube-system   etcd-k8s-master                            1/1     Running    1 (23m ago)   15h
kube-system   kube-apiserver-k8s-master                  1/1     Running    1 (23m ago)   15h
kube-system   kube-controller-manager-k8s-master         1/1     Running    1 (23m ago)   15h
kube-system   kube-proxy-mptxg                           1/1     Running    1 (27m ago)   15h
kube-system   kube-proxy-qc587                           1/1     Running    1 (23m ago)   15h
kube-system   kube-proxy-vwjd4                           1/1     Running    1 (27m ago)   15h
kube-system   kube-scheduler-k8s-master                  1/1     Running    1 (23m ago)   15h
[root@k8s-master ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS              RESTARTS      AGE
kube-system   calico-kube-controllers-658d97c59c-g9ct4   0/1     ContainerCreating   0             46s
kube-system   calico-node-4r75j                          0/1     Init:1/3            0             46s
kube-system   calico-node-5wx74                          0/1     Init:1/3            0             46s
kube-system   calico-node-ndztg                          0/1     Running             0             46s
kube-system   coredns-66f779496c-4kmmc                   0/1     Running             0             15h
kube-system   coredns-66f779496c-v64d8                   0/1     Running             0             15h
kube-system   etcd-k8s-master                            1/1     Running             1 (23m ago)   15h
kube-system   kube-apiserver-k8s-master                  1/1     Running             1 (23m ago)   15h
kube-system   kube-controller-manager-k8s-master         1/1     Running             1 (23m ago)   15h
kube-system   kube-proxy-mptxg                           1/1     Running             1 (27m ago)   15h
kube-system   kube-proxy-qc587                           1/1     Running             1 (23m ago)   15h
kube-system   kube-proxy-vwjd4                           1/1     Running             1 (27m ago)   15h
kube-system   kube-scheduler-k8s-master                  1/1     Running             1 (23m ago)   15h
[root@k8s-master ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS      AGE
kube-system   calico-kube-controllers-658d97c59c-g9ct4   1/1     Running   0             94s
kube-system   calico-node-4r75j                          1/1     Running   0             94s
kube-system   calico-node-5wx74                          1/1     Running   0             94s
kube-system   calico-node-ndztg                          1/1     Running   0             94s
kube-system   coredns-66f779496c-4kmmc                   1/1     Running   0             15h
kube-system   coredns-66f779496c-v64d8                   1/1     Running   0             15h
kube-system   etcd-k8s-master                            1/1     Running   1 (24m ago)   15h
kube-system   kube-apiserver-k8s-master                  1/1     Running   1 (24m ago)   15h
kube-system   kube-controller-manager-k8s-master         1/1     Running   1 (24m ago)   15h
kube-system   kube-proxy-mptxg                           1/1     Running   1 (28m ago)   15h
kube-system   kube-proxy-qc587                           1/1     Running   1 (24m ago)   15h
kube-system   kube-proxy-vwjd4                           1/1     Running   1 (28m ago)   15h
kube-system   kube-scheduler-k8s-master                  1/1     Running   1 (24m ago)   15h
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   15h   v1.28.1
k8s-node01   Ready    <none>          15h   v1.28.1
k8s-node02   Ready    <none>          15h   v1.28.1
```
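
As a final smoke test once everything is Ready, spinning up a throwaway Deployment and doing a DNS lookup confirms that both the Pod network and CoreDNS actually work. A sketch; the deployment name and images are just common defaults, not part of the original setup:

```bash
# Create a test deployment and expose it inside the cluster.
kubectl create deployment nginx-test --image=nginx --replicas=2
kubectl expose deployment nginx-test --port=80

# Check that the pods land on the worker nodes and get 10.244.x.x addresses.
kubectl get pods -o wide -l app=nginx-test

# Verify in-cluster DNS resolution through CoreDNS.
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup nginx-test.default.svc.cluster.local

# Clean up.
kubectl delete deployment nginx-test && kubectl delete service nginx-test
```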

I struggled with this for almost a whole day yesterday without success and finally got it working this morning at work. None of it involves much deep technical skill, but running into problems and managing to solve them still feels rewarding. I'm sharing this in the hope that it helps and inspires anyone who runs into similar issues.
