Installing a K8s Cluster

This article was first published on my blog: https://blog.liuzijian.com/post/9aa6d426-a01c-05b0-6f7a-5da4343f0f9e.html

Because Alibaba Cloud adjusted its acceleration offering, the image accelerator service has not been available since July 2024, so pulling images, downloading the network plugin, and similar steps require direct access to Docker Hub (a proxy may be needed).
The entire installation is performed as root.

1. Pre-installation preparation

The cluster is installed on three CentOS virtual machines. Prepare the environment as follows:

  1. In VMware, create a NAT-type network (one usually exists by default and only needs to be configured). Mine is named VMnet8. Set its subnet IP to 192.168.228.0, the subnet mask to 255.255.255.0, and the gateway to 192.168.228.2, with an address range of 192.168.228.3-192.168.228.254. Also check "Connect a host virtual adapter to this network" so the physical host gets an address on this network, normally 192.168.228.1.

  2. Install three Linux virtual machines in VMware (you can install one and then make full clones). The installation image used here is CentOS-7-x86_64-Minimal-2009. After installation, attach each VM to VMnet8, change the interface from DHCP to a static address, and set the IP addresses to 192.168.228.131, 192.168.228.132, and 192.168.228.133 respectively. Then set the gateway, subnet mask, DNS, and MAC address on each machine, making sure the MAC addresses do not collide. A sample interface configuration is sketched after this list.

ISO download: https://mirrors.aliyun.com/centos/7/isos/x86_64/
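
For reference, a minimal static-IP interface configuration for the first node could look like the sketch below. The interface name ens33 is an assumption (check yours with ip addr); adjust IPADDR to .132/.133 on the other two machines, and restart the network afterwards with systemctl restart network.

ini
# /etc/sysconfig/network-scripts/ifcfg-ens33  (interface name is an assumption)
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.228.131
NETMASK=255.255.255.0
GATEWAY=192.168.228.2
DNS1=192.168.228.2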

2. Installing Docker

K8s is a container orchestration tool and needs a container runtime, so Docker must be installed on every machine.

2.1 Replace the machines' yum repositories

The CentOS version I use can no longer download packages from the official mirrors, so switch to the Aliyun yum repositories.

Open the configuration file:

bash
vi /etc/yum.repos.d/CentOS-Base.repo

Paste the following content into it:

ini
[base]
name=CentOS-$releasever - Base
baseurl=https://mirrors.aliyun.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

[updates]
name=CentOS-$releasever - Updates
baseurl=https://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

[extras]
name=CentOS-$releasever - Extras
baseurl=https://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

[centosplus]
name=CentOS-$releasever - Plus
baseurl=https://mirrors.aliyun.com/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

Refresh yum:

bash
yum clean all

2.2 Install yum-utils

bash
yum install -y yum-utils

2.3 Add the Docker yum repository

Run the following command to add the Aliyun-mirrored Docker repository:

bash
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

2.4 Install Docker

Run the install command to install the three Docker components:

bash
yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6

2.5 Configure Docker

Start Docker now and enable it at boot:

bash
systemctl enable docker --now

Print the Docker info to confirm installation and startup succeeded:

bash
docker info

2.6 Configure a Docker registry mirror

Log in to the Alibaba Cloud Container Registry console at https://cr.console.aliyun.com/cn-shanghai/instances/mirrors and obtain your own accelerator address.

bash
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://***************.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
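
Optionally, the kubeadm preflight check later in this guide warns that Docker is using the cgroupfs cgroup driver while systemd is recommended. A sketch of a daemon.json that also sets the systemd cgroup driver is shown below; the mirror URL placeholder still has to be replaced with your own address.

bash
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://***************.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker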

3. Installing the K8s cluster

3.1 Requirements

  • A compatible Linux distribution (Ubuntu, CentOS, etc.)
  • At least 2 GB of RAM and 2 CPU cores per machine
  • Full network connectivity between all machines in the cluster
  • No duplicate hostnames within the cluster
  • No duplicate MAC addresses within the cluster

3.2 Cluster layout

One master node and two worker nodes:

  • Master node

    Hostname: k8s131, IP: 192.168.228.131

  • Worker nodes

    Hostname: k8s132, IP: 192.168.228.132

    Hostname: k8s133, IP: 192.168.228.133

3.3 Pre-installation settings

These settings must be applied on all three machines.

1. Set the hostnames

On the machine with the corresponding IP address, set the hostname to k8s131, k8s132, or k8s133:

bash
hostnamectl set-hostname xxx
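
Concretely, that means running one of the following on each machine, matching the plan above:

bash
hostnamectl set-hostname k8s131   # on 192.168.228.131
hostnamectl set-hostname k8s132   # on 192.168.228.132
hostnamectl set-hostname k8s133   # on 192.168.228.133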

Afterwards, check that each hostname has been set:

bash
hostname

After exiting the session with exit and reconnecting, the shell prompt shows the hostname you just set:

text
Last login: Thu Nov 21 17:47:02 2024 from 192.168.228.1
[root@k8s133 ~]# 

2. Disable swap

Use free -m to inspect the swap partition. Before installing K8s, swap must be disabled, and disabled permanently:

text
[root@localhost ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1819         309        1166           9         342        1360
Swap:          2047           0        2047

Run the following to disable swap permanently (Swap should then read 0 0 0):

bash
swapoff -a  
sed -ri 's/.*swap.*/#&/' /etc/fstab
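
To double-check, the following should show that swap is now off and that the fstab entry has been commented out:

bash
free -m               # the Swap line should now read 0 0 0
grep swap /etc/fstab  # the swap entry should now start with '#'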

3. Disable SELinux, both immediately and permanently

bash
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

4. Let iptables see bridged traffic

Loading the br_netfilter module and enabling the sysctls below, so that iptables can see bridged traffic, is the setup required by the official K8s documentation:

bash
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system
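
To verify that the module is loaded and the settings took effect, the following checks can be used:

bash
lsmod | grep br_netfilter                    # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables    # should print "= 1"
sysctl net.bridge.bridge-nf-call-ip6tables   # should print "= 1"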

5. Turn off the firewall

CentOS 7 enables firewalld by default; stop and disable it on all three machines:

bash
systemctl stop firewalld
systemctl disable firewalld

3.4 Install the K8s components

1. Install kubelet, kubeadm, and kubectl

First add the K8s package repository on all three machines:

bash
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

Run the install command on each of the three machines:

bash
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

Start kubelet on all three machines and enable it at boot:

bash
sudo systemctl enable --now kubelet

If you keep running systemctl status kubelet, you will see the service constantly restarting; this is expected, because kubelet is still waiting for the cluster to be initialized.
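
You can watch this restart loop directly; it stops once kubeadm init (or kubeadm join) has written the kubelet configuration:

bash
systemctl status kubelet    # shows "activating (auto-restart)" in a loop
journalctl -u kubelet -f    # follow the kubelet log to see why it keeps restarting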

3.5 Initialize the master node

Use kubeadm to bootstrap the cluster and initialize the master node.

1. First, download the required images on all machines:

bash
docker pull registry.k8s.io/kube-apiserver:v1.20.9
docker pull registry.k8s.io/kube-proxy:v1.20.9
docker pull registry.k8s.io/kube-controller-manager:v1.20.9
docker pull registry.k8s.io/kube-scheduler:v1.20.9
docker pull registry.k8s.io/coredns:1.7.0
docker pull registry.k8s.io/etcd:3.4.13-0
docker pull registry.k8s.io/pause:3.2

After the downloads finish, verify with docker images.
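
For example, a quick check that all seven images are present:

bash
docker images | grep -E 'kube-apiserver|kube-proxy|kube-controller-manager|kube-scheduler|coredns|etcd|pause'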

2. Add a host entry for the master node

All three machines must map the master node (also called the cluster entry point) to a domain name; adjust the IP to your own setup.

bash
echo "192.168.228.131  cluster-endpoint" >> /etc/hosts

3. Initialize the master node

Run the command below on the master machine to initialize the control plane.

Parameter notes:

  • The ranges given to --service-cidr and --pod-network-cidr must not overlap with each other, nor with the network the servers themselves are on.
  • Change --apiserver-advertise-address to your own master node's IP.
  • Change --control-plane-endpoint to the domain name you configured for the master node.

The command:

bash
kubeadm init \
--image-repository registry.k8s.io \
--apiserver-advertise-address=192.168.228.131 \
--control-plane-endpoint=cluster-endpoint \
--kubernetes-version=v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.23.0/24 

Wait for initialization to finish; when the terminal reports success, the master node is initialized. Copy and save the output for later use.

text
[root@k8s131 ~]# kubeadm init \
> --image-repository registry.k8s.io \
> --apiserver-advertise-address=192.168.228.131 \
> --control-plane-endpoint=cluster-endpoint \
> --kubernetes-version=v1.20.9 \
> --service-cidr=10.96.0.0/16 \
> --pod-network-cidr=192.168.23.0/24 
[init] Using Kubernetes version: v1.20.9
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [cluster-endpoint k8s131 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.228.131]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s131 localhost] and IPs [192.168.228.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s131 localhost] and IPs [192.168.228.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.007116 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s131 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s131 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 4qn4kj.52saric9a3vqnk1w
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join cluster-endpoint:6443 --token 4qn4kj.52saric9a3vqnk1w \
    --discovery-token-ca-cert-hash sha256:7f12181600006aeb62fb38bcb82582809a9ad1911e49065f1fd13f9c68c95774 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token 4qn4kj.52saric9a3vqnk1w \
    --discovery-token-ca-cert-hash sha256:7f12181600006aeb62fb38bcb82582809a9ad1911e49065f1fd13f9c68c95774 

4. Kubeconfig setup

As instructed by the output line "To start using your cluster, you need to run the following as a regular user", run the following on the master node:

bash
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

5. Deploy a network plugin

As the output above instructs ("You should now deploy a pod network to the cluster. Run 'kubectl apply -f [podnetwork].yaml'"), a pod network plugin must now be deployed from the master node.

Run kubectl get nodes -A to check the master node's status; it shows NotReady because no network plugin has been deployed yet.

text
[root@k8s131 ~]# kubectl get nodes -A
NAME     STATUS     ROLES                  AGE    VERSION
k8s131   NotReady   control-plane,master   5h5m   v1.20.9

K8s supports several network plugins, for example Calico. Before installing it, download its manifest to a local directory:

bash
curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O  

Find the following lines in the manifest:

text
# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.0.0/16"

Change the default 192.168.0.0/16 to the address given to --pod-network-cidr, 192.168.23.0/24, and uncomment the two lines:

text
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.23.0/24"

Run the following on the master node to deploy Calico; the deployment pulls images over the network:

bash
kubectl apply -f calico.yaml

text
[root@k8s131 opt]# kubectl apply -f calico.yaml 
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created

Run kubectl get pods -A to check how the network plugin's pods are coming up.

A status of ContainerCreating or Init means images are still being downloaded and the pods are starting:

text
[root@k8s131 opt]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-577f77cb5c-wpxh7   0/1     ContainerCreating   0          21m
kube-system   calico-node-7wb4z                          0/1     Init:2/3            0          9m3s
kube-system   coredns-76c6f6bbc9-4q5f9                   0/1     ContainerCreating   0          5h41m
kube-system   coredns-76c6f6bbc9-nkdcl                   0/1     ContainerCreating   0          5h41m
kube-system   etcd-k8s131                                1/1     Running             1          5h41m
kube-system   kube-apiserver-k8s131                      1/1     Running             1          5h41m
kube-system   kube-controller-manager-k8s131             1/1     Running             1          5h41m
kube-system   kube-proxy-nt5jf                           1/1     Running             1          5h41m
kube-system   kube-scheduler-k8s131                      1/1     Running             1          5h41m

Once finished, every pod shows Running:

text
[root@k8s131 opt]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-577f77cb5c-wpxh7   1/1     Running   0          24m
kube-system   calico-node-7wb4z                          1/1     Running   0          12m
kube-system   coredns-76c6f6bbc9-4q5f9                   1/1     Running   0          5h44m
kube-system   coredns-76c6f6bbc9-nkdcl                   1/1     Running   0          5h44m
kube-system   etcd-k8s131                                1/1     Running   1          5h45m
kube-system   kube-apiserver-k8s131                      1/1     Running   1          5h45m
kube-system   kube-controller-manager-k8s131             1/1     Running   1          5h45m
kube-system   kube-proxy-nt5jf                           1/1     Running   1          5h44m
kube-system   kube-scheduler-k8s131                      1/1     Running   1          5h45m

Run kubectl get nodes -A again: the master node is now Ready, so the network plugin deployment is complete.

text
[root@k8s131 opt]# kubectl get nodes -A
NAME     STATUS   ROLES                  AGE     VERSION
k8s131   Ready    control-plane,master   5h53m   v1.20.9

The master node is now fully initialized.

3.6 Join the worker nodes to the cluster

The command for joining worker nodes is in the kubeadm init output saved earlier. Find the line "Then you can join any number of worker nodes by running the following on each as root:" and run the command below it on each worker node. The token in that command is valid for 24 hours.

bash
kubeadm join cluster-endpoint:6443 --token 4qn4kj.52saric9a3vqnk1w \
    --discovery-token-ca-cert-hash sha256:7f12181600006aeb62fb38bcb82582809a9ad1911e49065f1fd13f9c68c95774 

If you lose the token or it has expired, run the following on the master node to generate a new join command:

bash
kubeadm token create --print-join-command

Run the join command on both worker nodes; output like the following means the node joined successfully:

text
[root@k8s132 ~]# kubeadm join cluster-endpoint:6443 --token bjme49.uhg7ubgjn2m16b76     --discovery-token-ca-cert-hash sha256:7f12181600006aeb62fb38bcb82582809a9ad1911e49065f1fd13f9c68c95774
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
	[WARNING Hostname]: hostname "k8s132" could not be reached
	[WARNING Hostname]: hostname "k8s132": lookup k8s132 on 192.168.228.2:53: server misbehaving
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Back on the master node, run kubectl get nodes -A to check node status: the two worker nodes are now listed, but as NotReady. Running kubectl get pods -A shows that this is because the network plugin on the worker nodes has not finished initializing. Be patient; in my experience downloading the network plugin images takes quite a while.

The cluster is managed from the master node; kubectl commands can only be run there.

text
[root@k8s131 ~]# kubectl get nodes -A
NAME     STATUS     ROLES                  AGE     VERSION
k8s131   Ready      control-plane,master   2d23h   v1.20.9
k8s132   NotReady   <none>                 6m49s   v1.20.9
k8s133   NotReady   <none>                 6m41s   v1.20.9

text
[root@k8s131 ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS                  RESTARTS   AGE
kube-system   calico-kube-controllers-577f77cb5c-wpxh7   1/1     Running                 1          2d18h
kube-system   calico-node-7wb4z                          1/1     Running                 1          2d18h
kube-system   calico-node-plcdp                          0/1     Init:ImagePullBackOff   0          8m22s
kube-system   calico-node-zwmrg                          0/1     Init:ImagePullBackOff   0          8m30s
kube-system   coredns-76c6f6bbc9-4q5f9                   1/1     Running                 1          2d23h
kube-system   coredns-76c6f6bbc9-nkdcl                   1/1     Running                 1          2d23h
kube-system   etcd-k8s131                                1/1     Running                 2          2d23h
kube-system   kube-apiserver-k8s131                      1/1     Running                 2          2d23h
kube-system   kube-controller-manager-k8s131             1/1     Running                 2          2d23h
kube-system   kube-proxy-flrq9                           1/1     Running                 0          8m22s
kube-system   kube-proxy-nt5jf                           1/1     Running                 2          2d23h
kube-system   kube-proxy-tcrjv                           1/1     Running                 0          8m30s
kube-system   kube-scheduler-k8s131                      1/1     Running                 2          2d23h

Once every pod in kubectl get pods -A is Running, check kubectl get nodes -A again:

text
[root@k8s131 ~]# kubectl get nodes -A
NAME     STATUS   ROLES                  AGE   VERSION
k8s131   Ready    control-plane,master   3d    v1.20.9
k8s132   Ready    <none>                 23m   v1.20.9
k8s133   Ready    <none>                 22m   v1.20.9

All nodes are now Ready: the two worker nodes have joined the K8s cluster and are ready, and the K8s cluster installation is complete!
