2024 Edition: Quickly Deploy a Kubernetes (K8s) Cluster with Kubeadm

Kubernetes Cluster Deployment

Resource List

OS          Specs  Hostname    IP              Required Software
CentOS 7.9  2C4G   k8s-master  192.168.93.101  Docker CE, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, Etcd, kube-proxy
CentOS 7.9  2C4G   k8s-node01  192.168.93.102  Docker CE, kubelet, kube-proxy, Flannel
CentOS 7.9  2C4G   k8s-node02  192.168.93.103  Docker CE, kubelet, kube-proxy, Flannel

Base Environment

  • Disable the firewall
bash
systemctl stop firewalld
systemctl disable firewalld
  • Disable the kernel security mechanism (SELinux)
bash
setenforce 0
sed -i "s/^SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config
  • Set the hostnames (run the matching command on each host)

    hostnamectl set-hostname k8s-master
    hostnamectl set-hostname k8s-node01
    hostnamectl set-hostname k8s-node02

1. Environment Preparation (run on all three hosts)

  • Before formally deploying the Kubernetes cluster, complete the following preparation. These base-environment steps must be performed on all three hosts (k8s-master, k8s-node01, k8s-node02); k8s-master is used as the example below

1.1 Bind hosts entries

bash
[root@k8s-master ~]# cat >> /etc/hosts << EOF
192.168.93.101 k8s-master
192.168.93.102 k8s-node01
192.168.93.103 k8s-node02
EOF
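
A quick way to confirm the entries took effect is to resolve each hostname from any of the hosts; for example:
bash
# Each hostname should now resolve to the address added above
[root@k8s-master ~]# ping -c 1 k8s-node01
[root@k8s-master ~]# ping -c 1 k8s-node02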

1.2 Install common utilities

bash
[root@k8s-master ~]# yum -y install vim lrzsz unzip wget net-tools tree bash-completion telnet

1.3 Disable the swap partition

  • kubeadm does not support swap
bash
# Disable temporarily
[root@k8s-master ~]# swapoff -a
# Disable permanently
[root@k8s-master ~]# sed -i '/swap/s/^/#/' /etc/fstab
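
To verify that swap is off now and stays off after a reboot, for example:
bash
# Swap should report 0B after swapoff
[root@k8s-master ~]# free -h | grep -i swap
# The swap entry in fstab should now be commented out
[root@k8s-master ~]# grep swap /etc/fstab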

1.4 Time synchronization

bash
[root@k8s-master ~]# yum -y install ntpdate
[root@k8s-master ~]# ntpdate ntp.aliyun.com
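
ntpdate performs a one-shot sync. To keep the clocks aligned over time, a periodic resync can be scheduled, e.g. via cron (an optional addition, not part of the original steps; the 30-minute interval is arbitrary):
bash
# Append a cron entry that resyncs against the Aliyun NTP server every 30 minutes
[root@k8s-master ~]# (crontab -l 2>/dev/null; echo '*/30 * * * * /usr/sbin/ntpdate ntp.aliyun.com >/dev/null 2>&1') | crontab -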

2. Docker Deployment (run on all three hosts)

  • With the base environment ready, deploy Docker on all three hosts, since Kubernetes' container orchestration here relies on Docker. Using k8s-master as the example: first install Docker's dependency packages, then point the Docker YUM repo at a domestic (Aliyun) mirror, and finally install and start Docker via YUM

2.1 Install dependency packages

  • Before installing Docker itself, install the dependency packages it requires
bash
[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

2.2 Add the YUM repository

  • When installing Docker via YUM, the Aliyun repo is recommended, since the overseas Docker YUM repo is no longer usable from mainland China
bash
[root@k8s-master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

2.3 Refresh the YUM cache and install Docker

  • The Kubernetes and Docker versions must be mutually compatible; check online for a matching pair. Compatible versions have already been selected for this walkthrough, so just follow along
bash
# List all available Docker versions
[root@k8s-master ~]# yum list docker-ce.x86_64 --showduplicates | sort -r
# Quickly build the YUM cache
[root@k8s-master ~]# yum makecache fast
# Install the specified Docker version
[root@k8s-master ~]# yum -y install docker-ce-19.03.15 docker-ce-cli-19.03.15

2.4 Start Docker

bash
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# systemctl enable docker

2.5 Configure the Aliyun registry mirror

  • Write the registry mirror address into /etc/docker/daemon.json; if the file does not exist, create and save it. As the extension suggests, the content of daemon.json must be valid JSON, so write it carefully
bash
# The settings below configure the cgroup driver and the Aliyun registry mirror
[root@k8s-master ~]# vim /etc/docker/daemon.json
{  
  "exec-opts": ["native.cgroupdriver=systemd"],  
  "registry-mirrors": ["https://u9noolvn.mirror.aliyuncs.com"]  
}
[root@k8s-master ~]# systemctl daemon-reload 
[root@k8s-master ~]# systemctl restart docker
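
After the restart, it is worth confirming that Docker picked up the systemd cgroup driver, since a mismatch with the kubelet causes problems during cluster init later:
bash
# Should print: Cgroup Driver: systemd
[root@k8s-master ~]# docker info 2>/dev/null | grep -i 'cgroup driver'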

2.6 Kernel tuning

bash
# While using Docker, you may occasionally see the following warnings
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

# These warnings can be eliminated by configuring kernel parameters
[root@k8s-master ~]# cat >> /etc/sysctl.conf << EOF
# Pass bridged IPv6 traffic to ip6tables
net.bridge.bridge-nf-call-ip6tables = 1
# Pass bridged IPv4 traffic to iptables
net.bridge.bridge-nf-call-iptables = 1
# Enable IP routing/forwarding
net.ipv4.ip_forward = 1
# Minimize swap usage
vm.swappiness = 0
EOF
# Load the br_netfilter module into the kernel
[root@k8s-master ~]# modprobe br_netfilter
# Apply the settings
[root@k8s-master ~]# sysctl -p
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
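
Note that modprobe only loads br_netfilter for the current boot. To have it load automatically after a reboot, it can be registered with systemd's modules-load mechanism (an optional step beyond the original instructions):
bash
# Load br_netfilter automatically at every boot
[root@k8s-master ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf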

3. Deploy the Kubernetes Cluster

  • With the base environment and Docker ready, the Kubernetes cluster can now be deployed with kubeadm

3.1 Configure the Kubernetes YUM repo (all three hosts)

  • Here, too, the Aliyun Kubernetes repo is recommended
bash
[root@k8s-master ~]# vim /etc/yum.repos.d/k8s.repo
[k8s]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
# Quickly rebuild the YUM cache
[root@k8s-master ~]# yum makecache fast

3.2 Install kubelet, kubeadm, and kubectl (all three hosts)

  • yum list kubectl --showduplicates | sort -r lists the available Kubernetes versions
  • kubectl: the command-line management tool; kubeadm: the cluster installation tool; kubelet: the node agent that manages containers
bash
# Install the specified version
[root@k8s-master ~]# yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
# Check the installed version
[root@k8s-master ~]# kubelet --version
Kubernetes v1.18.0

# If a GPG index check failure occurs during installation, install with yum install -y --nogpgcheck kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0 instead

3.3 Enable kubelet at boot (all three hosts)

  • Nodes: k8s-master, k8s-node01, k8s-node02
  • Right after installation, kubelet cannot be started with systemctl start kubelet; it only starts successfully once the node has joined the cluster or been initialized as the master
bash
[root@k8s-master ~]# systemctl enable kubelet.service
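
Until the node is initialized or joined, kubelet will keep restarting; checking the unit confirms it is enabled even though it is not yet running:
bash
# Enabled at boot, but expected to be in an activating/failed loop until kubeadm init/join
[root@k8s-master ~]# systemctl is-enabled kubelet
[root@k8s-master ~]# systemctl status kubelet --no-pager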

3.4 Generate the init configuration file (master node)

  • kubeadm offers many configuration options. In a Kubernetes cluster they are stored in a ConfigMap, and they can also be written out to a configuration file, which makes complex settings easier to manage. The kubeadm config command writes the configuration to a file
bash
# Generate the init configuration file
[root@k8s-master ~]# kubeadm config print init-defaults > init-config.yaml
W0615 08:50:40.154637   10202 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]


# Besides dumping configuration to a file, kubeadm config provides other useful subcommands
kubeadm config view: show the configuration values of the current cluster
kubeadm config print join-defaults: print the default parameters for kubeadm join
kubeadm config images list: list the required images
kubeadm config images pull: pull the images locally
kubeadm config upload from-flags: generate a ConfigMap from configuration flags

3.5 Edit the init configuration file (master node)

yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.93.101   # IP address of the master node
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master    # If using a hostname, make sure it resolves; otherwise use the IP address directly
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd    # Host directory mounted into the etcd container
imageRepository: registry.aliyuncs.com/google_containers  # The default registry is unreachable from mainland China; changed to a domestic mirror
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12   # Subnet for Service resources (cluster-internal network)
  podSubnet: 10.244.0.0/16   # Added: Pod subnet; must match the Pod network plugin's network below
scheduler: {}
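
Before moving on, a quick sanity check that the edited fields look right (just a grep over the file, nothing kubeadm-specific):
bash
# Confirm the fields modified above
[root@k8s-master ~]# grep -E 'advertiseAddress|imageRepository|serviceSubnet|podSubnet' init-config.yaml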

3.6 Pull the required images (master node)

bash
# List the images required for initialization
[root@k8s-master ~]# kubeadm config images list --config=init-config.yaml
W0615 09:00:11.145158   10221 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.3-0
registry.aliyuncs.com/google_containers/coredns:1.6.7


# Pull the images
[root@k8s-master ~]# kubeadm config images pull --config=init-config.yaml
W0615 09:00:38.350044   10227 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.3-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.6.7


# List the pulled images
[root@k8s-master ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.0             43940c34f24f        4 years ago         117MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.0             74060cea7f70        4 years ago         173MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.0             d3e55153f52f        4 years ago         162MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.0             a31f78c7c8ce        4 years ago         95.3MB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        4 years ago         683kB
registry.aliyuncs.com/google_containers/coredns                   1.6.7               67da37a9a360        4 years ago         43.8MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        4 years ago         288MB

3.7 Initialize k8s-master (master node)

  • A kubeadm-based installation does not include a network plugin, so right after initialization the cluster lacks networking: nodes listed on k8s-master all show "NotReady", the CoreDNS Pods cannot serve requests, and so on
  • If initialization fails, clean up with kubeadm reset and remove $HOME/.kube, /etc/kubernetes/, and /var/lib/etcd/ (see the sketch after this list)
  • Two methods are shown below; method one is the one actually used here to initialize k8s-master
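
A minimal cleanup sequence based on the commands listed above, for starting over after a failed init (run on the master):
bash
# Tear down everything kubeadm set up, then remove leftover state
kubeadm reset -f
rm -rf $HOME/.kube /etc/kubernetes/ /var/lib/etcd/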
3.7.1 Method one
bash
[root@k8s-master ~]# kubeadm init --config=init-config.yaml
W0615 09:09:02.736872   10425 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.93.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.93.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.93.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0615 09:09:04.854021   10425 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0615 09:09:04.854832   10425 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.502401 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

###################################################################
Your Kubernetes control-plane has initialized successfully!
###################################################################

To start using your cluster, you need to run the following as a regular user:

###################################################################
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
###################################################################

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

###################################################################
kubeadm join 192.168.93.101:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6e241cac434bff98306479bddda1fc912eda3d3f56f73a23373977fef40d5082
###################################################################
3.7.2 Method two
bash
# Change the IP below to the master node's IP, and adjust the Kubernetes version to your actual version
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=192.168.93.101 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16

3.8 Copy the kubeconfig to the user's home directory (master node)

bash
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

3.9 Join the nodes to the cluster

  • Simply copy the kubeadm join command echoed at the end of the k8s-master initialization, paste it on each node, and press Enter; no further configuration is needed
bash
[root@k8s-node01 ~]# kubeadm join 192.168.93.101:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:6e241cac434bff98306479bddda1fc912eda3d3f56f73a23373977fef40d5082
W0615 09:17:57.223102    9376 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


[root@k8s-node02 ~]# kubeadm join 192.168.93.101:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:6e241cac434bff98306479bddda1fc912eda3d3f56f73a23373977fef40d5082
W0615 09:18:01.227038   19243 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
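
Note that the bootstrap token is only valid for 24 hours (ttl: 24h0m0s in the init config). If it has expired by the time a node joins, a fresh join command can be generated on the master:
bash
# Create a new token and print the full kubeadm join command
[root@k8s-master ~]# kubeadm token create --print-join-command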

3.10 Check node status on k8s-master

  • As mentioned earlier, initializing k8s-master applies no network configuration, so the master cannot communicate with the nodes over the Pod network and all nodes show "NotReady". The nodes added via kubeadm join are nonetheless already visible on k8s-master
bash
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   11m     v1.18.0
k8s-node01   NotReady   <none>   2m22s   v1.18.0
k8s-node02   NotReady   <none>   2m18s   v1.18.0
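
To confirm that the "NotReady" state really is due to the missing network plugin, inspecting a node typically shows a NetworkPluginNotReady / "cni config uninitialized" message in its Ready condition; a quick check:
bash
# The Ready condition should mention the uninitialized CNI until a network plugin is installed
[root@k8s-master ~]# kubectl describe node k8s-node01 | grep -i networkplugin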

4. Install the Flannel Network Plugin

  • Flannel is a lightweight network plugin built on virtual networking, with multiple backends such as Overlay-based VXLAN and Host-Gateway. It creates a cluster-wide overlay network that allows Pods to communicate across nodes

4.1 Pull the Flannel images

  • The Flannel images must be pulled in advance via a proxy, because the domestic Aliyun mirrors do not carry them
  • If you cannot download them, or you are missing the network plugin manifest, leave a comment or send a private message and they will be provided free of charge
bash
# Note: every node in the cluster needs these two images; otherwise, even with the Flannel plugin installed, nodes will remain "NotReady"
[root@k8s-master ~]# docker pull docker.io/flannel/flannel-cni-plugin:v1.1.2
[root@k8s-master ~]# docker pull docker.io/flannel/flannel:v0.21.5
[root@k8s-master ~]# docker images | grep fl
flannel/flannel                                                   v0.21.5             a6c0cb5dbd21        13 months ago       68.9MB
flannel/flannel-cni-plugin                                        v1.1.2              7a2dcab94698        18 months ago       7.97MB
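
Since every node needs both images, one way to avoid pulling through a proxy on each node is to export them from the master and load them on the workers; a minimal sketch, assuming root SSH access from the master to the nodes (repeat for k8s-node02):
bash
# Export both images to a tarball on the master
[root@k8s-master ~]# docker save -o flannel-images.tar flannel/flannel:v0.21.5 flannel/flannel-cni-plugin:v1.1.2
# Copy the tarball to a node and load it there
[root@k8s-master ~]# scp flannel-images.tar k8s-node01:/root/
[root@k8s-master ~]# ssh k8s-node01 'docker load -i /root/flannel-images.tar'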

4.2 Install the Flannel network plugin

  • Many network plugins are available; Flannel is used here
bash
[root@k8s-master ~]# kubectl apply -f kube-flannel.yaml 
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
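
The DaemonSet starts one flannel Pod per node, which can take a moment. One way to wait for the rollout to finish (namespace and DaemonSet name as created by the kube-flannel.yaml applied above):
bash
# Block until the flannel DaemonSet is fully rolled out on all nodes
[root@k8s-master ~]# kubectl -n kube-flannel rollout status daemonset/kube-flannel-ds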

4.3 Check node status

bash
# Check node status
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   23m   v1.18.0
k8s-node01   Ready    <none>   14m   v1.18.0
k8s-node02   Ready    <none>   14m   v1.18.0

# Check Pods in the kube-system namespace
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-25bzd             1/1     Running   0          23m
coredns-7ff77c879f-wp885             1/1     Running   0          23m
etcd-k8s-master                      1/1     Running   0          24m
kube-apiserver-k8s-master            1/1     Running   0          24m
kube-controller-manager-k8s-master   1/1     Running   0          24m
kube-proxy-2tphl                     1/1     Running   0          15m
kube-proxy-hqppj                     1/1     Running   0          15m
kube-proxy-rfxw2                     1/1     Running   0          23m
kube-scheduler-k8s-master            1/1     Running   0          24m

# Check all Pods
[root@k8s-master ~]# kubectl get pods -A
NAMESPACE      NAME                                 READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-h727x                1/1     Running   0          77s
kube-flannel   kube-flannel-ds-kbztr                1/1     Running   0          77s
kube-flannel   kube-flannel-ds-nw9pr                1/1     Running   0          77s
kube-system    coredns-7ff77c879f-25bzd             1/1     Running   0          24m
kube-system    coredns-7ff77c879f-wp885             1/1     Running   0          24m
kube-system    etcd-k8s-master                      1/1     Running   0          24m
kube-system    kube-apiserver-k8s-master            1/1     Running   0          24m
kube-system    kube-controller-manager-k8s-master   1/1     Running   0          24m
kube-system    kube-proxy-2tphl                     1/1     Running   0          15m
kube-system    kube-proxy-hqppj                     1/1     Running   0          15m
kube-system    kube-proxy-rfxw2                     1/1     Running   0          24m
kube-system    kube-scheduler-k8s-master            1/1     Running   0          24m

# At this point, the quick installation of a Kubernetes cluster with kubeadm is complete
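
As a final sanity check beyond the Pod listing above, the control-plane endpoints and component health can be queried directly (kubectl get cs still works on v1.18, though it is deprecated in later releases):
bash
# Confirm the API server and CoreDNS endpoints
[root@k8s-master ~]# kubectl cluster-info
# Component health: scheduler, controller-manager, etcd
[root@k8s-master ~]# kubectl get cs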