[Cloud Native] Kubernetes: Upgrading a Cluster with kubeadm

1. Version Upgrade

  • When we need features or capabilities that only a newer release provides, or when the current version is too old to meet our needs, the Kubernetes cluster inevitably has to be upgraded.

1.1 Upgrading the Master Node

1.1.1 Drain the Node
[root@master ~]# kubectl get node
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   38h   v1.22.0
node1    Ready    <none>                 38h   v1.22.0
node2    Ready    <none>                 38h   v1.22.0


# Safely drain the master node (cordon it and evict its pods)
[root@master ~]# kubectl drain master --ignore-daemonsets
node/master cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-lhm8v, kube-system/kube-proxy-lcn2b
evicting pod kube-system/coredns-7f6cbbb7b8-j7nr6
pod/coredns-7f6cbbb7b8-j7nr6 evicted
node/master drained
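As an optional check (not part of the original output), the drained node should now report SchedulingDisabled, confirming that no new Pods will be scheduled onto it during the upgrade:

kubectl get node master
# STATUS is expected to show Ready,SchedulingDisabled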
1.1.2 Upgrade kubeadm, kubelet, and kubectl
[root@master ~]# yum -y install kubelet-1.23.17 kubeadm-1.23.17 kubectl-1.23.17
[root@master ~]# sudo systemctl daemon-reload 
[root@master ~]# sudo systemctl restart kubelet
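A quick way to confirm the packages landed at the target version before continuing (an optional check, not in the original walkthrough):

kubeadm version -o short   # should print v1.23.17
kubelet --version          # should print Kubernetes v1.23.17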
1.1.3 Verify the Upgrade Plan
[root@master ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.22.0
[upgrade/versions] kubeadm version: v1.23.17
I0703 07:56:15.347208    8190 version.go:256] remote version is much newer: v1.30.2; falling back to: stable-1.23
[upgrade/versions] Target version: v1.23.17
[upgrade/versions] Latest version in the v1.22 series: v1.22.17

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT        TARGET
kubelet     2 x v1.22.0    v1.22.17
            1 x v1.23.17   v1.22.17

Upgrade to the latest version in the v1.22 series:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.22.0   v1.22.17
kube-controller-manager   v1.22.0   v1.22.17
kube-scheduler            v1.22.0   v1.22.17
kube-proxy                v1.22.0   v1.22.17
CoreDNS                   v1.8.4    v1.8.6
etcd                      3.5.0-0   3.5.6-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.22.17

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT        TARGET
kubelet     2 x v1.22.0    v1.23.17
            1 x v1.23.17   v1.23.17

Upgrade to the latest stable version:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.22.0   v1.23.17
kube-controller-manager   v1.22.0   v1.23.17
kube-scheduler            v1.22.0   v1.23.17
kube-proxy                v1.22.0   v1.23.17
CoreDNS                   v1.8.4    v1.8.6
etcd                      3.5.0-0   3.5.6-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.23.17

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
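Before actually applying the upgrade, you can optionally preview the planned changes without modifying the cluster; kubeadm supports a dry run for this (an optional step, not part of the original walkthrough):

kubeadm upgrade apply v1.23.17 --dry-run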
1.1.4 Upgrade the Control-Plane Node
  • While the command runs, a confirmation prompt appears; enter "y" to proceed.
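If you prefer a non-interactive run, kubeadm also accepts a flag to skip this prompt (shown only as an alternative; it is not used in this walkthrough):

kubeadm upgrade apply v1.23.17 --yes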
[root@master ~]# kubeadm upgrade apply v1.23.17
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.23.17"
[upgrade/versions] Cluster version: v1.22.0
[upgrade/versions] kubeadm version: v1.23.17
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.23.17"...
Static pod: kube-apiserver-master hash: 12131de84306dadc9d0191ed909b4d4b
Static pod: kube-controller-manager-master hash: 490158cfa9bf61568bfb1041eeb6cd63
Static pod: kube-scheduler-master hash: 7335187f1f1ff8625c2b204effa87733
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-master hash: d31805a69b42e6f8e4f15b8b07f2f46b
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-07-03-08-00-24/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-master hash: d31805a69b42e6f8e4f15b8b07f2f46b
Static pod: etcd-master hash: d31805a69b42e6f8e4f15b8b07f2f46b
Static pod: etcd-master hash: bb046c07785cacd3a3c86aa213e3bc49
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests2423469519"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-07-03-08-00-24/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-master hash: 12131de84306dadc9d0191ed909b4d4b
Static pod: kube-apiserver-master hash: 12131de84306dadc9d0191ed909b4d4b
Static pod: kube-apiserver-master hash: 6a92ca49a6918af156581bf41e142517
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-07-03-08-00-24/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-master hash: 490158cfa9bf61568bfb1041eeb6cd63
[... the same hash is printed repeatedly while the kubelet restarts the component ...]
Static pod: kube-controller-manager-master hash: c051469dae3f75b5f59e524ff3454c1c
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-07-03-08-00-24/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-master hash: 7335187f1f1ff8625c2b204effa87733
[... the same hash is printed repeatedly while the kubelet restarts the component ...]
Static pod: kube-scheduler-master hash: 5baca7afd4a5a2d44d846da93480249e
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.23.17". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
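As a quick sanity check (not in the original article), you can confirm the control-plane static Pods are now running the new image, for example by inspecting the API server Pod on the master:

kubectl -n kube-system get pod kube-apiserver-master -o jsonpath='{.spec.containers[0].image}{"\n"}'
# the image tag is expected to end in :v1.23.17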
1.1.5 Uncordon the Node
[root@master ~]# kubectl uncordon master
node/master uncordoned
[root@master ~]# kubectl get node
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   39h   v1.23.17
node1    Ready    <none>                 39h   v1.22.0
node2    Ready    <none>                 39h   v1.22.0

1.2 Upgrading the Worker Nodes

  • Perform these steps on every worker node that needs to be upgraded; node1 is used for the demonstration.
1.2.1 Drain the Node
# Drain the node you are about to upgrade (safely evict its workloads)
[root@master ~]# kubectl get node
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   39h   v1.23.17
node1    Ready    <none>                 39h   v1.22.0
node2    Ready    <none>                 39h   v1.22.0
[root@master ~]# kubectl drain node1 --ignore-daemonsets
node/node1 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-tsd6l, kube-system/kube-proxy-98z57
evicting pod kube-system/coredns-6d8c4cb4d-vdwfz
pod/coredns-6d8c4cb4d-vdwfz evicted
node/node1 drained
1.2.2 Upgrade kubeadm, kubelet, and kubectl
# Run on the node being upgraded
[root@node1 ~]# yum -y install kubelet-1.23.17 kubeadm-1.23.17 kubectl-1.23.17
[root@node1 ~]# sudo systemctl daemon-reload 
[root@node1 ~]# sudo systemctl restart kubelet
1.2.3 Upgrade the Node
# Run on the node being upgraded
[root@node1 ~]# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
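Because the v1.23.17 kubelet package was already installed in step 1.2.2, the remaining action implied by the last log line is to restart the kubelet so it picks up the configuration that `kubeadm upgrade node` just wrote. A sketch of that follow-up, run on node1:

sudo systemctl daemon-reload
sudo systemctl restart kubelet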
1.2.4 Uncordon the Node
[root@master ~]# kubectl uncordon node1
node/node1 uncordoned
[root@master ~]# kubectl get node
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   39h   v1.23.17
node1    Ready    <none>                 39h   v1.23.17
node2    Ready    <none>                 39h   v1.22.0
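node2 is still on v1.22.0 and is upgraded with exactly the same sequence; a compact sketch of the commands, under the same assumptions as above:

kubectl drain node2 --ignore-daemonsets                          # on master
yum -y install kubelet-1.23.17 kubeadm-1.23.17 kubectl-1.23.17   # on node2
kubeadm upgrade node                                             # on node2
sudo systemctl daemon-reload && sudo systemctl restart kubelet   # on node2
kubectl uncordon node2                                           # on master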