Kubernetes Version Upgrade

Over the lifetime of a Kubernetes cluster, new releases, certificate validity periods, and similar concerns may require adjusting the software version at some point — either an upgrade or a rollback. The procedure is essentially the same for both scenarios.

A Kubernetes cluster supports both minor and patch version upgrades, but a minor upgrade may only move to the next minor version; skipping minor versions is not supported. For example, 1.20.1 can be upgraded to 1.20.6 or 1.21.3, but not to 1.22.0. Before upgrading the cluster, kubeadm itself must be upgraded first.

Note: upgrades carry risk. If circumstances allow, prefer building a new cluster over upgrading in place. In production, upgrade during a relatively idle period, taking nodes offline one at a time, upgrading them, and bringing them back — never all at once.

1. Upgrade Process Overview

https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

That page describes how to upgrade a kubeadm-created Kubernetes cluster from version 1.33.x to 1.34.x, and from 1.34.x to 1.34.y (where y > x). Skipping minor versions is not supported; see the version skew policy for details. Upgrading a Kubernetes cluster is a complex process that must be carried out carefully to preserve cluster stability and data safety. It typically consists of the following steps:

The basic workflow is:

1. Upgrade the first control-plane node

  • Switch the package repository

  • Decide which version to upgrade to

  • Upgrade kubeadm

  • Verify the upgrade plan

  • Apply the plan with kubeadm upgrade apply

  • Drain the node

  • Upgrade kubelet and kubectl

  • Uncordon the node

2. Upgrade the other control-plane nodes

  • Switch the package repository

  • Upgrade kubeadm

  • Run kubeadm upgrade node

  • Drain the node

  • Upgrade kubelet and kubectl

  • Uncordon the node

3. Upgrade the worker nodes

  • Switch the package repository

  • Upgrade kubeadm

  • Run kubeadm upgrade node

  • Drain the node

  • Upgrade kubelet and kubectl

  • Uncordon the node

1. Patch Version Upgrade

Preparation before the upgrade

1.1. Back up the cluster

Before upgrading, be sure to back up the cluster state, including:

  • etcd data: back it up with etcdctl.

  • Kubernetes resources: export them with kubectl.

  • Persistent storage: back up the data held in PersistentVolumes.
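As a rough sketch, the three backup items above could be scripted as follows. The endpoint, certificate paths, and backup directory are assumptions to adjust for your cluster; the `run` wrapper only echoes each command (a dry run), so remove the `echo` to execute for real.

```shell
#!/bin/sh
# Dry-run sketch of the pre-upgrade backup steps.
# Endpoint, cert paths, and backup directory are assumptions; adjust as needed.
BACKUP_DIR="/var/backups/k8s-$(date +%F)"
ETCD_ENDPOINT="https://127.0.0.1:2379"

run() {
    # Dry run: print the command instead of executing it.
    echo "$@"
}

run mkdir -p "$BACKUP_DIR"

# 1) etcd snapshot via the v3 API
run etcdctl --endpoints="$ETCD_ENDPOINT" \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    snapshot save "$BACKUP_DIR/etcd-snapshot.db"

# 2) export the Kubernetes resources as YAML
#    (redirect the real output to "$BACKUP_DIR/resources.yaml")
run kubectl get all --all-namespaces -o yaml

# 3) record PV/PVC definitions; the volume data itself must be
#    backed up at the storage layer
run kubectl get pv,pvc --all-namespaces -o yaml
```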

1.2. Check the current cluster version

root@master01:~# kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
master01   Ready    control-plane   4d21h   v1.32.11
node01     Ready    <none>          4d21h   v1.32.11
node02     Ready    <none>          4d21h   v1.32.11

First, confirm the current version of the Kubernetes cluster.

kubectl version
root@master01:~# kubectl version
Client Version: v1.32.12
Kustomize Version: v5.5.0
Server Version: v1.32.11

1.3. Check the upgrade path

Kubernetes upgrade paths are restricted: normally you can only move up one minor version at a time (for example, from 1.24.x to 1.25.x). Skipping minor versions (for example, from 1.23.x straight to 1.25.x) is not allowed. Confirm the upgrade path in the official documentation:

Kubernetes version and version-skew support policy: https://kubernetes.io/docs/setup/release/version-skew-policy/

Example: you cannot upgrade directly from 1.27 to 1.34.
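kubeadm enforces this rule itself, but it is easy to illustrate by hand: the target minor version may be at most one ahead of the current one. A minimal sketch — `can_upgrade` is a made-up helper, not a kubeadm command, and it assumes plain "major.minor.patch" versions within the same major release:

```shell
# Illustrative check of the "one minor version at a time" rule.
# can_upgrade is a hypothetical helper for this document only.
can_upgrade() {
    cur_minor=$(echo "$1" | cut -d. -f2)
    tgt_minor=$(echo "$2" | cut -d. -f2)
    step=$((tgt_minor - cur_minor))
    # a patch upgrade (same minor) or exactly one minor ahead is allowed
    [ "$step" -ge 0 ] && [ "$step" -le 1 ]
}

can_upgrade 1.32.12 1.33.8 && echo "1.32 -> 1.33: supported"
can_upgrade 1.27.0  1.34.0 || echo "1.27 -> 1.34: not supported, step through each minor"
```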

Check the latest version currently available in the 1.32 series:

apt-get update
apt-cache madison kubeadm | awk '$3 ~ /^1\.32\./ {print $3}' | sort -V | uniq

The latest available build is 1.32.12-1.1.

root@master01:~# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade/config] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[upgrade/config] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: 1.32.11
[upgrade/versions] kubeadm version: v1.32.12
I0225 17:36:39.168130  493355 version.go:261] remote version is much newer: v1.35.1; falling back to: stable-1.32
[upgrade/versions] Target version: v1.32.12
[upgrade/versions] Latest version in the v1.32 series: v1.32.12
​
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   NODE      CURRENT   TARGET
​
Upgrade to the latest version in the v1.32 series:
​
COMPONENT                 NODE       CURRENT    TARGET
kube-apiserver            master01   v1.32.11   v1.32.12
kube-controller-manager   master01   v1.32.11   v1.32.12
kube-scheduler            master01   v1.32.11   v1.32.12
kube-proxy                           1.32.11    v1.32.12
CoreDNS                              v1.11.3    v1.11.3
etcd                      master01   3.5.24-0   3.5.24-0
​
You can now apply the upgrade by executing the following command:
​
    kubeadm upgrade apply v1.32.12
​
_____________________________________________________________________
​
​
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
​
API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
​

1.4. Upgrade the Kubernetes control plane

The control plane consists of the following components:

  • API Server

  • Controller Manager

  • Scheduler

  • ETCD

1.4.1. Upgrade kubeadm, kubectl, and kubelet

If the cluster was deployed with kubeadm, upgrade kubeadm first.

  1. Upgrade kubeadm using the package manager (such as apt or yum):

    sudo apt-get update
    sudo apt-get install -y kubeadm=1.32.12-1.1 kubectl=1.32.12-1.1 kubelet=1.32.12-1.1
  2. Verify the kubeadm version:

    kubeadm version
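The official kubeadm upgrade instructions additionally wrap the pinned install in apt-mark unhold / hold so that a routine `apt-get upgrade` cannot move these packages unexpectedly. A dry-run sketch (the `run` wrapper echoes instead of executing; the version is the one used in this walkthrough):

```shell
#!/bin/sh
# Dry-run sketch of holding the Kubernetes packages around a pinned install.
VERSION="1.32.12-1.1"          # pick the build shown by `apt-cache madison kubeadm`
run() { echo "$@"; }           # dry run: print only; remove the echo to execute

run apt-mark unhold kubeadm kubelet kubectl
run apt-get update
run apt-get install -y "kubeadm=$VERSION" "kubelet=$VERSION" "kubectl=$VERSION"
run apt-mark hold kubeadm kubelet kubectl
```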

1.4.2. Apply the upgrade plan

Plan the order: if the cluster has multiple control-plane nodes, upgrade them one at a time.

Upgrade the first control-plane node:

kubeadm upgrade plan
kubeadm upgrade apply v1.32.12

Output of a successful upgrade:

root@master01:~# kubeadm upgrade apply v1.32.12
[upgrade] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[upgrade] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
[upgrade/preflight] Running preflight checks
[upgrade] Running cluster health checks
[upgrade/preflight] You have chosen to upgrade the cluster version to "v1.32.12"
[upgrade/versions] Cluster version: v1.32.11
[upgrade/versions] kubeadm version: v1.32.12
[upgrade] Are you sure you want to proceed? [y/N]: y
[upgrade/preflight] Pulling images required for setting up a Kubernetes cluster
[upgrade/preflight] This might take a minute or two, depending on the speed of your internet connection
[upgrade/preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W0225 17:43:26.118955  501167 checks.go:843] detected that the sandbox image "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.aliyuncs.com/google_containers/pause:3.10" as the CRI sandbox image.
[upgrade/control-plane] Upgrading your static Pod-hosted control plane to version "v1.32.12" (timeout: 5m0s)...
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests143958793"
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Restarting the etcd static pod and backing up its manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2026-02-25-17-44-01/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moving new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backing up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2026-02-25-17-44-01/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moving new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backing up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2026-02-25-17-44-01/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moving new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backing up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2026-02-25-17-44-01/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/control-plane] The control plane instance for this node was successfully upgraded!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade/kubeconfig] The kubeconfig files for this node were successfully upgraded!
W0225 17:45:47.412165  501167 postupgrade.go:117] Using temporary directory /etc/kubernetes/tmp/kubeadm-kubelet-config963280749 for kubelet config. To override it set the environment variable KUBEADM_UPGRADE_DRYRUN_DIR
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config963280749/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade/kubelet-config] The kubelet configuration for this node was successfully upgraded!
[upgrade/bootstrap-token] Configuring bootstrap token and cluster-info RBAC rules
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
​
[upgrade] SUCCESS! A control plane node of your cluster was upgraded to "v1.32.12".
​
[upgrade] Now please proceed with upgrading the rest of the nodes by following the right order.
​

1.4.3. Check the cluster version after the upgrade

root@master01:~# kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
master01   Ready    control-plane   4d21h   v1.32.12
node01     Ready    <none>          4d21h   v1.32.12
node02     Ready    <none>          4d21h   v1.32.12

Notice that for a patch upgrade, once the upgrade command has been run on the master node the worker nodes end up on the new version as well, because all patch releases of a minor series come from the same apt repository.

2. Minor Version Upgrade

Make a backup before upgrading, so that a failed upgrade cannot cause irreversible loss.

No 1.33 packages are visible yet: the configured apt source is still https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.32/deb/, so kubeadm/kubelet/kubectl can only see the 1.32 series.

2.1. Upgrade the control plane

2.1.2. Switch the apt source

Switch the source first, here to the 1.33 series:

# 1) Back up the current source list first
cp /etc/apt/sources.list.d/kubernetes.list /etc/apt/sources.list.d/kubernetes.list.bak

# 2) Switch to the 1.33 source (to target 1.34/1.35, change v1.33 accordingly)
cat >/etc/apt/sources.list.d/kubernetes.list <<'EOF'
deb [arch=amd64 signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.33/deb/ /
EOF

# 3) Refresh and list the available versions
apt-get update
apt-cache madison kubeadm | awk '$3 ~ /^1\.33\./ {print $3}' | sort -V | uniq
apt-cache madison kubelet | awk '$3 ~ /^1\.33\./ {print $3}' | sort -V | uniq
apt-cache madison kubectl | awk '$3 ~ /^1\.33\./ {print $3}' | sort -V | uniq

The latest available build is now 1.33.8-1.1:

root@master01:~# apt-cache madison kubeadm | awk '$3 ~ /^1\.33\./ {print $3}' | sort -V | uniq
1.33.0-1.1
1.33.1-1.1
1.33.2-1.1
1.33.3-1.1
1.33.4-1.1
1.33.5-1.1
1.33.6-1.1
1.33.7-1.1
1.33.8-1.1

2.1.3. Upgrade kubeadm

apt-get install -y kubeadm=1.33.8-1.1

Check the kubeadm version:

root@master01:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"33", EmulationMajor:"", EmulationMinor:"", MinCompatibilityMajor:"", MinCompatibilityMinor:"", GitVersion:"v1.33.8", GitCommit:"5adfc48e19d5fbc4af5b0d31aeb9f0c13c01cf5d", GitTreeState:"clean", BuildDate:"2026-02-10T12:56:14Z", GoVersion:"go1.24.12", Compiler:"gc", Platform:"linux/amd64"}

Review the upgrade plan:

root@master01:~# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade/config] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[upgrade/config] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: 1.32.12
[upgrade/versions] kubeadm version: v1.33.8
I0225 18:08:08.036673  532002 version.go:261] remote version is much newer: v1.35.1; falling back to: stable-1.33
[upgrade/versions] Target version: v1.33.8
[upgrade/versions] Latest version in the v1.32 series: v1.32.12
​
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   NODE       CURRENT    TARGET
kubelet     master01   v1.32.12   v1.33.8
kubelet     node01     v1.32.12   v1.33.8
kubelet     node02     v1.32.12   v1.33.8
​
Upgrade to the latest stable version:
​
COMPONENT                 NODE       CURRENT    TARGET
kube-apiserver            master01   v1.32.12   v1.33.8
kube-controller-manager   master01   v1.32.12   v1.33.8
kube-scheduler            master01   v1.32.12   v1.33.8
kube-proxy                           1.32.12    v1.33.8
CoreDNS                              v1.11.3    v1.12.0
etcd                      master01   3.5.24-0   3.5.24-0
​
You can now apply the upgrade by executing the following command:
​
    kubeadm upgrade apply v1.33.8
​
_____________________________________________________________________
​
​
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
​
API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
​

The feasible upgrade path is v1.32.12 --> v1.33.8.

2.1.4. Apply the upgrade plan

kubeadm upgrade apply v1.33.8
root@master01:~# kubeadm upgrade apply v1.33.8
[upgrade] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[upgrade] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
[upgrade/preflight] Running preflight checks
[upgrade] Running cluster health checks
[upgrade/preflight] You have chosen to upgrade the cluster version to "v1.33.8"
[upgrade/versions] Cluster version: v1.32.12
[upgrade/versions] kubeadm version: v1.33.8
[upgrade] Are you sure you want to proceed? [y/N]: y
[upgrade/preflight] Pulling images required for setting up a Kubernetes cluster
[upgrade/preflight] This might take a minute or two, depending on the speed of your internet connection
[upgrade/preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W0225 18:10:20.781781  534433 checks.go:843] detected that the sandbox image "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.aliyuncs.com/google_containers/pause:3.10" as the CRI sandbox image.
[upgrade/control-plane] Upgrading your static Pod-hosted control plane to version "v1.33.8" (timeout: 5m0s)...
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests1531048502"
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Restarting the etcd static pod and backing up its manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2026-02-25-18-10-58/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moving new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backing up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2026-02-25-18-10-58/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moving new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backing up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2026-02-25-18-10-58/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moving new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backing up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2026-02-25-18-10-58/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/control-plane] The control plane instance for this node was successfully upgraded!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade/kubeconfig] The kubeconfig files for this node were successfully upgraded!
W0225 18:12:20.751905  534433 postupgrade.go:117] Using temporary directory /etc/kubernetes/tmp/kubeadm-kubelet-config2605389634 for kubelet config. To override it set the environment variable KUBEADM_UPGRADE_DRYRUN_DIR
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config2605389634/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade/kubelet-config] The kubelet configuration for this node was successfully upgraded!
[upgrade/bootstrap-token] Configuring bootstrap token and cluster-info RBAC rules
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
​
[upgrade] SUCCESS! A control plane node of your cluster was upgraded to "v1.33.8".
​
[upgrade] Now please proceed with upgrading the rest of the nodes by following the right order.
​

If there are other control-plane nodes, upgrade them as well.

2.1.5. Upgrade kubelet and kubectl

sudo apt-get install -y kubelet=1.33.8-1.1 kubectl=1.33.8-1.1
sudo systemctl restart kubelet
root@master01:~# sudo apt-get install -y kubelet=1.33.8-1.1 kubectl=1.33.8-1.1
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 76 not upgraded.
Need to get 27.6 MB of archives.
After this operation, 5,665 kB of additional disk space will be used.
Get:1 https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.33/deb  kubectl 1.33.8-1.1 [11.7 MB]
Get:2 https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.33/deb  kubelet 1.33.8-1.1 [15.9 MB]
Fetched 27.6 MB in 7s (3,687 kB/s)                                                                                                                                                                                                  
(Reading database ... 155866 files and directories currently installed.)
Preparing to unpack .../kubectl_1.33.8-1.1_amd64.deb ...
Unpacking kubectl (1.33.8-1.1) over (1.32.12-1.1) ...
Preparing to unpack .../kubelet_1.33.8-1.1_amd64.deb ...
Unpacking kubelet (1.33.8-1.1) over (1.32.12-1.1) ...
Setting up kubectl (1.33.8-1.1) ...
Setting up kubelet (1.33.8-1.1) ...
Scanning processes...                                                                                                                                                                                                                
Scanning candidates...                                                                                                                                                                                                               
Scanning linux images...                                                                                                                                                                                                             
​
Running kernel seems to be up-to-date.
​
Restarting services...
 systemctl restart kubelet.service
​
No containers need to be restarted.
​
No user sessions are running outdated binaries.
​
No VM guests are running outdated hypervisor (qemu) binaries on this host.
​

After restarting kubelet, check the version and cluster state:

root@master01:~# sudo systemctl restart kubelet
root@master01:~# kubectl version
Client Version: v1.33.8
Kustomize Version: v5.6.0
Server Version: v1.33.8
root@master01:~# kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
master01   Ready    control-plane   4d21h   v1.33.8
node01     Ready    <none>          4d21h   v1.32.12
node02     Ready    <none>          4d21h   v1.32.12

2.2. Upgrade the worker nodes

2.2.1. Drain the worker node

Cordon the node to be upgraded so no new workloads are scheduled onto it:

kubectl cordon node01
root@master01:~# kubectl cordon node01
node/node01 cordoned
root@master01:~# kubectl get nodes node01 -o yaml|grep -i -A5 taints
  taints:
  - effect: NoSchedule
    key: node.kubernetes.io/unschedulable
    timeAdded: "2026-02-25T10:34:49Z"
  unschedulable: true
status:
root@master01:~# kubectl get nodes
NAME       STATUS                     ROLES           AGE     VERSION
master01   Ready                      control-plane   4d22h   v1.33.8
node01     Ready,SchedulingDisabled   <none>          4d22h   v1.32.12
node02     Ready                      <none>          4d22h   v1.32.12

Evict the existing workloads from the node:

root@master01:~# kubectl drain node01 --delete-emptydir-data --ignore-daemonsets --force
node/node01 already cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-ctmv2, kube-system/kube-proxy-6ldzt, metallb-system/speaker-78bmr, velero/node-agent-b7qzn
evicting pod velero/velero-7ff99567c7-5qj4q
evicting pod default/myweb-7d756bcd87-kcbz7
evicting pod default/myweb-7d756bcd87-pn27p
evicting pod kube-system/coredns-757cc6c8f8-6kn95
pod/myweb-7d756bcd87-kcbz7 evicted
pod/myweb-7d756bcd87-pn27p evicted
pod/velero-7ff99567c7-5qj4q evicted
pod/coredns-757cc6c8f8-6kn95 evicted
node/node01 drained
​

Confirm that node01 no longer hosts any workload Pods:

root@master01:~# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS     AGE    IP               NODE     NOMINATED NODE   READINESS GATES
myweb-7d756bcd87-chbsv   1/1     Running   0            103m   172.16.140.95    node02   <none>           <none>
myweb-7d756bcd87-s7wtc   1/1     Running   0            40s    172.16.140.104   node02   <none>           <none>
myweb-7d756bcd87-z6b9l   1/1     Running   0            40s    172.16.140.105   node02   <none>           <none>
network-tools            1/1     Running   1 (8h ago)   27h    172.16.140.90    node02   <none>           <none>

All the Pods are now on node02.

2.2.2. Switch the apt source

Upgrade kubelet on the cordoned worker node. The worker node's package source needs to be switched as well:

# 1) Back up the current source list first
cp /etc/apt/sources.list.d/kubernetes.list /etc/apt/sources.list.d/kubernetes.list.bak

# 2) Switch to the 1.33 source (to target 1.34/1.35, change v1.33 accordingly)
cat >/etc/apt/sources.list.d/kubernetes.list <<'EOF'
deb [arch=amd64 signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.33/deb/ /
EOF

# 3) Refresh and list the available versions
apt-get update
apt-cache madison kubeadm | awk '$3 ~ /^1\.33\./ {print $3}' | sort -V | uniq
apt-cache madison kubelet | awk '$3 ~ /^1\.33\./ {print $3}' | sort -V | uniq
apt-cache madison kubectl | awk '$3 ~ /^1\.33\./ {print $3}' | sort -V | uniq

2.2.3. Upgrade kubeadm and kubelet

kubectl does not strictly need to be upgraded on a worker node, but upgrading it does no harm.

sudo apt-get install -y kubeadm=1.33.8-1.1 kubectl=1.33.8-1.1 kubelet=1.33.8-1.1

Restart kubelet after the upgrade:

sudo systemctl restart kubelet

Check the versions:

root@node01:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"32", GitVersion:"v1.32.12", GitCommit:"39350c63df5a2a3bdd2b506bf2b166d05e5af44d", GitTreeState:"clean", BuildDate:"2026-02-10T12:56:44Z", GoVersion:"go1.24.12", Compiler:"gc", Platform:"linux/amd64"}
root@node01:~# kubectl version
Client Version: v1.33.8
Kustomize Version: v5.6.0
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@node01:~# kubelet --version
Kubernetes v1.33.8

2.2.4. Run the upgrade on the worker node

root@node01:~# kubeadm upgrade node 
[upgrade] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[upgrade] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
[upgrade/preflight] Running pre-flight checks
[upgrade/preflight] Skipping prepull. Not a control plane node.
[upgrade/control-plane] Skipping phase. Not a control plane node.
[upgrade/kubeconfig] Skipping phase. Not a control plane node.
W0225 18:47:04.462673  414842 postupgrade.go:117] Using temporary directory /etc/kubernetes/tmp/kubeadm-kubelet-config3416757202 for kubelet config. To override it set the environment variable KUBEADM_UPGRADE_DRYRUN_DIR
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config3416757202/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade/kubelet-config] The kubelet configuration for this node was successfully upgraded!
[upgrade/addon] Skipping the addon/coredns phase. Not a control plane node.
[upgrade/addon] Skipping the addon/kube-proxy phase. Not a control plane node.

Reload systemd and restart kubelet:

systemctl daemon-reload && systemctl restart kubelet

Verify the upgraded version:

root@master01:~# kubectl get nodes
NAME       STATUS                     ROLES           AGE     VERSION
master01   Ready                      control-plane   4d22h   v1.33.8
node01     Ready,SchedulingDisabled   <none>          4d22h   v1.33.8
node02     Ready                      <none>          4d22h   v1.32.12

Allow the worker node to be scheduled again:

kubectl uncordon node01
root@master01:~# kubectl uncordon node01
node/node01 uncordoned
root@master01:~# kubectl get nodes node01 -o yaml|grep -i -A5 taints
root@master01:~# kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
master01   Ready    control-plane   4d22h   v1.33.8
node01     Ready    <none>          4d22h   v1.33.8
node02     Ready    <none>          4d22h   v1.32.12

The SchedulingDisabled taint has been removed.

2.2.5. Upgrade the remaining worker nodes one by one

Repeat the process above for every remaining node.
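The per-node procedure can be sketched as a loop. Node names and passwordless ssh access are assumptions for illustration, and the `run` wrapper keeps this a dry run — it echoes each command instead of executing it:

```shell
#!/bin/sh
# Dry-run sketch of upgrading the remaining worker nodes one at a time.
# Node names and ssh access are assumptions for illustration.
run() { echo "$@"; }   # dry run: print only; remove the echo to execute

for node in node02; do
    run kubectl cordon "$node"
    run kubectl drain "$node" --delete-emptydir-data --ignore-daemonsets --force
    # on the node itself (after switching its apt source):
    run ssh "$node" apt-get install -y kubeadm=1.33.8-1.1 kubelet=1.33.8-1.1 kubectl=1.33.8-1.1
    run ssh "$node" kubeadm upgrade node
    run ssh "$node" systemctl daemon-reload
    run ssh "$node" systemctl restart kubelet
    run kubectl uncordon "$node"
done
```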

2.2.6. Verify the upgrade

root@master01:~# kubectl get node
NAME       STATUS   ROLES           AGE     VERSION
master01   Ready    control-plane   4d22h   v1.33.8
node01     Ready    <none>          4d22h   v1.33.8
node02     Ready    <none>          4d22h   v1.33.8
​
#Create a DaemonSet and check that every node runs its Pod
root@master01:~# kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
EOF
#Check how the Pods were scheduled
root@master01:~# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS     AGE    IP               NODE     NOMINATED NODE   READINESS GATES
myweb-7d756bcd87-chbsv   1/1     Running   0            121m   172.16.140.95    node02   <none>           <none>
myweb-7d756bcd87-s7wtc   1/1     Running   0            18m    172.16.140.104   node02   <none>           <none>
myweb-7d756bcd87-z6b9l   1/1     Running   0            18m    172.16.140.105   node02   <none>           <none>
network-tools            1/1     Running   1 (9h ago)   27h    172.16.140.90    node02   <none>           <none>
nginx-ds-bj7q2           1/1     Running   0            51s    172.16.196.161   node01   <none>           <none>
nginx-ds-wdn9b           1/1     Running   0            51s    172.16.140.107   node02   <none>           <none>
#nginx-ds is present on both node01 and node02.
​
#The certificates were also renewed automatically; times are UTC, so add 8 hours for the local timezone.
root@master01:~# kubeadm certs check-expiration
[check-expiration] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[check-expiration] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
​
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Feb 25, 2027 10:26 UTC   364d            ca                      no      
apiserver                  Feb 25, 2027 10:10 UTC   364d            ca                      no      
apiserver-etcd-client      Feb 25, 2027 10:10 UTC   364d            etcd-ca                 no      
apiserver-kubelet-client   Feb 25, 2027 10:10 UTC   364d            ca                      no      
controller-manager.conf    Feb 25, 2027 10:10 UTC   364d            ca                      no      
etcd-healthcheck-client    Feb 25, 2027 10:10 UTC   364d            etcd-ca                 no      
etcd-peer                  Feb 25, 2027 10:10 UTC   364d            etcd-ca                 no      
etcd-server                Feb 25, 2027 10:10 UTC   364d            etcd-ca                 no      
front-proxy-client         Feb 25, 2027 10:10 UTC   364d            front-proxy-ca          no      
scheduler.conf             Feb 25, 2027 10:10 UTC   364d            ca                      no      
super-admin.conf           Feb 25, 2027 10:26 UTC   364d            ca                      no      
​
CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Feb 18, 2036 12:27 UTC   9y              no      
etcd-ca                 Feb 18, 2036 12:27 UTC   9y              no      
front-proxy-ca          Feb 18, 2036 12:27 UTC   9y              no      
​

With that, the Kubernetes cluster upgrade is complete.
