Kubernetes Deployment: Scaling and Rolling Updates

Deployment Scaling

Scaling a Deployment is straightforward: edit the replicas field and run kubectl apply to update the Deployment dynamically. Below, deployment-nginx is scaled first to 1 Pod and then to 3 Pods.
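Editing the manifest is not the only option; the same Deployment can also be scaled imperatively with kubectl scale (deployment-nginx is the name used throughout this article):

```shell
# Scale deployment-nginx to 3 replicas without editing the manifest.
# Note: the next `kubectl apply -f deploy-nginx.yaml` will reset the
# replica count back to whatever the yaml file declares.
kubectl scale deployment/deployment-nginx --replicas=3

# Verify the new replica count.
kubectl get deploy deployment-nginx
```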

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-nginx
  namespace: default
  labels:
    app: deployment-nginx
spec:
  replicas: 2 # change this to 1 or 3
  selector:
    matchLabels:
      app: pod-nginx
  template:
    metadata:
      labels:
        app: pod-nginx
    spec:
      containers:
        - name: nginx
          image: docker.io/k8s-test:v1.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
root@k8s-master1:~# kubectl apply -f deploy-nginx.yaml
deployment.apps/deployment-nginx configured
root@k8s-master1:~# kubectl get deploy
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
deployment-nginx   1/1     1            1           17h
root@k8s-master1:~# kubectl apply -f deploy-nginx.yaml
deployment.apps/deployment-nginx configured
root@k8s-master1:~# kubectl get deploy
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
deployment-nginx   3/3     3            3           17h

# It also detects whether an existing Pod needs to be updated; if not, the Pod is kept running as-is
root@k8s-master1:~# kubectl get pods -owide
NAME                                READY   STATUS    RESTARTS      AGE   IP               NODE          NOMINATED NODE   READINESS GATES
deployment-nginx-6977747dd9-knx9r   1/1     Running   0             31s   10.244.194.126   k8s-worker1   <none>           <none>
deployment-nginx-6977747dd9-nqxgd   1/1     Running   0             31s   10.244.194.125   k8s-worker1   <none>           <none>
deployment-nginx-6977747dd9-sz42q   1/1     Running   1 (34m ago)   17h   10.244.126.24    k8s-worker2   <none>           <none>

Deployment Rolling Update Strategy

A Deployment supports two update strategies: Recreate and RollingUpdate. Recreate kills all existing Pods before starting the new version; this is extremely disruptive and almost never used. A rolling update instead creates a number of new Pods first and only then kills a corresponding number of old ones. RollingUpdate is the default strategy.

A rolling update is controlled by maxSurge, the maximum number of extra Pods that may be created, and maxUnavailable, the maximum number of Pods that may be unavailable. Both accept an integer or a percentage, and they must not both be 0, otherwise the update cannot make progress at all. Take 3 replicas as an example: with maxSurge set to 10%, the surge count is 0.3, which is rounded up to 1; with maxUnavailable also set to 10%, the allowed unavailable count is likewise 0.3, which is rounded down to 0. The resulting behavior is that the rollout waits until 4 Pods are fully running before it kills 1 old Pod.
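The ceil/floor rounding described above can be checked with plain integer arithmetic in the shell (values match the example: 3 replicas, 10% for both settings):

```shell
replicas=3
surge_pct=10       # maxSurge: 10%
unavail_pct=10     # maxUnavailable: 10%

# maxSurge is rounded UP: ceil(3 * 10%) = 1
max_surge=$(( (replicas * surge_pct + 99) / 100 ))

# maxUnavailable is rounded DOWN: floor(3 * 10%) = 0
max_unavailable=$(( replicas * unavail_pct / 100 ))

echo "maxSurge=$max_surge maxUnavailable=$max_unavailable"
# → maxSurge=1 maxUnavailable=0
```

Since maxUnavailable resolves to 0, every replacement must be "surge first, then kill", which is exactly the 4-running-before-1-killed behavior described above.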

Using Deployment Rolling Updates

  1. First, tag the k8s-test:v1.0 image as k8s-test:v1.1 on each worker node

    root@k8s-worker1:~# ctr -n k8s.io image tag docker.io/library/k8s-test:v1.0 docker.io/library/k8s-test:v1.1
    docker.io/library/k8s-test:v1.1
    root@k8s-worker2:~# ctr -n k8s.io image tag docker.io/library/k8s-test:v1.0 docker.io/library/k8s-test:v1.1
    docker.io/library/k8s-test:v1.1
  2. Update the image tag and the update strategy in the yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: deployment-nginx
      namespace: default
      labels:
        app: deployment-nginx
    spec:
      strategy:
        rollingUpdate:
          maxSurge: 10%
          maxUnavailable: 10%
      replicas: 2
      selector:
        matchLabels:
          app: pod-nginx
      template:
        metadata:
          labels:
            app: pod-nginx
        spec:
          containers:
            - name: nginx
              image: docker.io/k8s-test:v1.1
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 80
  3. Run the following command to watch the Pods

    kubectl get pods -owide -w
  4. The update proceeds as follows; an old Pod is first marked Terminating rather than being killed outright

    root@k8s-master1:~# kubectl get pods -owide -w
    NAME                                READY   STATUS    RESTARTS      AGE   IP               NODE          NOMINATED NODE   READINESS GATES
    deployment-nginx-6977747dd9-knx9r   1/1     Running   0             30m   10.244.194.126   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-nqxgd   1/1     Running   0             30m   10.244.194.125   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-sz42q   1/1     Running   1 (63m ago)   17h   10.244.126.24    k8s-worker2   <none>           <none>
    deployment-nginx-6977747dd9-nqxgd   1/1     Terminating   0             35m   10.244.194.125   k8s-worker1   <none>           <none>
    deployment-nginx-6c94d644bd-ft9s6   0/1     Pending       0             0s    <none>           <none>        <none>           <none>
    deployment-nginx-6c94d644bd-ft9s6   0/1     Pending       0             0s    <none>           k8s-worker2   <none>           <none>
    deployment-nginx-6c94d644bd-ft9s6   0/1     ContainerCreating   0             0s    <none>           k8s-worker2   <none>           <none>
    deployment-nginx-6977747dd9-nqxgd   1/1     Terminating         0             35m   10.244.194.125   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-nqxgd   0/1     Terminating         0             35m   <none>           k8s-worker1   <none>           <none>
    deployment-nginx-6c94d644bd-ft9s6   0/1     ContainerCreating   0             0s    <none>           k8s-worker2   <none>           <none>
    deployment-nginx-6977747dd9-nqxgd   0/1     Terminating         0             35m   10.244.194.125   k8s-worker1   <none>           <none>
    deployment-nginx-6c94d644bd-ft9s6   1/1     Running             0             1s    10.244.126.26    k8s-worker2   <none>           <none>
    deployment-nginx-6977747dd9-nqxgd   0/1     Terminating         0             35m   10.244.194.125   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-nqxgd   0/1     Terminating         0             35m   10.244.194.125   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-sz42q   1/1     Terminating         1 (69m ago)   17h   10.244.126.24    k8s-worker2   <none>           <none>
    deployment-nginx-6c94d644bd-sj4hv   0/1     Pending             0             0s    <none>           <none>        <none>           <none>
    deployment-nginx-6c94d644bd-sj4hv   0/1     Pending             0             0s    <none>           k8s-worker1   <none>           <none>
    deployment-nginx-6c94d644bd-sj4hv   0/1     ContainerCreating   0             0s    <none>           k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-sz42q   1/1     Terminating         1 (69m ago)   17h   10.244.126.24    k8s-worker2   <none>           <none>
    deployment-nginx-6977747dd9-sz42q   0/1     Terminating         1 (69m ago)   17h   <none>           k8s-worker2   <none>           <none>
    deployment-nginx-6c94d644bd-sj4hv   0/1     ContainerCreating   0             0s    <none>           k8s-worker1   <none>           <none>
    deployment-nginx-6c94d644bd-sj4hv   1/1     Running             0             1s    10.244.194.127   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-sz42q   0/1     Terminating         1 (69m ago)   17h   10.244.126.24    k8s-worker2   <none>           <none>
    deployment-nginx-6977747dd9-knx9r   1/1     Terminating         0             35m   10.244.194.126   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-sz42q   0/1     Terminating         1 (69m ago)   17h   10.244.126.24    k8s-worker2   <none>           <none>
    deployment-nginx-6977747dd9-sz42q   0/1     Terminating         1 (69m ago)   17h   10.244.126.24    k8s-worker2   <none>           <none>
    deployment-nginx-6977747dd9-knx9r   1/1     Terminating         0             35m   10.244.194.126   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-knx9r   0/1     Terminating         0             35m   <none>           k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-knx9r   0/1     Terminating         0             35m   10.244.194.126   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-knx9r   0/1     Terminating         0             35m   10.244.194.126   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-knx9r   0/1     Terminating         0             35m   10.244.194.126   k8s-worker1   <none>           <none>
  5. Check the result after the update

    root@k8s-master1:~# kubectl get pods -owide
    NAME                                READY   STATUS    RESTARTS   AGE     IP               NODE          NOMINATED NODE   READINESS GATES
    deployment-nginx-6c94d644bd-ft9s6   1/1     Running   0          4m55s   10.244.126.26    k8s-worker2   <none>           <none>
    deployment-nginx-6c94d644bd-sj4hv   1/1     Running   0          4m54s   10.244.194.127   k8s-worker1   <none>           <none>
    root@k8s-master1:~# kubectl get deploy
    NAME               READY   UP-TO-DATE   AVAILABLE   AGE
    deployment-nginx   2/2     2            2           17h
    root@k8s-master1:~# kubectl get rs
    NAME                          DESIRED   CURRENT   READY   AGE
    deployment-nginx-6977747dd9   0         0         0       17h
    deployment-nginx-6c94d644bd   2         2         2       6m1s
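After a rollout like the one above, the standard kubectl rollout subcommands can be used to monitor, review, and revert the update (deployment-nginx matches the name used in this article):

```shell
# Block until the rollout completes (or report if it stalls).
kubectl rollout status deployment/deployment-nginx

# List the recorded revisions of the Deployment.
kubectl rollout history deployment/deployment-nginx

# Roll back to the previous revision (here: from v1.1 back to v1.0).
kubectl rollout undo deployment/deployment-nginx
```

A rollback is itself performed as a rolling update, honoring the same maxSurge/maxUnavailable settings, and the old ReplicaSet (deployment-nginx-6977747dd9 above, scaled to 0) is kept around precisely so that undo can reuse it.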