Kubernetes Deployment: Scaling and Rolling Updates

Deployment Scaling

Scaling is straightforward: adjust the replicas count in the manifest, then run kubectl apply to update the Deployment in place. Below we change deployment-nginx to 1 Pod and then to 3 Pods.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-nginx
  namespace: default
  labels:
    app: deployment-nginx
spec:
  replicas: 2 # change this to 1 or 3
  selector:
    matchLabels:
      app: pod-nginx
  template:
    metadata:
      labels:
        app: pod-nginx
    spec:
      containers:
        - name: nginx
          image: docker.io/k8s-test:v1.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
root@k8s-master1:~# kubectl apply -f deploy-nginx.yaml
deployment.apps/deployment-nginx configured
root@k8s-master1:~# kubectl get deploy
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
deployment-nginx   1/1     1            1           17h
root@k8s-master1:~# kubectl apply -f deploy-nginx.yaml
deployment.apps/deployment-nginx configured
root@k8s-master1:~# kubectl get deploy
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
deployment-nginx   3/3     3            3           17h

# Kubernetes also recognizes whether the existing Pods still match the template;
# unchanged Pods (like the 17h-old one below) keep running instead of being replaced
root@k8s-master1:~# kubectl get pods -owide
NAME                                READY   STATUS    RESTARTS      AGE   IP               NODE          NOMINATED NODE   READINESS GATES
deployment-nginx-6977747dd9-knx9r   1/1     Running   0             31s   10.244.194.126   k8s-worker1   <none>           <none>
deployment-nginx-6977747dd9-nqxgd   1/1     Running   0             31s   10.244.194.125   k8s-worker1   <none>           <none>
deployment-nginx-6977747dd9-sz42q   1/1     Running   1 (34m ago)   17h   10.244.126.24    k8s-worker2   <none>           <none>
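
As an aside, the same scaling can also be done imperatively with kubectl scale, without editing the manifest (a quick sketch using the Deployment name from above):

    kubectl scale deployment/deployment-nginx --replicas=3

Note that the next kubectl apply of the manifest will reset replicas to whatever the YAML declares, so editing the manifest is the more maintainable habit.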

Deployment Rolling Update Strategy

A Deployment supports two update strategies: Recreate and RollingUpdate. Recreate kills all Pods and only then starts the new version; this is an extremely risky move and is almost never used. A rolling update instead adds a certain number of new Pods before killing a certain number of old ones. Deployments use the rolling update strategy by default.
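
For completeness, choosing Recreate is a one-field change under spec.strategy (a minimal sketch; only the strategy block is shown):

    spec:
      strategy:
        type: Recreate   # terminate every old Pod before any new Pod is created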

A rolling update is governed by maxSurge, the maximum number of extra Pods that may be created, and maxUnavailable, the maximum number of Pods that may be unavailable. Both accept an integer or a percentage, and they must not both be 0, otherwise the update cannot proceed at all. Taking 3 replicas as an example: with maxSurge set to 10%, the surge allowance is 0.3, which is rounded up to 1 Pod; with maxUnavailable also at 10%, the unavailable allowance is 0.3, which is rounded down to 0. The resulting behavior: the rollout waits until all 4 Pods are fully running before it kills 1 old Pod.
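
To make the rounding concrete, here is the strategy block annotated with that arithmetic for 3 replicas (the same percentages are applied in step 2 below):

    spec:
      replicas: 3
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 10%        # ceil(3 * 0.10) = 1 extra Pod allowed during the rollout
          maxUnavailable: 10%  # floor(3 * 0.10) = 0 Pods may be unavailable
    # net effect: scale up to 4 Pods, wait until the new Pod is Ready, then kill 1 old Pod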

Using Deployment Rolling Updates

  1. First, tag the k8s-test:v1.0 image as k8s-test:v1.1 on each worker node

    root@k8s-worker1:~# ctr -n k8s.io image tag docker.io/library/k8s-test:v1.0 docker.io/library/k8s-test:v1.1
    docker.io/library/k8s-test:v1.1
    root@k8s-worker2:~# ctr -n k8s.io image tag docker.io/library/k8s-test:v1.0 docker.io/library/k8s-test:v1.1
    docker.io/library/k8s-test:v1.1
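
    You can optionally confirm the new tag is present on each node before proceeding (grep only filters the listing):

        ctr -n k8s.io image ls | grep k8s-test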
    
  2. Bump the image version and set the update strategy in the YAML

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: deployment-nginx
      namespace: default
      labels:
        app: deployment-nginx
    spec:
      strategy:
        rollingUpdate:
          maxSurge: 10%
          maxUnavailable: 10%
      replicas: 2
      selector:
        matchLabels:
          app: pod-nginx
      template:
        metadata:
          labels:
            app: pod-nginx
        spec:
          containers:
            - name: nginx
              image: docker.io/k8s-test:v1.1
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 80
  3. Run the following command to start watching the Pods

    kubectl get pods -owide -w
    
  4. The update proceeds as follows; old Pods are marked Terminating first but are not immediately torn down. Note that the new Pods carry the new ReplicaSet hash (6c94d644bd) in their names, while the old ones carry 6977747dd9

    root@k8s-master1:~# kubectl get pods -owide -w
    NAME                                READY   STATUS    RESTARTS      AGE   IP               NODE          NOMINATED NODE   READINESS GATES
    deployment-nginx-6977747dd9-knx9r   1/1     Running   0             30m   10.244.194.126   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-nqxgd   1/1     Running   0             30m   10.244.194.125   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-sz42q   1/1     Running   1 (63m ago)   17h   10.244.126.24    k8s-worker2   <none>           <none>
    deployment-nginx-6977747dd9-nqxgd   1/1     Terminating   0             35m   10.244.194.125   k8s-worker1   <none>           <none>
    deployment-nginx-6c94d644bd-ft9s6   0/1     Pending       0             0s    <none>           <none>        <none>           <none>
    deployment-nginx-6c94d644bd-ft9s6   0/1     Pending       0             0s    <none>           k8s-worker2   <none>           <none>
    deployment-nginx-6c94d644bd-ft9s6   0/1     ContainerCreating   0             0s    <none>           k8s-worker2   <none>           <none>
    deployment-nginx-6977747dd9-nqxgd   1/1     Terminating         0             35m   10.244.194.125   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-nqxgd   0/1     Terminating         0             35m   <none>           k8s-worker1   <none>           <none>
    deployment-nginx-6c94d644bd-ft9s6   0/1     ContainerCreating   0             0s    <none>           k8s-worker2   <none>           <none>
    deployment-nginx-6977747dd9-nqxgd   0/1     Terminating         0             35m   10.244.194.125   k8s-worker1   <none>           <none>
    deployment-nginx-6c94d644bd-ft9s6   1/1     Running             0             1s    10.244.126.26    k8s-worker2   <none>           <none>
    deployment-nginx-6977747dd9-nqxgd   0/1     Terminating         0             35m   10.244.194.125   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-nqxgd   0/1     Terminating         0             35m   10.244.194.125   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-sz42q   1/1     Terminating         1 (69m ago)   17h   10.244.126.24    k8s-worker2   <none>           <none>
    deployment-nginx-6c94d644bd-sj4hv   0/1     Pending             0             0s    <none>           <none>        <none>           <none>
    deployment-nginx-6c94d644bd-sj4hv   0/1     Pending             0             0s    <none>           k8s-worker1   <none>           <none>
    deployment-nginx-6c94d644bd-sj4hv   0/1     ContainerCreating   0             0s    <none>           k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-sz42q   1/1     Terminating         1 (69m ago)   17h   10.244.126.24    k8s-worker2   <none>           <none>
    deployment-nginx-6977747dd9-sz42q   0/1     Terminating         1 (69m ago)   17h   <none>           k8s-worker2   <none>           <none>
    deployment-nginx-6c94d644bd-sj4hv   0/1     ContainerCreating   0             0s    <none>           k8s-worker1   <none>           <none>
    deployment-nginx-6c94d644bd-sj4hv   1/1     Running             0             1s    10.244.194.127   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-sz42q   0/1     Terminating         1 (69m ago)   17h   10.244.126.24    k8s-worker2   <none>           <none>
    deployment-nginx-6977747dd9-knx9r   1/1     Terminating         0             35m   10.244.194.126   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-sz42q   0/1     Terminating         1 (69m ago)   17h   10.244.126.24    k8s-worker2   <none>           <none>
    deployment-nginx-6977747dd9-sz42q   0/1     Terminating         1 (69m ago)   17h   10.244.126.24    k8s-worker2   <none>           <none>
    deployment-nginx-6977747dd9-knx9r   1/1     Terminating         0             35m   10.244.194.126   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-knx9r   0/1     Terminating         0             35m   <none>           k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-knx9r   0/1     Terminating         0             35m   10.244.194.126   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-knx9r   0/1     Terminating         0             35m   10.244.194.126   k8s-worker1   <none>           <none>
    deployment-nginx-6977747dd9-knx9r   0/1     Terminating         0             35m   10.244.194.126   k8s-worker1   <none>           <none>
    
  5. Check the result after the update

    root@k8s-master1:~# kubectl get pods -owide
    NAME                                READY   STATUS    RESTARTS   AGE     IP               NODE          NOMINATED NODE   READINESS GATES
    deployment-nginx-6c94d644bd-ft9s6   1/1     Running   0          4m55s   10.244.126.26    k8s-worker2   <none>           <none>
    deployment-nginx-6c94d644bd-sj4hv   1/1     Running   0          4m54s   10.244.194.127   k8s-worker1   <none>           <none>
    root@k8s-master1:~# kubectl get deploy
    NAME               READY   UP-TO-DATE   AVAILABLE   AGE
    deployment-nginx   2/2     2            2           17h
    root@k8s-master1:~# kubectl get rs
    NAME                          DESIRED   CURRENT   READY   AGE
    deployment-nginx-6977747dd9   0         0         0       17h
    deployment-nginx-6c94d644bd   2         2         2       6m1s
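
Note in the kubectl get rs output that the old ReplicaSet (6977747dd9) is kept at 0 replicas rather than deleted; this is what makes rollbacks possible. The rollout can also be monitored and reverted with kubectl rollout (a brief sketch against the Deployment above):

    kubectl rollout status deployment/deployment-nginx    # block until the rollout finishes
    kubectl rollout history deployment/deployment-nginx   # list revisions, one per ReplicaSet
    kubectl rollout undo deployment/deployment-nginx      # roll back to the previous revision (v1.0)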
    