Pod Lifecycle in K8S: Restart Policies

The Three Policies

A Pod in Kubernetes supports the following three restart policies:

  • Always

    • Description: the container is restarted automatically, no matter why it exited.

    • Default: if no restart policy is specified, Kubernetes defaults to Always.

  • OnFailure

    • Description: the container is restarted only when it terminates with a non-zero exit code.

    • Condition: the trigger is the container's own exit code, so no extra configuration is needed; a clean exit (code 0) is not restarted. A minimal manifest is sketched after this list.

  • Never

    • Description: the container is never restarted, no matter why it exited.
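Since the demos below only cover Never and Always, here is a minimal OnFailure manifest for reference (a sketch: the pod name, image, and command are illustrative):

yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-onfailure          # illustrative name
spec:
  restartPolicy: OnFailure     # restart only on a non-zero exit code
  containers:
  - name: task
    image: busybox:1.30
    # exits with code 1, so the kubelet restarts it; change "exit 1" to
    # "exit 0" and the Pod finishes as Succeeded instead of restarting
    command: ["/bin/sh", "-c", "sleep 5 && exit 1"]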

Restart Delay

  • First restart: the first time a container needs to be restarted, it is restarted immediately.

  • Subsequent restarts: for any further restarts, the kubelet introduces a delay that starts at 10 seconds and grows exponentially; this shows up as the CrashLoopBackOff status (see the sketch after this list).

  • Delay sequence: 10s, 20s, 40s, 80s, 160s, after which the maximum delay is reached.

  • Maximum delay: 300s, the cap on the delay between subsequent restarts.
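To watch the backoff in action, one option is a throwaway Pod whose container exits immediately under Always (a sketch: the pod name is made up, and busybox is just a convenient small image). The STATUS column alternates between Running and CrashLoopBackOff while the gap between restarts doubles:

bash
# a pod whose container exits immediately, forcing repeated restarts
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: backoff-demo
spec:
  restartPolicy: Always
  containers:
  - name: crash
    image: busybox:1.30
    command: ["/bin/sh", "-c", "exit 1"]
EOF
# watch RESTARTS climb; the delay grows from 10s toward the 300s cap
kubectl get pod backoff-demo -w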

Never

As the session below shows, the Pod is not restarted: the liveness probe on /hello fails three times, the kubelet stops the container, and because restartPolicy is Never it stays stopped.

bash
[root@k8s-master ~]# vim pod-restartpolicy.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-restartpolicy
  namespace: test
spec:
  restartPolicy: Never   # never restart the container, no matter why it exited
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - containerPort: 80
      name: nginx-port
    livenessProbe:
      httpGet:
        scheme: HTTP
        port: 80
        path: /hello
[root@k8s-master ~]# kubectl apply -f pod-restartpolicy.yaml 
Error from server (NotFound): error when creating "pod-restartpolicy.yaml": namespaces "test" not found
[root@k8s-master ~]# kubectl create ns test
namespace/test created
[root@k8s-master ~]# kubectl apply -f pod-restartpolicy.yaml 
pod/pod-restartpolicy created
[root@k8s-master ~]#  kubectl describe pod pod-restartpolicy -n test
Name:         pod-restartpolicy
Namespace:    test
Priority:     0
Node:         k8s-node2/192.168.58.233
Start Time:   Tue, 14 Jan 2025 20:45:51 -0500
Labels:       <none>
Annotations:  cni.projectcalico.org/containerID: 77f405fd3543f24391d29b1f878fff24dda621f6583dd7df8e7020da258b9f4d
              cni.projectcalico.org/podIP: 10.244.169.129/32
              cni.projectcalico.org/podIPs: 10.244.169.129/32
Status:       Pending
IP:           
IPs:          <none>
Containers:
  nginx:
    Container ID:   
    Image:          nginx:1.17.1
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:80/hello delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sf6xn (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-sf6xn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  22s   default-scheduler  Successfully assigned test/pod-restartpolicy to k8s-node2
  Normal  Pulling    20s   kubelet            Pulling image "nginx:1.17.1"
[root@k8s-master ~]#  kubectl describe pod pod-restartpolicy -n test
Name:         pod-restartpolicy
Namespace:    test
Priority:     0
Node:         k8s-node2/192.168.58.233
Start Time:   Tue, 14 Jan 2025 20:45:51 -0500
Labels:       <none>
Annotations:  cni.projectcalico.org/containerID: 77f405fd3543f24391d29b1f878fff24dda621f6583dd7df8e7020da258b9f4d
              cni.projectcalico.org/podIP: 
              cni.projectcalico.org/podIPs: 
Status:       Running
IP:           10.244.169.129
IPs:
  IP:  10.244.169.129
Containers:
  nginx:
    Container ID:   docker://19f7e2ca6a7f4a9487b75fc5dee7d85cf2baef4547ae8b6f1d68f8dfd5d7bb1a
    Image:          nginx:1.17.1
    Image ID:       docker-pullable://nginx@sha256:b4b9b3eee194703fc2fa8afa5b7510c77ae70cfba567af1376a573a967c03dbb
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 14 Jan 2025 20:46:18 -0500
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:80/hello delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sf6xn (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-sf6xn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  50s               default-scheduler  Successfully assigned test/pod-restartpolicy to k8s-node2
  Normal   Pulling    48s               kubelet            Pulling image "nginx:1.17.1"
  Normal   Pulled     24s               kubelet            Successfully pulled image "nginx:1.17.1" in 24.531177953s
  Normal   Created    23s               kubelet            Created container nginx
  Normal   Started    23s               kubelet            Started container nginx
  Warning  Unhealthy  0s (x3 over 20s)  kubelet            Liveness probe failed: HTTP probe failed with statuscode: 404
  Normal   Killing    0s                kubelet            Stopping container nginx
[root@k8s-master ~]# kubectl get pod pod-restartpolicy -n test
NAME                READY   STATUS      RESTARTS   AGE
pod-restartpolicy   0/1     Completed   0          53s
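The Completed status confirms the container was stopped once and never brought back: nginx shuts down cleanly when the kubelet stops it, so the Pod ends in the Succeeded phase with exit code 0. A quick way to confirm both (a sketch; the jsonpath reads the standard Pod status fields):

bash
kubectl get pod pod-restartpolicy -n test \
  -o jsonpath='{.status.phase} {.status.containerStatuses[0].state.terminated.exitCode}{"\n"}'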

Always

As the session below shows, the Pod is restarted again and again: each time the liveness probe on /hello fails three times, the kubelet kills the container, and restartPolicy: Always starts it back up.

bash
[root@k8s-master ~]# vim pod-restartpolicy.yaml 
^C[root@k8s-master ~]#  cat pod-restartpolicy.yaml 
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-restartpolicy
  namespace: test
spec:
  restartPolicy: Always   # restart the container no matter why it exited
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - containerPort: 80
      name: nginx-port
    livenessProbe:
      httpGet:
        scheme: HTTP
        port: 80
        path: /hello

[root@k8s-master ~]# kubectl delete pod-restartpolicy.yaml 
error: the server doesn't have a resource type "pod-restartpolicy"
[root@k8s-master ~]# kubectl delete -f pod-restartpolicy.yaml 
pod "pod-restartpolicy" deleted
[root@k8s-master ~]# kubectl apply -f pod-restartpolicy.yaml 
pod/pod-restartpolicy created
[root@k8s-master ~]# kubectl get pod pod-restartpolicy -n test -w
NAME                READY   STATUS    RESTARTS   AGE
pod-restartpolicy   1/1     Running   0          5s
pod-restartpolicy   1/1     Running   1          31s
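Three consecutive probe failures at the 10-second period get the container killed about half a minute after it starts, which matches RESTARTS ticking from 0 to 1 at the 31s mark; the exponential backoff then stretches the interval as the count grows. To spot-check the count later (a sketch; jsonpath over the standard containerStatuses field):

bash
kubectl get pod pod-restartpolicy -n test \
  -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'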

Common Pod State Transitions

The last three columns give the resulting Pod phase under each restart policy.

| Containers in Pod | Pod phase | Event                            | Always  | OnFailure | Never     |
| ----------------- | --------- | -------------------------------- | ------- | --------- | --------- |
| One container     | Running   | Container exits successfully     | Running | Succeeded | Succeeded |
| One container     | Running   | Container exits with failure     | Running | Running   | Failed    |
| Two containers    | Running   | One container exits with failure | Running | Running   | Running   |
| Two containers    | Running   | A container is OOM-killed        | Running | Running   | Failed    |

Notes:

  • Container exits successfully (row 1): under Always the container is restarted immediately; under OnFailure and Never the exit code is 0, so the Pod phase becomes Succeeded.

  • Container exits with failure (row 2): under Always the container is restarted immediately; under OnFailure it is also restarted, because the exit code is non-zero; under Never it is not restarted and the Pod phase becomes Failed.

  • Container is OOM-killed (row 4): under Always the container is restarted; under OnFailure it is restarted, since an OOM kill produces a non-zero exit code; under Never it is not restarted, any other containers in the Pod keep running, and the failed container's state becomes Terminated.
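The first table row is easy to reproduce. The sketch below (the name, image, and command are illustrative) runs a single container that exits with code 0 under OnFailure; the Pod should settle in the Succeeded phase instead of being restarted:

yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-exit-ok            # illustrative name
  namespace: test
spec:
  restartPolicy: OnFailure     # exit code 0: no restart, phase becomes Succeeded
  containers:
  - name: task
    image: busybox:1.30
    command: ["/bin/sh", "-c", "sleep 5 && exit 0"]

Changing exit 0 to exit 1 flips it to the second row: the container is restarted and the Pod stays Running.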
