[Cloud Native] The Pod Lifecycle in Kubernetes: Probes and Hooks with Practical Examples


Table of Contents

  • 1. Pod Lifecycle
    • 1.1 Init containers
    • 1.2 Main containers
    • 1.3 What stages does creating a Pod go through?
  • 2. Pod Probes and Hooks
    • 2.1 Container hooks: postStart and preStop
    • 2.2 Liveness probes (livenessProbe) and readiness probes (readinessProbe)
      • 1. LivenessProbe examples
      • 2. ReadinessProbe example
      • 3. ReadinessProbe + LivenessProbe combined example

1. Pod Lifecycle

1.1 Init containers

A Pod can contain one or more containers; the containers that run the application are called main containers.

When a Pod is created, it can also declare one or more init containers that run before the main containers start. An init container runs its initialization code and then exits; it does not keep running alongside the Pod.

Initialization therefore happens before the main containers start. There can be several init containers, and they run serially: init container 1 runs first, then init container 2, and so on. Once every init container has finished, the main containers start. When the main container exits, the Pod is finished; the end of the main container marks the end of the Pod, so their lifetimes coincide.

In short, init containers exist to do initialization work. There can be one or many, and if there are many they execute in the order in which they are defined; only after all of them have completed does the main container start. Because the volumes in a Pod are shared, data produced by an init container can be consumed by the main containers. Init containers can be used in many Kubernetes resources, such as Deployment, DaemonSet, StatefulSet and Job, but in every case they run when the Pod starts, before the main containers, to perform initialization.

kubectl explain pod.spec.containers — this field is where the main containers are defined.

Differences between init containers and regular containers:

1. Init containers do not support readiness probes, because they must run to completion before the Pod can become ready.

2. Each init container must complete successfully before the next one can run.

3. If a Pod's init container fails, Kubernetes keeps restarting the Pod until the init container succeeds; however, if the Pod's restartPolicy is Never, it is not restarted.

Create a manifest with init containers:

```bash
[root@master01 pod-test ]# cat init.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done"]
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
```

```bash
[root@master01 pod-test ]# kubectl get pods -owide
NAME                          READY   STATUS     RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
demo-nodeselector             1/1     Running    3          2d23h   172.21.231.155   node02   <none>           <none>
myapp-deploy                  1/1     Running    1          22h     172.21.231.158   node02   <none>           <none>
myapp-pod                     0/1     Init:0/2   0          52s     172.29.55.35     node01   <none>           <none>
nginx-test-64b444bff5-6t2mb   1/1     Running    0          93m     172.29.55.33     node01   <none>           <none>
nginx-test-64b444bff5-ltj29   1/1     Running    0          93m     172.21.231.160   node02   <none>           <none>
pod-node-affinity-demo-2      1/1     Running    2          2d6h    172.21.231.157   node02   <none>           <none>
pod-restart                   1/1     Running    0          70m     172.29.55.34     node01   <none>           <none>
test                          1/1     Running    4          5d23h   172.21.231.159   node02   <none>           <none>

```

myapp-pod stays in the Init state:

```bash
[root@master01 pod-test ]# kubectl logs myapp-pod 
Error from server (BadRequest): container "myapp-container" in pod "myapp-pod" is waiting to start: PodInitializing

```

The init containers keep looping until the Services can be resolved; only then does initialization succeed and the main container get created.

Create the Service manifests:

```bash
[root@master01 pod-test ]# cat service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
---
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377
```

```bash
[root@master01 pod-test ]# kubectl apply -f service.yaml 
service/myservice created
service/mydb created
```

```bash
[root@master01 pod-test ]# kubectl get pods -owide
NAME                          READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
demo-nodeselector             1/1     Running   3          2d23h   172.21.231.155   node02   <none>           <none>
myapp-deploy                  1/1     Running   1          22h     172.21.231.158   node02   <none>           <none>
myapp-pod                     1/1     Running   0          5m10s   172.29.55.35     node01   <none>           <none>
nginx-test-64b444bff5-6t2mb   1/1     Running   0          97m     172.29.55.33     node01   <none>           <none>
nginx-test-64b444bff5-ltj29   1/1     Running   0          97m     172.21.231.160   node02   <none>           <none>
pod-node-affinity-demo-2      1/1     Running   2          2d6h    172.21.231.157   node02   <none>           <none>
pod-restart                   1/1     Running   0          75m     172.29.55.34     node01   <none>           <none>
test                          1/1     Running   4          5d23h   172.21.231.159   node02   <none>           <none>

```

You can see that myapp-pod is now running.

Before the main container was created, the Pod first went through the two init container steps.
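If you want to watch the init phase more closely, the commands below are a minimal sketch run against the myapp-pod created above (the output will vary with your cluster):

```bash
# Watch the Pod move through Init:0/2 -> Init:1/2 -> PodInitializing -> Running
kubectl get pod myapp-pod -w

# Init containers are addressed with -c; this shows the "waiting for myservice" loop
kubectl logs myapp-pod -c init-myservice
kubectl logs myapp-pod -c init-mydb

# The Init Containers section of describe shows each init container's state and exit code
kubectl describe pod myapp-pod
```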

1.2 Main containers

Container hooks

After the init containers finish, the main container starts. Around the main container there are two hooks: a post start hook, which runs right after the container starts, and a pre stop hook, which runs right before the container stops.

Whatever needs to happen right after startup or right before shutdown can be placed in these two hooks; a hook lets you attach commands to those points, to do setup at the beginning and cleanup at the end, much like the BEGIN and END blocks in awk.

  • postStart: fires immediately after the container is created, notifying the container that it has been created. If the hook handler fails, the container is killed and its restart policy decides whether it is restarted. The hook takes no parameters.

  • preStop: fires before the container is deleted; its handler must complete before the delete request is sent to the container runtime (for example the Docker daemon). After the handler finishes, regardless of the outcome, the runtime sends a SIGTERM signal to the container to terminate it. The hook takes no parameters.

Kubernetes supports two kinds of probes for Pods:

The first is livenessProbe (liveness probing):

A liveness probe checks, using the configured method, whether the application in the container is running properly. If the check fails, the container is considered unhealthy and the kubelet uses the Pod's restartPolicy to decide whether to restart it. If a container has no livenessProbe configured, the kubelet treats the probe as always successful and the container simply stays Running.

The second is readinessProbe (readiness probing):

A readiness probe determines whether the application in the container has finished starting up. Only when the probe succeeds does the Pod start receiving traffic and have its Ready condition set to true; if the probe fails, the Ready condition is set to false.

Check the official field documentation:

livenessProbe:

```bash
[root@master01 pod-test ]# kubectl explain pods.spec.containers.livenessProbe
KIND:     Pod
VERSION:  v1

RESOURCE: livenessProbe <Object>

DESCRIPTION:
     Periodic probe of container liveness. Container will be restarted if the
     probe fails. Cannot be updated. More info:
     https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

     Probe describes a health check to be performed against a container to
     determine whether it is alive or ready to receive traffic.

FIELDS:
   exec	<Object>
     One and only one of the following should be specified. Exec specifies the
     action to take.

   failureThreshold	<integer>
     Minimum consecutive failures for the probe to be considered failed after
     having succeeded. Defaults to 3. Minimum value is 1.

   httpGet	<Object>
     HTTPGet specifies the http request to perform.

   initialDelaySeconds	<integer>
     Number of seconds after the container has started before liveness probes
     are initiated. More info:
     https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

   periodSeconds	<integer>
     How often (in seconds) to perform the probe. Default to 10 seconds. Minimum
     value is 1.

   successThreshold	<integer>
     Minimum consecutive successes for the probe to be considered successful
     after having failed. Defaults to 1. Must be 1 for liveness and startup.
     Minimum value is 1.

   tcpSocket	<Object>
     TCPSocket specifies an action involving a TCP port. TCP hooks not yet
     supported

   terminationGracePeriodSeconds	<integer>
     Optional duration in seconds the pod needs to terminate gracefully upon
     probe failure. The grace period is the duration in seconds after the
     processes running in the pod are sent a termination signal and the time
     when the processes are forcibly halted with a kill signal. Set this value
     longer than the expected cleanup time for your process. If this value is
     nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this
     value overrides the value provided by the pod spec. Value must be
     non-negative integer. The value zero indicates stop immediately via the
     kill signal (no opportunity to shut down). This is an alpha field and
     requires enabling ProbeTerminationGracePeriod feature gate.

   timeoutSeconds	<integer>
     Number of seconds after which the probe times out. Defaults to 1 second.
     Minimum value is 1. More info:
     https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

```

readinessProbe:

```bash
[root@master01 pod-test ]# kubectl explain pods.spec.containers.readinessProbe
KIND:     Pod
VERSION:  v1

RESOURCE: readinessProbe <Object>

DESCRIPTION:
     Periodic probe of container service readiness. Container will be removed
     from service endpoints if the probe fails. Cannot be updated. More info:
     https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

     Probe describes a health check to be performed against a container to
     determine whether it is alive or ready to receive traffic.

FIELDS:
   exec	<Object>
     One and only one of the following should be specified. Exec specifies the
     action to take.

   failureThreshold	<integer>
     Minimum consecutive failures for the probe to be considered failed after
     having succeeded. Defaults to 3. Minimum value is 1.

   httpGet	<Object>
     HTTPGet specifies the http request to perform.

   initialDelaySeconds	<integer>
     Number of seconds after the container has started before liveness probes
     are initiated. More info:
     https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

   periodSeconds	<integer>
     How often (in seconds) to perform the probe. Default to 10 seconds. Minimum
     value is 1.

   successThreshold	<integer>
     Minimum consecutive successes for the probe to be considered successful
     after having failed. Defaults to 1. Must be 1 for liveness and startup.
     Minimum value is 1.

   tcpSocket	<Object>
     TCPSocket specifies an action involving a TCP port. TCP hooks not yet
     supported

   terminationGracePeriodSeconds	<integer>
     Optional duration in seconds the pod needs to terminate gracefully upon
     probe failure. The grace period is the duration in seconds after the
     processes running in the pod are sent a termination signal and the time
     when the processes are forcibly halted with a kill signal. Set this value
     longer than the expected cleanup time for your process. If this value is
     nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this
     value overrides the value provided by the pod spec. Value must be
     non-negative integer. The value zero indicates stop immediately via the
     kill signal (no opportunity to shut down). This is an alpha field and
     requires enabling ProbeTerminationGracePeriod feature gate.

   timeoutSeconds	<integer>
     Number of seconds after which the probe times out. Defaults to 1 second.
     Minimum value is 1. More info:
     https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes

```

1.3 What stages does creating a Pod go through?

When a user creates a Pod, the request goes to the apiserver, which stores the desired state in etcd.

The apiserver then asks the scheduler to perform scheduling.

If scheduling succeeds, the result (which node the Pod was placed on) is written back into the Pod's status in etcd.

Once that update lands, say the Pod is scheduled to node01, the kubelet on node01 learns from the apiserver that there is work for it to do. It fetches the spec the user submitted and runs (starts) the Pod on that node. Whether the Pod starts successfully or fails, the resulting status is reported back to the apiserver, which stores it in etcd again.

Throughout this process etcd and the apiserver interact constantly, and the scheduler takes part by placing the Pod on a suitable node.

That is the Pod creation flow.
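A minimal sketch for inspecting this flow on an existing Pod (the name myapp-pod is reused from earlier; events may already have expired for older Pods):

```bash
# Current phase reported back by the kubelet (Pending / Running / Succeeded / Failed / Unknown)
kubectl get pod myapp-pod -o jsonpath='{.status.phase}{"\n"}'

# The node the scheduler bound the Pod to
kubectl get pod myapp-pod -o jsonpath='{.spec.nodeName}{"\n"}'

# Scheduling, image pull and container start events, newest last
kubectl get events --field-selector involvedObject.name=myapp-pod --sort-by=.lastTimestamp
```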

Over its whole lifecycle a Pod offers several points for user-defined behavior:

1. Init containers perform initialization.

2. After the main container starts, a postStart hook can run.

3. Before the main container stops, a preStop hook can run.

4. While the main container is running, health checks can be performed, such as liveness probes and readiness probes.

Check the official documentation for lifecycle hooks:

```bash
[root@master01 pod-test ]# kubectl explain pods.spec.containers.lifecycle
KIND:     Pod
VERSION:  v1

RESOURCE: lifecycle <Object>

DESCRIPTION:
     Actions that the management system should take in response to container
     lifecycle events. Cannot be updated.

     Lifecycle describes actions that the management system should take in
     response to container lifecycle events. For the PostStart and PreStop
     lifecycle handlers, management of the container blocks until the action is
     complete, unless the container process fails, in which case the handler is
     aborted.

FIELDS:
   postStart	<Object>
     PostStart is called immediately after a container is created. If the
     handler fails, the container is terminated and restarted according to its
     restart policy. Other management of the container blocks until the hook
     completes. More info:
     https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks

   preStop	<Object>
     PreStop is called immediately before a container is terminated due to an
     API request or management event such as liveness/startup probe failure,
     preemption, resource contention, etc. The handler is not called if the
     container crashes or exits. The reason for termination is passed to the
     handler. The Pod's termination grace period countdown begins before the
     PreStop hooked is executed. Regardless of the outcome of the handler, the
     container will eventually terminate within the Pod's termination grace
     period. Other management of the container blocks until the hook completes
     or until the termination grace period is reached. More info:
     https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
```

```bash
[root@master01 pod-test ]# kubectl explain pods.spec.containers.lifecycle.postStart
KIND:     Pod
VERSION:  v1

RESOURCE: postStart <Object>

DESCRIPTION:
     PostStart is called immediately after a container is created. If the
     handler fails, the container is terminated and restarted according to its
     restart policy. Other management of the container blocks until the hook
     completes. More info:
     https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks

     Handler defines a specific action that should be taken

FIELDS:
   exec	<Object>
     One and only one of the following should be specified. Exec specifies the
     action to take.

   httpGet	<Object>
     HTTPGet specifies the http request to perform.

   tcpSocket	<Object>
     TCPSocket specifies an action involving a TCP port. TCP hooks not yet
     supported

```

2. Pod Probes and Hooks

2.1 Container hooks: postStart and preStop

  • postStart: runs right after the container is created; used for things like deploying resources and preparing the environment.
  • preStop: runs before the container is terminated; used to shut the application down gracefully, notify other systems, and so on.
```bash
[root@master01 pod-test ]# kubectl explain pods.spec.containers.lifecycle.postStart.exec
KIND:     Pod
VERSION:  v1

RESOURCE: exec <Object>

DESCRIPTION:
     One and only one of the following should be specified. Exec specifies the
     action to take.

     ExecAction describes a "run in container" action.

FIELDS:
   command	<[]string>
     Command is the command line to execute inside the container, the working
     directory for the command is root ('/') in the container's filesystem. The
     command is simply exec'd, it is not run inside a shell, so traditional
     shell instructions ('|', etc) won't work. To use a shell, you need to
     explicitly call out to that shell. Exit status of 0 is treated as
     live/healthy and non-zero is unhealthy.

```

A demonstration of postStart and preStop usage:

```yaml
......
containers:
- image: sample:v2
  name: war
  lifecycle:
    postStart:
      exec:
        command:
        - "cp"
        - "/sample.war"
        - "/app"
    preStop:
      httpGet:
        host: monitor.com
        path: /waring
        port: 8080
        scheme: HTTP
......
```

The example above defines a Pod containing a Java web application container with both PostStart and PreStop handlers configured.

After the container is created, it copies /sample.war into the /app directory.

Before the container is terminated, it sends an HTTP request to http://monitor.com:8080/waring, i.e. a warning is sent to the monitoring system.
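For reference, here is a minimal, self-contained sketch of the same pattern using exec handlers for both hooks; the Pod name lifecycle-demo, the nginx image and the file path are my own choices for illustration, not part of the original example:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo
    image: nginx
    lifecycle:
      postStart:
        exec:
          # Runs right after the container is created; if it fails, the container is killed
          command: ["/bin/sh", "-c", "echo 'postStart ran' > /usr/share/message"]
      preStop:
        exec:
          # Runs before the container receives SIGTERM; ask nginx to finish in-flight requests
          command: ["/bin/sh", "-c", "nginx -s quit; sleep 5"]
EOF

# Verify the postStart hook executed
kubectl exec lifecycle-demo -- cat /usr/share/message
```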

2.2 Liveness probes (livenessProbe) and readiness probes (readinessProbe)

  • livenessProbe: liveness probing

    Many applications, after running for a long time, eventually drift into a broken state from which they cannot recover except by being restarted.

    Normally Kubernetes notices that an application has terminated and restarts its Pod.

    Sometimes, however, an application becomes temporarily unable to serve requests (for example because a backend dependency is down) without actually terminating. Kubernetes then cannot isolate the faulty Pod, callers may still be routed to it, and the service becomes unstable.

    livenessProbe lets Kubernetes detect whether the container is actually working and take corrective action when it is not.

  • readinessProbe: readiness probing

    Without a readinessProbe, a Pod is considered able to serve traffic as soon as its containers have started, and it is added to the corresponding Service.

    But some applications need a long warm-up after the process starts before they can actually serve requests; exposing them too early produces errors and hurts the user experience.

    A Tomcat application, for example, is not ready just because the Tomcat process is up; it still has to wait for the Spring context to initialize, database connections to be established, and so on.

Both LivenessProbe and ReadinessProbe currently support three probe methods:

  • 1. Exec Action: run a command inside the container; an exit code of 0 means the probe succeeded.
  • 2. TCPSocket Action: perform a TCP check against the container's IP address and port; if a TCP connection can be established, the container is healthy.
  • 3. HTTPGet Action: send an HTTP GET request to the container's IP address, port and path; a response status code greater than or equal to 200 and less than 400 means the container is healthy.

A probe has one of the following results:

  • 1. Success: the check passed.
  • 2. Failure: the check failed.
  • 3. Unknown: the check could not be carried out properly.

Probe-related fields:

Probes have a number of optional fields that give finer control over liveness and readiness checking.

initialDelaySeconds: how many seconds to wait after the container starts before the first check.

periodSeconds: interval between checks, in seconds; defaults to 10, minimum 1.

timeoutSeconds: how long the probe waits for a response before timing out, in seconds; defaults to 1.

successThreshold: how many consecutive successes are required for the probe to be considered successful; defaults to 1, must be 1 for liveness probes, minimum 1.

failureThreshold: how many consecutive failures are required before the probe is considered failed; for a readiness probe, the Pod is then marked not ready. Defaults to 3, minimum 1.

How the two probes differ:

ReadinessProbe and livenessProbe can use the same probe methods; they differ in what happens to the Pod:

when a readinessProbe fails, the Pod's IP:Port is removed from the corresponding Endpoints list;

when a livenessProbe fails, the container is killed and the Pod's restart policy decides what happens next.

```bash
[root@master01 probe ]# kubectl explain pods.spec.containers.readinessProbe.httpGet
KIND:     Pod
VERSION:  v1

RESOURCE: httpGet <Object>

DESCRIPTION:
     HTTPGet specifies the http request to perform.

     HTTPGetAction describes an action based on HTTP Get requests.

FIELDS:
   host	<string>
     Host name to connect to, defaults to the pod IP. You probably want to set
     "Host" in httpHeaders instead.

   httpHeaders	<[]Object>
     Custom headers to set in the request. HTTP allows repeated headers.

   path	<string>
     Path to access on the HTTP server.

   port	<string> -required-
     Name or number of the port to access on the container. Number must be in
     the range 1 to 65535. Name must be an IANA_SVC_NAME.

   scheme	<string>
     Scheme to use for connecting to the host. Defaults to HTTP.

```

Pod probe examples:

1. LivenessProbe examples

(1) Health checking with exec

Example file liveness-exec.yaml:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
  labels:
    app: liveness
spec:
  containers:
  - name: liveness
    image: busybox
    args:                       # startup command: create the probed file, wait, then delete it
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      initialDelaySeconds: 10   # delay before the first probe
      periodSeconds: 5          # probe interval
      exec:
        command:
        - cat
        - /tmp/healthy

```

The startup command configured for the container:

/bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"

After the container starts, it first creates the file /tmp/healthy, sleeps for 30 seconds, then deletes /tmp/healthy and sleeps for another 600 seconds.

The liveness probe is configured to run a shell command, cat /tmp/healthy, and print the file's contents.

If the command succeeds, the probe is considered successful; otherwise it fails. During the first 30 seconds the file exists, so cat /tmp/healthy succeeds every time the probe runs. After 30 seconds the file is deleted,

the command starts failing, and Kubernetes decides whether to restart the Pod according to the Pod's restart policy.
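A quick sketch of how to observe the resulting restarts (the Pod name matches the manifest above; the exact event text depends on your cluster version):

```bash
# RESTARTS keeps climbing once the probe starts failing
kubectl get pod liveness-exec -w

# The Events section shows entries like "Liveness probe failed: cat: can't open '/tmp/healthy'"
kubectl describe pod liveness-exec
```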

(2) Health checking with HTTP:

```bash
[root@master01 probe ]# vim liveness-http.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
  labels:
    test: liveness
spec:
  containers:
  - name: liveness
    image: mydlqclub/springboot-helloworld:0.0.1
    livenessProbe:
      initialDelaySeconds: 20   # how many seconds after the container starts before probing begins
      periodSeconds: 5          # probe interval
      timeoutSeconds: 10        # probe timeout
      httpGet:
        scheme: HTTP
        port: 8081
        path: /actuator/health
```

```bash
[root@master01 probe ]# kubectl get pods  -owide
NAME                          READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
demo-nodeselector             1/1     Running   9          14d   172.21.231.190   node02   <none>           <none>
liveness-exec                 1/1     Running   13         40m   172.21.231.191   node02   <none>           <none>
liveness-http                 1/1     Running   5          8d    172.29.55.60     node01   <none>           <none>
liveness-tcp                  1/1     Running   4          8d    172.29.55.58     node01   <none>           <none>
myapp-pod                     1/1     Running   67         11d   172.29.55.59     node01   <none>           <none>
nginx-test-64b444bff5-6t2mb   1/1     Running   7          11d   172.29.55.57     node01   <none>           <none>
nginx-test-64b444bff5-ltj29   1/1     Running   6          11d   172.21.231.189   node02   <none>           <none>
pod-node-affinity-demo-2      1/1     Running   8          14d   172.21.231.187   node02   <none>           <none>
test                          1/1     Running   10         17d   172.21.231.186   node02   <none>           <none>

```

host defaults to the Pod IP.

Use curl -I to see the status code the URL returns:

```bash
[root@master01 probe ]# curl -I http://172.29.55.60:8081/actuator/health
HTTP/1.1 200 
Content-Type: application/vnd.spring-boot.actuator.v2+json;charset=UTF-8
Transfer-Encoding: chunked
Date: Tue, 16 Aug 2022 06:32:57 GMT

```

The response code is 200, so the liveness check succeeds.

The container started in the Pod above is a Spring Boot application that includes the Actuator component, which exposes the /actuator/health health-check endpoint.

The liveness probe uses an HTTPGet request against /actuator/health on port 8081 to decide whether the container is alive:

any status code greater than or equal to 200 and less than 400 means the probe succeeded;

any other code means it failed.

If the probe fails, the container is killed and the Pod is restarted.

The httpGet probe method supports the following optional fields:

scheme: protocol used to connect to the host; defaults to HTTP.

host: host name to connect to; defaults to the Pod IP. You can instead set a Host header via httpHeaders.

port: port number or name to access on the container.

path: URI to request on the HTTP server.

httpHeaders: custom headers for the request; HTTP allows repeated headers.
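As a sketch of what httpHeaders looks like in practice; the header name and value here are invented for illustration, while the image and endpoint reuse the liveness-http example above:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-headers
spec:
  containers:
  - name: liveness
    image: mydlqclub/springboot-helloworld:0.0.1
    livenessProbe:
      initialDelaySeconds: 20
      periodSeconds: 5
      httpGet:
        scheme: HTTP
        port: 8081
        path: /actuator/health
        httpHeaders:
        - name: X-Probe-Source      # custom header sent with every probe request
          value: kubelet-liveness
EOF
```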

(3) Health checking with TCP:

```bash
[root@master01 probe ]# kubectl explain pods.spec.containers.readinessProbe.tcpSocket
KIND:     Pod
VERSION:  v1

RESOURCE: tcpSocket <Object>

DESCRIPTION:
     TCPSocket specifies an action involving a TCP port. TCP hooks not yet
     supported

     TCPSocketAction describes an action based on opening a socket

FIELDS:
   host	<string>
     Optional: Host name to connect to, defaults to the pod IP.

   port	<string> -required-
     Number or name of the port to access on the container. Number must be in
     the range 1 to 65535. Name must be an IANA_SVC_NAME.
```

```bash
[root@master01 probe ]# vim liveness-tcp.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp
  labels:
    app: liveness
spec:
  containers:
  - name: liveness
    image: nginx
    livenessProbe:
      initialDelaySeconds: 15
      periodSeconds: 20
      tcpSocket:
        port: 80
```

```bash
[root@master01 probe ]# kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
demo-nodeselector             1/1     Running   5          6d21h
liveness-http                 1/1     Running   0          11m
liveness-tcp                  1/1     Running   0          93s
myapp-pod                     1/1     Running   32         3d22h
nginx-test-64b444bff5-6t2mb   1/1     Running   3          3d23h
nginx-test-64b444bff5-ltj29   1/1     Running   2          3d23h
pod-node-affinity-demo-2      1/1     Running   4          6d4h
test                          1/1     Running   6          9d

```

2. ReadinessProbe example

A ReadinessProbe is configured and used the same way as a LivenessProbe and supports the same three probe methods;

the difference is that one checks whether the application is alive,

while the other decides whether the Pod may receive traffic.

Here we take a Spring Boot project and configure a ReadinessProbe

against the /actuator/health endpoint on port 8081. If the probe succeeds, the application inside has started and the Pod is opened up for external access;

otherwise the application is not considered started and the Pod receives no traffic until the readiness probe succeeds.

```bash
[root@master01 probe ]# cat readiness-exec.yaml 
apiVersion: v1
kind: Service
metadata:
  name: springboot
  labels:
    app: springboot
spec:
  type: NodePort
  ports:
  - name: server
    port: 8080
    targetPort: 8080
    nodePort: 31180
  - name: management
    port: 8081
    targetPort: 8081
    nodePort: 31181
  selector:
    app: springboot
---
apiVersion: v1
kind: Pod
metadata:
  name: springboot
  labels:
    app: springboot
spec:
  containers:
  - name: springboot
    image: mydlqclub/springboot-helloworld:0.0.1
    ports:
    - name: server
      containerPort: 8080
    - name: management
      containerPort: 8081
    readinessProbe:
      initialDelaySeconds: 20   
      periodSeconds: 5          
      timeoutSeconds: 10   
      httpGet:
        scheme: HTTP
        port: 8081
        path: /actuator/health
```

```bash
[root@master01 probe ]# kubectl get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
kubernetes   ClusterIP   192.168.0.1      <none>        443/TCP                         18d
mydb         ClusterIP   192.168.138.78   <none>        80/TCP                          11d
myservice    ClusterIP   192.168.81.58    <none>        80/TCP                          11d
springboot   NodePort    192.168.236.10   <none>        8080:31180/TCP,8081:31181/TCP   86s
```

```bash
[root@master01 probe ]# kubectl get pods -owide
NAME                          READY   STATUS             RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
demo-nodeselector             1/1     Running            9          14d   172.21.231.190   node02   <none>           <none>
liveness-exec                 0/1     CrashLoopBackOff   20         66m   172.21.231.191   node02   <none>           <none>
liveness-http                 1/1     Running            5          8d    172.29.55.60     node01   <none>           <none>
liveness-tcp                  1/1     Running            4          8d    172.29.55.58     node01   <none>           <none>
myapp-pod                     1/1     Running            68         11d   172.29.55.59     node01   <none>           <none>
nginx-test-64b444bff5-6t2mb   1/1     Running            7          12d   172.29.55.57     node01   <none>           <none>
nginx-test-64b444bff5-ltj29   1/1     Running            6          12d   172.21.231.189   node02   <none>           <none>
pod-node-affinity-demo-2      1/1     Running            8          14d   172.21.231.187   node02   <none>           <none>
springboot                    1/1     Running            0          9m    172.29.55.61     node01   <none>           <none>
test                          1/1     Running            10         17d   172.21.231.186   node02   <none>           <none>

```

Verification:

```bash
[root@master01 probe ]# curl -I http://172.29.55.61:8081/actuator/health
HTTP/1.1 200 
Content-Type: application/vnd.spring-boot.actuator.v2+json;charset=UTF-8
Transfer-Encoding: chunked
Date: Tue, 16 Aug 2022 07:08:12 GMT
```
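Once the readiness probe is passing, the Pod's IP appears in the Service's Endpoints; if the probe fails, it is removed again. A quick way to confirm this (a sketch; the springboot names come from the manifest above):

```bash
# The Pod IP:port is listed here only while the readiness probe succeeds
kubectl get endpoints springboot

# The Ready condition that the readiness probe controls
kubectl get pod springboot -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
```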

3. ReadinessProbe + LivenessProbe combined example

In practice you usually configure both probes together, and tune the initial delay and probe interval to the application's actual behavior.

Below is a simple Deployment example for a Spring Boot project.

```bash
[root@master01 probe ]# cat live-redi.yaml 
apiVersion: v1
kind: Service
metadata:
  name: springboot1
  labels:
    app: springboot1
spec:
  type: NodePort
  ports:
  - name: server
    port: 8080
    targetPort: 8080
    nodePort: 30180
  - name: management1
    port: 8081
    targetPort: 8081
    nodePort: 30181
  selector:
    app: springboot1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: springboot1
  labels:
    app: springboot1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: springboot1
  template:
    metadata:
      name: springboot1
      labels:
        app: springboot1
    spec:
      containers:
      - name: readiness
        image: mydlqclub/springboot-helloworld:0.0.1
        ports:
        - name: server1 
          containerPort: 8080
        - name: management1
          containerPort: 8081
        readinessProbe:
          initialDelaySeconds: 20 
          periodSeconds: 5      
          timeoutSeconds: 10        
          httpGet:
            scheme: HTTP
            port: 8081
            path: /actuator/health
        livenessProbe:
          initialDelaySeconds: 30 
          periodSeconds: 10 
          timeoutSeconds: 5 
          httpGet:
            scheme: HTTP
            port: 8081
            path: /actuator/health

```

When the kind is Deployment, the Pod is defined through the template field.

```bash
[root@master01 probe ]# kubectl get pods -owide
NAME                           READY   STATUS             RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
demo-nodeselector              1/1     Running            9          14d   172.21.231.190   node02   <none>           <none>
liveness-exec                  0/1     CrashLoopBackOff   26         90m   172.21.231.191   node02   <none>           <none>
liveness-http                  1/1     Running            5          8d    172.29.55.60     node01   <none>           <none>
liveness-tcp                   1/1     Running            4          8d    172.29.55.58     node01   <none>           <none>
myapp-pod                      1/1     Running            68         11d   172.29.55.59     node01   <none>           <none>
nginx-test-64b444bff5-6t2mb    1/1     Running            7          12d   172.29.55.57     node01   <none>           <none>
nginx-test-64b444bff5-ltj29    1/1     Running            6          12d   172.21.231.189   node02   <none>           <none>
pod-node-affinity-demo-2       1/1     Running            8          14d   172.21.231.187   node02   <none>           <none>
springboot                     1/1     Running            0          32m   172.29.55.61     node01   <none>           <none>
springboot1-7cf5df696d-pk5bv   1/1     Running            0          67s   172.29.55.62     node01   <none>           <none>
test                           1/1     Running            10         17d   172.21.231.186   node02   <none>           <none>
```

```bash
[root@master01 probe ]# kubectl get service
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
kubernetes    ClusterIP   192.168.0.1      <none>        443/TCP                         18d
mydb          ClusterIP   192.168.138.78   <none>        80/TCP                          11d
myservice     ClusterIP   192.168.81.58    <none>        80/TCP                          11d
springboot    NodePort    192.168.236.10   <none>        8080:31180/TCP,8081:31181/TCP   32m
springboot1   NodePort    192.168.78.180   <none>        8080:30180/TCP,8081:30181/TCP   89s
```
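Once both probes pass, the new Pod serves traffic through the springboot1 NodePort Service. A final sanity check might look like this (a sketch; replace <node-ip> with the IP of any cluster node, since NodePort 30181 from the manifest above is exposed on every node):

```bash
# Readiness endpoint through the NodePort mapped to the management port 8081
curl -I http://<node-ip>:30181/actuator/health

# The Deployment's Pod is listed here only while its readiness probe succeeds
kubectl get endpoints springboot1
```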