Linux: k8s, Deployment, Pod

  1. Declarative configuration files: declare that a given resource in the cluster should be in a specified state, and let the cluster converge on it.
  2. Which resources in the cluster can be managed this way? (see the sketch after this list)
  3. Controllers: control the number of Pods and their runtime parameters.
  4. A Deployment is flexible to manage: creating, deleting, running, and updating Pods never requires touching the Pods directly, only updating the Deployment's configuration.
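
A quick, illustrative sketch of that declarative workflow (the manifest name refers to the file created in the next section; nothing here is specific to this cluster):

# list the resource types this cluster can manage (Deployment, Pod, Service, ...)
kubectl api-resources

# preview how a manifest differs from the live object, then reconcile toward it
kubectl diff -f nginx-deployment.yml
kubectl apply -f nginx-deployment.yml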

1. How to control Pods

Deployment ----> ReplicaSet (runs the specified number of Pods based on the template) ----> creates the Pods the service needs
[root@control ~]# cat nginx-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80


[root@control ~]# source  .kube/k8s_bash_completion
[root@control ~]# kubectl create -f  nginx-deployment.yml
deployment.apps/nginx-deployment created
[root@control ~]# kubectl get deployments.apps -l app=nginx
No resources found in default namespace.
// Nothing matches because the Deployment object itself carries no labels (Labels: <none> in the describe output below); only its selector and Pod template use app=nginx
[root@control ~]# kubectl get deployments.apps
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx           2/2     2            2           3d21h
nginx-deployment   3/3     3            3           29s
[root@control ~]# kubectl describe deployments.apps nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Tue, 24 Sep 2024 14:27:32 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:         nginx:latest
    Port:          80/TCP
    Host Port:     0/TCP
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-bf56f49c (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  85s   deployment-controller  Scaled up replica set nginx-deployment-bf56f49c to 3
[root@control ~]# kubectl get rs  nginx-deployment-bf56f49c
NAME                        DESIRED   CURRENT   READY   AGE
nginx-deployment-bf56f49c   3         3         3       2m22s
[root@control ~]# kubectl get pods -l app=nginx
NAME                              READY   STATUS    RESTARTS   AGE
nginx-deployment-bf56f49c-9c8gw   1/1     Running   0          6m48s
nginx-deployment-bf56f49c-n59hg   1/1     Running   0          6m48s
nginx-deployment-bf56f49c-v887p   1/1     Running   0          6m48s
[root@control ~]# kubectl get pods -l app=nginx --show-labels
NAME                              READY   STATUS    RESTARTS   AGE     LABELS
nginx-deployment-bf56f49c-9c8gw   1/1     Running   0          7m32s   app=nginx,pod-template-hash=bf56f49c
nginx-deployment-bf56f49c-n59hg   1/1     Running   0          7m32s   app=nginx,pod-template-hash=bf56f49c
nginx-deployment-bf56f49c-v887p   1/1     Running   0          7m32s   app=nginx,pod-template-hash=bf56f49c
// To make sure the worker nodes already have the image needed for the update, manually copy the image to the worker nodes
[root@control ~]# docker save -o nginx-19.1.tar nginx:1.19.1
[root@control ~]# scp nginx-19.1.tar root@node1:/root
root@node1's password:
nginx-19.1.tar                                                                                                                                         100%  130MB
[root@control ~]# scp nginx-19.1.tar root@node2:/root
root@node2's password:
nginx-19.1.tar                                                                                                                                         100%  130MB  51.7MB/s   00:02
[root@node1 ~]# ctr -n k8s.io image import  nginx-19.1.tar
[root@node2 ~]# ctr -n k8s.io image import  nginx-19.1.tar
// check the images stored on the worker node
[root@node2 ~]# crictl -r unix:///var/run/containerd/containerd.sock images

Back on the control node, perform the update:
// update the image
[root@control ~]# kubectl set image deployments nginx-deployment nginx=nginx:1.19.1
deployment.apps/nginx-deployment image updated
[root@control ~]# kubectl get pods -l app=nginx
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7f588fbd68-bjd67   1/1     Running   0          9s
nginx-deployment-7f588fbd68-n2fls   1/1     Running   0          5s
nginx-deployment-7f588fbd68-qnwdn   1/1     Running   0          8s
[root@control ~]# kubectl describe deployments.apps nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Tue, 24 Sep 2024 14:27:32 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:         nginx:1.19.1
    Port:          80/TCP
    Host Port:     0/TCP
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  nginx-deployment-bf56f49c (0/0 replicas created)
NewReplicaSet:   nginx-deployment-7f588fbd68 (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  15m   deployment-controller  Scaled up replica set nginx-deployment-bf56f49c to 3
  Normal  ScalingReplicaSet  49s   deployment-controller  Scaled up replica set nginx-deployment-7f588fbd68 to 1
  Normal  ScalingReplicaSet  48s   deployment-controller  Scaled down replica set nginx-deployment-bf56f49c to 2 from 3
  Normal  ScalingReplicaSet  48s   deployment-controller  Scaled up replica set nginx-deployment-7f588fbd68 to 2 from 1
  Normal  ScalingReplicaSet  45s   deployment-controller  Scaled down replica set nginx-deployment-bf56f49c to 1 from 2
  Normal  ScalingReplicaSet  45s   deployment-controller  Scaled up replica set nginx-deployment-7f588fbd68 to 3 from 2
  Normal  ScalingReplicaSet  44s   deployment-controller  Scaled down replica set nginx-deployment-bf56f49c to 0 from 1
// After an update, the Deployment creates a new ReplicaSet to run the new Pods and keeps the old ReplicaSet (scaled to 0) so the rollout can be rolled back
// How many old ReplicaSets are retained is controlled by the Deployment's .spec.revisionHistoryLimit (10 by default) unless configured otherwise
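// To keep more or fewer old ReplicaSets than that, the limit can be changed on the Deployment; a minimal sketch (the value 2 is only an example):
kubectl patch deployment nginx-deployment -p '{"spec":{"revisionHistoryLimit":2}}'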
[root@control ~]# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
my-nginx-7549dd6888           2         2         2       3d21h
nginx-deployment-7f588fbd68   3         3         3       2m18s
nginx-deployment-bf56f49c     0         0         0       16m
[root@control ~]# kubectl edit deployments.apps nginx-deployment
deployment.apps/nginx-deployment edited
// Watch the rollout progress
// This rollout gets stuck because the image set by the kubectl edit above, nginx:1.191, cannot be pulled
[root@control ~]# kubectl rollout status deployment nginx-deployment  
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
^C
[root@control ~]# kubectl get pods -l app=nginx

// one Pod is in an error state, three Pods are running normally
NAME                                READY   STATUS             RESTARTS   AGE
nginx-deployment-6dc44567c6-85glr   0/1     ImagePullBackOff   0          54s
nginx-deployment-7f588fbd68-bjd67   1/1     Running            0          8m58s
nginx-deployment-7f588fbd68-n2fls   1/1     Running            0          8m54s
nginx-deployment-7f588fbd68-qnwdn   1/1     Running            0          8m57s
[root@control ~]# kubectl get deployments.apps nginx-deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     1            3           23m
// view the rollout history
[root@control ~]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>
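
// CHANGE-CAUSE is empty because no change cause was recorded; it can be filled in by annotating the Deployment after each change (the message text is only an example):
kubectl annotate deployment nginx-deployment kubernetes.io/change-cause="update image to nginx:1.19.1"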

// view the parameters of revision 2
[root@control ~]# kubectl rollout history deployment nginx-deployment --revision=2
deployment.apps/nginx-deployment with revision #2
Pod Template:
  Labels:       app=nginx
        pod-template-hash=7f588fbd68
  Containers:
   nginx:
    Image:      nginx:1.19.1
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>
  Node-Selectors:       <none>
  Tolerations:  <none>


// roll back the failed update
[root@control ~]# kubectl rollout undo deployment nginx-deployment
deployment.apps/nginx-deployment rolled back
[root@control ~]# kubectl get deployments.apps nginx-deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           27m
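
// By default the undo returns to the previous revision; to jump to a specific revision from the history instead, the --to-revision flag can be used, then the Pod template's image checked (the revision number is only an example):
kubectl rollout undo deployment nginx-deployment --to-revision=2
kubectl get deployment nginx-deployment -o jsonpath='{.spec.template.spec.containers[0].image}'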

A Deployment is the most common way to run stateless services. Typical use cases include (example commands follow this list):

  1. Use a Deployment to create a ReplicaSet. The ReplicaSet creates the Pods in the background. Check the rollout status to see whether it succeeded or failed.
  2. Declare a new state for the Pods by updating the Deployment's PodTemplateSpec. This creates a new ReplicaSet, and the Deployment moves Pods from the old ReplicaSet to the new one at a controlled rate.
  3. If the current state is unstable, roll back to an earlier Deployment revision. Each rollback updates the Deployment's revision.
  4. Scale the Deployment up to handle higher load.
  5. Pause the Deployment to apply several fixes to the PodTemplateSpec, then resume it to roll them out together.
  6. Use the Deployment's status to tell whether a rollout is stuck.
  7. Clean up old ReplicaSets that are no longer needed.
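
Several of these operations map directly onto kubectl subcommands. A minimal sketch against the Deployment above (the replica count and image tag are only examples):

# 4. scale up to handle more load
kubectl scale deployment nginx-deployment --replicas=5
# 5. pause, apply several template changes, then resume them as a single rollout
kubectl rollout pause deployment nginx-deployment
kubectl set image deployment nginx-deployment nginx=nginx:1.21.0
kubectl rollout resume deployment nginx-deployment
# 6. check whether the rollout is stuck
kubectl rollout status deployment nginx-deployment --timeout=120s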

The most direct differences between a stateless and a stateful service are:

  1. whether the Pods need fixed, dedicated storage
  2. whether the Pods need fixed network identity
  3. whether the Pods must start in a defined order

Stateful services are usually implemented with a StatefulSet. Typical use cases:

  1. Stable persistent storage: a Pod can still reach the same persisted data after it is rescheduled, implemented with PVCs, e.g. a relational database
  2. Stable network identity: a Pod keeps the same PodName and HostName after rescheduling, implemented with a headless Service (a Service without a cluster IP)
  3. Ordered deployment and ordered scale-up: Pods are ordered, and deployment or scale-up proceeds in the defined order (from 0 to N-1; all earlier Pods must be Running and Ready before the next one starts), implemented with init containers
  4. Ordered scale-down and ordered deletion (from N-1 to 0)

Using nginx as a reverse proxy // a stateless service: it does not hold any real application data

Running a web application on an LNMP stack // a stateful service: it ships application code, and the database stores the application data, so that data must be persisted

Exercise: running nginx as a StatefulSet:

[root@control ~]# cat nginx-stateful.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      storageClassName: local-storage
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 4Gi

[root@control ~]# cat pv-1.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 4Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/lv/swap
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
[root@control ~]# cat pv-2.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv2
spec:
  capacity:
    storage: 4Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/lv/swap
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
[root@control ~]# cat local_storage.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
[root@control ~]# kubectl create -f local_storage.yml
storageclass.storage.k8s.io/local-storage created
[root@control ~]# kubectl get storageclasses.storage.k8s.io
NAME            PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-storage   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  2s

On node1 and node2:
[root@node1 ~]# mkfs.xfs -f /dev/cs_bogon/swap
meta-data=/dev/cs_bogon/swap     isize=512    agcount=4, agsize=256768 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
data     =                       bsize=4096   blocks=1027072, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@node1 ~]# mkdir /mnt/lv/swap -p
[root@node1 ~]# mount /dev/cs_bogon/swap /mnt/lv/swap/
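
// The mount above does not survive a reboot; a minimal sketch for making it persistent, assuming the same device and mount point as above:
echo '/dev/cs_bogon/swap  /mnt/lv/swap  xfs  defaults  0 0' >> /etc/fstab
mount -a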

Back on the control node, create the persistent volumes:
[root@control ~]# kubectl create -f pv-1.yml
persistentvolume/example-pv created
[root@control ~]# kubectl create -f pv-2.yml
Error from server (AlreadyExists): error when creating "pv-2.yml": persistentvolumes "example-pv" already exists
// pv-2.yml still had metadata.name: example-pv; change it to example-pv2 and recreate
[root@control ~]# vim pv-2.yml
[root@control ~]# kubectl create -f pv-2.yml
persistentvolume/example-pv2 created
[root@control ~]# kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    VOLUMEATTRIBUTESCLASS   REASON   AGE
example-pv    4Gi        RWO            Delete           Available           local-storage   <unset>                          25s
example-pv2   4Gi        RWO            Delete           Available           local-storage   <unset>                          6s
[root@control ~]# kubectl apply -f nginx-stateful.yml
service/nginx created
statefulset.apps/web created
[root@control ~]# kubectl get statefulsets.apps
NAME   READY   AGE
web    2/2     7s
[root@control ~]# kubectl get pvc
NAME        STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS    VOLUMEATTRIBUTESCLASS   AGE
www-web-0   Bound    example-pv    4Gi        RWO            local-storage   <unset>                 16s
www-web-1   Bound    example-pv2   4Gi        RWO            local-storage   <unset>                 14s
[root@control ~]# kubectl describe statefulsets.apps web
Name:               web
Namespace:          default
CreationTimestamp:  Tue, 24 Sep 2024 17:18:48 +0800
Selector:           app=nginx
Labels:             <none>
Annotations:        <none>
Replicas:           2 desired | 2 total
Update Strategy:    RollingUpdate
  Partition:        0
Pods Status:        2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.19.1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /usr/share/nginx/html from www (rw)
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Volume Claims:
  Name:          www
  StorageClass:  local-storage
  Labels:        <none>
  Annotations:   <none>
  Capacity:      4Gi
  Access Modes:  [ReadWriteOnce]
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  55s   statefulset-controller  create Claim www-web-0 Pod web-0 in StatefulSet web success
  Normal  SuccessfulCreate  55s   statefulset-controller  create Pod web-0 in StatefulSet web successful
  Normal  SuccessfulCreate  53s   statefulset-controller  create Claim www-web-1 Pod web-1 in StatefulSet web success
  Normal  SuccessfulCreate  53s   statefulset-controller  create Pod web-1 in StatefulSet web successful
[root@control ~]# kubectl get pods --watch -l app=nginx
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7f588fbd68-bjd67   1/1     Running   0          160m
nginx-deployment-7f588fbd68-n2fls   1/1     Running   0          160m
nginx-deployment-7f588fbd68-qnwdn   1/1     Running   0          160m
web-0                               1/1     Running   0          3m15s
web-1                               1/1     Running   0          3m13s
^C
// Delete every pod labelled app=nginx: the Deployment replaces its Pods under new random names, while the StatefulSet recreates web-0 and web-1 with the same names
[root@control ~]# kubectl delete pod -l app=nginx
pod "nginx-deployment-7f588fbd68-bjd67" deleted
pod "nginx-deployment-7f588fbd68-n2fls" deleted
pod "nginx-deployment-7f588fbd68-qnwdn" deleted
pod "web-0" deleted
pod "web-1" deleted

[root@control ~]#
[root@control ~]# kubectl get statefulsets.apps
NAME   READY   AGE
web    2/2     4m43s
[root@control ~]# kubectl get pods  -l app=nginx
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7f588fbd68-4lrb5   1/1     Running   0          36s
nginx-deployment-7f588fbd68-8jnm5   1/1     Running   0          36s
nginx-deployment-7f588fbd68-mvgzg   1/1     Running   0          36s
web-0                               1/1     Running   0          35s
web-1                               1/1     Running   0          33s
[root@control ~]# kubectl describe statefulsets.apps web
Name:               web
Namespace:          default
CreationTimestamp:  Tue, 24 Sep 2024 17:18:48 +0800
Selector:           app=nginx
Labels:             <none>
Annotations:        <none>
Replicas:           2 desired | 2 total
Update Strategy:    RollingUpdate
  Partition:        0
Pods Status:        2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.19.1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /usr/share/nginx/html from www (rw)
  Volumes:         <none>
  Node-Selectors:  <none>
  Tolerations:     <none>
Volume Claims:
  Name:          www
  StorageClass:  local-storage
  Labels:        <none>
  Annotations:   <none>
  Capacity:      4Gi
  Access Modes:  [ReadWriteOnce]
Events:
  Type    Reason            Age                  From                    Message
  ----    ------            ----                 ----                    -------
  Normal  SuccessfulCreate  5m16s                statefulset-controller  create Claim www-web-0 Pod web-0 in StatefulSet web success
  Normal  SuccessfulCreate  5m14s                statefulset-controller  create Claim www-web-1 Pod web-1 in StatefulSet web success
  Normal  SuccessfulCreate  48s (x2 over 5m16s)  statefulset-controller  create Pod web-0 in StatefulSet web successful
  Normal  SuccessfulCreate  46s (x2 over 5m14s)  statefulset-controller  create Pod web-1 in StatefulSet web successful
[root@control ~]# for i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done
web-0
web-1

Conclusion: neither the Pod names nor their persistent-volume bindings change, so both the data and the communication endpoint stay fixed (no matter how the Pods are replaced, they can always be reached by Pod name).
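
Because the Service is headless, each StatefulSet Pod also gets a stable DNS name of the form <pod-name>.<service-name> (here web-0.nginx and web-1.nginx). A quick way to verify this from inside the cluster, using a throwaway pod (the image and pod name are only examples):

# resolve the per-pod DNS record published by the headless service
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup web-0.nginx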
