Spreading pods across k8s nodes with pod anti-affinity (podAntiAffinity)

Background and use case

A key service in our system was configured with multiple replicas, but at deploy time all of the replicas ended up on the same host. When that host failed, every replica had to be rescheduled at once, causing an intermittent service outage.

Given that background, the goal is to spread a service's replicas across different hosts so that each host runs at most one replica. This is done with the pod anti-affinity property (podAntiAffinity): based on the labels of pods already running on a node, the scheduler will not place another pod carrying the same label onto that node, so each node ends up running at most one replica of the service.

The difference between pod affinity and anti-affinity

Affinity (podAffinity): schedule the pod onto the same node as pods carrying the specified label.

Anti-affinity (podAntiAffinity): avoid scheduling the pod onto the same node as pods carrying the specified label.
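For contrast, here is a minimal podAffinity sketch (the `app=cache` label and the co-location use case are hypothetical, not from the deployment in this article): it pulls the pod onto a node that already runs a cache pod.

```yaml
# Hypothetical: co-locate this pod with pods labeled app=cache
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - cache
            topologyKey: "kubernetes.io/hostname"
```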

Hands-on podAntiAffinity deployment

Anti-affinity comes in a hard variant and a soft variant:

requiredDuringSchedulingIgnoredDuringExecution: a hard requirement; the condition must be satisfied. Use this variant when the spreading guarantee matters. Note that if replicas exceeds the number of eligible nodes, the surplus pods stay Pending.

preferredDuringSchedulingIgnoredDuringExecution: a soft requirement; the condition may not be fully satisfied, i.e. a node may end up running more than one replica.

```yaml
# Only the label selector needs to be adapted to your workload,
# i.e. the key and values under matchExpressions.

# Hard requirement:
# if any pod on a node carries the label app=nginx,
# the new pod cannot be scheduled onto that node
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"

# Soft requirement:
# the scheduler prefers nodes without an app=nginx pod,
# but will still place the pod on such a node if no better node exists
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - nginx
              topologyKey: "kubernetes.io/hostname"
```
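topologyKey controls the failure domain over which the rule applies: kubernetes.io/hostname spreads replicas per node. As a sketch of a coarser variant (assuming your nodes carry the standard topology.kubernetes.io/zone label), the same rule can spread replicas per availability zone instead:

```yaml
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            # one replica per zone instead of one per node
            topologyKey: "topology.kubernetes.io/zone"
```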

The complete deployment.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 30%
      maxUnavailable: 0
    type: RollingUpdate
  minReadySeconds: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"
      restartPolicy: "Always"
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
```

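To check the spread after applying the manifest (assuming a cluster with at least 3 schedulable nodes and the file saved as deployment.yaml):

```shell
# apply the Deployment
kubectl apply -f deployment.yaml

# each replica should land on a different node (see the NODE column)
kubectl get pods -l app=nginx -o wide

# with the hard requirement, replicas that cannot be spread
# (e.g. a 4th replica on a 3-node cluster) stay Pending
kubectl get pods -l app=nginx --field-selector=status.phase=Pending
```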
Pod anti-affinity as used in a real production environment

```yaml
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            # Never schedule multiple replicas on the same node
            - topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: ${service}
                  app.kubernetes.io/instance: ${service}
```
The full manifest it comes from (Deployment plus Service; ${service}, ${env}, ${image} and ${replicas} are template variables filled in at deploy time):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${service}
  labels:
    app.kubernetes.io/name: ${service}
    app.kubernetes.io/version: 0.0.0
    app.kubernetes.io/instance: ${service}
    environment: ${env}
spec:
  replicas: ${replicas}
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      app.kubernetes.io/name: ${service}
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ${service}
        app.kubernetes.io/version: 0.0.0
        app.kubernetes.io/instance: ${service}
        logging: "false"
        armsPilotAutoEnable: "off"
        armsPilotCreateAppName: "${service}-${env}"
    spec:
      serviceAccountName: default
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: gemdale-registry.cn-shenzhen.cr.aliyuncs.com-secret
      containers:
        - name: ${service}
          image: ${image}
          imagePullPolicy: IfNotPresent
          env:
            - name: CONSUL_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: ELASTIC_APM_SERVER_URLS
              value: http://apm-server.logging:8200
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: SERVER_PORT
              value: "80"
            - name: JAVA_OPTS
              value: -Duser.timezone=Asia/Shanghai
            - name: WFWAPP
              value: wfw-applog
          volumeMounts:
            - mountPath: /data/appdata/
              name: appdata
            - mountPath: /data/config-repo/
              name: config-repo
            - mountPath: /data/logs/
              name: logs
            - mountPath: /mnt/hgfs/
              name: mnt-hgfs
          ports:
            - containerPort: 80
              name: http
          resources:
            {}
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: microservice
                    operator: In
                    values:
                      - "true"
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            # Never schedule multiple replicas on the same node
            - topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: ${service}
                  app.kubernetes.io/instance: ${service}
      volumes:
        - hostPath:
            path: /data/appdata/
            type: DirectoryOrCreate
          name: appdata
        - hostPath:
            path: /data/config-repo/
            type: DirectoryOrCreate
          name: config-repo
        - hostPath:
            path: /data/logs/
            type: DirectoryOrCreate
          name: logs
        - hostPath:
            path: /mnt/hgfs/
            type: DirectoryOrCreate
          name: mnt-hgfs
---
apiVersion: v1
kind: Service
metadata:
  name: ${service}
  labels:
    app.kubernetes.io/name: ${service}
    app.kubernetes.io/version: 0.0.0
    app.kubernetes.io/instance: ${service}
    environment: ${env}
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
  selector:
    app.kubernetes.io/name: ${service}
    app.kubernetes.io/instance: ${service}
```
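On newer clusters (Kubernetes 1.19+), the same one-replica-per-node goal can also be expressed with topologySpreadConstraints, which handles the "more replicas than nodes" case more explicitly via whenUnsatisfiable. A sketch using the same app=nginx label as the earlier example:

```yaml
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        # DoNotSchedule behaves like the hard anti-affinity requirement;
        # ScheduleAnyway behaves like the soft one
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: nginx
```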