Spreading a service's pods across nodes in k8s with pod anti-affinity (podAntiAffinity)

Background and use case

A critical service in our system was configured with multiple replicas, yet at deployment time several identical replicas landed on the same host. When that host failed, all of the replicas had to be rescheduled at the same time, causing intermittent service outages.

Given that background, the goal is to spread a service's replicas across different hosts so that each host runs exactly one replica. This is what the pod anti-affinity feature (podAntiAffinity) provides: based on the labels of pods already running on a node, the scheduler will not place another pod with the same label onto that node, so each node ends up running only one replica.

The difference between pod affinity and pod anti-affinity

Affinity (podAffinity): schedule the pod onto the same node as pods that carry the specified label.

Anti-affinity (podAntiAffinity): do not schedule the pod onto a node that already runs pods with the specified label.
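
For contrast with the anti-affinity examples that follow, here is a minimal podAffinity sketch (the app=nginx label is reused from the examples below; the surrounding Deployment is assumed) that co-schedules a pod onto nodes already running nginx pods:

    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"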

Hands-on deployment with podAntiAffinity

Anti-affinity rules come in two forms: a hard requirement and a soft preference.

requiredDuringSchedulingIgnoredDuringExecution: a hard requirement; the rule must be satisfied, so this form gives the strongest guarantee that replicas are spread out. Note that if there are more replicas than eligible nodes, the surplus pods will remain Pending.

preferredDuringSchedulingIgnoredDuringExecution: a soft preference; the rule may be only partially satisfied, which means several replicas can still end up on the same node.

# Only the label selector needs to be adjusted for your own workload, i.e. the key and values under matchExpressions

# Hard requirement
# If a pod labeled app=nginx is already running on a node, this pod must not be scheduled onto that node
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"

# Soft preference
# Even if a pod labeled app=nginx is already running on a node, this pod may still be scheduled there; the scheduler prefers other nodes first and falls back to such a node only when no better choice exists
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - nginx
              topologyKey: "kubernetes.io/hostname"
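
The topologyKey defines the failure domain the rule is evaluated over. With kubernetes.io/hostname, as above, at most one matching replica runs per node. A sketch of zone-level spreading instead, assuming the cluster's nodes carry the standard topology.kubernetes.io/zone label:

    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "topology.kubernetes.io/zone"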

The complete deployment.yaml configuration

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 30%
      maxUnavailable: 0
    type: RollingUpdate
  minReadySeconds: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"
      restartPolicy: "Always"
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
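
The production example in the next section selects pods with matchLabels rather than matchExpressions. For a single key/value pair the two selectors are equivalent, so the rule above could also be written as the sketch below; when matchLabels lists several keys, a pod must match all of them for the rule to apply:

      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: nginx
            topologyKey: "kubernetes.io/hostname"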

Pod anti-affinity as used in a real production environment

        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            # Never schedule multiple replicas on the same node
            - topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: ${service}
                  app.kubernetes.io/instance: ${service}

The full manifest (Deployment plus Service) that the snippet above belongs to:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${service}
  labels:
    app.kubernetes.io/name: ${service}
    app.kubernetes.io/version: 0.0.0
    app.kubernetes.io/instance: ${service}
    environment: ${env}
spec:
  replicas: ${replicas}
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      app.kubernetes.io/name: ${service}
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ${service}
        app.kubernetes.io/version: 0.0.0
        app.kubernetes.io/instance: ${service}
        logging: "false"
        armsPilotAutoEnable: "off"
        armsPilotCreateAppName: "${service}-${env}"
    spec:
      serviceAccountName: default
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: gemdale-registry.cn-shenzhen.cr.aliyuncs.com-secret
      containers:
        - name: ${service}
          image: ${image}
          imagePullPolicy: IfNotPresent
          env:
            - name: CONSUL_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: ELASTIC_APM_SERVER_URLS
              value: http://apm-server.logging:8200
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: SERVER_PORT
              value: "80"
            - name: JAVA_OPTS
              value: -Duser.timezone=Asia/Shanghai
            - name: WFWAPP
              value: wfw-applog

          volumeMounts:
            - mountPath: /data/appdata/
              name: appdata
            - mountPath: /data/config-repo/
              name: config-repo
            - mountPath: /data/logs/
              name: logs
            - mountPath: /mnt/hgfs/
              name: mnt-hgfs
          ports:
            - containerPort: 80
              name: http
          resources:
            {}

      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: microservice
                    operator: In
                    values:
                      - "true"
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            # Never schedule multiple replicas on the same node
            - topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: ${service}
                  app.kubernetes.io/instance: ${service}
      volumes:
        - hostPath:
            path: /data/appdata/
            type: DirectoryOrCreate
          name: appdata
        - hostPath:
            path: /data/config-repo/
            type: DirectoryOrCreate
          name: config-repo
        - hostPath:
            path: /data/logs/
            type: DirectoryOrCreate
          name: logs
        - hostPath:
            path: /mnt/hgfs/
            type: DirectoryOrCreate
          name: mnt-hgfs
---
apiVersion: v1
kind: Service
metadata:
  name: ${service}
  labels:
    app.kubernetes.io/name: ${service}
    app.kubernetes.io/version: 0.0.0
    app.kubernetes.io/instance: ${service}
    environment: ${env}
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
  selector:
    app.kubernetes.io/name: ${service}
    app.kubernetes.io/instance: ${service}
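
Because the manifest also restricts scheduling to nodes labeled microservice=true via nodeAffinity, the hard anti-affinity rule will leave extra replicas Pending once every eligible node already runs one. If that trade-off is not acceptable, the same term can be relaxed into a soft preference, sketched here with the same ${service} placeholders:

        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: ${service}
                    app.kubernetes.io/instance: ${service}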