Background and use case
A critical service in our business was configured with multiple replicas, but at deploy time several identical replicas landed on the same host. When that host failed, all of the replicas were evicted and rescheduled at once, causing intermittent service outages.

Given this background, the goal is to spread a service's replicas across different hosts so that each host runs exactly one replica. The mechanism used here is the Pod anti-affinity property (podAntiAffinity): the scheduler looks at the labels of Pods already running on a node and refuses to schedule another Pod with a matching label onto that node, so each node ends up running only one replica of the service.
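The filtering step described above can be illustrated with a toy model (this is a simplified sketch for intuition, not the real kube-scheduler; all names are hypothetical): a node is feasible only if none of its running Pods match the anti-affinity label selector.

```python
# Toy model of required pod anti-affinity (NOT the real scheduler):
# filter out nodes that already run a pod whose labels match the
# incoming pod's anti-affinity selector.

def feasible_nodes(nodes, pods_on_node, selector):
    """nodes: list of node names.
    pods_on_node: node name -> list of label dicts of pods running there.
    selector: labels that must NOT all be present on any co-located pod."""
    ok = []
    for node in nodes:
        conflict = any(
            all(pod.get(k) == v for k, v in selector.items())
            for pod in pods_on_node.get(node, [])
        )
        if not conflict:
            ok.append(node)
    return ok

nodes = ["node-1", "node-2", "node-3"]
running = {"node-1": [{"app": "nginx"}], "node-2": [{"app": "redis"}]}
# node-1 already runs an app=nginx pod, so only node-2 and node-3 remain.
print(feasible_nodes(nodes, running, {"app": "nginx"}))  # ['node-2', 'node-3']
```

This also makes the failure mode visible: with a hard requirement, once every node runs one replica, additional replicas have no feasible node and stay Pending.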
The difference between pod affinity and anti-affinity
Affinity (podAffinity): schedule onto the same node as Pods carrying the specified label.
Anti-affinity (podAntiAffinity): avoid nodes that already run Pods carrying the specified label.
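For comparison, the affinity counterpart looks almost identical in YAML; only the field name differs. A hypothetical example that co-locates the Pod with Pods labeled `app=redis` (the label value is an assumption for illustration):

```yaml
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - redis
        topologyKey: "kubernetes.io/hostname"
```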
Deploying with podAntiAffinity
Anti-affinity comes in a hard form and a soft form:
requiredDuringSchedulingIgnoredDuringExecution: a hard requirement; the condition must be satisfied. To guarantee the replicas are spread out, use this form.
preferredDuringSchedulingIgnoredDuringExecution: a soft requirement; the condition may be only partially satisfied, so a single node may end up running more than one replica.
```yaml
# Only the label configuration needs to change, i.e. the key and
# values under matchExpressions.

# Hard requirement:
# if a node already runs a Pod whose labels satisfy app=nginx,
# no further Pod with that label can be scheduled onto it.
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: "kubernetes.io/hostname"

# Soft requirement:
# even if a node already runs a Pod matching app=nginx, the Pod can still
# be placed there; the scheduler prefers nodes without a matching Pod and
# only falls back to such a node when no better one is available.
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:          # note: a mapping, not a list item
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - nginx
          topologyKey: "kubernetes.io/hostname"
```
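The soft form can be sketched the same way as a toy model (again a simplified illustration, not the real scheduler): instead of filtering a conflicting node out, the scheduler subtracts the term's weight from that node's score, so a conflicting node can still win when nothing better exists.

```python
# Toy model of preferred (soft) pod anti-affinity: nodes that violate the
# term lose `weight` points instead of being excluded outright.

def best_node(nodes, pods_on_node, selector, weight=100):
    def score(node):
        violated = any(
            all(pod.get(k) == v for k, v in selector.items())
            for pod in pods_on_node.get(node, [])
        )
        return -weight if violated else 0
    # Pick the highest-scoring node; ties resolve to the first candidate.
    return max(nodes, key=score)

running = {"node-1": [{"app": "nginx"}], "node-2": [{"app": "nginx"}]}
# node-3 has no app=nginx pod yet, so it is preferred.
print(best_node(["node-1", "node-2", "node-3"], running, {"app": "nginx"}))
# With every node occupied, some node is still returned -- the pod is
# scheduled anyway rather than left Pending.
print(best_node(["node-1", "node-2"], running, {"app": "nginx"}))
```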
The complete deployment.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 30%
      maxUnavailable: 0
    type: RollingUpdate
  minReadySeconds: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"
      restartPolicy: "Always"
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
```
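A pitfall worth guarding against in manifests like the one above: if the anti-affinity `labelSelector` does not actually match the Pod template's own labels, the replicas are silently not spread. A small checker (hypothetical helper, handling only the `In` operator used in this article) can validate that invariant:

```python
# Check that every matchExpressions requirement (operator In) is satisfied
# by the pod template's labels -- i.e. the anti-affinity term really
# targets the deployment's own replicas.

def selector_matches(template_labels, match_expressions):
    for expr in match_expressions:
        key, op, values = expr["key"], expr["operator"], expr.get("values", [])
        if op == "In" and template_labels.get(key) not in values:
            return False
    return True

exprs = [{"key": "app", "operator": "In", "values": ["nginx"]}]
print(selector_matches({"app": "nginx"}, exprs))  # True: replicas will repel each other
print(selector_matches({"app": "web"}, exprs))    # False: anti-affinity term misses them
```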
Pod anti-affinity as used in a real production environment
```yaml
podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  # Never schedule multiple replicas on the same node
  - topologyKey: kubernetes.io/hostname
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: ${service}
        app.kubernetes.io/instance: ${service}
```
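The same term also generalizes beyond hosts: swapping the `topologyKey` changes the failure domain the replicas are spread across. For example (assuming the nodes carry the standard `topology.kubernetes.io/zone` label, which cloud providers usually set), this variant would keep replicas in different availability zones instead of merely on different hosts:

```yaml
podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  # Never schedule multiple replicas in the same availability zone
  - topologyKey: topology.kubernetes.io/zone
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: ${service}
        app.kubernetes.io/instance: ${service}
```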
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${service}
  labels:
    app.kubernetes.io/name: ${service}
    app.kubernetes.io/version: 0.0.0
    app.kubernetes.io/instance: ${service}
    environment: ${env}
spec:
  replicas: ${replicas}
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      app.kubernetes.io/name: ${service}
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ${service}
        app.kubernetes.io/version: 0.0.0
        app.kubernetes.io/instance: ${service}
        logging: "false"
        armsPilotAutoEnable: "off"
        armsPilotCreateAppName: "${service}-${env}"
    spec:
      serviceAccountName: default
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: gemdale-registry.cn-shenzhen.cr.aliyuncs.com-secret
      containers:
      - name: ${service}
        image: ${image}
        imagePullPolicy: IfNotPresent
        env:
        - name: CONSUL_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: ELASTIC_APM_SERVER_URLS
          value: http://apm-server.logging:8200
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: SERVER_PORT
          value: "80"
        - name: JAVA_OPTS
          value: -Duser.timezone=Asia/Shanghai
        - name: WFWAPP
          value: wfw-applog
        volumeMounts:
        - mountPath: /data/appdata/
          name: appdata
        - mountPath: /data/config-repo/
          name: config-repo
        - mountPath: /data/logs/
          name: logs
        - mountPath: /mnt/hgfs/
          name: mnt-hgfs
        ports:
        - containerPort: 80
          name: http
        resources: {}
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: microservice
                operator: In
                values:
                - "true"
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          # Never schedule multiple replicas on the same node
          - topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app.kubernetes.io/name: ${service}
                app.kubernetes.io/instance: ${service}
      volumes:
      - hostPath:
          path: /data/appdata/
          type: DirectoryOrCreate
        name: appdata
      - hostPath:
          path: /data/config-repo/
          type: DirectoryOrCreate
        name: config-repo
      - hostPath:
          path: /data/logs/
          type: DirectoryOrCreate
        name: logs
      - hostPath:
          path: /mnt/hgfs/
          type: DirectoryOrCreate
        name: mnt-hgfs
---
apiVersion: v1
kind: Service
metadata:
  name: ${service}
  labels:
    app.kubernetes.io/name: ${service}
    app.kubernetes.io/version: 0.0.0
    app.kubernetes.io/instance: ${service}
    environment: ${env}
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app.kubernetes.io/name: ${service}
    app.kubernetes.io/instance: ${service}
```