Table of Contents
- Overview and Comparison
- Deployment in Depth
- StatefulSet in Depth
- DaemonSet in Depth
- Use Cases and Best Practices
- Hands-On Examples
1. Overview and Comparison
Basic Concepts
All three are Kubernetes workload controllers that manage the lifecycle of Pods, but each has distinct characteristics and target use cases.
Core Comparison Table

| Feature | Deployment | StatefulSet | DaemonSet |
|---|---|---|---|
| Primary purpose | Stateless application deployment | Stateful application deployment | System-level daemons |
| Pod identity | Randomly generated | Fixed, ordered identifiers | Bound to a node |
| Deployment strategy | Rolling update / Recreate | Ordered updates | Rolling update |
| Storage | Ephemeral | Persistent | Usually stateless |
| Scaling | Arbitrary scale up/down | Ordered scaling | One Pod per node |
| Network identity | Dynamically assigned | Stable network identity | Node IP |
| Typical workloads | Web apps, API services | Databases, message queues | Log collection, monitoring |
2. Deployment in Depth
2.1 Basic Concepts
Deployment is the most commonly used controller in Kubernetes, designed for managing stateless applications.
2.2 Key Features
yaml
# Core features of a Deployment
✅ Declarative updates - describe the desired state; K8s converges to it
✅ Rolling updates - update applications with zero downtime
✅ Rollback - quickly revert to a previous revision
✅ Scaling - horizontally scale the number of Pod replicas
✅ Self-healing - automatically replace failed Pods
2.3 Deployment YAML in Detail
Basic Deployment Example
yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
  labels:
    app: nginx
    version: v1.0
  annotations:
    deployment.kubernetes.io/revision: "1"
spec:
  # Number of replicas
  replicas: 3
  # Selector - must match the labels in the template
  selector:
    matchLabels:
      app: nginx
  # Update strategy
  strategy:
    type: RollingUpdate   # rolling update
    rollingUpdate:
      maxUnavailable: 1   # at most 1 Pod unavailable during an update
      maxSurge: 1         # at most 1 extra Pod created during an update
  # Pod template
  template:
    metadata:
      labels:
        app: nginx
        version: v1.0
    spec:
      containers:
      - name: nginx
        image: nginx:1.20
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        # Health checks
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
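As a quick sanity check on the strategy above: with `replicas: 3`, `maxUnavailable: 1`, and `maxSurge: 1`, a rollout keeps between `replicas - maxUnavailable` ready Pods and `replicas + maxSurge` total Pods at any moment. A minimal shell sketch of that arithmetic (no cluster required):

```shell
#!/bin/sh
# Pod-count bounds during a rolling update:
# at least replicas - maxUnavailable Pods stay available,
# at most  replicas + maxSurge     Pods exist at any moment.
rolling_bounds() {
  replicas=$1; max_unavailable=$2; max_surge=$3
  echo "$((replicas - max_unavailable))-$((replicas + max_surge))"
}

rolling_bounds 3 1 1   # prints 2-4: never below 2 available, never above 4 total
```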
Advanced Deployment Configuration
yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-deployment
  labels:
    app: web-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web-app
  # Advanced update strategy
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # percentages are allowed
      maxSurge: 25%
  # Progress deadline - the rollout must complete within 10 minutes
  progressDeadlineSeconds: 600
  # Revision history limit - keep 10 revisions for rollback
  revisionHistoryLimit: 10
  template:
    metadata:
      labels:
        app: web-app
        tier: frontend
    spec:
      # Container configuration
      containers:
      - name: web-app
        image: myapp:v2.0
        ports:
        - containerPort: 8080
        # Environment variables
        env:
        - name: ENV
          value: "production"
        - name: DB_HOST
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: database_host
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: db_password
        # Resource limits
        resources:
          requests:
            memory: "256Mi"
            cpu: "200m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        # Volume mounts
        volumeMounts:
        - name: app-storage
          mountPath: /app/data
        - name: config-volume
          mountPath: /app/config
      # Volume definitions
      volumes:
      - name: app-storage
        emptyDir: {}
      - name: config-volume
        configMap:
          name: app-config
      # Pod scheduling policy
      affinity:
        podAntiAffinity:   # Pod anti-affinity - avoid co-locating replicas on one node
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - web-app
              topologyKey: kubernetes.io/hostname
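Note how percentage values behave here: Kubernetes rounds `maxUnavailable` down and `maxSurge` up when converting to Pod counts, so with 5 replicas and 25% each, the rollout may take down 1 Pod while creating up to 2 extra. A small shell sketch of that rounding:

```shell
#!/bin/sh
# Percentage-based rolling-update values:
# maxUnavailable rounds DOWN, maxSurge rounds UP.
floor_pct() { echo $(( $1 * $2 / 100 )); }          # floor(replicas * pct / 100)
ceil_pct()  { echo $(( ($1 * $2 + 99) / 100 )); }   # ceil(replicas * pct / 100)

floor_pct 5 25   # maxUnavailable: 25% of 5 -> 1
ceil_pct 5 25    # maxSurge:       25% of 5 -> 2
```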
2.4 Deployment Commands
Create and Manage
bash
# Create a Deployment
kubectl create deployment nginx --image=nginx:1.20 --replicas=3
# Apply a YAML file
kubectl apply -f deployment.yaml
# List Deployments
kubectl get deployments
kubectl describe deployment nginx-deployment
# View the full Deployment definition
kubectl get deployment nginx-deployment -o yaml
Scaling
bash
# Manual scaling
kubectl scale deployment nginx-deployment --replicas=5
# Autoscaling (HPA)
kubectl autoscale deployment nginx-deployment --cpu-percent=70 --min=3 --max=10
# Check autoscaler status
kubectl get hpa
Update and Rollback
bash
# Update the image
kubectl set image deployment/nginx-deployment nginx=nginx:1.21
# Watch the rollout
kubectl rollout status deployment/nginx-deployment
# View rollout history
kubectl rollout history deployment/nginx-deployment
# Roll back to the previous revision
kubectl rollout undo deployment/nginx-deployment
# Roll back to a specific revision
kubectl rollout undo deployment/nginx-deployment --to-revision=2
# Pause / resume a rollout
kubectl rollout pause deployment/nginx-deployment
kubectl rollout resume deployment/nginx-deployment
3. StatefulSet in Depth
3.1 Basic Concepts
StatefulSet is designed for stateful applications, giving each Pod a stable network identity and persistent storage.
3.2 Key Features
yaml
# Core features of a StatefulSet
✅ Stable Pod identity - nginx-0, nginx-1, nginx-2
✅ Ordered deployment and termination - Pods start and stop in sequence
✅ Stable network identity - fixed DNS names
✅ Persistent storage - an independent PVC per Pod
✅ Ordered scaling - operations proceed by ordinal
3.3 StatefulSet YAML in Detail
Basic StatefulSet Example
yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web-statefulset
spec:
  # Service name - must match the Headless Service
  serviceName: "web-service"
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.20
        ports:
        - containerPort: 80
        volumeMounts:
        - name: web-storage
          mountPath: /usr/share/nginx/html
  # Volume claim templates - create a PVC for each Pod
  volumeClaimTemplates:
  - metadata:
      name: web-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "standard"
      resources:
        requests:
          storage: 1Gi
---
# Headless Service - provides the stable network identity
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  clusterIP: None   # Headless Service
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
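The combination above gives each Pod a predictable DNS name of the form `<statefulset-name>-<ordinal>.<serviceName>.<namespace>.svc.cluster.local`. A small shell sketch that generates the names this manifest would yield:

```shell
#!/bin/sh
# Stable per-Pod DNS names produced by a StatefulSet + Headless Service:
# <statefulset-name>-<ordinal>.<serviceName>.<namespace>.svc.cluster.local
sts_dns_names() {
  name=$1; svc=$2; ns=$3; replicas=$4
  i=0
  while [ "$i" -lt "$replicas" ]; do
    echo "${name}-${i}.${svc}.${ns}.svc.cluster.local"
    i=$((i + 1))
  done
}

sts_dns_names web-statefulset web-service default 3
# web-statefulset-0.web-service.default.svc.cluster.local
# ... one line per replica, ordinals 0 through 2
```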
Database StatefulSet Example (MySQL)
yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-statefulset
spec:
  serviceName: mysql-headless
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      # Init container - generates the MySQL configuration
      - name: init-mysql
        image: mysql:5.7
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Derive server-id from the Pod ordinal
          [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Configure master/slave replication
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: root-password
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
          limits:
            cpu: 1000m
            memory: 2Gi
        # Health checks
        livenessProbe:
          exec:
            command:
            - mysqladmin
            - ping
            - -h
            - localhost
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - mysql
            - -h
            - 127.0.0.1
            - -e
            - "SELECT 1"
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql-config
  # Independent persistent storage for each Pod
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "fast-ssd"
      resources:
        requests:
          storage: 10Gi
---
# MySQL configuration ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
data:
  master.cnf: |
    [mysqld]
    log-bin=mysql-bin
    binlog-format=ROW
  slave.cnf: |
    [mysqld]
    super-read-only
    read-only
---
# MySQL Secret
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  root-password: bXlzcWxyb290cGFzcw==   # base64 of "mysqlrootpass"
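Two details of this manifest can be verified locally without a cluster: the init container's server-id derivation (ordinal extracted from the Pod hostname, here approximated with POSIX suffix stripping instead of the bash regex), and the Secret value, which is plain base64, not encryption:

```shell
#!/bin/sh
# Reproduce the init container's logic: take the ordinal from the Pod
# hostname and derive a unique MySQL server-id (100 + ordinal).
server_id_for() {
  ordinal=${1##*-}          # strip everything up to the last '-'
  echo $((100 + ordinal))
}

server_id_for mysql-statefulset-2   # -> 102

# Secret data is merely base64-encoded:
printf '%s' 'mysqlrootpass' | base64   # -> bXlzcWxyb290cGFzcw==
```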
3.4 StatefulSet Commands
Basic Operations
bash
# Create a StatefulSet
kubectl apply -f statefulset.yaml
# List StatefulSets
kubectl get statefulsets
kubectl describe statefulset web-statefulset
# List the Pods (note the ordered naming)
kubectl get pods -l app=web
# Output: web-statefulset-0, web-statefulset-1, web-statefulset-2
Scaling
bash
# Scale up (in order)
kubectl scale statefulset web-statefulset --replicas=5
# Scale down (in order, starting from the highest ordinal)
kubectl scale statefulset web-statefulset --replicas=2
# Watch the scaling progress
kubectl get pods -l app=web -w
Updates
bash
# Update the image (rolled out in order)
kubectl patch statefulset web-statefulset -p '{"spec":{"template":{"spec":{"containers":[{"name":"web","image":"nginx:1.21"}]}}}}'
# Watch the rollout
kubectl rollout status statefulset/web-statefulset
# View rollout history
kubectl rollout history statefulset/web-statefulset
Deletion
bash
# Delete the StatefulSet but keep its Pods (older kubectl used --cascade=false)
kubectl delete statefulset web-statefulset --cascade=orphan
# Delete the StatefulSet and its Pods
kubectl delete statefulset web-statefulset
# Delete the PVCs manually (they are not removed automatically with the StatefulSet)
kubectl delete pvc -l app=web
4. DaemonSet in Depth
4.1 Basic Concepts
A DaemonSet ensures that one Pod replica runs on every (or selected) node, and is typically used for system-level daemons.
4.2 Key Features
yaml
# Core features of a DaemonSet
✅ Node coverage - one Pod per node
✅ Automatic scheduling - new nodes get a Pod automatically
✅ Node affinity - can target specific nodes
✅ System-level - often runs with elevated privileges
✅ Rolling updates - supports update strategies
4.3 DaemonSet YAML in Detail
Basic DaemonSet Example (Log Collection)
yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-daemonset
  namespace: kube-system
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  # Update strategy
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      # Tolerate the master taint (newer clusters use node-role.kubernetes.io/control-plane)
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.12
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        # Mount host directories
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: fluentd-config
          mountPath: /fluentd/etc
      # Host volumes
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluentd-config
        configMap:
          name: fluentd-config
      # Service account
      serviceAccountName: fluentd
      # Graceful termination period
      terminationGracePeriodSeconds: 30
Monitoring DaemonSet Example (Node Exporter)
yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    app: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9100"
    spec:
      # Host network mode
      hostNetwork: true
      hostPID: true
      # Node selector - run only on Linux nodes
      nodeSelector:
        kubernetes.io/os: linux
      # Tolerate all NoSchedule taints
      tolerations:
      - operator: Exists
        effect: NoSchedule
      containers:
      - name: node-exporter
        image: prom/node-exporter:v1.3.1
        args:
        - --path.procfs=/host/proc
        - --path.sysfs=/host/sys
        - --path.rootfs=/host/root
        - --collector.filesystem.ignored-mount-points
        - ^/(dev|proc|sys|var/lib/docker/.+)($|/)
        - --collector.filesystem.ignored-fs-types
        - ^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: metrics
        resources:
          limits:
            cpu: 200m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 30Mi
        # Security context
        securityContext:
          runAsNonRoot: true
          runAsUser: 65534
        volumeMounts:
        - name: proc
          mountPath: /host/proc
          readOnly: true
        - name: sys
          mountPath: /host/sys
          readOnly: true
        - name: root
          mountPath: /host/root
          mountPropagation: HostToContainer
          readOnly: true
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys
      - name: root
        hostPath:
          path: /
---
# ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: node-exporter
  namespace: monitoring
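The `nodeSelector` above restricts scheduling to nodes carrying the matching label, so the DaemonSet's desired Pod count equals the number of matching nodes. A toy shell model of that selection (the node inventory is hypothetical, purely for illustration):

```shell
#!/bin/sh
# Toy model of DaemonSet scheduling with nodeSelector kubernetes.io/os=linux:
# exactly one Pod is desired per node whose label matches.
# Hypothetical "node:os" inventory:
nodes="node1:linux node2:linux node3:windows"

matching_nodes() {
  want=$1
  for n in $nodes; do
    [ "${n#*:}" = "$want" ] && echo "${n%%:*}"
  done
  return 0
}

matching_nodes linux | wc -l   # desired DaemonSet Pods on this inventory: 2
```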
Network Plugin DaemonSet Example
yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: calico-node
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
    spec:
      # Priority class - critical system component
      priorityClassName: system-node-critical
      # Host network
      hostNetwork: true
      # Tolerate all taints
      tolerations:
      - operator: Exists
        effect: NoSchedule
      - operator: Exists
        effect: NoExecute
      # Service account
      serviceAccountName: calico-node
      # Init containers
      initContainers:
      - name: upgrade-ipam
        image: calico/cni:v3.20.0
        command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
        env:
        - name: KUBERNETES_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - mountPath: /var/lib/cni/networks
          name: host-local-net-dir
        - mountPath: /host/opt/cni/bin
          name: cni-bin-dir
      containers:
      - name: calico-node
        image: calico/node:v3.20.0
        env:
        - name: DATASTORE_TYPE
          value: "kubernetes"
        - name: FELIX_TYPHAK8SSERVICENAME
          value: "calico-typha"
        - name: CALICO_NETWORKING_BACKEND
          value: "bird"
        - name: CLUSTER_TYPE
          value: "k8s,bgp"
        - name: IP
          value: "autodetect"
        - name: CALICO_IPV4POOL_IPIP
          value: "Always"
        - name: FELIX_IPINIPMTU
          value: "1440"
        - name: CALICO_IPV4POOL_CIDR
          value: "192.168.0.0/16"
        - name: CALICO_DISABLE_FILE_LOGGING
          value: "true"
        - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
          value: "ACCEPT"
        - name: FELIX_IPV6SUPPORT
          value: "false"
        - name: FELIX_LOGSEVERITYSCREEN
          value: "info"
        - name: FELIX_HEALTHENABLED
          value: "true"
        - name: NODENAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          privileged: true
        resources:
          requests:
            cpu: 250m
        livenessProbe:
          exec:
            command:
            - /bin/calico-node
            - -felix-live
            - -bird-live
          periodSeconds: 10
          initialDelaySeconds: 10
          failureThreshold: 6
        readinessProbe:
          exec:
            command:
            - /bin/calico-node
            - -felix-ready
            - -bird-ready
          periodSeconds: 10
        volumeMounts:
        - mountPath: /lib/modules
          name: lib-modules
          readOnly: true
        - mountPath: /run/xtables.lock
          name: xtables-lock
        - mountPath: /var/run/calico
          name: var-run-calico
        - mountPath: /var/lib/calico
          name: var-lib-calico
        - mountPath: /var/run/nodeagent
          name: policysync
        - mountPath: /sys/fs/bpf
          name: bpffs
        - mountPath: /var/log/calico/cni
          name: cni-log-dir
          readOnly: true
        - mountPath: /host/opt/cni/bin
          name: cni-bin-dir
        - mountPath: /host/etc/cni/net.d
          name: cni-net-dir
      volumes:
      - name: lib-modules
        hostPath:
          path: /lib/modules
      - name: var-run-calico
        hostPath:
          path: /var/run/calico
      - name: var-lib-calico
        hostPath:
          path: /var/lib/calico
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
      - name: policysync
        hostPath:
          type: DirectoryOrCreate
          path: /var/run/nodeagent
      - name: bpffs
        hostPath:
          path: /sys/fs/bpf
          type: Directory
      - name: cni-bin-dir
        hostPath:
          path: /opt/cni/bin
      - name: cni-net-dir
        hostPath:
          path: /etc/cni/net.d
      - name: cni-log-dir
        hostPath:
          path: /var/log/calico/cni
      - name: host-local-net-dir
        hostPath:
          path: /var/lib/cni/networks
4.4 DaemonSet Commands
Basic Operations
bash
# Create a DaemonSet
kubectl apply -f daemonset.yaml
# List DaemonSets
kubectl get daemonsets -n kube-system
kubectl describe daemonset fluentd-daemonset -n kube-system
# List the Pods managed by the DaemonSet
kubectl get pods -l app=fluentd -o wide
Updates
bash
# Update the DaemonSet image
kubectl set image daemonset/fluentd-daemonset fluentd=fluent/fluentd:v1.13 -n kube-system
# Watch the rollout
kubectl rollout status daemonset/fluentd-daemonset -n kube-system
# View rollout history
kubectl rollout history daemonset/fluentd-daemonset -n kube-system
# Roll back the DaemonSet
kubectl rollout undo daemonset/fluentd-daemonset -n kube-system
Node Management
bash
# See how the DaemonSet Pods are distributed across nodes
kubectl get pods -o wide -l app=fluentd
# Evict the DaemonSet Pod from a specific node (via a node taint)
kubectl taint nodes node1 key=value:NoSchedule
# Label specific nodes so the DaemonSet runs only there
kubectl label nodes node1 monitoring=enabled
5. Use Cases and Best Practices
5.1 Deployment Use Cases
yaml
Suitable for:
✅ Web applications and API services
✅ Microservice architectures
✅ Stateless backend services
✅ Frontend static asset servers
✅ Caching layers (e.g. Redis clusters)
Best practices:
🔹 Set appropriate resource limits
🔹 Configure health checks
🔹 Run multiple replicas for availability
🔹 Choose a sensible update strategy
🔹 Use Pod anti-affinity to avoid single points of failure
Deployment Best-Practice Example
yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: best-practice-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: best-practice-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: best-practice-app
    spec:
      # Anti-affinity - avoid scheduling Pods onto the same node
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - best-practice-app
              topologyKey: kubernetes.io/hostname
      containers:
      - name: app
        image: myapp:latest
        # Resource limits
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"
        # Health checks
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
          failureThreshold: 3
        # Graceful shutdown
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "sleep 10"]
5.2 StatefulSet Use Cases
yaml
Suitable for:
✅ Databases (MySQL, PostgreSQL, MongoDB)
✅ Distributed storage (Elasticsearch, Cassandra)
✅ Message queues (Kafka, RabbitMQ)
✅ Cache clusters (Redis Cluster)
✅ Distributed coordination services (ZooKeeper, etcd)
Best practices:
🔹 Use a Headless Service for stable network identity
🔹 Configure an appropriate StorageClass and PVCs
🔹 Use init containers for cluster bootstrapping
🔹 Configure ordered startup and shutdown
🔹 Implement backup and restore for the data
5.3 DaemonSet Use Cases
yaml
Suitable for:
✅ Log collection (Fluentd, Filebeat, Logstash)
✅ Monitoring agents (Node Exporter, cAdvisor)
✅ Network plugins (Calico, Flannel, Weave)
✅ Storage drivers (Ceph, GlusterFS clients)
✅ Security scanning (Falco, Twistlock)
Best practices:
🔹 Use host network and host PID only when required
🔹 Configure appropriate tolerations and node selectors
🔹 Set resource limits to avoid starving the host
🔹 Use privileged containers sparingly
🔹 Configure a rolling update strategy
6. Hands-On Examples
6.1 A Complete Three-Tier Application
Frontend Deployment
yaml
# Frontend - Nginx reverse proxy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  labels:
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
      tier: frontend
  template:
    metadata:
      labels:
        app: frontend
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx:1.20
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 200m
            memory: 256Mi
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-config
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
    tier: frontend
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
Backend Deployment
yaml
# Backend - Web API service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
  labels:
    tier: backend
spec:
  replicas: 5
  selector:
    matchLabels:
      app: backend
      tier: backend
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 2
  template:
    metadata:
      labels:
        app: backend
        tier: backend
    spec:
      containers:
      - name: api-server
        image: myapi:v1.0
        ports:
        - containerPort: 8080
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: database-url
        - name: REDIS_URL
          value: "redis://redis-service:6379"
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
        livenessProbe:
          httpGet:
            path: /api/health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /api/ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend
    tier: backend
  ports:
  - port: 8080
    targetPort: 8080
  type: ClusterIP
Database StatefulSet
yaml
# Database - PostgreSQL primary/replica cluster
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-statefulset
  labels:
    tier: database
spec:
  serviceName: postgres-headless
  replicas: 3
  selector:
    matchLabels:
      app: postgres
      tier: database
  template:
    metadata:
      labels:
        app: postgres
        tier: database
    spec:
      initContainers:
      - name: postgres-init
        image: postgres:13
        command:
        - bash
        - -c
        - |
          if [[ "$HOSTNAME" == "postgres-statefulset-0" ]]; then
            echo "Initializing master database..."
          else
            echo "Configuring slave database..."
            # Copy the data from the primary (-w: never prompt, use PGPASSWORD)
            pg_basebackup -h postgres-statefulset-0.postgres-headless -D /var/lib/postgresql/data -U replicator -v -P -w
          fi
        env:
        - name: PGPASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: replication-password
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
      containers:
      - name: postgres
        image: postgres:13
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          value: myapp
        - name: POSTGRES_USER
          value: myuser
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: password
        - name: POSTGRES_REPLICATION_USER
          value: replicator
        - name: POSTGRES_REPLICATION_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: replication-password
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
        - name: postgres-config
          mountPath: /etc/postgresql
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
          limits:
            cpu: 1000m
            memory: 2Gi
        livenessProbe:
          exec:
            command:
            - pg_isready
            - -U
            - myuser
            - -d
            - myapp
          initialDelaySeconds: 30
          periodSeconds: 10
      volumes:
      - name: postgres-config
        configMap:
          name: postgres-config
  volumeClaimTemplates:
  - metadata:
      name: postgres-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: fast-ssd
      resources:
        requests:
          storage: 20Gi
---
# Headless Service for StatefulSet
apiVersion: v1
kind: Service
metadata:
  name: postgres-headless
spec:
  clusterIP: None
  selector:
    app: postgres
    tier: database
  ports:
  - port: 5432
    targetPort: 5432
---
# Regular Service for client connections
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres
    tier: database
  ports:
  - port: 5432
    targetPort: 5432
  type: ClusterIP
Monitoring DaemonSet
yaml
# Monitoring - Node Exporter + log collector
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-daemonset
  namespace: monitoring
  labels:
    component: monitoring
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      hostNetwork: true
      hostPID: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      containers:
      # Node Exporter container
      - name: node-exporter
        image: prom/node-exporter:v1.3.1
        args:
        - --path.procfs=/host/proc
        - --path.sysfs=/host/sys
        - --collector.filesystem.ignored-mount-points
        - ^/(dev|proc|sys|var/lib/docker/.+)($|/)
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: metrics
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
          limits:
            cpu: 200m
            memory: 100Mi
        volumeMounts:
        - name: proc
          mountPath: /host/proc
          readOnly: true
        - name: sys
          mountPath: /host/sys
          readOnly: true
      # Fluentd log collection container
      - name: fluentd
        image: fluent/fluentd:v1.12
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 200m
            memory: 256Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: containers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: fluentd-config
          mountPath: /fluentd/etc
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluentd-config
        configMap:
          name: fluentd-config
6.2 Deployment and Management Scripts
One-Shot Deployment Script
bash
#!/bin/bash
# deploy-app.sh - deploy the three-tier application in one go
set -e
NAMESPACE="myapp"
ENVIRONMENT="${1:-dev}"
echo "🚀 Deploying the application to environment: $ENVIRONMENT"
# Create the namespace
kubectl create namespace $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -
# Create ConfigMaps and Secrets
echo "📝 Creating configs and secrets..."
kubectl apply -f configs/ -n $NAMESPACE
kubectl apply -f secrets/ -n $NAMESPACE
# Deploy the database StatefulSet
echo "🗃️ Deploying the database..."
kubectl apply -f database/statefulset.yaml -n $NAMESPACE
# Wait for the database to become ready
echo "⏳ Waiting for the database to start..."
kubectl wait --for=condition=ready pod -l app=postgres --timeout=300s -n $NAMESPACE
# Deploy the backend Deployment
echo "⚙️ Deploying the backend service..."
kubectl apply -f backend/deployment.yaml -n $NAMESPACE
kubectl wait --for=condition=available deployment/backend-deployment --timeout=300s -n $NAMESPACE
# Deploy the frontend Deployment
echo "🌐 Deploying the frontend service..."
kubectl apply -f frontend/deployment.yaml -n $NAMESPACE
kubectl wait --for=condition=available deployment/frontend-deployment --timeout=300s -n $NAMESPACE
# Deploy the monitoring DaemonSet
echo "📊 Deploying the monitoring components..."
kubectl apply -f monitoring/daemonset.yaml -n monitoring
echo "✅ Deployment complete!"
echo "🔗 Access endpoints:"
kubectl get services -n $NAMESPACE
Health-Check Script
bash
#!/bin/bash
# health-check.sh - application health check
NAMESPACE="myapp"
echo "🏥 Starting health check..."
# Check Deployment status
echo "📋 Deployment status:"
kubectl get deployments -n $NAMESPACE
# Check StatefulSet status
echo "📊 StatefulSet status:"
kubectl get statefulsets -n $NAMESPACE
# Check DaemonSet status
echo "👥 DaemonSet status:"
kubectl get daemonsets -n monitoring
# Check Pod status
echo "🏃 Pod status:"
kubectl get pods -n $NAMESPACE -o wide
# Check Service status
echo "🌐 Service status:"
kubectl get services -n $NAMESPACE
# Check storage status
echo "💾 Storage status:"
kubectl get pvc -n $NAMESPACE
# Run connectivity tests
echo "🔗 Running connectivity tests..."
kubectl run test-pod --image=busybox --rm -it --restart=Never -n $NAMESPACE -- sh -c "
echo 'Testing the database connection...'
nc -zv postgres-service 5432
echo 'Testing the backend API...'
wget -qO- http://backend-service:8080/api/health
echo 'Testing the frontend service...'
wget -qO- http://frontend-service
"
echo "✅ Health check complete!"
Scaling Script
bash
#!/bin/bash
# scale-app.sh - scale the application
NAMESPACE="myapp"
ACTION="${1:-scale}"
COMPONENT="${2:-all}"
REPLICAS="${3:-3}"
echo "📏 Scaling: $ACTION $COMPONENT -> $REPLICAS"
scale_deployment() {
  local name=$1
  local replicas=$2
  echo "🔄 Scaling Deployment $name to $replicas replicas"
  kubectl scale deployment $name --replicas=$replicas -n $NAMESPACE
  kubectl wait --for=condition=available deployment/$name --timeout=300s -n $NAMESPACE
}
scale_statefulset() {
  local name=$1
  local replicas=$2
  echo "🔄 Scaling StatefulSet $name to $replicas replicas"
  kubectl scale statefulset $name --replicas=$replicas -n $NAMESPACE
  # Wait for the scaling to finish
  for ((i=0; i<replicas; i++)); do
    kubectl wait --for=condition=ready pod/$name-$i --timeout=300s -n $NAMESPACE
  done
}
case $COMPONENT in
  "frontend")
    scale_deployment "frontend-deployment" $REPLICAS
    ;;
  "backend")
    scale_deployment "backend-deployment" $REPLICAS
    ;;
  "database")
    scale_statefulset "postgres-statefulset" $REPLICAS
    ;;
  "all")
    scale_deployment "frontend-deployment" $REPLICAS
    scale_deployment "backend-deployment" $REPLICAS
    # The database usually does not need frequent scaling
    ;;
  *)
    echo "❌ Unknown component: $COMPONENT"
    echo "Available components: frontend, backend, database, all"
    exit 1
    ;;
esac
echo "✅ Scaling complete!"
kubectl get pods -n $NAMESPACE -o wide
Summary
Decision Tree
text
Choosing the right controller:
├── Is the application stateful?
│   ├── Stateless → Deployment
│   └── Stateful → StatefulSet
├── Must it run on every node?
│   └── Yes → DaemonSet
└── Special needs?
    ├── Batch workload → Job/CronJob
    ├── One-off task → Job
    └── Scheduled task → CronJob
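The decision tree above can be written down as a small shell function, answering the two main questions with yes/no flags:

```shell
#!/bin/sh
# The controller decision tree as a function:
# per_node: must one Pod run on every node? stateful: does the app keep state?
pick_controller() {
  per_node=$1; stateful=$2   # "yes" or "no"
  if [ "$per_node" = "yes" ]; then
    echo DaemonSet
  elif [ "$stateful" = "yes" ]; then
    echo StatefulSet
  else
    echo Deployment
  fi
}

pick_controller no yes    # -> StatefulSet
pick_controller yes no    # -> DaemonSet
pick_controller no no     # -> Deployment
```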
Best-Practice Summary
- Resource management: always set requests and limits
- Health checks: configure liveness and readiness probes
- Rolling updates: use an appropriate update strategy
- Monitoring and logging: build in full observability
- Security: follow the principle of least privilege
- Backup and restore: define a backup strategy for stateful applications
Once you understand the characteristics and use cases of these three controllers, you can design and deploy Kubernetes applications with far more confidence! 🚀