Kubernetes (K8s): A Complete Guide to Container Orchestration for Java Developers

Introduction

When your Java application needs to:

  • Run across multiple servers with automatic load balancing

  • Achieve zero-downtime deployments and automatic rollbacks

  • Scale automatically based on load

  • Restart failed containers automatically

  • Manage configuration and secrets

then Docker alone is no longer enough: you need Kubernetes. This guide takes you from zero to a working knowledge of Kubernetes and shows you how to deploy and manage Java applications on K8s.

What Is Kubernetes?

Kubernetes (K8s for short, because there are 8 letters between the "K" and the "s") is a container orchestration platform originally open-sourced by Google, used to automate the deployment, scaling, and management of containerized applications.

Why Do You Need Kubernetes?

| Scenario           | Traditional approach           | Kubernetes approach                   |
|--------------------|--------------------------------|---------------------------------------|
| Deployment         | SSH into servers manually      | Declarative config, automated deploys |
| Load balancing     | Configure Nginx/HAProxy        | Built-in Service load balancing       |
| Scaling            | Start new instances by hand    | One command, or fully automatic       |
| Failure recovery   | Manual monitoring and restarts | Automatic detection and restarts      |
| Config management  | Config files scattered around  | Centralized via ConfigMap             |
| Rolling updates    | Update servers one by one      | Automated rolling updates             |

Core Concepts

1. Cluster Architecture

┌─────────────────────────────────────────────┐
│              Kubernetes Cluster              │
├─────────────────────────────────────────────┤
│  Master Node (Control Plane)                │
│  ┌──────────┬──────────┬──────────────┐    │
│  │ API      │ Scheduler│ Controller   │    │
│  │ Server   │          │ Manager      │    │
│  └──────────┴──────────┴──────────────┘    │
├─────────────────────────────────────────────┤
│  Worker Nodes                                │
│  ┌────────────┐  ┌────────────┐            │
│  │  Node 1    │  │  Node 2    │            │
│  │  ┌──────┐  │  │  ┌──────┐  │            │
│  │  │ Pod  │  │  │  │ Pod  │  │            │
│  │  │┌────┐│  │  │  │┌────┐│  │            │
│  │  ││App ││  │  │  ││App ││  │            │
│  │  │└────┘│  │  │  │└────┘│  │            │
│  │  └──────┘  │  │  └──────┘  │            │
│  └────────────┘  └────────────┘            │
└─────────────────────────────────────────────┘

2. Core Components

Pod
  • The smallest deployable unit in K8s

  • Can contain one or more containers

  • Containers in a Pod share network and storage

  • Typically one Pod runs one application container

Deployment
  • Manages the number of Pod replicas

  • Supports rolling updates and rollbacks

  • Declaratively manages the application's desired state

Service
  • Provides a stable network entry point for Pods

  • Automatic load balancing

  • Service discovery

ConfigMap & Secret
  • ConfigMap: stores non-sensitive configuration

  • Secret: stores sensitive data (passwords, keys, etc.)

Namespace
  • Logically isolates environments or teams

  • e.g. dev, test, prod

Ingress
  • Manages HTTP/HTTPS routes from outside the cluster to Services inside it

  • Provides load balancing, SSL termination, and more

Volume & PersistentVolume
  • Volume: Pod-level storage

  • PersistentVolume: cluster-level persistent storage

Installing Kubernetes

Local Development Environment

1. Minikube (recommended for learning)

# macOS
brew install minikube

# Linux
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Windows (using Chocolatey)
choco install minikube

# Start Minikube
minikube start --driver=docker --cpus=4 --memory=8192

# Verify
kubectl cluster-info
kubectl get nodes
2. Docker Desktop (built-in Kubernetes)
  1. Open Docker Desktop

  2. Settings → Kubernetes → Enable Kubernetes

  3. Wait for Kubernetes to finish starting

3. Kind (Kubernetes in Docker)

# Install Kind
brew install kind  # macOS
# or
go install sigs.k8s.io/kind@latest

# Create a cluster
kind create cluster --name dev-cluster

# List clusters
kind get clusters

Installing kubectl

kubectl is the Kubernetes command-line tool.

# macOS
brew install kubectl

# Linux
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Windows
choco install kubernetes-cli

# Verify
kubectl version --client

Common kubectl Commands

Cluster Management

# Show cluster info
kubectl cluster-info

# List nodes
kubectl get nodes

# List all resources
kubectl get all

# List resources in a specific namespace
kubectl get all -n <namespace>

# Check control-plane component health
# (componentstatuses is deprecated since v1.19; on newer clusters probe the components directly)
kubectl get componentstatuses

Pod Operations

# List Pods
kubectl get pods
kubectl get pods -o wide  # show more details
kubectl get pods --all-namespaces

# Describe a Pod
kubectl describe pod <pod-name>

# View Pod logs
kubectl logs <pod-name>
kubectl logs -f <pod-name>  # follow in real time
kubectl logs <pod-name> -c <container-name>  # multi-container Pod

# Open a shell inside a Pod
kubectl exec -it <pod-name> -- /bin/bash

# Delete a Pod
kubectl delete pod <pod-name>

Deployment Operations

# Create a Deployment
kubectl create deployment myapp --image=myapp:1.0

# List Deployments
kubectl get deployments
kubectl describe deployment <deployment-name>

# Scale up/down
kubectl scale deployment <deployment-name> --replicas=5

# Update the image
kubectl set image deployment/<deployment-name> <container-name>=<new-image>

# Watch rollout status
kubectl rollout status deployment/<deployment-name>

# View rollout history
kubectl rollout history deployment/<deployment-name>

# Roll back
kubectl rollout undo deployment/<deployment-name>
kubectl rollout undo deployment/<deployment-name> --to-revision=2

# Delete a Deployment
kubectl delete deployment <deployment-name>

Service Operations

# Create a Service
kubectl expose deployment <deployment-name> --port=8080 --target-port=8080

# List Services
kubectl get services
kubectl describe service <service-name>

# Delete a Service
kubectl delete service <service-name>

Working with Config Files

# Apply a config file
kubectl apply -f deployment.yaml
kubectl apply -f ./k8s/  # apply every file in a directory

# View a resource's configuration
kubectl get deployment <name> -o yaml

# Edit a resource in place
kubectl edit deployment <name>

# Delete resources defined in a file
kubectl delete -f deployment.yaml

Namespace Operations

# List namespaces
kubectl get namespaces

# Create a namespace
kubectl create namespace dev

# Delete a namespace
kubectl delete namespace dev

# Operate within a specific namespace
kubectl get pods -n dev
kubectl apply -f app.yaml -n dev

Debugging Commands

# View events
kubectl get events
kubectl get events --sort-by=.metadata.creationTimestamp

# View resource usage
kubectl top nodes
kubectl top pods

# Port forwarding (access a Pod locally)
kubectl port-forward pod/<pod-name> 8080:8080
kubectl port-forward service/<service-name> 8080:8080

# Copy files
kubectl cp <pod-name>:/path/to/file ./local-file
kubectl cp ./local-file <pod-name>:/path/to/file

Deploying a Java Application to Kubernetes

1. Prepare the Docker Image

First, build a Docker image for your Java application (see the Docker tutorial).

# Build the image
docker build -t myapp:1.0 .

# Push it to a registry (Docker Hub or a private registry)
docker tag myapp:1.0 myusername/myapp:1.0
docker push myusername/myapp:1.0

2. Create a Deployment

Create deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3  # run 3 replicas
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myusername/myapp:1.0
        ports:
        - containerPort: 8080
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: "prod"
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /actuator/health
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /actuator/health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 5

3. Create a Service

Create service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: LoadBalancer  # or ClusterIP / NodePort

4. Deploy the Application

# Apply the configuration
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

# Check deployment status
kubectl get deployments
kubectl get pods
kubectl get services

# View details
kubectl describe deployment myapp
kubectl logs -f <pod-name>

Configuration Management

Using ConfigMap

Create configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  application.properties: |
    server.port=8080
    spring.application.name=myapp
    logging.level.root=INFO
  database.url: "jdbc:mysql://mysql-service:3306/mydb"

Using it in a Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
      - name: myapp
        image: myusername/myapp:1.0
        env:
        - name: DATABASE_URL
          valueFrom:
            configMapKeyRef:
              name: myapp-config
              key: database.url
        volumeMounts:
        - name: config
          mountPath: /app/config
      volumes:
      - name: config
        configMap:
          name: myapp-config
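On the Java side, a value injected via configMapKeyRef arrives as a plain environment variable. A minimal sketch of reading it with a local-development fallback (the variable name matches the env entry above; the fallback URL is only an example):

```java
public class AppConfig {
    /** Reads DATABASE_URL injected from the ConfigMap, falling back to a local default. */
    static String databaseUrl() {
        String fromEnv = System.getenv("DATABASE_URL");
        return (fromEnv != null && !fromEnv.isBlank())
                ? fromEnv
                : "jdbc:mysql://localhost:3306/mydb";  // local fallback, not the cluster value
    }

    public static void main(String[] args) {
        System.out.println("Using database: " + databaseUrl());
    }
}
```

In a Spring Boot app you would normally let the framework bind the variable instead (e.g. spring.datasource.url=${DATABASE_URL}), but the mechanism underneath is the same environment lookup.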

Using Secret

Create secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret
type: Opaque
data:
  # Note: values must be base64-encoded
  database-password: bXlzZWNyZXRwYXNzd29yZA==
  api-key: YXBpa2V5MTIzNDU2Nzg5MA==
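The base64 values above can be produced and checked with any tool (`echo -n 'mysecretpassword' | base64` in a shell). In Java, a quick sketch using java.util.Base64; the sample password is the one encoded in the manifest above:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SecretEncoder {
    public static void main(String[] args) {
        // Encode a plaintext value the way a Secret manifest expects it
        String encoded = Base64.getEncoder()
                .encodeToString("mysecretpassword".getBytes(StandardCharsets.UTF_8));
        System.out.println(encoded);  // bXlzZWNyZXRwYXNzd29yZA==

        // Decode what the API server returns from `kubectl get secret -o yaml`
        String decoded = new String(Base64.getDecoder().decode(encoded), StandardCharsets.UTF_8);
        System.out.println(decoded);  // mysecretpassword
    }
}
```

Remember that base64 is an encoding, not encryption: anyone who can read the Secret can decode it.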

Using it in a Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
      - name: myapp
        image: myusername/myapp:1.0
        env:
        - name: DATABASE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: myapp-secret
              key: database-password
        - name: API_KEY
          valueFrom:
            secretKeyRef:
              name: myapp-secret
              key: api-key

Creating a Secret from the command line:

# From a file
kubectl create secret generic myapp-secret \
  --from-file=./secret-file.txt

# From literal values
kubectl create secret generic myapp-secret \
  --from-literal=password=mypassword \
  --from-literal=api-key=myapikey

# List Secrets (values are not shown)
kubectl get secrets
kubectl describe secret myapp-secret

# View Secret values
kubectl get secret myapp-secret -o yaml

Complete Example: Spring Boot + MySQL

Project Structure

k8s/
├── namespace.yaml
├── mysql-pv.yaml
├── mysql-deployment.yaml
├── mysql-service.yaml
├── app-configmap.yaml
├── app-secret.yaml
├── app-deployment.yaml
├── app-service.yaml
└── ingress.yaml

1. namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: myapp-prod

2. mysql-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  namespace: myapp-prod
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data/mysql"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: myapp-prod
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

3. mysql-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: myapp-prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: myapp-secret
              key: mysql-root-password
        - name: MYSQL_DATABASE
          value: mydb
        volumeMounts:
        - name: mysql-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-storage
        persistentVolumeClaim:
          claimName: mysql-pvc

4. mysql-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  namespace: myapp-prod
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
    targetPort: 3306
  clusterIP: None  # Headless Service

5. app-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: myapp-prod
data:
  application.properties: |
    server.port=8080
    spring.application.name=myapp
    spring.datasource.url=jdbc:mysql://mysql-service:3306/mydb?useSSL=false&serverTimezone=UTC
    spring.datasource.username=root
    spring.jpa.hibernate.ddl-auto=update
    spring.jpa.show-sql=false
    logging.level.root=INFO

6. app-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret
  namespace: myapp-prod
type: Opaque
data:
  mysql-root-password: cm9vdDEyMw==  # root123
  spring-datasource-password: cm9vdDEyMw==

7. app-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: myapp-prod
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myusername/myapp:1.0
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          name: http
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: "prod"
        - name: SPRING_DATASOURCE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: myapp-secret
              key: spring-datasource-password
        - name: JAVA_OPTS
          value: "-Xmx512m -Xms256m -XX:+UseContainerSupport"
        resources:
          requests:
            memory: "512Mi"
            cpu: "500m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
        livenessProbe:
          httpGet:
            path: /actuator/health/liveness
            port: 8080
          initialDelaySeconds: 90
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /actuator/health/readiness
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 5
          timeoutSeconds: 3
          failureThreshold: 3
        volumeMounts:
        - name: config
          mountPath: /app/config
      volumes:
      - name: config
        configMap:
          name: myapp-config

8. app-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  namespace: myapp-prod
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP

9. ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  namespace: myapp-prod
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-tls
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 80

Deployment Steps

# 1. Create the namespace
kubectl apply -f namespace.yaml

# 2. Create the Secret first (other resources depend on it)
kubectl apply -f app-secret.yaml

# 3. Deploy MySQL
kubectl apply -f mysql-pv.yaml
kubectl apply -f mysql-deployment.yaml
kubectl apply -f mysql-service.yaml

# 4. Wait for MySQL to become ready
kubectl wait --for=condition=ready pod -l app=mysql -n myapp-prod --timeout=120s

# 5. Deploy the application
kubectl apply -f app-configmap.yaml
kubectl apply -f app-deployment.yaml
kubectl apply -f app-service.yaml

# 6. Configure Ingress (optional)
kubectl apply -f ingress.yaml

# 7. Verify the deployment
kubectl get all -n myapp-prod
kubectl get pods -n myapp-prod -w

Rolling Updates and Rollbacks

Rolling Updates

# Option 1: update the image
kubectl set image deployment/myapp myapp=myusername/myapp:2.0 -n myapp-prod

# Option 2: edit the Deployment
kubectl edit deployment myapp -n myapp-prod

# Option 3: apply an updated manifest
kubectl apply -f app-deployment.yaml

# Watch the rollout
kubectl rollout status deployment/myapp -n myapp-prod

# Pause the rollout
kubectl rollout pause deployment/myapp -n myapp-prod

# Resume the rollout
kubectl rollout resume deployment/myapp -n myapp-prod

Rollbacks

# View revision history
kubectl rollout history deployment/myapp -n myapp-prod

# Inspect a specific revision
kubectl rollout history deployment/myapp --revision=2 -n myapp-prod

# Roll back to the previous revision
kubectl rollout undo deployment/myapp -n myapp-prod

# Roll back to a specific revision
kubectl rollout undo deployment/myapp --to-revision=2 -n myapp-prod

Autoscaling

Horizontal Pod Autoscaler (HPA)

Create hpa.yaml:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
  namespace: myapp-prod
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 50
        periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
      - type: Percent
        value: 100
        periodSeconds: 30

Apply the HPA:

kubectl apply -f hpa.yaml

# Check HPA status
kubectl get hpa -n myapp-prod
kubectl describe hpa myapp-hpa -n myapp-prod

# View autoscaling events
kubectl get events --sort-by=.metadata.creationTimestamp -n myapp-prod

Note: HPA requires the Metrics Server to be installed:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

Monitoring and Logging

Viewing Logs

# Follow a Pod's logs
kubectl logs -f <pod-name> -n myapp-prod

# View the previous container's logs (after a crash and restart)
kubectl logs <pod-name> --previous -n myapp-prod

# View logs from multiple Pods
kubectl logs -l app=myapp -n myapp-prod --tail=100

# Export logs to a file
kubectl logs <pod-name> -n myapp-prod > app.log

Monitoring Resource Usage

# Node resources
kubectl top nodes

# Pod resources
kubectl top pods -n myapp-prod

# Continuous monitoring
watch kubectl top pods -n myapp-prod

Best Practices

1. Resource Limits

Always set resource requests and limits on your containers:

resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"

2. Health Checks

Configure liveness and readiness probes:

livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 90
  periodSeconds: 10

readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8080
  initialDelaySeconds: 60
  periodSeconds: 5
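Spring Boot Actuator provides /actuator/health/liveness and /actuator/health/readiness for you. To make the mechanics concrete, here is a minimal hand-rolled health endpoint using the JDK's built-in com.sun.net.httpserver; this is a sketch of what a probe target looks like, not a replacement for Actuator:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class HealthServer {
    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        // The kubelet probe only cares about the status code: 200 = healthy, anything else = failing
        server.createContext("/health", exchange -> {
            byte[] body = "{\"status\":\"UP\"}".getBytes();
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        start(8080);  // the kubelet would GET http://<pod-ip>:8080/health
    }
}
```

A real readiness handler should additionally check downstream dependencies (database connection, caches), while a liveness handler should stay cheap and dependency-free so a slow database does not get the Pod killed.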

3. Use Namespaces to Isolate Environments

# Development
kubectl create namespace dev

# Testing
kubectl create namespace test

# Production
kubectl create namespace prod

4. Labels and Selectors

Organize resources with labels:

metadata:
  labels:
    app: myapp
    version: v1.0
    environment: prod
    team: backend

5. Rolling Update Strategy

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1        # at most 1 extra Pod above the desired replica count
    maxUnavailable: 0  # no Pod may be unavailable during the update

6. Use Init Containers

Run initialization tasks before the main container starts:

spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.28
    command: ['sh', '-c', 'until nc -z mysql-service 3306; do echo waiting for db; sleep 2; done;']
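The same wait-for-dependency logic is sometimes useful inside the application itself, or in integration tests. A sketch of the `nc -z` loop in plain Java (host and port are whatever your dependency uses):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.time.Duration;
import java.time.Instant;

public class WaitForPort {
    /** Polls host:port until a TCP connection succeeds or the deadline passes. */
    public static boolean await(String host, int port, Duration timeout) {
        Instant deadline = Instant.now().plus(timeout);
        while (Instant.now().isBefore(deadline)) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 1000);
                return true;  // port is accepting connections
            } catch (IOException retry) {
                try {
                    Thread.sleep(2000);  // same 2-second backoff as the busybox loop
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false;
    }
}
```

Note that a TCP connect only proves the port is open; MySQL may still be initializing, so pairing this with a readiness probe on the database remains the more robust pattern.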

7. Configure Graceful Shutdown

lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "sleep 15"]

In the Java application:

# application.properties
server.shutdown=graceful
spring.lifecycle.timeout-per-shutdown-phase=30s
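Outside Spring, the same effect comes from a JVM shutdown hook that drains in-flight work before the process exits; Kubernetes sends SIGTERM (after the preStop hook completes), which triggers the hook. A minimal sketch:

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GracefulShutdown {
    /** Stops accepting new tasks and waits up to `timeout` for running ones to finish. */
    static boolean drain(ExecutorService pool, Duration timeout) {
        pool.shutdown();  // no new tasks; already-running tasks keep going
        try {
            return pool.awaitTermination(timeout.toMillis(), TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        // Runs on SIGTERM; keep the timeout below terminationGracePeriodSeconds (default 30s)
        Runtime.getRuntime().addShutdownHook(
                new Thread(() -> drain(pool, Duration.ofSeconds(25))));
        // ... submit work to `pool` ...
    }
}
```

The budget matters: preStop sleep + drain time must fit inside the Pod's terminationGracePeriodSeconds, or the kubelet follows up with SIGKILL.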

8. Use NetworkPolicy for Security

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp-network-policy
  namespace: myapp-prod
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: nginx-ingress
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: mysql
    ports:
    - protocol: TCP
      port: 3306

Troubleshooting Common Problems

1. Pod Fails to Start

# Check Pod status
kubectl get pods -n myapp-prod

# View details
kubectl describe pod <pod-name> -n myapp-prod

# View logs
kubectl logs <pod-name> -n myapp-prod

# Common causes:
# - Image pull failure (ImagePullBackOff)
# - Insufficient resources (Insufficient memory/cpu)
# - Application crashing on startup (CrashLoopBackOff)

2. Service Is Unreachable

# Check the Service
kubectl get svc -n myapp-prod
kubectl describe svc myapp-service -n myapp-prod

# Check the Endpoints
kubectl get endpoints myapp-service -n myapp-prod

# Test connectivity from inside the cluster
kubectl run -it --rm debug --image=curlimages/curl --restart=Never -- sh
curl http://myapp-service.myapp-prod.svc.cluster.local

3. Configuration Not Taking Effect

# Inspect the ConfigMap
kubectl get configmap myapp-config -n myapp-prod -o yaml

# Inspect the Secret
kubectl get secret myapp-secret -n myapp-prod -o yaml

# Restart Pods so they pick up the new configuration
kubectl rollout restart deployment/myapp -n myapp-prod

4. Performance Problems

# Check resource usage
kubectl top nodes
kubectl top pods -n myapp-prod

# Check events
kubectl get events --sort-by=.metadata.creationTimestamp -n myapp-prod

# Check the HPA
kubectl get hpa -n myapp-prod
kubectl describe hpa myapp-hpa -n myapp-prod

Helm: The Kubernetes Package Manager

Helm is a package manager for Kubernetes that simplifies application deployment and management.

Installing Helm

# macOS
brew install helm

# Linux
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Windows
choco install kubernetes-helm

# Verify the installation
helm version

Creating a Helm Chart

# Create a new Chart
helm create myapp

# Chart directory layout
myapp/
├── Chart.yaml          # Chart metadata
├── values.yaml         # default configuration values
├── charts/             # dependent charts
└── templates/          # Kubernetes resource templates
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    └── _helpers.tpl

Example Chart.yaml

apiVersion: v2
name: myapp
description: A Helm chart for Java Spring Boot application
type: application
version: 1.0.0
appVersion: "1.0"
keywords:
  - java
  - spring-boot
maintainers:
  - name: Your Name
    email: your.email@example.com

Example values.yaml

replicaCount: 3

image:
  repository: myusername/myapp
  tag: "1.0"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80
  targetPort: 8080

ingress:
  enabled: true
  className: nginx
  hosts:
    - host: myapp.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: myapp-tls
      hosts:
        - myapp.example.com

resources:
  limits:
    cpu: 1000m
    memory: 1Gi
  requests:
    cpu: 500m
    memory: 512Mi

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

env:
  - name: SPRING_PROFILES_ACTIVE
    value: "prod"
  - name: JAVA_OPTS
    value: "-Xmx512m -Xms256m"

mysql:
  enabled: true
  auth:
    rootPassword: "root123"
    database: "mydb"
  primary:
    persistence:
      enabled: true
      size: 10Gi

Example templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.fullname" . }}
  labels:
    {{- include "myapp.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "myapp.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "myapp.selectorLabels" . | nindent 8 }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - name: http
          containerPort: {{ .Values.service.targetPort }}
          protocol: TCP
        env:
        {{- toYaml .Values.env | nindent 8 }}
        livenessProbe:
          httpGet:
            path: /actuator/health/liveness
            port: http
          initialDelaySeconds: 90
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /actuator/health/readiness
            port: http
          initialDelaySeconds: 60
          periodSeconds: 5
        resources:
          {{- toYaml .Values.resources | nindent 10 }}

Deploying with Helm

# Validate Chart syntax
helm lint myapp/

# Render the templates to inspect the generated resources
helm template myapp myapp/ --values myapp/values.yaml

# Install the Chart
helm install myapp-release myapp/ -n myapp-prod --create-namespace

# Use a custom values file
helm install myapp-release myapp/ -f values-prod.yaml -n myapp-prod

# List installed Releases
helm list -n myapp-prod

# Show Release status
helm status myapp-release -n myapp-prod

# Upgrade a Release
helm upgrade myapp-release myapp/ -n myapp-prod

# Roll back a Release
helm rollback myapp-release 1 -n myapp-prod

# Uninstall a Release
helm uninstall myapp-release -n myapp-prod

Using Existing Helm Charts

# Add a Chart repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Search for Charts
helm search repo mysql

# Show Chart information
helm show chart bitnami/mysql
helm show values bitnami/mysql

# Install a Chart
helm install my-mysql bitnami/mysql \
  --set auth.rootPassword=secretpassword \
  --set auth.database=mydb \
  -n myapp-prod

CI/CD Integration

GitLab CI Example

.gitlab-ci.yml:

stages:
  - build
  - test
  - deploy

variables:
  DOCKER_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  KUBE_NAMESPACE: myapp-prod

build:
  stage: build
  image: maven:3.8-openjdk-17
  script:
    - mvn clean package -DskipTests
  artifacts:
    paths:
      - target/*.jar
    expire_in: 1 hour

docker-build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $DOCKER_IMAGE .
    - docker push $DOCKER_IMAGE
  dependencies:
    - build

test:
  stage: test
  image: maven:3.8-openjdk-17
  script:
    - mvn test
  dependencies:
    - build

deploy-prod:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl config use-context $KUBE_CONTEXT
    - kubectl set image deployment/myapp myapp=$DOCKER_IMAGE -n $KUBE_NAMESPACE
    - kubectl rollout status deployment/myapp -n $KUBE_NAMESPACE
  only:
    - main
  when: manual

GitHub Actions Example

.github/workflows/deploy.yml:

name: Build and Deploy to K8s

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

env:
  REGISTRY: docker.io
  IMAGE_NAME: myusername/myapp

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
    
    - name: Set up JDK 17
      uses: actions/setup-java@v3
      with:
        java-version: '17'
        distribution: 'temurin'
        cache: maven
    
    - name: Build with Maven
      run: mvn clean package -DskipTests
    
    - name: Run tests
      run: mvn test
    
    - name: Upload artifact
      uses: actions/upload-artifact@v3
      with:
        name: app-jar
        path: target/*.jar

  docker-build-push:
    needs: build-and-test
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
    
    - name: Download artifact
      uses: actions/download-artifact@v3
      with:
        name: app-jar
        path: target/
    
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v2
    
    - name: Log in to Docker Hub
      uses: docker/login-action@v2
      with:
        username: ${{ secrets.DOCKER_USERNAME }}
        password: ${{ secrets.DOCKER_PASSWORD }}
    
    - name: Build and push
      uses: docker/build-push-action@v4
      with:
        context: .
        push: true
        tags: |
          ${{ env.IMAGE_NAME }}:${{ github.sha }}
          ${{ env.IMAGE_NAME }}:latest

  deploy-to-k8s:
    needs: docker-build-push
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
    - uses: actions/checkout@v3
    
    - name: Set up kubectl
      uses: azure/setup-kubectl@v3
    
    - name: Configure kubectl
      run: |
        echo "${{ secrets.KUBE_CONFIG }}" | base64 -d > kubeconfig
        # `export` would not survive into the next step; persist via GITHUB_ENV instead
        echo "KUBECONFIG=$PWD/kubeconfig" >> "$GITHUB_ENV"
    
    - name: Deploy to Kubernetes
      run: |
        kubectl set image deployment/myapp \
          myapp=${{ env.IMAGE_NAME }}:${{ github.sha }} \
          -n myapp-prod
        kubectl rollout status deployment/myapp -n myapp-prod

Production Considerations

1. Security

Use RBAC (role-based access control)

apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp-sa
  namespace: myapp-prod
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myapp-role
  namespace: myapp-prod
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myapp-rolebinding
  namespace: myapp-prod
subjects:
- kind: ServiceAccount
  name: myapp-sa
  namespace: myapp-prod
roleRef:
  kind: Role
  name: myapp-role
  apiGroup: rbac.authorization.k8s.io

Using it in the Deployment:

spec:
  template:
    spec:
      serviceAccountName: myapp-sa
Security context

securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  fsGroup: 1000
  capabilities:
    drop:
      - ALL
  readOnlyRootFilesystem: true
Pod Security Policy

Note: PodSecurityPolicy was deprecated in v1.21 and removed in v1.25; on current clusters, use Pod Security Admission (the pod-security.kubernetes.io namespace labels) instead. The legacy manifest looked like this:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
  runAsUser:
    rule: 'MustRunAsNonRoot'
  seLinux:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'

2. Resource Quotas

apiVersion: v1
kind: ResourceQuota
metadata:
  name: myapp-quota
  namespace: myapp-prod
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "20"
    services: "10"

3. Limit Ranges

apiVersion: v1
kind: LimitRange
metadata:
  name: myapp-limits
  namespace: myapp-prod
spec:
  limits:
  - max:
      cpu: "2"
      memory: 2Gi
    min:
      cpu: 100m
      memory: 128Mi
    default:
      cpu: 500m
      memory: 512Mi
    defaultRequest:
      cpu: 250m
      memory: 256Mi
    type: Container

4. Backup Strategy

Back up the cluster with Velero:

# Install Velero
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.5.0 \
  --bucket velero-backups \
  --secret-file ./credentials-velero

# Back up a namespace
velero backup create myapp-backup --include-namespaces myapp-prod

# Scheduled backups
velero schedule create myapp-daily \
  --schedule="0 2 * * *" \
  --include-namespaces myapp-prod

# Restore from a backup
velero restore create --from-backup myapp-backup

5. Disaster Recovery

# Export all resources
kubectl get all --all-namespaces -o yaml > cluster-backup.yaml

# Export a specific namespace
kubectl get all -n myapp-prod -o yaml > myapp-backup.yaml

# Restore
kubectl apply -f myapp-backup.yaml

Monitoring and Observability

Installing Prometheus and Grafana

# Add the Helm repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install the Prometheus Operator
helm install prometheus prometheus-community/kube-prometheus-stack \
  -n monitoring --create-namespace

# Access Grafana
kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80

# Default login: admin / prom-operator

Example ServiceMonitor

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp-metrics
  namespace: myapp-prod
  labels:
    release: prometheus
spec:
  selector:
    matchLabels:
      app: myapp
  endpoints:
  - port: http
    path: /actuator/prometheus
    interval: 30s

Spring Boot Configuration

<!-- pom.xml -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

# application.properties
management.endpoints.web.exposure.include=health,info,metrics,prometheus
management.endpoint.health.probes.enabled=true
management.health.livenessState.enabled=true
management.health.readinessState.enabled=true
# Spring Boot 2.x property; in Spring Boot 3.x use management.prometheus.metrics.export.enabled
management.metrics.export.prometheus.enabled=true

Log Aggregation (EFK Stack)

# Add the Elastic Helm repository
helm repo add elastic https://helm.elastic.co
helm repo update

# Install Elasticsearch
helm install elasticsearch elastic/elasticsearch \
  --set replicas=1 \
  --set minimumMasterNodes=1 \
  -n logging --create-namespace

# Install Fluentd
helm install fluentd bitnami/fluentd \
  --set aggregator.configMap=fluentd-config \
  -n logging

# Install Kibana
helm install kibana elastic/kibana \
  --set elasticsearchHosts=http://elasticsearch-master:9200 \
  -n logging

# Access Kibana
kubectl port-forward -n logging svc/kibana-kibana 5601:5601

Performance Tuning

1. JVM Tuning

env:
- name: JAVA_OPTS
  value: >-
    -XX:+UseContainerSupport
    -XX:MaxRAMPercentage=75.0
    -XX:InitialRAMPercentage=50.0
    -XX:+UseG1GC
    -XX:MaxGCPauseMillis=200
    -XX:+ParallelRefProcEnabled
    -XX:+UseStringDeduplication
    -Djava.security.egd=file:/dev/./urandom
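Whether the container-aware flags are taking effect can be verified from inside the Pod. A small sketch that prints what the JVM actually sees; with -XX:MaxRAMPercentage=75.0 inside a 1 GiB memory limit, the max heap should land around 768 MiB:

```java
public class JvmInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // With UseContainerSupport (on by default since JDK 10), these reflect the cgroup limits
        System.out.printf("Available processors: %d%n", rt.availableProcessors());
        System.out.printf("Max heap: %d MiB%n", rt.maxMemory() / (1024 * 1024));
    }
}
```

Compile it into the image (or copy it in) and run it via kubectl exec to confirm the CPU and heap limits match the Pod's resource settings.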

2. Connection Pool Tuning

# application.properties
spring.datasource.hikari.maximum-pool-size=20
spring.datasource.hikari.minimum-idle=5
spring.datasource.hikari.connection-timeout=30000
spring.datasource.hikari.idle-timeout=600000
spring.datasource.hikari.max-lifetime=1800000
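A common starting point for maximum-pool-size is the heuristic from HikariCP's pool-sizing guidance: connections ≈ cores × 2 + effective spindle count. A sketch (the spindle count of 1 for SSD-backed databases is an assumption, not a rule):

```java
public class PoolSizing {
    /** HikariCP's suggested starting point: cores * 2 + effective spindles. */
    static int suggestedPoolSize(int cores, int effectiveSpindles) {
        return cores * 2 + effectiveSpindles;
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        // With an SSD-backed database, an effective spindle count of 1 is a common assumption
        System.out.println("Suggested pool size: " + suggestedPoolSize(cores, 1));
    }
}
```

Keep in mind that in Kubernetes each replica gets its own pool, so the database sees replicas × pool-size connections; size against the database's limit, not a single Pod's.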

3. Pod Priority

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "High priority class for production apps"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      priorityClassName: high-priority

4. Node Affinity

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-type
          operator: In
          values:
          - high-memory
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - myapp
        topologyKey: kubernetes.io/hostname

Summary

Kubernetes gives Java applications powerful container orchestration capabilities:

Core Strengths

  • Automated operations: deployment, scaling, and failure recovery without manual intervention

  • Service discovery and load balancing: built into the platform

  • Declarative configuration: every resource managed through YAML files

  • Rolling updates: zero-downtime deployments and fast rollbacks

  • Resource efficiency: cluster resources are used effectively

Learning Path

  1. Basics: understand core concepts such as Pod, Deployment, and Service

  2. Practice: deploy a simple application on Minikube

  3. Advanced: learn ConfigMap, Secret, Volume, and other advanced features

  4. Production: master monitoring, logging, security, and CI/CD practices

Next Steps

  • Learn a Service Mesh (Istio, Linkerd)

  • Dig into Kubernetes Operators

  • Master multi-cluster management

  • Explore the cloud-native ecosystem (CNCF projects)

Kubernetes has become an essential skill for Java developers in the cloud-native era. Start with simple deployments, work your way up to the advanced features, and you will be able to build reliable, scalable, production-grade applications.

Happy Kubernetes Journey! ☸️
