Cloud Native (Kubernetes Services and Microservices)

Lab Overview

This lab focuses on the core Kubernetes (K8s) capabilities for exposing services and managing traffic at a fine grain: configuring the various Service types, tuning the Service forwarding mode, and deploying the Ingress-nginx layer-7 gateway with its advanced features. The goal is hands-on mastery of stable in-cluster access to Pods, external exposure, and traffic control.

Lab Background

Pods in K8s are ephemeral (rebuilds, scaling events, rescheduling), so they cannot serve traffic from a fixed IP. A Service is the access abstraction over a set of Pods and gives them a stable endpoint, while Ingress adds layer-7 (HTTP/HTTPS) routing, encryption, authentication, and other advanced capabilities on top of Services; together they are the core components for service exposure and traffic management in production. This lab walks from basics to advanced usage, covering the full service-exposure path.

Core Objectives

  1. Master the configuration, verification, and use cases of the different K8s Service types (ClusterIP/Headless, NodePort, LoadBalancer, ExternalName);
  2. Tune the Service forwarding path by switching kube-proxy to IPVS mode for better performance;
  3. Deploy Ingress-nginx and implement advanced traffic control: name-based routing, TLS encryption, HTTP basic authentication, and URL rewriting;
  4. Understand the core idea behind canary releases, as a foundation for gray releases and other traffic-scheduling scenarios;
  5. Master the key pieces of production-grade service exposure (port management, MetalLB load-balancer deployment, certificate and credential management).

Core Content

  1. Basic Service configuration: start from ClusterIP (including headless Services) and verify stable in-cluster access to Pods; extend to NodePort for access from outside the cluster, adjusting the NodePort port range where the defaults do not fit (see the sketch at the end of Part 2); deploy MetalLB so that LoadBalancer-type Services get real external load balancing; map external domains with ExternalName (see the sketch at the end of Part 3).
  2. Service performance tuning: switch kube-proxy from its default mode to IPVS for more efficient forwarding.
  3. Ingress-nginx in practice: deploy the Ingress-nginx controller and implement name-based multi-service routing, TLS-encrypted access, HTTP basic authentication, and URL rewriting, covering the core layer-7 traffic-control scenarios.
  4. Canary groundwork: build the hands-on basis for weighted traffic scheduling (canary releases) and understand how fine-grained traffic control is implemented.

Part 1: Basic ClusterIP and IPVS Optimization

# 1. Create a ClusterIP Service and its Deployment
[root@master ~]# cat <<EOF > clusterIP.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webcluster
  name: webcluster
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webcluster
  template:
    metadata:
      labels:
        app: webcluster
    spec:
      containers:
      - image: myapp:v1
        name: myapp
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: webcluster
  name: webcluster
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: webcluster
  type: ClusterIP
EOF
[root@master ~]# kubectl apply -f clusterIP.yml
deployment.apps/webcluster configured
Warning: resource services/webcluster is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
service/webcluster configured
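
# Hedged verification sketch: the Service should show a ClusterIP and two
# endpoints, and repeated curls against the ClusterIP (substitute the real
# address) should alternate between the two Pods.
[root@master ~]# kubectl get svc,endpoints webcluster
[root@master ~]# curl <CLUSTER-IP>/hostname.html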

# 2. Switch kube-proxy to IPVS mode
[root@master ~]# dnf install ipvsadm -y
...(repo refresh and transaction log trimmed)...
Installed:
  ipvsadm-1.31-6.el9.x86_64

Complete!
[root@master ~]# kubectl get configmap kube-proxy -n kube-system -o yaml > kube-proxy.yaml
[root@master ~]# sed -i 's/mode: ""/mode: "ipvs"/g' kube-proxy.yaml
[root@master ~]# kubectl apply -f kube-proxy.yaml
Warning: resource configmaps/kube-proxy is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
configmap/kube-proxy configured
[root@master ~]# kubectl -n kube-system delete pods -l k8s-app=kube-proxy
pod "kube-proxy-2hsc9" deleted from kube-system namespace
pod "kube-proxy-j6zt7" deleted from kube-system namespace
pod "kube-proxy-j79jj" deleted from kube-system namespace
pod "kube-proxy-qqmgx" deleted from kube-system namespace

# 3. Check that the IPVS rules took effect
[root@master ~]# watch -n 1 ipvsadm -Ln
Every 1.0s: ipvsadm -Ln                               master: Tue Apr 14 16:27:53 2026

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 172.25.254.100:6443          Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.0.3:53                Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.2:9153              Masq    1      0          0
  -> 10.244.0.3:9153              Masq    1      0          0
TCP  10.105.14.242:80 rr
UDP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.0.3:53                Masq    1      0          0

Part 2: Switching Service Types (Headless and NodePort)

# 1. Test a headless Service (no ClusterIP)
[root@master ~]# kubectl delete -f clusterIP.yml
deployment.apps "webcluster" deleted from default namespace
service "webcluster" deleted from default namespace
[root@master ~]# sed -i '/type: ClusterIP/a\  clusterIP: None' clusterIP.yml
[root@master ~]# kubectl apply -f clusterIP.yml
deployment.apps/webcluster created
service/webcluster created

# Test DNS resolution: a headless Service should resolve directly to the
# backend Pod IPs. (The capture below returned NXDOMAIN instead; a query made
# right after recreating the Service can hit CoreDNS's negative cache, which
# defaults to 30s, so retry after the TTL expires.)
[root@master ~]# dig webcluster.default.svc.cluster.local @10.96.0.10

; <<>> DiG 9.16.23-RH <<>> webcluster.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 46304
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: b636a1b2e3383e52 (echoed)
;; QUESTION SECTION:
;webcluster.default.svc.cluster.local. IN A

;; AUTHORITY SECTION:
cluster.local.          30      IN      SOA     ns.dns.cluster.local. hostmaster.cluster.local. 1776155357 7200 1800 86400 30

;; Query time: 14 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Tue Apr 14 16:29:31 CST 2026
;; MSG SIZE  rcvd: 170
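
# A hedged cross-check from inside the cluster (assumes the radial/busyboxplus
# image is pullable): a Pod's resolv.conf already carries the cluster search
# domains, and a headless Service should resolve to each backend Pod IP.
[root@master ~]# kubectl run dns-test --image=radial/busyboxplus:curl -it --rm --restart=Never -- nslookup webcluster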

# 2. Test a NodePort Service (fixed port 31111). Note: the Service needs the
# webcluster Deployment running as its backend, so keep or recreate the
# Deployment deleted above before the curl test.
[root@master ~]# kubectl delete -f clusterIP.yml
deployment.apps "webcluster" deleted from default namespace
service "webcluster" deleted from default namespace
[root@master ~]# cat <<EOF > nodeport.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: webcluster
  name: webcluster
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 31111
  selector:
    app: webcluster
  type: NodePort
EOF
[root@master ~]# kubectl apply -f nodeport.yml
service/webcluster created

# Access test (using the master IP 172.25.254.100)
[root@master ~]# curl 172.25.254.100:31111/hostname.html
webcluster-77c87d9946-bsngz
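
# The outline also calls for adjusting the NodePort port range; 31111 already
# fits the default 30000-32767, so this is only needed for ports outside it. A
# hedged sketch for a kubeadm cluster: add the flag to the static apiserver
# manifest, and kubelet restarts the apiserver automatically on save.
[root@master ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
# under spec.containers[0].command, add:
#   - --service-node-port-range=30000-40000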

Part 3: LoadBalancer and MetalLB Deployment

# 1. Clean up the old Service and enable strict ARP (required by MetalLB in L2 mode)
[root@master ~]# kubectl delete -f nodeport.yml
deployment.apps "webcluster" deleted from default namespace
service "webcluster" deleted from default namespace
[root@master ~]# sed -i 's/strictARP: false/strictARP: true/g' kube-proxy.yaml
[root@master ~]# kubectl apply -f kube-proxy.yaml
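# (Hedged note: as with the IPVS switch, the ConfigMap change only takes
# effect once kube-proxy restarts.)
[root@master ~]# kubectl -n kube-system delete pods -l k8s-app=kube-proxy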

# 2. Install MetalLB
[root@master ~]# wget https://raw.githubusercontent.com/metallb/metallb/v0.15.3/config/manifests/metallb-native.yaml
# Point the manifest at the private registry mirror
[root@master ~]# sed -i 's#quay.io/metallb/controller:v0.15.3#reg.timinglee.org/metallb/controller:v0.15.3#g' metallb-native.yaml
[root@master ~]# sed -i 's#quay.io/metallb/speaker:v0.15.3#reg.timinglee.org/metallb/speaker:v0.15.3#g' metallb-native.yaml
[root@master ~]# kubectl apply -f metallb-native.yaml
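# Hedged check: wait until the controller and speaker Pods are Running before
# applying the address pool below, or its CRD-based resources may be rejected.
[root@master ~]# kubectl -n metallb-system get pods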

# 3. Configure the MetalLB IP address pool
[root@master ~]# cat <<EOF > metallb-config.yml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.25.254.50-172.25.254.80
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
EOF
[root@master ~]# kubectl apply -f metallb-config.yml

# 4. Create the LoadBalancer Service and test it
[root@master ~]# sed -i 's/NodePort/LoadBalancer/g' nodeport.yml
[root@master ~]# sed -i '/nodePort: 31111/d' nodeport.yml
[root@master ~]# kubectl apply -f nodeport.yml

# Check the assigned EXTERNAL-IP and access it
[root@master ~]# kubectl get svc webcluster
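# Hedged expectation: the EXTERNAL-IP column should show an address from the
# 172.25.254.50-80 pool; curl it directly (substitute the real address).
[root@master ~]# curl <EXTERNAL-IP>/hostname.html

# The outline also lists ExternalName, which the transcript does not otherwise
# exercise. A minimal hedged sketch (service name and target domain are
# illustrative): myapp-ext.default.svc.cluster.local becomes a CNAME for the
# external host, so in-cluster clients can keep using a cluster-local name.
[root@master ~]# cat <<EOF > externalname.yml
apiVersion: v1
kind: Service
metadata:
  name: myapp-ext
spec:
  type: ExternalName
  externalName: www.timinglee.org
EOF
[root@master ~]# kubectl apply -f externalname.yml
# check: dig myapp-ext.default.svc.cluster.local @10.96.0.10 should return a CNAME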

Part 4: Ingress Layer-7 Proxy Deployment and Testing

# 1. Install the Ingress-Nginx controller
[root@master ~]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.15.1/deploy/static/provider/baremetal/deploy.yaml
[root@master ~]# kubectl apply -f deploy.yaml
# Expose the controller Service as a LoadBalancer to obtain an external IP
[root@master ~]# kubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{"spec": {"type": "LoadBalancer"}}'
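# Hedged check: note the EXTERNAL-IP MetalLB assigns to the controller; the
# /etc/hosts entry in step 4 must point at that address.
[root@master ~]# kubectl -n ingress-nginx get svc ingress-nginx-controller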

# 2. Prepare the backend test apps (v1 and v2)
[root@master ~]# cat <<EOF > myapp1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata:
      labels:
        app: myapp1
    spec:
      containers:
      - image: myapp:v1
        name: myapp
---
apiVersion: v1
kind: Service
metadata:
  name: myapp1
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: myapp1
EOF
[root@master ~]# cp myapp1.yml myapp2.yml
[root@master ~]# sed -i 's/myapp1/myapp2/g' myapp2.yml
[root@master ~]# sed -i 's/myapp:v1/myapp:v2/g' myapp2.yml
[root@master ~]# kubectl apply -f myapp1.yml
[root@master ~]# kubectl apply -f myapp2.yml
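
# Hedged check: both backends should be up, each Service with one endpoint.
[root@master ~]# kubectl get pods,svc -o wide | grep myapp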

# 3. Create a name-based Ingress
[root@master ~]# cat <<EOF > 2-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: webcluster
spec:
  ingressClassName: nginx
  rules:
  - host: myapp1.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp1
            port:
              number: 80
        path: /
        pathType: Prefix
  - host: myapp2.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp2
            port:
              number: 80
        path: /
        pathType: Prefix
EOF
[root@master ~]# kubectl apply -f 2-ingress.yml
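# Hedged check: once the controller admits the rules, both hosts are listed and
# the ADDRESS column fills in.
[root@master ~]# kubectl get ingress webcluster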

# 4. Configure local name resolution
[root@master ~]# echo "172.25.254.50 myapp1.timinglee.org myapp2.timinglee.org" >> /etc/hosts
[root@master ~]# curl myapp1.timinglee.org
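# Hedged expectation: the two hosts route to different backends, so this should
# return the v2 page while the previous curl returned v1.
[root@master ~]# curl myapp2.timinglee.org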

Part 5: Advanced Ingress Features (TLS, Authentication, and Canary Release)

# 1. Enable TLS (HTTPS)
[root@master ~]# openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -subj "/CN=nginxsvc/O=nginxsvc" -out tls.crt
[root@master ~]# kubectl create secret tls web-tls-secret --key tls.key --cert tls.crt

# 2. Enable basic-auth (username and password both set to lee; htpasswd -b
# reads the password from the command line, -c creates the file)
[root@master ~]# dnf install httpd-tools -y
[root@master ~]# htpasswd -bc auth lee lee
[root@master ~]# kubectl create secret generic auth-web --from-file=auth
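# (Hedged note: for auth-type "basic", ingress-nginx expects the htpasswd data
# under a key literally named "auth" in the secret; --from-file=auth satisfies
# that because the file name becomes the key.)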

# 3. Deploy the Ingress with TLS and authentication
[root@master ~]# cat <<EOF > 5-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: webcluster
spec:
  tls:
  - hosts:
    - myapp1.timinglee.org
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp1.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp1
            port:
              number: 80
        path: /
        pathType: Prefix
EOF
[root@master ~]# kubectl apply -f 5-ingress.yml
# Auth test (-k skips certificate verification, -u supplies the credentials)
[root@master ~]# curl -k https://myapp1.timinglee.org -u lee:lee
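
# Hedged negative test: without credentials, the controller should answer 401.
[root@master ~]# curl -k https://myapp1.timinglee.org

# The outline also promises URL rewriting beyond the trivial "/" form used
# above. A hedged sketch of the documented capture-group pattern (the file name
# and /app prefix are illustrative): /app/<rest> is rewritten to /<rest>.
[root@master ~]# cat <<'EOF' > 6-ingress-rewrite.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: web-rewrite
spec:
  ingressClassName: nginx
  rules:
  - host: myapp1.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp1
            port:
              number: 80
        path: /app(/|$)(.*)
        pathType: ImplementationSpecific
EOF
[root@master ~]# kubectl apply -f 6-ingress-rewrite.yml
# e.g. curl -k -u lee:lee https://myapp1.timinglee.org/app/hostname.html is served as /hostname.html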

# 4. Weight-based canary release (gradually roll out v2)
[root@master ~]# cat <<EOF > 7-ingress-canary.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
    nginx.ingress.kubernetes.io/canary-weight-total: "100"
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: webcluster-new
spec:
  ingressClassName: nginx
  rules:
  - host: myapp1.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp2
            port:
              number: 80
        path: /
        pathType: Prefix
EOF
[root@master ~]# kubectl apply -f 7-ingress-canary.yml

# 5. Canary test script (check that roughly 10% of traffic goes to v2)
[root@master ~]# cat <<'EOF' > check.sh
#!/bin/bash
v1=0
v2=0
for (( i=0; i<100; i++))
do
    response=$(curl -k -s -u lee:lee https://myapp1.timinglee.org | grep -c v1)
    v1=$(expr $v1 + $response)
    v2=$(expr $v2 + 1 - $response)
done
echo "v1(稳定版):$v1次, v2(金丝雀):$v2次"
EOF
[root@master ~]# bash check.sh
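# Hedged expectation: with canary-weight 10 out of a total of 100, roughly 90
# of the 100 requests land on v1 and about 10 on v2.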