Layer-4 load balancing in Kubernetes is implemented by Services at the TCP/UDP level, i.e. OSI Layer 4 (the transport layer). A Service does not parse HTTP payloads; it forwards traffic based purely on IP address and port.
1. Service Types and Their Relation to Layer 4
Kubernetes provides L4 load balancing mainly through the following Service types:
| Type | Description | Typical scenario |
| --- | --- | --- |
| ClusterIP (default) | Reachable only inside the cluster; L4 load balancing to backend Pods | In-cluster communication |
| NodePort | Exposes a fixed port on every Node, reachable from outside; L4 load balancing | Small-scale testing or simple external access |
| LoadBalancer | Relies on a cloud provider's L4 LB (e.g. AWS ELB, Alibaba Cloud SLB) and gets a public IP directly | Internet-facing services |
| ExternalName | DNS-level redirection, not real load balancing | Special cases |
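Of these, LoadBalancer is the only type not demonstrated later in this post. As a minimal sketch for completeness (the name `my-nginx-lb` is illustrative, the selector reuses `run: my-nginx` from the ClusterIP example below, and provider-specific annotations may be needed):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-lb        # illustrative name
spec:
  type: LoadBalancer       # the cloud provider provisions an external L4 LB
  selector:
    run: my-nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
```

After `kubectl apply`, the provider assigns a public address that appears under the EXTERNAL-IP column of `kubectl get svc`.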
2. Creating Service Resources
```bash
kubectl explain service
```
2.1 ClusterIP
```bash
vim pod_test.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
```

```bash
kubectl apply -f pod_test.yaml
kubectl get pods -l run=my-nginx -o wide
```
The Pod IPs shown here are assigned dynamically; if a Pod is deleted and recreated, its IP changes. That is exactly why a Service is needed as a stable front end.
Create the Service:
```bash
vim service_test.yaml
```

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-nginx
```

```bash
kubectl apply -f service_test.yaml
kubectl get svc -l run=my-nginx
```
- `port: 80` — the port the Service itself exposes; inside the cluster you access it as `ServiceIP:80`.
- `protocol: TCP` — the protocol to use (TCP is the default).
- `targetPort: 80` — the port the container listens on, matching `containerPort: 80` in the Deployment.
- `selector: run: my-nginx` — the Service forwards traffic to every Pod carrying the `run=my-nginx` label, which corresponds one-to-one with `labels: run: my-nginx` in the Deployment's Pod template. This Service therefore load-balances traffic across the two nginx Pods the Deployment created.
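To check that the Service really balances across both Pods, one quick approach is the following sketch (it assumes the manifests above were applied in the `default` namespace; the temporary Pod name `tmp` is arbitrary):

```bash
# List the backend Pod IPs the Service currently routes to
kubectl get endpoints my-nginx

# Hit the Service from a throwaway Pod inside the cluster
kubectl run tmp --rm -it --image=busybox --restart=Never -- \
    wget -q -O - my-nginx.default.svc.cluster.local
```

Repeating the `wget` spreads requests over the two Pod IPs listed in the endpoints.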
2.2 NodePort
```bash
vim pod_nodeport.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx-nodeport
spec:
  replicas: 2
  selector:
    matchLabels:
      run: my-nginx-nodeport
  template:
    metadata:
      labels:
        run: my-nginx-nodeport
    spec:
      containers:
      - name: my-nginx-nodeport-container
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
```

```bash
kubectl apply -f pod_nodeport.yaml
kubectl get pods -l run=my-nginx-nodeport
```
Create the Service:
```bash
vim service_nodeport.yaml
```

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-nodeport
  labels:
    run: my-nginx-nodeport
spec:
  type: NodePort
  selector:
    run: my-nginx-nodeport
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30380
```

```bash
kubectl apply -f service_nodeport.yaml
kubectl get svc -l run=my-nginx-nodeport
```
The service is now reachable at any node's IP plus the nodePort, i.e. `NodeIP:30380`.
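Note that by default a `nodePort` must fall in the 30000–32767 range; kube-proxy opens the port on every node, so any node's address works (`<node-ip>` is a placeholder):

```bash
# Replace <node-ip> with the address of any cluster node
curl http://<node-ip>:30380
```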
2.3 ExternalName
```bash
vim client.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["/bin/sh", "-c", "sleep 36000"]
```

```bash
kubectl apply -f client.yaml
```
```bash
vim client_svc.yaml
```

```yaml
apiVersion: v1
kind: Service
metadata:
  name: client-svc
spec:
  type: ExternalName
  externalName: nginx-svc.nginx-ns.svc.cluster.local
  ports:
  - name: http
    port: 80
    targetPort: 80
```
```bash
kubectl get pods
kubectl apply -f client_svc.yaml
kubectl create ns nginx-ns
```
```bash
vim server_nginx.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
```

```bash
kubectl apply -f server_nginx.yaml
kubectl get pods -n nginx-ns
```
```bash
vim nginx_svc.yaml
```

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx-ns
spec:
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
```

```bash
kubectl apply -f nginx_svc.yaml
```
Exec into the busybox client container (e.g. `kubectl exec -it <client-pod> -- /bin/sh`) and fetch both names:
```bash
wget -q -O - client-svc.default.svc.cluster.local
wget -q -O - nginx-svc.nginx-ns.svc.cluster.local
```
Both commands return the same result, because `client-svc` is just an alias for `nginx-svc.nginx-ns.svc.cluster.local`.
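This is expected: an ExternalName Service does no proxying at all; cluster DNS simply answers with a CNAME record pointing at the configured external name. You can confirm this from inside the busybox Pod (the exact output format depends on the cluster's DNS implementation):

```bash
nslookup client-svc.default.svc.cluster.local
```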
### To be continued…