Overview of the Ingress Resource Object
Kubernetes Ingress is an API object that manages external access to services inside a cluster, typically through HTTP/HTTPS routing rules. It acts as the entry point, routing external requests to internal Services, and supports traffic distribution based on path, host name, or TLS.
Version Comparison
Ingress-related behavior differs across Kubernetes versions:
- In every version, Ingress rules only take effect when an Ingress Controller (such as Nginx or Traefik) is deployed separately.
- v1.19+: Ingress is GA under networking.k8s.io/v1, and the IngressClass resource (with the ingressClassName field) explicitly specifies which controller handles a rule.
- v1.22+: the older extensions/v1beta1 and networking.k8s.io/v1beta1 Ingress APIs are removed, and the pathType field (Exact/Prefix/ImplementationSpecific) is required; see the sketch below.
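A minimal sketch of an Ingress that uses both of these fields under the networking.k8s.io/v1 API; the names (minimal-ingress, demo.example.com, demo-svc) are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx        # selects the controller via an IngressClass (v1.19+)
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix         # required in networking.k8s.io/v1, the only Ingress API left in v1.22+
        backend:
          service:
            name: demo-svc       # placeholder Service name
            port:
              number: 80
```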
Ingress Application Example
Environment Preparation
- Make sure the Kubernetes cluster is running normally and kubectl is configured.
- Install an Ingress Controller (Nginx as the example):

```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/cloud/deploy.yaml
```
Verification: NodePort Mode
- Create the sample application and Service:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
---
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: demo-app
```

- Define the Ingress rule:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-svc
            port:
              number: 80
```
Setting up an HTTP Proxy
To reach the application through the entry point, add an /etc/hosts entry on the client that maps the domain to a node IP, or pass the Host header explicitly, e.g. curl -H "Host: example.com" http://<NodeIP>:<NodePort>.
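A quick sketch of both approaches; the node IP 192.168.1.10 and NodePort 30080 are placeholders to replace with real values:

```bash
# Option 1: map the domain to a node IP locally, then use the domain directly
echo "192.168.1.10 example.com" | sudo tee -a /etc/hosts
curl http://example.com:30080/

# Option 2: leave /etc/hosts untouched and pass the Host header explicitly
curl -H "Host: example.com" http://192.168.1.10:30080/
```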
Verification: LoadBalancer Mode
Change the ARP mode (for bare-metal clusters)
- Enable strict ARP mode (required by MetalLB):

```bash
kubectl edit configmap -n kube-system kube-proxy
# set: strictARP: true
```
Deploy MetalLB to Support LoadBalancer
- Install MetalLB:

```bash
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
```

- Configure an IP address pool (for example 192.168.1.100-192.168.1.200). Note that the IPAddressPool CRD shown here requires MetalLB v0.13+; v0.12.x is configured through a ConfigMap instead, as in Section 2.3:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.100-192.168.1.200
```
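If the CRD-based configuration is used (MetalLB v0.13+), an L2Advertisement object is also needed so MetalLB announces the pool over Layer 2; a minimal sketch that reuses the pool name defined above:

```yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool          # the IPAddressPool defined above
```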
Test
Change the Service type to LoadBalancer:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: demo-app
```

Access the assigned LoadBalancer IP to verify that traffic is routed correctly.
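A small sketch of that check; the EXTERNAL-IP will come from the MetalLB pool, and the address in the curl call is only a placeholder:

```bash
# find the external IP assigned by MetalLB
kubectl get svc demo-svc

# then hit it directly (replace with the EXTERNAL-IP that was printed)
curl http://192.168.1.100/
```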
The Ingress Resource Object in Detail
How Ingress Works
Ingress is the Kubernetes API object for managing external access to in-cluster services, usually through HTTP/HTTPS routing rules. It acts as a layer-7 load balancer and abstracts away the reverse-proxy configuration. Its core components are:
- Ingress rules: define the mapping between host names, paths, and backend Services.
- Ingress Controller: the component that actually implements the rules (e.g. Nginx, HAProxy).
Ingress vs. NodePort/LoadBalancer
- NodePort: every Service occupies a port on each cluster node, which scales poorly.
- LoadBalancer: every Service needs its own LB instance, which is costly and depends on a cloud provider.
- Ingress: routes many services through a single entry point (an LB or NodePort), saving resources.
Ingress Controller Workflow
- Rule watching: the Ingress Controller watches the Kubernetes API for Ingress rule changes.
- Config generation: the rules are translated into load-balancer configuration (e.g. Nginx server blocks); see the command sketch below.
- Dynamic updates: the configuration is applied to the load-balancer instance in real time (e.g. an Nginx hot reload).
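One way to see the generated configuration is to read nginx.conf inside the controller Pod; a sketch, assuming the default ingress-nginx namespace and labels, and the example.com host from the sample rule further down:

```bash
# grab the controller pod name (default ingress-nginx labels assumed)
POD=$(kubectl -n ingress-nginx get pod -l app.kubernetes.io/component=controller -o name | head -n1)

# dump the rendered nginx.conf and look at the server block generated for the host
kubectl -n ingress-nginx exec "$POD" -- cat /etc/nginx/nginx.conf | grep -A 5 "server_name example.com"
```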
Example Ingress Rule
The following YAML routes example.com traffic to my-service:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
```
Common Ingress Controller Options
- Nginx Ingress: based on Nginx, with broad community support.
- Contour: uses Envoy, suited to complex routing requirements.
- HAProxy Ingress: high performance, suited to low-latency scenarios.
Notes
- TLS support: HTTPS can be configured through the spec.tls field.
- Path matching: pathType supports Exact or Prefix matching.
- Health checks: make sure the readinessProbe of the Pods behind the backend Service is configured correctly; see the sketch below.
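A minimal readinessProbe sketch for a container behind the backend Service; the probe path, port, and timings are placeholders to adapt:

```yaml
# container snippet inside a Deployment's pod template
containers:
- name: web
  image: nginx:alpine
  ports:
  - containerPort: 80
  readinessProbe:
    httpGet:
      path: /              # placeholder health-check path
      port: 80
    initialDelaySeconds: 5 # wait before the first probe
    periodSeconds: 10      # probe interval
```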
Performance Tuning Suggestions
- Enable compression: configure gzip in the Nginx Ingress; see the ConfigMap sketch after this list.
- Cache static content: set caching policies through annotations.
- Load-balancing algorithm: choose round-robin, least-connections, etc. according to the workload.
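A hedged sketch of enabling gzip through the ingress-nginx ConfigMap; the ConfigMap name and namespace follow the default ingress-nginx deployment, and the keys are the commonly documented ones:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # default ConfigMap consumed by the controller
  namespace: ingress-nginx
data:
  use-gzip: "true"                 # turn on gzip compression
  gzip-level: "5"                  # compression level (1-9)
  gzip-types: "text/html text/css application/json application/javascript"
```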
Kubernetes Ingress Configuration Example
Below is a Kubernetes Ingress configuration, driven by a client requirement, that routes different subdomains (web.itheima.com, mail.itheima.com, oa.itheima.com) to their corresponding backend Services.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: itheima-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: web.itheima.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
  - host: mail.itheima.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mail-service
            port:
              number: 80
  - host: oa.itheima.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: oa-service
            port:
              number: 80
```
Component Notes
Ingress Controller: an Ingress Controller (such as the Nginx Ingress Controller) must be deployed in advance; it is what actually handles external requests and routes them to the corresponding Services.
Service configuration: each Service must be created in advance; an example follows (using web-service):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```
Pod deployment: make sure the Pods behind each Service are deployed with matching labels (e.g. app: web); a sketch follows.
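A minimal Deployment sketch for the web backend, only to show the label that web-service selects on; the image and replica count are placeholders, and the container must actually listen on 8080 to match the Service targetPort:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web                # must match the Service selector
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:latest   # placeholder image serving on 8080
        ports:
        - containerPort: 8080                    # matches the Service targetPort
```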
DNS Configuration
Add the following records with your DNS provider:
- An A record pointing web.itheima.com to the Ingress controller's external IP
- An A record pointing mail.itheima.com to the same IP
- An A record pointing oa.itheima.com to the same IP
Testing
Use curl to check that routing works:
```bash
curl -H "Host: web.itheima.com" http://INGRESS_IP
curl -H "Host: mail.itheima.com" http://INGRESS_IP
curl -H "Host: oa.itheima.com" http://INGRESS_IP
```
TLS Configuration (Optional)
To enable HTTPS, add a TLS section to the Ingress:
```yaml
spec:
  tls:
  - hosts:
    - web.itheima.com
    - mail.itheima.com
    - oa.itheima.com
    secretName: itheima-tls
```
The Secret containing the certificate must be created in advance; a sketch follows.
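A sketch of creating that Secret from an existing certificate/key pair that covers the three hosts; the file names are placeholders:

```bash
# create a TLS secret from a certificate and its private key
kubectl create secret tls itheima-tls \
  --cert=itheima.crt \
  --key=itheima.key
```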
Version Compatibility Table
| Supported | Ingress-NGINX version | k8s supported version | Alpine Version | Nginx Version | Helm Chart Version |
|---|---|---|---|---|---|
| 🔄 | v1.9.6 | 1.29, 1.28, 1.27, 1.26, 1.25 | 3.19.0 | 1.21.6 | 4.9.1* |
| 🔄 | v1.9.5 | 1.28, 1.27, 1.26, 1.25 | 3.18.4 | 1.21.6 | 4.9.0* |
| 🔄 | v1.9.4 | 1.28, 1.27, 1.26, 1.25 | 3.18.4 | 1.21.6 | 4.8.3 |
| 🔄 | v1.9.3 | 1.28, 1.27, 1.26, 1.25 | 3.18.4 | 1.21.6 | 4.8.* |
| 🔄 | v1.9.1 | 1.28, 1.27, 1.26, 1.25 | 3.18.4 | 1.21.6 | 4.8.* |
| 🔄 | v1.9.0 | 1.28, 1.27, 1.26, 1.25 | 3.18.2 | 1.21.6 | 4.8.* |
| 🔄 | v1.8.4 | 1.27, 1.26, 1.25, 1.24 | 3.18.2 | 1.21.6 | 4.7.* |
| 🔄 | v1.8.2 | 1.27, 1.26, 1.25, 1.24 | 3.18.2 | 1.21.6 | 4.7.* |
| 🔄 | v1.8.1 | 1.27, 1.26, 1.25, 1.24 | 3.18.2 | 1.21.6 | 4.7.* |
| 🔄 | v1.8.0 | 1.27, 1.26, 1.25, 1.24 | 3.18.0 | 1.21.6 | 4.7.* |
| | v1.7.1 | 1.27, 1.26, 1.25, 1.24 | 3.17.2 | 1.21.6 | 4.6.* |
| | v1.7.0 | 1.26, 1.25, 1.24 | 3.17.2 | 1.21.6 | 4.6.* |
| | v1.6.4 | 1.26, 1.25, 1.24, 1.23 | 3.17.0 | 1.21.6 | 4.5.* |
| | v1.5.1 | 1.25, 1.24, 1.23 | 3.16.2 | 1.21.6 | 4.4.* |
| | v1.4.0 | 1.25, 1.24, 1.23, 1.22 | 3.16.2 | 1.19.10† | 4.3.0 |
| | v1.3.1 | 1.24, 1.23, 1.22, 1.21, 1.20 | 3.16.2 | 1.19.10† | 4.2.5 |
| | v1.3.0 | 1.24, 1.23, 1.22, 1.21, 1.20 | 3.16.0 | 1.19.10† | 4.2.3 |
2. Ingress Application Case
2.1 Environment Preparation
Set up the ingress environment
Create a working directory
```bash
[root@k8s-master01 ~]# mkdir ingress-controller
[root@k8s-master01 ~]# cd ingress-controller/
# Download ingress-nginx; this example uses version 1.8.1
[root@k8s-master01 ingress-controller]# wget https://github.com/kubernetes/ingress-nginx/archive/refs/tags/controller-v1.8.1.tar.gz
[root@k8s-master01 ingress-controller]# tar xf controller-v1.8.1.tar.gz
[root@k8s-master01 ingress-controller]# cd ingress-nginx-controller-v1.8.1/deploy/static/provider/cloud/
[root@k8s-master01 cloud]# ls
deploy.yaml  kustomization.yaml
## Change the image registry as follows:
[root@k8s-master01 cloud]# cat deploy.yaml | grep -n image
441:        image: registry.cn-shenzhen.aliyuncs.com/xiaohh-docker/ingress-nginx-controller:v1.8.1
442:        imagePullPolicy: IfNotPresent
538:        image: registry.cn-shenzhen.aliyuncs.com/xiaohh-docker/ingress-nginx-kube-webhook-certgen:v20230407
539:        imagePullPolicy: IfNotPresent
587:        image: registry.cn-shenzhen.aliyuncs.com/xiaohh-docker/ingress-nginx-kube-webhook-certgen:v20230407
588:        imagePullPolicy: IfNotPresent
######### Deploy #############
[root@k8s-master01 cloud]# kubectl apply -f deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
[root@k8s-master01 cloud]# kubectl -n ingress-nginx get pod
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-sgksd        0/1     Completed   0          77s
ingress-nginx-admission-patch-f4rdc         0/1     Completed   1          77s
ingress-nginx-controller-565cc5ddd9-2qwnm   1/1     Running     0          77s
[root@k8s-master01 cloud]# kubectl -n ingress-nginx get svc
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.10.103.132   <pending>     80:31502/TCP,443:31020/TCP   96s
ingress-nginx-controller-admission   ClusterIP      10.10.227.21    <none>        443/TCP                      96s
## Check the IngressClass that now exists in the cluster
[root@k8s-master01 cloud]# kubectl get ingressclass
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       2m53s
```
Prepare the Service and Pods
Create nginx.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy
  name: nginx-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-deploy
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      containers:
      - image: dockerproxy.cn/nginx:latest
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-deploy
  name: nginx-svc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-deploy
  type: ClusterIP
```
```bash
# Create
[root@k8s-master01 ingress-controller]# kubectl apply -f nginx.yaml
deployment.apps/nginx-deploy created
service/nginx-svc created
# Check
[root@k8s-master01 ingress-controller]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-7c7b68644b-26jtl   1/1     Running   0          20s
nginx-deploy-7c7b68644b-5jsmb   1/1     Running   0          20s
nginx-deploy-7c7b68644b-rjc4r   1/1     Running   0          20s
[root@k8s-master01 ingress-controller]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.10.0.1      <none>        443/TCP   13d
nginx-svc    ClusterIP   10.10.199.33   <none>        80/TCP    36s
```
Change the ingress proxy mode
```bash
[root@k8s-master01 ingress-controller]# kubectl edit svc ingress-nginx-controller -n ingress-nginx
# change the type field to NodePort:
  type: NodePort
status:
  loadBalancer: {}
## Check
[root@k8s-master01 ingress-controller]# kubectl -n ingress-nginx get svc
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.10.103.132   <none>        80:31502/TCP,443:31020/TCP   10m
ingress-nginx-controller-admission   ClusterIP   10.10.227.21    <none>        443/TCP                      10m
```
Set up the HTTP proxy
Create ingress-http.yaml
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress                 # create a resource of type Ingress
metadata:
  name: nginx-ingress         # the resource is named nginx-ingress
spec:
  ingressClassName: nginx     # use the nginx IngressClass
  rules:
  - host: nginx.jx.com        # domain name used to access this content
    http:
      paths:
      - backend:
          service:
            name: nginx-svc   # name of the nginx Service
            port:
              number: 80      # port to access
        path: /               # match rule
        pathType: Prefix      # match type, prefix match here
```
```bash
# Create
[root@k8s-master01 ~]# kubectl create -f ingress-http.yaml
ingress.networking.k8s.io/nginx-ingress created
# Check
[root@k8s-master01 ingress-controller]# kubectl get ingress nginx-ingress
NAME            CLASS   HOSTS          ADDRESS         PORTS   AGE
nginx-ingress   nginx   nginx.jx.com   10.10.103.132   80      31s
# Show details
[root@k8s-master01 ~]# kubectl describe ingress nginx-ingress
Name:             nginx-ingress
Labels:           <none>
Namespace:        default
Address:          10.10.26.150
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host          Path  Backends
  ----          ----  --------
  nginx.jx.com
                /     nginx-svc:80 (172.16.69.202:80,172.16.79.74:80,172.16.79.75:80)
Annotations:    <none>
Events:
  Type    Reason  Age                 From                      Message
  ----    ------  ----                ----                      -------
  Normal  Sync    9m2s (x2 over 14m)  nginx-ingress-controller  Scheduled for sync
# Add hosts entries on the node used for access
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.115.161 k8s-master01
192.168.115.162 k8s-master02
192.168.115.163 k8s-master03
192.168.115.164 k8s-worker01
192.168.115.165 k8s-worker02
192.168.115.166 nginx.jx.com
## Test; the application is only reachable via the domain name
[root@k8s-master01 ingress-controller]# curl nginx.jx.com:31502
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
2.3 Verification: LoadBalancer Mode
Change the ARP mode: enable strict ARP
```bash
# Apply the change
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system
# Check the result
kubectl edit configmap -n kube-system kube-proxy
```
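A non-interactive way to confirm the flag, as an alternative to opening the editor:

```bash
# should print "strictARP: true" once the change has been applied
kubectl -n kube-system get configmap kube-proxy -o yaml | grep strictARP
```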
Deploy MetalLB to support LoadBalancer
MetalLB's role in Kubernetes is to provide a network load-balancer implementation for clusters that do not run on a cloud platform (such as AWS or GCP) with a managed load-balancing service.
- Implements the LoadBalancer Service type: among the Kubernetes Service types, LoadBalancer normally requires an external load balancer. MetalLB emulates this Service type in environments without native cloud load balancing, giving applications a fixed IP address reachable from outside the cluster.
- IP address allocation and management: MetalLB assigns addresses to LoadBalancer Services from a configured IP address pool and keeps those addresses correctly mapped, so that external traffic is routed to the right backend Pods.
- Highly available network connectivity: using either BGP (Border Gateway Protocol) or Layer 2 mode, it keeps external connectivity to applications stable and reliable even during node failures or network fluctuations.
```bash
wget https://github.com/metallb/metallb/archive/refs/tags/v0.12.1.tar.gz
mkdir Metallb
tar xf v0.12.1.tar.gz -C Metallb/
cd Metallb/metallb-0.12.1/manifests/
## Write the ConfigMap that defines the address pool
[root@k8s-master01 ~]# cat configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.115.30-192.168.115.49
kubectl apply -f namespace.yaml
kubectl apply -f metallb.yaml
kubectl apply -f configmap.yaml
[root@k8s-master01 manifests]# kubectl -n metallb-system get pod
NAME                          READY   STATUS    RESTARTS   AGE
controller-7476b58756-q7cql   1/1     Running   0          6m
speaker-55l64                 1/1     Running   0          6m
speaker-8jjg8                 1/1     Running   0          6m1s
```
Test
```bash
[root@k8s-master01 ingress-controller]# cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy1
  name: nginx-deploy1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-deploy1
  template:
    metadata:
      labels:
        app: nginx-deploy1
    spec:
      containers:
      - image: nginx
        name: nginx1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-deploy1
  name: nginx-svc1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-deploy1
  type: LoadBalancer
## Apply
[root@k8s-master01 ingress-controller]# kubectl apply -f nginx.yaml
## Check
[root@k8s-master01 ingress-controller]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-5f87d95c-7ph78     1/1     Running   0          50m
nginx-deploy-5f87d95c-dswvq     1/1     Running   0          50m
nginx-deploy-5f87d95c-vk9vg     1/1     Running   0          50m
nginx-deploy1-c8d58b5c7-7dfrd   1/1     Running   0          12m
nginx-deploy1-c8d58b5c7-d2hd7   1/1     Running   0          12m
nginx-deploy1-c8d58b5c7-pfvhn   1/1     Running   0          12m
[root@k8s-master01 ingress-controller]# kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
kubernetes   ClusterIP      10.10.0.1       <none>           443/TCP        14d
nginx-svc    ClusterIP      10.10.83.76     <none>           80/TCP         50m
nginx-svc1   LoadBalancer   10.10.168.131   192.168.115.30   80:31261/TCP   12m
## Test access
[root@k8s-master01 ingress-controller]# curl 192.168.115.30
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
Update ingress-http.yaml so the Ingress also routes a second domain to nginx-svc1:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress                  # create a resource of type Ingress
metadata:
  name: nginx-ingress          # the resource is named nginx-ingress
spec:
  ingressClassName: nginx      # use the nginx IngressClass
  rules:
  - host: nginx.jx.com         # domain name used to access this content
    http:
      paths:
      - backend:
          service:
            name: nginx-svc    # name of the nginx Service
            port:
              number: 80       # port to access
        path: /                # match rule
        pathType: Prefix       # match type, prefix match here
  - host: nginx2.jx.com        # second domain name
    http:
      paths:
      - backend:
          service:
            name: nginx-svc1   # the LoadBalancer Service created above
            port:
              number: 80       # port to access
        path: /                # match rule
        pathType: Prefix       # match type, prefix match here
```
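A sketch of applying the updated rule and resolving the new host for local testing; the hosts entry reuses the node IP from the earlier /etc/hosts records and is only a testing shortcut:

```bash
# re-apply the updated Ingress definition
kubectl apply -f ingress-http.yaml

# resolve the second domain locally, reusing the node IP from the earlier hosts entries
echo "192.168.115.166 nginx2.jx.com" >> /etc/hosts
```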
Change the ingress mode back to LoadBalancer:

```bash
[root@k8s-master01 ~]# kubectl -n ingress-nginx edit svc ingress-nginx-controller
  type: LoadBalancer
status:
  loadBalancer: {}
```
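With MetalLB in place, the controller Service should now receive an address from the pool; a sketch of verifying end-to-end routing through it, where <EXTERNAL-IP> is a placeholder for the address printed by the first command:

```bash
# the ingress-nginx controller Service should now show an EXTERNAL-IP from the MetalLB pool
kubectl -n ingress-nginx get svc ingress-nginx-controller

# route through the Ingress by host name (replace <EXTERNAL-IP> with the address printed above)
curl -H "Host: nginx.jx.com"  http://<EXTERNAL-IP>/
curl -H "Host: nginx2.jx.com" http://<EXTERNAL-IP>/
```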