Exposing Kubernetes Services Externally (Ingress)

In Kubernetes, a Service plays two roles. Inside the cluster, it continuously tracks Pod changes and updates the corresponding Pod objects in its Endpoints, providing service discovery for Pods whose IPs keep changing. Toward the outside of the cluster, it behaves like a load balancer, so Pods can be reached from inside or outside the cluster.
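
For example (a hedged sketch; nginx-app-svc is the Service created later in this article), the endpoint tracking can be watched directly:

# The endpoint list changes as Pods matching the Service selector are created or deleted
kubectl get endpoints nginx-app-svc -w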

Ways to expose services externally

In Kubernetes, Pod IP addresses and Service ClusterIPs are only usable inside the cluster network and are invisible to applications outside the cluster. To let external applications access services inside the cluster, Kubernetes currently offers the following options:

NodePort: exposes the service on the node network. Behind NodePort is kube-proxy, which bridges the service network, the Pod network, and the node network.

It is acceptable for test environments, but once dozens or hundreds of services run in the cluster, managing NodePort ports becomes a nightmare: each port can serve only one service, and the port range is limited to 30000-32767. A minimal sketch follows.
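
A minimal NodePort sketch (hedged; my-svc, app=my-app and the nodePort value are illustrative assumptions, not names used elsewhere in this article):

apiVersion: v1
kind: Service
metadata:
  name: my-svc
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80          # ClusterIP port
      targetPort: 80    # container port
      nodePort: 30080   # must fall in 30000-32767; omit it to let Kubernetes pick one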

LoadBalancer: maps the Service to a LoadBalancer address provided by the cloud vendor. This only applies when the Service runs on a public cloud provider's platform. It is tied to the cloud platform, and deploying a LoadBalancer there usually costs extra.

After the Service is submitted, Kubernetes calls the CloudProvider to create a load balancer on the public cloud for you and configures the IP addresses of the proxied Pods as the load balancer's backends. A minimal sketch follows.
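
A minimal LoadBalancer sketch (hedged; my-lb-svc and app=my-app are illustrative names, and the external address only appears if the cluster runs on a cloud provider that supports this Service type):

apiVersion: v1
kind: Service
metadata:
  name: my-lb-svc
spec:
  type: LoadBalancer   # the cloud provider allocates an external load balancer and address
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80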

**externalIPs:** a Service can be assigned external IPs. If an external IP routes to one or more nodes in the cluster, the Service is exposed on those externalIPs, and traffic entering the cluster through an external IP is routed to the Service's Endpoints. A minimal sketch follows.
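
A minimal externalIPs sketch (hedged; the names and the address are illustrative, and the IP must actually route to one of the cluster nodes):

apiVersion: v1
kind: Service
metadata:
  name: my-extip-svc
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
  externalIPs:
    - 192.168.111.100   # illustrative; traffic arriving at this IP on port 80 is routed to the Service endpoints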

**Ingress:** with only one or a few public IPs and load balancers, multiple HTTP services can be exposed to the outside at the same time, as a layer-7 reverse proxy. It can be loosely understood as the "Service of Services": a set of rules, based on domain names and URL paths, that forward user requests to one or more Services.

Ingress components

**ingress:** the Ingress is an API object configured via a YAML file. Its job is to define the rules for how requests are forwarded to Services; think of it as a configuration template. Ingress exposes in-cluster Services over HTTP or HTTPS and gives them external URLs, load balancing, SSL/TLS, and name-based reverse proxying. Ingress relies on an ingress-controller to actually implement these functions.

**ingress-controller:** acts as the reverse proxy, or forwarder. The ingress-controller is the program that actually implements reverse proxying and load balancing; it parses the rules defined by Ingress objects and forwards requests according to them.

The ingress-controller is not a built-in Kubernetes component; "ingress-controller" is really just an umbrella term, and users can choose among different implementations. Currently the only ingress controllers maintained by the Kubernetes project are GCE (Google Cloud) and ingress-nginx; there are also many third-party controllers, see the official documentation. Whichever controller is used, the mechanism is largely the same; only the concrete configuration differs.

In general, an ingress-controller takes the form of a Pod running a daemon process and a reverse proxy program. The daemon keeps watching the cluster for changes, generates configuration from the Ingress objects, and applies it to the reverse proxy; ingress-nginx, for example, dynamically generates the nginx configuration, dynamically updates upstreams, and reloads nginx when needed to apply the new configuration. For convenience, the examples that follow use ingress-nginx, which is maintained by the Kubernetes project.

Ingress-Nginx GitHub: https://github.com/kubernetes/ingress-nginx
Ingress-Nginx official site: https://kubernetes.github.io/ingress-nginx/

The ingress-controller is the component that actually does the forwarding. It is exposed at the cluster entrance in one way or another, and external request traffic reaches the ingress-controller first; the Ingress objects tell the ingress-controller how to forward requests, for example which domains and URL paths go to which Services.

How Ingress works

(1) The ingress-controller talks to the Kubernetes APIServer and dynamically detects changes to the Ingress rules in the cluster.

(2) It then reads them and, following the defined rules (which state which domain maps to which Service), generates a piece of nginx configuration.

(3) The configuration is written into the nginx-ingress-controller Pod. This Pod runs an nginx service, and the controller writes the generated nginx configuration into /etc/nginx/nginx.conf,

(4) and then reloads nginx so the configuration takes effect. This is how name-based routing and dynamic configuration updates are achieved. A way to observe this is sketched below.
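
A quick way to observe this behaviour (a hedged sketch; replace <controller-pod-name> with the actual Pod name, for example the one shown in the deployment steps below):

# Dump the nginx configuration the controller generated from the Ingress objects
kubectl exec -n ingress-nginx <controller-pod-name> -- cat /etc/nginx/nginx.conf

# Follow the controller log to watch it pick up Ingress changes and reload nginx
kubectl logs -n ingress-nginx <controller-pod-name> -f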

Deploying nginx-ingress-controller

1. Deploy the ingress-controller Pod and related resources

mkdir /mnt/ingress
cd /mnt/ingress

Official download URL:
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.25.0/deploy/static/mandatory.yaml

If the URL above cannot be reached, the Gitee mirrors inside China can be used:
wget https://gitee.com/mirrors/ingress-nginx/raw/nginx-0.25.0/deploy/static/mandatory.yaml
wget https://gitee.com/mirrors/ingress-nginx/raw/nginx-0.30.0/deploy/static/mandatory.yaml

# mandatory.yaml creates many resources: the namespace, ConfigMap, Role, ServiceAccount, and everything else needed to deploy the ingress-controller

2. Modify the ClusterRole resource configuration
vim mandatory.yaml
......
apiVersion: rbac.authorization.k8s.io/v1beta1
# RBAC resources should use rbac.authorization.k8s.io/v1; rbac.authorization.k8s.io/v1beta1 is deprecated since v1.17 and removed in v1.22
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"    # (0.25版本)增加 networking.k8s.io Ingress 资源的 api 
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"   # (0.25版本)增加 networking.k8s.io/v1 Ingress 资源的 api 
    resources:
      - ingresses/status
    verbs:
      - update

How Ingress exposes services

Deployment + LoadBalancer-type Service

If the Ingress is to be deployed on a public cloud, this approach fits best. Deploy the ingress-controller with a Deployment and create a Service of type LoadBalancer that selects those Pods. Most public clouds automatically create a load balancer for a LoadBalancer-type Service, usually bound to a public address; point your domain's DNS at that address and the cluster's services are exposed externally. A hedged sketch of such a Service follows.
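
A minimal sketch of the fronting Service (an assumption-laden example: the selector labels below match the ingress-nginx manifests used in this article and should be adapted to your controller Deployment):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer            # the cloud provider provisions an external load balancer for this Service
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443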

DaemonSet+HostNetwork+nodeSelector

Use a DaemonSet combined with a nodeSelector to deploy the ingress-controller onto specific nodes, and use hostNetwork to attach the Pod directly to the host node's network, so services can be reached directly on the host's 80/443 ports. The nodes running the ingress-controller then behave much like the edge nodes of a traditional architecture, such as the nginx servers at the entrance of a data center. This approach has the simplest request path and better performance than NodePort mode. The drawback is that, because the host node's network and ports are used directly, only one ingress-controller Pod can run per node. It is well suited to high-concurrency production environments.

Deployment + NodePort-type Service

Again the ingress-controller is deployed with a Deployment and a matching Service is created, but with type NodePort, so the Ingress is exposed on a specific port of every cluster node's IP. Because NodePort uses random high ports, a load balancer is usually placed in front to forward requests. This approach is typically used when the hosts are relatively fixed and their IP addresses do not change.

Exposing the Ingress via NodePort is simple and convenient, but NodePort adds an extra layer of NAT, which can affect performance when the request volume is very large. A sketch of the NodePort Service follows.
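
For reference, a hedged sketch of roughly what the service-nodeport.yaml downloaded in the hands-on section below contains (the selector labels are assumed to match the mandatory.yaml labels; without explicit nodePort values Kubernetes picks random ports in 30000-32767):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443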

DaemonSet+HostNetwork+nodeSelector

3. Pin nginx-ingress-controller to the node02 node

kubectl label node node02 ingress=true

kubectl get nodes  --show-labels
NAME       STATUS   ROLES                  AGE   VERSION    LABELS
master01   Ready    control-plane,master   18d   v1.20.11   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
node01     Ready    <none>                 18d   v1.20.11   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,emmm=a,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux
node02     Ready    <none>                 18d   v1.20.11   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,emmm=b,ingress=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux



4. Change the Deployment to a DaemonSet, pin it to the labeled node, and enable hostNetwork

vim mandatory.yaml
...
apiVersion: apps/v1
# change kind from Deployment to DaemonSet
# kind: Deployment
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
# remove replicas (a DaemonSet runs one Pod per matching node)
# replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # use the host network
      hostNetwork: true
      # run only on nodes labeled ingress=true
      nodeSelector:
        ingress: "true"
      serviceAccountName: nginx-ingress-serviceaccount
......

5. On every node, upload the nginx-ingress-controller image archive ingree.contro.tar.gz to the /mnt directory, then extract it and load the image

cd /mnt
tar zxvf ingree.contro.tar.gz
docker load -i ingree.contro.tar


6. Start nginx-ingress-controller

kubectl apply -f mandatory.yaml

# nginx-ingress-controller is now running on the node02 node
kubectl get pod -n ingress-nginx -o wide
NAME                             READY   STATUS    RESTARTS   AGE    IP              NODE     NOMINATED NODE   READINESS GATES
nginx-ingress-controller-f28pz   1/1     Running   0          7m4s   192.168.111.9   node02   <none>           <none>

kubectl get cm,daemonset -n ingress-nginx -owide
NAME                                        DATA   AGE
configmap/ingress-controller-leader-nginx   0      21m
configmap/kube-root-ca.crt                  1      22m
configmap/nginx-configuration               0      22m
configmap/tcp-services                      0      22m
configmap/udp-services                      0      22m

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS                 IMAGES                                                                  SELECTOR
daemonset.apps/nginx-ingress-controller   1         1         1       1            1           ingress=true    22m   nginx-ingress-controller   quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0   app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx


Check on the node02 node:
netstat -lntp| grep nginx
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      68532/nginx: master 
tcp        0      0 0.0.0.0:8181            0.0.0.0:*               LISTEN      68532/nginx: master 
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      68532/nginx: master 
tcp6       0      0 :::10254                :::*                    LISTEN      68493/nginx-ingress 

Because hostNetwork is configured, nginx is already listening on the node's local ports 80/443/8181. Port 8181 is a default backend configured by nginx-controller (when a request matches no Ingress rule, traffic is sent to this default backend).
So as long as the node has a public IP, a domain can be pointed at it directly to expose services externally. For nginx high availability, deploy the controller on multiple nodes and put an LVS+keepalived load balancer in front of them. A quick check is sketched below.
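A hedged way to verify the edge node is answering (the node IP and domain are the ones used later in this article; before the Ingress rule in step 7 exists, the controller simply answers with its 404 default backend):

# Send a request to the edge node's host port with the Host header the Ingress rule will match
curl -H 'Host: www.emmm.com' http://192.168.111.9/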



7. Create an Ingress rule
# First create a Deployment and a Service
vim service-nginx.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-app-svc
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: nginx


vim ingress-app.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-app-ingress
spec:
  rules:
  - host: www.emmm.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-app-svc
            port:
              number: 80

kubectl apply -f service-nginx.yaml
kubectl apply -f ingress-app.yaml

kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
nginx-app-845d4d9dff-hqqg5   1/1     Running   0          11m
nginx-app-845d4d9dff-zm4t9   1/1     Running   0          11m

kubectl get ingress
NAME                CLASS    HOSTS          ADDRESS   PORTS   AGE
nginx-app-ingress   <none>   www.emmm.com             80      76s

8. Test access
# Add a local hosts entry for name resolution
vim /etc/hosts
192.168.111.7 master01
192.168.111.8 node01
192.168.111.9 node02
#192.168.111.10 hub.emmm.com
192.168.111.10 stor01

192.168.111.9 www.emmm.com



curl www.emmm.com

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>


9. Inspect nginx-ingress-controller
kubectl get pod -n ingress-nginx -owide
NAME                             READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
nginx-ingress-controller-f28pz   1/1     Running   0          90m   192.168.111.9   node02   <none>           <none>

kubectl exec -it nginx-ingress-controller-f28pz -n ingress-nginx /bin/bash
 # more /etc/nginx/nginx.conf
# Between the 'start server www.emmm.com' and 'end server www.emmm.com' markers you can see the reverse-proxy configuration generated for this domain. A shortcut is sketched below.
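
Rather than paging through the whole file, the server block can be extracted directly (a hedged sketch; it assumes the same Pod name and the 'start server'/'end server' markers that ingress-nginx writes into its generated configuration):

kubectl exec -n ingress-nginx nginx-ingress-controller-f28pz -- sh -c "sed -n '/start server www.emmm.com/,/end server www.emmm.com/p' /etc/nginx/nginx.conf"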

Deployment + NodePort-type Service

1. Download the nginx-ingress-controller manifest and the ingress-nginx NodePort (port-exposure) Service manifest

mkdir /mnt/ingress-nodeport
cd /mnt/ingress-nodeport

Official download URLs:
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/baremetal/service-nodeport.yaml

Gitee mirror URLs inside China:
wget https://gitee.com/mirrors/ingress-nginx/raw/nginx-0.30.0/deploy/static/mandatory.yaml
wget https://gitee.com/mirrors/ingress-nginx/raw/nginx-0.30.0/deploy/static/provider/baremetal/service-nodeport.yaml

2. On every node, upload the image archive ingress-controller-0.30.0.tar to the /mnt/ingress-nodeport directory and load the image
docker load -i ingress-controller-0.30.0.tar

3. Start nginx-ingress-controller
kubectl apply -f mandatory.yaml
kubectl apply -f service-nodeport.yaml

-----------------------------------------------------------------------------------------
# If the K8s Pod fails to schedule, kubectl describe pod shows:
Warning  FailedScheduling  18s (x2 over 18s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match node selector

Solutions:
1. Add the matching label to the node that should run the Pod
# relative to the nodeSelector in the YAML example above
kubectl label nodes node_name kubernetes.io/os=linux

2. Or remove the nodeSelector from the YAML file; if there is no particular node requirement, simply deleting the node selector is enough
-----------------------------------------------------------------------------------------

kubectl get pod,svc -n ingress-nginx

NAME                                            READY   STATUS    RESTARTS   AGE
pod/nginx-ingress-controller-54b86f8f7b-wc7r7   1/1     Running   0          104s

NAME                    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx   NodePort   10.96.113.131   <none>        80:31281/TCP,443:31688/TCP   10s



Ingress HTTP proxying

cd /mnt/ingress-nodeport

# Create the Deployment, Service, and Ingress YAML resources

vim ingress-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 2
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    name: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-test
spec:
  rules:
  - host: www.emmm.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend: 
          service:
            name: nginx-svc
            port:
              number: 80

kubectl apply -f ingress-nginx.yaml

kubectl get svc,pods -o wide
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   19d   <none>
service/nginx-svc    ClusterIP   10.96.229.128   <none>        80/TCP    61s   name=nginx

NAME                             READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
pod/nginx-app-57dd86f5cc-9pj8h   1/1     Running   0          28s   10.244.2.95    node02   <none>           <none>
pod/nginx-app-57dd86f5cc-txfqp   1/1     Running   0          27s   10.244.1.131   node01   <none>           <none>


kubectl exec -it pod/nginx-app-57dd86f5cc-9pj8h bash
 # cd /usr/share/nginx/html/
 # echo 'this is web1' >> index.html 

kubectl exec -it pod/nginx-app-57dd86f5cc-txfqp  bash
 # cd /usr/share/nginx/html/
 # echo 'this is web2' >> index.html



# Test against the ClusterIP
curl 10.96.229.128


kubectl get pod -n ingress-nginx -owide
NAME                                        READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
nginx-ingress-controller-54b86f8f7b-wc7r7   1/1     Running   0          17m   10.244.1.130   node01   <none>           <none>


kubectl get svc -n ingress-nginx
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.96.113.131   <none>        80:31281/TCP,443:31688/TCP   16m

Add a local hosts entry for name resolution
vim /etc/hosts
192.168.111.8 www.emmm.com



# Test via the NodePort
curl http://www.emmm.com:31281




Ingress HTTP name-based virtual hosts

mkdir /mnt/ingress-nodeport/vhost
cd /mnt/ingress-nodeport/vhost


# Create the resources for virtual host 1
vim deployment1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment1
spec:
  replicas: 2
  selector:
    matchLabels:
      name: nginx1
  template:
    metadata:
      labels:
        name: nginx1
    spec:
      containers:
        - name: nginx1
          image: soscscs/myapp:v1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc-1
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    name: nginx1


kubectl apply -f deployment1.yaml

vim deployment2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment2
spec:
  replicas: 2
  selector:
    matchLabels:
      name: nginx2
  template:
    metadata:
      labels:
        name: nginx2
    spec:
      containers:
        - name: nginx2
          image: soscscs/myapp:v2
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc-2
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    name: nginx2
	
	
kubectl apply -f deployment2.yaml




# Create the Ingress resources
vim ingress-nginx.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress1
spec:
  rules:
    - host: www1.emmm.com
      http: 
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: svc-1
              port: 
                number: 80
---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress2
spec:
  rules:
    - host: www2.emmm.com
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service: 
              name: svc-2
              port:
                number: 80



kubectl apply -f ingress-nginx.yaml


# Test access
kubectl get svc -n ingress-nginx
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.96.113.131   <none>        80:31281/TCP,443:31688/TCP   29m


curl www1.emmm.com:31281
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
curl www2.emmm.com:31281
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>



Ingress HTTPS proxying

mkdir /mnt/ingress-nodeport/https
cd /mnt/ingress-nodeport/https

# Create a self-signed SSL certificate
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"

# Create a Secret resource to store it
kubectl create secret tls tls-secret --key tls.key --cert tls.crt


kubectl get secret
NAME                                 TYPE                                  DATA   AGE
default-token-kr2xl                  kubernetes.io/service-account-token   3      19d
mysecret                             Opaque                                2      26h
mysecret1                            Opaque                                2      26h
nfs-client-provisioner-token-nszdh   kubernetes.io/service-account-token   3      2d23h
tls-secret                           kubernetes.io/tls                     2      4s


kubectl describe secret tls-secret
Name:         tls-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/tls

Data
====
tls.key:  1704 bytes
tls.crt:  1143 bytes


# Create the Deployment, Service, and Ingress YAML resources
vim ingress-https.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 2
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    name: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-https
spec:
  tls:
    - hosts:
      - www3.emmm.com
      secretName: tls-secret
  rules:
    - host: www3.emmm.com
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service: 
              name: nginx-svc
              port:
                number: 80


kubectl apply -f ingress-https.yaml

kubectl get svc -n ingress-nginx
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.96.113.131   <none>        80:31281/TCP,443:31688/TCP   36m


# Access test
Add the record '192.168.111.8 www3.emmm.com' to C:\Windows\System32\drivers\etc\hosts on the host machine.
Visit https://www3.emmm.com:31688 in Chrome; the self-signed certificate will trigger a browser warning. A curl check is sketched below.
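
From a Linux shell the same check can be done with curl (a hedged sketch; -k skips certificate verification because the certificate is self-signed, and --resolve avoids editing /etc/hosts):

curl -k --resolve www3.emmm.com:31688:192.168.111.8 https://www3.emmm.com:31688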



Nginx BasicAuth
mkdir /mnt/ingress-nodeport/basic-auth
cd /mnt/ingress-nodeport/basic-auth

# Generate the username/password authentication file and store it in a Secret resource
yum -y install httpd
htpasswd -c auth emmmm    # the authentication file must be named 'auth'
kubectl create secret generic basic-auth --from-file=auth


# Create the Ingress resource
vim ingress-auth.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-auth
  annotations:
    # authentication type: basic
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the Secret that holds the htpasswd file: basic-auth
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message shown in the authentication prompt
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - emmmm'
spec:
  rules:
  - host: auth.emmmm.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service: 
            name: nginx-svc
            port:
              number: 80
# For detailed configuration options see the official docs: https://kubernetes.github.io/ingress-nginx/examples/auth/basic/


kubectl apply -f ingress-auth.yaml


# Access test
kubectl get svc -n ingress-nginx
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.96.113.131   <none>        80:31281/TCP,443:31688/TCP   48m

echo '192.168.111.8 auth.emmmm.com' >> /etc/hosts


Visit in a browser: http://auth.emmmm.com:31281 (you will be prompted for the username and password created above). A curl check is sketched below.
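
The same check from the command line (hedged; 'emmmm' is the user created with htpasswd above, and <password> stands for whatever password was entered there; without credentials the controller returns 401):

curl -u emmmm:<password> http://auth.emmmm.com:31281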


Nginx rewrite

# metadata.annotations configuration notes
●nginx.ingress.kubernetes.io/rewrite-target: <string>      # the target URI to which traffic must be redirected
●nginx.ingress.kubernetes.io/ssl-redirect: <bool>          # whether this location is accessible over SSL only (defaults to true when the Ingress includes a certificate)
●nginx.ingress.kubernetes.io/force-ssl-redirect: <bool>    # force the redirect to HTTPS even though TLS is not enabled on the Ingress
●nginx.ingress.kubernetes.io/app-root: <string>            # the application root the controller must redirect to when it is in the '/' context
●nginx.ingress.kubernetes.io/use-regex: <bool>             # whether the paths defined on this Ingress use regular expressions

vim ingress-rewrite.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-rewrite
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: http://www1.emmm.com:31281
spec:
  rules:
  - host: re.emmm.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          # re.emmm.com is only used for the redirect and needs no real site behind it, so the Service name here can be anything
          service: 
            name: nginx-svc
            port:
              number: 80


kubectl apply -f ingress-rewrite.yaml

echo '192.168.111.8 re.emmm.com' >> /etc/hosts

Visit in a browser: http://re.emmm.com:31281 (the request is redirected to http://www1.emmm.com:31281). A curl check is sketched below.
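
A hedged command-line check of the rewrite (curl -I should show a 3xx response whose Location header points at http://www1.emmm.com:31281):

curl -I http://re.emmm.com:31281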