Microservices in k8s

Table of Contents

I. What Is a Microservice

II. Types of Microservices

III. IPVS Mode

1. Configuring ipvs mode

(1) Install ipvsadm on all nodes

(2) Modify the proxy configuration on the master node

(3) Restart the pods

IV. Microservice Types in Detail

1. ClusterIP

Example:

2. Headless, a special ClusterIP mode

3. NodePort

4. LoadBalancer

5. MetalLB

6. ExternalName

V. Ingress-nginx

1. ingress-nginx features

2. Deploying ingress

(1) Download/upload the deployment files

(2) Test ingress

3. Advanced ingress usage

(1) Lab environment

(2) Path-based access

(3) Domain-based access

(4) Setting up TLS encryption

(5) Setting up auth authentication

(6) rewrite redirection

VI. Canary Release

1. Concept

2. Canary release

(1) Header-based (HTTP header) canary

(2) Weight-based canary release


I. What Is a Microservice

Controllers run the cluster's workloads, but how does an application get exposed? It has to be exposed through a microservice (Service) before it can be accessed.

  • A Service is the interface that a group of Pods providing the same service exposes to the outside.

  • With a Service, an application gains service discovery and load balancing.

  • By default a Service only provides layer-4 load balancing; it has no layer-7 capability (that can be added with Ingress).

II. Types of Microservices

Microservice type   Description
ClusterIP           Default. k8s automatically assigns the Service a virtual IP, reachable only from inside the cluster
NodePort            Exposes the Service on a specified port on the nodes; any NodeIP:nodePort routes to the ClusterIP
LoadBalancer        On top of NodePort, uses the cloud provider to create an external load balancer that forwards requests to NodeIP:NodePort; only usable on cloud servers
ExternalName        Forwards the service to a specified domain via a DNS CNAME record (set with spec.externalName)

Example

# Generate the controller manifest and create the controller
[root@k8s-master ~]# kubectl create deployment ws --image reg.zx.org/library/myapp:v1 --replicas 2 --dry-run=client -o yaml > ws.yml
[root@k8s-master ~]# kubectl apply -f ws.yml 
deployment.apps/ws created

# Generate the Service yaml and append it to the existing yaml
[root@k8s-master ~]# kubectl expose deployment ws --port 80 --target-port 80 --dry-run=client -o yaml >> ws.yml
[root@k8s-master ~]# vim ws.yml         # separate different resources with ---
[root@k8s-master ~]# cat ws.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: ws
  name: ws
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ws
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ws
    spec:
      containers:
      - image: reg.zx.org/library/myapp:v1
        name: myapp
        resources: {}

---

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: ws
  name: ws
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: ws

[root@k8s-master ~]# kubectl apply -f ws.yml 
deployment.apps/ws configured
service/ws created
[root@k8s-master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
deployment   ClusterIP   10.100.147.145   <none>        80/TCP    13h
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   38h
readiness    ClusterIP   10.101.55.183    <none>        80/TCP    15h
ws           ClusterIP   10.98.35.35      <none>        80/TCP    9s

[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl get service -o wide
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
deployment   ClusterIP   10.100.147.145   <none>        80/TCP    13h   app=myapp
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   38h   <none>
readiness    ClusterIP   10.101.55.183    <none>        80/TCP    15h   run=readiness
ws           ClusterIP   10.98.35.35      <none>        80/TCP    67s   app=ws

[root@k8s-master ~]# iptables -t nat -nL
......
Chain KUBE-SERVICES (2 references)
target     prot opt source               destination         
KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  0.0.0.0/0            10.96.0.1            /* default/kubernetes:https cluster IP */ tcp dpt:443
KUBE-SVC-6GEDBFKJV7TTAFN7  tcp  --  0.0.0.0/0            10.98.35.35          /* default/ws cluster IP */ tcp dpt:80
KUBE-NODEPORTS  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
......

[root@k8s-master ~]# vim ws.yml         # add: "type: NodePort"
[root@k8s-master ~]# cat ws.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: ws
  name: ws
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ws
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ws
    spec:
      containers:
      - image: reg.zx.org/library/myapp:v1
        name: myapp
        resources: {}

---

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: ws
  name: ws
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: ws
  type: NodePort

[root@k8s-master ~]# kubectl apply -f ws.yml 
deployment.apps/ws configured
service/ws configured
         
[root@k8s-master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
deployment   ClusterIP   10.100.147.145   <none>        80/TCP         13h
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        38h
readiness    ClusterIP   10.101.55.183    <none>        80/TCP         15h
ws           NodePort    10.98.35.35      <none>        80:31381/TCP   4m51s

III. IPVS Mode

  • A Service is implemented by the kube-proxy component together with iptables

  • When kube-proxy handles Services through iptables, it has to install a large number of iptables rules on the host; with many Pods, constantly refreshing those rules consumes a lot of CPU

  • Services in IPVS mode allow a k8s cluster to support a much larger number of Pods
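Before and after switching, you can check which mode kube-proxy is actually running in. Two quick probes (a sketch; it assumes kube-proxy's default metrics address 127.0.0.1:10249 and the kubeadm-managed ConfigMap name kube-proxy):

# ask kube-proxy's local metrics endpoint directly (run on a node)
curl -s localhost:10249/proxyMode

# or read the mode out of the kube-proxy ConfigMap
kubectl -n kube-system get cm kube-proxy -o yaml | grep 'mode:'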

1. Configuring ipvs mode

After switching to ipvs mode, kube-proxy adds a virtual NIC on the host, kube-ipvs0, and assigns all the Service IPs to it

[root@k8s-master ~]# ip a | tail -n 20
8: veth411bce96@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether 92:64:68:74:60:d6 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::9064:68ff:fe74:60d6/64 scope link 
       valid_lft forever preferred_lft forever
9: vethd4d7ef26@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether 66:5a:fa:34:d8:02 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::645a:faff:fe34:d802/64 scope link 
       valid_lft forever preferred_lft forever
11: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether 7a:63:54:5e:87:51 brd ff:ff:ff:ff:ff:ff
    inet 10.96.186.254/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.100.147.145/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.101.55.183/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever

(1) Install ipvsadm on all nodes

yum install ipvsadm -y

(2) Modify the proxy configuration on the master node

[root@k8s-master ~]# kubectl -n kube-system edit cm kube-proxy
    metricsBindAddress: ""
    mode: "ipvs"							#设置kube-proxy使用ipvs模式
    nftables:

(3) Restart the pods

Pods read the configuration file at startup; once the config is changed, pods that are already running do not pick it up, so the kube-proxy pods must be restarted
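The transcript below deletes the kube-proxy pods so their DaemonSet recreates them; on kubectl 1.15 and later, an equivalent one-liner (assuming the DaemonSet keeps its default name kube-proxy) is:

kubectl -n kube-system rollout restart daemonset kube-proxy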

[root@k8s-master ~]# kubectl -n kube-system get  pods   | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'
pod "kube-proxy-9ftsb" deleted
pod "kube-proxy-chthh" deleted
pod "kube-proxy-dzcxh" deleted

[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.254.200:32752 rr
  -> 10.244.1.65:80               Masq    1      0          0         
  -> 10.244.2.70:80               Masq    1      0          0         
TCP  10.96.0.1:443 rr
  -> 172.25.254.200:6443          Masq    1      0          0         
TCP  10.96.0.10:53 rr
TCP  10.96.0.10:9153 rr
TCP  10.96.186.254:80 rr
  -> 10.244.1.65:80               Masq    1      0          0         
  -> 10.244.2.70:80               Masq    1      0          0         
TCP  10.100.147.145:80 rr
TCP  10.101.55.183:80 rr
TCP  10.244.0.1:32752 rr
  -> 10.244.1.65:80               Masq    1      0          0         
  -> 10.244.2.70:80               Masq    1      0          0   

# clean up
[root@k8s-master ~]# kubectl delete -f ws.yml
deployment.apps "ws" deleted
service "ws" deleted

IV. Microservice Types in Detail

1. ClusterIP

Characteristics: ClusterIP mode is reachable only from inside the cluster, and it provides health checking and automatic discovery for the cluster's pods

Example

[root@k8s-master ~]# kubectl run testpod --image reg.zx.org/library/myapp:v1 
pod/testpod created
[root@k8s-master ~]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
testpod   1/1     Running   0          9s
[root@k8s-master ~]# kubectl get pods -o wide --show-labels 
NAME      READY   STATUS    RESTARTS   AGE   IP            NODE               NOMINATED NODE   READINESS GATES   LABELS
testpod   1/1     Running   0          64s   10.244.1.66   k8s-node1.zx.org   <none>           <none>            run=testpod
[root@k8s-master ~]# kubectl expose pod testpod --port 80 --target-port 80 --dry-run=client -o yaml > testpod-svc.yml
[root@k8s-master ~]# vim testpod-svc.yml 
[root@k8s-master ~]# cat testpod-svc.yml 
apiVersion: v1
kind: Service
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: testpod
  type: ClusterIP

[root@k8s-master ~]# kubectl apply -f testpod-svc.yml 
service/testpod created

[root@k8s-master ~]# kubectl describe svc testpod 
Name:              testpod
Namespace:         default
Labels:            run=testpod
Annotations:       <none>
Selector:          run=testpod
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.105.248.212
IPs:               10.105.248.212
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.66:80
Session Affinity:  None
Events:            <none>
[root@k8s-master ~]# 
[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 172.25.254.200:6443          Masq    1      0          0         
TCP  10.96.0.10:53 rr
TCP  10.96.0.10:9153 rr
TCP  10.100.147.145:80 rr
TCP  10.101.55.183:80 rr
TCP  10.105.248.212:80 rr
  -> 10.244.1.66:80               Masq    1      0          0         
UDP  10.96.0.10:53 rr

[root@k8s-master ~]# kubectl run testpod1 --image reg.zx.org/library/myapp:v1 

[root@k8s-master ~]# kubectl get pods -o wide --show-labels     # the two pods' labels differ
NAME       READY   STATUS    RESTARTS   AGE    IP            NODE               NOMINATED NODE   READINESS GATES   LABELS
testpod    1/1     Running   0          7m8s   10.244.1.66   k8s-node1.zx.org   <none>           <none>            run=testpod
testpod1   1/1     Running   0          10s    10.244.2.72   k8s-node2.zx.org   <none>           <none>            run=testpod1
[root@k8s-master ~]# ipvsadm -Ln        # testpod1 is not among testpod's backends
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 172.25.254.200:6443          Masq    1      0          0         
TCP  10.96.0.10:53 rr
TCP  10.96.0.10:9153 rr
TCP  10.100.147.145:80 rr
TCP  10.101.55.183:80 rr
TCP  10.105.248.212:80 rr
  -> 10.244.1.66:80               Masq    1      0          0         
UDP  10.96.0.10:53 rr
[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl label pod testpod1 run=testpod --overwrite     # change the label to testpod
pod/testpod1 labeled
[root@k8s-master ~]# kubectl get pods -o wide --show-labels     # labels now match
NAME       READY   STATUS    RESTARTS   AGE     IP            NODE               NOMINATED NODE   READINESS GATES   LABELS
testpod    1/1     Running   0          7m43s   10.244.1.66   k8s-node1.zx.org   <none>           <none>            run=testpod
testpod1   1/1     Running   0          45s     10.244.2.72   k8s-node2.zx.org   <none>           <none>            run=testpod
[root@k8s-master ~]# ipvsadm -Ln        
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 172.25.254.200:6443          Masq    1      0          0         
TCP  10.96.0.10:53 rr
TCP  10.96.0.10:9153 rr
TCP  10.100.147.145:80 rr
TCP  10.101.55.183:80 rr
TCP  10.105.248.212:80 rr            # testpod1's pod added as a backend
  -> 10.244.1.66:80               Masq    1      0          0         
  -> 10.244.2.72:80               Masq    1      0          0         
UDP  10.96.0.10:53 rr
[root@k8s-master ~]# kubectl -n default get svc 
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   30m
testpod      ClusterIP   10.102.121.5   <none>        80/TCP    25s


[root@k8s-master ~]#  dig testpod.default.svc.cluster.local. @10.96.0.10

; <<>> DiG 9.16.23-RH <<>> testpod.default.svc.cluster.local. @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37665
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 56102b725cf1025e (echoed)
;; QUESTION SECTION:
;testpod.default.svc.cluster.local. IN	A

;; ANSWER SECTION:
testpod.default.svc.cluster.local. 30 IN A	10.102.121.5

;; Query time: 136 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Mon Sep 16 11:15:09 CST 2024
;; MSG SIZE  rcvd: 123

# clean up
[root@k8s-master ~]# kubectl delete -f testpod-svc.yml 
service "testpod" deleted

2. Headless, a special ClusterIP mode

headless (headless service)

A headless Service is not assigned a Cluster IP and kube-proxy does not handle it; the platform does no load balancing or routing for it. Cluster access resolves via DNS straight to the IPs of the backing business pods, so all scheduling is done by DNS alone
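A quick way to observe this once the headless service below is created (a sketch, assuming the cluster DNS address 10.96.0.10 used elsewhere in this article): dig should answer with the Pod IPs directly rather than a single ClusterIP.

dig +short testpod.default.svc.cluster.local @10.96.0.10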

[root@k8s-master ~]# vim testpod-svc.yml 
[root@k8s-master ~]# cat testpod-svc.yml 
apiVersion: v1
kind: Service
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: testpod
  type: ClusterIP
  clusterIP: None

[root@k8s-master ~]# kubectl apply -f testpod-svc.yml 
service/testpod created
[root@k8s-master ~]# kubectl get service testpod 
NAME      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
testpod   ClusterIP   None         <none>        80/TCP    10s

[root@k8s-master ~]# kubectl describe svc testpod 
Name:              testpod
Namespace:         default
Labels:            run=testpod
Annotations:       <none>
Selector:          run=testpod
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                None
IPs:               None
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.66:80,10.244.2.72:80    ## resolves directly to the pods
Session Affinity:  None
Events:            <none>

# start a busyboxplus pod to test
[root@k8s-master ~]# kubectl run  test --image reg.zx.org/library/busyboxplus:latest -it
If you don't see a command prompt, try pressing enter.
/ # nslookup testpod

3. NodePort

NodePort exposes a port through ipvs so that external hosts can reach the pod workload via a node's external IP:<port>

The access path is: client -> NodeIP:nodePort -> ClusterIP:port -> Pod

[root@k8s-master ~]# vim testpod-svc.yml 
[root@k8s-master ~]# kubectl apply -f testpod-svc.yml 
service/testpod created
[root@k8s-master ~]# kubectl get service testpod 
NAME      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
testpod   NodePort   10.102.166.250   <none>        80:32171/TCP   12s
[root@k8s-master ~]# for i in {1..5}
> do
> curl 172.25.254.200:32171/hostname.html
> done

The default nodePort range is 30000-32767; a port outside it is rejected with an error

[root@k8s-master ~]# vim testpod-svc.yml 
[root@k8s-master ~]# kubectl apply -f testpod-svc.yml 
The Service "testpod" is invalid: spec.ports[0].nodePort: Invalid value: 33333: provided port is not in the valid range. The range of valid ports is 30000-32767

To use a port outside this range, extra configuration is needed

[root@k8s-master ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml

添加"--service-node-port-range=" 参数,端口范围可以自定义

After the change the api-server restarts automatically; wait until the apiserver is back up before operating the cluster

The restart completes on its own; after the parameter is changed no manual intervention is needed

[root@k8s-master ~]# kubectl get nodes
The connection to the server 172.25.254.200:6443 was refused - did you specify the right host or port?
[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl get nodes
The connection to the server 172.25.254.200:6443 was refused - did you specify the right host or port?
[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl get nodes
NAME                STATUS   ROLES           AGE   VERSION
k8s-master.zx.org   Ready    control-plane   40h   v1.30.0
k8s-node1.zx.org    Ready    <none>          40h   v1.30.0
k8s-node2.zx.org    Ready    <none>          40h   v1.30.0
[root@k8s-master ~]# kubectl apply -f testpod-svc.yml 
service/testpod configured
[root@k8s-master ~]# kubectl get service testpod 
NAME      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
testpod   NodePort   10.102.166.250   <none>        80:33333/TCP   10m

4. LoadBalancer

The cloud platform allocates a VIP for us and implements the access; on a bare-metal host, MetalLB is needed to allocate the IP

LoadBalancer mode is meant for cloud platforms; bare-metal environments need MetalLB installed to support it

[root@k8s-master ~]# vim testpod-svc.yml 
[root@k8s-master ~]# cat testpod-svc.yml 
apiVersion: v1
kind: Service
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: testpod
  type: LoadBalancer
[root@k8s-master ~]# kubectl apply -f testpod-svc.yml 
service/testpod configured
# by default no external access IP can be allocated
[root@k8s-master ~]# kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
deployment   ClusterIP      10.100.147.145   <none>        80/TCP         34h
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP        2d11h
readiness    ClusterIP      10.101.55.183    <none>        80/TCP         35h
testpod      LoadBalancer   10.102.166.250   <pending>     80:33333/TCP   19h

5. MetalLB

Purpose: allocate VIPs for LoadBalancer Services

# set ipvs mode and enable strictARP
[root@k8s-master ~]# kubectl edit cm -n kube-system kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true

[root@k8s-master ~]# kubectl -n kube-system get  pods   | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'
pod "kube-proxy-27gds" deleted
pod "kube-proxy-f7tzt" deleted
pod "kube-proxy-kkt58" deleted
[root@k8s-master ~]# 

[root@k8s-master ~]# mkdir metalLB
[root@k8s-master ~]# cd metalLB/
[root@k8s-master metalLB]# ls
metallb-native.yaml  metalLB.tag.gz
[root@k8s-master metalLB]# vim metallb-native.yaml 
[root@k8s-master metalLB]# docker load -i metalLB.tag.gz 
......
Loaded image: quay.io/metallb/controller:v0.14.8
......
Loaded image: quay.io/metallb/speaker:v0.14.8

[root@k8s-master metalLB]# docker tag quay.io/metallb/controller:v0.14.8 reg.zx.org/metallb/quay.io/metallb/controller:v0.14.8
[root@k8s-master metalLB]# docker push reg.zx.org/metallb/quay.io/metallb/controller:v0.14.8

[root@k8s-master metalLB]# docker tag quay.io/metallb/speaker:v0.14.8 reg.zx.org/metallb/quay.io/metallb/speaker:v0.14.8
[root@k8s-master metalLB]# docker push reg.zx.org/metallb/quay.io/metallb/speaker:v0.14.8

[root@k8s-master metalLB]# vim metallb-native.yaml     # change the image addresses in the file to match the harbor registry paths (two places)
# deploy the service
[root@k8s-master metalLB]# kubectl apply -f metallb-native.yaml 
[root@k8s-master metalLB]# kubectl get namespaces 
NAME              STATUS   AGE
default           Active   2d11h
kube-flannel      Active   2d11h
kube-node-lease   Active   2d11h
kube-public       Active   2d11h
kube-system       Active   2d11h
metallb-system    Active   44s

[root@k8s-master metalLB]# kubectl -n metallb-system get all
NAME                              READY   STATUS    RESTARTS   AGE
pod/controller-566fd4c654-mkkrc   1/1     Running   0          39s
pod/speaker-bbjt2                 1/1     Running   0          37s
pod/speaker-j9nvx                 1/1     Running   0          37s
pod/speaker-ljtgz                 1/1     Running   0          37s

# configure the address range to allocate from
[root@k8s-master metalLB]# vim configmap.yml
[root@k8s-master metalLB]# cat configmap.yml 
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool                # address pool name
  namespace: metallb-system
spec:
  addresses:
  - 172.25.254.50-172.25.254.99    # change to your own local address range

---                                # a divider is required between two different kinds
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool                    # use the address pool

[root@k8s-master metalLB]# kubectl apply -f configmap.yml
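To confirm the pool objects were created (a sketch; the fully-qualified resource names below are the ones MetalLB v0.14 registers):

kubectl -n metallb-system get ipaddresspools.metallb.io
kubectl -n metallb-system get l2advertisements.metallb.io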
# continuing: solve the LoadBalancer's missing external IP problem
[root@k8s-master ~]# vim testpod-svc.yml 
[root@k8s-master ~]# kubectl apply -f testpod-svc.yml 
service/testpod created
[root@k8s-master ~]# kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>          443/TCP        16m
testpod      LoadBalancer   10.97.210.224   172.25.254.50   80:32231/TCP   6s
[root@k8s-master ~]# cat testpod-svc.yml 
apiVersion: v1
kind: Service
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: testpod
  type: LoadBalancer

6. ExternalName

  • When the Service is created it is not assigned an IP; instead a fixed domain name is resolved via a DNS CNAME, which solves the problem of changing IPs

  • Typically used when external workloads need to talk to pods, or when an external workload is being migrated into pods

  • While an application is being migrated into the cluster, ExternalName is useful during the transition phase

  • When resources outside the cluster are migrated in, their IPs may change during the migration, but a domain name plus DNS resolution solves this neatly

    [root@k8s-master ~]# vim testpod-svc.yml
    [root@k8s-master ~]# kubectl apply -f testpod-svc.yml
    service/testpod configured
    [root@k8s-master ~]# kubectl get service testpod
    NAME      TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)   AGE
    testpod   ExternalName   <none>       www.baidu.com   80/TCP    7m2s
    [root@k8s-master ~]# cat testpod-svc.yml
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        run: testpod
      name: testpod
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
      selector:
        run: testpod
      type: ExternalName
      externalName: www.baidu.com
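A quick check (a sketch, assuming the CoreDNS address 10.96.0.10 as above): the service name should now resolve as a CNAME to www.baidu.com:

    dig testpod.default.svc.cluster.local CNAME @10.96.0.10 +short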

V. Ingress-nginx

1. ingress-nginx features

  • A global load-balancing service set up to proxy different backend Services, with layer-7 support

  • Ingress consists of two parts: the Ingress controller and the Ingress resource

  • The Ingress Controller provides the corresponding proxying based on the Ingress objects you define

  • The common reverse-proxy projects in the industry, such as Nginx, HAProxy, Envoy and Traefik, all maintain dedicated Ingress Controllers for Kubernetes

2. Deploying ingress

(1) Download/upload the deployment files

[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/baremetal/deploy.yaml

[root@k8s-master ~]# docker load -i ingress-nginx-1.11.2.tar.gz 
Loaded image: reg.harbor.org/ingress-nginx/controller:v1.11.2
Loaded image: reg.harbor.org/ingress-nginx/kube-webhook-certgen:v1.4.3

[root@k8s-master ~]# docker tag reg.harbor.org/ingress-nginx/controller:v1.11.2 reg.zx.org/ingress-nginx/controller:v1.11.2

[root@k8s-master ~]# docker tag reg.harbor.org/ingress-nginx/kube-webhook-certgen:v1.4.3 reg.zx.org/ingress-nginx/kube-webhook-certgen:v1.4.3

[root@k8s-master ~]# docker push reg.zx.org/ingress-nginx/controller:v1.11.2
[root@k8s-master ~]# docker push reg.zx.org/ingress-nginx/kube-webhook-certgen:v1.4.3

[root@k8s-master ~]# vim deploy.yaml    # change the image locations (three places)
    image: reg.zx.org/ingress-nginx/controller:v1.11.2
    image: reg.zx.org/ingress-nginx/kube-webhook-certgen:v1.4.3
    image: reg.zx.org/ingress-nginx/kube-webhook-certgen:v1.4.3
[root@k8s-master ~]# kubectl apply -f deploy.yaml 
[root@k8s-master ~]# kubectl get namespaces 

(2) Test ingress

[root@k8s-master ~]# kubectl create deployment myapp1 --image reg.zx.org/library/myapp:v1 --dry-run=client -o yaml > myapp-v1.yml
[root@k8s-master ~]# vim myapp-v1.yml 
[root@k8s-master ~]# cp myapp-v1.yml myapp-v2.yml
[root@k8s-master ~]# vim myapp-v2.yml
[root@k8s-master ~]# kubectl apply -f myapp-v1.yml 
deployment.apps/myapp1 created
[root@k8s-master ~]# kubectl apply -f myapp-v2.yml 
deployment.apps/myapp2 created
[root@k8s-master ~]# kubectl expose deployment myapp1 --port 80 --target-port 80 --dry-run=client -o yaml >> myapp-v1.yml 
[root@k8s-master ~]# kubectl expose deployment myapp2 --port 80 --target-port 80 --dry-run=client -o yaml >> myapp-v2.yml 
[root@k8s-master ~]# vim myapp-v1.yml 
[root@k8s-master ~]# vim myapp-v2.yml 
[root@k8s-master ~]# kubectl apply -f myapp-v1.yml 
deployment.apps/myapp1 configured
service/myapp1 created
[root@k8s-master ~]# kubectl apply -f myapp-v2.yml 
deployment.apps/myapp2 configured
service/myapp2 created
[root@k8s-master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   41m
myapp1       ClusterIP   10.99.71.198     <none>        80/TCP    14s
myapp2       ClusterIP   10.101.156.212   <none>        80/TCP    8s
[root@k8s-master ~]# curl 10.99.71.198
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.101.156.212
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

[root@k8s-master ~]# cat myapp-v1.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: myapp1
  name: myapp1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myapp1
    spec:
      containers:
      - image: reg.zx.org/library/myapp:v1
        name: myapp
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: myapp1
  name: myapp1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp1

[root@k8s-master ~]# cat myapp-v2.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: myapp2
  name: myapp2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp2
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myapp2
    spec:
      containers:
      - image: reg.zx.org/library/myapp:v2
        name: myapp
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: myapp2
  name: myapp2
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp2

[root@k8s-master ~]# kubectl -n ingress-nginx get svc
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.109.170.26    <none>        80:31012/TCP,443:30324/TCP   4m18s
ingress-nginx-controller-admission   ClusterIP   10.103.195.107   <none>        443/TCP                      4m17s

[root@k8s-master ~]# kubectl create ingress myapp1 --class nginx --rule="/=myapp1:80" --dry-run=client -o yaml > ingress1.yml
[root@k8s-master ~]# vim ingress1.yml 
[root@k8s-master ~]# kubectl -n ingress-nginx get all
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-79c2p        0/1     Completed   0          7m57s
pod/ingress-nginx-admission-patch-mghg6         0/1     Completed   1          7m57s
pod/ingress-nginx-controller-6db9bc976b-ml8lg   1/1     Running     0          7m57s

NAME                                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.109.170.26    <none>        80:31012/TCP,443:30324/TCP   8m
service/ingress-nginx-controller-admission   ClusterIP   10.103.195.107   <none>        443/TCP                      7m59s
   
# change the Service to a LoadBalancer: set "type: LoadBalancer"
[root@k8s-master ~]# kubectl -n ingress-nginx edit svc ingress-nginx-controller
service/ingress-nginx-controller edited
[root@k8s-master ~]# kubectl -n ingress-nginx get all
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-79c2p        0/1     Completed   0          10m
pod/ingress-nginx-admission-patch-mghg6         0/1     Completed   1          10m
pod/ingress-nginx-controller-6db9bc976b-ml8lg   1/1     Running     0          10m

NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.109.170.26    172.25.254.50   80:31012/TCP,443:30324/TCP   10m
service/ingress-nginx-controller-admission   ClusterIP      10.103.195.107   <none>          443/TCP                      10m

# create the Ingress resource
[root@k8s-master ~]# kubectl apply -f ingress1.yml 
ingress.networking.k8s.io/myapp1 created
[root@k8s-master ~]# kubectl get ingress
NAME     CLASS   HOSTS   ADDRESS   PORTS   AGE
myapp1   nginx   *                 80      13s
[root@k8s-master ~]# curl 172.25.254.50        # the EXTERNAL-IP shown on ingress-nginx-controller is the IP that ingress ultimately exposes

[root@k8s-master ~]# cat ingress1.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: null
  name: myapp1
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - backend:
          service:
            name: myapp1
            port:
              number: 80
        path: /
        pathType: Prefix
#Exact (exact match), ImplementationSpecific (implementation specific), Prefix (prefix match), Regular expression (regex match)

3. Advanced ingress usage

(1) Lab environment

[root@k8s-master ~]# kubectl get pods -A
NAMESPACE        NAME                                        READY   STATUS      RESTARTS      AGE
default          myapp1-57778d7fdb-cgx7n                     1/1     Running     0             43m
default          myapp2-58657f48db-krv4l                     1/1     Running     0             43m
default          test                                        1/1     Running     1 (53m ago)   56m
default          web                                         1/1     Running     0             78m
ingress-nginx    ingress-nginx-admission-create-79c2p        0/1     Completed   0             20m
ingress-nginx    ingress-nginx-admission-patch-mghg6         0/1     Completed   1             20m
ingress-nginx    ingress-nginx-controller-6db9bc976b-ml8lg   1/1     Running     0             20m
kube-flannel     kube-flannel-ds-f9b4w                       1/1     Running     0             80m
kube-flannel     kube-flannel-ds-gpg9t                       1/1     Running     0             81m
kube-flannel     kube-flannel-ds-n7nfd                       1/1     Running     0             81m
kube-system      coredns-558b94794c-4ctss                    1/1     Running     0             82m
kube-system      coredns-558b94794c-v8zrd                    1/1     Running     0             82m
kube-system      etcd-k8s-master.zx.org                      1/1     Running     0             82m
kube-system      kube-apiserver-k8s-master.zx.org            1/1     Running     0             82m
kube-system      kube-controller-manager-k8s-master.zx.org   1/1     Running     0             82m
kube-system      kube-proxy-8ktfv                            1/1     Running     0             75m
kube-system      kube-proxy-pkbzb                            1/1     Running     0             75m
kube-system      kube-proxy-qz6ck                            1/1     Running     0             75m
kube-system      kube-scheduler-k8s-master.zx.org            1/1     Running     0             82m
metallb-system   controller-566fd4c654-trtv9                 1/1     Running     0             72m
metallb-system   speaker-f5hwh                               1/1     Running     0             72m
metallb-system   speaker-f8xg8                               1/1     Running     0             72m
metallb-system   speaker-fr8fg                               1/1     Running     0             72m
[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl get ingress
NAME     CLASS   HOSTS   ADDRESS         PORTS   AGE
myapp1   nginx   *       172.25.254.20   80      9m36s
[root@k8s-master ~]# kubectl delete ingress myapp1 
ingress.networking.k8s.io "myapp1" deleted
[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   83m
myapp1       ClusterIP   10.99.71.198     <none>        80/TCP    41m
myapp2       ClusterIP   10.101.156.212   <none>        80/TCP    41m
[root@k8s-master ~]# 
[root@k8s-master ~]# curl 10.99.71.198
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.101.156.212
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

(2) Path-based access

[root@k8s-master ~]# cp ingress1.yml ingress2.yml
[root@k8s-master ~]# vim ingress2.yml 
[root@k8s-master ~]# cat ingress2.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /        # anything appended after the path is redirected to /
  creationTimestamp: null
  name: myapp
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - backend:
          service:
            name: myapp1
            port:
              number: 80
        path: /v1
        pathType: Prefix
      - backend:
          service:
            name: myapp2
            port:
              number: 80
        path: /v2
        pathType: Prefix
    host: www.zx.org

[root@k8s-master ~]# kubectl apply -f ingress2.yml 
ingress.networking.k8s.io/myapp created

[root@k8s-master ~]# kubectl describe ingress myapp
Name:             myapp
Labels:           <none>
Namespace:        default
Address:          
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  www.zx.org  
              /v1   myapp1:80 (10.244.1.84:80)
              /v2   myapp2:80 (10.244.1.85:80)
Annotations:  nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    46s   nginx-ingress-controller  Scheduled for sync


[root@k8s-master ~]# kubectl get ingress myapp 
NAME    CLASS   HOSTS   ADDRESS         PORTS   AGE
myapp   nginx   *       172.25.254.20   80      88s
[root@k8s-master ~]# echo 172.25.254.50 www.zx.org >> /etc/hosts
[root@k8s-master ~]# curl www.zx.org/v1
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl www.zx.org/v2
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

(3) Domain-based access

[root@k8s-master ~]# cp ingress2.yml ingress3.yml
[root@k8s-master ~]# vim ingress3.yml 
[root@k8s-master ~]# vim /etc/hosts
[root@k8s-master ~]# kubectl apply -f ingress3.yml 
ingress.networking.k8s.io/myapp created
[root@k8s-master ~]# kubectl describe ingress myapp 
Name:             myapp
Labels:           <none>
Namespace:        default
Address:          172.25.254.20
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host           Path  Backends
  ----           ----  --------
  myapp1.zx.org  
                 /   myapp1:80 (10.244.1.84:80)
  myapp2.zx.org  
                 /   myapp2:80 (10.244.1.85:80)
Annotations:     <none>
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    28s (x2 over 50s)  nginx-ingress-controller  Scheduled for sync

[root@k8s-master ~]# curl myapp1.zx.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl myapp2.zx.org
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

[root@k8s-master ~]# cat ingress3.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - backend:
          service:
            name: myapp1
            port:
              number: 80
        path: /
        pathType: Prefix
    host: myapp1.zx.org
  - http:
      paths:
      - backend:
          service:
            name: myapp2
            port:
              number: 80
        path: /
        pathType: Prefix
    host: myapp2.zx.org

[root@k8s-master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.200	k8s-master.zx.org
172.25.254.10 k8s-node1.zx.org
172.25.254.20 k8s-node2.zx.org
172.25.254.100 reg.zx.org
172.25.254.50 www.zx.org myapp1.zx.org myapp2.zx.org

(4) Setting up TLS encryption

# create a certificate
[root@k8s-master ~]# openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -subj "/CN=nginxsvc/O=nginxsvc" -out tls.crt

# create a secret resource to hold the tls material
[root@k8s-master ~]# kubectl create secret tls  web-tls-secret --key tls.key --cert tls.crt

[root@k8s-master ~]# kubectl get secrets
NAME             TYPE                DATA   AGE
web-tls-secret   kubernetes.io/tls   2      23s
# secrets are where kubernetes usually keeps sensitive data; a secret is not itself a form of encryption

# create ingress4, the tls-based yml file
[root@k8s-master ~]# cp ingress3.yml ingress4.yml
[root@k8s-master ~]# vim ingress4.yml 
[root@k8s-master ~]# cat ingress4.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  tls:
    - hosts:
      - myapp-tls.zx.org
      secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - backend:
          service:
            name: myapp1
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-master ~]# kubectl apply -f ingress4.yml 
ingress.networking.k8s.io/myapp configured

[root@k8s-master ~]# kubectl describe ingress myapp 
Name:             myapp
Labels:           <none>
Namespace:        default
Address:          172.25.254.20
Ingress Class:    nginx
Default backend:  <default>
TLS:
  web-tls-secret terminates myapp-tls.zx.org
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /   myapp1:80 (10.244.1.84:80)
Annotations:  <none>
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    34s (x3 over 15m)  nginx-ingress-controller  Scheduled for sync

[root@k8s-master ~]# curl -k https://myapp-tls.zx.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

To access it from a browser, add an entry to your computer's hosts file: 172.25.254.50 myapp-tls.zx.org

(5) Setting up auth authentication

# create the auth file
[root@k8s-master ~]# dnf install httpd-tools -y
[root@k8s-master ~]# htpasswd -cm auth zx
New password: 
Re-type new password: 
Adding password for user zx

# create the auth secret resource
[root@k8s-master ~]# kubectl create secret generic auth-web --from-file auth
[root@k8s-master ~]# kubectl describe secrets auth-web
Name:         auth-web
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
auth:  41 bytes

[root@k8s-master ~]# kubectl get secrets auth-web -o yaml
apiVersion: v1
data:
  auth: eng6JGFwcjEkSWJjV3pIZzUkZWtrSDFSeWIzWEYvTjBvRjdrRi4uLwo=
kind: Secret
metadata:
  creationTimestamp: "2024-09-16T04:50:39Z"
  name: auth-web
  namespace: default
  resourceVersion: "13126"
  uid: 36088669-225e-4fa5-a1a2-b2d5c3823779
type: Opaque

[root@k8s-master ~]# cp ingress4.yml ingress5.yml
[root@k8s-master ~]# vim ingress5.yml 
[root@k8s-master ~]# cat ingress5.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
spec:
  tls:
    - hosts:
      - myapp-tls.zx.org
      secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.zx.org
    http:
      paths:
      - backend:
          service:
            name: myapp1
            port:
              number: 80
        path: /
        pathType: Prefix

[root@k8s-master ~]# kubectl apply -f ingress5.yml 
ingress.networking.k8s.io/myapp configured
[root@k8s-master ~]# kubectl describe ingress myapp 

# test
[root@k8s-master ~]# curl -k https://myapp-tls.zx.org
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx</center>
</body>
</html>
[root@k8s-master ~]# 
[root@k8s-master ~]# curl -k https://myapp-tls.zx.org -uzx
Enter host password for user 'zx':
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# 

(6) rewrite redirection

# point the default page to hostname.html
[root@k8s-master ~]# cp ingress5.yml ingress6.yml
[root@k8s-master ~]# vim ingress6.yml 
[root@k8s-master ~]# kubectl apply -f ingress6.yml 
ingress.networking.k8s.io/myapp configured
[root@k8s-master ~]#  kubectl describe ingress myapp 
Name:             myapp
Labels:           <none>
Namespace:        default
Address:          172.25.254.20
Ingress Class:    nginx
Default backend:  <default>
TLS:
  web-tls-secret terminates myapp-tls.zx.org
Rules:
  Host              Path  Backends
  ----              ----  --------
  myapp-tls.zx.org  
                    /   myapp1:80 (10.244.1.84:80)
Annotations:        nginx.ingress.kubernetes.io/app-root: /hostname.html
                    nginx.ingress.kubernetes.io/auth-realm: Please input username and password
                    nginx.ingress.kubernetes.io/auth-secret: auth-web
                    nginx.ingress.kubernetes.io/auth-type: basic
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    17s (x5 over 31m)  nginx-ingress-controller  Scheduled for sync
[root@k8s-master ~]# 
[root@k8s-master ~]# curl -Lk https://myapp-tls.zx.org -uzx
Enter host password for user 'zx':
myapp1-57778d7fdb-cgx7n
[root@k8s-master ~]# curl -Lk https://myapp-tls.zx.org/zx/hostname.html -uzx
Enter host password for user 'zx':
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.12.2</center>
</body>
</html>
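The describe output above reveals what this first edit added; a sketch of the metadata section of ingress6.yml at this stage (the rest of the file is unchanged from ingress5.yml):

metadata:
  name: myapp
  annotations:
    nginx.ingress.kubernetes.io/app-root: /hostname.html    # requests for / get redirected here
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"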
# fix the redirect path problem
[root@k8s-master ~]# vim ingress6.yml 
[root@k8s-master ~]# kubectl apply -f ingress6.yml 
ingress.networking.k8s.io/myapp configured
[root@k8s-master ~]# curl -Lk https://myapp-tls.zx.org/zx/hostname.html -uzx
Enter host password for user 'zx':
myapp1-57778d7fdb-cgx7n
[root@k8s-master ~]# cat ingress6.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
spec:
  tls:
    - hosts:
      - myapp-tls.zx.org
      secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.zx.org
    http:
      paths:
      - backend:
          service:
            name: myapp1
            port:
              number: 80
        path: /
        pathType: Prefix
      - backend:
          service:
            name: myapp1
            port:
              number: 80
        path: /zx(/|$)(.*)
        pathType: ImplementationSpecific

VI. Canary Release

1. Concept

A canary release (Canary Release), also known as a gray release, is a software release strategy.

Its main purpose is to test and validate a new version of the software on a small subset of users or servers before rolling it out to the entire production environment, reducing the impact on the whole system if the new version introduces serious problems.

It is also a Pod rollout pattern. A canary release adds new Pods first and removes old ones afterwards, keeping the total number of Pods at or above the desired value; after updating part of the Pods it pauses, and only once the new Pods are confirmed to run correctly does it update the remaining ones.

2. Canary release

(1) Header-based (HTTP header) canary

  • Implemented through annotations

  • Create the canary ingress, configuring the canary header key and value

  • Once the canary traffic has been validated, switch the main ingress to the new version

  • Previously we could upgrade through a controller's rolling update, 25% at a time by default; a header makes the upgrade smoother, using the key and value to test whether the new stack has problems

    # create the v1 ingress
    [root@k8s-master app]# vim ingress7.yml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
      name: myapp-v1-ingress
    spec:
      ingressClassName: nginx
      rules:
      - host: myapp-tls.zx.org
        http:
          paths:
          - backend:
              service:
                name: myapp-v1
                port:
                  number: 80
            path: /
            pathType: Prefix

    [root@k8s-master app]# kubectl describe ingress myapp-v1-ingress
    Name:             myapp-v1-ingress
    Labels:           <none>
    Namespace:        default
    Address:          172.25.254.10
    Ingress Class:    nginx
    Default backend:  <default>
    Rules:
      Host              Path  Backends
      ----              ----  --------
      myapp-tls.zx.org
                        /   myapp-v1:80 (10.244.2.31:80)
    Annotations:        <none>
    Events:
      Type    Reason  Age                From                      Message
      ----    ------  ----               ----                      -------
      Normal  Sync    44s (x2 over 73s)  nginx-ingress-controller  Scheduled for sync

    # create the header-based canary ingress
    [root@k8s-master app]# vim ingress8.yml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        nginx.ingress.kubernetes.io/canary: "true"
        nginx.ingress.kubernetes.io/canary-by-header: version
        nginx.ingress.kubernetes.io/canary-by-header-value: "2"
      name: myapp-v2-ingress
    spec:
      ingressClassName: nginx
      rules:
      - host: myapp-tls.zx.org
        http:
          paths:
          - backend:
              service:
                name: myapp-v2
                port:
                  number: 80
            path: /
            pathType: Prefix
    [root@k8s-master app]# kubectl apply -f ingress8.yml
    ingress.networking.k8s.io/myapp-v2-ingress created
    [root@k8s-master app]# kubectl describe ingress myapp-v2-ingress
    Name:             myapp-v2-ingress
    Labels:           <none>
    Namespace:        default
    Address:
    Ingress Class:    nginx
    Default backend:  <default>
    Rules:
      Host              Path  Backends
      ----              ----  --------
      myapp-tls.zx.org
                        /   myapp-v2:80 (10.244.2.32:80)
    Annotations:        nginx.ingress.kubernetes.io/canary: true
                        nginx.ingress.kubernetes.io/canary-by-header: version
                        nginx.ingress.kubernetes.io/canary-by-header-value: 2
    Events:
      Type    Reason  Age   From                      Message
      ----    ------  ----  ----                      -------
      Normal  Sync    21s   nginx-ingress-controller  Scheduled for sync

    # test:
    [root@reg ~]# curl myapp-tls.zx.org
    Hello MyApp | Version: v1 | Pod Name
    [root@reg ~]# curl -H "version: 2" myapp-tls.zx.org
    Hello MyApp | Version: v2 | Pod Name

(2) Weight-based canary release

  • Implemented through annotations

  • Create the canary ingress, configuring the canary weight and the total weight

  • Once the canary traffic has been validated, switch the main ingress to the new version

    # weight-based canary release
    [root@k8s-master app]# vim ingress8.yml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        nginx.ingress.kubernetes.io/canary: "true"
        nginx.ingress.kubernetes.io/canary-weight: "10"          # change the weight value here
        nginx.ingress.kubernetes.io/canary-weight-total: "100"
      name: myapp-v2-ingress
    spec:
      ingressClassName: nginx
      rules:
      - host: myapp-tls.zx.org
        http:
          paths:
          - backend:
              service:
                name: myapp-v2
                port:
                  number: 80
            path: /
            pathType: Prefix

    [root@k8s-master app]# kubectl apply -f ingress8.yml
    ingress.networking.k8s.io/myapp-v2-ingress created

    # test:
    [root@reg ~]# vim check_ingress.sh
    #!/bin/bash
    v1=0
    v2=0

    for (( i=0; i<100; i++ ))
    do
        response=`curl -s myapp-tls.zx.org | grep -c v1`   # 1 if this request hit v1, else 0
        v1=`expr $v1 + $response`
        v2=`expr $v2 + 1 - $response`
    done
    echo "v1:$v1, v2:$v2"

    [root@reg ~]# sh check_ingress.sh
    v1:90, v2:10

    # after changing the weight, rerun the test to watch the split change
