K8s Microservices, ingress-nginx, and Canary Releases

Table of Contents

[1 What is a microservice](#1-what-is-a-microservice)

[2 Service types](#2-service-types)

[3 IPVS mode](#3-ipvs-mode)

[3.1 Configuring IPVS mode](#31-configuring-ipvs-mode)

[4 Service types in detail](#4-service-types-in-detail)

[4.1 ClusterIP](#41-clusterip)

[4.2 Headless: a special ClusterIP mode](#42-headless-a-special-clusterip-mode)

[4.3 NodePort](#43-nodeport)

[4.4 LoadBalancer](#44-loadbalancer)

[4.5 MetalLB](#45-metallb)

[4.6 ExternalName](#46-externalname)

[5 Ingress-nginx](#5-ingress-nginx)

[5.1 ingress-nginx features](#51-ingress-nginx-features)

[5.2 Deploying ingress-nginx](#52-deploying-ingress-nginx)

[5.2.1 Download the deployment manifest](#521-download-the-deployment-manifest)

[5.2.2 Install ingress-nginx](#522-install-ingress-nginx)

[5.2.3 Test the ingress](#523-test-the-ingress)

[5.3 Advanced ingress usage](#53-advanced-ingress-usage)

[5.3.1 Path-based routing](#531-path-based-routing)

[5.3.2 Host-based routing](#532-host-based-routing)

[5.3.3 TLS encryption](#533-tls-encryption)

[5.3.4 Basic auth](#534-basic-auth)

[5.3.5 Rewrite and redirect](#535-rewrite-and-redirect)

[6 Canary releases](#6-canary-releases)

[6.1 What is a canary release](#61-what-is-a-canary-release)

[6.2 Canary strategies](#62-canary-strategies)

[6.2.1 Header-based canary](#621-header-based-canary)

[6.2.2 Weight-based canary](#622-weight-based-canary)


1 What is a microservice

Controllers run the cluster's workloads, but how does an application get exposed? It must be exposed through a Service before it can be accessed.

  • A Service is the externally visible access point for a group of Pods that provide the same service.

  • With a Service, applications get service discovery and load balancing.

  • A Service natively provides only layer-4 load balancing, with no layer-7 features (those can be added with Ingress).

2 Service types

Service type   Description
ClusterIP      The default. Kubernetes assigns the Service a virtual IP that is reachable only from inside the cluster.
NodePort       Exposes the Service on a port of every node; a request to any NodeIP:nodePort is routed to the ClusterIP.
LoadBalancer   Builds on NodePort: a cloud provider provisions an external load balancer that forwards to NodeIP:NodePort. Only usable on cloud platforms.
ExternalName   Maps the Service to an external domain via a DNS CNAME record (set with spec.externalName).

Example:

# Generate the Deployment manifest and create the controller
[root@k8s-master ~]# kubectl create deployment howe --image myapp:v1 --replicas 2 --dry-run=client -o yaml > howe.yml

[root@k8s-master ~]# kubectl apply -f howe.yml 
deployment.apps/howe created

# Generate the Service YAML and append it to the existing manifest
[root@k8s-master ~]# kubectl expose deployment howe --port 80 --target-port 80 --dry-run=client -o yaml >> howe.yml 

[root@k8s-master ~]# vim howe.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: howe
  name: howe
spec:
  replicas: 2
  selector:
    matchLabels:
      app: howe
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: howe
    spec:
      containers:
      - image: myapp:v1
        name: myapp
---					# separate different resources with ---

apiVersion: v1
kind: Service
metadata:
  labels:
    app: howe
  name: howe
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: howe
    
[root@k8s-master ~]# kubectl apply -f howe.yml 
deployment.apps/howe created
service/howe created


[root@k8s-master ~]# kubectl get services 
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
howe         ClusterIP   10.99.113.82   <none>        80/TCP    32s
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   2d22h

Services are load-balanced with iptables by default.

[root@k8s-master ~]# kubectl get services  -o wide
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE     SELECTOR
howe         ClusterIP   10.98.224.15   <none>        80/TCP    21m     app=howe
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   2d23h   <none>

# The corresponding rules can be seen in the firewall
[root@k8s-master ~]# iptables -t nat -nL
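
If the full NAT table is too noisy, one way to narrow it down is to filter the KUBE-SERVICES chain that kube-proxy maintains in iptables mode (a sketch; the Service IP 10.98.224.15 comes from the output above):

[root@k8s-master ~]# iptables -t nat -nL KUBE-SERVICES | grep 10.98.224.15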

3 IPVS mode

  • A Service is implemented by the kube-proxy component working together with iptables.

  • When kube-proxy handles Services via iptables, it must maintain a very large number of iptables rules on the host; with many Pods, the constant refreshing of these rules consumes a lot of CPU.

  • IPVS-mode Services allow a Kubernetes cluster to support a much larger number of Pods.
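
To confirm which mode kube-proxy is actually running in, you can query its metrics endpoint on a node (a quick check, assuming the default metrics address 127.0.0.1:10249):

[root@k8s-master ~]# curl 127.0.0.1:10249/proxyMode		# prints "iptables" before the switch, "ipvs" after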

3.1 Configuring IPVS mode

1. Install ipvsadm on all nodes

[root@all-nodes ~]# yum install ipvsadm -y		# run on every node

2. Edit the proxy configuration on the master node

[root@k8s-master ~]# kubectl -n kube-system edit cm kube-proxy 
 58     metricsBindAddress: ""
 59     mode: "ipvs"		# change to ipvs
 60     nftables:

3. Restart the kube-proxy Pods. A Pod reads its configuration only at startup, so already-running Pods do not pick up ConfigMap changes and must be restarted:

[root@k8s-master ~]# kubectl -n kube-system get pods | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'
pod "kube-proxy-2484q" deleted
pod "kube-proxy-522xr" deleted
pod "kube-proxy-9gntr" deleted


[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 172.25.250.100:6443          Masq    1      0          0         
TCP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0         
  -> 10.244.0.3:53                Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.0.2:9153              Masq    1      0          0         
  -> 10.244.0.3:9153              Masq    1      0          0         
TCP  10.98.224.15:80 rr
  -> 10.244.1.8:80                Masq    1      0          0         
  -> 10.244.2.11:80               Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0         
  -> 10.244.0.3:53                Masq    1      0          0
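
Deleting the Pods one by one works; an equivalent and arguably cleaner restart, assuming kube-proxy is deployed as the standard DaemonSet, is:

[root@k8s-master ~]# kubectl -n kube-system rollout restart daemonset kube-proxy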

Note: after switching to IPVS mode, kube-proxy adds a virtual interface, kube-ipvs0, on each host and assigns all Service IPs to it:

[root@k8s-master ~]# ip a | tail
    inet6 fe80::ec14:d7ff:fec9:51d0/64 scope link 
       valid_lft forever preferred_lft forever
8: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether f6:61:15:99:d6:74 brd ff:ff:ff:ff:ff:ff
    inet 10.98.224.15/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever

4 Service types in detail

4.1 ClusterIP

Characteristics:

A ClusterIP Service is reachable only from inside the cluster; it provides health checking and automatic discovery for the Pods behind it.
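
The health checking shows up in the Endpoints object: only Pods that pass their readiness checks are listed as backends of the Service. A quick way to observe this (names follow the example below):

[root@k8s-master ~]# kubectl get endpoints clusterip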

Example:

[root@k8s-master ~]# vim howe.yml		# append the following Service to the manifest

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: howe
  name: clusterip
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: howe
  type: ClusterIP
  
[root@k8s-master ~]# kubectl apply -f howe.yml 
deployment.apps/howe created
service/clusterip created

[root@k8s-master ~]# kubectl -n kube-system get svc
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   3d2h

  
# Resolve the Service name through the cluster DNS
[root@k8s-master ~]# dig howe.default.svc.cluster.local. @10.96.0.10

; <<>> DiG 9.16.23-RH <<>> howe.default.svc.cluster.local. @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58560
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: a103f7adf3299930 (echoed)
;; QUESTION SECTION:
;howe.default.svc.cluster.local.	IN	A

;; ANSWER SECTION:
howe.default.svc.cluster.local.	30 IN	A	10.99.13.95

;; Query time: 3 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Fri Sep 06 14:24:57 CST 2024
;; MSG SIZE  rcvd: 117

4.2 Headless: a special ClusterIP mode

Headless Service

A headless Service is assigned no ClusterIP, kube-proxy does not handle it, and the platform does no load balancing or routing for it. Access goes through DNS, which resolves the Service name directly to the backend Pod IPs, so all "scheduling" is done by DNS alone.

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: howe
  name: superhowe
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: howe
  type: ClusterIP
  clusterIP: None
  
[root@k8s-master ~]# kubectl apply -f howe.yml 
deployment.apps/howe unchanged
service/superhowe created


[root@k8s-master ~]# kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS       AGE    IP               NODE                  NOMINATED NODE   READINESS GATES
howe-7b74f758bd-6k5xr       1/1     Running   0              7m6s   10.244.57.221    k8s-node1.exam.com    <none>           <none>
howe-7b74f758bd-wb97k       1/1     Running   0              7m6s   10.244.57.222    k8s-node1.exam.com    <none>           <none>


[root@k8s-master ~]# kubectl get services superhowe 
NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
superhowe   ClusterIP   None         <none>        80/TCP    36s

[root@k8s-master ~]# dig superhowe.default.svc.cluster.local. @10.96.0.10

; <<>> DiG 9.16.23-RH <<>> superhowe.default.svc.cluster.local. @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43354
;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 4bb1245f5b902f9b (echoed)
;; QUESTION SECTION:
;superhowe.default.svc.cluster.local. IN	A

;; ANSWER SECTION:
superhowe.default.svc.cluster.local. 30	IN A	10.244.57.222	# resolves directly to the Pods
superhowe.default.svc.cluster.local. 30	IN A	10.244.57.221

;; Query time: 30 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Tue Sep 10 12:32:26 CST 2024
;; MSG SIZE  rcvd: 178


# Start a busyboxplus Pod to test
[root@k8s-master ~]# kubectl run test --image busyboxplus -it
If you don't see a command prompt, try pressing enter.
/ # nslookup superhowe
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      superhowe
Address 1: 10.244.57.227 10-244-57-227.superhowe.default.svc.cluster.local
Address 2: 10.244.57.226 10-244-57-226.superhowe.default.svc.cluster.local
/ # 
/ # curl superhowe
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
/ # curl superhowe/hostname.html
howe-7b74f758bd-pnmpg
/ # 

4.3 NodePort

NodePort exposes a port on every cluster node, so external hosts can reach the Pod workload through any NodeIP:<port>.

The access path is: client -> NodeIP:nodePort -> Service (ClusterIP) -> Pod.

Example:

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: howe-service
  name: howe-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: howe
  type: NodePort
  

[root@k8s-master ~]# kubectl apply -f howe.yml 
deployment.apps/howe created
service/howe-service created

[root@k8s-master ~]# kubectl get services howe-service 
NAME           TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
howe-service   NodePort   10.96.182.56   <none>        80:31502/TCP   26s


# NodePort binds the port on every cluster node; one port corresponds to one Service
[root@k8s-master ~]# for i in {1..5}
> do
> curl 172.25.250.100:31502/hostname.html
> done
howe-service-c56f584cf-fjxdk
howe-service-c56f584cf-5m2z5
howe-service-c56f584cf-z2w4d
howe-service-c56f584cf-tt5g6
howe-service-c56f584cf-fjxdk

Note:

Default NodePort range

The default NodePort range is 30000-32767; a port outside it is rejected:

[root@k8s-master ~]# vim timinglee.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee-service
  name: timinglee-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 33333
  selector:
    app: timinglee
  type: NodePort

[root@k8s-master ~]# kubectl apply -f timinglee.yaml
deployment.apps/timinglee created
The Service "timinglee-service" is invalid: spec.ports[0].nodePort: Invalid value: 33333: provided port is not in the valid range. The range of valid ports is 30000-32767

To use a port outside this range, extra configuration is required:

[root@k8s-master ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml

- --service-node-port-range=30000-40000

NOTE:

Add the "--service-node-port-range=" flag to customize the port range.

After editing the manifest, the kube-apiserver restarts automatically; wait until it is back up before operating on the cluster. The restart completes on its own and needs no manual intervention.
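
Once the apiserver is back, the manifest that failed above should apply cleanly; a quick verification sketch (reusing timinglee.yaml from above):

[root@k8s-master ~]# kubectl apply -f timinglee.yaml
[root@k8s-master ~]# kubectl get svc timinglee-service		# PORT(S) should now show 80:33333/TCP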

4.4 LoadBalancer

A cloud platform allocates the VIP for us and handles access; on bare-metal hosts, MetalLB is needed to allocate the IPs.

LoadBalancer mode is intended for cloud platforms; bare-metal environments need MetalLB installed for support.

[root@k8s-master ~]# vim loadbalancer.yaml

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: howe-service
  name: loadbalancer
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: howe
  type: LoadBalancer
  

[root@k8s-master ~]# kubectl apply -f loadbalancer.yaml
[root@k8s-master ~]# kubectl get services 
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
loadbalancer   LoadBalancer   10.99.217.26     172.25.250.50   80:30908/TCP   12s

4.5 MetalLB

Official docs: Installation :: MetalLB, bare metal load-balancer for Kubernetes

What MetalLB does:

Allocates VIPs for LoadBalancer Services.

# 1. Set IPVS mode and enable strict ARP
[root@k8s-master ~]# kubectl edit cm -n kube-system kube-proxy
 44       strictARP: true
 59       mode: "ipvs"

# 2. Download the deployment manifest
wget https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml

[root@k8s-master metallb]# ls
configmap.yml  metallb-native.yaml  metalLB.tag.gz
[root@k8s-master metallb]# docker load -i metalLB.tag.gz 

# 3. Change the image paths in the manifest to match the harbor registry
[root@k8s-master ~]# vim metallb-native.yaml
...
image: metallb/controller:v0.14.8
image: metallb/speaker:v0.14.8

# 4. Push the images to harbor
[root@k8s-master ~]# docker tag quay.io/metallb/speaker:v0.14.8 reg.exam.com/metallb/speaker:v0.14.8

[root@k8s-master ~]# docker tag quay.io/metallb/controller:v0.14.8 reg.exam.com/metallb/controller:v0.14.8

[root@k8s-master ~]# docker push reg.exam.com/metallb/speaker:v0.14.8 
[root@k8s-master ~]# docker push reg.exam.com/metallb/controller:v0.14.8 

# 5. Deploy the service
[root@k8s-master ~]# kubectl apply -f metallb-native.yaml
[root@k8s-master metalLB]# kubectl -n metallb-system get pods 
NAME                          READY   STATUS    RESTARTS   AGE
controller-65957f77c8-spdkq   1/1     Running   0          28s
speaker-8wgsh                 1/1     Running   0          28s
speaker-ct8ld                 1/1     Running   0          28s
speaker-w7699                 1/1     Running   0          28s

# 6. Configure the address pool to allocate from
[root@k8s-master ~]# vim configmap.yml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.25.250.50-172.25.250.99			# change to your local address range

---									  # different kinds must be separated by ---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool							# use the address pool
  

[root@k8s-master ~]# kubectl apply -f configmap.yml 
ipaddresspool.metallb.io/first-pool created
l2advertisement.metallb.io/example created

[root@k8s-master ~]# kubectl get services
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
clusterip      ClusterIP      10.100.145.164   <none>          80/TCP         43h
howe           ClusterIP      10.99.13.95      <none>          80/TCP         43h
howe-service   NodePort       10.96.182.56     <none>          80:31502/TCP   42h
kubernetes     ClusterIP      10.96.0.1        <none>          443/TCP        4d21h
loadbalancer   LoadBalancer   10.99.217.26     172.25.250.50   80:30908/TCP   94s


# Access the service from outside the cluster through the allocated address
[root@k8s-master ~]# curl 172.25.250.50
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

4.6 ExternalName

  • The Service is assigned no IP; instead, DNS resolves a CNAME to a fixed domain name, which sidesteps the problem of changing IPs.

  • Typically used when external services need to communicate with Pods, or when an external service is being migrated into the cluster.

  • While an application is being migrated into the cluster, ExternalName is useful during the transition phase.

  • When resources outside the cluster are moved in, their IPs may change during the migration, but a domain name plus DNS resolution handles this cleanly.

Example:

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: howe-service
  name: howe-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: howe
  type: ExternalName
  externalName: www.baidu.com
  
[root@k8s-master ~]# kubectl get services howe-service 
NAME           TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)   AGE
howe-service   ExternalName   <none>       www.baidu.com   80/TCP    14s
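
You can verify the CNAME through the cluster DNS (a sketch using the names above; the ANSWER section should contain a CNAME pointing at www.baidu.com):

[root@k8s-master ~]# dig howe-service.default.svc.cluster.local. @10.96.0.10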

5 Ingress-nginx

Official docs:

Installation Guide - Ingress-Nginx Controller

5.1 ingress-nginx features

  • A cluster-wide load-balancing service set up to proxy to different backend Services, with layer-7 support.

  • Ingress has two parts: the Ingress controller and the Ingress resources.

  • The Ingress controller provides the actual proxying according to the Ingress objects you define.

  • The common reverse-proxy projects, such as Nginx, HAProxy, Envoy and Traefik, all maintain a dedicated Ingress controller for Kubernetes.

5.2 Deploying ingress-nginx

5.2.1 Download the deployment manifest

[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/baremetal/deploy.yaml

Push the images ingress-nginx needs to harbor:

[root@k8s-master ~]# docker tag reg.harbor.org/ingress-nginx/controller:v1.11.2 reg.exam.com/ingress-nginx/controller:v1.11.2
[root@k8s-master ~]# docker push reg.exam.com/ingress-nginx/controller:v1.11.2

[root@k8s-master ~]# docker tag reg.harbor.org/ingress-nginx/kube-webhook-certgen:v1.4.3 reg.exam.com/ingress-nginx/kube-webhook-certgen:v1.4.3
[root@k8s-master ~]# docker push reg.exam.com/ingress-nginx/kube-webhook-certgen:v1.4.3 

5.2.2 Install ingress-nginx

[root@k8s-master ~]# vim deploy.yaml
445         image: ingress-nginx/controller:v1.11.2
546         image: ingress-nginx/kube-webhook-certgen:v1.4.3
599         image: ingress-nginx/kube-webhook-certgen:v1.4.3

[root@k8s-master ~]# kubectl apply -f deploy.yaml 
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created


[root@k8s-master ~]# kubectl -n ingress-nginx get pods 
NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-n2txq       0/1     Completed   0          29s
ingress-nginx-admission-patch-r8cpf        0/1     Completed   1          29s
ingress-nginx-controller-bb7d8f97c-56frl   1/1     Running     0          29s


[root@k8s-master ~]# kubectl -n ingress-nginx get svc
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.101.239.212   <none>        80:30311/TCP,443:31161/TCP   62s
ingress-nginx-controller-admission   ClusterIP   10.109.186.61    <none>        443/TCP                      62s


# Change the Service type to LoadBalancer
[root@k8s-master ~]# kubectl -n ingress-nginx edit svc ingress-nginx-controller
49   type: LoadBalancer


[root@k8s-master ~]# kubectl -n ingress-nginx get services 
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.101.36.67     172.25.250.50   80:31025/TCP,443:30477/TCP   2m5s
ingress-nginx-controller-admission   ClusterIP      10.111.255.211   <none>          443/TCP                      2m5s

Note: the EXTERNAL-IP shown for ingress-nginx-controller is the address the ingress ultimately exposes to the outside.

5.2.3 Test the ingress

# Generate the YAML file
[root@k8s-master ~]# kubectl create ingress webcluster --rule '*/=howe-svc:80' --dry-run=client -o yaml > howe-ingress.yml

[root@k8s-master ~]# vim howe-ingress.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - backend:
          service:
            name: howe-svc
            port:
              number: 80
        path: /
        pathType: Prefix	

# pathType options: Exact (exact match), Prefix (prefix match), ImplementationSpecific (controller-defined; in ingress-nginx, regex matching goes through ImplementationSpecific plus the use-regex annotation)
      
# Create the Ingress
[root@k8s-master ~]# kubectl apply -f howe-ingress.yml 
ingress.networking.k8s.io/test-ingress created

[root@k8s-master ~]# kubectl get ingress
NAME           CLASS   HOSTS   ADDRESS         PORTS   AGE
test-ingress   nginx   *       172.25.250.20   80      81s

[root@k8s-master ~]# for i in {1..5};
> do
> curl 172.25.250.50/hostname.html;
> done
howe-7b74f758bd-4vw2v
howe-7b74f758bd-jvb7q
howe-7b74f758bd-4vw2v
howe-7b74f758bd-jvb7q
howe-7b74f758bd-4vw2v

Note: an Ingress must be in the same namespace as the Service it routes to.

5.3 Advanced ingress usage

5.3.1 Path-based routing

1. Create the myapp test deployments

[root@k8s-master ~]# kubectl create deployment myapp-v1 --image myapp:v1 --dry-run=client -o yaml > myapp-v1.yaml

[root@k8s-master ~]# kubectl create deployment myapp-v2 --image myapp:v2 --dry-run=client -o yaml > myapp-v2.yaml


[root@k8s-master ~]# vim myapp-v1.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp-v1
  name: myapp-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-v1
  strategy: {}
  template:
    metadata:
      labels:
        app: myapp-v1
    spec:
      containers:
      - image: myapp:v1
        name: myapp

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp-v1
  name: myapp-v1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp-v1


[root@k8s-master ~]# vim myapp-v2.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp-v2
  name: myapp-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-v2
  template:
    metadata:
      labels:
        app: myapp-v2
    spec:
      containers:
      - image: myapp:v2
        name: myapp
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp-v2
  name: myapp-v2
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp-v2


[root@k8s-master ~]# kubectl apply -f myapp-v1.yaml 
deployment.apps/myapp-v1 created
service/myapp-v1 created
[root@k8s-master ~]# kubectl apply -f myapp-v2.yaml 
deployment.apps/myapp-v2 created
service/myapp-v2 created

# Expose the ports (these commands generated the Service sections appended to the manifests above)
[root@k8s-master ~]# kubectl expose deployment myapp-v1 --port 80 --target-port 80 --dry-run=client -o yaml >> myapp-v1.yaml

[root@k8s-master ~]# kubectl expose deployment myapp-v2 --port 80 --target-port 80 --dry-run=client -o yaml >> myapp-v2.yaml

[root@k8s-master ~]# kubectl get services 
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   5d2h
myapp-v1     ClusterIP   10.99.69.251   <none>        80/TCP    6m10s
myapp-v2     ClusterIP   10.98.76.17    <none>        80/TCP    8s

[root@k8s-master ~]# curl 10.99.69.251
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

[root@k8s-master ~]# curl 10.98.76.17
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

2. Create the Ingress YAML

[root@k8s-master ~]# vim ingress1.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /		# anything after the matched path is rewritten to /
  name: ingress1
spec:
  ingressClassName: nginx
  rules:
  - host: www.exam.com
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /v1
        pathType: Prefix

      - backend:
          service:
            name: myapp-v2
            port:
              number: 80
        path: /v2
        pathType: Prefix

[root@k8s-master ~]# vim /etc/hosts
[root@k8s-master ~]# kubectl apply -f ingress1.yml 
ingress.networking.k8s.io/ingress1 created

[root@k8s-master ~]# kubectl describe ingress ingress1 
Name:             ingress1
Labels:           <none>
Namespace:        default
Address:          172.25.250.20		# IP has been assigned
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host          Path  Backends
  ----          ----  --------
  www.exam.com  
                /v1   myapp-v1:80 (10.244.1.10:80)
                /v2   myapp-v2:80 (10.244.2.11:80)
Annotations:    nginx.ingress.kubernetes.io/rewrite-target: /
Events:        
  Type    Reason  Age                    From                      Message
  ----    ------  ----                   ----                      -------
  Normal  Sync    8m35s (x2 over 8m52s)  nginx-ingress-controller  Scheduled for sync


# Test:
[root@k8s-node1 ~]# echo 172.25.250.50 www.exam.com >> /etc/hosts

[root@k8s-node1 ~]# curl www.exam.com/v1
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-node1 ~]# curl www.exam.com/v2
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

# what nginx.ingress.kubernetes.io/rewrite-target: / does
[root@k8s-node1 ~]# curl www.exam.com/v1/aaaa
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

5.3.2 Host-based routing

# Configure name resolution on the test host
[root@k8s-node1 ~]# vim /etc/hosts
172.25.250.250  reg.exam.com
172.25.250.50   www.exam.com myappv1.exam.com myappv2.exam.com


# Create the host-based YAML file
[root@k8s-master ~]# vim ingress2.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress2
spec:
  ingressClassName: nginx
  rules:
  - host: myappv1.exam.com
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix
  - host: myappv2.exam.com
    http:
      paths:
      - backend:
          service:
            name: myapp-v2
            port:
              number: 80
        path: /
        pathType: Prefix


# Create the ingress from the file
[root@k8s-master ~]# kubectl apply -f ingress2.yml 
ingress.networking.k8s.io/ingress2 created

[root@k8s-master ~]# kubectl describe ingress ingress2 
Name:             ingress2
Labels:           <none>
Namespace:        default
Address:          
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host              Path  Backends
  ----              ----  --------
  myappv1.exam.com  
                    /   myapp-v1:80 (10.244.1.12:80)
  myappv2.exam.com  
                    /   myapp-v2:80 (10.244.2.14:80)
Annotations:        nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    17s   nginx-ingress-controller  Scheduled for sync


# Test from the test host
[root@k8s-node1 ~]# curl myappv1.exam.com
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-node1 ~]# curl myappv2.exam.com
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

5.3.3 TLS encryption

# Create a certificate
[root@k8s-master ~]# openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -subj "/CN=nginxsvc/O=nginxsvc" -out tls.crt

# Create a secret of type tls to store it
[root@k8s-master ~]# kubectl create secret tls  web-tls-secret --key tls.key --cert tls.crt
secret/web-tls-secret created

[root@k8s-master ~]# kubectl get secrets 
NAME             TYPE                DATA   AGE
web-tls-secret   kubernetes.io/tls   2      3m41s

Note: Secrets are how Kubernetes typically stores sensitive data; a Secret is not itself a form of encryption.
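
You can see this for yourself: the data in a Secret is only base64-encoded, not encrypted (a sketch decoding the certificate stored above):

[root@k8s-master ~]# kubectl get secret web-tls-secret -o jsonpath='{.data.tls\.crt}' | base64 -d | head -2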

# Create ingress3, the TLS-enabled YAML file
[root@k8s-master ~]# vim ingress3.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress3
spec:
  tls:
  - hosts:
    - www.exam.com
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: www.exam.com
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix

[root@k8s-master ~]# kubectl apply -f ingress3.yml 
ingress.networking.k8s.io/ingress3 created

# Test
[root@k8s-node1 ~]# curl -k https://www.exam.com
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

5.3.4 Basic auth

# Create the auth file
[root@k8s-master ~]# dnf install httpd-tools -y
[root@k8s-master ~]# htpasswd -cm auth howe
New password: 
Re-type new password: 
Adding password for user howe
[root@k8s-master ~]# cat auth 
howe:$apr1$1F6Ny7Nx$/u.EcLHUia5jTPqT4X3zL1

# Create a secret from the auth file
[root@k8s-master ~]# kubectl create secret generic auth-web --from-file auth 
secret/auth-web created

[root@k8s-master ~]# kubectl describe secrets auth-web 
Name:         auth-web
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
auth:  43 bytes

# Create ingress4, the YAML file with basic auth enabled
[root@k8s-master ~]# vim ingress4.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
  name: ingress4
spec:
  tls:
  - hosts:
    - www.exam.com
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: www.exam.com
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix

[root@k8s-master ~]# kubectl apply -f ingress4.yml 
ingress.networking.k8s.io/ingress4 created

[root@k8s-master ~]# kubectl describe ingress ingress4 
Name:             ingress4
Labels:           <none>
Namespace:        default
Address:          172.25.250.20
Ingress Class:    nginx
Default backend:  <default>
TLS:
  web-tls-secret terminates www.exam.com
Rules:
  Host          Path  Backends
  ----          ----  --------
  www.exam.com  
                /   myapp-v1:80 (10.244.1.12:80)
Annotations:    nginx.ingress.kubernetes.io/auth-realm: Please input username and password
                nginx.ingress.kubernetes.io/auth-secret: auth-web
                nginx.ingress.kubernetes.io/auth-type: basic
Events:
  Type    Reason  Age               From                      Message
  ----    ------  ----              ----                      -------
  Normal  Sync    2s (x2 over 18s)  nginx-ingress-controller  Scheduled for sync


# Test:
[root@k8s-node1 ~]# curl -k https://www.exam.com -uhowe:redhat
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
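
Without credentials the request should be rejected; a quick sanity check (expect nginx's 401 page):

[root@k8s-node1 ~]# curl -k https://www.exam.com		# returns 401 Authorization Required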

5.3.5 Rewrite and redirect

# Point the default page at /hostname.html
[root@k8s-master ~]# vim ingress5.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/app-root: /hostname.html
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
  name: ingress5
spec:
  tls:
  - hosts:
    - www.exam.com
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: www.exam.com
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix

[root@k8s-master ~]# kubectl apply -f ingress5.yml 
ingress.networking.k8s.io/ingress5 created

[root@k8s-master ~]# kubectl describe ingress ingress5 
Name:             ingress5
Labels:           <none>
Namespace:        default
Address:          
Ingress Class:    nginx
Default backend:  <default>
TLS:
  web-tls-secret terminates www.exam.com
Rules:
  Host          Path  Backends
  ----          ----  --------
  www.exam.com  
                /   myapp-v1:80 (10.244.1.12:80)
Annotations:    nginx.ingress.kubernetes.io/app-root: /hostname.html
                nginx.ingress.kubernetes.io/auth-realm: Please input username and password
                nginx.ingress.kubernetes.io/auth-secret: auth-web
                nginx.ingress.kubernetes.io/auth-type: basic
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    21s   nginx-ingress-controller  Scheduled for sync

Test:
[root@k8s-node1 ~]# curl -Lk https://www.exam.com -uhowe:redhat
myapp-v1-7479d6c54d-dlz6f

[root@k8s-node1 ~]# curl -Lk https://www.exam.com/app/hostname.html -uhowe:redhat
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.12.2</center>
</body>
</html>

# Fixing the rewrite path problem with a regex
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
  name: ingress
spec:
  tls:
  - hosts:
    - myappv1.exam.com
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myappv1.exam.com
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /app(/|$)(.*)	 		# the regex matches /app, /app/, /app/abc, ...
        pathType: ImplementationSpecific
    
    
Test:
[root@k8s-node1 ~]# curl -Lk https://myappv1.exam.com/app/hostname.html -uhowe:redhat
myapp-v1-7479d6c54d-dlz6f

6 Canary releases

6.1 What is a canary release

A canary release (also called a gray release) is a software release strategy.

Its main purpose is to test and validate a new version on a small subset of users or servers before rolling it out to the entire production environment, limiting the impact if the new version introduces a serious problem.

As a Pod release method, a canary rollout adds new Pods before removing old ones, so the total Pod count never drops below the desired value; after part of the Pods are updated, the rollout pauses, and only when the new version is confirmed healthy are the remaining Pods updated.
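
In Deployment terms, this add-first, remove-later behaviour is controlled by the rolling-update strategy fields (a sketch; the values shown are the defaults and purely illustrative):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%         # extra Pods allowed above the desired count
      maxUnavailable: 25%   # Pods allowed to be unavailable during the update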

6.2 Canary strategies

Header-based and weight-based canaries are the most common.

6.2.1 Header-based (HTTP header) canary

  • Implemented through annotations.

  • Create a canary ingress and configure the canary header key and value.

  • Once the canary traffic is validated, switch the main ingress over to the new version.

  • Previously we upgraded with a controller rolling update (25% of Pods at a time by default). A header-based canary makes the upgrade smoother: a specific header key/value lets you test whether the new version has problems before any general traffic is shifted, as the example below shows.

Example:

# Create the ingress for version 1
[root@k8s-master ~]# vim ingress7.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-v1-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: www.exam.com
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix
        
[root@k8s-master ~]# kubectl apply -f ingress7.yml 
ingress.networking.k8s.io/myapp-v1-ingress created

[root@k8s-master ~]# kubectl describe ingress myapp-v1-ingress 
Name:             myapp-v1-ingress
Labels:           <none>
Namespace:        default
Address:          172.25.250.20
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host              Path  Backends
  ----              ----  --------
  www.exam.com  
                    /   myapp-v1:80 (10.244.1.12:80)
Annotations:        <none>
Events:
  Type    Reason  Age                 From                      Message
  ----    ------  ----                ----                      -------
  Normal  Sync    78s (x2 over 116s)  nginx-ingress-controller  Scheduled for sync

# Create the header-based canary ingress
[root@k8s-master ~]# vim ingress8.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "version"
    nginx.ingress.kubernetes.io/canary-by-header-value: "2" 
  name: myapp-v2-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: www.exam.com
    http:
      paths:
      - backend:
          service:
            name: myapp-v2
            port:
              number: 80
        path: /
        pathType: Prefix
        
[root@k8s-master ~]# kubectl apply -f ingress8.yml 
ingress.networking.k8s.io/myapp-v2-ingress created

[root@k8s-master ~]# kubectl describe ingress myapp-v2-ingress 
Name:             myapp-v2-ingress
Labels:           <none>
Namespace:        default
Address:          172.25.250.20
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host              Path  Backends
  ----              ----  --------
  www.exam.com  
                    /   myapp-v2:80 (10.244.2.14:80)
Annotations:        nginx.ingress.kubernetes.io/canary: true
                    nginx.ingress.kubernetes.io/canary-by-header: version
                    nginx.ingress.kubernetes.io/canary-by-header-value: 2
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    50s (x2 over 87s)  nginx-ingress-controller  Scheduled for sync

# Test:
[root@k8s-node1 ~]# curl www.exam.com
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

[root@k8s-node1 ~]# curl -H "version: 2" www.exam.com
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

6.2.2 Weight-based canary

  • Implemented through annotations.

  • Create a canary ingress and configure the canary weight and the total weight.

  • Once the canary traffic is validated, switch the main ingress over to the new version.

Example:

[root@k8s-master ~]# vim ingress8.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"		# adjust the weight value here
    nginx.ingress.kubernetes.io/canary-weight-total: "100"
  name: myapp-v2-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: www.exam.com
    http:
      paths:
      - backend:
          service:
            name: myapp-v2
            port:
              number: 80
        path: /
        pathType: Prefix
        
# Test from the client node with a small script:
[root@k8s-node1 ~]# vim check_ingress.sh
#!/bin/bash
v1=0
v2=0

for (( i=0; i<100; i++))
do
    response=`curl -s www.exam.com |grep -c v1`

    v1=`expr $v1 + $response`
    v2=`expr $v2 + 1 - $response`

done
echo "v1:$v1, v2:$v2"

# Run the script; after changing the weight, rerun it to watch the split change
[root@k8s-node1 ~]# sh check_ingress.sh 
v1:100, v2:0
[root@k8s-node1 ~]# sh check_ingress.sh 
v1:80, v2:20
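
Once the new version checks out, finish the rollout. A sketch of one way to do it (the canary-weight annotation is ingress-nginx's own; the resource names follow the example above): raise the canary weight so all traffic hits v2, then point the main ingress's backend at myapp-v2 and remove the canary ingress.

[root@k8s-master ~]# kubectl annotate ingress myapp-v2-ingress \
    nginx.ingress.kubernetes.io/canary-weight="100" --overwrite
# after editing myapp-v1-ingress to use myapp-v2 as its backend:
[root@k8s-master ~]# kubectl delete ingress myapp-v2-ingress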