Microservices in Kubernetes

Table of Contents

[1 What is a microservice](#1 What is a microservice)

[2 Types of microservice](#2 Types of microservice)

[3 ipvs mode](#3 ipvs mode)

[3.1 Configuring ipvs mode](#3.1 Configuring ipvs mode)

[4 Service types in detail](#4 Service types in detail)

[4.1 clusterip](#4.1 clusterip)

[4.2 headless: a special ClusterIP mode](#4.2 headless: a special ClusterIP mode)

[4.3 nodeport](#4.3 nodeport)

[4.4 loadbalancer](#4.4 loadbalancer)

[4.5 metalLB](#4.5 metalLB)

[4.6 externalname](#4.6 externalname)

[5 Ingress-nginx](#5 Ingress-nginx)

[5.1 ingress-nginx features](#5.1 ingress-nginx features)

[5.2 Deploying ingress](#5.2 Deploying ingress)

[5.2.1 Download the deployment file (files already provided)](#5.2.1 Download the deployment file (files already provided))

[5.2.2 Installing ingress](#5.2.2 Installing ingress)

[5.2.3 Testing ingress](#5.2.3 Testing ingress)

[5.3 Advanced ingress usage](#5.3 Advanced ingress usage)

[5.3.1 Path-based routing](#5.3.1 Path-based routing)

[5.3.2 Host-based routing](#5.3.2 Host-based routing)

[5.3.3 Setting up TLS encryption](#5.3.3 Setting up TLS encryption)

[5.3.4 Setting up basic auth](#5.3.4 Setting up basic auth)

[5.3.5 rewrite redirection](#5.3.5 rewrite redirection)

[6 Canary releases](#6 Canary releases)

[6.1 What is a canary release](#6.1 What is a canary release)

[6.2 Canary release methods](#6.2 Canary release methods)

[6.2.1 Header-based canary (HTTP headers)](#6.2.1 Header-based canary (HTTP headers))

[6.2.2 Weight-based canary](#6.2.2 Weight-based canary)


1 What is a microservice

Controllers run the cluster's workloads, but how does an application get exposed? It has to be exposed through a microservice (a Service) before it can be accessed.

  • A Service is the interface a group of Pods providing the same service exposes to the outside.

  • With a Service, an application gets service discovery and load balancing (see the minimal sketch after this list).

  • By default a Service only provides layer-4 load balancing; it has no layer-7 capability (that can be added with Ingress).
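As a minimal sketch (the names here are placeholders, not taken from the examples below), a Service is just a label selector plus a port mapping; the Endpoints object it generates is what the proxy rules ultimately point at:

```yaml
# Minimal ClusterIP Service sketch; "web" and its label are placeholder names
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # Pods carrying this label become backends
  ports:
  - port: 80          # port exposed on the Service's ClusterIP
    targetPort: 80    # container port the traffic is forwarded to
```

`kubectl get endpoints web` would then list the Pod IPs this Service is currently balancing across.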

2 Types of microservice

| Service type | Description |
|--------------|-------------|
| ClusterIP | Default: a virtual IP automatically assigned to the Service by Kubernetes, reachable only inside the cluster |
| NodePort | Exposes the Service on a port of the nodes; any NodeIP:nodePort is routed to the ClusterIP |
| LoadBalancer | Builds on NodePort: a cloud provider creates an external load balancer that forwards requests to NodeIP:NodePort; only usable on cloud platforms |
| ExternalName | Forwards the Service to a specified domain name via a DNS CNAME record (set through spec.externalName) |
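All four types are selected through `spec.type` in the Service manifest; the fragment below is only illustrative (the values shown are not from the example that follows):

```yaml
# Illustrative fragment: spec.type decides how the Service is exposed
spec:
  type: NodePort                     # ClusterIP (default) / NodePort / LoadBalancer / ExternalName
  # externalName: www.example.com    # only meaningful when type is ExternalName
```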

Example:

# Generate the controller manifest and create the controller

```bash
[root@k8s-master ~]# kubectl create deployment timinglee --image reg.timinglee.org/library/myapp:v1 --replicas 2 --dry-run=client -o yaml > timinglee.yaml
[root@k8s-master ~]# kubectl apply -f timinglee.yaml
deployment.apps/timinglee created
[root@k8s-master ~]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
timinglee-56f99b7f4b-4c9kc   1/1     Running   0          6s
timinglee-56f99b7f4b-9wlxl   1/1     Running   0          6s

# generate the Service yaml and append it to the existing yaml
[root@k8s-master ~]# kubectl expose deployment timinglee --port 80 --target-port 80 --dry-run=client -o yaml >> timinglee.yaml
[root@k8s-master ~]# vim timinglee.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: timinglee
  name: timinglee
spec:
  replicas: 2
  selector:
    matchLabels:
      app: timinglee
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: timinglee
    spec:
      containers:
      - image: reg.timinglee.org/library/myapp:v1
        name: myapp
        resources: {}
status: {}
---                 # separate different resources with ---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: timinglee
  name: timinglee
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee
status:
  loadBalancer: {}

[root@k8s-master ~]# kubectl delete deployments.apps timinglee
deployment.apps "timinglee" deleted
[root@k8s-master ~]# kubectl get pods
No resources found in default namespace.
[root@k8s-master ~]# kubectl apply -f timinglee.yaml
deployment.apps/timinglee created
service/timinglee created
[root@k8s-master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   31d
timinglee    ClusterIP   10.99.121.99   <none>        80/TCP
```

Services use iptables scheduling by default.

```bash
[root@k8s-master ~]# kubectl get service -o wide
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   31d   <none>
timinglee    ClusterIP   10.99.121.99   <none>        80/TCP    89s   app=timinglee

# cluster-internal IP: 10.99.121.99
# the matching rules can be seen in the firewall
[root@k8s-master ~]# iptables -t nat -nL
Chain KUBE-SVC-I7WXYK76FWYNTTGM (1 references)
target          prot opt source           destination
KUBE-MARK-MASQ  tcp  --  !10.244.0.0/16   10.99.121.99   /* default/timinglee cluster IP */ tcp dpt:80
```

3 ipvs mode


  • A Service is implemented by the kube-proxy component together with iptables
  • When kube-proxy handles Services through iptables it has to maintain a very large number of iptables rules on the host; with many Pods, constantly refreshing those rules consumes a lot of CPU
  • Services in IPVS mode let a Kubernetes cluster support a much larger number of Pods (a quick way to check the current mode is sketched below)
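A quick way to confirm which mode the running kube-proxy is actually using, assuming kube-proxy's default metrics port 10249 on the node:

```bash
# Ask the local kube-proxy which proxy mode it is running in (default metrics port 10249)
curl -s localhost:10249/proxyMode
# Or look at the mode set in its ConfigMap
kubectl -n kube-system get cm kube-proxy -o yaml | grep -w mode
```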

3.1 Configuring ipvs mode


1. Install ipvsadm on all nodes

```bash
[root@k8s-master/node/node2 ~]# yum install ipvsadm.x86_64 -y
```

2. Edit the proxy configuration on the master node

```bash
[root@k8s-master ~]# kubectl -n kube-system edit cm kube-proxy
configmap/kube-proxy edited
    metricsBindAddress: ""
    mode: "ipvs"          # make kube-proxy use ipvs mode
    nftables:
```

3. Restart the kube-proxy Pods. A Pod reads its configuration when it starts, so changing the ConfigMap does not affect Pods that are already running; they have to be restarted.

```bash
[root@k8s-master ~]# kubectl -n kube-system get pods | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'
pod "kube-proxy-22hr6" deleted
pod "kube-proxy-r4jj7" deleted
pod "kube-proxy-vwfgr" deleted

[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.10.100:6443          Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.0.3:53                Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.2:9153              Masq    1      0          0
  -> 10.244.0.3:9153              Masq    1      0          0
TCP  10.99.121.99:80 rr
  -> 10.244.1.3:80                Masq    1      0          0
  -> 10.244.2.3:80                Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.0.3:53                Masq    1      0          0

# Note: after switching to ipvs mode, kube-proxy adds a virtual interface kube-ipvs0
# on every host and assigns all Service IPs to it
[root@k8s-master ~]# ip a | tail
    inet6 fe80::ac84:aaff:fe44:17f3/64 scope link
       valid_lft forever preferred_lft forever
8: kube-ipvs0: <...> mtu 1500 qdisc noop state DOWN group default
    link/ether 9e:10:d2:0c:25:33 brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.99.121.99/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
```
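An equivalent way to recreate the kube-proxy Pods, instead of deleting them one by one with awk as above, is to restart the DaemonSet that manages them:

```bash
# Restart all kube-proxy Pods via their DaemonSet and wait for them to come back
kubectl -n kube-system rollout restart daemonset kube-proxy
kubectl -n kube-system rollout status daemonset kube-proxy
```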

4 Service types in detail

4.1 clusterip


Characteristics:

ClusterIP mode is only reachable from inside the cluster; it provides health checking and automatic discovery for the Pods behind it.

Example:

```bash
[root@k8s-master ~]# vim myapp.yml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee
  name: timinglee
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee
  type: ClusterIP

[root@k8s-master ~]# kubectl apply -f myapp.yml
service/timinglee created
[root@k8s-master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   31d
timinglee    ClusterIP   10.110.19.199   <none>        80/TCP    16s

# once the Service is created, the cluster DNS resolves it
[root@k8s-master ~]# kubectl -n kube-system get svc
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   31d
[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   31d
timinglee    ClusterIP   10.110.19.199   <none>        80/TCP    12m
# note: the name below is mistyped ("dedault") and has no space before @,
# so dig queried the system resolver (114.114.114.114) and got NXDOMAIN
[root@k8s-master ~]# dig timinglee.dedault.svc.cluster.local@10.96.0.10

; <<>> DiG 9.16.23-RH <<>> timinglee.dedault.svc.cluster.local@10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 48678
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;timinglee.dedault.svc.cluster.local\@10.96.0.10. IN A

;; AUTHORITY SECTION:
.   3600   IN   SOA   a.root-servers.net. nstld.verisign-grs.com. 2024101500 1800 900 604800 86400

;; Query time: 1066 msec
;; SERVER: 114.114.114.114#53(114.114.114.114)
;; WHEN: Tue Oct 15 15:48:32 CST 2024
;; MSG SIZE  rcvd: 139
```

4.2 headless: a special ClusterIP mode


headless (headless Service)

A headless Service is not assigned a ClusterIP, kube-proxy does not handle it, and the platform does no load balancing or routing for it. Access inside the cluster goes through DNS, which resolves the Service name directly to the Pod IPs; all of the scheduling is done by DNS alone.

root@k8s-master \~\]# kubectl delete -f myapp.yml service "timinglee" deleted \[root@k8s-master \~\]# vim myapp.yml \[root@k8s-master \~\]# cat myapp.yml --- apiVersion: v1 kind: Service metadata: labels: app: timinglee name: timinglee spec: ports: - port: 80 protocol: TCP targetPort: 80 selector: app: timinglee type: ClusterIP clusterIP: None \[root@k8s-master \~\]# kubectl apply -f myapp.yml service/timinglee created \[root@k8s-master \~\]# kubectl get service timinglee NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE timinglee ClusterIP None \ 80/TCP 51s \[root@k8s-master \~\]# dig [email protected] ; \<\<\>\> DiG 9.16.23-RH \<\<\>\> [email protected] ;; global options: +cmd ;; Got answer: ;; -\>\>HEADER\<\<- opcode: QUERY, status: NXDOMAIN, id: 57288 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 512 ;; QUESTION SECTION: ;timinglee.dedault.svc.cluster.local\\@10.96.0.10. IN A ;; AUTHORITY SECTION: . 3233 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2024101500 1800 900 604800 86400 ;; Query time: 27 msec ;; SERVER: 114.114.114.114#53(114.114.114.114) ;; WHEN: Tue Oct 15 15:54:39 CST 2024 ;; MSG SIZE rcvd: 150 \[root@k8s-master \~\]# kubectl run test --image reg.timinglee.org/library/busyboxplus:latest -it If you don't see a command prompt, try pressing enter. / # nslookup timinglee.default.svc.cluster.local. Server: 10.96.0.10 Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local Name: timinglee.default.svc.cluster.local. Address 1: 10.96.132.41 timinglee.default.svc.cluster.local / # curl timinglee.default.svc.cluster.local. Hello MyApp \| Version: v1 \| \Pod Name\ / # curl timinglee Hello MyApp \| Version: v1 \| \Pod Name\ / # curl timinglee/hostname.html timinglee-56f99b7f4b-fnqrp

4.3 nodeport


NodePort exposes a port through ipvs so that external hosts can reach the Pod workload through a node's external ip:<port>.

The access path is: client → NodeIP:nodePort → ClusterIP → Pod.

Example:

root@k8s-master \~\]# vim timinglee.yaml apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: timinglee name: timinglee spec: replicas: 2 selector: matchLabels: app: timinglee strategy: {} template: metadata: creationTimestamp: null labels: app: timinglee spec: containers: - image: reg.timinglee.org/library/myapp:v1 name: myapp resources: {} status: {} --- apiVersion: v1 kind: Service metadata: creationTimestamp: null labels: app: timinglee name: timinglee spec: ports: - port: 80 protocol: TCP targetPort: 80 selector: app: timinglee type: NodePort status: loadBalancer: {} \[root@k8s-master \~\]# kubectl apply -f timinglee.yaml deployment.apps/timinglee created service/timinglee created \[root@k8s-master \~\]# kubectl get pod NAME READY STATUS RESTARTS AGE timinglee-56f99b7f4b-blxbj 1/1 Running 0 5s timinglee-56f99b7f4b-sbl2r 1/1 Running 0 5s \[root@k8s-master \~\]# kubectl get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 \ 443/TCP 31d timinglee NodePort 10.103.125.62 \ 80:32494/TCP 15s \[root@k8s-master \~\]# curl 192.168.10.100:32494 Hello MyApp \| Version: v1 \| \Pod Name\ \[root@k8s-master \~\]# curl 192.168.10.100:32494/hostname.html timinglee-56f99b7f4b-sbl2r \[root@k8s-master \~\]# curl 192.168.10.100:32494/hostname.html timinglee-56f99b7f4b-blxbj

Note:

NodePort default port range

The default NodePort range is 30000-32767; a value outside it is rejected with an error.

root@k8s-master \~\]# vim timinglee.yaml apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: timinglee name: timinglee spec: replicas: 2 selector: matchLabels: app: timinglee strategy: {} template: metadata: creationTimestamp: null labels: app: timinglee spec: containers: - image: reg.timinglee.org/library/myapp:v1 name: myapp --- apiVersion: v1 kind: Service metadata: creationTimestamp: null labels: app: timinglee-service name: timinglee-service spec: ports: - port: 80 protocol: TCP targetPort: 80 nodePort: 33333 selector: app: timinglee type: NodePort status: loadBalancer: {} \[root@k8s-master \~\]# kubectl apply -f timinglee.yaml deployment.apps/timinglee created The Service "timinglee-service" is invalid: spec.ports\[0\].nodePort: Invalid value: 33333: provided port is not in the valid range. The range of valid ports is 30000-32767

Using a port outside this range requires extra configuration:

```bash
[root@k8s-master ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
    - --service-node-port-range=30000-40000
```

Note:

After adding the "--service-node-port-range=" parameter, the port range can be customized.

The apiserver restarts automatically after the change; wait until it is back up before operating on the cluster again.

The restart is handled automatically: once the parameter is edited, no manual intervention is needed.
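A quick way to confirm the restarted apiserver picked the flag up (the Pod name below follows the usual kube-apiserver-<node> pattern and is an assumption for this cluster):

```bash
# Check that the static-Pod apiserver restarted with the extended range (Pod name assumed)
kubectl -n kube-system get pod kube-apiserver-k8s-master -o yaml | grep service-node-port-range
```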

4.4 loadbalancer

On a cloud platform the provider allocates a VIP for us and handles access; on bare-metal hosts MetalLB is needed to hand out the IP.

```bash
[root@k8s-master ~]# vim timinglee.yaml
......
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: timinglee-service
  name: timinglee-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee
  type: LoadBalancer
status:
  loadBalancer: {}

[root@k8s-master ~]# kubectl delete -f timinglee.yaml
deployment.apps "timinglee" deleted
service "timinglee-service" deleted
[root@k8s-master ~]# kubectl apply -f timinglee.yaml
deployment.apps/timinglee created
service/timinglee-service created
[root@k8s-master ~]# kubectl get service
NAME                TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes          ClusterIP      10.96.0.1       <none>        443/TCP        31d
timinglee-service   LoadBalancer   10.111.37.137   <pending>     80:37927/TCP   12s
```

LoadBalancer mode is meant for cloud platforms; in a bare-metal environment MetalLB has to be installed to provide support.

4.5 metalLB

Official site: Installation :: MetalLB, bare metal load-balancer for Kubernetes


MetalLB's role:

Assign VIPs to LoadBalancer Services

Deployment steps

1. Set ipvs mode (with strictARP)

root@k8s-master \~\]# kubectl edit cm -n kube-system kube-proxy configmap/kube-proxy edited apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration mode: "ipvs" ipvs: strictARP: true \[root@k8s-master \~\]# kubectl -n kube-system get pods \| awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}' pod "kube-proxy-6785p" deleted pod "kube-proxy-vmk8g" deleted pod "kube-proxy-w4qgl" deleted 2.下载部署文件(资源已发) \[root@k8s2 metallb\]# wget https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml 3.修改文件中镜像地址,与harbor仓库路径保持一致 \[root@k8s-master \~\]# vim metallb-native.yaml ... image: metallb/controller:v0.14.8 image: metallb/speaker:v0.14.8 4.上传镜像到harbor \[root@k8s-master \~\]# docker pull quay.io/metallb/controller:v0.14.8 \[root@k8s-master \~\]# docker pull quay.io/metallb/speaker:v0.14.8 \[root@k8s-master metallb\]# docker load -i metalLB.tag.gz f144bb4c7c7f: Loading layer 327.7kB/327.7kB 49626df344c9: Loading layer 40.96kB/40.96kB 945d17be9a3e: Loading layer 2.396MB/2.396MB 4d049f83d9cf: Loading layer 1.536kB/1.536kB af5aa97ebe6c: Loading layer 2.56kB/2.56kB ac805962e479: Loading layer 2.56kB/2.56kB bbb6cacb8c82: Loading layer 2.56kB/2.56kB 2a92d6ac9e4f: Loading layer 1.536kB/1.536kB 1a73b54f556b: Loading layer 10.24kB/10.24kB f4aee9e53c42: Loading layer 3.072kB/3.072kB b336e209998f: Loading layer 238.6kB/238.6kB 371134a463a4: Loading layer 61.38MB/61.38MB 6e64357636e3: Loading layer 13.31kB/13.31kB Loaded image: quay.io/metallb/controller:v0.14.8 0b8392a2e3be: Loading layer 2.137MB/2.137MB 3d5a6e3a17d1: Loading layer 65.46MB/65.46MB 8311c2bd52ed: Loading layer 49.76MB/49.76MB 4f4d43efeed6: Loading layer 3.584kB/3.584kB 881ed6f5069a: Loading layer 13.31kB/13.31kB Loaded image: quay.io/metallb/speaker:v0.14.8 \[root@k8s-master \~\]# docker tag quay.io/metallb/speaker:v0.14.8 reg.timinglee.org/metallb/speaker:v0.14.8 \[root@k8s-master \~\]# docker tag quay.io/metallb/controller:v0.14.8 reg.timinglee.org/metallb/controller:v0.14.8 \[root@k8s-master \~\]# docker push reg.timinglee.org/metallb/speaker:v0.14.8 \[root@k8s-master \~\]# docker push reg.timinglee.org/metallb/controller:v0.14.8 5.部署服务 \[root@k8s-master metallb\]# kubectl apply -f metallb-native.yaml \[root@k8s-master metallb\]# kubectl -n metallb-system get pods NAME READY STATUS RESTARTS AGE controller-584575df59-wblql 1/1 Running 0 29s speaker-8xwvh 1/1 Running 0 29s speaker-m845b 1/1 Running 0 29s speaker-wrvh7 1/1 Running 0 29s 6.配置分配地址段 \[root@k8s-master metallb\]# vim configmap.yml apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool #地址池名称 namespace: metallb-system spec: addresses: - 192.168.10.10-192.168.10.200 #修改为自己本地地址段 --- #两个不同的kind中间必须加分割 apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: example namespace: metallb-system spec: ipAddressPools: - first-pool #使用地址池 \[root@k8s-master \~\]# kubectl get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 \ 443/TCP 31d timinglee-service LoadBalancer 10.105.122.155 192.168.10.50 80:36677/TCP 11s #通过分配地址从集群外访问服务 \[root@k8s-master \~\]# curl 192.168.10.50 Hello MyApp \| Version: v1 \| \Pod Name\
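The address pool and L2 advertisement above still have to be applied before MetalLB starts handing out addresses; a short follow-up, assuming the configmap.yml file name used above:

```bash
# Apply the IPAddressPool and L2Advertisement, then confirm MetalLB accepted them
kubectl apply -f configmap.yml
kubectl -n metallb-system get ipaddresspools.metallb.io
kubectl -n metallb-system get l2advertisements.metallb.io
```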

4.6 externalname

  • When this type of Service is created it is not assigned an IP; instead a DNS CNAME to a fixed domain name is used, which solves the problem of changing IPs

  • Typically used when external services need to talk to Pods, or when an external service is being migrated into the cluster

  • During an application's migration into the cluster, ExternalName is useful in the transition phase.

  • When resources outside the cluster are migrated in, their IPs may change along the way, but a domain name plus DNS resolution handles this cleanly

Example:

root@k8s-master \~\]# vim timinglee.yaml \[root@k8s-master \~\]# cat timinglee.yaml apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: timinglee name: timinglee spec: replicas: 2 selector: matchLabels: app: timinglee strategy: {} template: metadata: creationTimestamp: null labels: app: timinglee spec: containers: - image: reg.timinglee.org/library/myapp:v1 name: myapp --- apiVersion: v1 kind: Service metadata: creationTimestamp: null labels: app: timinglee-service name: timinglee-service spec: ports: - port: 80 protocol: TCP targetPort: 80 selector: app: timinglee type: ExternalName externalName: www.timinglee.org status: loadBalancer: {} \[root@k8s-master \~\]# kubectl get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 \ 443/TCP 31d timinglee-service ExternalName \ www.timinglee.org 80/TCP 8s
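The effect is easiest to see from inside the cluster: the Service name resolves to a CNAME for the external domain instead of a ClusterIP. A hedged check, reusing the busyboxplus test image from earlier:

```bash
# From inside the cluster the ExternalName Service resolves via a CNAME, not a ClusterIP
kubectl run test --image reg.timinglee.org/library/busyboxplus:latest -it --rm --restart=Never -- \
  nslookup timinglee-service.default.svc.cluster.local
# Expected: the lookup follows the CNAME to www.timinglee.org
```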

5 Ingress-nginx


Official docs:

Installation Guide - Ingress-Nginx Controller

5.1 ingress-nginx features

  • A global load-balancing service set up to proxy different backend Services; it supports layer 7
  • Ingress consists of two parts: the Ingress controller and the Ingress resources
  • The Ingress controller provides the actual proxying according to the Ingress objects you define (which controller picks an object up is decided by its IngressClass, checked below).
  • The common reverse proxies in the industry, such as Nginx, HAProxy, Envoy and Traefik, all maintain dedicated Ingress controllers for Kubernetes.
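Once the controller from the next section is installed, the classes available in the cluster can be listed; the ingress-nginx controller registers an IngressClass named "nginx":

```bash
# List installed Ingress classes
kubectl get ingressclass
```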

5.2 Deploying ingress


# Preparation before deployment

root@k8s-master \~\]# kubectl create deployment myappv1 --image reg.timinglee.org/library/myapp:v1 --dry-run=client -o yaml \> myapp-v1.yml \[root@k8s-master \~\]# cp myapp-v1.yml myapp-v2.yml \[root@k8s-master \~\]# vim myapp-v2.yml \[root@k8s-master \~\]# cat myapp-v1.yml apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: myappv1 name: myappv1 spec: replicas: 1 selector: matchLabels: app: myappv1 strategy: {} template: metadata: creationTimestamp: null labels: app: myappv1 spec: containers: - image: reg.timinglee.org/library/myapp:v1 name: myapp resources: {} status: {} \[root@k8s-master \~\]# cat myapp-v2.yml apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: myappv2 name: myappv2 spec: replicas: 1 selector: matchLabels: app: myappv2 strategy: {} template: metadata: creationTimestamp: null labels: app: myappv2 spec: containers: - image: reg.timinglee.org/library/myapp:v2 name: myapp2 resources: {} status: {} \[root@k8s-master \~\]# kubectl apply -f myapp-v1.yml deployment.apps/myappv1 created \[root@k8s-master \~\]# kubectl apply -f myapp-v2.yml deployment.apps/myappv2 created \[root@k8s-master \~\]# kubectl get pod NAME READY STATUS RESTARTS AGE myappv1-78ff74589d-mqm6k 1/1 Running 0 11s myappv2-68578565d8-swgzv 1/1 Running 0 6s \[root@k8s-master \~\]# kubectl expose deployment myappv1 --port 80 --target-port 80 --dry-run=client -o yaml \>\> myapp-v1.yml \[root@k8s-master \~\]# kubectl expose deployment myappv2 --port 80 --target-port 80 --dry-run=client -o yaml \>\> myapp-v2.yml \[root@k8s-master \~\]# vim myapp-v1.yml \[root@k8s-master \~\]# vim myapp-v2.yml \[root@k8s-master \~\]# cat myapp-v1.yml apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: myappv1 name: myappv1 spec: replicas: 1 selector: matchLabels: app: myappv1 strategy: {} template: metadata: creationTimestamp: null labels: app: myappv1 spec: containers: - image: reg.timinglee.org/library/myapp:v1 name: myapp resources: {} status: {} --- apiVersion: v1 kind: Service metadata: creationTimestamp: null labels: app: myappv1 name: myappv1 spec: ports: - port: 80 protocol: TCP targetPort: 80 selector: app: myappv1 status: loadBalancer: {} \[root@k8s-master \~\]# cat myapp-v2.yml apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: myappv2 name: myappv2 spec: replicas: 1 selector: matchLabels: app: myappv2 strategy: {} template: metadata: creationTimestamp: null labels: app: myappv2 spec: containers: - image: reg.timinglee.org/library/myapp:v2 name: myapp2 resources: {} status: {} --- apiVersion: v1 kind: Service metadata: creationTimestamp: null labels: app: myappv2 name: myappv2 spec: ports: - port: 80 protocol: TCP targetPort: 80 selector: app: myappv2 status: loadBalancer: {} \[root@k8s-master \~\]# kubectl apply -f myapp-v1.yml deployment.apps/myappv1 configured service/myappv1 created \[root@k8s-master \~\]# kubectl apply -f myapp-v2.yml deployment.apps/myappv2 configured service/myappv2 created \[root@k8s-master \~\]# kubectl get pod NAME READY STATUS RESTARTS AGE myappv1-78ff74589d-mqm6k 1/1 Running 0 4m59s myappv2-68578565d8-swgzv 1/1 Running 0 4m54s #测试 \[root@k8s-master \~\]# kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 \ 443/TCP 31d myappv1 ClusterIP 10.100.212.4 \ 80/TCP 45s myappv2 ClusterIP 10.99.186.84 \ 80/TCP 40s \[root@k8s-master \~\]# curl 10.100.212.4 Hello MyApp \| Version: v1 \| \Pod Name\ \[root@k8s-master \~\]# curl 
10.99.186.84 Hello MyApp \| Version: v2 \| \Pod Name\

5.2.1 Download the deployment file (files already provided)

```bash
[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/baremetal/deploy.yaml
```

Upload the images ingress needs to the harbor registry

```bash
[root@k8s-master ~]# docker tag registry.k8s.io/ingress-nginx/controller:v1.11.2@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce reg.timinglee.org/ingress-nginx/controller:v1.11.2
[root@k8s-master ~]# docker tag registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 reg.timinglee.org/ingress-nginx/kube-webhook-certgen:v1.4.3
[root@k8s-master ~]# docker push reg.timinglee.org/ingress-nginx/controller:v1.11.2
[root@k8s-master ~]# docker push reg.timinglee.org/ingress-nginx/kube-webhook-certgen:v1.4.3
```

5.2.2 Installing ingress

root@k8s-master \~\]# vim deploy.yaml 445 image: ingress-nginx/controller:v1.11.2 546 image: ingress-nginx/kube-webhook-certgen:v1.4.3 599 image: ingress-nginx/kube-webhook-certgen:v1.4.3 \[root@k8s-master ingress\]# kubectl apply -f deploy.yaml \[root@k8s-master ingress\]# kubectl -n ingress-nginx get pods NAME READY STATUS RESTARTS AGE ingress-nginx-admission-create-xql2j 0/1 Completed 0 38s ingress-nginx-admission-patch-46zhq 0/1 Completed 2 38s ingress-nginx-controller-67bd6649b6-whdjw 1/1 Running 0 38s \[root@k8s-master ingress\]# \[root@k8s-master ingress\]# kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller NodePort 10.96.34.154 \ 80:38991/TCP,443:36893/TCP 63s ingress-nginx-controller-admission ClusterIP 10.111.70.191 \ 443/TCP 63s #修改微服务为loadbalancer \[root@k8s-master \~\]# kubectl -n ingress-nginx edit svc ingress-nginx-controller 49 type: LoadBalancer \[root@k8s-master ingress\]# kubectl -n ingress-nginx get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller LoadBalancer 10.96.34.154 \ 80:38991/TCP,443:36893/TCP 4m13s ingress-nginx-controller-admission ClusterIP 10.111.70.191 \ 443/TCP 4m13s \[root@k8s-master ingress\]# kubectl -n ingress-nginx get all NAME READY STATUS RESTARTS AGE pod/ingress-nginx-admission-create-xql2j 0/1 Completed 0 28m pod/ingress-nginx-admission-patch-46zhq 0/1 Completed 2 28m pod/ingress-nginx-controller-67bd6649b6-whdjw 1/1 Running 0 28m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/ingress-nginx-controller LoadBalancer 10.96.34.154 192.168.10.50 80:38991/TCP,443:36893/TCP 28m service/ingress-nginx-controller-admission ClusterIP 10.111.70.191 \ 443/TCP 28m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/ingress-nginx-controller 1/1 1 1 28m NAME DESIRED CURRENT READY AGE replicaset.apps/ingress-nginx-controller-67bd6649b6 1 1 1 28m NAME STATUS COMPLETIONS DURATION AGE job.batch/ingress-nginx-admission-create Complete 1/1 7s 28m job.batch/ingress-nginx-admission-patch Complete 1/1 20s 28m \[root@k8s-master ingress\]#

Note:

The external IP shown on ingress-nginx-controller is the IP through which ingress is ultimately exposed to the outside.

5.2.3 Testing ingress

# Generate the yaml file

```bash
[root@k8s-master ingress]# kubectl create ingress webcluster --rule '*/=timinglee-svc:80' --dry-run=client -o yaml > timinglee-ingress.yml
[root@k8s-master ingress]# vim timinglee-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: timinglee-svc
            port:
              number: 80
        path: /
        pathType: Prefix
        # pathType options: Exact, ImplementationSpecific, Prefix, regular-expression matching

# create the ingress resource
[root@k8s-master ingress]# kubectl apply -f timinglee-ingress.yml
ingress.networking.k8s.io/test-ingress created
[root@k8s-master ingress]# kubectl get ingress
NAME      CLASS   HOSTS   ADDRESS         PORTS   AGE
myappv1   nginx   *       192.168.10.10   80      34s
[root@k8s-master ingress]# curl 192.168.10.50
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
```

Note: the ingress must be in the same namespace as the Service it forwards to.

5.3 Advanced ingress usage


5.3.1 Path-based routing

1. Create the myapp controllers used for testing (this was already done above; skip it if you followed the steps there)

root@k8s-master \~\]# kubectl create deployment myappv1 --image reg.timinglee.org/library/myapp:v1 --dry-run=client -o yaml \> myapp-v1.yml \[root@k8s-master \~\]# cp myapp-v1.yml myapp-v2.yml \[root@k8s-master \~\]# vim myapp-v2.yml \[root@k8s-master \~\]# cat myapp-v1.yml apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: myappv1 name: myappv1 spec: replicas: 1 selector: matchLabels: app: myappv1 strategy: {} template: metadata: creationTimestamp: null labels: app: myappv1 spec: containers: - image: reg.timinglee.org/library/myapp:v1 name: myapp resources: {} status: {} \[root@k8s-master \~\]# cat myapp-v2.yml apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: myappv2 name: myappv2 spec: replicas: 1 selector: matchLabels: app: myappv2 strategy: {} template: metadata: creationTimestamp: null labels: app: myappv2 spec: containers: - image: reg.timinglee.org/library/myapp:v2 name: myapp2 resources: {} status: {} \[root@k8s-master \~\]# kubectl apply -f myapp-v1.yml deployment.apps/myappv1 created \[root@k8s-master \~\]# kubectl apply -f myapp-v2.yml deployment.apps/myappv2 created \[root@k8s-master \~\]# kubectl get pod NAME READY STATUS RESTARTS AGE myappv1-78ff74589d-mqm6k 1/1 Running 0 11s myappv2-68578565d8-swgzv 1/1 Running 0 6s \[root@k8s-master \~\]# kubectl expose deployment myappv1 --port 80 --target-port 80 --dry-run=client -o yaml \>\> myapp-v1.yml \[root@k8s-master \~\]# kubectl expose deployment myappv2 --port 80 --target-port 80 --dry-run=client -o yaml \>\> myapp-v2.yml \[root@k8s-master \~\]# vim myapp-v1.yml \[root@k8s-master \~\]# vim myapp-v2.yml \[root@k8s-master \~\]# cat myapp-v1.yml apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: myappv1 name: myappv1 spec: replicas: 1 selector: matchLabels: app: myappv1 strategy: {} template: metadata: creationTimestamp: null labels: app: myappv1 spec: containers: - image: reg.timinglee.org/library/myapp:v1 name: myapp resources: {} status: {} --- apiVersion: v1 kind: Service metadata: creationTimestamp: null labels: app: myappv1 name: myappv1 spec: ports: - port: 80 protocol: TCP targetPort: 80 selector: app: myappv1 status: loadBalancer: {} \[root@k8s-master \~\]# cat myapp-v2.yml apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: myappv2 name: myappv2 spec: replicas: 1 selector: matchLabels: app: myappv2 strategy: {} template: metadata: creationTimestamp: null labels: app: myappv2 spec: containers: - image: reg.timinglee.org/library/myapp:v2 name: myapp2 resources: {} status: {} --- apiVersion: v1 kind: Service metadata: creationTimestamp: null labels: app: myappv2 name: myappv2 spec: ports: - port: 80 protocol: TCP targetPort: 80 selector: app: myappv2 status: loadBalancer: {} \[root@k8s-master \~\]# kubectl apply -f myapp-v1.yml deployment.apps/myappv1 configured service/myappv1 created \[root@k8s-master \~\]# kubectl apply -f myapp-v2.yml deployment.apps/myappv2 configured service/myappv2 created \[root@k8s-master \~\]# kubectl get pod NAME READY STATUS RESTARTS AGE myappv1-78ff74589d-mqm6k 1/1 Running 0 4m59s myappv2-68578565d8-swgzv 1/1 Running 0 4m54s \[root@k8s-master \~\]# kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 \ 443/TCP 31d myappv1 ClusterIP 10.100.212.4 \ 80/TCP 45s myappv2 ClusterIP 10.99.186.84 \ 80/TCP 40s

2. Create the ingress yaml

```bash
[root@k8s-master ingress]# vim ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /   # anything after the matched path is rewritten to /
  name: ingress1
spec:
  ingressClassName: nginx
  rules:
  - host: www.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /v1
        pathType: Prefix
      - backend:
          service:
            name: myappv2
            port:
              number: 80
        path: /v2
        pathType: Prefix

# test:
[root@k8s-master ingress]# kubectl apply -f ingress.yml
ingress.networking.k8s.io/ingress1 created
[root@k8s-master ingress]# echo 192.168.10.50 www.timinglee.org >> /etc/hosts
[root@k8s-master ingress]# curl www.timinglee.org/v1
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ingress]# curl www.timinglee.org/v2
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

# what nginx.ingress.kubernetes.io/rewrite-target: / achieves:
[root@k8s-master ingress]# curl www.timinglee.org/v2/aaa
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
```

5.3.2 Host-based routing

# Configure name resolution on the test host

root@reg \~\]# vim /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 192.168.10.130 reg.timinglee.org 192.168.10.50 www.timinglee.org myappv1.timinglee.org myappv2.timinglee.org # 建立基于域名的yml文件 \[root@k8s-master ingress\]# vim ingress2.yml \[root@k8s-master ingress\]# cat ingress2.yml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: / name: ingress2 spec: ingressClassName: nginx rules: - host: myappv1.timinglee.org http: paths: - backend: service: name: myappv1 port: number: 80 path: / pathType: Prefix - host: myappv2.timinglee.org http: paths: - backend: service: name: myappv2 port: number: 80 path: / pathType: Prefix #利用文件建立ingress \[root@k8s-master ingress\]# kubectl apply -f ingress2.yml ingress.networking.k8s.io/ingress2 created \[root@k8s-master ingress\]# kubectl describe ingress ingress2 Name: ingress2 Labels: \ Namespace: default Address: 192.168.10.10 Ingress Class: nginx Default backend: \ Rules: Host Path Backends ---- ---- -------- myappv1.timinglee.org / myappv1:80 (10.244.1.23:80) myappv2.timinglee.org / myappv2:80 (10.244.2.20:80) Annotations: nginx.ingress.kubernetes.io/rewrite-target: / Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Sync 30s (x2 over 66s) nginx-ingress-controller Scheduled for sync #在测试主机中测试 \[root@reg \~\]# curl myappv1.timinglee.org Hello MyApp \| Version: v1 \| \Pod Name\ \[root@reg \~\]# curl myappv2.timinglee.org Hello MyApp \| Version: v2 \| \Pod Name\

5.3.3 Setting up TLS encryption

# Create a certificate

```bash
[root@k8s-master tls]# openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -subj "/CN=nginxsvc/O=nginxsvc" -out tls.crt
# (openssl key-generation progress output trimmed)
-----

# create a secret of type tls to hold the key pair
[root@k8s-master tls]# ls
tls.crt  tls.key
[root@k8s-master tls]# kubectl create secret tls web-tls-secret --key tls.key --cert tls.crt
secret/web-tls-secret created
[root@k8s-master tls]# kubectl get secrets
NAME             TYPE                DATA   AGE
web-tls-secret   kubernetes.io/tls   2      12s
```

Note:

Secrets are how Kubernetes usually stores sensitive data; a Secret is not itself a form of encryption. Secrets are covered in detail in a later lesson.
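To see what the Secret actually holds: the certificate and key sit base64-encoded under the tls.crt and tls.key keys and can be decoded locally:

```bash
# Inspect the TLS secret; the data is only base64-encoded, not encrypted
kubectl get secret web-tls-secret -o yaml
kubectl get secret web-tls-secret -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -dates
```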

# Create ingress3, the TLS-enabled ingress yaml

root@k8s-master tls\]# vim ingress3.yml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: / name: ingress3 spec: tls: - hosts: - myapp-tls.timinglee.org secretName: web-tls-secret ingressClassName: nginx rules: - host: myapp-tls.timinglee.org http: paths: - backend: service: name: myappv1 port: number: 80 path: / pathType: Prefix \[root@k8s-master tls\]# vim /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 192.168.10.10 k8s-node 192.168.10.20 k8s-node2 192.168.10.100 k8s-master 192.168.10.130 reg.timinglee.org 192.168.10.50 www.timinglee.org myapp-tls.timinglee.org \[root@k8s-master tls\]# kubectl apply -f ingress3.yml ingress.networking.k8s.io/ingress3 created #测试 \[root@k8s-master tls\]# curl -k https://myapp-tls.timinglee.org Hello MyApp \| Version: v1 \| \Pod Name\

5.3.4 Setting up basic auth

# Create the auth file

root@k8s-master tls\]# yum install httpd-tools.x86_64 -y \[root@k8s-master tls\]# htpasswd -cm auth lee New password: #密码是123 Re-type new password: Adding password for user lee \[root@k8s-master tls\]# cat auth lee:$apr1$BgZiZC5c$UZ559xczgGxU0ejRWypgs0 #建立认证类型资源 \[root@k8s-master tls\]# kubectl create secret generic auth-web --from-file auth secret/auth-web created \[root@k8s-master tls\]# kubectl describe secrets auth-web Name: auth-web Namespace: default Labels: \ Annotations: \ Type: Opaque Data ==== auth: 42 bytes #建立ingress4基于用户认证的yaml文件 \[root@k8s-master tls\]# vim ingress4.yml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/auth-type: basic nginx.ingress.kubernetes.io/auth-secret: auth-web nginx.ingress.kubernetes.io/auth-realm: "Please input username and password" name: ingress4 spec: tls: - hosts: - myapp-tls.timinglee.org secretName: web-tls-secret ingressClassName: nginx rules: - host: myapp-tls.timinglee.org http: paths: - backend: service: name: myappv1 port: number: 80 path: / pathType: Prefix #建立ingress4 \[root@k8s-master tls\]# kubectl apply -f ingress4.yml ingress.networking.k8s.io/ingress4 created \[root@k8s-master tls\]# kubectl describe ingress ingress4 Name: ingress4 Labels: \ Namespace: default Address: Ingress Class: nginx Default backend: \ TLS: web-tls-secret terminates myapp-tls.timinglee.org Rules: Host Path Backends ---- ---- -------- myapp-tls.timinglee.org / myappv1:80 (10.244.1.23:80) Annotations: nginx.ingress.kubernetes.io/auth-realm: Please input username and password nginx.ingress.kubernetes.io/auth-secret: auth-web nginx.ingress.kubernetes.io/auth-type: basic Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Sync 30s nginx-ingress-controller Scheduled for sync #测试: \[root@k8s-master tls\]# vim /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 192.168.10.10 k8s-node 192.168.10.20 k8s-node2 192.168.10.100 k8s-master 192.168.10.130 reg.timinglee.org 192.168.10.50 www.timinglee.org myapp-tls.timinglee.org \[root@k8s-master tls\]# curl -k https://myapp-tls.timinglee.org \ \\401 Authorization Required\\ \ \\401 Authorization Required\\ \\nginx\ \ \ \[root@k8s-master tls\]# curl -k https://myapp-tls.timinglee.org -ulee:123 Hello MyApp \| Version: v1 \| \Pod Name\

5.3.5 rewrite redirection


# Redirect the default page to /hostname.html

root@k8s-master tls\]# vim ingress5.yml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/app-root: /hostname.html nginx.ingress.kubernetes.io/auth-type: basic nginx.ingress.kubernetes.io/auth-secret: auth-web nginx.ingress.kubernetes.io/auth-realm: "Please input username and password" name: ingress5 spec: tls: - hosts: - myapp-tls.timinglee.org secretName: web-tls-secret ingressClassName: nginx rules: - host: myapp-tls.timinglee.org http: paths: - backend: service: name: myappv1 port: number: 80 path: / pathType: Prefix \[root@k8s-master tls\]# kubectl apply -f ingress5.yml ingress.networking.k8s.io/ingress5 created \[root@k8s-master tls\]# kubectl describe ingress ingress5 Name: ingress5 Labels: \ Namespace: default Address: Ingress Class: nginx Default backend: \ TLS: web-tls-secret terminates myapp-tls.timinglee.org Rules: Host Path Backends ---- ---- -------- myapp-tls.timinglee.org / myappv1:80 (10.244.1.23:80) Annotations: nginx.ingress.kubernetes.io/app-root: /hostname.html nginx.ingress.kubernetes.io/auth-realm: Please input username and password nginx.ingress.kubernetes.io/auth-secret: auth-web nginx.ingress.kubernetes.io/auth-type: basic Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Sync 57s nginx-ingress-controller Scheduled for sync #测试: \[root@k8s-master tls\]# curl -Lk https://myapp-tls.timinglee.org -ulee:123 myappv1-78ff74589d-mqm6k \[root@k8s-master tls\]# curl -Lk https://myapp-tls.timinglee.org/hostname.html -ulee:123 myappv1-78ff74589d-mqm6k \[root@k8s-master tls\]# curl -Lk https://myapp-tls.timinglee.org/lee/hostname.html -ulee:123 \ \\404 Not Found\\ \ \\404 Not Found\\ \\nginx/1.12.2\ \ \ #解决重定向路径问题 \[root@k8s-master tls\]# vim ingress6.yml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 nginx.ingress.kubernetes.io/use-regex: "true" nginx.ingress.kubernetes.io/auth-type: basic nginx.ingress.kubernetes.io/auth-secret: auth-web nginx.ingress.kubernetes.io/auth-realm: "Please input username and password" name: ingress6 spec: tls: - hosts: - myapp-tls.timinglee.org secretName: web-tls-secret ingressClassName: nginx rules: - host: myapp-tls.timinglee.org http: paths: - backend: service: name: myappv1 port: number: 80 path: / pathType: Prefix - backend: service: name: myappv1 port: number: 80 path: /lee(/\|$)(.\*) pathType: ImplementationSpecific \[root@k8s-master tls\]# kubectl apply -f ingress6.yml ingress.networking.k8s.io/ingress6 created \[root@k8s-master tls\]# curl -Lk https://myapp-tls.timinglee.org/lee/hostname.html -ulee:123 myappv1-78ff74589d-mqm6k

6 Canary releases

6.1 What is a canary release

A canary release, also called a gray release, is a software release strategy.

Its main goal is to test and validate a new version on a small subset of users or servers before rolling it out to the whole production environment, reducing the impact on the system if the new version introduces serious problems.

As a Pod release method, a canary release adds new Pods before removing old ones, so the total number of Pods never drops below the desired count. After part of the Pods have been updated, the rollout is paused; only when the new Pods are confirmed to be running correctly are the remaining Pods updated.

6.2 Canary release methods

Of these, header-based and weight-based canaries are the most common.

6.2.1 Header-based canary (HTTP headers)

  • Implemented through annotations
  • Create a canary ingress and configure the canary header key and value
  • Once the canary traffic has been verified, switch the main ingress over to the new version
  • Previously we did upgrades through a controller rolling update (25% at a time by default). A header-based canary makes the upgrade smoother: a specific key and value let us test whether the new version has problems before shifting all traffic.

Example:

# Create the ingress for version 1

root@k8s-master tls\]# vim ingress7.yml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: name: myapp-v1-ingress spec: ingressClassName: nginx rules: - host: myapp.timinglee.org http: paths: - backend: service: name: myappv1 port: number: 80 path: / pathType: Prefix \[root@k8s-master tls\]# kubectl apply -f ingress7.yml ingress.networking.k8s.io/myapp-v1-ingress created \[root@k8s-master tls\]# kubectl describe ingress myapp-v1-ingress Name: myapp-v1-ingress Labels: \ Namespace: default Address: Ingress Class: nginx Default backend: \ Rules: Host Path Backends ---- ---- -------- myapp.timinglee.org / myappv1:80 (10.244.1.23:80) Annotations: \ Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Sync 15s nginx-ingress-controller Scheduled for sync #测试: \[root@k8s-master tls\]# curl myapp.timinglee.org Hello MyApp \| Version: v1 \| \Pod Name\ #建立基于header的ingress \[root@k8s-master tls\]# vim ingress8.yml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/canary: "true" nginx.ingress.kubernetes.io/canary-by-header: version nginx.ingress.kubernetes.io/canary-by-header-value: "2" name: myapp-v2-ingress spec: ingressClassName: nginx rules: - host: myapp.timinglee.org http: paths: - backend: service: name: myappv2 port: number: 80 path: / pathType: Prefix \[root@k8s-master tls\]# kubectl apply -f ingress8.yml ingress.networking.k8s.io/myapp-v2-ingress created \[root@k8s-master tls\]# kubectl describe ingress myapp-v2-ingress Name: myapp-v2-ingress Labels: \ Namespace: default Address: 192.168.10.10 Ingress Class: nginx Default backend: \ Rules: Host Path Backends ---- ---- -------- myapp.timinglee.org / myappv2:80 (10.244.2.20:80) Annotations: nginx.ingress.kubernetes.io/canary: true nginx.ingress.kubernetes.io/canary-by-header: version nginx.ingress.kubernetes.io/canary-by-header-value: 2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Sync 24s (x2 over 53s) nginx-ingress-controller Scheduled for sync #测试: \[root@k8s-master tls\]# curl -H "version: 2" myapp.timinglee.org Hello MyApp \| Version: v2 \| \Pod Name\

6.2.2 Weight-based canary

  • Implemented through annotations

  • Create a canary ingress and configure the canary weight and the total weight

  • Once the canary traffic has been verified, switch the main ingress over to the new version

Example:

# Weight-based canary release

root@k8s-master tls\]# vim ingress9.yml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/canary: "true" nginx.ingress.kubernetes.io/canary-weight: "10" #更改权重值 nginx.ingress.kubernetes.io/canary-weight-total: "100" name: myapp-v2-ingress spec: ingressClassName: nginx rules: - host: myapp.timinglee.org http: paths: - backend: service: name: myappv2 port: number: 80 path: / pathType: Prefix \[root@k8s-master tls\]# kubectl apply -f ingress9.yml ingress.networking.k8s.io/myapp-v2-ingress created #测试: \[root@k8s-master tls\]# vim check_ingress.sh #!/bin/bash v1=0 v2=0 for (( i=0; i\<100; i++)) do response=\`curl -s myapp.timinglee.org \|grep -c v1\` v1=\`expr $v1 + $response\` v2=\`expr $v2 + 1 - $response\` done echo "v1:$v1, v2:$v2" \[root@k8s-master tls\]# kubectl apply -f ingress7.yml ingress.networking.k8s.io/myapp-v1-ingress created \[root@k8s-master tls\]# kubectl apply -f ingress8.yml ingress.networking.k8s.io/myapp-v2-ingress configured \[root@k8s-master tls\]# kubectl apply -f ingress9.yml ingress.networking.k8s.io/myapp-v2-ingress configured \[root@k8s-master tls\]# kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE myapp-v1-ingress nginx myapp.timinglee.org 192.168.10.10 80 56s myapp-v2-ingress nginx myapp.timinglee.org 192.168.10.10 80 8m7s \[root@k8s-master tls\]# sh check_ingress.sh v1:93, v2:7 \[root@k8s-master tls\]# sh check_ingress.sh v1:88, v2:12 \[root@k8s-master tls\]# sh check_ingress.sh v1:92, v2:8 #更改完毕权重后继续测试可观察变化 #更改权重值为30 \[root@k8s-master tls\]# sh check_ingress.sh v1:69, v2:31 \[root@k8s-master tls\]# sh check_ingress.sh v1:68, v2:32 \[root@k8s-master tls\]# sh check_ingress.sh v1:74, v2:26
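Once the canary has carried enough traffic without problems, a typical finishing step (a sketch, not part of the original walkthrough) is to push the weight to 100, repoint the main ingress at the v2 Service, and then remove the canary ingress:

```bash
# Hypothetical promotion once v2 is verified
kubectl annotate ingress myapp-v2-ingress nginx.ingress.kubernetes.io/canary-weight="100" --overwrite
# ...then edit myapp-v1-ingress to back onto the myappv2 Service and delete the canary ingress
kubectl delete ingress myapp-v2-ingress
```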
