Istio-2: Traffic Management - Simple Load Balancing

1. Overview

Istio's official website highlights three major features:

• Traffic Management
• Security
• Observability

Traffic Management is our topic today, starting with load balancing.

2. Load Balancing

Plenty of components already do load balancing: nginx, the Kubernetes Service (implemented underneath by kube-proxy), Istio, traefik, and so on. nginx, Istio, and traefik can balance at layer 7 (the application protocol layer), while a Kubernetes Service only balances at layer 4 (IP and port). Load balancing boils down to two steps:

  1. Find the ip+port of each service provider
  2. Pick one ip+port by some algorithm (round-robin, weighted, random, ...) and send the request to it

The ip+port list is either configured by hand or fetched from a third party (Nacos, Consul, Kubernetes, Istio, and so on). Before looking at how Istio does it, here is a toy sketch of the two steps.
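This is a minimal bash sketch, not how Envoy, kube-proxy, or any of the components above actually implement it; the endpoint list and weights are hard-coded stand-ins for what step 1 would fetch from a registry:

```bash
# Step 1 stand-in: a fixed endpoint list with weights 9 and 1.
# Step 2: weighted random selection, then "send" the request (here: print it).
pick() {
  if (( RANDOM % 10 < 9 )); then
    echo "10.0.0.1:3000"   # weight 9
  else
    echo "10.0.0.2:3000"   # weight 1
  fi
}
for i in $(seq 1 100); do pick; done | sort | uniq -c   # roughly 90/10
```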

3. Load Balancing with Istio

  1. First we need a simple image, liuxuzxx/service-mesh:v1.0.9, whose role and behavior are switched through environment variables:

| Variable | Values | Notes |
| --- | --- | --- |
| SERVICE_TYPE | client / server | Distinguishes whether a request terminates here, simulating a client or a server |
| VERSION | a string such as v1.0.9 / v1.0.10 | Used later for canary releases and more complex traffic-management experiments |
| SERVICE_URL | the server's address | Required when SERVICE_TYPE is client; otherwise the process reports an error |
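For instance, the image can be tried locally like this (an illustrative command; port 3000 is taken from the Deployment spec below):

```bash
# Run the image as a v1.0.9 server; behavior is driven purely by env vars.
docker run -e SERVICE_TYPE=server -e VERSION=v1.0.9 \
  -p 3000:3000 liuxuzxx/service-mesh:v1.0.9
```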

2. Create a namespace

```bash
kubectl create ns zadig
```
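The Deployments below opt into sidecar injection per Pod through the sidecar.istio.io/inject annotation; depending on how your mesh is set up, you could instead enable injection for the whole namespace:

```bash
# Optional alternative: automatic sidecar injection for every Pod in zadig.
kubectl label namespace zadig istio-injection=enabled
```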

3. Apply the deploy.yaml file below, which contains the following resources:

```text
3 Deployments:
  istio-service-mesh-client (the client)
  istio-service-mesh-server-v1-0-9 (the v1.0.9 server)
  istio-service-mesh-server-v1-0-10 (the v1.0.10 server)

2 Services:
  istio-service-mesh-client (the client's Service)
  istio-service-mesh-server (the server's Service)
```

The resource YAML:

```yaml
---  
apiVersion: apps/v1  
kind: Deployment  
metadata:  
  name: istio-service-mesh-server-v1-0-9  
  namespace: zadig  
spec:  
  replicas: 1  
  selector:  
    matchLabels:  
      app: service-mesh  
      type: server  
      version: "v1.0.9"  
  template:  
    metadata:  
      annotations:  
        sidecar.istio.io/inject: "true"  
      labels:  
        app: service-mesh  
        type: server  
        version: "v1.0.9"  
    spec:  
      containers:  
        - name: service-mesh  
          image: liuxuzxx/service-mesh:v1.0.9  
          imagePullPolicy: Always  
          ports:  
            - name: http  
              containerPort: 3000  
          resources:  
            limits:  
              cpu: 500m  
              memory: 200Mi  
            requests:  
              cpu: 10m  
              memory: 10Mi  
          volumeMounts:  
            - name: date-config  
              mountPath: /etc/localtime  
          env:  
            - name: SERVICE_TYPE  
              value: "server"  
            - name: VERSION  
              value: "v1.0.9"  
      volumes:  
        - name: date-config  
          hostPath:  
            path: /etc/localtime  
---  
apiVersion: apps/v1  
kind: Deployment  
metadata:  
  name: istio-service-mesh-server-v1-0-10  
  namespace: zadig  
spec:  
  replicas: 1  
  selector:  
    matchLabels:  
      app: service-mesh  
      type: server  
      version: "v1.0.10"  
  template:  
    metadata:  
      annotations:  
        sidecar.istio.io/inject: "true"  
      labels:  
        app: service-mesh  
        type: server  
        version: "v1.0.10"  
    spec:  
      containers:  
        - name: service-mesh  
          image: liuxuzxx/service-mesh:v1.0.9  
          imagePullPolicy: Always  
          ports:  
            - name: http  
              containerPort: 3000  
          resources:  
            limits:  
              cpu: 500m  
              memory: 200Mi  
            requests:  
              cpu: 10m  
              memory: 10Mi  
          volumeMounts:  
            - name: date-config  
              mountPath: /etc/localtime  
          env:  
            - name: SERVICE_TYPE  
              value: "server"  
            - name: VERSION  
              value: "v1.0.10"  
      volumes:  
        - name: date-config  
          hostPath:  
            path: /etc/localtime  
---  
apiVersion: v1  
kind: Service  
metadata:  
  name: istio-service-mesh-server  
  namespace: zadig  
spec:  
  selector:  
    app: service-mesh  
    type: server  
  ports:  
    - protocol: TCP  
      name: http  
      port: 3000  
      targetPort: 3000  
  
---  
apiVersion: apps/v1  
kind: Deployment  
metadata:  
  name: istio-service-mesh-client  
  namespace: zadig  
spec:  
  selector:  
    matchLabels:  
      app: service-mesh  
      type: client  
  template:  
    metadata:  
      annotations:  
        sidecar.istio.io/inject: "true"  
      labels:  
        app: service-mesh  
        type: client  
    spec:  
      containers:  
        - name: service-mesh  
          image: liuxuzxx/service-mesh:v1.0.9  
          imagePullPolicy: Always  
          ports:  
            - name: http  
              containerPort: 3000  
          resources:  
            limits:  
              cpu: 500m  
              memory: 200Mi  
            requests:  
              cpu: 10m  
              memory: 10Mi  
          volumeMounts:  
            - name: date-config  
              mountPath: /etc/localtime  
          env:  
            - name: SERVICE_TYPE  
              value: "client"  
            - name: VERSION  
              value: "v1.0.9"  
            - name: SERVICE_URL  
              value: "istio-service-mesh-server:3000"  
      volumes:  
        - name: date-config  
          hostPath:  
            path: /etc/localtime  
---  
apiVersion: v1  
kind: Service  
metadata:  
  name: istio-service-mesh-client  
  namespace: zadig  
spec:  
  selector:  
    app: service-mesh  
    type: client  
  ports:  
    - protocol: TCP  
      name: http  
      port: 3000  
      targetPort: 3000
```
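Applying and verifying is the usual kubectl routine:

```bash
kubectl apply -f deploy.yaml
# READY 2/2 means the istio-proxy sidecar was injected next to the app container.
kubectl get pod -n zadig
kubectl get svc -n zadig
```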

4. The access flow: the client Pod calls the istio-service-mesh-server Service, which fronts the Pods of both server versions.

5. Request istio-service-mesh-client and observe the Kubernetes Service's load balancing

(Because telepresence is installed locally, the Kubernetes Service can be called directly; the official installation and usage link is at the end of this article. Alternatively, modify deploy.yaml to expose the istio-service-mesh-client Service through a NodePort, as sketched below.)
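For the NodePort alternative, a one-line patch saves editing deploy.yaml:

```bash
# Switch the client Service to type NodePort instead of using telepresence.
kubectl patch svc istio-service-mesh-client -n zadig \
  -p '{"spec":{"type":"NodePort"}}'
kubectl get svc istio-service-mesh-client -n zadig   # note the assigned port
```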

```bash
$> curl http://istio-service-mesh-client.zadig:3000/tracing  
service-type:server version:v1.0.10 server-ip:10.216.145.12  
  
$> curl http://istio-service-mesh-client.zadig:3000/tracing  
service-type:server version:v1.0.10 server-ip:10.216.145.12  
  
$> curl http://istio-service-mesh-client.zadig:3000/tracing  
service-type:server version:v1.0.10 server-ip:10.216.145.12  
  
$> curl http://istio-service-mesh-client.zadig:3000/tracing  
service-type:server version:v1.0.9 server-ip:10.216.131.19  
  
$> curl http://istio-service-mesh-client.zadig:3000/tracing  
service-type:server version:v1.0.9 server-ip:10.216.131.19  
  
$> curl http://istio-service-mesh-client.zadig:3000/tracing  
service-type:server version:v1.0.10 server-ip:10.216.145.12  
  
$> curl http://istio-service-mesh-client.zadig:3000/tracing  
service-type:server version:v1.0.9 server-ip:10.216.131.19
```

Check the server Pods:

```bash
$> kubectl get pod -n zadig -o wide|grep server  
  
istio-service-mesh-server-v1-0-10-567954b557-z7k67   2/2     Running   0               17m   10.216.145.12  
istio-service-mesh-server-v1-0-9-978bdf4d5-7c957     2/2     Running   0               17m   10.216.131.19
```

Load balancing clearly happens, but it is done by kube-proxy and has nothing to do with Istio: so far the deployment has only added the istio-init and istio-proxy sidecar containers, and no Istio configuration has been applied. The 3:4 split between the two versions roughly matches their 1:1 Pod-count ratio.
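This can be confirmed by listing the endpoints kube-proxy balances across, which are exactly the two server Pod IPs above:

```bash
kubectl get endpoints istio-service-mesh-server -n zadig
```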

6. Configure Istio DestinationRule and VirtualService objects by applying the following YAML

The traffic should follow these rules:

  1. v1.0.9: 90%
  2. v1.0.10: 10%
```yaml
---  
apiVersion: networking.istio.io/v1  
kind: DestinationRule  
metadata:  
  name: mesh-dr  
  namespace: zadig  
spec:  
  host: istio-service-mesh-server # the target Service; with several Services, configure one DestinationRule per Service
  subsets:  
    - labels: # Pod labels; the Pods matched by these labels form one subset
        version: "v1.0.9"
        app: "service-mesh"
      name: v1-0-9 # underscores are not allowed in subset names
    - labels:  
        version: "v1.0.10"  
        app: "service-mesh"  
      name: v1-0-10  
        
---  
apiVersion: networking.istio.io/v1  
kind: VirtualService  
metadata:  
  name: mesh-vs  
  namespace: zadig  
spec:  
  hosts:  
    - istio-service-mesh-server  
  http:  
    - route:  
        - destination:  
            host: istio-service-mesh-server # must match the host of the DestinationRule that defines the subset
            subset: v1-0-9 # references a subset defined in the DestinationRule above
          weight: 90  
        - destination:  
            host: istio-service-mesh-server  
            subset: v1-0-10  
          weight: 10
```
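Assuming the YAML above is saved as istio-rules.yaml (a name chosen here for illustration), apply it and let istioctl sanity-check the mesh configuration:

```bash
kubectl apply -f istio-rules.yaml
# istioctl analyze flags common mistakes, e.g. a VirtualService referencing
# a subset that no DestinationRule defines.
istioctl analyze -n zadig
```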

7. Send 100 real requests, for example with the loop below, and tally the results
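Assuming each response ends in a newline, as the output above suggests, the tally can be produced like this:

```bash
# 100 requests through the client, counted per distinct response line.
for i in $(seq 1 100); do
  curl -s http://istio-service-mesh-client.zadig:3000/tracing
done | sort | uniq -c
```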

```text
v1.0.9:  89 requests
v1.0.10: 11 requests
```

This closely matches the configured 90:10 weight split.

4. A Brief Explanation of VirtualService and DestinationRule

Istio's description of VirtualService: "Configuration affecting traffic routing." Short and to the point: it is the configuration that shapes how traffic is routed.

Istio's description of DestinationRule: "DestinationRule defines policies that apply to traffic intended for a service after routing has occurred." Also brief: a DestinationRule defines the policies applied to traffic once it has been routed to a service (service here meaning the backend process).

Taken together, a simple way to read VirtualService and DestinationRule:

VirtualService splits traffic into groups and dispatches it (coarse-grained); DestinationRule governs how each group's traffic actually reaches the concrete backend Pods/processes (fine-grained).

My guess at the reasoning behind the design:

Since Istio wants to manage traffic at the application layer, it has to handle two things:

  1. Accept requests from upstream services and group them into subsets by application-layer attributes (header/url/body and so on)
  2. Each group's traffic must ultimately land on concrete Pods/processes, and facing that downstream raises questions of connection counts, idle timeouts, maximum connections, maximum retries, whether to use TLS, which load-balancing algorithm to apply, and so on (see the sketch below)

VirtualService covers item 1 and DestinationRule covers item 2; the two concepts are linked through the subset.
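The item-2 knobs live under the DestinationRule's trafficPolicy. A hedged sketch of what mesh-dr could look like with such a policy added (the field values are illustrative, not tuned):

```bash
kubectl apply -n zadig -f - <<'EOF'
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: mesh-dr
  namespace: zadig
spec:
  host: istio-service-mesh-server
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN     # LB algorithm: ROUND_ROBIN / LEAST_REQUEST / RANDOM
    connectionPool:
      tcp:
        maxConnections: 100   # cap on connections to the upstream
      http:
        maxRetries: 3         # max concurrent outstanding retries
  subsets:                    # unchanged from the earlier mesh-dr
    - name: v1-0-9
      labels:
        app: service-mesh
        version: "v1.0.9"
    - name: v1-0-10
      labels:
        app: service-mesh
        version: "v1.0.10"
EOF
```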

5. Postscript

5.1 Using Telepresence

Quick start guide: https://www.telepresence.io/docs/quick-start
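The basic flow (commands from the quick start; the curl target is this article's client Service) is to connect the local machine to the cluster network, after which in-cluster DNS names resolve locally:

```bash
telepresence connect     # join the local machine to the cluster network
curl http://istio-service-mesh-client.zadig:3000/tracing
telepresence quit        # disconnect when done
```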