Tracing the request chain -- using envoy to record the real backend pod IP

Preface

In the previous posts we used iptables and ipvs to intercept packets at a point they all have to pass through (the POSTROUTING chain) and log them there. In this post we use a more mature component, envoy, to record the real IPs of the backend pods.

Environment setup

The environment is the same as before:

▶ kubectl get pod  -owide
NAME                          READY   STATUS    RESTARTS        AGE    IP            NODE     NOMINATED NODE   READINESS GATES
backend-6d4cdd4c68-mqzgj      1/1     Running   4               8d     10.244.0.73   wilson   <none>           <none>
backend-6d4cdd4c68-qjp9m      1/1     Running   4               7d3h   10.244.0.74   wilson   <none>           <none>
nginx-test-54d79c7bb8-zmrff   1/1     Running   2               23h    10.244.0.75   wilson   <none>           <none>

▶ kubectl get svc
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                          AGE
backend-service   ClusterIP   10.105.148.194   <none>        10000/TCP                        8d
nginx-test        NodePort    10.110.71.55     <none>        80:30785/TCP                     14d

envoy

As discussed before, we need a component that load-balances across the backend pods. Previously that role was played by iptables/ipvs, but tracing through them is hard, so something else has to take over the load balancing. That component can sit either in front of the backend pods or right after nginx; here we choose to put it after nginx.

Create the envoy ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-config
data:
  envoy.yaml: |
    static_resources:
      listeners:
        - name: ingress_listener
          address:
            socket_address:
              address: 0.0.0.0
              port_value: 10000
          filter_chains:
            - filters:
                - name: envoy.filters.network.http_connection_manager
                  typed_config:
                    "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                    stat_prefix: ingress_http
                    http_protocol_options:
                      accept_http_10: true
                    common_http_protocol_options:
                      idle_timeout: 300s
                    codec_type: AUTO
                    route_config:
                      name: local_route
                      virtual_hosts:
                        - name: app
                          domains: ["*"]
                          routes:
                            - match: { prefix: "/test" }
                              route:
                                cluster: app_service
                    http_filters:
                      - name: envoy.filters.http.router
                        typed_config:
                          "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
                    access_log:
                    - name: envoy.access_loggers.stdout
                      typed_config:
                        "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
                        log_format:
                          text_format: "[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %BYTES_SENT% %DURATION% %REQ(X-REQUEST-ID)% \"%REQ(USER-AGENT)%\" \"%REQ(X-FORWARDED-FOR)%\" %UPSTREAM_HOST% %UPSTREAM_CLUSTER% %RESPONSE_FLAGS%\n"

      clusters:
        - name: app_service
          connect_timeout: 1s
          type: STRICT_DNS
          lb_policy: ROUND_ROBIN
          load_assignment:
            cluster_name: app_service
            endpoints:
              - lb_endpoints:
                  - endpoint:
                      address:
                        socket_address:
                          address: "backend-service"
                          port_value: 10000

    admin:
      access_log_path: "/tmp/access.log"
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 9901
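
With the manifest saved locally (the file name below is just an example), apply it and check that the ConfigMap exists. Optionally, if you also keep the plain envoy.yaml around as a separate file, envoy itself can validate it before it ever reaches the cluster:

kubectl apply -f envoy-configmap.yaml
kubectl get configmap envoy-config

# optional: validate the config offline with a matching envoy version
docker run --rm -v "$PWD/envoy.yaml:/envoy.yaml" envoyproxy/envoy:v1.32-latest --mode validate -c /envoy.yaml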

Create the sidecar

The envoy container lives in the same pod as nginx-test; we add it to the deployment with a patch:

kubectl patch deployment nginx-test --type='json' -p='
[
  {
    "op": "add",
    "path": "/spec/template/spec/volumes/-",
    "value": {
      "configMap": {
        "defaultMode": 420,
        "name": "envoy-config"
      },
      "name": "envoy-config"
    }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/containers/-",
    "value": {
      "args": [
        "-c",
        "/etc/envoy/envoy.yaml"
      ],
      "image": "registry.cn-beijing.aliyuncs.com/wilsonchai/envoy:v1.32-latest",
      "imagePullPolicy": "IfNotPresent",
      "name": "envoy",
      "ports": [
        {
          "containerPort": 10000,
          "protocol": "TCP"
        },
        {
          "containerPort": 9901,
          "protocol": "TCP"
        }
      ],
      "volumeMounts": [
        {
          "mountPath": "/etc/envoy",
          "name": "envoy-config"
        }
      ]
    }
  }
]'

▶ kubectl get pod -owide -l app=nginx-test
NAME                          READY   STATUS    RESTARTS       AGE    IP            NODE     NOMINATED NODE   READINESS GATES
nginx-test-6df974c9f9-qksd4   2/2     Running   0              1d     10.244.0.80   wilson   <none>           <none>
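
To confirm the sidecar came up with the configuration we expect, peek at envoy's admin interface on port 9901 (defined in the ConfigMap above). A quick check from a workstation, assuming kubectl port-forward is available:

# forward the admin port of the nginx-test pod to localhost
kubectl port-forward deploy/nginx-test 9901:9901 &
# readiness of the proxy itself
curl -s localhost:9901/ready
# the upstream cluster and its resolved endpoints
curl -s localhost:9901/clusters | grep app_service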

There is now an extra envoy container inside the nginx-test pod. envoy listens on port 10000 and forwards every request for /test to backend-service:10000. The next step is to send nginx-test's outbound traffic to envoy so that envoy does the load balancing.

Modify the nginx-test upstream

Change backend-service to 127.0.0.1. The two containers share a pod and therefore a network namespace, so 127.0.0.1 reaches envoy directly:

upstream backend_ups {
    # server backend-service:10000;
    server 127.0.0.1:10000;
}


server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;

    location /test {
        proxy_pass http://backend_ups;
    }
}

Restart nginx for the change to take effect.
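
How you restart depends on how the nginx config is mounted. The simplest option is to roll the deployment; if the config comes from a ConfigMap that has already been synced into the pod, reloading nginx in place also works (the container name nginx-test is an assumption here):

# recreate the pods so both nginx and envoy start from the current config
kubectl rollout restart deployment nginx-test
# or, reload nginx in place without recreating the pod
kubectl exec deploy/nginx-test -c nginx-test -- nginx -t
kubectl exec deploy/nginx-test -c nginx-test -- nginx -s reload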

Verification

Run curl 10.22.12.178:30785/test and watch envoy's log with kubectl logs -f -l app=nginx-test -c envoy:
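
For example, sending a handful of requests so the load balancing has something to distribute (the node IP is specific to this environment):

# generate a few requests through the NodePort
for i in $(seq 1 5); do curl -s 10.22.12.178:30785/test; echo; done
# in another terminal, follow the sidecar's access log
kubectl logs -f -l app=nginx-test -c envoy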

[2025-12-16T09:45:56.365Z] "GET /test HTTP/1.0" 200 40 0 99032619-a060-481d-8f0d-9d773fad9b12 "curl/7.81.0" "-" 10.105.148.194:10000 app_service -

Wait, why does upstream_host still show 10.105.148.194? That is the Service IP, not the real IP of a backend pod. What is going on?

Fixing the problem

Think back to how a load balancer works: it forwards to backend real servers according to some algorithm (rr, wlc, and so on). But the backend we handed to envoy is still a k8s Service, backend-service. So although we added an envoy layer, this setup still leaves the actual load balancing to the k8s Service.

A k8s Service does not only load-balance, though; it also provides service discovery, which we still need. So we have to keep using a Service for discovery while bypassing its load balancing.

Fortunately k8s has headless Services for exactly this case: resolving a headless Service returns the list of running pods and leaves load balancing to the caller. Let's try it:

apiVersion: v1
kind: Service
metadata:
  name: backend-headless-service
spec:
  clusterIP: None
  selector:
    app: backend
  ports:
    - name: http
      port: 10000
      targetPort: 10000

clusterIP: None is what makes this a headless Service.
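
A quick way to see the difference from a regular Service: the headless Service has no cluster IP of its own, and its DNS name resolves straight to the pod IPs (the busybox image below is only used as a throwaway DNS client):

kubectl get svc backend-headless-service
kubectl get endpoints backend-headless-service
# one A record per ready pod, instead of a single service IP
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup backend-headless-service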

Then modify envoy's configuration so that the cluster forwards to the headless Service:

...
      clusters:
        - name: app_service
          connect_timeout: 1s
          type: STRICT_DNS
          lb_policy: ROUND_ROBIN
          load_assignment:
            cluster_name: app_service
            endpoints:
              - lb_endpoints:
                  - endpoint:
                      address:
                        socket_address:
                          address: "backend-headless-service"
                          port_value: 10000

...
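
The listener and cluster here are static configuration, so envoy will not pick up the ConfigMap change on its own; the simplest way to apply it is to roll the pods:

kubectl rollout restart deployment nginx-test

Hit the NodePort again and the access log now shows the real pod IPs: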

[2025-12-16T10:05:56.365Z] "GET /test HTTP/1.0" 200 40 0 2b029187-cddb-4278-99b8-2953a7e841a0 "curl/7.81.0" "-" 10.244.0.81:10000 app_service -
[2025-12-16T10:05:57.453Z] "GET /test HTTP/1.0" 200 40 1 384f9394-7ff9-4abb-b0f8-f9b69f2ba992 "curl/7.81.0" "-" 10.244.0.82:10000 app_service -

The requests are now indeed forwarded to the real backend pod IPs.

Summary

The current architecture:

Every pod carries its own envoy sidecar. To save envoy resources, several pods could share one envoy instead: deploy envoy as a DaemonSet, one instance per node, and have every pod scheduled onto that node forward its traffic to the node-local envoy (a rough sketch follows).
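
A minimal sketch of that variant, assuming the same envoy-config ConfigMap can be reused and that clients reach the proxy through the node IP via a hostPort; the names and ports below are illustrative, not a tested manifest:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: envoy-node-proxy          # hypothetical name
spec:
  selector:
    matchLabels:
      app: envoy-node-proxy
  template:
    metadata:
      labels:
        app: envoy-node-proxy
    spec:
      containers:
        - name: envoy
          image: registry.cn-beijing.aliyuncs.com/wilsonchai/envoy:v1.32-latest
          args: ["-c", "/etc/envoy/envoy.yaml"]
          ports:
            - containerPort: 10000
              hostPort: 10000      # pods on this node reach envoy via the node IP
          volumeMounts:
            - name: envoy-config
              mountPath: /etc/envoy
      volumes:
        - name: envoy-config
          configMap:
            name: envoy-config
EOF

The nginx upstream would then point at the node IP instead of 127.0.0.1, for example by injecting status.hostIP through the downward API.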

Contact me

  • Contact me for a deeper discussion

That's all for this post.

I'm no great expert; if I've spilled anything along the way, please don't hesitate to set me straight...