Xia's Cloud-Native Journey: Bringing LoadBalancer Services to a Private Cloud with PureLB

Introduction

PureLB provides load-balancer services for on-premises Kubernetes clusters, filling the gap left by the LoadBalancer implementations that cloud providers ship with their managed offerings. As an open-source Service Load Balancer Controller, PureLB offers ease of use, flexible Linux network configuration, local address support, routing capabilities, and straightforward integration with CNI routing. With PureLB, an on-premises cluster gets load balancing comparable to a cloud environment, giving Kubernetes network traffic an efficient and reliable distribution path.

Quick Start

Get started in one pass on any off-the-shelf Kubernetes cluster such as minikube, microk8s, k3s, or kind:

  1. Define the IP pool range
  2. Define the CIDR that covers the pool

Example: deriving the CIDR for an IP range such as 192.168.2.170-192.168.2.199. Write the start and end addresses in binary, find their common prefix, and the length of that prefix gives the subnet mask. The start address 192.168.2.170 in binary is 11000000.10101000.00000010.10101010, and the end address 192.168.2.199 is 11000000.10101000.00000010.11000111. Comparing the two, the common prefix is 11000000.10101000.00000010.1, which is 25 bits long, so the CIDR for this range is 192.168.2.170/25.
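To double-check the prefix calculation by machine rather than by hand, here is a minimal sketch using only printf and bc (standard on most Linux systems). Only the last octet differs between the two boundary addresses, so that is the octet it prints in binary:

shell:
# Print the last octet of each boundary address in binary and eyeball the common prefix
printf 'start (170): %s\n' "$(echo 'obase=2; 170' | bc)"   # 10101010
printf 'end   (199): %s\n' "$(echo 'obase=2; 199' | bc)"   # 11000111
# Both begin with 1, so 24 shared bits in the first three octets + 1 = a /25 prefix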

shell:
export IP_POOL='192.168.2.170-192.168.2.199'
export SUBNET='192.168.2.170/25'

After defining these, run:

shell:
cat > purelb-l2.yaml <<EOF
apiVersion: purelb.io/v1
kind: ServiceGroup
metadata:
  name: layer2-ippool
  namespace: purelb
spec:
  local:
    v4pool:
      subnet: $SUBNET
      pool: $IP_POOL
      aggregation: default
EOF

kubectl apply -f purelb-l2.yaml

kubectl describe sg -n purelb


# Test the load balancer
cat > test-l2-lb-nginx.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-quic
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-lb
  namespace: nginx-quic
spec:
  selector:
    matchLabels:
      app: nginx-lb
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx-lb
    spec:
      containers:
      - name: nginx-lb
        image: tinychen777/nginx-quic:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # port exposed by the Service
    targetPort: 80 # port the pods listen on
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb2-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # port exposed by the Service
    targetPort: 80 # port the pods listen on
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb3-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # port exposed by the Service
    targetPort: 80 # port the pods listen on
  type: LoadBalancer
EOF

kubectl apply -f test-l2-lb-nginx.yaml

kubectl get svc -n nginx-quic

kubectl describe service nginx-lb-service -n nginx-quic
kubectl describe service nginx-lb2-service -n nginx-quic
kubectl describe service nginx-lb3-service -n nginx-quic
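Once the three Services are up, each should show an EXTERNAL-IP drawn from the pool. As a quick smoke test, fetch the address allocated to the first Service and curl it (a sketch; it assumes allocation has completed and the nginx pods are ready):

shell:
# Read the allocated address from the Service status, then request the page
LB_IP=$(kubectl get svc nginx-lb-service -n nginx-quic \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -i "http://${LB_IP}/"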

Prerequisites

There are two scenarios:

  1. An existing kube-proxy cluster
  2. An existing Cilium cluster

Pick the one that matches your setup.

Existing kube-proxy cluster

  1. In the Kubernetes cluster, edit the kube-proxy ConfigMap:

shell:
kubectl edit configmap kube-proxy -n kube-system

  2. In the kube-proxy ConfigMap YAML, set data.config.conf.ipvs.strictARP to true:

shell:
...
ipvs:
    strictARP: true
...

  3. Restart kube-proxy:

shell:
kubectl rollout restart daemonset kube-proxy -n kube-system
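If you would rather not edit the ConfigMap interactively, the same change can be scripted. This is a minimal sketch of a common non-interactive pattern; it simply rewrites strictARP: false to strictARP: true and re-applies:

shell:
# Flip strictARP in the kube-proxy ConfigMap and re-apply it in one pipeline
kubectl get configmap kube-proxy -n kube-system -o yaml \
  | sed -e 's/strictARP: false/strictARP: true/' \
  | kubectl apply -f -
# Restart kube-proxy so the change takes effect
kubectl rollout restart daemonset kube-proxy -n kube-system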

Existing Cilium cluster

If your existing Kubernetes cluster uses Cilium with kube-proxy replacement as its network layer, add or modify the following on top of your existing Cilium configuration:

  1. Enable L2 announcements
  2. Enable L2 pod announcements
  3. Configure the qps and burst rate limits

The corresponding Helm values:

  • l2podAnnouncements.interface: the externally reachable network interface, usually named with an eth or ens prefix depending on your operating system
  • k8sClientRateLimit.qps and k8sClientRateLimit.burst: see the article explaining how to choose these values

Updating the cluster: cilium upgrade and helm upgrade behave the same here; use whichever tool you prefer.

shell:
helm upgrade cilium ./cilium \
   --namespace kube-system \
   --reuse-values \
   --set l2announcements.enabled=true \
   --set k8sClientRateLimit.qps=10 \
   --set k8sClientRateLimit.burst=20 \
   --set kubeProxyReplacement=true \
   --set l2podAnnouncements.enabled=true \
   --set l2podAnnouncements.interface=ens160
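After the upgrade it is worth confirming that the new settings actually landed in the agent configuration. A quick check, assuming the cilium CLI is installed (the grep pattern is just a heuristic for the relevant config keys):

shell:
# Wait for Cilium to report healthy, then look for the L2 and rate-limit settings
cilium status --wait
cilium config view | grep -E 'l2-announcements|k8s-client'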

Configuration

Layer2

Define an IP range for LoadBalancer-type Services to draw from:

IP_POOL: the IP pool, which can be either:

  1. A subnet, e.g. 192.168.2.170/24
  2. A range, e.g. 192.168.2.170-192.168.2.199

SUBNET: the CIDR covering that range

shell:
export IP_POOL='192.168.2.170-192.168.2.199'
export SUBNET='192.168.2.170/25'

cat > purelb-l2.yaml <<EOF
apiVersion: purelb.io/v1
kind: ServiceGroup
metadata:
  name: layer2-ippool
  namespace: purelb
spec:
  local:
    v4pool:
      subnet: $SUBNET
      pool: $IP_POOL
      aggregation: default
EOF

Apply it:

shell:
kubectl apply -f purelb-l2.yaml
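To confirm that PureLB picked up the ServiceGroup, describe it; optionally, check the allocator's logs (this assumes a default PureLB install, where the allocator runs as a Deployment named allocator in the purelb namespace):

shell:
kubectl describe sg layer2-ippool -n purelb
# Optional: recent allocator log lines should mention the new pool
kubectl logs -n purelb deploy/allocator | tail -n 20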

Test

shell:
cat > test-l2-lb-nginx.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-quic
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-lb
  namespace: nginx-quic
spec:
  selector:
    matchLabels:
      app: nginx-lb
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx-lb
    spec:
      containers:
      - name: nginx-lb
        image: tinychen777/nginx-quic:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # port exposed by the Service
    targetPort: 80 # port the pods listen on
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb2-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # port exposed by the Service
    targetPort: 80 # port the pods listen on
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb3-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # port exposed by the Service
    targetPort: 80 # port the pods listen on
  type: LoadBalancer
EOF

kubectl apply -f test-l2-lb-nginx.yaml

Then inspect the results with these commands:

shell:
kubectl get svc -n nginx-quic

kubectl describe service nginx-lb-service -n nginx-quic
kubectl describe service nginx-lb2-service -n nginx-quic
kubectl describe service nginx-lb3-service -n nginx-quic
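With local (Layer 2) addresses, PureLB elects one node to own each allocated IP and adds it as a secondary address on that node's interface, so you can also verify the allocation from the node itself. A sketch, assuming the interface is named ens160 as in the Cilium example above:

shell:
# Run on the elected node: the LoadBalancer IP appears as a secondary address
ip addr show ens160 | grep -F '192.168.2.'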