Xia's Cloud-Native Journey: Unlocking LoadBalancer Services in a Private Cloud with PureLB

Introduction

PureLB provides load-balancer services for on-premises Kubernetes clusters, filling the gap left by the cloud providers' built-in LoadBalancer implementations. As an open-source Service Load Balancer Controller, PureLB offers ease of use, flexible Linux network configuration, local address support, routing capabilities, and easy integration with CNI routing. With PureLB, an on-premises environment can match the load-balancing functionality of a cloud environment, providing an efficient and reliable way to distribute network traffic for a Kubernetes cluster.
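The rest of this article assumes PureLB itself is already installed in the purelb namespace. If it is not, a minimal install sketch using PureLB's Helm chart looks roughly like this; the repo URL and chart names are assumptions and should be verified against the official docs at purelb.gitlab.io/docs:

shell
# Assumption: repo URL per PureLB's docs at the time of writing; verify before use.
helm repo add purelb https://gitlab.com/api/v4/projects/20400619/packages/helm/stable
helm repo update
helm install --create-namespace --namespace=purelb purelb purelb/purelb
# The allocator and lbnodeagent pods should come up:
kubectl get pods -n purelb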

Quick Start

Get started in one step on a ready-made Kubernetes cluster such as minikube, MicroK8s, k3s, or kind:

  1. Define the IP pool range
  2. Define the CIDR that covers the pool

Example: deriving the CIDR for the range 192.168.2.170-192.168.2.199. Write the start and end addresses in binary, find their common prefix, and the prefix length determines the subnet mask. The start address 192.168.2.170 in binary is 11000000.10101000.00000010.10101010; the end address 192.168.2.199 is 11000000.10101000.00000010.11000111. Comparing the two, the common prefix is 11000000.10101000.00000010.1, which is 25 bits, so the range is covered by a /25 and can be written as 192.168.2.170/25 (canonically, the network address of that subnet is 192.168.2.128/25).
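To double-check the arithmetic, here is a small sketch using Python's standard ipaddress module, run from a shell; the loop simply shortens the prefix until both addresses fall into the same network:

shell
python3 - <<'PYEOF'
import ipaddress

start = int(ipaddress.IPv4Address('192.168.2.170'))
end = int(ipaddress.IPv4Address('192.168.2.199'))

# Shorten the prefix until the two addresses share the same network bits.
prefix = 32
while (start >> (32 - prefix)) != (end >> (32 - prefix)):
    prefix -= 1

net = ipaddress.ip_network(f'{ipaddress.IPv4Address(start)}/{prefix}', strict=False)
print(net)  # -> 192.168.2.128/25, i.e. a 25-bit common prefix
PYEOF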

shell
export IP_POOL='192.168.2.170-192.168.2.199'
export SUBNET='192.168.2.170/25'

Once both are defined, run:

shell
cat > purelb-l2.yaml <<EOF
apiVersion: purelb.io/v1
kind: ServiceGroup
metadata:
  name: layer2-ippool
  namespace: purelb
spec:
  local:
    v4pool:
      subnet: $SUBNET
      pool: $IP_POOL
      aggregation: default
EOF

kubectl apply -f purelb-l2.yaml

kubectl describe sg -n purelb


# Test the LB: one Deployment plus three LoadBalancer Services
cat > test-l2-lb-nginx.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-quic
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-lb
  namespace: nginx-quic
spec:
  selector:
    matchLabels:
      app: nginx-lb
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx-lb
    spec:
      containers:
      - name: nginx-lb
        image: tinychen777/nginx-quic:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # port the Service exposes
    targetPort: 80 # port the Pod's container listens on
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb2-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # port the Service exposes
    targetPort: 80 # port the Pod's container listens on
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb3-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # port the Service exposes
    targetPort: 80 # port the Pod's container listens on
  type: LoadBalancer
EOF

kubectl apply -f test-l2-lb-nginx.yaml

kubectl get svc -n nginx-quic

kubectl describe service nginx-lb-service -n nginx-quic
kubectl describe service nginx-lb2-service -n nginx-quic
kubectl describe service nginx-lb3-service -n nginx-quic
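If everything worked, each Service is assigned its own address from the pool. A quick smoke test follows; the jsonpath expression extracts whatever external IP PureLB assigned to nginx-lb-service, the first Service defined above:

shell
# Fetch the assigned external IP and request the nginx default page.
EXTERNAL_IP=$(kubectl get svc nginx-lb-service -n nginx-quic \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://${EXTERNAL_IP}/"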

Prerequisites

There are two scenarios:

  1. An existing kube-proxy cluster
  2. An existing Cilium cluster

Choose the approach that matches your actual setup.

Existing kube-proxy cluster

  1. In the Kubernetes cluster, edit the kube-proxy ConfigMap:
shell
kubectl edit configmap kube-proxy -n kube-system
  2. In the kube-proxy ConfigMap YAML, set data.config.conf.ipvs.strictARP to true:
shell
...
ipvs:
    strictARP: true
...
  3. Restart kube-proxy:
shell
kubectl rollout restart daemonset kube-proxy -n kube-system
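If you prefer not to edit interactively, the same change can be scripted in one pipeline; this is a common pattern, and the sed substitution assumes strictARP is currently false:

shell
# Flip strictARP to true and re-apply the ConfigMap in one go.
kubectl get configmap kube-proxy -n kube-system -o yaml | \
  sed -e 's/strictARP: false/strictARP: true/' | \
  kubectl apply -f - -n kube-system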

Existing Cilium cluster

If your existing Kubernetes cluster uses Cilium with kube-proxy replacement as its network layer, add or modify the following settings on top of your current Cilium configuration:

  1. Enable L2 announcements
  2. Enable L2 pod announcements
  3. Set the qps and burst values

The corresponding Helm values:

  • l2podAnnouncements.interface: the name of the externally reachable network interface, usually starting with eth or ens depending on your operating system
  • k8sClientRateLimit.qps and k8sClientRateLimit.burst: see the referenced article for an explanation of how to choose these values

Updating the cluster: cilium upgrade and helm upgrade behave the same way, so use whichever tool you prefer. The invocation below uses Helm-style arguments (release name plus local chart path):

shell
helm upgrade cilium ./cilium \
   --namespace kube-system \
   --reuse-values \
   --set l2announcements.enabled=true \
   --set k8sClientRateLimit.qps=10 \
   --set k8sClientRateLimit.burst=20 \
   --set kubeProxyReplacement=true \
   --set l2podAnnouncements.enabled=true \
   --set l2podAnnouncements.interface=ens160
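After the upgrade, you can sanity-check that the new settings took effect with the Cilium CLI; cilium config view dumps the running agent configuration:

shell
# Wait for Cilium to become ready, then look for the L2 announcement settings.
cilium status --wait
cilium config view | grep -i l2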

Configuration

Layer2

Define an IP range for Services of type LoadBalancer:

IP_POOL: the IP pool, which can be either:

  1. A subnet, e.g. 192.168.2.170/24 (see the CIDR-form example after the manifest below)
  2. A range, e.g. 192.168.2.170-192.168.2.199

SUBNET: the CIDR that covers the range

shell
export IP_POOL='192.168.2.170-192.168.2.199'
export SUBNET='192.168.2.170/25'

cat > purelb-l2.yaml <<EOF
apiVersion: purelb.io/v1
kind: ServiceGroup
metadata:
  name: layer2-ippool
  namespace: purelb
spec:
  local:
    v4pool:
      subnet: $SUBNET
      pool: $IP_POOL
      aggregation: default
EOF
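The pool can also be specified in CIDR form (option 1 above) rather than as a range, for example handing out the entire covering /25; these values are an illustration, so adjust them to your network:

shell
# Alternative: define the pool itself as a CIDR; the ServiceGroup manifest is unchanged.
export IP_POOL='192.168.2.128/25'
export SUBNET='192.168.2.128/25'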

Apply it:

shell
kubectl apply -f purelb-l2.yaml
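Then confirm the ServiceGroup was accepted (sg is the short name for servicegroups, as used in the Quick Start):

shell
kubectl describe sg layer2-ippool -n purelb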

Testing

shell
cat > test-l2-lb-nginx.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-quic
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-lb
  namespace: nginx-quic
spec:
  selector:
    matchLabels:
      app: nginx-lb
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx-lb
    spec:
      containers:
      - name: nginx-lb
        image: tinychen777/nginx-quic:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # port the Service exposes
    targetPort: 80 # port the Pod's container listens on
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb2-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # port the Service exposes
    targetPort: 80 # port the Pod's container listens on
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb3-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # port the Service exposes
    targetPort: 80 # port the Pod's container listens on
  type: LoadBalancer
EOF

kubectl apply -f test-l2-lb-nginx.yaml

Then inspect the results with these commands:

shell
kubectl get svc -n nginx-quic

kubectl describe service nginx-lb-service -n nginx-quic
kubectl describe service nginx-lb2-service -n nginx-quic
kubectl describe service nginx-lb3-service -n nginx-quic
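Each of the three Services should show its own EXTERNAL-IP drawn from layer2-ippool. As a final end-to-end check, a small loop that curls each assigned address and expects HTTP 200 from nginx:

shell
for svc in nginx-lb-service nginx-lb2-service nginx-lb3-service; do
  ip=$(kubectl get svc "$svc" -n nginx-quic \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo "$svc -> $ip"
  # Print just the HTTP status code for each request.
  curl -s -o /dev/null -w "%{http_code}\n" "http://${ip}/"
done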