Xia's Cloud Native Journey: Enabling LoadBalancer on a Private Cloud with PureLB

Introduction

PureLB provides load-balancing services for on-premises Kubernetes clusters, filling the gap left by the LoadBalancer implementations that cloud providers bundle with their managed offerings. As an open-source Service Load Balancer Controller, PureLB is easy to use, configures standard Linux networking, supports local addresses, can announce routes, and integrates cleanly with CNI routing. With PureLB, an on-premises environment gets load balancing comparable to the cloud: an efficient, reliable way to distribute network traffic to a Kubernetes cluster.

Quick Start

To get started quickly, use any ready-made Kubernetes cluster such as minikube, microk8s, k3s, or kind:

  1. Define the IP pool range
  2. Define the CIDR that covers the pool

Example: deriving the CIDR for the range 192.168.2.170-192.168.2.199. Write the start and end addresses in binary, find their common prefix, and the length of that prefix gives the subnet mask. The start address 192.168.2.170 is 11000000.10101000.00000010.10101010 in binary; the end address 192.168.2.199 is 11000000.10101000.00000010.11000111. Comparing the two, the common prefix is 11000000.10101000.00000010.1, which is 25 bits long. So the range can be expressed in CIDR form as 192.168.2.170/25 (the canonical network address for that prefix is 192.168.2.128/25).
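The prefix computation above can be double-checked with Python's standard `ipaddress` module. This is a quick sketch using the example's endpoints; the XOR of the two addresses exposes exactly the bits where they differ:

```python
import ipaddress

start = ipaddress.IPv4Address("192.168.2.170")
end = ipaddress.IPv4Address("192.168.2.199")

# Bits where the addresses differ; everything above them is the common prefix.
diff = int(start) ^ int(end)
prefix_len = 32 - diff.bit_length()
print(prefix_len)  # 25

# The /25 network containing both endpoints (strict=False allows a host
# address like .170 to stand in for the network address).
net = ipaddress.ip_network(f"{start}/{prefix_len}", strict=False)
print(net)        # 192.168.2.128/25
print(end in net)  # True: the whole range fits in one /25
```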

```shell
export IP_POOL='192.168.2.170-192.168.2.199'
export SUBNET='192.168.2.170/25'
```

Once defined, run:

```shell
cat > purelb-l2.yaml <<EOF
apiVersion: purelb.io/v1
kind: ServiceGroup
metadata:
  name: layer2-ippool
  namespace: purelb
spec:
  local:
    v4pool:
      subnet: $SUBNET
      pool: $IP_POOL
      aggregation: default
EOF

kubectl apply -f purelb-l2.yaml

kubectl describe sg -n purelb
```


```shell
# Test the load balancer
cat > test-l2-lb-nginx.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-quic
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-lb
  namespace: nginx-quic
spec:
  selector:
    matchLabels:
      app: nginx-lb
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx-lb
    spec:
      containers:
      - name: nginx-lb
        image: tinychen777/nginx-quic:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # Service port
    targetPort: 80 # Pod (container) port
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb2-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # Service port
    targetPort: 80 # Pod (container) port
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb3-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # Service port
    targetPort: 80 # Pod (container) port
  type: LoadBalancer
EOF

kubectl apply -f test-l2-lb-nginx.yaml

kubectl get svc -n nginx-quic

kubectl describe service nginx-lb-service -n nginx-quic
kubectl describe service nginx-lb2-service -n nginx-quic
kubectl describe service nginx-lb3-service -n nginx-quic
```
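Each Service should now show an EXTERNAL-IP drawn from the pool. A small sketch of the sanity check, with hypothetical addresses standing in for whatever `kubectl get svc` actually reports in your cluster:

```python
import ipaddress

POOL_START = ipaddress.IPv4Address("192.168.2.170")
POOL_END = ipaddress.IPv4Address("192.168.2.199")

# Hypothetical EXTERNAL-IPs; substitute the addresses PureLB assigned.
allocated = ["192.168.2.170", "192.168.2.171", "192.168.2.172"]

for ip in allocated:
    addr = ipaddress.IPv4Address(ip)
    in_pool = POOL_START <= addr <= POOL_END
    print(f"{ip}: {'OK' if in_pool else 'OUTSIDE POOL'}")
```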

Prerequisites

There are two scenarios:

  1. An existing kube-proxy cluster
  2. An existing Cilium cluster

Pick whichever matches your setup.

Existing kube-proxy cluster

  1. In the Kubernetes cluster, edit the kube-proxy ConfigMap:

```shell
kubectl edit configmap kube-proxy -n kube-system
```

  2. In the kube-proxy ConfigMap YAML, set strictARP to true under the ipvs section of data.config.conf:

```yaml
...
ipvs:
    strictARP: true
...
```

  3. Restart kube-proxy:

```shell
kubectl rollout restart daemonset kube-proxy -n kube-system
```
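To confirm the change took effect, you can dump the ConfigMap (e.g. `kubectl get configmap kube-proxy -n kube-system -o yaml`) and scan it. A minimal sketch using a plain string scan on a sample config fragment rather than a full YAML parser:

```python
# Sample of what the relevant fragment of the dumped config looks like.
config_text = """\
ipvs:
    strictARP: true
"""

def strict_arp_enabled(text: str) -> bool:
    # Plain string scan; a production check would parse the YAML properly.
    for line in text.splitlines():
        if line.strip() == "strictARP: true":
            return True
    return False

print(strict_arp_enabled(config_text))  # True
```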

Existing Cilium cluster

If your existing Kubernetes cluster uses Cilium (with kube-proxy replacement) as its network layer, add or modify the following settings on top of your current Cilium configuration:

  1. Enable L2 announcements
  2. Enable L2 pod announcements
  3. Configure the qps and burst values

The corresponding Helm values:

  • l2podAnnouncements.interface: the name of the externally reachable NIC, usually starting with eth or ens, depending on your operating system
  • k8sClientRateLimit.qps and k8sClientRateLimit.burst: see this article for an explanation of how to choose these values

Update the cluster: cilium upgrade behaves the same as helm upgrade, so use whichever tool you prefer.

```shell
cilium upgrade cilium ./cilium \
   --namespace kube-system \
   --reuse-values \
   --set l2announcements.enabled=true \
   --set k8sClientRateLimit.qps=10 \
   --set k8sClientRateLimit.burst=20 \
   --set kubeProxyReplacement=true \
   --set l2podAnnouncements.enabled=true \
   --set l2podAnnouncements.interface=ens160
```

Configuration

Layer2

Define an IP range for Services of type LoadBalancer to use:

IP_POOL: the IP pool, which can be either:

  1. A subnet, e.g. 192.168.2.170/24
  2. A range, e.g. 192.168.2.170-192.168.2.199

SUBNET: the CIDR that covers the pool
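Both pool forms describe a set of addresses. A sketch that normalizes either form into the individual addresses it covers (the function name is my own, not part of PureLB):

```python
import ipaddress

def pool_addresses(pool: str):
    """Expand a PureLB-style pool spec (CIDR or 'start-end' range)
    into the individual IPv4 addresses it covers."""
    if "-" in pool:
        start_s, end_s = pool.split("-")
        start = ipaddress.IPv4Address(start_s)
        end = ipaddress.IPv4Address(end_s)
        return [ipaddress.IPv4Address(i) for i in range(int(start), int(end) + 1)]
    # CIDR form: hosts() excludes the network and broadcast addresses
    return list(ipaddress.ip_network(pool, strict=False).hosts())

print(len(pool_addresses("192.168.2.170-192.168.2.199")))  # 30
```

Note the asymmetry: a range form is used exactly as written, while a CIDR pool yields only its host addresses, which is why the range form is handy for carving out a slice of an existing subnet.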

```shell
export IP_POOL='192.168.2.170-192.168.2.199'
export SUBNET='192.168.2.170/25'

cat > purelb-l2.yaml <<EOF
apiVersion: purelb.io/v1
kind: ServiceGroup
metadata:
  name: layer2-ippool
  namespace: purelb
spec:
  local:
    v4pool:
      subnet: $SUBNET
      pool: $IP_POOL
      aggregation: default
EOF
```

Apply it:

```shell
kubectl apply -f purelb-l2.yaml
```

Testing

```shell
cat > test-l2-lb-nginx.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-quic
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-lb
  namespace: nginx-quic
spec:
  selector:
    matchLabels:
      app: nginx-lb
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx-lb
    spec:
      containers:
      - name: nginx-lb
        image: tinychen777/nginx-quic:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # Service port
    targetPort: 80 # Pod (container) port
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb2-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # Service port
    targetPort: 80 # Pod (container) port
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb3-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # Service port
    targetPort: 80 # Pod (container) port
  type: LoadBalancer
EOF

kubectl apply -f test-l2-lb-nginx.yaml
```

Then inspect the results with:

```shell
kubectl get svc -n nginx-quic

kubectl describe service nginx-lb-service -n nginx-quic
kubectl describe service nginx-lb2-service -n nginx-quic
kubectl describe service nginx-lb3-service -n nginx-quic
```