Xia's Cloud-Native Journey: Enabling LoadBalancer Services in a Private Cloud with PureLB

Introduction

PureLB provides LoadBalancer services for on-premises Kubernetes clusters, filling the gap left by the absence of a cloud provider's built-in load balancer. As an open-source Service Load Balancer Controller, PureLB offers ease of use, flexible Linux networking configuration, support for local addresses, routing functionality, and straightforward integration with CNI routing. With PureLB, an on-premises environment can provide load balancing comparable to a cloud environment, giving a Kubernetes cluster an efficient and reliable way to distribute network traffic.

Quick Start

Get started quickly with an off-the-shelf Kubernetes cluster such as minikube, MicroK8s, k3s, or kind:

  1. Define the IP pool range
  2. Define the pool's CIDR

Example: deriving the CIDR for the range 192.168.2.170-192.168.2.199. Write the start and end addresses in binary, find their common prefix, and use the prefix length as the subnet mask. The start address 192.168.2.170 is 11000000.10101000.00000010.10101010 in binary; the end address 192.168.2.199 is 11000000.10101000.00000010.11000111. Comparing the two, the common prefix is 11000000.10101000.00000010.1, which is 25 bits long, so the CIDR for this range is 192.168.2.170/25.
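
If you'd rather not compare bits by hand, the same calculation can be scripted. A minimal bash sketch (shell arithmetic only, no external tools), using the example range above:

shell:
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() { local IFS=.; set -- $1; echo $(( ($1<<24) + ($2<<16) + ($3<<8) + $4 )); }

start=$(ip2int 192.168.2.170)
end=$(ip2int 192.168.2.199)

# XOR the two addresses; the leading zero bits are the shared prefix.
xor=$(( start ^ end ))
prefix=32
while (( xor > 0 )); do xor=$(( xor >> 1 )); prefix=$(( prefix - 1 )); done

echo "common prefix length: /$prefix"   # prints /25 for this range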

shell:
export IP_POOL='192.168.2.170-192.168.2.199'
export SUBNET='192.168.2.170/25'

Once both are defined, run:

shell:
cat > purelb-l2.yaml <<EOF
apiVersion: purelb.io/v1
kind: ServiceGroup
metadata:
  name: layer2-ippool
  namespace: purelb
spec:
  local:
    v4pool:
      subnet: $SUBNET
      pool: $IP_POOL
      aggregation: default
EOF

kubectl apply -f purelb-l2.yaml

kubectl describe sg -n purelb


# Test the load balancer with a sample Deployment and three LoadBalancer Services
cat > test-l2-lb-nginx.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-quic
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-lb
  namespace: nginx-quic
spec:
  selector:
    matchLabels:
      app: nginx-lb
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx-lb
    spec:
      containers:
      - name: nginx-lb
        image: tinychen777/nginx-quic:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # the port the Service listens on
    targetPort: 80 # the port the pods listen on
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb2-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # the port the Service listens on
    targetPort: 80 # the port the pods listen on
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb3-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # the port the Service listens on
    targetPort: 80 # the port the pods listen on
  type: LoadBalancer
EOF

kubectl apply -f test-l2-lb-nginx.yaml

kubectl get svc -n nginx-quic

kubectl describe service nginx-lb-service -n nginx-quic
kubectl describe service nginx-lb2-service -n nginx-quic
kubectl describe service nginx-lb3-service -n nginx-quic
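
Once the Services have been assigned addresses from the pool, they should be reachable from any host on the same L2 segment. A quick smoke test (192.168.2.170 is an assumed allocation here; substitute the EXTERNAL-IP that kubectl get svc actually reports):

shell:
# -I fetches only the response headers from the nginx Service.
curl -I http://192.168.2.170/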

Prerequisites

There are two scenarios:

  1. An existing kube-proxy cluster
  2. An existing Cilium cluster

Choose the option that matches your setup.

Existing kube-proxy cluster

  1. In the Kubernetes cluster, edit the kube-proxy ConfigMap:
shell:
kubectl edit configmap kube-proxy -n kube-system
  2. In the kube-proxy ConfigMap YAML, set data.config.conf.ipvs.strictARP to true:
yaml:
...
ipvs:
  strictARP: true
...
  3. Restart kube-proxy:
shell:
kubectl rollout restart daemonset kube-proxy -n kube-system
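
strictARP matters because, in IPVS mode, kube-proxy binds Service addresses to a dummy interface on every node; without it, every node would answer ARP for the load-balancer address. To confirm the setting took effect (same ConfigMap as above):

shell:
# Print the kube-proxy config and filter for the strictARP flag.
kubectl get configmap kube-proxy -n kube-system -o yaml | grep strictARP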

Existing Cilium cluster

If your existing Kubernetes cluster uses Cilium (replacing kube-proxy) as its network layer, add or modify the following parameters on top of your existing Cilium configuration:

  1. Enable L2 announcements
  2. Enable L2 pod announcements
  3. Set the qps and burst values

The corresponding Helm values parameters:

  • l2podAnnouncements.interface: the name of the externally reachable network interface, usually beginning with eth or ens depending on your operating system
  • k8sClientRateLimit.qps and k8sClientRateLimit.burst: see the referenced article for an explanation of how to choose these values
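
If you manage Cilium with a values file instead of --set flags, the same settings map onto a sketch like the following (key names match the upgrade command below; ens160 is an example, substitute your own interface):

yaml:
l2announcements:
  enabled: true
l2podAnnouncements:
  enabled: true
  interface: ens160  # example; use your externally reachable NIC
k8sClientRateLimit:
  qps: 10
  burst: 20
kubeProxyReplacement: true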

Updating the cluster: cilium upgrade behaves the same as helm upgrade; use whichever tool you prefer.

shell:
cilium upgrade cilium ./cilium \
   --namespace kube-system \
   --reuse-values \
   --set l2announcements.enabled=true \
   --set k8sClientRateLimit.qps=10 \
   --set k8sClientRateLimit.burst=20 \
   --set kubeProxyReplacement=true \
   --set l2podAnnouncements.enabled=true \
   --set l2podAnnouncements.interface=ens160
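
After the upgrade, it's worth confirming that the new settings are active. A minimal check, assuming the Cilium CLI is installed (the grep pattern is only a loose filter over the agent's flag names):

shell:
# Print the effective Cilium configuration and filter for the relevant keys.
cilium config view | grep -iE 'l2-announce|client-qps|client-burst'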

Configuration

Layer2

Define an IP range for Services of type LoadBalancer to use:

IP_POOL: the IP pool, which can be:

  1. A subnet, e.g. 192.168.2.170/24
  2. A range, e.g. 192.168.2.170-192.168.2.199

SUBNET: the CIDR covering that range

shell:
export IP_POOL='192.168.2.170-192.168.2.199'
export SUBNET='192.168.2.170/25'

cat > purelb-l2.yaml <<EOF
apiVersion: purelb.io/v1
kind: ServiceGroup
metadata:
  name: layer2-ippool
  namespace: purelb
spec:
  local:
    v4pool:
      subnet: $SUBNET
      pool: $IP_POOL
      aggregation: default
EOF

Apply it:

shell:
kubectl apply -f purelb-l2.yaml
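
Afterwards you can confirm that the ServiceGroup was accepted (sg is the short name for ServiceGroup, as used in the quick start):

shell:
kubectl describe sg layer2-ippool -n purelb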

Testing

shell:
cat > test-l2-lb-nginx.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-quic
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-lb
  namespace: nginx-quic
spec:
  selector:
    matchLabels:
      app: nginx-lb
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx-lb
    spec:
      containers:
      - name: nginx-lb
        image: tinychen777/nginx-quic:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # the port the Service listens on
    targetPort: 80 # the port the pods listen on
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb2-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # the port the Service listens on
    targetPort: 80 # the port the pods listen on
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb3-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80 # the port the Service listens on
    targetPort: 80 # the port the pods listen on
  type: LoadBalancer
EOF

kubectl apply -f test-l2-lb-nginx.yaml

Then inspect the results with these commands:

shell:
kubectl get svc -n nginx-quic

kubectl describe service nginx-lb-service -n nginx-quic
kubectl describe service nginx-lb2-service -n nginx-quic
kubectl describe service nginx-lb3-service -n nginx-quic
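
In layer-2 mode, a single node answers ARP for each allocated address. To see which node currently owns one (192.168.2.170 is an assumed allocation; use an EXTERNAL-IP reported above):

shell:
# From another host on the same L2 segment: the replying MAC
# identifies the announcing node.
arping -c 3 192.168.2.170

# On that node, PureLB adds the address to a local interface.
ip addr | grep 192.168.2.170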