Check kube-proxy's proxy mode. By default it uses iptables, and the kube-proxy logs confirm that it is running in iptables mode.
bash
[root@master231 ~]# kubectl get pods -n kube-system -l k8s-app=kube-proxy
NAME READY STATUS RESTARTS AGE
kube-proxy-2vxh9 1/1 Running 1 (6d21h ago) 6d22h
kube-proxy-65z9n 1/1 Running 1 (6d21h ago) 6d22h
kube-proxy-rmn84 1/1 Running 1 (6d21h ago) 6d22h
[root@master231 ~]# kubectl -n kube-system logs kube-proxy-2vxh9
I0805 07:02:38.581115 1 node.go:163] Successfully retrieved node IP: 10.0.0.231
I0805 07:02:38.581194 1 server_others.go:138] "Detected node IP" address="10.0.0.231"
I0805 07:02:38.584086 1 server_others.go:572] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0805 07:02:38.712188 1 server_others.go:206] "Using iptables Proxier"
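Besides reading the logs, the active mode can also be queried from kube-proxy's metrics endpoint on any node. The sketch below assumes kube-proxy exposes metrics on the default port 10249; adjust it if metricsBindAddress was changed in your cluster.
bash
# Ask the running kube-proxy for its proxy mode; prints "iptables" or "ipvs"
curl -s http://localhost:10249/proxyMode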
If the corresponding ipvs kernel modules were not set up when the cluster was first installed, kube-proxy defaults to iptables mode. It can be switched to ipvs mode manually later on.
1. Install the ipvs prerequisites (conntrack and ipvsadm) on all K8S worker nodes
bash
apt -y install conntrack ipvsadm
2. On all nodes, write the configuration script that loads the ipvs modules
bash
mkdir -pv /etc/sysconfig/modules
cat > /etc/sysconfig/modules/ipvs.modules << 'EOF'
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
# On newer kernels the conntrack module is named nf_conntrack; use 'modprobe -- nf_conntrack' if ip_conntrack is unavailable
modprobe -- ip_conntrack
EOF
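Note that scripts under /etc/sysconfig/modules/ are only executed automatically at boot on RHEL-family systems. Since these nodes use apt, you may additionally want to register the modules with systemd-modules-load so they survive a reboot; a minimal sketch (using nf_conntrack as the conntrack module name on current kernels):
bash
# Have systemd load the ipvs modules at every boot
cat > /etc/modules-load.d/ipvs.conf << 'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl restart systemd-modules-load.service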
3. On all nodes, run the script to load the ipvs modules, then verify they are loaded
bash
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
bash
[root@master231~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_ftp 16384 0
nf_nat 49152 4 xt_nat,nft_chain_nat,xt_MASQUERADE,ip_vs_ftp
ip_vs_sed 16384 0
ip_vs_nq 16384 0
ip_vs_fo 16384 0
ip_vs_sh 16384 0
ip_vs_dh 16384 0
ip_vs_lblcr 16384 0
ip_vs_lblc 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs_wlc 16384 0
ip_vs_lc 16384 0
ip_vs 176128 26 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack 172032 6 xt_conntrack,nf_nat,xt_nat,nf_conntrack_netlink,xt_MASQUERADE,ip_vs
nf_defrag_ipv6 24576 2 nf_conntrack,ip_vs
libcrc32c 16384 6 nf_conntrack,nf_nat,btrfs,nf_tables,raid456,ip_vs
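To check the same thing non-interactively (for example in a loop over all nodes), a small script over the required module names works; this is just a convenience sketch, and the module list mirrors ipvs.modules above:
bash
# Report whether each required ipvs module is currently loaded
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do
    lsmod | grep -qw "^$m" && echo "$m: loaded" || echo "$m: MISSING"
done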
4. On the master node: change kube-proxy's working mode to ipvs. Alternatively, run kubectl edit configmap kube-proxy -n kube-system and set the mode field to "ipvs" by hand.
bash
kubectl get configmap kube-proxy -n kube-system -o yaml \
| sed -e 's/mode: ""/mode: "ipvs"/' \
| kubectl apply -f - -n kube-system
5. Set kube-proxy's strictARP option to true
bash
kubectl get configmap kube-proxy -n kube-system -o yaml | sed -e "s/strictARP: false/strictARP: true/" | kubectl apply -f - -n kube-system
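If preferred, steps 4 and 5 can be combined into a single pass over the configmap; the sketch below simply chains the two sed substitutions already used above:
bash
kubectl get configmap kube-proxy -n kube-system -o yaml \
  | sed -e 's/mode: ""/mode: "ipvs"/' -e 's/strictARP: false/strictARP: true/' \
  | kubectl apply -f - -n kube-system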
6. Verify that the configmap was modified successfully
bash
[root@master231 ~]# kubectl -n kube-system describe cm kube-proxy | egrep "mode|strictARP"
strictARP: true
mode: "ipvs"
7. On the master node: delete the old kube-proxy pods. The DaemonSet recreates them automatically, and only the new pods pick up the ipvs configuration.
bash
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o jsonpath="{.items[*].metadata.name}" | xargs kubectl -n kube-system delete pods
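An equivalent way to recreate the pods is to restart the DaemonSet itself and wait for the rollout to finish (kubectl rollout restart requires kubectl 1.15 or newer):
bash
kubectl -n kube-system rollout restart daemonset kube-proxy
kubectl -n kube-system rollout status daemonset kube-proxy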
8. Verify that the new kube-proxy working mode took effect: the logs now show "Using ipvs Proxier", which means kube-proxy has switched to IPVS.
bash
[root@master231~]# kubectl get pods -n kube-system -l k8s-app=kube-proxy
NAME READY STATUS RESTARTS AGE
kube-proxy-bf45l 1/1 Running 0 54s
kube-proxy-f6nkh 1/1 Running 0 54s
kube-proxy-qc2vc 1/1 Running 0 55s
[root@master231~]# kubectl -n kube-system logs kube-proxy-bf45l
I1126 15:06:04.741381 1 node.go:163] Successfully retrieved node IP: 10.0.0.232
I1126 15:06:04.741471 1 server_others.go:138] "Detected node IP" address="10.0.0.232"
I1126 15:06:04.767962 1 server_others.go:269] "Using ipvs Proxier"
I1126 15:06:04.767995 1 server_others.go:271] "Creating dualStackProxier for ipvs"
I1126 15:06:04.768009 1 server_others.go:502] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I1126 15:06:04.769240 1 proxier.go:435] "IPVS scheduler not specified, use rr by default"
I1126 15:06:04.769551 1 proxier.go:435] "IPVS scheduler not specified, use rr by default"
[root@master231~]# ipvsadm -ln | grep 10.200.0.10 -A 2
TCP 10.200.0.10:53 rr
-> 10.100.0.16:53 Masq 1 0 0
-> 10.100.0.17:53 Masq 1 0 0
TCP 10.200.0.10:9153 rr
-> 10.100.0.16:9153 Masq 1 0 0
-> 10.100.0.17:9153 Masq 1 0 0
--
UDP 10.200.0.10:53 rr
-> 10.100.0.16:53 Masq 1 0 0
-> 10.100.0.17:53 Masq 1 0 0
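As a final sanity check, a throwaway Service can be created to confirm that its ClusterIP shows up as an ipvs virtual server; in this sketch the name ipvs-test and the nginx image are arbitrary choices:
bash
kubectl create deployment ipvs-test --image=nginx --replicas=2
kubectl expose deployment ipvs-test --port=80
# Look up the ClusterIP and check that ipvs created a round-robin virtual server for it
CLUSTER_IP=$(kubectl get svc ipvs-test -o jsonpath='{.spec.clusterIP}')
ipvsadm -ln | grep "$CLUSTER_IP" -A 2
# Clean up the test objects
kubectl delete svc,deployment ipvs-test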