## Background
Consider a scenario where we need several NFS servers behind a load balancer. The files they serve are read-only, so data synchronization between servers is not a concern.
One possible approach is to install an NFS server on each physical machine and put load-balancing software such as LVS in front of them, but that is tedious to configure.
Here we instead deploy a three-replica NFS server inside a Kubernetes cluster and install MetalLB to provide the load balancing.
## Install the load balancer
Since the cluster here was built with rke2, edit /etc/rancher/rke2/config.yaml, add the following settings, and then restart rke2:
```yaml
kube-proxy-arg:            # if unset, kube-proxy defaults to iptables mode
  - proxy-mode=ipvs
  - ipvs-strict-arp=true   # required by MetalLB's L2 mode when using IPVS
```
Install MetalLB with a single Helm command:
```bash
helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb
```
Create an IPAddressPool and an L2Advertisement so that machines outside the cluster can also reach the NFS Server:
```bash
cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
spec:
  addresses:
    - 192.168.122.20-192.168.122.24
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
EOF
```
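As a quick sanity check before handing an address to clients, the sketch below (pure bash arithmetic, variable names chosen for illustration) verifies that a candidate IP falls inside the pool declared above:

```shell
# Convert a dotted-quad address to an integer for range comparison.
ip_to_int() { IFS=. read -r a b c d <<<"$1"; echo $(( (a<<24) | (b<<16) | (c<<8) | d )); }

# Bounds of the MetalLB pool declared above.
lo=$(ip_to_int 192.168.122.20)
hi=$(ip_to_int 192.168.122.24)

# Candidate address to test.
x=$(ip_to_int 192.168.122.21)
if [ "$x" -ge "$lo" ] && [ "$x" -le "$hi" ]; then echo in-pool; else echo out-of-pool; fi
```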
## Deploy the NFS Server in the Kubernetes cluster
```bash
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: k8s.gcr.io/volume-nfs:0.8
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true
          volumeMounts:
            - name: storage
              mountPath: /exports
      volumes:
        - name: storage
          hostPath:
            path: /data/nfs          # store all data in the /data/nfs directory of the node the pod runs on
            type: DirectoryOrCreate  # create the directory if it does not exist
---
apiVersion: v1
kind: Service
metadata:
  name: nfs-service
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    app: nfs-server  # must match the label on the NFS pods
  type: LoadBalancer
EOF
```
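One caveat with the hostPath volume above: each replica exports whatever is in /data/nfs on its own node, so the read-only content must be pre-populated identically on every node that can schedule a pod. A minimal sketch that prints the copy commands for review (the node names and source path are placeholders, not from the manifest):

```shell
# Each replica serves its node-local /data/nfs, so the same content must
# exist on every schedulable node. NODES and /srv/nfs-content are
# illustrative placeholders -- adjust to your environment.
NODES="node1 node2 node3"
for n in $NODES; do
  # Print first; run the commands (or drop the echo) once they look right.
  echo "rsync -a /srv/nfs-content/ root@$n:/data/nfs/"
done
```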
## Check the load-balanced Service
```bash
# kubectl get svc | grep nfs
nfs-service   LoadBalancer   10.43.223.162   192.168.122.21   2049:31390/TCP,20048:31442/TCP,111:31111/TCP   23h
```
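When scripting the client side, the external IP assigned by MetalLB can be parsed out of that listing. A small sketch using the line shown above (in a live cluster you would pipe `kubectl get svc nfs-service` instead of a literal string):

```shell
# Extract the EXTERNAL-IP column (field 4) from the service listing above.
line='nfs-service   LoadBalancer   10.43.223.162   192.168.122.21   2049:31390/TCP,20048:31442/TCP,111:31111/TCP   23h'
external_ip=$(echo "$line" | awk '{print $4}')
echo "$external_ip"   # → 192.168.122.21
```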
## Mount NFS from a client outside the cluster
```bash
mount.nfs4 -o nolock 192.168.122.21:/ /mnt
ls -l /mnt
```
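To make the mount survive reboots, the same parameters can be turned into an /etc/fstab entry. A sketch that builds the line (the `ro` option is an addition here, reasonable since the background section says the exports are read-only):

```shell
# Build an fstab line for the load-balanced NFS service shown above.
SERVER=192.168.122.21
MOUNTPOINT=/mnt
echo "$SERVER:/ $MOUNTPOINT nfs4 ro,nolock 0 0"   # append to /etc/fstab once verified
```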
References:
https://github.com/appscode/third-party-tools/blob/master/storage/nfs/README.md