k8s high-availability cluster
Lab environment
| Hostname | IP            | Role              |
|----------|---------------|-------------------|
| k8s1     | 192.168.81.10 | harbor            |
| k8s2     | 192.168.81.11 | control-plane     |
| k8s3     | 192.168.81.12 | control-plane     |
| k8s4     | 192.168.81.13 | control-plane     |
| k8s5     | 192.168.81.14 | haproxy,pacemaker |
| k8s6     | 192.168.81.15 | haproxy,pacemaker |
| k8s7     | 192.168.81.16 | worker node       |
haproxy load balancing
Configure hostname resolution; the resolution entries must be identical on all nodes.
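For example, /etc/hosts on every node could carry the entries from the table above (a sketch; adjust to your own addresses):

[root@k8s5 ~]# cat >> /etc/hosts << 'EOF'
192.168.81.10 k8s1
192.168.81.11 k8s2
192.168.81.12 k8s3
192.168.81.13 k8s4
192.168.81.14 k8s5
192.168.81.15 k8s6
192.168.81.16 k8s7
EOF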
[root@k8s5 ~]# yum install -y haproxy net-tools
[root@k8s5 ~]# cd /etc/haproxy/
[root@k8s5 haproxy]# vim haproxy.cfg
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    #option                 httplog
    option                  dontlognull
    option                  http-server-close
    #option forwardfor      except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen status *:80          # stats/monitoring page
    stats uri /status
    stats auth admin:westos
#---------------------------------------------------------------------
# main frontend which proxies to the backends
#---------------------------------------------------------------------
frontend main *:6443
    mode tcp
    default_backend k8s
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend k8s
    mode tcp
    balance roundrobin
    server app1 192.168.81.11:6443 check
    server app2 192.168.81.12:6443 check
    server app3 192.168.81.13:6443 check
Note: replace these with the addresses of your own k8s control-plane nodes.
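Before starting the service, haproxy can validate the edited file (a quick sanity check):

[root@k8s5 haproxy]# haproxy -c -f /etc/haproxy/haproxy.cfg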
[root@k8s5 haproxy]# systemctl start haproxy
Visit the stats page: http://192.168.81.14/status
Log in with admin/westos.
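The same check works from the command line (a sketch using the credentials above; output omitted):

[root@k8s5 haproxy]# curl -su admin:westos http://192.168.81.14/status | head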
After the test succeeds, stop the service and do not enable it at boot; pacemaker will manage it later.
[root@k8s5 haproxy]# systemctl stop haproxy
Set up passwordless SSH
[root@k8s5 haproxy]# ssh-keygen
[root@k8s5 haproxy]# ssh-copy-id k8s6
Install haproxy on node k8s6
[root@k8s6 ~]# yum install -y haproxy
Copy the configuration file from k8s5
[root@k8s5 haproxy]# scp haproxy.cfg k8s6:/etc/haproxy/
pacemaker high availability
Sync the repo configuration file
[root@k8s5 ~]# cd /etc/yum.repos.d/
[root@k8s5 yum.repos.d]# vim dvd.repo
[dvd]
name=rhel7.6
baseurl=file:///media
gpgcheck=0
[HighAvailability]
name=rhel7.6 HighAvailability
baseurl=file:///media/addons/HighAvailability
gpgcheck=0
[root@k8s5 yum.repos.d]# scp dvd.repo k8s6:/etc/yum.repos.d/
Install packages
[root@k8s5 ~]# yum install -y pacemaker pcs psmisc policycoreutils-python
[root@k8s6 ~]# yum install -y pacemaker pcs psmisc policycoreutils-python
Start the pcsd service
[root@k8s5 ~]# systemctl enable --now pcsd.service
[root@k8s5 ~]# ssh k8s6 systemctl enable --now pcsd.service
Set the hacluster user's password
[root@k8s5 ~]# echo westos | passwd --stdin hacluster
[root@k8s5 ~]# ssh k8s6 'echo westos | passwd --stdin hacluster'
Authenticate the cluster nodes
[root@k8s5 ~]# pcs cluster auth k8s5 k8s6
Username: hacluster
Password: westos
Create the cluster
[root@k8s5 ~]# pcs cluster setup --name mycluster k8s5 k8s6
Start the cluster
[root@k8s5 ~]# pcs cluster start --all
Enable the cluster at boot
[root@k8s5 ~]# pcs cluster enable --all
Disable STONITH (this lab has no fencing devices)
[root@k8s5 ~]# pcs property set stonith-enabled=false
Add cluster resources
[root@k8s5 ~]# pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.168.81.200 op monitor interval=30s  //create an IPaddr2 resource named vip with address 192.168.81.200, monitored every 30 seconds
[root@k8s5 ~]# pcs resource create haproxy systemd:haproxy op monitor interval=60s  //create a systemd-class resource named haproxy that manages the HAProxy service, monitored every 60 seconds
[root@k8s5 ~]# pcs resource group add hagroup vip haproxy  //put both resources into the resource group hagroup so they always run on the same node
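To confirm that both resources started together on one node, check the cluster state and look for the VIP (standard pcs/ip commands; output omitted):

[root@k8s5 ~]# pcs status
[root@k8s5 ~]# ip addr show | grep 192.168.81.200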
Failover test
[root@k8s5 ~]# pcs node standby
All resources migrate to k8s6.
Recover
[root@k8s5 ~]# pcs node unstandby
Deploy the control-plane
Reset the nodes
[root@k8s2 ~]# kubeadm reset
[root@k8s2 ~]# cd /etc/cni/net.d
[root@k8s2 net.d]# rm -rf *
Repeat the same reset on k8s3 and k8s4.
Load kernel modules (run on all cluster nodes)
[root@k8s2 ~]# vim /etc/modules-load.d/k8s.conf
overlay
br_netfilter
[root@k8s2 ~]# modprobe overlay
[root@k8s2 ~]# modprobe br_netfilter
[root@k8s2 ~]# vim /etc/sysctl.d/docker.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
[root@k8s2 ~]# sysctl --system
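A quick way to confirm that the modules are loaded and the settings took effect (output omitted):

[root@k8s2 ~]# lsmod | grep -E 'overlay|br_netfilter'
[root@k8s2 ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward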
Confirm the package versions
[root@k8s2 ~]# rpm -q kubeadm kubelet kubectl
Generate the initialization configuration file
[root@k8s2 ~]# kubeadm config print init-defaults > kubeadm-init.yaml
Edit the configuration
[root@k8s2 ~]# vim kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.81.11 #local node IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s2 #local hostname
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.81.200:6443" #load balancer (VIP) address
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: reg.westos.org/k8s #local private registry
kind: ClusterConfiguration
kubernetesVersion: 1.25.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16 #pod address range
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs #use ipvs mode
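kube-proxy's ipvs mode requires the ip_vs kernel modules on every node. A minimal sketch of loading them persistently (an addition not in the original steps; on RHEL 7's 3.10 kernel the conntrack module is nf_conntrack_ipv4, while 4.19+ kernels use nf_conntrack):

[root@k8s2 ~]# cat > /etc/modules-load.d/ipvs.conf << 'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
[root@k8s2 ~]# for m in $(cat /etc/modules-load.d/ipvs.conf); do modprobe $m; done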
Initialize the cluster
[root@k8s2 ~]# kubeadm init --config kubeadm-init.yaml --upload-certs
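On success, kubeadm prints the join commands used below (including the --certificate-key) plus the standard kubeconfig setup for the local root user:

[root@k8s2 ~]# mkdir -p $HOME/.kube
[root@k8s2 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config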
Deploy the network plugin
[root@k8s2 calico]# kubectl apply -f calico.yaml
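Before joining more nodes, it is worth waiting until the calico pods are Running and the node reports Ready (output omitted):

[root@k8s2 calico]# kubectl -n kube-system get pods
[root@k8s2 calico]# kubectl get nodes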
Add the other control-plane nodes
[root@k8s3 containerd]# kubeadm join 192.168.81.200:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:1142c506bc44c57b7c38487a5f348b73f1eb6a19a28ab44efde287d811ad1bc2 \
--control-plane --certificate-key b8feb02bce9fa4fea5676265438ec505fcc2f14501584cc938ed08684bc8c8a7
[root@k8s4 containerd]# kubeadm join 192.168.81.200:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:1142c506bc44c57b7c38487a5f348b73f1eb6a19a28ab44efde287d811ad1bc2 \
--control-plane --certificate-key b8feb02bce9fa4fea5676265438ec505fcc2f14501584cc938ed08684bc8c8a7
Set the Kubernetes configuration file path to /etc/kubernetes/admin.conf:
export KUBECONFIG=/etc/kubernetes/admin.conf
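With the kubeconfig exported, the new nodes should show up as control-plane members (output omitted):

[root@k8s3 containerd]# kubectl get nodes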
Deploy worker node
A newly added node needs the following initial configuration:
- disable selinux, firewalld, and the swap partition
- deploy containerd
- install kubelet, kubeadm, and kubectl
- configure the kernel modules
Disable swap
[root@k8s7 ~]# swapoff -a
[root@k8s7 ~]# vim /etc/fstab
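In /etc/fstab, comment out the swap entry so it stays disabled after reboot; an equivalent one-liner (a sketch assuming the stock fstab layout):

[root@k8s7 ~]# sed -i '/\sswap\s/s/^/#/' /etc/fstab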
Install containerd, kubelet, kubeadm, and kubectl
Copy the repo files from another node
[root@k8s4 yum.repos.d]# scp k8s.repo docker.repo k8s7:/etc/yum.repos.d/
Install packages
[root@k8s7 yum.repos.d]# yum install -y containerd.io kubeadm-1.24.17-0 kubelet-1.24.17-0 kubectl-1.24.17-0
Enable the services at boot
[root@k8s7 ~]# systemctl enable --now containerd
[root@k8s7 ~]# systemctl enable --now kubelet
Copy the containerd configuration files
[root@k8s4 containerd]# scp -r * k8s7:/etc/containerd/
Restart the service
[root@k8s7 containerd]# systemctl restart containerd
[root@k8s7 containerd]# crictl config runtime-endpoint unix:///run/containerd/containerd.sock
[root@k8s7 containerd]# crictl pull myapp:v1
Configure kernel modules
[root@k8s4 containerd]# cd /etc/modules-load.d/
[root@k8s4 modules-load.d]# scp k8s.conf k8s7:/etc/modules-load.d/
[root@k8s4 modules-load.d]# cd /etc/sysctl.d/
[root@k8s4 sysctl.d]# scp docker.conf k8s7:/etc/sysctl.d/
[root@k8s7 ~]# modprobe overlay
[root@k8s7 ~]# modprobe br_netfilter
[root@k8s7 ~]# sysctl --system
[root@k8s7 ~]# kubeadm join 192.168.81.200:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:1142c506bc44c57b7c38487a5f348b73f1eb6a19a28ab44efde287d811ad1bc2
Add the worker label
[root@k8s2 ~]# kubectl label node k8s7 node-role.kubernetes.io/worker=
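Finally, confirm that k8s7 joined the cluster and carries the role label (output omitted):

[root@k8s2 ~]# kubectl get nodes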