Kubernetes Cluster Deployment

1. Cluster Environment Setup

1.1 Environment Planning

Kubernetes clusters broadly fall into two categories: one master with multiple nodes, and multiple masters with multiple nodes.

  • One master, multiple nodes: one Master node and several Node nodes; simple to set up, but the single master is a single point of failure, so it is suitable for test environments
  • Multiple masters, multiple nodes: several Master nodes and several Node nodes; more involved to set up, but highly available, so it is suitable for production environments

1.2 Kubernetes Environment Deployment

There are several ways to deploy Kubernetes; the mainstream options are kubeadm, minikube, and binary packages.

  • minikube: a tool for quickly standing up a single-node Kubernetes environment
  • kubeadm: a tool for quickly bootstrapping a full Kubernetes cluster
  • Binary packages: download each component's binary from the official site and install them one by one; this approach is the most instructive for understanding the individual Kubernetes components
  • Note: restore all three machines from a clean snapshot, and disable the firewall and SELinux (a command sketch follows the table below)
Role         IP address           OS             Hardware
k8s-master   192.168.110.31/24    Rocky Linux 8  2 CPUs, 4 GB RAM, 50 GB disk
k8s-node1    192.168.110.32/24    Rocky Linux 8  2 CPUs, 4 GB RAM, 50 GB disk
k8s-node2    192.168.110.33/24    Rocky Linux 8  2 CPUs, 4 GB RAM, 50 GB disk
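The note above assumes the firewall and SELinux have already been turned off, but the commands are not shown in the original write-up. A minimal sketch of the usual steps on Rocky Linux 8 (run on all three machines):

[root@k8s-all ~]# systemctl disable --now firewalld    #stop and disable firewalld so node-to-node and CNI traffic is not blocked
[root@k8s-all ~]# setenforce 0                         #switch SELinux to permissive for the current boot
[root@k8s-all ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    #keep it off after reboot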

1.2.1 Configure host name resolution

Note: the k8s-all prompt means the same operation is performed on all three machines.

[root@k8s-all ~]# cat >> /etc/hosts << EOF
192.168.110.31 k8s-master
192.168.110.32 k8s-node1
192.168.110.33 k8s-node2
EOF
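To confirm the new entries resolve correctly, a quick check such as the following can be run on any node (a simple sketch, not part of the original steps; getent consults /etc/hosts here):

[root@k8s-all ~]# for h in k8s-master k8s-node1 k8s-node2; do getent hosts $h; done    #each line should print the expected IP/name pair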

1.2.2 Configure the time service

Note: as before, the k8s-all prompt means the operation is performed identically on all three machines.

1. Install the chrony time service

[root@k8s-all ~]# yum install chrony -y &>/dev/null

2. Point time synchronization at the Aliyun NTP server

[root@k8s-all ~]# sed -i 's/^pool/# pool/' /etc/chrony.conf

[root@k8s-all ~]# sed -i '/^# pool/ a server ntp1.aliyun.com iburst' /etc/chrony.conf

3. Verify on all three machines

#k8s-master

[root@k8s-master ~]# systemctl restart chronyd.service

[root@k8s-master ~]# systemctl enable chronyd

[root@k8s-master ~]# chronyc sources

MS Name/IP address Stratum Poll Reach LastRx Last sample

===============================================================================

^* 120.25.115.20 2 6 17 6 +58us[+2843us] +/- 27ms

#node1

[root@k8s-node1 ~]# systemctl restart chronyd.service

[root@k8s-node1 ~]# systemctl enable chronyd

[root@k8s-node1 ~]# chronyc sources

MS Name/IP address Stratum Poll Reach LastRx Last sample

===============================================================================

^* 120.25.115.20 2 6 17 14 +187us[ +319us] +/- 19ms

#node2

[root@k8s-node2 ~]# systemctl restart chronyd.service

[root@k8s-node2 ~]# systemctl enable chronyd

[root@k8s-node2 ~]# chronyc sources

MS Name/IP address Stratum Poll Reach LastRx Last sample

===============================================================================

^* 120.25.115.20 2 6 105 8 +1338us[+3209us] +/- 20ms

1.2.3 Disable the swap partition

[root@k8s-all ~]# swapoff -a #turn swap off immediately

[root@k8s-all ~]# sed -i 's/.*swap.*/# &/' /etc/fstab #comment out the swap entry so it stays off after reboot
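A quick way to double-check that swap is really disabled (a verification sketch, not in the original):

[root@k8s-all ~]# free -h | grep -i swap    #the Swap line should show 0B total
[root@k8s-all ~]# grep swap /etc/fstab      #the swap entry should now be commented out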

1.2.4 Enable IPVS

[root@k8s-all ~]# vim /etc/sysconfig/modules/ipvs.modules #on all three machines

#!/bin/bash
# Load every IPVS-related kernel module that is available for the running kernel
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_sed ip_vs_ftp nf_conntrack"

for kernel_module in $ipvs_modules;
do
        # modinfo succeeds only if the module exists, so unavailable modules are silently skipped
        /sbin/modinfo -F filename $kernel_module >/dev/null 2>&1
        if [ $? -eq 0 ]; then
                /sbin/modprobe $kernel_module
        fi
done

[root@k8s-all ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules

[root@k8s-all ~]# bash /etc/sysconfig/modules/ipvs.modules
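To confirm the modules were actually loaded (a quick check, not part of the original output):

[root@k8s-all ~]# lsmod | grep -E 'ip_vs|nf_conntrack'    #the IPVS scheduler modules and nf_conntrack should be listed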

1.2.5 Enable kernel IP forwarding

[root@k8s-all ~]# sed -i 's/ip_forward=0/ip_forward=1/' /etc/sysctl.conf

[root@k8s-all ~]# sysctl -p #apply the change
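The sed above assumes /etc/sysctl.conf already contains an ip_forward line; on a stock Rocky Linux 8 install it may not, in which case the setting can be appended and then verified (a sketch under that assumption):

[root@k8s-all ~]# grep -q 'net.ipv4.ip_forward' /etc/sysctl.conf || echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
[root@k8s-all ~]# sysctl -p
[root@k8s-all ~]# sysctl net.ipv4.ip_forward    #should report net.ipv4.ip_forward = 1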

1.2.6 Add the bridge-filtering and kernel-forwarding configuration file

[root@k8s-all ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
EOF

#load the br_netfilter module

[root@k8s-all ~]# modprobe br_netfilter

[root@k8s-all ~]# sysctl -p /etc/sysctl.d/k8s.conf #apply the settings

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

vm.swappiness = 0
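modprobe and the ipvs.modules script above only load the modules for the current boot. To have them loaded automatically after a reboot, a systemd modules-load drop-in can be added; the file name and module list below are an illustrative sketch, not part of the original steps:

[root@k8s-all ~]# cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF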

1.2.7 Install Docker

[root@k8s-all ~]# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo

[root@k8s-all ~]# sed -i 's+download.docker.com+mirrors.huaweicloud.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo #point the repo at the Huawei Cloud mirror

[root@k8s-all ~]# sed -i 's/$releasever/8Server/g' /etc/yum.repos.d/docker-ce.repo

#On CentOS 7, simply replace 8Server with 7Server

[root@k8s-all ~]# yum remove runc containerd.io -y #the podman stack that ships with Rocky conflicts with Docker

[root@k8s-all ~]# yum install docker-ce -y

[root@k8s-all ~]# mkdir -p /etc/docker

[root@k8s-all ~]# tee /etc/docker/daemon.json <<-'EOF' #configure registry mirrors and the systemd cgroup driver
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://dockerproxy.com",
    "https://hub-mirror.c.163.com",
    "https://mirror.baidubce.com",
    "https://ccr.ccs.tencentyun.com"
  ]
}
EOF

[root@k8s-all ~]# systemctl daemon-reload

[root@k8s-all ~]# systemctl enable --now docker.service
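To confirm Docker came up and is using the systemd cgroup driver (it must match the kubelet's), a quick check can be run (a verification sketch, not from the original output):

[root@k8s-all ~]# docker info | grep -i 'cgroup driver'    #should print: Cgroup Driver: systemd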

1.2.8 Install cri-dockerd

Note: Kubernetes removed the built-in dockershim in v1.24, so the kubelet can no longer talk to Docker Engine directly; cri-dockerd is the CRI adapter that bridges the two.

Download page: Releases · Mirantis/cri-dockerd (github.com)

https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.10/cri-dockerd-0.3.10-3.el8.x86_64.rpm

[root@k8s-all ~]# wget -c https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.10/cri-dockerd-0.3.10-3.el8.x86_64.rpm

[root@k8s-all ~]# yum install cri-dockerd-0.3.10-3.el8.x86_64.rpm -y

Configure the pause (sandbox) image so it is pulled from the Aliyun mirror

[root@k8s-all ~]# sed -i 's#^ExecStart=.*#ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9#' /usr/lib/systemd/system/cri-docker.service

[root@k8s-all ~]# systemctl daemon-reload

[root@k8s-all ~]# systemctl restart docker

[root@k8s-all ~]# systemctl enable --now cri-docker.service
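A quick sanity check that cri-dockerd is running and its CRI socket exists (the socket path is the one passed to kubeadm later); this is a verification sketch, not part of the original output:

[root@k8s-all ~]# systemctl is-active cri-docker.service    #should print: active
[root@k8s-all ~]# ls -l /var/run/cri-dockerd.sock           #the socket kubeadm and the kubelet will use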

1.3 Install the Kubernetes Software

1.3.1 Configure the Kubernetes repository

[root@k8s-all ~]# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
#exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

1.3.2 Install kubelet, kubeadm, kubectl, and kubernetes-cni

[root@k8s-all ~]# yum install -y kubelet kubeadm kubectl kubernetes-cni
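The upstream install guide also enables the kubelet service so kubeadm can manage it; if the package did not enable it, something like the following can be run (a hedged addition, not shown in the original):

[root@k8s-all ~]# systemctl enable --now kubelet    #kubelet will restart in a loop until kubeadm init/join runs, which is expected
[root@k8s-all ~]# kubeadm version -o short          #confirm the installed version, e.g. v1.28.x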

1.3.3 kubectl command auto-completion

[root@k8s-all ~]# yum install -y bash-completion

[root@k8s-all ~]# source /usr/share/bash-completion/bash_completion

[root@k8s-all ~]# source <(kubectl completion bash)

[root@k8s-all ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc

1.3.4 Initialize the cluster on the master

[root@k8s-master ~]# kubeadm init --node-name=k8s-master \
--image-repository=registry.aliyuncs.com/google_containers \
--cri-socket=unix:///var/run/cri-dockerd.sock \
--apiserver-advertise-address=192.168.110.31 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12
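Before running the init command above, the control-plane images can optionally be pre-pulled so that any image-pull problem shows up early; a minimal sketch using the same mirror repository and CRI socket:

[root@k8s-master ~]# kubeadm config images pull \
--image-repository=registry.aliyuncs.com/google_containers \
--cri-socket=unix:///var/run/cri-dockerd.sock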

Key parts of the output:

mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

export KUBECONFIG=/etc/kubernetes/admin.conf

kubeadm join 192.168.110.31:6443 --token d46bd5.qnboievmzpl630ht \
	--discovery-token-ca-cert-hash sha256:eeae80cfb5754b66a14c3846577c73ea08949bfc8aeeb12c34f89e12f2560538 

#paste the commands below directly from the init output above

[root@k8s-master ~]# mkdir -p $HOME/.kube

[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

[root@k8s-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf

[root@k8s-master ~]# docker images #list the pulled control-plane images

REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.28.7    eeb80ea66576   3 weeks ago     125MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.28.7    4d9d9de55f19   3 weeks ago     121MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.28.7    309c26d00629   3 weeks ago     59.1MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.28.7    123aa721f941   3 weeks ago     81.1MB
registry.aliyuncs.com/google_containers/etcd                      3.5.10-0   a0eed15eed44   4 months ago    148MB
registry.aliyuncs.com/google_containers/coredns                   v1.10.1    ead0a4a53df8   13 months ago   53.6MB
registry.aliyuncs.com/google_containers/pause                     3.9        e6f181688397   16 months ago   744kB

1.3.5 Join all worker nodes to the cluster

[root@k8s-node1 ~]# kubeadm join 192.168.110.31:6443 --token d46bd5.qnboievmzpl630ht \
--discovery-token-ca-cert-hash sha256:eeae80cfb5754b66a14c3846577c73ea08949bfc8aeeb12c34f89e12f2560538 \
--cri-socket=unix:///var/run/cri-dockerd.sock

[root@k8s-node2 ~]# kubeadm join 192.168.110.31:6443 --token d46bd5.qnboievmzpl630ht \
--discovery-token-ca-cert-hash sha256:eeae80cfb5754b66a14c3846577c73ea08949bfc8aeeb12c34f89e12f2560538 \
--cri-socket=unix:///var/run/cri-dockerd.sock

Note: copy the join command printed by kubeadm init and append the extra flag --cri-socket=unix:///var/run/cri-dockerd.sock
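If the original token has expired (kubeadm tokens are valid for 24 hours by default), a fresh join command can be generated on the master; the --cri-socket flag still has to be appended by hand:

[root@k8s-master ~]# kubeadm token create --print-join-command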

1.3.6 Install the cluster network add-on (master only)

[root@k8s-master ~]# kubectl get nodes #all three nodes are NotReady because no network plugin has been installed yet

NAME         STATUS     ROLES           AGE     VERSION
k8s-master   NotReady   control-plane   19m     v1.28.7
k8s-node1    NotReady   <none>          3m37s   v1.28.7
k8s-node2    NotReady   <none>          3m32s   v1.28.7 

[root@k8s-master ~]# wget -c https://docs.projectcalico.org/v3.19/manifests/calico.yaml

[root@k8s-master ~]# vim calico.yaml
3867 apiVersion: policy/v1 #line 3867: delete everything after "v1" so only policy/v1 remains
3683 - name: CALICO_IPV4POOL_CIDR
3684   value: "10.244.0.0/16"

#Lines 3683-3684 are commented out by default; uncomment them and set the value to the --pod-network-cidr used at init time (10.244.0.0/16).

Note: YAML indentation is strict here; keep the surrounding indentation exactly, or applying the manifest will fail.
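If editing the large manifest by hand feels error-prone, the same changes can be scripted; this sketch assumes the stock manifest still contains the commented lines '# - name: CALICO_IPV4POOL_CIDR' and '#   value: "192.168.0.0/16"' as well as the string policy/v1beta1:

[root@k8s-master ~]# sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' calico.yaml
[root@k8s-master ~]# sed -i 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml
[root@k8s-master ~]# sed -i 's|policy/v1beta1|policy/v1|' calico.yaml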

[root@k8s-master ~]# kubectl apply -f calico.yaml #deploy the Calico resources

[root@k8s-master ~]# kubectl get pods -n kube-system #every pod here must reach Running; if not, it is most likely a network problem, so try a different network

NAME                                      READY   STATUS    RESTARTS        AGE
calico-kube-controllers-64d779b5d-8c6c4   1/1     Running   0               3h1m
calico-node-2d9ps                         1/1     Running   0               3h1m
calico-node-stvw6                         1/1     Running   0               3h1m
calico-node-xfmg4                         1/1     Running   0               3h1m
coredns-66f779496c-kg526                  1/1     Running   0               3h42m
coredns-66f779496c-p7rqm                  1/1     Running   0               3h42m
etcd-k8s-master                           1/1     Running   2 (159m ago)    3h42m
kube-apiserver-k8s-master                 1/1     Running   2 (159m ago)    3h42m
kube-controller-manager-k8s-master        1/1     Running   2 (159m ago)    3h42m
kube-proxy-m4qdr                          1/1     Running   1 (2m51s ago)   3h26m
kube-proxy-szw9b                          1/1     Running   2 (159m ago)    3h42m
kube-proxy-zgf5x                          1/1     Running   1 (30m ago)     3h26m
kube-scheduler-k8s-master                 1/1     Running   2 (159m ago)    3h42m
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES           AGE     VERSION
k8s-master   Ready    control-plane   3h51m   v1.28.7
k8s-node1    Ready    <none>          3h35m   v1.28.7
k8s-node2    Ready    <none>          3h35m   v1.28.7

1.4 Application Deployment and Access Verification

1.4.1 On the master node, run the following commands to create a Deployment in the cluster and verify that it runs correctly

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx

deployment.apps/nginx created

[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort

service/nginx exposed

1.4.2 Access the service

[root@k8s-master ~]# kubectl get pod,service

NAME                         READY   STATUS             RESTARTS   AGE
pod/nginx-7854ff8877-fzv75   0/1     ImagePullBackOff   0          14m

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        4h16m
service/nginx        NodePort    10.104.148.146   <none>        80:30193/TCP   13m
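The NodePort (30193 here) is assigned randomly from the 30000-32767 range; it can also be read programmatically instead of from the PORT(S) column (a small sketch):

[root@k8s-master ~]# kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}'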

[root@k8s-master ~]# curl 192.168.110.31:30193
Welcome to nginx!

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.

Commercial support is available at nginx.com.

Thank you for using nginx.

[root@k8s-master ~]# curl 10.104.148.146
Welcome to nginx!

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.

Commercial support is available at nginx.com.

Thank you for using nginx.
