飞天使 - k8s Notes 30 - Installing Kubernetes 1.28.0 with containerd

Preparation before installation

MD5 checksums of the kernel upgrade packages. I have verified these myself; as long as your downloads match these values, the upgrade is safe:
1ea91ea41eedb35c5da12fe7030f4347  kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
01a6da596167ec2bc3122a5f30a8f627  kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
Any kernel above 4.17 is fine; 4.19 is used here.

 echo "172.17.200.40 k8s-master01" | sudo tee -a /etc/hosts
echo "172.17.200.41 k8s-master02" | sudo tee -a /etc/hosts
echo "172.17.200.42 k8s-master03" | sudo tee -a /etc/hosts
echo "172.17.200.43 k8s-node01" | sudo tee -a /etc/hosts
echo "172.17.200.44 k8s-node02" | sudo tee -a /etc/hosts
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
systemctl disable --now firewalld 
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
yum install ntpdate -y
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
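ntpdate is installed above but never actually run. A minimal sketch of a one-off sync plus a periodic cron entry (assuming ntp.aliyun.com is reachable from your hosts):
ntpdate ntp.aliyun.com
(crontab -l 2>/dev/null; echo '*/30 * * * * /usr/sbin/ntpdate ntp.aliyun.com >/dev/null 2>&1') | crontab -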
echo -e "* soft nofile 65536\n* hard nofile 131072\n* soft nproc 65535\n* hard nproc 655350\n* soft memlock unlimited\n* hard memlock unlimited" | sudo tee -a /etc/security/limits.conf
cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
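To verify the downloads against the checksums listed earlier, you can feed them to md5sum -c (a minimal sketch; run it in /root next to the RPMs):
cat > kernel-ml.md5 <<EOF
1ea91ea41eedb35c5da12fe7030f4347  kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
01a6da596167ec2bc3122a5f30a8f627  kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
EOF
md5sum -c kernel-ml.md5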
cd /root && yum localinstall -y kernel-ml*
grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
grubby --default-kernel
Then reboot.
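After the reboot, confirm the new kernel is active:
uname -r    # expect 4.19.12-1.el7.elrepo.x86_64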
containerd configuration
yum install containerd -y
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

[root@master01 ~]# vim /etc/containerd/config.toml
[plugins]
  ...
  [plugins."io.containerd.grpc.v1.cri"]
    ...
    # change the pause image address
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
    ...
    # configure the systemd cgroup driver
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      ...
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        ...
        SystemdCgroup = true

Or make the same changes with sed instead:
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sed -i 's/sandbox_image = "registry.k8s.io\/pause:3.6"/sandbox_image = "registry.aliyuncs.com\/google_containers\/pause:3.9"/g' /etc/containerd/config.toml
cat /etc/containerd/config.toml  |grep -i sandbox
cat /etc/containerd/config.toml  |grep -i SystemdCgroup
systemctl daemon-reload
systemctl enable containerd --now
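A quick sanity check that containerd is running and the CRI plugin is loaded (exact output will vary):
ctr version
ctr plugins ls | grep cri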
Kernel parameter tuning
yum install ipset ipvsadm -y
mkdir /etc/sysconfig/modules -p
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack
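Note that scripts under /etc/sysconfig/modules/ are typically not executed automatically under systemd on CentOS 7, so the IPVS modules above are only loaded for the current boot. If you want them loaded on every boot, a modules-load.d entry is the more reliable route (a sketch):
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF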
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.overcommit_memory = 0
EOF
sysctl -p /etc/sysctl.d/k8s.conf
cat > /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
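Because the net.bridge.* keys only exist once br_netfilter is loaded, re-apply the sysctl file after the modprobe and verify:
sysctl -p /etc/sysctl.d/k8s.conf
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward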
Install nerdctl
nerdctl relies on buildkitd to build images, so copy the related binaries and the buildkit service unit as well.
wget https://github.com/containerd/nerdctl/releases/download/v1.5.0/nerdctl-full-1.5.0-linux-amd64.tar.gz
tar -xf nerdctl-full-1.5.0-linux-amd64.tar.gz 
cp bin/nerdctl /usr/local/bin/
cp bin/buildctl bin/buildkitd /usr/local/bin/
cp lib/systemd/system/buildkit.service /usr/lib/systemd/system/
systemctl daemon-reload
systemctl enable buildkit --now

Edit /etc/profile and append:
export PATH=$PATH:/usr/local/bin
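A quick check that the copied binaries and the buildkit service work:
nerdctl version
buildctl --version
systemctl is-active buildkit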
All of the steps above are performed on every machine. What is in place so far:
kernel upgrade and tuning, containerd, time sync, and other base server configuration
buildkit

It is worth doing a reboot test to confirm these services come back up on boot. I am using CentOS 7.9:
uname -a
Linux gcp--test 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
systemctl status containerd
systemctl status buildkit
Starting the installation
yum install -y kubelet-1.28.0 kubeadm-1.28.0 kubectl-1.28.0

Check the version
kubeadm version
systemctl enable kubelet --now
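At this point kubelet will restart in a loop until kubeadm init (or kubeadm join) writes its configuration; that is expected and can be confirmed with:
systemctl status kubelet
journalctl -u kubelet --no-pager | tail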

Pull the images
kubeadm config images pull \
 --image-repository registry.aliyuncs.com/google_containers \
 --kubernetes-version v1.28.0

Output:
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1
Initialization. This is the step where things most often go wrong!
Make sure the pod, service, and node network CIDRs do not overlap. Also note that --control-plane-endpoint (172.17.200.37 here) should be a stable address, such as a VIP or load balancer in front of the control-plane nodes, rather than a single master's IP.
kubeadm init \
--apiserver-advertise-address="172.17.200.40" \
--control-plane-endpoint="172.17.200.37" \
--apiserver-bind-port=6443 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.28.0 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--upload-certs \
--service-dns-domain=fly.local
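If you prefer a configuration file over a long flag list, a roughly equivalent kubeadm-config.yaml is sketched below (same addresses, CIDRs, and DNS domain as the flags above; the file name is arbitrary):
cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.200.40
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
controlPlaneEndpoint: "172.17.200.37:6443"
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.96.0.0/16
  podSubnet: 10.244.0.0/16
  dnsDomain: fly.local
EOF
kubeadm init --config kubeadm-config.yaml --upload-certs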

The tail end of a successful run:
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 172.17.200.37:6443 --token fosyex.07pp3s1zd8pqk1qr \
        --discovery-token-ca-cert-hash sha256:a70a555d55967cd210568049518ce5bb7f09fa3221d268a3af8c2 \
        --control-plane --certificate-key 0d268a3af8c20d268a3af8c20d268a3af8c20d268a3af8c20d268a3af8c2

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.17.200.37:6443 --token fosyex.07pp3s1zd8pqk1qr \
        --discovery-token-ca-cert-hash sha256:a70a555d55967cd210568049518ce5bb7f09fa3221d268a3af8c2


Join the other nodes to the cluster with the commands above. They will show NotReady at this point because CoreDNS cannot run until a pod network is installed.
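You can watch the node state while the CNI is being installed:
kubectl get nodes -o wide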
Install flannel
wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Adjust according to your environment:
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth0        # which NIC to bind to (optional)
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
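After editing, apply the manifest (this step is required before the flannel pods below will appear):
kubectl apply -f kube-flannel.yml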

Results:

[root@gcp-hongkong-k8s-master01-test install_k8s]# kubectl get pod -A
NAMESPACE      NAME                                                     READY   STATUS    RESTARTS       AGE
kube-flannel   kube-flannel-ds-5mtx7                                    1/1     Running   0              27s
kube-flannel   kube-flannel-ds-64fln                                    1/1     Running   0              27s
kube-flannel   kube-flannel-ds-lvqhq                                    1/1     Running   0              27s
kube-flannel   kube-flannel-ds-mwmbx                                    1/1     Running   0              27s
kube-flannel   kube-flannel-ds-pp7w7                                    1/1     Running   0              27s
kube-system    coredns-66f779496c-4xfbc                                 1/1     Running   0              135m
kube-system    coredns-66f779496c-h4hmd                                 1/1     Running   0              135m
kube-system    etcd-gcp-hongkong-k8s-master01-test                      1/1     Running   0              135m
kube-system    etcd-gcp-hongkong-k8s-master02-test                      1/1     Running   0              132m
kube-system    etcd-gcp-hongkong-k8s-master03-test                      1/1     Running   0              132m
kube-system    kube-apiserver-gcp-hongkong-k8s-master01-test            1/1     Running   0              135m
kube-system    kube-apiserver-gcp-hongkong-k8s-master02-test            1/1     Running   0              132m
kube-system    kube-apiserver-gcp-hongkong-k8s-master03-test            1/1     Running   1 (132m ago)   132m
kube-system    kube-controller-manager-gcp-hongkong-k8s-master01-test   1/1     Running   1 (132m ago)   135m
kube-system    kube-controller-manager-gcp-hongkong-k8s-master02-test   1/1     Running   0              132m
kube-system    kube-controller-manager-gcp-hongkong-k8s-master03-test   1/1     Running   0              131m
kube-system    kube-proxy-7vbk2                                         1/1     Running   0              132m
kube-system    kube-proxy-95kvh                                         1/1     Running   0              131m
kube-system    kube-proxy-d47m7                                         1/1     Running   0              131m
kube-system    kube-proxy-nvkjg                                         1/1     Running   0              131m
kube-system    kube-proxy-wnxqp                                         1/1     Running   0              135m
kube-system    kube-scheduler-gcp-hongkong-k8s-master01-test            1/1     Running   1 (132m ago)   135m
kube-system    kube-scheduler-gcp-hongkong-k8s-master02-test            1/1     Running   0              132m
kube-system    kube-scheduler-gcp-hongkong-k8s-master03-test            1/1     Running   0              132m
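Once the flannel pods are Running, the nodes should report Ready as well:
kubectl get nodes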