飞天使 - k8s Knowledge Point 30 - Installing Kubernetes 1.28.0 Using containerd


Pre-installation preparation

MD5 checksums of the kernel upgrade packages. I have verified them myself; as long as the MD5 values match, it is safe to upgrade:
1ea91ea41eedb35c5da12fe7030f4347  kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
01a6da596167ec2bc3122a5f30a8f627  kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
Any kernel version above 4.17 is fine.
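Before installing, the checksums can be verified locally. A small sketch, assuming the filenames match the wget commands later in this guide:

```shell
# Write the expected sums to a file and let md5sum verify both RPMs at once.
cat > /tmp/kernel-rpms.md5 <<'EOF'
1ea91ea41eedb35c5da12fe7030f4347  kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
01a6da596167ec2bc3122a5f30a8f627  kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
EOF
cd /root && md5sum -c /tmp/kernel-rpms.md5   # each line should report "OK"
```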

echo "172.17.200.40 k8s-master01" | sudo tee -a /etc/hosts
echo "172.17.200.41 k8s-master02" | sudo tee -a /etc/hosts
echo "172.17.200.42 k8s-master03" | sudo tee -a /etc/hosts
echo "172.17.200.43 k8s-node01" | sudo tee -a /etc/hosts
echo "172.17.200.44 k8s-node02" | sudo tee -a /etc/hosts
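The five echo lines can equivalently be written as a single heredoc (same IPs and hostnames as above):

```shell
# Append all cluster hosts in one tee call instead of five.
sudo tee -a /etc/hosts <<'EOF'
172.17.200.40 k8s-master01
172.17.200.41 k8s-master02
172.17.200.42 k8s-master03
172.17.200.43 k8s-node01
172.17.200.44 k8s-node02
EOF
```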
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
systemctl disable --now firewalld 
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
yum install ntpdate -y
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
echo -e "* soft nofile 65536\n* hard nofile 131072\n* soft nproc 65535\n* hard nproc 655350\n* soft memlock unlimited\n* hard memlock unlimited" | sudo tee -a /etc/security/limits.conf
cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
cd /root && yum localinstall -y kernel-ml*
grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
grubby --default-kernel
Then reboot.
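After the reboot, it is worth confirming that the new kernel is actually running (version string assumed from the RPMs above):

```shell
uname -r                 # expect: 4.19.12-1.el7.elrepo.x86_64
grubby --default-kernel  # should point at the kernel-ml boot entry
```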
containerd configuration
yum install containerd -y
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

[root@master01 ~]# vim /etc/containerd/config.toml
[plugins]
...
  [plugins."io.containerd.grpc.v1.cri"]
  ...
    # change the pause image address
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
    ...
    # use the systemd cgroup driver
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      ...
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        ...
        SystemdCgroup = true

Or make the same edits with sed:
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sed -i 's/sandbox_image = "registry.k8s.io\/pause:3.6"/sandbox_image = "registry.aliyuncs.com\/google_containers\/pause:3.9"/g' /etc/containerd/config.toml
grep -i sandbox /etc/containerd/config.toml
grep -i SystemdCgroup /etc/containerd/config.toml
systemctl daemon-reload
systemctl enable containerd --now
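A quick sanity check that containerd is running and its CRI plugin loaded; ctr ships with containerd itself, so no extra tooling is needed:

```shell
systemctl is-active containerd   # expect: active
ctr version                      # client and server versions should both print
ctr plugins ls | grep -i cri     # the CRI plugin should show status "ok"
```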
Kernel parameter tuning
yum install ipset ipvsadm -y
mkdir /etc/sysconfig/modules -p
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.overcommit_memory = 0
EOF
sysctl -p /etc/sysctl.d/k8s.conf
cat > /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
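To confirm the modules and sysctls above took effect (note br_netfilter must be loaded before its bridge sysctls exist):

```shell
lsmod | grep -e overlay -e br_netfilter                        # both modules listed
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward  # both should be 1
```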
Installing nerdctl
nerdctl needs buildkitd to build images, so copy the relevant binaries and the systemd unit for the build daemon.
wget https://github.com/containerd/nerdctl/releases/download/v1.5.0/nerdctl-full-1.5.0-linux-amd64.tar.gz
tar -xf nerdctl-full-1.5.0-linux-amd64.tar.gz 
cp bin/nerdctl /usr/local/bin/
cp bin/buildctl bin/buildkitd /usr/local/bin/
cp lib/systemd/system/buildkit.service /usr/lib/systemd/system/
systemctl daemon-reload
systemctl enable buildkit --now

Edit /etc/profile and append:
export PATH=$PATH:/usr/local/bin
All of the above is installed on every machine
What is installed at this point:
kernel upgrade and tuning, containerd, time sync, and the other base server configuration
buildkit

You can do a reboot test to check whether the services start on boot. I am using CentOS 7.9:
uname -a
Linux gcp--test 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
systemctl status containerd
systemctl status buildkit
Starting the installation
yum install -y kubelet-1.28.0 kubeadm-1.28.0 kubectl-1.28.0

Check the version:
kubeadm version
systemctl enable kubelet --now

Pull the images:
kubeadm config images pull \
 --image-repository registry.aliyuncs.com/google_containers \
 --kubernetes-version v1.28.0

The output is:
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1
Initialization: this step is error-prone!
Make sure the pod and service CIDRs here do not overlap with each other or with the server network segment.
kubeadm init \
--apiserver-advertise-address="172.17.200.40" \
--control-plane-endpoint="172.17.200.37" \
--apiserver-bind-port=6443 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.28.0 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--upload-certs \
--service-dns-domain=fly.local
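The same flags can also be kept in a config file, which is easier to version-control. A sketch using the v1beta3 kubeadm API, with values copied from the flags above (the filename is hypothetical; --upload-certs stays on the command line):

```yaml
# kubeadm-init.yaml (hypothetical filename)
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.200.40
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
controlPlaneEndpoint: "172.17.200.37:6443"
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.96.0.0/16
  podSubnet: 10.244.0.0/16
  dnsDomain: fly.local
```

It would then be used as `kubeadm init --config kubeadm-init.yaml --upload-certs`.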

The tail of a successful run looks like:
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 172.17.200.37:6443 --token fosyex.07pp3s1zd8pqk1qr \
        --discovery-token-ca-cert-hash sha256:a70a555d55967cd210568049518ce5bb7f09fa3221d268a3af8c2 \
        --control-plane --certificate-key 0d268a3af8c20d268a3af8c20d268a3af8c20d268a3af8c20d268a3af8c2

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.17.200.37:6443 --token fosyex.07pp3s1zd8pqk1qr \
        --discovery-token-ca-cert-hash sha256:a70a555d55967cd210568049518ce5bb7f09fa3221d268a3af8c2


Add the other nodes to the cluster with the join commands above. They will show as NotReady at this point, because CoreDNS is not yet working.
Installing flannel
wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Modify as needed:
 containers:
 - args:
   - --ip-masq
   - --kube-subnet-mgr
   - --iface=eth0    # bind flannel to this NIC (optional)
 net-conf.json: |
   {
     "Network": "10.244.0.0/16",
     "Backend": {
       "Type": "vxlan"
     }
   }
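After editing, apply the manifest and wait for the DaemonSet to come up; the nodes should then flip from NotReady to Ready:

```shell
kubectl apply -f kube-flannel.yml
kubectl -n kube-flannel rollout status ds/kube-flannel-ds   # wait for all pods
kubectl get nodes -o wide                                   # STATUS should become Ready
```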

Results

[root@gcp-hongkong-k8s-master01-test install_k8s]# kubectl get pod -A
NAMESPACE      NAME                                                     READY   STATUS    RESTARTS       AGE
kube-flannel   kube-flannel-ds-5mtx7                                    1/1     Running   0              27s
kube-flannel   kube-flannel-ds-64fln                                    1/1     Running   0              27s
kube-flannel   kube-flannel-ds-lvqhq                                    1/1     Running   0              27s
kube-flannel   kube-flannel-ds-mwmbx                                    1/1     Running   0              27s
kube-flannel   kube-flannel-ds-pp7w7                                    1/1     Running   0              27s
kube-system    coredns-66f779496c-4xfbc                                 1/1     Running   0              135m
kube-system    coredns-66f779496c-h4hmd                                 1/1     Running   0              135m
kube-system    etcd-gcp-hongkong-k8s-master01-test                      1/1     Running   0              135m
kube-system    etcd-gcp-hongkong-k8s-master02-test                      1/1     Running   0              132m
kube-system    etcd-gcp-hongkong-k8s-master03-test                      1/1     Running   0              132m
kube-system    kube-apiserver-gcp-hongkong-k8s-master01-test            1/1     Running   0              135m
kube-system    kube-apiserver-gcp-hongkong-k8s-master02-test            1/1     Running   0              132m
kube-system    kube-apiserver-gcp-hongkong-k8s-master03-test            1/1     Running   1 (132m ago)   132m
kube-system    kube-controller-manager-gcp-hongkong-k8s-master01-test   1/1     Running   1 (132m ago)   135m
kube-system    kube-controller-manager-gcp-hongkong-k8s-master02-test   1/1     Running   0              132m
kube-system    kube-controller-manager-gcp-hongkong-k8s-master03-test   1/1     Running   0              131m
kube-system    kube-proxy-7vbk2                                         1/1     Running   0              132m
kube-system    kube-proxy-95kvh                                         1/1     Running   0              131m
kube-system    kube-proxy-d47m7                                         1/1     Running   0              131m
kube-system    kube-proxy-nvkjg                                         1/1     Running   0              131m
kube-system    kube-proxy-wnxqp                                         1/1     Running   0              135m
kube-system    kube-scheduler-gcp-hongkong-k8s-master01-test            1/1     Running   1 (132m ago)   135m
kube-system    kube-scheduler-gcp-hongkong-k8s-master02-test            1/1     Running   0              132m
kube-system    kube-scheduler-gcp-hongkong-k8s-master03-test            1/1     Running   0              132m