Deploying Kubernetes v1.28.15 with kubeadm

Environment and versions

bash
OS: CentOS Linux release 7.9.2009
Kubernetes: v1.28.15
Docker: 26.1.4
containerd: 1.6.33
Calico: v3.25.0

Preparation

Host IP        Hostname  Role    Spec
192.168.1.131  vm131     master  2 CPU / 4 GB RAM, 50 GB disk
192.168.1.132  vm132     node1   2 CPU / 4 GB RAM, 50 GB disk
192.168.1.133  vm133     node2   2 CPU / 4 GB RAM, 50 GB disk

1. Change the hostnames

bash
# master
hostnamectl set-hostname vm131
# node1
hostnamectl set-hostname vm132
# node2
hostnamectl set-hostname vm133

# Add the hosts entries on all three machines
vim /etc/hosts
192.168.1.131 vm131
192.168.1.132 vm132
192.168.1.133 vm133
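
A quick optional sanity check that the names resolve on every host, assuming the /etc/hosts entries above are in place:

bash
# should print one reply per host
for h in vm131 vm132 vm133; do ping -c1 $h; done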

2. Disable SELinux, firewalld, and swap

bash
# Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

# Stop firewalld and disable it on boot
systemctl stop firewalld
systemctl disable firewalld

# Reset iptables rules and allow forwarding
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

# Kernel parameters so bridged traffic is routed correctly through iptables
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.overcommit_memory = 1
EOF
sysctl --system
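
The bridge-nf sysctls above only take effect when the br_netfilter module is loaded. Loading it explicitly, along with overlay (used by containerd), is a common companion step; a sketch:

bash
# load now and on every boot
cat <<EOF > /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter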

# Disable swap (the kubelet refuses to start with swap enabled by default)
swapoff -a && free -h
sed -i 's@/dev/mapper/centos-swap@#/dev/mapper/centos-swap@g' /etc/fstab

3. Time synchronization

bash
yum -y install ntp
systemctl enable ntpd
systemctl start ntpd
timedatectl set-timezone Asia/Shanghai
ntpdate -u time.nist.gov
date

# If yum repo problems come up, switch to the Aliyun mirror:
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all   # clear the cache
yum makecache   # rebuild the cache

I. Install Docker

bash
# Install the compiler toolchain
yum -y install gcc gcc-c++
# Remove any previously installed Docker packages to avoid conflicts
yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-engine
# Install yum-utils
yum install -y yum-utils
# Add docker-ce.repo to /etc/yum.repos.d/
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Speed up yum metadata downloads
yum makecache fast
# Install Docker
yum install docker-ce docker-ce-cli containerd.io -y
# Start Docker
systemctl start docker
# Enable on boot
systemctl enable docker
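
Docker is only used later in this guide to pull and export the Calico images. If pulls from Docker Hub are slow or blocked, a registry mirror can help; a sketch (the mirror URL below is a placeholder, substitute one you trust):

bash
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://your-mirror.example.com"]
}
EOF
systemctl restart docker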

II. Install containerd

bash
# containerd.io was already installed alongside Docker in step I; this ensures it is present
yum install containerd.io -y

# Configure /etc/containerd/config.toml
# The stock config disables the CRI plugin, which kubeadm needs. Make sure
# "cri" is NOT listed in disabled_plugins (comment the line out or empty the list):
vim /etc/containerd/config.toml
#disabled_plugins = ["cri"]

# Restart containerd after the change:
systemctl restart containerd

# If you want tracing, configure a tracing endpoint in the same file:
#[plugins."io.containerd.tracing.processor.v1.otlp"]
#  endpoint = "your-tracing-endpoint:port"
# Fill in the endpoint for whatever tracing backend you use (e.g. an OpenTelemetry
# Collector), then restart containerd:
#systemctl restart containerd
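
If editing the shipped file by hand gets messy, a common alternative (an assumption, not from the original steps) is to regenerate the full default config and switch the runc runtime to the systemd cgroup driver, matching the kubelet setting used later:

bash
# regenerate a complete default config
containerd config default > /etc/containerd/config.toml
# use the systemd cgroup driver for the runc runtime
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# optional: point the sandbox image at the Aliyun mirror of pause:3.9
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
systemctl restart containerd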

III. Install kubeadm on the master node

1. Install kubelet, kubeadm, and kubectl

bash
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
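
The install command above assumes a Kubernetes yum repository is already configured, which the original steps do not show. A sketch using the Aliyun mirror (an assumption; adjust to your mirror of choice), pinning the packages to the target version:

bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum install -y kubelet-1.28.15 kubeadm-1.28.15 kubectl-1.28.15 --disableexcludes=kubernetes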

2. Pull the required images

bash
# List the images kubeadm needs
kubeadm config images list
# Output:
I0306 11:27:12.886721   21939 version.go:256] remote version is much newer: v1.32.2; falling back to: stable-1.28
registry.k8s.io/kube-apiserver:v1.28.15
registry.k8s.io/kube-controller-manager:v1.28.15
registry.k8s.io/kube-scheduler:v1.28.15
registry.k8s.io/kube-proxy:v1.28.15
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
# Pull the images (note: the namespace is k8s.io)
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.28.15
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.28.15
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.28.15
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.28.15
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.9-0
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.10.1
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
# Re-tag to the registry.k8s.io names
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.28.15 registry.k8s.io/kube-apiserver:v1.28.15
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.28.15 registry.k8s.io/kube-controller-manager:v1.28.15
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.28.15 registry.k8s.io/kube-scheduler:v1.28.15
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.28.15 registry.k8s.io/kube-proxy:v1.28.15
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9 registry.k8s.io/pause:3.9
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 registry.k8s.io/pause:3.6
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.9-0 registry.k8s.io/etcd:3.5.9-0
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.10.1 registry.k8s.io/coredns/coredns:v1.10.1
# Remove the Aliyun-named images
for item in `ctr -n k8s.io images list | awk '$1 ~ /registry.cn-hangzhou.aliyuncs.com/ {print $1}'`;do ctr -n k8s.io images rm ${item};done
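
As an alternative to the pull/tag/delete loop above, kubeadm can pull everything from a mirror in one step. If you use this shortcut, the images keep their mirror names, so you would also pass the same --image-repository flag to kubeadm init:

bash
kubeadm config images pull \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.15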

3. Set kubelet parameters

bash
vim /etc/sysconfig/kubelet

# Set the following (kubeadm v1.22+ already defaults the kubelet cgroupDriver to systemd; this makes it explicit)
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd

4. Initialize with kubeadm

bash
kubeadm init
# "Your Kubernetes control-plane has initialized successfully!" means init succeeded
# Then run the commands from the init output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
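
A bare kubeadm init works here, but being explicit avoids surprises; a sketch of a fuller variant (these flags are assumptions, matching the master address and the pod CIDR set in the Calico manifest below):

bash
kubeadm init \
  --kubernetes-version v1.28.15 \
  --apiserver-advertise-address 192.168.1.131 \
  --pod-network-cidr 10.244.0.0/16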

IV. Install kubeadm on the worker nodes

1. Install kubeadm and kubelet

bash
# Assumes the same Kubernetes yum repo from section III.1 is configured on the node
yum -y install kubeadm kubelet
systemctl enable kubelet.service

2. Pull the required images

bash
# List the images kubeadm needs
kubeadm config images list
# Output:
I0306 11:27:12.886721   21939 version.go:256] remote version is much newer: v1.32.2; falling back to: stable-1.28
registry.k8s.io/kube-apiserver:v1.28.15
registry.k8s.io/kube-controller-manager:v1.28.15
registry.k8s.io/kube-scheduler:v1.28.15
registry.k8s.io/kube-proxy:v1.28.15
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
# Pull the images (note: the namespace is k8s.io)
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.28.15
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.28.15
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.28.15
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.28.15
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.9-0
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.10.1
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
# Re-tag to the registry.k8s.io names
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.28.15 registry.k8s.io/kube-apiserver:v1.28.15
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.28.15 registry.k8s.io/kube-controller-manager:v1.28.15
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.28.15 registry.k8s.io/kube-scheduler:v1.28.15
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.28.15 registry.k8s.io/kube-proxy:v1.28.15
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9 registry.k8s.io/pause:3.9
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 registry.k8s.io/pause:3.6
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.9-0 registry.k8s.io/etcd:3.5.9-0
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.10.1 registry.k8s.io/coredns/coredns:v1.10.1
# Remove the Aliyun-named images (the kubeadm config images pull shortcut from section III.2 works here too)
for item in `ctr -n k8s.io images list | awk '$1 ~ /registry.cn-hangzhou.aliyuncs.com/ {print $1}'`;do ctr -n k8s.io images rm ${item};done

3. Set kubelet parameters

bash
vim /etc/sysconfig/kubelet

# Set the following
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd

4. Join the cluster

bash
# The token and hash come from the kubeadm init output on the master
kubeadm join 192.168.1.131:6443 --token 51lpgy.tue7o5rnwi3h62ql \
    --discovery-token-ca-cert-hash sha256:200ad4e8f13649b3cd14db97cdaf842d97f4dbbca0cbec123c06b2a3c687ede1
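
If the token has expired (bootstrap tokens are valid for 24 hours by default) or the init output is lost, a fresh join command can be printed on the master:

bash
kubeadm token create --print-join-command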

V. Install the network plugin (Calico)

1. Download and edit calico.yaml on the master

bash
# Download the Calico manifest (master node only)
yum install wget -y
mkdir -pv /data/yaml && cd /data/yaml
wget https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
# Edit calico.yaml: set the pod CIDR
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
# Below the CLUSTER_TYPE entry, add the following (set interface to your host's NIC name):
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens192"

2. Pull the required images (on the master and all nodes)

bash
mkdir -pv /data/tar && cd /data/tar
# Pull with docker first; pulling these directly with ctr did not work in testing
docker pull calico/cni:v3.25.0
docker pull calico/node:v3.25.0
docker pull calico/kube-controllers:v3.25.0
# Export the docker images
docker save -o cni.tar calico/cni:v3.25.0
docker save -o node.tar calico/node:v3.25.0
docker save -o kube-controllers.tar calico/kube-controllers:v3.25.0
# Import with ctr (namespace k8s.io). The pull/save steps can run on one machine;
# the import must run on every node.
ctr -n k8s.io images import cni.tar
ctr -n k8s.io images import node.tar
ctr -n k8s.io images import kube-controllers.tar
# Verify the import
ctr -n k8s.io images list | grep calico

3. Apply the Calico manifest

bash
# Master node only
cd /data/yaml
kubectl apply -f calico.yaml

bash
[root@vm131 ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS       AGE
kube-system   calico-kube-controllers-658d97c59c-fc46q   1/1     Running   0              46m
kube-system   calico-node-4cc9q                          1/1     Running   0              46m
kube-system   calico-node-l6ch2                          1/1     Running   0              46m
kube-system   calico-node-r27jj                          1/1     Running   0              46m
kube-system   coredns-5dd5756b68-84cww                   0/1     Running   0              4h
kube-system   coredns-5dd5756b68-nd9v8                   0/1     Running   0              4h
kube-system   etcd-vm131                                 1/1     Running   0              4h1m
kube-system   kube-apiserver-vm131                       1/1     Running   0              4h1m
kube-system   kube-controller-manager-vm131              1/1     Running   1 (3h9m ago)   4h1m
kube-system   kube-proxy-26qwm                           1/1     Running   0              3h44m
kube-system   kube-proxy-bj5xg                           1/1     Running   0              4h
kube-system   kube-proxy-wgfnz                           1/1     Running   0              3h43m
kube-system   kube-scheduler-vm131                       1/1     Running   0              4h1m

# If coredns stays 0/1 as above and its logs show the error below, restart CoreDNS:
# plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
kubectl rollout restart deployment coredns -n kube-system

VI. Verify

bash
[root@vm131 ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS        AGE
kube-system   calico-kube-controllers-658d97c59c-t678p   1/1     Running   0               5m26s
kube-system   calico-node-5zj2n                          1/1     Running   0               5m26s
kube-system   calico-node-hjdcm                          1/1     Running   0               5m26s
kube-system   calico-node-hrwh4                          1/1     Running   0               5m26s
kube-system   coredns-967d5bb69-sb9xx                    1/1     Running   0               31m
kube-system   coredns-967d5bb69-sngxg                    1/1     Running   0               31m
kube-system   etcd-vm131                                 1/1     Running   0               4h34m
kube-system   kube-apiserver-vm131                       1/1     Running   0               4h34m
kube-system   kube-controller-manager-vm131              1/1     Running   1 (3h43m ago)   4h34m
kube-system   kube-proxy-26qwm                           1/1     Running   0               4h17m
kube-system   kube-proxy-bj5xg                           1/1     Running   0               4h34m
kube-system   kube-proxy-wgfnz                           1/1     Running   0               4h16m
kube-system   kube-scheduler-vm131                       1/1     Running   0               4h34m
[root@vm131 ~]# kubectl get node
NAME    STATUS   ROLES           AGE     VERSION
vm131   Ready    control-plane   4h34m   v1.28.2
vm132   Ready    <none>          4h16m   v1.28.2
vm133   Ready    <none>          4h17m   v1.28.2
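
As a final smoke test (not in the original steps), a throwaway deployment confirms that pods schedule onto the worker nodes and are reachable:

bash
kubectl create deployment nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -o wide               # pods should land on vm132/vm133
kubectl get svc nginx                  # note the NodePort, then curl <node-ip>:<port>
kubectl delete deployment,svc nginx    # clean up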

VII. Troubleshooting notes

bash
crictl images errors out:
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
E0305 16:42:46.922252   22812 remote_image.go:119] "ListImages with filter from image service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\"" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},},}"
FATA[0000] listing images: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"
Fix:
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
export IMAGE_SERVICE_ENDPOINT=unix:///run/containerd/containerd.sock
To make crictl connect to containerd by default, write its config file
(note: the config is the file /etc/crictl.yaml, not a directory):
echo 'runtime-endpoint: unix:///run/containerd/containerd.sock' > /etc/crictl.yaml
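
A fuller /etc/crictl.yaml, setting both endpoints at once (the timeout and debug keys are optional and shown at their defaults):

bash
cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF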

vim /etc/containerd/config.toml
# Make sure "cri" is not listed in disabled_plugins so the CRI plugin loads:
#disabled_plugins = ["cri"]

Disable the tracing plugin: if you do not plan to use tracing, disable it in /etc/containerd/config.toml:
[plugins."io.containerd.tracing.processor.v1.otlp"]
  enabled = false
Then restart containerd:
sudo systemctl restart containerd
Configure a tracing endpoint: if you do want tracing, provide the correct endpoint in the same file:

[plugins."io.containerd.tracing.processor.v1.otlp"]
  endpoint = "your-tracing-endpoint:port"
Fill in the endpoint for whatever tracing backend you use (e.g. an OpenTelemetry Collector).

After either change, restart containerd:
sudo systemctl restart containerd
Summary:
If you do not use tracing, you can simply ignore the warning or disable the plugin.
If you do need tracing, make sure a correct tracing endpoint is configured.
java·docker·容器