kubeadm init fails: failed to pull image k8s.gcr.io/etcd:3.4.13-0

1. kubeadm init fails

Error:

bash
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.19.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.19.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.19.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.19.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.13-0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.7.0: output: 
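
To confirm that the registry itself is unreachable from the node (rather than a local Docker problem), an optional quick check; the 10-second timeout is an arbitrary choice:

bash
# Expect this to time out if k8s.gcr.io is blocked from this network
curl -v --max-time 10 https://k8s.gcr.io/v2/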

2. Cause

The k8s.gcr.io registry cannot be reached from mainland China without a proxy, so the default image pull never completes. The workaround is to pull the same images from a domestic mirror registry, then re-tag them so the names match what kubeadm config expects.

  • List the images kubeadm expects before making any changes
bash
[root@master01 opt]# kubeadm config images list
I0825 12:06:37.047523   32435 version.go:252] remote version is much newer: v1.28.1; falling back to: stable-1.19
W0825 12:06:38.260046   32435 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.19.16
k8s.gcr.io/kube-controller-manager:v1.19.16
k8s.gcr.io/kube-scheduler:v1.19.16
k8s.gcr.io/kube-proxy:v1.19.16
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
  • Pull each image from the mirror registry and re-tag it back to its k8s.gcr.io name (a version-pinned variant is sketched after the loop)
bash
# For every image kubeadm expects: pull it from the Aliyun mirror,
# re-tag it as k8s.gcr.io/<image>, then remove the mirror-tagged copy
for i in $(kubeadm config images list); do
        imageName=${i#k8s.gcr.io/}
        docker pull registry.aliyuncs.com/google_containers/$imageName
        docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
        docker rmi registry.aliyuncs.com/google_containers/$imageName
done
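
Note that the tags printed above (v1.19.16) do not match the versions kubeadm init actually tried to pull (v1.19.3): the list command resolved the remote stable-1.19 release, while the failed init attempted v1.19.3 images. If the two disagree, the freshly tagged images will not be the ones init requests, so pinning the version explicitly is safer. A minimal sketch, assuming kubeadm v1.19.3 (taken from the error log above; substitute your own version):

bash
# Pin the version so the pulled tags match what kubeadm init will request
K8S_VERSION=v1.19.3
for i in $(kubeadm config images list --kubernetes-version ${K8S_VERSION}); do
        imageName=${i#k8s.gcr.io/}
        docker pull registry.aliyuncs.com/google_containers/$imageName
        docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
        docker rmi registry.aliyuncs.com/google_containers/$imageName
done

# Confirm every required image is now present under its k8s.gcr.io name
docker images | grep k8s.gcr.io

Alternatively, kubeadm can be told to pull from the mirror directly by passing --image-repository registry.aliyuncs.com/google_containers to kubeadm init (or to kubeadm config images pull), which skips the re-tagging step entirely.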

3. Re-run kubeadm init

bash
[root@master01 opt]# sudo kubeadm init


Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 101.16.97.0:6443 --token iplvm8.ex1q0syqg \
    --discovery-token-ca-cert-hash sha256:3266bbdab19c3640d43f486e63dc0a8660de5
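
As an optional sanity check (assuming kubectl is installed on the control-plane node and admin.conf has been copied as shown in the output above):

bash
# The control-plane node should be listed; it stays NotReady until a pod network add-on is applied
kubectl get nodes
# Control-plane pods should be Running; coredns stays Pending until the network add-on is in place
kubectl get pods -n kube-system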