Installing and Deploying Kubernetes on CentOS 7

I. Environment Preparation and Base Configuration (run on both Master and Worker nodes)

1. Server Requirements

  • OS: CentOS 7.6+ (minimal install); software: Docker 20.10.17, Kubernetes 1.23.0
  • Hardware: 2 CPU cores, 2 GB RAM, 20 GB disk
  • Network: static IPs (example: Master 192.168.19.143, Worker 192.168.19.144), with internet access
  • Disable VM snapshots and use bridged networking mode
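Before going further, the hardware floor can be sanity-checked with a short sketch. `check_min` is a hypothetical helper, not part of any standard tooling; the 1700 MB memory floor is an assumption that allows for kernel reservations on a nominal 2 GB machine:

```shell
# check_min: hypothetical helper -- compare an actual value against a minimum.
check_min() {
  local name="$1" actual="$2" min="$3"
  if [ "$actual" -ge "$min" ]; then
    echo "OK:   $name = $actual (min $min)"
  else
    echo "FAIL: $name = $actual (min $min)"
    return 1
  fi
}

# Gather live values (Linux-specific paths).
cpus=$(nproc)
mem_mb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 ))

# "|| true" keeps the sketch going; drop it to hard-fail on an undersized host.
check_min "CPU cores" "$cpus" 2 || true
check_min "Memory MB" "$mem_mb" 1700 || true
```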

2. Switch YUM to a Chinese Mirror (fixes download failures)

# Back up the default repo files (epel.repo may be absent on a minimal install)
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
mv /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.bak

# Download the Aliyun repo files (key step: the stock CentOS 7 mirrors are no longer served)
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo

# Clean and rebuild the YUM cache
yum clean all
yum makecache fast

3. System Initialization

# 1. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# 2. Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

# 3. Disable swap (kubelet refuses to run with swap enabled)
swapoff -a
sed -i '/swap/s/^/#/' /etc/fstab

# 4. Configure kernel parameters (IP forwarding and bridged traffic filtering)
modprobe br_netfilter   # load the module so the bridge-nf keys exist
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system

# 5. Install base tools
yum install -y net-tools bind-utils wget curl vim yum-utils device-mapper-persistent-data lvm2 jq
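A typo in k8s.conf surfaces only much later as broken pod networking, so it is worth confirming the file carries all three keys before moving on. A minimal sketch: `verify_k8s_sysctl` is a hypothetical helper, and the demo runs against a temp copy so it works even on a machine where the real file does not exist yet:

```shell
# Hypothetical validator: confirm every required key in a sysctl conf file is set to 1.
verify_k8s_sysctl() {
  local conf="$1" key rc=0
  for key in net.bridge.bridge-nf-call-ip6tables \
             net.bridge.bridge-nf-call-iptables \
             net.ipv4.ip_forward; do
    if grep -Eq "^${key}[[:space:]]*=[[:space:]]*1$" "$conf"; then
      echo "present: $key = 1"
    else
      echo "missing: $key"
      rc=1
    fi
  done
  return $rc
}

# Demo against a temp copy; on a real node run: verify_k8s_sysctl /etc/sysctl.d/k8s.conf
tmp=$(mktemp)
printf '%s\n' \
  'net.bridge.bridge-nf-call-ip6tables = 1' \
  'net.bridge.bridge-nf-call-iptables = 1' \
  'net.ipv4.ip_forward = 1' > "$tmp"
verify_k8s_sysctl "$tmp"
rm -f "$tmp"
```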

II. Install Docker (run on both Master and Worker nodes)

# 1. Add the Aliyun Docker CE YUM repo
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# 2. Install a pinned Docker version (20.10.x, compatible with K8s 1.23)
yum install -y docker-ce-20.10.17 docker-ce-cli-20.10.17 containerd.io

# 3. Configure Docker (registry mirrors + systemd cgroup driver, matching kubelet)
# Note: JSON permits no comments (# or //); a comment in this file breaks Docker startup
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": [
    "https://b9pmyelo.mirror.aliyuncs.com",
    "https://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://reg-mirror.qiniu.com"
  ],
  "exec-opts": [
    "native.cgroupdriver=systemd"
  ]
}
EOF

# 4. Validate the JSON (no error output means it is well-formed)
jq . /etc/docker/daemon.json

# 5. Start Docker and enable it at boot
systemctl daemon-reload
systemctl start docker
systemctl enable docker

# 6. Confirm Docker is up (expect "active (running)")
systemctl status docker -l | grep active

# 7. Verify the cgroup driver (must print "systemd")
docker info | grep "Cgroup Driver"

# 8. Check the registry mirrors (-A 4 so all four configured mirrors are shown)
docker info | grep -A 4 "Registry Mirrors"
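The cgroupfs-vs-systemd driver mismatch between Docker and kubelet is the classic cause of later init failures, so daemon.json is worth checking before (re)starting Docker. A minimal sketch: `cgroup_driver_of` is a hypothetical grep-based helper (jq would be cleaner where available), and the demo uses a temp file:

```shell
# Hypothetical helper: extract the native.cgroupdriver value from a daemon.json.
cgroup_driver_of() {
  grep -o 'native.cgroupdriver=[a-z]*' "$1" | cut -d= -f2
}

# Demo against a temp file; on a real node run: cgroup_driver_of /etc/docker/daemon.json
tmp=$(mktemp)
echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' > "$tmp"
driver=$(cgroup_driver_of "$tmp")
echo "cgroup driver: $driver"
[ "$driver" = "systemd" ] && echo "matches the kubelet default"
rm -f "$tmp"
```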

III. Install the Kubernetes Components (run on both Master and Worker nodes)

# 1. Add the Aliyun Kubernetes YUM repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# 2. Install the pinned version (1.23.0)
yum install -y kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0 --disableexcludes=kubernetes

# 3. Start kubelet and enable it at boot (it stays "activating" in a restart loop
# until kubeadm init runs -- this is expected)
systemctl start kubelet
systemctl enable kubelet

# 4. Verify the component versions
kubeadm version
kubectl version --client

IV. Pull the Base K8s Images

# List the images required by K8s 1.23.0
images=(
  kube-apiserver:v1.23.0
  kube-controller-manager:v1.23.0
  kube-scheduler:v1.23.0
  kube-proxy:v1.23.0
  pause:3.6
  etcd:3.5.1-0
  coredns:v1.8.6
)

# Pull from the public Aliyun registry (no private registry or credentials needed)
for image in "${images[@]}"; do
  docker pull registry.aliyuncs.com/google_containers/$image
done

# Verify the pulls succeeded (should list 7 images)
docker images | grep registry.aliyuncs.com/google_containers
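A pull loop can fail partway through, so the listing can be checked against the expected list rather than counted by eye. `verify_images` is a hypothetical helper; it reads the `docker images` output on stdin, which also lets the demo below run on a fake listing without a Docker daemon:

```shell
# Hypothetical check: given `docker images` output on stdin, confirm every
# K8s 1.23.0 image from the list above is present.
verify_images() {
  local listing repo="registry.aliyuncs.com/google_containers" image name tag rc=0
  listing=$(cat)
  for image in kube-apiserver:v1.23.0 kube-controller-manager:v1.23.0 \
               kube-scheduler:v1.23.0 kube-proxy:v1.23.0 \
               pause:3.6 etcd:3.5.1-0 coredns:v1.8.6; do
    name=${image%%:*}
    tag=${image##*:}
    if printf '%s\n' "$listing" | grep -q "${repo}/${name}[[:space:]]\{1,\}${tag}"; then
      echo "pulled:  $image"
    else
      echo "missing: $image"
      rc=1
    fi
  done
  return $rc
}

# On a real node:  docker images | verify_images
# Demo with a fake two-line listing (five images deliberately missing):
printf '%s\n' \
  'registry.aliyuncs.com/google_containers/kube-apiserver   v1.23.0   e9f4b   10d   135MB' \
  'registry.aliyuncs.com/google_containers/pause            3.6       62704   10d   683kB' \
  | verify_images || true
```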

V. Initialize the Master Node

# 1. Set the Master node IP (replace with your actual IP)
MASTER_IP="192.168.19.143"

# 2. Wipe any leftover state (skip on a first install; required when retrying after a failure)
kubeadm reset -f
rm -rf /etc/cni/net.d

# 3. Run the init (pulling from the public Aliyun registry)
# Note: --ignore-preflight-errors=all is a blunt instrument; once the setup
# works, prefer dropping it so real preflight failures surface
kubeadm init \
  --apiserver-advertise-address=$MASTER_IP \
  --image-repository=registry.aliyuncs.com/google_containers \
  --kubernetes-version=v1.23.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=all

# 4. Set up kubectl access (as root)
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

# 5. Verify the result (the node shows NotReady until the network plugin is installed)
kubectl get nodes
Key success output (save the join command):
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then you can join any number of worker nodes by running the following on each as root:
  kubeadm join 192.168.19.143:6443 --token dnbzfo.7iuqo03wuhedalaq \
        --discovery-token-ca-cert-hash sha256:ac67b3dda4ac71ae3b95b11a9296f74930a0af66b82747738a4c8235c79c1cf7

VI. Install the Calico Network Plugin (run on the Master)

# 1. Download the Calico manifest
curl -O https://docs.projectcalico.org/v3.20/manifests/calico.yaml

# 2. Point the images at a Huawei Cloud mirror (works around Docker Hub pull failures)
# Note: confirm the image tag inside calico.yaml first; v3.20.6 is assumed below
sed -i 's#docker.io/calico/cni:v3.20.6#swr.cn-east-3.myhuaweicloud.com/kubesre/docker.io/calico/cni:v3.20.6#g' calico.yaml
sed -i 's#docker.io/calico/node:v3.20.6#swr.cn-east-3.myhuaweicloud.com/kubesre/docker.io/calico/node:v3.20.6#g' calico.yaml
sed -i 's#docker.io/calico/kube-controllers:v3.20.6#swr.cn-east-3.myhuaweicloud.com/kubesre/docker.io/calico/kube-controllers:v3.20.6#g' calico.yaml
sed -i 's#docker.io/calico/pod2daemon-flexvol:v3.20.6#swr.cn-east-3.myhuaweicloud.com/kubesre/docker.io/calico/pod2daemon-flexvol:v3.20.6#g' calico.yaml

# 3. Deploy Calico
kubectl apply -f calico.yaml

# 4. Wait for Calico to come up (about 2-5 minutes, until every Pod is Running)
kubectl get pods -n kube-system -w | grep calico

# 5. Confirm the node has turned Ready
kubectl get nodes

# 1. Check the Master node (expect STATUS Ready, ROLES control-plane,master)
kubectl get nodes
# Expected output:
# NAME             STATUS   ROLES                  AGE   VERSION
# 192.168.19.143   Ready    control-plane,master   10m   v1.23.0

# 2. Check the control-plane components (every Pod in Running state)
kubectl get pods -n kube-system
# Expected output (all core components Running):
# NAME                                       READY   STATUS    RESTARTS   AGE
# calico-kube-controllers-869dff5c6d-zlc88   1/1     Running   0          8m
# calico-node-djsf6                          1/1     Running   0          8m
# coredns-6d8c4cb4d-j9prx                    1/1     Running   0          10m
# etcd-192.168.19.143                        1/1     Running   0          10m
# kube-apiserver-192.168.19.143              1/1     Running   0          10m
# kube-controller-manager-192.168.19.143     1/1     Running   0          10m
# kube-proxy-m8bwn                           1/1     Running   0          10m
# kube-scheduler-192.168.19.143              1/1     Running   0          10m

That completes the Master setup.

VII. Join the Worker Nodes (run on Worker nodes only)

1. Run the join command generated on the Master

# Paste the join command saved from the Master init (example below; replace with your own)
kubeadm join 192.168.19.143:6443 --token dnbzfo.7iuqo03wuhedalaq \
        --discovery-token-ca-cert-hash sha256:ac67b3dda4ac71ae3b95b11a9296f74930a0af66b82747738a4c8235c79c1cf7

Key success output:

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
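One caveat: the bootstrap token in a saved join command expires after 24 hours by default, so a join that fails with an authentication error usually means the token must be regenerated on the Master with `kubeadm token create --print-join-command`. Before running a saved command, its pieces can also be eyeballed with a small parser. `parse_join_cmd` is a hypothetical helper; the token and hash are the example values from above:

```shell
# Hypothetical helper: pull the endpoint, token, and CA hash out of a saved
# `kubeadm join` command so they can be checked before use.
parse_join_cmd() {
  local cmd="$1"
  echo "endpoint: $(echo "$cmd" | grep -oE '[0-9.]+:[0-9]+' | head -n1)"
  echo "token:    $(echo "$cmd" | sed -n 's/.*--token \([^ ]*\).*/\1/p')"
  echo "ca-hash:  $(echo "$cmd" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')"
}

saved='kubeadm join 192.168.19.143:6443 --token dnbzfo.7iuqo03wuhedalaq --discovery-token-ca-cert-hash sha256:ac67b3dda4ac71ae3b95b11a9296f74930a0af66b82747738a4c8235c79c1cf7'
parse_join_cmd "$saved"
```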

2. Verify the Worker joined (run on the Master)

# List all nodes (the Worker shows STATUS Ready with empty ROLES)
kubectl get nodes
# Expected output:
# NAME             STATUS   ROLES                  AGE   VERSION
# 192.168.19.143   Ready    control-plane,master   15m   v1.23.0
# 192.168.19.144   Ready    <none>                 5m    v1.23.0

# Check the Worker's calico-node Pod (STATUS Running)
kubectl get pods -n kube-system -o wide | grep calico-node
# Expected output (includes the Worker node's IP):
# calico-node-djsf6                          1/1     Running   0          13m   192.168.19.143   192.168.19.143
# calico-node-w2t6p                          1/1     Running   0          5m    192.168.19.144   192.168.19.144