### Prepare the Test Environment
Start 3 master nodes plus 3 worker nodes. Use an odd number of master nodes so that etcd can always form a majority, which prevents the "split-brain" problem.
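The odd-number requirement comes down to quorum arithmetic: etcd needs a strict majority of members to accept writes, so an even member count adds no extra fault tolerance. A quick standalone sketch (not part of the provisioning scripts):

```shell
# A cluster of n etcd members needs a quorum of n/2+1 and
# therefore tolerates n - (n/2+1) member failures.
for n in 1 2 3 4 5; do
  q=$(( n / 2 + 1 ))
  echo "members=$n quorum=$q tolerated_failures=$(( n - q ))"
done
```

Note that 3 members and 4 members both tolerate exactly one failure, which is why 3 is the usual choice.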
```ruby
# Number of VMs to create
num_masters = 3
num_nodes = 3

Vagrant.configure("2") do |config|
  # Create the master VMs in a loop
  (1..num_masters).each do |i|
    config.vm.define "master#{i}" do |master|
      # Base box image
      master.vm.box = "centos7"
      # Network configuration: static private IP
      master.vm.network "private_network", ip: "192.168.33.#{10 + i}"
      # Shared folder with real-time sync
      master.vm.synced_folder "share", "/vagrant", type: "virtualbox"
      # Provisioning script run after the VM is created
      master.vm.provision "shell", path: "bootstrap.sh"
      # VM resource settings
      master.vm.provider "virtualbox" do |vb|
        # 2 CPU cores
        vb.customize ["modifyvm", :id, "--cpus", "2"]
        # 4 GB of memory (4096 MB)
        vb.customize ["modifyvm", :id, "--memory", "4096"]
      end
    end
  end

  # Create the worker (node) VMs in a loop
  (1..num_nodes).each do |i|
    config.vm.define "node#{i}" do |node|
      # Base box image
      node.vm.box = "centos7"
      # Network configuration: static private IP
      node.vm.network "private_network", ip: "192.168.33.#{100 + i}"
      # Shared folder with real-time sync
      node.vm.synced_folder "share", "/vagrant", type: "virtualbox"
      # Provisioning script run after the VM is created
      node.vm.provision "shell", path: "bootstrap.sh"
    end
  end
end
```
- Edit `/etc/ssh/sshd_config` to allow remote root login (acceptable for a test environment only).
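A minimal way to script that edit (a sketch for throwaway test VMs only; the `sed` patterns assume a stock CentOS 7 `sshd_config`):

```shell
# Test environment only: allow root to log in over SSH with a password.
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
# Restart sshd so the change takes effect.
sudo systemctl restart sshd
```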
### Switch to a Domestic yum Mirror

- `cd /etc/yum.repos.d/`
- Clean the cache: `sudo yum clean all`
- Rebuild the cache: `sudo yum makecache`
- Check the repository list: `yum repolist`
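The steps above clean and rebuild the cache but do not show the actual mirror swap. One common approach (assuming the Aliyun CentOS 7 mirror; back up the original file first) is:

```shell
# Back up the stock repo file, then replace it with the Aliyun mirror definition.
sudo mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
sudo curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
```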
### Real-Time Shared Folders with VirtualBox

Install the `vagrant-vbguest` plugin so the VirtualBox Guest Additions stay in sync with the host:

`vagrant plugin install vagrant-vbguest`
### Install Docker

- **Remove old versions**

  If you have installed an older version of Docker before, uninstall it first:

  ```bash
  sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
  ```

- **Set up the repository**

  Install the required packages:

  ```bash
  sudo yum install -y yum-utils device-mapper-persistent-data lvm2
  ```

  Add the stable Docker repository:

  ```bash
  sudo yum-config-manager \
      --add-repo \
      https://download.docker.com/linux/centos/docker-ce.repo
  ```

- **Install Docker Engine**

  Install the latest version of Docker Engine and containerd:

  ```bash
  sudo yum install docker-ce docker-ce-cli containerd.io
  ```

  To install a specific version, first list the available versions:

  ```bash
  yum list docker-ce --showduplicates | sort -r
  ```

  Then install the chosen version, for example:

  ```bash
  sudo yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io
  ```

- **Start Docker and enable it at boot**

  Start the Docker service:

  ```bash
  sudo systemctl start docker
  ```

  Enable Docker to start at boot:

  ```bash
  sudo systemctl enable docker
  ```

- **Verify the Docker installation**

  Run the following command to verify that Docker was installed successfully:

  ```bash
  sudo docker run hello-world
  ```
### Install Kubernetes (k8s)

- **Disable SELinux**

  Temporarily disable SELinux:

  ```bash
  sudo setenforce 0
  ```

  To disable it permanently, edit `/etc/selinux/config` and change `SELINUX=enforcing` to `SELINUX=permissive`.

- **Disable swap**

  Temporarily disable swap:

  ```bash
  sudo swapoff -a
  ```

  To disable it permanently, edit `/etc/fstab` and comment out the line containing `swap`.
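Both permanent edits can also be scripted, which is convenient in a provisioning script (a sketch; the `sed` patterns assume stock CentOS 7 files):

```shell
# Persist SELinux in permissive mode across reboots.
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Comment out every non-comment fstab entry that mounts swap.
sudo sed -ri 's@^([^#].*\bswap\b.*)$@#\1@' /etc/fstab
```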
- **Add the Kubernetes repository**

  Create the file `/etc/yum.repos.d/kubernetes.repo` with the following content:

  ```ini
  [kubernetes]
  name=Kubernetes
  baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
  enabled=1
  gpgcheck=1
  repo_gpgcheck=1
  gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  ```

- **Install kubelet, kubeadm and kubectl**

  ```bash
  sudo yum install -y kubelet kubeadm kubectl
  sudo systemctl enable --now kubelet
  ```

- **Configure containerd**

  Dump the current configuration to a file:

  ```bash
  containerd config dump > /etc/containerd/config.toml
  ```
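A common follow-up when using kubeadm with an Aliyun image mirror (an assumption on my part, not a step from the original notes) is to switch containerd to the systemd cgroup driver and point the pause image at the same mirror, then restart the service:

```shell
# Assumption: the dumped config contains these default values; adjust patterns to your file.
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo sed -i 's#registry.k8s.io/pause#registry.aliyuncs.com/google_containers/pause#' /etc/containerd/config.toml
sudo systemctl restart containerd
```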
- **Initialize the Kubernetes control plane (master node only)**

  Create the control-plane configuration file (e.g. `/vagrant/cluster.yaml`):

  ```yaml
  apiVersion: kubeadm.k8s.io/v1beta3
  kind: ClusterConfiguration
  kubernetesVersion: stable
  controlPlaneEndpoint: "192.168.33.11:6443"
  networking:
    podSubnet: "10.244.0.0/16"
    serviceSubnet: "10.96.0.0/12"
  apiServer:
    certSANs:
      - "192.168.33.11"
      - "10.96.0.1"
      - "10.0.2.15"
      - "localhost"
  etcd:
    local:
      dataDir: /var/lib/etcd
  imageRepository: registry.aliyuncs.com/google_containers
  ```

  Run the following command to initialize the control plane:

  ```bash
  sudo kubeadm init --config=/vagrant/cluster.yaml -v=5
  ```

  On success the output ends with:

  ```
  ............
  Your Kubernetes control-plane has initialized successfully!

  To start using your cluster, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

  Alternatively, if you are the root user, you can run:

    export KUBECONFIG=/etc/kubernetes/admin.conf

  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/

  You can now join any number of control-plane nodes by copying certificate
  authorities and service account keys on each node and then running the
  following as root:

    kubeadm join 192.168.33.11:6443 --token 6b0q98.cr8mshftuj7pm0hs \
        --discovery-token-ca-cert-hash sha256:ac9d8d7a96648e3780baa15bac1eab7a423dbba9c44aa8476fca7d3760320bb7 \
        --control-plane

  Then you can join any number of worker nodes by running the following on
  each as root:

    kubeadm join 192.168.33.11:6443 --token 6b0q98.cr8mshftuj7pm0hs \
        --discovery-token-ca-cert-hash sha256:ac9d8d7a96648e3780baa15bac1eab7a423dbba9c44aa8476fca7d3760320bb7
  ```

  After initialization completes, set up `kubectl` as prompted:

  ```bash
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  ```
- **Install a network plugin (e.g. Flannel)**

  Run the following command to install the Flannel network plugin:

  ```bash
  kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
  ```
- **Join worker nodes (run on each worker node)**

  After the control plane finishes initializing, the master prints a `kubeadm join` command; run it on each worker node to add the node to the Kubernetes cluster.
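If the printed join command was lost, or its bootstrap token has expired (tokens are valid for 24 hours by default), a fresh one can be generated on the master:

```shell
# Prints a complete, ready-to-run "kubeadm join ..." command for worker nodes.
kubeadm token create --print-join-command
```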
### Verify the Installation

Run the following command on the master node to check the status of the cluster nodes:

```bash
kubectl get nodes
```

If everything is working, all nodes should show a status of `Ready`.
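Beyond node status, it is worth confirming that the control-plane components and the Flannel pods are actually running (a supplementary check, not part of the original steps):

```shell
# Every pod across all namespaces should eventually reach Running status.
kubectl get pods -A -o wide
```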