Cluster configuration
Configuration list
- OS: Ubuntu 20.04
- Kubernetes: 1.29.1
- Container runtime: containerd 1.7.11
- OCI runtime: runc 1.1.10
- CNI plugins: 1.4.0
Cluster plan

| IP | Hostname | Spec |
|---|---|---|
| 192.168.254.130 | master01 | 2C 4G 30G |
| 192.168.254.131 | master02 | 2C 4G 30G |
| 192.168.254.132 | node01 | 2C 4G 30G |
Cluster network plan
- Pod network: 10.244.0.0/16
- Service network: 10.96.0.0/12
- Node network: 192.168.254.0/24
Environment initialization
Host configuration
sh
ssh-keygen
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.254.131
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.254.132
# Add all nodes to /etc/hosts
cat << EOF >> /etc/hosts
192.168.254.130 master01
192.168.254.131 master02
192.168.254.132 node01
EOF
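Before moving on, it may be worth confirming that name resolution and passwordless SSH actually work from master01. A minimal check, assuming the three hostnames above, could be:
sh
# Each command should print the remote hostname without prompting for a password
for h in master01 master02 node01; do
  ssh -o BatchMode=yes root@$h hostname
done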
Configure a highly available API server
Install nginx
Run on all master nodes.
sh
apt install nginx -y
systemctl status nginx
# Edit the nginx configuration file
cat /etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
# Add this stream block; leave everything else at its defaults
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 192.168.254.130:6443; # master01 IP and port 6443
        server 192.168.254.131:6443; # master02 IP and port 6443
    }
    server {
        listen 16443; # nginx shares the host with kube-apiserver, so it must not listen on 6443
        proxy_pass k8s-apiserver; # TCP reverse proxy to the apiserver upstream
    }
}
......
# Restart the nginx service
systemctl restart nginx && systemctl enable nginx && systemctl status nginx
# Port check
# netstat -lntup | grep 16443
nc -l -p 16443
# nc fails with "Address already in use", which confirms nginx is already listening on 16443
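If you prefer a check that does not try to bind the port itself, something like the following should show nginx holding 16443 (a sketch using ss and nc, both available on Ubuntu 20.04):
sh
ss -lntp | grep 16443    # should list an nginx process
nc -vz 127.0.0.1 16443   # should report that the connection succeeded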
Install Keepalived
Run on all master nodes.
sh
apt install keepalived -y
# Write the nginx health-check script (quote EOF so $counter and $(...) are not expanded while writing the file)
cat << 'EOF' > /etc/keepalived/nginx_check.sh
#!/bin/bash
# 1. Check whether nginx is running
counter=$(ps -C nginx --no-header | wc -l)
if [ $counter -eq 0 ]; then
    # 2. If not, try to start it (nginx was installed via apt)
    systemctl start nginx
    sleep 2
    # 3. Wait 2 seconds and check again
    counter=$(ps -C nginx --no-header | wc -l)
    # 4. If nginx is still down, stop keepalived so the VIP fails over to the backup node
    if [ $counter -eq 0 ]; then
        killall keepalived
    fi
fi
EOF
chmod +x /etc/keepalived/nginx_check.sh
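Before handing the script over to keepalived, a quick manual run helps catch path or syntax mistakes (nginx should stay running and nothing should get killed):
sh
bash -n /etc/keepalived/nginx_check.sh && echo "syntax OK"
bash /etc/keepalived/nginx_check.sh
ps -C nginx --no-header | wc -l   # should print a number greater than 0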
Keepalived configuration on master01:
sh
cat << EOF > /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"  ## path to the nginx health-check script
    interval 2                               ## check interval in seconds
    weight -20                               ## lower the priority by 20 when the check fails
}
vrrp_instance VI_1 {
    state MASTER              ## MASTER on the primary node, BACKUP on the standby
    interface ens33           ## interface to bind the VIP to; the same interface that carries the node IP
    virtual_router_id 100     ## virtual router id; must match on both nodes
    priority 100              ## node priority, range 0-254; MASTER must be higher than BACKUP
    advert_int 1
    authentication {          ## authentication; must be identical on both nodes
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_nginx             ## run the nginx health check
    }
    virtual_ipaddress {
        192.168.254.100       ## VIP; must be the same on both nodes (multiple entries allowed)
    }
}
EOF
systemctl restart keepalived && systemctl enable keepalived.service
ip a | grep 192.168.254.100
Keepalived configuration on master02:
sh
cat << EOF > /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"  ## path to the nginx health-check script
    interval 2                               ## check interval in seconds
    weight -20                               ## lower the priority by 20 when the check fails
}
vrrp_instance VI_1 {
    state BACKUP              ## MASTER on the primary node, BACKUP on the standby
    interface ens33           ## interface to bind the VIP to; the same interface that carries the node IP
    virtual_router_id 100     ## virtual router id; must match on both nodes
    priority 90               ## node priority, range 0-254; MASTER must be higher than BACKUP
    advert_int 1
    authentication {          ## authentication; must be identical on both nodes
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_nginx             ## run the nginx health check
    }
    virtual_ipaddress {
        192.168.254.100       ## VIP; must be the same on both nodes (multiple entries allowed)
    }
}
EOF
systemctl restart keepalived && systemctl enable keepalived.service
ip a | grep 192.168.254.100
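With both nodes configured, a simple failover test is to stop keepalived on master01 and confirm the VIP moves to master02 within a few seconds (a sketch; restore master01 afterwards):
sh
# On master01
systemctl stop keepalived
# On master02: the VIP should now be present
ip a | grep 192.168.254.100
# On master01: bring it back; the VIP returns because master01 has the higher priority
systemctl start keepalived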
Installation scripts
**Prerequisite:** the script pulls resources hosted abroad, so you need a working proxy ==> [如何让虚拟机拥有愉快网络环境](https://ai-feier.github.io/p/如何让虚拟机拥有愉快网络环境/)
You need:
- a proxy for the VM
- a proxy for apt downloads
Script that requires a proxy
Run the following script on all nodes.
What the script does:
- time synchronization
- disable swap
- enable the required kernel modules
- install ipvs and enable the related kernel parameters
- install containerd, runc and the CNI plugins
- switch the containerd sandbox image and cgroup driver, and configure registry mirrors
- install the latest kubelet, kubeadm and kubectl from the v1.29 repository
Note: first set the current node's hostname via export name=master01
sh
export name=master01 # change to this node's hostname; delete this line inside the script itself
#!/bin/bash
hostnamectl set-hostname $name
# Switch to the Aliyun apt mirror
mv /etc/apt/sources.list /etc/apt/sources.list.bak
cat <<EOF > /etc/apt/sources.list
deb https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
EOF
apt update
# Time synchronization
timedatectl set-timezone Asia/Shanghai
# Install chrony to sync time over the network
apt install chrony -y && systemctl enable --now chronyd
# Disable swap
sudo swapoff -a && sed -i '/swap/s/^/#/' /etc/fstab
# Install ipvs tooling
apt install -y ipset ipvsadm
# Kernel modules required by Kubernetes
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# Load the modules now
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the sysctl parameters
sudo sysctl --system
# Verify the configuration
#lsmod | grep br_netfilter
#lsmod | grep overlay
#sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# Configure the ipvs kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# Load the ipvs modules now
sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_sh
sudo modprobe nf_conntrack
# Confirm the ipvs modules are loaded
#lsmod |grep -e ip_vs -e nf_conntrack
# Install containerd
wget -c https://github.com/containerd/containerd/releases/download/v1.7.11/containerd-1.7.11-linux-amd64.tar.gz
tar -xzvf containerd-1.7.11-linux-amd64.tar.gz
# The tarball contains a bin/ directory holding the containerd binaries
mv bin/* /usr/local/bin/
rm -rf bin
# Manage containerd with systemd
cat << EOF > /usr/lib/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable --now containerd
#systemctl status containerd
# Install runc
# runc is the low-level OCI runtime; it implements the commands (init, run, create, ps, ...) needed to actually run containers
curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.10/runc.amd64 && \
install -m 755 runc.amd64 /usr/local/sbin/runc
# Install the CNI plugins
wget -c https://github.com/containernetworking/plugins/releases/download/v1.4.0/cni-plugins-linux-amd64-v1.4.0.tgz
# Per the official install steps, create a directory for the CNI plugins
mkdir -p /opt/cni/bin
tar -xzvf cni-plugins-linux-amd64-v1.4.0.tgz -C /opt/cni/bin/
# Adjust the containerd configuration
# containerd pulls images from registry.k8s.io by default, so the config needs changes
# Create a directory for the containerd config file
mkdir -p /etc/containerd
# Dump the default configuration to a file
containerd config default | sudo tee /etc/containerd/config.toml
# Change the sandbox (pause) image to a mirror
sed -i 's#sandbox_image = "registry.k8s.io/pause:.*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
# Switch the cgroup driver to systemd
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
# Point containerd at a registry mirror directory
sed -i 's#config_path = ""#config_path = "/etc/containerd/certs.d"#' /etc/containerd/config.toml
# Configure containerd registry mirrors
# Docker Hub mirrors
mkdir -p /etc/containerd/certs.d/docker.io
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://dockerproxy.com"]
capabilities = ["pull", "resolve"]
[host."https://docker.m.daocloud.io"]
capabilities = ["pull", "resolve"]
[host."https://reg-mirror.qiniu.com"]
capabilities = ["pull", "resolve"]
[host."https://registry.docker-cn.com"]
capabilities = ["pull", "resolve"]
[host."http://hub-mirror.c.163.com"]
capabilities = ["pull", "resolve"]
EOF
# k8s.gcr.io mirror
mkdir -p /etc/containerd/certs.d/k8s.gcr.io
tee /etc/containerd/certs.d/k8s.gcr.io/hosts.toml << 'EOF'
server = "https://k8s.gcr.io"
[host."https://k8s-gcr.m.daocloud.io"]
capabilities = ["pull", "resolve", "push"]
EOF
# Restart containerd
systemctl restart containerd
#systemctl status containerd
# Install kubeadm, kubelet and kubectl
# Install dependencies
sudo systemctl restart containerd
sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
# Start kubelet on boot
systemctl enable --now kubelet
# Point crictl at the containerd socket
crictl config runtime-endpoint unix:///run/containerd/containerd.sock
crictl config image-endpoint unix:///run/containerd/containerd.sock
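Once the script finishes, the installed components can be spot-checked on each node; the versions should match the list at the top of this post:
sh
containerd --version
runc --version
ls /opt/cni/bin
kubeadm version -o short
kubelet --version
crictl version   # RuntimeName should be containerd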
Script that does not require a proxy
Prerequisite:
Download the resource bundle I prepared.
Resource list:
| Resource | Original URL |
|---|---|
| Container runtime: containerd 1.7.11 | https://github.com/containerd/containerd/releases/download/v1.7.11/containerd-1.7.11-linux-amd64.tar.gz |
| OCI runtime: runc 1.1.10 | https://github.com/opencontainers/runc/releases/download/v1.1.10/runc.amd64 |
| CNI plugins 1.4.0 | https://github.com/containernetworking/plugins/releases/download/v1.4.0/cni-plugins-linux-amd64-v1.4.0.tgz |
| Calico 3.27: tigera-operator.yaml | https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml |
| Calico 3.27: custom-resources.yaml | https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml |
Download the resources:
sh
wget -O k8s1.29.tar.gz https://blog-source-mkt.oss-cn-chengdu.aliyuncs.com/resources/k8s/kubeadm%20init/k8s1.29.tar.gz
tar xzvf k8s1.29.tar.gz
cd workdir
export name=master01 # change to this node's hostname
Run the following script on all nodes.
What the script does:
- time synchronization
- disable swap
- enable the required kernel modules
- install ipvs and enable the related kernel parameters
- install containerd, runc and the CNI plugins
- switch the containerd sandbox image and cgroup driver, and configure registry mirrors
- install the latest kubelet, kubeadm and kubectl from the v1.29 repository
Note: first set the current node's hostname via export name=master01
sh
#!/bin/bash
hostnamectl set-hostname $name
# Switch to the Aliyun apt mirror
mv /etc/apt/sources.list /etc/apt/sources.list.bak
cat <<EOF > /etc/apt/sources.list
deb https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
EOF
apt update
# Time synchronization
timedatectl set-timezone Asia/Shanghai
# Install chrony to sync time over the network
apt install chrony -y && systemctl enable --now chronyd
# Disable swap
sudo swapoff -a && sed -i '/swap/s/^/#/' /etc/fstab
# Install ipvs tooling
apt install -y ipset ipvsadm
# Kernel modules required by Kubernetes
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# Load the modules now
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the sysctl parameters
sudo sysctl --system
# Verify the configuration
#lsmod | grep br_netfilter
#lsmod | grep overlay
#sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# Configure the ipvs kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# Load the ipvs modules now
sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_sh
sudo modprobe nf_conntrack
# Confirm the ipvs modules are loaded
#lsmod |grep -e ip_vs -e nf_conntrack
# Install containerd
#wget -c https://github.com/containerd/containerd/releases/download/v1.7.11/containerd-1.7.11-linux-amd64.tar.gz
tar -xzvf containerd-1.7.11-linux-amd64.tar.gz
# The tarball contains a bin/ directory holding the containerd binaries
mv bin/* /usr/local/bin/
rm -rf bin
# Manage containerd with systemd
cat << EOF > /usr/lib/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable --now containerd
#systemctl status containerd
# Install runc
# runc is the low-level OCI runtime; it implements the commands (init, run, create, ps, ...) needed to actually run containers
#curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.10/runc.amd64 && \
install -m 755 runc.amd64 /usr/local/sbin/runc
# Install the CNI plugins
#wget -c https://github.com/containernetworking/plugins/releases/download/v1.4.0/cni-plugins-linux-amd64-v1.4.0.tgz
# Per the official install steps, create a directory for the CNI plugins
mkdir -p /opt/cni/bin
tar -xzvf cni-plugins-linux-amd64-v1.4.0.tgz -C /opt/cni/bin/
# Adjust the containerd configuration
# containerd pulls images from registry.k8s.io by default, so the config needs changes
# Create a directory for the containerd config file
mkdir -p /etc/containerd
# Dump the default configuration to a file
containerd config default | sudo tee /etc/containerd/config.toml
# Change the sandbox (pause) image to a mirror
sed -i 's#sandbox_image = "registry.k8s.io/pause:.*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
# Switch the cgroup driver to systemd
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
# Point containerd at a registry mirror directory
sed -i 's#config_path = ""#config_path = "/etc/containerd/certs.d"#' /etc/containerd/config.toml
# Configure containerd registry mirrors
# Docker Hub mirrors
mkdir -p /etc/containerd/certs.d/docker.io
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://dockerproxy.com"]
capabilities = ["pull", "resolve"]
[host."https://docker.m.daocloud.io"]
capabilities = ["pull", "resolve"]
[host."https://reg-mirror.qiniu.com"]
capabilities = ["pull", "resolve"]
[host."https://registry.docker-cn.com"]
capabilities = ["pull", "resolve"]
[host."http://hub-mirror.c.163.com"]
capabilities = ["pull", "resolve"]
EOF
# k8s.gcr.io mirror
mkdir -p /etc/containerd/certs.d/k8s.gcr.io
tee /etc/containerd/certs.d/k8s.gcr.io/hosts.toml << 'EOF'
server = "https://k8s.gcr.io"
[host."https://k8s-gcr.m.daocloud.io"]
capabilities = ["pull", "resolve", "push"]
EOF
# Restart containerd
systemctl restart containerd
#systemctl status containerd
# Install kubeadm, kubelet and kubectl
# Install dependencies
sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
# Start kubelet on boot
systemctl enable --now kubelet
# Point crictl at the containerd socket
crictl config runtime-endpoint unix:///run/containerd/containerd.sock
crictl config image-endpoint unix:///run/containerd/containerd.sock
sh
chmod +x install.sh
./install.sh
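The hostname must be exported before every run, so on the other nodes the invocation looks slightly different (assuming the bundle ships the script as install.sh inside workdir/):
sh
# On master02
export name=master02 && ./install.sh
# On node01
export name=node01 && ./install.sh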
Initialize master01
Export environment variables
sh
export K8S_VERSION=1.29.1 # Kubernetes version
export POD_CIDR=10.244.0.0/16 # pod CIDR
export SERVICE_CIDR=10.96.0.0/12 # service CIDR
export APISERVER_MASTER01=192.168.254.130 # master01 IP
export APISERVER_HA=192.168.254.100 # cluster VIP
export APISERVER_HA_PORT=16443 # cluster VIP port
Initialize the cluster on your primary master node (still inside workdir/):
sh
# CLI-style init (kube-proxy would then have to be switched to ipvs mode by hand afterwards):
# kubeadm init --apiserver-advertise-address=$APISERVER_MASTER01 --apiserver-bind-port=6443 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.29.1 --service-cidr=$SERVICE_CIDR --pod-network-cidr=$POD_CIDR --upload-certs
# kubeadm config print init-defaults > Kubernetes-cluster.yaml # print the kubeadm defaults
cat << EOF > Kubernetes-cluster.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # Replace with the primary master's IP; etcd binds to this address and fails if it does not exist on the host
  advertiseAddress: $APISERVER_MASTER01
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: $name # this node's hostname (the name variable must still be exported)
  taints: null
---
# controlPlaneEndpoint points at the highly available API server endpoint
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs: # extra SANs for the API server certificate
  - $APISERVER_HA
  - $APISERVER_MASTER01
apiVersion: kubeadm.k8s.io/v1beta3
controlPlaneEndpoint: "$APISERVER_HA:$APISERVER_HA_PORT"
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd: # an external etcd cluster could be used instead
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # China mirror registry
kind: ClusterConfiguration
kubernetesVersion: $K8S_VERSION
networking:
  dnsDomain: cluster.local
  # added: pod CIDR
  podSubnet: $POD_CIDR
  serviceSubnet: $SERVICE_CIDR
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs # run kube-proxy in ipvs mode
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
EOF
kubeadm init --config Kubernetes-cluster.yaml --upload-certs
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Install Calico
sed -i 's#cidr.*#cidr: '$POD_CIDR'#' custom-resources.yaml
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml
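Calico needs a few minutes to come up, and the node stays NotReady until it does; progress can be watched with something like:
sh
kubectl get nodes                        # NotReady until Calico is running
watch kubectl get pods -n calico-system
kubectl get tigerastatus                 # available once the operator has created the CRD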
--upload-certs uploads the control-plane certificates to the kubeadm-certs Secret.
In short: you do not need to copy the cluster certificates to the other master nodes by hand.
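The uploaded certificates are encrypted with the certificate key printed by kubeadm init and expire after about two hours; if the key is lost or has expired, it can be regenerated on master01 with:
sh
# Re-upload the control-plane certificates and print a fresh certificate key
kubeadm init phase upload-certs --upload-certs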
Configure shell completion
sh
apt install bash-completion -y
cat << EOF >> ~/.profile
alias k='kubectl'
source <(kubectl completion bash)
complete -F __start_kubectl k
EOF
source ~/.profile
Join the remaining nodes
master02:
sh
kubeadm join 192.168.254.100:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:6c9f43be739919e1e03abaa3d0deae00bc2400f77dc7574e338dc6460be2eab6 \
--control-plane --certificate-key 02feec260870e7145d69b65d0252f1067768c193d9e8c4aba31ed1b1fa7aaba8
node01:
sh
kubeadm join 192.168.254.100:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:6c9f43be739919e1e03abaa3d0deae00bc2400f77dc7574e338dc6460be2eab6
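The token and hash above come straight from the kubeadm init output; if they have expired, a new join command can be generated on master01:
sh
# Prints a fresh worker join command
kubeadm token create --print-join-command
# For an additional control-plane node, append --control-plane and a certificate key
# obtained via: kubeadm init phase upload-certs --upload-certs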
Verify the cluster
sh
$ k get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-system calico-kube-controllers-75f84bf8b4-96hht 0/1 ContainerCreating 0 6m19s
calico-system calico-node-4cd7c 0/1 PodInitializing 0 105s
calico-system calico-node-7z22c 0/1 PodInitializing 0 109s
calico-system calico-node-pcq8m 0/1 Running 0 6m19s
calico-system calico-typha-65b78b8f8d-r2qjn 1/1 Running 0 100s
calico-system calico-typha-65b78b8f8d-vv4ph 1/1 Running 0 6m19s
calico-system csi-node-driver-bsd66 0/2 ContainerCreating 0 105s
calico-system csi-node-driver-h465x 0/2 ContainerCreating 0 109s
calico-system csi-node-driver-htqj2 0/2 ContainerCreating 0 6m19s
kube-system coredns-857d9ff4c9-nk4kx 1/1 Running 0 6m40s
kube-system coredns-857d9ff4c9-w6zff 1/1 Running 0 6m40s
kube-system etcd-master01 1/1 Running 0 6m53s
kube-system etcd-master02 1/1 Running 0 97s
kube-system kube-apiserver-master01 1/1 Running 0 6m53s
kube-system kube-apiserver-master02 1/1 Running 0 98s
kube-system kube-controller-manager-master01 1/1 Running 0 6m53s
kube-system kube-controller-manager-master02 1/1 Running 0 97s
kube-system kube-proxy-7mwpd 1/1 Running 0 109s
kube-system kube-proxy-gfcqb 1/1 Running 0 6m40s
kube-system kube-proxy-vkkm4 1/1 Running 0 105s
kube-system kube-scheduler-master01 1/1 Running 0 6m53s
kube-system kube-scheduler-master02 1/1 Running 0 99s
tigera-operator tigera-operator-55585899bf-xssq5 1/1 Running 0 6m40s
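Besides pod status, it is worth confirming that all nodes are Ready and that kube-proxy is really running in ipvs mode:
sh
k get nodes -o wide
# should show mode: ipvs
kubectl -n kube-system get cm kube-proxy -o yaml | grep "mode:"
# the ipvs virtual server table should contain entries for the service IPs
ipvsadm -Ln | head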
References: