Deploying a Kubernetes Cluster on Ubuntu 22.04 with kubeadm

Deploy a Kubernetes cluster with kubeadm: one master and two worker nodes.

1. Environment information

  • OS: Ubuntu 22.04
  • Memory: 16 GB
  • CPU: 4 cores
  • Network: nodes can reach each other and the Internet

hostname   ip              role
node01     10.121.218.50   master
node02     10.121.218.49   worker
node03     10.121.218.48   worker

2. Preparation

Basic configuration:

bash
# Time synchronization
sudo apt -y install chrony
sudo systemctl enable chrony && sudo systemctl start chrony
sudo chronyc sources -v

# Set the timezone
sudo timedatectl set-timezone Asia/Shanghai

# Edit the hosts file
sudo vim /etc/hosts
# Add the following entries:
10.121.218.50 node01
10.121.218.49 node02
10.121.218.48 node03

# Passwordless SSH (run on node01)
ssh-keygen
ssh-copy-id 10.121.218.50
ssh-copy-id 10.121.218.49
ssh-copy-id 10.121.218.48

# Disable swap
sudo swapoff -a && sudo sed -i '/swap/s/^/#/' /etc/fstab
sudo swapon --show

# Disable the firewall
sudo ufw disable
sudo ufw status
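As a sanity check, the sed expression used to disable swap can be dry-run on a sample fstab line before it touches /etc/fstab (the line below is a made-up example):

```shell
# Dry run: the expression prefixes '#' to any line containing "swap",
# commenting it out. The sample line is hypothetical.
line='/swap.img       none    swap    sw      0       0'
echo "$line" | sed '/swap/s/^/#/'
# prints: #/swap.img       none    swap    sw      0       0
```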

Kernel parameter tuning

bash
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

# Load the modules
sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters:
#   bridge-nf-call-*: pass bridged IPv4/IPv6 traffic to iptables chains
#   ip_forward: enable IPv4 packet forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters
sudo sysctl --system

# Confirm that the br_netfilter and overlay modules are loaded
sudo lsmod | grep br_netfilter
sudo lsmod | grep overlay

# Confirm that net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables
# and net.ipv4.ip_forward are set to 1 in your sysctl configuration
sudo sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

Configure IPVS

bash
# Install the tools
sudo apt install -y ipset ipvsadm

# Load the IPVS kernel modules at boot
cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF

# Load the modules now
sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_sh
sudo modprobe nf_conntrack
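Note that loading these modules does not by itself switch kube-proxy to IPVS mode; kube-proxy defaults to iptables. One way to opt in is an extra KubeProxyConfiguration document in the kubeadm configuration used later (a minimal sketch; the default round-robin scheduler is assumed):

```yaml
# Appended as an additional "---"-separated document in the kubeadm config file
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
```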

Install the container runtime

This guide uses containerd as the container runtime:

bash
# Install containerd
sudo apt install -y containerd

Modify the containerd configuration

Configure containerd to use the systemd cgroup driver and change the sandbox (pause) image source:

bash
# Generate the default containerd configuration
sudo mkdir -p /etc/containerd/
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
# In /etc/containerd/config.toml, set SystemdCgroup to true
sudo sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
sudo grep SystemdCgroup /etc/containerd/config.toml

# Change the sandbox image source
sudo sed -i 's#registry.k8s.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g' /etc/containerd/config.toml
sudo grep sandbox_image /etc/containerd/config.toml

# Configure registry mirrors for containerd
sudo vim /etc/containerd/config.toml
# Add the following (replace <mirror-address> with the address of your mirror/proxy):
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
         [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
            endpoint = ["http://<mirror-address>:8443"]
         [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
            endpoint = ["http://<mirror-address>:8443"]
# Restart containerd
sudo systemctl restart containerd.service

About cgroup drivers:

There are two cgroup drivers, cgroupfs and systemd. The Ubuntu release used here runs systemd as its init system, so both the kubelet and the container runtime are configured to use the systemd cgroup driver.
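In a kubeadm setup the kubelet side can be made explicit with a KubeletConfiguration document in the init configuration; since Kubernetes v1.22 kubeadm already defaults the kubelet to the systemd driver, so this is optional:

```yaml
# Additional "---"-separated document in the kubeadm init config;
# must agree with containerd's SystemdCgroup = true
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```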

Install kubeadm, kubelet and kubectl

bash
# Install dependencies
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# Add the Kubernetes signing key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Add the Kubernetes apt repository, using the Aliyun mirror
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Refresh the apt index
sudo apt update

# List the available versions
apt-cache madison kubeadm

# Without an explicit version the latest is installed; this guide installs 1.28.2
sudo apt-get install -y kubelet kubeadm kubectl

# Hold the packages so that apt upgrade does not update them
sudo apt-mark hold kubelet kubeadm kubectl

# kubectl command completion
sudo apt install -y bash-completion
kubectl completion bash | sudo tee /etc/profile.d/kubectl_completion.sh > /dev/null
. /etc/profile.d/kubectl_completion.sh

3. Install the Kubernetes cluster

Prepare the images

bash
# List the required image versions
kubeadm config images list

# List the images available from the Aliyun mirror
kubeadm config images list --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers

# Pull the images from the Aliyun mirror
kubeadm config images pull --kubernetes-version=v1.28.2 --image-repository registry.aliyuncs.com/google_containers

Note:

Aliyun provides two usable mirror repositories: registry.aliyuncs.com/google_containers and registry.cn-hangzhou.aliyuncs.com/google_containers.
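Both repositories mirror the upstream images under the same naming scheme: only the registry.k8s.io prefix changes. A small bash sketch of the mapping (the image names below are illustrative):

```shell
# Swap the upstream registry prefix for the Aliyun mirror namespace.
mirror="registry.aliyuncs.com/google_containers"
for img in registry.k8s.io/kube-apiserver:v1.28.2 registry.k8s.io/pause:3.9; do
  echo "${img/registry.k8s.io/$mirror}"
done
# prints:
# registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
# registry.aliyuncs.com/google_containers/pause:3.9
```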

Initialize the Kubernetes cluster

Initialization supports two methods: command line and configuration file.

  • Configuration file

Generate a configuration file template:

bash
kubeadm config print init-defaults > init.default.yaml

The content of init.default.yaml is shown below; adjust it to your environment (in particular advertiseAddress, name, which should match the node's hostname, imageRepository, kubernetesVersion and podSubnet):

yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.121.218.50
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.28.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Initialize the control-plane node

Initialize the control-plane node, configuration file method:

bash
sudo kubeadm init --config init.default.yaml --upload-certs
# Output after a successful initialization:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
# Join command for the worker nodes
kubeadm join 10.121.218.50:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:d7c612e50d63f81d15ad205eedd722a0338bfdcb6663a5799bbfb86d5fd155a4

After a successful deployment, set up the kubeconfig file:

bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Command line

bash
sudo kubeadm init \
--kubernetes-version=v1.28.2  \
--apiserver-advertise-address=10.121.218.50 \
--image-repository registry.aliyuncs.com/google_containers --v=5 \
--upload-certs \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

Install the network plugin

Deploy the network plugin; Calico is used here:

bash
wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/custom-resources.yaml
vim custom-resources.yaml
......
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16  # must match the --pod-network-cidr / podSubnet value
      encapsulation: VXLANCrossSubnet
......
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml

Join the worker nodes:

bash
kubeadm join 10.121.218.50:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:d7c612e50d63f81d15ad205eedd722a0338bfdcb6663a5799bbfb86d5fd155a4
# Check the cluster nodes (run on the control-plane node)
kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
node     Ready    control-plane   76m     v1.28.2
node02   Ready    <none>          7m13s   v1.28.2
node03   Ready    <none>          6m56s   v1.28.2
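For reference, the --discovery-token-ca-cert-hash value in the join command is the SHA-256 digest of the cluster CA's public key in DER form. The sketch below performs the computation on a throwaway, locally generated CA so it can run anywhere; on the control-plane node you would read /etc/kubernetes/pki/ca.crt instead. If the original token has expired, kubeadm token create --print-join-command prints a fresh join command.

```shell
# Generate a throwaway CA key and self-signed certificate
# (stand-ins for the real cluster CA in /etc/kubernetes/pki/).
openssl genrsa -out demo-ca.key 2048 2>/dev/null
openssl req -x509 -new -key demo-ca.key -subj "/CN=kubernetes" -days 1 -out demo-ca.crt

# SHA-256 of the CA public key in DER form: the join command's hash value
hash=$(openssl x509 -pubkey -noout -in demo-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | sha256sum | awk '{print $1}')
echo "sha256:${hash}"
```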