Kubernetes 1.23 Cluster Setup (Three Machines)

Deploying Kubernetes 1.23 on Ubuntu

  • Resource list

Prepare three Ubuntu servers and configure their networking.

| Hostname | IP | Required software |
| --- | --- | --- |
| master | 192.168.221.21 | Docker CE, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, Etcd, kube-proxy |
| node1 | 192.168.221.22 | Docker CE, kubelet, kube-proxy, Flannel |
| node2 | 192.168.221.23 | Docker CE, kubelet, kube-proxy, Flannel |

Base environment (run on all three machines)

```bash
# Set the hostname (run the matching command on each machine)
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2
# Switch to the root user (skip if you are already root)
su -
# Add hosts entries for name resolution
cat >> /etc/hosts << EOF
192.168.221.21 master
192.168.221.22 node1
192.168.221.23 node2
EOF
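# Optional sanity check (an illustrative helper, not part of the original
# walkthrough): confirm all three entries are present. HOSTS_FILE defaults
# to /etc/hosts but can be pointed at any file.
check_hosts() {
  local file="${HOSTS_FILE:-/etc/hosts}" h
  for h in "$@"; do
    grep -qw "$h" "$file" || { echo "missing: $h"; return 1; }
  done
  echo "all entries present"
}
check_hosts master node1 node2 || true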


### 1. Environment preparation
# 1.1 Install common utilities
# Refresh the package index
apt update
# Install commonly used tools
apt install vim lrzsz unzip wget net-tools tree bash-completion telnet -y
# 1.2 Disable swap - kubeadm does not support swap partitions
# Disable swap for the current boot
swapoff -a
# Also disable it permanently (comments out the swap entry in fstab)
sed -i '/swap/s/^/#/' /etc/fstab
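# Optional check (an illustrative helper, not part of the original steps):
# after the sed above, no uncommented swap entry should remain in fstab.
# FSTAB is parameterized so the check can be pointed at any file.
no_swap_in_fstab() {
  ! grep -qE '^[^#]*[[:space:]]swap[[:space:]]' "${FSTAB:-/etc/fstab}" 2>/dev/null
}
no_swap_in_fstab && echo "fstab: swap disabled" || echo "fstab: swap still active"
# swapon --show should likewise print nothing once swap is off
swapon --show 2>/dev/null || true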
# 1.3 Enable IPv4 forwarding and kernel tuning
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system
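# Supplementary note (hedged, not in the original steps): the net.bridge.*
# keys only exist once the br_netfilter kernel module is loaded, so loading
# it explicitly is a common companion step.
modprobe br_netfilter 2>/dev/null || true
# Quick way to read back a sysctl value straight from /proc
# (read_sysctl is an illustrative helper, not a standard tool):
read_sysctl() { cat "/proc/sys/$(echo "$1" | tr . /)"; }
read_sysctl net.ipv4.ip_forward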
# 1.4 Time synchronization
apt -y install ntpdate
ntpdate ntp.aliyun.com


### 2. Install Docker
# 2.1 Remove leftover Docker packages (skip if Docker was never installed)
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done
# 2.2 Update packages - refresh the package list and upgrade installed software
sudo apt update
sudo apt upgrade
# 2.3 Install Docker's prerequisites
apt-get -y install ca-certificates curl gnupg lsb-release
# 2.4 Add Docker's official GPG key
# an "OK" at the end of the output means the command succeeded
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# 2.5 Add the Docker APT repository
# (interactive: press Enter when prompted)
sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# 2.6 Install Docker 20.10 - newer Docker releases are not compatible with k8s 1.23
apt install docker-ce=5:20.10.14~3-0~ubuntu-jammy docker-ce-cli=5:20.10.14~3-0~ubuntu-jammy containerd.io -y
# 2.7 Configure the docker group (optional)
# By default only root and members of the docker group may run Docker commands;
# adding the current user to the docker group avoids typing sudo every time.
# Note: log out and back in for the change to take effect.
sudo usermod -aG docker $USER
# 2.8 Install helper tools
apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# 2.9 Start and enable Docker
systemctl start docker
systemctl enable docker
# 2.10 Configure Docker registry mirrors
vim /etc/docker/daemon.json
 {
     "registry-mirrors": [
         "https://0c105db5188026850f80c001def654a0.mirror.swr.myhuaweicloud.com",
         "https://registry.docker-cn.com",
         "http://hub-mirror.c.163.com",
         "https://docker.mirrors.ustc.edu.cn",
         "https://kfwkfulq.mirror.aliyuncs.com",
         "https://docker.rainbond.cc",
         "https://5tqw56kt.mirror.aliyuncs.com",
         "https://docker.1panel.live",
         "http://mirrors.ustc.edu.cn",
         "http://mirror.azure.cn",
         "https://hub.rat.dev",
         "https://docker.chenby.cn",
         "https://docker.hpcloud.cloud",
         "https://docker.m.daocloud.io",
         "https://docker.unsee.tech",
         "https://dockerpull.org",
         "https://dockerhub.icu",
         "https://proxy.1panel.live",
         "https://docker.1panel.top",
         "https://docker.1ms.run",
         "https://docker.ketches.cn"
     ],
     "insecure-registries": ["http://192.168.57.200:8099"]
 }
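# Optional check (check_json is an illustrative helper; python3 is assumed
# to be available): daemon.json must be valid JSON, otherwise Docker will
# refuse to restart.
check_json() { python3 -m json.tool "$1" > /dev/null 2>&1 && echo "valid JSON" || echo "invalid JSON"; }
check_json /etc/docker/daemon.json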
# Restart Docker to pick up the new configuration
systemctl daemon-reload
systemctl restart docker

### 3. Deploy the Kubernetes cluster with kubeadm
# 3.1 Configure the Kubernetes APT source (using the aliyun mirror)
# Install prerequisite packages
apt-get install -y apt-transport-https ca-certificates curl
# Download the Kubernetes GPG key
curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg  https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg
# Add the GPG key to APT's keyring
cat /usr/share/keyrings/kubernetes-archive-keyring.gpg |  sudo apt-key add -
# Configure the repository location
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
# Refresh the package index
apt-get update
# 3.2 List the available Kubernetes versions
apt-cache madison kubeadm
# 3.3 Install the kubeadm toolchain - kubectl: command-line management tool; kubeadm: cluster installation tool; kubelet: node agent that manages containers
# Install Kubernetes 1.23: from 1.24 onward Kubernetes no longer supports Docker as the container runtime
apt-get install -y kubelet=1.23.0-00 kubeadm=1.23.0-00 kubectl=1.23.0-00
# Hold the versions to prevent automatic upgrades
apt-mark hold kubelet kubeadm kubectl docker docker-ce docker-ce-cli
# Check the installed versions
kubelet --version
kubeadm version
kubectl version
# 3.4 Enable kubelet at boot
systemctl enable kubelet
```

Initialize the cluster with kubeadm and install the Flannel network plugin (run each step on the node indicated)

```bash
 # 4.1 Generate the default init configuration on the master node
 root@master:~# kubeadm config print init-defaults > init-config.yaml
 # 4.2 Edit the configuration on the master node
 root@master:~# vim init-config.yaml
 +++++++++++++++++++++++++++++++++++++++++++
 apiVersion: kubeadm.k8s.io/v1beta3
 bootstrapTokens:
 - groups:
   - system:bootstrappers:kubeadm:default-node-token
   token: abcdef.0123456789abcdef
   ttl: 24h0m0s
   usages:
   - signing
   - authentication
 kind: InitConfiguration
 localAPIEndpoint:
   advertiseAddress: 192.168.221.21  # IP address of the master node
   bindPort: 6443
 nodeRegistration:
   criSocket: /var/run/dockershim.sock
   imagePullPolicy: IfNotPresent
   name: master   # if a domain name is used here, make sure it resolves; otherwise use the IP address
   taints: null
 ---
 apiServer:
   timeoutForControlPlane: 4m0s
 apiVersion: kubeadm.k8s.io/v1beta3
 certificatesDir: /etc/kubernetes/pki
 clusterName: kubernetes
 controllerManager: {}
 dns: {}
 etcd:
   local:
     dataDir: /var/lib/etcd
 imageRepository: registry.aliyuncs.com/google_containers # the default registry is unreachable from China; use this domestic mirror
 kind: ClusterConfiguration
 kubernetesVersion: 1.23.0   # the Kubernetes version to deploy
 networking:
   dnsDomain: cluster.local
   serviceSubnet: 10.96.0.0/12   # Service network segment, internal to the cluster
   podSubnet: 10.244.0.0/16  # added: Pod network segment; must match the Pod network plugin configuration below
 scheduler: {}
 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 # 4.3 Pull the required images on the master node
 # A pre-packaged image bundle is available on the classroom LAN
 wget http://192.168.57.200/Software/k8s-1.23.tar.xz
 tar -xvf k8s-1.23.tar.xz
 cd k8s-1.23/
 # Import the images in bulk
 for img in `ls *.tar`;do  docker load -i $img;done
```
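A glob-based variant of the import loop above is more robust than iterating over backquoted `ls` output, which splits on whitespace. A sketch; the `LOAD_CMD` override is an illustrative assumption so the loop can be exercised without Docker installed:

```bash
# Import every .tar image archive in the current directory.
# LOAD_CMD defaults to "docker load -i"; override it for a dry run.
batch_load() {
  local img
  for img in ./*.tar; do
    [ -e "$img" ] || continue   # no .tar files present: do nothing
    ${LOAD_CMD:-docker load -i} "$img"
  done
}
batch_load
```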

Supplementary notes:

Alternatively, the images can be pulled from the public internet (choose either this or the classroom LAN bundle):

List the images needed for initialization:

root@master:~# kubeadm config images list --config=init-config.yaml

registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0

registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0

registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0

registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0

registry.aliyuncs.com/google_containers/pause:3.6

registry.aliyuncs.com/google_containers/etcd:3.5.1-0

registry.aliyuncs.com/google_containers/coredns:v1.8.6

Pull the required images:

root@master:~# kubeadm config images pull --config=init-config.yaml

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6

Check the pulled images:

root@master:~# docker images
REPOSITORY                                                        TAG       IMAGE ID       CREATED       SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.23.0   e6bf5ddd4098   2 years ago   135MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.23.0   e03484a90585   2 years ago   112MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.23.0   37c6aeb3663b   2 years ago   125MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.23.0   56c5af1d00b5   2 years ago   53.5MB
registry.aliyuncs.com/google_containers/etcd                      3.5.1-0   25f8c7f3da61   2 years ago   293MB
registry.aliyuncs.com/google_containers/coredns                   v1.8.6    a4ca41631cc7   2 years ago   46.8MB
hello-world                                                       latest    feb5d9fea6a5   2 years ago   13.3kB
registry.aliyuncs.com/google_containers/pause                     3.6       6270bb605e12   2 years ago   683kB

```bash
# 4.4 Initialize the cluster on the master node
cd
root@master:~# kubeadm init --config=init-config.yaml
# Take note of these two fragments at the end of the output:
#############################################################
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
#############################################################
kubeadm join 192.168.221.21:6443 --token abcdef.0123456789abcdef \
       --discovery-token-ca-cert-hash sha256:6e96cc6ec35a69175035a4a056c05df77c24399da83e394ff12b11768db419a3
#############################################################
# 4.5 On the master node, copy the k8s admin credentials into the user's home directory (the mkdir/cp/chown lines above)
# 4.6 Join the Node machines to the cluster
# Simply paste the kubeadm join command from the master's init output onto each node; no further configuration is needed
root@node:~# kubeadm join 192.168.221.21:6443 --token abcdef.0123456789abcdef \
       --discovery-token-ca-cert-hash sha256:6e96cc6ec35a69175035a4a056c05df77c24399da83e394ff12b11768db419a3
[preflight] Running pre-flight checks
······
# The output should end with:
# Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# 4.7 Check node status on the master - the master was initialized without any
# network configuration, so it cannot yet communicate with the nodes and every
# node shows "NotReady". The nodes added via kubeadm join are nevertheless
# already visible on the master.
root@master:~# kubectl get nodes
NAME     STATUS     ROLES                  AGE    VERSION
master   NotReady   control-plane,master   101s   v1.23.0
node1    NotReady   <none>                 27s    v1.23.0
node2    NotReady   <none>                 21s    v1.23.0
## 5.1 Install the Flannel network plugin on the master (it creates a cluster-wide overlay network so Pods can communicate across nodes)
# Download the kube-flannel.yml manifest
root@master:~# wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# If the download fails, create the file by hand as shown at the end of this article
# Install kube-flannel
root@master:~# kubectl apply -f kube-flannel.yml
# 5.2 Check node status again (this takes a moment)
root@master:~# kubectl get node
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   13m   v1.23.0
node1    Ready    <none>                 12m   v1.23.0
node2    Ready    <none>                 12m   v1.23.0
# Check the status of all Pods
root@master:~# kubectl get pod -A
NAMESPACE      NAME                             READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-pc7gw            1/1     Running   0          89s
kube-flannel   kube-flannel-ds-xxcqj            1/1     Running   0          89s
kube-system    coredns-6d8c4cb4d-fx6vd          1/1     Running   0          44m
kube-system    coredns-6d8c4cb4d-pgwcc          1/1     Running   0          44m
kube-system    etcd-master                      1/1     Running   1          44m
kube-system    kube-apiserver-master            1/1     Running   1          44m
kube-system    kube-controller-manager-master   1/1     Running   1          44m
kube-system    kube-proxy-2gghk                 1/1     Running   0          44m
kube-system    kube-proxy-qf26c                 1/1     Running   0          20m
kube-system    kube-scheduler-master            1/1     Running   1          44m
# Check component status
root@master:~# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
# 5.3 Enable kubectl command completion
apt install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

############################################################
## Adding a node to the K8S cluster
# 1. Configure the hosts file
cat >> /etc/hosts << EOF
192.168.221.23 node2
EOF
# 2. Join the node to the cluster with kubeadm
# Generate the join command on the master (then run it on the new node)
kubeadm token create --print-join-command
# Verify the node joined successfully - it should show Ready (run on the master)
kubectl get node
## Removing a node from the K8S cluster
# 1. First drain the node safely (run on the master)
kubectl cordon node2
kubectl drain node2 --ignore-daemonsets --delete-emptydir-data
# 2. Remove the node from the cluster (run on the master)
kubectl delete node node2
kubectl get node
# 3. Clean up the removed node (run on the worker node)
systemctl stop kubelet
kubeadm reset
rm -rf /etc/cni/net.d /var/lib/kubelet /var/lib/etcd
```

**Supplementary notes:**

If the download fails, the kube-flannel.yml file can also be created by hand:

root@master:~# vim kube-flannel.yml

```yaml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        #image: docker.io/flannel/flannel-cni-plugin:v1.6.0-flannel1
        image: 192.168.57.200:8099/k8s-1.23/flannel-cni-plugin:v1.6.0-flannel1 # use the LAN mirror
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        #image: docker.io/flannel/flannel:v0.26.1
        image: 192.168.57.200:8099/k8s-1.23/flannel:v0.26.1 # use the LAN mirror
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        #image: docker.io/flannel/flannel:v0.26.1
        image: 192.168.57.200:8099/k8s-1.23/flannel:v0.26.1 # use the LAN mirror
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
```
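On the token topic above: if the original bootstrap token has expired, `kubeadm token create --print-join-command` regenerates everything, but the `--discovery-token-ca-cert-hash` value can also be derived by hand from the cluster CA certificate. A sketch (the helper function is illustrative; the certificate path is the kubeadm default used in this setup):

```bash
# Print the sha256 discovery hash for a PEM CA certificate.
ca_cert_hash() {
  [ -f "$1" ] || { echo "missing cert: $1" >&2; return 1; }
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# On the master the cluster CA lives at the kubeadm default path:
ca_cert_hash /etc/kubernetes/pki/ca.crt || true
```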

Final step: install KubePi

KubePi is a modern K8s panel. It lets administrators import multiple Kubernetes clusters and, through permission controls, grant access to specific clusters and namespaces to designated users. It also lets developers manage and troubleshoot the applications running in a Kubernetes cluster, helping them deal with the cluster's complexity more effectively.

# 6.1 Quick start

```bash
docker run --privileged -d --restart=unless-stopped -p 80:80 1panel/kubepi
# If this step fails, restart Docker once and run it again
systemctl restart docker
# Browse to 192.168.221.21 to open the KubePi UI
# Username: admin
# Password: kubepi

# View the kubeconfig file on the master (used when importing the cluster into KubePi)
root@master:~# cat ~/.kube/config
# flag for skipping kubelet TLS verification (used by metrics-server below)
--kubelet-insecure-tls
```
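After `docker run`, the panel may take a few seconds to start answering on port 80. A small polling helper can confirm it is reachable (an illustrative sketch, not part of KubePi; adjust the URL and attempt budget as needed):

```bash
# Poll a URL until it responds or the attempt budget runs out.
wait_for_http() {
  local url="$1" tries="${2:-30}" i
  for i in $(seq "$tries"); do
    if curl -fsS --connect-timeout 2 -o /dev/null "$url" 2>/dev/null; then
      echo "up: $url"
      return 0
    fi
    sleep 1
  done
  echo "timeout: $url"
  return 1
}
wait_for_http http://192.168.221.21 2 || true
```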

Click the blue warning in the KubePi UI to install metrics-server.

Modify its configuration as needed (add the --kubelet-insecure-tls flag shown above).
