Deploying a Kubernetes 1.26.0 Cluster (containerd runtime)

As Kubernetes has moved forward and no longer supports the Docker runtime directly, containerd has to serve as the container engine. For my own study I redeployed a Kubernetes 1.26.0 cluster using containerd 1.6.33 as the container runtime. Various problems came up during the deployment, but they were all worked through step by step.

I. Prepare the environment

 master 192.168.40.5 CentOS Linux release 7.9.2009 (Core) 3.10.0-1160.119.1.el7.x86_64
 node1  192.168.40.6 CentOS Linux release 7.9.2009 (Core) 3.10.0-1160.119.1.el7.x86_64
 node2  192.168.40.7 CentOS Linux release 7.9.2009 (Core) 3.10.0-1160.119.1.el7.x86_64

II. Basic configuration (run on all three nodes)

1. Set the hostname on each VM

hostnamectl set-hostname master   # on the master node
hostnamectl set-hostname node1    # on node1
hostnamectl set-hostname node2    # on node2

2. Configure /etc/hosts for hostname resolution

Append the following entries to /etc/hosts on every node:
192.168.40.5 master
192.168.40.6 node1
192.168.40.7 node2
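
For example, the entries can be appended in one shot on each node with a heredoc (same values as above):

cat >> /etc/hosts << EOF
192.168.40.5 master
192.168.40.6 node1
192.168.40.7 node2
EOF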

3. Disable the firewall and SELinux

Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:
setenforce 0
Then edit /etc/selinux/config so that it contains:
SELINUX=disabled
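
If you prefer not to edit the file by hand, a one-line sed does the same thing (assuming the file still has the default SELINUX=enforcing value):

sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config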

4. Configure the forwarding modules

Enabling kernel IPv4 forwarding for bridged traffic requires the br_netfilter module, so load it first:
modprobe br_netfilter
Create /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
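
A heredoc is a convenient way to create the file with exactly this content:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF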

Apply the settings:
sysctl -p /etc/sysctl.d/k8s.conf

Note: to make sure the module is still loaded after a VM reboot, put modprobe br_netfilter into a startup file. You can edit a startup file by hand, or simply append the command to /etc/rc.d/rc.local:
echo modprobe br_netfilter >>/etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local
[root@master rc.d]# cat rc.local 
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local
modprobe br_netfilter
With this in place, the module is loaded automatically when the VM boots.

5. Install ipvs, ipset, ipvsadm, and synchronize time

Install ipvs:
Create /etc/sysconfig/modules/ipvs.modules so that the required modules are reloaded automatically after a node reboot:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that the required kernel modules have been loaded.

Next, make sure the ipset package is installed on each node:
yum install ipset

To make it easier to inspect the ipvs proxy rules, also install the ipvsadm management tool:
yum install ipvsadm
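
Once kube-proxy is running in ipvs mode later on, the proxy rules can be listed with, for example:

ipvsadm -Ln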

Synchronize the server time:
yum install chrony -y
systemctl enable chronyd
systemctl start chronyd
chronyc sources
[root@master ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 111.230.189.174               2   9   377   346    -36ms[  -37ms] +/-   92ms
^+ tock.ntp.infomaniak.ch        1   8   357    17    +48ms[  +48ms] +/-  152ms
^- ntp7.flashdance.cx            2   9    37   148    +22ms[  +22ms] +/-  162ms
^- ntp.wdc2.us.leaseweb.net      2   9   377    25    +75ms[  +75ms] +/-  324ms
Note: it is recommended that the nodes have outbound internet access so chrony can reach the NTP sources.

6. Disable the swap partition

Disable swap:
swapoff -a
Edit /etc/fstab and comment out the swap mount so swap is not re-enabled at boot, then confirm with free -m that swap is off.
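
A minimal sketch of those two steps (the sed assumes the swap entry in /etc/fstab is on a line containing the word "swap"):

sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out the swap mount line
free -m                               # the Swap row should now show 0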
Adjust the swappiness parameter by adding the line below to /etc/sysctl.d/k8s.conf:
vm.swappiness=0
Run sysctl -p /etc/sysctl.d/k8s.conf to apply the change.

[root@master ~]# echo vm.swappiness=0 >> /etc/sysctl.d/k8s.conf 
[root@master ~]# cat /etc/sysctl.d/k8s.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0

7. Configure the yum repositories

Configure the Aliyun yum mirrors. First install wget:
yum install -y wget

Back up the default yum repo configuration:
mv /etc/yum.repos.d /etc/yum.repos.d.bk

Create a fresh repo directory:
mkdir -p /etc/yum.repos.d

Download the Aliyun repo configuration for the matching release into that directory:
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Switch the EPEL repo to the Aliyun EPEL mirror:
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

Rebuild the cache:
yum clean all
yum makecache

III. Deploy containerd

yum -y install yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo   # add the docker-ce repo, the source of the containerd package
yum -y install containerd
wget https://github.moeyy.xyz/https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.1/crictl-v1.27.1-linux-amd64.tar.gz   # download crictl via a GitHub proxy
tar xf crictl-v1.27.1-linux-amd64.tar.gz -C /usr/local/bin
containerd config default > /etc/containerd/config.toml   # generate the default configuration
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml   # switch to the systemd cgroup driver
sed -i 's@registry.k8s.io/pause:3.6@registry.aliyuncs.com/google_containers/pause:3.9@' /etc/containerd/config.toml   # use a reachable pause image
systemctl daemon-reload
systemctl start containerd
systemctl enable containerd
cat >> /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 3
debug: true
EOF
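
A quick way to confirm that crictl can talk to containerd over the socket configured above:

crictl version   # should report the containerd runtime version (1.6.33 in this setup)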

IV. Install the Kubernetes packages
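
Note: the repositories configured so far (CentOS base, EPEL, docker-ce) do not provide the kubeadm/kubelet/kubectl packages. If yum cannot find them, add a Kubernetes repo first; the Aliyun mirror definition below is one commonly used option (assumed here, with gpgcheck disabled for simplicity):

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF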

# install on the master node
yum -y install kubelet-1.26.0 kubectl-1.26.0 kubeadm-1.26.0 

# install on the worker nodes
yum -y install kubeadm-1.26.0 kubelet-1.26.0

systemctl enable --now kubelet

[root@master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.0", GitCommit:"b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d", GitTreeState:"clean", BuildDate:"2022-12-08T19:57:06Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"linux/amd64"}

V. Initialize the cluster

Generate the initialization configuration file on the master node:

kubeadm config print init-defaults --component-configs KubeletConfiguration > kubeadm.yaml

Then adjust the configuration to our needs: set imageRepository to a mirror from which the nodes can actually pull the Kubernetes images, switch the kube-proxy mode to ipvs, and, since we are going to install the flannel network plugin, set networking.podSubnet to 10.244.0.0/16:

[root@master ~]# cat kubeadm.yaml 

---
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.40.5   ### the master node's IP
  bindPort: 6443    
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock   # containerd's Unix socket address
  imagePullPolicy: IfNotPresent
  name: master
  taints:                          #### taint the master so ordinary workloads are not scheduled on it
  - effect: "NoSchedule"
    key: "node-role.kubernetes.io/master"

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs                      ### use ipvs mode
    
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   ### changed to a mirror registry the nodes can pull from
kind: ClusterConfiguration
kubernetesVersion: 1.26.0                      ### version 1.26.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16                    #### added pod subnet 10.244.0.0/16
scheduler: {}

---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
cgroupDriver: systemd   ### set the cgroup driver
logging: {}
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s


Once the config file is ready, pull the required images ahead of time:
[root@master ~]# scp kubeadm.yaml node1:/root/
[root@master ~]# scp kubeadm.yaml node2:/root/
[root@master ~]# kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.9.3

Run the same pull on the worker nodes as well:
[root@node1 ~]# kubeadm config images pull --config kubeadm.yaml
[root@node2 ~]# kubeadm config images pull --config kubeadm.yaml

Run the cluster initialization on the master node:
[root@master ~]# kubeadm init --config kubeadm.yaml
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.40.5]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.40.5 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.40.5 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.501933 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.26" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.40.5:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:8c1f43da860b0e7bd9f290fe057f08cf7650b89e650ff316ce4a9cad3834475c

Follow the prompts to set up kubectl access:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Copy the $HOME/.kube/config file from the master to the corresponding path on each worker node (create the $HOME/.kube directory on the nodes first):
[root@master ~]# scp $HOME/.kube/config node1:/root/.kube/
[root@master ~]# scp $HOME/.kube/config node2:/root/.kube/
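
If the $HOME/.kube directories do not exist on the worker nodes yet, they can be created remotely first, for example:

ssh node1 'mkdir -p $HOME/.kube'
ssh node2 'mkdir -p $HOME/.kube'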

Then use kubectl to confirm that the master node has been initialized:
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   NotReady    control-plane,master   2m10s   v1.26.0

VI. Add the worker nodes

On each worker node, run the join command printed at the end of the initialization:

[root@node1 ~]# kubeadm join 192.168.40.5:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:8c1f43da860b0e7bd9f290fe057f08cf7650b89e650ff316ce4a9cad3834475c
[preflight] Running pre-flight checks
[preflight] WARNING: Couldn't create the interface used for talking to the container runtime: docker is required for container runtime: exec: "docker": executable file not found in $PATH
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	
[root@node2 ~]# kubeadm join 192.168.40.5:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:8c1f43da860b0e7bd9f290fe057f08cf7650b89e650ff316ce4a9cad3834475c
[preflight] Running pre-flight checks
[preflight] WARNING: Couldn't create the interface used for talking to the container runtime: docker is required for container runtime: exec: "docker": executable file not found in $PATH
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.	

Note: if you lose the join command above, regenerate it with:
# kubeadm token create --print-join-command



After the joins succeed, run get nodes on the master node:

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE   VERSION
master   Ready      control-plane,master   47m   v1.26.0
node1    NotReady   <none>                 46s   v1.26.0
node2    NotReady   <none>                 46s   v1.26.0

VII. Deploy the flannel network plugin

The nodes show NotReady because no network plugin is installed yet. Pick a network plugin from https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/ ; here we install flannel:

Below is the prepared kube-flannel.yml file for reference:

[root@master ~]# cat kube-flannel.yml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: ghcr.io/flannel-io/flannel:v0.26.4
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: ghcr.io/flannel-io/flannel:v0.26.4
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock

Note: if the VM has more than one network interface, tell flannel which one to use:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth0  # with multiple NICs, give the name of the internal-network interface
        command:
        - /opt/bin/flanneld
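
If you are not sure which interface name to use, the route toward another node reveals it (look at the dev field in the output), for example on the master:

ip route get 192.168.40.6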

# install the flannel network plugin
[root@master ~]# kubectl apply -f kube-flannel.yml  

# check the Pod status
[root@master ~]# kubectl  get pod -n kube-system -owide 
NAME                             READY   STATUS    RESTARTS        AGE     IP             NODE     NOMINATED NODE   READINESS GATES
coredns-78969d9f7c-fzk78         1/1     Running   0               26h     10.244.1.3     node1    <none>           <none>
coredns-78969d9f7c-zzrd4         1/1     Running   0               26h     10.244.1.2     node1    <none>           <none>
etcd-master                      1/1     Running   4 (4h1m ago)    26h     192.168.40.5   master   <none>           <none>
kube-apiserver-master            1/1     Running   4 (4h1m ago)    26h     192.168.40.5   master   <none>           <none>
kube-controller-manager-master   1/1     Running   4 (4h1m ago)    26h     192.168.40.5   master   <none>           <none>
kube-flannel-ds-9swdb            1/1     Running   0               3h57m   192.168.40.7   node2    <none>           <none>
kube-flannel-ds-jqvgs            1/1     Running   11 (4h3m ago)   5h16m   192.168.40.5   master   <none>           <none>
kube-flannel-ds-kllm5            1/1     Running   0               3h57m   192.168.40.6   node1    <none>           <none>
kube-proxy-66sfz                 1/1     Running   0               26h     192.168.40.7   node2    <none>           <none>
kube-proxy-m9xnt                 1/1     Running   4 (4h1m ago)    26h     192.168.40.5   master   <none>           <none>
kube-proxy-thzpx                 1/1     Running   0               4h31m   192.168.40.6   node1    <none>           <none>
kube-scheduler-master            1/1     Running   4 (4h1m ago)    26h     192.168.40.5   master   <none>           <none>

The network plugin is running and the nodes are now Ready:
[root@master ~]# kubectl  get nodes -owide 
NAME     STATUS   ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                 CONTAINER-RUNTIME
master   Ready    control-plane   26h   v1.26.0   192.168.40.5   <none>        CentOS Linux 7 (Core)   3.10.0-1160.119.1.el7.x86_64   containerd://1.6.33
node1    Ready    <none>          26h   v1.26.0   192.168.40.6   <none>        CentOS Linux 7 (Core)   3.10.0-1160.119.1.el7.x86_64   containerd://1.6.33
node2    Ready    <none>          26h   v1.26.0   192.168.40.7   <none>        CentOS Linux 7 (Core)   3.10.0-1160.119.1.el7.x86_64   containerd://1.6.33
[root@master ~]# 

VIII. Deploy the Dashboard

If the node can reach the internet, download recommended.yaml directly with wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

Otherwise, use the configuration below:
[root@master ~]# cat recommended.yaml

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort  # add type: NodePort so the Service is exposed on a node port

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: registry.aliyuncs.com/google_containers/dashboard:v2.3.1  
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: registry.aliyuncs.com/google_containers/metrics-scraper:v1.0.6  ### changed to a mirror registry the nodes can pull from
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}


After editing, apply it:
[root@master ~]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
Warning: spec.template.metadata.annotations[seccomp.security.alpha.kubernetes.io/pod]: deprecated since v1.19, non-functional in a future release; use the "seccompProfile" field instead
deployment.apps/dashboard-metrics-scraper created

Newer versions of the Dashboard are installed in the kubernetes-dashboard namespace by default:
[root@master ~]# kubectl get pods -n kubernetes-dashboard -o wide
NAME                                        READY   STATUS              RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-7756547df-tcpbv   1/1     Running             0          17s   10.244.2.2   node2   <none>           <none>
kubernetes-dashboard-74f8d9c449-9hvhk       0/1     ContainerCreating   0          17s   <none>       node2   <none>           <none>
[root@master ~]# 
[root@master ~]# 
[root@master ~]# 
[root@master ~]# kubectl get pods -n kubernetes-dashboard -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-7756547df-tcpbv   1/1     Running   0          37s   10.244.2.2   node2   <none>           <none>
kubernetes-dashboard-74f8d9c449-9hvhk       1/1     Running   0          37s   10.244.2.3   node2   <none>           <none>

Check the Dashboard's NodePort:
[root@master ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.108.241.26   <none>        8000/TCP        3m29s
kubernetes-dashboard        NodePort    10.105.173.33   <none>        443:31048/TCP   3m30s

Then create a user with cluster-wide admin permissions to log in to the Dashboard (dashboard-admin.yaml).

Note: an extra Secret bound to the ServiceAccount has to be created here; only then does the kubernetes-dashboard namespace contain a usable token. Without it, only the kubernetes-dashboard-certs, kubernetes-dashboard-csrf, and kubernetes-dashboard-key-holder secrets exist, and no usable token can be retrieved.

[root@master ~]# cat dashboard-admin.yaml 
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: admin
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: admin-token
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin"

========================== Pitfall =================================================
Without the extra ServiceAccount token Secret mentioned above, the namespace only contains the following secrets and no usable login token:

[root@master ~]# kubectl -n kubernetes-dashboard get secret 
NAME                              TYPE                                  DATA   AGE
kubernetes-dashboard-certs        Opaque                                0      142m
kubernetes-dashboard-csrf         Opaque                                1      142m
kubernetes-dashboard-key-holder   Opaque                                2      142m
====================================================================================

Once the file is ready, apply it:
[root@master ~]#  kubectl apply -f dashboard-admin.yaml
[root@master ~]# kubectl -n kubernetes-dashboard get secret 
NAME                              TYPE                                  DATA   AGE
admin-token                       kubernetes.io/service-account-token   3      13m
admin-user                        kubernetes.io/service-account-token   3      15m
kubernetes-dashboard-certs        Opaque                                0      146m
kubernetes-dashboard-csrf         Opaque                                1      146m
kubernetes-dashboard-key-holder   Opaque                                2      146m

[root@master ~]#  kubectl -n kubernetes-dashboard get secret admin-token -o go-template="{{.data.token | base64decode}}"

[root@master ~]# kubectl -n kubernetes-dashboard get secret admin-token -o go-template="{{.data.token | base64decode}}"
eyJhbGciOiJSUzI1NiIsImtpZCI6InJ6Y0lnSkdlTVJxdUljRzZMWHh5ZE9aVTFLak1hRnRISlBFbkhrLXF5TlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImM2ODVjN2M3LTE5YzItNDFiMi1hMDViLTBiNzdlZWM1OGI5MSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbiJ9.Kyw-OqZMvcwyVGQ3s5ajFW0tKZM8bOKB3Xg0MqYSb2wrM9ib0KQFpfxJNRFKe6YBNQG4eU-qu65a_GmSZeEMP9U7RpsNTslbH1KW5NURygcVK809qYPgKslncdnFZATkhdriIYqWxaIWMF62ga6HoMDXIfi_pb0iXmq7XUA6Kqoxl85k1YE_jEcOQa-n6_wZBR3ZjgfsAIkRv9GCmMGh9gciZUZKiXyp68XqNyw8xLa3sDkzDi8Wlnw7NsTDBJ7Vd7qHrDREdWeiaAZdPA2nVEnfDwJY_jX5MGFWzB8JCGEhF2yjcoP9CcdQ-fqZEPGpt_hKcxG7KguqVfxWJDuMKQ[root@master ~]# 

Then log in to the Dashboard using the decoded token string shown above.
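
For example, using the NodePort from the kubectl get svc output above (31048 in this run), a quick reachability check from the master before opening the page in a browser:

curl -k https://192.168.40.5:31048/   # -k skips verification of the self-signed certificate

Open the same URL in a browser, choose the Token option, and paste the token string.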

In the UI you can then pick a namespace to browse its resources.

You can also log in using a Kubeconfig file.

Kubeconfig login method:
Create a directory:
mkdir -p /etc/kubernetes/dashboard/
DASH_TOCKEN=$(kubectl -n kubernetes-dashboard get secret admin-token -o go-template="{{.data.token | base64decode}}")
# add a cluster entry to the kubeconfig
kubectl config set-cluster kubernetes --server=192.168.40.5:6443 --kubeconfig=/etc/kubernetes/dashboard/dashbord-admin.conf
# add a user entry to the kubeconfig
kubectl config set-credentials dashboard-admin --token=$DASH_TOCKEN --kubeconfig=/etc/kubernetes/dashboard/dashbord-admin.conf
# add a context entry to the kubeconfig
kubectl config set-context dashboard-admin@kubernetes --cluster=kubernetes --user=dashboard-admin --kubeconfig=/etc/kubernetes/dashboard/dashbord-admin.conf
# set the current context in the kubeconfig
kubectl config use-context dashboard-admin@kubernetes --kubeconfig=/etc/kubernetes/dashboard/dashbord-admin.conf

[root@master ~]# mkdir -p /etc/kubernetes/dashboard/
[root@master ~]# DASH_TOCKEN=$(kubectl -n kubernetes-dashboard get secret admin-token -o go-template="{{.data.token | base64decode}}")
[root@master ~]# kubectl config set-cluster kubernetes --server=192.168.40.5:6443 --kubeconfig=/etc/kubernetes/dashboard/dashbord-admin.conf
Cluster "kubernetes" set.
[root@master ~]# kubectl config set-credentials dashboard-admin --token=$DASH_TOCKEN --kubeconfig=/etc/kubernetes/dashboard/dashbord-admin.conf
User "dashboard-admin" set.
[root@master ~]# kubectl config set-context dashboard-admin@kubernetes --cluster=kubernetes --user=dashboard-admin --kubeconfig=/etc/kubernetes/dashboard/dashbord-admin.conf
Context "dashboard-admin@kubernetes" created.
[root@master ~]# kubectl config use-context dashboard-admin@kubernetes --kubeconfig=/etc/kubernetes/dashboard/dashbord-admin.conf
Switched to context "dashboard-admin@kubernetes".
[root@master ~]# cd /etc/kubernetes/dashboard/
[root@master dashboard]# ll
total 4
-rw------- 1 root root 1233 Feb 18 17:23 dashbord-admin.conf

Download the dashbord-admin.conf file to your local machine, then upload it on the Dashboard login page.

Click Sign in and you are logged in.

With that, we have finished building a v1.26.0 Kubernetes cluster with kubeadm.
