# K8S High-Availability Cluster


## Initialize the Hosts

All hosts use a minimal installation! Configure every host in turn.

### IP address configuration

```bash
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.115.161
PREFIX=24
GATEWAY=192.168.115.2
DNS1=192.168.115.2
[root@localhost ~]# ifdown ens33 ; ifup ens33
```

### Install essential tools

```bash
#### Install bash-completion (tab completion); it takes effect after re-login, or just start a new bash ####
[root@localhost ~]# yum install -y bash-completion ; bash

#### Install vim, net-tools, wget (downloader), and lrzsz (rz/sz commands for Xshell file transfer) ####
[root@localhost ~]# yum install -y vim net-tools wget lrzsz
```

### Set the hostname

```bash
[root@localhost ~]# hostnamectl set-hostname k8s-master01
```

### Switch the yum repos to the Aliyun mirror

```bash
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
```

### Finding a domestic registry mirror

http://dockerproxy.cn/

## I. Cluster Architecture

| Role | Hostname | IP address |
| --- | --- | --- |
| master | k8s-master01 | 192.168.115.161/24 |
| master | k8s-master02 | 192.168.115.162/24 |
| master | k8s-master03 | 192.168.115.163/24 |
| node | k8s-worker01 | 192.168.115.164/24 |
| node | k8s-worker02 | 192.168.115.165/24 |
| HA (keepalived + haproxy) | deployed on all master nodes | VIP 192.168.115.166/24 |

### Installation Details

| Item | Value | Notes |
| --- | --- | --- |
| OS version | CentOS 7.9 | |
| Kubernetes version | v1.24.17 | |
| Pod CIDR | 172.16.0.0/16 | Pod IPs, assigned from the container bridge network; `cluster-cidr` defines the Pod network CIDR range. |
| Service CIDR | 10.10.0.0/16 | Cluster IP (a.k.a. Service IP), the address of a Service; `service-cluster-ip-range` defines the Service IP range. |

## II. Basic Environment Configuration

Apply the following on all nodes.

### 1. Set the hostnames

```bash
[root@localhost ~]# hostnamectl set-hostname k8s-master01
[root@localhost ~]# hostnamectl set-hostname k8s-master02
[root@localhost ~]# hostnamectl set-hostname k8s-master03
[root@localhost ~]# hostnamectl set-hostname k8s-worker01
[root@localhost ~]# hostnamectl set-hostname k8s-worker02
```

### 2. Disable SELinux

```bash
[root@k8s-master01 ~]# setenforce 0
[root@k8s-master01 ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```

### 3. Disable firewalld

```bash
[root@k8s-master01 ~]# systemctl stop firewalld.service ; systemctl disable firewalld.service
# or
[root@k8s-master01 ~]# systemctl disable --now firewalld
```

### 4. Disable NetworkManager

```bash
[root@k8s-master01 ~]# systemctl stop NetworkManager ; systemctl disable NetworkManager
# or
[root@k8s-master01 ~]# systemctl disable --now NetworkManager
```

### 5. Disable swap

```bash
[root@k8s-master01 ~]# swapoff -a
[root@k8s-master01 ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab
```
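
To confirm swap is really off, `free -h` should now report 0B under Swap:

```bash
[root@k8s-master01 ~]# free -h | grep -i swap
```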

### 6. Update the hosts file

```bash
[root@k8s-master01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.115.161	k8s-master01
192.168.115.162	k8s-master02
192.168.115.163	k8s-master03
192.168.115.164	k8s-worker01
192.168.115.165	k8s-worker02
[root@k8s-master01 ~]# for i in k8s-master02 k8s-master03 k8s-worker01 k8s-worker02;do scp /etc/hosts $i:/etc/;done
```

### 7. Configure the yum repositories

Switch the base repo to the Aliyun mirror:

```bash
[root@k8s-master01 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2523  100  2523    0     0  27870      0 --:--:-- --:--:-- --:--:-- 28033
[root@k8s-master01 ~]# cat /etc/yum.repos.d/CentOS-Base.repo 
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the 
# remarked out baseurl= line instead.
#
#
 
[base]
name=CentOS-$releasever - Base - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/os/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#released updates 
[updates]
name=CentOS-$releasever - Updates - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/updates/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/extras/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/contrib/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/contrib/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/contrib/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
```

**Add the Docker repo**

```bash
[root@k8s-master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-master01 ~]# yum-config-manager  --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
```

**Add the Kubernetes repo**

```bash
[root@k8s-master01 ~]# cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

```

### 8. Configure passwordless SSH (run on k8s-master01 only)

The config files and certificates generated during installation are all created on k8s-master01, so it must be able to log in to the other hosts without a password!

```bash
[root@k8s-master01 ~]# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
[root@k8s-master01 ~]# for i in k8s-master01 k8s-master02 k8s-master03 k8s-worker01 k8s-worker02;do ssh-copy-id $i;done
```

### 9. Update the system

```bash
[root@k8s-master01 ~]# yum update -y && reboot

```


## III. Kernel Configuration

Run on all nodes!

### Upgrade the kernel (the stock CentOS 7 repos are EOL and no longer ship kernel updates, so install from the Aliyun ELRepo archive)

```bash
[root@k8s-master01 ~]# cat <<EOF > /etc/yum.repos.d/elrepo.repo
[elrepo]
name=elrepo
baseurl=https://mirrors.aliyun.com/elrepo/archive/kernel/el7/x86_64
gpgcheck=0
enabled=1
EOF

[root@k8s-master01 ~]# yum  install -y kernel-lt
[root@k8s-master01 ~]# grub2-set-default 0 
[root@k8s-master01 ~]# reboot
### Verify the kernel after the reboot ###
[root@k8s-master01 ~]# uname -r
5.4.278-1.el7.elrepo.x86_64
```

### Install ipvsadm and ipset

```bash
[root@k8s-master01 ~]# yum install -y ipvsadm ipset sysstat conntrack libseccomp
```

### Configure resource limits

```bash
# note: each limits.conf entry needs the domain field ('*'), and soft nofile must not exceed the hard limit
[root@k8s-master01 ~]# cat << EOF >> /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
```

### Configure the IPVS modules

```bash
[root@k8s-master01 ~]# cat /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
[root@k8s-master01 ~]# for i in 162 163 164 165;do scp /etc/modules-load.d/ipvs.conf 192.168.115.$i:/etc/modules-load.d/;done
[root@k8s-master01 ~]# systemctl enable --now systemd-modules-load.service
```

### Set kernel parameters

```bash
#### Load the kernel modules needed by containerd ###
[root@k8s-master01 ~]# cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
[root@k8s-master01 ~]# modprobe -- br_netfilter
[root@k8s-master01 ~]# modprobe -- overlay
#### Enable IP forwarding and bridge netfilter ###
[root@k8s-master01 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 131072
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

[root@k8s-master01 ~]# sysctl --system
[root@k8s-master01 ~]# reboot
#### After the reboot, verify the containerd-related modules ###
[root@k8s-master01 ~]# lsmod | grep -e br_netfilter -e overlay
overlay               114688  0 
br_netfilter           28672  0 
#### After the reboot, verify the IPVS modules ###
[root@k8s-master01 ~]# lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_ftp              16384  0 
nf_nat                 45056  3 ip6table_nat,iptable_nat,ip_vs_ftp
ip_vs_sed              16384  0 
ip_vs_nq               16384  0 
ip_vs_fo               16384  0 
ip_vs_sh               16384  0 
ip_vs_dh               16384  0 
ip_vs_lblcr            16384  0 
ip_vs_lblc             16384  0 
ip_vs_wrr              16384  0 
ip_vs_rr               16384  0 
ip_vs_wlc              16384  0 
ip_vs_lc               16384  0 
ip_vs                 155648  24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack          147456  3 xt_conntrack,nf_nat,ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  4 nf_conntrack,nf_nat,xfs,ip_vs
```

## IV. Install and Configure Containerd

Run on all nodes!

### Install

```bash
[root@k8s-master01 ~]# yum install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
```

### Configure

```bash
[root@k8s-master01 ~]# cat << EOF >> /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
EOF
[root@k8s-master01 ~]# containerd config default | tee /etc/containerd/config.toml
[root@k8s-master01 ~]# sed -ri 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
[root@k8s-master01 ~]# sed -ri 's#registry.k8s.io/pause:3.6#registry.aliyuncs.com/google_containers/pause:3.7#' /etc/containerd/config.toml
#### Generate the config on each node manually, or push this one to the other nodes: ####
[root@k8s-master01 ~]# for i in 162 163 164 165;do scp /etc/containerd/config.toml 192.168.115.$i:/etc/containerd/;done
```

### Start

```bash
[root@k8s-master01 ~]# systemctl daemon-reload 
[root@k8s-master01 ~]# systemctl enable --now containerd
#### Restart so the new config takes effect ###
[root@k8s-master01 ~]# systemctl restart containerd
```
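
As a quick sanity check (a sketch, assuming the crictl config written above), confirm containerd is active and the CRI runtime picked up `SystemdCgroup = true`:

```bash
[root@k8s-master01 ~]# systemctl is-active containerd
[root@k8s-master01 ~]# crictl info | grep -i systemdcgroup   # key casing varies by containerd version
```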

## V. Install kubelet, kubeadm, and kubectl

Install on all nodes.

### Install the components

```bash
#### List every installable version in the Kubernetes repo
[root@k8s-master01 ~]# yum list kubeadm.x86_64 --showduplicates
[root@k8s-master01 ~]# yum install -y kubelet-1.24.17 kubeadm-1.24.17 kubectl-1.24.17
```

### Enable kubelet at boot

```bash
[root@k8s-master01 ~]# systemctl enable --now kubelet
[root@k8s-master01 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Tue 2024-07-30 16:47:26 CST; 7s ago
     Docs: https://kubernetes.io/docs/
  Process: 2375 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
 Main PID: 2375 (code=exited, status=1/FAILURE)

Jul 30 16:47:26 k8s-master01 kubelet[2375]: --tls-min-version string                                   Minimum TLS...
Jul 30 16:47:26 k8s-master01 kubelet[2375]: --tls-private-key-file string                              File contai...
Jul 30 16:47:26 k8s-master01 kubelet[2375]: --topology-manager-policy string                           Topology Ma...
Jul 30 16:47:26 k8s-master01 kubelet[2375]: --topology-manager-scope string                            Scope to wh...
Jul 30 16:47:26 k8s-master01 kubelet[2375]: -v, --v Level                                                  nu...osity
Jul 30 16:47:26 k8s-master01 kubelet[2375]: --version version[=true]                                   Print ... quit
Jul 30 16:47:26 k8s-master01 kubelet[2375]: --vmodule pattern=N,...                                    comma-...rmat)
Jul 30 16:47:26 k8s-master01 kubelet[2375]: --volume-plugin-dir string                                 The full pa...
Jul 30 16:47:26 k8s-master01 kubelet[2375]: --volume-stats-agg-period duration                         Specifies i...
Jul 30 16:47:26 k8s-master01 kubelet[2375]: Error: failed to load kubelet config file, error: failed to load ....yaml
Hint: Some lines were ellipsized, use -l to show in full.
```

kubelet sitting in this crash/restart loop is expected at this stage: it has no cluster configuration yet and will start cleanly once `kubeadm init` (or `kubeadm join`) runs.

## VI. Install and Configure the HA Components

### Install Keepalived and HAProxy

Install on all master nodes!

```bash
[root@k8s-master01 ~]# yum install -y keepalived haproxy
```

### Configure HAProxy

Run on all master nodes; the configuration is identical on each!

```bash
[root@k8s-master01 ~]# cat /etc/haproxy/haproxy.cfg
global 
    maxconn 2000
    ulimit-n 16384
    log 127.0.0.1 local0 err
    stats timeout 30s
 
defaults 
    log global
    mode http  
    timeout connect 5000ms  
    timeout client 50000ms  
    timeout server 50000ms  
    timeout http-request 15s
    timeout http-keep-alive 15s
 
frontend monitor-in 
    bind *:33305  
    mode http
    option httplog
    monitor-uri /monitor 
 
 
frontend k8s-master 
    bind 0.0.0.0:16443 
    bind 127.0.0.1:16443 
    mode tcp
    option httplog
    tcp-request inspect-delay 5s
    default_backend k8s-master

backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-master01		192.168.115.161:6443 check
    server k8s-master02		192.168.115.162:6443 check
    server k8s-master03		192.168.115.163:6443 check
```
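
Once HAProxy is running (it is started in the next subsection together with Keepalived), the `monitor-uri` endpoint above gives a quick health check; HAProxy answers it itself, so it returns 200 even before any apiserver exists:

```bash
[root@k8s-master01 ~]# curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:33305/monitor   # expect 200
```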

### Configure Keepalived

The configuration differs per node!

```bash
[root@k8s-master01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
   script_user root
   enable_script_security
}

vrrp_script chk_apiserver {
   script "/etc/keepalived/check_apiserver.sh"
   interval 5
   weight -5
   fall 2
   rise 1
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip  192.168.115.161   
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.115.166
    }
    track_script {
        chk_apiserver
    }
}
```

```bash
[root@k8s-master02 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
   script_user root
   enable_script_security
}

vrrp_script chk_apiserver {
   script "/etc/keepalived/check_apiserver.sh"
   interval 5
   weight -5
   fall 2
   rise 1
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip  192.168.115.162
    virtual_router_id 51
    priority 90
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.115.166
    }
    track_script {
        chk_apiserver
    }
}
```

```bash
[root@k8s-master03 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
   script_user root
   enable_script_security
}

vrrp_script chk_apiserver {
   script "/etc/keepalived/check_apiserver.sh"
   interval 5
   weight -5
   fall 2
   rise 1
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip  192.168.115.163
    virtual_router_id 51
    priority 90
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.115.166
    }
    track_script {
        chk_apiserver
    }
}
```

### Add the Keepalived health-check script

Required on all master nodes.

```bash
[root@k8s-master01 ~]# cat /etc/keepalived/check_apiserver.sh 
#!/bin/bash
err=0
for i in $(seq 1 3);do
	check_code=$(pgrep haproxy)
	if [[ $check_code == "" ]];then
		err=$[err+1]
		sleep 1
		continue
	else
		err=0
		break
	fi
done
if [[ $err != "0" ]];then
	echo "systemctl stop keepalived"
	/usr/bin/systemctl stop keepalived
	exit 1
else
	exit 0
fi
[root@k8s-master01 ~]# chmod +x /etc/keepalived/check_apiserver.sh
[root@k8s-master01 ~]# ls -l /etc/keepalived/
total 8
-rw-r-xr-x. 1 root root 290 Jan 19 00:37 check_apiserver.sh
-rwxr--r--. 1 root root 576 Jan 19 00:27 keepalived.conf
```
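
The guide configures but never explicitly starts the HA services, so as a final sketch: enable HAProxy and Keepalived on every master, then confirm the VIP has landed on exactly one node:

```bash
[root@k8s-master01 ~]# systemctl enable --now haproxy keepalived
[root@k8s-master01 ~]# ip -4 addr show ens33 | grep 192.168.115.166   # appears only on the current MASTER
```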

## VII. Cluster Initialization

These steps run on the master nodes only!

### Enable command completion

```bash
[root@k8s-master01 ~]# source <(kubeadm completion bash)
[root@k8s-master01 ~]# source <(kubectl completion bash)
```

### Generate the init configuration

```bash
[root@k8s-master01 ~]# kubeadm config print init-defaults > kubeadm.yaml
[root@k8s-master01 ~]# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.115.161
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
---
apiServer:
  certSANs:
  - 192.168.115.166
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.115.166:16443
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.24.0
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/16
  serviceSubnet: 10.10.0.0/16
scheduler: {}

### Generate a file matching the installed kubeadm version ##
[root@k8s-master01 ~]# kubeadm config migrate --old-config kubeadm.yaml --new-config new.yaml
## Copy it to the other master nodes
[root@k8s-master01 ~]# scp new.yaml k8s-master02:/root
[root@k8s-master01 ~]# scp new.yaml k8s-master03:/root
```

### Pre-pull the images (optional)

```bash
### List the images that will be pulled
[root@k8s-master01 ~]# kubeadm config images list 
I0119 02:23:30.410543   16503 version.go:256] remote version is much newer: v1.29.1; falling back to: stable-1.24
registry.k8s.io/kube-apiserver:v1.24.17
registry.k8s.io/kube-controller-manager:v1.24.17
registry.k8s.io/kube-scheduler:v1.24.17
registry.k8s.io/kube-proxy:v1.24.17
registry.k8s.io/pause:3.7
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.8.6
##### Pull directly with kubeadm config images pull
[root@k8s-master01 ~]# kubeadm config images pull --config /root/new.yaml
## Or write a small script to pull the images
[root@k8s-master01 ~]# tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.24.17
kube-controller-manager:v1.24.17
kube-scheduler:v1.24.17
kube-proxy:v1.24.17
pause:3.7
etcd:3.5.6-0
coredns:v1.8.6
)
for imageName in ${images[@]} ; do
# pull through the CRI (crictl was configured earlier); plain docker pull would require the Docker daemon to be running
crictl pull registry.aliyuncs.com/google_containers/$imageName
done
EOF
#### Pull the images
[root@k8s-master01 ~]# chmod +x ./images.sh && ./images.sh
```
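
To confirm the images landed in containerd's `k8s.io` namespace:

```bash
[root@k8s-master01 ~]# crictl images | grep registry.aliyuncs.com/google_containers
```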

### Initialize k8s-master01

```bash
[root@k8s-master01 ~]# kubeadm init --config /root/new.yaml --upload-certs 

#### The initialization output: ####
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [192.168.0.1 192.168.115.161 192.168.115.166]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.115.161 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.115.161 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
W0119 02:57:24.046170   21298 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
W0119 02:57:24.423942   21298 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
W0119 02:57:24.484314   21298 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
W0119 02:57:24.671282   21298 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.544258 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
eadd854bc199d402e60cb1490d4b243274d144ce6f9fa046d8cf840c3fa22eba
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
W0119 02:57:47.224890   21298 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:
##### The join command for additional master (control-plane) nodes:
  kubeadm join 192.168.115.166:16443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:cb1c6fc2e7022230f6762064689f509b196796086c15c8209ea20ced4cda90bf \
	--control-plane --certificate-key eadd854bc199d402e60cb1490d4b243274d144ce6f9fa046d8cf840c3fa22eba
#############################
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:
##### The join command for worker nodes:
kubeadm join 192.168.115.166:16443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:cb1c6fc2e7022230f6762064689f509b196796086c15c8209ea20ced4cda90bf 

################################
```

If initialization fails, roll back with the following command before retrying:

```bash
[root@k8s-master01 ~]# kubeadm reset
```

### Join the remaining master nodes

```bash
[root@k8s-master02 ~]# kubeadm join 192.168.115.166:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:cb1c6fc2e7022230f6762064689f509b196796086c15c8209ea20ced4cda90bf \
--control-plane --certificate-key eadd854bc199d402e60cb1490d4b243274d144ce6f9fa046d8cf840c3fa22eba


[root@k8s-master03 ~]# kubeadm join 192.168.115.166:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:cb1c6fc2e7022230f6762064689f509b196796086c15c8209ea20ced4cda90bf \
--control-plane --certificate-key eadd854bc199d402e60cb1490d4b243274d144ce6f9fa046d8cf840c3fa22eba
```

### Join the worker nodes

```bash
[root@k8s-worker01 ~]# kubeadm join 192.168.115.166:16443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:cb1c6fc2e7022230f6762064689f509b196796086c15c8209ea20ced4cda90bf

[root@k8s-worker02 ~]# kubeadm join 192.168.115.166:16443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:cb1c6fc2e7022230f6762064689f509b196796086c15c8209ea20ced4cda90bf
```

### Check the cluster status

Check from k8s-master01.

```bash
#### Configure the KUBECONFIG variable on k8s-master01 ####
[root@k8s-master01 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /root/.bashrc
### Check the cluster nodes ###
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES           AGE     VERSION
k8s-master01   NotReady   control-plane   29m     v1.24.17
k8s-master02   NotReady   control-plane   25m     v1.24.17
k8s-master03   NotReady   control-plane   24m     v1.24.17
k8s-worker01   NotReady   <none>          4m21s   v1.24.17
k8s-worker02   NotReady   <none>          3m33s   v1.24.17
### "NotReady" is normal at this point -- the network add-on is not installed yet ###
### To make the ROLES column show master/worker, label the nodes: ###
[root@k8s-master01 ~]# kubectl label node k8s-master01 node-role.kubernetes.io/master=master
[root@k8s-master01 ~]# kubectl label node k8s-master02 node-role.kubernetes.io/master=master
[root@k8s-master01 ~]# kubectl label node k8s-master03 node-role.kubernetes.io/master=master
[root@k8s-master01 ~]# kubectl label node k8s-worker01 node-role.kubernetes.io/worker=worker
[root@k8s-master01 ~]# kubectl label node k8s-worker02 node-role.kubernetes.io/worker=worker
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE   VERSION
k8s-master01   NotReady   control-plane,master   36m   v1.24.17
k8s-master02   NotReady   control-plane,master   33m   v1.24.17
k8s-master03   NotReady   control-plane,master   32m   v1.24.17
k8s-worker01   NotReady   worker                 12m   v1.24.17
k8s-worker02   NotReady   worker                 11m   v1.24.17
### Detailed view
[root@k8s-master01 ~]# kubectl get nodes -o wide
NAME           STATUS     ROLES                  AGE   VERSION    INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-master01   NotReady   control-plane,master   38m   v1.24.17   192.168.115.161   <none>        CentOS Linux 7 (Core)   5.4.267-1.el7.elrepo.x86_64   containerd://1.6.27
k8s-master02   NotReady   control-plane,master   34m   v1.24.17   192.168.115.162   <none>        CentOS Linux 7 (Core)   5.4.267-1.el7.elrepo.x86_64   containerd://1.6.27
k8s-master03   NotReady   control-plane,master   33m   v1.24.17   192.168.115.163   <none>        CentOS Linux 7 (Core)   5.4.267-1.el7.elrepo.x86_64   containerd://1.6.27
k8s-worker01   NotReady   worker                 13m   v1.24.17   192.168.115.164   <none>        CentOS Linux 7 (Core)   5.4.267-1.el7.elrepo.x86_64   containerd://1.6.27
k8s-worker02   NotReady   worker                 13m   v1.24.17   192.168.115.165   <none>        CentOS Linux 7 (Core)   5.4.267-1.el7.elrepo.x86_64   containerd://1.6.27
```

### Handling token expiry

Join tokens expire after 24 hours.

```bash
## If the token has merely expired, print a fresh join command
[root@k8s-master01 ~]# kubeadm token create --print-join-command
## To join another master, also re-upload the certificates
[root@k8s-master01 ~]# kubeadm init phase upload-certs --upload-certs
```

## VIII. Install the Calico Network Add-on

Common network add-ons include Calico and Flannel, but Flannel does not support NetworkPolicy; we will need NetworkPolicy support later when studying network policies, so Calico is used here.

```bash
### Upload the calico files to k8s-master01 (e.g. with rz)
[root@k8s-master01 calico]# ls
 calico.yaml
#### Change the Pod CIDR to the range planned earlier
[root@k8s-master01 calico]# vim calico.yaml
### Search for "CALICO_IPV4POOL_CIDR"; the value is on the next line -- set it to the Pod CIDR
            - name: CALICO_IPV4POOL_CIDR
              value: "172.16.0.0/16"
[root@k8s-master01 calico]# sed -i "s/docker.io/dockerproxy.cn/" calico.yaml
[root@k8s-master01 calico]# kubectl apply -f calico.yaml
#### Wait a moment ###
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES                  AGE     VERSION
k8s-master01   Ready    control-plane,master   5h32m   v1.24.17
k8s-master02   Ready    control-plane,master   5h28m   v1.24.17
k8s-master03   Ready    control-plane,master   5h27m   v1.24.17
k8s-worker01   Ready    worker                 5h7m    v1.24.17
k8s-worker02   Ready    worker 
[root@k8s-master01 dashboard]# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS        AGE
calico-kube-controllers-74bdd4fc7d-97789   1/1     Running   1 (3h43m ago)   4h23m
calico-node-fwjvz                          1/1     Running   0               4h23m
calico-node-jldd6                          1/1     Running   1 (3h43m ago)   4h23m
calico-node-x9zjp                          1/1     Running   0               4h23m
calico-node-znkck                          1/1     Running   1 (3h43m ago)   4h23m
calico-node-zthfp                          1/1     Running   1 (3h44m ago)   4h23m
coredns-74586cf9b6-bwnv2                   1/1     Running   1 (3h43m ago)   5h37m
coredns-74586cf9b6-lnc5w                   1/1     Running   1 (3h43m ago)   5h37m
etcd-k8s-master01                          1/1     Running   3 (3h44m ago)   5h37m
etcd-k8s-master02                          1/1     Running   1 (3h43m ago)   5h34m
etcd-k8s-master03                          1/1     Running   1 (3h43m ago)   5h31m
kube-apiserver-k8s-master01                1/1     Running   3 (3h43m ago)   5h37m
kube-apiserver-k8s-master02                1/1     Running   1 (3h43m ago)   5h34m
kube-apiserver-k8s-master03                1/1     Running   3 (3h43m ago)   5h31m
kube-controller-manager-k8s-master01       1/1     Running   2 (3h44m ago)   5h37m
kube-controller-manager-k8s-master02       1/1     Running   2 (6m19s ago)   5h34m
kube-controller-manager-k8s-master03       1/1     Running   1 (3h43m ago)   5h32m
kube-proxy-6l884                           1/1     Running   1 (3h43m ago)   5h33m
kube-proxy-7cxsc                           1/1     Running   0               5h12m
kube-proxy-bgk2b                           1/1     Running   1 (3h43m ago)   5h34m
kube-proxy-hb5lj                           1/1     Running   1 (3h44m ago)   5h37m
kube-proxy-rxsfz                           1/1     Running   0               5h13m
kube-scheduler-k8s-master01                1/1     Running   2 (3h44m ago)   5h37m
kube-scheduler-k8s-master02                1/1     Running   1 (3h43m ago)   5h34m
kube-scheduler-k8s-master03                1/1     Running   1 (3h43m ago)   5h32m
```
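
With Calico running, the NetworkPolicy support mentioned at the top of this section is now enforced. A minimal illustrative sketch (the `app: demo` label and policy name are hypothetical, not part of this cluster) that blocks all ingress traffic to the selected Pods:

```bash
[root@k8s-master01 ~]# cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-ingress-demo   # hypothetical example policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: demo             # hypothetical label
  policyTypes:
  - Ingress
EOF
```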

## IX. Deploy Metrics Server

Resource metrics are collected by metrics-server, which gathers CPU and memory usage for nodes and Pods.

```bash
###### Copy front-proxy-ca.crt from k8s-master01 to all worker nodes
[root@k8s-master01 ~]# scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-worker01:/etc/kubernetes/pki/
[root@k8s-master01 ~]# scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-worker02:/etc/kubernetes/pki/

## Download metrics-server v0.7.0
# Download URL: https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.7.0/components.yaml
# Or install straight over the network:
[root@k8s-master01 ~]# kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.7.0/components.yaml

### Modify the image source and the probes in the yaml
[root@k8s-master01 metrics]# cat components.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls     ## added: skip kubelet certificate verification
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.7.0   ## changed to a domestic mirror
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          tcpSocket:     ### probe type changed
            port: 10250  ### probe port changed
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 10250
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          tcpSocket:		### probe type changed
            port: 10250	### probe port changed
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
          seccompProfile:
            type: RuntimeDefault
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
```
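
After applying the modified manifest, the metrics API becomes available once the Pod is Ready; `kubectl top` then works:

```bash
[root@k8s-master01 metrics]# kubectl apply -f components.yaml
[root@k8s-master01 metrics]# kubectl -n kube-system get pod -l k8s-app=metrics-server
[root@k8s-master01 metrics]# kubectl top nodes
[root@k8s-master01 metrics]# kubectl top pod -n kube-system
```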

## X. Deploy the Dashboard

```bash
[root@k8s-master01 ~]# mkdir dashboard
[root@k8s-master01 ~]# cd dashboard
[root@k8s-master01 dashboard]# wget https://github.com/kubernetes/dashboard/archive/refs/tags/v2.6.1.tar.gz
[root@k8s-master01 dashboard]# tar xf v2.6.1.tar.gz
[root@k8s-master01 dashboard]# cd dashboard-2.6.1/
[root@k8s-master01 dashboard-2.6.1]# cd aio/deploy/
[root@k8s-master01 deploy]#
## Change the dashboard Service type to NodePort
[root@k8s-master01 deploy]# vim recommended.yaml 

spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443
  selector:
    k8s-app: kubernetes-dashboard

## Apply ##
[root@k8s-master01 deploy]# kubectl apply -f recommended.yaml 
### Check
[root@k8s-master01 deploy]# kubectl -n kubernetes-dashboard get pod -o wide 
NAME                                        READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-8c47d4b5d-lgxfv   1/1     Running   0          38m   172.16.79.102   k8s-worker01   <none>           <none>
kubernetes-dashboard-6c75475678-xvbw5       1/1     Running   0          38m   172.16.69.224   k8s-worker02   <none>           <none>
## Create a login account and a login token
[root@k8s-master01 dashboard]# cat sa.yaml 

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile


## Apply
[root@k8s-master01 dashboard]# kubectl apply -f sa.yaml 
## Check the admin ServiceAccount
[root@k8s-master01 dashboard]# kubectl -n kube-system get sa admin
NAME    SECRETS   AGE
admin   0         32m
### Create a token
[root@k8s-master01 dashboard]# kubectl create token admin  --namespace kube-system
eyJhbGciOiJSUzI1NiIsImtpZCI6IlUtUUNRaXNUc0xUblpJZi1mak1UakJtREhIaEpqbi1IRF9JaUJSZVJQa2MifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzA3MDM4NzAwLCJpYXQiOjE3MDcwMzUxMDAsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbiIsInVpZCI6ImI2YzI3ZmNmLWViODUtNDYzZi1iMDliLWZmNzE5Mzk3YTgxZCJ9fSwibmJmIjoxNzA3MDM1MTAwLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4ifQ.RV52LS7vtRBBtHWOfrDjsx6BSTElr7vVEX-6blBFWmewDgtvw8kLy0whu8ikmv1Enz5lqiMTiwj4ZVg0NtG4G_Eq98lHG-QqgapTyUXCKjC_w76jNrEMopyNkvGTw1SgI5RDQFDZUDs2lfi-EJuayEHXOTe7eRVGn5PkK9N5GGYVq6RhtkOZZYCRX7LWzv6ZOq-MD6sPCcrkAhbo8jmKCfnO_Qgs4E8rg40DfW6ESYBHC7UE7DxmDzhro-8_uBjWAd4eJpBRjr12P7TncmnqLKzTf-_gkk6IAaaEaSo2_Ms19c3NCeLwICDmVH3NpVEOxzvHgE-W0s0ixEgIlJ08LQ

### List the secrets in kube-system
[root@k8s-master01 dashboard]# kubectl get secret -n kube-system
```
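
With the NodePort set earlier, the dashboard is reachable from a browser on any node IP; sign in with the token created above:

```bash
[root@k8s-master01 dashboard]# kubectl -n kubernetes-dashboard get svc kubernetes-dashboard
## then browse to https://192.168.115.161:30443 (any node IP works), accept the self-signed
## certificate warning, and paste the token from `kubectl create token admin --namespace kube-system`
```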