Deploying Kubernetes from Binaries in Detail: Everything You Need Is Here

Table of Contents

  • 1. Kubernetes environment planning
  • 2. When to use kubeadm vs. a binary installation
  • 3. Installing required tools
  • 3. Initialization
    • 3.1 Configure static IPs
    • 3.2 Configure hostnames
    • 3.3 Configure the hosts file
    • 3.4 Configure passwordless SSH between hosts (run on every machine)
    • 3.5 Disable the firewalld firewall
    • 3.6 Disable SELinux
    • 3.7 Disable the swap partition
    • 3.8 Tune kernel parameters
    • 3.9 Install docker-ce from the Aliyun mirror
    • 3.10 Configure a Docker registry mirror
  • 4. Building the etcd cluster
    • 4.1 Configure the etcd working directories
    • 4.2 Install the certificate-signing tool cfssl
    • 4.3 Configure the CA certificate
    • 4.4 Generate the etcd certificates
    • 4.5 Deploy the etcd cluster
  • 5. Installing the Kubernetes components
    • 5.1 Download the packages
    • 5.2 Deploy the kube-apiserver component
    • 5.3 Deploy the kubectl component
    • 5.4 Deploy the kube-controller-manager component
    • 5.5 Deploy the kube-scheduler component
    • 5.6 Import the offline image archives
    • 5.7 Deploy the kubelet component
    • 5.8 Deploy the kube-proxy component
    • 5.9 Deploy the calico component
    • 5.10 Deploy the coredns component
  • 6. Install keepalived + nginx for kube-apiserver high availability
  • 7. Taint the master nodes to forbid scheduling

1. Kubernetes environment planning

Pod CIDR: 10.0.0.0/16

Service CIDR: 10.255.0.0/16

Cluster role | IP | Hostname | Installed components

Control node | 10.10.0.10 | master01 | apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx

Control node | 10.10.0.11 | master02 | apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx

Control node | 10.10.0.12 | master03 | apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx

Worker node | 10.10.0.14 | node01 | kubelet, kube-proxy, docker, calico, coredns

VIP | 10.10.0.100

2. When to use kubeadm vs. a binary installation

kubeadm is the official open-source tool for quickly standing up a Kubernetes cluster, and it is currently the convenient, recommended approach. The two commands kubeadm init and kubeadm join are enough to create a cluster quickly.

In a kubeadm-initialized cluster, all control-plane components run as pods and can recover automatically from failures.

kubeadm is essentially an automated installer: it is as if a script set the cluster up for you. That simplifies deployment, but the automation hides many details, so you get little exposure to the individual modules. If you do not understand the Kubernetes architecture and its components well, problems can be hard to troubleshoot.

kubeadm suits scenarios where you deploy Kubernetes frequently or need a high degree of automation.

Binary installation: you download each component's binary package from the official site and install it by hand, which also gives you a more complete understanding of Kubernetes.

Both kubeadm and binary installs are suitable for production and run stably there; evaluate your actual project to decide which one to use.

3. Installing required tools

yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

Install ntpdate:

[root@master01 ~ ]# yum install -y ntpdate

[root@master01 ~ ]# systemctl enable ntpdate.service --now
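To keep the node clocks aligned over time (which matters for etcd and certificate validity), a common addition is a periodic ntpdate sync via cron. This is only a sketch: the NTP server ntp.aliyun.com is an assumption, so substitute a time source reachable from your environment.

# Run on every node: re-sync the clock every 5 minutes (NTP server is an assumption)
echo "*/5 * * * * /usr/sbin/ntpdate ntp.aliyun.com >/dev/null 2>&1" >> /var/spool/cron/root
crontab -l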

3. Initialization

3.1 Configure static IPs

3.2 Configure hostnames

Set the hostname on each machine with a command like:

hostnamectl set-hostname master03 && bash

3.3 Configure the hosts file

# On master01, master02, master03, and node01, add the following four lines to /etc/hosts:

10.10.0.10 master01

10.10.0.11 master02

10.10.0.12 master03

10.10.0.14 node01
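A minimal sketch for appending these entries on each node (run it once per machine; it assumes the entries are not already present in /etc/hosts):

# Append the cluster hostnames to /etc/hosts
cat >> /etc/hosts << 'EOF'
10.10.0.10 master01
10.10.0.11 master02
10.10.0.12 master03
10.10.0.14 node01
EOF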

3.4 Configure passwordless SSH between hosts (run the following on every machine)

# Generate an SSH key pair

ssh-keygen -t rsa    # press Enter through all prompts; do not set a passphrase

Then install the local public key into the corresponding account on each remote host, as sketched below.
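A minimal sketch using ssh-copy-id (it assumes the hosts file above is already in place and will prompt once for each node's root password):

# Push the local public key to every node, including the current one
for host in master01 master02 master03 node01; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host
done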

3.5 Disable the firewalld firewall

Run on master01, master02, master03, and node01:

systemctl stop firewalld ; systemctl disable firewalld

3.6 Disable SELinux

Run on master01, master02, master03, and node01:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

# After changing the SELinux configuration file, reboot the machine for the change to take permanent effect

After the reboot, log in and verify the change:

getenforce

# "Disabled" means SELinux is now off

3.7 Disable the swap partition

Run on master01, master02, master03, and node01:

# Temporarily disable swap

swapoff -a

# Permanently disable: comment out the swap mount by prefixing the swap line with #

vim /etc/fstab

#/dev/mapper/centos-swap swap swap defaults 0 0

# If the machine is a cloned VM, also remove the UUID entry

A non-interactive alternative is sketched below.
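A minimal non-interactive sketch that comments out any /etc/fstab line mentioning swap (back the file up first; it assumes no unrelated entries contain the word "swap"):

swapoff -a
cp /etc/fstab /etc/fstab.bak
sed -ri 's/.*swap.*/#&/' /etc/fstab
grep swap /etc/fstab    # verify the swap line is now commented out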

3.8 Tune kernel parameters

Run on master01, master02, master03, and node01:

Install ipvsadm on all nodes (IPVS will be used as the traffic-scheduling mode):

yum install ipvsadm ipset sysstat conntrack libseccomp -y

Configure the IPVS modules on all nodes. On kernel 4.19+ the module nf_conntrack_ipv4 has been renamed to nf_conntrack; on kernels 4.18 and below use nf_conntrack_ipv4:

modprobe -- ip_vs

modprobe -- ip_vs_rr

modprobe -- ip_vs_wrr

modprobe -- ip_vs_sh

modprobe -- nf_conntrack

[root@master01 ~ ]# vim /etc/modules-load.d/ipvs.conf

ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip

Then run systemctl enable --now systemd-modules-load.service.

Kernel parameter changes: the br_netfilter module passes bridged traffic to the iptables chains, and the related kernel parameters must be enabled for that forwarding to work.

[root@master01 ~]# modprobe br_netfilter

# Modify kernel parameters

Enable the kernel parameters a Kubernetes cluster requires; configure them on all nodes:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

sysctl --system

If sysctl -p /etc/sysctl.d/k8s.conf reports errors such as:

sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory

sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory

the fix is:

modprobe br_netfilter
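modprobe only loads the module until the next reboot. A minimal sketch to have br_netfilter load automatically at boot (the file name k8s.conf is an arbitrary choice):

cat > /etc/modules-load.d/k8s.conf << 'EOF'
br_netfilter
EOF
systemctl restart systemd-modules-load.service
lsmod | grep br_netfilter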

After configuring the kernel parameters on all nodes, reboot the servers to make sure the modules and parameters are still loaded afterwards.

reboot

lsmod | grep --color=auto -e ip_vs -e nf_conntrack

[root@master01 ~ ]# lsmod | grep --color=auto -e ip_vs -e nf_conntrack
ip_vs_ftp              16384  0 
nf_nat                 32768  1 ip_vs_ftp
ip_vs_sed              16384  0 
ip_vs_nq               16384  0 
ip_vs_fo               16384  0 
ip_vs_sh               16384  0 
ip_vs_dh               16384  0 
ip_vs_lblcr            16384  0 
ip_vs_lblc             16384  0 
ip_vs_wrr              16384  0 
ip_vs_rr               16384  0 
ip_vs_wlc              16384  0 
ip_vs_lc               16384  0 
ip_vs                 151552  24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack          143360  2 nf_nat,ip_vs
nf_defrag_ipv6         20480  1 nf_conntrack
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  4 nf_conntrack,nf_nat,xfs,ip_vs

3.9 Install docker-ce from the Aliyun mirror

# Step 1: install required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repository information
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: point the repository at the Aliyun mirror
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Step 4: refresh the cache and install Docker CE
sudo yum makecache fast
sudo yum -y install docker-ce
# Step 5: start the Docker service
sudo service docker start

3.10 Configure a Docker registry mirror
[root@master01 ~ ]# cat /etc/docker/daemon.json 
{
  "registry-mirrors": ["http://abcd1234.m.daocloud.io"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}


systemctl daemon-reload
systemctl restart docker
systemctl status docker
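A quick check that the daemon picked up the new settings; this sketch just confirms the cgroup driver and registry mirror configured in daemon.json:

docker info | grep -iE -A1 'cgroup driver|registry mirrors'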

4. Building the etcd cluster

4.1 Configure the etcd working directories

# Create directories for the configuration files and certificates

[root@master01 ~ ]#mkdir -p /etc/etcd

[root@master01 ~ ]#mkdir -p /etc/etcd/ssl

[root@master02 ~ ]#mkdir -p /etc/etcd

[root@master02 ~ ]#mkdir -p /etc/etcd/ssl

[root@master03 ~ ]#mkdir -p /etc/etcd

[root@master03 ~ ]#mkdir -p /etc/etcd/ssl

4.2 Install the certificate-signing tool cfssl

[root@master01 ~ ]#mkdir /data/work -p

[root@master01 ~ ]#cd /data/work/

# Upload cfssl-certinfo_linux-amd64, cfssljson_linux-amd64, and cfssl_linux-amd64 to /data/work/

[root@master01 work ]#ll
total 18808
-rw-r--r-- 1 root root  6595195 Oct 25 15:39 cfssl-certinfo_linux-amd64
-rw-r--r-- 1 root root  2277873 Oct 25 15:39 cfssljson_linux-amd64
-rw-r--r-- 1 root root 10376657 Oct 25 15:39 cfssl_linux-amd64

# Make the files executable
[root@master01 work ]#chmod +x *
[root@master01 work ]#ll
total 18808
-rwxr-xr-x 1 root root  6595195 Oct 25 15:39 cfssl-certinfo_linux-amd64
-rwxr-xr-x 1 root root  2277873 Oct 25 15:39 cfssljson_linux-amd64
-rwxr-xr-x 1 root root 10376657 Oct 25 15:39 cfssl_linux-amd64
[root@master01 work ]#mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@master01 work ]#mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@master01 work ]#mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
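A quick sanity check that the tools are now on the PATH and executable:

cfssl version
which cfssl cfssljson cfssl-certinfo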

4.3 Configure the CA certificate

Generate the CA certificate.

Initialize the default template files:

cfssl print-defaults config > ca-config.json

cfssl print-defaults csr > ca-csr.json

# Edit ca-csr.json into the CA certificate signing request (CSR) file below

[root@master01 work ]#cat ca-csr.json 
{
  "CN": "kubernetes",
  "key": {
      "algo": "rsa",
      "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
          "expiry": "87600h"
  }
}

Notes:

CN: Common Name. kube-apiserver extracts this field from the certificate and uses it as the requesting user name (User Name). Browsers use it to check whether a site is legitimate; for an SSL certificate it is usually the site's domain name, for a code-signing certificate the applicant organization, and for a client certificate the applicant's name.

O: Organization. kube-apiserver extracts this field and uses it as the requesting user's group (Group). For an SSL certificate it is usually the site's domain name, for a code-signing certificate the applicant organization, and for a client certificate the applicant's organization.

L field: city.

ST field: state or province.

C field: the two-letter country code, e.g. CN for China.

[root@master01 work ]#cfssl gencert -initca ca-csr.json  | cfssljson -bare ca
2022/10/25 16:59:15 [INFO] generating a new CA key and certificate from CSR
2022/10/25 16:59:15 [INFO] generate received request
2022/10/25 16:59:15 [INFO] received CSR
2022/10/25 16:59:15 [INFO] generating key: rsa-2048
2022/10/25 16:59:15 [INFO] encoded CSR
2022/10/25 16:59:15 [INFO] signed certificate with serial number 389912219972037047043791867430049210836195704387

[root@master01 work ]#ll

total 16

-rw-r--r-- 1 root root 997 Oct 25 16:59 ca.csr

-rw-r--r-- 1 root root 253 Oct 25 15:39 ca-csr.json

-rw------- 1 root root 1679 Oct 25 16:59 ca-key.pem

-rw-r--r-- 1 root root 1346 Oct 25 16:59 ca.pem

# The CA signing configuration file

[root@master01 work ]#cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

4.4 Generate the etcd certificates

# Configure the etcd certificate request; change the IPs in hosts to the IPs of your own etcd nodes

[root@master01 work ]#cat etcd-csr.json 
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.10.0.10",
    "10.10.0.11",
    "10.10.0.12",
    "10.10.0.100"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "Hubei",
    "L": "Wuhan",
    "O": "k8s",
    "OU": "system"
  }]
} 

# The IPs in the hosts field above are the internal cluster-communication IPs of all etcd nodes plus the VIP; you can reserve a few extra IPs for future expansion.

[root@master01 work ]#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson  -bare etcd
2022/10/25 17:08:11 [INFO] generate received request
2022/10/25 17:08:11 [INFO] received CSR
2022/10/25 17:08:11 [INFO] generating key: rsa-2048
2022/10/25 17:08:11 [INFO] encoded CSR
2022/10/25 17:08:11 [INFO] signed certificate with serial number 135048769106462136999813410398927441265408992932
2022/10/25 17:08:11 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

The profile name passed to -profile must match one of the names under "profiles" in ca-config.json.

[root@master01 work ]#ls etcd*.pem

etcd-key.pem etcd.pem
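Optionally, verify that the signed certificate really contains all of the etcd node IPs in its Subject Alternative Names (a quick openssl check; the grep pattern is only for readability):

openssl x509 -in etcd.pem -noout -text | grep -A1 'Subject Alternative Name'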

4.5 Deploy the etcd cluster

Download etcd:

https://github.com/etcd-io/etcd/releases

Upload etcd-v3.4.13-linux-amd64.tar.gz to /data/work

[root@master01 work ]#ll

-rw-r--r-- 1 root root 17373136 Oct 25 15:39 etcd-v3.4.13-linux-amd64.tar.gz

[root@master01 work ]#tar xf etcd-v3.4.13-linux-amd64.tar.gz

[root@master01 etcd-v3.4.13-linux-amd64 ]#cp -a etcdctl etcd /usr/local/bin/

Copy the etcd binaries to the other two master nodes:

[root@master01 etcd-v3.4.13-linux-amd64 ]#scp -p etcd* 10.10.0.11:/usr/local/bin/
etcd                                                                                                                   100%   23MB 119.7MB/s   00:00    
etcdctl                                                                                                                100%   17MB  69.5MB/s   00:00    
[root@master01 etcd-v3.4.13-linux-amd64 ]#scp -p etcd* 10.10.0.12:/usr/local/bin/
etcd                                                                                                                   100%   23MB 112.7MB/s   00:00    
etcdctl 

# Create the configuration file
[root@master01 work ]#cat etcd.conf 
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.10.0.10:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.10.0.10:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.0.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.10.0.10:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://10.10.0.10:2380,etcd2=https://10.10.0.11:2380,etcd3=https://10.10.0.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# Notes:

ETCD_NAME: node name, unique within the cluster

ETCD_DATA_DIR: data directory

ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) communication

ETCD_LISTEN_CLIENT_URLS: listen address for client access

ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster

ETCD_ADVERTISE_CLIENT_URLS: client address advertised to clients

ETCD_INITIAL_CLUSTER: addresses of the cluster members

ETCD_INITIAL_CLUSTER_TOKEN: cluster token

ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; new for a new cluster, existing to join an existing one

# Create the systemd service file
[root@master01 work ]#cat etcd.service 
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
 
[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
[root@master01 work ]#cp ca*.pem /etc/etcd/ssl/
[root@master01 work ]#cp etcd*.pem /etc/etcd/ssl/
[root@master01 work ]#cp etcd.conf /etc/etcd/
[root@master01 work ]#cp etcd.service /usr/lib/systemd/system/

[root@master01 work ]#for i in master02 master03;do rsync -vaz etcd.conf $i:/etc/etcd/;done

[root@master01 work ]#for i in master02 master03;do rsync -vaz etcd*.pem ca*.pem $i:/etc/etcd/ssl/;done

[root@master01 work ]#for i in master02 master03;do rsync -vaz etcd.service $i:/usr/lib/systemd/system/;done

# Start the etcd cluster

[root@master01 work]# mkdir -p /var/lib/etcd/default.etcd

[root@master02 work]# mkdir -p /var/lib/etcd/default.etcd

[root@master03 work]# mkdir -p /var/lib/etcd/default.etcd

Edit the configuration file on master02:
[root@master02 ~ ]#cat /etc/etcd/etcd.conf 
#[Member]
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.10.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.10.0.11:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.10.0.11:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://10.10.0.10:2380,etcd2=https://10.10.0.11:2380,etcd3=https://10.10.0.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Edit the configuration file on master03:
[root@master03 ~ ]#vim /etc/etcd/etcd.conf 
#[Member]
ETCD_NAME="etcd3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.10.0.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.10.0.12:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.0.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.10.0.12:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://10.10.0.10:2380,etcd2=https://10.10.0.11:2380,etcd3=https://10.10.0.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@master01 work ]#systemctl daemon-reload

[root@master02 ~ ]#systemctl daemon-reload

[root@master03 ~ ]#systemctl daemon-reload

[root@master01 work ]#systemctl enable etcd.service --now

[root@master02 work ]#systemctl enable etcd.service --now

[root@master03 work ]#systemctl enable etcd.service --now

When starting etcd, start the service on master01 first; it will appear to hang in the "starting" state. Then start etcd on master02 (and master03), and only then does the etcd on master01 come up normally.

Finally, restart them all:

[root@master01 work ]#systemctl restart etcd.service

# Check the etcd cluster

[root@master01 work ]#export ETCDCTL_API=3

[root@master01 work ]#etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://10.10.0.10:2379,https://10.10.0.11:2379,https://10.10.0.12:2379  endpoint health
+-------------------------+--------+-------------+-------+
|        ENDPOINT         | HEALTH |    TOOK     | ERROR |
+-------------------------+--------+-------------+-------+
| https://10.10.0.12:2379 |   true | 10.214118ms |       |
| https://10.10.0.10:2379 |   true |  9.085152ms |       |
| https://10.10.0.11:2379 |   true |  10.12115ms |       |
+-------------------------+--------+-------------+-------+
[root@master01 work ]#
[root@master01 work ]#etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://10.10.0.10:2379,https://10.10.0.11:2379,https://10.10.0.12:2379  endpoint status
+-------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|        ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.10.0.10:2379 | 7c3b81b30c59fb64 |  3.4.13 |   20 kB |      true |      false |        11 |         15 |                 15 |        |
| https://10.10.0.11:2379 | 86041dd24c0806ff |  3.4.13 |   25 kB |     false |      false |        11 |         15 |                 15 |        |
| https://10.10.0.12:2379 | 76002ef45e4ee68e |  3.4.13 |   20 kB |     false |      false |        11 |         15 |                 15 |        |
+-------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
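You can also list the members and confirm which etcd names map to which peer URLs (same TLS flags as above):

etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://10.10.0.10:2379,https://10.10.0.11:2379,https://10.10.0.12:2379 member list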

5. Installing the Kubernetes components

5.1 Download the packages

The binary packages live in the Kubernetes GitHub repository:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/

Download from GitHub:

https://github.com/kubernetes/kubernetes/releases/tag/v1.23.8

On the release page, follow CHANGELOG -> Download for v1.23.8 -> Server Binaries -> kubernetes-server-linux-amd64.tar.gz and download that archive.

Avoid brand-new ".0" releases such as 1.21.0; freshly released versions tend to have many issues.

# Upload kubernetes-server-linux-amd64.tar.gz to /data/work on master01:

[root@master01 work ]#tar xf kubernetes-server-linux-amd64.tar.gz

[root@master01 work ]#cd kubernetes/

[root@master01 kubernetes ]#ll

total 33672

drwxr-xr-x 2 root root 6 May 12 2021 addons

-rw-r--r-- 1 root root 34477274 May 12 2021 kubernetes-src.tar.gz

drwxr-xr-x 3 root root 49 May 12 2021 LICENSES

drwxr-xr-x 3 root root 17 May 12 2021 server

[root@master01 server ]#cd bin/

[root@master01 bin ]#ll
total 986524
-rwxr-xr-x 1 root root  46678016 May 12  2021 apiextensions-apiserver
-rwxr-xr-x 1 root root  39215104 May 12  2021 kubeadm
-rwxr-xr-x 1 root root  44675072 May 12  2021 kube-aggregator
-rwxr-xr-x 1 root root 118210560 May 12  2021 kube-apiserver
-rw-r--r-- 1 root root         8 May 12  2021 kube-apiserver.docker_tag
-rw------- 1 root root 123026944 May 12  2021 kube-apiserver.tar
-rwxr-xr-x 1 root root 112746496 May 12  2021 kube-controller-manager
-rw-r--r-- 1 root root         8 May 12  2021 kube-controller-manager.docker_tag
-rw------- 1 root root 117562880 May 12  2021 kube-controller-manager.tar
-rwxr-xr-x 1 root root  40226816 May 12  2021 kubectl
-rwxr-xr-x 1 root root 114097256 May 12  2021 kubelet
-rwxr-xr-x 1 root root  39481344 May 12  2021 kube-proxy
-rw-r--r-- 1 root root         8 May 12  2021 kube-proxy.docker_tag
-rw------- 1 root root 120374784 May 12  2021 kube-proxy.tar
-rwxr-xr-x 1 root root  43716608 May 12  2021 kube-scheduler
-rw-r--r-- 1 root root         8 May 12  2021 kube-scheduler.docker_tag
-rw------- 1 root root  48532992 May 12  2021 kube-scheduler.tar
-rwxr-xr-x 1 root root   1634304 May 12  2021 mounter

[root@master01 bin ]#cp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet /usr/local/bin/

Copy the binaries to the other master nodes:
[root@master01 bin ]#rsync -avz  kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet  master02:/usr/local/bin/
sending incremental file list
kube-apiserver
kube-controller-manager
kube-scheduler
kubectl
kubelet

sent 109,825,016 bytes  received 111 bytes  6,656,068.30 bytes/sec
total size is 428,997,736  speedup is 3.91
[root@master01 bin ]#rsync -avz  kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet  master03:/usr/local/bin/
sending incremental file list
kube-apiserver
kube-controller-manager
kube-scheduler
kubectl
kubelet

sent 109,825,016 bytes  received 111 bytes  5,108,145.44 bytes/sec
total size is 428,997,736  speedup is 3.91

The server package also contains the node (client) binaries; copy the ones the worker node needs:
[root@master01 bin ]#pwd
/data/work/kubernetes/server/bin
[root@master01 bin ]#scp kube-proxy kubelet node01:/usr/local/bin/
The authenticity of host 'node01 (10.10.0.14)' can't be established.
ECDSA key fingerprint is SHA256:KNFJAL1IY7QwPJevBpHBLelq/cGGjS4Iu3qb3gxgqgs.
ECDSA key fingerprint is MD5:6b:82:08:da:02:d6:1b:d0:ec:d1:93:c3:b8:21:6a:b7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node01' (ECDSA) to the list of known hosts.
kube-proxy                                                                                                             100%   38MB  27.6MB/s   00:01    
kubelet 


[root@master01 work ]#mkdir -p /etc/kubernetes/
[root@master01 work ]#mkdir -p /etc/kubernetes/ssl
[root@master01 work ]#mkdir /var/log/kubernetes

[root@master02 ~ ]#mkdir -p /etc/kubernetes/
[root@master02 ~ ]#mkdir -p /etc/kubernetes/ssl
[root@master02 ~ ]#mkdir /var/log/kubernetes

[root@master03 ~ ]#mkdir -p /etc/kubernetes/
[root@master03 ~ ]#mkdir -p /etc/kubernetes/ssl
[root@master03 ~ ]#mkdir /var/log/kubernetes

5.2 Deploy the kube-apiserver component

# The TLS Bootstrapping mechanism

Once the apiserver enables TLS authentication, every node's kubelet must talk to it with a valid certificate signed by the CA that the apiserver uses. When there are many nodes, issuing these client certificates is a lot of work and makes scaling the cluster more complex.

To simplify this, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet connects to the apiserver as a low-privileged user to request a certificate, and the apiserver dynamically signs and issues the kubelet's certificate.

Bootstrap programs exist in many systems (the Linux bootstrap, for example); a bootstrap is generally pre-configured and loaded when the system starts, and it can be used to bring up a specific environment. The Kubernetes kubelet likewise loads such a configuration file at startup; its content looks like this:

apiVersion: v1
clusters: null
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user: {}

# How TLS bootstrapping actually works

1. Role of TLS

TLS encrypts the communication and prevents eavesdropping by a man in the middle. If the certificate is not trusted, a connection to the apiserver cannot even be established, let alone be authorized to request anything from it.

2. Role of RBAC (Role-Based Access Control)

Once TLS has solved the communication problem, authorization is handled by RBAC (other authorization models such as ABAC can also be used). RBAC specifies which APIs a user or group (the subject) is allowed to call. Combined with TLS, the apiserver reads the client certificate's CN field as the user name and its O field as the group.

In summary:

First, to talk to the apiserver you must use a certificate signed by the apiserver's CA; that is what establishes trust and allows the TLS connection.

Second, the certificate's CN and O fields supply the user and group that RBAC needs.

# The kubelet's first-start flow

TLS bootstrapping lets the kubelet request a certificate from the apiserver and then use it to connect; but how does it connect the very first time, before it has a certificate?

The apiserver configuration points at a token.csv file containing a preset user; it is loaded at startup. That user's token, together with the CA trusted by the apiserver, is written into the bootstrap.kubeconfig file used by the kubelet.

When the kubelet starts, it loads bootstrap.kubeconfig. On its first request, the kubelet uses that trusted configuration to establish the TLS connection to the apiserver and presents the token from bootstrap.kubeconfig to declare its RBAC identity.

token.csv format:

3940fd7fbb391d1b4d861ad17a1f0613,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

To create the token file referenced by that configuration, first generate a token:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '

cat > /data/kubernetes/token/token.csv << EOF

8414f742b0b05960998699427f780978,kubelet-bootstrap,10001,"system:node-bootstrapper"

EOF

On first start, the kubelet may report a 401 "unauthorized" error against the apiserver. By default the kubelet declares its identity with the preset token in bootstrap.kubeconfig and then creates a CSR request, but until we do something about it that user has no permissions at all, including the permission to create CSRs. You therefore need a ClusterRoleBinding that binds the preset kubelet-bootstrap user to the built-in ClusterRole system:node-bootstrapper so that it can submit CSR requests.

This will be demonstrated later when the kubelet is installed; a sketch of the binding is shown below for reference.
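A minimal sketch of that binding (the binding name kubelet-bootstrap is an arbitrary choice; the user must match the one in token.csv):

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap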

# Create the token.csv file
[root@master01 work]# cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

# Format: token,user name,UID,"group"

[root@master01 work ]#cat token.csv

fe63c95ecacff5f161138fddee6d0a5e,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

# Create the CSR request file; replace the IPs with those of your own machines
[root@master01 work ]#cat kube-apiserver-csr.json 
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.10.0.10",
    "10.10.0.11",
    "10.10.0.12",
    "10.10.0.14",
    "10.10.0.100",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

# Note: if the hosts field is not empty, it must list every IP or domain name authorized to use this certificate.

Because this certificate will be used by the whole Kubernetes master cluster, include the IPs of all master nodes, reserve a few spare IPs, and preferably add the node IPs as well.

Also include the first IP of the service network (generally the first IP of the range passed to kube-apiserver via --service-cluster-ip-range, e.g. 10.255.0.1).

# Generate the certificate
[root@master01 work ]#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
2022/10/26 09:17:42 [INFO] generate received request
2022/10/26 09:17:42 [INFO] received CSR
2022/10/26 09:17:42 [INFO] generating key: rsa-2048
2022/10/26 09:17:43 [INFO] encoded CSR
2022/10/26 09:17:43 [INFO] signed certificate with serial number 472605278820153059718832369709947675981512800305
2022/10/26 09:17:43 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

-profile=kubernetes refers to the profile defined in ca-config.json.

# Create the kube-apiserver configuration file; replace the IPs with your own
[root@master01 work ]#cat kube-apiserver.conf 
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=10.10.0.10 \
  --secure-port=6443 \
  --advertise-address=10.10.0.10 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://10.10.0.10:2379,https://10.10.0.11:2379,https://10.10.0.12:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"

# Notes:

--logtostderr: whether to log to standard error (disabled here in favor of log files)

--v: log verbosity level

--log-dir: log directory

--etcd-servers: etcd cluster addresses

--bind-address: listen address

--secure-port: HTTPS secure port

--advertise-address: address advertised to the cluster

--allow-privileged: allow privileged containers

--service-cluster-ip-range: Service virtual IP range

--enable-admission-plugins: admission control plugins

--authorization-mode: authorization modes; enables RBAC and Node authorization

--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism

--token-auth-file: bootstrap token file

--service-node-port-range: default port range for NodePort Services

--kubelet-client-xxx: client certificate the apiserver uses to reach kubelets

--tls-xxx-file: the apiserver's own HTTPS certificate

--etcd-xxxfile: certificates for connecting to the etcd cluster

--audit-log-xxx: audit log settings

# Create the systemd service file
[root@master01 work ]#cat kube-apiserver.service 
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
 
[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
[root@master01 work ]#cp ca*.pem /etc/kubernetes/ssl
[root@master01 work ]#cp kube-apiserver*.pem /etc/kubernetes/ssl/
[root@master01 work ]#cp token.csv /etc/kubernetes/
[root@master01 work ]#cp kube-apiserver.conf /etc/kubernetes/
[root@master01 work ]#cp kube-apiserver.service /usr/lib/systemd/system/
[root@master01 work ]#rsync -vaz token.csv master02:/etc/kubernetes/
sending incremental file list
token.csv

sent 161 bytes  received 35 bytes  392.00 bytes/sec
total size is 84  speedup is 0.43
[root@master01 work ]#rsync -vaz token.csv master03:/etc/kubernetes/
sending incremental file list
token.csv

sent 161 bytes  received 35 bytes  78.40 bytes/sec
total size is 84  speedup is 0.43
[root@master01 work ]#rsync -vaz kube-apiserver*.pem master02:/etc/kubernetes/ssl/
sending incremental file list
kube-apiserver-key.pem
kube-apiserver.pem

sent 2,604 bytes  received 54 bytes  5,316.00 bytes/sec
total size is 3,310  speedup is 1.25
[root@master01 work ]#rsync -vaz kube-apiserver*.pem master03:/etc/kubernetes/ssl/
sending incremental file list
kube-apiserver-key.pem
kube-apiserver.pem

sent 2,604 bytes  received 54 bytes  5,316.00 bytes/sec
total size is 3,310  speedup is 1.25
[root@master01 work ]#rsync -vaz ca*.pem master02:/etc/kubernetes/ssl/
sending incremental file list
ca-key.pem
ca.pem

sent 2,420 bytes  received 54 bytes  4,948.00 bytes/sec
total size is 3,025  speedup is 1.22
[root@master01 work ]#rsync -vaz ca*.pem master03:/etc/kubernetes/ssl/
sending incremental file list
ca-key.pem
ca.pem

sent 2,420 bytes  received 54 bytes  1,649.33 bytes/sec
total size is 3,025  speedup is 1.22
[root@master01 work ]#rsync -vaz kube-apiserver.conf  master02:/etc/kubernetes/
sending incremental file list
kube-apiserver.conf


sent 1,005 bytes  received 54 bytes  2,118.00 bytes/sec
total size is 1,959  speedup is 1.85
[root@master01 work ]#rsync -vaz kube-apiserver.conf master03:/etc/kubernetes/
sending incremental file list
kube-apiserver.conf

sent 707 bytes  received 35 bytes  1,484.00 bytes/sec
total size is 1,597  speedup is 2.15
[root@master01 work ]#rsync -vaz kube-apiserver.service master02:/usr/lib/systemd/system/
sending incremental file list
kube-apiserver.service

sent 340 bytes  received 35 bytes  750.00 bytes/sec
total size is 362  speedup is 0.97
[root@master01 work ]#rsync -vaz kube-apiserver.service master03:/usr/lib/systemd/system/
sending incremental file list
kube-apiserver.service

sent 340 bytes  received 35 bytes  750.00 bytes/sec
total size is 362  speedup is 0.97

Note: in kube-apiserver.conf on master02 and master03, change the IP addresses to each machine's actual local IP.

master02 configuration file:
[root@master02 kubernetes ]#cat kube-apiserver.conf 
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=10.10.0.11 \
  --secure-port=6443 \
  --advertise-address=10.10.0.11 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://10.10.0.10:2379,https://10.10.0.11:2379,https://10.10.0.12:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"

master03 configuration file:
[root@master03 kubernetes ]#cat kube-apiserver.conf 
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=10.10.0.12 \
  --secure-port=6443 \
  --advertise-address=10.10.0.12 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://10.10.0.10:2379,https://10.10.0.11:2379,https://10.10.0.12:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"

[root@master01 work ]#systemctl daemon-reload

[root@master02 kubernetes ]#systemctl daemon-reload

[root@master03 kubernetes ]#systemctl daemon-reload

[root@master01 work ]#systemctl enable kube-apiserver.service --now

[root@master02 kubernetes ]#systemctl enable kube-apiserver.service --now

[root@master03 kubernetes ]#systemctl enable kube-apiserver.service --now

[root@master01 work ]#curl --insecure https://10.10.0.10:6443/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401

上面看到401,这个是正常的的状态,还没认证
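As a sanity check that token authentication works, you can call a read-only endpoint with the bootstrap token from token.csv. This is only a sketch and assumes the default discovery RBAC bindings, which allow any authenticated user to read /version:

TOKEN=$(cut -d, -f1 /etc/kubernetes/token.csv)
curl --cacert /etc/kubernetes/ssl/ca.pem -H "Authorization: Bearer $TOKEN" https://10.10.0.10:6443/version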

5.3 Deploy the kubectl component

kubectl is the client tool used to operate on Kubernetes resources: create, delete, update, query, and so on.

How does kubectl know which cluster to connect to? It needs a file such as /etc/kubernetes/admin.conf (generated automatically by a kubeadm install). kubectl uses the configuration in that file to access cluster resources; the file records which cluster to talk to and which certificates to use.

You can point kubectl at it with the KUBECONFIG environment variable:

[root@master01 ~ ]#export KUBECONFIG=/etc/kubernetes/admin.conf

With that set, kubectl automatically loads KUBECONFIG to decide which cluster's resources to manage.

Alternatively, use the approach kubeadm itself suggests after initializing a cluster:

[root@master01 ~]# cp /etc/kubernetes/admin.conf /root/.kube/config

Then kubectl loads /root/.kube/config to operate on the cluster.

If KUBECONFIG is set it takes precedence; otherwise /root/.kube/config decides which cluster is managed.

With a binary installation, /root/.kube/config is not generated automatically; it has to be configured by hand.

# Create the CSR request file
[root@master01 work ]#cat admin-csr.json 
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "system:masters",             
      "OU": "system"
    }
  ]
}

# Notes: kube-apiserver later uses RBAC to authorize client requests (from kubelet, kube-proxy, Pods, and so on).

kube-apiserver predefines some RoleBindings used by RBAC; for example, cluster-admin binds the group system:masters to the role cluster-admin, which grants permission to call every kube-apiserver API.

O sets this certificate's Group to system:masters. When a client uses this certificate against kube-apiserver, authentication succeeds because the certificate is signed by the CA, and because the certificate's group is the pre-authorized system:masters, the client is granted access to all APIs.

Note: this admin certificate is what the administrator's kubeconfig file will later be generated from. RBAC is the generally recommended way to control roles and permissions in Kubernetes; Kubernetes treats the certificate's CN field as the User and the O field as the Group. "O": "system:masters" must be exactly system:masters, otherwise the later kubectl create clusterrolebinding step fails.

# With O set to system:masters, the cluster's built-in cluster-admin ClusterRoleBinding already binds the system:masters group to the cluster-admin ClusterRole.

# Generate the certificate
[root@master01 work ]#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2022/10/26 13:13:36 [INFO] generate received request
2022/10/26 13:13:36 [INFO] received CSR
2022/10/26 13:13:36 [INFO] generating key: rsa-2048
2022/10/26 13:13:37 [INFO] encoded CSR
2022/10/26 13:13:37 [INFO] signed certificate with serial number 446576960760276520597218013698142034111813000185
2022/10/26 13:13:37 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

-profile=kubernetes refers to the profile defined in ca-config.json.

With this, admin and any member of the system:masters group is trusted by the apiserver.

[root@master01 work ]#cp admin*.pem /etc/kubernetes/ssl/

Configure the security context

# Create the kubeconfig file (this part matters)

A kubeconfig is kubectl's configuration file. It contains everything needed to reach the apiserver: the apiserver address, the CA certificate, and the client's own certificate. (If a later step complains that the kubeconfig path cannot be found, copy the file to the expected path by hand; otherwise ignore this.)

A binary deployment does not generate ~/.kube/config, so build it manually:

1. Set the cluster parameters

The cluster name is kubernetes; --kubeconfig specifies the name and location of the file to generate.
[root@master01 work ]#kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.0.10:6443 --kubeconfig=kube.config
Cluster "kubernetes" set.

# View the contents of kube.config
[root@master01 work ]#cat kube.config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.10.0.10:6443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

certificate-authority-data is simply the base64-encoded content of ca.pem.

2. Set the client authentication parameters

In set-credentials admin, "admin" is the CN value from admin-csr.json.

[root@master01 work ]#kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config

User "admin" set.
[root@master01 work ]#cat kube.config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.10.0.10:6443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxVENDQXIyZ0F3SUJBZ0lVVGprMEdIUG1CRWNRVVhGeFZrL21wb05pRy9rd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkyTURVd09UQXdXaGNOTXpJeE1ESXpNRFV3T1RBd1dqQm5NUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVJjd0ZRWURWUVFLRXc1egplWE4wWlcwNmJXRnpkR1Z5Y3pFUE1BMEdBMVVFQ3hNR2MzbHpkR1Z0TVE0d0RBWURWUVFERXdWaFpHMXBiakNDCkFTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDRqTGlNaXJmUWlJdHp2MHFOOE1iM3AKSExMRGNMVXB3TUtlTFQwb3NsZ0xFL21jdmMvSG12TlBLNGZMTWk1OVhYTk5IV0JiL0ZVa0RPdlFwKytOenZZegpFTVFkTnNNTkdQOUsxbGFTMUw4V1BJMmYwc0xSM2h1UGt0L09OTmdoNDBxUW81aFYwSFpZTms0TzRpTE1lejhZCmNjbjFvaTVJdUFOTHg2Q2dkNURqQWR6NWJHeTdQL0gvM0phKzVNblNkNU8yb05SL0NMRmVkWVdJb055S1VlTDkKV3A2T0l1S0pUdW94T2JMT1lCWUtITnBSUXVnRmFIQmx4aUxCd2FjR3g4VVBtTUQrRlI0SXlMNldBWGFtTUVPQwpDTi9Yayt6bXUrOGdsQ1hoMUNsblJaTGNENXhDR29sTlhpVFZGclIyc3dKcTN0RHJXSUpzV2FKZnBvTGczWDhDCkF3RUFBYU4vTUgwd0RnWURWUjBQQVFIL0JBUURBZ1dnTUIwR0ExVWRKUVFXTUJRR0NDc0dBUVVGQndNQkJnZ3IKQmdFRkJRY0RBakFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCU1lJaXl3V3M5MzFjZmZJSzRNVWlrdQpWQ2J3NWpBZkJnTlZIU01FR0RBV2dCVGZyVlJaNlZEUHlLM05ySm8yUTF6cVhnekZFekFOQmdrcWhraUc5dzBCCkFRc0ZBQU9DQVFFQVljTW5YdUNmVEV4RkwvRGRkUUYyT3pFTEx3Y1hHOFNMUjJKRUNXSnI0QXpsd1JUVUhjTDkKVGkxWXczZDI4TWh0d09NekNEYzM5ZURLM2RHZGVlMFozeFhXM0FlVGlRcFUrZnBCbURmQW1MRy9Mdzk1YzZsMQoxeXJiN1dXV2pyRHJDZHVVN1JCQnhJT3RqOERnNG1mQk1uUEhFQS9RY04yR1drc2ZwRVpleElzWWc1b242Z0RaCjdyNWxtSzZHTGZSWW83emlJY0NFak1zc281U1owbmJqYXZJUzZ0TUY0U2NLdWJKQUpVSERscFVMSnNqTWxTRW8KY01nQnVtd2ZXUnlRdU1HOXlnbHF0RDlJZDJxTUJsdU5tdlpwRy80ejdVMkhRVjVKOC9uaDFtckladmZKNDNMRQowOTNYOGdIamQvVE5kRjduVFY5ZWphTTlMZCt1TUVJQzhnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdmlNdUl5S3Q5Q0lpM08vU28zd3h2ZWtjc3NOd3RTbkF3cDR0UFNpeVdBc1QrWnk5Cno4ZWE4MDhyaDhzeUxuMWRjMDBkWUZ2OFZTUU02OUNuNzQzTzlqTVF4QjAyd3cwWS8wcldWcExVdnhZOGpaL1MKd3RIZUc0K1MzODQwMkNIalNwQ2ptRlhRZGxnMlRnN2lJc3g3UHhoeHlmV2lMa2k0QTB2SG9LQjNrT01CM1BscwpiTHMvOGYvY2xyN2t5ZEozazdhZzFIOElzVjUxaFlpZzNJcFI0djFhbm80aTRvbE82akU1c3M1Z0Znb2MybEZDCjZBVm9jR1hHSXNIQnB3Ykh4UStZd1A0Vkhnakl2cFlCZHFZd1E0SUkzOWVUN09hNzd5Q1VKZUhVS1dkRmt0d1AKbkVJYWlVMWVKTlVXdEhhekFtcmUwT3RZZ214Wm9sK21ndURkZndJREFRQUJBb0lCQVFDWkxjSjNyL0t3b2VldwpVczFCeEVaV2x6ejFqNXAzZVFIQVNLcHRnU0hjNkYvWlVydGdiNUNYd0FwenhmSFJubEh4R0FrNG5pSzFmT3VqCjkxKzBFR3pSeitZTCtQVXJRcHdHNEFXNWpXVXo1UGczcUxDbEgycHVqY1puNDdxUy9Rb2VBbFNwMzBpb2J2eWcKK2tDWWhHQXVQc1U5VFZTeE1RaCtMMGpPVVRqQ1VacHU2V0RLSVhiVzFqWU5mdWo5bVBuTlhWMTNvOWR6VFpLMApGSm1pd085VVd0WS81Z0FCVDBqQmxhWHpCSVd1QlI5QnMxcHZTS1BsNnBRVXJMMmVKZjJkR2FaRExSWHVXb0crCkp5RUtkNGRkRFZUR0ZzcFVSc0NCYXBOYTg4RjNnNzBmSS9DekR2MG1Mb1Z4cE10RGRQRlljeDhXb0pzRDNGeUQKRFArMVhZNEJBb0dCQVBiVEhXMmhaNkJSbjVXNitEM1F1U3NINDAzWmJUL09Naysyb3JuK1FqNExYR3o0cVBsSQp4YXFhcElycm9XYU05VzJJUEo1dVJZdnM0NTZUcmhWLzl0cWFxOSsrOWtlREJsalE2Wi9kQkJxSTdRUHUrdTVxCnpiK0RxcHg1TVgwVy8rS1dydFlkVHEyaHBwQlQxK0lvZTJ6STN4MXNiUk4zQ3d0VW9CZ2Z4YkQvQW9HQkFNVTAKbXl1SHl3MGVIY21iN3dvTzF5V05NbVNPb0tQK1FJMEV4U2ZCblF1UWNJNFl1dFdITUFUYWJGTXhjazZzNnhPNwppVVdPRWVLck4zbXdjM3NSN3dGUHRWNGUyQ3lWZlZjKzJIV2xmL3JzTGp6eUgrbzM1ZVdLL2NaaW9uT1Y2YzJWCmFLa01tZlR1OXdnbm5UN20vKzlSK0tPdVhubFBreVg4S3J5NElGT0JBb0dBWkZ1TWtLSGE5NVdZbEpIVUU1WkYKWTlpdU5GNGVqSjN6V1BRQ2tDdHdsYmVhMmZmMUJIN3hXQi9PbldtWFU1SW16R1ZqZUd1UHZZZ1JPTTRGTDFxNwpiVUVNZDBvMjZ2YThZdXAydzNoakRjTDAwKytjZWNwVlkvUk9MNWNiWnlndDNOeTFzL3R3blNxb0JmRUJTMFI0CmdzL2Q0Q0hRNitRd1NtZ2JQQlBYRnRNQ2dZQVErSjRCK1FXNGMwY00rcVp2cnlkRXpBbnlMWFFWcU9QVlB2dlkKbUFqejNkSlI2RDdyOFY1b2pJT1dCVU5aRWZpSkVqS1dFY3ZvUGVQZ1RSY2pHRUFCVk9LKzN0aXJ2Wkd6Mkd5NApjeTI0WW1yNFE3NExZaFFlMVA5Uisxc1BwMjhmaWlRZnFEMzNuamtVTXBTTnZVTjVUUXlneVhqSDU5azZBNkdKCjdDNmNBUUtCZ1FDSEdibTJOdXpEaUcyMEJvcURqVWR1dk9ycmYxV3p6THd0L05TZUFGOGdMNGNGcUY2WDAxOUoKcTFuV0VkSkU5UFlEcW9TQ1I4aWovTmxsbVZHVUdjNzdwLytwUDEzQTNWL1ZwdndETU5aK2FDRDJWQkYvUXBUTworUmdTTHhDbWpaeHVBTVlBd0V1c3d2dW5LNG9jeWNWWjBFSHpVSTZnblJDNkxqVk1IZVQxZ0E9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

3. Set the context parameters

The goal is to link the admin user with the apiserver cluster; the context is what ties them together.
[root@master01 work ]#kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
Context "kubernetes" created.

[root@master01 work ]#cat kube.config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.10.0.10:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: ""
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxVENDQXIyZ0F3SUJBZ0lVVGprMEdIUG1CRWNRVVhGeFZrL21wb05pRy9rd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkyTURVd09UQXdXaGNOTXpJeE1ESXpNRFV3T1RBd1dqQm5NUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVJjd0ZRWURWUVFLRXc1egplWE4wWlcwNmJXRnpkR1Z5Y3pFUE1BMEdBMVVFQ3hNR2MzbHpkR1Z0TVE0d0RBWURWUVFERXdWaFpHMXBiakNDCkFTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDRqTGlNaXJmUWlJdHp2MHFOOE1iM3AKSExMRGNMVXB3TUtlTFQwb3NsZ0xFL21jdmMvSG12TlBLNGZMTWk1OVhYTk5IV0JiL0ZVa0RPdlFwKytOenZZegpFTVFkTnNNTkdQOUsxbGFTMUw4V1BJMmYwc0xSM2h1UGt0L09OTmdoNDBxUW81aFYwSFpZTms0TzRpTE1lejhZCmNjbjFvaTVJdUFOTHg2Q2dkNURqQWR6NWJHeTdQL0gvM0phKzVNblNkNU8yb05SL0NMRmVkWVdJb055S1VlTDkKV3A2T0l1S0pUdW94T2JMT1lCWUtITnBSUXVnRmFIQmx4aUxCd2FjR3g4VVBtTUQrRlI0SXlMNldBWGFtTUVPQwpDTi9Yayt6bXUrOGdsQ1hoMUNsblJaTGNENXhDR29sTlhpVFZGclIyc3dKcTN0RHJXSUpzV2FKZnBvTGczWDhDCkF3RUFBYU4vTUgwd0RnWURWUjBQQVFIL0JBUURBZ1dnTUIwR0ExVWRKUVFXTUJRR0NDc0dBUVVGQndNQkJnZ3IKQmdFRkJRY0RBakFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCU1lJaXl3V3M5MzFjZmZJSzRNVWlrdQpWQ2J3NWpBZkJnTlZIU01FR0RBV2dCVGZyVlJaNlZEUHlLM05ySm8yUTF6cVhnekZFekFOQmdrcWhraUc5dzBCCkFRc0ZBQU9DQVFFQVljTW5YdUNmVEV4RkwvRGRkUUYyT3pFTEx3Y1hHOFNMUjJKRUNXSnI0QXpsd1JUVUhjTDkKVGkxWXczZDI4TWh0d09NekNEYzM5ZURLM2RHZGVlMFozeFhXM0FlVGlRcFUrZnBCbURmQW1MRy9Mdzk1YzZsMQoxeXJiN1dXV2pyRHJDZHVVN1JCQnhJT3RqOERnNG1mQk1uUEhFQS9RY04yR1drc2ZwRVpleElzWWc1b242Z0RaCjdyNWxtSzZHTGZSWW83emlJY0NFak1zc281U1owbmJqYXZJUzZ0TUY0U2NLdWJKQUpVSERscFVMSnNqTWxTRW8KY01nQnVtd2ZXUnlRdU1HOXlnbHF0RDlJZDJxTUJsdU5tdlpwRy80ejdVMkhRVjVKOC9uaDFtckladmZKNDNMRQowOTNYOGdIamQvVE5kRjduVFY5ZWphTTlMZCt1TUVJQzhnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdmlNdUl5S3Q5Q0lpM08vU28zd3h2ZWtjc3NOd3RTbkF3cDR0UFNpeVdBc1QrWnk5Cno4ZWE4MDhyaDhzeUxuMWRjMDBkWUZ2OFZTUU02OUNuNzQzTzlqTVF4QjAyd3cwWS8wcldWcExVdnhZOGpaL1MKd3RIZUc0K1MzODQwMkNIalNwQ2ptRlhRZGxnMlRnN2lJc3g3UHhoeHlmV2lMa2k0QTB2SG9LQjNrT01CM1BscwpiTHMvOGYvY2xyN2t5ZEozazdhZzFIOElzVjUxaFlpZzNJcFI0djFhbm80aTRvbE82akU1c3M1Z0Znb2MybEZDCjZBVm9jR1hHSXNIQnB3Ykh4UStZd1A0Vkhnakl2cFlCZHFZd1E0SUkzOWVUN09hNzd5Q1VKZUhVS1dkRmt0d1AKbkVJYWlVMWVKTlVXdEhhekFtcmUwT3RZZ214Wm9sK21ndURkZndJREFRQUJBb0lCQVFDWkxjSjNyL0t3b2VldwpVczFCeEVaV2x6ejFqNXAzZVFIQVNLcHRnU0hjNkYvWlVydGdiNUNYd0FwenhmSFJubEh4R0FrNG5pSzFmT3VqCjkxKzBFR3pSeitZTCtQVXJRcHdHNEFXNWpXVXo1UGczcUxDbEgycHVqY1puNDdxUy9Rb2VBbFNwMzBpb2J2eWcKK2tDWWhHQXVQc1U5VFZTeE1RaCtMMGpPVVRqQ1VacHU2V0RLSVhiVzFqWU5mdWo5bVBuTlhWMTNvOWR6VFpLMApGSm1pd085VVd0WS81Z0FCVDBqQmxhWHpCSVd1QlI5QnMxcHZTS1BsNnBRVXJMMmVKZjJkR2FaRExSWHVXb0crCkp5RUtkNGRkRFZUR0ZzcFVSc0NCYXBOYTg4RjNnNzBmSS9DekR2MG1Mb1Z4cE10RGRQRlljeDhXb0pzRDNGeUQKRFArMVhZNEJBb0dCQVBiVEhXMmhaNkJSbjVXNitEM1F1U3NINDAzWmJUL09Naysyb3JuK1FqNExYR3o0cVBsSQp4YXFhcElycm9XYU05VzJJUEo1dVJZdnM0NTZUcmhWLzl0cWFxOSsrOWtlREJsalE2Wi9kQkJxSTdRUHUrdTVxCnpiK0RxcHg1TVgwVy8rS1dydFlkVHEyaHBwQlQxK0lvZTJ6STN4MXNiUk4zQ3d0VW9CZ2Z4YkQvQW9HQkFNVTAKbXl1SHl3MGVIY21iN3dvTzF5V05NbVNPb0tQK1FJMEV4U2ZCblF1UWNJNFl1dFdITUFUYWJGTXhjazZzNnhPNwppVVdPRWVLck4zbXdjM3NSN3dGUHRWNGUyQ3lWZlZjKzJIV2xmL3JzTGp6eUgrbzM1ZVdLL2NaaW9uT1Y2YzJWCmFLa01tZlR1OXdnbm5UN20vKzlSK0tPdVhubFBreVg4S3J5NElGT0JBb0dBWkZ1TWtLSGE5NVdZbEpIVUU1WkYKWTlpdU5GNGVqSjN6V1BRQ2tDdHdsYmVhMmZmMUJIN3hXQi9PbldtWFU1SW16R1ZqZUd1UHZZZ1JPTTRGTDFxNwpiVUVNZDBvMjZ2YThZdXAydzNoakRjTDAwKytjZWNwVlkvUk9MNWNiWnlndDNOeTFzL3R3blNxb0JmRUJTMFI0CmdzL2Q0Q0hRNitRd1NtZ2JQQlBYRnRNQ2dZQVErSjRCK1FXNGMwY00rcVp2cnlkRXpBbnlMWFFWcU9QVlB2dlkKbUFqejNkSlI2RDdyOFY1b2pJT1dCVU5aRWZpSkVqS1dFY3ZvUGVQZ1RSY2pHRUFCVk9LKzN0aXJ2Wkd6Mkd5NApjeTI0WW1yNFE3NExZaFFlMVA5Uisxc1BwMjhmaWlRZnFEMzNuamtVTXBTTnZVTjVUUXlneVhqSDU5azZBNkdKCjdDNmNBUUtCZ1FDSEdibTJOdXpEaUcyMEJvcURqVWR1dk9ycmYxV3p6THd0L05TZUFGOGdMNGNGcUY2WDAxOUoKcTFuV0VkSkU5UFlEcW9TQ1I4aWovTmxsbVZHVUdjNzdwLytwUDEzQTNWL1ZwdndETU5aK2FDRDJWQkYvUXBUTworUmdTTHhDbWpaeHVBTVlBd0V1c3d2dW5LNG9jeWNWWjBFSHpVSTZnblJDNkxqVk1IZVQxZ0E9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

4. Set the current context

The current context determines which user accesses which cluster; in use-context kubernetes, "kubernetes" is the context name.

[root@master01 work ]#kubectl config use-context kubernetes --kubeconfig=kube.config

Switched to context "kubernetes".
[root@master01 work ]#cat kube.config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.10.0.10:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxVENDQXIyZ0F3SUJBZ0lVVGprMEdIUG1CRWNRVVhGeFZrL21wb05pRy9rd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkyTURVd09UQXdXaGNOTXpJeE1ESXpNRFV3T1RBd1dqQm5NUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVJjd0ZRWURWUVFLRXc1egplWE4wWlcwNmJXRnpkR1Z5Y3pFUE1BMEdBMVVFQ3hNR2MzbHpkR1Z0TVE0d0RBWURWUVFERXdWaFpHMXBiakNDCkFTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDRqTGlNaXJmUWlJdHp2MHFOOE1iM3AKSExMRGNMVXB3TUtlTFQwb3NsZ0xFL21jdmMvSG12TlBLNGZMTWk1OVhYTk5IV0JiL0ZVa0RPdlFwKytOenZZegpFTVFkTnNNTkdQOUsxbGFTMUw4V1BJMmYwc0xSM2h1UGt0L09OTmdoNDBxUW81aFYwSFpZTms0TzRpTE1lejhZCmNjbjFvaTVJdUFOTHg2Q2dkNURqQWR6NWJHeTdQL0gvM0phKzVNblNkNU8yb05SL0NMRmVkWVdJb055S1VlTDkKV3A2T0l1S0pUdW94T2JMT1lCWUtITnBSUXVnRmFIQmx4aUxCd2FjR3g4VVBtTUQrRlI0SXlMNldBWGFtTUVPQwpDTi9Yayt6bXUrOGdsQ1hoMUNsblJaTGNENXhDR29sTlhpVFZGclIyc3dKcTN0RHJXSUpzV2FKZnBvTGczWDhDCkF3RUFBYU4vTUgwd0RnWURWUjBQQVFIL0JBUURBZ1dnTUIwR0ExVWRKUVFXTUJRR0NDc0dBUVVGQndNQkJnZ3IKQmdFRkJRY0RBakFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCU1lJaXl3V3M5MzFjZmZJSzRNVWlrdQpWQ2J3NWpBZkJnTlZIU01FR0RBV2dCVGZyVlJaNlZEUHlLM05ySm8yUTF6cVhnekZFekFOQmdrcWhraUc5dzBCCkFRc0ZBQU9DQVFFQVljTW5YdUNmVEV4RkwvRGRkUUYyT3pFTEx3Y1hHOFNMUjJKRUNXSnI0QXpsd1JUVUhjTDkKVGkxWXczZDI4TWh0d09NekNEYzM5ZURLM2RHZGVlMFozeFhXM0FlVGlRcFUrZnBCbURmQW1MRy9Mdzk1YzZsMQoxeXJiN1dXV2pyRHJDZHVVN1JCQnhJT3RqOERnNG1mQk1uUEhFQS9RY04yR1drc2ZwRVpleElzWWc1b242Z0RaCjdyNWxtSzZHTGZSWW83emlJY0NFak1zc281U1owbmJqYXZJUzZ0TUY0U2NLdWJKQUpVSERscFVMSnNqTWxTRW8KY01nQnVtd2ZXUnlRdU1HOXlnbHF0RDlJZDJxTUJsdU5tdlpwRy80ejdVMkhRVjVKOC9uaDFtckladmZKNDNMRQowOTNYOGdIamQvVE5kRjduVFY5ZWphTTlMZCt1TUVJQzhnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdmlNdUl5S3Q5Q0lpM08vU28zd3h2ZWtjc3NOd3RTbkF3cDR0UFNpeVdBc1QrWnk5Cno4ZWE4MDhyaDhzeUxuMWRjMDBkWUZ2OFZTUU02OUNuNzQzTzlqTVF4QjAyd3cwWS8wcldWcExVdnhZOGpaL1MKd3RIZUc0K1MzODQwMkNIalNwQ2ptRlhRZGxnMlRnN2lJc3g3UHhoeHlmV2lMa2k0QTB2SG9LQjNrT01CM1BscwpiTHMvOGYvY2xyN2t5ZEozazdhZzFIOElzVjUxaFlpZzNJcFI0djFhbm80aTRvbE82akU1c3M1Z0Znb2MybEZDCjZBVm9jR1hHSXNIQnB3Ykh4UStZd1A0Vkhnakl2cFlCZHFZd1E0SUkzOWVUN09hNzd5Q1VKZUhVS1dkRmt0d1AKbkVJYWlVMWVKTlVXdEhhekFtcmUwT3RZZ214Wm9sK21ndURkZndJREFRQUJBb0lCQVFDWkxjSjNyL0t3b2VldwpVczFCeEVaV2x6ejFqNXAzZVFIQVNLcHRnU0hjNkYvWlVydGdiNUNYd0FwenhmSFJubEh4R0FrNG5pSzFmT3VqCjkxKzBFR3pSeitZTCtQVXJRcHdHNEFXNWpXVXo1UGczcUxDbEgycHVqY1puNDdxUy9Rb2VBbFNwMzBpb2J2eWcKK2tDWWhHQXVQc1U5VFZTeE1RaCtMMGpPVVRqQ1VacHU2V0RLSVhiVzFqWU5mdWo5bVBuTlhWMTNvOWR6VFpLMApGSm1pd085VVd0WS81Z0FCVDBqQmxhWHpCSVd1QlI5QnMxcHZTS1BsNnBRVXJMMmVKZjJkR2FaRExSWHVXb0crCkp5RUtkNGRkRFZUR0ZzcFVSc0NCYXBOYTg4RjNnNzBmSS9DekR2MG1Mb1Z4cE10RGRQRlljeDhXb0pzRDNGeUQKRFArMVhZNEJBb0dCQVBiVEhXMmhaNkJSbjVXNitEM1F1U3NINDAzWmJUL09Naysyb3JuK1FqNExYR3o0cVBsSQp4YXFhcElycm9XYU05VzJJUEo1dVJZdnM0NTZUcmhWLzl0cWFxOSsrOWtlREJsalE2Wi9kQkJxSTdRUHUrdTVxCnpiK0RxcHg1TVgwVy8rS1dydFlkVHEyaHBwQlQxK0lvZTJ6STN4MXNiUk4zQ3d0VW9CZ2Z4YkQvQW9HQkFNVTAKbXl1SHl3MGVIY21iN3dvTzF5V05NbVNPb0tQK1FJMEV4U2ZCblF1UWNJNFl1dFdITUFUYWJGTXhjazZzNnhPNwppVVdPRWVLck4zbXdjM3NSN3dGUHRWNGUyQ3lWZlZjKzJIV2xmL3JzTGp6eUgrbzM1ZVdLL2NaaW9uT1Y2YzJWCmFLa01tZlR1OXdnbm5UN20vKzlSK0tPdVhubFBreVg4S3J5NElGT0JBb0dBWkZ1TWtLSGE5NVdZbEpIVUU1WkYKWTlpdU5GNGVqSjN6V1BRQ2tDdHdsYmVhMmZmMUJIN3hXQi9PbldtWFU1SW16R1ZqZUd1UHZZZ1JPTTRGTDFxNwpiVUVNZDBvMjZ2YThZdXAydzNoakRjTDAwKytjZWNwVlkvUk9MNWNiWnlndDNOeTFzL3R3blNxb0JmRUJTMFI0CmdzL2Q0Q0hRNitRd1NtZ2JQQlBYRnRNQ2dZQVErSjRCK1FXNGMwY00rcVp2cnlkRXpBbnlMWFFWcU9QVlB2dlkKbUFqejNkSlI2RDdyOFY1b2pJT1dCVU5aRWZpSkVqS1dFY3ZvUGVQZ1RSY2pHRUFCVk9LKzN0aXJ2Wkd6Mkd5NApjeTI0WW1yNFE3NExZaFFlMVA5Uisxc1BwMjhmaWlRZnFEMzNuamtVTXBTTnZVTjVUUXlneVhqSDU5azZBNkdKCjdDNmNBUUtCZ1FDSEdibTJOdXpEaUcyMEJvcURqVWR1dk9ycmYxV3p6THd0L05TZUFGOGdMNGNGcUY2WDAxOUoKcTFuV0VkSkU5UFlEcW9TQ1I4aWovTmxsbVZHVUdjNzdwLytwUDEzQTNWL1ZwdndETU5aK2FDRDJWQkYvUXBUTworUmdTTHhDbWpaeHVBTVlBd0V1c3d2dW5LNG9jeWNWWjBFSHpVSTZnblJDNkxqVk1IZVQxZ0E9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

Copy the config file to /root/.kube/config

[root@master01 work ]#mkdir ~/.kube -p

[root@master01 work ]#cp kube.config ~/.kube/config

5. Grant the kubernetes certificate permission to access the kubelet API

To create resources with kubectl, you also need authorization: bind the kubernetes user to a clusterrole.

The kubernetes user is the CN defined in ca-csr.json.
[root@master01 work ]#kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
clusterrolebinding.rbac.authorization.k8s.io/kube-apiserver:kubelet-apis created
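
A quick way to spot-check the binding is kubectl auth can-i with impersonation; nodes/proxy is one of the subresources that system:kubelet-api-admin covers (a verification sketch, not part of the original steps):

kubectl auth can-i get nodes/proxy --as=kubernetes    # should print: yes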

#Check the status of the cluster components

[root@master01 work ]#kubectl cluster-info

Kubernetes control plane is running at https://10.10.0.10:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

[root@master01 work ]#kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
etcd-1               Healthy     {"health":"true"}                                                                             
etcd-0               Healthy     {"health":"true"}                                                                             
etcd-2               Healthy     {"health":"true"}   
[root@master01 work ]#kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.255.0.1   <none>        443/TCP   147m

#Sync the kubectl config file to the other master nodes, so that if one machine fails the others can still operate the cluster

[root@master02 ~ ]#mkdir /root/.kube/

[root@master03 kubernetes ]#mkdir /root/.kube/

[root@master01 work ]#rsync -vaz /root/.kube/config master02:/root/.kube/

sending incremental file list

config

sent 4,193 bytes received 35 bytes 8,456.00 bytes/sec

total size is 6,234 speedup is 1.47

[root@master01 work ]#rsync -vaz /root/.kube/config master03:/root/.kube/

sending incremental file list

config

sent 4,193 bytes received 35 bytes 2,818.67 bytes/sec

total size is 6,234 speedup is 1.47

#Configure kubectl sub-command completion
[root@master01 work ]#yum install -y bash-completion

[root@master01 work ]#source /usr/share/bash-completion/bash_completion
[root@master01 work ]#source <(kubectl completion bash)
[root@master01 work ]#kubectl completion bash > ~/.kube/completion.bash.inc
[root@master01 work ]#source '/root/.kube/completion.bash.inc'
[root@master01 work ]#source $HOME/.bash_profile

Pressing Tab twice lists every available completion
[root@master01 work ]#kubectl get 
apiservices.apiregistration.k8s.io                            namespaces
certificatesigningrequests.certificates.k8s.io                networkpolicies.networking.k8s.io
clusterrolebindings.rbac.authorization.k8s.io                 nodes
clusterroles.rbac.authorization.k8s.io                        persistentvolumeclaims
componentstatuses                                             persistentvolumes
configmaps                                                    poddisruptionbudgets.policy
controllerrevisions.apps                                      pods
cronjobs.batch                                                podsecuritypolicies.policy
csidrivers.storage.k8s.io                                     podtemplates
csinodes.storage.k8s.io                                       priorityclasses.scheduling.k8s.io
customresourcedefinitions.apiextensions.k8s.io                prioritylevelconfigurations.flowcontrol.apiserver.k8s.io
daemonsets.apps                                               replicasets.apps
deployments.apps                                              replicationcontrollers
endpoints                                                     resourcequotas
endpointslices.discovery.k8s.io                               rolebindings.rbac.authorization.k8s.io
events                                                        roles.rbac.authorization.k8s.io
events.events.k8s.io                                          runtimeclasses.node.k8s.io
flowschemas.flowcontrol.apiserver.k8s.io                      secrets
horizontalpodautoscalers.autoscaling                          serviceaccounts
ingressclasses.networking.k8s.io                              services
ingresses.extensions                                          statefulsets.apps
ingresses.networking.k8s.io                                   storageclasses.storage.k8s.io
jobs.batch                                                    storageversions.internal.apiserver.k8s.io
leases.coordination.k8s.io                                    validatingwebhookconfigurations.admissionregistration.k8s.io
limitranges                                                   volumeattachments.storage.k8s.io
mutatingwebhookconfigurations.admissionregistration.k8s.io  

Official kubectl cheat sheet (reference for installing command completion):

https://kubernetes.io/zh/docs/reference/kubectl/cheatsheet/
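
To make completion survive new shells, the cheat sheet's recommendation is to source it from ~/.bashrc; a minimal example:

echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc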

5.4 Deploy the kube-controller-manager component

#Create the CSR request file
[root@master01 work ]#cat kube-controller-manager-csr.json 
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "10.10.0.10",
      "10.10.0.11",
      "10.10.0.12",
      "10.10.0.100"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "Hubei",
        "L": "Wuhan",
        "O": "system:kube-controller-manager",
        "OU": "system"
      }
    ]
}

Note: the hosts list contains the IPs of all kube-controller-manager nodes;

CN is system:kube-controller-manager and O is system:kube-controller-manager;

the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.

#Generate the certificate
[root@master01 work ]#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
2022/10/26 14:27:54 [INFO] generate received request
2022/10/26 14:27:54 [INFO] received CSR
2022/10/26 14:27:54 [INFO] generating key: rsa-2048
2022/10/26 14:27:54 [INFO] encoded CSR
2022/10/26 14:27:54 [INFO] signed certificate with serial number 721895207641224154550279977497269955483545721536
2022/10/26 14:27:54 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

#Create the kubeconfig for kube-controller-manager

1. Set the cluster parameters
[root@master01 work ]#kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.0.10:6443 --kubeconfig=kube-controller-manager.kubeconfig
Cluster "kubernetes" set.

[root@master01 work ]#cat kube-controller-manager.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.10.0.10:6443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

2. Set the client authentication parameters

The user system:kube-controller-manager is the value set as CN in kube-controller-manager-csr.json.

[root@master01 work ]#kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig

User "system:kube-controller-manager" set.

[root@master01 work ]#cat kube-controller-manager.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.10.0.10:6443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVLakNDQXhLZ0F3SUJBZ0lVZm5MbWtpdDdmZEJuY2lmdE5pQ01FdjZ0V3NBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkyTURZeU16QXdXaGNOTXpJeE1ESXpNRFl5TXpBd1dqQ0JrREVMTUFrR0ExVUUKQmhNQ1EwNHhEakFNQmdOVkJBZ1RCVWgxWW1WcE1RNHdEQVlEVlFRSEV3VlhkV2hoYmpFbk1DVUdBMVVFQ2hNZQpjM2x6ZEdWdE9tdDFZbVV0WTI5dWRISnZiR3hsY2kxdFlXNWhaMlZ5TVE4d0RRWURWUVFMRXdaemVYTjBaVzB4Ckp6QWxCZ05WQkFNVEhuTjVjM1JsYlRwcmRXSmxMV052Ym5SeWIyeHNaWEl0YldGdVlXZGxjakNDQVNJd0RRWUoKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS01RQWFjZElxUEZ3K29LckR1aTdQcmxnaThpT1hKMworTTBTZUNDaFlMZjd5dU9VWlRZV0wyZk81UmZMQ1RRODFCWXVQOUxrY0k4aFRIRHZMZkQ3ZmdWcmZIOVMyaUZ3CllHSGNCc2ZjNFZGVVg5YUlDU0VlM0RRY3NDWG55VnJEVkQzUVUvY3VZQVRuZEMxdFMvZU1pNHFEVFVKTU1maFUKQnR1L3NwUmY4bGpkOWVtdk1ZQzZkQ3dIT1FsZmF0WThLcHJxZjdYTUtKSWJRdnhsSEhlL3RaYW1RWGo0Si9tagpqNHRCYW1uckQrNEp2dXpDY3ZOaEhtc09PbXA5Mkkxd3lNNlNWQUpjM2ZPTFRrakFGZGQ3MnJRUHo4elhhK3NkClpzVWR3SXNZbGp5bW5NcUdaWlZuU1dYL2RxQ2VxQzRxcnZybmo4dmN2NlE4OVVBTjNSU2pmdTBDQXdFQUFhT0IKcVRDQnBqQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUdDQ3NHQVFVRgpCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGTktGSjhqQXIvanQyUmN4MWNNbStFNUJVRTNsCk1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRFhPcGVETVVUTUNjR0ExVWRFUVFnTUI2SEJIOEEKQUFHSEJBb0tBQXFIQkFvS0FBdUhCQW9LQUF5SEJBb0tBR1F3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUFkcQo4cE4rOXdKdUJFRVpDT2FEN2MrZ0tMK3ZtL2lJTzdQL3ZlbFgrNzd0cVgrTTBqaUwwa2IvTmczRWNuVUhieDVECi9haUJLZU1rRVltUG5rc2hHWng1eGhmN0oraFlkOWsrNm5wOGRmUk80bHlpeENCMWNYODBoMFhZbjBmZ2N0eE8Kd0RCaVhNcUI2OEo4WnkvYVBoSVY0OGpCaUZZZlJMcUpxZWZEMEx2NFh2RVpISkt6bVpZdnlWV3dYRWFzRFI0bQo4cGRSWEtOakRXTXkyTHI1WGNRLzNKaFRCZ2g4cEV4czBFaUQwTUxDUmNwaGdjK3p0UndXZmR0T2FQR0tjNUpDCis3aEVjQmZJZ1MwNTE5bmt5Yjg0dEJpWmRhK21PQVB4RTNqVTRLK01hRWxZK3pFNHR6a1pMOGJkbEV0Vm9Jb2YKSVF2bEl6akJxSzE4VG9tUmNmQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBb3hBQnB4MGlvOFhENmdxc082THMrdVdDTHlJNWNuZjR6Uko0SUtGZ3Qvdks0NVJsCk5oWXZaODdsRjhzSk5EelVGaTQvMHVSd2p5Rk1jTzh0OFB0K0JXdDhmMUxhSVhCZ1lkd0d4OXpoVVZSZjFvZ0oKSVI3Y05CeXdKZWZKV3NOVVBkQlQ5eTVnQk9kMExXMUw5NHlMaW9OTlFrd3grRlFHMjcreWxGL3lXTjMxNmE4eApnTHAwTEFjNUNWOXExandxbXVwL3Rjd29raHRDL0dVY2Q3KzFscVpCZVBnbithT1BpMEZxYWVzUDdnbSs3TUp5CjgyRWVhdzQ2YW4zWWpYREl6cEpVQWx6ZDg0dE9TTUFWMTN2YXRBL1B6TmRyNngxbXhSM0FpeGlXUEthY3lvWmwKbFdkSlpmOTJvSjZvTGlxdSt1ZVB5OXkvcER6MVFBM2RGS04rN1FJREFRQUJBb0lCQUc4WFU1anZ2NDdHQ0lCbAp2d3R1SjNlVGJ3ci9qUlhRYUlBR0tqTkkzcVRaOVZMdzRiZGtpKzEwUmgzY3BLdWpHWGIzRVdKelljQVJsb3VHClY4MUsrWU5seEU3V09tZjNzS0piRFgrU215c1dpYWlWeTJwMkpOMllBZVlCTU93V0VVbC9xZ1RING9EVTB4Q3oKMnNLUFRPNFVJRW1mc1plV1g0bk00elEwM2QzdVc4Qi90R3BtcHhvNk1Da2FPNW9yVVRyUFYvUHpTdnA5R3l1VwpzUnE3b3J0QjdNZnRjTXlWdUUzWWVXYlJYbWRFSlk3cWpBWW9qVXVjdEJzNnVlSHhUbGRKNWNWdzQ2aXBPRzE4CnhBZE5xLzNNRVl2b21TbzhJQXJUWTBIc0o4djlXQ0VhUnF5ZVAwTHB5eDdIamxDb0ZhRnhTUTdxTCt0VjlPUnEKZmxIVnlXRUNnWUVBMGdlK3h0amtMOXdFVkpkZHBrektFWGJTc09TaHhya0V6blhNY0ZTNThkQ0hJS0tYalZIagpUaDV4K2EzWkpucTduRDJNUVRZeWcxYmNERUhQMW0vNGdrQnRJS0ZvOFgvcjdEM1c2OHFRNEZJTlZjRnVOZk9vCmFmc3RNQkF4ZnJQaGVyYlRVdUtJTTRvOUxzT3lvN1lHcTBFTTcxWmRYZDg5QVlSVUxibElxRGtDZ1lFQXhzQ1oKN210TG5wZzhaZlJZeWRhVEh4dmtBWTFWNVI3S2JmQ0pmMDhhb0c2N1c0UWF3TXVNZHVrZ3M1QmVwTUNidkRVTQovNmJnQUVncjFENnJTTDVlL3BiYjZkQ0diYVRhSVFsZmdqNlZpakpDZzNLazBTZEw2L2Q1UEdsL3N3ZHB5azljCmMyL1NzaHdoUFpjK1Evd3FqNGw5MGRkblFBWTBBWmhVWExSNHhGVUNnWUVBdGZncDdVU0xaMy9iYktMOGE1SUsKWE5rek1EbldoRU5YQzczNkU3VUVxYUwvQUdKK3BkMDE4RC9taGVsK3c1MEFvUnllUVAzQkJCUWtjS1l3ZVZ6bgoxWW9XUW5nMllVNXd6R3pEb2VVT1lwd1VtNkVNYU1nanVUYjY3cktJLzNyQU43N2hGdVhZRmJlR3pOYVhGc29sCnV3aVFPV2o5V2RDSm5aL1dBd3VPRE5rQ2dZQTdNUlVtK25GMDlDWFl2MkxLQ2N1YkVqVmZlUFpCM0YreFNsZkkKd0loUGkycmxJSHpQT2svRkFqMG8vVEFTcFFJOGxSZ2Y4MVQzQUlkOUdJVHVqek8vWXJKditoaHZBdytya3gwTQpyeExlSzRXL25COFY0endyTkhLNDJUcWMyUEphdkRQdWRUa3NybEFBQmRFWGNqeENyMUgzY3MxZk5mbTdGK0RZCkV5OThXUUtCZ0JycnFMZjc4WE91SWN5K0RnYWkwQ3FKdmg3RU9pMFBPN3BDWGtvUHFQM2Q4VlUrcFFyV0QyTG8KQndOaVh6dzVUQ0Y5UXQ3blJxUW1BRXJjdmJBcHg4UEk5TE9HZ3hYbmpPTHVNWmRWQ2h6Mjg3T0tEbitTL0JhLwppQVJjZ0R6YXlTUllUUCs5Z3lBQm4rNGV0TS9PWFFVYWw0RisyYU5wOG1OV0d2TVhYdCsxCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

3. Set the context parameters
[root@master01 work ]#kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Context "system:kube-controller-manager" created.

4. Set the current context
[root@master01 work ]#kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Switched to context "system:kube-controller-manager".
[root@master01 work ]#cat kube-controller-manager.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.10.0.10:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager
current-context: system:kube-controller-manager
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVLakNDQXhLZ0F3SUJBZ0lVZm5MbWtpdDdmZEJuY2lmdE5pQ01FdjZ0V3NBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkyTURZeU16QXdXaGNOTXpJeE1ESXpNRFl5TXpBd1dqQ0JrREVMTUFrR0ExVUUKQmhNQ1EwNHhEakFNQmdOVkJBZ1RCVWgxWW1WcE1RNHdEQVlEVlFRSEV3VlhkV2hoYmpFbk1DVUdBMVVFQ2hNZQpjM2x6ZEdWdE9tdDFZbVV0WTI5dWRISnZiR3hsY2kxdFlXNWhaMlZ5TVE4d0RRWURWUVFMRXdaemVYTjBaVzB4Ckp6QWxCZ05WQkFNVEhuTjVjM1JsYlRwcmRXSmxMV052Ym5SeWIyeHNaWEl0YldGdVlXZGxjakNDQVNJd0RRWUoKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS01RQWFjZElxUEZ3K29LckR1aTdQcmxnaThpT1hKMworTTBTZUNDaFlMZjd5dU9VWlRZV0wyZk81UmZMQ1RRODFCWXVQOUxrY0k4aFRIRHZMZkQ3ZmdWcmZIOVMyaUZ3CllHSGNCc2ZjNFZGVVg5YUlDU0VlM0RRY3NDWG55VnJEVkQzUVUvY3VZQVRuZEMxdFMvZU1pNHFEVFVKTU1maFUKQnR1L3NwUmY4bGpkOWVtdk1ZQzZkQ3dIT1FsZmF0WThLcHJxZjdYTUtKSWJRdnhsSEhlL3RaYW1RWGo0Si9tagpqNHRCYW1uckQrNEp2dXpDY3ZOaEhtc09PbXA5Mkkxd3lNNlNWQUpjM2ZPTFRrakFGZGQ3MnJRUHo4elhhK3NkClpzVWR3SXNZbGp5bW5NcUdaWlZuU1dYL2RxQ2VxQzRxcnZybmo4dmN2NlE4OVVBTjNSU2pmdTBDQXdFQUFhT0IKcVRDQnBqQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUdDQ3NHQVFVRgpCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGTktGSjhqQXIvanQyUmN4MWNNbStFNUJVRTNsCk1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRFhPcGVETVVUTUNjR0ExVWRFUVFnTUI2SEJIOEEKQUFHSEJBb0tBQXFIQkFvS0FBdUhCQW9LQUF5SEJBb0tBR1F3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUFkcQo4cE4rOXdKdUJFRVpDT2FEN2MrZ0tMK3ZtL2lJTzdQL3ZlbFgrNzd0cVgrTTBqaUwwa2IvTmczRWNuVUhieDVECi9haUJLZU1rRVltUG5rc2hHWng1eGhmN0oraFlkOWsrNm5wOGRmUk80bHlpeENCMWNYODBoMFhZbjBmZ2N0eE8Kd0RCaVhNcUI2OEo4WnkvYVBoSVY0OGpCaUZZZlJMcUpxZWZEMEx2NFh2RVpISkt6bVpZdnlWV3dYRWFzRFI0bQo4cGRSWEtOakRXTXkyTHI1WGNRLzNKaFRCZ2g4cEV4czBFaUQwTUxDUmNwaGdjK3p0UndXZmR0T2FQR0tjNUpDCis3aEVjQmZJZ1MwNTE5bmt5Yjg0dEJpWmRhK21PQVB4RTNqVTRLK01hRWxZK3pFNHR6a1pMOGJkbEV0Vm9Jb2YKSVF2bEl6akJxSzE4VG9tUmNmQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBb3hBQnB4MGlvOFhENmdxc082THMrdVdDTHlJNWNuZjR6Uko0SUtGZ3Qvdks0NVJsCk5oWXZaODdsRjhzSk5EelVGaTQvMHVSd2p5Rk1jTzh0OFB0K0JXdDhmMUxhSVhCZ1lkd0d4OXpoVVZSZjFvZ0oKSVI3Y05CeXdKZWZKV3NOVVBkQlQ5eTVnQk9kMExXMUw5NHlMaW9OTlFrd3grRlFHMjcreWxGL3lXTjMxNmE4eApnTHAwTEFjNUNWOXExandxbXVwL3Rjd29raHRDL0dVY2Q3KzFscVpCZVBnbithT1BpMEZxYWVzUDdnbSs3TUp5CjgyRWVhdzQ2YW4zWWpYREl6cEpVQWx6ZDg0dE9TTUFWMTN2YXRBL1B6TmRyNngxbXhSM0FpeGlXUEthY3lvWmwKbFdkSlpmOTJvSjZvTGlxdSt1ZVB5OXkvcER6MVFBM2RGS04rN1FJREFRQUJBb0lCQUc4WFU1anZ2NDdHQ0lCbAp2d3R1SjNlVGJ3ci9qUlhRYUlBR0tqTkkzcVRaOVZMdzRiZGtpKzEwUmgzY3BLdWpHWGIzRVdKelljQVJsb3VHClY4MUsrWU5seEU3V09tZjNzS0piRFgrU215c1dpYWlWeTJwMkpOMllBZVlCTU93V0VVbC9xZ1RING9EVTB4Q3oKMnNLUFRPNFVJRW1mc1plV1g0bk00elEwM2QzdVc4Qi90R3BtcHhvNk1Da2FPNW9yVVRyUFYvUHpTdnA5R3l1VwpzUnE3b3J0QjdNZnRjTXlWdUUzWWVXYlJYbWRFSlk3cWpBWW9qVXVjdEJzNnVlSHhUbGRKNWNWdzQ2aXBPRzE4CnhBZE5xLzNNRVl2b21TbzhJQXJUWTBIc0o4djlXQ0VhUnF5ZVAwTHB5eDdIamxDb0ZhRnhTUTdxTCt0VjlPUnEKZmxIVnlXRUNnWUVBMGdlK3h0amtMOXdFVkpkZHBrektFWGJTc09TaHhya0V6blhNY0ZTNThkQ0hJS0tYalZIagpUaDV4K2EzWkpucTduRDJNUVRZeWcxYmNERUhQMW0vNGdrQnRJS0ZvOFgvcjdEM1c2OHFRNEZJTlZjRnVOZk9vCmFmc3RNQkF4ZnJQaGVyYlRVdUtJTTRvOUxzT3lvN1lHcTBFTTcxWmRYZDg5QVlSVUxibElxRGtDZ1lFQXhzQ1oKN210TG5wZzhaZlJZeWRhVEh4dmtBWTFWNVI3S2JmQ0pmMDhhb0c2N1c0UWF3TXVNZHVrZ3M1QmVwTUNidkRVTQovNmJnQUVncjFENnJTTDVlL3BiYjZkQ0diYVRhSVFsZmdqNlZpakpDZzNLazBTZEw2L2Q1UEdsL3N3ZHB5azljCmMyL1NzaHdoUFpjK1Evd3FqNGw5MGRkblFBWTBBWmhVWExSNHhGVUNnWUVBdGZncDdVU0xaMy9iYktMOGE1SUsKWE5rek1EbldoRU5YQzczNkU3VUVxYUwvQUdKK3BkMDE4RC9taGVsK3c1MEFvUnllUVAzQkJCUWtjS1l3ZVZ6bgoxWW9XUW5nMllVNXd6R3pEb2VVT1lwd1VtNkVNYU1nanVUYjY3cktJLzNyQU43N2hGdVhZRmJlR3pOYVhGc29sCnV3aVFPV2o5V2RDSm5aL1dBd3VPRE5rQ2dZQTdNUlVtK25GMDlDWFl2MkxLQ2N1YkVqVmZlUFpCM0YreFNsZkkKd0loUGkycmxJSHpQT2svRkFqMG8vVEFTcFFJOGxSZ2Y4MVQzQUlkOUdJVHVqek8vWXJKditoaHZBdytya3gwTQpyeExlSzRXL25COFY0endyTkhLNDJUcWMyUEphdkRQdWRUa3NybEFBQmRFWGNqeENyMUgzY3MxZk5mbTdGK0RZCkV5OThXUUtCZ0JycnFMZjc4WE91SWN5K0RnYWkwQ3FKdmg3RU9pMFBPN3BDWGtvUHFQM2Q4VlUrcFFyV0QyTG8KQndOaVh6dzVUQ0Y5UXQ3blJxUW1BRXJjdmJBcHg4UEk5TE9HZ3hYbmpPTHVNWmRWQ2h6Mjg3T0tEbitTL0JhLwppQVJjZ0R6YXlTUllUUCs5Z3lBQm4rNGV0TS9PWFFVYWw0RisyYU5wOG1OV0d2TVhYdCsxCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

#Create the configuration file kube-controller-manager.conf
[root@master01 work ]#cat kube-controller-manager.conf 
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \
  --secure-port=10252 \
  --bind-address=127.0.0.1 \  #binding to 127.0.0.1 is the recommended setting; the port is not reachable from outside
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.255.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.0.0.0/16 \
  --experimental-cluster-signing-duration=87600h \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"

#Create the service startup file
[root@master01 work ]#cat kube-controller-manager.service 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

#Start the service
[root@master01 work ]#cp kube-controller-manager*.pem /etc/kubernetes/ssl/
[root@master01 work ]#cp kube-controller-manager.kubeconfig /etc/kubernetes/
[root@master01 work ]#cp kube-controller-manager.conf /etc/kubernetes/
[root@master01 work ]#cp kube-controller-manager.service /usr/lib/systemd/system/
[root@master01 work ]#
[root@master01 work ]#rsync -vaz kube-controller-manager*.pem master02:/etc/kubernetes/ssl/
sending incremental file list
kube-controller-manager-key.pem
kube-controller-manager.pem

sent 2,498 bytes  received 54 bytes  5,104.00 bytes/sec
total size is 3,180  speedup is 1.25
[root@master01 work ]#rsync -vaz kube-controller-manager*.pem master03:/etc/kubernetes/ssl/
sending incremental file list
kube-controller-manager-key.pem
kube-controller-manager.pem

sent 2,498 bytes  received 54 bytes  1,701.33 bytes/sec
total size is 3,180  speedup is 1.25
[root@master01 work ]#rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf master02:/etc/kubernetes/
sending incremental file list
kube-controller-manager.conf
kube-controller-manager.kubeconfig

sent 4,869 bytes  received 54 bytes  9,846.00 bytes/sec
total size is 7,575  speedup is 1.54
[root@master01 work ]#rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf master03:/etc/kubernetes/
sending incremental file list
kube-controller-manager.conf
kube-controller-manager.kubeconfig

sent 4,869 bytes  received 54 bytes  3,282.00 bytes/sec
total size is 7,575  speedup is 1.54
[root@master01 work ]#rsync -vaz kube-controller-manager.service master02:/usr/lib/systemd/system/
sending incremental file list
kube-controller-manager.service

sent 328 bytes  received 35 bytes  726.00 bytes/sec
total size is 325  speedup is 0.90
[root@master01 work ]#rsync -vaz kube-controller-manager.service master03:/usr/lib/systemd/system/
sending incremental file list
kube-controller-manager.service

sent 328 bytes  received 35 bytes  242.00 bytes/sec
total size is 325  speedup is 0.90
[root@master01 work ]#systemctl daemon-reload
[root@master01 work ]#
[root@master01 work ]#systemctl enable kube-controller-manager.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

[root@master02 ~ ]#systemctl daemon-reload
[root@master02 ~ ]#systemctl enable kube-controller-manager.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

[root@master03 kubernetes ]#systemctl daemon-reload
[root@master03 kubernetes ]#systemctl enable kube-controller-manager.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

The port being open means the service is running normally
[root@master02 kubernetes ]#ss -lanptu|grep 10252
tcp    LISTEN     0      16384  127.0.0.1:10252                 *:*                   users:(("kube-controller",pid=61345,fd=8))
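
Beyond checking the port, the health endpoint can be probed directly. With --secure-port=10252 and --bind-address=127.0.0.1 as configured above, the probe is HTTPS on localhost; /healthz is normally served without authentication, so this is a reasonable sanity check (a sketch, not output from the original setup):

curl -k https://127.0.0.1:10252/healthz    # expected: ok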

5.5 Deploy the kube-scheduler component

#Create the CSR request
[root@master01 work ]#cat kube-scheduler-csr.json 
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "10.10.0.10",
      "10.10.0.11",
      "10.10.0.12",
      "10.10.0.100"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Hubei",
        "L": "Wuhan",
        "O": "system:kube-scheduler",
        "OU": "system"
      }
    ]
}

Note: the hosts list contains the IPs of all kube-scheduler nodes;

CN is system:kube-scheduler and O is system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler

grants kube-scheduler the permissions it needs.

#Generate the certificate
[root@master01 work ]#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
2022/10/26 14:45:07 [INFO] generate received request
2022/10/26 14:45:07 [INFO] received CSR
2022/10/26 14:45:07 [INFO] generating key: rsa-2048
2022/10/26 14:45:07 [INFO] encoded CSR
2022/10/26 14:45:07 [INFO] signed certificate with serial number 635930628159443813223981479189831176983910270655
2022/10/26 14:45:07 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

#Create the kubeconfig for kube-scheduler

1. Set the cluster parameters
[root@master01 work ]#kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.0.10:6443 --kubeconfig=kube-scheduler.kubeconfig
Cluster "kubernetes" set.

2. Set the client authentication parameters

The user system:kube-scheduler is the user set as CN in kube-scheduler-csr.json.
[root@master01 work ]#kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
User "system:kube-scheduler" set.

3. Set the context parameters
[root@master01 work ]#kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Context "system:kube-scheduler" created.

4. Set the current context
[root@master01 work ]#kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Switched to context "system:kube-scheduler".

#Create the configuration file kube-scheduler.conf
[root@master01 work ]#cat kube-scheduler.conf 
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"

#Create the service startup file
[root@master01 work ]#cat kube-scheduler.service 
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5
 
[Install]
WantedBy=multi-user.target

#Start the service
[root@master01 work ]#cp kube-scheduler*.pem /etc/kubernetes/ssl/
[root@master01 work ]#cp kube-scheduler.kubeconfig /etc/kubernetes/
[root@master01 work ]#cp kube-scheduler.conf /etc/kubernetes/
[root@master01 work ]#cp kube-scheduler.service /usr/lib/systemd/system/
[root@master01 work ]#rsync -vaz kube-scheduler*.pem master02:/etc/kubernetes/ssl/
sending incremental file list
kube-scheduler-key.pem
kube-scheduler.pem

sent 2,531 bytes  received 54 bytes  1,723.33 bytes/sec
total size is 3,159  speedup is 1.22
[root@master01 work ]#rsync -vaz kube-scheduler*.pem master03:/etc/kubernetes/ssl/
sending incremental file list
kube-scheduler-key.pem
kube-scheduler.pem

sent 2,531 bytes  received 54 bytes  1,723.33 bytes/sec
total size is 3,159  speedup is 1.22
[root@master01 work ]#rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf master02:/etc/kubernetes/
sending incremental file list
kube-scheduler.conf
kube-scheduler.kubeconfig

sent 4,495 bytes  received 54 bytes  9,098.00 bytes/sec
total size is 6,617  speedup is 1.45
[root@master01 work ]#rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf master03:/etc/kubernetes/
sending incremental file list
kube-scheduler.conf
kube-scheduler.kubeconfig

sent 4,495 bytes  received 54 bytes  9,098.00 bytes/sec
total size is 6,617  speedup is 1.45
[root@master01 work ]#rsync -vaz kube-scheduler.service master02:/usr/lib/systemd/system/
sending incremental file list
kube-scheduler.service

sent 301 bytes  received 35 bytes  672.00 bytes/sec
total size is 293  speedup is 0.87
[root@master01 work ]#rsync -vaz kube-scheduler.service master03:/usr/lib/systemd/system/
sending incremental file list
kube-scheduler.service

sent 301 bytes  received 35 bytes  224.00 bytes/sec
total size is 293  speedup is 0.87


[root@master01 work ]#systemctl daemon-reload
[root@master01 work ]#systemctl enable kube-scheduler.service --now


[root@master02 kubernetes ]#systemctl daemon-reload
[root@master02 kubernetes ]#systemctl enable kube-scheduler.service --now

[root@master03 kubernetes ]#systemctl daemon-reload
[root@master03 kubernetes ]#systemctl enable kube-scheduler.service --now

The port being open means the service is running normally
[root@master01 kubernetes ]#ss -lanptu|grep 10251
tcp    LISTEN     0      16384  127.0.0.1:10251                 *:*                   users:(("kube-scheduler",pid=823,fd=8))

A kubeadm install also binds these components to 127.0.0.1.
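
Because the scheduler above still exposes its insecure health/metrics port on 127.0.0.1:10251, a local curl also works as a quick probe (a sketch):

curl http://127.0.0.1:10251/healthz    # expected: ok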

5.6 Import the offline image archive

#Upload pause-cordns.tar.gz to node01 and load it manually
[root@node01 ~ ]#docker load -i pause-cordns.tar.gz 
225df95e717c: Loading layer [==================================================>]  336.4kB/336.4kB
96d17b0b58a7: Loading layer [==================================================>]  45.02MB/45.02MB
Loaded image: k8s.gcr.io/coredns:1.7.0
ba0dae6243cc: Loading layer [==================================================>]  684.5kB/684.5kB
Loaded image: k8s.gcr.io/pause:3.2
[root@node01 ~ ]#
[root@node01 ~ ]#
[root@node01 ~ ]#docker images 
REPOSITORY           TAG       IMAGE ID       CREATED       SIZE
k8s.gcr.io/coredns   1.7.0     bfe3a36ebd25   2 years ago   45.2MB
k8s.gcr.io/pause     3.2       80d28bedfe5d   2 years ago   683kB

5.7 Deploy the kubelet component

kubelet: the kubelet on each node periodically calls the API Server's REST interface to report its own status;

the API Server receives this information and updates the node status in etcd.

The kubelet also watches Pod information through the API Server and manages the Pods on its node, e.g. creating, deleting and updating them.

In this setup the control-plane nodes do not schedule pods; pods are only scheduled to worker nodes, so kubelet only needs to be deployed and started on the worker nodes.

Perform the following steps on master01.

Create kubelet-bootstrap.kubeconfig

[root@master01 kubernetes ]#cd /data/work/

[root@master01 work ]#BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)

[root@master01 work ]#rm -r kubelet-bootstrap.kubeconfig

rm: cannot remove 'kubelet-bootstrap.kubeconfig': No such file or directory
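
For reference, the awk command above extracts the first comma-separated field (the token) from token.csv, which was generated earlier for the apiserver. The file follows the token,user,uid,"group" layout; the values below are illustrative placeholders only, not the real token of this cluster:

# /etc/kubernetes/token.csv (placeholder example)
0123456789abcdef0123456789abcdef,kubelet-bootstrap,10001,"system:kubelet-bootstrap"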

1. Set the cluster parameters

[root@master01 work ]#kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.0.10:6443 --kubeconfig=kubelet-bootstrap.kubeconfig

Cluster "kubernetes" set.

2. Set the client authentication parameters

[root@master01 work ]#kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig

User "kubelet-bootstrap" set.

3. Set the context parameters

[root@master01 work ]#kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig

Context "default" created.

4. Set the current context

[root@master01 work ]#kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig

Switched to context "default".

[root@master01 work ]#cat kubelet-bootstrap.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.10.0.10:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user: {}

Grant authorization:
[root@master01 work ]#kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

#Create the configuration file kubelet.json

"cgroupDriver": "systemd" must match the driver Docker is using.

Replace address with your own node01 IP address.
[root@master01 work ]#cat kubelet.json 
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "10.10.0.14",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}
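
Before starting kubelet it is worth confirming that Docker really uses the systemd cgroup driver that "cgroupDriver" above expects; a mismatch will keep kubelet from starting. A quick check on node01:

docker info | grep -i 'cgroup driver'    # expected: Cgroup Driver: systemd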
[root@master01 work ]#cat kubelet.service 
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \  #used on the first start only; afterwards --kubeconfig is used
  --cert-dir=/etc/kubernetes/ssl \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \    #generated automatically
  --config=/etc/kubernetes/kubelet.json \
  --network-plugin=cni \
  --pod-infra-container-image=k8s.gcr.io/pause:3.2 \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
 
[Install]
WantedBy=multi-user.target

#Note: --hostname-override: display name, unique within the cluster

--network-plugin: enable CNI

--kubeconfig: empty path, generated automatically; used later to connect to the apiserver

--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver

--config: the configuration parameter file

--cert-dir: directory where kubelet certificates are generated

--pod-infra-container-image: image of the container that manages the Pod network namespace

(If you copy the unit file above, drop the inline # comments after the trailing backslashes; systemd does not treat them as comments and they would break the ExecStart line.)

#Note: change address in kubelet.json to each node's own IP address and start the service on every worker node
[root@node01 ~ ]#mkdir /etc/kubernetes/ssl -p

[root@master01 work ]#scp kubelet-bootstrap.kubeconfig kubelet.json node01:/etc/kubernetes/
kubelet-bootstrap.kubeconfig                                                                                           100% 2107     3.7MB/s   00:00    
kubelet.json

[root@master01 work ]#scp ca.pem node01:/etc/kubernetes/ssl/

ca.pem

[root@master01 work ]#scp kubelet.service node01:/usr/lib/systemd/system/

kubelet.service

#Start the kubelet service

It is best to delete /var/lib/kubelet first; if it contains data from a previous install there will be conflicts.

[root@node01 ~ ]#mkdir /var/lib/kubelet

[root@node01 ~ ]#mkdir /var/log/kubernetes

[root@node01 system ]#systemctl daemon-reload 
[root@node01 system ]#
[root@node01 system ]#
[root@node01 system ]#
[root@node01 system ]#systemctl enable kubelet.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

After confirming that the kubelet service started successfully, go to master01 and approve the bootstrap request.

Running the following command on a master node shows that a worker node has sent a CSR:
[root@master01 work ]#kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-auJz6SMLC0bu-hO2MXiibM-syMll0YQHZ1kAXyo3gaM   32s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

Approve the request:
[root@master01 work ]#kubectl certificate approve node-csr-auJz6SMLC0bu-hO2MXiibM-syMll0YQHZ1kAXyo3gaM
certificatesigningrequest.certificates.k8s.io/node-csr-auJz6SMLC0bu-hO2MXiibM-syMll0YQHZ1kAXyo3gaM approved
[root@master01 work ]#kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-auJz6SMLC0bu-hO2MXiibM-syMll0YQHZ1kAXyo3gaM   11m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
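
With a single worker node the one-off approve above is enough; if several nodes bootstrap at the same time, every pending CSR can be approved in one pass (blind approval like this is only appropriate in a lab):

kubectl get csr -o name | xargs kubectl certificate approve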
[root@master01 work ]#kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
node01   NotReady   <none>   34s   v1.20.7

#Note: a STATUS of NotReady means the network plugin has not been installed yet

With a binary install, kubectl get nodes does not show the control-plane nodes, only the worker nodes.

By default the master nodes have no kubelet deployed and cannot host scheduled workloads.

This is how it is usually done in production. If you want the masters to be schedulable,

deploy the kubelet-related services on them just like on a worker node; they will then show up in kubectl get nodes.

The node dynamically generates some certificates:
[root@node01 ~ ]#cd /etc/kubernetes/
[root@node01 kubernetes ]#ll
total 12
-rw------- 1 root root 2148 Oct 27 08:44 kubelet-bootstrap.kubeconfig
-rw-r--r-- 1 root root  800 Oct 27 08:44 kubelet.json
-rw------- 1 root root 2277 Oct 27 08:49 kubelet.kubeconfig
drwxr-xr-x 2 root root  138 Oct 27 08:49 ssl
[root@node01 kubernetes ]#cd ssl/
[root@node01 ssl ]#ll
total 16
-rw-r--r-- 1 root root 1346 Oct 27 08:44 ca.pem
-rw------- 1 root root 1212 Oct 27 08:49 kubelet-client-2022-10-27-08-49-42.pem
lrwxrwxrwx 1 root root   58 Oct 27 08:49 kubelet-client-current.pem -> /etc/kubernetes/ssl/kubelet-client-2022-10-27-08-49-42.pem
-rw-r--r-- 1 root root 2237 Oct 27 08:47 kubelet.crt
-rw------- 1 root root 1675 Oct 27 08:47 kubelet.key

5.8 Deploy the kube-proxy component
[root@master01 work ]#kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.255.0.1   <none>        443/TCP   22h

10.255.0.1 is not reachable from outside; it only exists in iptables/ipvs rules, and kube-proxy is what generates those rules.

#Create the CSR request
[root@master01 work ]#cat kube-proxy-csr.json 
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

Generate the certificate
[root@master01 work ]#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2022/10/27 08:52:02 [INFO] generate received request
2022/10/27 08:52:02 [INFO] received CSR
2022/10/27 08:52:02 [INFO] generating key: rsa-2048
2022/10/27 08:52:02 [INFO] encoded CSR
2022/10/27 08:52:02 [INFO] signed certificate with serial number 621761669559249301102184867130469483006128723356
2022/10/27 08:52:02 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master01 work ]#ll kube-proxy*
-rw-r--r-- 1 root root 1005 Oct 27 08:52 kube-proxy.csr
-rw-r--r-- 1 root root  212 Oct 25 15:39 kube-proxy-csr.json
-rw------- 1 root root 1679 Oct 27 08:52 kube-proxy-key.pem
-rw------- 1 root root 6238 Oct 27 08:54 kube-proxy.kubeconfig
-rw-r--r-- 1 root root 1391 Oct 27 08:52 kube-proxy.pem
-rw-r--r-- 1 root root  297 Oct 25 15:39 kube-proxy.yaml

#Create the kubeconfig file (the security context that tells kube-proxy which cluster to talk to)

Set the cluster parameters

[root@master01 work ]#kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.0.10:6443 --kubeconfig=kube-proxy.kubeconfig

Cluster "kubernetes" set.

Set the client authentication parameters

[root@master01 work ]#kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig

User "kube-proxy" set.

Set the context parameters

[root@master01 work ]#kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig

Context "default" created.

--user=kube-proxy is the CN user set in kube-proxy-csr.json.

If OU is set to system, the --user value can be just the plain user name; the effect is the same as adding the system: prefix.

Set the current context

[root@master01 work ]#kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Switched to context "default".

#Create the kube-proxy configuration file
[root@master01 work ]#cat kube-proxy.yaml 
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 10.10.0.14                 #node IP
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.10.0.0/24               #use the physical host network segment here
healthzBindAddress: 10.10.0.14:10256    #node IP
kind: KubeProxyConfiguration
metricsBindAddress: 10.10.0.14:10249    #node IP, metrics endpoint
mode: "ipvs"                            #use ipvs forwarding mode

#Create the service startup file
[root@master01 work ]#cat kube-proxy.service 
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
 
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
[root@master01 work ]#scp  kube-proxy.kubeconfig kube-proxy.yaml node01:/etc/kubernetes/
kube-proxy.kubeconfig                                                                                                  100% 6238     5.9MB/s   00:00    
kube-proxy.yaml                                                                                                        100%  282   418.8KB/s   00:00    
[root@master01 work ]#scp  kube-proxy.service node01:/usr/lib/systemd/system/
kube-proxy.service

#Start the service; create the working directory on the node first
[root@node01 kubernetes ]#mkdir -p /var/lib/kube-proxy

[root@node01 kubernetes ]#systemctl daemon-reload
[root@node01 kubernetes ]#systemctl enable kube-proxy.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node01 kubernetes ]#systemctl status kube-proxy.service 
● kube-proxy.service - Kubernetes Kube-Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2022-10-27 09:39:33 CST; 1min 22s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 12737 (kube-proxy)
    Tasks: 7
   Memory: 57.5M
   CGroup: /system.slice/kube-proxy.service
           └─12737 /usr/local/bin/kube-proxy --config=/etc/kubernetes/kube-proxy.yaml --alsologtostderr=true --logtostderr=false --log-dir=/var/log/ku...

Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.136193   12737 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.136212   12737 config.go:315] Starting service config controller
Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.136214   12737 shared_informer.go:240] Waiting for caches to sync for service config
Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.136432   12737 reflector.go:219] Starting reflector *v1beta1.EndpointSlice (15m0s) f...ry.go:134
Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.136441   12737 reflector.go:219] Starting reflector *v1.Service (15m0s) from k8s.io/...ry.go:134
Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.172077   12737 service.go:275] Service default/kubernetes updated: 1 ports
Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.236331   12737 shared_informer.go:247] Caches are synced for service config
Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.236505   12737 shared_informer.go:247] Caches are synced for endpoint slice config
Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.236532   12737 proxier.go:1036] Not syncing ipvs rules until Services and Endpoints ...om master
Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.236928   12737 service.go:390] Adding new service port "default/kubernetes:https" at...1:443/TCP
Hint: Some lines were ellipsized, use -l to show in full.
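
With mode: "ipvs" set in kube-proxy.yaml, the Service rules should now appear as ipvs virtual servers instead of plain iptables chains. A quick check on node01 using the ipvsadm installed during initialization:

ipvsadm -Ln    # 10.255.0.1:443 should be listed, forwarding to the apiserver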

5.9 Deploy the calico component

#Load the offline image archive

#Upload calico.tar.gz to node01 and load it manually

[root@node01 ~ ]#docker load -i calico.tar.gz

#Upload the calico.yaml file to the /data/work directory on master01

Installing the Calico component

Calico:https://www.projectcalico.org/

https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises

curl https://docs.projectcalico.org/manifests/calico.yaml -O    # original version

Click Manifest to download:

curl https://projectcalico.docs.tigera.io/manifests/calico.yaml -O    # newer version

Click requirements to see which Kubernetes versions Calico supports.

Modify the following section of calico.yaml: uncomment it and change it to the pod subnet you configured

- name: CALICO_IPV4POOL_CIDR
  value: "10.0.0.0/16"

For a binary install this must be changed; otherwise the generated pod IPs come from the default 192.168.0.0/16 range.

With the default value:

[root@master01 work ]#vim calico.yaml

- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"

[root@master01 ~ ]#kubectl get pods -owide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES

nginx 1/1 Running 0 55s 192.168.196.130 node01

After the change:

[root@master01 work ]#vim calico.yaml

- name: CALICO_IPV4POOL_CIDR
  value: "10.0.0.0/16"

[root@master01 work ]#kubectl get pods -owide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES

nginx 1/1 Running 0 2m9s 10.0.196.130 node01
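
Before applying the manifest, a quick grep confirms that the CIDR edit actually took effect:

grep -A1 'CALICO_IPV4POOL_CIDR' calico.yaml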

[root@master01 work ]#kubectl apply -f calico.yaml 

[root@master01 work ]#kubectl get pods -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
calico-kube-controllers-6949477b58-dpwmn   1/1     Running   0          24s   10.0.196.129   node01   <none>           <none>
calico-node-rmkbm                          1/1     Running   0          24s   10.10.0.14     node01   <none>           <none>

calico-node-rmkbm handles IP address allocation.

calico-kube-controllers-6949477b58-dpwmn handles network policy.
[root@master01 work ]#kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
node01   Ready    <none>   62m   v1.20.7

Troubleshooting a failure:
[root@master03 kubernetes ]#kubectl get pods -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS   AGE     IP             NODE       NOMINATED NODE   READINESS GATES
calico-kube-controllers-6949477b58-cr5lh   1/1     Running   0          5h44m   10.0.196.129   node01     <none>           <none>
calico-node-49kj4                          1/1     Running   6          35m     10.10.0.11     master02   <none>           <none>
calico-node-bgv4n                          1/1     Running   5          35m     10.10.0.10     master01   <none>           <none>
calico-node-bjsmm                          1/1     Running   0          5h44m   10.10.0.14     node01     <none>           <none>
calico-node-t2ljz                          0/1     Running   1          8m46s   10.10.0.12     master03   <none>           <none>
coredns-7bf4bd64bd-572z9                   1/1     Running   0          5h31m   10.0.196.131   node01     <none>           <none>

The calico component on master03 is not ready.

The Calico network component reports a BIRD error:

calico/node is not ready: BIRD is not ready: BGP not established with

Warning Unhealthy 8s kubelet Readiness probe failed: 2022-10-27 08:00:12.256 [INFO][374] confd/health.go 180: Number of node(s) with BGP peering established = 0

calico/node is not ready: BIRD is not ready: BGP not established with 10.10.0.10,10.10.0.11,10.10.0.14

This error is caused by a conflicting network interface on the node; after the problematic interface is deleted, the Calico pod starts normally.

The abnormal interfaces are the ones whose names start with br.
[root@master03 kubernetes ]#ip link 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:77:30:7c brd ff:ff:ff:ff:ff:ff
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:c2:48:1f:bc brd ff:ff:ff:ff:ff:ff
5: br-8763acf3655c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:da:e8:e8:55 brd ff:ff:ff:ff:ff:ff

It is this br-8763acf3655c interface.

Delete the interface whose name starts with br:
[root@master03 kubernetes ]#ip link delete br-8763acf3655c

Recreate the calico pod; this time it succeeds:
[root@master03 kubernetes ]#kubectl get pods -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS   AGE     IP             NODE       NOMINATED NODE   READINESS GATES
calico-kube-controllers-6949477b58-5rqgc   1/1     Running   2          20m     10.0.241.64    master01   <none>           <none>
calico-node-g48ng                          1/1     Running   0          7m53s   10.10.0.12     master03   <none>           <none>
calico-node-nb4g4                          1/1     Running   0          20m     10.10.0.11     master02   <none>           <none>
calico-node-ppcqd                          1/1     Running   0          20m     10.10.0.10     master01   <none>           <none>
calico-node-tsrd4                          1/1     Running   0          20m     10.10.0.14     node01     <none>           <none>
coredns-7bf4bd64bd-572z9                   1/1     Running   0          5h57m   10.0.196.131   node01     <none>           <none>

Alternatively:

The likely cause is that Calico did not detect the actual network interface.

Solution

/*

Adjust the Calico network plugin's interface-detection mechanism by changing the value of IP_AUTODETECTION_METHOD.

In the official yaml the IP detection policy (IP_AUTODETECTION_METHOD) is not configured, so it defaults to first-found, which can register an IP from a faulty network as the node IP

and break the node-to-node mesh. Change it to the can-reach or interface policy, which tries to reach the IP of a Ready node and thereby picks the correct IP.

*/

// add the following two lines to calico.yaml

- name: IP_AUTODETECTION_METHOD
  value: "interface=ens.*"   # set the ens prefix according to your actual interface name

// the resulting configuration looks like this

- name: CLUSTER_TYPE
  value: "k8s,bgp"

- name: IP_AUTODETECTION_METHOD
  value: "interface=ens.*"
  # or value: "interface=ens160"

Auto-detect the BGP IP address.

  • name: IP

value: "autodetect"

Enable IPIP

  • name: CALICO_IPV4POOL_IPIP

value: "Always"

5.10 Deploy the coredns component

coreDNS is the cluster DNS server; it resolves domain names to IP addresses.

For every Service it creates an FQDN (fully qualified domain name) of the form svcname.namespace.svc.cluster.local.

The image comes from the pause-cordns.tar.gz archive that was loaded earlier.

bash 复制代码
[root@master01 work ]#cat coredns.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: k8s-app
                     operator: In
                     values: ["kube-dns"]
               topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.7.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.255.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
bash 复制代码
[root@master01 work ]#kubectl apply -f coredns.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
bash 复制代码
[root@master01 work ]#kubectl get pods -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
calico-kube-controllers-6949477b58-cr5lh   1/1     Running   0          13m   10.0.196.129   node01   <none>           <none>
calico-node-bjsmm                          1/1     Running   0          13m   10.10.0.14     node01   <none>           <none>
coredns-7bf4bd64bd-572z9                   1/1     Running   0          16s   10.0.196.131   node01   <none>           <none>
bash 复制代码
[root@master01 work ]#kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.255.0.2   <none>        53/UDP,53/TCP,9153/TCP   2m6s
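
A quick host-side check that the kube-dns Service is actually backed by the coredns pod (a sketch; the pod IP in the output will differ in your environment):

# The Endpoints object should list the coredns pod IP on ports 53/UDP, 53/TCP and 9153/TCP
kubectl get endpoints kube-dns -n kube-system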

Both calico and coredns run as pods.

4. Check the cluster status

bash 复制代码
[root@master01 work ]#kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
node01   Ready    <none>   102m   v1.20.7

5. Test the k8s cluster by deploying a tomcat service

#Upload tomcat.tar.gz and busybox-1-28.tar.gz to node01 and load the images manually:

bash 复制代码
[root@node01 ~ ]#docker load -i busybox-1-28.tar.gz 
432b65032b94: Loading layer [==================================================>]   1.36MB/1.36MB
Loaded image: busybox:1.28
[root@node01 ~ ]#docker load -i tomcat.tar.gz 
f1b5933fe4b5: Loading layer [==================================================>]  5.796MB/5.796MB
9b9b7f3d56a0: Loading layer [==================================================>]  3.584kB/3.584kB
edd61588d126: Loading layer [==================================================>]  80.28MB/80.28MB
48988bb7b861: Loading layer [==================================================>]   2.56kB/2.56kB
8e0feedfd296: Loading layer [==================================================>]  24.06MB/24.06MB
aac21c2169ae: Loading layer [==================================================>]  2.048kB/2.048kB
Loaded image: tomcat:8.5-jre8-alpine

Run on master01:

bash 复制代码
[root@master01 work ]#cat tomcat.yaml 
apiVersion: v1  # Pod belongs to the core v1 API group
kind: Pod  # the resource being created is a Pod
metadata:  # metadata
  name: demo-pod  # pod name
  namespace: default  # namespace the pod belongs to
  labels:
    app: myapp  # labels carried by the pod
    env: dev      # labels carried by the pod
spec:
  containers:      # container list; it is a list of objects, so several name entries may follow
  - name:  tomcat-pod-java  # container name
    ports:
    - containerPort: 8080
    image: tomcat:8.5-jre8-alpine   # image used by the container
    imagePullPolicy: IfNotPresent
  - name: busybox
    image: busybox:latest
    command:  # command is a list; each argument below is prefixed with a dash
    - "/bin/sh"
    - "-c"
    - "sleep 3600"

[root@master01 work ]#kubectl apply -f tomcat.yaml

pod/demo-pod created

bash 复制代码
[root@master01 work ]#kubectl get pods -owide
NAME       READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
demo-pod   2/2     Running   0          17s   10.0.196.132   node01   <none>           <none>
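
Since the two containers in demo-pod share one network namespace, tomcat can also be reached from the busybox container over localhost. A small sketch of that check (container and pod names are those defined in tomcat.yaml above; it assumes the default tomcat page is served at /):

# wget is built into busybox; any HTML coming back proves tomcat is listening on 8080
kubectl exec demo-pod -c busybox -- wget -qO- http://127.0.0.1:8080 | head -n 5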

Create a Service to expose the pod outside the cluster:

bash 复制代码
[root@master01 work ]#cat tomcat-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30080
  selector:
    app: myapp
    env: dev

[root@master01 work ]#kubectl apply -f tomcat-service.yaml

service/tomcat created

bash 复制代码
[root@master01 work ]#kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.255.0.1       <none>        443/TCP          23h
tomcat       NodePort    10.255.199.247   <none>        8080:30080/TCP   12s

Open http://<node01 IP>:30080 in a browser

and verify that the tomcat page is displayed correctly.
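
The same check can be scripted from any of the masters instead of a browser (a sketch using node01's IP from the environment table):

# -I fetches only the response headers; HTTP/1.1 200 means the NodePort service reaches tomcat
curl -I http://10.10.0.14:30080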

6. Verify that coredns works correctly

bash 复制代码
[root@master01 work ]#kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # ping www.baidu.com
PING www.baidu.com (14.215.177.39): 56 data bytes
64 bytes from 14.215.177.39: seq=0 ttl=127 time=5.189 ms
64 bytes from 14.215.177.39: seq=1 ttl=127 time=17.240 ms
^C
--- www.baidu.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 5.189/11.214/17.240 ms


/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.255.0.2
Address 1: 10.255.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 10.255.0.1 kubernetes.default.svc.cluster.local



/ # nslookup tomcat.default.svc.cluster.local
Server:    10.255.0.2
Address 1: 10.255.0.2 kube-dns.kube-system.svc.cluster.local

Name:      tomcat.default.svc.cluster.local
Address 1: 10.255.199.247 tomcat.default.svc.cluster.local

#Note:

busybox must be the specified 1.28 version, not the latest one; with the latest version, nslookup fails to resolve the DNS name and IP and reports an error like this:

bash 复制代码
/ # nslookup kubernetes.default.svc.cluster.local
Server:		10.255.0.2
Address:	10.255.0.2:53
*** Can't find kubernetes.default.svc.cluster.local: No answer
*** Can't find kubernetes.default.svc.cluster.local: No answer

10.255.0.2 is the clusterIP of our coreDNS Service, which shows coreDNS is configured correctly.

Resolution of internal Service names goes through coreDNS.
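
The reason pod lookups go through coreDNS is that kubelet writes the kube-dns ClusterIP into every pod's /etc/resolv.conf (for pods with the default dnsPolicy ClusterFirst). A quick way to see that, reusing the demo-pod created earlier:

# Expect: nameserver 10.255.0.2 plus search domains such as default.svc.cluster.local svc.cluster.local cluster.local
kubectl exec demo-pod -c busybox -- cat /etc/resolv.conf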

6. Install keepalived + nginx for k8s apiserver high availability

Upload epel.repo to the /etc/yum.repos.d directory on master01 so that keepalived and nginx can be installed.

Copy epel.repo to master02, master03 and node01 as well.

In short, install nginx and keepalived on each node that needs them.

On the three master nodes, install them directly from the epel repo:

yum install nginx keepalived -y

Configure nginx on all master nodes (see the nginx documentation for details; the nginx configuration is identical on every master node):

Add the stream block below and outside the http block; the stream block sits at the same level as the http block.

bash 复制代码
[root@master01 work ]#cat /etc/nginx/nginx.conf
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 4096;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;



    server {
        listen       80;
        listen       [::]:80;
        server_name  _;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
        location = /404.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }

# Settings for a TLS enabled server.
#
#    server {
#        listen       443 ssl http2;
#        listen       [::]:443 ssl http2;
#        server_name  _;
#        root         /usr/share/nginx/html;
#
#        ssl_certificate "/etc/pki/nginx/server.crt";
#        ssl_certificate_key "/etc/pki/nginx/private/server.key";
#        ssl_session_cache shared:SSL:1m;
#        ssl_session_timeout  10m;
#        ssl_ciphers HIGH:!aNULL:!MD5;
#        ssl_prefer_server_ciphers on;
#
#        # Load configuration files for the default server block.
#        include /etc/nginx/default.d/*.conf;
#
#        error_page 404 /404.html;
#            location = /40x.html {
#        }
#
#        error_page 500 502 503 504 /50x.html;
#            location = /50x.html {
#        }
#    }

}

    stream {

    upstream apiserver {
        server 10.10.0.10:6443 max_fails=3 fail_timeout=30s;
        server 10.10.0.11:6443 max_fails=3 fail_timeout=30s;
        server 10.10.0.12:6443 max_fails=3 fail_timeout=30s;
     }

    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass apiserver;

     }

}

If starting nginx fails with an error like this:

bash 复制代码
[root@master01 work ]#systemctl status nginx.service
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2022-10-27 10:55:16 CST; 12s ago
  Process: 3019 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=1/FAILURE)
  Process: 3017 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)

Oct 27 10:55:16 master01 systemd[1]: Starting The nginx HTTP and reverse proxy server...
Oct 27 10:55:16 master01 nginx[3019]: nginx: [emerg] unknown directive "stream" in /etc/nginx/nginx.conf:37
Oct 27 10:55:16 master01 nginx[3019]: nginx: configuration file /etc/nginx/nginx.conf test failed
Oct 27 10:55:16 master01 systemd[1]: nginx.service: control process exited, code=exited status=1
Oct 27 10:55:16 master01 systemd[1]: Failed to start The nginx HTTP and reverse proxy server.
Oct 27 10:55:16 master01 systemd[1]: Unit nginx.service entered failed state.
Oct 27 10:55:16 master01 systemd[1]: nginx.service failed.

then the stream module needs to be installed. Not every nginx build ships it ready-made; if it is missing you would have to compile it in, but the 1.20 version in the epel repo currently provides it as a package:

yum install nginx-mod-stream -y

bash 复制代码
[root@master01 work ]#systemctl enable nginx.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.

[root@master02 yum.repos.d ]#systemctl enable nginx.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.


[root@master03 yum.repos.d ]#systemctl enable nginx.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
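
Once nginx is up on all three masters, it is worth confirming that the stream proxy listens on 7443 and actually reaches an apiserver. A sketch (a 401/403 from the apiserver still counts as success here, since it proves the TCP path works and only authentication is missing):

# Is the stream block listening?
ss -lntp | grep 7443

# Does the proxy forward to an apiserver? Any HTTPS response, even 401 Unauthorized, means it does.
curl -k https://127.0.0.1:7443/version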

keepalived configuration:

Configure keepalived on all master nodes. The configuration differs per node, so keep them apart. Note that public clouds generally do not support keepalived (VRRP).

The master and backup configurations differ in three places:

router_id

state

priority

Everything else is identical.

Primary keepalived - master01:

bash 复制代码
[root@master01 keepalived ]#cat keepalived.conf 
! Configuration File for keepalived

global_defs {
   router_id 10.10.0.10
}

vrrp_script chk_nginx {
   script "/etc/keepalived/check_nginx.sh"
   interval 2
   weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33           # change to the actual NIC name
    virtual_router_id 51      # VRRP router ID; unique per instance. If several keepalived clusters run on the same network, this id must not be reused
    priority 100              # priority; set 90 on the backup servers
    advert_int 1              # VRRP advertisement (heartbeat) interval, default 1 second
    nopreempt                 # the default is preempt mode; make the master non-preemptive to keep the VIP from flapping back and forth
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
	10.10.0.100
    }
    track_script {
        chk_nginx
    }
}

If nginx dies, user requests fail, but keepalived by itself will not fail over. So we write a script that checks whether nginx is alive;

if nginx is confirmed dead and cannot be restarted, the script stops keepalived on this node so that the VIP can fail over cleanly.

bash 复制代码
[root@master01 keepalived ]#cat check_nginx.sh 
#!/bin/bash
# Keep checking whether nginx is alive; if it cannot be restarted,
# stop keepalived so the VIP fails over to another node.
while true;do
nginxpid=$(ps -C nginx --no-header|wc -l)
if [ "$nginxpid" -eq 0 ];then
        systemctl start nginx
        echo "nginx has been restarted!"
        sleep 5
        nginxpid=$(ps -C nginx --no-header|wc -l)
        if [ "$nginxpid" -eq 0 ];then
          systemctl stop keepalived
          echo "nginx could not be restarted, stopping keepalived on this node!"
          exit 1
        fi
fi
        sleep 5
        echo "nginx is running normally."
done

Make the script executable:

[root@master01 keepalived ]#chmod +x check_nginx.sh
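
The backup nodes reference the same script path in their vrrp_script block, so copy it over before starting keepalived. A sketch, assuming the passwordless ssh between the masters that was set up earlier:

# Copy the health-check script to the other masters and keep it executable
for host in master02 master03; do
  scp /etc/keepalived/check_nginx.sh $host:/etc/keepalived/
  ssh $host "chmod +x /etc/keepalived/check_nginx.sh"
done

# Start keepalived on this node (repeat on master02 and master03)
systemctl enable keepalived --now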

Backup - master02 configuration:

bash 复制代码
[root@master02 keepalived ]#cat keepalived.conf 
! Configuration File for keepalived

global_defs {
   router_id 10.10.0.11
}

vrrp_script chk_nginx {
   script "/etc/keepalived/check_nginx.sh"
   interval 2
   weight -20
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
	10.10.0.100
    }
    track_script {
        chk_nginx
    }
}

Backup - master03 configuration:

bash 复制代码
[root@master03 keepalived ]#cat keepalived.conf 
! Configuration File for keepalived

global_defs {
   router_id 10.10.0.12
}

vrrp_script chk_nginx {
   script "/etc/keepalived/check_nginx.sh"
   interval 2
   weight -20
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
	10.10.0.100
    }
    track_script {
        chk_nginx
    }
}

Test keepalived

Stop nginx on master01; the VIP should float to master02 or master03.

Tested: the VIP does fail over.
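
Because check_nginx.sh restarts nginx automatically, the cleanest way to force a failover during testing is to stop keepalived itself on master01 and watch where the VIP lands (a sketch):

# on master01: release the VIP
systemctl stop keepalived

# on master02 / master03: the VIP 10.10.0.100 should appear on ens33 within a few seconds
ip addr show ens33 | grep 10.10.0.100

# bring master01 back afterwards
systemctl start keepalived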

Point the worker nodes at the VIP

At the moment all Worker Node components still connect to master01. If they are not switched to the VIP behind the load balancer, the master remains a single point of failure.

So the next step is to edit the component config files on every Worker Node (the nodes listed by kubectl get node),

changing the original 10.10.0.10 to 10.10.0.100 (the VIP).

Run on all Worker Nodes:

bash 复制代码
[root@node01 ~]# sed -i 's#10.10.0.10:6443#10.10.0.100:7443#' /etc/kubernetes/kubelet-bootstrap.kubeconfig

[root@node01 ~]# sed -i 's#10.10.0.10:6443#10.10.0.100:7443#' /etc/kubernetes/kubelet.kubeconfig

[root@node01 ~]# sed -i 's#10.10.0.10:6443#10.10.0.100:7443#' /etc/kubernetes/kube-proxy.yaml

[root@node01 ~]# sed -i 's#10.10.0.10:6443#10.10.0.100:7443#' /etc/kubernetes/kube-proxy.kubeconfig

[root@node01 ~]# systemctl restart kubelet kube-proxy
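
After the restart, confirm that the node components really talk to the VIP and that the node stays Ready (a sketch):

# Every server: entry should now point at https://10.10.0.100:7443
grep "server:" /etc/kubernetes/*.kubeconfig

# On a master: node01 should remain Ready
kubectl get nodes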

On each master node:

bash 复制代码
[root@master02 ~ ]#sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /root/.kube/config
[root@master03 keepalived ]#sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /root/.kube/config
[root@master01 work ]#sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /root/.kube/config


[root@master03 ~ ]#sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /etc/kubernetes/kube-scheduler.kubeconfig
[root@master03 ~ ]#sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /etc/kubernetes/kube-controller-manager.kubeconfig

[root@master02 ~ ]#sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /etc/kubernetes/kube-scheduler.kubeconfig
[root@master02 ~ ]#sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /etc/kubernetes/kube-controller-manager.kubeconfig

[root@master01 ~ ]#sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /etc/kubernetes/kube-scheduler.kubeconfig
[root@master01 ~ ]#sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /etc/kubernetes/kube-controller-manager.kubeconfig


[root@master03 ~ ]#systemctl restart kube-scheduler.service kube-controller-manager.service 
[root@master02 ~ ]#systemctl restart kube-scheduler.service kube-controller-manager.service 
[root@master01 ~ ]#systemctl restart kube-scheduler.service kube-controller-manager.service 


[root@master03 ~ ]#kubectl cluster-info
Kubernetes control plane is running at https://10.10.0.100:7443
CoreDNS is running at https://10.10.0.100:7443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

With this, the highly available cluster is set up.

Right now kubectl get nodes only shows the worker node. If you want the control-plane nodes to show up as well, run through the kubelet and kube-proxy deployment steps on the control nodes too,

taking care to change the IPs in kubelet.json, kubelet-bootstrap.kubeconfig, kube-proxy.yaml and kube-proxy.kubeconfig.

Label the master nodes:

bash 复制代码
[root@master01 work ]#kubectl label node master01 node-role.kubernetes.io/controlplane=true
node/master01 labeled
[root@master01 work ]#kubectl label node master02 node-role.kubernetes.io/controlplane=true
node/master02 labeled
[root@master01 work ]#kubectl label node master03 node-role.kubernetes.io/controlplane=true
node/master03 labeled
[root@master01 work ]#kubectl label node master01 node-role.kubernetes.io/etcd=true
node/master01 labeled
[root@master01 work ]#kubectl label node master02 node-role.kubernetes.io/etcd=true
node/master02 labeled
[root@master01 work ]#kubectl label node master03 node-role.kubernetes.io/etcd=true

Label the worker node:

bash 复制代码
[root@master01 work ]#kubectl label node node01 node-role.kubernetes.io/worker=true
bash 复制代码
[root@master01 work ]#kubectl get nodes
NAME       STATUS   ROLES               AGE     VERSION
master01   Ready    controlplane,etcd   67m     v1.20.7
master02   Ready    controlplane,etcd   67m     v1.20.7
master03   Ready    controlplane,etcd   67m     v1.20.7
node01     Ready    worker              7h42m   v1.20.7

7. Taint the master nodes to disable scheduling

bash 复制代码
[root@master01 work ]#kubectl taint node master01 master01=null:NoSchedule
node/master01 tainted
[root@master01 work ]#kubectl taint node master02 master02=null:NoSchedule
node/master02 tainted
[root@master01 work ]#kubectl taint node master03 master03=null:NoSchedule
node/master03 tainted
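
To confirm the taints took effect (and to remove one later if you do want workloads on a master), a quick sketch:

# Show the taint on each master
kubectl describe node master01 master02 master03 | grep -E "^Name:|Taints:"

# To remove a taint again later (note the trailing minus):
# kubectl taint node master01 master01=null:NoSchedule-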

That's it. The above is the complete workflow for building a highly available K8S cluster from binary packages. I hope it helps you in your study and work, ღ( ´・ᴗ・` ).
