Single-master architecture diagram (image)
I. Operating System Initialization (run on all nodes)
1. Disable the firewall
```bash
[root@iZbp11oc69mdt2iizzqhzuZ ~]# systemctl stop firewalld
[root@iZbp11oc69mdt2iizzqhzuZ ~]# systemctl disable firewalld
```
2. Disable SELinux
```bash
[root@iZbp11oc69mdt2iizzqhzuZ ~]# setenforce 0   # disable temporarily
setenforce: SELinux is disabled
[root@iZbp11oc69mdt2iizzqhzuZ ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config   # disable permanently
```
3. Disable swap
```bash
[root@iZbp11oc69mdt2iizzqhzuZ ~]# swapoff -a   # disable temporarily
[root@iZbp11oc69mdt2iizzqhzuZ ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab   # disable permanently
```
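The `sed -ri 's/.*swap.*/#&/' /etc/fstab` edit comments out every line mentioning swap: `&` stands for the whole matched line, so each match is replaced by `#` plus itself. A quick sanity check against a scratch copy (the fstab entries below are hypothetical examples, not from a real host):

```shell
# Demonstrate the swap-disabling sed edit on a scratch copy of fstab.
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /       xfs     defaults 0 0
/dev/mapper/centos-swap swap    swap    defaults 0 0
EOF

# '#&' replaces each matching line with '#' followed by the original line.
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo

grep swap /tmp/fstab.demo   # the swap entry is now commented out
```

Only the swap line is touched; the root filesystem entry stays active.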
4. Set hostnames according to the plan
master node
```bash
[root@iZbp11oc69mdt2iizzqhzuZ ~]# hostnamectl set-hostname k8s-master1
[root@iZbp11oc69mdt2iizzqhzuZ ~]# bash
[root@k8s-master1 ~]#
```
node1 node
```bash
[root@iZbp109g5wnimxn10iv2krZ ~]# hostnamectl set-hostname k8s-node1
[root@iZbp109g5wnimxn10iv2krZ ~]# bash
[root@k8s-node1 ~]#
```
node2 node
```bash
[root@iZbp1b25lq5akibeea3sr4Z ~]# hostnamectl set-hostname k8s-node2
[root@iZbp1b25lq5akibeea3sr4Z ~]# bash
[root@k8s-node2 ~]#
```
5. Add host entries on the master node
```bash
[root@k8s-master1 ~]# vim /etc/hosts
172.32.0.11 k8s-master1
172.32.0.12 k8s-node1
172.32.0.13 k8s-node2
```
6. Pass bridged IPv4 traffic to iptables chains
```bash
[root@k8s-master1 ~]# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@k8s-master1 ~]# sysctl --system   # apply the settings
```
7. Synchronize time
```bash
[root@k8s-master1 ~]# yum -y install ntpdate
[root@k8s-master1 ~]# ntpdate time.windows.com
4 Sep 11:03:38 ntpdate[31657]: adjust time server 52.231.114.183 offset 0.001765 sec
```
II. Deploy the Etcd Cluster
Etcd is a distributed key-value store and is where Kubernetes keeps all of its data, so an etcd database must be prepared first. To avoid a single point of failure, deploy it as a cluster: a 3-node cluster tolerates 1 machine failure; a 5-node cluster tolerates 2.

| Node name | IP |
|---|---|
| etcd-1 | 172.32.0.11 |
| etcd-2 | 172.32.0.12 |
| etcd-3 | 172.32.0.13 |

Note: to save machines, etcd is co-located with the Kubernetes nodes here. It can also be deployed on machines outside the Kubernetes cluster, as long as the apiserver can reach it. (For production, separate machines are recommended.)
1. Prepare the cfssl certificate tooling
cfssl is an open-source certificate-management tool that generates certificates from JSON files and is easier to use than openssl.
Run this on any one server; the master node is used here.
```bash
[root@k8s-master1 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master1 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master1 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master1 ~]# chmod +x cfssl-certinfo_linux-amd64 cfssl_linux-amd64 cfssljson_linux-amd64
[root@k8s-master1 ~]# ll
total 18808
-rw-r--r-- 1 root root  6595195 Dec  7  2021 cfssl-certinfo_linux-amd64
-rw-r--r-- 1 root root  2277873 Dec  7  2021 cfssljson_linux-amd64
-rw-r--r-- 1 root root 10376657 Dec  7  2021 cfssl_linux-amd64
[root@k8s-master1 ~]# mv cfssl_linux-amd64 /usr/local/bin/cfssl                # the cfssl tool itself
[root@k8s-master1 ~]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson        # takes the JSON output of cfssl/multirootca and writes it out as certificate files (cert, key, CSR, bundle)
[root@k8s-master1 ~]# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo    # displays the details of a CSR or certificate file; useful for verifying certificates
```
2. Generate the Etcd Certificates
2.1 Create a self-signed etcd certificate authority (root CA)
Create the working directories
```bash
[root@k8s-master1 ~]# mkdir -pv /hqtbj/hqtwww/TLS/{etcd,k8s}
mkdir: created directory '/hqtbj'
mkdir: created directory '/hqtbj/hqtwww'
mkdir: created directory '/hqtbj/hqtwww/TLS'
mkdir: created directory '/hqtbj/hqtwww/TLS/etcd'
mkdir: created directory '/hqtbj/hqtwww/TLS/k8s'
```
Create the JSON config for the CA (self-signed CA)
The following command prints a default CA config template that can be used as a starting point, or the file can be written from scratch:
```bash
cfssl print-defaults config > ca-config.json
```
```bash
[root@k8s-master1 etcd]# vim ca-config.json
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "etcd": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
```
ca-config.json: declares what kinds of certificates this CA can issue; it configures usage profiles and their parameters (usages, expiry, server authentication, client authentication, encryption, and so on).
default: the default policy; here certificates are valid for 10 years (87600h).
profiles: named usage scenarios. Only etcd is defined here, but several profiles with different expiry times and usages can be defined, and a specific profile is then selected with -profile when signing.
signing: the certificate can be used to sign other certificates; the generated ca.pem will have CA=TRUE.
server auth: a client can use this CA to verify certificates presented by servers.
client auth: a server can use this CA to verify certificates presented by clients.
Create the root CA certificate signing request file
The following command prints a default CSR template that can be used as a starting point, or the file can be written from scratch:
```bash
cfssl print-defaults csr > ca-csr.json
```
```bash
[root@k8s-master1 etcd]# vim ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
```
Generate the self-signed CA certificate and private key (ca.pem, ca-key.pem)
```bash
[root@k8s-master1 etcd]# ll
total 8
-rw-r--r-- 1 root root 288 Sep  4 11:45 ca-config.json
-rw-r--r-- 1 root root 209 Sep  4 11:43 ca-csr.json
[root@k8s-master1 etcd]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
[root@k8s-master1 etcd]# ll
total 20
-rw-r--r-- 1 root root  288 Sep  4 11:45 ca-config.json
-rw-r--r-- 1 root root  956 Sep  4 11:48 ca.csr
-rw-r--r-- 1 root root  209 Sep  4 11:43 ca-csr.json
-rw------- 1 root root 1679 Sep  4 11:48 ca-key.pem
-rw-r--r-- 1 root root 1265 Sep  4 11:48 ca.pem
```
This produces the ca.pem certificate and the ca-key.pem private key.
2.2 Use the self-signed CA to issue the etcd HTTPS certificate
Create the etcd certificate request file
```bash
[root@k8s-master1 etcd]# vim etcd-server.json
{
    "CN": "etcd",
    "hosts": [
    "172.32.0.11",
    "172.32.0.12",
    "172.32.0.13",
    "172.32.0.14",
    "172.32.0.15",
    "172.32.0.16",
    "172.32.0.17"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
```
Note: the hosts field must contain the internal cluster-communication IPs of all etcd nodes — not one may be missing! To make later scale-out easier, add a few spare IPs.
CN: the Common Name of the certificate;
hosts: the names allowed to use this certificate (IPs are used here);
key: key algorithm and size; RSA 2048 is the common choice;
names: subject information contained in the certificate, such as country and locality.
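If the node IPs are kept in a shell array, the `hosts` block of etcd-server.json can be generated instead of typed by hand, which avoids a forgotten node or a stray trailing comma. A minimal sketch (the IP list mirrors the one above; `ETCD_IPS` and `hosts_json` are illustrative names):

```shell
# Build the "hosts" JSON array for etcd-server.json from a shell array.
ETCD_IPS=(172.32.0.11 172.32.0.12 172.32.0.13 172.32.0.14)

# One quoted, comma-terminated entry per IP...
hosts_json=$(printf '    "%s",\n' "${ETCD_IPS[@]}")
# ...then strip the comma after the last entry (JSON forbids it).
hosts_json=${hosts_json%,}

printf '"hosts": [\n%s\n],\n' "$hosts_json"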
Generate the etcd certificate
```bash
[root@k8s-master1 etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd etcd-server.json | cfssljson -bare etcd-server
[root@k8s-master1 etcd]# ll
total 36
-rw-r--r-- 1 root root  288 Sep  4 11:45 ca-config.json
-rw-r--r-- 1 root root  956 Sep  4 11:48 ca.csr
-rw-r--r-- 1 root root  209 Sep  4 11:43 ca-csr.json
-rw------- 1 root root 1679 Sep  4 11:48 ca-key.pem
-rw-r--r-- 1 root root 1265 Sep  4 11:48 ca.pem
-rw-r--r-- 1 root root 1045 Sep  4 12:22 etcd-server.csr
-rw-r--r-- 1 root root  360 Sep  4 11:57 etcd-server.json
-rw------- 1 root root 1679 Sep  4 12:22 etcd-server-key.pem
-rw-r--r-- 1 root root 1371 Sep  4 12:22 etcd-server.pem
```
This produces the etcd-server.pem certificate and the etcd-server-key.pem private key.
2.3 Download the Etcd binaries from GitHub
Download: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
Reference: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd/
2.4 Deploy the Etcd Cluster
The following is done on node 1 (the master). To simplify things, all files generated on the master will later be copied to node 2 (node1) and node 3 (node2).
(1) Create the working directories and unpack the binary package
```bash
[root@k8s-master1 ~]# mkdir -pv /hqtbj/hqtwww/etcd/{bin,cfg,ssl}
mkdir: created directory '/hqtbj/hqtwww/etcd'
mkdir: created directory '/hqtbj/hqtwww/etcd/bin'
mkdir: created directory '/hqtbj/hqtwww/etcd/cfg'
mkdir: created directory '/hqtbj/hqtwww/etcd/ssl'
[root@k8s-master1 ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
[root@k8s-master1 ~]# tar -zxf etcd-v3.4.9-linux-amd64.tar.gz
[root@k8s-master1 ~]# ll etcd-v3.4.9-linux-amd64
total 40540
drwxr-xr-x 14 630384594 600260513     4096 May 22  2020 Documentation
-rwxr-xr-x  1 630384594 600260513 23827424 May 22  2020 etcd
-rwxr-xr-x  1 630384594 600260513 17612384 May 22  2020 etcdctl
-rw-r--r--  1 630384594 600260513    43094 May 22  2020 README-etcdctl.md
-rw-r--r--  1 630384594 600260513     8431 May 22  2020 README.md
-rw-r--r--  1 630384594 600260513     7855 May 22  2020 READMEv2-etcdctl.md
[root@k8s-master1 ~]# mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /hqtbj/hqtwww/etcd/bin/
[root@k8s-master1 ~]# cd /hqtbj/hqtwww/etcd/bin/
[root@k8s-master1 bin]# ll
total 40472
-rwxr-xr-x 1 630384594 600260513 23827424 May 22  2020 etcd
-rwxr-xr-x 1 630384594 600260513 17612384 May 22  2020 etcdctl
```
(2) Create the etcd config file
```bash
[root@k8s-master1 cfg]# vim /hqtbj/hqtwww/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.32.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.32.0.11:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.32.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.32.0.11:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.32.0.11:2380,etcd-2=https://172.32.0.12:2380,etcd-3=https://172.32.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
```
ETCD_NAME: node name; must be unique within the cluster
ETCD_DATA_DIR: etcd data directory
ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) traffic
ETCD_LISTEN_CLIENT_URLS: listen address for client traffic
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to clients
ETCD_INITIAL_CLUSTER: the addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: cluster state when joining; "new" for a new cluster, "existing" to join an existing one
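Since only ETCD_NAME and the four URL lines differ between the three nodes, all three config files can be generated from a single loop. A minimal sketch that writes to /tmp rather than the real cfg directory (the arrays and output paths are illustrative):

```shell
# Generate etcd.conf for each node; only the member name and IPs differ.
NAMES=(etcd-1 etcd-2 etcd-3)
IPS=(172.32.0.11 172.32.0.12 172.32.0.13)
CLUSTER="etcd-1=https://172.32.0.11:2380,etcd-2=https://172.32.0.12:2380,etcd-3=https://172.32.0.13:2380"

for i in 0 1 2; do
  cat > "/tmp/etcd-${NAMES[$i]}.conf" <<EOF
#[Member]
ETCD_NAME="${NAMES[$i]}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${IPS[$i]}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${IPS[$i]}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${IPS[$i]}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${IPS[$i]}:2379"
ETCD_INITIAL_CLUSTER="${CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
done

grep ETCD_NAME /tmp/etcd-etcd-2.conf
```

On a real deployment you would scp each generated file to its node as /hqtbj/hqtwww/etcd/cfg/etcd.conf, which removes the manual per-node editing done in a later step.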
2.5 Manage etcd with systemd
Note: systemd unit files do not allow inline comments after a line-continuation backslash, so the flags are explained below the unit instead of inside it.
```bash
[root@k8s-master1 ~]# vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/hqtbj/hqtwww/etcd/cfg/etcd.conf
ExecStart=/hqtbj/hqtwww/etcd/bin/etcd \
--cert-file=/hqtbj/hqtwww/etcd/ssl/etcd-server.pem \
--key-file=/hqtbj/hqtwww/etcd/ssl/etcd-server-key.pem \
--peer-cert-file=/hqtbj/hqtwww/etcd/ssl/etcd-server.pem \
--peer-key-file=/hqtbj/hqtwww/etcd/ssl/etcd-server-key.pem \
--trusted-ca-file=/hqtbj/hqtwww/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/hqtbj/hqtwww/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
```
EnvironmentFile: the etcd config file
--cert-file / --key-file: the server certificate and private key etcd presents to clients
--peer-cert-file / --peer-key-file: peer certificate and key for etcd-to-etcd traffic (the server certificate and key are reused here)
--trusted-ca-file: the CA used to verify client certificates — the self-signed CA we created to issue etcd's HTTPS certificate
--peer-trusted-ca-file: the root CA used to verify peer certificates
2.6 Copy the etcd certificates generated earlier to the paths referenced in the config file
```bash
[root@k8s-master1 cfg]# cp /hqtbj/hqtwww/TLS/etcd/ca*pem /hqtbj/hqtwww/TLS/etcd/etcd-server*pem /hqtbj/hqtwww/etcd/ssl/
```
2.7 Start etcd and enable it at boot
```bash
[root@k8s-master1 ~]# systemctl daemon-reload
[root@k8s-master1 ~]# systemctl enable etcd
[root@k8s-master1 ~]# systemctl start etcd
```
The start command blocks the terminal: the 3-node cluster can tolerate one failed member, but the other two members are not running yet, so this node keeps trying to reach them and fails to start if neither can be reached. The log looks like this:
```bash
Job for etcd.service failed because a timeout was exceeded. See "systemctl status etcd.service" and "journalctl -xe" for details.
[root@k8s-master1 ~]# journalctl -f -u etcd.service
......
Sep 04 12:38:46 k8s-master1 etcd[32366]: {"level":"warn","ts":"2022-09-04T12:38:46.299+0800","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1c8006ffe5f4dc13","rtt":"0s","error":"dial tcp 172.32.0.12:2380: connect: connection refused"}
Sep 04 12:38:46 k8s-master1 etcd[32366]: {"level":"warn","ts":"2022-09-04T12:38:46.299+0800","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1c8006ffe5f4dc13","rtt":"0s","error":"dial tcp 172.32.0.12:2380: connect: connection refused"}
Sep 04 12:38:46 k8s-master1 etcd[32366]: {"level":"warn","ts":"2022-09-04T12:38:46.307+0800","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c9a413f6093101a1","rtt":"0s","error":"dial tcp 172.32.0.13:2380: connect: connection refused"}
Sep 04 12:38:46 k8s-master1 etcd[32366]: {"level":"warn","ts":"2022-09-04T12:38:46.308+0800","caller":"rafthttp/probing_status.go:70","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c9a413f6093101a1","rtt":"0s","error":"dial tcp 172.32.0.13:2380: connect: connection refused"}
Sep 04 12:38:46 k8s-master1 systemd[1]: etcd.service start operation timed out. Terminating.
Sep 04 12:38:46 k8s-master1 systemd[1]: Failed to start Etcd Server.
```
Tip: the fix is simply to start at least one other cluster member at the same time, which is exactly the next step:
2.8 Copy all the files generated on etcd node 1 (k8s-master1) to the other etcd nodes (node 2 and node 3)
```bash
[root@k8s-master1 ~]# scp -pr /hqtbj/hqtwww/etcd/ [email protected]:/hqtbj/hqtwww/
[root@k8s-master1 ~]# scp -r /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
[root@k8s-master1 ~]# scp -pr /hqtbj/hqtwww/etcd/ [email protected]:/hqtbj/hqtwww/
[root@k8s-master1 ~]# scp -r /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
```
Then on node 2 and node 3, edit etcd.conf and change the node name and the IPs to the local server's:
```bash
[root@k8s-node1 ~]# vim /hqtbj/hqtwww/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-2"                                            # change: etcd-2 on node 2, etcd-3 on node 3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.32.0.12:2380"              # change to this server's IP
ETCD_LISTEN_CLIENT_URLS="https://172.32.0.12:2379"            # change to this server's IP
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.32.0.12:2380"   # change to this server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://172.32.0.12:2379"         # change to this server's IP
ETCD_INITIAL_CLUSTER="etcd-1=https://172.32.0.11:2380,etcd-2=https://172.32.0.12:2380,etcd-3=https://172.32.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
```
Finally, start etcd and enable it at boot as shown above. Do not wait for one node to come up before starting the next: start two, or all three, together.
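The two per-node edits above can also be applied with sed instead of by hand. A minimal sketch against a scratch copy (the file path and trimmed-down config below are illustrative; on a real node you would target /hqtbj/hqtwww/etcd/cfg/etcd.conf):

```shell
# Patch a copied etcd.conf for node 2: rename the member and swap in its IP.
CONF=/tmp/etcd.conf.node2
cat > "$CONF" <<'EOF'
ETCD_NAME="etcd-1"
ETCD_LISTEN_PEER_URLS="https://172.32.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.32.0.11:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.32.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.32.0.11:2379"
EOF

# Restrict the IP substitution to the *_URLS= lines so that
# ETCD_INITIAL_CLUSTER (which must keep all three IPs) is untouched.
sed -i -e 's/ETCD_NAME="etcd-1"/ETCD_NAME="etcd-2"/' \
       -e '/_URLS=/s/172.32.0.11/172.32.0.12/' "$CONF"

cat "$CONF"
```

For node 3, substitute etcd-3 and 172.32.0.13 instead.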
2.9 Check cluster status
```bash
[root@k8s-master1 ~]# ETCDCTL_API=3 /hqtbj/hqtwww/etcd/bin/etcdctl --cacert=/hqtbj/hqtwww/etcd/ssl/ca.pem --cert=/hqtbj/hqtwww/etcd/ssl/etcd-server.pem --key=/hqtbj/hqtwww/etcd/ssl/etcd-server-key.pem --endpoints="https://172.32.0.11:2379,https://172.32.0.12:2379,https://172.32.0.13:2379" endpoint health --write-out=table
+--------------------------+--------+-------------+-------+
|         ENDPOINT         | HEALTH |    TOOK     | ERROR |
+--------------------------+--------+-------------+-------+
| https://172.32.0.13:2379 |   true | 11.686469ms |       |
| https://172.32.0.11:2379 |   true |  11.83647ms |       |
| https://172.32.0.12:2379 |   true | 11.999272ms |       |
+--------------------------+--------+-------------+-------+
```
If you see output like the above, the cluster has been deployed successfully.
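For scripted monitoring (e.g. a cron probe), the table output can be checked mechanically instead of by eye. A sketch that parses a captured table with awk; the sample text below is a made-up dump that mirrors the format above, with one endpoint deliberately unhealthy:

```shell
# Extract any endpoint whose HEALTH column is not "true" from an
# `etcdctl endpoint health --write-out=table` dump.
health_table='| https://172.32.0.13:2379 | true | 11.686469ms | |
| https://172.32.0.11:2379 | true | 11.83647ms | |
| https://172.32.0.12:2379 | false | | context deadline exceeded |'

# Field 2 is the endpoint, field 3 the HEALTH column (split on '|').
unhealthy=$(printf '%s\n' "$health_table" | awk -F'|' '$3 !~ /true/ {gsub(/ /,"",$2); print $2}')

if [ -n "$unhealthy" ]; then
  echo "unhealthy endpoints: $unhealthy"
else
  echo "all endpoints healthy"
fi
```

In a real probe you would pipe the live etcdctl output into the awk filter instead of using a captured string.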
III. Install Docker
Docker is used as the container engine here; it can be swapped for another runtime such as containerd.
Binary download: https://download.docker.com/linux/static/stable/x86_64/
Run the following on all nodes. For convenience yum is used here; a binary install works the same way.
1. Install docker and start it
```bash
[root@k8s-master1 ~]# yum update
[root@k8s-master1 ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
[root@k8s-master1 ~]# yum -y install docker-ce
[root@k8s-master1 ~]# systemctl enable docker && systemctl start docker
[root@k8s-master1 ~]# docker info
```
2. Create the docker config file
```bash
[root@k8s-master1 ~]# vim /etc/docker/daemon.json
{
    "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
[root@k8s-master1 ~]# systemctl restart docker
```
registry-mirrors: use the Aliyun registry mirror; otherwise image pulls will be slow.
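daemon.json must be strictly valid JSON or dockerd will refuse to start; a stray trailing comma is a common cause. One way to check the file before restarting the daemon (this sketch writes to /tmp and assumes python3 is available; on a real host, point the check at /etc/docker/daemon.json):

```shell
# Validate docker's daemon.json before restarting the daemon.
cat > /tmp/daemon.json <<'EOF'
{
    "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

# json.tool exits non-zero on any syntax error.
if python3 -m json.tool /tmp/daemon.json > /dev/null; then
  echo "daemon.json is valid JSON"
else
  echo "daemon.json is NOT valid JSON" >&2
fi
```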
IV. Deploy the k8s-master Node
1. Generate the kube-apiserver Service Certificate
1.1 Create a self-signed Kubernetes certificate authority (root CA)
Create the JSON config for the CA (self-signed CA)
```bash
[root@k8s-master1 ~]# cd /hqtbj/hqtwww/TLS/k8s/
[root@k8s-master1 k8s]# vim ca-config.json
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
```
Create the Kubernetes root CA certificate signing request file
```bash
[root@k8s-master1 k8s]# vim ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
```
Generate the Kubernetes root certificate
```bash
[root@k8s-master1 k8s]# ll
total 8
-rw-r--r-- 1 root root 294 Sep  4 13:44 ca-config.json
-rw-r--r-- 1 root root 264 Sep  4 13:50 ca-csr.json
[root@k8s-master1 k8s]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
[root@k8s-master1 k8s]# ll
total 20
-rw-r--r-- 1 root root  294 Sep  4 13:44 ca-config.json
-rw-r--r-- 1 root root 1001 Sep  4 13:51 ca.csr
-rw-r--r-- 1 root root  264 Sep  4 13:50 ca-csr.json
-rw------- 1 root root 1679 Sep  4 13:51 ca-key.pem
-rw-r--r-- 1 root root 1359 Sep  4 13:51 ca.pem
```
This produces the ca.pem certificate and the ca-key.pem private key.
1.2 Use the self-signed CA to issue the kube-apiserver HTTPS certificate
Create the CSR request file for the kube-apiserver certificate:
```bash
[root@k8s-master1 k8s]# vim kube-apiserver-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "172.32.0.11",
      "172.32.0.12",
      "172.32.0.13",
      "172.32.0.14",
      "172.32.0.15",
      "172.32.0.16",
      "172.32.0.17",
      "172.32.0.18",
      "172.32.0.19",
      "172.32.0.20",
      "172.32.0.21",
      "172.32.0.22",
      "172.32.0.23",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
```
Note: the hosts field must include the IPs of every Master, load balancer, and VIP — not one may be missing! Add a few spare IPs to make future scale-out easier; they will be needed when deploying high availability later!!
Generate the kube-apiserver certificate
```bash
[root@k8s-master1 k8s]# ll
total 24
-rw-r--r-- 1 root root  294 Sep  4 13:44 ca-config.json
-rw-r--r-- 1 root root 1001 Sep  4 13:51 ca.csr
-rw-r--r-- 1 root root  264 Sep  4 13:50 ca-csr.json
-rw------- 1 root root 1679 Sep  4 13:51 ca-key.pem
-rw-r--r-- 1 root root 1359 Sep  4 13:51 ca.pem
-rw-r--r-- 1 root root  800 Sep  4 13:59 kube-apiserver-csr.json
[root@k8s-master1 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
[root@k8s-master1 k8s]# ll
total 36
-rw-r--r-- 1 root root  294 Sep  4 13:44 ca-config.json
-rw-r--r-- 1 root root 1001 Sep  4 13:51 ca.csr
-rw-r--r-- 1 root root  264 Sep  4 13:50 ca-csr.json
-rw------- 1 root root 1679 Sep  4 13:51 ca-key.pem
-rw-r--r-- 1 root root 1359 Sep  4 13:51 ca.pem
-rw-r--r-- 1 root root 1342 Sep  4 14:03 kube-apiserver.csr
-rw-r--r-- 1 root root  761 Sep  4 14:03 kube-apiserver-csr.json
-rw------- 1 root root 1675 Sep  4 14:03 kube-apiserver-key.pem
-rw-r--r-- 1 root root 1708 Sep  4 14:03 kube-apiserver.pem
```
This produces the kube-apiserver.pem certificate and the kube-apiserver-key.pem private key.
1.3 Download the Kubernetes binaries from GitHub
Download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#downloads-for-v12011
Tip: the page lists many packages; downloading the server package alone is enough, as it contains the binaries for both the Master and the Worker Node.
1.4 Unpack the binary package
```bash
[root@k8s-master1 ~]# mkdir -pv /hqtbj/hqtwww/kubernetes/{bin,cfg,ssl,logs}
mkdir: created directory '/hqtbj/hqtwww/kubernetes'
mkdir: created directory '/hqtbj/hqtwww/kubernetes/bin'
mkdir: created directory '/hqtbj/hqtwww/kubernetes/cfg'
mkdir: created directory '/hqtbj/hqtwww/kubernetes/ssl'
mkdir: created directory '/hqtbj/hqtwww/kubernetes/logs'
[root@k8s-master1 ~]# tar -zxf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master1 ~]# cd kubernetes/server/bin/
[root@k8s-master1 bin]# cp kube-apiserver kube-scheduler kube-controller-manager /hqtbj/hqtwww/kubernetes/bin/
[root@k8s-master1 bin]# cp kubectl /usr/bin/
```
1.5 Deploy kube-apiserver
(1) Create the config file
```bash
[root@k8s-master1 ~]# vim /hqtbj/hqtwww/kubernetes/cfg/kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/hqtbj/hqtwww/kubernetes/logs \
--etcd-servers=https://172.32.0.11:2379,https://172.32.0.12:2379,https://172.32.0.13:2379 \
--bind-address=172.32.0.11 \
--secure-port=6443 \
--advertise-address=172.32.0.11 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/hqtbj/hqtwww/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/hqtbj/hqtwww/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/hqtbj/hqtwww/kubernetes/ssl/kube-apiserver-key.pem \
--tls-cert-file=/hqtbj/hqtwww/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/hqtbj/hqtwww/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/hqtbj/hqtwww/kubernetes/ssl/ca.pem \
--service-account-key-file=/hqtbj/hqtwww/kubernetes/ssl/ca-key.pem \
--service-account-issuer=api \
--service-account-signing-key-file=/hqtbj/hqtwww/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/hqtbj/hqtwww/etcd/ssl/ca.pem \
--etcd-certfile=/hqtbj/hqtwww/etcd/ssl/etcd-server.pem \
--etcd-keyfile=/hqtbj/hqtwww/etcd/ssl/etcd-server-key.pem \
--requestheader-client-ca-file=/hqtbj/hqtwww/kubernetes/ssl/ca.pem \
--proxy-client-cert-file=/hqtbj/hqtwww/kubernetes/ssl/kube-apiserver.pem \
--proxy-client-key-file=/hqtbj/hqtwww/kubernetes/ssl/kube-apiserver-key.pem \
--requestheader-allowed-names=kubernetes \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--enable-aggregator-routing=true \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/hqtbj/hqtwww/kubernetes/logs/k8s-audit.log"
```
--logtostderr: enable logging (false writes to log files instead of stderr)
--v: log verbosity level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: kube-apiserver listen address
--secure-port: HTTPS secure port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission-control plugins
--authorization-mode: authorization mode; enables RBAC authorization and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range allocated to NodePort Services
--kubelet-client-xxx: the certificate and key the apiserver uses as a client when talking to kubelets
--tls-xxxx-file: the server certificate and key for the apiserver's own HTTPS endpoint
Parameters required since v1.20: --service-account-issuer, --service-account-signing-key-file
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings
Aggregation-layer settings: --requestheader-client-ca-file, --proxy-client-cert-file, --proxy-client-key-file, --requestheader-allowed-names, --requestheader-extra-headers-prefix, --requestheader-group-headers, --requestheader-username-headers, --enable-aggregator-routing
(2) Copy the certificates generated earlier
Copy the kube-apiserver certificates generated earlier to the paths referenced in the config file:
```bash
[root@k8s-master1 ~]# cp /hqtbj/hqtwww/TLS/k8s/ca*pem /hqtbj/hqtwww/TLS/k8s/kube-apiserver*pem /hqtbj/hqtwww/kubernetes/ssl/
```
(3) Enable TLS Bootstrapping
TLS Bootstrapping: once the apiserver has TLS authentication enabled, every node's kubelet and kube-proxy must present a valid CA-signed certificate to communicate with it. Issuing client certificates by hand does not scale to many nodes and complicates cluster growth, so Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet connects as a low-privilege user and requests its certificate, which the apiserver signs dynamically. This approach is strongly recommended for kubelets; kube-proxy still uses a certificate that we issue ourselves.
TLS bootstrapping workflow:
Create the token file referenced in the config above (--token-auth-file=/hqtbj/hqtwww/kubernetes/cfg/token.csv)
```bash
[root@k8s-master1 ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
6c02be6b3c0875bc5783ecafcb6e10bc
[root@k8s-master1 ~]# vim /hqtbj/hqtwww/kubernetes/cfg/token.csv
6c02be6b3c0875bc5783ecafcb6e10bc,kubelet-bootstrap,10001,"system:node-bootstrapper"
```
Format: token,username,UID,"group"
The token can also be regenerated with the command above and substituted in.
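The token line can also be assembled in one step so the random value never needs to be copy-pasted. A sketch writing to /tmp (on the real host the target is /hqtbj/hqtwww/kubernetes/cfg/token.csv):

```shell
# Generate a random 32-hex-char bootstrap token and write token.csv in one go.
# od prints 16 random bytes as hex words; tr strips the separators.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')

echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /tmp/token.csv
cat /tmp/token.csv
```

The resulting line has the token,username,UID,"group" shape required by --token-auth-file.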
(4) Manage kube-apiserver with systemd
```bash
[root@k8s-master1 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/hqtbj/hqtwww/kubernetes/cfg/kube-apiserver.conf
ExecStart=/hqtbj/hqtwww/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
```
(5) Start and enable at boot
```bash
[root@k8s-master1 ~]# systemctl daemon-reload
[root@k8s-master1 ~]# systemctl start kube-apiserver
[root@k8s-master1 ~]# systemctl enable kube-apiserver
```
#### 1.6 Deploy kube-controller-manager
##### (1) Create the config file
```bash
[root@k8s-master1 ~]# vim /hqtbj/hqtwww/kubernetes/cfg/kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/hqtbj/hqtwww/kubernetes/logs \
--leader-elect=true \
--kubeconfig=/hqtbj/hqtwww/kubernetes/cfg/kube-controller-manager.kubeconfig \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/hqtbj/hqtwww/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/hqtbj/hqtwww/kubernetes/ssl/ca-key.pem \
--root-ca-file=/hqtbj/hqtwww/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/hqtbj/hqtwww/kubernetes/ssl/ca-key.pem \
--cluster-signing-duration=87600h0m0s"
```
***--cluster-cidr*** : the Pod network CIDR used inside the cluster
***--kubeconfig*** : the kubeconfig used to connect to the apiserver
***--leader-elect*** : enable leader election when multiple replicas of this component run (HA)
***--cluster-signing-cert-file*** : the root CA certificate used to sign issued certificates
***--cluster-signing-key-file*** : the root CA private key used to sign issued certificates
##### (2) Use the self-signed CA to issue the kube-controller-manager HTTPS certificate
Create the CSR request file for the controller-manager certificate
```bash
[root@k8s-master1 ~]# cd /hqtbj/hqtwww/TLS/k8s/
[root@k8s-master1 k8s]# vim kube-controller-manager-csr.json
{
"CN": "system:kube-controller-manager",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
```
Generate the kube-controller-manager certificate
```bash
[root@k8s-master1 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
[root@k8s-master1 k8s]# ll
total 52
-rw-r--r-- 1 root root 294 Sep 4 13:44 ca-config.json
-rw-r--r-- 1 root root 1001 Sep 4 13:51 ca.csr
-rw-r--r-- 1 root root 264 Sep 4 13:50 ca-csr.json
-rw------- 1 root root 1679 Sep 4 13:51 ca-key.pem
-rw-r--r-- 1 root root 1359 Sep 4 13:51 ca.pem
-rw-r--r-- 1 root root 1342 Sep 4 14:03 kube-apiserver.csr
-rw-r--r-- 1 root root 761 Sep 4 14:03 kube-apiserver-csr.json
-rw------- 1 root root 1675 Sep 4 14:03 kube-apiserver-key.pem
-rw-r--r-- 1 root root 1708 Sep 4 14:03 kube-apiserver.pem
-rw-r--r-- 1 root root 1045 Sep 4 15:02 kube-controller-manager.csr
-rw-r--r-- 1 root root 255 Sep 4 15:01 kube-controller-manager-csr.json
-rw------- 1 root root 1675 Sep 4 15:02 kube-controller-manager-key.pem
-rw-r--r-- 1 root root 1436 Sep 4 15:02 kube-controller-manager.pem
```
This produces the kube-controller-manager.pem certificate and the kube-controller-manager-key.pem private key.
##### (3) Generate the kubeconfig for kube-controller-manager (shell commands; run them directly in the terminal)
Set variables for the kubeconfig path and the kube-apiserver address so they can be reused below
```bash
[root@k8s-master1 k8s]# KUBE_CONFIG="/hqtbj/hqtwww/kubernetes/cfg/kube-controller-manager.kubeconfig"
[root@k8s-master1 k8s]# KUBE_APISERVER="https://172.32.0.11:6443"
```
Set the cluster entry
```bash
[root@k8s-master1 k8s]# kubectl config set-cluster kubernetes \
> --certificate-authority=/hqtbj/hqtwww/kubernetes/ssl/ca.pem \
> --embed-certs=true \
> --server=${KUBE_APISERVER} \
> --kubeconfig=${KUBE_CONFIG}
Cluster "kubernetes" set.
```
***--certificate-authority*** : the CA certificate used to verify the kube-apiserver's server certificate

Set the kube-controller-manager user entry
```bash
[root@k8s-master1 k8s]# kubectl config set-credentials kube-controller-manager \
> --client-certificate=/hqtbj/hqtwww/TLS/k8s/kube-controller-manager.pem \
> --client-key=/hqtbj/hqtwww/TLS/k8s/kube-controller-manager-key.pem \
> --embed-certs=true \
> --kubeconfig=${KUBE_CONFIG}
User "kube-controller-manager" set.
```
***--client-certificate*** : the client certificate used to access the kube-apiserver
***--client-key*** : the private key for the client certificate

Set the context in the controller-manager kubeconfig
```bash
[root@k8s-master1 k8s]# kubectl config set-context default \
> --cluster=kubernetes \
> --user=kube-controller-manager \
> --kubeconfig=${KUBE_CONFIG}
Context "default" created.
[root@k8s-master1 k8s]# kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
Switched to context "default".
```

The kubeconfig that kube-controller-manager uses to connect to the apiserver is now ready. Note that kube-controller-manager acts as a client of the apiserver service.
########## To be added ##########
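The same four kubectl config steps recur for kube-controller-manager, kube-scheduler, kube-proxy, and the admin kubeconfig; only the user name and the certificate pair change. A hedged sketch wrapping them in one function (`make_kubeconfig` is a hypothetical helper name, not part of kubectl); setting `KUBECTL=echo` dry-runs it so the generated commands can be inspected before running with the real kubectl:

```shell
# Generate a component kubeconfig: cluster entry, user entry, context, use-context.
# Point KUBECTL at `echo` for a dry run, or leave it as kubectl for real use.
KUBECTL="${KUBECTL:-kubectl}"

make_kubeconfig() {
  local user="$1" cert="$2" key="$3" out="$4"
  local apiserver="https://172.32.0.11:6443"
  local ca="/hqtbj/hqtwww/kubernetes/ssl/ca.pem"

  $KUBECTL config set-cluster kubernetes \
    --certificate-authority="$ca" --embed-certs=true \
    --server="$apiserver" --kubeconfig="$out"
  $KUBECTL config set-credentials "$user" \
    --client-certificate="$cert" --client-key="$key" \
    --embed-certs=true --kubeconfig="$out"
  $KUBECTL config set-context default \
    --cluster=kubernetes --user="$user" --kubeconfig="$out"
  $KUBECTL config use-context default --kubeconfig="$out"
}

# Dry run: print the four commands instead of executing kubectl.
KUBECTL=echo
make_kubeconfig kube-controller-manager \
  /hqtbj/hqtwww/TLS/k8s/kube-controller-manager.pem \
  /hqtbj/hqtwww/TLS/k8s/kube-controller-manager-key.pem \
  /tmp/kube-controller-manager.kubeconfig
```

With the dry run verified, unset KUBECTL (or leave it defaulting to kubectl) and call the function once per component.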
##### (4) Deploy and start kube-controller-manager
```bash
[root@k8s-master1 k8s]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/hqtbj/hqtwww/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/hqtbj/hqtwww/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
[root@k8s-master1 k8s]# systemctl daemon-reload
[root@k8s-master1 k8s]# systemctl start kube-controller-manager
[root@k8s-master1 k8s]# systemctl enable kube-controller-manager
```
#### 1.7 Deploy kube-scheduler
##### (1) Create the config file
```bash
[root@k8s-master1 ~]# vim /hqtbj/hqtwww/kubernetes/cfg/kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/hqtbj/hqtwww/kubernetes/logs \
--leader-elect \
--kubeconfig=/hqtbj/hqtwww/kubernetes/cfg/kube-scheduler.kubeconfig \
--bind-address=127.0.0.1"
```
***--kubeconfig*** : the kubeconfig used to connect to the apiserver
***--leader-elect*** : enable leader election when multiple replicas of this component run (HA)
##### (2) Use the self-signed CA to issue the kube-scheduler HTTPS certificate
Create the CSR request file for the scheduler certificate
```bash
[root@k8s-master1 ~]# cd /hqtbj/hqtwww/TLS/k8s/
[root@k8s-master1 k8s]# vim kube-scheduler-csr.json
{
"CN": "system:kube-scheduler",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
```
Generate the certificate
```bash
[root@k8s-master1 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
[root@k8s-master1 k8s]# ll
total 68
-rw-r--r-- 1 root root 294 Sep 4 13:44 ca-config.json
-rw-r--r-- 1 root root 1001 Sep 4 13:51 ca.csr
-rw-r--r-- 1 root root 264 Sep 4 13:50 ca-csr.json
-rw------- 1 root root 1679 Sep 4 13:51 ca-key.pem
-rw-r--r-- 1 root root 1359 Sep 4 13:51 ca.pem
-rw-r--r-- 1 root root 1342 Sep 4 14:03 kube-apiserver.csr
-rw-r--r-- 1 root root 761 Sep 4 14:03 kube-apiserver-csr.json
-rw------- 1 root root 1675 Sep 4 14:03 kube-apiserver-key.pem
-rw-r--r-- 1 root root 1708 Sep 4 14:03 kube-apiserver.pem
-rw-r--r-- 1 root root 1045 Sep 4 15:02 kube-controller-manager.csr
-rw-r--r-- 1 root root 255 Sep 4 15:01 kube-controller-manager-csr.json
-rw------- 1 root root 1675 Sep 4 15:02 kube-controller-manager-key.pem
-rw-r--r-- 1 root root 1436 Sep 4 15:02 kube-controller-manager.pem
-rw-r--r-- 1 root root 1029 Sep 4 15:24 kube-scheduler.csr
-rw-r--r-- 1 root root 245 Sep 4 15:22 kube-scheduler-csr.json
-rw------- 1 root root 1679 Sep 4 15:24 kube-scheduler-key.pem
-rw-r--r-- 1 root root 1424 Sep 4 15:24 kube-scheduler.pem
```
##### (3) Generate the kubeconfig for kube-scheduler (shell commands; run them directly in the terminal)
```bash
[root@k8s-master1 k8s]# KUBE_CONFIG="/hqtbj/hqtwww/kubernetes/cfg/kube-scheduler.kubeconfig"
[root@k8s-master1 k8s]# KUBE_APISERVER="https://172.32.0.11:6443"
[root@k8s-master1 k8s]# kubectl config set-cluster kubernetes \
> --certificate-authority=/hqtbj/hqtwww/kubernetes/ssl/ca.pem \
> --embed-certs=true \
> --server=${KUBE_APISERVER} \
> --kubeconfig=${KUBE_CONFIG}
Cluster "kubernetes" set.
[root@k8s-master1 k8s]# kubectl config set-credentials kube-scheduler \
> --client-certificate=/hqtbj/hqtwww/TLS/k8s/kube-scheduler.pem \
> --client-key=/hqtbj/hqtwww/TLS/k8s/kube-scheduler-key.pem \
> --embed-certs=true \
> --kubeconfig=${KUBE_CONFIG}
User "kube-scheduler" set.
[root@k8s-master1 k8s]# kubectl config set-context default \
> --cluster=kubernetes \
> --user=kube-scheduler \
> --kubeconfig=${KUBE_CONFIG}
Context "default" created.
[root@k8s-master1 k8s]# kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
Switched to context "default".
```
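The four `kubectl config` commands above simply assemble a YAML file, and seeing its shape makes clear what each flag contributes. Below is a hand-written skeleton of the result; `CERT_DATA` and `KEY_DATA` are placeholders for the base64 blobs that `--embed-certs=true` inlines, so this is an illustration of the file layout, not a replacement for the commands.

```bash
# Write a skeletal kubeconfig equivalent to what set-cluster /
# set-credentials / set-context / use-context produce together.
KUBE_CONFIG=$(mktemp)
cat > "$KUBE_CONFIG" <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    certificate-authority-data: CERT_DATA  # from --certificate-authority + --embed-certs
    server: https://172.32.0.11:6443       # from --server
users:
- name: kube-scheduler                     # from set-credentials
  user:
    client-certificate-data: CERT_DATA
    client-key-data: KEY_DATA
contexts:
- name: default                            # from set-context
  context:
    cluster: kubernetes
    user: kube-scheduler
current-context: default                   # from use-context
EOF
grep -q 'current-context: default' "$KUBE_CONFIG" && echo "kubeconfig skeleton ok"
```

Every kubeconfig built in this guide (kube-controller-manager, kubectl, kubelet bootstrap, kube-proxy) has this same three-part structure: a cluster, a user, and a context tying them together.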
##### (4) Deploy kube-scheduler and start it
```bash
[root@k8s-master1 k8s]# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/hqtbj/hqtwww/kubernetes/cfg/kube-scheduler.conf
ExecStart=/hqtbj/hqtwww/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
[root@k8s-master1 k8s]# systemctl daemon-reload
[root@k8s-master1 k8s]# systemctl start kube-scheduler
[root@k8s-master1 k8s]# systemctl enable kube-scheduler
```
#### 1.8. Check the cluster status
##### (1) Generate the certificate kubectl uses to connect to the cluster (a kubeadm install generates this by default at /root/.kube/config)
Create the CSR request file for the kubectl certificate
```bash
[root@k8s-master1 k8s]# vim kubectl-admin-cluster-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
```
Generate the certificate
```bash
[root@k8s-master1 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubectl-admin-cluster-csr.json | cfssljson -bare kubectl-admin-cluster
[root@k8s-master1 k8s]# ll
total 84
-rw-r--r-- 1 root root 294 Sep 4 13:44 ca-config.json
-rw-r--r-- 1 root root 1001 Sep 4 13:51 ca.csr
-rw-r--r-- 1 root root 264 Sep 4 13:50 ca-csr.json
-rw------- 1 root root 1679 Sep 4 13:51 ca-key.pem
-rw-r--r-- 1 root root 1359 Sep 4 13:51 ca.pem
-rw-r--r-- 1 root root 1342 Sep 4 14:03 kube-apiserver.csr
-rw-r--r-- 1 root root 761 Sep 4 14:03 kube-apiserver-csr.json
-rw------- 1 root root 1675 Sep 4 14:03 kube-apiserver-key.pem
-rw-r--r-- 1 root root 1708 Sep 4 14:03 kube-apiserver.pem
-rw-r--r-- 1 root root 1045 Sep 4 15:02 kube-controller-manager.csr
-rw-r--r-- 1 root root 255 Sep 4 15:01 kube-controller-manager-csr.json
-rw------- 1 root root 1675 Sep 4 15:02 kube-controller-manager-key.pem
-rw-r--r-- 1 root root 1436 Sep 4 15:02 kube-controller-manager.pem
-rw-r--r-- 1 root root 1009 Sep 4 15:39 kubectl-admin-cluster.csr
-rw-r--r-- 1 root root 229 Sep 4 15:38 kubectl-admin-cluster-csr.json
-rw------- 1 root root 1675 Sep 4 15:39 kubectl-admin-cluster-key.pem
-rw-r--r-- 1 root root 1399 Sep 4 15:39 kubectl-admin-cluster.pem
-rw-r--r-- 1 root root 1029 Sep 4 15:24 kube-scheduler.csr
-rw-r--r-- 1 root root 245 Sep 4 15:22 kube-scheduler-csr.json
-rw------- 1 root root 1679 Sep 4 15:24 kube-scheduler-key.pem
-rw-r--r-- 1 root root 1424 Sep 4 15:24 kube-scheduler.pem
```
##### (2) Generate the kubeconfig file for kubectl (the following are shell commands; run them directly in a terminal)
```bash
[root@k8s-master1 k8s]# mkdir /root/.kube
[root@k8s-master1 k8s]# KUBE_CONFIG="/root/.kube/config"
[root@k8s-master1 k8s]# KUBE_APISERVER="https://172.32.0.11:6443"
[root@k8s-master1 k8s]# kubectl config set-cluster kubernetes \
> --certificate-authority=/hqtbj/hqtwww/kubernetes/ssl/ca.pem \
> --embed-certs=true \
> --server=${KUBE_APISERVER} \
> --kubeconfig=${KUBE_CONFIG}
Cluster "kubernetes" set.
[root@k8s-master1 k8s]# kubectl config set-credentials cluster-admin \
> --client-certificate=/hqtbj/hqtwww/TLS/k8s/kubectl-admin-cluster.pem \
> --client-key=/hqtbj/hqtwww/TLS/k8s/kubectl-admin-cluster-key.pem \
> --embed-certs=true \
> --kubeconfig=${KUBE_CONFIG}
User "cluster-admin" set.
[root@k8s-master1 k8s]# kubectl config set-context default \
> --cluster=kubernetes \
> --user=cluster-admin \
> --kubeconfig=${KUBE_CONFIG}
Context "default" created.
[root@k8s-master1 k8s]# kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
Switched to context "default".
```
That completes the configuration kubectl needs to reach kube-apiserver. Next, run kubectl to check the status of the cluster components:
```bash
[root@k8s-master1 k8s]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
```
Output like the above means the master components are running normally.
#### 1.9. Authorize the kubelet-bootstrap user to request certificates
Allow the kubelet-bootstrap user to create the initial CSR (the request a kubelet makes when joining the cluster)
```bash
[root@k8s-master1 k8s]# kubectl create clusterrolebinding kubelet-bootstrap \
> --clusterrole=system:node-bootstrapper \
> --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
[root@k8s-master1 k8s]# kubectl get clusterrolebinding
NAME ROLE AGE
cluster-admin ClusterRole/cluster-admin 54m
kubelet-bootstrap ClusterRole/system:node-bootstrapper 8s
......
```
On first startup, some users may see kubelet report a 401 Unauthorized error when talking to the apiserver. This happens because, by default, kubelet identifies itself using the preset user token in bootstrap.kubeconfig and then creates a CSR; but until we intervene, that preset user has no permissions at all, including the permission to create CSRs. The command above therefore creates a ClusterRoleBinding that binds the preset user kubelet-bootstrap to the built-in ClusterRole system:node-bootstrapper, allowing it to submit CSRs.
## 5. Deploy the worker nodes
The steps below are still performed on the master node, which also serves as a worker node.
### 1. Create the working directory and copy the binaries
Create the working directory on every worker node.
Copy the kubelet and kube-proxy binaries from the Kubernetes binary package downloaded from GitHub to the worker nodes.
```bash
[root@k8s-master1 ~]# cd /root/kubernetes/server/bin
[root@k8s-master1 bin]# ll
total 968412
-rwxr-xr-x. 1 root root 46690304 Sep 16 2021 apiextensions-apiserver
-rwxr-xr-x. 1 root root 39219200 Sep 16 2021 kubeadm
-rwxr-xr-x. 1 root root 44687360 Sep 16 2021 kube-aggregator
-rwxr-xr-x. 1 root root 118272000 Sep 16 2021 kube-apiserver
-rw-r--r--. 1 root root 9 Sep 16 2021 kube-apiserver.docker_tag
-rw-------. 1 root root 123088384 Sep 16 2021 kube-apiserver.tar
-rwxr-xr-x. 1 root root 112799744 Sep 16 2021 kube-controller-manager
-rw-r--r--. 1 root root 9 Sep 16 2021 kube-controller-manager.docker_tag
-rw-------. 1 root root 117616128 Sep 16 2021 kube-controller-manager.tar
-rwxr-xr-x. 1 root root 40226816 Sep 16 2021 kubectl
-rwxr-xr-x. 1 root root 114150568 Sep 16 2021 kubelet
-rwxr-xr-x. 1 root root 39489536 Sep 16 2021 kube-proxy
-rw-r--r--. 1 root root 9 Sep 16 2021 kube-proxy.docker_tag
-rw-------. 1 root root 101492224 Sep 16 2021 kube-proxy.tar
-rwxr-xr-x. 1 root root 43724800 Sep 16 2021 kube-scheduler
-rw-r--r--. 1 root root 9 Sep 16 2021 kube-scheduler.docker_tag
-rw-------. 1 root root 48541184 Sep 16 2021 kube-scheduler.tar
-rwxr-xr-x. 1 root root 1634304 Sep 16 2021 mounter
[root@k8s-master1 bin]# cp kubelet kube-proxy /hqtbj/hqtwww/kubernetes/bin/
```
#### 1.1. Deploy kubelet
##### (1) Create the configuration file
```bash
[root@k8s-master1 ~]# cd /hqtbj/hqtwww/kubernetes/cfg
[root@k8s-master1 cfg]# vim kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/hqtbj/hqtwww/kubernetes/logs \
--hostname-override=k8s-master1 \
--network-plugin=cni \
--kubeconfig=/hqtbj/hqtwww/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/hqtbj/hqtwww/kubernetes/cfg/bootstrap.kubeconfig \
--config=/hqtbj/hqtwww/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/hqtbj/hqtwww/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.0"
```
***--hostname-override***: the node's display name; must be unique within the cluster
***--network-plugin***: enable CNI
***--kubeconfig***: an empty path; the file is generated automatically and later used to connect to the apiserver
***--bootstrap-kubeconfig***: the file used on first startup to request a certificate from the apiserver
***--config***: the configuration parameters file
***--cert-dir***: the directory where kubelet certificates are generated
***--pod-infra-container-image***: the image for the pod network (infra/pause) container
##### (2) Create the configuration parameters file (kubelet-config.yml)
```bash
[root@k8s-master1 cfg]# vim kubelet-config.yml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /hqtbj/hqtwww/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
```
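By convention, the cluster DNS address is the base of the service CIDR plus 2, which is why 10.0.0.2 appears here and must later be assigned to the CoreDNS Service as its ClusterIP. A tiny sketch of that relationship, assuming a /24 service range of 10.0.0.0/24 (the value this guide appears to hand kube-apiserver; the string handling below only works for /24 and is purely illustrative):

```bash
# Derive the conventional cluster DNS address (network base + 2) from a
# /24 service CIDR. Pure string handling; illustrative only.
SERVICE_CIDR="10.0.0.0/24"
base=${SERVICE_CIDR%/*}   # strip the prefix length -> 10.0.0.0
net=${base%.*}            # strip the last octet    -> 10.0.0
CLUSTER_DNS="${net}.2"
echo "$CLUSTER_DNS"       # 10.0.0.2
```

Whatever value you pick, clusterDNS here, the CoreDNS Service IP, and the kube-apiserver service CIDR must stay consistent, or in-cluster DNS resolution breaks.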
***clusterDNS***: the in-cluster DNS address (needed later when installing CoreDNS);
***clusterDomain***: the cluster's local DNS domain, also used later when deploying CoreDNS;
##### (3) Generate the kubeconfig file kubelet uses on first join to request a certificate from the apiserver
As I understand it, this can also be regarded as kubelet's own kubeconfig file (kubelet.kubeconfig).
```bash
[root@k8s-master1 cfg]# KUBE_CONFIG="/hqtbj/hqtwww/kubernetes/cfg/bootstrap.kubeconfig"
[root@k8s-master1 cfg]# KUBE_APISERVER="https://172.32.0.11:6443" # address of the apiserver
[root@k8s-master1 cfg]# TOKEN="6c02be6b3c0875bc5783ecafcb6e10bc" # must match the token in token.csv
[root@k8s-master1 cfg]# kubectl config set-cluster kubernetes \
> --certificate-authority=/hqtbj/hqtwww/kubernetes/ssl/ca.pem \
> --embed-certs=true \
> --server=${KUBE_APISERVER} \
> --kubeconfig=${KUBE_CONFIG}
Cluster "kubernetes" set.
[root@k8s-master1 cfg]# kubectl config set-credentials "kubelet-bootstrap" \
> --token=${TOKEN} \
> --kubeconfig=${KUBE_CONFIG}
User "kubelet-bootstrap" set.
[root@k8s-master1 cfg]# kubectl config set-context default \
> --cluster=kubernetes \
> --user="kubelet-bootstrap" \
> --kubeconfig=${KUBE_CONFIG}
Context "default" created.
[root@k8s-master1 cfg]# kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
Switched to context "default".
```
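The TOKEN above must match the first field of the token.csv created during the kube-apiserver setup. If you ever need to regenerate it, a token of the expected shape is simply 16 random bytes rendered as 32 hex characters; the user/uid/group columns below follow the token.csv layout this guide uses earlier:

```bash
# Generate a bootstrap token: 16 random bytes as 32 lowercase hex chars.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
echo "$TOKEN"

# token.csv line format: token,user,uid,"group"
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\""
```

If you rotate the token, update token.csv, restart kube-apiserver, and regenerate bootstrap.kubeconfig with the new value.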
##### (4) Manage kubelet with systemd and start it
```bash
[root@k8s-master1 ~]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/hqtbj/hqtwww/kubernetes/cfg/kubelet.conf
ExecStart=/hqtbj/hqtwww/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
[root@k8s-master1 ~]# systemctl daemon-reload
[root@k8s-master1 ~]# systemctl start kubelet
[root@k8s-master1 ~]# systemctl enable kubelet
```
#### 1.2. On the master node, approve the kubelet certificate request and join it to the cluster
```bash
# View pending kubelet certificate requests
[root@k8s-master1 ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-gLVg4Mw43pDAk6aEP8MSHzQKVgx_aitjPr9o-InDFxw 8m13s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
# Approve the request
[root@k8s-master1 ~]# kubectl certificate approve node-csr-gLVg4Mw43pDAk6aEP8MSHzQKVgx_aitjPr9o-InDFxw
certificatesigningrequest.certificates.k8s.io/node-csr-gLVg4Mw43pDAk6aEP8MSHzQKVgx_aitjPr9o-InDFxw approved
[root@k8s-master1 ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-gLVg4Mw43pDAk6aEP8MSHzQKVgx_aitjPr9o-InDFxw 17m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued
# View the nodes
[root@k8s-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady