Zero to JupyterHub with Kubernetes, Part 1: Offline Binary Deployment of Kubernetes

Preface: these are purely personal deployment notes.

Offline packages and images needed for the k8s binary deployment:

Link: https://pan.baidu.com/s/1z8quvOEoLgH0x7jkZWfVEw

Extraction code: 1234

References:

https://www.yuque.com/fairy-era/yg511q/xyqxge

https://blog.csdn.net/2301_77428746/article/details/140032125

Table of Contents

    • 1. Cluster Architecture
    • 2. cfssl Certificate Generation Tool
    • 3. Etcd Cluster Deployment
      • 3.1 Issue the Etcd server SSL certificate with the self-signed CA
      • 3.2 Deploy the Etcd cluster
    • 4. Install Docker
    • 5. Master Node Deployment
      • 5.1 Issue the kube-apiserver HTTPS certificate with the self-signed CA
      • 5.2 Deploy kube-apiserver
        • 5.2.1 Service startup errors
      • 5.3 Deploy kube-controller-manager
      • 5.4 Deploy kube-scheduler
      • 5.5 Check cluster status
    • 6. Node Deployment
      • 6.1 Deploy kubelet
      • 6.2 Deploy kube-proxy
    • 7. Deploy the calico network plugin
    • 8. Authorize apiserver to access kubelet
    • 9. Join node1 and node2 as worker nodes
    • 10. Deploy CoreDNS and Dashboard
      • 10.1 Deploy CoreDNS
      • 10.2 Deploy Dashboard

1. Cluster Architecture

| Host | Role | Components | OS |
| --- | --- | --- | --- |
| 10.34.X.10 | k8s-Master | kube-apiserver, kube-controller-manager, kube-scheduler, docker, calico, etcd | CentOS 7.9 |
| 10.34.X.11 | k8s-Node1 | kubelet, kube-proxy, docker, calico, etcd | CentOS 7.9 |
| 10.34.X.12 | k8s-Node2 | kubelet, kube-proxy, docker, calico, etcd | CentOS 7.9 |

| Software | Version |
| --- | --- |
| Docker | 19.03.9 |
| Kubernetes | v1.20.4 |
| calico | v3.15.1 |
| etcd | v3.4.9 |

Environment Preparation

# 1. Configure passwordless SSH login among the three machines
> ssh-keygen -t rsa -b 4096
> ssh-copy-id username@hostname

# 2. Hostname mapping
> cat /etc/hosts
  10.34.X.10      k8s-Master
  10.34.X.11      k8s-Node1
  10.34.X.12      k8s-Node2  
  
# 3. Firewall status (not enabled)
> systemctl status firewalld   # dead

# 4. SELinux status
> getenforce    # Disabled

# 5. Disable the swap partition
> swapoff -a        # turn off swap temporarily
> vim /etc/fstab    # comment out the swap entry
> free -h              total        used        free      shared  buff/cache   available
Mem:           251G         78G        2.4G        794M        169G        170G
Swap:            0B          0B          0B

# 6. Pass bridged IPv4 traffic to iptables chains
> vim /etc/sysctl.d/k8s.conf 
'''
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
'''
> sysctl --system  # apply the settings
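
A quick sanity check that the kernel parameters actually took effect (not part of the original procedure, just a verification step):

> sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
  net.bridge.bridge-nf-call-iptables = 1
  net.bridge.bridge-nf-call-ip6tables = 1
# if the keys do not exist yet, load the bridge netfilter module first: modprobe br_netfilter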

2. cfssl Certificate Generation Tool

## cfssl tools
[root@k8s-master /data/kubernetes/cfssl]$ tar -xzf cfssl.tar.gz
[root@k8s-master /data/kubernetes/cfssl]$ mv cfssl /usr/local/bin/cfssl               # used to issue certificates
[root@k8s-master /data/kubernetes/cfssl]$ mv cfssljson /usr/local/bin/cfssljson       # turns the JSON emitted by cfssl into certificate files (pem)
[root@k8s-master /data/kubernetes/cfssl]$ mv cfssl-certinfo /usr/bin/cfssl-certinfo   # verify or inspect certificates

## Generate the Etcd certificates
# create the working directory
[root@k8s-master ~]$ mkdir -p ca/etcd 
[root@k8s-master ~]$ cd ca/etcd
# Self-signed CA signing config: defines the CA's signing profiles and policy; it typically contains certificate expiry, usages and other signing settings
[root@k8s-master ~/ca/etcd]$ vim ca-config.json
{
	"signing": {
		"default": {                // 默认签名配置
			"expiry": "87600h"      // 所有签发证书的默认有效期10年
		},
		"profiles": {                    // 定义不同类型证书的详细签名配置
			"www": {
				"expiry": "87600h",
				"usages": [              // 定义证书的用途
					"signing",
					"key encipherment",
					"server auth",
					"client auth"
				]
			}
		}
	}
}
# CSR (certificate signing request) file for the self-signed CA root certificate
[root@k8s-master ~/ca/etcd]$ vim ca-csr.json
{
	"CN": "etcd CA",
	"key": {
		"algo": "rsa",
		"size": 2048
	},
	"names": [{
		"C": "CN",
		"L": "Beijing",
		"ST": "Beijing"
	}]
}

# generate the CA certificate
[root@k8s-master ~/ca/etcd]$ cfssl gencert -initca ca-csr.json  | cfssljson -bare ca -
ca.csr   ca-key.pem   ca.pem
# ca.csr: certificate signing request; ca.pem / ca-key.pem: the CA root certificate and its private key
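
Optionally, the freshly generated CA certificate can be inspected to confirm its subject and validity period (an extra check, not required by the procedure):

[root@k8s-master ~/ca/etcd]$ cfssl-certinfo -cert ca.pem | grep -E '"common_name"|"not_after"'
# or the same with openssl:
[root@k8s-master ~/ca/etcd]$ openssl x509 -in ca.pem -noout -subject -dates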

3. Etcd Cluster Deployment

3.1 Issue the Etcd Server SSL Certificate with the Self-Signed CA

## Issue the Etcd HTTPS certificate with the self-signed CA
# create the etcd server certificate request file
[root@k8s-master ~/ca/etcd]$ vim server-csr.json
{
    "CN": "etcd",
    "hosts": [             // 列出了该证书应该支持的所有主机名或域名
    "10.34.x.10",
    "10.34.x.11",
    "10.34.x.12"
    ],
    "key": {               // 指定秘钥算法及秘钥长度
        "algo": "rsa",
        "size": 2048
    },
    "names": [             // 该服务机构信息
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}

# CN (common name): name of the requesting entity
# hosts: the set of hostnames/IPs for which the certificate is valid
# key: key algorithm settings
# names: country, province/city and other organization info

# generate the Etcd server certificate
[root@k8s-master ~/ca/etcd]$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
server.csr server-key.pem  server.pem
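
It is worth confirming that all three node IPs made it into the certificate's Subject Alternative Name list, since etcd rejects TLS connections to addresses that are not listed (optional check):

[root@k8s-master ~/ca/etcd]$ openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'
# expected to show: IP Address:10.34.x.10, IP Address:10.34.x.11, IP Address:10.34.x.12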

3.2 Deploy the Etcd Cluster

# extract
[root@k8s-master /data/s0/kubernetes/etcd]$ tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
# create the etcd configuration file
[root@k8s-master /data/s0/kubernetes/etcd]$ vim etcd.conf
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.34.x.10:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.34.x.10:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.34.x.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.34.x.10:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://10.34.x.10:2380,etcd-2=https://10.34.x.11:2380,etcd-3=https://10.34.x.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

* Parameter notes
•	ETCD_NAME: node name, unique within the cluster
•	ETCD_DATA_DIR: data directory
•	ETCD_LISTEN_PEER_URLS: listen address for peer (cluster) traffic
•	ETCD_LISTEN_CLIENT_URLS: listen address for client traffic
•	ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
•	ETCD_ADVERTISE_CLIENT_URLS: client address advertised to the cluster
•	ETCD_INITIAL_CLUSTER: addresses of all cluster members
•	ETCD_INITIAL_CLUSTER_TOKEN: cluster token
•	ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" for a new cluster, "existing" when joining an existing one

# configure the systemd service
[root@k8s-master /data/s0/kubernetes/etcd]$ vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/data/s0/kubernetes/etcd/etcd.conf
ExecStart=/data/s0/kubernetes/etcd/etcd-v3.4.9-linux-amd64/etcd \
--cert-file=/root/ca/etcd/server.pem \
--key-file=/root/ca/etcd/server-key.pem \
--peer-cert-file=/root/ca/etcd/server.pem \
--peer-key-file=/root/ca/etcd/server-key.pem \
--trusted-ca-file=/root/ca/etcd/ca.pem \
--peer-trusted-ca-file=/root/ca/etcd/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target


# copy the configuration from k8s-master to k8s-node1 and k8s-node2
[root@k8s-master ~]$ scp -r ~/ca 10.34.x.11:~/
[root@k8s-master ~]$ scp -r ~/ca 10.34.x.12:~/

[root@k8s-master ~]$ scp -r /data/s0/kubernetes/etcd 10.34.x.11:/data/s0/kubernetes
[root@k8s-master ~]$ scp -r /data/s0/kubernetes/etcd 10.34.x.12:/data/s0/kubernetes

[root@k8s-master ~]$ scp /usr/lib/systemd/system/etcd.service  10.34.x.11:/usr/lib/systemd/system
[root@k8s-master ~]$ scp /usr/lib/systemd/system/etcd.service  10.34.x.12:/usr/lib/systemd/system

# adjust the configuration on node1 and node2
[root@k8s-node1 ~]$ vim /data/s0/kubernetes/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd-2" # 名称各节点不一样,注意
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.34.x.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.34.x.11:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.34.x.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.34.x.11:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://10.34.x.10:2380,etcd-2=https://10.34.x.11:2380,etcd-3=https://10.34.x.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@k8s-node2 ~]$ vim /data/s0/kubernetes/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd-3"   # 名称各节点不一样,注意
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.34.x.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.34.x.12:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.34.x.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.34.x.12:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://10.34.x.10:2380,etcd-2=https://10.34.x.11:2380,etcd-3=https://10.34.x.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# start the etcd service
[root@k8s-master ~]$ systemctl start etcd
[root@k8s-node1 ~]$ systemctl start etcd
[root@k8s-node2 ~]$ systemctl start etcd

# check cluster health
[root@k8s-master /data/s0/kubernetes/etcd/etcd-v3.4.9-linux-amd64]$ ./etcdctl --cacert=/root/ca/etcd/ca.pem --cert=/root/ca/etcd/server.pem --key=/root/ca/etcd/server-key.pem --endpoints="https://10.34.x.10:2379,https://10.34.x.11:2379,https://10.34.x.12:2379" endpoint health --write-out=table

+--------------------------+--------+-------------+-------+
|         ENDPOINT         | HEALTH |    TOOK     | ERROR |
+--------------------------+--------+-------------+-------+
| https://10.34.x.10:2379 |   true | 28.399299ms |       |
| https://10.34.x.11:2379 |   true | 28.433169ms |       |
| https://10.34.x.12:2379 |   true | 28.925481ms |       |
+--------------------------+--------+-------------+-------+
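
Besides endpoint health, member list and endpoint status (same TLS flags) are useful for confirming that all three members joined and which one is the leader; a minimal example:

[root@k8s-master /data/s0/kubernetes/etcd/etcd-v3.4.9-linux-amd64]$ ./etcdctl --cacert=/root/ca/etcd/ca.pem --cert=/root/ca/etcd/server.pem --key=/root/ca/etcd/server-key.pem --endpoints="https://10.34.x.10:2379,https://10.34.x.11:2379,https://10.34.x.12:2379" member list --write-out=table
# "endpoint status --write-out=table" additionally shows the raft term and the current leader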

 

4. Install Docker

# extract and install
[root@k8s-master /data/s0/kubernetes/docker]$ tar zxvf docker-19.03.9.tgz
[root@k8s-master /data/s0/kubernetes/docker]$ cp docker/* /usr/bin
# configuration
[root@k8s-master /data/s0/kubernetes/docker]$ mkdir /etc/docker
[root@k8s-master /data/s0/kubernetes/docker]$ vim /etc/docker/daemon.json

{
    "data-root": "/data/s0/kubernetes/docker/docker_data",
    "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
# data-root: where Docker stores its data (the default is /var/lib/docker)
# registry-mirrors: registry mirror; probably not used on an offline machine

# configure the systemd service
[root@k8s-master /data/s0/kubernetes/docker]$ vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

# start the Docker service
[root@k8s-master /data/s0/kubernetes/docker]$ systemctl start docker

# also start the Docker service on node1 and node2
[root@k8s-master /data/s0/kubernetes/docker]$ scp docker/* 10.34.x.11:/usr/bin
[root@k8s-master /data/s0/kubernetes/docker]$ scp docker/* 10.34.x.12:/usr/bin

[root@k8s-master /data/s0/kubernetes/docker]$ scp -r /etc/docker 10.34.x.11:/etc
[root@k8s-master /data/s0/kubernetes/docker]$ scp -r /etc/docker 10.34.x.12:/etc

[root@k8s-master /data/s0/kubernetes/docker]$ scp /usr/lib/systemd/system/docker.service 10.34.x.11:/usr/lib/systemd/system
[root@k8s-master /data/s0/kubernetes/docker]$ scp /usr/lib/systemd/system/docker.service 10.34.x.12:/usr/lib/systemd/system

[root@k8s-node1 ~]$ systemctl start docker
[root@k8s-node2 ~]$ systemctl start docker
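
A quick check that the daemon picked up the custom data root and is the expected version (optional):

[root@k8s-master ~]$ docker info --format '{{.DockerRootDir}}'
/data/s0/kubernetes/docker/docker_data
[root@k8s-master ~]$ docker version --format '{{.Server.Version}}'
19.03.9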

5. Master Node Deployment

5.1 Issue the kube-apiserver HTTPS Certificate with the Self-Signed CA

# create the working directory
[root@k8s-master ~]$ mkdir ca/k8s

# signing configuration file
[root@k8s-master ~/ca/k8s]$ vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
# CSR for the self-signed CA root certificate
[root@k8s-master ~/ca/k8s]$ vim ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

# generate the CA certificate
[root@k8s-master ~/ca/k8s]$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# create the kube-apiserver server certificate request file
[root@k8s-master ~/ca/k8s]$ vim server-csr.json
{
	"CN": "kubernetes",
	"hosts": [
		"10.0.0.1",
		"127.0.0.1",
		"10.34.x.10", // master
		"10.34.x.11", // node1
		"10.34.x.12", // node2
		"kubernetes",
		"kubernetes.default",
		"kubernetes.default.svc",
		"kubernetes.default.svc.cluster",
		"kubernetes.default.svc.cluster.local"
	],
	"key": {
		"algo": "rsa",
		"size": 2048
	},
	"names": [{
		"C": "CN",
		"L": "BeiJing",
		"ST": "BeiJing",
		"O": "k8s",
		"OU": "System"
	}]
}

# generate server.pem and server-key.pem
[root@k8s-master ~/ca/k8s]$  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

5.2 Deploy kube-apiserver

# extract
[root@k8s-master /data/s0/kubernetes/k8s]$ tar -zxvf kubernetes-v1.20.4-server-linux-amd64.tar.gz
[root@k8s-master /data/s0/kubernetes/k8s]$ cp kubernetes/server/bin/kubectl  /usr/bin
[root@k8s-master /data/s0/kubernetes/k8s]$ mkdir {bin,cfg,logs}
[root@k8s-master /data/s0/kubernetes/k8s]$ cp kubernetes/server/bin/{kube-apiserver,kube-scheduler,kube-controller-manager,kubelet,kube-proxy} ./bin

# create the configuration file
[root@k8s-master /data/s0/kubernetes/k8s]$ vim kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/data/s0/kubernetes/k8s/logs \
--etcd-servers=https://10.34.x.10:2379,https://10.34.x.11:2379,https://10.34.x.12:2379 \
--bind-address=10.34.x.10 \
--secure-port=6443 \
--advertise-address=10.34.x.10 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/data/s0/kubernetes/k8s/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/root/ca/k8s/server.pem \
--kubelet-client-key=/root/ca/k8s/server-key.pem \
--tls-cert-file=/root/ca/k8s/server.pem  \
--tls-private-key-file=/root/ca/k8s/server-key.pem \
--client-ca-file=/root/ca/k8s/ca.pem \
--service-account-key-file=/root/ca/k8s/ca-key.pem \
--service-account-issuer=https://kubernetes.default.svc.cluster.local  \
--service-account-signing-key-file=/root/ca/k8s/ca-key.pem \
--etcd-cafile=/root/ca/etcd/ca.pem \
--etcd-certfile=/root/ca/etcd/server.pem \
--etcd-keyfile=/root/ca/etcd/server-key.pem \
--requestheader-client-ca-file=/root/ca/k8s/ca.pem \
--proxy-client-cert-file=/root/ca/k8s/server.pem \
--proxy-client-key-file=/root/ca/k8s/server-key.pem \
--requestheader-allowed-names=kubernetes \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--enable-aggregator-routing=true \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/data/s0/kubernetes/k8s/logs/k8s-audit.log"


Parameter notes
•	--logtostderr: log to standard error (false here, so logs go to the --log-dir files)
•	--v: log verbosity level
•	--log-dir: log directory
•	--etcd-servers: etcd cluster endpoints
•	--bind-address: listen address
•	--secure-port: https secure port
•	--advertise-address: address advertised to the cluster
•	--allow-privileged: allow privileged containers
•	--service-cluster-ip-range: Service virtual IP range
•	--enable-admission-plugins: admission control plugins
•	--authorization-mode: authorization mode; enables RBAC authorization and node self-management
•	--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
•	--token-auth-file: bootstrap token file
•	--service-node-port-range: default port range for NodePort Services
•	--kubelet-client-xxx: client certificate the apiserver uses to talk to kubelet
•	--tls-xxx-file: apiserver https certificates
•	Required from v1.20 on: --service-account-issuer, --service-account-signing-key-file
•	--etcd-xxxfile: certificates for connecting to the etcd cluster
•	--audit-log-xxx: audit logging
•	Aggregation layer settings: --requestheader-client-ca-file, --proxy-client-cert-file, --proxy-client-key-file, --requestheader-allowed-names, --requestheader-extra-headers-prefix, --requestheader-group-headers, --requestheader-username-headers, --enable-aggregator-routing

# configure the token file
[root@k8s-master /data/s0/kubernetes/k8s]$ vim cfg/token.csv
bfd627b0217a49e8626ba1caf1259e0c,kubelet-bootstrap,10001,system:node-bootstrapper

# Note: the token above can be regenerated and replaced, but it must match the later node configuration
> head -c 16 /dev/urandom | od -An -t x | tr -d ' '

# configure the systemd service
[root@k8s-master /data/s0/kubernetes/k8s]$ vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/data/s0/kubernetes/k8s/kube-apiserver.conf
ExecStart=/data/s0/kubernetes/k8s/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

# start the kube-apiserver service
[root@k8s-master /data/s0/kubernetes/k8s]$ systemctl start kube-apiserver
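
Before moving on, it helps to confirm the apiserver is actually answering on port 6443. A minimal check, assuming the default anonymous access to /healthz is still permitted by the built-in RBAC bindings:

[root@k8s-master /data/s0/kubernetes/k8s]$ systemctl status kube-apiserver | grep Active
[root@k8s-master /data/s0/kubernetes/k8s]$ curl -k https://10.34.x.10:6443/healthz   # expected response: ok
# if the request is rejected, check the service logs under /data/s0/kubernetes/k8s/logs instead
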
5.2.1 Service startup errors
  • Error 1: Error: parse error on line 1, column 83: extraneous or missing " in quoted-field'

    Fix: edit token.csv and remove the quotes around the role system:node-bootstrapper.

  • Error 2: Could not construct pre-rendered responses for ServiceAccountIssuerDiscovery endpoints. Endpoints will not be enabled.

    Fix: add --service-account-issuer=https://kubernetes.default.svc.cluster.local

    and --service-account-signing-key-file=/root/ca/k8s/ca-key.pem

  • Error 3: Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/

    Appears in the log after the service is stopped and restarted; it does not affect usage and was left unhandled.

5.3 Deploy kube-controller-manager

# configuration file
[root@k8s-master /data/s0/kubernetes/k8s]$ vim cfg/kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/data/s0/kubernetes/k8s/logs \
--leader-elect=true \
--kubeconfig=/data/s0/kubernetes/k8s/cfg/kube-controller-manager.kubeconfig \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/root/ca/k8s/ca.pem \
--cluster-signing-key-file=/root/ca/k8s/ca-key.pem  \
--root-ca-file=/root/ca/k8s/ca.pem \
--service-account-private-key-file=/root/ca/k8s/ca-key.pem \
--cluster-signing-duration=87600h0m0s"   # issued certificates valid for 10 years

Parameter notes
•	--kubeconfig: kubeconfig used to connect to the apiserver
•	--leader-elect: leader election when several replicas of the component run (HA)
•	--cluster-signing-cert-file / --cluster-signing-key-file: CA used to sign certificates for kubelets; must match the apiserver's CA


# generate the kube-controller-manager certificate
[root@k8s-master /data/s0/kubernetes/k8s]$ cd ~/ca/k8s/
[root@k8s-master ~/ca/k8s]$ vim kube-controller-manager-csr.json
{
	"CN": "system:kube-controller-manager",
	"hosts": [],
	"key": {
		"algo": "rsa",
		"size": 2048
	},
	"names": [{
		"C": "CN",
		"L": "BeiJing",
		"ST": "BeiJing",
		"O": "system:masters",
		"OU": "System"
	}]
}
# generate the certificate
[root@k8s-master ~/ca/k8s]$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

# generate the kubeconfig file
[root@k8s-master /data/s0/kubernetes/k8s]$ KUBE_CONFIG="/data/s0/kubernetes/k8s/cfg/kube-controller-manager.kubeconfig"
[root@k8s-master /data/s0/kubernetes/k8s]$ KUBE_APISERVER="https://10.34.x.10:6443"

# run in the terminal (4 commands)
# write the cluster and CA certificate info into the kube-controller-manager kubeconfig
[root@k8s-master /data/s0/kubernetes/k8s]$ kubectl config set-cluster kubernetes \
  --certificate-authority=/root/ca/k8s/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
# configure the certificate and private key of the kube-controller-manager user
[root@k8s-master /data/s0/kubernetes/k8s]$ kubectl config set-credentials kube-controller-manager \
  --client-certificate=/root/ca/k8s/kube-controller-manager.pem \
  --client-key=/root/ca/k8s/kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
# create a context linking the kubernetes cluster and the kube-controller-manager user
[root@k8s-master /data/s0/kubernetes/k8s]$ kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-controller-manager \
  --kubeconfig=${KUBE_CONFIG}
# switch to that context
[root@k8s-master /data/s0/kubernetes/k8s]$ kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

# configure the systemd service
[root@k8s-master /data/s0/kubernetes/k8s]$ vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/data/s0/kubernetes/k8s/cfg/kube-controller-manager.conf
ExecStart=/data/s0/kubernetes/k8s/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target


# start the kube-controller-manager service
[root@k8s-master /data/s0/kubernetes/k8s]$ systemctl start kube-controller-manager

5.4 Deploy kube-scheduler

# create the configuration file
[root@k8s-master /data/s0/kubernetes/k8s]$ vim ./cfg/kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/data/s0/kubernetes/k8s/logs \
--leader-elect \
--kubeconfig=/data/s0/kubernetes/k8s/cfg/kube-scheduler.kubeconfig \
--bind-address=127.0.0.1"

Parameter notes
•	--kubeconfig: kubeconfig used to connect to the apiserver
•	--leader-elect: leader election when several replicas of the component run (HA)

# generate the kube-scheduler certificate
[root@k8s-master /data/s0/kubernetes/k8s]$ cd ~/ca/k8s
[root@k8s-master ~/ca/k8s]$ vim kube-scheduler-csr.json
{
	"CN": "system:kube-scheduler",
	"hosts": [],
	"key": {
		"algo": "rsa",
		"size": 2048
	},
	"names": [{
		"C": "CN",
		"L": "BeiJing",
		"ST": "BeiJing",
		"O": "system:masters",
		"OU": "System"
	}]
}

# generate the certificate
[root@k8s-master ~/ca/k8s]$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

# kube-scheduler kubeconfig file
[root@k8s-master ~/ca/k8s]$ KUBE_CONFIG="/data/s0/kubernetes/k8s/cfg/kube-scheduler.kubeconfig"
[root@k8s-master ~/ca/k8s]$ KUBE_APISERVER="https://10.34.x.10:6443"


# run in the terminal (4 commands)
[root@k8s-master ~/ca/k8s]$ kubectl config set-cluster kubernetes \
  --certificate-authority=/root/ca/k8s/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}

[root@k8s-master ~/ca/k8s]$ kubectl config set-credentials kube-scheduler \
  --client-certificate=/root/ca/k8s/kube-scheduler.pem \
  --client-key=/root/ca/k8s/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}

[root@k8s-master ~/ca/k8s]$ kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig=${KUBE_CONFIG}

[root@k8s-master ~/ca/k8s]$ kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

# configure the systemd service
[root@k8s-master ~/ca/k8s]$ vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/data/s0/kubernetes/k8s/cfg/kube-scheduler.conf
ExecStart=/data/s0/kubernetes/k8s/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

# start the kube-scheduler service
[root@k8s-master ~/ca/k8s]$ systemctl start kube-scheduler

5.5 Check Cluster Status

# generate the certificate kubectl uses to connect to the cluster
[root@k8s-master ~/ca/k8s]$ vim admin-csr.json
{
	"CN": "admin",
	"hosts": [],
	"key": {
		"algo": "rsa",
		"size": 2048
	},
	"names": [{
		"C": "CN",
		"L": "BeiJing",
		"ST": "BeiJing",
		"O": "system:masters",
		"OU": "System"
	}]
}
[root@k8s-master ~/ca/k8s]$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

# generate the kubeconfig file
[root@k8s-master ~/ca/k8s]$ mkdir /root/.kube

[root@k8s-master ~/ca/k8s]$ KUBE_CONFIG="/root/.kube/config"
[root@k8s-master ~/ca/k8s]$ KUBE_APISERVER="https://10.34.x.10:6443"

# run in the terminal (4 commands)
[root@k8s-master ~/ca/k8s]$ kubectl config set-cluster kubernetes \
  --certificate-authority=/root/ca/k8s/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}

[root@k8s-master ~/ca/k8s]$ kubectl config set-credentials cluster-admin \
  --client-certificate=/root/ca/k8s/admin.pem \
  --client-key=/root/ca/k8s/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}

[root@k8s-master ~/ca/k8s]$ kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=${KUBE_CONFIG}

[root@k8s-master ~/ca/k8s]$ kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

# check the status of the cluster components with kubectl
[root@k8s-master ~/ca/k8s]$ kubectl get cs
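
If the control-plane components all came up correctly, the output should look roughly like the following (illustrative; the deprecation warning is expected on v1.20):

Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}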

6. Node Deployment

6.1 Deploy kubelet

# --------------------------- master node --------------------------------------------------
# deploy kubelet
[root@k8s-master /data/s0/kubernetes/k8s]$ vim ./cfg/kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/data/s0/kubernetes/k8s/logs \
--hostname-override=k8s-master \
--network-plugin=cni \
--kubeconfig=/data/s0/kubernetes/k8s/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/data/s0/kubernetes/k8s/cfg/bootstrap.kubeconfig \
--config=/data/s0/kubernetes/k8s/cfg/kubelet-config.yml \
--cert-dir=/root/ca/k8s \
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"

Parameter notes
•	--hostname-override: display name, unique within the cluster
•	--network-plugin: enable CNI
•	--kubeconfig: empty path; it is generated automatically and later used to connect to the apiserver
•	--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
•	--config: configuration parameters file
•	--cert-dir: directory where the kubelet certificates are generated
•	--pod-infra-container-image: image of the pod infrastructure (pause) container, the base container of every pod

# load the offline image into docker
[root@k8s-master /data/s0/kubernetes/k8s]$ docker load -i pause.tar 

# configuration parameters file
[root@k8s-master /data/s0/kubernetes/k8s]$ vim ./cfg/kubelet-config.yml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /root/ca/k8s/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110

# authorize the kubelet-bootstrap user to request certificates
[root@k8s-master /data/s0/kubernetes/k8s]$ kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
                                     

# generate the bootstrap kubeconfig used when kubelet first joins the cluster
# run the following commands in the directory where the kubernetes certificates were generated
[root@k8s-master /data/s0/kubernetes/k8s]$ cd /root/ca/k8s/
[root@k8s-master ~/ca/k8s]$ KUBE_CONFIG="/data/s0/kubernetes/k8s/cfg/bootstrap.kubeconfig"
[root@k8s-master ~/ca/k8s]$ KUBE_APISERVER="https://10.34.x.10:6443"   # apiserver IP:PORT
[root@k8s-master ~/ca/k8s]$ TOKEN="bfd627b0217a49e8626ba1caf1259e0c"    # must match token.csv on the master


# run in the terminal (4 commands)
[root@k8s-master ~/ca/k8s]$ kubectl config set-cluster kubernetes \
  --certificate-authority=/root/ca/k8s/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}

[root@k8s-master ~/ca/k8s]$ kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=${KUBE_CONFIG}

[root@k8s-master ~/ca/k8s]$ kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=${KUBE_CONFIG}

[root@k8s-master ~/ca/k8s]$ kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

# configure the systemd service
[root@k8s-master ~/ca/k8s]$ vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/data/s0/kubernetes/k8s/cfg/kubelet.conf
ExecStart=/data/s0/kubernetes/k8s/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target


# start the kubelet service on the master
[root@k8s-master ~/ca/k8s]$ systemctl start kubelet   # systemctl status kubelet


# approve the kubelet certificate request and join the node to the cluster
[root@k8s-master ~/ca/k8s]$ kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-pqKVTNbghbuRP1p9ldj2H0hp9vodjsPUNFq1TVjJ2J0   16m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending


# approve the request -- kubectl certificate approve <CSR NAME>
[root@k8s-master ~/ca/k8s]$ kubectl certificate approve node-csr-pqKVTNbghbuRP1p9ldj2H0hp9vodjsPUNFq1TVjJ2J0

# check node status
[root@k8s-master ~/ca/k8s]$ kubectl get node

NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   <none>   75s   v1.20.4  # NotReady because the network plugin has not been deployed yet

6.2 Deploy kube-proxy

#   ------------------- master node -------------------------
# create the certificate request file under /root/ca/k8s
[root@k8s-master ~/ca/k8s]$ vim kube-proxy-csr.json
{
	"CN": "system:kube-proxy",
	"hosts": [],
	"key": {
		"algo": "rsa",
		"size": 2048
	},
	"names": [{
		"C": "CN",
		"L": "BeiJing",
		"ST": "BeiJing",
		"O": "k8s",
		"OU": "System"
	}]
}

# generate the certificate
[root@k8s-master ~/ca/k8s]$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# generate the kubeconfig file
[root@k8s-master ~/ca/k8s]$ KUBE_CONFIG="/data/s0/kubernetes/k8s/cfg/kube-proxy.kubeconfig"
[root@k8s-master ~/ca/k8s]$ KUBE_APISERVER="https://10.34.x.10:6443"

# run in the terminal (4 commands)
[root@k8s-master ~/ca/k8s]$ kubectl config set-cluster kubernetes \
  --certificate-authority=/root/ca/k8s/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}

[root@k8s-master ~/ca/k8s]$ kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}

[root@k8s-master ~/ca/k8s]$ kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}

[root@k8s-master ~/ca/k8s]$ kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

# create the service startup parameters file
[root@k8s-master /data/s0/kubernetes/k8s]$ vim cfg/kube-proxy.conf
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/data/s0/kubernetes/k8s/logs \
--config=/data/s0/kubernetes/k8s/cfg/kube-proxy-config.yml"

# configuration parameters file
[root@k8s-master /data/s0/kubernetes/k8s]$ vim cfg/kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /data/s0/kubernetes/k8s/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master
clusterCIDR: 10.244.0.0/16

# configure the systemd service
[root@k8s-master /data/s0/kubernetes/k8s]$ vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/data/s0/kubernetes/k8s/cfg/kube-proxy.conf
ExecStart=/data/s0/kubernetes/k8s/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# start the service
[root@k8s-master /data/s0/kubernetes/k8s]$ systemctl start kube-proxy
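
A rough check that kube-proxy is running and programming rules (this setup uses the default iptables proxy mode):

[root@k8s-master /data/s0/kubernetes/k8s]$ systemctl status kube-proxy | grep Active
# in iptables mode kube-proxy creates the KUBE-SERVICES chain in the nat table:
[root@k8s-master /data/s0/kubernetes/k8s]$ iptables -t nat -L KUBE-SERVICES -n | head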

7. Deploy the calico Network Plugin

# load the calico images
[root@k8s-master /data/s0/kubernetes]$ cd calico
[root@k8s-master /data/s0/kubernetes/calico]$ docker load -i calico-cni.tar
[root@k8s-master /data/s0/kubernetes/calico]$ docker load -i calico-controllers.tar
[root@k8s-master /data/s0/kubernetes/calico]$ docker load -i calico-flexvol.tar
[root@k8s-master /data/s0/kubernetes/calico]$ docker load -i calico-node.tar
[root@k8s-master /data/s0/kubernetes/calico]$ docker images
REPOSITORY                  TAG                 IMAGE ID            CREATED             SIZE
calico/node                 v3.15.1             1470783b1474        4 years ago         262MB
calico/pod2daemon-flexvol   v3.15.1             a696ebcb2ac7        4 years ago         112MB
calico/cni                  v3.15.1             2858353c1d25        4 years ago         217MB
calico/kube-controllers     v3.15.1             8ed9dbffe350        4 years ago         53.1MB
lizhenliang/pause-amd64     3.0                 99e59f495ffa        8 years ago         747kB

# deploy calico
[root@k8s-master /data/s0/kubernetes/calico]$ tar -zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin  # this path must match the one in calico.yaml
[root@k8s-master /data/s0/kubernetes/calico]$ kubectl apply -f calico.yaml
[root@k8s-master /data/s0/kubernetes/calico]$ kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-5h4vv   1/1     Running   0          28s
calico-node-2wkns                         1/1     Running   0          28s

# after the network plugin is deployed, check the node status again
[root@k8s-master /data/s0/kubernetes/calico]$ kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    <none>   20h   v1.20.4
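
As a smoke test of pod networking (optional, and assuming a busybox image is available locally in this offline environment), a throwaway pod should receive an address from the 10.244.0.0/16 pod CIDR:

[root@k8s-master ~]$ kubectl run net-test --image=busybox:1.28 --restart=Never -- sleep 3600
[root@k8s-master ~]$ kubectl get pod net-test -o wide   # the pod IP should fall inside 10.244.0.0/16
[root@k8s-master ~]$ kubectl delete pod net-test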

8. Authorize apiserver to Access kubelet

[root@k8s-master /data/s0/kubernetes/k8s]$ vim apiserver-to-kubelet-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes


[root@k8s-master /data/s0/kubernetes/k8s]$ kubectl apply -f apiserver-to-kubelet-rbac.yaml
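
After the binding is applied, commands that go through the apiserver-to-kubelet path (logs, exec) should work; a quick check against one of the calico pods listed earlier:

[root@k8s-master /data/s0/kubernetes/k8s]$ kubectl logs -n kube-system calico-node-2wkns --tail=5
# before this RBAC is applied, the same command fails with an error similar to:
# Error from server (Forbidden): Forbidden (user=kubernetes, verb=get, resource=nodes, subresource=proxy)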

9. Join node1 and node2 as Worker Nodes

# --------------------------- node1 / node2 --------------------------------------------
# create the directories on node1 and node2
[root@k8s-node1 /data/s0/kubernetes/k8s]$ mkdir -p {bin,cfg,logs}
[root@k8s-node2 /data/s0/kubernetes/k8s]$ mkdir -p {bin,cfg,logs}

## copy the kubelet / kube-proxy related files
# copy the kubectl, kubelet and kube-proxy binaries from the master to node1 and node2
[root@k8s-master /data/s0/kubernetes/k8s]$ scp /usr/bin/kubectl 10.34.x.11:/usr/bin
[root@k8s-master /data/s0/kubernetes/k8s]$ scp -r ~/.kube 10.34.x.11:~/ # kubectl on a node needs the .kube config file
[root@k8s-master /data/s0/kubernetes/k8s]$ scp kubernetes/server/bin/{kubelet,kube-proxy} 10.34.x.11:/data/s0/kubernetes/k8s/bin
[root@k8s-master /data/s0/kubernetes/k8s]$ scp kubernetes/server/bin/{kubelet,kube-proxy} 10.34.x.12:/data/s0/kubernetes/k8s/bin

# copy the kubelet and kube-proxy systemd service files from the master to node1 and node2
[root@k8s-master /data/s0/kubernetes/k8s]$ scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service 10.34.x.11:/usr/lib/systemd/system
[root@k8s-master /data/s0/kubernetes/k8s]$ scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service 10.34.x.12:/usr/lib/systemd/system

# copy the certificates
[root@k8s-master ~/ca/k8s]$ scp ./* 10.34.x.11:~/ca/k8s
[root@k8s-master ~/ca/k8s]$ scp ./* 10.34.x.12:~/ca/k8s
# delete or move the kubelet certificates; they are generated automatically after CSR approval and differ on every node
[root@k8s-node1 ~/ca/k8s]$ /bin/rm -rf kubelet*
[root@k8s-node2 ~/ca/k8s]$ /bin/rm -rf kubelet*

# copy the kubelet and kube-proxy configuration files from the master to node1 and node2
[root@k8s-master /data/s0/kubernetes/k8s]$ scp cfg/{kubelet.conf,kubelet-config.yml,kube-proxy.conf,kube-proxy-config.yml} 10.34.x.11:/data/s0/kubernetes/k8s/cfg
[root@k8s-master /data/s0/kubernetes/k8s]$ scp cfg/{kubelet.conf,kubelet-config.yml,kube-proxy.conf,kube-proxy-config.yml} 10.34.x.12:/data/s0/kubernetes/k8s/cfg
# modify kubelet.conf; it differs on each node
[root@k8s-node1 /data/s0/kubernetes/k8s]$ vim cfg/kubelet.conf
--hostname-override=k8s-node1
[root@k8s-node2 /data/s0/kubernetes/k8s]$ vim cfg/kubelet.conf
--hostname-override=k8s-node2
# modify kube-proxy-config.yml; it differs on each node
[root@k8s-node1 /data/s0/kubernetes/k8s]$ vim cfg/kube-proxy-config.yml
hostnameOverride: k8s-node1
[root@k8s-node2 /data/s0/kubernetes/k8s]$ vim cfg/kube-proxy-config.yml
hostnameOverride: k8s-node2

# load the calico image resources; node1 and node2 are the same, node1 is shown here
# the calico-node DaemonSet automatically starts the calico pods on newly joined Kubernetes nodes
[root@k8s-master /data/s0/kubernetes]$ scp -r calico 10.34.x.11:/data/s0/kubernetes

[root@k8s-node1 /data/s0/kubernetes/calico]$ docker load -i calico-cni.tar
[root@k8s-node1 /data/s0/kubernetes/calico]$ docker load -i calico-controllers.tar
[root@k8s-node1 /data/s0/kubernetes/calico]$ docker load -i calico-flexvol.tar
[root@k8s-node1 /data/s0/kubernetes/calico]$ docker load -i calico-node.tar
[root@k8s-node1 /data/s0/kubernetes/calico]$ mkdir -p /opt/cni/bin
[root@k8s-node1 /data/s0/kubernetes/calico]$ tar -zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

# pause image that manages the pod network container; node1 and node2 are the same, node1 is shown here
[root@k8s-master /data/s0/kubernetes/k8s]$ scp ./pause.tar 10.34.x.11:/data/s0/kubernetes/k8s
[root@k8s-node1 /data/s0/kubernetes/k8s]$ docker load -i pause.tar

# start kubelet; node1 is shown here
[root@k8s-node1 /data/s0/kubernetes/k8s]$ KUBE_CONFIG="/data/s0/kubernetes/k8s/cfg/bootstrap.kubeconfig"
[root@k8s-node1 /data/s0/kubernetes/k8s]$ KUBE_APISERVER="https://10.34.x.10:6443"   # apiserver IP:PORT
[root@k8s-node1 /data/s0/kubernetes/k8s]$ TOKEN="bfd627b0217a49e8626ba1caf1259e0c"    # must match token.csv on the master
# run in the terminal (4 commands)
[root@k8s-node1 /data/s0/kubernetes/k8s]$ kubectl config set-cluster kubernetes \
  --certificate-authority=/root/ca/k8s/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}

[root@k8s-node1 /data/s0/kubernetes/k8s]$ kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=${KUBE_CONFIG}

[root@k8s-node1 /data/s0/kubernetes/k8s]$ kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=${KUBE_CONFIG}

[root@k8s-node1 /data/s0/kubernetes/k8s]$ kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
[root@k8s-node1 /data/s0/kubernetes/k8s]$ systemctl start kubelet 

# start kube-proxy; node1 is shown here
[root@k8s-node1 /data/s0/kubernetes/k8s]$ KUBE_CONFIG="/data/s0/kubernetes/k8s/cfg/kube-proxy.kubeconfig"
[root@k8s-node1 /data/s0/kubernetes/k8s]$ KUBE_APISERVER="https://10.34.x.10:6443"

# run in the terminal (4 commands)
[root@k8s-node1 /data/s0/kubernetes/k8s]$ kubectl config set-cluster kubernetes \
  --certificate-authority=/root/ca/k8s/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}

[root@k8s-node1 /data/s0/kubernetes/k8s]$ kubectl config set-credentials kube-proxy \
  --client-certificate=/root/ca/k8s/kube-proxy.pem \
  --client-key=/root/ca/k8s/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}

[root@k8s-node1 /data/s0/kubernetes/k8s]$ kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}

[root@k8s-node1 /data/s0/kubernetes/k8s]$ kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
[root@k8s-node1 /data/s0/kubernetes/k8s]$ systemctl start kube-proxy

# on the master, approve the kubelet certificate requests from node1 and node2
[root@k8s-master /data/s0/kubernetes/k8s]$ kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-93g8WjHI8u4h8JKMolBCVzGCRshA0QKK8fsOR9Zkde4   13m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-US94mtC2QVJ_hBcsju8QF6K8o9Of6-E84qWKGw9GcP8   75s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

[root@k8s-master /data/s0/kubernetes/k8s]$ kubectl certificate approve node-csr-93g8WjHI8u4h8JKMolBCVzGCRshA0QKK8fsOR9Zkde4
[root@k8s-master /data/s0/kubernetes/k8s]$ kubectl certificate approve node-csr-US94mtC2QVJ_hBcsju8QF6K8o9Of6-E84qWKGw9GcP8


# check the overall cluster state
# check node status
[root@k8s-master /data/s0/kubernetes/k8s]$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    <none>   26h   v1.20.4
k8s-node1    Ready    <none>   63m   v1.20.4
k8s-node2    Ready    <none>   62m   v1.20.4

# check pod status
[root@k8s-master /data/s0/kubernetes/k8s]$ kubectl get pods -n kube-system
calico-kube-controllers-97769f7c7-5h4vv   1/1     Running   0          3d2h
calico-node-2wkns                         1/1     Running   0          3d2h
calico-node-j46rf                         1/1     Running   0          2d21h
calico-node-wtfrn                         1/1     Running   0          2d21h
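
The ROLES column shows <none> because binary-deployed nodes carry no role label; this is cosmetic, but the labels can be added by hand if desired:

[root@k8s-master ~]$ kubectl label node k8s-node1 node-role.kubernetes.io/worker=
[root@k8s-master ~]$ kubectl label node k8s-node2 node-role.kubernetes.io/worker=
[root@k8s-master ~]$ kubectl get nodes   # ROLES now shows "worker" for the two nodes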

10. Deploy CoreDNS and Dashboard

10.1 Deploy CoreDNS

Purpose:

  • DNS service: CoreDNS is the DNS server inside the Kubernetes cluster; it provides name resolution for services and pods, resolving service names to their ClusterIP addresses so services can talk to each other.
  • Service discovery: CoreDNS lets pods reach other services by name (e.g. my-service.my-namespace.svc.cluster.local) instead of by IP address, which copes much better with the dynamic nature of services.
  • Plugin architecture: CoreDNS is built around plugins; additional plugins can be loaded as needed for custom domain resolution, load balancing, caching and more.
  • Integration and configuration: CoreDNS is the default DNS solution for Kubernetes, integrates easily and is configured through a ConfigMap.
# on a machine with internet access
> docker pull m.daocloud.io/docker.io/coredns/coredns:1.2.2
> docker save -o coredns.tar m.daocloud.io/docker.io/coredns/coredns:1.2.2
# load the offline image (keep the images identical on node1 and node2, since the pod may be scheduled onto either node)
[root@k8s-master /data/s0/kubernetes/dashboard]$ docker load -i coredns.tar

# CoreDNS provides name resolution for Services inside the cluster
[root@k8s-master /data/s0/kubernetes/dashboard]$ kubectl apply -f coredns.yaml # note: the image in the yaml (image: m.daocloud.io/docker.io/coredns/coredns:1.2.2) must match the loaded image
[root@k8s-master /data/s0/kubernetes/dashboard]$ kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-5h4vv   1/1     Running   0          3d6h
calico-node-2wkns                         1/1     Running   0          3d6h
calico-node-j46rf                         1/1     Running   0          3d1h
calico-node-wtfrn                         1/1     Running   0          3d
coredns-776cb8597f-g6dvp                  1/1     Running   0          18s
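
To confirm in-cluster DNS actually resolves, the usual test is an nslookup from a temporary pod (again assuming a busybox image, e.g. busybox:1.28, is loaded on the nodes):

[root@k8s-master ~]$ kubectl run dns-test -it --rm --restart=Never --image=busybox:1.28 -- nslookup kubernetes
# a successful lookup resolves "kubernetes" to the cluster IP of the default kubernetes Service (10.0.0.1 in this setup)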

10.2 Deploy Dashboard

Purpose:

  • Web UI: Kubernetes Dashboard is a web-based user interface for managing and monitoring a Kubernetes cluster; the cluster's resources and their state can be viewed through it.
  • Resource management: through the Dashboard, users can easily view, create, edit and delete Kubernetes resources (Pods, Services, Deployments, ReplicaSets, and so on).
  • Monitoring and logs: the Dashboard offers basic monitoring and log access, showing the health of the cluster and applications and giving access to pod logs.
  • Access control: access to the Dashboard is governed by Kubernetes RBAC (role-based access control), so permissions can be restricted appropriately.
# load the offline images (keep the images identical on node1 and node2, since the pods may be scheduled onto either node)
[root@k8s-master /data/s0/kubernetes/dashboard]$ docker load -i metrics-scraper.tar
[root@k8s-master /data/s0/kubernetes/dashboard]$ docker load -i metrics-server.tar
[root@k8s-master /data/s0/kubernetes/dashboard]$ docker load -i dashboard.tar

# create the services
[root@k8s-master /data/s0/kubernetes/dashboard]$ kubectl apply -f kubernetes-dashboard.yaml
[root@k8s-master /data/s0/kubernetes/dashboard]$ kubectl get pods,svc -n kubernetes-dashboard
#[root@k8s-master /data/s0/kubernetes/dashboard]$ kubectl delete namespace kubernetes-dashboard   # if the deployment errors out, deleting the kubernetes-dashboard namespace removes its pods first
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-7b59f7d4df-l6ngh   1/1     Running   0          16m
pod/kubernetes-dashboard-548f88599b-k7824        1/1     Running   0          22s

NAME                                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.0.0.183   <none>        8000/TCP        16m
service/kubernetes-dashboard        NodePort    10.0.0.144   <none>        443:30001/TCP   16m


# create a dashboard admin account
[root@k8s-master /data/s0/kubernetes/dashboard]$ kubectl create serviceaccount dashboard-admin -n kube-system 
[root@k8s-master /data/s0/kubernetes/dashboard]$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
[root@k8s-master /data/s0/kubernetes/dashboard]$ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Name:         dashboard-admin-token-gdmpr
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 5a92a1f6-b180-43d2-9dcd-7062bae9503a

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjUyV3R3bHdMVnFZNzcyY2gzSjVIbU5rc1RSVllqblEyM0wtUl9GaW1CemcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZ2RtcHIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNWE5MmExZjYtYjE4MC00M2QyLTlkY2QtNzA2MmJhZTk1MDNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.XYnuRk1vkyw-PDrAfX9tv6vHvhfvW3ZwzMYOcjEgB8fCb5Ifn2GrBPVQHrO79DpY9ixJJ7r57Ah5r0Vz94CyMz1Qqd1ZJx3jYn2kTWRYbHU66YhbsVnOI0C6rPH9ZH8Qqtf8MJIj81c2CsE4_tbw-JJnv-NkisLvHSNyVvgkB3TIYUFzAJKj6PTm9th2BddLoWwx9Fl7G0u2bAJPhWfkPGB_Raq2KHsX99qM4JtIdWBFXWmksVnBVXtfi6nfUhfQQL5qPWVf9YIGV20KQIdwcQoPGFBlaxFIoSmgHpIBOdMpqYbeV41OTEkflyQo2uu7y9BhqeOiiz7g-vz8Iu7obg
ca.crt:     1359 bytes
namespace:  11 bytes


# Access the dashboard at https://k8s-master:30001
# A remote browser may warn that "your connection is not private"
# The pod log (kubectl logs -n kubernetes-dashboard <dashboard-pod-name>) shows: http: TLS handshake error from 10.244.235.192:56020: remote error: tls: unknown certificate
# Workaround: append "test-type --ignore-certificate-errors" to the Chrome shortcut target after "C:\Program Files\Google\Chrome\Application\chrome.exe" (reference: https://blog.csdn.net/weixin_44968234/article/details/129679992)