[Kubernetes Deployment] Building a Highly Available K8s Cluster v1.26.15 from Binaries

I. Server environment and deployment plan

1. K8s server information and network planning

K8s network planning:

  • Pod network: 10.0.0.0/16
  • Service network: 10.255.0.0/16

Server resources:

| IP Address | OS | Resources | Notes |
| --- | --- | --- | --- |
| 16.32.15.115 | CentOS Linux release 7.9 | 4C/4G/50G | |
| 16.32.15.200 | CentOS Linux release 7.9 | 4C/4G/50G | |
| 16.32.15.201 | CentOS Linux release 7.9 | 4C/4G/50G | |
| 16.32.15.202 | CentOS Linux release 7.9 | 4C/4G/50G | |
| 16.32.15.210 | | | VIP address |

2. Deployment architecture

This lab uses a single worker node; in production, scale out workers as needed.

| Role | IP Address | Hostname | Components |
| --- | --- | --- | --- |
| Control plane | 16.32.15.115 | master-1 | apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx |
| Control plane | 16.32.15.200 | master-2 | apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx |
| Control plane | 16.32.15.201 | master-3 | apiserver, controller-manager, scheduler, etcd, docker |
| Worker | 16.32.15.202 | node-1 | kubelet, kube-proxy, docker, calico, coredns |
| VIP | 16.32.15.210 | | |

3. Component versions

| # | Component | Version | Notes |
| --- | --- | --- | --- |
| 1 | Docker | 20.10.6 | |
| 2 | cri-dockerd | 0.3.14 | Download URL: |
| 3 | etcd | 3.5.15 | Download URL: |
| 4 | Nginx | 1.20.1 | |
| 5 | Keepalived | 1.3.5 | |
| 6 | Calico | 3.25.0 | |
| 7 | CoreDNS | 1.9.3 | |
| 8 | Kubernetes components | 1.26.15 | Download URL: |

II. Environment initialization

Tip: perform these initialization steps on every server involved!

1. Disable the firewall

1. Install the iptables service (install first, then disable):

bash
yum install iptables-services -y

2. Turn off firewall restrictions:

bash
systemctl disable firewalld --now
setenforce 0
sed  -i -r 's/SELINUX=[ep].*/SELINUX=disabled/g' /etc/selinux/config

service iptables stop
systemctl disable iptables
iptables -F

2. Configure local name resolution

1. Add hosts entries:

bash
cat  >> /etc/hosts << EOF
16.32.15.115 master-1
16.32.15.200 master-2
16.32.15.201 master-3
16.32.15.202 node-1
EOF

2. Set each server's hostname (run the matching command on each server):

bash
hostnamectl set-hostname master-1 && bash
hostnamectl set-hostname master-2 && bash
hostnamectl set-hostname master-3 && bash
hostnamectl set-hostname node-1 && bash

3. Keep server clocks in sync

1. Set the timezone:

bash
timedatectl set-timezone Asia/Shanghai

2. Sync against Alibaba Cloud's NTP source:

bash
yum -y install ntpdate
ntpdate ntp1.aliyun.com

3. Add a cron job to resync daily at 1 AM:

bash
echo "0 1 * * * ntpdate ntp1.aliyun.com" >> /var/spool/cron/root
crontab -l

4. Disable swap (Kubernetes requires it to be off)

1. Turn swap off:

bash
swapoff --all

2. Keep swap disabled across reboots:

bash
sed -i -r '/swap/ s/^/#/' /etc/fstab
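
A quick check that swap is fully off: swapon prints nothing and free reports zero swap.

bash
swapon --show
free -h | grep -i swap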

5. Set up passwordless SSH between hosts

1. Generate an SSH key pair:

bash
ssh-keygen -t rsa

2. Copy the local public key to the remote hosts:

bash
ssh-copy-id -i ~/.ssh/id_rsa.pub master-1
ssh-copy-id -i ~/.ssh/id_rsa.pub master-2
ssh-copy-id -i ~/.ssh/id_rsa.pub master-3
ssh-copy-id -i ~/.ssh/id_rsa.pub node-1

6. Tune kernel parameters: enable bridge filtering and IP forwarding

1. Add the kernel parameters:

bash
cat >> /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl -p /etc/sysctl.d/kubernetes.conf

2. Load the bridge netfilter module:

bash
modprobe br_netfilter
lsmod | grep br_netfilter  # verify the module is loaded
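
Note that a module loaded with modprobe does not survive a reboot. One way to persist it, using systemd's modules-load.d mechanism:

bash
cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF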

7. Enable IPVS

Kubernetes Services support two proxy modes, iptables-based and ipvs-based; of the two, ipvs performs better. To use ipvs mode, its kernel modules must be loaded manually.

1. Install the ipvs tooling:

bash
yum -y install ipset ipvsadm

2. Configure the ipvs modules:

bash
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4  
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules
/etc/sysconfig/modules/ipvs.modules

3. Verify the ipvs modules:

bash
lsmod | grep -e ip_vs -e nf_conntrack_ipv4

8. Install Docker

1. Configure the Alibaba Cloud repos and install Docker:

bash
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum makecache

# yum-utils provides the yum-config-manager utility
yum install -y yum-utils

# Add the Alibaba Cloud Docker CE repository via yum-config-manager
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 -y

2. Configure Docker registry mirrors (mainland-China accelerators):

bash
mkdir /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
 "registry-mirrors": [
"https://vm1wbfhf.mirror.aliyuncs.com",
"http://f1361db2.m.daocloud.io",
"https://hub-mirror.c.163.com",
"https://docker.mirrors.ustc.edu.cn",
"https://mirror.baidubce.com",
"https://ustc-edu-cn.mirror.aliyuncs.com",
"https://registry.cn-hangzhou.aliyuncs.com",
"https://ccr.ccs.tencentyun.com",
"https://hub.daocloud.io",
"https://docker.shootchat.top",
"https://do.nark.eu.org",
"https://dockerproxy.com",
"https://docker.m.daocloud.io",
"https://dockerhub.timeweb.cloud",
"https://docker.shootchat.top",
"https://do.nark.eu.org"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

3. Start Docker and enable it at boot:

bash
systemctl enable docker --now
systemctl status docker

9. Install the cri-dockerd shim

Kubernetes removed dockershim in 1.24, so it no longer talks to Docker directly. To keep using Docker you need cri-dockerd; the alternative is another container runtime such as containerd.

1. Install cri-dockerd:

bash
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.14/cri-dockerd-0.3.14-3.el7.x86_64.rpm
rpm -ivh cri-dockerd-0.3.14-3.el7.x86_64.rpm

2. Back up and replace the systemd unit:

bash
mv /usr/lib/systemd/system/cri-docker.service{,.default}
cat > /usr/lib/systemd/system/cri-docker.service << EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
ExecReload=/bin/kill -s HUP \$MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF

3. Start it and enable it at boot:

bash
systemctl daemon-reload
systemctl start cri-docker.service 
systemctl enable cri-docker.service 
systemctl status cri-docker.service 

4. Check the socket file:

bash
ll -l /var/run/cri-dockerd.sock

When configuring the kubelet's CRI socket later, use unix:///var/run/cri-dockerd.sock.
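
If you also install crictl (from the cri-tools project, not covered in this article), you can point it at this socket to confirm that cri-dockerd answers CRI calls:

bash
crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version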

10. Install base packages

Handy tools for day-to-day troubleshooting:

bash
yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel  python-devel epel-release openssh-server socat  ipvsadm conntrack ntpdate telnet rsync openssh-clients

III. Deploying a highly available etcd cluster

Tip: the etcd cluster runs on master-1, master-2, and master-3. The certificate work can be done on one host first and the results copied to the other masters.

1. Issue etcd certificates

Note: before issuing certificates, verify that the time and timezone are consistent across the three master hosts; a mismatch can render the certificates unusable!

1. Create the etcd directories (on master-1, master-2, and master-3):

bash
mkdir /etc/etcd/{ssl,data} -p

2. Install the cfssl certificate tooling:

bash
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64

mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

3. Create the CA certificate signing request file

Create a working directory; certificates and related config files are generated here and then synced to the master hosts.

bash
mkdir ~/workdir
cd ~/workdir

Create the CA CSR file:

json
cat > ~/workdir/ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
      "algo": "rsa",
      "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hebei",
      "L": "Handan",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
          "expiry": "87600h"
  }
}
EOF

Key fields:

  • CN: the certificate's common name
  • algo: use the RSA algorithm
  • size: RSA key size in bits
  • expiry: certificate lifetime; 87600h = 10 years

4. Generate the CA root certificate:

bash
cfssl gencert -initca ca-csr.json  | cfssljson -bare ca
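
Optionally, inspect the generated CA with the cfssl-certinfo tool installed above to confirm the subject and the 10-year validity:

bash
cfssl-certinfo -cert ca.pem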

5. Create the CA config file, which defines the CA's signing policies and configuration:

json
cat > ~/workdir/ca-config.json << EOF
{
  "signing": {
      "default": {
          "expiry": "87600h"
        },
      "profiles": {
          "kubernetes": {
              "usages": [
                  "signing",
                  "key encipherment",
                  "server auth",
                  "client auth"
              ],
              "expiry": "87600h"
          }
      }
  }
}
EOF

Key fields:

  • usages: what the certificate may be used for; this profile specifies four uses

    • "signing": digital signatures
    • "key encipherment": encrypting keys
    • "server auth": server authentication
    • "client auth": client authentication
  • expiry: certificates issued under the kubernetes profile are likewise valid for 10 years

6. Create the etcd certificate signing request file:

json
cat > ~/workdir/etcd-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "16.32.15.115",
    "16.32.15.200",
    "16.32.15.201",
    "16.32.15.210"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "Hebei",
    "L": "Handan",
    "O": "k8s",
    "OU": "system"
  }]
}
EOF

Note: the IPs in the hosts field above are the internal IPs of all etcd cluster nodes. You can reserve a few extra IPs for future scale-out so the certificate won't need to be reissued.

7. Issue the etcd certificate:

bash
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson  -bare etcd

Key flags:

  • gencert: generate a certificate
  • -ca=ca.pem: the CA certificate used to sign the new certificate
  • -ca-key=ca-key.pem: the CA's private key
  • -config=ca-config.json: generation config, including signing policy and validity
  • -profile=kubernetes: use the kubernetes profile from the config file
  • -bare etcd: name the generated certificate files etcd
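
Optionally, verify that every IP from the hosts field made it into the certificate's SAN list (assuming openssl is available; it is also pulled in with the base packages installed earlier):

bash
openssl x509 -in etcd.pem -noout -text | grep -A1 "Subject Alternative Name"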

8. Copy the certificates into /etc/etcd/ssl:

bash
cp -p ca*.pem /etc/etcd/ssl/
cp -p etcd*.pem /etc/etcd/ssl/

Check the certificates:

bash
ls -l /etc/etcd/ssl*

9. Sync the certificates to the other master nodes:

bash
scp -p /etc/etcd/ssl/* root@master-2:/etc/etcd/ssl
scp -p /etc/etcd/ssl/* root@master-3:/etc/etcd/ssl

2. Build the etcd cluster

First download the etcd binary package for the matching version. Official download URL:

1. Unpack the archive and install the etcd binaries:

bash
tar zxf etcd-v3.5.15-linux-amd64.tar.gz
cp -p etcd-v3.5.15-linux-amd64/etcd* /usr/local/bin/

2. Create the etcd cluster config file; each node's configuration differs, so configure them individually.

master-1 configuration:

bash
cat > /etc/etcd/etcd.conf << EOF
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/etc/etcd/data"
ETCD_LISTEN_PEER_URLS="https://16.32.15.115:2380"
ETCD_LISTEN_CLIENT_URLS="https://16.32.15.115:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://16.32.15.115:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://16.32.15.115:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://16.32.15.115:2380,etcd2=https://16.32.15.200:2380,etcd3=https://16.32.15.201:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

master-2 configuration:

bash
cat > /etc/etcd/etcd.conf << EOF
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/etc/etcd/data"
ETCD_LISTEN_PEER_URLS="https://16.32.15.200:2380"
ETCD_LISTEN_CLIENT_URLS="https://16.32.15.200:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://16.32.15.200:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://16.32.15.200:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://16.32.15.115:2380,etcd2=https://16.32.15.200:2380,etcd3=https://16.32.15.201:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

master-3 configuration:

bash
cat > /etc/etcd/etcd.conf << EOF
ETCD_NAME="etcd3"
ETCD_DATA_DIR="/etc/etcd/data"
ETCD_LISTEN_PEER_URLS="https://16.32.15.201:2380"
ETCD_LISTEN_CLIENT_URLS="https://16.32.15.201:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://16.32.15.201:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://16.32.15.201:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://16.32.15.115:2380,etcd2=https://16.32.15.200:2380,etcd3=https://16.32.15.201:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

Key settings:

  • ETCD_NAME: node name, unique within the cluster
  • ETCD_DATA_DIR: data directory
  • ETCD_LISTEN_PEER_URLS: peer listen address for cluster traffic
  • ETCD_LISTEN_CLIENT_URLS: client listen address
  • ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
  • ETCD_ADVERTISE_CLIENT_URLS: advertised client address
  • ETCD_INITIAL_CLUSTER: cluster node addresses
  • ETCD_INITIAL_CLUSTER_TOKEN: cluster token
  • ETCD_INITIAL_CLUSTER_STATE: cluster state when joining; new for a new cluster, existing to join an existing one

3. Create a systemd unit for etcd:

bash
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
 
[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/etc/etcd/data/
ExecStart=/usr/local/bin/etcd \\
  --cert-file=/etc/etcd/ssl/etcd.pem \\
  --key-file=/etc/etcd/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \\
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
EOF

Copy the etcd binaries and the systemd unit to the other master nodes:

bash
scp -p /usr/local/bin/etcd* root@master-2:/usr/local/bin/
scp -p /usr/local/bin/etcd* root@master-3:/usr/local/bin/

scp -p /usr/lib/systemd/system/etcd.service root@master-2:/usr/lib/systemd/system/
scp -p /usr/lib/systemd/system/etcd.service root@master-3:/usr/lib/systemd/system/

4. Start the cluster and enable it at boot (run on all three masters):

bash
systemctl enable etcd
systemctl start etcd
systemctl status etcd

3. Test connectivity to the cluster

bash
export ETCDCTL_API=3

/usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://16.32.15.115:2379,https://16.32.15.200:2379,https://16.32.15.201:2379  endpoint health

+---------------------------+--------+-------------+-------+
|         ENDPOINT          | HEALTH |    TOOK     | ERROR |
+---------------------------+--------+-------------+-------+
| https://16.32.15.115:2379 |   true | 19.402185ms |       |
| https://16.32.15.200:2379 |   true | 21.604449ms |       |
| https://16.32.15.201:2379 |   true | 53.427052ms |       |
+---------------------------+--------+-------------+-------+
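
The same TLS flags work for other etcdctl subcommands as well; for example, endpoint status also shows which member is the elected leader:

bash
/usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://16.32.15.115:2379,https://16.32.15.200:2379,https://16.32.15.201:2379 endpoint status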

IV. Deploying Nginx + Keepalived

1. Deploy and configure Nginx

Note: run these steps on both master-1 and master-2; they are identical on both hosts.

1. Install Nginx:

bash
yum install nginx-1.20.1 nginx-mod-stream -y

2. Edit the nginx configuration file:

bash
vim /etc/nginx/nginx.conf

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 16.32.15.115:6443;
       server 16.32.15.200:6443;
       server 16.32.15.201:6443;

    }
    server {
       listen 16443; 
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
}

3. Start it and enable it at boot:

bash
systemctl start nginx
systemctl enable nginx
systemctl status nginx

4. Confirm the port is listening:

bash
netstat -anput |grep :16443

2. Deploy and configure Keepalived

Note: run these steps on both master-1 and master-2; apart from the keepalived configuration contents, everything is identical.

1. Install keepalived:

bash
yum -y install keepalived

2. Edit the keepalived configuration file

master-1 configuration:

bash
cat > /etc/keepalived/keepalived.conf << EOF

global_defs {
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32      # NIC name (adjust to this host's interface)
    virtual_router_id 51
    priority 100
    advert_int 1         # VRRP advertisement interval
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    virtual_ipaddress {
        16.32.15.210/24  # VIP address
    }
    track_script {
        check_nginx
    }
}
EOF

master-2 configuration:

bash
cat > /etc/keepalived/keepalived.conf << EOF

global_defs {
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0      # NIC name (adjust to this host's interface)
    virtual_router_id 51
    priority 50
    advert_int 1         # VRRP advertisement interval
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    virtual_ipaddress {
        16.32.15.210/24  # VIP address
    }
    track_script {
        check_nginx
    }
}
EOF

3. Add the Nginx health-check script:

bash
vim /etc/keepalived/check_nginx.sh

#!/bin/bash
# Count running nginx processes (excluding grep itself and this script)
count=$(ps -ef | grep nginx | grep sbin | egrep -cv "grep|$$")
# If nginx is down, stop keepalived so the VIP fails over to the backup node
if [ $count -eq 0 ];then
    systemctl stop keepalived
fi

Grant execute permission:

bash
chmod +x /etc/keepalived/check_nginx.sh

4. Start it and enable it at boot:

bash
systemctl start keepalived
systemctl enable keepalived
systemctl status keepalived

3. Test automatic VIP failover

Stop the nginx service on master-1; the VIP should move to master-2.

1. On master-1

Check that the VIP is present (it should be, under normal conditions):

bash
hostname -I|grep 210

Stop the nginx service; the VIP will then fail over to master-2:

bash
systemctl stop nginx

After nginx stops, the VIP disappears from master-1.

2. On master-2

Check whether the VIP has moved to master-2:

bash
hostname -I|grep 210

Verification complete! From here on, Kubernetes components connect to the apiserver at https://16.32.15.210:16443.

V. Deploying the Kubernetes control-plane components

1. Download the binaries and prepare the environment

1. Download the binary package from the official site

Official download URL:

2. Unpack the archive and place the binaries under /usr/local/bin/:

bash
tar zxf kubernetes-server-linux-amd64.tar.gz
cp -p kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} /usr/local/bin/

3. Sync the binaries to the other K8s hosts:

bash
scp -p /usr/local/bin/kube* master-2:/usr/local/bin/
scp -p /usr/local/bin/kube* master-3:/usr/local/bin/

# Copy kubelet and kube-proxy to the worker node
scp -p kubernetes/server/bin/{kubelet,kube-proxy} node-1:/usr/local/bin/

4. Create the Kubernetes directories (on all hosts):

bash
mkdir -p /etc/kubernetes/{ssl,logs}

2. Deploy the kube-apiserver component

Tip: deploy on master-1, master-2, and master-3; the certificates and configs can be generated on one host first and then copied to the other masters.

1. Create the token.csv file

This file defines a preset bootstrap user. The kubelet presents the user's token during TLS bootstrapping, and the apiserver validates it against this file.

bash
# Format: token,username,UID,group
cat > /etc/kubernetes/token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

2. Create the apiserver certificate signing request file:

json
cat > ~/workdir/kube-apiserver-csr.json << EOF

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "16.32.15.115",
    "16.32.15.200",
    "16.32.15.201",
    "16.32.15.202",
    "16.32.15.210",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hebei",
      "L": "Handan",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF

Note: the hosts field must list every IP or domain name authorized to use this certificate. Since the K8s master cluster uses it, include all master IPs, plus the first IP of the Service network (the first address of the --service-cluster-ip-range passed to kube-apiserver, here 10.255.0.1).

3. Generate the certificate

Note: this uses the same CA as etcd; make sure ca.pem and ca-key.pem exist in the current directory!

bash
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

4. Move the certificates into place:

bash
cp -p ca*.pem /etc/kubernetes/ssl/
cp -p kube-apiserver*.pem /etc/kubernetes/ssl/

ls -l /etc/kubernetes/ssl/

5. Create the kube-apiserver config file

master-1 configuration:

bash
cat > /etc/kubernetes/kube-apiserver.conf <<EOF
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --anonymous-auth=false \\
  --bind-address=16.32.15.115 \\
  --secure-port=6443 \\
  --advertise-address=16.32.15.115 \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=api/all=true \\
  --enable-bootstrap-token-auth \\
  --service-cluster-ip-range=10.255.0.0/16 \\
  --token-auth-file=/etc/kubernetes/token.csv \\
  --service-node-port-range=30000-50000 \\
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \\
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \\
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \\
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
  --etcd-cafile=/etc/etcd/ssl/ca.pem \\
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \\
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
  --etcd-servers=https://16.32.15.115:2379,https://16.32.15.200:2379,https://16.32.15.201:2379 \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/etc/kubernetes/logs/kube-apiserver-audit.log \\
  --logging-format=json \\
  --event-ttl=1h \\
  --v=4"
EOF

master-2 configuration:

bash
cat > /etc/kubernetes/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --anonymous-auth=false \\
  --bind-address=16.32.15.200 \\
  --secure-port=6443 \\
  --advertise-address=16.32.15.200 \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=api/all=true \\
  --enable-bootstrap-token-auth \\
  --service-cluster-ip-range=10.255.0.0/16 \\
  --token-auth-file=/etc/kubernetes/token.csv \\
  --service-node-port-range=30000-50000 \\
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \\
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \\
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \\
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
  --etcd-cafile=/etc/etcd/ssl/ca.pem \\
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \\
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
  --etcd-servers=https://16.32.15.115:2379,https://16.32.15.200:2379,https://16.32.15.201:2379 \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/etc/kubernetes/logs/kube-apiserver-audit.log \\
  --logging-format=json \\
  --event-ttl=1h \\
  --v=4"
EOF

master-3 configuration:

bash
cat > /etc/kubernetes/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --anonymous-auth=false \\
  --bind-address=16.32.15.201 \\
  --secure-port=6443 \\
  --advertise-address=16.32.15.201 \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=api/all=true \\
  --enable-bootstrap-token-auth \\
  --service-cluster-ip-range=10.255.0.0/16 \\
  --token-auth-file=/etc/kubernetes/token.csv \\
  --service-node-port-range=30000-50000 \\
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \\
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \\
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \\
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
  --etcd-cafile=/etc/etcd/ssl/ca.pem \\
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \\
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
  --etcd-servers=https://16.32.15.115:2379,https://16.32.15.200:2379,https://16.32.15.201:2379 \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/etc/kubernetes/logs/kube-apiserver-audit.log \\
  --logging-format=json \\
  --event-ttl=1h \\
  --v=4"
EOF

6. Create a systemd unit for the apiserver:

bash
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

7. Sync the configs to the other master hosts:

bash
scp -p /usr/lib/systemd/system/kube-apiserver.service root@master-2:/usr/lib/systemd/system/
scp -p /usr/lib/systemd/system/kube-apiserver.service root@master-3:/usr/lib/systemd/system/

scp -p /etc/kubernetes/ssl/* master-2:/etc/kubernetes/ssl/
scp -p /etc/kubernetes/ssl/* master-3:/etc/kubernetes/ssl/

scp -p /etc/kubernetes/token.csv master-2:/etc/kubernetes/
scp -p /etc/kubernetes/token.csv master-3:/etc/kubernetes/

8. Start the apiserver and enable it at boot (run on all three masters):

bash
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

9. Access the apiserver:

bash
curl -k https://16.32.15.210:16443/

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}

Direct unauthenticated access to the apiserver is rejected, so the 401 response here is expected.
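
Once the admin client certificate is generated in the next section, the same endpoint answers a mutual-TLS request instead of returning 401 (a quick check, run from the directory holding the generated certificates):

bash
curl --cacert ca.pem --cert admin.pem --key admin-key.pem https://16.32.15.210:16443/version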

3. Deploy the kubectl component

1. Create the certificate signing request file:

json
cat > ~/workdir/admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hebei",
      "L": "Handan",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}
EOF

2. Generate the certificate, signed by the same CA the apiserver uses, so the apiserver will trust kubectl:

bash
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

3. Copy the generated certificates into place:

bash
cp admin*.pem /etc/kubernetes/ssl/
ls -l /etc/kubernetes/ssl/admin*.pem

4. Configure the kubeconfig

Create the default config directory:

bash
mkdir -p $HOME/.kube
chown $(id -u):$(id -g) $HOME/.kube

Set the cluster parameters:

bash
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://16.32.15.210:16443 --kubeconfig=$HOME/.kube/config

Set the client credentials:

bash
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=$HOME/.kube/config

Set and switch to the context:

bash
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=$HOME/.kube/config
kubectl config use-context kubernetes --kubeconfig=$HOME/.kube/config

View the current config:

bash
kubectl config view

Authorize the kubernetes certificate user to access the kubelet API:

bash
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

5. Check cluster component status:

bash
kubectl cluster-info
kubectl get componentstatuses

6. Sync the configs to the other master nodes:

bash
scp -rp ~/.kube master-2:~/
scp -rp ~/.kube master-3:~/

scp -p  /etc/kubernetes/ssl/admin*.pem master-2:/etc/kubernetes/ssl/
scp -p  /etc/kubernetes/ssl/admin*.pem master-3:/etc/kubernetes/ssl/

With the config synced over, kubectl can be used on those nodes directly:

bash
kubectl cluster-info

4. Deploy the controller-manager component

The controller-manager runs and maintains the cluster's controllers. It watches cluster state and acts to keep the actual state in line with the desired state: creating, updating, and deleting resources so applications and services run as defined.

1. Create the certificate signing request file:

json
cat > ~/workdir/kube-controller-manager-csr.json << EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "16.32.15.115",
      "16.32.15.200",
      "16.32.15.201",
      "16.32.15.210"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "Heibei",
        "L": "Handan",
        "O": "system:kube-controller-manager",
        "OU": "system"
      }
    ]
}
EOF

注意:"O": "system:kube-controller-manager",是K8S内置的 ClusterRoleBindings,system:kube-controller-manager 赋予 kube-controller-manager工作所需的权限!

查看内置system:kube-controller-manager ClusterRoleBindings

bash 复制代码
kubectl get clusterrolebindings|grep 'system:kube-controller-manager'

2. Generate the certificate:

bash
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

3. Copy the certificates into place:

bash
cp -p kube-controller-manager*.pem /etc/kubernetes/ssl/

4. Configure the kubeconfig

Set the cluster parameters:

bash
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://16.32.15.210:16443 --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig

Set the client credentials:

bash
kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig

Set and switch to the context:

bash
kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig

5. Create the controller-manager config file:

bash
cat > /etc/kubernetes/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--secure-port=10252 \\
  --bind-address=127.0.0.1 \\
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --service-cluster-ip-range=10.255.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --allocate-node-cidrs=true \\
  --cluster-cidr=10.0.0.0/16 \\
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --leader-elect=true \\
  --feature-gates=RotateKubeletServerCertificate=true \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --horizontal-pod-autoscaler-sync-period=10s \\
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \\
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \\
  --use-service-account-credentials=true \\
  --logging-format=json \\
  --cluster-signing-duration=87600h \\
  --v=3"
EOF

6. Create a systemd unit for the controller-manager:

bash
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

7. Sync the configs to the other master nodes:

bash
scp -rp /etc/kubernetes/ssl/kube-controller-manager*.pem master-2:/etc/kubernetes/ssl/
scp -rp /etc/kubernetes/ssl/kube-controller-manager*.pem master-3:/etc/kubernetes/ssl/

scp -rp /etc/kubernetes/kube-controller-manager.kubeconfig master-2:/etc/kubernetes/
scp -rp /etc/kubernetes/kube-controller-manager.kubeconfig master-3:/etc/kubernetes/

scp -rp /etc/kubernetes/kube-controller-manager.conf master-2:/etc/kubernetes/
scp -rp /etc/kubernetes/kube-controller-manager.conf master-3:/etc/kubernetes/

scp -rp /usr/lib/systemd/system/kube-controller-manager.service master-2:/usr/lib/systemd/system/
scp -rp /usr/lib/systemd/system/kube-controller-manager.service master-3:/usr/lib/systemd/system/

8. Start it and enable it at boot (run on all three masters):

bash
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl status kube-controller-manager.service
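
As a quick sanity check, the controller-manager's healthz endpoint on the secure port configured above (10252, bound to 127.0.0.1) should return ok:

bash
curl -k https://127.0.0.1:10252/healthz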

5. Deploy the kube-scheduler component

kube-scheduler places Pods onto suitable nodes. Based on predefined policies and resource availability it selects the best node to run each Pod, weighing factors such as node resources, Pod requirements, and affinity/anti-affinity rules, so that Pods are distributed evenly across the cluster and performance needs are met.

1. Create the certificate signing request file:

json
cat > ~/workdir/kube-scheduler-csr.json << EOF

{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "16.32.15.115",
      "16.32.15.200",
      "16.32.15.201",
      "16.32.15.210"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Hebei",
        "L": "Handan",
        "O": "system:kube-scheduler",
        "OU": "system"
      }
    ]
}
EOF

2. Generate the certificate:

bash
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

3. Copy the certificate files into place:

bash
cp -p kube-scheduler*.pem /etc/kubernetes/ssl/

4. Configure the kubeconfig

Set the cluster parameters:

bash
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://16.32.15.210:16443 --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig

Set the client credentials:

bash
kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig

Set and switch to the context:

bash
kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig
kubectl config use-context system:kube-scheduler --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig

5. Create the scheduler config file:

bash
cat > /etc/kubernetes/kube-scheduler.conf  << EOF
KUBE_SCHEDULER_OPTS="--bind-address=127.0.0.1 \\
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
--leader-elect=true \\
--logging-format=json \\
--v=2"
EOF

6. Create a systemd unit for the scheduler:

bash
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

7. Sync the configs to the other master nodes:

bash
scp -rp /etc/kubernetes/ssl/kube-scheduler*.pem master-2:/etc/kubernetes/ssl/
scp -rp /etc/kubernetes/ssl/kube-scheduler*.pem master-3:/etc/kubernetes/ssl/

scp -rp /etc/kubernetes/kube-scheduler.kubeconfig master-2:/etc/kubernetes/
scp -rp /etc/kubernetes/kube-scheduler.kubeconfig master-3:/etc/kubernetes/

scp -rp /etc/kubernetes/kube-scheduler.conf master-2:/etc/kubernetes/
scp -rp /etc/kubernetes/kube-scheduler.conf master-3:/etc/kubernetes/

scp -rp /usr/lib/systemd/system/kube-scheduler.service master-2:/usr/lib/systemd/system/
scp -rp /usr/lib/systemd/system/kube-scheduler.service master-3:/usr/lib/systemd/system/

8. Start it and enable it at boot (run on all three masters):

bash
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service
systemctl status kube-scheduler.service
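
Similarly, the scheduler's healthz endpoint should return ok; the config above does not override the port, so this assumes the default secure port 10259:

bash
curl -k https://127.0.0.1:10259/healthz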

VI. Deploying the Kubernetes worker-node components

1. Deploy the kubelet component

The kubelet on each node periodically calls the API server's REST interface to report its own status, which the API server then records in etcd. The kubelet also watches Pod information via the API server in order to manage the Pods on its node: creating, deleting, and updating them.

1. Get the token value from token.csv:

bash
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
echo ${BOOTSTRAP_TOKEN}

2. Configure the bootstrap kubeconfig

Set the cluster parameters:

bash
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://16.32.15.210:16443 --kubeconfig=kubelet-bootstrap.kubeconfig

Set the client credentials:

bash
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig

Set and switch to the context:

bash
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig

Bind the bootstrap role:

bash
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

3. Sync the configs to the worker node:

bash
scp -p kubelet-bootstrap.kubeconfig node-1:/etc/kubernetes/
scp -p /etc/kubernetes/ssl/ca.pem node-1:/etc/kubernetes/ssl/

4. Create the kubelet.json file

Note: all subsequent steps run on node-1; 16.32.15.202 is node-1's host IP!

bash
cat > /etc/kubernetes/kubelet.json << EOF
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "16.32.15.202",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}
EOF

5. Create the kubelet config file:

bash
cat > /etc/kubernetes/kubelet.conf << EOF
KUBELET_OPTS="--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/ssl \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet.json \\
  --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7 \\
  --logging-format=json \\
  --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock \\
  --v=2"
EOF

6. Create a systemd unit for the kubelet:

bash
mkdir /var/lib/kubelet
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=-/etc/kubernetes/kubelet.conf
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

7. Start it and enable it at boot:

bash
systemctl enable kubelet.service
systemctl start kubelet.service
systemctl status kubelet.service

8. After startup, the kubelet sends a CSR to the apiserver to request a certificate

Note: run this on any master node.

bash
kubectl get csr

Approve the CSR request (substitute the name shown by kubectl get csr):

bash
kubectl certificate approve node-csr-gCK454vfN_Wlj6z-HRvz3scijAzmkvO0CbgR3EyKI_U

Once the request is approved, the kubelet certificate files issued by the apiserver appear on the node:

bash
ll /etc/kubernetes/ssl/
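
At this point the node should also have registered with the cluster; it will report NotReady until the Calico network plugin is deployed later:

bash
kubectl get nodes -o wide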

2. Deploy the kube-proxy component

1. Create the certificate signing request file:

json
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hebei",
      "L": "Handan",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF

2. Generate the certificate:

bash
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

3. Configure the kubeconfig

Set the cluster parameters:

bash
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://16.32.15.210:16443 --kubeconfig=kube-proxy.kubeconfig

Set the client credentials:

bash
kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig

Set and switch to the context:

bash
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

4. Sync the configs to the worker node:

bash
scp -p kube-proxy.kubeconfig node-1:/etc/kubernetes/
scp -p kube-proxy*.pem node-1:/etc/kubernetes/ssl/

5. Create the kube-proxy config file

Note: the following steps run on node-1; 16.32.15.202 is node-1's host IP!

bash
cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 16.32.15.202
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.0.0.0/16
healthzBindAddress: 16.32.15.202:10256
kind: KubeProxyConfiguration
metricsBindAddress: 16.32.15.202:10249
mode: "ipvs"
EOF

6. Create a systemd unit for kube-proxy:

bash
mkdir /var/lib/kube-proxy
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy.yaml \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

7. Start it and enable it at boot:

bash
systemctl enable kube-proxy.service
systemctl start kube-proxy.service
systemctl status kube-proxy.service
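
Since kube-proxy runs in ipvs mode, the ipvsadm tool installed during initialization can confirm that virtual servers are being programmed (the kubernetes service IP 10.255.0.1 should appear):

bash
ipvsadm -Ln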

3. Deploy the Calico network plugin

The calico.yaml file is quite long, so I have put it on Gitee for download. Download URL:

1. Apply it:

bash
kubectl apply -f calico.yaml

2. Check the created resources:

bash
kubectl get pods  -n kube-system
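
Once the calico Pods are Running, the node should turn Ready; one way to wait for that:

bash
kubectl wait --for=condition=Ready nodes --all --timeout=300s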

4. Deploy the CoreDNS component

1. Create the coredns.yaml file:

bash
cat > coredns.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.9.3
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.255.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF

2. Apply it:

bash
kubectl apply -f coredns.yaml 

3. Check the created resources:

bash
kubectl get pods  -n kube-system

VII. Cluster functional tests

1. Test creating Pod and Service resources

1. Create the nginx.yaml file:

bash
cat > nginx.yaml << EOF
apiVersion: v1 
kind: Pod
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx 
    env: dev
spec:
  containers: 
  - name: nginx 
    ports:
    - containerPort: 80
    image: nginx
    imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc 
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
  selector:
    app: nginx
    env: dev
EOF

2. Apply it:

bash
kubectl apply -f nginx.yaml

3. Check the created resources:

bash
kubectl get pods,svc -o wide

4. Access the site in a browser

With the Pod running, the Service is reachable on any node at NodePort 30080, e.g. http://16.32.15.202:30080. If the Pods are running but the page is unreachable, check whether the kube-proxy component is healthy.
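
A command-line check of the NodePort, using node-1's IP and the nodePort 30080 defined above:

bash
curl -I http://16.32.15.202:30080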

2. Test CoreDNS resolution

1. Create a test pod. Use the busybox:1.28 image specifically; nslookup in other busybox versions is known to misbehave.

bash
kubectl run busybox --image=busybox:1.28 --restart=Never --rm -it -- sh

2. Test resolution:

bash
nslookup kubernetes.default.svc.cluster.local

nslookup nginx-svc.default.svc.cluster.local

A successful lookup returns the cluster IPs of the kubernetes and nginx-svc services.