Table of Contents
[2.2 Installing Docker](#2.2 Installing Docker)
[2.3.1 Generating the CA Certificate (all hosts)](#2.3.1 Generating the CA Certificate)
[2.3.2 Generating the Server Certificate (all hosts)](#2.3.2 Generating the Server Certificate)
[2.3.3 Generating the admin Certificate (all hosts)](#2.3.3 Generating the admin Certificate)
[2.3.4 Generating the kube-proxy Certificate](#2.3.4 Generating the kube-proxy Certificate)
[3. Deploying the Etcd Cluster](#3. Deploying the Etcd Cluster)
[3.1 Deploying the Etcd Node on k8s-master](#3.1 Deploying the Etcd Node on k8s-master)
[3.2 Deploying Etcd Nodes on k8s-node1 and k8s-node2](#3.2 Deploying Etcd Nodes on k8s-node1 and k8s-node2)
[3.3 Checking the Etcd Cluster Status](#3.3 Checking the Etcd Cluster Status)
[4. Deploying the Flannel Network](#4. Deploying the Flannel Network)
[4.1 Writing the Subnet Configuration into Etcd](#4.1 Writing the Subnet Configuration into Etcd)
[4.2 Configuring Flannel](#4.2 Configuring Flannel)
[4.4 Verifying that Flanneld Works](#4.4 Verifying that Flanneld Works)
[5. Deploying the Kubernetes Master Components](#5. Deploying the Kubernetes Master Components)
[5.1 Adding kubectl to the Command Environment](#5.1 Adding kubectl to the Command Environment)
[5.2 Creating the TLS Bootstrapping Token](#5.2 Creating the TLS Bootstrapping Token)
[5.3 Creating the Kubelet kubeconfig](#5.3 Creating the Kubelet kubeconfig)
[5.4 Creating the kube-proxy kubeconfig](#5.4 Creating the kube-proxy kubeconfig)
[5.5 Deploying kube-apiserver](#5.5 Deploying kube-apiserver)
[5.6 Deploying kube-controller-manager](#5.6 Deploying kube-controller-manager)
[5.7 Deploying kube-scheduler](#5.7 Deploying kube-scheduler)
[6. Deploying the Kubernetes Node Components](#6. Deploying the Kubernetes Node Components)
[6.2 Deploying kubelet](#6.2 Deploying kubelet)
[6.3 Deploying kube-proxy](#6.3 Deploying kube-proxy)
[6.4 Verifying the Node Components](#6.4 Verifying the Node Components)
1. Environment Preparation
Required binary packages download link (Baidu Netdisk): https://pan.baidu.com/s/1LHnJjn4mbG0dRoDzChVIfg?pwd=uz4m
**Extraction code:** uz4m
| Operating System | IP Address    | Hostname   | Components |
|------------------|---------------|------------|------------|
| CentOS 7.x       | 192.168.2.116 | k8s-master |            |
| CentOS 7.x       | 192.168.2.117 | k8s-node1  |            |
| CentOS 7.x       | 192.168.2.118 | k8s-node2  |            |
Note: at least 2 CPU cores and 2 GB of memory are recommended for every host.
2.1 Host Configuration
Set the hostname on each of the three hosts (the hostname command only affects the running session; use hostnamectl set-hostname to make the change persistent across reboots).
[root@localhost ~]# hostname k8s-master
[root@localhost ~]# bash
[root@k8s-master ~]#
[root@localhost ~]# hostname k8s-node1
[root@localhost ~]# bash
[root@k8s-node1 ~]#
[root@localhost ~]# hostname k8s-node2
[root@localhost ~]# bash
[root@k8s-node2 ~]#
Add name-resolution records to /etc/hosts and copy the file to the other two hosts.
[root@k8s-master ~]# cat << EOF >> /etc/hosts
192.168.2.116 k8s-master
192.168.2.117 k8s-node1
192.168.2.118 k8s-node2
EOF
[root@k8s-master ~]# scp /etc/hosts 192.168.2.117:/etc/
[root@k8s-master ~]# scp /etc/hosts 192.168.2.118:/etc/
2.2 Installing Docker
Install and configure Docker on all hosts.
[root@k8s-master ~]# yum -y install iptable* wget telnet lsof vim rsync lrzsz net-tools unzip
[root@k8s-master ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master ~]# yum clean all && yum makecache fast
[root@k8s-master ~]# yum -y install docker-ce
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# systemctl enable docker
[root@k8s-master ~]# cat << EOF > /etc/docker/daemon.json
{
"registry-mirrors": [
"https://dockerhub.azk8s.cn",
"https://hub-mirror.c.163.com"
]
}
EOF
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker
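As an optional sanity check (the exact output wording varies slightly between Docker versions), the mirror configuration can be read back from the daemon itself:
[root@k8s-master ~]# docker info | grep -A 3 "Registry Mirrors"    # should list the mirror URLs configured in daemon.json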
Kubernetes generates iptables rules when creating containers, so the default CentOS firewalld service must be replaced with iptables. Configure the firewall on all hosts.
[root@k8s-master ~]# systemctl stop firewalld
[root@k8s-master ~]# systemctl disable firewalld
[root@k8s-master ~]# systemctl start iptables
[root@k8s-master ~]# iptables -F
[root@k8s-master ~]# iptables -I INPUT -s 192.168.2.0/24 -j ACCEPT
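Rules added with iptables -I live only in memory and are lost on reboot. Assuming the iptables-services package installed earlier provides the iptables unit (as it does on CentOS 7), the rules can be made persistent:
[root@k8s-master ~]# systemctl enable iptables
[root@k8s-master ~]# service iptables save    # writes the current rules to /etc/sysconfig/iptables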
Disable SELinux.
[root@k8s-master ~]# sed -i '/^SELINUX=/s/enforcing/disabled/' /etc/selinux/config
[root@k8s-master ~]# setenforce 0
2.3 Generating TLS Certificates
The Kubernetes components encrypt their communication with TLS certificates. This lab uses CFSSL, CloudFlare's PKI toolkit, to generate the Certificate Authority and the other certificates. (Perform on all hosts.)
Kubernetes tools download link (Baidu Netdisk): https://pan.baidu.com/s/16GaKmbCBjWr8ZIAf3QCYNQ?pwd=62fn
**Extraction code:** 62fn
[root@k8s-master ~]# tar xzf kubernetes-server-linux-amd64.tar.gz
2.3.1 Generating the CA Certificate (all hosts)
CA certificate tools download link (Baidu Netdisk): https://pan.baidu.com/s/1HY_5YXpyFO9OKagyjeq2NA?pwd=zvi3
**Extraction code:** zvi3
Run the following commands to create a location for the certificates and install the certificate-generation tools.
[root@k8s-master ~]# cd /usr/local/bin/
[root@k8s-master bin]# rz    # upload the cfssl tools
[root@k8s-master bin]# mv cfssl_linux-amd64 ./cfssl
[root@k8s-master bin]# mv cfssljson_linux-amd64 ./cfssljson
[root@k8s-master bin]# mv cfssl-certinfo_linux-amd64 ./cfssl-certinfo
[root@k8s-master bin]# chmod +x ./*
[root@k8s-master bin]# ll
total 18808
-rwxr-xr-x. 1 root root 10376657 Jul 9 2020 cfssl
-rwxr-xr-x. 1 root root 6595195 Jul 9 2020 cfssl-certinfo
-rwxr-xr-x. 1 root root 2277873 Jul 9 2020 cfssljson
[root@k8s-master ~]# cfssl --help
Usage:
Available commands:
ocsprefresh
scan
genkey
ocspdump
ocspsign
ocspserve
sign
serve
gencert
selfsign
revoke
certinfo
version
info
print-defaults
bundle
gencrl
Top-level flags:
-allow_verification_with_non_compliant_keys
Allow a SignatureVerifier to use keys which are technically non-compliant with RFC6962.
-loglevel int
Log level (0 = DEBUG, 5 = FATAL) (default 1)
Run the following commands to create the CA configuration file and the CA signing request.
[root@k8s-master ~]# cat << EOF > ca-config.json
> {
> "signing": {
> "default": {
> "expiry": "87600h"
> },
> "profiles": {
> "kubernetes": {
> "expiry": "87600h",
> "usages": [
> "signing",
> "key encipherment",
> "server auth",
> "client auth"
> ]
> }
> }
> }
> }
> EOF
[root@k8s-master ~]# cat << EOF > ca-csr.json
> {
> "CN": "kubernetes",
> "key": {
> "algo": "rsa",
> "size": 2048
> },
> "names": [
> {
> "C": "CN",
> "L": "Beijing",
> "ST": "Beijing",
> "O": "k8s",
> "OU": "System"
> }
> ]
> }
> EOF
Run the following command to generate the CA certificate.
[root@k8s-master ~]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2023/08/10 19:44:09 [INFO] generating a new CA key and certificate from CSR
2023/08/10 19:44:09 [INFO] generate received request
2023/08/10 19:44:09 [INFO] received CSR
2023/08/10 19:44:09 [INFO] generating key: rsa-2048
2023/08/10 19:44:09 [INFO] encoded CSR
2023/08/10 19:44:09 [INFO] signed certificate with serial number 232408171082706122668724082483527707664314357277
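Optionally, the freshly generated CA can be inspected with the cfssl-certinfo tool installed above to confirm its subject and validity period (a quick check, not a required step):
[root@k8s-master ~]# cfssl-certinfo -cert ca.pem    # prints the subject, issuer and not_before/not_after fields as JSON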
2.3.2 Generating the Server Certificate (all hosts)
Run the following commands to create the server-csr.json file and generate the server certificate. The IP addresses in the file belong to the hosts that will use this certificate, so adjust them to your environment; 10.10.10.1 is the cluster IP of the built-in kubernetes Service.
[root@k8s-master ~]# vim server-csr.json
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"192.168.2.116",
"192.168.2.117",
"192.168.2.118",
"10.10.10.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
[root@k8s-master ~]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2023/08/10 19:57:50 [INFO] generate received request
2023/08/10 19:57:50 [INFO] received CSR
2023/08/10 19:57:50 [INFO] generating key: rsa-2048
2023/08/10 19:57:50 [INFO] encoded CSR
2023/08/10 19:57:50 [INFO] signed certificate with serial number 424188719705968634905526760201201991499922096108
2023/08/10 19:57:50 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
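Before moving on, it is worth confirming that the planned IPs and service names all landed in the certificate's SAN list; a quick optional check with openssl (shipped with CentOS 7):
[root@k8s-master ~]# openssl x509 -in server.pem -noout -text | grep -A 1 "Subject Alternative Name"    # should list 127.0.0.1, the three host IPs, 10.10.10.1 and the kubernetes.* names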
2.3.3 Generating the admin Certificate (all hosts)
Run the following commands to create the admin-csr.json file and generate the admin certificate.
[root@k8s-master ~]# vim admin-csr.json
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
[root@k8s-master ~]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin // the admin certificate is used by administrators to access the cluster
2023/08/10 20:03:12 [INFO] generate received request
2023/08/10 20:03:12 [INFO] received CSR
2023/08/10 20:03:12 [INFO] generating key: rsa-2048
2023/08/10 20:03:12 [INFO] encoded CSR
2023/08/10 20:03:12 [INFO] signed certificate with serial number 159836210599051633906118237113258532670720286284
2023/08/10 20:03:12 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2.3.4 Generating the kube-proxy Certificate
Run the following commands to create the kube-proxy-csr.json file and generate the certificate.
[root@k8s-master ~]# vim kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
[root@k8s-master ~]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2023/08/10 20:05:09 [INFO] generate received request
2023/08/10 20:05:09 [INFO] received CSR
2023/08/10 20:05:09 [INFO] generating key: rsa-2048
2023/08/10 20:05:10 [INFO] encoded CSR
2023/08/10 20:05:10 [INFO] signed certificate with serial number 59446791205648555156331506972188557314618920013
2023/08/10 20:05:10 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-master ~]# ls | grep -v pem | xargs -i rm {} // remove the json and csr files, keeping only the pem certificates
[root@k8s-master ~]# ll
total 32
-rw------- 1 root root 1679 Aug 10 20:03 admin-key.pem
-rw-r--r-- 1 root root 1399 Aug 10 20:03 admin.pem
-rw------- 1 root root 1679 Aug 10 19:44 ca-key.pem
-rw-r--r-- 1 root root 1359 Aug 10 19:44 ca.pem
-rw------- 1 root root 1679 Aug 10 20:05 kube-proxy-key.pem
-rw-r--r-- 1 root root 1403 Aug 10 20:05 kube-proxy.pem
drwxr-xr-x 4 root root 79 Feb 12 2020 kubernetes
-rw------- 1 root root 1679 Aug 10 19:57 server-key.pem
-rw-r--r-- 1 root root 1627 Aug 10 19:57 server.pem
3. Deploying the Etcd Cluster
Run the following commands to create the configuration directories.
[root@k8s-master ~]# mkdir /opt/kubernetes
[root@k8s-master ~]# mkdir /opt/kubernetes/{bin,cfg,ssl}
Upload the etcd-v3.3.18-linux-amd64.tar.gz package, then unpack it and copy the etcd binaries into the bin directory.
[root@k8s-master ~]# tar xf etcd-v3.3.18-linux-amd64.tar.gz
[root@k8s-master ~]# cd etcd-v3.3.18-linux-amd64
[root@k8s-master etcd-v3.3.18-linux-amd64]# mv etcd /opt/kubernetes/bin/
[root@k8s-master etcd-v3.3.18-linux-amd64]# mv etcdctl /opt/kubernetes/bin/
With the configuration directories created and the Etcd package in place, the Etcd cluster can be configured as follows.
3.1 Deploying the Etcd Node on k8s-master
Create the Etcd configuration file.
[root@k8s-master etcd-v3.3.18-linux-amd64]# vim /opt/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.116:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.116:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.116:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.116:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.2.116:2380,etcd02=https://192.168.2.117:2380,etcd03=https://192.168.2.118:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Create the systemd unit file.
[root@k8s-master etcd-v3.3.18-linux-amd64]# vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/kubernetes/ssl/server.pem \
--key-file=/opt/kubernetes/ssl/server-key.pem \
--peer-cert-file=/opt/kubernetes/ssl/server.pem \
--peer-key-file=/opt/kubernetes/ssl/server-key.pem \
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Copy the certificates that Etcd needs at startup.
[root@k8s-master ~]# ls
admin-key.pem ca-key.pem etcd-v3.3.18-linux-amd64 kube-proxy-key.pem kubernetes server.pem
admin.pem ca.pem etcd-v3.3.18-linux-amd64.tar.gz kube-proxy.pem server-key.pem
[root@k8s-master ~]# cp ca*.pem server*.pem /opt/kubernetes/ssl/    # the etcd unit file references both the CA and the server certificates
Start Etcd on the master node. If the start command appears to hang, press Ctrl+C; the etcd process has already started but times out while connecting to the other two members, which are not running yet. (It is easier to configure the node members below first and then start all of them.)
[root@k8s-master software]# systemctl start etcd
[root@k8s-master software]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
Check that Etcd started.
[root@k8s-master software]# ps aux | grep etcd
root 10755 1.0 1.1 10610764 46032 ? Ssl 14:50 0:01 /opt/kubernetes/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.2.116:2380 --listen-client-urls=https://192.168.2.116:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.2.116:2379 --initial-advertise-peer-urls=https://192.168.2.116:2380 --initial-cluster=etcd01=https://192.168.2.116:2380,etcd02=https://192.168.2.117:2380,etcd03=https://192.168.2.118:2380 --initial-cluster-token=etcd01=https://192.168.2.116:2380,etcd02=https://192.168.2.117:2380,etcd03=https://192.168.2.118:2380 --initial-cluster-state=new --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --peer-cert-file=/opt/kubernetes/ssl/server.pem --peer-key-file=/opt/kubernetes/ssl/server-key.pem --trusted-ca-file=/opt/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
root 10798 0.0 0.0 112828 980 pts/1 S+ 14:53 0:00 grep --color=auto etcd
3.2 Deploying Etcd Nodes on k8s-node1 and k8s-node2
Copy the Etcd files to the node hosts, then change the IP addresses in the configuration to match each host.
[root@k8s-master ~]# rsync -avcz /opt/kubernetes/* 192.168.2.117:/opt/kubernetes/
root@192.168.2.117's password:
sending incremental file list
bin/
bin/etcd
bin/etcdctl
bin/default.etcd/
bin/default.etcd/member/
bin/default.etcd/member/snap/
bin/default.etcd/member/snap/db
bin/default.etcd/member/wal/
bin/default.etcd/member/wal/0.tmp
bin/default.etcd/member/wal/0000000000000000-0000000000000000.wal
cfg/
cfg/etcd
ssl/
ssl/ca-key.pem
ssl/ca.pem
sent 14,065,864 bytes received 200 bytes 1,339,625.14 bytes/sec
total size is 168,388,923 speedup is 11.97
[root@k8s-master ~]# rsync -avcz /opt/kubernetes/* 192.168.2.118:/opt/kubernetes/
root@192.168.2.118's password:
sending incremental file list
bin/
bin/etcd
bin/etcdctl
bin/default.etcd/
bin/default.etcd/member/
bin/default.etcd/member/snap/
bin/default.etcd/member/snap/db
bin/default.etcd/member/wal/
bin/default.etcd/member/wal/0.tmp
bin/default.etcd/member/wal/0000000000000000-0000000000000000.wal
cfg/
cfg/etcd
ssl/
ssl/ca-key.pem
ssl/ca.pem
sent 14,065,864 bytes received 200 bytes 1,654,831.06 bytes/sec
total size is 168,388,923 speedup is 11.97
[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.117:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.117:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.117:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.117:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.2.116:2380,etcd02=https://192.168.2.117:2380,etcd03=https://192.168.2.118:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@k8s-node2 ~]# vim /opt/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.118:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.118:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.118:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.118:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.2.116:2380,etcd02=https://192.168.2.117:2380,etcd03=https://192.168.2.118:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Copy the systemd unit file.
[root@k8s-master software]# scp /usr/lib/systemd/system/etcd.service 192.168.2.117:/usr/lib/systemd/system/
root@192.168.2.117's password:
etcd.service 100% 994 1.8MB/s 00:00
[root@k8s-master software]# scp /usr/lib/systemd/system/etcd.service 192.168.2.118:/usr/lib/systemd/system/
root@192.168.2.118's password:
etcd.service 100% 994 1.8MB/s 00:00
Start Etcd on the node hosts.
[root@k8s-node1 ~]# systemctl start etcd
[root@k8s-node1 ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/etcd
[root@k8s-node2 ~]# systemctl start etcd
[root@k8s-node2 ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@k8s-node2 ~]# vim /opt/kubernetes/cfg/etcd
3.3 Checking the Etcd Cluster Status
Add the Etcd binaries to the global PATH. Run this on every node.
[root@k8s-master ~]# vim /etc/profile
export PATH=$PATH:/opt/kubernetes/bin
[root@k8s-master ~]# source /etc/profile
Check the health of the Etcd cluster.
[root@k8s-master ~]# cd /root/software/ssl/
[root@k8s-master ssl]# etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.2.116:2379,https://192.168.2.117:2379,https://192.168.2.118:2379" cluster-health
member 2e77788f6268c28d is healthy: got healthy result from https://192.168.2.117:2379
member 60b0a20770468ca4 is healthy: got healthy result from https://192.168.2.116:2379
member 980d2d199a3b6f16 is healthy: got healthy result from https://192.168.2.118:2379
cluster is healthy
This completes the Etcd cluster deployment.
4. Deploying the Flannel Network
Flannel is an overlay network: it encapsulates the original packets inside another network's packets for routing and forwarding, and it currently supports UDP, VXLAN, AWS VPC, GCE routing and other forwarding backends. Other mainstream solutions for multi-host container networking include tunnel schemes (Weave, Open vSwitch) and routing schemes (Calico).
4.1 Writing the Subnet Configuration into Etcd
On the master node, write the subnet configuration into Etcd for Flanneld to use.
[root@k8s-master ~]# cd /root/software/ssl/
[root@k8s-master ssl]# etcdctl -ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.2.116:2379,https://192.168.2.117:2379,https://192.168.2.118:2379" set /coreos.com/network/config '{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"} }'
{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"} }
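The value just written can be read back to confirm that Flanneld will see the expected network and backend (same etcdctl TLS flags as above; purely a verification step):
[root@k8s-master ssl]# etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.2.116:2379,https://192.168.2.117:2379,https://192.168.2.118:2379" get /coreos.com/network/config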
Upload the flannel-v0.12.0-linux-amd64.tar.gz package, unpack the Flannel binaries (the binary is named flanneld), and copy them to each node.
[root@k8s-master ~]# tar xf flannel-v0.12.0-linux-amd64.tar.gz
[root@k8s-master ~]# scp flanneld mk-docker-opts.sh 192.168.2.117:/opt/kubernetes/bin/
[root@k8s-master ~]# scp flanneld mk-docker-opts.sh 192.168.2.118:/opt/kubernetes/bin/
4.2 Configuring Flannel
Edit the flanneld configuration file on k8s-node1 and k8s-node2. The steps below use k8s-node1 as the example.
[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.2.116:2379,https://192.168.2.117:2379,https://192.168.2.118:2379 -etcd-cafile=/opt/kubernetes/ssl/ca.pem -etcd-certfile=/opt/kubernetes/ssl/server.pem -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
[root@k8s-node1 ~]# scp /opt/kubernetes/cfg/flanneld 192.168.2.118:/opt/kubernetes/cfg/flanneld
The authenticity of host '192.168.2.118 (192.168.2.118)' can't be established.
ECDSA key fingerprint is SHA256:Xw4oZiqfBLe+vo6o1blQqSAQlde5FbnrawBscx+/dh0.
ECDSA key fingerprint is MD5:fd:e9:93:a2:fe:a1:f1:15:8d:f2:d8:c9:31:35:8c:85.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.2.118' (ECDSA) to the list of known hosts.
root@192.168.2.118's password:
flanneld 100% 251 443.9KB/s 00:00
On k8s-node1 and k8s-node2, create a flanneld.service unit file to manage Flanneld.
[root@k8s-node1 ~]# cat <<EOF >/usr/lib/systemd/system/flanneld.service
> [Unit]
> Description=Flanneld overlay address etcd agent
> After=network-online.target network.target
> Before=docker.service
> [Service]
> Type=notify
> EnvironmentFile=/opt/kubernetes/cfg/flanneld
> ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
> ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
> Restart=on-failure
> [Install]
> WantedBy=multi-user.target
> EOF
[root@k8s-node1 ~]# scp /usr/lib/systemd/system/flanneld.service 192.168.2.118:/usr/lib/systemd/system/
root@192.168.2.118's password:
flanneld.service 100% 398 708.4KB/s 00:00
On k8s-node1 and k8s-node2, configure Docker to start on the Flannel-assigned subnet by editing the Docker unit file.
[root@k8s-node1 ~]# vim /usr/lib/systemd/system/docker.service
EnvironmentFile=/run/flannel/subnet.env    // add inside the [Service] section, so the Docker bridge hands out addresses in the same subnet as the flannel bridge
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS    // replace the original ExecStart; the $DOCKER_NETWORK_OPTIONS variable injects the Flannel bridge options
#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
4.3 Starting Flannel
Start the Flannel service on k8s-node1 and k8s-node2.
[root@k8s-node1 ~]# systemctl start flanneld
[root@k8s-node1 ~]# systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl restart docker
[root@k8s-node1 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.84.1 netmask 255.255.255.0 broadcast 172.17.84.255
ether 02:42:76:ad:ac:bb txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.84.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::3058:cff:fe3f:fe1a prefixlen 64 scopeid 0x20<link>
ether 32:58:0c:3f:fe:1a txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 8 overruns 0 carrier 0 collisions 0
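As a quick sanity check (a sketch, assuming flanneld and Docker were restarted as shown above), compare the options generated by mk-docker-opts.sh with the address docker0 actually received:
[root@k8s-node1 ~]# cat /run/flannel/subnet.env    # DOCKER_NETWORK_OPTIONS (--bip, --mtu) derived from the flannel subnet
[root@k8s-node1 ~]# ip -4 addr show docker0        # the docker0 address should fall inside that subnet (172.17.84.0/24 in this example)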
4.4 Verifying that Flanneld Works
From k8s-node2, test connectivity to the flannel.1 address (172.17.84.0) on k8s-node1. Output like the following means Flanneld is working.
[root@k8s-node2 ~]# ping 172.17.84.0
PING 172.17.84.0 (172.17.84.0) 56(84) bytes of data.
64 bytes from 172.17.84.0: icmp_seq=1 ttl=64 time=0.515 ms
64 bytes from 172.17.84.0: icmp_seq=2 ttl=64 time=0.206 ms
64 bytes from 172.17.84.0: icmp_seq=3 ttl=64 time=0.226 ms
^C
--- 172.17.84.0 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.206/0.315/0.515/0.142 ms
This completes the Flannel configuration on the node hosts.
5. Deploying the Kubernetes Master Components
The binaries needed for a binary Kubernetes installation can be downloaded from https://github.com/kubernetes/kubernetes/releases: choose the desired version and follow its CHANGELOG page to the binary downloads.
Perform the following steps on the k8s-master host to deploy the master components.
5.1 Adding kubectl to the Command Environment
Upload the kubernetes-server-linux-amd64.tar.gz package, unpack it, and add the kubectl command to the environment.
[root@k8s-master ~]# tar xf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master ~]# cd kubernetes/server/bin/
[root@k8s-master bin]# cp kubectl /opt/kubernetes/bin/
5.2 Creating the TLS Bootstrapping Token
Run the following commands to create the TLS bootstrapping token.
[root@k8s-master bin]# cd /opt/kubernetes/
[root@k8s-master kubernetes]# export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
[root@k8s-master kubernetes]# cat > token.csv <<EOF
> ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
> EOF
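token.csv follows the token,user,uid,group format that kube-apiserver expects for its --token-auth-file option; the first field is the random token generated above (every run produces a different value):
[root@k8s-master kubernetes]# cat token.csv    # <random-token>,kubelet-bootstrap,10001,"system:kubelet-bootstrap"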
5.3 Creating the Kubelet kubeconfig
Run the following commands to create the kubelet kubeconfig.
[root@k8s-master kubernetes]# export KUBE_APISERVER="https://192.168.2.116:6443"
(1) Set the cluster parameters
[root@k8s-master kubernetes]# cd /root/software/ssl/
[root@k8s-master ssl]# kubectl config set-cluster kubernetes \
> --certificate-authority=./ca.pem \
> --embed-certs=true \
> --server=${KUBE_APISERVER} \
> --kubeconfig=bootstrap.kubeconfig
Cluster "kubernetes" set.
(2) Set the client authentication parameters
[root@k8s-master ssl]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=bootstrap.kubeconfig
User "kubelet-bootstrap" set.
(3) Set the context parameters
[root@k8s-master ssl]# kubectl config set-context default \
> --cluster=kubernetes \
> --user=kubelet-bootstrap \
> --kubeconfig=bootstrap.kubeconfig
Context "default" created.
(4) Set the default context
[root@k8s-master ssl]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Switched to context "default".
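Optionally, the generated bootstrap.kubeconfig can be reviewed before it is shipped to the nodes; kubectl prints the cluster, user and context entries while redacting the embedded certificate data:
[root@k8s-master ssl]# kubectl config view --kubeconfig=bootstrap.kubeconfig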
5.4 Creating the kube-proxy kubeconfig
Run the following commands to create the kube-proxy kubeconfig.
[root@k8s-master ssl]# kubectl config set-cluster kubernetes \
> --certificate-authority=./ca.pem \
> --embed-certs=true \
> --server=${KUBE_APISERVER} \
> --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.
[root@k8s-master ssl]# kubectl config set-credentials kube-proxy \
> --client-certificate=./kube-proxy.pem \
> --client-key=./kube-proxy-key.pem \
> --embed-certs=true \
> --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
[root@k8s-master ssl]# kubectl config set-context default \
> --cluster=kubernetes \
> --user=kube-proxy \
> --kubeconfig=kube-proxy.kubeconfig
Context "default" created.
[root@k8s-master ssl]# kubectl config use-context default \
> --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".
5.5 Deploying kube-apiserver
Run the following commands to deploy kube-apiserver.
[root@k8s-master ssl]# cd /root/kubernetes/server/bin/
[root@k8s-master bin]# cp kube-controller-manager kube-scheduler kube-apiserver /opt/kubernetes/bin/
[root@k8s-master bin]# cp /opt/kubernetes/token.csv /opt/kubernetes/cfg/
[root@k8s-master bin]# cd /opt/kubernetes/bin/
Upload master.zip to the current directory and unpack it; it provides the apiserver.sh, controller-manager.sh and scheduler.sh scripts used below.
[root@k8s-master bin]# ./apiserver.sh 192.168.2.116 https://192.168.2.116:2379,https://192.168.2.117:2379,https://192.168.2.118:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
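The apiserver.sh script ships in master.zip and its contents are not reproduced here, but assuming it starts kube-apiserver on the default secure port 6443 (which matches the KUBE_APISERVER address used earlier), a quick sanity check looks like this:
[root@k8s-master bin]# ps aux | grep kube-apiserver    # the process should be running with the etcd endpoints passed above
[root@k8s-master bin]# netstat -lntp | grep 6443       # the secure port should be listening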
5.6 Deploying kube-controller-manager
Run the following commands to deploy kube-controller-manager.
[root@k8s-master bin]# sh controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
5.7 Deploying kube-scheduler
Run the following commands to deploy kube-scheduler.
[root@k8s-master bin]# sh scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
5.8 Checking that the Components Are Running
Run the following command to check that the components are healthy.
[root@k8s-master bin]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
6. Deploying the Kubernetes Node Components
With the master components deployed, the node components can be deployed next. Follow the steps below in order.
6.1 Preparing the Environment
Run the following commands to prepare the deployment environment for the node components.
Run these on the k8s-master host:
[root@k8s-master ~]# cd /root/software/ssl/
[root@k8s-master ssl]# scp *kubeconfig 192.168.2.117:/opt/kubernetes/cfg/
root@192.168.2.117's password:
bootstrap.kubeconfig 100% 2167 2.6MB/s 00:00
kube-proxy.kubeconfig 100% 6269 8.6MB/s 00:00
[root@k8s-master ssl]# scp *kubeconfig 192.168.2.118:/opt/kubernetes/cfg/
root@192.168.2.118's password:
bootstrap.kubeconfig 100% 2167 3.1MB/s 00:00
kube-proxy.kubeconfig 100% 6269 7.5MB/s 00:00
[root@k8s-master ssl]# cd /root/kubernetes/server/bin/
[root@k8s-master bin]# scp kubelet kube-proxy 192.168.2.117:/opt/kubernetes/bin
root@192.168.2.117's password:
kubelet 100% 106MB 129.4MB/s 00:00
kube-proxy 100% 36MB 134.3MB/s 00:00
[root@k8s-master bin]# scp kubelet kube-proxy 192.168.2.118:/opt/kubernetes/bin
root@192.168.2.118's password:
kubelet 100% 106MB 120.3MB/s 00:00
kube-proxy 100% 36MB 119.5MB/s 00:00
[root@k8s-master bin]# kubectl create clusterrolebinding kubelet-bootstrap \
> --clusterrole=system:node-bootstrapper \
> --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
[root@k8s-master bin]# kubectl describe clusterrolebinding kubelet-bootstrap
Name: kubelet-bootstrap
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: system:node-bootstrapper
Subjects:
Kind Name Namespace
---- ---- ---------
User kubelet-bootstrap
6.2 Deploying kubelet
Run the following commands to deploy kubelet. Perform them on both k8s-node1 and k8s-node2.
[root@k8s-node1 ~]# cd /opt/kubernetes/bin/
Upload node.zip to the current directory.
[root@k8s-node1 bin]# unzip node.zip
Archive: node.zip
inflating: kubelet.sh
inflating: proxy.sh
[root@k8s-node1 bin]# chmod +x *.sh
[root@k8s-node1 bin]# sh kubelet.sh 192.168.2.117 192.168.2.254
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-node2 bin]# unzip node.zip
Archive: node.zip
inflating: kubelet.sh
inflating: proxy.sh
[root@k8s-node2 bin]# chmod +x *.sh
[root@k8s-node2 bin]# sh kubelet.sh 192.168.2.118 192.168.2.254
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
6.3 Deploying kube-proxy
Run the following commands to deploy kube-proxy. Perform them on both k8s-node1 and k8s-node2.
[root@k8s-node1 bin]# sh proxy.sh 192.168.2.117
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@k8s-node2 bin]# sh proxy.sh 192.168.2.118
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
6.4 Verifying the Node Components
Run the following command to verify that the node components were installed successfully.
[root@k8s-node2 bin]# ps -ef | grep kube
root 4859 1 1 14:51 ? 00:01:31 /opt/kubernetes/bin/etcd --name=etcd03 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.2.118:2380 --listen-client-urls=https://192.168.2.118:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.2.118:2379 --initial-advertise-peer-urls=https://192.168.2.118:2380 --initial-cluster=etcd01=https://192.168.2.116:2380,etcd02=https://192.168.2.117:2380,etcd03=https://192.168.2.118:2380 --initial-cluster-token=etcd01=https://192.168.2.116:2380,etcd02=https://192.168.2.117:2380,etcd03=https://192.168.2.118:2380 --initial-cluster-state=new --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --peer-cert-file=/opt/kubernetes/ssl/server.pem --peer-key-file=/opt/kubernetes/ssl/server-key.pem --trusted-ca-file=/opt/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
root 5190 1 0 15:59 ? 00:00:01 /opt/kubernetes/bin/flanneld --ip-masq
root 9001 1 0 16:45 ? 00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --address=192.168.2.118 --hostname-override=192.168.2.118 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --experimental-bootstrap-kubeconfig=/opt/kubrnetes/cfg/bootstrap.kubeconfig --cert-dir=/opt/kubernetes/ssl --cluster-dns=192.168.2.254 --cluster-domain=cluster.local --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root 9236 1 0 16:47 ? 00:00:00 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.2.118 --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
root 9365 2753 0 16:48 pts/0 00:00:00 grep --color=auto kube
6.5 Approving the Automatically Requested Certificates
Once the node components are deployed, the master receives certificate signing requests from the nodes; approving them lets the nodes join the cluster.
[root@k8s-master bin]# kubectl get csr //list the pending certificate signing requests
NAME AGE REQUESTOR CONDITION
node-csr-8l5R966htJ1yECVdKq97-yDX25_KREynxrskUFs_ZIs 8m26s kubelet-bootstrap Pending
node-csr-D9o_6AXRpMqRnLU2O0riqbpylNWZhZ6PD0aP6voiC_c 8m27s kubelet-bootstrap Pending
node-csr-nTHbHBv3Wpsk5f1HuaaTEzw0OD6CK5okqnuwFid7rhk 5m21s kubelet-bootstrap Pending
[root@k8s-master bin]# kubectl certificate approve node-csr-8l5R966htJ1yECVdKq97-yDX25_KREynxrskUFs_ZIs // approve the request so the node can join the cluster; substitute your own CSR names
certificatesigningrequest.certificates.k8s.io/node-csr-8l5R966htJ1yECVdKq97-yDX25_KREynxrskUFs_ZIs approved
[root@k8s-master bin]# kubectl certificate approve node-csr-D9o_6AXRpMqRnLU2O0riqbpylNWZhZ6PD0aP6voiC_c
certificatesigningrequest.certificates.k8s.io/node-csr-D9o_6AXRpMqRnLU2O0riqbpylNWZhZ6PD0aP6voiC_c approved
[root@k8s-master bin]# kubectl certificate approve node-csr-nTHbHBv3Wpsk5f1HuaaTEzw0OD6CK5okqnuwFid7rhk
certificatesigningrequest.certificates.k8s.io/node-csr-nTHbHBv3Wpsk5f1HuaaTEzw0OD6CK5okqnuwFid7rhk approved
[root@k8s-master bin]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.2.117 Ready <none> 2m41s v1.17.3
192.168.2.118 Ready <none> 39s v1.17.3
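In a lab where every pending CSR is known to come from your own nodes, the per-request approvals above can be collapsed into a single pass (a convenience sketch; not advisable anywhere untrusted nodes could be requesting certificates):
[root@k8s-master bin]# kubectl get csr -o name | xargs kubectl certificate approve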
7. Creating an Nginx Service with a Deployment
Create the Deployment manifest.
[root@k8s-master ~]# vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.4
        ports:
        - containerPort: 80
Create the nginx-deployment application.
[root@k8s-master ~]# kubectl create -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
View the Deployment details.
[root@k8s-master ~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 4m49s
View the Pods created by the Deployment, then the detailed Deployment information.
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-fc75999cc-f5lvg 1/1 Running 0 4m52s
nginx-deployment-fc75999cc-fdpsm 1/1 Running 0 4m52s
nginx-deployment-fc75999cc-rmblk 1/1 Running 0 4m52s
[root@k8s-master ~]# kubectl describe deployment nginx-deployment
Name: nginx-deployment
Namespace: default
CreationTimestamp: Fri, 18 Aug 2023 16:54:56 +0800
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=nginx
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.19.4
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-fc75999cc (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 5m42s deployment-controller Scaled up replica set nginx-deployment-fc75999cc to 3
Check the Pod status.
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-fc75999cc-f5lvg 1/1 Running 0 6m8s
nginx-deployment-fc75999cc-fdpsm 1/1 Running 0 6m8s
nginx-deployment-fc75999cc-rmblk 1/1 Running 0 6m8s
View the detailed status information of a specific Pod.
[root@k8s-master ~]# kubectl describe pod nginx-deployment-fc75999cc-f5lvg
Name: nginx-deployment-fc75999cc-f5lvg
Namespace: default
Node: 192.168.2.117/192.168.2.117
Start Time: Fri, 18 Aug 2023 16:54:56 +0800
Labels: app=nginx
pod-template-hash=fc75999cc
Annotations: <none>
Status: Running
IP: 172.17.84.2
IPs:
IP: 172.17.84.2
Controlled By: ReplicaSet/nginx-deployment-fc75999cc
Containers:
nginx:
Container ID: docker://f36134e89b059ebeb214d8ebc0ed3625af9e2a4ba8aaf27542fe1f122e832cef
Image: nginx:1.19.4
Image ID: docker-pullable://nginx@sha256:c3a1592d2b6d275bef4087573355827b200b00ffc2d9849890a4f3aa2128c4ae
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 18 Aug 2023 16:59:34 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-frzl2 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-frzl2:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-frzl2
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/nginx-deployment-fc75999cc-f5lvg to 192.168.2.117
Warning Failed 4m25s kubelet, 192.168.2.117 Failed to pull image "nginx:1.19.4": rpc error: code = Unknown desc = context canceled
Warning Failed 4m25s kubelet, 192.168.2.117 Error: ErrImagePull
Normal BackOff 4m25s kubelet, 192.168.2.117 Back-off pulling image "nginx:1.19.4"
Warning Failed 4m25s kubelet, 192.168.2.117 Error: ImagePullBackOff
Normal Pulling 4m14s (x2 over 6m47s) kubelet, 192.168.2.117 Pulling image "nginx:1.19.4"
Normal Pulled 2m12s kubelet, 192.168.2.117 Successfully pulled image "nginx:1.19.4"
Normal Created 2m12s kubelet, 192.168.2.117 Created container nginx
Normal Started 2m12s kubelet, 192.168.2.117 Started container nginx
[root@k8s-master ~]# kubectl get pod -o wide #all Pods created successfully, status Running
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-fc75999cc-f5lvg 1/1 Running 0 7m30s 172.17.84.2 192.168.2.117 <none> <none>
nginx-deployment-fc75999cc-fdpsm 1/1 Running 0 7m30s 172.17.34.2 192.168.2.118 <none> <none>
nginx-deployment-fc75999cc-rmblk 1/1 Running 0 7m30s 172.17.84.3 192.168.2.117 <none> <none>
Test access to a Pod.
[root@k8s-node1 bin]# elinks --dump http://172.17.84.3
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to [1]nginx.org.
Commercial support is available at [2]nginx.com.
Thank you for using nginx.
References
Visible links
1. http://nginx.org/
2. http://nginx.com/
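As an optional follow-up sketch (not shown in the transcript above), the Deployment controller can be watched reconciling replicas by scaling the application; the additional nginx Pods should be scheduled across the nodes within a few seconds:
[root@k8s-master ~]# kubectl scale deployment nginx-deployment --replicas=5
[root@k8s-master ~]# kubectl get pod -o wide    # five Pods should now be listed as Running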