Detailed k8s Installation Tutorial (Part 1)

Environment Initialization

1. Check the operating system version

Last login: Mon Mar 18 08:57:10 2024
[root@controller ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
[root@controller ~]# vi /etc/hosts

2. Time synchronization

Kubernetes requires the clocks of all cluster nodes to be precisely synchronized; here the chronyd service is used to sync time from the network.

In production, configuring an internal time-sync server is recommended.

[root@controller ~]# date
2024年 03月 18日 星期一 09:10:46 CST
[root@controller ~]# systemctl status chronyd
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since 一 2024-03-18 08:55:47 CST; 15min ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
  Process: 874 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
  Process: 775 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 783 (chronyd)
    Tasks: 1
   Memory: 1.1M
   CGroup: /system.slice/chronyd.service
           └─783 /usr/sbin/chronyd

3月 18 08:55:44 controller chronyd[783]: chronyd version 3.4 starting (+CMD...)
3月 18 08:55:44 controller chronyd[783]: Frequency -2.478 +/- 3.855 ppm rea...t
3月 18 08:55:47 controller systemd[1]: Started NTP client/server.
3月 18 08:55:57 controller chronyd[783]: Selected source 162.159.200.1
3月 18 08:55:57 controller chronyd[783]: System clock wrong by -43.694698 s...d
3月 18 08:55:13 controller chronyd[783]: System clock was stepped by -43.69...s
3月 18 08:55:15 controller chronyd[783]: Selected source 78.46.102.180
3月 18 08:56:20 controller chronyd[783]: Source 119.28.183.184 replaced wit...5
3月 18 08:58:30 controller chronyd[783]: Selected source 193.182.111.141
3月 18 09:02:49 controller chronyd[783]: Selected source 117.80.112.205
Hint: Some lines were ellipsized, use -l to show in full.
[root@controller ~]# systemctl start chronyd
[root@controller ~]# systemctl enable chronyd
[root@controller ~]# systemctl status chronyd
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since 一 2024-03-18 08:55:47 CST; 15min ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
 Main PID: 783 (chronyd)
   CGroup: /system.slice/chronyd.service
           └─783 /usr/sbin/chronyd

3月 18 08:55:44 controller chronyd[783]: chronyd version 3.4 starting (+CMD...)
3月 18 08:55:44 controller chronyd[783]: Frequency -2.478 +/- 3.855 ppm rea...t
3月 18 08:55:47 controller systemd[1]: Started NTP client/server.
3月 18 08:55:57 controller chronyd[783]: Selected source 162.159.200.1
3月 18 08:55:57 controller chronyd[783]: System clock wrong by -43.694698 s...d
3月 18 08:55:13 controller chronyd[783]: System clock was stepped by -43.69...s
3月 18 08:55:15 controller chronyd[783]: Selected source 78.46.102.180
3月 18 08:56:20 controller chronyd[783]: Source 119.28.183.184 replaced wit...5
3月 18 08:58:30 controller chronyd[783]: Selected source 193.182.111.141
3月 18 09:02:49 controller chronyd[783]: Selected source 117.80.112.205
Hint: Some lines were ellipsized, use -l to show in full.
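
In production, chronyd would typically point at that internal time server rather than the public pool. A minimal sketch, assuming a placeholder hostname ntp.internal.example.com:

# Point chronyd at the internal server (placeholder hostname; also remove the default pool/server lines)
echo "server ntp.internal.example.com iburst" >> /etc/chrony.conf
systemctl restart chronyd
# Verify the source is reachable and selected
chronyc sources -v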

3. Disable the iptables and firewalld services

Kubernetes and Docker generate a large number of iptables rules at runtime. To keep the system's own firewall rules from interfering with them, disable the system firewall services outright.

[root@controller ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since 一 2024-03-18 08:55:49 CST; 16min ago
     Docs: man:firewalld(1)
 Main PID: 891 (firewalld)
   CGroup: /system.slice/firewalld.service
           └─891 /usr/bin/python2 -Es /usr/sbin/firewalld --nofork --nopid

3月 18 08:55:16 controller firewalld[891]: WARNING: COMMAND_FAILED: '/usr/s....
3月 18 08:55:16 controller firewalld[891]: WARNING: COMMAND_FAILED: '/usr/s....
3月 18 08:55:16 controller firewalld[891]: WARNING: COMMAND_FAILED: '/usr/s....
3月 18 08:55:16 controller firewalld[891]: WARNING: COMMAND_FAILED: '/usr/s....
3月 18 08:55:16 controller firewalld[891]: WARNING: COMMAND_FAILED: '/usr/s....
3月 18 08:55:16 controller firewalld[891]: WARNING: COMMAND_FAILED: '/usr/s....
3月 18 08:55:16 controller firewalld[891]: WARNING: COMMAND_FAILED: '/usr/s....
3月 18 08:55:16 controller firewalld[891]: WARNING: COMMAND_FAILED: '/usr/s....
3月 18 08:55:17 controller firewalld[891]: WARNING: COMMAND_FAILED: '/usr/s....
3月 18 08:55:17 controller firewalld[891]: WARNING: COMMAND_FAILED: '/usr/s....
Hint: Some lines were ellipsized, use -l to show in full.
[root@controller ~]# systemctl stop firewalld.service
[root@controller ~]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@controller ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead) since 一 2024-03-18 09:12:05 CST; 10s ago
     Docs: man:firewalld(1)
 Main PID: 891 (code=exited, status=0/SUCCESS)

3月 18 08:55:16 controller firewalld[891]: WARNING: COMMAND_FAILED: '/usr/s....
3月 18 08:55:16 controller firewalld[891]: WARNING: COMMAND_FAILED: '/usr/s....
3月 18 08:55:16 controller firewalld[891]: WARNING: COMMAND_FAILED: '/usr/s....
3月 18 08:55:16 controller firewalld[891]: WARNING: COMMAND_FAILED: '/usr/s....
3月 18 08:55:16 controller firewalld[891]: WARNING: COMMAND_FAILED: '/usr/s....
3月 18 08:55:16 controller firewalld[891]: WARNING: COMMAND_FAILED: '/usr/s....
3月 18 08:55:17 controller firewalld[891]: WARNING: COMMAND_FAILED: '/usr/s....
3月 18 08:55:17 controller firewalld[891]: WARNING: COMMAND_FAILED: '/usr/s....
3月 18 09:12:04 controller systemd[1]: Stopping firewalld - dynamic firewal....
3月 18 09:12:05 controller systemd[1]: Stopped firewalld - dynamic firewall....
Hint: Some lines were ellipsized, use -l to show in full.
[root@controller ~]# systemctl status iptables.service
Unit iptables.service could not be found.
[root@controller ~]# systemctl status iptables
Unit iptables.service could not be found.
[root@controller ~]# systemctl stop iptables
Failed to stop iptables.service: Unit iptables.service not loaded.

4. Disable SELinux

SELinux is a security service in Linux; if it is left enabled, all kinds of odd problems crop up during cluster installation.

[root@controller ~]# vi /etc/selinux/config
[root@controller ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
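
Note that editing /etc/selinux/config only takes effect after a reboot. To also stop enforcement for the current session, a common approach is:

# Switch SELinux to permissive mode immediately (lasts until reboot)
setenforce 0
# Confirm the runtime mode
getenforce
# Non-interactive equivalent of the vi edit above
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config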

5. Hostname resolution

To make it easy for cluster nodes to reach each other by name, configure hostname resolution here; in production an internal DNS server is recommended.

[root@controller ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.182.132 master
192.168.182.134 node1
192.168.182.135 node2
[root@controller ~]#
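
The same entries must exist on every node. One way to push the file from the master, assuming the node IPs above and root SSH access:

# Copy /etc/hosts to both worker nodes (IPs taken from this tutorial)
for host in 192.168.182.134 192.168.182.135; do
  scp /etc/hosts root@${host}:/etc/hosts
done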

6. Disable the swap partition

Swap is virtual-memory space: once physical memory is exhausted, disk space is used as if it were memory. Having swap enabled can hurt system performance badly, so Kubernetes requires swap to be disabled on every node. If swap really cannot be turned off for some reason, that must be declared with explicit parameters during cluster installation.

Edit the partition configuration file /etc/fstab and comment out the swap line:

[root@master ~]# vi /etc/fstab
[root@master ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Thu Aug 11 09:40:08 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos_controller-root /                       xfs     defaults        0 0
UUID=7b5e8cf6-8498-4f69-a342-8411502e054e /boot                   xfs     defaults        0 0
# /dev/mapper/centos_controller-swap swap                    swap    defaults        0 0
[root@master ~]#
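
Commenting out the fstab entry only disables swap from the next boot onward; to turn it off for the running system as well:

# Disable all active swap devices immediately
swapoff -a
# Verify -- the Swap row should read 0
free -m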

7. Adjust the Linux kernel parameters

Add the bridge-netfilter and IP-forwarding settings that Kubernetes requires:

[root@master sysctl.d]# vi 99-sysctl.conf
[root@master sysctl.d]# cat 99-sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# Reload the configuration
[root@master sysctl.d]# sysctl -p
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
[root@master sysctl.d]# modprobe br_netfilter
# Check that the bridge netfilter module loaded successfully
[root@master sysctl.d]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter

[root@master sysctl.d]#
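
modprobe only loads br_netfilter for the current boot. To have it loaded automatically on every boot, one option is a modules-load.d entry:

# Have systemd load br_netfilter at every boot
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
# Re-apply all sysctl settings once the module is present
sysctl --system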

8. Configure IPVS

Kubernetes Services have two proxy modes, one based on iptables and one based on IPVS. Comparing the two, IPVS performs noticeably better, but using it requires loading the IPVS kernel modules manually.

# 1. Install ipset and ipvsadm
[root@master sysctl.d]# yum install ipset ipvsadm -y
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
软件包 ipset-7.1-1.el7.x86_64 已安装并且是最新版本
正在解决依赖关系
--> 正在检查事务
---> 软件包 ipvsadm.x86_64.0.1.27-8.el7 将被 安装
--> 解决依赖关系完成

依赖关系解决

================================================================================
 Package           架构             版本                   源              大小
================================================================================
正在安装:
 ipvsadm           x86_64           1.27-8.el7             base            45 k

事务概要
================================================================================
安装  1 软件包

总下载量:45 k
安装大小:75 k
Downloading packages:
ipvsadm-1.27-8.el7.x86_64.rpm                              |  45 kB   00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  正在安装    : ipvsadm-1.27-8.el7.x86_64                                   1/1
  验证中      : ipvsadm-1.27-8.el7.x86_64                                   1/1

已安装:
  ipvsadm.x86_64 0:1.27-8.el7

完毕!
[root@master sysctl.d]#
# 2. Write the modules that need to be loaded into a script file

[root@master sysctl.d]# cat <<EOF> /etc/sysconfig/modules/ipvs.modules
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack_ipv4
> EOF
[root@master sysctl.d]# cat /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
# 3. Make the script executable
[root@master sysctl.d]# chmod +x /etc/sysconfig/modules/ipvs.modules
# 4. Run the script
[root@master sysctl.d]# /bin/bash /etc/sysconfig/modules/ipvs.modules
# 5. Check that the modules loaded successfully
[root@master sysctl.d]# lsmod |grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4      15053  3
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
nf_conntrack          139264  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
[root@master sysctl.d]#
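
One caveat: on kernels 4.19 and newer, nf_conntrack_ipv4 was merged into nf_conntrack, so the last modprobe line above fails there. A hedged variant for that line of the script:

# On kernels >= 4.19 nf_conntrack_ipv4 no longer exists; fall back to nf_conntrack
modprobe -- nf_conntrack_ipv4 2>/dev/null || modprobe -- nf_conntrack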

9. Install Docker

# 1. Switch the image source
#[root@master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

[root@master sysctl.d]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# ls
CentOS7-Base-163.repo         httpd-2.4.6-97.el7.centos.5.x86_64.rpm
epel-release-7-14.noarch.rpm  mod_ssl-2.4.6-97.el7.centos.5.x86_64.rpm
epel.repo                     openssl-1.0.2k-25.el7_9.x86_64.rpm
epel-testing.repo             repo_bak
# 2. List the docker versions the current mirror provides
[root@master yum.repos.d]# yum list docker-ce --showduplicates
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
已安装的软件包
docker-ce.x86_64               3:20.10.17-3.el7                @docker-ce-stable
# 3. Install a specific version of docker-ce
# --setopt=obsoletes=0 must be specified, otherwise yum automatically installs a newer version
[root@master yum.repos.d]# yum install --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7 -y
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
没有可用软件包 docker-ce-18.06.3.ce-3.el7。
错误:无须任何处理
[root@master yum.repos.d]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2024-03-18 09:22:20 CST; 16min ago
     Docs: https://docs.docker.com
 Main PID: 1371 (dockerd)
    Tasks: 22
   Memory: 124.5M
   CGroup: /system.slice/docker.service
           ├─1371 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/cont...
           ├─1937 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-por...
           └─1943 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 330...

3月 18 09:22:19 master dockerd[1371]: time="2024-03-18T09:22:19.304140611+...pc
3月 18 09:22:20 master dockerd[1371]: time="2024-03-18T09:22:20.187976048+...k"
Hint: Some lines were ellipsized, use -l to show in full.
[root@master yum.repos.d]# docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED         STATUS          PORTS                                                  NAMES
b3e7df01fd0f   mysql:8.0   "docker-entrypoint.s..."   18 months ago   Up 17 minutes   33060/tcp, 0.0.0.0:3307->3306/tcp, :::3307->3306/tcp   mysql
[root@master yum.repos.d]# docker -v
Docker version 20.10.17, build 100c701
[root@master yum.repos.d]# mkdir /etc/docker
mkdir: 无法创建目录"/etc/docker": 文件已存在
[root@master yum.repos.d]# cd /etc/docker
[root@master docker]# ls
key.json
# 4. Add a configuration file
# Docker uses cgroupfs as its cgroup driver by default, while Kubernetes recommends systemd instead
# (note: the stray "key.json" text pasted into the heredoc below makes the JSON invalid; it is the
#  cause of the docker startup failure debugged during cluster initialization later on)
[root@master docker]# cat <<EOF> /etc/docker/daemon.json
> {
> key.json "exec-opts": ["native.cgroupdriver=systemd"],
> key.json "registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"]
> }
> EOF
[root@master docker]#
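
After writing daemon.json, docker has to be restarted before the new cgroup driver takes effect, and it is worth verifying:

# Restart docker and confirm the cgroup driver switched to systemd
systemctl restart docker
docker info | grep -i "cgroup driver"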

10. Install the Kubernetes components

# 1. The Kubernetes packages are hosted abroad and download slowly, so switch to a domestic mirror
# 2. Edit /etc/yum.repos.d/kubernetes.repo and add the following configuration
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
			http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
			
[root@master docker]# vi /etc/yum.repos.d/kubernetes.repo
# 3. Install kubeadm, kubelet, and kubectl
[root@master docker]# yum install --setopt=obsoletes=0 kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0 -y
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
kubernetes                                               | 1.4 kB     00:00
kubernetes/primary                                         | 137 kB   00:00
kubernetes                                                            1022/1022
正在解决依赖关系
--> 正在检查事务
---> 软件包 kubeadm.x86_64.0.1.17.4-0 将被 安装
--> 正在处理依赖关系 kubernetes-cni >= 0.7.5,它被软件包 kubeadm-1.17.4-0.x86_64 需要
--> 正在处理依赖关系 cri-tools >= 1.13.0,它被软件包 kubeadm-1.17.4-0.x86_64 需要
---> 软件包 kubectl.x86_64.0.1.17.4-0 将被 安装
---> 软件包 kubelet.x86_64.0.1.17.4-0 将被 安装
--> 正在处理依赖关系 socat,它被软件包 kubelet-1.17.4-0.x86_64 需要
--> 正在处理依赖关系 conntrack,它被软件包 kubelet-1.17.4-0.x86_64 需要
--> 正在检查事务
---> 软件包 conntrack-tools.x86_64.0.1.4.4-7.el7 将被 安装
--> 正在处理依赖关系 libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit),它被软件包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit),它被软件包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit),它被软件包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在处理依赖关系 libnetfilter_queue.so.1()(64bit),它被软件包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cttimeout.so.1()(64bit),它被软件包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cthelper.so.0()(64bit),它被软件包 conntrack-tools-1.4.4-7.el7.x86_64 需要
---> 软件包 cri-tools.x86_64.0.1.26.0-0 将被 安装
---> 软件包 kubernetes-cni.x86_64.0.1.2.0-0 将被 安装
---> 软件包 socat.x86_64.0.1.7.3.2-2.el7 将被 安装
--> 正在检查事务
---> 软件包 libnetfilter_cthelper.x86_64.0.1.0.0-11.el7 将被 安装
---> 软件包 libnetfilter_cttimeout.x86_64.0.1.0.0-7.el7 将被 安装
---> 软件包 libnetfilter_queue.x86_64.0.1.0.2-2.el7_2 将被 安装
--> 解决依赖关系完成

依赖关系解决

================================================================================
 Package                    架构       版本                源              大小
================================================================================
正在安装:
 kubeadm                    x86_64     1.17.4-0            kubernetes     8.7 M
 kubectl                    x86_64     1.17.4-0            kubernetes     9.4 M
 kubelet                    x86_64     1.17.4-0            kubernetes      20 M
为依赖而安装:
 conntrack-tools            x86_64     1.4.4-7.el7         base           187 k
 cri-tools                  x86_64     1.26.0-0            kubernetes     8.6 M
 kubernetes-cni             x86_64     1.2.0-0             kubernetes      17 M
 libnetfilter_cthelper      x86_64     1.0.0-11.el7        base            18 k
 libnetfilter_cttimeout     x86_64     1.0.0-7.el7         base            18 k
 libnetfilter_queue         x86_64     1.0.2-2.el7_2       base            23 k
 socat                      x86_64     1.7.3.2-2.el7       base           290 k

事务概要
================================================================================
安装  3 软件包 (+7 依赖软件包)

总下载量:65 M
安装大小:275 M
Downloading packages:
(1/10): conntrack-tools-1.4.4-7.el7.x86_64.rpm             | 187 kB   00:00
warning: /var/cache/yum/x86_64/7/kubernetes/packages/3f5ba2b53701ac9102ea7c7ab2ca6616a8cd5966591a77577585fde1c434ef74-cri-tools-1.26.0-0.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 3e1ba8d5: NOKEY
3f5ba2b53701ac9102ea7c7ab2ca6616a8cd5966591a77577585fde1c434ef74-cri-tools-1.26.0-0.x86_64.rpm 的公钥尚未安装
(2/10): 3f5ba2b53701ac9102ea7c7ab2ca6616a8cd5966591a775775 | 8.6 MB   00:06
(3/10): 0767753f85f415bbdf1df0e974eafccb653bee06149600c3ee | 8.7 MB   00:06
(4/10): 06400b25ef3577561502f9a7a126bf4975c03b30aca0fb19bb | 9.4 MB   00:06
(5/10): libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm      |  18 kB   00:00
(6/10): libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm      |  18 kB   00:00
(7/10): libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm        |  23 kB   00:00
(8/10): socat-1.7.3.2-2.el7.x86_64.rpm                     | 290 kB   00:00
(9/10): 0c45baca5fcc05bb75f1e953ecaf85844efac01bf9c1ef3c21 |  20 MB   00:14
(10/10): 0f2a2afd740d476ad77c508847bad1f559afc2425816c1f2c |  17 MB   00:12
--------------------------------------------------------------------------------
总计                                               2.6 MB/s |  65 MB  00:24
从 http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg 检索密钥
导入 GPG key 0x13EDEF05:
 用户ID     : "Rapture Automatic Signing Key (cloud-rapture-signing-key-2022-03-07-08_01_01.pub)"
 指纹       : a362 b822 f6de dc65 2817 ea46 b53d c80d 13ed ef05
 来自       : http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
从 http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg 检索密钥
导入 GPG key 0x3E1BA8D5:
 用户ID     : "Google Cloud Packages RPM Signing Key <gc-team@google.com>"
 指纹       : 3749 e1ba 95a8 6ce0 5454 6ed2 f09c 394c 3e1b a8d5
 来自       : http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  正在安装    : libnetfilter_cthelper-1.0.0-11.el7.x86_64                  1/10
  正在安装    : socat-1.7.3.2-2.el7.x86_64                                 2/10
  正在安装    : libnetfilter_cttimeout-1.0.0-7.el7.x86_64                  3/10
  正在安装    : kubectl-1.17.4-0.x86_64                                    4/10
  正在安装    : cri-tools-1.26.0-0.x86_64                                  5/10
  正在安装    : libnetfilter_queue-1.0.2-2.el7_2.x86_64                    6/10
  正在安装    : conntrack-tools-1.4.4-7.el7.x86_64                         7/10
  正在安装    : kubelet-1.17.4-0.x86_64                                    8/10
  正在安装    : kubernetes-cni-1.2.0-0.x86_64                              9/10
  正在安装    : kubeadm-1.17.4-0.x86_64                                   10/10
  验证中      : kubernetes-cni-1.2.0-0.x86_64                              1/10
  验证中      : conntrack-tools-1.4.4-7.el7.x86_64                         2/10
  验证中      : libnetfilter_queue-1.0.2-2.el7_2.x86_64                    3/10
  验证中      : cri-tools-1.26.0-0.x86_64                                  4/10
  验证中      : kubeadm-1.17.4-0.x86_64                                    5/10
  验证中      : kubectl-1.17.4-0.x86_64                                    6/10
  验证中      : libnetfilter_cttimeout-1.0.0-7.el7.x86_64                  7/10
  验证中      : socat-1.7.3.2-2.el7.x86_64                                 8/10
  验证中      : kubelet-1.17.4-0.x86_64                                    9/10
  验证中      : libnetfilter_cthelper-1.0.0-11.el7.x86_64                 10/10

已安装:
  kubeadm.x86_64 0:1.17.4-0 kubectl.x86_64 0:1.17.4-0 kubelet.x86_64 0:1.17.4-0

作为依赖被安装:
  conntrack-tools.x86_64 0:1.4.4-7.el7
  cri-tools.x86_64 0:1.26.0-0
  kubernetes-cni.x86_64 0:1.2.0-0
  libnetfilter_cthelper.x86_64 0:1.0.0-11.el7
  libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7
  libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
  socat.x86_64 0:1.7.3.2-2.el7

完毕!
[root@master docker]#
# 4. Configure the kubelet cgroup driver
[root@master docker]# vi /etc/sysconfig/kubelet
[root@master docker]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
# 5. Enable kubelet to start at boot
[root@master docker]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@master docker]#

11. Prepare the cluster images

# Before installing the Kubernetes cluster, the required images must be prepared in advance; they can be listed with the command below
[root@master docker]# kubeadm config images list
I0318 09:52:48.138815    3831 version.go:251] remote version is much newer: v1.29.3; falling back to: stable-1.17
W0318 09:52:49.898061    3831 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0318 09:52:49.898074    3831 validation.go:28] Cannot validate kubelet config - no validator is available
k8s.gcr.io/kube-apiserver:v1.17.17
k8s.gcr.io/kube-controller-manager:v1.17.17
k8s.gcr.io/kube-scheduler:v1.17.17
k8s.gcr.io/kube-proxy:v1.17.17
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
[root@master docker]#
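
Since k8s.gcr.io is unreachable from many networks, one way to pre-pull everything is to point kubeadm's image tooling at the Aliyun mirror (the same repository passed to kubeadm init later in this tutorial):

# Pre-pull the required images from the Aliyun mirror instead of k8s.gcr.io
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers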

Remove version 1.17 and install version 1.23

[root@master docker]# yum remove kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0 -y
已加载插件:fastestmirror
正在解决依赖关系
--> 正在检查事务
---> 软件包 kubeadm.x86_64.0.1.17.4-0 将被 删除
---> 软件包 kubectl.x86_64.0.1.17.4-0 将被 删除
---> 软件包 kubelet.x86_64.0.1.17.4-0 将被 删除
--> 正在处理依赖关系 kubelet,它被软件包 kubernetes-cni-1.2.0-0.x86_64 需要
--> 正在检查事务
---> 软件包 kubernetes-cni.x86_64.0.1.2.0-0 将被 删除
--> 解决依赖关系完成

依赖关系解决

================================================================================
 Package               架构          版本              源                  大小
================================================================================
正在删除:
 kubeadm               x86_64        1.17.4-0          @kubernetes         38 M
 kubectl               x86_64        1.17.4-0          @kubernetes         41 M
 kubelet               x86_64        1.17.4-0          @kubernetes        106 M
为依赖而移除:
 kubernetes-cni        x86_64        1.2.0-0           @kubernetes         49 M

事务概要
================================================================================
移除  3 软件包 (+1 依赖软件包)

安装大小:234 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  正在删除    : kubeadm-1.17.4-0.x86_64                                     1/4
  正在删除    : kubelet-1.17.4-0.x86_64                                     2/4
警告:/etc/sysconfig/kubelet 已另存为 /etc/sysconfig/kubelet.rpmsave
  正在删除    : kubernetes-cni-1.2.0-0.x86_64                               3/4
  正在删除    : kubectl-1.17.4-0.x86_64                                     4/4
  验证中      : kubectl-1.17.4-0.x86_64                                     1/4
  验证中      : kubeadm-1.17.4-0.x86_64                                     2/4
  验证中      : kubelet-1.17.4-0.x86_64                                     3/4
  验证中      : kubernetes-cni-1.2.0-0.x86_64                               4/4

删除:
  kubeadm.x86_64 0:1.17.4-0 kubectl.x86_64 0:1.17.4-0 kubelet.x86_64 0:1.17.4-0

作为依赖被删除:
  kubernetes-cni.x86_64 0:1.2.0-0

完毕!
[root@master docker]#
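# (the install of the 1.23 packages was not captured in this session; presumably something like:)
# yum install --setopt=obsoletes=0 kubeadm-1.23.6-0 kubelet-1.23.6-0 kubectl-1.23.6-0 -y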
# Enable kubelet to start at boot
[root@master ~]# systemctl enable kubelet
[root@master ~]# kubelet --version
Kubernetes v1.23.6
[root@master ~]#

12. Cluster initialization

The following operations only need to be performed on the master node.

kubeadm init \
	--apiserver-advertise-address=192.168.182.132 \
	--image-repository registry.aliyuncs.com/google_containers \
	--kubernetes-version=v1.23.6 \
	--service-cidr=10.96.0.0/12 \
	--pod-network-cidr=10.244.0.0/16

Troubleshooting: docker failed to start while running the initialization

[root@master system]# vi docker.service
[root@master system]# vi /etc/docker/daemon.json
{
"exec-opts":["native.cgroupdriver=systemd"],
"registry-mirrors":["https://kn0t2bca.mirror.aliyuncs.com"]
}
[root@master system]# systemctl daemon-reload
[root@master system]# service docker start
Redirecting to /bin/systemctl start docker.service
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
# Since none of the above fixed it, check the Linux system log (the last 200 lines are enough to troubleshoot):
tail -200f /var/log/messages
[root@master system]# tail -200f /var/log/messages
Mar 18 11:38:38 master systemd: Stopped Docker Application Container Engine.
Mar 18 11:38:38 master systemd: Starting Docker Application Container Engine...
Mar 18 11:38:38 master dockerd: unable to configure the Docker daemon with file /etc/docker/daemon.json: invalid character 'k' looking for beginning of object key string
Mar 18 11:38:38 master systemd: docker.service: main process exited, code=exited, status=1/FAILURE
Mar 18 11:38:38 master systemd: Failed to start Docker Application Container Engine.
Mar 18 11:38:38 master systemd: Unit docker.service entered failed state.
# Fix
# Edit /etc/docker/daemon.json, remove the stray "key.json" text (the source of the
# "invalid character 'k'" error), and change it to
{
  "registry-mirrors":["https://registry.docker-cn.com"]
}
# Then run systemctl daemon-reload and service docker start
# Source: https://blog.csdn.net/fujian9544/article/details/132433035
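# Tip: the root cause was invalid JSON, so a quick sanity check of daemon.json before
# restarting docker (assuming python is installed) is: python -m json.tool /etc/docker/daemon.json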
[root@master ~]# vi /etc/docker/daemon.json
{
"exec-opts":["native.cgroupdriver=systemd"],
"registry-mirrors":["https://kn0t2bca.mirror.aliyuncs.com"]
}
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl start docker
[root@master ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2024-03-18 11:42:51 CST; 11s ago
     Docs: https://docs.docker.com
 Main PID: 12755 (dockerd)
    Tasks: 24
   Memory: 54.3M
   CGroup: /system.slice/docker.service
           ├─12755 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/con...
           ├─12890 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-po...
           └─12896 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 33...

3月 18 11:42:48 master dockerd[12755]: time="2024-03-18T11:42:48.874967125+...c
3月 18 11:42:48 master dockerd[12755]: time="2024-03-18T11:42:48.874976404+...c
3月 18 11:42:48 master dockerd[12755]: time="2024-03-18T11:42:48.896736592+..."
3月 18 11:42:49 master dockerd[12755]: time="2024-03-18T11:42:49.565783173+..."
3月 18 11:42:49 master dockerd[12755]: time="2024-03-18T11:42:49.743347679+..."
3月 18 11:42:51 master dockerd[12755]: time="2024-03-18T11:42:51.451855207+..."
3月 18 11:42:51 master dockerd[12755]: time="2024-03-18T11:42:51.481535378+...7
3月 18 11:42:51 master dockerd[12755]: time="2024-03-18T11:42:51.481644141+..."
3月 18 11:42:51 master systemd[1]: Started Docker Application Container Engine.
3月 18 11:42:51 master dockerd[12755]: time="2024-03-18T11:42:51.510215494+..."
Hint: Some lines were ellipsized, use -l to show in full.
[root@master ~]# systemctl

12. Cluster initialization (continued)

With docker running again, the initialization is performed on the master node:

[root@master ~]# kubeadm init \
> --apiserver-advertise-address=192.168.182.132 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version=v1.23.6 \
> --service-cidr=10.96.0.0/12 \
> --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.182.132]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.182.132 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.182.132 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.002903 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 9duz9p.ypfdiplzgf5zcy7l
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.182.132:6443 --token 9duz9p.ypfdiplzgf5zcy7l \
        --discovery-token-ca-cert-hash sha256:8dcd0d058e7398d719fdc07ae0875f22e0f71da50f61779058d7dc3bf334490f
[root@master ~]#

Check the Docker configuration

[root@master ~]# docker info |grep Driver
 Storage Driver: overlay2
 Logging Driver: json-file
 Cgroup Driver: systemd

Create the required files

[root@master ~]# kebuctl mkdir -p $HOME/.kube^C
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master ~]#

13. The following operations only need to be performed on the worker nodes

[root@node2 ~]# kubeadm join 192.168.182.132:6443 --token 9duz9p.ypfdiplzgf5zcy7l \
>         --discovery-token-ca-cert-hash sha256:8dcd0d058e7398d719fdc07ae0875f22e0f71da50f61779058d7dc3bf334490f
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# View the token
[root@master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
9duz9p.ypfdiplzgf5zcy7l   2h          2024-03-19T03:53:41Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

# View the CA certificate hash
[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null |  openssl dgst -sha256 -hex | sed 's/^.* //'
8dcd0d058e7398d719fdc07ae0875f22e0f71da50f61779058d7dc3bf334490f
[root@master ~]#
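
If the bootstrap token has expired, kubeadm can mint a fresh one together with the complete join command:

# Print a ready-to-use join command backed by a newly created token
kubeadm token create --print-join-command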

14. View node information on the master

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE     VERSION
master   NotReady   control-plane,master   21h     v1.23.6
node1    NotReady   <none>                 3m28s   v1.23.6
node2    NotReady   <none>                 82s     v1.23.6
[root@master ~]# kubectl get componentstatus
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}

15. Install the network plugin (master node only)

# View pods and namespaces
[root@master ~]# kubectl get pods
No resources found in default namespace.
[root@master ~]# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-6d8c4cb4d-csqvt          0/1     Pending   0          26h
coredns-6d8c4cb4d-trr5k          0/1     Pending   0          26h
etcd-master                      1/1     Running   0          26h
kube-apiserver-master            1/1     Running   0          26h
kube-controller-manager-master   1/1     Running   0          26h
kube-proxy-4qln2                 1/1     Running   0          5h21m
kube-proxy-v872b                 1/1     Running   0          26h
kube-proxy-w8dhj                 1/1     Running   0          5h23m
kube-scheduler-master            1/1     Running   0          26h
[root@master ~]#

Deploy the container network (CNI)

💡 Tip: from here on, all yaml files are applied on the master node only.

Calico is a pure layer-3 data-center networking solution and currently the mainstream network choice for Kubernetes. Download the YAML:

wget https://docs.projectcalico.org/manifests/calico.yaml

After downloading, the Pod network defined inside (CALICO_IPV4POOL_CIDR) must also be set to match the --pod-network-cidr passed to kubeadm init earlier:

[root@master k8s]# cat calico.yaml | grep 10.244.0.0
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"

# After the file is downloaded, upload it to the master server and run the command below

kubectl apply -f calico.yaml

# Check the images referenced
[root@master k8s]# grep image calico.yaml
          image: docker.io/calico/cni:v3.25.0
          imagePullPolicy: IfNotPresent
          image: docker.io/calico/cni:v3.25.0
          imagePullPolicy: IfNotPresent
          image: docker.io/calico/node:v3.25.0
          imagePullPolicy: IfNotPresent
          image: docker.io/calico/node:v3.25.0
          imagePullPolicy: IfNotPresent
          image: docker.io/calico/kube-controllers:v3.25.0
          imagePullPolicy: IfNotPresent
# Strip the docker.io prefix from the image addresses
[root@master k8s]# sed -i 's#docker.io/##g' calico.yaml
[root@master k8s]# grep image calico.yaml
          image: calico/cni:v3.25.0
          imagePullPolicy: IfNotPresent
          image: calico/cni:v3.25.0
          imagePullPolicy: IfNotPresent
          image: calico/node:v3.25.0
          imagePullPolicy: IfNotPresent
          image: calico/node:v3.25.0
          imagePullPolicy: IfNotPresent
          image: calico/kube-controllers:v3.25.0
          imagePullPolicy: IfNotPresent
[root@master k8s]# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers configured
serviceaccount/calico-kube-controllers unchanged
serviceaccount/calico-node unchanged
configmap/calico-config unchanged
.......
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created

# Check the node status

kubectl get nodes

[root@master k8s]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   45h   v1.23.6
node1    Ready    <none>                 23h   v1.23.6
node2    Ready    <none>                 23h   v1.23.6
[root@master k8s]# kubectl get po -n kube-system
NAME                                     READY   STATUS              RESTARTS        AGE
calico-kube-controllers-cd8566cf-gj6jx   0/1     ContainerCreating   0               7m18s
calico-node-gp76m                        1/1     Running             0               7m18s
calico-node-vlm5h                        1/1     Running             0               7m18s
calico-node-w9nch                        1/1     Running             0               7m18s
coredns-6d8c4cb4d-csqvt                  0/1     Running             0               45h
coredns-6d8c4cb4d-trr5k                  0/1     ContainerCreating   0               45h
etcd-master                              1/1     Running             0               45h
kube-apiserver-master                    1/1     Running             0               45h
kube-controller-manager-master           1/1     Running             2 (2m43s ago)   45h
kube-proxy-4qln2                         1/1     Running             0               23h
kube-proxy-v872b                         1/1     Running             0               45h
kube-proxy-w8dhj                         1/1     Running             0               23h
kube-scheduler-master                    1/1     Running             2 (2m51s ago)   45h
[root@master k8s]# kubectl get po -n kube-system
NAME                                     READY   STATUS    RESTARTS        AGE
calico-kube-controllers-cd8566cf-gj6jx   1/1     Running   0               9m42s
calico-node-gp76m                        1/1     Running   0               9m42s
calico-node-vlm5h                        1/1     Running   0               9m42s
calico-node-w9nch                        1/1     Running   0               9m42s
coredns-6d8c4cb4d-csqvt                  1/1     Running   0               45h
coredns-6d8c4cb4d-trr5k                  1/1     Running   0               45h
etcd-master                              1/1     Running   0               45h
kube-apiserver-master                    1/1     Running   0               45h
kube-controller-manager-master           1/1     Running   2 (5m7s ago)    45h
kube-proxy-4qln2                         1/1     Running   0               23h
kube-proxy-v872b                         1/1     Running   0               45h
kube-proxy-w8dhj                         1/1     Running   0               23h
kube-scheduler-master                    1/1     Running   2 (5m15s ago)   45h
[root@master k8s]#

Check pod status

[root@master k8s]# kubectl describe po calico-kube-controllers-cd8566cf-gj6jx -n system
Error from server (NotFound): namespaces "system" not found
[root@master k8s]# kubectl describe po calico-kube-controllers-cd8566cf-gj6jx -n kube-system
Name:                 calico-kube-controllers-cd8566cf-gj6jx
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 master/192.168.182.132
Start Time:           Wed, 20 Mar 2024 08:58:04 +0800
Labels:               k8s-app=calico-kube-controllers
                      pod-template-hash=cd8566cf
Annotations:          cni.projectcalico.org/containerID: cde2f3ae3a9cf61841075a5ad999fc51686e4baf29d55a91ada2160d20656c8a
                      cni.projectcalico.org/podIP: 10.244.219.66/32
                      cni.projectcalico.org/podIPs: 10.244.219.66/32
Status:               Running
IP:                   10.244.219.66
IPs:
  IP:           10.244.219.66
Controlled By:  ReplicaSet/calico-kube-controllers-cd8566cf
Containers:
  calico-kube-controllers:
    Container ID:   docker://99ec97f941b7a9d5ee47356ce20b7c1bce5e6af02cf12841dc7009296b7aed12
    Image:          calico/kube-controllers:v3.25.0
    Image ID:       docker-pullable://calico/kube-controllers@sha256:c45af3a9692d87a527451cf544557138fedf86f92b6e39bf2003e2fdb848dce3
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 20 Mar 2024 09:04:50 +0800
    Ready:          True
    Restart Count:  0
    Liveness:       exec [/usr/bin/check-status -l] delay=10s timeout=10s period=10s #success=1 #failure=6
    Readiness:      exec [/usr/bin/check-status -r] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      ENABLED_CONTROLLERS:  node
      DATASTORE_TYPE:       kubernetes
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4lml9 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-4lml9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                  From               Message
  ----     ------                  ----                 ----               -------
  Warning  FailedScheduling        12m (x2 over 13m)    default-scheduler  0/3 nodes are available: 3 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
  Normal   Scheduled               11m                  default-scheduler  Successfully assigned kube-system/calico-kube-controllers-cd8566cf-gj6jx to master
  Warning  FailedCreatePodSandBox  10m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "908db3d95fa8b681458edcb8d0a5d337793c77f63290debe833c9eb5d8646b5c" network for pod "calico-kube-controllers-cd8566cf-gj6jx": networkPlugin cni failed to set up pod "calico-kube-controllers-cd8566cf-gj6jx_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  10m                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "960633b5ac6c234d2d86423b4363b924e2d01043836356037c0e414a45e17082" network for pod "calico-kube-controllers-cd8566cf-gj6jx": networkPlugin cni failed to set up pod "calico-kube-controllers-cd8566cf-gj6jx_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  8m49s                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "49989e86bc94850db35432527ef463d7340bc41662e5e660b32b581e9f687572" network for pod "calico-kube-controllers-cd8566cf-gj6jx": networkPlugin cni failed to set up pod "calico-kube-controllers-cd8566cf-gj6jx_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  8m33s                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e35266327a75e43bf7751368a0b417b2dfaa4bd923f841b6d7a843672dd9b92a" network for pod "calico-kube-controllers-cd8566cf-gj6jx": networkPlugin cni failed to set up pod "calico-kube-controllers-cd8566cf-gj6jx_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  6m47s                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c9439ae19f58ecd3f79e8feaee4458836ab43f092cecb4b5ad6d8150b6ae6ce1" network for pod "calico-kube-controllers-cd8566cf-gj6jx": networkPlugin cni failed to set up pod "calico-kube-controllers-cd8566cf-gj6jx_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Warning  FailedCreatePodSandBox  6m42s                kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "833c26e5690fa122d1810cba2c34598fba897456b91b389e8d1cbf2cf7b2865f" network for pod "calico-kube-controllers-cd8566cf-gj6jx": networkPlugin cni failed to set up pod "calico-kube-controllers-cd8566cf-gj6jx_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
  Normal   SandboxChanged          6m41s (x6 over 10m)  kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling                 6m9s                 kubelet            Pulling image "calico/kube-controllers:v3.25.0"
  Normal   Pulled                  5m7s                 kubelet            Successfully pulled image "calico/kube-controllers:v3.25.0" in 1m2.295413679s
  Normal   Created                 5m6s                 kubelet            Created container calico-kube-controllers
  Normal   Started                 5m6s                 kubelet            Started container calico-kube-controllers
[root@master k8s]#

Check the network interfaces

[root@master k8s]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:07:14:2e brd ff:ff:ff:ff:ff:ff
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:07:14:38 brd ff:ff:ff:ff:ff:ff
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:4e:61:a8 brd ff:ff:ff:ff:ff:ff
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:4e:61:a8 brd ff:ff:ff:ff:ff:ff
6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:14:54:db:06 brd ff:ff:ff:ff:ff:ff
12: vethcc29825@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether 36:60:d2:ac:0c:bb brd ff:ff:ff:ff:ff:ff link-netnsid 0
13: calie99363035c7@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
14: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
15: cali5b4e764ed01@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP mode DEFAULT group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
16: cali11c3a0146df@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP mode DEFAULT group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 3
[root@master k8s]#

Deploy nginx as a test

[root@master k8s]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@master k8s]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@master k8s]#
# Check the exposed port
[root@master k8s]# kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-85b98978db-ft2hp   1/1     Running   0          2m28s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        45h
service/nginx        NodePort    10.99.41.160   <none>        80:31695/TCP   89s
# Open the page to test
[root@master k8s]# curl 192.168.182.132:31695
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@master k8s]#

Configure the kubectl tool on any node

# Copy admin.conf from the master to the same directory on the other nodes
[root@master k8s]# scp /etc/kubernetes/admin.conf root@192.168.182.134:/etc/kubernetes/
root@192.168.182.134's password:
admin.conf    
# Update the environment variable
[root@node1 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@node1 ~]# source ~/.bash_profile
[root@node1 ~]# tail -1 ~/.bash_profile
export KUBECONFIG=/etc/kubernetes/admin.conf

[root@node1 ~]# cd /etc/kubernetes/
[root@node1 kubernetes]# ls
admin.conf  kubelet.conf  manifests  pki
[root@node1 kubernetes]# kubectl get no
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   45h   v1.23.6
node1    Ready    <none>                 24h   v1.23.6
node2    Ready    <none>                 24h   v1.23.6
[root@node1 kubernetes]#

Scale up the pod replicas

# View the namespaces
[root@master k8s]# kubectl get namespace
NAME              STATUS   AGE
default           Active   45h
kube-node-lease   Active   45h
kube-public       Active   45h
kube-system       Active   45h
# View the pods
[root@master k8s]# kubectl get po
NAME                     READY   STATUS    RESTARTS   AGE
nginx-85b98978db-ft2hp   1/1     Running   0          36m
[root@master k8s]# kubectl scale --replicas=2 nginx-85b98978db-ft2hp
error: the server doesn't have a resource type "nginx-85b98978db-ft2hp"
[root@master k8s]# kubectl get deploymant
error: the server doesn't have a resource type "deploymant"
[root@master k8s]# kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           38m
# Scale up the deployment replicas
[root@master k8s]# kubectl scale deployment --replicas=2 nginx
deployment.apps/nginx scaled
[root@master k8s]# kubectl get po
NAME                     READY   STATUS              RESTARTS   AGE
nginx-85b98978db-96qng   0/1     ContainerCreating   0          9s
nginx-85b98978db-ft2hp   1/1     Running             0          39m
[root@master k8s]#
[root@master k8s]# kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   2/2     2            2           41m
[root@master k8s]#