Kubernetes: Introduction and Deployment

1 Evolution of Application Deployment

Application deployment has gone through three main stages:

**Traditional deployment:** in the early days of the internet, applications were deployed directly on physical machines

  • Pros: simple; no other technology is involved
  • Cons: resource usage boundaries cannot be defined per application, compute resources are hard to allocate sensibly, and programs easily interfere with one another

**Virtualized deployment:** multiple virtual machines can run on one physical machine, each an independent, isolated environment

  • Pros: program environments do not interfere with one another, providing a degree of security
  • Cons: each VM adds a full operating system, wasting some resources

**Containerized deployment:** similar to virtualization, but containers share the host operating system

!NOTE

Containerized deployment brings a lot of convenience, but it also raises some problems, for example:

If a container crashes and stops, how do we immediately start another container to take its place?

When concurrent traffic grows, how do we scale out the number of containers horizontally?

2 Container Orchestration

To solve these container orchestration problems, a number of orchestration tools emerged:

  • Swarm: Docker's own container orchestration tool
  • Mesos: an Apache tool for unified resource management, used together with Marathon
  • Kubernetes: Google's open-source container orchestration tool

3 Introduction to Kubernetes

  • While Docker was rapidly gaining ground as a high-level container engine, container technology had already been in use inside Google for many years
  • Google's Borg system runs and manages enormous numbers of container applications
  • The Kubernetes project grew out of Borg; it distills the essence of Borg's design and absorbs the experience and lessons learned from that system
  • Kubernetes abstracts compute resources at a higher level and, by composing containers carefully, delivers the final application service to the user

At its core, Kubernetes is a cluster of servers: it runs specific programs on every node in the cluster to manage the containers on that node. Its goal is to automate resource management, and it provides the following main features:

  • Self-healing: when a container crashes, a new container can be started in about a second to replace it
  • Elastic scaling: the number of running containers in the cluster can be adjusted automatically as needed
  • Service discovery: a service can automatically find the services it depends on
  • Load balancing: if a service has started multiple containers, requests are automatically load-balanced across them
  • Version rollback: if a newly released version has problems, you can immediately roll back to the previous version
  • Storage orchestration: storage volumes can be created automatically according to a container's needs

4 K8S Architecture

4.1 What Each K8S Component Does

A kubernetes cluster consists of control-plane nodes (master) and worker nodes (node); different components are installed on each kind of node.

1 master: the cluster's control plane, responsible for making decisions for the cluster

  • ApiServer: the single entry point for resource operations; receives user commands and provides authentication, authorization, API registration and discovery
  • Scheduler: responsible for cluster resource scheduling; places Pods onto node machines according to the configured scheduling policies
  • ControllerManager: responsible for maintaining cluster state, e.g. rolling out deployments, failure detection, auto-scaling, rolling updates
  • Etcd: stores information about all the resource objects in the cluster

2 node: the cluster's data plane, responsible for providing the runtime environment for containers

  • kubelet: maintains the container lifecycle and also manages volumes (CVI) and networking (CNI)
  • Container runtime: responsible for image management and for actually running Pods and containers (CRI)
  • kube-proxy: provides in-cluster service discovery and load balancing for Services

4.2 How the K8S Components Interact

When we want to run a web service:

  • After the kubernetes environment starts, both master and node store their own information in the etcd database
  • The installation request for the web service is first sent to the apiServer component on the master node
  • The apiServer calls the scheduler component to decide which node the service should be installed on
  • At this point the scheduler reads the information about each node from etcd, chooses one according to its algorithm, and reports the result back to the apiServer
  • The apiServer calls the controller-manager to have the chosen node install the web service
  • The kubelet receives the instruction and tells docker to start a pod for the web service
  • To access the web service, kube-proxy provides a proxy for reaching the pod

4.3 Common K8S Terms

  • Master: the cluster control node; every cluster needs at least one master responsible for managing it
  • Node: a workload node; the master assigns containers to these worker nodes, and the container runtime on each node then actually runs them
  • Pod: the smallest unit kubernetes controls; all containers run inside pods, and a pod can contain one or more containers
  • Controller: controllers are how pods are managed, e.g. starting pods, stopping pods, scaling the number of pods
  • Service: the unified entry point through which pods serve the outside world; behind it, it maintains multiple pods of the same class
  • Label: labels classify pods; pods of the same class carry the same labels
  • NameSpace: a namespace, used to isolate the runtime environments of pods
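
The terms above come together in a Pod manifest; a minimal illustrative sketch (the names `myapp` and `web` and the image tag are hypothetical, not part of this deployment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp              # hypothetical Pod name
  namespace: default       # the NameSpace that isolates this pod's environment
  labels:
    app: myapp             # Label that Controllers and Services use to select the pod
spec:
  containers:              # a pod can hold one or more containers
  - name: web
    image: nginx:1.23
    ports:
    - containerPort: 80
```

A Controller (e.g. a Deployment) stamps out pods like this, and a Service selects them through the `app: myapp` label.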

4.4 The Layered Architecture of K8S

  • Core layer: Kubernetes' most essential functionality; exposes APIs for building higher-level applications and provides a pluggable application execution environment internally
  • Application layer: deployment (stateless applications, stateful applications, batch jobs, clustered applications, etc.) and routing (service discovery, DNS resolution, etc.)
  • Management layer: system metrics (infrastructure, container, and network metrics), automation (auto-scaling, dynamic provisioning, etc.), and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.)
  • Interface layer: the kubectl command-line tool, client SDKs, and cluster federation
  • Ecosystem: the vast ecosystem of container cluster management and scheduling above the interface layer, which falls into two categories
  • Outside Kubernetes: logging, monitoring, configuration management, CI, CD, Workflow, FaaS, OTS applications, ChatOps, etc.
  • Inside Kubernetes: CRI, CNI, CVI, image registry, Cloud Provider, the cluster's own configuration and management, etc.

5 Setting Up a K8S Cluster

5.1 Container Management Options in K8S

A K8S cluster can be created on top of three container runtimes:

containerd

The default runtime K8S uses when creating a cluster

docker

Docker has the widest adoption. Although Kubernetes dropped kubelet's built-in support for docker in version 1.24, a cluster can still be created with it via cri-dockerd.

cri-o

CRI-O is the most direct way for Kubernetes to create containers; when building the cluster, the cri-o plugin is used to bring the Kubernetes cluster up.

!NOTE

For the docker and cri-o options, the kubelet startup parameters must be configured accordingly

5.2 K8S Cluster Deployment

Kubernetes official site (Chinese): https://kubernetes.io/zh-cn/

| Hostname | IP             | Role                           |
|----------|----------------|--------------------------------|
| harbor   | 172.25.254.200 | harbor registry                |
| master   | 172.25.254.100 | master, k8s control-plane node |
| node1    | 172.25.254.10  | worker, k8s worker node        |
| node2    | 172.25.254.20  | worker, k8s worker node        |

  • Disable selinux and the firewall on all nodes
  • Synchronize time and set up name resolution on all nodes
  • Install docker-ce on all nodes
  • Disable swap on all nodes, and remember to comment out its entry in /etc/fstab
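
For the swap requirement, the /etc/fstab edit can be scripted with sed; a sketch run against a scratch copy (the sample fstab lines are illustrative — on a real node you would target /etc/fstab as root and then run `swapoff -a`):

```shell
# Build a scratch fstab with a typical swap entry
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/rhel-root   /      xfs    defaults   0 0
/dev/mapper/rhel-swap   none   swap   defaults   0 0
EOF
# Comment out any line whose filesystem-type column is "swap"
sed -i '/\sswap\s/s/^/#/' /tmp/fstab.demo
grep swap /tmp/fstab.demo   # the swap entry is now commented out
```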

6 Building the harbor Image Registry

# Install docker (run on all four hosts)

bash
[root@harbor ~]# cat > /etc/yum.repos.d/docker.repo <<EOF
[docker]
name = docker
baseurl = https://mirrors.aliyun.com/docker-ce/linux/rhel/9.6/x86_64/stable/
gpgcheck = 0
EOF


[root@harbor ~]# dnf  install docker-ce-3:28.5.2-1.el9  -y
[root@harbor ~]# echo br_netfilter > /etc/modules-load.d/docker_mod.conf
//load the bridge netfilter module automatically at boot; Docker networking requires it
[root@harbor ~]# modprobe -a br_netfilter  //load the br_netfilter kernel module right away
[root@harbor ~]# vim /etc/sysctl.d/docker.conf
net.bridge.bridge-nf-call-iptables = 1
//pass Linux bridge (Docker container network) traffic through the iptables firewall rules
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
//enable IP forwarding in the Linux kernel
[root@harbor ~]# sysctl  --system

[root@harbor ~]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --iptables=true
[root@harbor ~]# systemctl daemon-reload
[root@harbor ~]# systemctl enable --now docker

# Generate the certificate (on the harbor host)

bash
[root@harbor ~]# mkdir /data/certs -p
[root@harbor ~]# openssl req -newkey  rsa:4096 \
-nodes -sha256 -keyout /data/certs/timinglee.org.key \
-addext "subjectAltName = DNS:reg.timinglee.org" \
-x509 -days 365 -out /data/certs/timinglee.org.crt

You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:Shannxi
Locality Name (eg, city) [Default City]:Xi'an
Organization Name (eg, company) [Default Company Ltd]:kubernetes
Organizational Unit Name (eg, section) []:harbor
Common Name (eg, your name or your server's hostname) []:reg.timinglee.org
Email Address []:admin@timinglee.org
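
Before handing the certificate to harbor, it is worth confirming the SAN made it in; a sketch that generates and inspects a throwaway cert (the /tmp paths are illustrative, not the real /data/certs files):

```shell
# Generate a throwaway self-signed cert the same way, non-interactively via -subj
openssl req -newkey rsa:2048 -nodes -sha256 \
  -keyout /tmp/demo.key -x509 -days 1 -out /tmp/demo.crt \
  -subj "/CN=reg.timinglee.org" \
  -addext "subjectAltName = DNS:reg.timinglee.org"
# The SAN extension should list the registry hostname
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```

The same `-ext subjectAltName` check can be run against /data/certs/timinglee.org.crt on the real host.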

# Edit the harbor configuration file

bash
[root@harbor ~]# tar zxf  harbor-offline-installer-v2.5.4.tgz -C /opt/
[root@harbor ~]# cd /opt/harbor/
[root@harbor harbor]# ls
common.sh  harbor.v2.5.4.tar.gz  harbor.yml.tmpl  install.sh  LICENSE  prepare
[root@harbor harbor]# cp harbor.yml.tmpl harbor.yml
[root@harbor harbor]# vim harbor.yml
hostname: reg.timinglee.org
  certificate: /data/certs/timinglee.org.crt
  private_key: /data/certs/timinglee.org.key
harbor_admin_password: lee


[root@harbor harbor]# ./install.sh --with-chartmuseum

# Start and verify

bash
[root@harbor harbor]# mkdir  /etc/docker/certs.d/reg.timinglee.org/ -p
[root@harbor harbor]# cp /data/certs/timinglee.org.crt  /etc/docker/certs.d/reg.timinglee.org/ca.crt
[root@harbor harbor]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.200     harbor reg.timinglee.org


[root@harbor harbor]# systemctl restart docker
[root@harbor harbor]# docker compose up -d
//pulls the images, creates the containers, runs them in the background, and brings the registry up

[root@harbor harbor]# docker login  reg.timinglee.org -u admin
Password:

WARNING! Your credentials are stored unencrypted in '/root/.docker/config.json'.
Configure a credential helper to remove this warning. See
https://docs.docker.com/go/credential-store/

Login Succeeded

Note: swap must be disabled on all hosts

bash
systemctl disable --now  swap.target
systemctl mask swap.target

# Configure docker on every host to trust the harbor registry

bash
[root@harbor/master/node1/node2]# mkdir  /etc/docker/certs.d/reg.timinglee.org/ -p
#distribute the certificate from the harbor host to all hosts
[root@harbor ~]# for i in 100 10 20
> do
> scp /data/certs/timinglee.org.crt root@172.25.254.$i:/etc/docker/certs.d/reg.timinglee.org/ca.crt
> done

[root@harbor/master/node1/node2]# systemctl enable  docker
[root@harbor/master/node1/node2]# systemctl restart docker
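
The distribution loop above expands the last IP octet for each host; echoing instead of running scp is a safe way to preview the targets (a dry-run sketch):

```shell
# Dry-run: print each scp destination the loop would use
for i in 100 10 20; do
  echo "root@172.25.254.$i:/etc/docker/certs.d/reg.timinglee.org/ca.crt"
done
```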

# Configure the docker registry mirror on all hosts

bash
[root@harbor/master/node1/node2]# cat >/etc/docker/daemon.json <<EOF
{
  "registry-mirrors":["https://reg.timinglee.org"]
}
EOF
[root@harbor/master/node1/node2]# systemctl restart docker
[root@harbor/master/node1/node2]# docker info
//you should see the following at the end of the output
Registry Mirrors:
  https://reg.timinglee.org/
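
A malformed daemon.json will keep docker from starting after the restart, so validating the JSON first is cheap insurance; a sketch using python3's built-in json.tool (any JSON validator would do, and the /tmp path is illustrative):

```shell
# Write the mirror config to a scratch path and check that it parses as JSON
cat > /tmp/daemon.json.demo <<'EOF'
{
  "registry-mirrors": ["https://reg.timinglee.org"]
}
EOF
python3 -m json.tool /tmp/daemon.json.demo
```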

# Set up name resolution between all hosts

bash
[root@harbor/master/node1/node2]# vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.100     master
172.25.254.10      node1
172.25.254.20      node2
172.25.254.200     reg.timinglee.org
bash
[root@harbor/master/node1/node2]#  vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name = kubernetes
baseurl = https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.35/rpm/
gpgcheck = 0
 
#check
[root@harbor/master/node1/node2]# dnf list kubelet
Updating Subscription Management repositories.
Unable to read consumer identity

This system is not registered with an entitlement server. You can use "rhc" or "subscription-manager" to register.

Last metadata expiration check: 0:01:04 ago on Fri 03 Apr 2026 06:37:25 PM.
Available Packages
kubelet.aarch64                                                          1.35.3-150500.1.1                                                          kubernetes
kubelet.ppc64le                                                          1.35.3-150500.1.1                                                          kubernetes
kubelet.s390x                                                            1.35.3-150500.1.1                                                          kubernetes
kubelet.src                                                              1.35.3-150500.1.1                                                          kubernetes
kubelet.x86_64                                                           1.35.3-150500.1.1                                                          kubernetes

After completing the steps above, reboot the hosts and check the swap partition

bash
[root@harbor/master/node1/node2]# swapon  -s  //no output means swap is off

7 Deploying Kubernetes

# Install cri-dockerd (on all hosts)

bash
[root@master ~]# ls
anaconda-ks.cfg  cri-dockerd-0.3.14-3.el8.x86_64.rpm  libcgroup-0.41-19.el8.x86_64.rpm
[root@master ~]# scp cri-dockerd-0.3.14-3.el8.x86_64.rpm  libcgroup-0.41-19.el8.x86_64.rpm  root@172.25.254.10:/root
root@172.25.254.10's password: 
cri-dockerd-0.3.14-3.el8.x86_64.rpm                                                                                         100%   11MB 178.6MB/s   00:00    
libcgroup-0.41-19.el8.x86_64.rpm                                                                                            100%   69KB  44.4MB/s   00:00    
[root@master ~]# scp cri-dockerd-0.3.14-3.el8.x86_64.rpm  libcgroup-0.41-19.el8.x86_64.rpm  root@172.25.254.20:/root
root@172.25.254.20's password: 
cri-dockerd-0.3.14-3.el8.x86_64.rpm                                                                                         100%   11MB 164.9MB/s   00:00    
libcgroup-0.41-19.el8.x86_64.rpm                                                                                            100%   69KB  47.1MB/s   00:00    
[root@master/node1/node2]# ls
anaconda-ks.cfg  cri-dockerd-0.3.14-3.el8.x86_64.rpm  libcgroup-0.41-19.el8.x86_64.rpm
[root@harbor/master/node1/node2]# rpm -ivh *.rpm
warning: libcgroup-0.41-19.el8.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID 6d745a60: NOKEY
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:libcgroup-0.41-19.el8            ################################# [ 50%]
   2:cri-dockerd-3:0.3.14-3.el8       ################################# [100%]
[root@master/node1/node2]# vim /lib/systemd/system/cri-docker.service
 10 ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://  --network-plugin=cni --pod-infra-container-image=reg.timinglee.org/k8s/pause:3.10.1
[root@master/node1/node2 ~]# systemctl daemon-reload
[root@master/node1/node2 ~]# systemctl restart cri-docker.service 
[root@master/node1/node2 ~]# ll /var/run/cri-dockerd.sock
srw-rw---- 1 root docker 0 Apr  4 10:42 /var/run/cri-dockerd.sock

# Install the software needed to build the kubernetes cluster

master node

bash
[root@master ~]# dnf install kubelet kubeadm kubectl -y
[root@master ~]# systemctl enable --now kubelet.service
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.

node machines

bash
[root@node1 ~]# dnf install kubelet kubeadm  -y
[root@node1 ~]# systemctl enable --now kubelet.service
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.

Enable kubectl and kubeadm completion on the master node

bash
[root@master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@master ~]# echo "source <(kubeadm  completion bash)" >> ~/.bashrc
[root@master ~]# source ~/.bashrc

Download the images required by the kubernetes cluster

bash
[root@master ~]# kubeadm config images pull \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.35.3 \
> --cri-socket=unix:///var/run/cri-dockerd.sock
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.35.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.35.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.35.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.35.3
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.13.1
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.10.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.6.6-0

Push the images to the local harbor registry

bash
[root@master ~]#  docker images  --format  "{{.Repository}}:{{.Tag}}" | awk -F "/" '/google/{system("docker tag "$0" reg.timinglee.org/k8s/"$3)}'
[root@master ~]# docker images  --format  "{{.Repository}}:{{.Tag}}" | awk -F "/" '/timinglee/{system("docker push "$0)}'
The push refers to repository [reg.timinglee.org/k8s/kube-apiserver]
b26c04e79c49: Pushed 
e2e29eb5f5ab: Pushed 
33b37ab0b090: Pushed 
6e7fbcf090d0: Pushed 
1a73b54f556b: Pushed 
4cde6b0bb6f5: Pushed 
bd3cdfae1d3f: Pushed 
6f1cdceb6a31: Pushed 
af5aa97ebe6c: Pushed 
4d049f83d9cf: Pushed 
114dde0fefeb: Pushed 
4840c7c54023: Pushed 
8fa10c0194df: Pushed 
a33ba213ad26: Pushed 
v1.35.3: digest: sha256:d84de59faea16b559d4f2e4d05f4a50cd189b6d02ff9805d2c29eb8b64094b0a size: 3233
The push refers to repository [reg.timinglee.org/k8s/kube-proxy]
091c6dc44e2a: Pushed 
ff32924af9e3: Pushed 
v1.35.3: digest: sha256:3673be50038d0e33dc265311eba1bfb17268c40f52dab8baf79ee8ac125cbd0c size: 740
The push refers to repository [reg.timinglee.org/k8s/kube-controller-manager]
097e54dc40b4: Pushed 
e2e29eb5f5ab: Mounted from k8s/kube-apiserver 
33b37ab0b090: Mounted from k8s/kube-apiserver 
6e7fbcf090d0: Mounted from k8s/kube-apiserver 
1a73b54f556b: Mounted from k8s/kube-apiserver 
4cde6b0bb6f5: Mounted from k8s/kube-apiserver 
bd3cdfae1d3f: Mounted from k8s/kube-apiserver 
6f1cdceb6a31: Mounted from k8s/kube-apiserver 
af5aa97ebe6c: Mounted from k8s/kube-apiserver 
4d049f83d9cf: Mounted from k8s/kube-apiserver 
114dde0fefeb: Mounted from k8s/kube-apiserver 
4840c7c54023: Mounted from k8s/kube-apiserver 
8fa10c0194df: Mounted from k8s/kube-apiserver 
a33ba213ad26: Mounted from k8s/kube-apiserver 
v1.35.3: digest: sha256:988a0684f0ffdc95a06f0bcac01d7540f623876b855736cb70cc7aaaca12c731 size: 3233
The push refers to repository [reg.timinglee.org/k8s/kube-scheduler]
87448da54384: Pushed 
e2e29eb5f5ab: Mounted from k8s/kube-controller-manager 
33b37ab0b090: Mounted from k8s/kube-controller-manager 
6e7fbcf090d0: Mounted from k8s/kube-controller-manager 
1a73b54f556b: Mounted from k8s/kube-controller-manager 
4cde6b0bb6f5: Mounted from k8s/kube-controller-manager 
bd3cdfae1d3f: Mounted from k8s/kube-controller-manager 
6f1cdceb6a31: Mounted from k8s/kube-controller-manager 
af5aa97ebe6c: Mounted from k8s/kube-controller-manager 
4d049f83d9cf: Mounted from k8s/kube-controller-manager 
114dde0fefeb: Mounted from k8s/kube-controller-manager 
4840c7c54023: Mounted from k8s/kube-controller-manager 
8fa10c0194df: Mounted from k8s/kube-controller-manager 
a33ba213ad26: Mounted from k8s/kube-controller-manager 
v1.35.3: digest: sha256:212b077fceb2ca7930097d992c20eae7d9574fe058c6d83b6eac83f07bf7b16c size: 3233
The push refers to repository [reg.timinglee.org/k8s/etcd]
37a37e743e2e: Pushed 
660aacb22b09: Pushed 
81ec00dccb40: Pushed 
b336e209998f: Pushed 
f4aee9e53c42: Pushed 
1a73b54f556b: Mounted from k8s/kube-scheduler 
2a92d6ac9e4f: Pushed 
bbb6cacb8c82: Pushed 
6f1cdceb6a31: Mounted from k8s/kube-scheduler 
af5aa97ebe6c: Mounted from k8s/kube-scheduler 
4d049f83d9cf: Mounted from k8s/kube-scheduler 
a80545a98dcd: Pushed 
8fa10c0194df: Mounted from k8s/kube-scheduler 
f920c5680b0b: Pushed 
3.6.6-0: digest: sha256:f6fd2f574e114f4a229e320b387ddc8c412fe6e2156f94965f07e091a64cec42 size: 3235
The push refers to repository [reg.timinglee.org/k8s/coredns]
098153c557ae: Pushed 
f5b520875067: Pushed 
bfe9137a1b04: Pushed 
f4aee9e53c42: Mounted from k8s/etcd 
1a73b54f556b: Mounted from k8s/etcd 
2a92d6ac9e4f: Mounted from k8s/etcd 
bbb6cacb8c82: Mounted from k8s/etcd 
6f1cdceb6a31: Mounted from k8s/etcd 
af5aa97ebe6c: Mounted from k8s/etcd 
4d049f83d9cf: Mounted from k8s/etcd 
114dde0fefeb: Mounted from k8s/kube-scheduler 
4840c7c54023: Mounted from k8s/kube-scheduler 
8fa10c0194df: Mounted from k8s/etcd 
bff7f7a9d443: Pushed 
v1.13.1: digest: sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7 size: 3233
The push refers to repository [reg.timinglee.org/k8s/pause]
cfaaf0813a4d: Pushed 
3.10.1: digest: sha256:d31e7f29ada8b13c6dd047ec4805720fcf14df5fccbb27903e374dc56225ca49 size: 526
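
The retag and push one-liners above hinge on awk splitting each image reference on `/`: `$0` is the full reference, and `$3` is the trailing `name:tag` component that gets re-homed under reg.timinglee.org/k8s/. The field logic in isolation:

```shell
# Split registry/repo/name:tag on "/" and rebuild the docker tag command
echo "registry.aliyuncs.com/google_containers/pause:3.10.1" |
  awk -F "/" '{print "docker tag "$0" reg.timinglee.org/k8s/"$3}'
# prints: docker tag registry.aliyuncs.com/google_containers/pause:3.10.1 reg.timinglee.org/k8s/pause:3.10.1
```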

# Initialize the kubernetes cluster on the master

bash
[root@master ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 \
--image-repository reg.timinglee.org/k8s \
--kubernetes-version v1.35.3 \
--cri-socket=unix:///var/run/cri-dockerd.sock
[init] Using Kubernetes version: v1.35.3
[preflight] Running pre-flight checks
	[WARNING ContainerRuntimeVersion]: You must update your container runtime to a version that supports the CRI method RuntimeConfig. Falling back to using cgroupDriver from kubelet config will be removed in 1.36. For more information, see https://git.k8s.io/enhancements/keps/sig-node/4033-group-driver-detection-over-cri
	[WARNING SystemVerification]: kernel release 5.14.0-570.12.1.el9_6.x86_64 is unsupported. Supported LTS versions from the 5.x series are 5.4, 5.10 and 5.15. Any 6.x version is also supported. For cgroups v2 support, the recommended version is 5.10 or newer
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 172.25.254.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [172.25.254.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [172.25.254.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.66428ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://172.25.254.100:6443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is healthy after 1.00355242s
[control-plane-check] kube-scheduler is healthy after 2.366129185s
[control-plane-check] kube-apiserver is healthy after 4.003066854s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: edkm6l.key5alswo3n4kz8r
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.25.254.100:6443 --token edkm6l.key5alswo3n4kz8r \
	--discovery-token-ca-cert-hash sha256:1c4d35623b9aab09984bb8a62383a85e38fea77d530b10b69a52151b50efce

#This differs for every deployment: it is the credential other hosts use to join this cluster

kubeadm join 172.25.254.100:6443 --token edkm6l.key5alswo3n4kz8r \
--discovery-token-ca-cert-hash sha256:1c4d35623b9aab09984bb8a62383a85e38fea77d530b10b69a52151b50efc

# If you lose the join command, regenerate it

bash
[root@master ~]# kubeadm token create --print-join-command
kubeadm join 172.25.254.100:6443 --token b6idt2.wez387fr7930anq0 --discovery-token-ca-cert-hash sha256:1c4d35623b9aab09984bb8a62383a85e38fea77d530b10b69a52151b50efce56 

# If initialization goes wrong

bash
[root@master ~]# kubeadm reset  --cri-socket=unix:///var/run/cri-dockerd.sock #resets the cluster setup

Add the kubernetes environment variable on this host

bash
[root@master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" > ~/.bash_profile
[root@master ~]# source  ~/.bash_profile
[root@master ~]#  kubectl get nodes
NAME     STATUS     ROLES           AGE   VERSION
master   NotReady   control-plane   14m   v1.35.3

Add the node machines to the cluster

bash
[root@node1/node2 ~]# kubeadm join 172.25.254.100:6443 --token b6idt2.wez387fr7930anq0 \
--discovery-token-ca-cert-hash sha256:1c4d35623b9aab09984bb8a62383a85e38fea77d530b10b69a52151b50efce56 --cri-socket unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
	[WARNING ContainerRuntimeVersion]: You must update your container runtime to a version that supports the CRI method RuntimeConfig. Falling back to using cgroupDriver from kubelet config will be removed in 1.36. For more information, see https://git.k8s.io/enhancements/keps/sig-node/4033-group-driver-detection-over-cri
	[WARNING SystemVerification]: kernel release 5.14.0-570.12.1.el9_6.x86_64 is unsupported. Supported LTS versions from the 5.x series are 5.4, 5.10 and 5.15. Any 6.x version is also supported. For cgroups v2 support, the recommended version is 5.10 or newer
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config kubeadm --config your-config-file' to re-upload it.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.004359ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

#Test
[root@master ~]# kubectl get nodes		#the hosts appear in the cluster, but they are NotReady because no network plugin is installed yet
NAME     STATUS     ROLES           AGE    VERSION
master   NotReady   control-plane   21m    v1.35.3
node1    NotReady   <none>          4m5s   v1.35.3
node2    NotReady   <none>          4m     v1.35.3


# Install the network plugin

bash
[root@master ~]# ls
anaconda-ks.cfg  cri-dockerd-0.3.14-3.el8.x86_64.rpm  flannel-0.28.1.tar  kube-flannel.yml  libcgroup-0.41-19.el8.x86_64.rpm
[root@master ~]# docker load -i flannel-0.28.1.tar
5aa68bbbc67e: Loading layer [==================================================>]  8.724MB/8.724MB
2c8aa52d4746: Loading layer [==================================================>]  3.018MB/3.018MB
Loaded image: ghcr.io/flannel-io/flannel-cni-plugin:v1.9.0-flannel1
256f393e029f: Loading layer [==================================================>]  8.607MB/8.607MB
70dc5e033175: Loading layer [==================================================>]  9.554MB/9.554MB
13a60c10faeb: Loading layer [==================================================>]  17.13MB/17.13MB
9a7d8cf9ff51: Loading layer [==================================================>]  1.547MB/1.547MB
03767449b95b: Loading layer [==================================================>]  51.85MB/51.85MB
b6233bc105d7: Loading layer [==================================================>]  5.632kB/5.632kB
9738bb9596cf: Loading layer [==================================================>]  6.144kB/6.144kB
00012e17b6cc: Loading layer [==================================================>]  2.173MB/2.173MB
5f70bf18a086: Loading layer [==================================================>]  1.024kB/1.024kB
5668da16a30b: Loading layer [==================================================>]  2.178MB/2.178MB
Loaded image: ghcr.io/flannel-io/flannel:v0.28.1
[root@master ~]#  docker tag ghcr.io/flannel-io/flannel-cni-plugin:v1.9.0-flannel1 reg.timinglee.org/flannel-io/flannel-cni-plugin:v1.9.0-flannel1
[root@master ~]# docker push  reg.timinglee.org/flannel-io/flannel-cni-plugin:v1.9.0-flannel1
The push refers to repository [reg.timinglee.org/flannel-io/flannel-cni-plugin]
2c8aa52d4746: Pushed 
5aa68bbbc67e: Pushed 
v1.9.0-flannel1: digest: sha256:b3d30c221113b30fea3e8a7fccb145e929b097d0319b9eeb6b5a591b10b5c671 size: 739
[root@master ~]# docker tag ghcr.io/flannel-io/flannel:v0.28.1 reg.timinglee.org/flannel-io/flannel:v0.28.1
[root@master ~]# docker push reg.timinglee.org/flannel-io/flannel:v0.28.1
The push refers to repository [reg.timinglee.org/flannel-io/flannel]
5668da16a30b: Pushed 
5f70bf18a086: Pushed 
00012e17b6cc: Pushed 
9738bb9596cf: Pushed 
b6233bc105d7: Pushed 
03767449b95b: Pushed 
9a7d8cf9ff51: Pushed 
13a60c10faeb: Pushed 
70dc5e033175: Pushed 
256f393e029f: Pushed 
v0.28.1: digest: sha256:e671adbc267460164555159210066d3304a43e3b5dd85cc0b5b6ad62e83aab52 size: 2414
[root@master ~]# vim kube-flannel.yml
       148 image: flannel-io/flannel:v0.28.1
       175 image: flannel-io/flannel-cni-plugin:v1.9.0-flannel1
       186 image: flannel-io/flannel:v0.28.1
[root@master ~]# grep "image:" kube-flannel.yml
        image: flannel-io/flannel:v0.28.1
        image: flannel-io/flannel-cni-plugin:v1.9.0-flannel1
        image: flannel-io/flannel:v0.28.1
[root@master ~]#  kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master ~]#  kubectl get  nodes    #wait about 10 seconds for NotReady → Ready
NAME     STATUS     ROLES           AGE   VERSION
master   NotReady   control-plane   46m   v1.35.3
node1    NotReady   <none>          28m   v1.35.3
node2    Ready      <none>          28m   v1.35.3
[root@master ~]#  kubectl get  nodes
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   46m   v1.35.3
node1    Ready    <none>          28m   v1.35.3
node2    Ready    <none>          28m   v1.35.3
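
The three image: lines edited by hand in kube-flannel.yml (148/175/186) can also be rewritten with a single sed; a sketch against a scratch snippet standing in for the real file. Stripping the ghcr.io/ prefix makes docker resolve the images through the configured reg.timinglee.org mirror:

```shell
# Scratch copy of the image lines from kube-flannel.yml
cat > /tmp/flannel-images.demo <<'EOF'
        image: ghcr.io/flannel-io/flannel:v0.28.1
        image: ghcr.io/flannel-io/flannel-cni-plugin:v1.9.0-flannel1
EOF
# Drop the ghcr.io/ registry prefix from every image line
sed -i 's#image: ghcr.io/#image: #' /tmp/flannel-images.demo
grep "image:" /tmp/flannel-images.demo
```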

8 Pod

Note: remember to import the nginx image into the harbor registry

bash
[root@harbor ~]# ls
anaconda-ks.cfg  harbor-offline-installer-v2.5.4.tgz  nginx-1.23.tar.gz
[root@harbor ~]# docker load -i nginx-1.23.tar.gz
8cbe4b54fa88: Loading layer [==================================================>]  84.01MB/84.01MB
5dd6bfd241b4: Loading layer [==================================================>]  62.51MB/62.51MB
043198f57be0: Loading layer [==================================================>]  3.584kB/3.584kB
2731b5cfb616: Loading layer [==================================================>]  4.608kB/4.608kB
6791458b3942: Loading layer [==================================================>]  3.584kB/3.584kB
4d33db9fdf22: Loading layer [==================================================>]  7.168kB/7.168kB
Loaded image: nginx:1.23
[root@harbor ~]# docker images
                                                                                           i Info →   U  In Use
IMAGE                                  ID             DISK USAGE   CONTENT SIZE   EXTRA
goharbor/chartmuseum-photon:v2.5.4     e5134e6ca037        231MB             0B    U   
goharbor/harbor-core:v2.5.4            fb4df7c64e84        208MB             0B    U   
goharbor/harbor-db:v2.5.4              76e7b3295f2b        225MB             0B    U   
goharbor/harbor-exporter:v2.5.4        388b5ac2eed4       87.4MB             0B        
goharbor/harbor-jobservice:v2.5.4      01ec4f1c5ddd        233MB             0B    U   
goharbor/harbor-log:v2.5.4             1c30eb78ebc4        161MB             0B    U   
goharbor/harbor-portal:v2.5.4          bba3d21bc4b9        162MB             0B    U   
goharbor/harbor-registryctl:v2.5.4     984f0c8cd458        136MB             0B    U   
goharbor/nginx-photon:v2.5.4           0e682f78c76f        154MB             0B    U   
goharbor/notary-server-photon:v2.5.4   e542ccac08c2        112MB             0B        
goharbor/notary-signer-photon:v2.5.4   65644cf6aaa1        109MB             0B        
goharbor/prepare:v2.5.4                5582f3ef9fbe        163MB             0B        
goharbor/redis-photon:v2.5.4           c89d59625d5a        155MB             0B    U   
goharbor/registry-photon:v2.5.4        5e2d95b5227f       78.1MB             0B    U   
goharbor/trivy-adapter-photon:v2.5.4   1142826e8329        251MB             0B        
nginx:1.23                             a7be6198544f        142MB             0B        

[root@harbor ~]# docker tag nginx:1.23 reg.timinglee.org/k8s/nginx:latest
[root@harbor ~]# docker push reg.timinglee.org/k8s/nginx:latest
The push refers to repository [reg.timinglee.org/k8s/nginx]
4d33db9fdf22: Pushed 
6791458b3942: Pushed 
2731b5cfb616: Pushed 
043198f57be0: Pushed 
5dd6bfd241b4: Pushed 
8cbe4b54fa88: Pushed 
latest: digest: sha256:a97a153152fcd6410bdf4fb64f5622ecf97a753f07dcc89dab14509d059736cf size: 1570

(1) How resources are used

Imperative style

[root@master ~]# kubectl get pods  #the image is large, wait a few seconds (the pod was created earlier with kubectl run webpod)
NAME     READY   STATUS              RESTARTS   AGE
webpod   0/1     ContainerCreating   0          18s
[root@master ~]# kubectl describe pods webpod  #inspect the creation process
Name:             webpod
Namespace:        default
Priority:         0
Service Account:  default
Node:             node1/172.25.254.10
Start Time:       Sat, 04 Apr 2026 14:51:20 +0800
Labels:           run=webpod
Annotations:      <none>
Status:           Running
IP:               10.244.1.4
IPs:
  IP:  10.244.1.4
Containers:
  webpod:
    Container ID:   docker://a7fb5817d057e407b70715a90da45351fa46dd6f5451b55946395dead209ce6a
    Image:          reg.timinglee.org/k8s/nginx:latest
    Image ID:       docker-pullable://reg.timinglee.org/k8s/nginx@sha256:a97a153152fcd6410bdf4fb64f5622ecf97a753f07dcc89dab14509d059736cf
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 04 Apr 2026 14:51:54 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dxqs4 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  kube-api-access-dxqs4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  46s   default-scheduler  Successfully assigned default/webpod to node1
  Normal  Pulling    46s   kubelet            spec.containers{webpod}: Pulling image "reg.timinglee.org/k8s/nginx:latest"
  Normal  Pulled     42s   kubelet            spec.containers{webpod}: Successfully pulled image "reg.timinglee.org/k8s/nginx:latest" in 3.298s (3.298s including waiting). Image size: 142145851 bytes.
  Normal  Created    12s   kubelet            spec.containers{webpod}: Container created
  Normal  Started    12s   kubelet            spec.containers{webpod}: Container started
[root@master ~]# kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
webpod   1/1     Running   0          52s
[root@master ~]# kubectl get pods -o wide  #show which node the pod runs on
NAME     READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
webpod   1/1     Running   0          12m   10.244.1.4   node1   <none>           <none>
[root@master ~]# kubectl delete pods webpod   #delete the pod
pod "webpod" deleted from default namespace

YAML file style

[root@master ~]# kubectl create deployment test --image nginx --replicas 1  --dry-run=client -o yaml  > test.yml
[root@master ~]# cat test.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  #strategy: {}
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - image: reg.timinglee.org/k8s/nginx:latest  #edited to point at the private registry
        name: nginx
        #resources: {}
#status: {}
#imperative: create
[root@master ~]# kubectl create -f test.yml
deployment.apps/test created
[root@master ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
test-7676f665c8-r2bs8   1/1     Running   0          22s
[root@master ~]# kubectl delete -f test.yml
deployment.apps "test" deleted from default namespace
[root@master ~]# kubectl get pods
No resources found in default namespace.


#declarative: apply
[root@master ~]# kubectl apply -f test.yml
deployment.apps/test created
[root@master ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
test-7676f665c8-qvk4m   1/1     Running   0          10s



#Note: create can only create new objects, it cannot update them; apply can do both
[root@master ~]# vim test.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test
  name: test
spec:
  replicas: 2   #only the pod count is changed
  selector:
    matchLabels:
      app: test
  #strategy: {}
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - image: reg.timinglee.org/k8s/nginx:latest
        name: nginx
        #resources: {}
#status: {}


[root@master ~]# kubectl create -f test.yml
Error from server (AlreadyExists): error when creating "test.yml": deployments.apps "test" already exists
[root@master ~]#  kubectl apply  -f test.yml
deployment.apps/test configured
[root@master ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
test-7676f665c8-qvk4m   1/1     Running   0          59s
test-7676f665c8-vsc69   1/1     Running   0          4s

(2) Resource types

If you hit the following error:

[root@master ~]#  kubectl -n timinglee  get pods
NAME      READY   STATUS         RESTARTS   AGE
testpod   0/1     ErrImagePull   0          17s

You can troubleshoot it as follows:

kubectl -n timinglee get pods   #check the pod's current status

kubectl -n timinglee describe pod testpod   #see the detailed error reason

kubectl -n timinglee logs testpod   #view the container logs

node

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
master   Ready    control-plane   4h37m   v1.35.3
node1    Ready    <none>          4h20m   v1.35.3
node2    Ready    <none>          4h19m   v1.35.3
[root@master ~]# kubeadm token create --print-join-command   #run the printed command on a node to join it to the cluster
kubeadm join 172.25.254.100:6443 --token p680ky.5har9twdw68nu2pp --discovery-token-ca-cert-hash sha256:1c4d35623b9aab09984bb8a62383a85e38fea77d530b10b69a52151b50efce56 

namespace

[root@master ~]# kubectl get namespaces  #list all namespaces in the cluster
NAME              STATUS   AGE
default           Active   5h45m
kube-flannel      Active   4h59m
kube-node-lease   Active   5h45m
kube-public       Active   5h45m
kube-system       Active   5h45m
[root@master ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
test-7676f665c8-qvk4m   1/1     Running   0          76m
test-7676f665c8-vsc69   1/1     Running   0          75m
[root@master ~]# kubectl -n kube-flannel get pods  #check the Flannel network plugin pods
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-5p8w4   1/1     Running   0          4h59m
kube-flannel-ds-d6545   1/1     Running   0          4h59m
kube-flannel-ds-kmcp5   1/1     Running   0          4h59m

[root@master ~]# kubectl create namespace timinglee 
namespace/timinglee created
[root@master ~]# kubectl get namespaces
NAME              STATUS   AGE
default           Active   5h54m
kube-flannel      Active   5h8m
kube-node-lease   Active   5h54m
kube-public       Active   5h54m
kube-system       Active   5h54m
timinglee         Active   12s


[root@master ~]# kubectl -n timinglee run testpod --image reg.timinglee.org/k8s/nginx:latest
pod/testpod created
[root@master ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
test-7676f665c8-qvk4m   1/1     Running   0          93m
test-7676f665c8-vsc69   1/1     Running   0          92m
[root@master ~]# kubectl -n timinglee  get pods
NAME      READY   STATUS    RESTARTS   AGE
testpod   1/1     Running   0          28s
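
The namespace created imperatively above can also be declared in a manifest and created with `kubectl apply`; a minimal sketch, using the same `timinglee` name as the example:

```yaml
# namespace.yml -- declarative equivalent of `kubectl create namespace timinglee`
apiVersion: v1
kind: Namespace
metadata:
  name: timinglee
```

Apply it with `kubectl apply -f namespace.yml`; deleting the file's resources later is just `kubectl delete -f namespace.yml`.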


(3) kubectl commands

[root@master ~]# cat test.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test
  name: test
spec:
  replicas: 4
  selector:
    matchLabels:
      app: test
  #strategy: {}
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        #  - image: reg.timinglee.org/k8s/nginx:latest
      - image: reg.timinglee.org/k8s/myapp:v1
        #name: nginx
        name: myapp
        #resources: {}
#status: {}

[root@master ~]# kubectl -n timinglee delete pods testpod  #clean up testpod to free resources
pod "testpod" deleted from timinglee namespace
[root@master ~]# kubectl apply -f test.yml
[root@master ~]# kubectl get deployments.apps
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
test   2/2     2            2           122m
[root@master ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
test-7676f665c8-qvk4m   1/1     Running   0          123m
test-7676f665c8-vsc69   1/1     Running   0          122m
[root@master ~]# kubectl edit deployments.apps test
replicas: 4   #change the replica count in the editor
[root@master ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
test-7676f665c8-hdv67   1/1     Running   0          10s
test-7676f665c8-kvwvt   1/1     Running   0          10s
test-7676f665c8-qvk4m   1/1     Running   0          125m
test-7676f665c8-vsc69   1/1     Running   0          124m
[root@master ~]# kubectl edit deployments.apps test
deployment.apps/test edited
[root@master ~]#  kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
test-7676f665c8-qvk4m   1/1     Running   0          150m
[root@master ~]# kubectl edit deployments.apps test
deployment.apps/test edited
[root@master ~]#  kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
test-7676f665c8-4qqkp   1/1     Running   0          2s
test-7676f665c8-fwn2x   1/1     Running   0          2s
test-7676f665c8-mrl85   1/1     Running   0          2s
test-7676f665c8-qvk4m   1/1     Running   0          151m
[root@master ~]#  kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE    IP            NODE    NOMINATED NODE   READINESS GATES
test-7676f665c8-4qqkp   1/1     Running   0          14s    10.244.2.6    node2   <none>           <none>
test-7676f665c8-fwn2x   1/1     Running   0          14s    10.244.2.7    node2   <none>           <none>
test-7676f665c8-mrl85   1/1     Running   0          14s    10.244.1.13   node1   <none>           <none>
test-7676f665c8-qvk4m   1/1     Running   0          151m   10.244.1.8    node1   <none>           <none>
[root@master ~]# kubectl delete service test
service "test" deleted from default namespace
[root@master ~]# kubectl expose  deployment test --port  80 --target-port 80
service/test exposed
[root@master ~]# kubectl describe service test
Name:                     test
Namespace:                default
Labels:                   app=test
Annotations:              <none>
Selector:                 app=test
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.57.191
IPs:                      10.102.57.191
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
Endpoints:                10.244.1.15:80,10.244.2.8:80,10.244.1.16:80 + 1 more...
Session Affinity:         None
Internal Traffic Policy:  Cluster
Events:                   <none>
[root@master ~]# curl 10.102.57.191
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master ~]# curl 10.102.57.191/hostname.html
test-664bc5f8f8-7g2d8
[root@master ~]# kubectl logs pods/test-664bc5f8f8-7g2d8
10.244.0.0 - - [04/Apr/2026:10:57:01 +0000] "GET /hostname.html HTTP/1.1" 200 22 "-" "curl/7.76.1" "-"
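
For reference, the Service that `kubectl expose deployment test --port 80 --target-port 80` generated above is roughly equivalent to this manifest (a sketch; field values taken from the `kubectl describe service test` output):

```yaml
# service.yml -- declarative form of the Service created by `kubectl expose`
apiVersion: v1
kind: Service
metadata:
  name: test
  labels:
    app: test
spec:
  type: ClusterIP        # default; cluster-internal virtual IP
  selector:
    app: test            # routes to pods carrying this label
  ports:
  - port: 80             # Service port (the ClusterIP listens here)
    targetPort: 80       # container port on the backend pods
    protocol: TCP
```

The `selector` is what links the Service to the Deployment's pods: every pod with `app=test` becomes an endpoint automatically.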

attach

[root@master ~]# kubectl run testpod -it --image=reg.timinglee.org/k8s/myapp:v1 -- /bin/sh
All commands and output from this session will be recorded in container logs, including credentials and sensitive information passed through the command prompt.
If you don't see a command prompt, try pressing enter.
/ # 
/ # 
/ # Session ended, resume using 'kubectl attach testpod -c testpod -i -t' command when the pod is running
[root@master ~]# kubectl attach testpod -it
All commands and output from this session will be recorded in container logs, including credentials and sensitive information passed through the command prompt.
If you don't see a command prompt, try pressing enter.
/ # 
/ # 
/ # Session ended, resume using 'kubectl attach testpod -c testpod -i -t' command when the pod is running
[root@master ~]#  kubectl exec -it testpod -c testpod -- /bin/sh
/ # 
/ # 
/ # exec attach failed: error on attach stdin: read escape sequence
command terminated with exit code 126
[root@master ~]#  kubectl exec -it testpod -c testpod -- /bin/sh
/ # ls
bin    dev    etc    home   lib    media  mnt    proc   root   run    sbin   srv    sys    tmp    usr    var
/ # exec attach failed: error on attach stdin: read escape sequence
command terminated with exit code 126

Scaling

Tip: at first you may hit the following errors:

[root@master ~]# kubectl get pods
E0405 10:03:59.231503    4255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E0405 10:03:59.232155    4255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E0405 10:03:59.233963    4255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E0405 10:03:59.234521    4255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E0405 10:03:59.236080    4255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@master ~]# kubectl apply -f test.yml 
error: error validating "test.yml": error validating data: failed to download openapi: Get "http://localhost:8080/openapi/v2?timeout=32s": dial tcp [::1]:8080: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

Solution: the error means kubectl cannot find its kubeconfig and is falling back to localhost:8080, so reload the profile that exports KUBECONFIG:

[root@master ~]#  source  ~/.bash_profile

Scaling

[root@master ~]#  kubectl get pods
NAME                    READY   STATUS    RESTARTS      AGE
test-664bc5f8f8-5cj42   1/1     Running   1 (35m ago)   15h
testpod                 1/1     Running   1 (35m ago)   14h
[root@master ~]# vim test.yml 
[root@master ~]# kubectl edit deployments.apps test
deployment.apps/test edited
[root@master ~]#  kubectl get pods
NAME                    READY   STATUS    RESTARTS      AGE
test-664bc5f8f8-5cj42   1/1     Running   1 (37m ago)   15h
test-664bc5f8f8-lxzt4   1/1     Running   0             2s
test-664bc5f8f8-p46tl   1/1     Running   0             2s
test-664bc5f8f8-v5zfq   1/1     Running   0             2s
testpod                 1/1     Running   1 (37m ago)   14h
[root@master ~]# kubectl scale deployment test --replicas 6
deployment.apps/test scaled
[root@master ~]#  kubectl get pods
NAME                    READY   STATUS    RESTARTS      AGE
test-664bc5f8f8-5cj42   1/1     Running   1 (37m ago)   15h
test-664bc5f8f8-6lznm   1/1     Running   0             2s
test-664bc5f8f8-lxzt4   1/1     Running   0             34s
test-664bc5f8f8-p46tl   1/1     Running   0             34s
test-664bc5f8f8-pvt8f   1/1     Running   0             2s
test-664bc5f8f8-v5zfq   1/1     Running   0             34s
testpod                 1/1     Running   1 (37m ago)   14h
[root@master ~]# kubectl scale deployment test --replicas 1
deployment.apps/test scaled
[root@master ~]#  kubectl get pods
NAME                    READY   STATUS    RESTARTS      AGE
test-664bc5f8f8-5cj42   1/1     Running   1 (37m ago)   15h
testpod                 1/1     Running   1 (37m ago)   14h
[root@master ~]# kubectl get pods  --show-labels
NAME                    READY   STATUS    RESTARTS      AGE   LABELS
test-664bc5f8f8-5cj42   1/1     Running   1 (37m ago)   15h   app=test,pod-template-hash=664bc5f8f8
testpod                 1/1     Running   1 (37m ago)   14h   run=testpod
[root@master ~]# kubectl label pods testpod name=lee
pod/testpod labeled
[root@master ~]# kubectl get pods  --show-labels
NAME                    READY   STATUS    RESTARTS      AGE   LABELS
test-664bc5f8f8-5cj42   1/1     Running   1 (38m ago)   15h   app=test,pod-template-hash=664bc5f8f8
testpod                 1/1     Running   1 (38m ago)   14h   name=lee,run=testpod
[root@master ~]# kubectl label pods testpod  name-
pod/testpod unlabeled
[root@master ~]# kubectl get pods  --show-labels
NAME                    READY   STATUS    RESTARTS      AGE   LABELS
test-664bc5f8f8-5cj42   1/1     Running   1 (38m ago)   15h   app=test,pod-template-hash=664bc5f8f8
testpod                 1/1     Running   1 (38m ago)   14h   run=testpod
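
Labels added with `kubectl label` can equally be declared in the pod manifest, which keeps them across re-creations; a minimal sketch using the names from the example above:

```yaml
# labeled-pod.yml -- declaring labels up front instead of patching them in later
apiVersion: v1
kind: Pod
metadata:
  name: testpod
  labels:
    run: testpod
    name: lee          # same label that `kubectl label pods testpod name=lee` added
spec:
  containers:
  - image: reg.timinglee.org/k8s/nginx:latest
    name: testpod
```

Labels set this way survive pod re-creation from the manifest, whereas labels patched onto a running pod are lost when the pod is deleted.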

(4) Pod applications

Self-managed (standalone) pods

[root@master ~]# cat test.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test
  name: test
spec:
  replicas: 4
  selector:
    matchLabels:
      app: test
  #strategy: {}
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        #  - image: reg.timinglee.org/k8s/nginx:latest
      - image: reg.timinglee.org/library/myapp-v2:latest
        #name: nginx
        name: myapp-v2
        #resources: {}
#status: {}
[root@master pod]# kubectl run  myappv2 --image  myapp:v2  --port 80
pod/myappv2 created
[root@master pod]# kubectl get pods
NAME      READY   STATUS              RESTARTS   AGE
myappv2   0/1     ContainerCreating   0          8s          #creating
[root@master pod]# kubectl get pods
NAME      READY   STATUS         RESTARTS   AGE
myappv2   0/1     ErrImagePull   0          20s         #image pull failed

[root@master pod]# kubectl get pods
NAME      READY   STATUS             RESTARTS   AGE
myappv2   0/1     ImagePullBackOff   0          3m48s      #backing off, will retry the pull

#delete the failed pod first (kubectl delete pods myappv2), then re-create it with the full registry path
[root@master ~]# kubectl run myappv2 --image=reg.timinglee.org/library/myapp-v2:latest --port=80
pod/myappv2 created
[root@master ~]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
myappv2   1/1     Running   0          55s
[root@master ~]#  kubectl delete pods myappv2
pod "myappv2" deleted from default namespace
[root@master ~]# kubectl get pods
No resources found in default namespace.

Tip: remember to push the images first.

#note: both tags below are created from the same timinglee/myapp:v2 image (the pushed digests are identical); to actually distinguish versions, tag timinglee/myapp:v1 as myapp-v1
[root@harbor ~]# docker tag timinglee/myapp:v2 reg.timinglee.org/library/myapp-v1:latest
[root@harbor ~]# docker push reg.timinglee.org/library/myapp-v1:latest
The push refers to repository [reg.timinglee.org/library/myapp-v1]
05a9e65e2d53: Mounted from library/myapp 
68695a6cfd7d: Mounted from library/myapp 
c1dc81a64903: Mounted from library/myapp 
8460a579ab63: Mounted from library/myapp 
d39d92664027: Mounted from library/myapp 
latest: digest: sha256:5f4afc8302ade316fc47c99ee1d41f8ba94dbe7e3e7747dd87215a15429b9102 size: 1362
[root@harbor ~]# docker tag timinglee/myapp:v2 reg.timinglee.org/library/myapp-v2:latest
[root@harbor ~]# docker push reg.timinglee.org/library/myapp-v2:latest
The push refers to repository [reg.timinglee.org/library/myapp-v2]
05a9e65e2d53: Mounted from library/myapp-v1 
68695a6cfd7d: Mounted from library/myapp-v1 
c1dc81a64903: Mounted from library/myapp-v1 
8460a579ab63: Mounted from library/myapp-v1 
d39d92664027: Mounted from library/myapp-v1 
latest: digest: sha256:5f4afc8302ade316fc47c99ee1d41f8ba94dbe7e3e7747dd87215a15429b9102 size: 1362

Managing pods with a controller

#create a Deployment named webcluster
[root@master ~]# kubectl create deployment webcluster --image reg.timinglee.org/library/myapp-v2:latest --replicas 1
deployment.apps/webcluster created
#list all Deployments with details (image, selector, status)
[root@master ~]# kubectl get deployment.apps -o wide
NAME         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                      SELECTOR
webcluster   1/1     1            1           27s   myapp-v2     reg.timinglee.org/library/myapp-v2:latest   app=webcluster
[root@master ~]#  kubectl scale deployment webcluster --replicas 2
deployment.apps/webcluster scaled
[root@master ~]# kubectl get deployment.apps -o wide
NAME         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                      SELECTOR
webcluster   2/2     2            2           68s   myapp-v2     reg.timinglee.org/library/myapp-v2:latest   app=webcluster
[root@master ~]# kubectl scale deployment webcluster --replicas 1
deployment.apps/webcluster scaled
[root@master ~]# kubectl get deployment.apps -o wide
NAME         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                      SELECTOR
webcluster   1/1     1            1           81s   myapp-v2     reg.timinglee.org/library/myapp-v2:latest   app=webcluster
[root@master ~]# kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
webcluster-585458b5b7-tdfvq   1/1     Running   0          114s
#remove the app label from this pod
[root@master ~]# kubectl label pods webcluster-585458b5b7-tdfvq   app-
pod/webcluster-585458b5b7-tdfvq unlabeled
[root@master ~]# kubectl get pods
#two pods now: with the label removed, the Deployment's selector no longer matches the old pod,
#so the controller assumes a replica is missing and immediately creates a new one
NAME                          READY   STATUS    RESTARTS   AGE
webcluster-585458b5b7-kw5z5   1/1     Running   0          10s
webcluster-585458b5b7-tdfvq   1/1     Running   0          5m58s
[root@master ~]# kubectl label pods webcluster-585458b5b7-tdfvq   app=webcluster
pod/webcluster-585458b5b7-tdfvq labeled
[root@master ~]# kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
webcluster-585458b5b7-tdfvq   1/1     Running   0          6m30s
#expose the webcluster Deployment through a Service
[root@master ~]# kubectl expose deployment webcluster --port 80 --target-port 80
service/webcluster exposed
[root@master ~]# kubectl describe svc webcluster | tail -n 10 
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.116.168
IPs:                      10.103.116.168
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
Endpoints:                10.244.1.13:80
Session Affinity:         None
Internal Traffic Policy:  Cluster
Events:                   <none>
[root@master ~]# curl  10.103.116.168
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
#update the image to v1
[root@master ~]# kubectl set image deployments/webcluster myapp-v2=reg.timinglee.org/library/myapp-v1:latest
deployment.apps/webcluster image updated
[root@master ~]# curl 10.103.116.168
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@master ~]# kubectl rollout history deployment webcluster
deployment.apps/webcluster 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

[root@master ~]# kubectl rollout undo deployment webcluster --to-revision 1
deployment.apps/webcluster rolled back
[root@master ~]# curl 10.103.116.168
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
#the page still reports v2 because both image tags were pushed from the same myapp:v2 image
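
In the `rollout history` output above, CHANGE-CAUSE shows `<none>`. Kubernetes fills that column from the `kubernetes.io/change-cause` annotation, which can be set in the Deployment manifest; a sketch rebuilt from the commands above (the annotation text itself is illustrative):

```yaml
# webcluster.yml -- Deployment with a change-cause recorded for rollout history
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webcluster
  annotations:
    # shown as CHANGE-CAUSE in `kubectl rollout history deployment webcluster`
    kubernetes.io/change-cause: "switch image to myapp-v1"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webcluster
  template:
    metadata:
      labels:
        app: webcluster
    spec:
      containers:
      - name: myapp-v2
        image: reg.timinglee.org/library/myapp-v1:latest
```

Updating the annotation on each change makes revisions in the history readable, which is much easier to work with when choosing a `--to-revision` for `kubectl rollout undo`.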

Deploying applications with a YAML file

#Run a single container

[root@master ~]# kubectl run  lee1 --image reg.timinglee.org/library/myapp-v1:latest --dry-run=client -o yaml > 1test.yml
[root@master ~]# vim  1test.yml 
# API version; Pods use v1
apiVersion: v1

# Resource type: a Pod
kind: Pod

# Metadata: the Pod's name and labels
metadata:
  # labels, used for selecting and matching
  labels:
    run: lee1
  # the Pod's name: lee1
  name: lee1

# Spec: the containers inside the Pod
spec:
  # container list
  containers:
    # first container
  - image: reg.timinglee.org/library/myapp-v1:latest  # image path (private registry, v1)
    name: lee1                                        # container name: lee1
    resources: {}                                     # resource limits (empty = none configured)

  dnsPolicy: ClusterFirst       # DNS policy; cluster-first by default
  restartPolicy: Always         # restart policy: restart automatically on exit

# status field; filled in by Kubernetes, no need to write it when creating
status: {}
[root@master ~]# kubectl apply -f 1test.yml
pod/lee1 created
[root@master ~]# kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
lee1                          1/1     Running   0          7s
webcluster-585458b5b7-7v864   1/1     Running   0          3h5m
[root@master ~]# kubectl delete deployments.apps webcluster 
deployment.apps "webcluster" deleted from default namespace
[root@master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
lee1   1/1     Running   0          82s
[root@master ~]# kubectl describe  pods
Name:             lee1
Namespace:        default
Priority:         0
Service Account:  default
Node:             node1/172.25.254.10
Start Time:       Sun, 05 Apr 2026 14:56:03 +0800
Labels:           run=lee1
Annotations:      <none>
Status:           Running
IP:               10.244.1.17
IPs:
  IP:  10.244.1.17
Containers:
  lee1:
    Container ID:   docker://77f218983d90d2b5a2b2137fa05bd0dee1d8f92b46646d514c4022054c4091b9
    Image:          reg.timinglee.org/library/myapp-v1:latest
    Image ID:       docker-pullable://reg.timinglee.org/library/myapp-v1@sha256:5f4afc8302ade316fc47c99ee1d41f8ba94dbe7e3e7747dd87215a15429b9102
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sun, 05 Apr 2026 14:56:04 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xw4bb (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  kube-api-access-xw4bb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  93s   default-scheduler  Successfully assigned default/lee1 to node1
  Normal  Pulling    92s   kubelet            spec.containers{lee1}: Pulling image "reg.timinglee.org/library/myapp-v1:latest"
  Normal  Pulled     92s   kubelet            spec.containers{lee1}: Successfully pulled image "reg.timinglee.org/library/myapp-v1:latest" in 66ms (66ms including waiting). Image size: 15504059 bytes.
  Normal  Created    92s   kubelet            spec.containers{lee1}: Container created
  Normal  Started    92s   kubelet            spec.containers{lee1}: Container started
[root@master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
lee1   1/1     Running   0          111s
[root@master ~]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE    IP            NODE    NOMINATED NODE   READINESS GATES
lee1   1/1     Running   0          2m3s   10.244.1.17   node1   <none>           <none>
[root@master ~]# kubectl delete -f 1test.yml
pod "lee1" deleted from default namespace
[root@master ~]# kubectl get pods
No resources found in default namespace.

#Run multiple containers

[root@master ~]# cp 1test.yml  2test.yml
[root@master ~]# vim 2test.yml 
[root@master ~]#  kubectl apply -f 2test.yml
pod/lee1 created
[root@master ~]#  kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
lee1   2/2     Running   0          6s
[root@master ~]# cat 2test.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: lee1
  name: lee1
spec:
  containers:
  - image: reg.timinglee.org/library/myapp-v1:latest
    name: lee1
  - image: busybox:latest
    name: busybox
    command:
      - /bin/sh
      - -c
      - sleep 20000

#Understanding network sharing inside a pod

On the 172.25.254.200 host, load busyboxplus:latest into Harbor first:

[root@harbor xia]# ls
busyboxplus.tar.gz
[root@harbor xia]# mv busyboxplus.tar.gz /root
[root@harbor xia]# cd
[root@harbor ~]# docker load -i busyboxplus.tar.gz
5f70bf18a086: Loading layer [==================================================>]  1.024kB/1.024kB
075a34aac01b: Loading layer [==================================================>]  13.43MB/13.43MB
774600fa57ae: Loading layer [==================================================>]  6.144kB/6.144kB
Loaded image: busyboxplus:latest     
[root@harbor ~]# docker tag busyboxplus:latest    reg.timinglee.org/library/busyboxplus:latest
[root@harbor ~]# docker push reg.timinglee.org/library/busyboxplus:latest 
The push refers to repository [reg.timinglee.org/library/busyboxplus]
5f70bf18a086: Pushed 
774600fa57ae: Pushed 
075a34aac01b: Pushed 
latest: digest: sha256:9d1c242c1fd588a1b8ec4461d33a9ba08071f0cc5bb2d50d4ca49e430014ab06 size: 1353
[root@master ~]# cp 2test.yml  3test.yml
[root@master ~]# vim 3test.yml 
[root@master ~]# cat 3test.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: lee1
  name: lee1
spec:
  containers:
  - image: reg.timinglee.org/library/myapp-v1:latest
    name: lee1
  - image: busyboxplus:latest
    name: busybox
    command:
      - /bin/sh
      - -c
      - sleep 20000


[root@master ~]# kubectl apply -f 3test.yml
pod/lee1 configured
[root@master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
lee1   2/2     Running   0          11m
[root@master ~]# kubectl exec -it pods/lee1 -c busybox -- /bin/sh
/ #  curl localhost
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
/ # exec attach failed: error on attach stdin: read escape sequence
command terminated with exit code 126
#containers in one pod share a single network namespace, so busybox reaches the myapp container via localhost

#Port mapping (hostPort)

[root@master ~]#  cp 1test.yml  4test.yml
[root@master ~]# vim 4test.yml 
[root@master ~]# cat 4test.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: lee1
  name: lee1
spec:
  containers:
  - image: reg.timinglee.org/library/myapp-v1:latest
    name: myapp-v1
    ports:
    - name: webport
      containerPort: 80
      hostPort: 80
      protocol: TCP
    
[root@master ~]# kubectl apply -f 4test.yml
pod/lee1 created
[root@master ~]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
lee1   1/1     Running   0          17s   10.244.1.20   node1   <none>           <none>
[root@master ~]# curl 172.25.254.10
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

#Selecting the node to run on

[root@master ~]# cp 4test.yml  5test.yml  
[root@master ~]# vim 5test.yml 
[root@master ~]# cat 5test.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: lee1
  name: lee1
spec:
  nodeSelector:
    kubernetes.io/hostname: node2
  containers:
  - image: reg.timinglee.org/library/myapp-v1:latest
    name: myapp-v1
    ports:
    - name: webport
      containerPort: 80
      hostPort: 80
      protocol: TCP
    
[root@master ~]# kubectl apply -f 5test.yml
pod/lee1 created
[root@master ~]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
lee1   1/1     Running   0          22s   10.244.2.13   node2   <none>           <none>
The Pod now runs on node2; previously it ran on node1.

#Sharing the host network

[root@master ~]# cp 5test.yml  6test.yml
[root@master ~]# vim 6test.yml 
[root@master ~]#  cat 6test.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: lee1
  name: lee1
spec:
  hostNetwork: true
  nodeSelector:
    kubernetes.io/hostname: node2
  containers:
  - image: reg.timinglee.org/library/busybox:latest
    name: busybox
    command:
      - /bin/sh
      - -c
      - sleep 1000 
     
[root@master ~]#  kubectl apply -f 6test.yml
pod/lee1 created  
[root@master ~]#  kubectl exec -it pods/lee1 -c  reg.timinglee.org/library/busybox:latest -- /bin/sh
Error from server (BadRequest): container reg.timinglee.org/library/busybox:latest is not valid for pod lee1
[root@master ~]# kubectl exec -it pods/lee1 -c busybox -- /bin/sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq qlen 1000
    link/ether 00:0c:29:a7:9f:74 brd ff:ff:ff:ff:ff:ff
    inet 172.25.254.20/24 brd 172.25.254.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fea7:9f74/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue 
    link/ether 36:ef:89:6a:8f:43 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue 
    link/ether 72:1a:37:ec:de:df brd ff:ff:ff:ff:ff:ff
    inet 10.244.2.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::701a:37ff:feec:dedf/64 scope link 
       valid_lft forever preferred_lft forever
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue qlen 1000
    link/ether 76:38:51:2a:bc:7c brd ff:ff:ff:ff:ff:ff
    inet 10.244.2.1/24 brd 10.244.2.255 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::7438:51ff:fe2a:bc7c/64 scope link 
       valid_lft forever preferred_lft forever
6: veth5ca446a6@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 
    link/ether 26:af:20:83:e7:47 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::24af:20ff:fe83:e747/64 scope link 
       valid_lft forever preferred_lft forever
7: veth06041591@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 
    link/ether c2:31:7e:93:cb:56 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::c031:7eff:fe93:cb56/64 scope link 
       valid_lft forever preferred_lft forever
/ # 

(5) Pod lifecycle

Init containers

[root@master ~]# cp 6test.yml  init.yml
[root@master ~]# vim init.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: lee1
  name: lee1
spec:
  initContainers:
  - name: init-myservice
    image: busybox
    command: ["sh","-c","until test -e /testfile;do echo waiting for myservice; sleep 2;done"]
  containers:
  - image: reg.timinglee.org/library/myapp-v1
    name: myapp-v1
[root@master ~]# kubectl  apply -f init.yml 
pod/lee1 created

Open a watch in another terminal on the same machine:

[root@master ~]# watch -n 1 kubectl get pods

Before:

[root@master ~]#  kubectl exec -it pods/lee1 -c init-myservice -- /bin/sh
/ # touch  /testfile
/ # command terminated with exit code 137

Changes seen in the watch afterwards:

livenessProbe (liveness probe)

[root@master ~]# kubectl create deployment webcluster --image myapp-v1 --replicas 1 --dry-run=client -o yaml > liveness.yml
[root@master ~]#  kubectl apply -f liveness.yml
deployment.apps/webcluster created
[root@master ~]#  watch -n 1 "kubectl get pods  ;kubectl describe svc webcluster | tail -n 10"
[root@master ~]# kubectl exec -it pods/webcluster-857bd694c5-btlr2 -c myapp-v1 -- /bin/sh
/ #  nginx -s stop
2026/04/05 08:57:34 [notice] 13#13: signal process started
/ # command terminated with exit code 137
[root@master ~]# 
[root@master ~]# cp liveness.yml  Readiness.yml
[root@master ~]# vim Readiness.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webcluster       # label used to identify this app
  name: webcluster        # the Deployment is named webcluster
spec:
  replicas: 1             # run 1 Pod replica
  selector:               # selector: which Pods this Deployment manages
    matchLabels:
      app: webcluster
  template:               # Pod template (new Pods are created from it)
    metadata:
      labels:
        app: webcluster
    spec:
      containers:
      - image: myapp-v1        # use the myapp-v1 image
        name: myapp-v1        # container name
        readinessProbe:       # readiness probe (the key feature here)
          httpGet:            # check over HTTP
            path: /test.html  # the page that must be reachable
            port: 80          # on port 80
          initialDelaySeconds: 1  # wait 1s after startup before the first check
          periodSeconds: 3        # check every 3 seconds
          timeoutSeconds: 1       # a check taking longer than 1s counts as failed


---
# Service (used to access the Pods)
apiVersion: v1
kind: Service
metadata:
  labels:
    app: webcluster
  name: webcluster
spec:
  ports:
  - port: 80              # the Service's own port
    protocol: TCP
    targetPort: 80        # forward to port 80 on the Pod
  selector:
    app: webcluster       # select Pods labeled app=webcluster
[root@master ~]# kubectl apply -f Readiness.yml
deployment.apps/webcluster configured
#monitor in another terminal
[root@master pod]# watch -n 1 "kubectl get pods  ;kubectl describe svc webcluster | tail -n 10"

Before:

[root@master ~]# kubectl exec -it pods/webcluster-65c95f6d59-rp9mg   -c myapp-v1 -- /bin/sh
/ # echo timinglee > /usr/share/nginx/html/test.html

If /usr/share/nginx/html/test.html is deleted:

[root@master ~]# kubectl exec -it pods/webcluster-65c95f6d59-rp9mg   -c myapp-v1 -- /bin/sh
/ # echo timinglee > /usr/share/nginx/html/test.html
/ # rm -rf  /usr/share/nginx/html/test.html 

Recovery:

[root@master ~]# kubectl exec -it pods/webcluster-65c95f6d59-rp9mg   -c myapp-v1 -- /bin/sh
/ # echo timinglee > /usr/share/nginx/html/test.html
/ # rm -rf  /usr/share/nginx/html/test.html 
/ # echo timinglee > /usr/share/nginx/html/test.html
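With the probe settings used above (initialDelaySeconds: 1, periodSeconds: 3), the kubelet runs checks at roughly t = 1, 4, 7, ... seconds after the container starts. A small sketch of that schedule (illustrative only, not real kubelet logic):

```python
def probe_times(initial_delay: int, period: int, n: int):
    """Times (seconds after container start) of the first n probe checks."""
    return [initial_delay + k * period for k in range(n)]

print(probe_times(1, 3, 4))  # [1, 4, 7, 10]
```

This explains the short lag seen in the watch window: after test.html is created or deleted, readiness only flips on the next scheduled check.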

9. Controllers in k8s

(1) What is a controller

A controller is another mechanism for managing Pods:

  • Standalone Pods: if the Pod exits or is deleted unexpectedly, it is not recreated
  • Controller-managed Pods: for the lifetime of the controller, the desired number of Pod replicas is always maintained

A Pod controller is an intermediate layer for managing Pods. With a controller, you only declare how many Pods of what kind you want; the controller creates Pods that satisfy those conditions and keeps each Pod in the user's desired target state. If a Pod fails while running, the controller re-schedules Pods according to the specified policy.

When a controller is created, the desired state is written to etcd. Kubernetes compares this desired state with the Pods' current state, and whenever the two differ it automatically drives the cluster back toward the desired state.
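The desired-vs-current reconcile behavior described above can be sketched in a few lines of Python. This is a conceptual illustration only, not real Kubernetes code; the `desired`/`current` structures are hypothetical stand-ins for what etcd and the cluster report:

```python
def reconcile(desired: int, current: list) -> list:
    """One pass of a controller's reconcile loop: compare the desired
    replica count with the Pods actually Running and converge."""
    running = [p for p in current if p["status"] == "Running"]
    missing = desired - len(running)
    for i in range(missing):                 # too few: create replacements
        running.append({"name": f"new-pod-{i}", "status": "Running"})
    if missing < 0:                          # too many: delete the extras
        running = running[:desired]
    return running

# one replica crashed: only 2 of the 3 desired Pods are Running
pods = [{"name": "pod-0", "status": "Running"},
        {"name": "pod-1", "status": "Failed"},
        {"name": "pod-2", "status": "Running"}]
print(len(reconcile(3, pods)))   # 3
```

A real controller runs this loop continuously, so any drift (a crash, a manual delete) is corrected on the next pass.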

(2) Common controller types

| Controller | Purpose |
|------------|---------|
| ReplicationController | The original Pod controller; deprecated, replaced by ReplicaSet |
| ReplicaSet | Ensures the specified number of Pod replicas is running at all times |
| Deployment | Provides declarative updates for Pods and ReplicaSets |
| DaemonSet | Ensures a copy of a Pod runs on every node (or every selected node) |
| StatefulSet | The workload API object for managing stateful applications |
| Job | Runs a batch task once, ensuring one or more Pods terminate successfully |
| CronJob | Creates Jobs on a time-based schedule |
| HPA (Horizontal Pod Autoscaler) | Automatically scales the number of Pods horizontally based on resource utilization |

(3) ReplicaSet controller

[root@master ~]# kubectl create deployment webcluster --image myapp:v1  --dry-run=client -o yaml  > repset.yml

[root@master ~]# vim repset.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webcluster
  name: webcluster
spec:
  replicas: 4  # how many Pods to run
  selector:
    matchLabels:
      app: webcluster
  # strategy: {}
  template:
    metadata:
      labels:
        app: webcluster
    spec:
      containers:
      - image: reg.timinglee.org/library/myapp:v1
        name: myapp

[root@master ~]# kubectl apply -f repset.yml
# open a new shell to watch the result
[root@master ~]# watch -n 1 kubectl get pods   --show-labels
Every 1.0s: kubectl get pods --show-labels  
NAME                          READY   STATUS    RESTARTS   AGE     IP           NODE    NOMINATED NODE   READINESS GATES
webcluster-566bdd4bff-6znbd   1/1     Running   0          13m     10.244.1.3   node1   <none>           <none>
webcluster-566bdd4bff-9wnx9   1/1     Running   0          13m     10.244.2.5   node2   <none>           <none>
webcluster-566bdd4bff-rxj25   1/1     Running   0          5m52s   10.244.2.6   node2   <none>           <none>
webcluster-566bdd4bff-smpzv   1/1     Running   0          5m52s   10.244.1.5   node1   <none>           <none>

(4) Deployment

Monitoring:

[root@master ~]# watch -n 1 " kubectl get pods   --show-labels;echo ====;kubectl get replicasets.apps"

Create the Deployment controller:

[root@master ~]# kubectl create deployment webcluster --image myapp:v1  --dry-run=client -o yaml  > dep.yml
[root@master controler]# vim dep.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webcluster
  name: webcluster
spec:
  minReadySeconds: 5  # a new Pod must stay ready for 5s before counting as available
  replicas: 6
  selector:
    matchLabels:
      app: webcluster
  strategy:
    rollingUpdate:
      maxSurge: 1                                       # during an update, at most 1 Pod above the desired count
      maxUnavailable: 0                                 # during an update, no Pod below the desired count may be unavailable
  template:
    metadata:
      labels:
        app: webcluster
    spec:
      containers:
      - image: reg.timinglee.org/library/myapp:v2
        name: myapp
[root@master ~]# kubectl apply -f dep.yml
#the watch window shows the rollout


#expose the service
[root@master ~]# kubectl expose deployment webcluster --port 80 --target-port 80
[root@master ~]# kubectl describe  services 
Name:                     kubernetes
Namespace:                default
Labels:                   component=apiserver
                          provider=kubernetes
Annotations:              <none>
Selector:                 <none>
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.0.1
IPs:                      10.96.0.1
Port:                     https  443/TCP
TargetPort:               6443/TCP
Endpoints:                172.25.254.100:6443
Session Affinity:         None
Internal Traffic Policy:  Cluster
Events:                   <none>


Name:                     test
Namespace:                default
Labels:                   app=test
Annotations:              <none>
Selector:                 app=test
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.57.191
IPs:                      10.102.57.191
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
Endpoints:                
Session Affinity:         None
Internal Traffic Policy:  Cluster
Events:                   <none>


Name:                     webcluster
Namespace:                default
Labels:                   app=webcluster
Annotations:              <none>
Selector:                 app=webcluster
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.104.21.223
IPs:                      10.104.21.223
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
Endpoints:                10.244.1.8:80,10.244.2.5:80
Session Affinity:         None
Internal Traffic Policy:  Cluster
Events:                   <none>
#access:
[root@master ~]# curl 10.104.21.223
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
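With the strategy above (replicas: 6, maxSurge: 1, maxUnavailable: 0), a rollout never runs more than 7 Pods in total and never drops below 6 available Pods. A quick sketch of that arithmetic (an illustrative helper, not part of Kubernetes):

```python
def rollout_bounds(replicas: int, max_surge: int, max_unavailable: int):
    """Upper bound on total Pods and lower bound on available Pods
    during a rolling update."""
    return replicas + max_surge, replicas - max_unavailable

total_max, available_min = rollout_bounds(6, 1, 0)
print(total_max, available_min)  # 7 6
```

This is why maxUnavailable: 0 guarantees zero-downtime updates: capacity never falls below the desired replica count.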

Upgrade and rollback

[root@master ~]# vim dep.yml

。。。。。。。
 spec:
      containers:
      - image: myapp:v2			# upgrade to version 2
        name: myapp
[root@master ~]# kubectl  apply -f dep.yml 
deployment.apps/webcluster configured
[root@master ~]# curl 10.104.21.223
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master ~]# vim dep.yml 
。。。。。。。
 spec:
      containers:
      - image: myapp:v1			# roll back to version 1
        name: myapp
[root@master ~]# kubectl  apply -f dep.yml 
deployment.apps/webcluster configured
[root@master ~]# curl 10.104.21.223
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

(A curl issued immediately after apply may still hit the previous version: with minReadySeconds: 5 the rollout has not yet completed.)

(5) Update management and optimization

[root@master ~]# vim dep.yml
spec:
  minReadySeconds: 5
  replicas: 6				# set replicas to 6 to make observation easier
  selector:

[root@master~]# kubectl apply -f dep.yml
deployment.apps/webcluster configured

Check the update strategy:

[root@master deployment]# kubectl describe deployments.apps webcluster
Name:                   webcluster
Namespace:              default
CreationTimestamp:      Sun, 05 Apr 2026 16:52:40 +0800
Labels:                 app=webcluster
Annotations:            deployment.kubernetes.io/revision: 10
Selector:               app=webcluster
Replicas:               6 desired | 6 updated | 6 total | 6 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        5
RollingUpdateStrategy:  25% max unavailable, 25% max surge
[root@master ~]# vim dep.yml
。。。。。。。
spec:
  minReadySeconds: 5
  replicas: 6
  selector:
    matchLabels:
      app: webcluster
  strategy:
    rollingUpdate:
      maxSurge: 1					# during an update, at most 1 Pod above the desired count
      maxUnavailable: 0				# during an update, no Pod below the desired count may be unavailable
  template:
    metadata:
      labels:
        app: webcluster
    spec:
      containers:
      - image: myapp:v1
        name: myapp


[root@master ~]# kubectl apply -f dep.yml

Pausing and resuming updates

[root@master deployment]# kubectl rollout history deployment webcluster
deployment.apps/webcluster 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
4         <none>
5         <none>
6         <none>
7         <none>
9         <none>
10        <none>

[root@master deployment]# kubectl rollout pause deployment webcluster  # pause the rollout
deployment.apps/webcluster paused
[root@master deployment]# vim dep.yml
。。。
      containers:
      - image: myapp:v2
        name: myapp
[root@master deployment]# kubectl apply -f dep.yml      
deployment.apps/webcluster configured
[root@master deployment]#  kubectl rollout history deployment webcluster
deployment.apps/webcluster 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
4         <none>
5         <none>
6         <none>
7         <none>
10        <none>
11        <none>

[root@master deployment]# kubectl rollout resume deployment webcluster  # resume the rollout
deployment.apps/webcluster resumed
[root@master deployment]# kubectl rollout history deployment webcluster
deployment.apps/webcluster 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
4         <none>
5         <none>
6         <none>
7         <none>
10        <none>
11        <none>

(6) DaemonSet

[root@master ~]# kubectl create deployment daemonset --image myapp:v1 --dry-run=client -o yaml  > daemonset.yml
[root@master ~]# vim daemonset.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: daemonset
  name: daemonset
spec:
  # replicas: 1
  selector:
    matchLabels:
      app: daemonset
  template:
    metadata:
      labels:
        app: daemonset
    spec:
      containers:
      - image: reg.timinglee.org/library/myapp:v1
        name: myapp
# bring up an additional host, node3, with all the settings from cluster initialization in place and every required service running
# on the master, regenerate the token a new host needs to join the cluster
[root@master ~]# kubeadm token create --print-join-command
kubeadm join 172.25.254.100:6443 --token z5y3ao.wxffq7ra9xdxucjd --discovery-token-ca-cert-hash sha256:1c4d35623b9aab09984bb8a62383a85e38fea77d530b10b69a52151b50efce56 



#node3
[root@node3 ~]# kubeadm join 172.25.254.100:6443 --token z5y3ao.wxffq7ra9xdxucjd --discovery-token-ca-cert-hash sha256:1c4d35623b9aab09984bb8a62383a85e38fea77d530b10b69a52151b50efce56 --cri-socket=unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
	[WARNING ContainerRuntimeVersion]: You must update your container runtime to a version that supports the CRI method RuntimeConfig. Falling back to using cgroupDriver from kubelet config will be removed in 1.36. For more information, see https://git.k8s.io/enhancements/keps/sig-node/4033-group-driver-detection-over-cri
	[WARNING SystemVerification]: kernel release 5.14.0-570.12.1.el9_6.x86_64 is unsupported. Supported LTS versions from the 5.x series are 5.4, 5.10 and 5.15. Any 6.x version is also supported. For cgroups v2 support, the recommended version is 5.10 or newer
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config kubeadm --config your-config-file' to re-upload it.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.418497ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# once node3 joins the cluster, the specified pod starts on node3 immediately, because the DaemonSet is running
# the pod status can be observed from the master

(7) Job controller

[root@master deployment]#  kubectl create job job --image  reg.timinglee.org/library/perl:latest  --dry-run=client -o yaml > job.yml
[root@master deployment]# vim job.yml 
apiVersion: batch/v1
kind: Job
metadata:
  name: job
spec:
  completions: 6
  parallelism: 2
  template:
    spec:
      containers:
      - image: reg.timinglee.org/library/perl:latest
        name: job
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4

[root@master deployment]# kubectl apply -f job.yml
job.batch/job created
[root@master deployment]# kubectl logs job-7rzdn
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587006606315588174881520920962829254091715364367892590360011330530548820466521384146951941511609433057270365759591953092186117381932611793105118548074462379962749567351885752724891227938183011949129833673362440656643086021394946395224737190702179860943702770539217176293176752384674818467669405132000568127145263560827785771342757789609173637178721468440901224953430146549585371050792279689258923542019956112129021960864034418159813629774771309960518707211349999998372978049951059731732816096318595024459455346908302642522308253344685035261931188171010003137838752886587533208381420617177669147303598253490428755468731159562863882353787593751957781857780532171226806613001927876611195909216420198938095257201065485863278865936153381827968230301952035301852968995773622599413891249721775283479131515574857242454150695950829533116861727855889075098381754637464939319255060400927701671139009848824012858361603563707660104710181942955596198946767837449448255379774726847104047534646208046684259069491293313677028989152104752162056966024058038150193511253382430035587640247496473263914199272604269922796782354781636009341721641219924586315030286182974555706749838505494588586926995690927210797509302955321165344987202755960236480665499119881834797753566369807426542527862551818417574672890977772793800081647060016145249192173217214772350141441973568548161361157352552133475741849468438523323907394143334547762416862518983569485562099219222184272550254256887671790494601653466804988627232791786085784383827967976681454100953883786360950680064225125205117392984896084128488626945604241965285022210661186306744278622039194945047123713786960956364371917287467764657573962413890865832645995813390478027590
1
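With completions: 6 and parallelism: 2, the Job runs its Pods in successive waves of 2; the number of waves is a simple ceiling division (illustrative sketch, not Kubernetes code):

```python
import math

def job_waves(completions: int, parallelism: int) -> int:
    """How many rounds of parallel Pods a Job needs to finish."""
    return math.ceil(completions / parallelism)

print(job_waves(6, 2))  # 3
```

So the Job above runs 2 pi-computing Pods at a time, three times, until 6 Pods have completed successfully.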

(8) CronJob controller

[root@master controler]# kubectl create cronjob cronjob --image busybox  --schedule "* * * * *" --dry-run=client -o yaml > cronjob.yml
[root@master controler]# vim cronjob.yml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronjob
spec:
  jobTemplate:
    metadata:
      name: cronjob
    spec:
      template:
        spec:
          containers:
          - image: busybox
            name: cronjob
            command:
              - /bin/sh
              - -c
              - echo "hello timinglee"
          restartPolicy: OnFailure
  schedule: '* * * * *'

[root@master controler]# kubectl apply -f cronjob.yml		# runs at the start of every minute

[root@master controler]# kubectl get cronjobs.batch
NAME      SCHEDULE    TIMEZONE   SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob   * * * * *   <none>     False     0        17s             70s

[root@master controler]# kubectl logs cronjob-29598241-pxggh
hello timinglee
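The schedule '* * * * *' is a standard 5-field cron expression: minute, hour, day-of-month, month, day-of-week. A minimal matcher covering only the forms used here (`*` and literal numbers; real cron also supports ranges, lists, and steps):

```python
def cron_matches(expr: str, minute, hour, dom, month, dow) -> bool:
    """Check a 5-field cron expression against a point in time.
    Supports only '*' and literal numbers (a sketch, not full cron syntax)."""
    fields = expr.split()
    values = [minute, hour, dom, month, dow]
    return all(f == "*" or int(f) == v for f, v in zip(fields, values))

print(cron_matches("* * * * *", 30, 12, 5, 4, 0))   # True: fires every minute
print(cron_matches("0 3 * * *", 30, 12, 5, 4, 0))   # False: only at 03:00
```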

10. Microservices

(1) Create a ClusterIP service

[root@master deployment]# kubectl create  deployment webcluster --image myapp:v1 --replicas 2 --dry-run=client -o yaml > clusterIP.yml
[root@master deployment]# vim clusterIP.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webcluster
  name: webcluster
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webcluster
  # strategy: {}
  template:
    metadata:
      labels:
        app: webcluster
    spec:
      containers:
      - image: reg.timinglee.org/library/myapp:v1
        name: myapp
[root@master server]# kubectl apply -f clusterIP.yml
deployment.apps/webcluster created


[root@master server]# kubectl expose deployment webcluster  --port 80 --target-port 80 --dry-run=client -o yaml >> clusterIP.yml
[root@master server]# vim clusterIP.yml
[root@master server]# kubectl apply -f clusterIP.yml
service/webcluster configured

#watch the effect
[root@master ~]# watch -n 1 " kubectl get pods -o wide ; kubectl describe  svc webcluster  "
[root@master server]# curl 10.104.21.223/hostname.html
webcluster-77c87d9946-86h8n
[root@master server]# curl 10.104.21.223/hostname.html
webcluster-77c87d9946-xpfwk

#Test

[root@master deployment]# kubectl get pods  --show-labels
NAME                          READY   STATUS    RESTARTS   AGE   LABELS
webcluster-77c87d9946-86h8n   1/1     Running   0          52m   app=webcluster,pod-template-hash=77c87d9946
webcluster-77c87d9946-xpfwk   1/1     Running   0          52m   app=webcluster,pod-template-hash=77c87d9946

[root@master server]# kubectl delete  pods webcluster-77c87d9946-86h8n 
pod "webcluster-77c87d9946-86h8n" deleted from default namespace

[root@master deployment]# kubectl get pods  --show-labels
NAME                          READY   STATUS    RESTARTS   AGE   LABELS
webcluster-77c87d9946-w79q8   1/1     Running   0          5s    app=webcluster,pod-template-hash=77c87d9946
webcluster-77c87d9946-xpfwk   1/1     Running   0          52m   app=webcluster,pod-template-hash=77c87d9946

[root@master server]# kubectl  label  pods webcluster-77c87d9946-w79q8 app-
pod/webcluster-77c87d9946-w79q8 unlabeled
[root@master server]# kubectl  label  pods webcluster-77c87d9946-w79q8 app=webcluster
pod/webcluster-77c87d9946-w79q8 labeled

(2) Optimizing the service proxy mode

[root@master server]# dnf install ipvsadm -y
[root@master server]# kubectl  -n kube-system  edit cm kube-proxy
configmap/kube-proxy edited
[root@master server]# kubectl -n kube-system get pods
NAME                             READY   STATUS    RESTARTS      AGE
coredns-697886855d-555zc         1/1     Running   2 (46h ago)   7d8h
coredns-697886855d-zgnd4         1/1     Running   2 (46h ago)   7d8h
etcd-master                      1/1     Running   2 (46h ago)   7d8h
kube-apiserver-master            1/1     Running   2 (46h ago)   7d8h
kube-controller-manager-master   1/1     Running   2 (46h ago)   7d8h
kube-proxy-8fpr9                 1/1     Running   2 (46h ago)   7d7h
kube-proxy-fl557                 1/1     Running   2 (46h ago)   7d8h
kube-proxy-vl5ds                 1/1     Running   2 (46h ago)   7d7h
kube-proxy-z4fkb                 1/1     Running   0             4h28m
kube-scheduler-master            1/1     Running   2 (46h ago)   7d8h
[root@master server]# kubectl -n kube-system  delete pods kube-proxy-8fpr9 
pod "kube-proxy-8fpr9" deleted from kube-system namespace
[root@master server]# kubectl -n kube-system  delete pods kube-proxy-fl557 
pod "kube-proxy-fl557" deleted from kube-system namespace
[root@master server]# kubectl -n kube-system  delete pods kube-proxy-vl5ds 
pod "kube-proxy-vl5ds" deleted from kube-system namespace

#Test

#monitor
[root@master ~]# watch -n 1 ipvsadm -Ln
[root@master server]# vim clusterIP.yml 
replicas: 2
[root@master server]#  kubectl apply -f clusterIP.yml

(3) Service name resolution

[root@master ~]# kubectl  -n kube-system  get svc
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   7d22h
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   7d22h
test         ClusterIP   10.102.57.191   <none>        80/TCP    7d15h
webcluster   ClusterIP   10.104.21.223   <none>        80/TCP    22h
[root@master ~]# dig webcluster.default.svc.cluster.local @10.96.0.10

; <<>> DiG 9.16.23-RH <<>> webcluster.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6731
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: e7905dbdbb637f4a (echoed)
;; QUESTION SECTION:
;webcluster.default.svc.cluster.local. IN A

;; ANSWER SECTION:
webcluster.default.svc.cluster.local. 30 IN A   10.99.217.241

;; Query time: 2 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Sun Apr 12 09:58:53 CST 2026
;; MSG SIZE  rcvd: 129

(4) Headless ClusterIP services

[root@master server]# kubectl delete -f clusterIP.yml
[root@master server]# vim clusterIP.yml
。。。。。
apiVersion: v1
kind: Service
metadata:
  labels:
    app: webcluster
  name: webcluster
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: webcluster
  type: ClusterIP
  clusterIP: None		# headless mode: no virtual IP is allocated for this Service


[root@master server]# kubectl apply -f clusterIP.yml
deployment.apps/webcluster created
service/webcluster created
[root@master server]# kubectl  get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   7d23h
test         ClusterIP   10.102.57.191   <none>        80/TCP    7d16h
webcluster   ClusterIP   None            <none>        80/TCP    13s
[root@master server]# kubectl get pods  -o wide
NAME                          READY   STATUS    RESTARTS      AGE   IP           NODE    NOMINATED NODE   READINESS GATES
webcluster-77c87d9946-w79q8   1/1     Running   1 (59m ago)   14h   10.244.1.2   node1   <none>           <none>
webcluster-77c87d9946-xpfwk   1/1     Running   1 (58m ago)   15h   10.244.3.2   node3   <none>           <none>
[root@master server]#  dig webcluster.default.svc.cluster.local @10.96.0.10

; <<>> DiG 9.16.23-RH <<>> webcluster.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17924
;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 8f7d6f3e07b330ef (echoed)
;; QUESTION SECTION:
;webcluster.default.svc.cluster.local. IN A

;; ANSWER SECTION:
webcluster.default.svc.cluster.local. 30 IN A	10.244.1.2
webcluster.default.svc.cluster.local. 30 IN A	10.244.3.2

;; Query time: 1 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Sun Apr 12 10:19:40 CST 2026
;; MSG SIZE  rcvd: 181

(5) NodePort

Creating a NodePort service:

[root@master server]# kubectl delete -f clusterIP.yml
deployment.apps "webcluster" deleted from default namespace
service "webcluster" deleted from default namespace
[root@master server]# cp  clusterIP.yml  nodeport.yml
[root@master services]# vim nodeport.yml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: webcluster
  name: webcluster
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: webcluster
  type: NodePort				# change the type

[root@master server]# kubectl apply -f nodeport.yml
deployment.apps/webcluster created
service/webcluster created
[root@master server]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        7d23h
test         ClusterIP   10.102.57.191    <none>        80/TCP         7d16h
webcluster   NodePort    10.108.149.155   <none>        80:32129/TCP   7s
[root@master server]#  curl  172.25.254.100:32129/hostname.html
webcluster-77c87d9946-xpfwk
[root@master server]#  curl  172.25.254.100:32129/hostname.html
webcluster-77c87d9946-w79q8
[root@master server]#  curl  172.25.254.100:32129/hostname.html
webcluster-77c87d9946-xpfwk
[root@master server]#  curl  172.25.254.100:32129/hostname.html
webcluster-77c87d9946-w79q8

NodePort port management (default range 30000-32767):

[root@master server]# kubectl delete -f nodeport.yml
service "webcluster" deleted from default namespace
[root@master server]# vim nodeport.yml 
apiVersion: v1
kind: Service
metadata:
  labels:
    app: webcluster
  name: webcluster
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 31111					# set the port
  selector:
    app: webcluster
  type: NodePort
[root@master server]# kubectl apply -f nodeport.yml
service/webcluster created
[root@master server]# kubectl  get svc webcluster 
NAME         TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
webcluster   NodePort   10.105.102.161   <none>        80:31111/TCP   12s
[root@master server]# kubectl delete -f nodeport.yml
service "webcluster" deleted from default namespace
[root@master server]# vim nodeport.yml 
apiVersion: v1
kind: Service
metadata:
  labels:
    app: webcluster
  name: webcluster
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 33334					# set the port
  selector:
    app: webcluster
  type: NodePort
[root@master server]# kubectl apply -f nodeport.yml
The Service "webcluster" is invalid: spec.ports[0].nodePort: Invalid value: 33334: provided port is not in the valid range. The range of valid ports is 30000-32767


# extend the default NodePort range
[root@master server]#  vim /etc/kubernetes/manifests/kube-apiserver.yaml
    - --service-node-port-range=30000-40000
[root@master server]# kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
master   Ready    control-plane   7d23h   v1.35.3
node1    Ready    <none>          7d23h   v1.35.3
node2    Ready    <none>          7d23h   v1.35.3
node3    Ready    <none>          20h     v1.35.3
[root@master server]# kubectl apply -f nodeport.yml
service/webcluster created
[root@master server]# kubectl  get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        7d23h
test         ClusterIP   10.102.57.191   <none>        80/TCP         7d16h
webcluster   NodePort    10.105.11.95    <none>        80:33334/TCP   11s
[root@master server]# kubectl delete  svc test
service "test" deleted from default namespace
[root@master server]# kubectl  get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        8d
webcluster   NodePort    10.105.11.95   <none>        80:33334/TCP   75s

(6) LoadBalancer

创建loadbalancer

[root@master server]# kubectl delete  -f nodeport.yml 
service "webcluster" deleted from default namespace
[root@master server]# cp nodeport.yml  loadbalance.yml
[root@master server]# vim loadbalance.yml 
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: webcluster
  name: webcluster
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: webcluster
  type: LoadBalancer				#指定services模式
[root@master server]# kubectl  get scv
error: the server doesn't have a resource type "scv"
[root@master server]# kubectl  get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   8d
[root@master server]# kubectl  apply -f loadbalance.yml 
service/webcluster created
[root@master server]# kubectl  get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP        8d
webcluster   LoadBalancer   10.106.252.249   <pending>     80:32800/TCP   1s
[root@master server]# curl 172.25.254.100:32800
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master server]# curl 10.106.252.249
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

部署metallb

[root@master server]# kubectl edit configmap -n kube-system kube-proxy
 40     ipvs:
 41       excludeCIDRs: null
 42       minSyncPeriod: 0s
 43       scheduler: ""
 44       strictARP: true				#开启严格ARP(MetalLB L2模式要求)
[root@master server]# kubectl -n kube-system  get pods
NAME                             READY   STATUS    RESTARTS        AGE
coredns-697886855d-555zc         1/1     Running   3 (5h14m ago)   8d
coredns-697886855d-zgnd4         1/1     Running   3 (5h14m ago)   8d
etcd-master                      1/1     Running   3 (5h14m ago)   8d
kube-apiserver-master            1/1     Running   0               3h24m
kube-controller-manager-master   1/1     Running   4 (3h24m ago)   8d
kube-proxy-5f955                 1/1     Running   0               23m
kube-proxy-8rksv                 1/1     Running   0               23m
kube-proxy-p994t                 1/1     Running   0               23m
kube-proxy-tqpf7                 1/1     Running   0               23m
kube-scheduler-master            1/1     Running   4 (3h24m ago)   8d
[root@master server]# kubectl  -n kube-system  delete  pods kube-proxy-5f955 
pod "kube-proxy-5f955" deleted from kube-system namespace
[root@master server]# kubectl  -n kube-system  delete  pods kube-proxy-8rksv 
pod "kube-proxy-8rksv" deleted from kube-system namespace
[root@master server]# kubectl  -n kube-system  delete  pods kube-proxy-p994t 
pod "kube-proxy-p994t" deleted from kube-system namespace
[root@master server]# kubectl  -n kube-system  delete  pods kube-proxy-tqpf7 
pod "kube-proxy-tqpf7" deleted from kube-system namespace


#下载metallb的yml文件
[root@master server]# wget https://raw.githubusercontent.com/metallb/metallb/v0.15.3/config/manifests/metallb-native.yaml
--2026-04-12 14:38:05--  https://raw.githubusercontent.com/metallb/metallb/v0.15.3/config/manifests/metallb-native.yaml
正在解析主机 raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.110.133, 185.199.111.133, ...
正在连接 raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... 已连接。
已发出 HTTP 请求,正在等待回应... 200 OK
长度:80890 (79K) [text/plain]
正在保存至: "metallb-native.yaml.1"

metallb-native.yaml.1                  100%[============================================================================>]  78.99K   342KB/s  用时 0.2s    

2026-04-12 14:38:06 (342 KB/s) - 已保存 "metallb-native.yaml" [80890/80890]
[root@master server]# ls
clusterIP.yml  configmap.yml  loadbalance.yml  metallb-native.yaml  nodeport.yml
[root@master server]# vim metallb-native.yaml 
[root@master server]# grep -n image: metallb-native.yaml        
2001:        image: metallb/controller:v0.15.3
2098:        image: metallb/speaker:v0.15.3

#修改镜像仓库地址后部署metallb
[root@master server]# kubectl apply -f metallb-native.yaml
[root@master server]# kubectl -n metallb-system  get all
NAME                              READY   STATUS    RESTARTS   AGE
pod/controller-5fbf6546f9-hvtkx   1/1     Running   0          18s
pod/speaker-f24fn                 1/1     Running   0          18s
pod/speaker-kq6mj                 1/1     Running   0          18s
pod/speaker-lwdnz                 1/1     Running   0          18s
pod/speaker-nmpbd                 1/1     Running   0          18s

NAME                              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/metallb-webhook-service   ClusterIP   10.98.45.202   <none>        443/TCP   18s

NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/speaker   4         4         4       4            4           kubernetes.io/os=linux   18s

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   1/1     1            1           18s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/controller-5fbf6546f9   1         1         1       18s
[root@master server]#  vim configmap.yml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.25.254.50-172.25.254.80

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
[root@master server]# kubectl apply -f configmap.yml
ipaddresspool.metallb.io/first-pool created
l2advertisement.metallb.io/example created
[root@master server]# kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>          443/TCP        8d
webcluster   LoadBalancer   10.111.38.157   172.25.254.50   80:35608/TCP   8m3s
[root@master server]# curl  172.25.254.50
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master server]# curl  172.25.254.50
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
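MetalLB 会从 IPAddressPool(本文为 172.25.254.50-172.25.254.80)中挑选 EXTERNAL-IP。下面是一个本地 bash 示意(函数名自拟),把 IPv4 转成整数后做区间判断,与地址池的取值范围判断同理:

```shell
#!/bin/bash
# 将点分 IPv4 转为整数,便于做区间比较(本地演示,不依赖集群)
ip2int() {
    local a b c d
    IFS=. read -r a b c d <<< "$1"
    echo $(( (a<<24) + (b<<16) + (c<<8) + d ))
}

# 判断 $1 是否落在地址池 $2-$3 之内
in_pool() {
    local ip lo hi
    ip=$(ip2int "$1"); lo=$(ip2int "$2"); hi=$(ip2int "$3")
    if [ "$ip" -ge "$lo" ] && [ "$ip" -le "$hi" ]; then echo yes; else echo no; fi
}

in_pool 172.25.254.50  172.25.254.50 172.25.254.80   # yes
in_pool 172.25.254.100 172.25.254.50 172.25.254.80   # no
```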

(7) ExternalName

[root@master server]# cp configmap.yml  externalname.yml
[root@master server]# vim externalname.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: webcluster
  name: webcluster
spec:
  selector:
    app: webcluster
  type: ExternalName
  externalName: www.baidu.com			#集群内访问该Service时,DNS返回指向此域名的CNAME
[root@master server]# kubectl  apply -f externalname.yml 
service/webcluster configured
[root@master server]# dig webcluster.default.svc.cluster.local @10.96.0.10

; <<>> DiG 9.16.23-RH <<>> webcluster.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9718
;; flags: qr aa rd; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: d38c256b49e71bf9 (echoed)
;; QUESTION SECTION:
;webcluster.default.svc.cluster.local. IN A

;; ANSWER SECTION:
webcluster.default.svc.cluster.local. 13 IN CNAME www.baidu.com.
www.baidu.com.		13	IN	CNAME	www.a.shifen.com.
www.a.shifen.com.	13	IN	CNAME	www.wshifen.com.
www.wshifen.com.	13	IN	A	103.235.46.102
www.wshifen.com.	13	IN	A	103.235.46.115

;; Query time: 269 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Sun Apr 12 15:07:44 CST 2026
;; MSG SIZE  rcvd: 290
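dig 查询用的域名遵循 Kubernetes 固定的命名约定 <service>.<namespace>.svc.cluster.local。下面的小函数(自拟,仅演示拼接规则)生成这个 FQDN:

```shell
#!/bin/bash
# 拼接 Service 在集群 DNS 中的完整域名(格式为 K8s 命名约定,函数为演示用)
svc_fqdn() {
    local svc=$1 ns=${2:-default}    # 省略命名空间时默认 default
    echo "${svc}.${ns}.svc.cluster.local"
}

svc_fqdn webcluster                 # webcluster.default.svc.cluster.local
svc_fqdn webcluster kube-system     # webcluster.kube-system.svc.cluster.local
```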

(8) Ingress-Nginx

安装

#在100机子
[root@master server]#  wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.15.1/deploy/static/provider/baremetal/deploy.yaml
#在200机子(上传镜像)
[root@harbor ~]# docker load -i ingress-nginx-1.15.1.tar
[root@harbor ~]# docker images                                                                                                                                     
reg.timinglee.org/ingress-nginx/controller:v1.15.1            895ddb49053a        327MB             0B        
reg.timinglee.org/ingress-nginx/kube-webhook-certgen:v1.6.9   1442d220fcdd         66MB             0B        
[root@harbor ~]# docker push reg.timinglee.org/ingress-nginx/kube-webhook-certgen:v1.6.9
The push refers to repository [reg.timinglee.org/ingress-nginx/kube-webhook-certgen]
cc793a5d21da: Pushed 
5fd2536c39c0: Pushed 
187cfc6d1e3e: Pushed 
ad51d0769d16: Pushed 
4cde6b0bb6f5: Pushed 
bd3cdfae1d3f: Pushed 
6f1cdceb6a31: Mounted from metallb/controller 
af5aa97ebe6c: Mounted from metallb/controller 
4d049f83d9cf: Mounted from metallb/controller 
275a30dd8ce9: Pushed 
f15316efa997: Pushed 
ac2a91ec876d: Pushed 
621c35e751a5: Pushed 
82c60ccaf916: Pushed 
v1.6.9: digest: sha256:e347a9a55e736c0edb511d6dffa45daa1216aa145939ab5786873f6597d5e243 size: 3233
[root@harbor ~]# docker push reg.timinglee.org/ingress-nginx/controller:v1.15.1 
The push refers to repository [reg.timinglee.org/ingress-nginx/controller]
1e96675930b6: Pushed 
c9e38804ebf0: Pushed 
edb0fa71ed38: Pushed 
c32c16c7cabb: Pushed 
56957f72a91b: Pushed 
69a045f95c25: Pushed 
3fa31c0d82e5: Pushed 
1c46e7b0d4e5: Pushed 
5f70bf18a086: Mounted from library/busyboxplus 
2f223e3f4bee: Pushed 
df667cfb3f88: Pushed 
04fbd85b0cc6: Pushed 
340fa9e1c28b: Pushed 
d66981fbad27: Pushed 
989e799e6349: Pushed 
v1.15.1: digest: sha256:c052017bcdadbf289f62f314327e5d9359194bcbeef2e6718848e5ca3566f9c8 size: 3468
[root@master server]# kubectl apply -f deploy-1.15.1.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
service/ingress-nginx-controller edited
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
[root@master server]# kubectl -n ingress-nginx edit svc ingress-nginx-controller
49   type: LoadBalancer
[root@master server]# kubectl -n ingress-nginx get all
NAME                                            READY   STATUS    RESTARTS   AGE
pod/ingress-nginx-controller-6bcbfdbd4b-kbgd5   1/1     Running   0          79s

NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.107.203.57   <pending>     80:30534/TCP,443:35595/TCP   79s
service/ingress-nginx-controller-admission   ClusterIP      10.97.154.122   <none>        443/TCP                      79s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           79s

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-6bcbfdbd4b   1         1         1       79s
[root@master server]# kubectl  get ingressclasses.networking.k8s.io
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       90s

首个ingress

Ingress-Nginx 的作用:它就是 Kubernetes 集群的"统一大门 + 负载均衡 + 网关"

集群唯一的入口:所有外部访问先经过它,它再根据域名、路径转发到不同的服务

核心作用:

  1. 域名访问

不用记一堆 IP,只用域名:

myapp1.timinglee.org → myapp1

myapp2.timinglee.org → myapp2

这就是 Ingress-Nginx 做的。

  2. HTTPS 加密(TLS)

把 http 变成 https;配置证书,实现数据加密,保证传输安全

  3. 账号密码认证(Basic Auth)

不知道账号密码,直接 401 拒绝;保护内部应用不被随便访问

  4. 负载均衡

流量进来后,自动均匀分给多个 Pod。

  5. 统一入口

集群里成百上千个服务,不用暴露一堆端口;只开放 80/443 一个入口,全部由 Nginx 转发。

用一句话总结:

Ingress-Nginx = Kubernetes 的统一网关

负责域名转发、HTTPS、认证、限流、负载均衡,让外部能安全访问集群内部服务。
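上述"按域名转发"的决策过程,可以用一段与集群无关的 bash 示意(域名与后端取自本文示例,函数为自拟,并非 Nginx 实际实现):

```shell
#!/bin/bash
# 模拟 Ingress 按 Host 头选择后端 Service 的过程(纯本地示意)
route_by_host() {
    case "$1" in
        myapp1.timinglee.org) echo "myapp1:80" ;;        # 规则1
        myapp2.timinglee.org) echo "myapp2:80" ;;        # 规则2
        *)                    echo "default-backend" ;;  # 未匹配走默认后端
    esac
}

route_by_host myapp1.timinglee.org   # myapp1:80
route_by_host unknown.example.com    # default-backend
```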

准备2个有明显区分的svc和控制器
[root@master server]#  vim myapp1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp1
  name: myapp1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata:
      labels:
        app: myapp1
    spec:
      containers:
      - image: myapp:v1
        name: myapp
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp1
  name: myapp1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp1
  type: ClusterIP

[root@master server]# vim myapp2.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp2
  name: myapp2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp2
  template:
    metadata:
      labels:
        app: myapp2
    spec:
      containers:
      - image: myapp:v2
        name: myapp
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp2
  name: myapp2
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp2
  type: ClusterIP
  
[root@master server]# kubectl apply -f myapp1.yml
deployment.apps/myapp1 created
service/myapp1 created
[root@master server]#  kubectl apply -f myapp2.yml
deployment.apps/myapp2 created
service/myapp2 created
[root@master server]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   8d
myapp1       ClusterIP   10.111.230.40   <none>        80/TCP    14s
myapp2       ClusterIP   10.109.2.57     <none>        80/TCP    6s
创建ingress代理
[root@master server]#  kubectl create ingress webcluster --rule='*/=myapp1:80' --dry-run=client -o yaml > 1-ingress.yml
[root@master server]# vim 1-ingress.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webcluster
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - backend:
          service:
            name: myapp1
            port:
              number: 80
        path: /
        pathType: Prefix
[root@master server]# kubectl apply -f 1-ingress.yml
[root@master server]# kubectl apply -f configmap.yml
ipaddresspool.metallb.io/first-pool created
l2advertisement.metallb.io/example created
[root@master server]#  kubectl -n ingress-nginx get svc
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.107.203.57   172.25.254.50   80:30534/TCP,443:35595/TCP   17m
ingress-nginx-controller-admission   ClusterIP      10.97.154.122   <none>          443/TCP                      17m
[root@master server]# curl  172.25.254.50
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

基于域名的ingress

[root@master server]# kubectl create ingress webcluster --annotation=nginx.ingress.kubernetes.io/rewrite-target=/ --class=nginx --rule=myapp1.timinglee.org/=myapp1:80 --rule=myapp2.timinglee.org/=myapp2:80 --dry-run=client -o yaml > 2-ingress.yml
[root@master server]# vim 2-ingress.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress  #域名转发、7 层负载均衡
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: webcluster
spec:
  ingressClassName: nginx   #使用 nginx ingress 控制器
  rules:
  - host: myapp1.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp1
            port:
              number: 80
        path: /
        pathType: Prefix
  - host: myapp2.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp2
            port:
              number: 80
        path: /
        pathType: Prefix
[root@master server]# kubectl delete -f 1-ingress.yml   #删除旧的 ingress
ingress.networking.k8s.io "webcluster" deleted from default namespace
[root@master server]# kubectl  apply -f 2-ingress.yml 
ingress.networking.k8s.io/webcluster created
[root@master server]# kubectl describe  ingress
Name:             webcluster
Labels:           <none>
Namespace:        default
Address:          172.25.254.30
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host                  Path  Backends
  ----                  ----  --------
  myapp1.timinglee.org  
                        /   myapp1:80 (10.244.2.4:80)
  myapp2.timinglee.org  
                        /   myapp2:80 (10.244.3.2:80)
Annotations:            nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    10s (x2 over 15s)  nginx-ingress-controller  Scheduled for sync
[root@master server]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.100     master
172.25.254.10      node1
172.25.254.20      node2
172.25.254.30     node3
172.25.254.200     reg.timinglee.org
172.25.254.50  myapp1.timinglee.org myapp2.timinglee.org
~                                                                          
[root@master server]#  curl  myapp2.timinglee.org
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@master server]# curl myapp1.timinglee.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

建立tls加密

[root@master server]# kubectl delete -f 2-ingress.yml 
ingress.networking.k8s.io "webcluster" deleted from default namespace
[root@master server]# openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -subj "/CN=nginxsvc/O=nginxsvc" -out tls.crt  #生成自签名SSL证书(用于 HTTPS 加密)
..+.+..............+....+...+..+..........+..+.+...............+...+..+.........+......+...+.........+++++++++++++++++++++++++++++++++++++++*............+......+.........+++++++++++++++++++++++++++++++++++++++*...++++++
.+.......+........+...+++++++++++++++++++++++++++++++++++++++*......+.......+..+....+.....+++++++++++++++++++++++++++++++++++++++*.......+...+.......+...+.............................+.......+...+......+.....+......+......+...+.........+......++++++
-----
[root@master server]# kubectl create secret tls  web-tls-secret --key tls.key --cert tls.crt  #把刚才生成的证书上传到 Kubernetes,名字叫 web-tls-secret,专门给 Ingress 提供 HTTPS 加密。
secret/web-tls-secret created  
[root@master server]# kubectl get secrets  #查看secret是否创建成功
NAME             TYPE                DATA   AGE
web-tls-secret   kubernetes.io/tls   2      18s
[root@master server]# kubectl create ingress webcluster --annotation=nginx.ingress.kubernetes.io/rewrite-target=/ --class=nginx --rule=myapp1.timinglee.org/=myapp1:80 --dry-run=client -o yaml > 3-ingress.yml
[root@master server]# vim 3-ingress.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: webcluster
spec:
  tls:
  - hosts:
    - myapp1.timinglee.org
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp1.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp1
            port:
              number: 80
        path: /
        pathType: Prefix
[root@master server]# kubectl apply -f 3-ingress.yml 
ingress.networking.k8s.io/webcluster created
[root@master server]# kubectl describe  ingress
Name:             webcluster
Labels:           <none>
Namespace:        default
Address:          
Ingress Class:    nginx
Default backend:  <default>
TLS:
  web-tls-secret terminates myapp1.timinglee.org
Rules:
  Host                  Path  Backends
  ----                  ----  --------
  myapp1.timinglee.org  
                        /   myapp1:80 (10.244.2.4:80)
Annotations:            nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    13s   nginx-ingress-controller  Scheduled for sync
[root@master server]#  curl -k  https://myapp1.timinglee.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

建立auth认证

[root@master server]# dnf  install httpd-tools -y
[root@master server]# htpasswd -cm auth lee
New password:   #写新密码
Re-type new password: 
Adding password for user lee
[root@master server]# cat auth
lee:$apr1$sdoDlC7.$rx7PLjhOoh4CenntcLKnh0
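htpasswd -m 生成的是 $apr1$(Apache MD5)格式的口令散列。若机器上没有 httpd-tools,也可以用 openssl 生成同格式散列(演示用法;口令为示例,盐值随机,同一口令每次输出不同,但格式一致):

```shell
#!/bin/bash
# 用 openssl 生成与 htpasswd -m 相同格式($apr1$)的口令散列
# 注意:盐值随机,同一口令每次输出不同(此处口令 redhat 为示例)
hash=$(openssl passwd -apr1 "redhat")
echo "lee:${hash}"
```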
[root@master server]# kubectl create secret generic  auth-web --from-file auth
secret/auth-web created
[root@master server]# kubectl  get secrets auth-web -o yaml #查看 Secret
apiVersion: v1
data:
  auth: bGVlOiRhcHIxJHZUU09vd24vJENFSFlQaHAzNU1COGdzUXVmNS9Tcy8K  #Base64编码后的账号密码(编码可逆,并非加密),Secret 以此形式存储数据
kind: Secret
metadata:
  creationTimestamp: "2026-04-16T10:30:46Z"
  name: auth-web
  namespace: default
  resourceVersion: "216130"
  uid: 4ae64962-72c9-409c-b942-2db8978b0f5c
type: Opaque
[root@master server]# cat auth
lee:$apr1$vTSOown/$CEHYPhp35MB8gsQuf5/Ss/
[root@master server]# echo "bGVlOiRhcHIxJHZUU09vd24vJENFSFlQaHAzNU1COGdzUXVmNS9Tcy8K" | base64 -d
lee:$apr1$vTSOown/$CEHYPhp35MB8gsQuf5/Ss/
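如上所示,Secret 里保存的只是 Base64 编码,任何人拿到都能解码还原,编码不等于加密。下面演示编码与解码互逆(示例口令为自拟):

```shell
#!/bin/bash
# Base64 是可逆编码而非加密:encode 后 decode 即可还原原文
plain='lee:secret-password'
encoded=$(printf '%s' "$plain" | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)

echo "$encoded"
echo "$decoded"      # lee:secret-password,与原文完全一致
```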
[root@master server]#  vim 4-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic            # 开启账号密码登录
    nginx.ingress.kubernetes.io/auth-secret: auth-web       # 使用刚才的密码密钥
    nginx.ingress.kubernetes.io/auth-realm: "Please input..." # 登录提示语
    nginx.ingress.kubernetes.io/rewrite-target: /           # 路径重写
  name: webcluster
spec:
  tls:
  - hosts:
    - myapp1.timinglee.org
    secretName: web-tls-secret    # HTTPS 证书
  ingressClassName: nginx
  rules:
  - host: myapp1.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp1
            port:
              number: 80
        path: /
        pathType: Prefix
[root@master server]# kubectl apply -f 4-ingress.yml
ingress.networking.k8s.io/webcluster created
[root@master server]#  kubectl describe  ingress
Name:             webcluster
Labels:           <none>
Namespace:        default
Address:          
Ingress Class:    nginx
Default backend:  <default>
TLS:
  web-tls-secret terminates myapp1.timinglee.org
Rules:
  Host                  Path  Backends
  ----                  ----  --------
  myapp1.timinglee.org  
                        /   myapp1:80 (10.244.2.4:80)
Annotations:            nginx.ingress.kubernetes.io/auth-realm: Please input username and password
                        nginx.ingress.kubernetes.io/auth-secret: auth-web
                        nginx.ingress.kubernetes.io/auth-type: basic
                        nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    6s    nginx-ingress-controller  Scheduled for sync
[root@master server]# curl  -k https://myapp1.timinglee.org
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx</center>
</body>
</html>
[root@master server]# curl  -k https://myapp1.timinglee.org  -u lee:1
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

auth认证的作用:Kubernetes 里的 myapp1 应用做了两层安全保护

  • HTTPS 加密(别人无法窃听数据)
  • 账号密码登录(不知道密码无法访问)

最终效果:

  • 不知道密码 → 401 拒绝访问
  • 输入 lee / 1 → 成功打开应用
  • 全程安全加密访问

网页重写

示例1
[root@master server]# cp 4-ingress.yml  5-ingress.yml 
[root@master server]# vim 5-ingress.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:		
    nginx.ingress.kubernetes.io/app-root: /hostname.html			#设定默认访问页(访问根路径时302跳转到此页)
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: webcluster
spec:
  tls:
  - hosts:
    - myapp1.timinglee.org
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp1.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp1
            port:
              number: 80
        path: /
        pathType: Prefix

[root@master server]# kubectl  apply -f 5-ingress.yml 
ingress.networking.k8s.io/webcluster configured
[root@master server]# kubectl describe  ingress webcluster 
Name:             webcluster
Labels:           <none>
Namespace:        default
Address:          172.25.254.30
Ingress Class:    nginx
Default backend:  <default>
TLS:
  web-tls-secret terminates myapp1.timinglee.org
Rules:
  Host                  Path  Backends
  ----                  ----  --------
  myapp1.timinglee.org  
                        /   myapp1:80 (10.244.2.4:80)
Annotations:            nginx.ingress.kubernetes.io/app-root: /hostname.html
                        nginx.ingress.kubernetes.io/auth-realm: Please input username and password
                        nginx.ingress.kubernetes.io/auth-secret: auth-web
                        nginx.ingress.kubernetes.io/auth-type: basic
                        nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    22s (x3 over 13m)  nginx-ingress-controller  Scheduled for sync
[root@master server]# curl -lk https://myapp1.timinglee.org -u lee:1
<html>
<head><title>302 Found</title></head>
<body>
<center><h1>302 Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
[root@master server]# curl -Lk https://myapp1.timinglee.org -u lee:1
myapp1-8456b584d6-gxp6p
示例2
[root@master ~]# cd server/
[root@master server]# vim 5-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    #nginx.ingress.kubernetes.io/app-root: /hostname.html    #设定默认访问页(此处注释掉)
    nginx.ingress.kubernetes.io/use-regex: "true"   #开启路径正则表达式匹配
    nginx.ingress.kubernetes.io/rewrite-target: /   #把匹配到的路径统一重写为 /;同一注解只能出现一次,如需保留子路径可改为 /$2 配合下面的正则分组
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
  name: webcluster
spec:
  tls:
  - hosts:
    - myapp1.timinglee.org
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp1.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp1
            port:
              number: 80
        path: /lee(/|$)(.*)
        pathType: ImplementationSpecific  #让 Nginx 接管解析 
[root@master server]# kubectl delete  -f 5-ingress.yml 
ingress.networking.k8s.io "webcluster" deleted from default namespace
[root@master server]# kubectl apply  -f 5-ingress.yml 
ingress.networking.k8s.io/webcluster created
[root@master server]# kubectl  describe  ingress webcluster
Name:             webcluster
Labels:           <none>
Namespace:        default
Address:          
Ingress Class:    nginx
Default backend:  <default>
TLS:
  web-tls-secret terminates myapp1.timinglee.org
Rules:
  Host                  Path  Backends
  ----                  ----  --------
  myapp1.timinglee.org  
                        /lee(/|$)(.*)   myapp1:80 (10.244.2.2:80)
Annotations:            nginx.ingress.kubernetes.io/auth-realm: Please input username and password
                        nginx.ingress.kubernetes.io/auth-secret: auth-web
                        nginx.ingress.kubernetes.io/auth-type: basic
                        nginx.ingress.kubernetes.io/rewrite-target: /
                        nginx.ingress.kubernetes.io/use-regex: true
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    3s    nginx-ingress-controller  Scheduled for sync

[root@master server]# curl -Lk https://myapp1.timinglee.org/lee/hostname.html -u lee:1
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master server]#  curl  -Lk https://myapp1.timinglee.org/lee/aaaa/  -u lee:1
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
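注解 rewrite-target 若写成 /$2 并配合 path: /lee(/|$)(.*),效果是剥掉 /lee 前缀、把第 2 个分组作为新路径。可以用 sed 在本地模拟这个正则语义(仅示意,并非 Nginx 实现;依赖 GNU sed 的 -E 扩展):

```shell
#!/bin/bash
# 模拟 rewrite-target: /$2 对 path: /lee(/|$)(.*) 的重写语义(本地示意)
rewrite() {
    echo "$1" | sed -E 's#^/lee(/|$)(.*)#/\2#'
}

rewrite /lee/hostname.html   # /hostname.html
rewrite /other/page          # 不匹配,原样输出 /other/page
```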

(9) Canary金丝雀发布

基于header的灰度发布

[root@master server]# kubectl create ingress webcluster --class=nginx \
--rule="myapp1.timinglee.org/=myapp1:80" --annotation \
nginx.ingress.kubernetes.io/rewrite-target=/ -o yaml --dry-run=client \
> 6-ingress.yml
  • kubectl create ingress:创建 Ingress 规则
  • webcluster:Ingress 名字
  • --class=nginx:使用 nginx ingress 控制器
  • --rule="域名/=服务名:端口":访问这个域名,转发给 myapp1 服务
  • --annotation nginx.ingress.kubernetes.io/rewrite-target=/:路径重写为根路径
  • -o yaml:输出 yaml 格式
  • --dry-run=client:不真正创建,只生成文件
[root@master server]# vim 6-ingress.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: webcluster
spec:
  ingressClassName: nginx
  rules:
  - host: myapp1.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp1
            port:
              number: 80
        path: /
        pathType: Prefix
[root@master server]# kubectl  apply  -f 6-ingress.yml 
ingress.networking.k8s.io/webcluster created
[root@master server]# kubectl  describe  ingress webcluster
Name:             webcluster
Labels:           <none>
Namespace:        default
Address:          
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host                  Path  Backends
  ----                  ----  --------
  myapp1.timinglee.org  
                        /   myapp1:80 (10.244.2.2:80)
Annotations:            nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    25s   nginx-ingress-controller  Scheduled for sync
[root@master server]# curl myapp1.timinglee.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
#建立新的ingress
[root@master server]# kubectl create  ingress webcluster --class=nginx --rule="myapp1.timinglee.org/=myapp2:80" --annotation nginx.ingress.kubernetes.io/canary-by-header-value=2 --annotation nginx.ingress.kubernetes.io/rewrite-target=/ -o yaml --dry-run=client > 7-ingress.yml
[root@master server]# vim 7-ingress.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "version"
    nginx.ingress.kubernetes.io/canary-by-header-value: "2"
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: webcluster-new
spec:
  ingressClassName: nginx
  rules:
  - host: myapp1.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp2
            port:
              number: 80
        path: /
        pathType: Prefix
[root@master server]# kubectl  apply  -f 7-ingress.yml 
ingress.networking.k8s.io/webcluster-new created
[root@master server]# kubectl get ingress
NAME             CLASS   HOSTS                  ADDRESS         PORTS   AGE
webcluster       nginx   myapp1.timinglee.org   172.25.254.30   80      27m
webcluster-new   nginx   myapp1.timinglee.org                   80      10s
[root@master server]#  curl myapp1.timinglee.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master server]#  curl -H "version:2" myapp1.timinglee.org
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
  • canary: "true" → 开启金丝雀模式
  • canary-by-header: version → 按请求头 version 来分流
  • canary-by-header-value: "2" → 请求头等于 2 时,走 myapp2

基于权重的灰度发布

[root@master server]# vim 7-ingress.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"   #开启金丝雀(灰度发布)模式 = 是
    nginx.ingress.kubernetes.io/canary-weight: "10"  #权重:10% 的流量分到 myapp2(v2)
    nginx.ingress.kubernetes.io/canary-weight-total: "100" #权重总数:默认就是100
    nginx.ingress.kubernetes.io/rewrite-target: /  #路径重写:把访问的路径重写成 /
  name: webcluster-new
spec:
  ingressClassName: nginx
  rules:
  - host: myapp1.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp2
            port:
              number: 80
        path: /
        pathType: Prefix
[root@master server]#  kubectl apply -f 7-ingress.yml
ingress.networking.k8s.io/webcluster-new configured
[root@master server]# kubectl  get ingress
NAME             CLASS   HOSTS                  ADDRESS         PORTS   AGE
webcluster       nginx   myapp1.timinglee.org   172.25.254.30   80      32m
webcluster-new   nginx   myapp1.timinglee.org   172.25.254.30   80      5m47s
[root@master server]# vim haha.sh
#!/bin/bash
# 统计 100 次访问中 v1 / v2 各命中多少次,验证灰度权重是否生效
v1=0
v2=0

for (( i=0; i<100; i++ ))
do
    # 返回页面含 "v1" 则 response 为 1,否则为 0
    response=$(curl -s myapp1.timinglee.org | grep -c "v1")
    v1=$(expr $v1 + $response)
    v2=$(expr $v2 + 1 - $response)
done

echo "v1:$v1, v2:$v2"

#权重为 "10" 时:约 10% 的流量分到 myapp2(v2)

#权重为 "40" 时:约 40% 的流量分到 myapp2(v2)

#测试结果输出略
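如果手边没有真实集群,也可以先在本地模拟一下按权重分流的大致分布(示意脚本,假设权重为 10%,不访问集群):

```shell
#!/bin/bash
# 假设场景:本地模拟 canary-weight=10 的分流效果,不访问真实集群
v1=0; v2=0
for i in $(seq 1 1000); do
    # 每个请求有 10% 的概率命中 v2
    if [ $((RANDOM % 100)) -lt 10 ]; then
        v2=$((v2 + 1))
    else
        v1=$((v1 + 1))
    fi
done
echo "v1:$v1, v2:$v2"
```

v2 的计数会在 100 附近波动,这与 ingress-nginx 按权重随机分流的行为是一致的。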

11.K8s存储

(1)ConfigMap

configmap的功能

  • configMap用于保存配置数据,以键值对形式存储。
  • configMap 资源提供了向 Pod 注入配置数据的方法。
  • 镜像和配置文件解耦,以便实现镜像的可移植性和可复用性。
  • configMap 数据大小不能超过 1MiB(受 etcd 单个对象大小的限制)

configmap的使用场景

  • 填充环境变量的值
  • 设置容器内的命令行参数
  • 填充卷的配置文件
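其中"设置容器内的命令行参数"这一场景,可以在 Pod 的 args 里用 $(变量名) 引用已注入的环境变量,由 kubelet 在容器启动时展开(示意片段,configMap 名称 timinglee4 和键 port 均为假设):

```yaml
# 片段示意:args 中用 $(PORT) 引用来自 configMap 的环境变量
spec:
  containers:
  - name: demo
    image: busybox
    command: ["/bin/sh", "-c"]
    args: ["echo listening on $(PORT)"]   # $(PORT) 由 kubelet 在启动时展开
    env:
    - name: PORT
      valueFrom:
        configMapKeyRef:
          name: timinglee4   # 假设该 configMap 已存在
          key: port
```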

ConfigMap的建立

通过字符方式
[root@master ~]# mkdir storage
[root@master ~]# cd storage/
[root@master storage]# kubectl create cm timinglee --from-literal fname=timing --from-literal lname=lee
configmap/timinglee created
[root@master storage]# kubectl  get cm
NAME               DATA   AGE
kube-root-ca.crt   1      14d
timinglee          2      71s
[root@master storage]# kubectl get cm --no-headers
kube-root-ca.crt   1     14d
timinglee          2     2m28s
[root@master storage]# kubectl  describe  cm timinglee
Name:         timinglee
Namespace:    default    #在默认命名空间 default 里
Labels:       <none>
Annotations:  <none>

Data
====
fname:
----
timing

lname:
----
lee


BinaryData
====

Events:  <none>
[root@master storage]# kubectl  get cm timinglee -o yaml
apiVersion: v1
data:
  fname: timing
  lname: lee
kind: ConfigMap
metadata:
  creationTimestamp: "2026-04-18T11:49:16Z"
  name: timinglee
  namespace: default
  resourceVersion: "272138"
  uid: 3eb689d4-05dd-4b41-895e-70c73fc06a55

kubectl get cm timinglee -o yaml
就是把你创建的配置,以 "原始格式" 展示出来。

通过文件方式

[root@master storage]# echo xiarunyu >timinglee
[root@master storage]# kubectl  create cm timinglee2 --from-file timinglee
configmap/timinglee2 created
[root@master storage]# kubectl describe cm timinglee2
Name:         timinglee2
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
timinglee:
----
xiarunyu      #输出xiarunyu



BinaryData
====

Events:  <none>

通过目录方式

[root@master storage]# mkdir test
[root@master storage]# echo timinglee > test/tfile
[root@master storage]# echo lee > test/lfile
[root@master storage]# ls test/
lfile  tfile
[root@master storage]# cat test/*
lee
timinglee
[root@master storage]# kubectl create  cm timinglee3 --from-file test/
configmap/timinglee3 created
[root@master storage]# kubectl  describe  cm timinglee3
Name:         timinglee3
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
lfile:
----
lee


tfile:
----
timinglee



BinaryData
====

Events:  <none>

通过yaml方式

[root@master storage]# kubectl  create  cm timinglee4 --from-literal timinglee=abc  --from-literal lee=def --dry-run=client -o yaml > timinglee4.yml
[root@master storage]# vim timinglee4.yml 
apiVersion: v1
data:
  lee: def
  timinglee: abc
kind: ConfigMap
metadata:
  name: timinglee4                               
[root@master storage]# kubectl apply  -f timinglee4.yml 
configmap/timinglee4 created
[root@master storage]# kubectl  describe cm timinglee4
Name:         timinglee4
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
lee:
----
def

timinglee:
----
abc


BinaryData
====

Events:  <none>

ConfigMap的使用方法

使用configmap填充环境变量

填充自定义变量
[root@master storage]# vim timinglee4.yml
apiVersion: v1
data:
  ipaddress: "172.25.254.50"
  port: "3306"
kind: ConfigMap
metadata:
  name: timinglee4
[root@master storage]# kubectl apply -f timinglee4.yml 
configmap/timinglee4 configured
[root@master storage]# kubectl describe cm timinglee4
Name:         timinglee4
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
ipaddress:
----
172.25.254.50

port:
----
3306


BinaryData
====

Events:  <none>




[root@master storage]# kubectl  run testpod --image busybox --dry-run=client -o yaml > testpod.yml
[root@master storage]# vim testpod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busybox
    name: testpod
    command:
    - /bin/sh
    - -c
    - env
    env:
    - name: key1		#设定key1的值
      valueFrom:
        configMapKeyRef:
          name: timinglee4
          key: ipaddress
    - name: key2		#设定key2的值
      valueFrom:
        configMapKeyRef:
          name: timinglee4
          key: port
  restartPolicy: Never
[root@master storage]# kubectl apply -f testpod.yml 
pod/testpod created
[root@master storage]# kubectl  logs pods/testpod
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
HOSTNAME=testpod
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
key1=172.25.254.50
KUBERNETES_PORT_443_TCP_PROTO=tcp
key2=3306
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
[root@master storage]# kubectl delete pods testpod
pod "testpod" deleted from default namespace
  • configMap:K8s 里存放普通(不敏感)配置文件/文本的对象
  • Key:配置里的某个键(比如 username、app.name)
  • Ref:reference = 引用
直接把configMap中的变量填充到系统变量
[root@master storage]# vim testpod.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busybox
    name: testpod
    command:
    - /bin/sh
    - -c
    - env
    envFrom:
    - configMapRef:
        name: timinglee4
  restartPolicy: Never
[root@master storage]# kubectl apply -f testpod.yml 
pod/testpod created
[root@master storage]# kubectl  logs pods/testpod 
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
HOSTNAME=testpod
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
port=3306
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
ipaddress=172.25.254.50
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
变量的表示方式
[root@master storage]# vim testpod.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busybox
    name: testpod
    command:
    - /bin/sh
    - -c
    - echo ${ipaddress}_${port}
    envFrom:
    - configMapRef:
        name: timinglee4
[root@master storage]# kubectl apply -f testpod.yml
pod/testpod created
[root@master storage]#  kubectl logs pods/testpod
172.25.254.50_3306

通过数据卷使用configmap

[root@master storage]# vim testpod.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busybox
    name: testpod
    command:
    - /bin/sh
    - -c
    - sleep 100000
    volumeMounts:
    - name: config-volume
      mountPath: /config
  volumes:
  - name: config-volume
    configMap:
      name: timinglee4
  restartPolicy: Never
[root@master storage]# kubectl apply -f testpod.yml 
pod/testpod created
[root@master storage]# kubectl exec -it pods/testpod --/bin/sh
error: unknown flag: --/bin/sh
See 'kubectl exec --help' for usage.
[root@master storage]# kubectl exec -it pods/testpod -- /bin/sh
/ # ls
bin     config  dev     etc     home    lib     lib64   proc    root    sys     tmp     usr     var
/ # config/
/bin/sh: config/: Permission denied
/ # cd config/
/config # cat ipaddress 
172.25.254.50/config # cat port
3306/config #  
[root@master storage]#  kubectl delete -f testpod.yml  --force
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "testpod" force deleted from default namespace
通过cm来配置pod
[root@master storage]# vim nginx.conf
server {
  listen 8000;
  server_name _;
  root /usr/share/nginx/html;
  index index.html;
}
[root@master storage]#  kubectl create cm nginx --from-file nginx.conf
configmap/nginx created
[root@master storage]# kubectl  describe cm nginx
Name:         nginx
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
nginx.conf:
----
server {
  listen 8000;
  server_name _;
  root /usr/share/nginx/html;
  index index.html;
}



BinaryData
====

Events:  <none>
[root@master storage]# vim testpod.yml 
[root@master storage]# kubectl apply -f testpod.yml
pod/nginx created
[root@master storage]# kubectl get pods  -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          8s    10.244.2.15   node2   <none>           <none>
[root@master storage]# curl 10.244.2.15:8000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
通过修改该cm的内容来更新配置
[root@master storage]# kubectl edit cm nginx
apiVersion: v1
data:
  nginx.conf: |
    server {
      listen 8080;    #端口改成 8080
      server_name _;
      root /usr/share/nginx/html;
      index index.html;
    }
kind: ConfigMap
metadata:
  

[root@master storage]# kubectl delete -f testpod.yml
pod "nginx" deleted from default namespace
[root@master storage]# kubectl apply -f testpod.yml
pod/nginx created
[root@master storage]# kubectl get pods  -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          73s   10.244.2.17   node2   <none>           <none>
[root@master storage]# curl 10.244.2.17:8080  #IP可能会变记得查一下
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

(2).Secrets

  • Secret 对象类型用来保存敏感信息,例如密码、OAuth 令牌和 ssh key。
  • 敏感信息放在 secret 中比放在 Pod 的定义或者容器镜像中来说更加安全和灵活
  • Pod 可以用两种方式使用 secret:
  • 作为 volume 中的文件被挂载到 pod 中的一个或者多个容器里。
  • 当 kubelet 为 pod 拉取镜像时使用。
  • Secret的类型:
  • Service Account:Kubernetes 自动创建包含访问 API 凭据的 secret,并自动修改 pod 以使用此类型的 secret。
  • Opaque:使用base64编码存储信息,可以通过base64 --decode解码获得原始数据,因此安全性弱。
  • kubernetes.io/dockerconfigjson:用于存储docker registry的认证信息

secrets建立

通过命令方式建立
[root@master storage]# kubectl  create secret generic timinglee --from-literal userlist=timinglee --from-literal password=lee
secret/timinglee created
  • kubectl:k8s 命令工具
  • create:创建资源
  • secret:创建密钥(存密码、敏感信息)
  • generic:普通密钥类型(存自定义键值对)
  • timinglee:密钥的名字
  • --from-literal userlist=timinglee:创建一个键值对,键 = userlist,值 = timinglee
  • --from-literal password=lee:创建第二个键值对,键 = password,值 = lee
[root@master storage]# kubectl  get secrets
NAME             TYPE                DATA   AGE
auth-web         Opaque              1      2d20h
timinglee        Opaque              2      16s
web-tls-secret   kubernetes.io/tls   2      2d22h
#timinglee:你刚创建的密钥
#Opaque:普通密钥类型
#DATA=2:里面存了 2 条数据(userlist、password)
[root@master storage]# kubectl describe secrets timinglee   #查看 timinglee 这个密钥的详细信息
Name:         timinglee
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password:  3 bytes
userlist:  9 bytes
[root@master storage]# kubectl  get secrets -o yaml timinglee   #YAML 格式查看密钥原始数据
apiVersion: v1
data:
  password: bGVl                       #password 编码后:bGVl
  userlist: dGltaW5nbGVl               # userlist 编码后:dGltaW5nbGVl
kind: Secret
metadata:
  creationTimestamp: "2026-04-19T07:04:36Z"
  name: timinglee
  namespace: default
  resourceVersion: "316275"
  uid: 92d10e89-b6de-46ba-a217-e07729b5d783
type: Opaque
[root@master storage]# echo -n "dGltaW5nbGVl" | base64 -d 
timinglee[root@master storage]# 
[root@master storage]# kubectl  delete  secrets timinglee
secret "timinglee" deleted from default namespace
通过文件方式建立
[root@master storage]# echo timinglee > userlist
[root@master storage]# echo lee >password
[root@master storage]# cat userlist
timinglee
[root@master storage]# cat password
lee
[root@master storage]# kubectl create secret generic timinglee --from-file userlist --from-file password
secret/timinglee created
[root@master storage]# kubectl  get secrets
NAME             TYPE                DATA   AGE
auth-web         Opaque              1      2d20h
timinglee        Opaque              2      13s
web-tls-secret   kubernetes.io/tls   2      2d22h
通过yaml的方式建立
[root@master storage]# vim timinglee_secrets.yml 
apiVersion: v1
data:
  userlist: dGltaW5nbGVl      #'timinglee' 文本编码的转换
  passwd: MTIz                #'123' 文本编码的转换
kind: Secret                  #指的是密钥
metadata:
  name: timinglee              #Secret的名字叫timinglee
[root@master storage]# kubectl apply  -f timinglee_secrets.yml
secret/timinglee configured
[root@master storage]# kubectl  describe  secrets timinglee 
Name:         timinglee
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque        #自定义的普通密码密钥

Data
====
passwd:    3 bytes    #'123'3个字符   
userlist:  9 bytes    #'timinglee' 9个字符
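
手写 Secret yaml 时,data 字段的值需要先做 base64 编码,可以直接在 shell 里生成和验证:

```shell
# 生成 Secret yaml 中 data 字段所需的 base64 值
# -n 防止把换行符一起编码进去
echo -n "timinglee" | base64        # 输出 dGltaW5nbGVl
echo -n "123" | base64              # 输出 MTIz
# 反向解码验证
echo -n "dGltaW5nbGVl" | base64 -d  # 输出 timinglee
```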

secrets用法

注入pod文件中
[root@master storage]# vim testpod.yml 
apiVersion: v1
# 声明资源类型,这里是 Pod(K8s 最小的部署单元)
kind: Pod
metadata: 
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - image: busybox  #  容器使用的镜像:busybox(轻量级 Linux 工具集)
    name: busybox    # 容器的名字:busybox
    command:         # 容器启动时执行的命令(覆盖镜像默认命令)
    - /bin/sh        # 执行 shell
    - -c             # 执行后面的字符串命令
    - sleep 10000000 #  让容器休眠很长时间(防止容器启动后直接退出)
    volumeMounts:    # 容器内的挂载配置(把外部存储挂到容器里)
    - name: config-volume  # 挂载的卷名称(要和下面 volumes 对应)
      mountPath: /userlist # 挂载到容器内的路径 /userlist
  volumes:
  # 第一个卷
  - name: config-volume  #  卷名称(和上面挂载对应)
    secret:              # 卷类型:Secret(存放密码、令牌等敏感数据)
      secretName: timinglee # 使用的 Secret 名字叫 timinglee

  # 重启策略:容器退出后永远不重启
  restartPolicy: Never
[root@master storage]# kubectl  delete  -f testpod.yml 
pod "busybox" deleted from default namespace
[root@master storage]# kubectl  apply  -f testpod.yml 
pod/busybox created
[root@master storage]# kubectl exec -it pods/busybox -- /bin/sh 
#进入busybox的pod内部,打开终端
/ # ls /userlist
passwd  userlist

注入pod指定文件中

[root@master storage]# vim testpod.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - image: busybox
    name: busybox
    command:
    - /bin/sh
    - -c
    - sleep 100000000
    volumeMounts:
    - name: config-volume
      mountPath: /userlist
  volumes:
  - name: config-volume
    secret:
      secretName: timinglee
      items:                    # 只把 Secret 里的某个 key 挂载出来
      - key: userlist           # 取 Secret 里 key = userlist 的内容
        path: my-users/username # 挂载到容器里变成
                                # /userlist/my-users/username 文件
  restartPolicy: Never
[root@master storage]# kubectl  delete -f testpod.yml
pod "busybox" deleted from default namespace
[root@master storage]# kubectl apply -f testpod.yml 
pod/busybox created
[root@master storage]# kubectl  exec -it busybox  -- /bin/sh
#k8s打开容器busybox的pod 的终端编写
/ # ls /userlist/my-users/username
/userlist/my-users/username
/ # cat /userlist/my-users/username 
timinglee/ #
将Secret设置为环境变量
[root@master storage]# vim testpod.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - image: busybox
    name: busybox
    command:
    - /bin/sh
    - -c
    - env
    env: #给容器设置环境的变量
    - name: USERNAME
      valueFrom:
        secretKeyRef: #从secret里面取数据
          name: timinglee
          key: userlist  #取这个 Secret 里 key = userlist 的值
    - name: PASSWD           #环境变量名字
      valueFrom:
        secretKeyRef:
          name: timinglee
          key: passwd
  restartPolicy: Never
[root@master storage]# kubectl delete -f testpod.yml
pod "busybox" deleted from default namespace
[root@master storage]# kubectl  apply -f testpod.yml
pod/busybox created
[root@master storage]#  kubectl logs  pods/busybox busybox
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=busybox
SHLVL=1
HOME=/root
USERNAME=timinglee
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
PASSWD=123
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
存储docker registry的认证信息

建立harbor私有仓库

#上传镜像到私有仓库
#在172.25.254.200 harbor机子上上传镜像
[root@harbor ~]# docker tag timinglee/myapp:v1  reg.timinglee.org/timinglee/myapp:v1
[root@harbor ~]# docker push reg.timinglee.org/timinglee/myapp:v1 
The push refers to repository [reg.timinglee.org/timinglee/myapp]
a0d2c4392b06: Mounted from library/myapp 
05a9e65e2d53: Mounted from library/myapp-v2 
68695a6cfd7d: Mounted from library/myapp-v2 
c1dc81a64903: Mounted from library/myapp-v2 
8460a579ab63: Mounted from library/myapp-v2 
d39d92664027: Mounted from library/myapp-v2 
v1: digest: sha256:9eeca44ba2d410e54fccc54cbe9c021802aa8b9836a0bcf3d3229354e4c8870e size: 1569
使用私有仓库镜像时的问题
[root@master storage]# kubectl run  myapp-v1 --image=reg.timinglee/timinglee/myapp:v1  --restart=Never --dry-run=client -o yaml >test.yml
[root@master storage]# vim test.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myapp-v1
  name: myapp-v1
spec:
  containers:
  - image: reg.timinglee.org/timinglee/myapp:v1
    name: myapp-v1

[root@master storage]#  kubectl delete -f testpod.yml
[root@master storage]# kubectl delete pod nginx
pod "nginx" deleted from default namespace
pod "busybox" deleted from default namespace
[root@master storage]#  kubectl apply -f test.yml 
pod/myapp-v1 created
[root@master storage]# kubectl get pods
NAME       READY   STATUS         RESTARTS   AGE
myapp-v1   0/1     ErrImagePull   0          2s
[root@master storage]# kubectl  describe pods myapp-v1 
。。。。。。。。。
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  2m51s                 default-scheduler  Successfully assigned default/myapp-v1 to node3
  Normal   Pulling    87s (x4 over 2m51s)   kubelet            spec.containers{myapp-v1}: Pulling image "reg.timinglee/timinglee/myapp:v1"
  Warning  Failed     87s (x4 over 2m50s)   kubelet            spec.containers{myapp-v1}: Failed to pull image "reg.timinglee/timinglee/myapp:v1": Error response from daemon: Get "https://reg.timinglee/v2/": dial tcp: lookup reg.timinglee on 8.8.8.8:53: no such host
  Warning  Failed     87s (x4 over 2m50s)   kubelet            spec.containers{myapp-v1}: Error: ErrImagePull
  Normal   BackOff    10s (x10 over 2m49s)  kubelet            spec.containers{myapp-v1}: Back-off pulling image "reg.timinglee/timinglee/myapp:v1"
  Warning  Failed     10s (x10 over 2m49s)  kubelet            spec.containers{myapp-v1}: Error: ImagePullBackOff

#为什么没有拉取成功?因为它是私有镜像,没有认证信息就无法拉取。所以要用 secrets 保存仓库的认证信息

制作用户docker认证的secrets
[root@master storage]# kubectl create secret docker-registry docker-auth --docker-server reg.timinglee.org --docker-username admin --docker-password lee --docker-email timinglee@timinglee.org
secret/docker-auth created
[root@master storage]# kubectl  get secrets  docker-auth
NAME          TYPE                             DATA   AGE
docker-auth   kubernetes.io/dockerconfigjson   1      21s
[root@master storage]# kubectl  describe  secrets docker-auth
Name:         docker-auth
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/dockerconfigjson

Data
====
.dockerconfigjson:  125 bytes
#下面以 yaml 格式查看,并解码其中的 base64 字符串
[root@master storage]# kubectl  get  secrets docker-auth -o yaml 
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJyZWcudGltaW5nbGVlLm9yZyI6eyJ1c2VybmFtZSI6ImFkbWluIiwicGFzc3dvcmQiOiJsZWUiLCJlbWFpbCI6InRpbWluZ2xlZUB0aW1pbmdsZWUub3JnIiwiYXV0aCI6IllXUnRhVzQ2YkdWbCJ9fX0=
kind: Secret
metadata:
  creationTimestamp: "2026-04-20T14:38:01Z"
  name: docker-auth
  namespace: default
  resourceVersion: "354493"
  uid: cb065deb-bae7-41b3-ac48-ddd429728b3a
type: kubernetes.io/dockerconfigjson
[root@master storage]# echo "eyJhdXRocyI6eyJyZWcudGltaW5nbGVlLm9yZyI6eyJ1c2VybmFtZSI6ImFkbWluIiwicGFzc3dvcmQiOiJsZWUiLCJlbWFpbCI6InRpbWluZ2xlZUB0aW1pbmdsZWUub3JnIiwiYXV0aCI6IllXUnRhVzQ2YkdWbCJ9fX0=" | base64 -d
{"auths":{"reg.timinglee.org":{"username":"admin","password":"lee","email":"timinglee@timinglee.org","auth":"YWRtaW46bGVl"}}}
[root@master storage]# 
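解码出来的 json 里,auth 字段本身又是一层 base64,内容是 "用户名:密码":

```shell
# auth 字段 = base64("用户名:密码")
echo -n "YWRtaW46bGVl" | base64 -d   # 输出 admin:lee
echo -n "admin:lee" | base64         # 输出 YWRtaW46bGVl
```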

使用docker的secrets

[root@master storage]# kubectl  delete -f test.yml 
pod "myapp-v1" deleted from default namespace
[root@master storage]# vim test.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myapp-v1
  name: myapp-v1
spec:
  containers:
  - image: reg.timinglee.org/timinglee/myapp:v1
    name: myapp-v1
  imagePullSecrets:    #仓库登录密钥
  - name: docker-auth   #使用你创建的密钥
[root@master storage]# kubectl apply -f  test.yml 
pod/myapp-v1 created
[root@master storage]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
myapp-v1   1/1     Running   0          5s

(3)volumes配置管理

容器中文件在磁盘上是临时存放的,这给容器中运行的特殊应用程序带来一些问题

当容器崩溃时,kubelet将重新启动容器,容器中的文件将会丢失,因为容器会以干净的状态重建。

当在一个 Pod 中同时运行多个容器时,常常需要在这些容器之间共享文件。

Kubernetes 卷具有明确的生命周期与使用它的 Pod 相同

卷比 Pod 中运行的任何容器的存活期都长,在容器重新启动时数据也会得到保留

当一个 Pod 不再存在时,卷也将不再存在。

Kubernetes 可以支持许多类型的卷,Pod 也能同时使用任意数量的卷。

卷不能挂载到其他卷,也不能与其他卷有硬链接。 Pod 中的每个容器必须独立地指定每个卷的挂载位置。

kubernets支持的卷的类型

官网:https://kubernetes.io/zh/docs/concepts/storage/volumes/

k8s支持的卷的类型如下:

  • awsElasticBlockStore 、azureDisk、azureFile、cephfs、cinder、configMap、csi
  • downwardAPI、emptyDir、fc (fibre channel)、flexVolume、flocker
  • gcePersistentDisk、gitRepo (deprecated)、glusterfs、hostPath、iscsi、local、
  • nfs、persistentVolumeClaim、projected、portworxVolume、quobyte、rbd
  • scaleIO、secret、storageos、vsphereVolume

emptyDir卷

功能:

当Pod指定到某个节点上时,首先创建的是一个emptyDir卷,并且只要 Pod 在该节点上运行,卷就一直存在。卷最初是空的。 尽管 Pod 中的容器挂载 emptyDir 卷的路径可能相同也可能不同,但是这些容器都可以读写 emptyDir 卷中相同的文件。 当 Pod 因为某些原因被从节点上删除时,emptyDir 卷中的数据也会永久删除

emptyDir 的使用场景:

  • 缓存空间,例如基于磁盘的归并排序。
  • 耗时较长的计算任务提供检查点,以便任务能方便地从崩溃前状态恢复执行。
  • 在 Web 服务器容器服务数据时,保存内容管理器容器获取的文件。

示例:

[root@master ~]# mkdir volumes
[root@master ~]# cd volumes
[root@master volumes]# kubectl run empty --image busybox --dry-run=client -o yaml > empty.yml
[root@master volumes]# vim empty.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: empty
  name: empty
spec:
  containers:
  - image: busybox                # 使用 busybox 镜像(轻量级Linux工具集)
    name: busybox                 # 容器名字叫 busybox
    command:                     # 容器启动命令
    - /bin/sh                    # 用sh执行
    - -c                         # 执行后面的命令
    - sleep 100000               # 让容器休眠10万秒(保持容器不退出)
    volumeMounts:                # 把卷挂载到容器内部
    - mountPath: /cache          # 挂载到容器的 /cache 目录
      name: cache-vol            # 挂载的卷名字叫 cache-vol(和下面定义对应)

    
  - image: nginx                 # 使用 nginx 镜像(网页服务)
    name: nginx                  # 容器名字叫 nginx
    volumeMounts:                # 同样挂载这个共享卷
    - mountPath: /usr/share/nginx/html  # 挂载到nginx网页根目录
      name: cache-vol            # 同一个卷:cache-vol

  # 定义Pod级别的存储卷
  volumes:
  - name: cache-vol              # 卷名:cache-vol
    emptyDir:                    # 卷类型:emptyDir(临时空目录,Pod删除就消失)
      medium: Memory             # 存储介质:内存(速度极快,重启丢失)
      sizeLimit: 100Mi           # 最大限制100MB
[root@master volumes]# kubectl  apply  -f empty.yml 
pod/empty created
[root@master volumes]# kubectl  get  pods  -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
empty   2/2     Running   0          48s   10.244.1.3   node1   <none>           <none>
[root@master volumes]# curl 10.244.1.3
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.23.4</center>
</body>
</html>
[root@master volumes]# kubectl  exec -it pods/empty  -c
error: flag needs an argument: 'c' in -c
See 'kubectl exec --help' for usage.
[root@master volumes]# kubectl  exec -it pods/empty  -c busybox  -- /bin/sh
/ # ls
bin    cache  dev    etc    home   lib    lib64  proc   root   sys    tmp    usr    var
/ # cd cache/
/cache # ls
/cache # echo hello xiarunyu > index.html
/cache # dd if=/dev/zero of=bigfile bs=1M count=99
99+0 records in
99+0 records out
103809024 bytes (99.0MB) copied, 0.240377 seconds, 411.9MB/s
/cache # 
[root@master volumes]# curl 10.244.1.3
hello xiarunyu

hostpath卷

功能:

  • hostPath 卷能将主机节点文件系统上的文件或目录挂载到您的 Pod 中,不会因为pod关闭而被删除
  • hostPath 的一些用法
  • 运行一个需要访问 Docker 引擎内部机制的容器,挂载 /var/lib/docker 路径。
  • 在容器中运行 cAdvisor(监控) 时,以 hostPath 方式挂载 /sys。
  • 允许 Pod 指定给定的 hostPath 在运行 Pod 之前是否应该存在,是否应该创建以及应该以什么方式存在

hostPath的安全隐患

  • 具有相同配置(例如从 podTemplate 创建)的多个 Pod 会由于节点上文件的不同而在不同节点上有不同的行为。
  • 当 Kubernetes 按照计划添加资源感知的调度时,这类调度机制将无法考虑由 hostPath 使用的资源。
  • 基础主机上创建的文件或目录只能由 root 用户写入。您需要在 特权容器 中以 root 身份运行进程,或者修改主机上的文件权限以便容器能够写入 hostPath 卷。
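下面示例里会用到 hostPath 的 type 字段,它的常见取值如下(示意片段,path 仅为举例):

```yaml
volumes:
- name: host-vol
  hostPath:
    path: /data
    # type 常见取值:
    #   ""                默认,挂载前不做任何检查
    #   DirectoryOrCreate 目录不存在就创建(权限 0755)
    #   Directory         目录必须已存在
    #   FileOrCreate      文件不存在就创建(权限 0644)
    #   File              文件必须已存在
    #   Socket            必须是已存在的 UNIX socket
    #   CharDevice        必须是已存在的字符设备
    #   BlockDevice       必须是已存在的块设备
    type: DirectoryOrCreate
```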

示例:

[root@master volumes]# kubectl  run hostpath --image nginx --dry-run=client -o yaml > hostpath.yml
[root@master volumes]# vim hostpath.yml
apiVersion: v1
kind: Pod
metadata:
  labels:          # 给 Pod 打标签,用于筛选、匹配
    run: hostpath  # 标签 key=run,value=hostpath
  name: hostpath   # Pod 名称:hostpath

# Pod 核心规格配置
spec:
  containers:    # 容器列表(这里只跑 1 个 nginx 容器)
  - image: nginx               # 使用 nginx 官方镜像
    name: hostpath             # 容器名字:hostpath
    volumeMounts:              # 挂载存储卷到容器内部
    - mountPath: /usr/share/nginx/html  # 容器内的挂载路径(nginx 网页根目录)
      name: timinglee                   # 挂载的卷名称,要和下面 volumes 里的名字对应

  # 定义 Pod 使用的存储卷
  volumes:
  - name: timinglee       # 卷名称:timinglee
    hostPath:             # 卷类型:hostPath(挂载**宿主机**的目录/文件到容器)
      path: /data         # 宿主机上的真实路径:/data
      type: DirectoryOrCreate  # 类型:如果宿主机没有 /data 目录,就自动创建 
[root@master volumes]#  kubectl apply -f hostpath.yml
pod/hostpath create
[root@master volumes]# kubectl  delete  -f empty.yml 
pod "empty" deleted from default namespace
[root@master volumes]#  kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
hostpath   1/1     Running   0          63s   10.244.2.4   node2   <none>           <none>
[root@master volumes]# curl 10.244.2.4
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.23.4</center>
</body>
</html>

hostPath 在 Pod 所在节点上自动创建了 /data/ 目录。注意先确认 Pod 被调度到了哪个 node 再登录

#这里是 node2

[root@master volumes]# ssh -l root node2
[root@node2 ~]# ll /data/
总用量 0
[root@node2 ~]# echo how are you timinglee > /data/index.html
[root@node2 ~]# exit
注销
Connection to node2 closed.
[root@master volumes]# curl 10.244.2.4
how are you timinglee

nfs卷

NFS 卷允许将一个现有的 NFS 服务器上的目录挂载到 Kubernetes 中的 Pod 中。这对于在多个 Pod 之间共享数据或持久化存储数据非常有用

例如,如果有多个容器需要访问相同的数据集,或者需要将容器中的数据持久保存到外部存储,NFS 卷可以提供一种方便的解决方案。

建立nfs共享存储
[root@node3 ~]# mkdir /share
[root@node3 ~]# mount /dev/sr0 /mnt
mount: /mnt: WARNING: source write-protected, mounted read-only.
[root@node3 ~]# dnf install nfs-utils -y
[root@node3 ~]# systemctl enable --now nfs-server.service
Created symlink /etc/systemd/system/multi-user.target.wants/nfs-server.service → /usr/lib/systemd/system/nfs-server.service.
[root@node3 ~]# vim /etc/exports
/share   *(rw,sync,no_root_squash)
# /share         要共享的宿主机目录
# *               允许所有IP地址的机器访问
# rw              读写权限
# sync            数据同步写入内存+磁盘,安全可靠
# no_root_squash  允许客户端的root用户,在服务端也拥有root权限(测试常用)

# 重新加载 NFS 配置,让刚才的设置生效(r=重新加载,v=显示详情)
[root@node3 ~]# exportfs  -rv
exporting *:/share  # 系统输出:正在对外共享 /share 目录

# 查看本机 NFS 服务的共享目录列表
[root@node3 ~]# showmount -e
Export list for node3:
/share *    # 最终结果:node3 机器对外共享了 /share 目录,所有IP可访问
部署nfs卷
[root@master ~]# for i in 10 20 ; do ssh -l root 172.25.254.$i  dnf install nfs-utils -y ; done
[root@master ~]# for i in 10 20; do ssh -l root 172.25.254.$i showmount -e  172.25.254.30 ; done
root@172.25.254.10's password: 
Export list for 172.25.254.30:
/share *
root@172.25.254.20's password: 
Export list for 172.25.254.30:
/share *
建立nfs卷
[root@master volumes]# kubectl run web --image nginx  --dry-run=client -o yaml  >> nfs.yml
[root@master volumes]# vim nfs.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:          # 给 Pod 打标签,方便筛选
    run: web1      # 标签 key=run,value=web1
  name: web1       # Pod 名字:web1

# Pod 核心配置
spec:
  nodeName: node2   # 强制把 Pod 调度到 node2 节点运行(固定节点)

  containers:      # 容器列表(这里只跑 1 个 Nginx)
  - image: nginx                # 使用 Nginx 镜像
    name: web1                  # 容器名字:web1
    volumeMounts:               # 挂载存储卷到容器内
    - mountPath: /usr/share/nginx/html  # 容器内挂载路径(Nginx 网页目录)
      name: cache-vol                    # 卷名称,和下面 volumes 对应

  # 定义存储卷:使用 NFS 网络存储
  volumes:
  - name: cache-vol       # 卷名称:cache-vol
    nfs:                  # 卷类型:NFS(网络文件共享)
      server: 172.25.254.30  # NFS 服务器的 IP 地址
      path: /share           # NFS 服务端共享的目录(/share)

[root@master volumes]# kubectl apply -f nfs.yml
pod/web1 created
[root@master volumes]# kubectl  delete -f hostpath.yml 
pod "hostpath" deleted from default namespace
[root@master volumes]# kubectl  get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
web1   1/1     Running   0          79s   10.244.2.5   node2   <none>           <none>
[root@master volumes]# curl  10.244.2.5
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.23.4</center>
</body>
</html>
# On node3
[root@node3 ~]# echo hello timinglee > /share/index.html




[root@master volumes]# curl  10.244.2.5
hello timinglee
[root@master volumes]#  kubectl delete -f nfs.yml
pod "web1" deleted from default namespace
[root@master volumes]# vim nfs.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web1
  name: web1
spec:
  nodeName: node1			# changed the node the Pod runs on
  containers:
  - image: nginx
    name: web1
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: cache-vol
  volumes:
  - name: cache-vol
    nfs:
      server: 172.25.254.30
      path: /share
[root@master volumes]# kubectl apply -f nfs.yml
pod/web1 created
[root@master volumes]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
web1   1/1     Running   0          10s   10.244.1.3   node1   <none>           <none>
[root@master volumes]# curl 10.244.1.3
hello timinglee

PersistentVolume (persistent volume)

PersistentVolume (PV)
  • A PV is a piece of network storage in the cluster, provisioned by an administrator.
  • A PV is a cluster resource, implemented as a volume plugin,
  • but its lifecycle is independent of any Pod that uses it.
  • The PV API object captures the implementation details of storage such as NFS, iSCSI, or cloud-provider storage systems.
  • PVs can be provisioned in two ways: statically or dynamically.
  • Static PVs: the cluster administrator creates a number of PVs that carry the details of the real storage. They exist in the Kubernetes API and are available for consumption.
  • Dynamic PVs: when none of the administrator's static PVs match a user's PVC, the cluster may try to provision a volume specifically for that PVC. This provisioning is based on a StorageClass.
PersistentVolumeClaim (PVC)
  • A PVC is a user's request for storage.
  • It is similar to a Pod: Pods consume node resources, PVCs consume PV resources.
  • Pods can request specific resources (such as CPU and memory); PVCs can request a specific size and access mode for a persistent volume.
  • PVC-to-PV binding is a one-to-one mapping. If no matching PV is found, the PVC remains in the unbound state indefinitely.
Volume access modes
  • ReadWriteOnce -- the volume can be mounted read-write by a single node
  • ReadOnlyMany -- the volume can be mounted read-only by many nodes
  • ReadWriteMany -- the volume can be mounted read-write by many nodes

On the command line, the access modes are abbreviated as:

  • RWO - ReadWriteOnce
  • ROX - ReadOnlyMany
  • RWX - ReadWriteMany
Volume reclaim policies
  • Retain: keep the volume; it must be reclaimed manually
  • Recycle: scrub the volume's data automatically (deprecated in current versions)
  • Delete: delete the associated storage asset, e.g. an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume
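The mapping between the short and long access-mode names is fixed; below is a tiny illustrative helper for this tutorial only (not part of kubectl or any Kubernetes tooling):

```shell
# Hypothetical helper: expand the abbreviations shown by `kubectl get pv`
# into the full access-mode names.
expand_access_mode() {
  case "$1" in
    RWO) echo "ReadWriteOnce" ;;   # single node, read-write
    ROX) echo "ReadOnlyMany" ;;    # many nodes, read-only
    RWX) echo "ReadWriteMany" ;;   # many nodes, read-write
    *)   echo "unknown" >&2; return 1 ;;
  esac
}

expand_access_mode RWX   # prints ReadWriteMany
```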
Static persistent volumes

# Create the storage directories

[root@node3 ~]# mkdir  /share/pv{1..3} -p
[root@node3 ~]# dnf install nfs-utils -y
[root@node3 ~]# systemctl enable --now nfs-server
[root@node3 ~]# vim /etc/exports
/share  *(sync,rw,no_root_squash)

[root@node3 ~]# exportfs  -rv
exporting *:/share

# Write the YAML file that creates the PVs

 # Page through all API resource types supported by the cluster.
[root@master ~]# kubectl api-resources  | less

[root@master volumes]# vim pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
     path: /share/pv1
     server: 172.25.254.30
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
     path: /share/pv2
     server: 172.25.254.30

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv3
spec:
  capacity:
    storage: 15Gi
  volumeMode: Filesystem
  accessModes:
  - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
     path: /share/pv3
     server: 172.25.254.30

[root@master volumes]# kubectl  apply -f pv.yml 
persistentvolume/pv1 created
persistentvolume/pv2 created
persistentvolume/pv3 created
[root@master volumes]# kubectl  get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv1    5Gi        RWO            Retain           Available           nfs            <unset>                          8s
pv2    10Gi       RWX            Retain           Available           nfs            <unset>                          8s
pv3    15Gi       ROX            Retain           Available           nfs            <unset>                          8s



[root@master volumes]# vim pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
spec:
  storageClassName: nfs
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 15Gi
[root@master volumes]# kubectl  apply -f pvc.yml 
persistentvolumeclaim/pvc1 created
persistentvolumeclaim/pvc2 created
persistentvolumeclaim/pvc3 created
[root@master volumes]# kubectl get pvc
NAME   STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc1   Bound    pv1      5Gi        RWO            nfs            <unset>                 4s
pvc2   Bound    pv2      10Gi       RWX            nfs            <unset>                 4s
pvc3   Bound    pv3      15Gi       ROX            nfs            <unset>                 4s
[root@master volumes]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv1    5Gi        RWO            Retain           Bound    default/pvc1   nfs            <unset>                          2m2s
pv2    10Gi       RWX            Retain           Bound    default/pvc2   nfs            <unset>                          2m2s
pv3    15Gi       ROX            Retain           Bound    default/pvc3   nfs            <unset>                          2m2s
[root@master volumes]# vim checkpvpod.yml
apiVersion: v1
kind: Pod
metadata:
  name: timinglee
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: vol1
  volumes:
  - name: vol1
    persistentVolumeClaim:
      claimName: pvc1
      

[root@master volumes]# kubectl apply -f checkpvpod.yml
pod/timinglee created
[root@master volumes]# kubectl  get pods
NAME        READY   STATUS    RESTARTS   AGE
timinglee   1/1     Running   0          9s
[root@master volumes]# kubectl  get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
timinglee   1/1     Running   0          48s   10.244.1.4   node1   <none>           <none>
[root@master volumes]# curl 10.244.1.4
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.23.4</center>
</body>
</html>

# On node3, write an index file under /share/pv1
[root@node3 ~]# echo good timinglee >  /share/pv1/index.html




[root@master volumes]# curl 10.244.1.4
good timinglee
Dynamic persistent volumes

# Upload the required image

[root@node3 ~]# docker login  reg.timinglee.org
[root@node3 ~]# docker load -i  nfs-subdir-external-provisioner-4.0.2.tar
[root@node3 ~]# docker images                                                                                                                              
registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2   932b0bface75       43.8MB             0B        
[root@node3 ~]# docker tag registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2  reg.timinglee.org/sig-storage/nfs-subdir-external-provisioner:v4.0.2  
[root@node3 ~]# docker push  reg.timinglee.org/sig-storage/nfs-subdir-external-provisioner:v4.0.2  
The push refers to repository [reg.timinglee.org/sig-storage/nfs-subdir-external-provisioner]
ad321585b8f5: Pushed 
1a5ede0c966b: Pushed 
v4.0.2: digest: sha256:f741e403b3ca161e784163de3ebde9190905fdbf7dfaa463620ab8f16c0f6423 size: 739

# Set up RBAC authorization

[root@master volumes]# vim storagesa.yml
# Create a namespace called nfs-client-provisioner
apiVersion: v1
kind: Namespace
metadata:
  name: nfs-client-provisioner  # namespace that holds all related resources
---
# Create the service account used by the NFS provisioner
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner         # account name
  namespace: nfs-client-provisioner    # placed in the namespace above
---
# ClusterRole: grant permissions (cluster-wide)
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner  # role name
rules:
  - apiGroups: [""]                    # core API group
    resources: ["nodes"]               # read access to nodes
    verbs: ["get", "list", "watch"]    # permissions: get, list, watch
  - apiGroups: [""]
    resources: ["persistentvolumes"]    # manage PVs
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"] # manage PVCs
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]         # read access to storage classes
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]                 # write events
    verbs: ["create", "update", "patch"]
---
# ClusterRoleBinding: bind the role above to the service account above
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner  # binding name
subjects:
  - kind: ServiceAccount                     # bound subject: the service account
    name: nfs-client-provisioner             # account name
    namespace: nfs-client-provisioner        # its namespace
roleRef:
  kind: ClusterRole                          # type of role being bound
  name: nfs-client-provisioner-runner        # role name
  apiGroup: rbac.authorization.k8s.io        # fixed value
---
# Role (valid only in this namespace): used for leader election (leader lock)
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner    # role name
  namespace: nfs-client-provisioner              # namespace
rules:
  - apiGroups: [""]
    resources: ["endpoints"]        # endpoints are used as a distributed lock
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
# RoleBinding: grant the lock permissions to the service account
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-client-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

[root@master volumes]# kubectl apply -f storagesa.yml
namespace/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

[root@master volumes]# kubectl -n nfs-client-provisioner get sa
NAME                     AGE
default                  2m23s
nfs-client-provisioner   2m23s

# Deploy the provisioner controller

[root@master volumes]# vim storageclassdep.yml
# API version (Deployments always use apps/v1)
apiVersion: apps/v1
# Resource type: a Deployment (runs the provisioner container)
kind: Deployment
# Resource name and namespace
metadata:
  name: nfs-client-provisioner
  namespace: nfs-client-provisioner
# Core configuration
spec:
  replicas: 1                                # run a single Pod (one instance)
  strategy:
    type: Recreate                           # upgrade strategy: delete then recreate (avoids concurrent instances)
  selector:
    matchLabels:
      app: nfs-client-provisioner            # label selector matching the Pod below
  template:
    metadata:
      labels:
        app: nfs-client-provisioner          # Pod label, must match the selector above
    spec:
      serviceAccountName: nfs-client-provisioner  # use the service account created earlier
      containers:
        - name: nfs-client-provisioner           # container name
          image: sig-storage/nfs-subdir-external-provisioner:v4.0.2  # NFS dynamic provisioner image
          volumeMounts:
            - name: nfs-client-root               # volume name (matches volumes below)
              mountPath: /persistentvolumes       # mount point inside the container
          env:                                    # environment variables (core configuration)
            - name: PROVISIONER_NAME              # provisioner name
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER                    # NFS server IP
              value: 172.25.254.30
            - name: NFS_PATH                      # NFS export root
              value: /share
      volumes:                                    # mount the NFS export into the Pod
        - name: nfs-client-root                   # volume name
          nfs:
            server: 172.25.254.30                 # NFS server address
            path: /share                          # NFS path
[root@master volumes]# kubectl  apply -f storageclassdep.yml 
deployment.apps/nfs-client-provisioner created
[root@master volumes]#  kubectl -n nfs-client-provisioner get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-6b445b9454-w7n6j   1/1     Running   0          10s

# Create the StorageClass

[root@master volumes]# vim storageclass.yml
# API version; StorageClass always uses this
apiVersion: storage.k8s.io/v1
# Resource type: StorageClass (the rule used to create PVs automatically)
kind: StorageClass
metadata:
  # StorageClass name: nfs-client
  # PVCs created later must reference this name
  name: nfs-client
# Key field: which NFS provisioner to use.
# Must exactly match PROVISIONER_NAME in the Deployment above!
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  # When a PVC is deleted, do not keep the data; remove it directly
  archiveOnDelete: "false"
[root@master volumes]# kubectl  apply -f storageclass.yml 
storageclass.storage.k8s.io/nfs-client created
[root@master volumes]# kubectl  get storageclass.storage.k8s.io
NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  27s
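For comparison, a sketch (not applied in this demo) of the same provisioner with `archiveOnDelete` set to `"true"`: with that setting the provisioner renames the backing directory (an `archived-` prefix) instead of deleting it when the PVC is removed. The class name `nfs-client-archive` is hypothetical and used nowhere else in this walkthrough.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client-archive        # hypothetical name for illustration
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"         # keep (archive) the data when the PVC is deleted
```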

# Create a PVC

[root@master volumes]# vim pvc.yml
# Resource type: PersistentVolumeClaim
kind: PersistentVolumeClaim
# API version, always v1
apiVersion: v1
metadata:
  # PVC name: test-claim
  name: test-claim
spec:
  # Key field: use the NFS dynamic StorageClass just created
  storageClassName: nfs-client
  # Access mode: read-write from many nodes
  accessModes:
    - ReadWriteMany
  # Request 1Gi of storage
  resources:
    requests:
      storage: 1Gi
[root@master volumes]#  kubectl get pv
No resources found
[root@master volumes]# kubectl  apply -f pvc.yml 
persistentvolumeclaim/test-claim created
[root@master volumes]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
test-claim   Bound    pvc-4662a7a7-0021-4a23-aece-a620b51605eb   1G         RWX            nfs-client     <unset>                 7s
[root@master volumes]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-4662a7a7-0021-4a23-aece-a620b51605eb   1G         RWX            Delete           Bound    default/test-claim   nfs-client     <unset>                          15s
[root@master volumes]#  kubectl delete  -f pvc.yml
persistentvolumeclaim "test-claim" deleted from default namespace
[root@master volumes]# kubectl get pvc
No resources found in default namespace.
[root@master volumes]# kubectl get pv
No resources found

# Set the default StorageClass

[root@master volumes]# kubectl annotate sc nfs-client storageclass.kubernetes.io/is-default-class=true
storageclass.storage.k8s.io/nfs-client annotated
[root@master volumes]#  kubectl get sc
NAME                   PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  60m
[root@master volumes]# vim pvc.yml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  # storageClassName: nfs-client  # omitted; the default StorageClass is used automatically
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G

[root@master volumes]# kubectl  apply -f pvc.yml 
persistentvolumeclaim/test-claim created
[root@master volumes]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
test-claim   Bound    pvc-0ae3b0ac-f320-4345-897a-c7cac2f3cc29   1G         RWX            nfs-client     <unset>                 10s
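The annotation set above with `kubectl annotate` can equally be written declaratively in the StorageClass manifest itself; a sketch:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
  annotations:
    # same effect as `kubectl annotate sc nfs-client storageclass.kubernetes.io/is-default-class=true`
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
```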
Integrating dynamic volumes with the StatefulSet controller
# Create a headless service
[root@master ~]# mkdir statfulset
[root@master ~]# cd statfulset/
[root@master statfulset]# kubectl  create service clusterip timinglee --tcp 80:80 --clusterip="None" --dry-run=client -o yaml > headless.yml
[root@master statfulset]# vim headless.yml 
apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee
  name: timinglee
spec:
  clusterIP: None
  ports:
  - name: webport
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: webserver
  type: ClusterIP
[root@master statfulset]# kubectl apply -f headless.yml 
service/timinglee created
# Create the StatefulSet
[root@master statfulset]# kubectl  create deployment webserver --image nginx --replicas 1 --dry-run=client -o yaml > statefulset.yml
[root@master statfulset]# vim statefulset.yml 
# API version; StatefulSet requires apps/v1
apiVersion: apps/v1

# Resource type: controller for stateful applications
# (stable Pod names, stable storage, stable DNS names)
kind: StatefulSet

# Metadata: name and labels
metadata:
  # Label used by the Service to find the Pods
  labels:
    app: webserver
  # This StatefulSet is named webserver
  name: webserver

# Core configuration
spec:
  # Most important field: the headless Service that provides DNS resolution.
  # The service created above is named timinglee, so it must go here.
  serviceName: "timinglee"

  # Start 1 replica (scaled up to 3 later)
  replicas: 1

  # Selector: matches the Pod labels below
  selector:
    matchLabels:
      app: webserver

  # Pod template (what the containers look like)
  template:
    metadata:
      # Pod labels; must match the selector above and the Service selector
      labels:
        app: webserver
    spec:
      # Container configuration
      containers:
      # use the nginx image
      - image: nginx
        # container name: nginx
        name: nginx
        # mount a volume inside the container
        volumeMounts:
          # mount the volume named www
          - name: www
            # into the Nginx web root
            mountPath: /usr/share/nginx/html

  # Volume claim template: a PVC is created automatically for each Pod
  volumeClaimTemplates:
  # claim name
  - metadata:
      name: www
    spec:
      # StorageClass: nfs-client (the one set as default earlier)
      storageClassName: nfs-client
      # Access mode: single-node read-write
      accessModes:
       - ReadWriteOnce
      # Resource request
      resources:
        requests:
          # request 1Gi of storage
          storage: 1Gi
[root@master statfulset]# kubectl apply -f statefulset.yml
statefulset.apps/webserver created
[root@master statfulset]# kubectl  get statefulset.apps
NAME        READY   AGE
webserver   1/1     21s
[root@master statfulset]# kubectl  get pods
NAME          READY   STATUS    RESTARTS   AGE
webserver-0   1/1     Running   0          30s
[root@master statfulset]# kubectl  get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
test-claim        Bound    pvc-0ae3b0ac-f320-4345-897a-c7cac2f3cc29   1G         RWX            nfs-client     <unset>                 3h44m
www-webserver-0   Bound    pvc-857b6df3-06a9-444c-a474-2d5fa1318885   1Gi        RWO            nfs-client     <unset>                 36s
[root@master statfulset]#  kubectl scale statefulset webserver --replicas 2
statefulset.apps/webserver scaled
[root@master statfulset]# kubectl  get pods
NAME          READY   STATUS    RESTARTS   AGE
webserver-0   1/1     Running   0          80s
webserver-1   1/1     Running   0          7s
[root@master statfulset]# kubectl  get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
test-claim        Bound    pvc-0ae3b0ac-f320-4345-897a-c7cac2f3cc29   1G         RWX            nfs-client     <unset>                 3h44m
www-webserver-0   Bound    pvc-857b6df3-06a9-444c-a474-2d5fa1318885   1Gi        RWO            nfs-client     <unset>                 85s
www-webserver-1   Bound    pvc-04f41843-8319-4c5b-90c3-2589f239f062   1Gi        RWO            nfs-client     <unset>                 12s
[root@master statfulset]# kubectl scale statefulset webserver --replicas 3
statefulset.apps/webserver scaled
[root@master statfulset]#  kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
webserver-0   1/1     Running   0          101s
webserver-1   1/1     Running   0          28s
webserver-2   1/1     Running   0          6s
[root@master statfulset]# kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
test-claim        Bound    pvc-0ae3b0ac-f320-4345-897a-c7cac2f3cc29   1G         RWX            nfs-client     <unset>                 3h45m
www-webserver-0   Bound    pvc-857b6df3-06a9-444c-a474-2d5fa1318885   1Gi        RWO            nfs-client     <unset>                 108s
www-webserver-1   Bound    pvc-04f41843-8319-4c5b-90c3-2589f239f062   1Gi        RWO            nfs-client     <unset>                 35s
www-webserver-2   Bound    pvc-a9235828-438d-4aa2-a5d5-24b3aae7a3a6   1Gi        RWO            nfs-client     <unset>                 13s
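The PVC names above are not random: a StatefulSet derives each claim name as `<volumeClaimTemplate name>-<StatefulSet name>-<ordinal>`. A tiny illustrative helper (for this tutorial only, not a kubectl feature):

```shell
# Build the PVC name a StatefulSet creates for a given replica.
# Pattern: <claim template name>-<statefulset name>-<ordinal>
statefulset_pvc_name() {
  echo "$1-$2-$3"
}

statefulset_pvc_name www webserver 0   # prints www-webserver-0
```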

# On node3

[root@node3 ~]# cd /share/
[root@node3 share]# ll
总用量 4
drwxrwxrwx 2 root root  6  4月 24 17:27 default-test-claim-pvc-0ae3b0ac-f320-4345-897a-c7cac2f3cc29
drwxrwxrwx 2 root root  6  4月 24 21:10 default-www-webserver-0-pvc-857b6df3-06a9-444c-a474-2d5fa1318885
drwxrwxrwx 2 root root  6  4月 24 21:12 default-www-webserver-1-pvc-04f41843-8319-4c5b-90c3-2589f239f062
drwxrwxrwx 2 root root  6  4月 24 21:12 default-www-webserver-2-pvc-a9235828-438d-4aa2-a5d5-24b3aae7a3a6
-rw-r--r-- 1 root root 15  4月 24 15:32 index.html
drwxr-xr-x 2 root root 24  4月 24 15:35 pv1
drwxr-xr-x 2 root root  6  4月 24 14:47 pv2
drwxr-xr-x 2 root root  6  4月 24 14:47 pv3
[root@node3 share]# echo webserver1 > default-www-webserver-0-pvc-857b6df3-06a9-444c-a474-2d5fa1318885/index.html
[root@node3 share]# echo webserver2 > default-www-webserver-1-pvc-04f41843-8319-4c5b-90c3-2589f239f062/index.html
[root@node3 share]# echo webserver3 > default-www-webserver-2-pvc-a9235828-438d-4aa2-a5d5-24b3aae7a3a6/index.html
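The directory names in the listing above are produced by nfs-subdir-external-provisioner, which with its default naming creates one directory per volume called `<namespace>-<PVC name>-<PV name>`. A sketch of that convention:

```shell
# Reproduce the provisioner's default directory naming on the NFS export.
# Pattern: <namespace>-<pvc name>-<pv name>
provisioned_dir_name() {
  echo "$1-$2-$3"
}

provisioned_dir_name default test-claim pvc-0ae3b0ac-f320-4345-897a-c7cac2f3cc29
# prints default-test-claim-pvc-0ae3b0ac-f320-4345-897a-c7cac2f3cc29
```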
# Verify
[root@master statfulset]# kubectl describe svc timinglee
Name:                     timinglee
Namespace:                default
Labels:                   app=timinglee
Annotations:              <none>
Selector:                 app=webserver
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       None
IPs:                      None
Port:                     webport  80/TCP
TargetPort:               80/TCP
Endpoints:                10.244.2.6:80,10.244.3.5:80,10.244.1.10:80
Session Affinity:         None
Internal Traffic Policy:  Cluster
Events:                   <none>

[root@master statfulset]# kubectl run  -it testpod --image  busyboxplus
All commands and output from this session will be recorded in container logs, including credentials and sensitive information passed through the command prompt.
If you don't see a command prompt, try pressing enter.
/ # curl  webserver-0.timinglee
webserver1
/ # curl  webserver-1.timinglee
webserver2
/ # curl  webserver-2.timinglee
webserver3
/ # exit
Session ended, resume using 'kubectl attach testpod -c testpod -i -t' command when the pod is running
[root@master statfulset]# kubectl delete -f statefulset.yml
statefulset.apps "webserver" deleted from default namespace
[root@master statfulset]# kubectl apply  -f statefulset.yml
statefulset.apps/webserver created

[root@master statfulset]# kubectl scale statefulset webserver --replicas 3
statefulset.apps/webserver scaled
[root@master statfulset]# kubectl  get pods
NAME          READY   STATUS    RESTARTS      AGE
testpod       1/1     Running   1 (55s ago)   84s
webserver-0   1/1     Running   0             39s
webserver-1   1/1     Running   0             3s
webserver-2   1/1     Running   0             2s
[root@master statfulset]# kubectl describe svc timinglee
Name:                     timinglee
Namespace:                default
Labels:                   app=timinglee
Annotations:              <none>
Selector:                 app=webserver
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       None
IPs:                      None
Port:                     webport  80/TCP
TargetPort:               80/TCP
Endpoints:                10.244.2.7:80,10.244.3.6:80,10.244.1.12:80
Session Affinity:         None
Internal Traffic Policy:  Cluster
Events:                   <none>
[root@master statfulset]# curl  webserver-0.timinglee
curl: (6) Could not resolve host: webserver-0.timinglee
[root@master statfulset]# kubectl describe svc timinglee
Name:                     timinglee
Namespace:                default
Labels:                   app=timinglee
Annotations:              <none>
Selector:                 app=webserver
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       None
IPs:                      None
Port:                     webport  80/TCP
TargetPort:               80/TCP
Endpoints:                10.244.2.7:80,10.244.3.6:80,10.244.1.12:80
Session Affinity:         None
Internal Traffic Policy:  Cluster
Events:                   <none>
[root@master statfulset]# kubectl exec -it testpod -- sh
/ # curl webserver-0.timinglee
webserver1
/ # curl webserver-1.timinglee
webserver2
/ # curl webserver-2.timinglee
webserver3
/ # 
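The short names used inside the test Pod resolve because the cluster DNS search path fills in the rest; the full form is `<pod>.<service>.<namespace>.svc.<cluster domain>`, assuming the default domain `cluster.local`. An illustrative helper:

```shell
# Build the stable DNS name of a StatefulSet Pod behind a headless Service.
# Pattern: <pod>.<service>.<namespace>.svc.<cluster domain>
sts_pod_fqdn() {
  local pod="$1" svc="$2" ns="${3:-default}" domain="${4:-cluster.local}"
  echo "${pod}.${svc}.${ns}.svc.${domain}"
}

sts_pod_fqdn webserver-0 timinglee
# prints webserver-0.timinglee.default.svc.cluster.local
```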

12. Switching from the flannel plugin to calico

(1) Remove the flannel plugin

[root@master ~]# find / -name "kube-flannel.yml" 2>/dev/null
/root/pod/kube-flannel.yml
[root@master ~]# cd pod
[root@master pod]# kubectl delete -f kube-flannel.yml
namespace "kube-flannel" deleted
serviceaccount "flannel" deleted from kube-flannel namespace
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
configmap "kube-flannel-cfg" deleted from kube-flannel namespace
daemonset.apps "kube-flannel-ds" deleted from kube-flannel namespace
[root@master pod]# cd
[root@master ~]# mkdir  fannel
[root@master ~]# cd fannel/
[root@master fannel]# ls
kube-flannel.yml
[root@master fannel]# for i in 100 10 20; do ssh -l root 172.25.254.$i rm -fr /etc/cni/net.d/10-flannel.conflist; done
[root@master fannel]#  for i in 100 10 20; do ssh -l root 172.25.254.$i ls 

(2) Deploy calico

Download the YAML file

[root@k8s-master calico]# curl https://raw.githubusercontent.com/projectcalico/calico/v3.31.4/manifests/calico-typha.yaml -o calico.yaml

Create a calico project in Harbor

Upload the images

bash 复制代码
[root@harbor calico]# docker images                                                                                                                               
quay.io/calico/cni:v3.31.4                                    c433a27dd94c        164MB             0B        
quay.io/calico/kube-controllers:v3.31.4                       ff033cc89dab        125MB             0B        
quay.io/calico/node:v3.31.4                                   e6536b93706e        412MB             0B        
quay.io/calico/typha:v3.31.4                                  46766605472b       87.6MB             0B        
quay.io/metallb/controller:v0.15.3                            c2f8b367b1a2       51.8MB             0B        
quay.io/metallb/speaker:v0.15.3                               7976d450b330        121MB             0B         
[root@harbor calico]#  docker tag quay.io/calico/cni:v3.31.4 reg.timinglee.org/calico/cni:v3.31.4
[root@harbor calico]# docker push reg.timinglee.org/calico/cni:v3.31.4
The push refers to repository [reg.timinglee.org/calico/cni]
5f70bf18a086: Mounted from library/busyboxplus 
8458b54301c8: Pushed 
bbfe4156501d: Pushed 
v3.31.4: digest: sha256:01cb0e089f07ff5de369c6b7258df3e48e2adcd8ee44964b3f3a6c42a5b2b296 size: 946
[root@harbor calico]#  docker tag quay.io/calico/node:v3.31.4 reg.timinglee.org/calico/node:v3.31.4
[root@harbor calico]#  docker push reg.timinglee.org/calico/node:v3.31.4
The push refers to repository [reg.timinglee.org/calico/node]
6cc958e4b1c0: Pushed 
v3.31.4: digest: sha256:650c038e391fdbfc898968652347e31eccd452d5eae0f404cf43e6f5f36e4f1b size: 530
[root@harbor calico]# docker tag quay.io/calico/kube-controllers:v3.31.4 reg.timinglee.org/calico/kube-controllers:v3.31.4
[root@harbor calico]# docker push reg.timinglee.org/calico/kube-controllers:v3.31.4
The push refers to repository [reg.timinglee.org/calico/kube-controllers]
af8385a3f211: Pushed 
bbfe4156501d: Mounted from calico/cni 
v3.31.4: digest: sha256:90c50deaeefa83d8c334a1212c9ceab7713a73280fd09fe65910c2c2473a7358 size: 740
[root@harbor calico]#  docker tag quay.io/calico/typha:v3.31.4 reg.timinglee.org/calico/typha:v3.31.4
[root@harbor calico]# docker push reg.timinglee.org/calico/typha:v3.31.4
The push refers to repository [reg.timinglee.org/calico/typha]
b546fafd25bb: Pushed 
bbfe4156501d: Mounted from calico/kube-controllers 
v3.31.4: digest: sha256:93da548bd93705c71694f405d713c502228f6fa344219dff20e3206941d43848 size: 740

Edit the configuration file

[root@master fannel]# wget https://docs.projectcalico.org/v3.25/manifests/calico.yaml
[root@master fannel]# vim calico.yaml  # the manifest downloaded from the official site
7116:          image: calico/cni:v3.31.4
7144:          image: calico/cni:v3.31.4
7188:          image: calico/node:v3.31.4
7214:          image: calico/node:v3.31.4
7443:          image: calico/kube-controllers:v3.31.4
7536:        - image: calico/typha:v3.31.4

7252             - name: CALICO_IPV4POOL_IPIP
7253               value: "Never"

7281             - name: CALICO_IPV4POOL_CIDR
7282               value: "10.244.0.0/16"		# changed from the default "192.168.0.0/16" to the cluster's Pod CIDR


7284             - name: CALICO_AUTODETECTION_METHOD
7285               value: "interface=eth0"				# use the name of your actual network interface
[root@master fannel]#  kubectl apply -f calico.yaml
[root@master fannel]#  kubectl -n kube-system get pods
NAME                             READY   STATUS    RESTARTS         AGE
coredns-697886855d-555zc         1/1     Running   12 (7h42m ago)   20d
coredns-697886855d-zgnd4         1/1     Running   12 (7h42m ago)   20d
etcd-master                      1/1     Running   13 (7h42m ago)   20d
kube-apiserver-master            1/1     Running   10 (7h42m ago)   12d
kube-controller-manager-master   1/1     Running   14 (7h42m ago)   20d
kube-proxy-8m4q7                 1/1     Running   9 (7h42m ago)    12d
kube-proxy-s5kht                 1/1     Running   9 (7h42m ago)    12d
kube-proxy-w5jfb                 1/1     Running   9 (7h42m ago)    12d
kube-proxy-zcs6m                 1/1     Running   10 (7h42m ago)   12d
kube-scheduler-master            1/1     Running   14 (7h42m ago)   20d

Test

[root@master fannel]# kubectl delete pod testpod
pod "testpod" deleted from default namespace
[root@master fannel]# kubectl run  testpod --image nginx
pod/testpod created
[root@master fannel]# kubectl scale statefulset webserver --replicas 0
statefulset.apps/webserver scaled
[root@master fannel]#  kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
testpod   1/1     Running   0          26s
[root@master fannel]# kubectl get pods  -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
testpod   1/1     Running   0          34s   10.244.2.8   node2   <none>           <none>
[root@master fannel]# curl 10.244.2.8
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

13. The Scheduler

The role of scheduling in Kubernetes

  • Scheduling is the process of automatically assigning unscheduled Pods to nodes in the cluster
  • The scheduler uses the Kubernetes watch mechanism to discover newly created Pods that have not yet been scheduled to a Node
  • The scheduler places each unscheduled Pod it discovers onto a suitable Node

How scheduling works:

  • The user creates a Pod

A Pod object is created through the Kubernetes API, specifying its resource requirements, container image, and other information.

  • The scheduler watches for Pods

The Kubernetes scheduler watches the cluster for unscheduled Pod objects and selects the best node for each of them.

  • Node selection

The scheduler selects the best node with its scheduling algorithm and binds the Pod to it. The selection takes into account the node's current resource usage, the Pod's resource requests, affinity and anti-affinity rules, and so on.

  • Binding the Pod to a node

The scheduler stores the binding between the Pod and the node in the etcd database, so the node can retrieve the Pod's scheduling information.

  • The node starts the Pod

The kubelet on the node watches the API server for Pods bound to it and starts them. If a node fails or runs out of resources, the Pod is recreated and scheduled onto another node.

Types of schedulers

  • Default scheduler: the scheduler built into Kubernetes; it schedules newly created Pods onto suitable nodes.
  • Custom scheduler: a scheduler implementation you write yourself, defining scheduling policies and rules to fit your own requirements.
  • Extended scheduler: a scheduler that supports scheduler extenders, letting you plug in custom scheduling rules and policies for more flexible scheduling.
  • kube-scheduler is the default scheduler in Kubernetes; it runs automatically on the control-plane node once the cluster is up.
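A Pod can pick a non-default scheduler by name via `spec.schedulerName`. A minimal sketch, assuming a hypothetical custom scheduler called `my-scheduler` is deployed in the cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduler-demo
spec:
  schedulerName: my-scheduler   # hypothetical custom scheduler; omit to use "default-scheduler"
  containers:
  - image: nginx
    name: nginx
```

If no scheduler with that name is running, the Pod simply stays Pending, which makes this a convenient way to test custom schedulers side by side with kube-scheduler.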

nodeName

    • nodeName is the simplest form of node selection constraint, but it is generally not recommended
    • If nodeName is specified in the PodSpec, it takes precedence over all other node selection methods
    • Some limitations of using nodeName to select a node:
    • If the named node does not exist, the Pod will not run.
    • If the named node lacks the resources to accommodate the Pod, the Pod fails.
    • Node names in cloud environments are not always predictable or stable
bash
#Watch pod status
[root@master ~]# watch -n 1 kubectl get pods  -o wide

Before creating the Pod:

bash
#Apply the flannel network plugin
[root@master fannel]# kubectl  apply  -f kube-flannel.yml 
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
#Create the Pod in another shell
[root@master Scheduler]#  kubectl run  nginx --image nginx --dry-run=client -o yaml  > nginx.yml
[root@master Scheduler]# vim nginx.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
#  nodeName:				#nodeName left unset: the scheduler picks the node
  containers:
  - image: nginx
    name: nginx
[root@master Scheduler]# kubectl apply -f nginx.yml
pod/nginx created

Watch output

bash
[root@master Scheduler]# kubectl delete -f nginx.yml
pod "nginx" deleted from default namespace

[root@master Scheduler]# vim nginx.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  nodeName:  node2			#schedule onto this node explicitly
  containers:
  - image: nginx
    name: nginx


[root@master Scheduler]# kubectl apply -f nginx.yml
pod/nginx created

Watch output

nodeSelector

    • nodeSelector is the simplest recommended form of node selection constraint
    • Add a label to the chosen node:
bash
kubectl label nodes k8s-node1 lab=lee
    • The same label can be applied to multiple nodes
bash
[root@master Scheduler]# vim nginx.yml 
[root@master Scheduler]# vim nginx.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  nodeSelector:
    app: timinglee
  containers:
  - image: nginx
    name: nginx
[root@master Scheduler]# kubectl delete -f nginx.yml
pod "nginx" deleted from default namespace
[root@master Scheduler]# kubectl apply  -f nginx.yml 
pod/nginx created

With the watch running:

bash
Every 1.0s: kubectl get pods -o wide      

#No node carries the label app=timinglee yet, so the Pod stays Pending

#Add the label

bash
[root@master Scheduler]# kubectl  get nodes --show-labels
NAME     STATUS   ROLES           AGE   VERSION   LABELS
master   Ready    control-plane   21d   v1.35.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node1    Ready    <none>          20d   v1.35.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux
node2    Ready    <none>          20d   v1.35.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux
node3    Ready    <none>          13d   v1.35.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node3,kubernetes.io/os=linux
#Label node2 with app=timinglee
[root@master Scheduler]#  kubectl label nodes node2 app=timinglee
node/node2 labeled
[root@master Scheduler]# kubectl get nodes --show-labels
NAME     STATUS   ROLES           AGE   VERSION   LABELS
master   Ready    control-plane   21d   v1.35.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node1    Ready    <none>          20d   v1.35.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux
node2    Ready    <none>          20d   v1.35.3   app=timinglee,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux
node3    Ready    <none>          13d   v1.35.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node3,kubernetes.io/os=linux

Watch output

#Removing the label (the trailing "-" deletes it)

bash
[root@master Scheduler]#  kubectl label nodes node2 app-
node/node2 unlabeled

Node affinity

Official documentation:

https://kubernetes.io/zh/docs/concepts/scheduling-eviction/assign-pod-node

Affinity and anti-affinity

  • nodeSelector provides a very simple way to constrain Pods to nodes with particular labels. Affinity/anti-affinity greatly expands the types of constraints you can express.
  • Pod affinity constrains placement using the labels of Pods already running on a node, rather than the node's own labels, to control which Pods may or may not be co-located.

nodeAffinity

  • The Pod runs on whichever node satisfies the specified conditions
  • requiredDuringSchedulingIgnoredDuringExecution: must be satisfied at scheduling time, but does not affect Pods already scheduled
  • preferredDuringSchedulingIgnoredDuringExecution: preferred; the Pod is still scheduled even if no node satisfies it
  • IgnoredDuringExecution means that if a Node's labels change while the Pod is running and the affinity rule is no longer satisfied, the Pod keeps running.
  • nodeAffinity supports several match operators:

|--------------|--------------------------|
| Operator | Meaning |
| In | the label's value is in the list |
| NotIn | the label's value is not in the list |
| Gt | the label's value is greater than the given value (not supported for Pod affinity) |
| Lt | the label's value is less than the given value (not supported for Pod affinity) |
| Exists | the label exists |
| DoesNotExist | the label does not exist |
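The less common operators in the table can be sketched in a hard node-affinity rule. This is only an illustration; the `cpu-count` and `gpu` node labels are hypothetical and would have to be set with `kubectl label` first:

```yaml
# Hard node affinity using Gt and Exists (sketch; cpu-count and gpu are hypothetical labels)
apiVersion: v1
kind: Pod
metadata:
  name: op-demo
spec:
  containers:
  - image: nginx
    name: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cpu-count      # label value is compared as an integer
            operator: Gt
            values:
            - "4"
          - key: gpu            # only requires the label to exist; values must be empty
            operator: Exists
```

Both expressions in the same matchExpressions list must hold for a node to qualify.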

1. Preferred (soft) affinity

bash
[root@master Scheduler]# vim nginx.yml 
[root@master Scheduler]#  kubectl apply -f nginx.yml
# 1. Kubernetes API version; Pods always use v1
apiVersion: v1

# 2. Resource type: a Pod (the smallest deployable unit)
kind: Pod

# 3. Pod metadata (name, labels, ...)
metadata:
  # 4. Label the Pod with run=nginx
  labels:
    run: nginx
  # 5. The Pod is named nginx
  name: nginx

# 6. Pod spec (what runs and how it is scheduled)
spec:
  # 7. Container list
  containers:
  # 8. Define one container
  - image: nginx    # 9. use the official nginx image
    name: nginx     # 10. the container is named nginx

  # 11. Affinity configuration (which nodes the Pod prefers)
  affinity:
    # 12. Node affinity (scheduling preference for nodes)
    nodeAffinity:
      # 13. Soft affinity: prefer matching nodes, but run anywhere if none match
      # Evaluated at scheduling time; label changes afterwards are ignored
      preferredDuringSchedulingIgnoredDuringExecution:
      # 14. A list of preferences
      - preference:
          # 15. Match expressions
          matchExpressions:
            # 16. Rule: inspect the node's disk label
            - key: disk
              # 17. Operator In: match if the label value is in the list below
              operator: In
              # 18. Either ssd or iscsi satisfies the rule
              values:
              - ssd
              - iscsi
        # 19. Weight 50 (range 1-100); higher weight is preferred
        weight: 50
[root@master Scheduler]# kubectl delete -f nginx.yml
pod "nginx" deleted from default namespace
[root@master Scheduler]#  kubectl apply -f nginx.yml
pod/nginx created

Watch output

##Even though no node matches, the Pod is still scheduled

#After a node satisfies the condition

bash
[root@master Scheduler]# kubectl  label nodes node2 disk=ssd
node/node2 labeled

#Tell Kubernetes: start an nginx container, but it must run on a node labeled disk=ssd

bash
[root@master Scheduler]# vim nginx.yml
# 1. Kubernetes core API version v1 (fixed for Pods)
apiVersion: v1

# 2. Resource type: Pod
kind: Pod

# 3. Pod metadata (name, labels, and other basics)
metadata:

  # 4. Label the Pod: run=nginx (for identification and selection)
  labels:
    run: nginx

  # 5. The Pod is named nginx
  name: nginx

# 6. Pod spec (what runs, how, and the scheduling rules)
spec:

  # 7. Container list
  containers:

  # 8. The first (and only) container
  - image: nginx    # 9. container image: official nginx
    name: nginx     # 10. container name: nginx

  # 11. Affinity configuration (which nodes the Pod may run on)
  affinity:

    # 12. Node affinity (rules about nodes)
    nodeAffinity:

      # 13. Hard affinity: must be satisfied at scheduling time; ignored afterwards
      # Meaning: a matching node must be found, otherwise the Pod never starts
      requiredDuringSchedulingIgnoredDuringExecution:

        # 14. Node selector terms
        nodeSelectorTerms:

        # 15. One group of match rules (satisfying this group is enough)
        - matchExpressions:

          # 16. Rule: inspect this node label key
          - key: disk

            # 17. Operator In: match if the label value is in the list
            operator: In

            # 18. Allowed values: disk=ssd or disk=iscsi
            values:
            - ssd
            - iscsi
[root@master Scheduler]# kubectl apply -f nginx.yml
pod/nginx created

#Pod status

Why didn't the Pod start? Because a hard requirement must be satisfied before the Pod can run, and here no node in the cluster carries the required label

  • The scheduler sees that no node can satisfy the requirement
  • It refuses to place the Pod anywhere
  • The Pod stays Pending

2. Required (hard) affinity

bash
#The manifest is unchanged from above
[root@master Scheduler]# vim nginx.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disk
            operator: In
            values:
            - ssd
            - iscsi
#Label a node so it matches
[root@master Scheduler]# kubectl label nodes node2  disk=ssd
node/node2 labeled

[root@master Scheduler]# kubectl delete -f nginx.yml
pod "nginx" deleted from default namespace
[root@master Scheduler]# kubectl apply -f nginx.yml
pod/nginx created

            

#The condition must now be satisfied

#Negative selection

bash
[root@master Scheduler]# vim nginx.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disk
            operator: NotIn			#negative match: avoid nodes with these values
            values:
            - ssd
            - iscsi
[root@master Scheduler]# kubectl apply -f nginx.yml
pod/nginx created

Watch output

Pod affinity

    • The Pod runs on whichever node already hosts a Pod that matches the rule
    • podAffinity controls which Pods this Pod may be co-located with on the same node
    • podAntiAffinity controls which Pods this Pod must not share a node with; both deal with Pod-to-Pod relationships inside the cluster
    • Inter-Pod affinity and anti-affinity are typically used together with higher-level controllers such as ReplicaSets, StatefulSets and Deployments
    • Inter-Pod affinity and anti-affinity require substantial processing, which can noticeably slow down scheduling in large clusters
bash
[root@master Scheduler]# kubectl create deployment webcluster --image nginx --replicas 2 --dry-run=client -o yaml > webcluster.yml
[root@master Scheduler]# vim webcluster.yml
# 1. K8s API version; Deployments use apps/v1
apiVersion: apps/v1

# 2. Resource type: Deployment (manages a set of Pods)
kind: Deployment

# 3. Deployment metadata (name, labels)
metadata:
  # 4. Label the Deployment: app=webcluster
  labels:
    app: webcluster
  # 5. The Deployment is named webcluster
  name: webcluster

# 6. Deployment spec (what to run and how many)
spec:
  # 7. Replica count: start 2 nginx Pods
  replicas: 2

  # 8. Label selector: which Pods this Deployment manages
  selector:
    matchLabels:
      app: webcluster

  # 9. Pod template: Pods are created from this template
  template:
    metadata:
      # 10. Every created Pod carries the label app=webcluster
      labels:
        app: webcluster

    # 11. Pod contents
    spec:
      # 12. Container list
      containers:
      # 13. nginx image, container named nginx
      - image: nginx
        name: nginx

      # ====================== The key rule ======================
      # 14. Affinity configuration
      affinity:
        # 15. Pod affinity (co-locate with matching Pods)
        podAffinity:

          # 16. Hard affinity: must be satisfied, otherwise the Pod does not start
          requiredDuringSchedulingIgnoredDuringExecution:

          # 17. Affinity rule
          - labelSelector:
              # 18. Match Pods labeled app=webcluster
              matchExpressions:
              - key: app
                operator: In
                values:
                - webcluster

            # 19. Topology domain: judged per node
            topologyKey: "kubernetes.io/hostname"
#Optional: scale out to 3 replicas
[root@master Scheduler]# kubectl scale deployment webcluster --replicas=3
deployment.apps/webcluster scaled
[root@master Scheduler]# kubectl delete -f nginx.yml
pod "nginx" deleted from default namespace
[root@master Scheduler]# kubectl apply -f webcluster.yml
deployment.apps/webcluster created

Watch output

Pod anti-affinity

bash
[root@master Scheduler]# vim webcluster.yml 
[root@master Scheduler]# kubectl create deployment webcluster --image nginx --replicas 3 --dry-run=client -o yaml > webcluster1.yml
[root@master Scheduler]# vim webcluster1.yml 
# 1. API version (fixed)
apiVersion: apps/v1

# 2. Resource type: Deployment (controller for multiple Pod replicas)
kind: Deployment

# 3. Metadata: the Deployment's name and labels
metadata:
  # 4. Label the Deployment
  labels:
    app: webcluster
  # 5. The Deployment is named webcluster
  name: webcluster

# 6. Deployment spec (how Pods are managed)
spec:
  # 7. Replica count: 3 identical Pods
  replicas: 3

  # 8. Label selector: which Pods to manage
  selector:
    matchLabels:
      app: webcluster

  # 9. Pod template: the 3 Pods are created from it
  template:
    metadata:
      # 10. Label the created Pods
      labels:
        app: webcluster

    # 11. Pod contents
    spec:
      # 12. Container list
      containers:
      # 13. nginx image, container named nginx
      - image: nginx
        name: nginx

      # ====================== The key rule ======================
      # 14. Affinity configuration
      affinity:
        # 15. Pod anti-affinity (the important part)
        podAntiAffinity:

          # 16. Hard anti-affinity: must be satisfied or the Pod is not scheduled
          requiredDuringSchedulingIgnoredDuringExecution:

          # 17. Anti-affinity rule
          - labelSelector:
              # 18. Match Pods labeled app=webcluster
              matchExpressions:
              - key: app
                operator: In
                values:
                - webcluster

            # 19. Topology domain: per node (same machine = same domain)
            topologyKey: "kubernetes.io/hostname"
[root@master Scheduler]# kubectl delete -f webcluster.yml 
deployment.apps "webcluster" deleted from default namespace
[root@master Scheduler]#  kubectl apply -f webcluster1.yml
deployment.apps/webcluster created

Watch output

Node taints

  • Taints are a Node property; once a taint is set, Kubernetes will not schedule Pods onto that Node by default
  • If a Pod is given a matching Toleration, Kubernetes ignores the taint and may (but is not required to) schedule the Pod onto that Node
  • Use kubectl taint to add a taint to a node:
bash
$ kubectl taint nodes <nodename> key=string:effect   #general form
$ kubectl taint nodes node1 key=value:NoSchedule    #create
$ kubectl describe nodes server1 | grep Taints        #query
$ kubectl taint nodes node1 key-                  #delete

|------------------|----------------------------------------|
| effect | meaning |
| NoSchedule | Pods are not scheduled onto the tainted node |
| PreferNoSchedule | soft version of NoSchedule: avoid the node if possible |
| NoExecute | Pods already running on the node are evicted unless they have a matching toleration |
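For NoExecute taints a toleration can additionally bound how long an already-running Pod stays before eviction, via tolerationSeconds. A sketch, assuming a node tainted `nodetype=badnode:NoExecute` as in the examples below:

```yaml
# Toleration for a NoExecute taint with an eviction grace period (sketch)
apiVersion: v1
kind: Pod
metadata:
  name: graceful
spec:
  containers:
  - image: nginx
    name: nginx
  tolerations:
  - key: nodetype
    operator: Equal
    value: badnode
    effect: NoExecute
    tolerationSeconds: 60   # remain at most 60s on a node tainted nodetype=badnode:NoExecute
```

Without tolerationSeconds the Pod tolerates the taint indefinitely; with it, eviction is only delayed.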

Start the watch and create a Deployment

bash
[root@master ~]# watch -n 1 kubectl get pods  -o wide


[root@master Scheduler]# kubectl  create deployment webscluster --image nginx --replicas 2 --dry-run=client -o ya
[root@master Scheduler]# vim dep.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webcluster
  name: webcluster
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webcluster
  template:
    metadata:
      labels:
        app: webcluster
    spec:
      containers:
      - image: nginx
        name: nginx
[root@master Scheduler]# kubectl delete -f  webcluster1.yml 
deployment.apps "webcluster" deleted from default namespace
[root@master Scheduler]#  kubectl apply -f dep.yml
deployment.apps/webscluster created

Watch output

Set a taint and observe

NoExecute

bash
#Taint node1; running Pods without a matching toleration are evicted from node1
[root@master Scheduler]# kubectl taint node node1 nodetype=badnode:NoExecute

Watch output

NoSchedule

bash
#Taint node2; the new Pods are scheduled onto node3 instead
[root@master Scheduler]# kubectl taint node node2 nodetype=badnode:NoSchedule
node/node2 tainted
[root@master Scheduler]# kubectl delete -f dep.yml
deployment.apps "webscluster" deleted from default namespace
[root@master Scheduler]# kubectl apply -f dep.yml
deployment.apps/webscluster created

Watch output

PreferNoSchedule

bash
[root@master Scheduler]# kubectl delete -f dep.yml
deployment.apps "webcluster" deleted from default namespace

[root@master Scheduler]# kubectl taint node node2 nodetype=badnode:PreferNoSchedule
node/node2 tainted

[root@master Scheduler]# kubectl apply -f dep.yml
deployment.apps/webcluster created

Watch output

#Pod anti-affinity

bash
[root@master Scheduler]# vim webcluster.yml			
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webcluster
  name: webcluster
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webcluster
  template:
    metadata:
      labels:
        app: webcluster
    spec:
      containers:
      - image: nginx
        name: nginx
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - webcluster
            topologyKey: "kubernetes.io/hostname"

#Remove the taint (trailing "-" deletes it)

bash
[root@master Scheduler]# kubectl taint node node2 nodetype=badnode:PreferNoSchedule-
node/node2 untainted

Watch output

Tolerations

  • The key, value and effect defined in tolerations must match the taint set on the node:
  1. If operator is Equal, the toleration's value must equal the taint's value.
  2. If operator is Exists, value can be omitted.
  3. If operator is not specified, it defaults to Equal.
  • Two special cases:
  1. Omitting key together with operator: Exists matches every key and value, i.e. tolerates all taints.
  2. Omitting effect matches every effect.
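The second special case can be sketched as a toleration that names a key but omits effect, so it matches that key under NoSchedule, PreferNoSchedule and NoExecute alike (the `name=lee` pair mirrors the taints set in the steps below):

```yaml
# Tolerate the taint key "name" with value "lee" under every effect (sketch)
tolerations:
- key: name
  operator: Equal
  value: lee
  # no "effect" field: matches NoSchedule, PreferNoSchedule and NoExecute
```

This fragment drops into the Pod template's spec exactly like the tolerations used in the Deployments below.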

1. Taint each node

bash
[root@master Scheduler]#  kubectl taint  node node1 name=lee:NoSchedule
node/node1 tainted
[root@master Scheduler]#  kubectl taint  node node2 name=lee:NoSchedule
node/node2 tainted
[root@master Scheduler]#  kubectl taint  node node3 name=lee:NoSchedule
node/node3 tainted

2. Run the Deployment

bash
[root@master Scheduler]# kubectl delete -f webcluster1.yml 
deployment.apps "webcluster" deleted from default namespace
[root@master Scheduler]# kubectl delete -f dep.yml
deployment.apps "webscluster" deleted from default namespace
[root@master Scheduler]#  kubectl apply -f dep.yml
deployment.apps/webscluster created

Watch output

Tolerating a specific taint exactly

bash
[root@master Scheduler]# vim dep.yml
# 1. API version; Deployments use apps/v1
apiVersion: apps/v1

# 2. Resource type: Deployment (manages multiple Pods)
kind: Deployment

# 3. Deployment metadata (name, labels)
metadata:
  # 4. Label the Deployment: app=webscluster
  labels:
    app: webscluster
  # 5. The Deployment is named webscluster
  name: webscluster

# 6. Deployment spec
spec:
  # 7. Replica count: start 3 nginx Pods
  replicas: 3

  # 8. Label selector: which Pods to manage
  selector:
    matchLabels:
      app: webscluster

  # 9. Pod template
  template:
    metadata:
      # 10. Label the created Pods: app=webscluster
      labels:
        app: webscluster

    # 11. Pod contents
    spec:
      # 12. Container list
      containers:
      # 13. nginx image, container named nginx
      - image: nginx
        name: nginx

      # ====================== The key part: tolerations ======================
      # 14. Tolerations (which node taints this Pod accepts)
      tolerations:

      # 15. First toleration
      - operator: Equal       # exact match on key + value
        key: nodetype         # taint key = nodetype
        value: badnode        # taint value = badnode
        effect: NoSchedule    # taint effect = NoSchedule

      # 16. Second toleration (the newly added, essential one)
      - operator: Equal       # exact match
        key: name             # taint key = name
        value: lee            # taint value = lee
        effect: NoSchedule    # taint effect = NoSchedule
[root@master Scheduler]# kubectl apply -f dep.yml
deployment.apps/webscluster created

Watch output

Tolerating NoSchedule taints with any key

bash
[root@master Scheduler]# kubectl delete -f dep.yml
deployment.apps "webscluster" deleted from default namespace
[root@master Scheduler]# vim dep.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webscluster
  name: webscluster
spec:
  replicas: 4
  selector:
    matchLabels:
      app: webscluster
  template:
    metadata:
      labels:
        app: webscluster
    spec:
      containers:
      - image: nginx
        name: nginx
      tolerations:
      - operator: Exists
        effect: NoSchedule
[root@master Scheduler]# kubectl apply -f dep.yml
deployment.apps/webscluster created

Watch output

Tolerating all taints

bash
[root@master Scheduler]# kubectl delete -f dep.yml
deployment.apps "webscluster" deleted from default namespace
[root@master Scheduler]# vim dep.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webscluster
  name: webscluster
spec:
  replicas: 4
  selector:
    matchLabels:
      app: webscluster
  template:
    metadata:
      labels:
        app: webscluster
    spec:
      containers:
      - image: nginx
        name: nginx
      tolerations:
      - operator: Exists
[root@master Scheduler]#  kubectl apply -f dep.yml
deployment.apps/webscluster created

Watch output

14. Authentication and authorization in k8s

Authentication

  • There are currently 8 authentication methods; one or more can be enabled, and as soon as one succeeds no further methods are tried. X509 client certificates and ServiceAccount tokens are the two usually enabled.
  • A Kubernetes cluster has two kinds of users: Service Accounts managed by Kubernetes, and normal User Accounts. Accounts in k8s are not accounts in the usual sense; they do not really exist as stored objects, only in name.

Authorization

  • A request reaches authorization only after passing authentication; the configured authorization policies are matched against the request's resource attributes to allow or deny it. There are currently 6 authorization modes: AlwaysDeny, AlwaysAllow, ABAC, RBAC, Webhook and Node. RBAC is enabled by default in the cluster.

Admission Control

  • A mechanism for intercepting requests; it runs after authentication and authorization as the last link in the permission chain, and can mutate and validate the requested API objects.

UserAccount vs ServiceAccount

    • User accounts are for humans. Service accounts are for processes running in Pods.
    • User accounts are global: their names are unique across all namespaces of the cluster, and future user resources will not be namespaced. Service accounts are namespaced.
    • Cluster user accounts may be synced from a corporate database; creating them requires special privileges and involves complex business processes. Service accounts are deliberately lightweight, letting cluster users create them for specific tasks (the principle of least privilege).

1. ServiceAccount

    • Service account controller:
    • manages the service accounts in each namespace
    • ensures every active namespace contains a service account named "default"
    • Service account admission controller:
    • sets a Pod's ServiceAccount to default if none is specified
    • ensures the ServiceAccount referenced by a Pod exists, otherwise rejects the Pod
    • if the Pod has no ImagePullSecrets, adds the ServiceAccount's ImagePullSecrets to the Pod
    • adds a volumeSource mounted at /var/run/secrets/kubernetes.io/serviceaccount to every container in the Pod
    • adds a volume containing a token for API access to the Pod
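The imagePullSecrets behavior above can also be set declaratively instead of with kubectl edit. A sketch, using the `timinglee` ServiceAccount and `docker-login` secret names from the steps below:

```yaml
# Declarative equivalent of editing the ServiceAccount (sketch)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: timinglee
  namespace: default
imagePullSecrets:
- name: docker-login   # a secret created with "kubectl create secret docker-registry"
```

Any Pod that sets `serviceAccountName: timinglee` then inherits this pull secret automatically via the admission controller.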

The authorization problem hit when pulling from a private registry

bash
[root@master Scheduler]# cd
[root@master ~]# mkdir auth
[root@master ~]# cd auth
[root@master auth]#  kubectl  run testpod --image nginx  --dry-run=client -o yaml > testpod.yaml
[root@master auth]# vim testpod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: nginx
    name: testpod

[root@master auth]# kubectl apply -f testpod.yaml
pod/testpod created
[root@master auth]# kubectl  describe pods testpod | grep Service
Service Account:  default

#Use an image from the private registry

bash
[root@master auth]# vim testpod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: reg.timinglee.org/timinglee/myapp:v1			
    name: testpod
    imagePullPolicy: Always
[root@master auth]# kubectl apply -f testpod.yaml 
pod/testpod created

Solving the authorization problem

bash
[root@master auth]# kubectl create secret docker-registry docker-login --docker-username admin --docker-password lee --docker-server reg.timinglee.org --docker-email lee@timinglee.org
secret/docker-login created
[root@master auth]#  kubectl describe sa timinglee
Name:                timinglee
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Events:              <none>
[root@master auth]#  kubectl edit  sa timinglee
serviceaccount/timinglee edited
apiVersion: v1
imagePullSecrets:
- name: docker-login					#the pull secret this ServiceAccount is authorized to use
kind: ServiceAccount
metadata:
  creationTimestamp: "2026-04-25T07:49:09Z"
  name: timinglee
  namespace: default
  resourceVersion: "337771"
  uid: f6711f5a-85a2-45c9-8f20-3f45387ebd54
[root@master auth]#  kubectl describe sa timinglee
Name:                timinglee
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  docker-login
Events:              <none>
[root@master auth]# vim testpod.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  serviceAccountName: timinglee
  containers:
  - image: reg.timinglee.org/timinglee/myapp:v1
    name: testpod
    imagePullPolicy: Always
[root@master auth]# kubectl  apply  -f testpod.yaml 
pod/testpod created

Watch output

Reminder: remember to remove the taints

bash
[root@master Scheduler]# kubectl taint node node1 name=lee:NoSchedule-
node/node1 untainted
[root@master Scheduler]# kubectl taint node node2 name=lee:NoSchedule-
node/node2 untainted
[root@master Scheduler]# kubectl taint node node3 name=lee:NoSchedule-
node/node3 untainted

2. Creating a cluster user

Create the user certificate

bash
[root@master auth]# cd  /etc/kubernetes/pki/ #enter the directory holding the cluster's core certificates
#Generate a 2048-bit RSA private key, written to timinglee.key
[root@master pki]# openssl genrsa -out timinglee.key 2048 
#Generate a certificate signing request (CSR)
#-key timinglee.key: sign the request with the key just generated
#-out timinglee.csr: output file for the request
#/CN=timinglee: username = timinglee; k8s identifies the user by the CN field
[root@master pki]#  openssl req  -new -key timinglee.key -out timinglee.csr -subj "/CN=timinglee"
#Sign the user certificate timinglee.crt with the cluster's ca.crt and ca.key, valid for 365 days; once this succeeds, timinglee is a user the cluster recognizes
[root@master pki]#  openssl x509 -req  -in timinglee.csr -CA ca.crt -CAkey ca.key -CAcreateserial  -out timinglee.crt -days 365
Certificate request self-signature ok
subject=CN=timinglee
#Verify the certificate was generated correctly:
[root@master pki]# openssl x509 -in timinglee.crt -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            3e:dc:fc:6c:85:d0:5a:24:14:af:12:e1:34:3c:9e:73:fa:1c:cb:0d
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes    #issuer
        Validity
            Not Before: Apr 25 12:53:13 2026 GMT
            Not After : Apr 25 12:53:13 2027 GMT
        Subject: CN=timinglee #username
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:99:b9:f4:6e:5b:d5:8b:63:d2:ab:6c:88:3f:1d:
                    68:7c:b2:a3:35:11:aa:1d:21:c3:9e:9e:d9:df:39:
                    a1:da:5c:7d:ea:7b:a3:5a:65:cb:3a:d8:2b:fb:ff:
                    71:18:58:94:d4:6d:75:c9:38:b9:c5:2d:49:c8:b2:
                    a5:44:4a:ab:2c:28:91:dc:b4:66:90:e9:58:43:e5:
                    8f:d7:3b:41:9e:c2:57:21:07:56:0e:34:5e:52:ca:
                    2f:ef:26:02:44:87:f1:be:30:e6:bc:40:76:6a:26:
                    64:81:f9:fa:68:e6:0f:03:80:de:f3:4a:5b:17:01:
                    00:68:b9:a8:7d:e5:23:d9:ae:9a:d1:75:38:31:c4:
                    38:3b:6e:72:c9:7c:e4:3b:89:89:8a:2c:03:ec:db:
                    c1:20:02:8c:78:fd:4a:a7:05:4f:c9:71:08:cd:b1:
                    b2:ee:2e:33:2a:19:12:5a:c2:35:b7:ab:70:11:e2:
                    c5:61:ec:4b:52:2d:98:79:60:a2:a6:ab:dd:cf:f6:
                    f4:f7:39:c3:b0:fb:db:0b:cf:5d:a6:93:5b:4c:78:
                    66:56:96:d2:16:63:e4:25:8e:bf:ec:91:7f:ef:48:
                    cb:49:5a:52:ed:8a:64:8a:1c:9d:90:db:9e:b0:ef:
                    1a:33:ca:81:e0:6b:56:95:8a:a7:01:29:02:ae:40:
                    36:f5
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Key Identifier: 
                94:93:04:E9:78:8F:D8:64:E8:FD:81:30:0E:A9:D7:FC:1D:4A:9C:5E
            X509v3 Authority Key Identifier: 
                5A:A7:16:F2:8D:33:9E:6A:62:34:73:E2:19:6D:62:BF:77:E8:DC:8F
    Signature Algorithm: sha256WithRSAEncryption
    Signature Value:
        2d:6c:ca:dc:8c:ac:6c:96:a0:9f:e2:34:8f:22:7b:be:1f:ff:
        6c:34:d8:9d:f9:00:e2:59:76:0e:70:d9:2a:c6:8c:5f:e6:a1:
        a2:cc:c6:e6:5b:da:4e:a6:b3:4a:51:96:db:02:30:0b:ef:df:
        f9:22:d1:24:a9:52:53:f8:07:84:14:bb:e1:07:ff:5b:a0:05:
        cf:c5:6f:a4:14:0d:a2:9a:39:e9:c6:d9:e4:65:37:cc:05:93:
        29:40:37:d0:e4:9d:3d:5b:50:f8:68:49:e0:f9:8c:d4:24:d3:
        2b:17:ec:81:b2:0f:9e:a0:8a:e4:87:88:0b:11:8d:ff:ef:b7:
        e8:21:90:06:9b:c2:a9:27:ba:2f:e0:cc:e4:55:db:2f:bf:d6:
        8e:b0:33:9b:05:7e:20:2a:3b:f2:38:b6:89:a7:1b:37:cc:da:
        17:4c:84:44:c0:1d:0b:e2:60:74:e3:23:49:9d:ed:7c:dc:cf:
        f8:6a:c4:40:8d:8e:10:9d:53:b0:68:84:a2:29:2f:ec:bc:75:
        ef:ae:79:de:b0:15:90:d9:1a:91:dd:3d:df:1a:b1:1c:03:f3:
        27:55:06:8c:df:d4:69:0a:f5:1c:1e:58:4f:8c:39:e1:18:d2:
        ec:75:3c:45:2a:5d:cd:3b:65:bb:22:8c:94:18:aa:9b:35:43:
        a1:2c:41:7e

Create the user

bash
#View the current kubectl configuration
[root@master pki]# kubectl  config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://172.25.254.100:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
users:
- name: kubernetes-admin
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED
#Add the new user to the kubectl config: there is a user named timinglee, with certificate timinglee.crt and private key timinglee.key
#--embed-certs=true: embed the certificate contents in the config file, making it easy to move around
[root@master pki]# kubectl config set-credentials timinglee --client-certificate /etc/kubernetes/pki/timinglee.crt --client-key /etc/kubernetes/pki/timinglee.key --embed-certs=true
User "timinglee" set.
#Create a context
[root@master pki]#  kubectl config set-context timinglee@kubernetes --cluster kubernetes --user timinglee
Context "timinglee@kubernetes" created.
#Switch the current user to timinglee
[root@master pki]#  kubectl config use-context timinglee@kubernetes
Switched to context "timinglee@kubernetes".
[root@master pki]#  kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://172.25.254.100:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
- context:
    cluster: kubernetes
    user: timinglee
  name: timinglee@kubernetes
current-context: timinglee@kubernetes
kind: Config
users:
- name: kubernetes-admin
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED
- name: timinglee
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED

3. Authorization

    • Lets administrators configure authorization policy dynamically through the Kubernetes API. RBAC associates users with permissions through roles.
    • RBAC only grants permissions, it never denies them, so you only need to define what a user is allowed to do
    • The three basic RBAC concepts:
    • Subject: the actor; one of the three k8s subject kinds: user, group, serviceAccount
    • Role: a set of rules defining permissions on Kubernetes API objects
    • RoleBinding: the binding between a subject and a role
    • RBAC has four object types: Role, ClusterRole, RoleBinding, ClusterRoleBinding
    • Role and ClusterRole
    • A Role is a collection of permissions; it can only grant access to resources within a single namespace.
    • A ClusterRole is like a Role but applies cluster-wide.
    • Kubernetes also ships four predefined ClusterRoles for direct use:
    • cluster-admin, admin, edit, view
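A predefined ClusterRole can be granted inside a single namespace through a RoleBinding. A sketch binding the built-in view role to the timinglee user created above (the binding name is arbitrary):

```yaml
# Grant read-only access in the default namespace via the built-in "view" ClusterRole (sketch)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: timinglee-view
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole    # referencing a ClusterRole from a RoleBinding scopes it to this namespace
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: timinglee
```

This reuses the cluster-wide rule set without granting any cross-namespace access.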

Switch back to the administrator

bash
[root@master pki]# cd 
[root@master ~]# cd auth/
[root@master auth]# kubectl  config use-context kubernetes-admin@kubernetes 
Switched to context "kubernetes-admin@kubernetes".

Creating a Role

bash
[root@master auth]# kubectl create role timingleerole --dry-run=client --verb=get --resource pods -o yaml > timingleerole.yaml
[root@master auth]# vim timingleerole.yaml 
apiVersion: rbac.authorization.k8s.io/v1  # 1. RBAC API version (fixed)
kind: Role                                  # 2. type: Role (only valid inside one namespace)
metadata:
  name: timingleerole                       # 3. the role is named timingleerole
rules:                                      # 4. permission rules begin
- apiGroups:                                 # 5. which API groups may be accessed
  - ""                                       # 6. empty = the core group (pods and services live here)
  resources:                                 # 7. which resources may be operated on
  - pods                                     # 8. Pods are allowed
  verbs:                                     # 9. which actions are allowed
  - get                                      # 10. read a single object
  - watch                                    # 11. watch for changes
  - list                                     # 12. list objects
  - create                                   # 13. create
  - update                                   # 14. update
  - patch                                    # 15. patch
  - delete                                   # 16. delete
- apiGroups:                                 # 17. second API group
  - "apps"                                   # 18. the apps group (deployments, statefulsets live here)
  resources:                                 # 19. resources
  - deployments                              # 20. Deployments are allowed
  verbs:                                     # 21. actions
  - get                                      # 22. read
  - list                                     # 23. list
  - watch                                    # 24. watch
  - create                                   # 25. create

[root@master auth]#  kubectl apply -f timingleerole.yaml
role.rbac.authorization.k8s.io/timingleerole created
#List the roles in the current (default) namespace
[root@master auth]# kubectl get roles.rbac.authorization.k8s.io 
[root@master auth]# kubectl describe  roles.rbac.authorization.k8s.io timingleerole
Name:         timingleerole
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources         Non-Resource URLs  Resource Names  Verbs
  ---------         -----------------  --------------  -----
  deployments.apps  []                 []              [get list watch create]
  pods              []                 []              [get watch list create update patch delete]

Binding the Role (RoleBinding)

bash
[root@master auth]# kubectl create rolebinding timingleerole-binding  --role timingleerole --namespace default --user timinglee --dry-run=client -o yaml > timingleerole-binding.yml
[root@master auth]# vim timingleerole-binding.yml
# API version: fixed for k8s RBAC
apiVersion: rbac.authorization.k8s.io/v1

# Resource type: RoleBinding (binds a role to a subject)
kind: RoleBinding

# Metadata: name and namespace
metadata:
  # Name of this binding
  name: timingleerole-binding
  # The permissions apply only in the default namespace
  namespace: default

# The role being bound
roleRef:
  # API group (fixed)
  apiGroup: rbac.authorization.k8s.io
  # The bound object is a Role
  kind: Role
  # Name of the role (the timingleerole created above)
  name: timingleerole

# The subjects (who the role is granted to)
subjects:
  # Subject API group (fixed)
- apiGroup: rbac.authorization.k8s.io
  # Subject kind: a normal user
  kind: User
  # The user being granted: timinglee
  name: timinglee
[root@master auth]# kubectl apply -f timingleerole-binding.yml
rolebinding.rbac.authorization.k8s.io/timingleerole-binding created
[root@master auth]# kubectl describe  rolebindings.rbac.authorization.k8s.io timingleerole-binding
Name:         timingleerole-binding
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  Role
  Name:  timingleerole
Subjects:
  Kind  Name       Namespace
  ----  ----       ---------
  User  timinglee  

#Test

  • ✅ can view / create Pods
  • ✅ can view / create Deployments
  • ❌ cannot view Services (svc)
  • ❌ cannot delete Deployments
[root@master auth]#  kubectl config use-context  timinglee@kubernetes
Switched to context "timinglee@kubernetes".
[root@master auth]# kubectl get svc
Error from server (Forbidden): services is forbidden: User "timinglee" cannot list resource "services" in API group "" in the namespace "default"
[root@master auth]# kubectl get deployments.apps
No resources found in default namespace.
[root@master auth]# kubectl get pods
NAME      READY   STATUS    RESTARTS      AGE
testpod   1/1     Running   1 (55m ago)   5h8m
[root@master auth]# kubectl create deployment test --image nginx --replicas 2
deployment.apps/test created
[root@master auth]# kubectl get  deployments.apps
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
test   2/2     2            2           8s
[root@master auth]# kubectl delete deployments.apps test
Error from server (Forbidden): deployments.apps "test" is forbidden: User "timinglee" cannot delete resource "deployments" in API group "apps" in the namespace "default"
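The Forbidden error is expected: the Role above grants no delete verb on Deployments. If the user did need delete rights, the apps rule in timingleerole.yaml could be extended as sketched below (a hypothetical change, reusing the names from this example):

```yaml
# Hypothetical extension of the apps rule in timingleerole.yaml:
# adding delete to the Deployment verbs would permit
#   kubectl delete deployments.apps test
- apiGroups:
  - "apps"
  resources:
  - deployments
  verbs:
  - get
  - list
  - watch
  - create
  - delete     # newly added verb
```

After `kubectl apply -f timingleerole.yaml`, the same delete command would succeed.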

ClusterRole and ClusterRoleBinding

Creating a ClusterRole

#Switch back to the admin context first and delete the resources created above

[root@master auth]# kubectl config use-context  kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
[root@master auth]# kubectl  delete -f timingleerole.yaml 
role.rbac.authorization.k8s.io "timingleerole" deleted from default namespace
[root@master auth]# kubectl  delete   deployment test
deployment.apps "test" deleted from default namespace
[root@master auth]# kubectl  delete   pod testpod 
pod "testpod" deleted from default namespace
[root@master auth]#  kubectl create clusterrole timingleeclusterrole --resource=deployment --verb get --dry-run=client -o yaml > timingleeclusterrole.yml

[root@master auth]# vim timingleeclusterrole.yml
# 1. API version (fixed value for K8s RBAC objects)
apiVersion: rbac.authorization.k8s.io/v1

# 2. Kind: ClusterRole = a cluster-scoped role (effective across the whole cluster)
kind: ClusterRole

# 3. Metadata (name, labels, and other basic info)
metadata:
  # name of this role: timingleeclusterrole
  name: timingleeclusterrole

# 4. Permission rules (the core part -- this is where rights are granted)
rules:

# Rule 1: Pod permissions
# "" denotes the core API group (pods, services, nodes live here)
- apiGroups:
  - ""
  # resource to control: Pod
  resources:
  - pods
  # allowed actions (verbs)
  verbs:
  - get        # read
  - watch      # watch for changes
  - list       # list
  - create     # create
  - update     # update
  - patch      # patch (partial update)
  - delete     # delete

# Rule 2: Deployment permissions
- apiGroups:
  - "apps"     # the apps group manages Deployments and StatefulSets
  resources:
  - deployments  # resource: Deployment (manages replicated Pods)
  verbs:
  - get        # read
  - watch      # watch
  - list       # list
  - create     # create

# Rule 3: Service permissions
- apiGroups:
  - ""         # core API group
  resources:
  - services   # resource: Service (access entry point)
  verbs:
  - get        # read
  - watch      # watch
  - list       # list
  - create     # create
  
[root@master auth]# kubectl apply -f timingleeclusterrole.yml
clusterrole.rbac.authorization.k8s.io/timingleeclusterrole created
[root@master auth]#  kubectl describe clusterrole  timingleeclusterrole 
Name:         timingleeclusterrole
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources         Non-Resource URLs  Resource Names  Verbs
  ---------         -----------------  --------------  -----
  pods              []                 []              [get watch list create update patch delete]
  services          []                 []              [get watch list create]
  deployments.apps  []                 []              [get watch list create]

Binding the ClusterRole to a user (ClusterRoleBinding)

[root@master auth]# kubectl create clusterrolebinding  clusterrolebind-timinleeclusterrole --clusterrole timingleeclusterrole   --user timinglee
clusterrolebinding.rbac.authorization.k8s.io/clusterrolebind-timinleeclusterrole created
[root@master auth]# kubectl  describe  clusterrolebindings.rbac.authorization.k8s.io  clusterrolebind-timinleeclusterrole 
Name:         clusterrolebind-timinleeclusterrole
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  timingleeclusterrole
Subjects:
  Kind  Name       Namespace
  ----  ----       ---------
  User  timinglee
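The imperative `kubectl create clusterrolebinding` command above can also be kept as a declarative manifest; a sketch equivalent to that command:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  # same name as in the imperative command above;
  # cluster-scoped, so no namespace field
  name: clusterrolebind-timinleeclusterrole
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole            # binds a ClusterRole, not a Role
  name: timingleeclusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: timinglee
```

Applying this with `kubectl apply -f` produces the same binding and keeps it under version control.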
#Test
#Switch to the timinglee context
[root@master auth]# kubectl config use-context timinglee@kubernetes
Switched to context "timinglee@kubernetes".
[root@master auth]# kubectl  get pv
Error from server (Forbidden): persistentvolumes is forbidden: User "timinglee" cannot list resource "persistentvolumes" in API group "" at the cluster scope
[root@master auth]# kubectl  get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   21d
timinglee    ClusterIP   None         <none>        80/TCP    36h
[root@master auth]# kubectl  get  deployments.apps 
No resources found in default namespace.
[root@master auth]# kubectl  get  pods
No resources found in default namespace.
[root@master auth]# kubectl  get  pods -A
NAMESPACE                NAME                                        READY   STATUS    RESTARTS       AGE
ingress-nginx            ingress-nginx-controller-6bcbfdbd4b-w48gv   1/1     Running   10 (39m ago)   9d
kube-flannel             kube-flannel-ds-9x97v                       1/1     Running   2 (39m ago)    23h
kube-flannel             kube-flannel-ds-fjzl4                       1/1     Running   2 (40m ago)    23h
kube-flannel             kube-flannel-ds-qfgkh                       1/1     Running   2 (40m ago)    23h
kube-system              coredns-697886855d-555zc                    1/1     Running   15 (40m ago)   21d
kube-system              coredns-697886855d-zgnd4                    1/1     Running   15 (40m ago)   21d
kube-system              etcd-master                                 1/1     Running   16 (40m ago)   21d
kube-system              kube-apiserver-master                       1/1     Running   13 (40m ago)   13d
kube-system              kube-controller-manager-master              1/1     Running   17 (40m ago)   21d
kube-system              kube-proxy-8m4q7                            1/1     Running   12 (40m ago)   13d
kube-system              kube-proxy-s5kht                            1/1     Running   12 (39m ago)   13d
kube-system              kube-proxy-w5jfb                            1/1     Running   12 (40m ago)   13d
kube-system              kube-proxy-zcs6m                            1/1     Running   13 (40m ago)   13d
kube-system              kube-scheduler-master                       1/1     Running   17 (40m ago)   21d
metallb-system           controller-5fbf6546f9-fxlqb                 1/1     Running   2 (40m ago)    19h
metallb-system           speaker-kq6mj                               1/1     Running   13 (40m ago)   13d
metallb-system           speaker-lwdnz                               1/1     Running   12 (39m ago)   13d
metallb-system           speaker-nmpbd                               1/1     Running   12 (40m ago)   13d
nfs-client-provisioner   nfs-client-provisioner-6b445b9454-w7n6j     1/1     Running   3 (39m ago)    41h

14. Kubernetes Package Management

Helm overview

  • Helm is the package manager for Kubernetes applications. It manages Charts, much as yum manages packages on a Linux system.
  • A Helm Chart is a set of YAML files that packages a native Kubernetes application. You can customize the application's metadata at deploy time, which makes distribution easier.
  • For publishers:
  • Helm packages the application, manages its dependencies and versions, and publishes it to a chart repository.
  • For users:
  • Helm makes it simple to find, install, upgrade, roll back, and uninstall applications on Kubernetes.

Deploying Helm

Official site and downloads

[root@master auth]# cd
[root@master ~]# mkdir helm
[root@master ~]# cd helm/
[root@master helm]#  wget https://get.helm.sh/helm-v4.1.1-linux-amd64.tar.gz
[root@master helm]# tar zxf helm-v4.1.1-linux-amd64.tar.gz 
[root@master helm]# ls
helm-v4.1.1-linux-amd64.tar.gz  linux-amd64


[root@master helm]# cd linux-amd64/
[root@master linux-amd64]# ls
helm  LICENSE  README.md
[root@master linux-amd64]# cp -p helm /usr/local/bin/
[root@master linux-amd64]# echo "source <(helm completion bash)" >> ~/.bashrc
[root@master linux-amd64]# source  ~/.bashrc 
#Tab completion now works for the commands below
[root@master linux-amd64]# helm 
completion  (generate autocompletion scripts for the specified shell)
create      (create a new chart with the given name)
dependency  (manage a chart's dependencies)
env         (helm client environment information)
get         (download extended information of a named release)
help        (Help about any command)
history     (fetch release history)
install     (install a chart)
lint        (examine a chart for possible issues)
list        (list releases)
package     (package a chart directory into a chart archive)
plugin      (install, list, or uninstall Helm plugins)
pull        (download a chart from a repository and (optionally) unpack it in local directory)
push        (push a chart to remote)
registry    (login to or logout from a registry)
repo        (add, list, remove, update, and index chart repositories)
rollback    (roll back a release to a previous revision)
search      (search for a keyword in charts)
show        (show information of a chart)
status      (display the status of the named release)
template    (locally render templates)
test        (run tests for a release)
uninstall   (uninstall a release)
upgrade     (upgrade a release)
verify      (verify that a chart at the given path has been signed and is valid)
version     (print the helm version information)
#Online installation
[root@master helm]# dnf install -y git
[root@master helm]# helm plugin install https://github.com/chartmuseum/helm-push --verify=false
WARNING: Skipping plugin signature verification
Downloading and installing helm-push v0.11.1 ...
https://github.com/chartmuseum/helm-push/releases/download/v0.11.1/helm-push_0.11.1_linux_amd64.tar.gz
Installed plugin: cm-push
[root@master helm]# ls
helm-push_0.11.1.tar.gz  helm-v4.1.3-linux-amd64.tar.gz  linux-amd64
[root@master helm]# helm plugin  list
NAME   	VERSION	TYPE  	APIVERSION	PROVENANCE	SOURCE 
cm-push	0.11.1 	cli/v1	v1        	unknown   	unknown


#Offline installation of helm-push
[root@master helm]# cd .local/share/helm/
[root@master helm]# ls
plugins
[root@master helm]# cd plugins
[root@master plugins]# ls
helm-push
[root@master plugins]# cd helm-push/
[root@master helm-push]# ls
acceptance_tests  bin  BUILDING.md  CHANGELOG.md  cmd  go.mod  go.sum  LICENSE  Makefile  pkg  plugin.yaml  README.md  releases  scripts  testdata
[root@master helm]# tar zxf helm-push_0.11.1.tar.gz 
[root@master helm]# helm
Available Commands:
  cm-push     Push chart package to ChartMuseum  #cm-push is now available
  completion  generate autocompletion scripts for the specified shell
  create      create a new chart with the given name
 .......

Helm basics

#Add a third-party repository
[root@master helm]# helm repo  add  aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
"aliyun" has been added to your repositories
[root@master helm]# helm repo list
NAME  	URL                                                   
aliyun	https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
#Search the local repo cache
[root@master helm]# helm search repo aliyun/nginx
NAME                	CHART VERSION	APP VERSION	DESCRIPTION                                       
aliyun/nginx-ingress	0.9.5        	0.10.2     	An nginx Ingress controller that uses ConfigMap...
aliyun/nginx-lego   	0.3.1        	           	Chart for nginx-ingress-controller and kube-lego
#Show chart metadata
[root@master helm]# helm show chart aliyun/nginx-ingress
apiVersion: v1
appVersion: 0.10.2
description: An nginx Ingress controller that uses ConfigMap to store the nginx configuration.
icon: https://upload.wikimedia.org/wikipedia/commons/thumb/c/c5/Nginx_logo.svg/500px-Nginx_logo.svg.png
keywords:
- ingress
- nginx
maintainers:
- email: jack.zampolin@gmail.com
  name: jackzampolin
- email: mgoodness@gmail.com
  name: mgoodness
- email: chance.zibolski@coreos.com
  name: chancez
name: nginx-ingress
sources:
- https://github.com/kubernetes/ingress-nginx
version: 0.9.5
#Add the official Bitnami Helm repository
[root@master helm]# helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
#List configured repositories
[root@master helm]# helm repo list
NAME   	URL                                                   
aliyun 	https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
bitnami	https://charts.bitnami.com/bitnami                    
#Search the Bitnami repo for its official nginx charts
[root@master helm]# helm search repo bitnami/nginx
NAME                            	CHART VERSION	APP VERSION	DESCRIPTION                                       
bitnami/nginx                   	23.0.3       	1.30.0     	NGINX Open Source is a web server that can be a...
bitnami/nginx-ingress-controller	12.0.7       	1.13.1     	NGINX Ingress Controller is an Ingress controll...
bitnami/nginx-intel             	2.1.15       	0.4.9      	DEPRECATED NGINX Open Source for Intel is a lig...

#List all chart versions (-l)
[root@master helm]# helm search repo bitnami/nginx -l | head -n 3
NAME                            	CHART VERSION	APP VERSION	DESCRIPTION                                       
bitnami/nginx                   	23.0.3       	1.30.0     	NGINX Open Source is a web server that can be a...
bitnami/nginx                   	23.0.2       	1.30.0     	NGINX Open Source is a web server that can be a...

#This step requires a server with Internet access

[root@master ~]# helm install webserver bitnami/nginx
NAME: webserver
LAST DEPLOYED: Sun Apr 26 10:56:43 2026
NAMESPACE: default
STATUS: deployed
REVISION: 1
DESCRIPTION: Install complete
TEST SUITE: None
NOTES:
CHART NAME: nginx
CHART VERSION: 23.0.3
APP VERSION: 1.30.0

#Check the release status
[root@master ~]# helm status webserver

[root@master ~]# helm list
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART       APP VERSION
webserver       default         1               2026-04-26 10:56:43.861108531 +0800 CST deployed        nginx-23.0.3    1.30.0
#Uninstall
[root@master ~]# helm uninstall  webserver
release "webserver" uninstalled
[root@master ~]# helm list
NAME    NAMESPACE       REVISION        UPDATED STATUS  CHART   APP VERSION

#Specifying a version when pulling

[root@master helm]# ls
helm-push_0.11.1.tar.gz         helm-v4.1.3-linux-amd64.tar.gz  nginx-23.0.3.tgz
helm-v4.1.1-linux-amd64.tar.gz  linux-amd64
[root@master helm]# tar zxf nginx-23.0.3.tgz 
[root@master helm]# ls
helm-push_0.11.1.tar.gz         helm-v4.1.3-linux-amd64.tar.gz  nginx
helm-v4.1.1-linux-amd64.tar.gz  linux-amd64                     nginx-23.0.3.tgz
[root@master helm]# cd nginx/
[root@master nginx]# ls
charts  Chart.yaml  README.md  templates  values.yaml
#Modify the following parts
[root@master nginx]# vim values.yaml 
global:
  imageRegistry: "reg.timinglee.org"


image:
  registry: registry-1.docker.io
  repository: library/nginx
  tag: latest
  digest: ""


enableDefaultInitContainers: false 

livenessProbe:
  enabled: false
  
readinessProbe:
  enabled: false

containerSecurityContext:
  enabled: false
  seLinuxOptions: {}
  runAsUser: 1001
  runAsGroup: 1001
  runAsNonRoot: true
  privileged: false
  readOnlyRootFilesystem: false
  
containerPorts:
  http: 80
  https: 443

#We logged in earlier as the timinglee user, whose permissions are too low; switch back to the admin context
[root@master nginx]# kubectl config use-context  kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
#Install a release named webserver with Helm from the chart in /root/helm/nginx, allowing images from a non-default registry
[root@master nginx]# helm install webserver /root/helm/nginx --set global.security.allowInsecureImages=true
NAME: webserver
LAST DEPLOYED: Mon Apr 27 18:45:32 2026
NAMESPACE: default
STATUS: deployed
REVISION: 1
DESCRIPTION: Install complete
TEST SUITE: None
NOTES:
CHART NAME: nginx
CHART VERSION: 23.0.3
APP VERSION: 1.30.0
........................
environment variables.
[root@master nginx]# helm list
NAME     	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART       	APP VERSION
webserver	default  	1       	2026-04-27 18:45:32.814345758 +0800 CST	deployed	nginx-23.0.3	1.30.0     
[root@master nginx]# kubectl get svc
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
kubernetes        ClusterIP      10.96.0.1       <none>          443/TCP                      23d
timinglee         ClusterIP      None            <none>          80/TCP                       2d21h
webserver-nginx   LoadBalancer   10.107.130.60   172.25.254.51   80:32563/TCP,443:39017/TCP   88s
[root@master nginx]# curl 10.107.130.60
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
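Instead of a long `--set` flag, the same override can be kept in a separate values file (the file name `override.yaml` is illustrative); a minimal sketch for the Bitnami nginx chart used above:

```yaml
# override.yaml -- hypothetical override file, equivalent to
#   --set global.security.allowInsecureImages=true
global:
  security:
    allowInsecureImages: true
```

`helm install webserver /root/helm/nginx -f override.yaml` would then produce the same release, and the override stays reviewable in version control.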

#Upgrading

[root@master nginx]# vim values.yaml 
892 ingress:
895   enabled: true
907   hostname: myapp.timinglee.org
927   ingressClassName: "nginx"
#Upgrade the already-installed webserver release
[root@master nginx]# helm upgrade webserver /root/helm/nginx
Release "webserver" has been upgraded. Happy Helming!
NAME: webserver
LAST DEPLOYED: Mon Apr 27 18:52:34 2026
NAMESPACE: default
STATUS: deployed
REVISION: 2
DESCRIPTION: Upgrade complete
TEST SUITE: None
NOTES:
CHART NAME: nginx
CHART VERSION: 23.0.3
APP VERSION: 1.30.0
.................
environment variables.
[root@master nginx]# kubectl  describe  ingress
Name:             webserver-nginx
Labels:           app.kubernetes.io/instance=webserver
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=nginx
                  app.kubernetes.io/version=1.30.0
                  helm.sh/chart=nginx-23.0.3
Namespace:        default
Address:          172.25.254.30
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host                 Path  Backends
  ----                 ----  --------
  myapp.timinglee.org  
                       /   webserver-nginx:http (10.244.3.4:80)
Annotations:           meta.helm.sh/release-name: webserver
                       meta.helm.sh/release-namespace: default
Events:
  Type    Reason  Age                 From                      Message
  ----    ------  ----                ----                      -------
  Normal  Sync    69s (x3 over 3m1s)  nginx-ingress-controller  Scheduled for sync

[root@master nginx]# vim /etc/hosts
172.25.254.50      myapp1.timinglee.org myapp2.timinglee.org myapp.timinglee.org
#History: revisions 1-3 are superseded; revision 4 is the current, Ingress-enabled one
[root@master nginx]# helm history webserver
REVISION	UPDATED                 	STATUS    	CHART       	APP VERSION	DESCRIPTION     
1       	Mon Apr 27 18:45:32 2026	superseded	nginx-23.0.3	1.30.0     	Install complete
2       	Mon Apr 27 18:52:34 2026	superseded	nginx-23.0.3	1.30.0     	Upgrade complete
3       	Mon Apr 27 18:54:26 2026	superseded	nginx-23.0.3	1.30.0     	Upgrade complete
4       	Mon Apr 27 19:02:02 2026	deployed  	nginx-23.0.3	1.30.0     	Upgrade complete
#Roll back to revision 1
[root@master nginx]# helm rollback webserver 1
Rollback was a success! Happy Helming!
[root@master nginx]# kubectl  get ingress
No resources found in default namespace.
#Roll back to revision 4
[root@master nginx]# helm rollback webserver 4
Rollback was a success! Happy Helming!
[root@master nginx]# kubectl  get ingress
NAME              CLASS   HOSTS                 ADDRESS         PORTS   AGE
webserver-nginx   nginx   myapp.timinglee.org   172.25.254.30   80      43s

Building a Helm chart

1. Scaffold the chart

[root@master ~]# cd helm/
[root@master helm]# mkdir mnt
[root@master helm]# cd mnt/
#Create a standard Helm Chart skeleton named timinglee
#This generates a timinglee/ directory containing Chart.yaml, values.yaml, templates/ and charts/
[root@master mnt]# helm create timinglee
Creating timinglee

2. Configure the chart

[root@master timinglee]# ls
charts  Chart.yaml  templates  values.yaml
[root@master mnt]# vim timinglee/Chart.yaml 
apiVersion: v2
name: timinglee
description: use nginx create webserver  #create a web service with nginx

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "v1"

[root@master mnt]# vim timinglee/values.yaml 
6 replicaCount: 2
9 image:
10   repository: myapp


60 ingress:
61   enabled: true
62   className: "nginx"
63   annotations: {}
64     # kubernetes.io/ingress.class: nginx
65     # kubernetes.io/tls-acme: "true"
66   hosts:
67     - host: myappv1.timinglee.org  #domain name: myappv1.timinglee.org
68       paths:
69         - path: /
70           pathType: ImplementationSpecific
#Check the chart for syntax errors
[root@master mnt]# helm lint timinglee/
==> Linting timinglee/
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed
[root@master mnt]# helm package timinglee/
Successfully packaged chart and saved it to: /root/helm/mnt/timinglee-0.1.0.tgz
[root@master mnt]# ls
timinglee  timinglee-0.1.0.tgz
[root@master mnt]# kubectl delete sa timinglee -n default
serviceaccount "timinglee" deleted from default namespace
[root@master mnt]# helm install timinglee timinglee-0.1.0.tgz 
NAME: timinglee
LAST DEPLOYED: Sat May  2 08:51:28 2026
NAMESPACE: default
STATUS: deployed
REVISION: 1
DESCRIPTION: Install complete
NOTES:
1. Get the application URL by running these commands:
  http://myappv1.timinglee.org/
#The URL above is myappv1.timinglee.org, so add that hostname to /etc/hosts
[root@master mnt]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.100     master   myapp1.timinglee.org
172.25.254.10      node1
172.25.254.20      node2
172.25.254.30     node3
172.25.254.200     reg.timinglee.org
172.25.254.50  myapp1.timinglee.org myapp2.timinglee.org myapp.timinglee.org myappv1.timinglee.org
#Test
[root@master mnt]# curl myappv1.timinglee.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
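The hostname set in values.yaml reaches the Ingress through the chart's templates. Simplified from the templates/ingress.yaml that `helm create` scaffolds (details trimmed, so treat it as a sketch):

```yaml
# templates/ingress.yaml (simplified): each entry in .Values.ingress.hosts
# becomes one Ingress rule, so hosts[0].host = myappv1.timinglee.org above
# renders into the rule that curl just hit.
rules:
  {{- range .Values.ingress.hosts }}
  - host: {{ .host | quote }}
    http:
      paths:
        {{- range .paths }}
        - path: {{ .path }}
          pathType: {{ .pathType }}
        {{- end }}
  {{- end }}
```

`helm template timinglee/` renders these templates locally, which is a handy way to inspect the generated manifests before installing.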

#If the following error appears, resources with the same names already exist in the cluster and must be deleted first

[root@master mnt]# helm install timinglee timinglee-0.1.0.tgz 
Error: INSTALLATION FAILED: unable to continue with install: ServiceAccount "timinglee" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "timinglee"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
[root@master mnt]# helm install timinglee timinglee-0.1.0.tgz 
Error: INSTALLATION FAILED: unable to continue with install: Service "timinglee" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "timinglee"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"

Solution:

[root@master mnt]# kubectl delete sa timinglee -n default
serviceaccount "timinglee" deleted from default namespace
[root@master mnt]# kubectl delete svc timinglee -n default
service "timinglee" deleted from default namespace

Using Helm repositories

1. OCI mode

Creating a Helm repository in Harbor

Official site: https://github.com/chartmuseum/helm-push

#Log in to the private registry
#Authenticate to the Harbor registry, passing the self-signed CA certificate so the HTTPS connection is trusted
[root@master anchors]# helm registry login  reg.timinglee.org --username admin --password lee --ca-file /etc/docker/certs.d/reg.timinglee.org/ca.crt 
level=WARN msg="using --password via the CLI is insecure. Use --password-stdin"
Login Succeeded
[root@master anchors]# cd /root/helm/mnt/
[root@master mnt]# ls
timinglee  timinglee-0.1.0.tgz
# Copy the Harbor self-signed CA certificate into the system trust directory
[root@master mnt]# cp /etc/docker/certs.d/reg.timinglee.org/ca.crt /etc/pki/ca-trust/source/anchors/ca.crt -p
# Update the system CA trust store so the new certificate takes effect immediately
[root@master mnt]# update-ca-trust
#Push the Helm Chart to the Harbor repository
[root@master mnt]# helm push timinglee-0.1.0.tgz oci://reg.timinglee.org/helm-carts
Pushed: reg.timinglee.org/helm-carts/timinglee:0.1.0
Digest: sha256:d83896339121dc37a0baa05f7311b5f2497d976281ff82c5fb81e5da7fd59bde
#Pull the Helm Chart from Harbor
[root@master mnt]# helm pull oci://reg.timinglee.org/helm-carts/timinglee --version 0.1.0
Pulled: reg.timinglee.org/helm-carts/timinglee:0.1.0
Digest: sha256:d83896339121dc37a0baa05f7311b5f2497d976281ff82c5fb81e5da7fd59bde
#Deploy the chart from Harbor to the Kubernetes cluster with Helm, release name timinglee
[root@master mnt]# helm install timinglee oci://reg.timinglee.org/helm-carts/timinglee --version 0.1.0
Pulled: reg.timinglee.org/helm-carts/timinglee:0.1.0
# chart pulled successfully from the Harbor repository
Digest: sha256:d83896339121dc37a0baa05f7311b5f2497d976281ff82c5fb81e5da7fd59bde
# unique checksum of the chart package, used to verify integrity
NAME: timinglee
# name of the deployed release: timinglee
LAST DEPLOYED: Sat May  2 10:01:19 2026
# when the deployment completed
NAMESPACE: default
# the release runs in the default namespace
STATUS: deployed
# release status: deployed (running)
REVISION: 1
# revision number: first install
DESCRIPTION: Install complete
NOTES:
1. Get the application URL by running these commands:
  http://myappv1.timinglee.org/
# the application is reachable via the hostname myappv1.timinglee.org

#If the following error appears

#Error: a release named timinglee is already running
[root@master mnt]# helm install timinglee oci://reg.timinglee.org/helm-carts/timinglee --version 0.1.0
Pulled: reg.timinglee.org/helm-carts/timinglee:0.1.0
Digest: sha256:d83896339121dc37a0baa05f7311b5f2497d976281ff82c5fb81e5da7fd59bde
level=ERROR msg="release name check failed" error="cannot reuse a name that is still in use"
Error: INSTALLATION FAILED: release name check failed: cannot reuse a name that is still in use
#So uninstall it first
[root@master mnt]# helm uninstall timinglee
release "timinglee" uninstalled

2. Helm charts (ChartMuseum mode)

Building the Harbor repository

Install docker-ce

[root@node3 ~]# mount /dev/sr0 /mnt
mount: /mnt: WARNING: source write-protected, mounted read-only.
[root@node3 ~]#  vim /etc/yum.repos.d/docker.repo
[root@node3 ~]#  dnf install docker-ce-29.3.1-1.el9 -y
[root@node3 ~]#  vim /etc/modules-load.d/docker_mod.conf
[root@node3 ~]# modprobe -a br_netfilter
[root@node3 ~]# vim /etc/sysctl.d/docker.conf
[root@node3 ~]#  systemctl enable --now docker
Install Harbor 2.5.4 with the ChartMuseum (helm-charts) mode enabled
[root@node3 ~]#  mkdir  /data/certs -p
[root@node3 ~]# openssl req -newkey  rsa:4096 \
-nodes -sha256 -keyout /data/certs/timinglee.org.key \
-addext "subjectAltName = DNS:charts.timinglee.org" \
-x509 -days 365 -out /data/certs/timinglee.org.crt
..................................
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:Shannxi
Locality Name (eg, city) [Default City]:Xi'an
Organization Name (eg, company) [Default Company Ltd]:helm
Organizational Unit Name (eg, section) []:charts
Common Name (eg, your name or your server's hostname) []:charts.timinglee.org
Email Address []:charts@timinglee.org
[root@node3 ~]# ls
anaconda-ks.cfg   harbor-offline-installer-v2.5.4.tgz  
[root@node3 ~]# tar zxf harbor-offline-installer-v2.5.4.tgz  -C /opt/
[root@node3 ~]# cd /opt/harbor/
[root@node3 harbor]# cp harbor.yml.tmpl harbor.yml
[root@node3 harbor]# vim harbor.yml
[root@node3 harbor]# ./install.sh --with-chartmuseum
.................................................
✔ ----Harbor has been installed and started successfully.----
Log in to Harbor and create a charts project
Add the repository to the local Helm repo list
#Be sure to copy the certificate to the master node
[root@node3 harbor]# scp /data/certs/timinglee.org.crt  root@172.25.254.100:/etc/pki/ca-trust/source/anchors/ca.crt


#Update the trust store on the master
[root@master mnt]# update-ca-trust 

[root@master mnt]# helm repo add timimnglee
Error: "helm repo add" requires 2 arguments

Usage:  helm repo add [NAME] [URL] [flags]


[root@master mnt]# ll /etc/pki/ca-trust/source/anchors/ca.crt 
-rw-r--r-- 1 root root 2199  5月  2 13:55 /etc/pki/ca-trust/source/anchors/ca.crt
[root@master mnt]# date
2026年 05月 02日 星期六 13:59:27 CST
#Add the hostname
[root@master mnt]# vim /etc/hosts
#Register a chart "store" with Helm
[root@master mnt]# helm repo add timimnglee https://charts.timinglee.org/chartrepo/charts
"timimnglee" has been added to your repositories
Push the chart
[root@master mnt]# ls
timinglee  timinglee-0.1.0.tgz
#List the repositories Helm has added
[root@master mnt]# helm repo list
NAME     	URL                                                   
aliyun   	https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
bitnami  	https://charts.bitnami.com/bitnami                    
timinglee	https://charts.timinglee.org/chartrepo/charts         
[root@master mnt]# helm cm-push timinglee-0.1.0.tgz  timinglee -u admin -p lee
Pushing timinglee-0.1.0.tgz to timinglee...
Done.
[root@master mnt]# helm search repo timinglee
No results found
#Remember to refresh the repository cache
[root@master mnt]#  helm repo update timinglee
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "timinglee" chart repository
Update Complete. ⎈Happy Helming!⎈
[root@master mnt]# helm search  repo  timinglee
NAME               	CHART VERSION	APP VERSION	DESCRIPTION               
timinglee/timinglee	0.1.0        	v1         	use nginx create webserver
[root@master mnt]# helm search  repo  timinglee/timinglee -l
NAME               	CHART VERSION	APP VERSION	DESCRIPTION               
timinglee/timinglee	0.1.0        	v1         	use nginx create webserver
[root@master mnt]# helm install timinglee timinglee/timinglee --version 0.1.0
NAME: timinglee
LAST DEPLOYED: Sat May  2 14:12:45 2026
NAMESPACE: default
STATUS: deployed
REVISION: 1
DESCRIPTION: Install complete
NOTES:
1. Get the application URL by running these commands:
  http://myappv1.timinglee.org/

#Problems I ran into and how I solved them

#Check the repo list for a wrongly added entry

Solution

#Remove the misspelled repo and re-add it with the correct name
[root@master mnt]# helm repo remove timimnglee 
"timimnglee" has been removed from your repositories
[root@master mnt]# helm repo list
NAME   	URL                                                   
aliyun 	https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
bitnami	https://charts.bitnami.com/bitnami                    
[root@master mnt]# helm repo add timinglee https://charts.timinglee.org/chartrepo/charts
"timinglee" has been added to your repositories

#Two possible causes: the release name timinglee is already taken (a name cannot be reused while a release with that name still exists), or the cluster has no Nginx Ingress installed / the Ingress admission webhook is broken, which shows up as a connection-refused error

#First, check the release list

#There is indeed a leftover failed release

#Delete the broken ingress-nginx admission webhook directly (the cleanest fix)
[root@master mnt]# kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
validatingwebhookconfiguration.admissionregistration.k8s.io "ingress-nginx-admission" deleted

[root@master mnt]# helm list
NAME     	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART          	APP VERSION
timinglee	default  	1       	2026-05-02 14:10:58.597925695 +0800 CST	failed  	timinglee-0.1.0	v1         
webserver	default  	7       	2026-04-27 19:04:45.333309841 +0800 CST	deployed	nginx-23.0.3   	1.30.0     
#Uninstall the failed timinglee release
[root@master mnt]# helm uninstall timinglee
release "timinglee" uninstalled
[root@master mnt]# helm list
NAME     	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART       	APP VERSION
webserver	default  	7       	2026-04-27 19:04:45.333309841 +0800 CST	deployed	nginx-23.0.3	1.30.0     
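The manual cleanup above (spot the release whose STATUS is `failed`, then uninstall it) can be scripted. Helm's table output is tab-separated, so `awk -F'\t'` splits the STATUS column safely even though the UPDATED column contains spaces. The sketch parses a captured sample so the logic runs without a cluster; for real use you would feed it `$(helm list -a)`:

```shell
# Captured sample of `helm list` output (tab-separated, as helm prints it).
helm_list=$(printf '%b' \
  'NAME\tNAMESPACE\tREVISION\tUPDATED\tSTATUS\tCHART\n' \
  'timinglee\tdefault\t1\t2026-05-02 14:10:58\tfailed\ttiminglee-0.1.0\n' \
  'webserver\tdefault\t7\t2026-04-27 19:04:45\tdeployed\tnginx-23.0.3\n')

# Field 5 is STATUS; keep only releases that failed (skip the header row).
failed=$(printf '%s\n' "$helm_list" | awk -F'\t' 'NR>1 && $5=="failed" {print $1}')

# Echo the cleanup commands instead of running them, so the sketch is safe
# to execute anywhere; drop the echo to actually uninstall.
for r in $failed; do
  echo "cleanup: helm uninstall $r"
done
```

With the sample above only `timinglee` is selected; the healthy `webserver` release is left alone.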

Summary of Helm commands

| Command | Description |
|------------|-------------------------------------------------------------|
| create | Create a chart with the given name |
| dependency | Manage chart dependencies |
| get | Download extended information for a release. Subcommands: all, hooks, manifest, notes, values |
| history | Fetch release history |
| install | Install a chart |
| list | List releases |
| package | Package a chart directory into a chart archive |
| pull | Download a chart from a remote repository and unpack it locally, e.g. `helm pull stable/mysql --untar` |
| repo | Add, list, remove, update, and index chart repositories. Subcommands: add, index, list, remove, update |
| rollback | Roll back a release to a previous revision |
| search | Search for charts by keyword. Subcommands: hub, repo |
| show | Show chart details. Subcommands: all, chart, readme, values |
| status | Show the status of a named release |
| template | Render chart templates locally |
| uninstall | Uninstall a release |
| upgrade | Upgrade a release |
| version | Show the Helm client version |
