Table of Contents
1.1 Container orchestration
1.2 Introduction to Kubernetes
1.3 k8s architecture
1.3.1 Purpose of each k8s component
1.3.2 How the k8s components interact
1.3.3 Common k8s terms and concepts
1.3.4 The k8s layered architecture
2.1 Container management approaches in k8s
2.2 Deploying the k8s environment
2.2.1 Disable all swap and configure local name resolution
2.2.2 Install docker
2.2.3 Copy the harbor registry certificate and start docker
2.2.4 Install the k8s deployment tools
2.2.5 Install cri-docker
2.2.6 Pull the required k8s images on the Master node
2.2.7 Initialize the cluster
2.2.8 Install the flannel network plugin
2.2.9 Adding worker nodes
3.1 Introduction to resource management
3.2 Resource management approaches
3.2.1 Imperative object management
3.2.2 Resource types
3.3 Basic command tests
3.4 Run and debug command examples
3.5 Advanced command examples
4.1 Creating standalone pods
4.2 Managing pods with controllers
4.3 Updating application versions
4.4 Deploying applications with yaml files
4.4.1 Advantages of deploying with yaml files
4.4.2 Resource manifest reference
4.5 Authoring examples
4.5.1 Example 1: a simple single-container pod
4.5.2 Example 2: a multi-container pod
4.5.3 Example 3: understanding pod networking
4.5.4 Example 4: port mapping
4.5.5 Example 5: setting environment variables
4.5.6 Example 6: resource limits
4.5.7 Example 7: container startup management
4.5.8 Example 8: selecting the node to run on
4.5.9 Example 9: sharing the host network
5.1 Init containers
5.2 What init containers do
5.3 Init container example
5.4 Probes
5.4.1 Probe examples
5.4.1.1 Liveness probe example
5.4.1.2 Readiness probe example
6.1 Common controller types
6.1.2 The replicaset controller
6.1.2.1 replicaset features
6.1.2.2 replicaset parameters
6.1.2.3 replicaset example
6.1.3 The deployment controller
6.1.3.1 deployment controller features
6.1.3.2 deployment controller example
6.1.3.2.1 Version rollback
6.1.3.2.2 Rolling update strategy
6.1.3.2.3 Pause and resume
6.1.4 The job controller
6.1.4.1 job controller features
6.1.4.2 job controller example
6.1.5 The cronjob controller
6.1.5.1 cronjob controller features
6.1.5.2 cronjob controller example
7.1 What are microservices
7.2 Microservice types
7.2.1 IPVS mode
7.2.1.1 Configuring IPVS mode
7.3 Microservice types in detail
7.3.1 ClusterIP
7.3.1.1 ClusterIP's special headless mode
7.3.2 NodePort
7.3.3 LoadBalancer
7.3.3.1 MetalLB
7.3.4 ExternalName
7.4 ingress-nginx
7.4.1 ingress-nginx features
7.4.2 Deploying ingress
7.4.2.1 Testing ingress
7.4.3 Advanced ingress usage
7.4.3.1 Path-based access
7.4.3.2 Host-based access
7.4.3.3 Setting up TLS encryption
7.4.3.4 Setting up auth
7.4.3.5 rewrite redirects
7.5 Canary releases
7.5.1 What is a canary release
7.5.2 Canary release methods
7.5.2.1 Header-based (HTTP header) canary
7.5.2.2 Weight-based canary
8.1 configmap
8.1.1 configmap features
8.1.2 Using configmaps
8.1.3 Ways to create a configmap
8.1.3.1 From literal values
8.1.3.2 From a file
8.1.3.3 From a directory
8.1.3.4 From a yaml file
8.1.4 Ways to consume a configmap
8.1.4.1 Populating environment variables with a configmap
8.1.4.2 Consuming a configmap through a volume
8.1.4.3 Populating a pod's config file with a configMap
8.1.4.3.1 Changing config by hot-updating the cm
8.2 secrets configuration management
8.2.2 Creating secrets
8.2.2.1 From a file
8.2.2.2 From a yaml file
8.2.3 Using Secrets
8.2.3.1 Mounting a Secret into a Volume
8.2.3.2 Mapping secret keys to a specific path
8.2.3.3 Setting a Secret as an environment variable
8.2.3.4 Storing docker registry credentials
8.3 volumes configuration management
8.3.1 Volume types supported by kubernetes
8.3.2 emptyDir volumes
8.3.3 hostPath volumes
8.3.4 nfs volumes
8.3.4.1 Set up an nfs server and install nfs-utils on all k8s nodes
8.3.4.2 Deploying an nfs volume
8.3.5 PersistentVolume persistent volumes
8.3.5.1 Static persistent volumes (pv) and claims (pvc)
8.3.5 The storageclass storage class
8.3.5.1 StorageClass overview
8.3.5.3 The NFS Client Provisioner
8.3.5.4 Deploying the NFS Client Provisioner
8.3.5.4.1 Create the sa and grant permissions
8.3.5.4.2 Deploy the application
8.3.5.4.3 Create the storage class
8.3.5.4.4 Create the PVC
8.3.5.4.5 Create a test pod
8.3.5.4.6 Set the default storage class
8.4 The statefulset controller
8.4.1 Features
8.4.2 The parts of a StatefulSet
8.4.3 How to build one
8.4.4 Testing
8.4.5 Scaling a statefulset
1. Introduction to Kubernetes
Application deployment has gone through three main stages:
Traditional deployment: in the early internet era, applications were deployed directly on physical machines
- Pros: simple, no other technology involved
- Cons: resource boundaries cannot be defined for applications, compute resources are hard to allocate sensibly, and programs easily interfere with one another
Virtualized deployment: multiple virtual machines run on one physical machine, each VM an isolated environment
- Pros: program environments do not interfere with each other, providing a degree of security
- Cons: every VM carries an extra operating system, wasting some resources
Containerized deployment: similar to virtualization, but the operating system is shared
Containerized deployment brings great convenience, but it also raises problems, for example:
- When a container fails and stops, how do we start another container immediately to take its place?
- When concurrent traffic grows, how do we scale the number of containers horizontally?
1.1 Container orchestration
To solve these container orchestration problems, orchestration software emerged:
- Swarm: Docker's own container orchestration tool
- Mesos: an Apache tool for unified resource management, used together with Marathon
- Kubernetes: Google's open-source container orchestration tool
1.2 Introduction to Kubernetes
- While Docker was rapidly maturing as a high-level container engine, container technology had already been in use inside Google for many years
- The Borg system there runs and manages many thousands of container applications.
- The Kubernetes project grew out of Borg; it distills the essence of Borg's design and absorbs the experience and lessons learned from that system.
- Kubernetes abstracts compute resources at a higher level and, by carefully composing containers, delivers the final application service to the user.
Kubernetes is essentially a server cluster that runs specific programs on each node to manage the containers on that node. Its goal is automated resource management, and it mainly provides the following features:
- Self-healing: when a container crashes, a new container is started in about a second
- Elastic scaling: the number of running containers can be adjusted automatically as needed
- Service discovery: a service can find the services it depends on automatically
- Load balancing: if a service starts multiple containers, requests are automatically load-balanced across them
- Version rollback: if a newly released program version turns out to be faulty, it can be rolled back immediately
- Storage orchestration: storage volumes can be created automatically according to a container's needs
1.3 k8s architecture
1.3.1 Purpose of each k8s component
A kubernetes cluster consists of control nodes (master) and worker nodes (node); different components are installed on each kind of node
1. master: the cluster's control plane, responsible for cluster decisions
- ApiServer: the single entry point for resource operations; receives user commands and provides authentication, authorization, API registration and discovery
- Scheduler: responsible for cluster resource scheduling; places Pods on the appropriate node according to the configured scheduling policy
- ControllerManager: maintains cluster state, e.g. deployment orchestration, failure detection, auto-scaling and rolling updates
- Etcd: stores the information about all resource objects in the cluster
2. node: the cluster's data plane, providing the runtime environment for containers
- kubelet: maintains the container lifecycle and manages Volumes (CVI) and networking (CNI)
- Container runtime: responsible for image management and for actually running Pods and containers (CRI)
- kube-proxy: provides in-cluster service discovery and load balancing for Services
1.3.2 How the k8s components interact
Suppose we want to run a web service:
- Once the kubernetes environment is up, master and node both store their own information in the etcd database
- The request to install the web service is first sent to the apiServer component on the master node
- apiServer calls the scheduler component to decide which node the service should be installed on; the scheduler reads the node information from etcd, chooses a node according to its algorithm, and reports the result back to apiServer
- apiServer calls controller-manager to have the chosen Node install the web service
- When kubelet receives the instruction, it notifies docker, and docker starts a pod for the web service
- To access the web service, kube-proxy provides the proxy through which the pod is reached
1.3.3 Common k8s terms and concepts
- Master: cluster control node; every cluster needs at least one master responsible for managing the cluster
- Node: workload node; the master assigns containers to these worker nodes, where the container runtime actually runs them
- Pod: the smallest unit kubernetes manages; containers run inside pods, and one pod can hold one or more containers
- Controller: the mechanism for managing pods, e.g. starting pods, stopping pods, scaling the number of pods
- Service: the unified entry point for a pod's external traffic; behind it, it maintains multiple pods of the same kind
- Label: labels used to classify pods; pods of the same kind carry the same labels
- NameSpace: namespaces, used to isolate pod environments
1.3.4 The k8s layered architecture
- Core layer: Kubernetes' core functionality; exposes APIs for building higher-level applications and provides a pluggable application execution environment internally
- Application layer: deployment (stateless applications, stateful applications, batch jobs, clustered applications, etc.) and routing (service discovery, DNS resolution, etc.)
- Management layer: system metrics (infrastructure, container and network metrics), automation (auto-scaling, dynamic provisioning, etc.) and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.)
- Interface layer: the kubectl command-line tool, client SDKs and cluster federation
- Ecosystem: the large ecosystem of container cluster management and scheduling above the interface layer, which falls into two categories:
- Outside Kubernetes: logging, monitoring, configuration management, CI, CD, Workflow, FaaS, OTS applications, ChatOps, etc.
- Inside Kubernetes: CRI, CNI, CVI, image registries, Cloud Provider, and the configuration and management of the cluster itself, etc.
2. Setting up a k8s cluster
2.1 Container management approaches in k8s
A K8S cluster can be created in 3 ways:
containerd:
the default runtime K8S uses when creating a cluster
docker:
Docker has the widest adoption; although K8S removed kubelet's built-in docker support after version 1.24, a cluster can still be created with docker through cri-docker
cri-o:
CRI-O is the most direct way for Kubernetes to create containers; creating the cluster relies on the cri-o plugin.
Both the docker and cri-o approaches require setting kubelet's startup parameters accordingly
2.2 Deploying the k8s environment
Kubernetes documentation (Chinese site): Kubernetes
Lab environment:
|------------|----------------|------------------|
| Hostname | IP | Role |
| harbor | 172.25.254.150 | harbor registry |
| k8s-master | 172.25.254.100 | Master, k8s control node |
| k8s-node1 | 172.25.254.10 | Worker, k8s worker node |
| k8s-node2 | 172.25.254.20 | Worker, k8s worker node |
- Disable selinux and the firewall on all nodes
- Synchronize time and name resolution on all nodes
- Install docker-ce on all nodes
- Disable swap on all nodes, remembering to comment out its entry in /etc/fstab
2.2.1 Disable all swap and configure local name resolution
Check the active swap:
[root@k8s-master ~]# swapon -s
Turn it off:
[root@k8s-master ~]# swapoff -a
You can also mask the swap unit with systemctl:
[root@k8s-master ~]# systemctl mask <your swap unit name>
[root@k8s-master ~]# free -m
total used free shared buff/cache available
Mem: 1935 1379 154 15 566 555
Swap: 0 0 0
[root@k8s-master ~]# vim /etc/fstab
Comment out the boot-time entry:
#/dev/mapper/rhel-swap none swap defaults 0 0
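The fstab entry can also be commented out non-interactively with sed. A minimal sketch, run here against a scratch copy (the path /tmp/fstab.demo and its contents are made up for illustration) so the pattern can be checked before touching the real /etc/fstab:

```shell
# Build a scratch fstab to test the pattern on (contents are illustrative).
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/rhel-root   /       xfs     defaults        0 0
/dev/mapper/rhel-swap   none    swap    defaults        0 0
EOF

# Comment out any uncommented line whose fields include "swap".
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /tmp/fstab.demo

grep swap /tmp/fstab.demo   # the swap entry is now commented out
```

Once the result looks right, the same sed expression can be pointed at /etc/fstab itself.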
Local name resolution:
[root@k8s-master ~]# vim /etc/hosts
172.25.254.100 k8s-master
172.25.254.10 k8s-node1
172.25.254.20 k8s-node2
172.25.254.150 docker-harbor.timingding.org reg.timingding.org
2.2.2 Install docker
Install it from a package repository:
[root@k8s-master ~]# vim /etc/yum.repos.d/docker.repo
[docker]
name=docker
baseurl=https://mirrors.aliyun.com/docker-ce/linux/rhel/9/x86_64/stable/
gpgcheck=0
Configure the registry mirror for image pulls:
[root@k8s-master ~]# vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://reg.timingding.org"]
}
[root@k8s-master ~]# dnf install docker-ce -y
[root@k8s-master ~]# systemctl enable --now docker
Set docker's cgroup driver to systemd on all nodes (only needed on RHEL 7; RHEL 9 already defaults to systemd):
[root@k8s-master ~]# vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://reg.timingding.org"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
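A broken daemon.json will stop the docker daemon from starting, so it is worth syntax-checking the file before restarting docker. A sketch on a scratch copy (the demo path and contents are illustrative, and python3 is assumed to be installed); on a real node, point the same check at /etc/docker/daemon.json:

```shell
# Scratch copy of a daemon.json to validate (contents are illustrative).
cat > /tmp/daemon.json.demo <<'EOF'
{
  "registry-mirrors": ["https://reg.timingding.org"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# json.tool exits non-zero on a syntax error, so this only prints OK for valid JSON.
python3 -m json.tool /tmp/daemon.json.demo > /dev/null && echo "daemon.json OK"
```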
2.2.3 Copy the harbor registry certificate and start docker
Create the certificate directory:
[root@k8s-master ~]# mkdir -p /etc/docker/certs.d/reg.timingding.org
Copy the certificate over from the harbor host with scp:
[root@docker-harbor ~]# scp /data/certs/timingding.org.crt root@172.25.254.100:/etc/docker/certs.d/reg.timingding.org/ca.crt
Log in to the harbor registry:
[root@k8s-master ~]# docker login reg.timingding.org
[root@k8s-master ~]# docker info ----- check docker info
2.2.4 Install the k8s deployment tools
[root@k8s-master ~]# vim /etc/yum.repos.d/k8s.repo
[k8s]
name=k8s
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm
gpgcheck=0
#Install the packages
[root@k8s-master ~]# dnf install kubelet-1.30.0 kubeadm-1.30.0 kubectl-1.30.0 -y
kubectl command completion:
[root@k8s-master ~]# dnf install bash-completion -y
[root@k8s-master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@k8s-master ~]# source ~/.bashrc
2.2.5 Install cri-docker
Install cri-docker and its dependencies:
[root@k8s-master ~]# dnf install libcgroup-0.41-19.el8.x86_64.rpm cri-dockerd-0.3.14-3.el8.x86_64.rpm -y
Configure the network plugin and the infra (pause) container image in the service file:
[root@k8s-master ~]# vim /lib/systemd/system/cri-docker.service
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=reg.timingding.org/k8s/pause:3.9 ##### set the network plugin and the infra container image
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
2.2.6 Pull the required k8s images on the Master node
Pull the images:
[root@k8s-master ~]# kubeadm config images pull \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.30.0 \
--cri-socket=unix:///var/run/cri-dockerd.sock
#Upload the images to the harbor registry
Tag the images:
[root@k8s-master ~]# docker images | awk '/google/{ print $1":"$2}' \
| awk -F "/" '{system("docker tag "$0" reg.timingding.org/k8s/"$3)}'
Push the images to the harbor registry:
[root@k8s-master ~]# docker images | awk '/k8s/{system("docker push "$1":"$2)}'
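The two awk pipelines above are dense, so here is a dry-run preview of what the retagging step does, without calling docker at all: the sample lines imitate `docker images` output (repository and tag columns; the values are made up), and `system(...)` is replaced with `print` so the generated commands are only displayed:

```shell
# Sample `docker images`-style lines (hypothetical) piped through the same awk
# logic as above; print shows the docker tag commands that would be executed.
printf '%s\n' \
  'registry.aliyuncs.com/google_containers/kube-apiserver   v1.30.0  aaa  1d  100MB' \
  'registry.aliyuncs.com/google_containers/pause            3.9      bbb  1d  700kB' |
awk '/google/{ print $1":"$2 }' |
awk -F "/" '{ print "docker tag "$0" reg.timingding.org/k8s/"$3 }'
```

The first awk joins repository and tag into `repo:tag`; the second splits on `/` and re-prefixes the last component with the private registry path.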
2.2.7 Initialize the cluster
Before initializing, make sure cri-docker and kubelet are both running.
Start the kubelet service:
[root@k8s-master ~]# systemctl start kubelet.service
Check the service status:
[root@k8s-master ~]# systemctl status kubelet.service
#Run the initialization command
[root@k8s-master ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 \
--image-repository reg.timingding.org/k8s \
--kubernetes-version v1.30.0 \
--cri-socket=unix:///var/run/cri-dockerd.sock
#Export the cluster config file variable
[root@k8s-master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@k8s-master ~]# source ~/.bash_profile
#The node is not Ready yet because the network plugin is not installed and its containers are not running
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady control-plane 4m25s v1.30.0
[root@k8s-master ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-647dc95897-2sgn8 0/1 Pending 0 6m13s
kube-system coredns-647dc95897-bvtxb 0/1 Pending 0 6m13s
kube-system etcd-k8s-master 1/1 Running 0 6m29s
kube-system kube-apiserver-k8s-master 1/1 Running 0 6m30s
kube-system kube-controller-manager-k8s-master 1/1 Running 0 6m29s
kube-system kube-proxy-fq85m 1/1 Running 0 6m14s
kube-system kube-scheduler-k8s-master 1/1 Running 0 6m29s
If the cluster join token is lost, a new one can be generated
(a token generated this way is not permanent; it is deleted automatically after 24h):
[root@k8s-master ~]# kubeadm token create --print-join-command
Previously generated tokens can be listed with kubeadm:
kubeadm token list
2.2.8 Install the flannel network plugin
#Download flannel's yaml deployment file
[root@k8s-master ~]# wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
Load the flannel images (or pull them):
[root@k8s-master ~]# docker load -i flannel-0.25.5.tag.gz
[root@k8s-master ~]# docker pull docker.io/flannel/flannel:v0.25.5
[root@k8s-master ~]# docker pull docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel1
Tag the images and push them to the harbor registry:
[root@k8s-master ~]# docker tag docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel1 reg.timingding.org/flannel/flannel-cni-plugin:v1.5.1-flannel1
[root@k8s-master ~]# docker push reg.timingding.org/flannel/flannel-cni-plugin:v1.5.1-flannel1
#Edit kube-flannel.yml to change where the images are pulled from
[root@k8s-master ~]# vim kube-flannel.yml
#The following lines need to change
[root@k8s-master ~]# grep -n image kube-flannel.yml
146: image: flannel/flannel:v0.25.5
173: image: flannel/flannel-cni-plugin:v1.5.1-flannel1
184: image: flannel/flannel:v0.25.5
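Instead of editing the three image lines by hand, one sed command can rewrite them all. A sketch against a scratch file (its name and contents just mirror the grep output above); on the real kube-flannel.yml, run the same sed against that file:

```shell
# Scratch copy containing the three image lines reported by grep above.
cat > /tmp/kube-flannel.demo.yml <<'EOF'
        image: flannel/flannel:v0.25.5
        image: flannel/flannel-cni-plugin:v1.5.1-flannel1
        image: flannel/flannel:v0.25.5
EOF

# Prefix every flannel image reference with the private registry.
sed -i 's|image: flannel/|image: reg.timingding.org/flannel/|' /tmp/kube-flannel.demo.yml

grep image /tmp/kube-flannel.demo.yml
```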
#Install the flannel network plugin
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
Now check with kubectl again;
if the master is in the Ready state, the deployment succeeded:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane 35h v1.30.0
All pods should be Running; if any of them is not, something is wrong, so delete it and reinstall:
[root@k8s-master ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-dcchb 1/1 Running 3 (13h ago) 34h
kube-flannel kube-flannel-ds-nnzlg 1/1 Running 4 (13h ago) 34h
kube-flannel kube-flannel-ds-wczj2 1/1 Running 0 48m
kube-system coredns-6cbb5ddd55-c978j 1/1 Running 3 (13h ago) 35h
kube-system coredns-6cbb5ddd55-hbcs4 1/1 Running 3 (13h ago) 35h
kube-system etcd-k8s-master 1/1 Running 3 (13h ago) 35h
kube-system kube-apiserver-k8s-master 1/1 Running 3 (13h ago) 35h
kube-system kube-controller-manager-k8s-master 1/1 Running 3 (13h ago) 35h
kube-system kube-proxy-2rkpf 1/1 Running 3 (13h ago) 34h
kube-system kube-proxy-76q54 1/1 Running 3 (13h ago) 35h
kube-system kube-proxy-7tvhz 1/1 Running 0 48m
kube-system kube-scheduler-k8s-master 1/1 Running 3 (13h ago) 35h
2.2.9 节点扩容
跟上面masater部署差不多,不需要上传镜像,安装k8s,cri-docker。
在所有的worker节点中
1 确认部署好以下内容
2 禁用swap
3 安装:
-
kubelet-1.30.0
-
kubeadm-1.30.0
-
kubectl-1.30.0
-
docker-ce
-
cri-dockerd
4 修改cri-dockerd启动文件添加
-
--network-plugin=cni
-
--pod-infra-container-image=reg.timinglee.org/k8s/pause:3.9
5 启动服务
-
kubelet.service
-
cri-docker.service
以上信息确认完毕后即可加入集群
If the join token is lost, it can be regenerated:
[root@k8s-master ~]# kubeadm token create --print-join-command
Run the join command generated on the master on both worker nodes:
[root@k8s-node1 & 2 ~]# kubeadm join 172.25.254.100:6443 --token y2gkbl.ev9wtxwc2t2p0j2t --discovery-token-ca-cert-hash sha256:9c47c30926de814a4ae59f5275d9cae00b51ab77a1cce1d48ea42ea59efe33ea --cri-socket=unix:///var/run/cri-dockerd.sock
All nodes need to be in the Ready state:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane 35h v1.30.0
k8s-node1 Ready <none> 47m v1.30.0
k8s-node2 Ready <none> 34h v1.30.0
[root@k8s-master ~]# kubectl -n kube-flannel get pods
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-dcchb 1/1 Running 3 (12h ago) 34h
kube-flannel-ds-nnzlg 1/1 Running 4 (12h ago) 33h
kube-flannel-ds-wczj2 1/1 Running 0 2m13s
OK, k8s is now deployed
If the cluster runs into problems:
Delete the network plugin and its config file:
kubectl delete -f kube-flannel.yml
Reset the initialization:
kubeadm reset --cri-socket=unix:///var/run/cri-dockerd.sock
Then initialize again.
3. k8s resource management
3.1 Introduction to resource management
- In kubernetes, everything is abstracted as a resource, and users manage kubernetes by operating on resources.
- kubernetes is essentially a cluster system on which users can deploy all kinds of services
- Deploying a service really means running containers in the kubernetes cluster, with the desired programs running inside those containers.
- kubernetes' smallest unit of management is the pod, not the container; containers can only be placed inside Pods.
- kubernetes generally does not manage Pods directly either, but manages them through Pod controllers.
- Access to the services inside a Pod is provided by the kubernetes Service resource.
- Persistence of a Pod's data is provided by kubernetes' various storage systems.
3.2 Resource management approaches
- Imperative object management: operate on kubernetes resources directly with commands
kubectl run nginx-pod --image=nginx:latest --port=80
- Imperative object configuration: operate on kubernetes resources with commands plus configuration files
kubectl create/patch -f nginx-pod.yaml
- Declarative object configuration: operate on kubernetes resources with the apply command plus configuration files
kubectl apply -f nginx-pod.yaml
|---------|----------|---------|------------------|
| Type | Suited for | Pros | Cons |
| Imperative object management | testing | simple | only operates on live objects; no audit trail |
| Imperative object configuration | development | auditable and traceable | large projects mean many config files, which is cumbersome |
| Declarative object configuration | development | supports operating on whole directories | hard to debug when something unexpected happens |
3.2.1 Imperative object management
kubectl is the kubernetes cluster's command-line tool; with it you can manage the cluster itself and install and deploy containerized applications on it
The kubectl syntax is:
kubectl [command] [type] [name] [flags]
command: the operation to perform on the resource, e.g. create, get, delete
type: the resource type, e.g. deployment, pod, service
name: the name of the resource; names are case-sensitive
flags: extra optional flags
# List all pods
kubectl get pod
# Show a specific pod
kubectl get pod pod_name
# Show a specific pod in yaml format
kubectl get pod pod_name -o yaml
3.2.2 Resource types
Everything in kubernetes is abstracted as a resource
kubectl api-resources
Common resource types:
Common kubectl commands:
3.3 Basic command tests
Generate a yaml file for declarative object configuration:
[root@k8s-master yaml]# kubectl run testpod1 --image nginx --dry-run=client -o yaml > testpod1.yml
Check the version:
[root@k8s-master ~]# kubectl version
Client Version: v1.30.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.0
[root@k8s-master ~]#
List all resource types:
[root@k8s-master ~]# kubectl api-resources
Create a webcluster controller with 2 pods:
[root@k8s-master ~]# kubectl create deployment webcluster --image nginx --replicas 2
deployment.apps/webcluster created
[root@k8s-master ~]#
View the controller:
[root@k8s-master ~]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
webcluster 2/2 2 2 59s
[root@k8s-master ~]#
Edit the controller configuration in place:
[root@k8s-master ~]# kubectl edit deployments.apps webcluster
Change the controller configuration with a patch:
[root@k8s-master ~]# kubectl patch deployments.apps webcluster -p '{"spec":{"replicas":4}}'
deployment.apps/webcluster patched
[root@k8s-master ~]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
webcluster 4/4 4 4 3m56s
[root@k8s-master ~]#
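The inline JSON handed to -p gets error-prone as patches grow. One option (a sketch; the file name is made up) is to keep the patch in a file, syntax-check it, and pass it to kubectl with --patch-file:

```shell
# Write the strategic-merge patch to a file (file name is illustrative).
cat > /tmp/replicas-patch.json <<'EOF'
{"spec": {"replicas": 4}}
EOF

# Syntax-check the patch first (assumes python3 is available).
python3 -m json.tool /tmp/replicas-patch.json

# Then, against the cluster:
# kubectl patch deployments.apps webcluster --patch-file /tmp/replicas-patch.json
```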
#Delete the resource
[root@k8s-master ~]# kubectl delete deployments.apps webcluster
deployment.apps "webcluster" deleted
[root@k8s-master ~]# kubectl get deployments.apps
No resources found in default namespace.
[root@k8s-master ~]#
3.4 Run and debug command examples
Run a pod:
[root@k8s-master ~]# kubectl run ding --image timinglee/myapp:v1
pod/ding created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
ding 1/1 Running 0 6s
[root@k8s-master ~]#
Expose the port:
[root@k8s-master ~]# kubectl expose pod ding --port 80 --target-port 80
service/ding exposed
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ding ClusterIP 10.110.78.55 <none> 80/TCP 68s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 37h
[root@k8s-master ~]#
[root@k8s-master ~]# curl 10.110.78.55
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]#
To expose it outside the cluster:
[root@k8s-master ~]# kubectl edit service ding
apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2024-09-04T03:55:40Z"
labels:
run: ding
name: ding
namespace: default
resourceVersion: "79784"
uid: d6cdc1b2-8d25-4163-8fd0-4d1b1c032d86
spec:
clusterIP: 10.110.78.55
clusterIPs:
- 10.110.78.55
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
run: ding
sessionAffinity: None
type: NodePort -------------- changed to NodePort
status:
loadBalancer: {}
[root@k8s-master ~]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ding NodePort 10.110.78.55 <none> 80:31090/TCP 48m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 38h
[root@k8s-master ~]#
[root@k8s-master ~]# curl 172.25.254.100:31090
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]#
Show detailed resource information:
[root@k8s-master ~]# kubectl describe pods ding
View the resource's logs:
[root@k8s-master ~]# kubectl logs pods/ding
10.244.0.0 - - [04/Sep/2024:03:57:25 +0000] "GET / HTTP/1.1" 200 65 "-" "curl/7.76.1" "-"
10.244.0.0 - - [04/Sep/2024:04:46:29 +0000] "GET / HTTP/1.1" 200 65 "-" "curl/7.76.1" "-"
[root@k8s-master ~]#
Run a pod in interactive mode:
[root@k8s-master ~]# kubectl run -it testpod --image busyboxplus
If you don't see a command prompt, try pressing enter.
/ #
Ctrl+p then q detaches without stopping the pod
To get back in:
[root@k8s-master ~]# kubectl attach pods/testpod -it
If you don't see a command prompt, try pressing enter.
/ #
Copy a file into the pod:
[root@k8s-master ~]# kubectl cp /etc/passwd ding:/
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl exec pods/ding -c ding -it -- /bin/sh
/ # ls
bin etc lib mnt proc run srv tmp var
dev home media passwd root sbin sys usr
/ #
3.5 Advanced command examples
Generate a yaml template file with a command:
[root@k8s-master ~]# kubectl run ding --image timinglee/myapp:v1 --dry-run=client -o yaml > ding.yml
Generate a controller manifest with a command:
[root@k8s-master ~]# kubectl create deployment ding --image timinglee/myapp:v1 --dry-run=client -o yaml > deployment-ding.yml
Apply it:
[root@k8s-master ~]# kubectl apply -f deployment-ding.yml
deployment.apps/ding created
[root@k8s-master ~]# kubectl get deployments.apps ding
NAME READY UP-TO-DATE AVAILABLE AGE
ding 2/2 2 2 13s
[root@k8s-master ~]#
Delete it:
[root@k8s-master ~]# kubectl delete -f deployment-ding.yml
deployment.apps "ding" deleted
[root@k8s-master ~]# kubectl get deployments.apps
No resources found in default namespace.
[root@k8s-master ~]#
Manage resource labels:
[root@k8s-master ~]# kubectl apply -f deployment-ding.yml
deployment.apps/ding created
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
ding-5799974786-2fz8p 1/1 Running 0 18s app=ding,pod-template-hash=5799974786
ding-5799974786-bdm88 1/1 Running 0 18s app=ding,pod-template-hash=5799974786
[root@k8s-master ~]# kubectl label pods ding-5799974786-2fz8p app=timingding --overwrite
pod/ding-5799974786-2fz8p labeled
[root@k8s-master ~]#
The deployment notices that one pod with label app=ding is missing, and recreates it:
[root@k8s-master ~]# kubectl label pods ding-5799974786-2fz8p app=timingding --overwrite
pod/ding-5799974786-2fz8p labeled
[root@k8s-master ~]# kubectl label pods ding-5799974786-2fz8p app=ding --overwrite
pod/ding-5799974786-2fz8p labeled
[root@k8s-master ~]#
The deployment now sees one pod with label app=ding too many, and shuts one down
Remove a label:
[root@k8s-master ~]# kubectl label pods ding-5799974786-2fz8p app-
pod/ding-5799974786-2fz8p unlabeled
[root@k8s-master ~]#
4. What is a pod and how pods are managed
- A Pod is the smallest deployable unit of computing that Kubernetes can create and manage
- A Pod represents one running process in the cluster; every pod has a unique IP.
- A pod is like a pea pod: it contains one or more containers (usually docker)
- The containers in a pod share the IPC, Network and UTS namespaces.
4.1 Creating standalone pods
Pros:
High flexibility:
- Every Pod setting can be controlled precisely: container image, resource limits, environment variables, command and arguments, and so on, to fit specific application needs.
Convenient for learning and debugging:
- Very helpful for learning how Kubernetes works; creating Pods by hand gives a deep view into Pod structure and configuration, and during debugging the Pod's settings can be observed and adjusted directly.
Fits special scenarios:
- For one-off tasks, quick proofs of concept, or specific setups in resource-constrained environments, creating Pods by hand can be an effective approach.
Cons:
Complex to manage:
- Creating and maintaining large numbers of Pods by hand quickly becomes tedious and time-consuming, and automated scaling, failure recovery and similar operations are hard to achieve.
Lacks advanced features:
- Kubernetes' advanced features such as automated deployment, rolling updates and service discovery are unavailable, which can make deployment and management inefficient.
Poor maintainability:
- Hand-created Pods need manual intervention for version updates and config changes, are error-prone, and are hard to keep consistent. Declarative configuration or Kubernetes deployment tooling makes maintenance and updates far easier.
#List all pods
[root@k8s-master ~]# kubectl get pods
No resources found in default namespace.
#Create a pod named ding
[root@k8s-master ~]# kubectl run ding --image nginx
pod/ding created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
ding 1/1 Running 0 6s
4.2 Managing pods with controllers
High availability and reliability:
- Automatic failure recovery: if a Pod fails or is deleted, the controller automatically creates a new Pod to maintain the desired replica count, keeping the application available and reducing outages caused by a single Pod failure.
- Health checks and self-healing: controllers can be configured with health checks on Pods (such as liveness and readiness probes). If a Pod is unhealthy, the controller takes appropriate action, such as restarting it or deleting and recreating it, to keep the application running.
Scalability:
- Easy scaling: the number of Pods can be increased or decreased with a simple command or config change to match the workload; for example, scale out quickly under high traffic and scale in during quiet periods to save resources.
- Horizontal Pod Autoscaling (HPA): the Pod count can be adjusted automatically based on metrics (CPU utilization, memory usage, or application-specific metrics) for dynamic resource allocation and cost optimization.
Version management and updates:
- Rolling updates: with controllers such as Deployment, old-version Pods can be gradually replaced by new ones, keeping the application available throughout the update; the rate and strategy can be controlled to minimize user impact.
- Rollback: if an update goes wrong, rolling back to the previous stable version is easy, preserving stability and reliability.
Declarative configuration:
- Concise configuration: deployment needs are defined in declarative YAML or JSON files, which are easy to understand, maintain and version-control, and convenient for team collaboration.
- Desired-state management: only the desired state needs to be declared (replica count, container image, etc.); the controller reconciles the actual state to match it, with no manual creation and deletion of individual Pods, which improves efficiency.
Service discovery and load balancing:
- Automatic registration and discovery: a Kubernetes Service automatically discovers the Pods managed by a controller and routes traffic to them, making service discovery and load balancing simple and reliable without manually configuring a load balancer.
- Traffic distribution: requests can be distributed across Pods according to different policies (round robin, random, etc.) for better performance and availability.
Consistency across environments:
- Consistent deployments: the same controllers and configuration can deploy the application in different environments (dev, test, prod), ensuring consistent behavior, reducing deployment drift and errors, and improving development and operations efficiency.
Example:
Create it from a yaml file:
[root@k8s-master ~]# vim deployment-ding.yml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: ding
name: ding
spec:
replicas: 2
selector:
matchLabels:
app: ding
strategy: {}
template:
metadata:
labels:
app: ding
spec:
containers:
- image: timinglee/myapp:v1
name: myapp
[root@k8s-master ~]# kubectl apply -f deployment-ding.yml
deployment.apps/ding created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
ding-5799974786-8w2xw 1/1 Running 0 6s
ding-5799974786-nzpj2 1/1 Running 0 6s
Expose the port:
[root@k8s-master ~]# kubectl expose deployment ding --port 80 --target-port 80
service/ding exposed
[root@k8s-master ~]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ding ClusterIP 10.111.191.171 <none> 80/TCP 13s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 44h
Inspect the exposed service:
[root@k8s-master ~]# kubectl describe service ding
Name: ding
Namespace: default
Labels: app=ding
Annotations: <none>
Selector: app=ding
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.111.191.171
IPs: 10.111.191.171
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.1.30:80,10.244.4.29:80 ---- load balancing across these
Session Affinity: None
Events: <none>
Check the IPs:
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ding-5799974786-8w2xw 1/1 Running 0 8m1s 10.244.4.29 k8s-node2 <none> <none>
ding-5799974786-nzpj2 1/1 Running 0 8m1s 10.244.1.30 k8s-node1 <none> <none>
[root@k8s-master ~]#
If port 80 cannot be reached here, the network plugin may have crashed; delete it, restart it and try again:
kubectl get pods -n kube-system
Check the responses:
[root@k8s-master ~]# curl 10.111.191.171/hostname.html
ding-5799974786-r8krh
[root@k8s-master ~]# curl 10.111.191.171/hostname.html
ding-5799974786-r8krh
[root@k8s-master ~]# curl 10.111.191.171/hostname.html
ding-5799974786-r8krh
[root@k8s-master ~]# curl 10.111.191.171/hostname.html
ding-5799974786-2mrzz
[root@k8s-master ~]# curl 10.111.191.171/hostname.html
ding-5799974786-2mrzz
[root@k8s-master ~]# curl 10.111.191.171/hostname.html
ding-5799974786-r8krh
Scale ding out:
[root@k8s-master ~]# kubectl scale deployment ding --replicas 4
deployment.apps/ding scaled
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
ding-5799974786-2mrzz 1/1 Running 0 8m28s
ding-5799974786-2tchv 1/1 Running 0 102s
ding-5799974786-2zxq4 1/1 Running 0 102s
ding-5799974786-r8krh 1/1 Running 0 8m28s
Scale ding in:
[root@k8s-master ~]# kubectl scale deployment ding --replicas 2
deployment.apps/ding scaled
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
ding-5799974786-2mrzz 1/1 Running 0 10m
ding-5799974786-r8krh 1/1 Running 0 10m
[root@k8s-master ~]#
###########################################################################################################
Create it imperatively:
[root@k8s-master ~]# kubectl create deployment ding --image timinglee/myapp:v1 --replicas 2
deployment.apps/ding created
[root@k8s-master ~]#
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
ding-5799974786-5stbs 1/1 Running 0 7s
ding-5799974786-hk8cn 1/1 Running 0 7s
[root@k8s-master ~]# kubectl expose deployment ding --port 80 --target-port 80
service/ding exposed
[root@k8s-master ~]# kubectl get service ding
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ding ClusterIP 10.109.3.193 <none> 80/TCP 7s
[root@k8s-master ~]# kubectl describe service ding
Name: ding
Namespace: default
Labels: app=ding
Annotations: <none>
Selector: app=ding
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.109.3.193
IPs: 10.109.3.193
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.1.39:80,10.244.4.40:80
Session Affinity: None
Events: <none>
[root@k8s-master ~]# curl 10.109.3.193
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.109.3.193/hostname.html
ding-5799974786-5stbs
[root@k8s-master ~]# curl 10.109.3.193/hostname.html
ding-5799974786-hk8cn
[root@k8s-master ~]#
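To see at a glance how requests spread across the pods, the returned hostnames can be counted with sort | uniq -c. In this sketch the curl loop is replaced by sample hostnames (taken from the output above) so the counting pipeline can be tried anywhere; on the cluster, feed it from curl as shown in the comment:

```shell
# Sample responses standing in for repeated curls of /hostname.html.
printf '%s\n' \
  ding-5799974786-5stbs \
  ding-5799974786-hk8cn \
  ding-5799974786-5stbs |
sort | uniq -c

# Against the cluster, the real thing would be something like:
# for i in $(seq 10); do curl -s 10.109.3.193/hostname.html; done | sort | uniq -c
```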
4.3 Updating application versions
Create pods with a controller:
[root@k8s-master ~]# kubectl create deployment ding --image timinglee/myapp:v1 --replicas 2
deployment.apps/ding created
[root@k8s-master ~]#
Expose the port:
[root@k8s-master ~]# kubectl expose deployment ding --port 80 --target-port 80
service/ding exposed
Access the service:
[root@k8s-master ~]# curl 10.109.3.193
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.109.3.193
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]#
View the revision history:
[root@k8s-master ~]# kubectl rollout history deployment ding
deployment.apps/ding
REVISION CHANGE-CAUSE
1 <none>
[root@k8s-master ~]#
Update the controller's image version:
[root@k8s-master ~]# kubectl set image deployments/ding myapp=timinglee/myapp:v2
deployment.apps/ding image updated
Check the history again:
[root@k8s-master ~]# kubectl rollout history deployment ding
deployment.apps/ding
REVISION CHANGE-CAUSE
1 <none>
2 <none>
[root@k8s-master ~]#
Test the served content:
[root@k8s-master ~]# curl 10.109.3.193
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Roll back:
[root@k8s-master ~]# kubectl rollout undo deployment ding --to-revision 1
Access it again:
[root@k8s-master ~]# curl 10.109.3.193
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.109.3.193
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.109.3.193
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]#
4.4 Deploying applications with yaml files
4.4.1 Advantages of deploying with yaml files
Declarative configuration:
- Clear expression of desired state: the deployment needs are described declaratively, including replica count, container configuration, network settings and so on, which makes the configuration easy to understand and maintain and lets the expected state of the application be inspected at any time.
- Repeatability and version control: configuration files can be version-controlled, ensuring consistent deployments across environments; it is easy to roll back to an earlier version or to reuse the same configuration elsewhere.
- Team collaboration: files are easy for team members to share, review and modify, which improves deployment reliability and stability.
Flexibility and extensibility:
- Rich configuration options: every kind of Kubernetes resource (Deployment, Service, ConfigMap, Secret, etc.) can be configured in detail through YAML files and highly customized to the application's needs.
- Composition and extension: the configuration for multiple resources can be combined in one or more YAML files to build complex deployment architectures, and new resources can be added or existing ones modified as requirements change.
Tool integration:
- CI/CD integration: YAML configuration files integrate with continuous-integration and continuous-deployment (CI/CD) tooling for automated deployment; for example, a deployment can be triggered automatically after a commit, using the files to deploy to different environments.
- Command-line support: the kubectl command-line tool has excellent support for YAML configuration files, making it convenient to apply, update and delete configurations, and other tools can validate and analyze the files to ensure correctness and safety.
4.4.2 资源清单
|------------------------------------------------|---------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 参数名称 | 类型 | 参数说明 |
| version | String | 这里是指的是K8S API的版本,目前基本上是v1,可以用kubectl api-versions命令查询 |
| kind | String | 这里指的是yaml文件定义的资源类型和角色,比如:Pod |
| metadata | Object | 元数据对象,固定值就写metadata |
| metadata.name | String | 元数据对象的名字,这里由我们编写,比如命名Pod的名字 |
| metadata.namespace | String | 元数据对象的命名空间,由我们自身定义 |
| Spec | Object | 详细定义对象,固定值就写Spec |
| spec.containers[] | list | 这里是Spec对象的容器列表定义,是个列表 |
| spec.containers[].name | String | 这里定义容器的名字 |
| spec.containers[].image | string | 这里定义要用到的镜像名称 |
| spec.containers[].imagePullPolicy | String | 定义镜像拉取策略,有三个值可选: (1) Always: 每次都尝试重新拉取镜像 (2) IfNotPresent:如果本地有镜像就使用本地镜像 (3) )Never:表示仅使用本地镜像 |
| spec.containers[].command[] | list | 指定容器运行时启动的命令,若未指定则运行容器打包时指定的命令 |
| spec.containers[].args[] | list | 指定容器运行参数,可以指定多个 |
| spec.containers[].workingDir | String | 指定容器工作目录 |
| spec.containers[].volumeMounts[] | list | 指定容器内部的存储卷配置 |
| spec.containers[].volumeMounts[].name | String | 指定可以被容器挂载的存储卷的名称 |
| spec.containers[].volumeMounts[].mountPath | String | 指定可以被容器挂载的存储卷的路径 |
| spec.containers[].volumeMounts[].readOnly | String | 设置存储卷路径的读写模式,ture或false,默认为读写模式 |
| spec.containers[].ports[] | list | 指定容器需要用到的端口列表 |
| spec.containers[].ports[].name | String | 指定端口名称 |
| spec.containers[].ports[].containerPort        | Number  | 指定容器需要监听的端口号                                                                                                                                                                             |
| spec.containers[].ports[].hostPort             | Number  | 指定容器所在主机需要监听的端口号,默认跟上面containerPort相同。注意:设置了hostPort后,同一台主机上无法启动该容器的多个副本(主机端口会冲突)                                                                                                          |
| spec.containers[].ports[].protocol | String | 指定端口协议,支持TCP和UDP,默认值为 TCP |
| spec.containers[].env[] | list | 指定容器运行前需设置的环境变量列表 |
| spec.containers[].env[].name | String | 指定环境变量名称 |
| spec.containers[].env[].value | String | 指定环境变量值 |
| spec.containers[].resources | Object | 指定资源限制和资源请求的值(这里开始就是设置容器的资源上限) |
| spec.containers[].resources.limits | Object | 指定设置容器运行时资源的运行上限 |
| spec.containers[].resources.limits.cpu | String | 指定CPU的限制,单位为核心数,1=1000m |
| spec.containers[].resources.limits.memory      | String  | 指定内存的限制,单位为MiB、GiB                                                                                                                                                                       |
| spec.containers[].resources.requests | Object | 指定容器启动和调度时的限制设置 |
| spec.containers[].resources.requests.cpu | String | CPU请求,单位为core数,容器启动时初始化可用数量 |
| spec.containers[].resources.requests.memory    | String  | 内存请求,单位为MiB、GiB,容器启动时的初始化可用数量                                                                                                                                                            |
| spec.restartPolicy                             | string  | 定义Pod的重启策略,默认值为Always. (1)Always: Pod一旦终止运行,无论容器是如何终止的,kubelet服务都将重启它 (2)OnFailure: 只有Pod以非零退出码终止时,kubelet才会重启该容器。如果容器正常结束(退出码为0),则kubelet将不会重启它 (3) Never: Pod终止后,kubelet将退出码报告给Master,不会重启该Pod |
| spec.nodeSelector | Object | 定义Node的Label过滤标签,以key:value格式指定 |
| spec.imagePullSecrets | Object | 定义pull镜像时使用secret名称,以name:secretkey格式指定 |
| spec.hostNetwork | Boolean | 定义是否使用主机网络模式,默认值为false。设置true表示使用宿主机网络,不使用docker网桥,同时设置了true将无法在同一台宿主机 上启动第二个副本 |
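上表中 CPU 以"核心数"计量(1 = 1000m),内存使用 Ki/Mi/Gi 这类二进制单位。下面用一段 Python 代码示意这种数量的换算关系(假设性的简化实现,仅供理解,真实解析请参考 k8s 的 resource.Quantity):

```python
def parse_cpu(q: str) -> float:
    # CPU 数量:1 核 = 1000m,例如 "500m" -> 0.5,"1" -> 1.0
    if q.endswith("m"):
        return int(q[:-1]) / 1000
    return float(q)

def parse_memory(q: str) -> int:
    # 内存数量:支持 Ki/Mi/Gi 二进制单位,返回字节数
    units = {"Ki": 1024, "Mi": 1024 ** 2, "Gi": 1024 ** 3}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return int(q[:-len(suffix)]) * factor
    return int(q)

print(parse_cpu("500m"))      # 0.5
print(parse_memory("200Mi"))  # 209715200
```

所以后文示例中 requests 写 cpu: 500m 和写 cpu: 0.5 是等价的。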
如何获得资源的帮助:
kubectl explain pod.spec.containers
namespace参数:
[root@k8s-master yaml]# kubectl create namespace ding
namespace/ding created
[root@k8s-master yaml]# kubectl get namespaces
NAME STATUS AGE
default Active 46h
ding Active 6s
kube-flannel Active 80m
kube-node-lease Active 46h
kube-public Active 46h
kube-system Active 46h
[root@k8s-master yaml]#
[root@k8s-master yaml]# vim pod.yml
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: ding ------ 添加这个
labels:
app: timingding
name: timingding
spec:
replicas: 2
selector:
matchLabels:
app: timingding
template:
metadata:
labels:
app: timingding
spec:
containers:
- image: timinglee/myapp:v1
name: myapp
[root@k8s-master yaml]# kubectl apply -f pod.yml
deployment.apps/timingding created
在默认命名空间的deployments里面看不到刚创建的资源:
[root@k8s-master yaml]# kubectl get deployments.apps
No resources found in default namespace.
在namespace里面,它是有隔离性的:
[root@k8s-master yaml]# kubectl -n ding get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
timingding 2/2 2 2 3m46s
[root@k8s-master yaml]#
删除命名空间:
[root@k8s-master yaml]# kubectl delete namespaces ding
namespace "ding" deleted
4.5 编写示例
4.5.1 示例1:运行简单的单个容器pod
用命令获取yaml模板:
[root@k8s-master yaml]# kubectl run timingding --image timinglee/myapp:v1 --dry-run=client -o yaml > pod.yml
[root@k8s-master yaml]#
可以查看所有资源:
[root@k8s-master yaml]# kubectl api-resources
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: myapp1 #pod标签
name: ding #pod名称
spec:
containers:
- image: myapp:v1 #pod镜像
name: ding #容器名称
4.5.2 示例2:运行多个容器pod
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: timingding
name: timingding
spec:
containers:
- image: timinglee/myapp:v1
name: timingding
- image: busybox:latest
name: busybox
command: ["/bin/sh","-c","sleep 1000000"]
[root@k8s-master yaml]# kubectl apply -f pod.yml
pod/timingding created
[root@k8s-master yaml]# kubectl get pods timingding
NAME READY STATUS RESTARTS AGE
timingding 2/2 Running 0 18s
[root@k8s-master yaml]#
pod内的两个容器是共用同一个网络栈的
在一个pod中开启多个容器时一定要确保容器彼此不能互相干扰,有冲突的话会运行失败
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: timingding
name: timingding
spec:
containers:
- image: nginx:latest
name: web1
- image: nginx:latest
name: web2
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/timingding created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timingding 1/2 Error 1 (14s ago) 18s
#查看日志
[root@k8s-master ~]# kubectl logs timingding web2
2024/08/31 12:43:20 [emerg] 1#1: bind() to [::]:80 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
2024/08/31 12:43:20 [notice] 1#1: try again to bind() after 500ms
2024/08/31 12:43:20 [emerg] 1#1: still could not bind()
nginx: [emerg] still could not bind()
4.5.3 示例3:理解pod间的网络整合
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: timingding
name: timingding
spec:
containers:
- image: timinglee/myapp:v1
name: timingding
- image: busyboxplus:latest
name: busyboxplus
command: ["/bin/sh","-c","sleep 1000000"]
[root@k8s-master yaml]# kubectl apply -f pod.yml
pod/timingding created
[root@k8s-master yaml]# kubectl exec pods/timingding -c busyboxplus -it -- /bin/sh
/ # curl localhost
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
/ #
busyboxplus中是没有Web服务的,现在却能访问到,是因为它们是共享的同一个网络栈。
4.5.4 示例4:端口映射
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: timingding
name: timingding
spec:
containers:
- image: timinglee/myapp:v1
name: timingding
ports:
- name: webport
containerPort: 80
hostPort: 80
protocol: TCP
[root@k8s-master yaml]# kubectl apply -f pod.yml
pod/timingding created
[root@k8s-master yaml]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
timingding 1/1 Running 0 66s 10.244.1.46 k8s-node1 <none> <none>
[root@k8s-master yaml]# curl 10.244.1.46
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master yaml]#
[root@k8s-master yaml]# curl 172.25.254.10
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master yaml]#
4.5.5 示例5:如何设定环境变量
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: timingding
name: timingding
spec:
containers:
- image: busybox:latest
name: busybox
command: ["/bin/sh","-c","echo $NAME;sleep 100000"]
env:
- name: NAME
value: ding
[root@k8s-master yaml]# kubectl apply -f pod.yml
pod/timingding created
[root@k8s-master yaml]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timingding 1/1 Running 0 18s
[root@k8s-master yaml]# kubectl logs pods/timingding busybox
ding
[root@k8s-master yaml]#
4.5.6 示例6:资源限制
资源限制会影响pod的QoS Class资源优先级,优先级为Guaranteed > Burstable > BestEffort
QoS(Quality of Service)即服务质量
|-----------------|----------------|
| 资源设定 | 优先级类型 |
| 资源限定未设定 | BestEffort |
| 资源限定设定且最大和最小不一致 | Burstable |
| 资源限定设定且最大和最小一致 | Guaranteed |
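上表的判定规则可以用一段 Python 代码示意(假设性的简化实现:真实规则是按 pod 内每个容器的 cpu 和 memory 逐项判定,详见 kubelet 的 QoS 判定逻辑):

```python
def qos_class(requests: dict, limits: dict) -> str:
    # 完全未设定资源限定 -> BestEffort
    if not requests and not limits:
        return "BestEffort"
    # 设定了资源且每项的 requests 与 limits 都一致 -> Guaranteed
    if requests and requests == limits:
        return "Guaranteed"
    # 设定了资源但最大和最小不一致 -> Burstable
    return "Burstable"

print(qos_class({}, {}))                                   # BestEffort
print(qos_class({"cpu": "500m"}, {"cpu": "1"}))            # Burstable
print(qos_class({"cpu": "500m", "memory": "100Mi"},
                {"cpu": "500m", "memory": "100Mi"}))       # Guaranteed
```

这与下面三个实验的结果一一对应:不写 resources 得到 BestEffort,limits 大于 requests 得到 Burstable,两者相等得到 Guaranteed。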
默认是没有做资源限制的:
[root@k8s-master yaml]# kubectl describe pods timingding
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class:                   BestEffort ---- 默认的是这个,它的优先级最低,资源不够用的时候会最先被驱逐
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: timingding
name: timingding
spec:
containers:
- image: timinglee/myapp:v1
name: webserver1
resources:
limits:
cpu: 1
memory: 200Mi
requests:
cpu: 500m ----也可以写成0.5
memory: 100Mi
[root@k8s-master yaml]# kubectl apply -f pod.yml
pod/timingding created
[root@k8s-master yaml]# kubectl get pods timingding
NAME READY STATUS RESTARTS AGE
timingding 1/1 Running 0 44s
[root@k8s-master yaml]# kubectl describe pods timingding
Limits: --------- limits的值不能小于requests,不然pod创建不出来
cpu: 1
memory: 200Mi
Requests:
cpu: 500m
memory: 100Mi
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable --- 变成这个了,资源次敏感型
改成资源敏感型,limit和requests设置的一样大:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: timingding
name: timingding
spec:
containers:
- image: timinglee/myapp:v1
name: webserver1
resources:
limits:
cpu: 0.5
memory: 100Mi
requests:
cpu: 500m
memory: 100Mi
[root@k8s-master yaml]# kubectl apply -f pod.yml
pod/timingding created
[root@k8s-master yaml]# kubectl describe pods timingding
Limits:
cpu: 500m
memory: 100Mi
Requests:
cpu: 500m
memory: 100Mi
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Guaranteed --- 就变成敏感型了
4.5.7 示例7 容器启动管理
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: timingding
name: timingding
spec:
restartPolicy: Always ----- 添加这个,容器挂了就会重新启动,一直处于重启状态
containers:
- image: busybox:latest
name: wwebserver1
command: ["/bin/sh","-c","echo hello timingding"]
[root@k8s-master yaml]# kubectl apply -f pod.yml
pod/timingding created
[root@k8s-master yaml]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timingding 0/1 CrashLoopBackOff 2 (25s ago) 42s
[root@k8s-master yaml]#
never参数:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: timingding
name: timingding
spec:
restartPolicy: Never
containers:
- image: busybox:latest
name: wwebserver1
command: ["/bin/sh","-c","echo hello timingding"]
[root@k8s-master yaml]# kubectl apply -f pod.yml
pod/timingding created
[root@k8s-master yaml]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timingding 0/1 Completed 0 9s
[root@k8s-master yaml]#
OnFailure参数:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: timingding
name: timingding
spec:
restartPolicy: OnFailure ---- 换成这个参数
containers:
- image: busybox:latest
name: wwebserver1
command: ["/bin/sh","-c","echo hello timingding"]
[root@k8s-master yaml]# kubectl apply -f pod.yml
pod/timingding created
[root@k8s-master yaml]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timingding 0/1 Completed 0 7s
异常终止(退出码非零)时,OnFailure会重启容器:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: timingding
name: timingding
spec:
restartPolicy: OnFailure
containers:
- image: busybox:latest
name: wwebserver1
command: ["/bin/sh","-c","echo hello timingding;exit 66"] ----- 设置个非零值
[root@k8s-master yaml]# kubectl apply -f pod.yml
pod/timingding created
[root@k8s-master yaml]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timingding 0/1 Error 1 (8s ago) 8s
[root@k8s-master yaml]#
[root@k8s-master yaml]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timingding 0/1 CrashLoopBackOff 2 (14s ago) 30s
[root@k8s-master yaml]#
4.5.8 示例8 选择运行节点
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: timingding
name: timingding
spec:
nodeSelector: ----- 添加这个参数
kubernetes.io/hostname: k8s-node1
containers:
- image: busybox:latest
name: busybox
pod就被指定调度到k8s-node1上面了(此处busybox没有指定常驻命令,容器启动后立即退出,所以下面的状态是CrashLoopBackOff)
[root@k8s-master yaml]# kubectl apply -f pod.yml
pod/timingding created
[root@k8s-master yaml]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
timingding 0/1 CrashLoopBackOff 1 (10s ago) 12s 10.244.1.50 k8s-node1 <none> <none>
[root@k8s-master yaml]#
4.5.9 示例9 共享宿主机网络
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: timingding
name: timingding
spec:
restartPolicy: OnFailure
containers:
- image: busybox:latest
name: busybox
command: ["/bin/sh","-c","sleep 100000"]
现在是自己的网络:
[root@k8s-master yaml]# kubectl apply -f pod.yml
pod/timingding created
[root@k8s-master yaml]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timingding 1/1 Running 0 9s
[root@k8s-master yaml]# kubectl exec pods/timingding -c busybox -it -- /bin/sh
/ #
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 82:7E:B7:A1:1E:70
inet addr:10.244.1.49 Bcast:10.244.1.255 Mask:255.255.255.0
inet6 addr: fe80::807e:b7ff:fea1:1e70/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:15 errors:0 dropped:0 overruns:0 frame:0
TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2092 (2.0 KiB) TX bytes:682 (682.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ # exit
[root@k8s-master yaml]#
共享宿主机:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: timingding
name: timingding
spec:
restartPolicy: OnFailure
hostNetwork: true ----- 添加这个参数
containers:
- image: busybox:latest
name: busybox
command: ["/bin/sh","-c","sleep 100000"]
[root@k8s-master yaml]# kubectl apply -f pod.yml
pod/timingding created
[root@k8s-master yaml]# kubectl exec pods/timingding -c busybox -it -- /bin/sh
/ #
/ # ifconfig
cni0 Link encap:Ethernet HWaddr FE:66:AA:78:31:54
inet addr:10.244.4.1 Bcast:10.244.4.255 Mask:255.255.255.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:689 errors:0 dropped:0 overruns:0 frame:0
TX packets:369 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:50722 (49.5 KiB) TX bytes:32862 (32.0 KiB)
docker0 Link encap:Ethernet HWaddr 02:42:81:E1:41:4F
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
ens160 Link encap:Ethernet HWaddr 00:0C:29:FD:65:54
inet addr:172.25.254.20 Bcast:172.25.254.255 Mask:255.255.255.0
inet6 addr: fe80::fab3:2c21:57c9:1c93/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:39262 errors:0 dropped:0 overruns:0 frame:0
TX packets:33051 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:22504852 (21.4 MiB) TX bytes:4027983 (3.8 MiB)
flannel.1 Link encap:Ethernet HWaddr 56:B1:1B:0C:80:AE
inet addr:10.244.4.0 Bcast:0.0.0.0 Mask:255.255.255.255
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:224 errors:0 dropped:0 overruns:0 frame:0
TX packets:160 errors:0 dropped:56 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:14546 (14.2 KiB) TX bytes:17606 (17.1 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:2445 errors:0 dropped:0 overruns:0 frame:0
TX packets:2445 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:194572 (190.0 KiB) TX bytes:194572 (190.0 KiB)
/ #
调度到k8s-node2上面了:
[root@k8s-master yaml]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
timingding 1/1 Running 0 65s 172.25.254.20 k8s-node2 <none> <none>
[root@k8s-master yaml]#
五、pod的生命周期
5.1 INIT容器
-
Pod 可以包含多个容器,应用运行在这些容器里面,同时 Pod 也可以有一个或多个先于应用容器启动的 Init 容器。
-
Init 容器与普通的容器非常像,除了如下两点:
-
它们总是运行到完成
-
init 容器不支持 Readiness,因为它们必须在 Pod 就绪之前运行完成,每个 Init 容器必须运行成功,下一个才能够运行。
-
如果Pod的 Init 容器失败,Kubernetes 会不断地重启该 Pod,直到 Init 容器成功为止。但是,如果 Pod 对应的 restartPolicy 值为 Never,它不会重新启动。
5.2 INIT容器的功能
-
Init 容器可以包含一些安装过程中应用容器中不存在的实用工具或个性化代码。
-
Init 容器可以安全地运行这些工具,避免这些工具导致应用镜像的安全性降低。
-
应用镜像的创建者和部署者可以各自独立工作,而没有必要联合构建一个单独的应用镜像。
-
Init 容器能以不同于Pod内应用容器的文件系统视图运行。因此,Init容器可具有访问 Secrets 的权限,而应用容器不能够访问。
-
由于 Init 容器必须在应用容器启动之前运行完成,因此 Init 容器提供了一种机制来阻塞或延迟应用容器的启动,直到满足了一组先决条件。一旦前置条件满足,Pod内的所有的应用容器会并行启动。
5.3 INIT容器示例
[root@k8s-master ~]# mkdir pod
[root@k8s-master ~]# cd pod/
[root@k8s-master pod]# kubectl run timingding --image myapp --dry-run=client -o yaml > pod.yml
[root@k8s-master pod]#
#注意:不能对已运行的pod增删容器,直接apply修改后的文件会报错,需要先删除旧pod再apply:
[root@k8s-master pod]# kubectl apply -f pod.yml
The Pod "timingding" is invalid: spec.initContainers: Forbidden: pod updates may not add or remove containers
[root@k8s-master pod]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: initpod
name: initpod
spec:
containers:
- image: timinglee/myapp:v1
name: myapp
initContainers:
- name: init-myservice
image: busybox:latest
command: ["/bin/sh","-c","until test -e /testfile;do echo wating for myservice; sleep 2;done"]
[root@k8s-master pod]# kubectl apply -f pod.yml
pod/initpod created
[root@k8s-master pod]# kubectl get pods
NAME READY STATUS RESTARTS AGE
initpod 0/1 Init:0/1 0 6s
[root@k8s-master pod]# kubectl logs pods/initpod init-myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
[root@k8s-master pod]# kubectl exec pods/initpod -c init-myservice -- /bin/sh -c "touch /testfile"
[root@k8s-master pod]# kubectl get pods
NAME READY STATUS RESTARTS AGE
initpod 1/1 Running 0 105s
5.4 探针
探针是由 kubelet 对容器执行的定期诊断:
-
ExecAction:在容器内执行指定命令。如果命令退出时返回码为 0 则认为诊断成功。
-
TCPSocketAction:对指定端口上的容器的 IP 地址进行 TCP 检查。如果端口打开,则诊断被认为是成功的。
-
HTTPGetAction:对指定的端口和路径上的容器的 IP 地址执行 HTTP Get 请求。如果响应的状态码大于等于200 且小于 400,则诊断被认为是成功的。
每次探测都将获得以下三种结果之一:
-
成功:容器通过了诊断。
-
失败:容器未通过诊断。
-
未知:诊断失败,因此不会采取任何行动。
kubelet 可以选择在运行中的容器上执行三种探针,并对探测结果做出反应:
-
livenessProbe:指示容器是否正在运行。如果存活探测失败,则 kubelet 会杀死容器,并且容器将受到其 重启策略 的影响。如果容器不提供存活探针,则默认状态为 Success。
-
readinessProbe:指示容器是否准备好服务请求。如果就绪探测失败,端点控制器将从与 Pod 匹配的所有 Service 的端点中删除该 Pod 的 IP 地址。初始延迟之前的就绪状态默认为 Failure。如果容器不提供就绪探针,则默认状态为 Success。
-
startupProbe: 指示容器中的应用是否已经启动。如果提供了启动探测(startup probe),则禁用所有其他探测,直到它成功为止。如果启动探测失败,kubelet 将杀死容器,容器服从其重启策略进行重启。如果容器没有提供启动探测,则默认状态为成功Success。
ReadinessProbe 与 LivenessProbe 的区别
-
ReadinessProbe 当检测失败后,将 Pod 的 IP:Port 从对应的 EndPoint 列表中删除。
-
LivenessProbe 当检测失败后,将杀死容器并根据 Pod 的重启策略来决定作出对应的措施
StartupProbe 与 ReadinessProbe、LivenessProbe 的区别
-
如果三个探针同时存在,会先执行 StartupProbe 探针,其他两个探针暂时被禁用;直到 pod 满足 StartupProbe 探针配置的条件,才会启动另外两个探针,如果不满足则按照重启策略重启容器。
-
另外两种探针在容器启动后会按照配置持续探测,直到容器消亡才停止;而 StartupProbe 探针只在容器启动后满足一次配置条件,之后不再进行后续的探测。
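探针的判定规则和失败后的处理动作可以用一段 Python 代码示意(假设性的简化模型,仅供理解,并非 kubelet 实际源码):

```python
def http_probe_success(status_code: int) -> bool:
    # HTTPGetAction:响应状态码在 [200, 400) 区间内视为诊断成功
    return 200 <= status_code < 400

def on_probe_failure(kind: str) -> str:
    # 不同探针失败后 kubelet 的处理动作
    actions = {
        "liveness": "restart container",              # 杀死容器,按重启策略重启
        "readiness": "remove pod IP from endpoints",  # 从 Service 端点列表中摘除
        "startup": "restart container",               # 同存活探针,杀死容器后重启
    }
    return actions[kind]

print(http_probe_success(302))             # True(3xx 重定向也算成功)
print(http_probe_success(500))             # False
print(on_probe_failure("readiness"))       # remove pod IP from endpoints
```

这也解释了下面两个实验的现象:存活探针失败导致 CrashLoopBackOff,就绪探针失败则只是不把 pod 的 IP 加进 Service 的 Endpoints。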
5.4.1 探针实例
5.4.1.1 存活探针示例:
[root@k8s-master pod]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: liveness
name: liveness
spec:
containers:
- image: timinglee/myapp:v1
name: myapp
livenessProbe:
tcpSocket: #检测端口存在性
port: 8080
initialDelaySeconds: 3 #容器启动后要等待多少秒后就探针开始工作,默认是 0
periodSeconds: 1 #执行探测的时间间隔,默认为 10s
timeoutSeconds: 1 #探针执行检测请求后,等待响应的超时时间,默认为 1s
[root@k8s-master pod]# kubectl apply -f pod.yml
pod/liveness created
[root@k8s-master pod]# cd
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
liveness 0/1 CrashLoopBackOff 2 (6s ago) 22s
#myapp实际监听的是80端口,探测8080必然失败,kubelet按重启策略不断重启容器
[root@k8s-master ~]# kubectl describe pods
5.4.1.2 就绪探针示例
此处pod.yml中为readiness这个pod配置了就绪探针(从后文可知,它探测80端口上的test.html页面,条件满足前pod不会就绪):
[root@k8s-master ~]# kubectl apply -f pod/pod.yml
pod/readiness created
[root@k8s-master ~]# kubectl expose pod readiness --port 80 --target-port 80
service/readiness exposed
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
readiness 0/1 Running 0 22s
[root@k8s-master ~]# kubectl describe service readiness
Name: readiness
Namespace: default
Labels: run=readiness
Annotations: <none>
Selector: run=readiness
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.102.8.167
IPs: 10.102.8.167
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: ------ 设置了暴露条件,现在不满足条件,没有暴露
Session Affinity: None
Events: <none>
满足设置的条件:
[root@k8s-master ~]# kubectl exec pods/readiness -c myapp -- /bin/sh -c "echo test > /usr/share/nginx/html/test.html"
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
readiness 1/1 Running 0 118s
[root@k8s-master ~]# kubectl describe service readiness
Name: readiness
Namespace: default
Labels: run=readiness
Annotations: <none>
Selector: run=readiness
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.102.8.167
IPs: 10.102.8.167
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints:         10.244.4.56:80 ----- 满足条件之后,端口就暴露出来了
Session Affinity: None
Events: <none>
[root@k8s-master ~]# curl 10.244.4.56:80
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]#
六、什么是控制器
控制器也是管理pod的一种手段
-
自主式pod:pod退出或意外关闭后不会被重新创建
-
控制器管理的 Pod:在控制器的生命周期里,始终要维持 Pod 的副本数目
Pod控制器是管理pod的中间层,使用Pod控制器之后,只需要告诉Pod控制器,想要多少个什么样的Pod就可以了,它会创建出满足条件的Pod并确保每一个Pod资源处于用户期望的目标状态。如果Pod资源在运行中出现故障,它会基于指定策略重新编排Pod
当建立控制器后,会把期望值写入etcd,k8s中的apiserver检索etcd中我们保存的期望状态,并对比pod的当前状态,如果出现差异,控制器会自动驱动集群立即向期望状态恢复
6.1 控制器常用类型
|--------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 控制器名称 | 控制器用途 |
| Replication Controller | 比较原始的pod控制器,已经被废弃,由ReplicaSet替代 |
| ReplicaSet | ReplicaSet 确保任何时间都有指定数量的 Pod 副本在运行 |
| Deployment | 一个 Deployment 为 Pod 和 ReplicaSet 提供声明式的更新能力 |
| DaemonSet | DaemonSet 确保全部(或某些指定)节点上各运行一个 Pod 的副本 |
| StatefulSet | StatefulSet 是用来管理有状态应用的工作负载 API 对象。 |
| Job | 执行批处理任务,仅执行一次任务,保证任务的一个或多个Pod成功结束 |
| CronJob | Cron Job 创建基于时间调度的 Jobs。 |
| HPA全称Horizontal Pod Autoscaler | 根据资源利用率自动调整service中Pod数量,实现Pod水平自动缩放 |
6.1.2 replicaset控制器
6.1.2.1 replicaset功能
-
ReplicaSet 是下一代的 Replication Controller,官方推荐使用ReplicaSet
-
ReplicaSet和Replication Controller的唯一区别是选择器的支持,ReplicaSet支持新的基于集合的选择器需求
-
ReplicaSet 确保任何时间都有指定数量的 Pod 副本在运行
-
虽然 ReplicaSets 可以独立使用,但今天它主要被Deployments 用作协调 Pod 创建、删除和更新的机制
6.1.2.2 replicaset参数说明
|-------------------------------------|----------|------------------------------------|
| 参数名称 | 字段类型 | 参数说明 |
| spec | Object | 详细定义对象,固定值就写Spec |
| spec.replicas | integer | 指定维护pod数量 |
| spec.selector | Object | Selector是对pod的标签查询,与pod数量匹配 |
| spec.selector.matchLabels | string | 指定Selector查询标签的名称和值,以key:value方式指定 |
| spec.template | Object | 指定对pod的描述信息,比如lab标签,运行容器的信息等 |
| spec.template.metadata | Object | 指定pod属性 |
| spec.template.metadata.labels | string | 指定pod标签 |
| spec.template.spec | Object | 详细定义对象 |
| spec.template.spec.containers | list | Spec对象的容器列表定义 |
| spec.template.spec.containers.name | string | 指定容器名称 |
| spec.template.spec.containers.image | string | 指定容器镜像 |
6.1.2.3 replicaset 示例
[root@k8s-master ~]# kubectl create deployment replicaset --image timinglee/myapp:v1 --dry-run=client -o yaml > replicaset.yml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: replicaset
name: replicaset #指定pod名称,一定小写,如果出现大写报错
spec:
replicas: 2 #指定维护pod数量为2
selector: #指定检测匹配方式
matchLabels: #指定匹配方式为匹配标签
app: replicaset #指定匹配的标签为app=replicaset
template: #模板,当副本数量不足时,会根据下面的模板创建pod副本
metadata:
labels:
app: replicaset
spec:
containers:
- image: timinglee/myapp:v1
name: myapp
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-79fc74b7db-gj5rz 1/1 Running 0 20s app=replicaset,pod-template-hash=79fc74b7db
replicaset-79fc74b7db-kh4db 1/1 Running 0 20s app=replicaset,pod-template-hash=79fc74b7db
[root@k8s-master ~]#
#replicaset是通过标签匹配pod
[root@k8s-master ~]# kubectl label pod replicaset-79fc74b7db-gj5rz app=timingding --overwrite
pod/replicaset-79fc74b7db-gj5rz labeled
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-79fc74b7db-9m8ds 1/1 Running 0 26s app=replicaset,pod-template-hash=79fc74b7db
replicaset-79fc74b7db-gj5rz 1/1 Running 0 2m18s app=timingding,pod-template-hash=79fc74b7db
replicaset-79fc74b7db-kh4db 1/1 Running 0 2m18s app=replicaset,pod-template-hash=79fc74b7db
[root@k8s-master ~]#
#恢复标签后
[root@k8s-master ~]# kubectl label pod replicaset-79fc74b7db-gj5rz app-
pod/replicaset-79fc74b7db-gj5rz unlabeled
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-79fc74b7db-9m8ds 1/1 Running 0 94s app=replicaset,pod-template-hash=79fc74b7db
replicaset-79fc74b7db-gj5rz 1/1 Running 0 3m26s pod-template-hash=79fc74b7db
replicaset-79fc74b7db-kh4db 1/1 Running 0 3m26s app=replicaset,pod-template-hash=79fc74b7db
[root@k8s-master ~]#
#replicaset自动控制副本数量,pod可以自愈
[root@k8s-master ~]# kubectl delete pods replicaset-79fc74b7db-gj5rz
pod "replicaset-79fc74b7db-gj5rz" deleted
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-79fc74b7db-9m8ds 1/1 Running 0 2m32s app=replicaset,pod-template-hash=79fc74b7db
replicaset-79fc74b7db-kh4db 1/1 Running 0 4m24s app=replicaset,pod-template-hash=79fc74b7db
[root@k8s-master ~]#
回收资源:
[root@k8s-master ~]# kubectl delete -f replicaset.yml
deployment.apps "replicaset" deleted
[root@k8s-master ~]#
6.1.3 deployment 控制器
6.1.3.1 deployment控制器的功能
-
为了更好的解决服务编排的问题,kubernetes在V1.2版本开始,引入了Deployment控制器。
-
Deployment控制器并不直接管理pod,而是通过管理ReplicaSet来间接管理Pod
-
Deployment管理ReplicaSet,ReplicaSet管理Pod
-
Deployment 为 Pod 和 ReplicaSet 提供了一个申明式的定义方法
-
在Deployment中ReplicaSet相当于一个版本
典型的应用场景:
-
用来创建Pod和ReplicaSet
-
滚动更新和回滚
-
扩容和缩容
-
暂停与恢复
6.1.3.2 deployment控制器示例
生成yaml文件:
[root@k8s-master ~]# kubectl create deployment deployment --image myapp:v1 --dry-run=client -o yaml > deployment.yml
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
replicas: 4
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: timinglee/myapp:v1
name: myapp
查看pod信息:
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-748957ccc-fl9sd 1/1 Running 0 2m26s 10.244.1.60 k8s-node1 <none> <none>
deployment-748957ccc-grp56 1/1 Running 0 2m26s 10.244.1.59 k8s-node1 <none> <none>
deployment-748957ccc-mcgkw 1/1 Running 0 2m26s 10.244.4.59 k8s-node2 <none> <none>
deployment-748957ccc-n85j6 1/1 Running 0 2m26s 10.244.4.60 k8s-node2 <none> <none>
[root@k8s-master ~]# curl 10.244.1.60
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]#
更新容器运行版本:
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
replicas: 4
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: timinglee/myapp:v2 ---- 版本更新为2
name: myapp
[root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment created
更新过程:
[root@k8s-master ~]# kubectl get pods -w
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
deployment-787f4c6f48-4h8t2 1/1 Running 0 9s
deployment-787f4c6f48-rm569 1/1 Running 0 9s
deployment-787f4c6f48-slv46 1/1 Running 0 9s
deployment-787f4c6f48-vj6mg 1/1 Running 0 9s
测试更新效果:
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-787f4c6f48-4h8t2 1/1 Running 0 112s 10.244.4.66 k8s-node2 <none> <none>
deployment-787f4c6f48-rm569 1/1 Running 0 112s 10.244.1.65 k8s-node1 <none> <none>
deployment-787f4c6f48-slv46 1/1 Running 0 112s 10.244.1.66 k8s-node1 <none> <none>
deployment-787f4c6f48-vj6mg 1/1 Running 0 112s 10.244.4.65 k8s-node2 <none> <none>
[root@k8s-master ~]# curl 10.244.4.66
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]#
更新的过程是重新建立一个新版本的RS,由新版本的RS把pod重建,然后把老版本的RS副本数缩为0进行回收(保留用于回滚)
6.1.3.2.1 版本回滚
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
replicas: 4
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: timinglee/myapp:v1
name: myapp
测试回滚效果:
[root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment configured
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-748957ccc-82t4m 1/1 Running 0 12s 10.244.1.67 k8s-node1 <none> <none>
deployment-748957ccc-brkhx 1/1 Running 0 11s 10.244.4.68 k8s-node2 <none> <none>
deployment-748957ccc-l9lkt 1/1 Running 0 12s 10.244.4.67 k8s-node2 <none> <none>
deployment-748957ccc-pg7vs 1/1 Running 0 10s 10.244.1.68 k8s-node1 <none> <none>
[root@k8s-master ~]# curl 10.244.1.67
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]#
6.1.3.2.2 滚动更新策略
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
minReadySeconds: 5 #最小就绪时间:新建的pod就绪后需要保持可用多少秒才被视为更新成功,用于控制更新节奏
replicas: 4
strategy: #指定更新策略
rollingUpdate:
maxSurge: 1 #更新期间比定义的pod数量最多可以多出几个
maxUnavailable: 0 #更新期间最多允许几个pod不可用
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: timinglee/myapp:v1
name: myapp
[root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment created
[root@k8s-master ~]#
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-748957ccc-kcb6g 1/1 Running 0 86s 10.244.4.69 k8s-node2 <none> <none>
deployment-748957ccc-lhs2f 1/1 Running 0 85s 10.244.1.70 k8s-node1 <none> <none>
deployment-748957ccc-nbxhd 1/1 Running 0 85s 10.244.4.70 k8s-node2 <none> <none>
deployment-748957ccc-qpsv4 1/1 Running 0 85s 10.244.1.69 k8s-node1 <none> <none>
查看信息:
[root@k8s-master ~]# kubectl describe deployments.apps deployment
Name: deployment
Namespace: default
CreationTimestamp: Thu, 05 Sep 2024 03:12:35 -0400
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=myapp
Replicas: 4 desired | 4 updated | 4 total | 4 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 5
RollingUpdateStrategy: 0 max unavailable, 1 max surge
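maxSurge 和 maxUnavailable 共同决定了滚动更新期间 pod 数量的上下界,可以用一段 Python 代码示意(假设性的简化模型,这里假定两个参数都写成绝对数而非百分比):

```python
def rolling_window(replicas: int, max_surge: int, max_unavailable: int):
    # 更新期间允许同时存在的 pod 总数上限
    max_total = replicas + max_surge
    # 更新期间必须保持可用的 pod 数量下限
    min_available = replicas - max_unavailable
    return max_total, min_available

# 对应上面的配置:replicas=4, maxSurge=1, maxUnavailable=0
print(rolling_window(4, 1, 0))  # (5, 4)
```

即更新时会先多起 1 个新版本 pod(最多 5 个),且任何时刻都保持 4 个 pod 可用,做到零停机更新。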
6.1.3.2.3 暂停及恢复
在实际生产环境中我们做的变更可能不止一处,如果每修改一处就触发一次变更,会产生不必要的线上更新
我们期望的是把所有修改都完成后再一次性触发更新,因此可以先暂停Deployment(kubectl rollout pause),全部修改完成后再恢复(kubectl rollout resume)
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
minReadySeconds: 5
replicas: 6
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: timinglee/myapp:v1
name: myapp
resources:
limits:
cpu: 0.5
memory: 200Mi
requests:
cpu: 0.5
memory: 200Mi
6.1.4 job 控制器
6.1.4.1 job控制器功能
Job,主要用于负责批量处理(一次要处理指定数量任务)短暂的一次性(每个任务仅运行一次就结束)任务
Job特点如下:
-
当Job创建的pod执行成功结束时,Job将记录成功结束的pod数量
-
当成功结束的pod达到指定的数量时,Job将完成执行
6.1.4.2 job控制器示例
[root@k8s-master ~]# kubectl create job ding-job --image perl:5.34.0 --dry-run=client -o yaml > ding-job.yml
[root@k8s-master ~]#
[root@k8s-master ~]# vim ding-job.yml
apiVersion: batch/v1
kind: Job
metadata:
name: pi
spec:
completions: 6 #一共完成任务数为6
parallelism: 1 #每次并行运行1个
template:
spec:
containers:
- name: perl
image: perl:5.34.0
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] #计算π的前2000位
restartPolicy: Never #关闭后不自动重启
backoffLimit: 4 #运行失败后最多重试4次
[root@k8s-master ~]# kubectl apply -f ding-job.yml
job.batch/pi created
[root@k8s-master ~]#
测试结果:
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
pi-bkts6 0/1 Completed 0 22s
pi-gqdgp 0/1 Completed 0 52s
pi-hzzqt 0/1 Completed 0 74s
pi-jvr4l 0/1 Completed 0 15s
pi-qvrq5 0/1 Completed 0 9s
pi-srr4f 0/1 Completed 0 46s
关于重启策略设置的说明:
-
如果指定为OnFailure,则job会在pod出现故障时重启容器,而不是重新创建pod,failed次数不变
-
如果指定为Never,则job会在pod出现故障时创建新的pod,并且故障pod不会消失,也不会重启,failed次数加1
-
如果指定为Always,就意味着容器会一直被重启,job任务会被反复执行,因此Job不允许将restartPolicy设置为Always
6.1.5 cronjob 控制器
6.1.5.1 cronjob 控制器功能
-
Cron Job 创建基于时间调度的 Jobs。
-
CronJob控制器以Job控制器资源为其管控对象,并借助它管理pod资源对象,
-
CronJob可以以类似于Linux操作系统的周期性任务作业计划的方式控制其运行时间点及重复运行的方式。
-
CronJob可以在特定的时间点(反复的)去运行job任务。
6.1.5.2 cronjob 控制器 示例
[root@k8s-master ~]# vim ding-job.yml
apiVersion: batch/v1
kind: CronJob
metadata:
name: timingding-cjob
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: testjob
image: busyboxplus:latest
command: ["/bin/sh","-c","date;echo Hello from the Kubernetes cluster"]
restartPolicy: Never
[root@k8s-master ~]# kubectl apply -f ding-job.yml
cronjob.batch/timingding-cjob created
[root@k8s-master ~]# kubectl get cronjobs.batch
NAME SCHEDULE TIMEZONE SUSPEND ACTIVE LAST SCHEDULE AGE
timingding-cjob * * * * * <none> False 0 <none> 33s
[root@k8s-master ~]#
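schedule 字段使用标准的 cron 表达式,五个字段依次是"分 时 日 月 周"。下面用一段 Python 代码示意单个字段的匹配逻辑(假设性的简化实现,仅支持 "*"、单值和逗号列表,不含 "/" 步进和 "-" 区间):

```python
def field_matches(field: str, value: int) -> bool:
    # "*" 匹配任意值;否则按逗号列表逐个比较
    if field == "*":
        return True
    return value in {int(v) for v in field.split(",")}

def schedule_matches(schedule: str, minute: int, hour: int,
                     dom: int, month: int, dow: int) -> bool:
    # 五个字段:分 时 日 月 周,全部匹配才触发
    fields = schedule.split()
    values = (minute, hour, dom, month, dow)
    return all(field_matches(f, v) for f, v in zip(fields, values))

# "* * * * *" 表示每分钟都触发一次 Job:
print(schedule_matches("* * * * *", 30, 12, 5, 9, 1))  # True
print(schedule_matches("0 3 * * *", 30, 12, 5, 9, 1))  # False(只在每天 03:00 触发)
```

所以上例的 timingding-cjob 会每分钟创建一个 Job 去执行一次打印任务。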
七、微服务
7.1 什么是微服务
用控制器来完成集群的工作负载,那么应用如何暴露出去?需要通过微服务暴露出去后才能被访问
-
Service是一组提供相同服务的Pod对外开放的接口。
-
借助Service,应用可以实现服务发现和负载均衡。
-
service默认只支持4层负载均衡能力,没有7层功能。(可以通过Ingress实现)
7.2 微服务的类型
|--------------|----------------------------------------------------------------------------------|
| 微服务类型 | 作用描述 |
| ClusterIP | 默认值,k8s系统给service自动分配的虚拟IP,只能在集群内部访问 |
| NodePort | 将Service通过指定的Node上的端口暴露给外部,访问任意一个NodeIP:nodePort都将路由到ClusterIP |
| LoadBalancer | 在NodePort的基础上,借助cloud provider创建一个外部的负载均衡器,并将请求转发到 NodeIP:NodePort,此模式只能在云服务器上使用 |
| ExternalName | 将服务通过 DNS CNAME 记录方式转发到指定的域名(通过 spec.externalName 设定) |
示例:
#生成控制器文件并建立控制器
[root@k8s-master yaml]# kubectl create deployment ding --image timinglee/myapp:v1 --replicas 2 --dry-run=client -o yaml > xiaoding.yaml
[root@k8s-master yaml]# kubectl apply -f xiaoding.yaml
#生成微服务yaml追加到已有yaml中
[root@k8s-master yaml]# kubectl expose deployment ding --port 80 --target-port 80 --dry-run=client -o yaml >> xiaoding.yaml
[root@k8s-master yaml]# vim xiaoding.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ding
  name: ding
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ding
  template:
    metadata:
      labels:
        app: ding
    spec:
      containers:
      - image: timinglee/myapp:v1
        name: myapp
---                  #不同资源要用---隔开
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ding
  name: ding
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: ding
[root@k8s-master ~]# kubectl apply -f xiaoding.yaml
[root@k8s-master yaml]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ding ClusterIP 10.96.70.145 <none> 80/TCP 9m48s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 77m
[root@k8s-master yaml]#
微服务默认使用iptables调度:
[root@k8s-master yaml]# kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
ding ClusterIP 10.96.70.145 <none> 80/TCP 10m app=ding
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 78m <none>
[root@k8s-master yaml]#
#可以在防火墙中查看到策略信息
[root@k8s-master ~]# iptables -t nat -nL
7.2.1 IPVS模式
- Service 是由 kube-proxy 组件加上 iptables 共同实现的。
- kube-proxy 通过 iptables 处理 Service 时,需要在宿主机上设置相当多的 iptables 规则;如果宿主机有大量的Pod,不断刷新iptables规则会消耗大量的CPU资源。
- IPVS模式的service,可以使K8s集群支持更大量级的Pod。
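iptables 规则是按顺序逐条匹配的,service 越多,每个数据包要扫描的规则就越多;IPVS 则基于哈希表查找后端。下面用一段与内核实现无关的 Python 小示例直观对比两种查找方式的开销差异(service 地址均为构造的示例值):

```python
# 示意:iptables 的线性规则匹配 vs IPVS 的哈希查找
# 仅演示复杂度差异,不代表内核的真实实现细节
services = {f"10.96.{i // 256}.{i % 256}:80": f"backend-{i}" for i in range(10000)}
rules = list(services.items())        # iptables:规则按顺序逐条比较

def iptables_match(vip):
    steps = 0
    for addr, backend in rules:       # O(n) 线性扫描
        steps += 1
        if addr == vip:
            return backend, steps
    return None, steps

def ipvs_match(vip):
    return services.get(vip), 1       # O(1) 哈希查找

target = "10.96.39.15:80"             # 恰好是最后一条规则
print(iptables_match(target))         # 扫描次数随规则数线性增长
print(ipvs_match(target))             # 常数时间
```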
7.2.1.1 ipvs模式配置方式
[root@k8s-所有节点 pod]# yum install ipvsadm -y
修改master节点的代理配置:
[root@k8s-master ~]# kubectl -n kube-system edit cm kube-proxy
metricsBindAddress: ""
mode: "ipvs" #设置kube-proxy使用ipvs模式
nftables:
重启pod,在pod运行时配置文件中采用默认配置,当改变配置文件后已经运行的pod状态不会变化,所以要重启pod
[root@k8s-master ~]# kubectl -n kube-system get pods | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'
[root@k8s-master yaml]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 172.25.254.100:6443 Masq 1 0 0
TCP 10.96.0.10:53 rr
-> 10.244.0.6:53 Masq 1 0 0
-> 10.244.0.7:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 10.244.0.6:9153 Masq 1 0 0
-> 10.244.0.7:9153 Masq 1 0 0
TCP 10.96.70.145:80 rr
-> 10.244.1.83:80 Masq 1 0 0
-> 10.244.2.4:80 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 10.244.0.6:53 Masq 1 0 0
-> 10.244.0.7:53 Masq 1 0 0
[root@k8s-master yaml]#
切换ipvs模式后,kube-proxy会在宿主机上添加一个虚拟网卡:kube-ipvs0,并分配所有service IP:
[root@k8s-master yaml]# ip a | tail
inet6 fe80::7cf1:9bff:fe8d:1df9/64 scope link
valid_lft forever preferred_lft forever
12: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
link/ether 42:f0:5a:9e:6c:37 brd ff:ff:ff:ff:ff:ff
inet 10.96.0.10/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.96.70.145/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.96.0.1/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
[root@k8s-master yaml]#
7.3 微服务类型详解
7.3.1 ClusterIP
特点:
clusterip模式只能在集群内访问,并对集群内的pod提供健康检测和自动发现功能
示例:
[root@k8s2 service]# vim myapp.yml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee
  name: timinglee
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee
  type: ClusterIP
service创建后集群DNS提供解析
[root@k8s-master ~]# dig timinglee.default.svc.cluster.local @10.96.0.10
; <<>> DiG 9.16.23-RH <<>> timinglee.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27827
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 057d9ff344fe9a3a (echoed)
;; QUESTION SECTION:
;timinglee.default.svc.cluster.local. IN A
;; ANSWER SECTION:
timinglee.default.svc.cluster.local. 30 IN A 10.97.59.25
;; Query time: 8 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Wed Sep 04 13:44:30 CST 2024
;; MSG SIZE rcvd: 127
7.3.1.1 ClusterIP中的特殊模式headless
headless(无头服务)
对于无头 Service,并不会分配 Cluster IP,kube-proxy不会处理它们,平台也不会为它们进行负载均衡和路由;集群访问通过dns解析直接指向业务pod的IP,所有的调度由dns单独完成。
[root@k8s-master ~]# vim timinglee.yaml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee
  name: timinglee
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee
  type: ClusterIP
  clusterIP: None
[root@k8s-master ~]# kubectl delete -f timinglee.yaml
[root@k8s-master ~]# kubectl apply -f timinglee.yaml
deployment.apps/timinglee created
#测试
[root@k8s-master ~]# kubectl get services timinglee
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
timinglee ClusterIP None <none> 80/TCP 6s
[root@k8s-master ~]# dig timinglee.default.svc.cluster.local @10.96.0.10
; <<>> DiG 9.16.23-RH <<>> timinglee.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51527
;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 81f9c97b3f28b3b9 (echoed)
;; QUESTION SECTION:
;timinglee.default.svc.cluster.local. IN A
;; ANSWER SECTION:
timinglee.default.svc.cluster.local. 20 IN A 10.244.2.14 #直接解析到pod上
timinglee.default.svc.cluster.local. 20 IN A 10.244.1.18
;; Query time: 0 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Wed Sep 04 13:58:23 CST 2024
;; MSG SIZE rcvd: 178
#开启一个busyboxplus的pod测试
[root@k8s-master ~]# kubectl run test --image busyboxplus -it
If you don't see a command prompt, try pressing enter.
/ # nslookup timinglee-service
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: timinglee-service
Address 1: 10.244.2.16 10-244-2-16.timinglee-service.default.svc.cluster.local
Address 2: 10.244.2.17 10-244-2-17.timinglee-service.default.svc.cluster.local
Address 3: 10.244.1.22 10-244-1-22.timinglee-service.default.svc.cluster.local
Address 4: 10.244.1.21 10-244-1-21.timinglee-service.default.svc.cluster.local
/ # curl timinglee-service
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
/ # curl timinglee-service/hostname.html
timinglee-c56f584cf-b8t6m
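可以看到无头服务的 DNS 一次性返回了所有 pod 的 IP,选择哪个后端由客户端自己决定。下面用一段示意性的 Python 片段模拟客户端侧最简单的轮询选择(pod IP 取自上面的 nslookup 结果,仅作演示):

```python
import itertools

# 示意:无头服务下 DNS 返回全部 pod IP,由客户端自行轮询选择后端
pod_ips = ["10.244.2.16", "10.244.2.17", "10.244.1.22", "10.244.1.21"]

rr = itertools.cycle(pod_ips)          # 客户端侧的轮询游标
picks = [next(rr) for _ in range(6)]   # 连续发起 6 次请求各自选中的后端
print(picks)
```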
7.3.2 nodeport
通过在每个节点上暴露同一个端口,使外部主机可以通过任一节点的 NodeIP:<NodePort> 来访问pod业务。
示例:
[root@k8s-master yaml]# vim testpod.svc.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: testpod
  type: NodePort
[root@k8s-master yaml]# kubectl apply -f testpod.svc.yml
service/testpod created
[root@k8s-master yaml]# kubectl get svc testpod
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
testpod NodePort 10.103.64.246 <none> 80:31704/TCP 28s
[root@k8s-master yaml]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.254.100:31704 rr
-> 10.244.1.122:80 Masq 1 0 0
-> 10.244.2.43:80 Masq 1 0 0
TCP 10.96.0.1:443 rr
-> 172.25.254.100:6443 Masq 1 0 0
TCP 10.96.0.10:53 rr
-> 10.244.0.4:53 Masq 1 0 0
-> 10.244.0.5:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 10.244.0.4:9153 Masq 1 0 0
-> 10.244.0.5:9153 Masq 1 0 0
TCP 10.103.64.246:80 rr
-> 10.244.1.122:80 Masq 1 0 0
-> 10.244.2.43:80 Masq 1 0 0
TCP 10.244.0.0:31704 rr
-> 10.244.1.122:80 Masq 1 0 0
-> 10.244.2.43:80 Masq 1 0 0
TCP 10.244.0.1:31704 rr
-> 10.244.1.122:80 Masq 1 0 0
-> 10.244.2.43:80 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 10.244.0.4:53 Masq 1 0 0
-> 10.244.0.5:53 Masq 1 0 1
[root@k8s-master yaml]#
nodeport默认端口
nodeport默认端口是30000-32767,超出会报错
[root@k8s-master yaml]# vim testpod.svc.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 33333
  selector:
    run: testpod
  type: NodePort
[root@k8s-master yaml]#
[root@k8s-master yaml]# kubectl apply -f testpod.svc.yml
The Service "testpod" is invalid: spec.ports[0].nodePort: Invalid value: 33333: provided port is not in the valid range. The range of valid ports is 30000-32767
[root@k8s-master yaml]#
如果需要使用这个范围以外的端口,就需要修改kube-apiserver的启动参数。
kube-apiserver是静态pod,修改其清单文件后kubelet会自动重建它,无需手动重启。
[root@k8s-master yaml]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --service-node-port-range=30000-40000 ---- 加上这个参数
image: reg.timingding.org/k8s/kube-apiserver:v1.30.0
[root@k8s-master yaml]# kubectl apply -f testpod.svc.yml
service/testpod created
[root@k8s-master yaml]# kubectl get svc testpod
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
testpod NodePort 10.96.236.153 <none> 80:33333/TCP 16s
[root@k8s-master yaml]#
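NodePort 的取值校验可以归结为一个简单的范围判断,下面用一段示意性的 Python 片段复现上面 apply 时的行为(默认范围与 --service-node-port-range 对应,非 k8s 源码实现):

```python
# 示意:NodePort 合法范围校验,默认 30000-32767,
# 可通过 kube-apiserver 的 --service-node-port-range 参数调整
def valid_node_port(port, port_range=(30000, 32767)):
    lo, hi = port_range
    return lo <= port <= hi

print(valid_node_port(31704))                    # 默认范围内 -> True
print(valid_node_port(33333))                    # 越界,apply 会报错 -> False
print(valid_node_port(33333, (30000, 40000)))    # 扩大范围后合法 -> True
```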
7.3.3 LoadBalancer
云平台会为我们分配vip并实现访问,如果是裸金属主机那么需要metallb来实现ip的分配
[root@k8s-master yaml]# vim testpod.svc.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: testpod
  type: LoadBalancer
[root@k8s-master yaml]# kubectl apply -f testpod.svc.yml
service/testpod created
[root@k8s-master yaml]# kubectl get svc testpod
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
testpod LoadBalancer 10.106.146.191 <pending> 80:33062/TCP 11s
[root@k8s-master yaml]# curl 10.106.146.191
^C
[root@k8s-master yaml]# curl 10.106.146.191
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
LoadBalancer模式适用云平台,裸金属环境需要安装metallb提供支持
7.3.3.1 metalLB
官网:Installation :: MetalLB, bare metal load-balancer for Kubernetes
metalLB功能
为LoadBalancer分配vip
[root@k8s-master metalLB]# docker load -i metalLB.tar.gz
[root@k8s-master metalLB]# docker tag quay.io/metallb/controller:v0.14.8 reg.timingding.org/metallb/controller:v0.14.8
[root@k8s-master metalLB]# docker push reg.timingding.org/metallb/controller:v0.14.8
[root@k8s-master metalLB]# docker tag quay.io/metallb/speaker:v0.14.8 reg.timingding.org/metallb/speaker:v0.14.8
[root@k8s-master metalLB]# docker push reg.timingding.org/metallb/speaker:v0.14.8
[root@k8s-master metalLB]# kubectl apply -f metallb-native.yaml
[root@k8s-master metalLB]# kubectl get namespaces
NAME STATUS AGE
default Active 65m
kube-node-lease Active 65m
kube-public Active 65m
kube-system Active 65m
metallb-system Active 36s
[root@k8s-master metalLB]#
[root@k8s-master metalLB]# kubectl -n metallb-system get pods
NAME READY STATUS RESTARTS AGE
controller-65957f77c8-kbllj 1/1 Running 0 98s
speaker-dwdrd 1/1 Running 0 98s
speaker-hxr2s 1/1 Running 0 98s
speaker-vwq59 1/1 Running 0 98s
[root@k8s-master metalLB]#
[root@k8s-master metalLB]# vim configmap.yml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.25.254.50-172.25.254.99
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
[root@k8s-master metalLB]# kubectl apply -f configmap.yml
ipaddresspool.metallb.io/first-pool created
l2advertisement.metallb.io/example created
[root@k8s-master metalLB]# kubectl -n metallb-system get configmaps
NAME DATA AGE
kube-root-ca.crt 1 2m26s
metallb-excludel2 1 2m26s
[root@k8s-master metalLB]#
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
testpod 1/1 Running 0 10s
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6h19m
testpod LoadBalancer 10.109.177.151 172.25.254.50 80:32295/TCP 24s
[root@k8s-master ~]# curl 172.25.254.50
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
7.3.4 ExternalName
- 开启service后不会被分配IP,而是通过dns解析CNAME固定域名来解决ip变化问题。
- 一般应用于外部业务和pod沟通,或外部业务迁移到pod内时。
- 在应用向集群迁移的过程中,externalname在过渡阶段就可以起作用了。
- 集群外的资源迁移到集群时,迁移过程中ip可能会变化,但是域名+dns解析能完美解决此问题。
示例:
[root@k8s-master ~]# vim testpod.svc.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: timinglee/myapp:v1
    name: testpod
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: testpod
  type: ExternalName
  externalName: www.ding.com
[root@k8s-master ~]# kubectl get svc testpod
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
testpod ExternalName <none> www.ding.com 80/TCP 4m51s
[root@k8s-master ~]#
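ExternalName 的本质就是在集群 DNS 里为 service 名写一条 CNAME 记录,指向外部域名。下面用一段示意性的 Python 片段模拟这条解析链(其中外部域名的 A 记录 203.0.113.10 是假设值,仅作演示):

```python
# 示意:ExternalName 相当于一条 CNAME 记录,
# 集群内访问 service 域名,最终解析到外部域名
cname = {"testpod.default.svc.cluster.local": "www.ding.com"}
external_dns = {"www.ding.com": "203.0.113.10"}   # 假设的外部 A 记录

def resolve(name):
    # 先跟随 CNAME,再做 A 记录解析
    name = cname.get(name, name)
    return external_dns.get(name)

print(resolve("testpod.default.svc.cluster.local"))   # 经 CNAME 指向外部地址
```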
7.4 ingress-nginx
官网:
Installation Guide - Ingress-Nginx Controller
7.4.1 ingress-nginx功能
- 一种全局的、为了代理不同后端 Service 而设置的负载均衡服务,支持7层。
- Ingress由两部分组成:Ingress controller和Ingress服务。
- Ingress Controller 会根据你定义的 Ingress 对象,提供对应的代理能力。
- 业界常用的各种反向代理项目,比如 Nginx、HAProxy、Envoy、Traefik 等,都已经为Kubernetes 专门维护了对应的 Ingress Controller。
7.4.2 部署ingress
[root@k8s-master ingress]# kubectl create deployment myappv1 --image myapp:v1 --dry-run=client -o yaml > myapp-v1.yml
[root@k8s-master ingress]# vim myapp-v1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myappv1
  name: myappv1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myappv1
  template:
    metadata:
      labels:
        app: myappv1
    spec:
      containers:
      - image: myapp:v1
        name: myapp
把v1里面改成v2:
[root@k8s-master ingress]# cp myapp-v1.yml myapp-v2.yml
[root@k8s-master ingress]# kubectl apply -f myapp-v1.yml
deployment.apps/myappv1 created
[root@k8s-master ingress]# kubectl apply -f myapp-v2.yml
deployment.apps/myappv2 created
[root@k8s-master ingress]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myappv1-586444467f-jknvm 1/1 Running 0 11s
myappv2-6766b895c7-wm49d 1/1 Running 0 8s
[root@k8s-master ingress]#
暴露端口:
[root@k8s-master ingress]# kubectl expose deployment myappv1 --port 80 --target-port 80 --dry-run=client -o yaml > myapp-v1.yml
[root@k8s-master ingress]# kubectl expose deployment myappv2 --port 80 --target-port 80 --dry-run=client -o yaml > myapp-v2.yml
[root@k8s-master ingress]#
v1和v2上面都改一下:
[root@k8s-master ingress]# vim myapp-v1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myappv1
  name: myappv1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myappv1
  template:
    metadata:
      labels:
        app: myappv1
    spec:
      containers:
      - image: myapp:v1
        name: myapp
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myappv1
  name: myappv1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myappv1
[root@k8s-master ingress]# kubectl apply -f myapp-v1.yml
deployment.apps/myappv1 unchanged
service/myappv1 created
[root@k8s-master ingress]# kubectl apply -f myapp-v2.yml
deployment.apps/myappv2 unchanged
service/myappv2 created
[root@k8s-master ingress]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myappv1-586444467f-jknvm 1/1 Running 0 10m
myappv2-6766b895c7-wm49d 1/1 Running 0 10m
[root@k8s-master ingress]#
[root@k8s-master ingress]# docker load -i ingress-nginx-1.11.2.tar.gz
上传镜像到harbor仓库:
[root@k8s-master ingress]# docker tag reg.harbor.org/ingress-nginx/controller:v1.11.2 reg.timingding.org/ingress-nginx/controller:v1.11.2
[root@k8s-master ingress]# docker push reg.timingding.org/ingress-nginx/controller:v1.11.2
[root@k8s-master ingress]# docker tag reg.harbor.org/ingress-nginx/kube-webhook-certgen:v1.4.3 reg.timingding.org/ingress-nginx/kube-webhook-certgen:v1.4.3
[root@k8s-master ingress]# docker push reg.timingding.org/ingress-nginx/kube-webhook-certgen:v1.4.3
修改镜像路径:
[root@k8s-master ingress]# vim deploy.yaml
image: ingress-nginx/controller:v1.11.2
image: ingress-nginx/kube-webhook-certgen:v1.4.3
[root@k8s-master ingress]# kubectl apply -f deploy.yaml
启动ingress:
[root@k8s-master ingress]# kubectl -n ingress-nginx get pods
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-gvnrw 0/1 Completed 0 18s
ingress-nginx-admission-patch-z7wrd 0/1 Completed 0 18s
ingress-nginx-controller-bb7d8f97c-q4h9q 1/1 Running 0 18s
7.4.2.1 测试ingress
[root@k8s-master ingress]# kubectl create ingress myappv1 --class nginx --rule='/=myappv1:80' --dry-run=client -o yaml > ingress1.yml
[root@k8s-master ingress]# vim ingress1.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myappv1
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /
        pathType: Prefix
#Exact(精确匹配),ImplementationSpecific(特定实现),Prefix(前缀匹配),Regular expression(正则表达式匹配)
修改微服务为loadbalancer:
[root@k8s-master ingress]# kubectl -n ingress-nginx edit svc ingress-nginx-controller
49 type: LoadBalancer
[root@k8s-master ingress]# kubectl -n ingress-nginx get all
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-gvnrw 0/1 Completed 0 35m
pod/ingress-nginx-admission-patch-z7wrd 0/1 Completed 0 35m
pod/ingress-nginx-controller-bb7d8f97c-q4h9q 1/1 Running 0 35m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller LoadBalancer 10.97.167.77 172.25.254.50 80:30764/TCP,443:31044/TCP 35m
service/ingress-nginx-controller-admission ClusterIP 10.104.152.255 <none> 443/TCP 35m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 35m
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-bb7d8f97c 1 1 1 35m
NAME STATUS COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create Complete 1/1 3s 35m
job.batch/ingress-nginx-admission-patch Complete 1/1 3s 35m
建立ingress:
[root@k8s-master ingress]# kubectl apply -f ingress1.yml
ingress.networking.k8s.io/myappv1 created
[root@k8s-master ingress]#
[root@k8s-master ingress]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
myappv1 nginx * 172.25.254.20 80 54s
访问外部IP:
[root@k8s-master ingress]# curl 172.25.254.50
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ingress]#
7.4.3 ingress的高级用法
7.4.3.1 基于路径的访问
1.建立用于测试的控制器myapp
[root@k8s-master ingress]# kubectl create deployment myappv1 --image myapp:v1 --dry-run=client -o yaml > myapp-v1.yml
[root@k8s-master ingress]# vim myapp-v1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myappv1
  name: myappv1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myappv1
  template:
    metadata:
      labels:
        app: myappv1
    spec:
      containers:
      - image: myapp:v1
        name: myapp
把v1里面改成v2:
[root@k8s-master ingress]# cp myapp-v1.yml myapp-v2.yml
[root@k8s-master ingress]# kubectl apply -f myapp-v1.yml
deployment.apps/myappv1 created
[root@k8s-master ingress]# kubectl apply -f myapp-v2.yml
deployment.apps/myappv2 created
[root@k8s-master ingress]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myappv1-586444467f-jknvm 1/1 Running 0 11s
myappv2-6766b895c7-wm49d 1/1 Running 0 8s
[root@k8s-master ingress]#
暴露端口:
[root@k8s-master ingress]# kubectl expose deployment myappv1 --port 80 --target-port 80 --dry-run=client -o yaml > myapp-v1.yml
[root@k8s-master ingress]# kubectl expose deployment myappv2 --port 80 --target-port 80 --dry-run=client -o yaml > myapp-v2.yml
[root@k8s-master ingress]#
v1和v2上面都改一下:
[root@k8s-master ingress]# vim myapp-v1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myappv1
  name: myappv1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myappv1
  template:
    metadata:
      labels:
        app: myappv1
    spec:
      containers:
      - image: myapp:v1
        name: myapp
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myappv1
  name: myappv1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myappv1
[root@k8s-master app]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 10h
myappv1 ClusterIP 10.97.60.74 <none> 80/TCP 46m
myappv2 ClusterIP 10.103.72.115 <none> 80/TCP 44m
[root@k8s-master app]#
2.建立ingress的yaml
[root@k8s-master app]# cp ingress1.yml ingress2.yml
[root@k8s-master app]# vim ingress2.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: myapp
spec:
  ingressClassName: nginx
  rules:
  - host: www.xiaoding.com
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /v1
        pathType: Prefix
      - backend:
          service:
            name: myappv2
            port:
              number: 80
        path: /v2
        pathType: Prefix
测试:
[root@k8s-master app]# echo 172.25.254.50 www.xiaoding.com >> /etc/hosts
[root@k8s-master app]#
[root@k8s-master app]# kubectl apply -f ingress2.yml
ingress.networking.k8s.io/myapp created
[root@k8s-master app]# kubectl describe ingress myapp
Name: myapp
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
www.xiaoding.com
/v1 myappv1:80 (10.244.36.69:80)
/v2 myappv2:80 (10.244.36.71:80)
Annotations: nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 6s nginx-ingress-controller Scheduled for sync
[root@k8s-master app]# curl www.xiaoding.com/v1
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master app]# curl www.xiaoding.com/v2
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master app]#
7.4.3.2 基于域名的访问
#在测试主机中设定解析
[root@k8s-master app]# vim /etc/hosts
172.25.254.50 www.xiaoding.com myappv1.xiaoding.com myappv2.xiaoding.com
[root@k8s-master app]# vim ingress3.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: myapp
spec:
  ingressClassName: nginx
  rules:
  - host: myappv1.xiaoding.com
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /v1
        pathType: Prefix
  - host: myappv2.xiaoding.com
    http:
      paths:
      - backend:
          service:
            name: myappv2
            port:
              number: 80
        path: /v2
        pathType: Prefix
建立控制器:
[root@k8s-master app]# kubectl apply -f ingress3.yml
ingress.networking.k8s.io/myapp created
[root@k8s-master app]# kubectl describe ingress myapp
Name: myapp
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
myappv1.xiaoding.com
/v1 myappv1:80 (10.244.36.69:80)
myappv2.xiaoding.com
/v2 myappv2:80 (10.244.36.71:80)
Annotations: nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 20s nginx-ingress-controller Scheduled for sync
[root@k8s-master app]#
#在测试主机中测试:
[root@k8s-master app]# curl myappv1.xiaoding.com/v1
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master app]# curl myappv2.xiaoding.com/v2
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master app]#
7.4.3.3 建立tls加密
建立证书:
[root@k8s-master app]# openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -subj "/CN=nginxsvc/O=nginxsvc" -out tls.crt
建立加密资源类型secret:
[root@k8s-master app]# kubectl create secret tls web-tls-secret --key tls.key --cert tls.crt
[root@k8s-master app]# kubectl get secrets
NAME TYPE DATA AGE
web-tls-secret kubernetes.io/tls 2 55s
secret通常在kubernetes中存放敏感数据,它并不是一种加密方式,后面会着重讲解。
建立ingress4基于tls认证的yml文件:
[root@k8s-master app]# vim ingress4.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: myapp
spec:
  tls:
  - hosts:
    - myapp-tls.xiaoding.com
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.xiaoding.com
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-master app]# kubectl apply -f ingress4.yml
ingress.networking.k8s.io/myapp created
[root@k8s-master app]# vim /etc/hosts
加上解析:
172.25.254.50 myapp-tls.xiaoding.com
[root@k8s-master app]# curl -k https://myapp-tls.xiaoding.com
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master app]#
7.4.3.4 建立auth认证
建立认证文件:
[root@k8s-master app]# dnf install httpd-tools -y
[root@k8s-master app]# htpasswd -cm auth ding
New password:
Re-type new password:
Adding password for user ding
[root@k8s-master app]# cat auth
ding:$apr1$dGC6IrRm$l00ajPpXJnuIS0OWKnmel0
[root@k8s-master app]#
建立认证类型资源:
[root@k8s-master app]# kubectl create secret generic auth-web --from-file auth
secret/auth-web created
[root@k8s-master app]# kubectl describe secrets auth-web
Name: auth-web
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
auth: 43 bytes
[root@k8s-master app]#
#建立ingress5基于用户认证的yaml文件:
[root@k8s-master app]# vim ingress5.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-type: "basic"
    nginx.ingress.kubernetes.io/auth-secret: "auth-web"
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
  name: myapp
spec:
  tls:
  - hosts:
    - myapp-tls.xiaoding.com
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.xiaoding.com
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-master app]# kubectl apply -f ingress5.yml
ingress.networking.k8s.io/myapp created
[root@k8s-master app]# kubectl describe ingress myapp
Name: myapp
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
TLS:
web-tls-secret terminates myapp-tls.xiaoding.com
Rules:
Host Path Backends
---- ---- --------
myapp-tls.xiaoding.com
/ myappv1:80 (10.244.36.69:80)
Annotations: nginx.ingress.kubernetes.io/auth-realm: Please input username and password
nginx.ingress.kubernetes.io/auth-secret: auth-web
nginx.ingress.kubernetes.io/auth-type: basic
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 17s nginx-ingress-controller Scheduled for sync
[root@k8s-master app]#
测试:
不加认证访问不到:
[root@k8s-master app]# curl -k https://myapp-tls.xiaoding.com
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx</center>
</body>
</html>
加上认证:
[root@k8s-master app]# curl -k https://myapp-tls.xiaoding.com -uding:123
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master app]#
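curl 的 -u 参数实际做的事情,就是把"用户名:密码"做 base64 编码后放进 Authorization 请求头,ingress-nginx 再用 secret 中的 htpasswd 条目校验。下面用一段示意性的 Python 片段复现这个请求头的构造过程:

```python
import base64

# 示意:curl -uding:123 实际发送的 HTTP Basic 认证头
def basic_auth_header(user, password):
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Authorization: Basic {token}"

print(basic_auth_header("ding", "123"))
# Authorization: Basic ZGluZzoxMjM=
```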
7.4.3.5 rewrite重定向
指定默认访问的文件到hostname.html上:
[root@k8s-master app]# vim ingress5.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/app-root: "/hostname.html"   # 加上这个
    nginx.ingress.kubernetes.io/auth-type: "basic"
    nginx.ingress.kubernetes.io/auth-secret: "auth-web"
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
  name: myapp
spec:
  tls:
  - hosts:
    - myapp-tls.xiaoding.com
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.xiaoding.com
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-master app]# kubectl apply -f ingress5.yml
ingress.networking.k8s.io/myapp created
[root@k8s-master app]# kubectl describe ingress myapp
Name: myapp
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
TLS:
web-tls-secret terminates myapp-tls.xiaoding.com
Rules:
Host Path Backends
---- ---- --------
myapp-tls.xiaoding.com
/ myappv1:80 (10.244.36.69:80)
Annotations: nginx.ingress.kubernetes.io/app-root: /hostname.html
nginx.ingress.kubernetes.io/auth-realm: Please input username and password
nginx.ingress.kubernetes.io/auth-secret: auth-web
nginx.ingress.kubernetes.io/auth-type: basic
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 9s nginx-ingress-controller Scheduled for sync
[root@k8s-master app]#
[root@k8s-master app]# curl -Lk https://myapp-tls.xiaoding.com -uding:123
myappv1-586444467f-jknvm
[root@k8s-master app]# curl -Lk https://myapp-tls.xiaoding.com/ding/hostname.html -uding:123
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.12.2</center>
</body>
</html>
[root@k8s-master app]#
#解决重定向路径问题:
[root@k8s-master app]# vim ingress5.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: "/$2"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/auth-type: "basic"
    nginx.ingress.kubernetes.io/auth-secret: "auth-web"
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
  name: myapp
spec:
  tls:
  - hosts:
    - myapp-tls.xiaoding.com
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.xiaoding.com
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /ding(/|$)(.*)   # 正则表达式匹配/ding/、/ding/abc
        pathType: ImplementationSpecific
此时加上路径就能访问到了:
[root@k8s-master app]# kubectl apply -f ingress5.yml
ingress.networking.k8s.io/myapp created
[root@k8s-master app]# curl -Lk https://myapp-tls.xiaoding.com/ding/hostname.html -uding:123
myappv1-586444467f-jknvm
[root@k8s-master app]#
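rewrite-target: /$2 配合 path: /ding(/|$)(.*) 的效果,是把正则第 2 个捕获组重写到根路径下。可以用一段示意性的 Python 正则片段复现这条重写规则的行为:

```python
import re

# 示意:复现 path: /ding(/|$)(.*) + rewrite-target: /$2 的重写效果
pattern = re.compile(r"^/ding(/|$)(.*)")

def rewrite(path):
    m = pattern.match(path)
    return "/" + m.group(2) if m else path   # $2 即第二个捕获组

print(rewrite("/ding/hostname.html"))   # -> /hostname.html
print(rewrite("/ding"))                 # -> /
print(rewrite("/other"))                # 不匹配,保持原样
```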
7.5 Canary金丝雀发布
7.5.1 什么是金丝雀发布
金丝雀发布(Canary Release)也称为灰度发布,是一种软件发布策略。
主要目的是在将新版本的软件全面推广到生产环境之前,先在一小部分用户或服务器上进行测试和验证,以降低因新版本引入重大问题而对整个系统造成的影响。
是一种Pod的发布方式。金丝雀发布采取先添加、再删除的方式,保证Pod的总量不低于期望值。并且在更新部分Pod后,暂停更新,当确认新Pod版本运行正常后再进行其他版本的Pod的更新。
7.5.2 Canary发布方式
其中基于header和基于weight的方式用得最多。
7.5.2.1 基于header(http包头)灰度
- 通过Annotation扩展
- 创建灰度ingress,配置灰度头部的key以及value
- 灰度流量验证完毕后,切换正式ingress到新版本
- 之前做升级时可以通过控制器做滚动更新(默认25%);利用header灰度可以使升级更为平滑,通过key和value测试新的业务体系是否有问题
示例:
建立ingress:
[root@k8s-master app]# vim ingress6.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myappv1
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.xiaoding.com
    http:
      paths:
      - backend:
          service:
            name: myappv1
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-master app]# kubectl apply -f ingress6.yml
ingress.networking.k8s.io/myappv1 created
[root@k8s-master app]# kubectl describe ingress myappv1
Name: myappv1
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
myapp.xiaoding.com
/ myappv1:80 (10.244.36.69:80)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 20s nginx-ingress-controller Scheduled for sync
[root@k8s-master app]#
建立基于header的ingress:
[root@k8s-master app]# vim ingress7.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "version"
    nginx.ingress.kubernetes.io/canary-by-header-value: "2"
  name: myappv2
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.xiaoding.com
    http:
      paths:
      - backend:
          service:
            name: myappv2
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-master app]# kubectl apply -f ingress7.yml
ingress.networking.k8s.io/myappv2 created
[root@k8s-master app]# kubectl describe ingress myappv2
Name: myappv2
Labels: <none>
Namespace: default
Address: 172.25.254.20
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
myapp.xiaoding.com
/ myappv2:80 (10.244.36.71:80)
Annotations: nginx.ingress.kubernetes.io/canary: true
nginx.ingress.kubernetes.io/canary-by-header: version
nginx.ingress.kubernetes.io/canary-by-header-value: 2
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 19s (x2 over 29s) nginx-ingress-controller Scheduled for sync
[root@k8s-master app]#
测试:
把域名添加到本地解析中:
[root@k8s-master app]# curl myapp.xiaoding.com
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master app]# curl -H "version: 2" myapp.xiaoding.com
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master app]#
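基于header的灰度本质上是一个请求头匹配判断:命中配置的key/value走灰度ingress,否则走正式ingress。下面用一段示意性的 Python 片段模拟这条路由逻辑(对应上面 canary-by-header: version、canary-by-header-value: "2" 的配置):

```python
# 示意:基于 header 的灰度路由判断
def route(headers, header_key="version", header_value="2"):
    if headers.get(header_key) == header_value:
        return "myappv2"   # 命中灰度条件,走 canary ingress
    return "myappv1"       # 否则走正式 ingress

print(route({}))                   # 普通请求 -> myappv1
print(route({"version": "2"}))     # 带灰度头 -> myappv2
print(route({"version": "1"}))     # 值不匹配 -> myappv1
```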
7.5.2.2 基于权重的灰度发布
- 通过Annotation扩展
- 创建灰度ingress,配置灰度权重以及总权重
- 灰度流量验证完毕后,切换正式ingress到新版本
示例:
上面创建的ingress6不要删掉,修改ingress7上面的配置就行。
建立基于权重的ingress:
[root@k8s-master app]# vim ingress7.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"   # 更改权重
    nginx.ingress.kubernetes.io/canary-weight-total: "100"
  name: myappv2
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.xiaoding.com
    http:
      paths:
      - backend:
          service:
            name: myappv2
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-master app]# kubectl apply -f ingress7.yml
ingress.networking.k8s.io/myappv2 created
测试:
[root@k8s-master app]# vim check_ingress.sh
#!/bin/bash
v1=0
v2=0
for (( i=0; i<100; i++ ))
do
    response=`curl -s myapp.xiaoding.com | grep -c v1`
    v1=`expr $v1 + $response`
    v2=`expr $v2 + 1 - $response`
done
echo "v1:$v1, v2:$v2"
[root@k8s-master app]# sh check_ingress.sh
v1:90, v2:10
[root@k8s-master app]#
#更改完毕权重后继续测试可观察变化
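基于权重的灰度可以理解为按概率抽签:每个请求以 weight/total 的概率进入灰度版本。下面用一段示意性的 Python 片段模拟 canary-weight: 10、canary-weight-total: 100 的分流效果(固定随机种子便于复现,与 nginx 的实际采样实现无关):

```python
import random

# 示意:权重灰度 ≈ 每个请求以 weight/total 的概率进入 v2
random.seed(42)   # 固定种子,结果可复现

def route(weight=10, total=100):
    return "v2" if random.randrange(total) < weight else "v1"

hits = [route() for _ in range(1000)]
print(hits.count("v1"), hits.count("v2"))   # 大约 900 / 100
```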
八、k8s中的存储
8.1 configmap
8.1.1 configmap的功能
- configMap用于保存配置数据,以键值对形式存储。
- configMap资源提供了向Pod注入配置数据的方法。
- 镜像和配置文件解耦,以便实现镜像的可移植性和可复用性。
- 受etcd限制,单个configMap的数据不能超过1M
8.1.2 configmap的使用
- 填充环境变量的值
- 设置容器内的命令行参数
- 填充卷的配置文件
8.1.3 configmap创建方式
8.1.3.1 字面值创建
[root@k8s-master ~]# kubectl create cm ding-config --from-literal fname=xiao --from-literal lname=ding
configmap/ding-config created
[root@k8s-master ~]# kubectl describe cm ding-config
Name: ding-config
Namespace: default
Labels: <none>
Annotations: <none>
Data ------- 键值信息
====
fname:
----
xiao
lname:
----
ding
BinaryData
====
Events: <none>
[root@k8s-master ~]#
8.1.3.2 通过文件创建
[root@k8s-master ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 114.114.114.114
[root@k8s-master ~]# kubectl create cm ding2-config --from-file /etc/resolv.conf
configmap/ding2-config created
[root@k8s-master ~]# kubectl describe cm ding2-config
Name: ding2-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
resolv.conf:
----
# Generated by NetworkManager
nameserver 114.114.114.114
BinaryData
====
Events: <none>
[root@k8s-master ~]#
8.1.3.3 通过目录创建
[root@k8s-master ~]# mkdir dingconfig
[root@k8s-master ~]# cp /etc/fstab /etc/rc.d/rc.local dingconfig/
[root@k8s-master ~]# kubectl create cm ding3-config --from-file dingconfig/
configmap/ding3-config created
[root@k8s-master ~]# kubectl describe cm ding3-config
Name: ding3-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
fstab:
----
#
# /etc/fstab
# Created by anaconda on Tue Jul 30 16:19:53 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/rhel-root / xfs defaults 0 0
UUID=b79e35cf-86e7-4886-bed3-ae7ac75312b3 /boot xfs defaults 0 0
#/dev/mapper/rhel-swap none swap defaults 0 0
rc.local:
----
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.
touch /var/lock/subsys/local
mount /dev/cdrom /rhel9
BinaryData
====
Events: <none>
[root@k8s-master ~]#
8.1.3.4 通过yaml文件创建
[root@k8s-master ~]# kubectl create cm ding4-config --from-literal db_host=172.25.254.100 --from-literal db_port=3306 --dry-run=client -o yaml > ding-config.yaml
[root@k8s-master ~]# vim ding-config.yaml
apiVersion: v1
data:
db_host: 172.25.254.100
db_port: "3306"
kind: ConfigMap
metadata:
name: ding4-config
[root@k8s-master ~]# kubectl apply -f ding-config.yaml
configmap/ding4-config created
[root@k8s-master ~]# kubectl describe cm ding4-config
Name: ding4-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
db_host:
----
172.25.254.100
db_port:
----
3306
BinaryData
====
Events: <none>
[root@k8s-master ~]#
8.1.4 configmap的使用方式
- 通过环境变量的方式直接传递给pod
- 通过pod的命令行运行方式
- 作为volume的方式挂载到pod内
8.1.4.1 使用configmap填充环境变量
将cm中的内容映射为指定变量:
[root@k8s-master ~]# vim testpod1.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: testpod
name: testpod
spec:
containers:
- image: busyboxplus:latest
name: testpod
command:
- /bin/sh
- -c
- env
env:
- name: key1
valueFrom:
configMapKeyRef:
name: ding4-config
key: db_host
- name: key2
valueFrom:
configMapKeyRef:
name: ding4-config
key: db_port
restartPolicy: Never
查看日志:
[root@k8s-master ~]# kubectl apply -f testpod1.yml
pod/testpod created
[root@k8s-master ~]# kubectl logs pods/testpod
MYAPPV1_PORT_80_TCP_ADDR=10.97.60.74
MYAPPV2_PORT_80_TCP_ADDR=10.103.72.115
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
MYAPPV1_PORT_80_TCP_PORT=80
MYAPPV2_PORT_80_TCP_PORT=80
MYAPPV1_PORT_80_TCP_PROTO=tcp
HOSTNAME=testpod
SHLVL=1
MYAPPV2_PORT_80_TCP_PROTO=tcp
HOME=/
MYAPPV1_PORT_80_TCP=tcp://10.97.60.74:80
MYAPPV2_PORT_80_TCP=tcp://10.103.72.115:80
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
key1=172.25.254.100
key2=3306
MYAPPV1_SERVICE_HOST=10.97.60.74
MYAPPV2_SERVICE_HOST=10.103.72.115
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
PWD=/
MYAPPV1_PORT=tcp://10.97.60.74:80
MYAPPV1_SERVICE_PORT=80
KUBERNETES_SERVICE_HOST=10.96.0.1
MYAPPV2_PORT=tcp://10.103.72.115:80
MYAPPV2_SERVICE_PORT=80
[root@k8s-master ~]#
把cm中的值直接映射为变量:
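testpod2.yml 的具体内容原文未给出;从下方日志中直接出现 db_host、db_port 两个同名变量来看,大致是通过 envFrom 引用整个 cm,示意如下(仅为推测写法,以实际文件为准):

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - env
    envFrom:               # 将 ding4-config 中所有键直接注入为同名环境变量
    - configMapRef:
        name: ding4-config
  restartPolicy: Never
```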
[root@k8s-master ~]# kubectl apply -f testpod2.yml
pod/testpod created
[root@k8s-master ~]# kubectl logs pods/testpod
MYAPPV1_PORT_80_TCP_ADDR=10.97.60.74
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
MYAPPV2_PORT_80_TCP_ADDR=10.103.72.115
MYAPPV1_PORT_80_TCP_PORT=80
MYAPPV1_PORT_80_TCP_PROTO=tcp
MYAPPV2_PORT_80_TCP_PORT=80
HOSTNAME=testpod
SHLVL=1
MYAPPV2_PORT_80_TCP_PROTO=tcp
HOME=/
db_port=3306
MYAPPV1_PORT_80_TCP=tcp://10.97.60.74:80
MYAPPV2_PORT_80_TCP=tcp://10.103.72.115:80
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
MYAPPV1_SERVICE_HOST=10.97.60.74
MYAPPV2_SERVICE_HOST=10.103.72.115
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
MYAPPV1_PORT=tcp://10.97.60.74:80
KUBERNETES_SERVICE_HOST=10.96.0.1
MYAPPV1_SERVICE_PORT=80
MYAPPV2_SERVICE_PORT=80
MYAPPV2_PORT=tcp://10.103.72.115:80
db_host=172.25.254.100
[root@k8s-master ~]#
在pod命令行中使用变量:
[root@k8s-master ~]# vim testpod3.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: testpod
name: testpod
spec:
containers:
- image: busyboxplus:latest
name: testpod
command:
- /bin/sh
- -c
- echo ${db_host} ${db_port}
envFrom:
- configMapRef:
name: ding4-config
restartPolicy: Never
查看日志:
[root@k8s-master ~]# kubectl apply -f testpod3.yml
pod/testpod created
[root@k8s-master ~]# kubectl logs pods/testpod
172.25.254.100 3306
[root@k8s-master ~]#
8.1.4.2 通过数据卷使用configmap
[root@k8s-master ~]# vim testpod4.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: testpod
name: testpod
spec:
containers:
- image: busyboxplus:latest
name: testpod
command:
- /bin/sh
- -c
- cat /config/db_host
volumeMounts: #调用卷策略
- name: config-volume #卷名称
mountPath: /config
volumes: #声明卷的配置
- name: config-volume #卷名称
configMap:
name: ding4-config
restartPolicy: Never
查看日志:
[root@k8s-master ~]# kubectl apply -f testpod4.yml
pod/testpod created
[root@k8s-master ~]# kubectl logs testpod
172.25.254.100
8.1.4.3 利用configMap填充pod的配置文件
建立配置文件模板:
[root@k8s-master ~]# vim nginx.conf
server {
listen 8000;
server_name _;
root /usr/share/nginx/html;
index index.html;
}
利用模板生成cm:
[root@k8s-master ~]# kubectl create cm nginx.conf --from-file nginx.conf
configmap/nginx.conf created
[root@k8s-master ~]# kubectl describe cm nginx.conf
Name: nginx.conf
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
nginx.conf:
----
server {
listen 8000;
server_name _;
root /usr/share/nginx/html;
index index.html;
}
BinaryData
====
Events: <none>
[root@k8s-master ~]#
建立nginx控制器文件:
[root@k8s-master ~]# kubectl create deployment myapp --image myapp:v1 --replicas 1 --dry-run=client -o yaml > nginx.yml
设定nginx.yml中的卷
[root@k8s-master ~]# vim nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: myapp
name: myapp
spec:
replicas: 1
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: myapp:v1
name: myapp
volumeMounts:
- name: config-volume
mountPath: /etc/nginx/conf.d
volumes:
- name: config-volume
configMap:
name: nginx.conf
[root@k8s-master ~]# kubectl apply -f nginx.yml
deployment.apps/myapp created
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-76d5b94497-2lcjq 1/1 Running 0 27s 10.244.36.75 k8s-node1 <none> <none>
[root@k8s-master ~]# curl 10.244.36.75:8000
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]#
8.1.4.3.1 通过热更新cm修改配置
[root@k8s-master ~]# kubectl edit cm nginx.conf
apiVersion: v1
data:
nginx.conf: |
server {
listen 8080; #端口修改为8080
server_name _;
root /usr/share/nginx/html;
index index.html;
}
kind: ConfigMap
metadata:
creationTimestamp: "2024-09-09T14:45:29Z"
name: nginx.conf
namespace: default
resourceVersion: "66186"
uid: 848c401c-6777-4458-ba8e-a628171b778e
查看配置文件:
[root@k8s-master ~]# kubectl exec pods/myapp-76d5b94497-2lcjq -- cat /etc/nginx/conf.d/nginx.conf
server {
listen 8080;
server_name _;
root /usr/share/nginx/html;
index index.html;
}
[root@k8s-master ~]#
!!!cm热更新后,挂载到pod中的配置文件内容会自动同步(如上所示),但nginx不会自动重新加载配置;删除pod(kubectl delete pod <pod名>)让控制器重建pod后,新端口才生效
[root@k8s-master ~]# curl 10.244.169.139:8080
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]#
8.2 secrets配置管理
8.2.1secrets的功能介绍
- Secret对象类型用来保存敏感信息,例如密码、OAuth令牌和ssh key。
- 敏感信息放在secret中比放在Pod的定义或者容器镜像中更加安全和灵活。
- Pod可以用两种方式使用secret:
- 作为volume中的文件被挂载到pod中的一个或者多个容器里。
- 当kubelet为pod拉取镜像时使用。
- Secret的类型:
- Service Account:Kubernetes自动创建包含访问API凭据的secret,并自动修改pod以使用此类型的secret。
- Opaque:使用base64编码存储信息,可以通过base64 --decode解码获得原始数据,因此安全性较弱。
- kubernetes.io/dockerconfigjson:用于存储docker registry的认证信息。
8.2.2 secrets的创建
在创建secrets时我们可以用命令的方法或者yaml文件的方法
8.2.2.1 从文件创建
[root@k8s-master ~]# mkdir secrets
[root@k8s-master ~]# cd secrets/
[root@k8s-master secrets]#
[root@k8s-master secrets]# echo -n xiaoding > username.txt
[root@k8s-master secrets]# echo -n ding > password.txt
[root@k8s-master secrets]# ls
password.txt username.txt
[root@k8s-master secrets]# kubectl create secret generic userlist --from-file username.txt --from-file password.txt
secret/userlist created
[root@k8s-master secrets]#
[root@k8s-master secrets]# kubectl get secrets userlist -o yaml
apiVersion: v1
data:
password.txt: ZGluZw==
username.txt: eGlhb2Rpbmc=
kind: Secret
metadata:
creationTimestamp: "2024-09-09T15:13:49Z"
name: userlist
namespace: default
resourceVersion: "69267"
uid: 7e480c8d-7fb3-41e1-bf92-77e4582360cc
type: Opaque
[root@k8s-master secrets]#
8.2.2.2 编写yaml文件
[root@k8s-master secrets]# echo -n xiaoding | base64
eGlhb2Rpbmc=
[root@k8s-master secrets]# echo -n ding | base64
ZGluZw==
[root@k8s-master secrets]# kubectl create secret generic userlist --dry-run=client -o yaml > userlist.yml
[root@k8s-master secrets]#
[root@k8s-master secrets]# vim userlist.yml
apiVersion: v1
kind: Secret
metadata:
name: userlist
type: Opaque
data:
username: eGlhb2Rpbmc=
password: ZGluZw==
[root@k8s-master secrets]# kubectl apply -f userlist.yml
secret/userlist unchanged
[root@k8s-master secrets]# kubectl describe secrets userlist
Name: userlist
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
username.txt: 8 bytes
password: 4 bytes
password.txt: 4 bytes
username: 8 bytes
[root@k8s-master secrets]#
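describe 只显示每个键的字节数;data 中保存的是 base64 编码值,可以直接解码验证(本机即可执行,不依赖集群):

```shell
# 解码 secret data 中的 base64 值,还原为明文
echo -n eGlhb2Rpbmc= | base64 -d    # 输出 xiaoding
echo -n ZGluZw== | base64 -d        # 输出 ding
```

集群中也可以用 `kubectl get secrets userlist -o jsonpath='{.data.username}' | base64 -d` 直接取出并解码某个键。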
8.2.3 Secret的使用方法
8.2.3.1 将Secret挂载到Volume中
[root@k8s-master secrets]# kubectl run nginx --image nginx --dry-run=client -o yaml > pod1.yaml
[root@k8s-master secrets]# vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: nginx
name: nginx
spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- name: secrets
mountPath: /secret
readOnly: true
volumes:
- name: secrets
secret:
secretName: userlist
[root@k8s-master secrets]# kubectl apply -f pod1.yaml
pod/nginx created
[root@k8s-master secrets]# kubectl exec pods/nginx -it -- /bin/bash
root@nginx:/# cd /secret/
root@nginx:/secret# ls
password password.txt username username.txt
root@nginx:/secret# cat password
ding
root@nginx:/secret# cat username
xiaoding
root@nginx:/secret#
8.2.3.2 向指定路径映射 secret 密钥
向指定路径映射:
[root@k8s-master secrets]# cp pod1.yaml pod2.yaml
[root@k8s-master secrets]# vim pod2.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: nginx1
name: nginx1
spec:
containers:
- image: nginx
name: nginx1
volumeMounts:
- name: secrets
mountPath: /secret
readOnly: true
volumes:
- name: secrets
secret:
secretName: userlist
items:
- key: username
path: my-users/username
[root@k8s-master secrets]# kubectl apply -f pod2.yaml
pod/nginx1 created
[root@k8s-master secrets]# kubectl exec pods/nginx1 -it -- /bin/bash
root@nginx1:/# cd /secret/
root@nginx1:/secret# ls
my-users
root@nginx1:/secret# cd my-users
root@nginx1:/secret/my-users# ls
username
root@nginx1:/secret/my-users# cat username
xiaoding
root@nginx1:/secret/my-users#
8.2.3.3 将Secret设置为环境变量
[root@k8s-master secrets]# cp pod1.yaml pod3.yaml
[root@k8s-master secrets]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: busybox
name: busybox
spec:
containers:
- image: busybox
name: busybox
command:
- /bin/sh
- -c
- env
env:
- name: USERNAME
valueFrom:
secretKeyRef:
name: userlist
key: username
- name: PASS
valueFrom:
secretKeyRef:
name: userlist
key: password
restartPolicy: Never
[root@k8s-master secrets]# kubectl apply -f pod3.yaml
pod/busybox created
[root@k8s-master secrets]# kubectl logs pods/busybox
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
HOSTNAME=busybox
SHLVL=1
HOME=/root
USERNAME=xiaoding
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
PASS=ding
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
[root@k8s-master secrets]#
8.2.3.4 存储docker registry的认证信息
建立私有仓库并上传镜像
登录仓库:
[root@k8s-master secrets]# docker login reg.timingding.org
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credential-stores
Login Succeeded
上传镜像:
[root@k8s-master secrets]# docker tag timinglee/game2048:latest reg.timingding.org/xiaoding/game2048:latest
[root@k8s-master secrets]# docker push reg.timingding.org/xiaoding/game2048:latest
建立用于docker认证的secret:
[root@k8s-master secrets]# kubectl create secret docker-registry my-docker-auth --docker-server reg.timingding.org --docker-username admin --docker-password 123456 --docker-email xiaoding@timingding.org
secret/my-docker-auth created
[root@k8s-master secrets]#
[root@k8s-master secrets]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: game2048
name: game2048
spec:
containers:
- image: reg.timingding.org/xiaoding/game2048:latest
name: game2048
imagePullSecrets: #不设定docker认证时无法下载镜像
- name: my-docker-auth
[root@k8s-master secrets]# kubectl delete pods busybox
pod "busybox" deleted
[root@k8s-master secrets]# kubectl get pods
NAME READY STATUS RESTARTS AGE
game2048 1/1 Running 0 38s
8.3 volumes配置管理
- 容器中文件在磁盘上是临时存放的,这给容器中运行的特殊应用程序带来一些问题
- 当容器崩溃时,kubelet将重新启动容器,容器中的文件将会丢失,因为容器会以干净的状态重建。
- 当在一个Pod中同时运行多个容器时,常常需要在这些容器之间共享文件。
- Kubernetes卷具有明确的生命周期,与使用它的Pod相同
- 卷比Pod中运行的任何容器的存活期都长,在容器重新启动时数据也会得到保留
- 当一个Pod不再存在时,卷也将不再存在。
- Kubernetes可以支持许多类型的卷,Pod也能同时使用任意数量的卷。
- 卷不能挂载到其他卷,也不能与其他卷有硬链接。Pod中的每个容器必须独立地指定每个卷的挂载位置。
8.3.1 kubernets支持的卷的类型
k8s支持的卷的类型如下:
- awsElasticBlockStore、azureDisk、azureFile、cephfs、cinder、configMap、csi
- downwardAPI、emptyDir、fc (fibre channel)、flexVolume、flocker
- gcePersistentDisk、gitRepo (deprecated)、glusterfs、hostPath、iscsi、local
- nfs、persistentVolumeClaim、projected、portworxVolume、quobyte、rbd
- scaleIO、secret、storageos、vsphereVolume
8.3.2 emptyDir卷
功能:
当Pod指定到某个节点上时,首先创建的是一个emptyDir卷,并且只要 Pod 在该节点上运行,卷就一直存在。卷最初是空的。 尽管 Pod 中的容器挂载 emptyDir 卷的路径可能相同也可能不同,但是这些容器都可以读写 emptyDir 卷中相同的文件。 当 Pod 因为某些原因被从节点上删除时,emptyDir 卷中的数据也会永久删除。
emptyDir 的使用场景:
- 缓存空间,例如基于磁盘的归并排序。
- 为耗时较长的计算任务提供检查点,以便任务能方便地从崩溃前状态恢复执行。
- 在Web服务器容器服务数据时,保存内容管理器容器获取的文件。
示例:
[root@k8s-master ~]# mkdir volumes
[root@k8s-master ~]# cd volumes/
[root@k8s-master volumes]# vim pod1.yml
apiVersion: v1
kind: Pod
metadata:
name: vol1
spec:
containers:
- image: busyboxplus:latest
name: vm1
command:
- /bin/sh
- -c
- sleep 30000000
volumeMounts:
- mountPath: /cache
name: cache-vol
- image: nginx:latest
name: vm2
volumeMounts:
- mountPath: /usr/share/nginx/html
name: cache-vol
volumes:
- name: cache-vol
emptyDir:
medium: Memory
sizeLimit: 100Mi
[root@k8s-master volumes]# kubectl apply -f pod1.yml
pod/vol1 created
可以查看卷的使用情况:
[root@k8s-master volumes]# kubectl describe pods vol1
测试效果:
[root@k8s-master volumes]# kubectl exec -it pods/vol1 -c vm1 -- /bin/sh
/ # cd /cache/
/cache # curl localhost
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
/cache # echo xiaoding > index.html
/cache # curl localhost
xiaoding
/cache # dd if=/dev/zero of=bigfile bs=1M count=101
dd: writing 'bigfile': No space left on device
101+0 records in
99+1 records out
/cache #
8.3.3 hostpath卷
功能:
hostPath 卷能将主机节点文件系统上的文件或目录挂载到您的 Pod 中,不会因为pod关闭而被删除
hostPath 的一些用法
- 运行一个需要访问Docker引擎内部机制的容器,挂载 /var/lib/docker 路径。
- 在容器中运行cAdvisor(监控)时,以hostPath方式挂载 /sys。
- 允许Pod指定给定的hostPath在运行Pod之前是否应该存在,是否应该创建以及应该以什么方式存在
hostPath的安全隐患
- 具有相同配置(例如从podTemplate创建)的多个Pod会由于节点上文件的不同而在不同节点上有不同的行为。
- 当Kubernetes按照计划添加资源感知的调度时,这类调度机制将无法考虑由hostPath使用的资源。
- 基础主机上创建的文件或目录只能由root用户写入。您需要在特权容器中以root身份运行进程,或者修改主机上的文件权限以便容器能够写入hostPath卷。
示例:
[root@k8s-master volumes]# cp pod1.yml pod2.yml
[root@k8s-master volumes]# vim pod2.yml
apiVersion: v1
kind: Pod
metadata:
name: vol1
spec:
containers:
- image: nginx:latest
name: vm1
volumeMounts:
- mountPath: /usr/share/nginx/html
name: cache-vol
volumes:
- name: cache-vol
hostPath:
path: /data
type: DirectoryOrCreate #当/data目录不存在时自动建立
测试:
[root@k8s-master volumes]# kubectl apply -f pod2.yml
pod/vol1 created
[root@k8s-master volumes]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
vol1 1/1 Running 0 14s 10.244.36.81 k8s-node1 <none> <none>
[root@k8s-master volumes]#
[root@k8s-master volumes]# curl 10.244.36.81
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
查看调度到了哪个Worker节点,到该节点上写入内容:
[root@k8s-node1 ~]# echo xiaoding > /data/index.html
[root@k8s-master volumes]# curl 10.244.36.81
xiaoding
[root@k8s-master volumes]#
当pod被删除后hostPath不会被清理:
[root@k8s-master volumes]# kubectl delete -f pod2.yml
pod "vol1" deleted
[root@k8s-master volumes]#
[root@k8s-node1 ~]# ls /data/
index.html
[root@k8s-node1 ~]#
8.3.4 nfs卷
NFS 卷允许将一个现有的 NFS 服务器上的目录挂载到 Kubernetes 中的 Pod 中。这对于在多个 Pod 之间共享数据或持久化存储数据非常有用
例如,如果有多个容器需要访问相同的数据集,或者需要将容器中的数据持久保存到外部存储,NFS 卷可以提供一种方便的解决方案。
8.3.4.1 部署一台nfs共享主机并在所有k8s节点中安装nfs-utils
部署nfs主机:
[root@docker-harbor ~]# cd /nfsdata/
[root@docker-harbor nfsdata]# dnf install nfs-utils -y
[root@docker-harbor nfsdata]# vim /etc/exports
[root@docker-harbor nfsdata]# exportfs -rv
exporting *:/nfsdata
[root@docker-harbor nfsdata]# showmount -e 172.25.254.150
Export list for 172.25.254.150:
/nfsdata *
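上面 /etc/exports 的具体内容原文未列出;结合 exportfs -rv 输出的 exporting *:/nfsdata,大致类似下面这样(括号中的共享参数属于假设,按需调整):

```
/nfsdata *(rw,sync,no_root_squash)
```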
在k8s所有节点中安装nfs-utils
[root@k8s-master & node1 & node2 ~]# dnf install nfs-utils -y
8.3.4.2 部署nfs卷
[root@k8s-master volumes]# vim pod3.yml
apiVersion: v1
kind: Pod
metadata:
name: vol1
spec:
containers:
- image: myapp:v1
name: vm1
volumeMounts:
- mountPath: /usr/share/nginx/html
name: cache-vol
volumes:
- name: cache-vol
nfs:
server: 172.25.254.150
path: /nfsdata
[root@k8s-master volumes]# kubectl apply -f pod3.yml
[root@k8s-master volumes]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
vol1 1/1 Running 0 5m32s 10.244.36.82 k8s-node1 <none> <none>
[root@k8s-master volumes]#
[root@k8s-master volumes]# curl 10.244.36.82
xiaoding
[root@k8s-master volumes]#
8.3.5 PersistentVolume持久卷
8.3.5.1 静态持久卷pv与静态持久卷声明pvc
PersistentVolume(持久卷,简称PV)
- pv是集群内由管理员提供的网络存储的一部分。
- PV也是集群中的一种资源,是一种volume插件,但它的生命周期和使用它的Pod相互独立。
- PV这个API对象,捕获了诸如NFS、ISCSI或其他云存储系统的实现细节。
- pv有两种提供方式:静态和动态
- 静态PV:集群管理员创建多个PV,它们携带着真实存储的详细信息,存在于Kubernetes API中,并可用于存储使用
- 动态PV:当管理员创建的静态PV都不匹配用户的PVC时,集群可能会尝试专门地供给volume给PVC,这种供给基于StorageClass
PersistentVolumeClaim(持久卷声明,简称PVC)
- 是用户的一种存储请求
- 它和Pod类似,Pod消耗Node资源,而PVC消耗PV资源
- Pod能够请求特定的资源(如CPU和内存),PVC能够请求指定大小和访问模式的持久卷
- PVC与PV的绑定是一对一的映射。如果没找到匹配的PV,PVC会无限期处于unbound未绑定状态
volumes访问模式
- ReadWriteOnce -- 该volume只能被单个节点以读写的方式映射
- ReadOnlyMany -- 该volume可以被多个节点以只读方式映射
- ReadWriteMany -- 该volume可以被多个节点以读写的方式映射
在命令行中,访问模式可以简写为:
- RWO -- ReadWriteOnce
- ROX -- ReadOnlyMany
- RWX -- ReadWriteMany
volumes回收策略
- Retain:保留,需要手动回收
- Recycle:回收,自动删除卷中数据(在当前版本中已经废弃)
- Delete:删除,相关联的存储资产,如AWS EBS、GCE PD、Azure Disk或OpenStack Cinder卷都会被删除
volumes状态说明
- Available:卷是一个空闲资源,尚未绑定到任何申领
- Bound:该卷已经绑定到某申领
- Released:所绑定的申领已被删除,但是关联存储资源尚未被集群回收
- Failed:卷的自动回收操作失败
静态pv实例:
在nfs主机上面建立实验目录:
[root@docker-harbor ~]# mkdir /nfsdata/pv{1..3}
编写yml文件,PV是集群资源,不在任何namespace中:
[root@k8s-master ~]# mkdir pvc
[root@k8s-master ~]# cd pvc/
[root@k8s-master pvc]#
[root@k8s-master pvc]# vim pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: nfs
nfs:
path: /nfsdata/pv1
server: 172.25.254.150
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv2
spec:
capacity:
storage: 15Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: nfs
nfs:
path: /nfsdata/pv2
server: 172.25.254.150
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv3
spec:
capacity:
storage: 25Gi
volumeMode: Filesystem
accessModes:
- ReadOnlyMany
persistentVolumeReclaimPolicy: Retain
storageClassName: nfs
nfs:
path: /nfsdata/pv3
server: 172.25.254.150
[root@k8s-master pvc]# kubectl apply -f pv.yml
persistentvolume/pv1 created
persistentvolume/pv2 created
persistentvolume/pv3 created
[root@k8s-master pvc]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pv1 5Gi RWO Retain Available nfs <unset> 13s
pv2 15Gi RWX Retain Available nfs <unset> 13s
pv3 25Gi ROX Retain Available nfs <unset> 13s
[root@k8s-master pvc]#
建立pvc,pvc是pv使用的申请,需要保证和pod在同一个namespace中:
[root@k8s-master pvc]# vim pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc1
spec:
storageClassName: nfs
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc2
spec:
storageClassName: nfs
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc3
spec:
storageClassName: nfs
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 25Gi   #pvc必须声明请求大小,此处按pv3容量补全
[root@k8s-master pvc]# kubectl apply -f pvc.yml
persistentvolumeclaim/pvc1 created
persistentvolumeclaim/pvc2 created
persistentvolumeclaim/pvc3 created
[root@k8s-master pvc]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
pvc1 Bound pv1 5Gi RWO nfs <unset> 20s
pvc2 Bound pv2 15Gi RWX nfs <unset> 20s
pvc3 Bound pv3 25Gi ROX nfs <unset> 20s
[root@k8s-master pvc]#
在其他namespace中无法应用:
[root@k8s-master pvc]# kubectl -n kube-system get pvc
No resources found in kube-system namespace.
[root@k8s-master pvc]#
在pod中使用pvc
[root@k8s-master pvc]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
name: xiaoding
spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- mountPath: /usr/share/nginx/html
name: vol1
volumes:
- name: vol1
persistentVolumeClaim:
claimName: pvc1
[root@k8s-master pvc]# kubectl apply -f pod.yml
pod/xiaoding created
[root@k8s-master pvc]# kubectl get pods
NAME READY STATUS RESTARTS AGE
xiaoding 1/1 Running 0 34s
[root@k8s-master pvc]#
[root@k8s-master pvc]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
xiaoding 1/1 Running 0 2m32s 10.244.36.85 k8s-node1 <none> <none>
现在访问不到:
[root@k8s-master pvc]# curl 10.244.36.85
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
[root@k8s-master pvc]#
到nfs主机上面:
[root@docker-harbor nfsdata]# echo xiaoding > pv1/index.html
再次访问:
[root@k8s-master pvc]# curl 10.244.36.85
xiaoding
[root@k8s-master pvc]#
也可以进到容器里面去查看:
[root@k8s-master pvc]# kubectl exec -it pods/xiaoding -- /bin/bash
root@xiaoding:/# curl localhost
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
root@xiaoding:/# cd /usr/share/nginx/html/
root@xiaoding:/usr/share/nginx/html# ls
8.3.6 存储类storageclass
8.3.6.1 StorageClass说明
- StorageClass提供了一种描述存储类(class)的方法,不同的class可能会映射到不同的服务质量等级和备份策略或其他策略等。
- 每个StorageClass都包含provisioner、parameters和reclaimPolicy字段,这些字段会在StorageClass需要动态分配PersistentVolume时使用到
8.3.6.2 StorageClass的属性
属性说明:存储类 | Kubernetes
Provisioner(存储分配器):用来决定使用哪个卷插件分配 PV,该字段必须指定。可以指定内部分配器,也可以指定外部分配器。外部分配器的代码地址为: kubernetes-incubator/external-storage,其中包括NFS和Ceph等。
Reclaim Policy(回收策略):通过reclaimPolicy字段指定创建的Persistent Volume的回收策略,回收策略包括:Delete 或者 Retain,没有指定默认为Delete。
8.3.6.3 存储分配器NFS Client Provisioner
- NFS Client Provisioner是一个automatic provisioner,使用NFS作为存储,自动创建PV和对应的PVC,本身不提供NFS存储,需要外部先有一套NFS存储服务。
- PV以 ${namespace}-${pvcName}-${pvName} 的命名格式提供(在NFS服务器上)
- PV回收的时候以 archived-${namespace}-${pvcName}-${pvName} 的命名格式(在NFS服务器上)
8.3.6.4 部署NFS Client Provisioner
8.3.6.4.1 创建sa并授权
[root@k8s-master ~]# mkdir storageclass
[root@k8s-master ~]# cd storageclass/
[root@k8s-master storageclass]# vim rbac.yml
apiVersion: v1
kind: Namespace
metadata:
name: nfs-client-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
namespace: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
namespace: nfs-client-provisioner
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
namespace: nfs-client-provisioner
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
namespace: nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
namespace: nfs-client-provisioner
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io
[root@k8s-master storageclass]# kubectl apply -f rbac.yml
namespace/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
查看rbac信息:
[root@k8s-master storageclass]# kubectl -n nfs-client-provisioner get sa
NAME SECRETS AGE
default 0 20s
nfs-client-provisioner 0 20s
[root@k8s-master storageclass]#
8.3.6.4.2 部署应用
拉取已有的镜像:
[root@k8s-master storageclass]# docker load -i nfs-subdir-external-provisioner-4.0.2.tar
上传镜像:
[root@k8s-master storageclass]# docker tag registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 reg.timingding.org/sig-storage/nfs-subdir-external-provisioner:v4.0.2
[root@k8s-master storageclass]# docker push reg.timingding.org/sig-storage/nfs-subdir-external-provisioner:v4.0.2
创建控制器:
[root@k8s-master storageclass]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
namespace: nfs-client-provisioner
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: reg.timingding.org/sig-storage/nfs-subdir-external-provisioner:v4.0.2   #使用上面推送到harbor的镜像
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: k8s-sigs.io/nfs-subdir-external-provisioner
- name: NFS_SERVER
value: 172.25.254.150
- name: NFS_PATH
value: /nfsdata
volumes:
- name: nfs-client-root
nfs:
server: 172.25.254.150
path: /nfsdata
[root@k8s-master storageclass]# kubectl apply -f deployment.yml
deployment.apps/nfs-client-provisioner created
[root@k8s-master storageclass]# kubectl -n nfs-client-provisioner get deployments.apps nfs-client-provisioner
NAME READY UP-TO-DATE AVAILABLE AGE
nfs-client-provisioner 1/1 1 1 19s
[root@k8s-master storageclass]#
8.3.6.4.3 创建存储类
[root@k8s-master storageclass]# vim class.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner #这个标签和上面的value保持一致
parameters:
archiveOnDelete: "false"
[root@k8s-master storageclass]# kubectl apply -f class.yml
storageclass.storage.k8s.io/nfs-client created
[root@k8s-master storageclass]# kubectl get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 15s
[root@k8s-master storageclass]#
8.3.6.4.4 创建PVC
[root@k8s-master storageclass]# vim pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim
spec:
storageClassName: nfs-client
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1G
[root@k8s-master storageclass]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
test-claim Bound pvc-8bce5f75-fb9d-4649-8e1d-b05d35de516f 1G RWX nfs-client <unset> 8s
[root@k8s-master storageclass]#
8.3.6.4.5 创建测试pod
[root@k8s-master storageclass]# vim pod.yml
kind: Pod
apiVersion: v1
metadata:
name: test-pod
spec:
containers:
- name: test-pod
image: busybox
command:
- "/bin/sh"
args:
- "-c"
- "touch /mnt/SUCCESS && exit 0 || exit 1"
volumeMounts:
- name: nfs-pvc
mountPath: "/mnt"
restartPolicy: "Never"
volumes:
- name: nfs-pvc
persistentVolumeClaim:
claimName: test-claim
去nfs主机上面查看:
[root@docker-harbor harbor]# ls /nfsdata/default-test-claim-pvc-8bce5f75-fb9d-4649-8e1d-b05d35de516f/
SUCCESS
[root@docker-harbor harbor]#
8.3.6.4.6 设置默认存储类
- 在未设定默认存储类时,pvc必须指定所使用存储类的名称
- 在设定默认存储类后,创建pvc时可以不指定storageClassName
一次性指定多个pvc:
[root@k8s-master storageclass]# vim pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: test-claim
spec:
storageClassName: nfs-client
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1G
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc1
spec:
storageClassName: nfs-client
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc2
spec:
storageClassName: nfs-client
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc3
spec:
storageClassName: nfs-client
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 15Gi
[root@k8s-master storageclass]# kubectl apply -f pvc.yml
persistentvolumeclaim/test-claim created
persistentvolumeclaim/pvc1 created
persistentvolumeclaim/pvc2 created
persistentvolumeclaim/pvc3 created
[root@k8s-master storageclass]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
pvc1 Bound pvc-33359779-28e9-47ce-885c-9fc08d7f95be 1Gi RWO nfs-client <unset> 7s
pvc2 Bound pvc-259dbcdf-d220-4d1c-859b-1118c61001fe 10Gi RWX nfs-client <unset> 7s
pvc3 Bound pvc-0d9c5fff-d1b2-4661-be16-f514a094e30c 15Gi ROX nfs-client <unset> 7s
test-claim Bound pvc-aa39a285-0588-479d-b798-fd3894320830 1G RWX nfs-client <unset> 7s
[root@k8s-master storageclass]#
设定默认存储类
[root@k8s-master storageclass]# kubectl edit sc nfs-client
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"nfs-client"},"parameters":{"archiveOnDelete":"false"},"provisioner":"k8s-sigs.io/nfs-subdir-external-provisioner"}
    storageclass.kubernetes.io/is-default-class: "true"   #设定默认存储类
  creationTimestamp: "2024-09-10T02:18:21Z"
  name: nfs-client
  resourceVersion: "85408"
  uid: 45ef4339-4b29-40db-975d-862ef85b364d
parameters:
  archiveOnDelete: "false"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
再查看存储类,NAME 一栏会多出 (default) 标记:
[root@k8s-master storageclass]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client (default) k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 18m
[root@k8s-master storageclass]#
8.4 statefulset控制器
8.4.1 功能特性
- StatefulSet是为了解决有状态服务的管理问题而设计的
- StatefulSet将应用状态抽象成了两种情况:
  - 拓扑状态:应用实例必须按照某种顺序启动,新创建的Pod必须和原来Pod的网络标识一样
  - 存储状态:应用的多个实例分别绑定了不同的存储数据
- StatefulSet给所有的Pod进行了编号,编号规则是:(statefulset名称)-(序号),从0开始
- Pod被删除后重建,重建Pod的网络标识也不会改变。Pod的拓扑状态按照Pod的"名字+编号"的方式固定下来,并且为每个Pod提供了一个固定且唯一的访问入口,即Pod对应的DNS记录
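上面的编号与DNS命名规则可以用一小段演示脚本直观表达(仅为示意,不是 k8s 的 API 调用):

```python
# 演示 StatefulSet 的Pod编号与DNS命名规则(示意脚本)
def pod_names(sts_name: str, replicas: int) -> list:
    """编号规则:(statefulset名称)-(序号),从0开始"""
    return [f"{sts_name}-{i}" for i in range(replicas)]

def pod_dns(pod: str, headless_svc: str, namespace: str = "default") -> str:
    """每个Pod通过无头服务获得固定且唯一的DNS记录"""
    return f"{pod}.{headless_svc}.{namespace}.svc.cluster.local"

print(pod_names("web", 3))            # ['web-0', 'web-1', 'web-2']
print(pod_dns("web-0", "nginx-svc"))  # web-0.nginx-svc.default.svc.cluster.local
```

这也解释了后文中为什么在测试pod里能用 web-0.nginx-svc 这样的短域名访问(同命名空间下可省略后缀)。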
8.4.2 StatefulSet的组成部分
- Headless Service:用来定义Pod的网络标识,生成可解析的DNS记录
- volumeClaimTemplates:创建pvc,指定pvc名称和大小;pvc自动创建且由存储类供应
- StatefulSet:用来管理Pod
8.4.3 构建方法
建立无头服务:
[root@k8s-master ~]# mkdir statefulset
[root@k8s-master ~]# cd statefulset/
[root@k8s-master statefulset]# vim headless.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
[root@k8s-master statefulset]# kubectl apply -f headless.yml
service/nginx-svc created
[root@k8s-master statefulset]#
建立statefulset:
[root@k8s-master statefulset]# vim statefulset.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-svc"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      storageClassName: nfs-client
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
[root@k8s-master statefulset]# kubectl apply -f statefulset.yml
statefulset.apps/web created
[root@k8s-master statefulset]# kubectl get pods
NAME READY STATUS RESTARTS AGE
test-pod 0/1 Completed 0 49s
web-0 1/1 Running 0 14s
web-1 1/1 Running 0 13s
web-2 1/1 Running 0 9s
[root@k8s-master statefulset]#
去nfs主机上面查看是否自动创建:
[root@docker-harbor nfsdata]# ls
default-pvc1-pvc-20db511e-abf7-4904-8f02-40306a28ca74
default-pvc2-pvc-a6356667-3d14-471b-b650-86ab977ae3c9
default-pvc3-pvc-5fc3ab54-9b60-4c24-8190-1581f8a5942a
default-test-claim-pvc-4b6de616-d99b-45a4-9095-87ec8b01e13c
default-www-web-0-pvc-5a90e8ad-de4e-45ba-bdc7-70b3edcdbfe6
default-www-web-1-pvc-19857fe3-8951-452d-a214-ed9dd5aff7d1
default-www-web-2-pvc-0f6ee266-b609-451b-ab20-43bb003937ea
[root@docker-harbor nfsdata]#
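可以看到 volumeClaimTemplates 生成的PVC名为 www-web-0 这种"模板名-statefulset名-序号"的形式,而共享目录名则是"命名空间-PVC名-PV名"拼接出来的。这两条命名规则可以用示意脚本表达(仅为演示,并非 provisioner 的真实代码):

```python
# nfs-subdir-external-provisioner 目录命名规则的示意脚本
def pvc_name(template: str, sts: str, ordinal: int) -> str:
    """volumeClaimTemplates 生成的PVC名:<模板名>-<statefulset名>-<序号>"""
    return f"{template}-{sts}-{ordinal}"

def nfs_dir(namespace: str, pvc: str, pv: str) -> str:
    """NFS共享目录名:<命名空间>-<PVC名>-<PV名>"""
    return f"{namespace}-{pvc}-{pv}"

name = pvc_name("www", "web", 0)
print(name)  # www-web-0
print(nfs_dir("default", name, "pvc-5a90e8ad-de4e-45ba-bdc7-70b3edcdbfe6"))
# default-www-web-0-pvc-5a90e8ad-de4e-45ba-bdc7-70b3edcdbfe6
```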
8.4.4 测试
为每个pod建立index.html文件:
[root@docker-harbor nfsdata]# echo web-0 > default-www-web-0-pvc-5a90e8ad-de4e-45ba-bdc7-70b3edcdbfe6/index.html
[root@docker-harbor nfsdata]# echo web-1 > default-www-web-1-pvc-19857fe3-8951-452d-a214-ed9dd5aff7d1/index.html
[root@docker-harbor nfsdata]# echo web-2 > default-www-web-2-pvc-0f6ee266-b609-451b-ab20-43bb003937ea/index.html
[root@docker-harbor nfsdata]#
建立测试pod访问web-0~2:
[root@k8s-master statefulset]# kubectl run -it testpod --image busyboxplus
If you don't see a command prompt, try pressing enter.
/ # curl web-0.nginx-svc
web-0
/ # curl web-1.nginx-svc
web-1
/ # curl web-2.nginx-svc
web-2
/ #
删掉重新建立statefulset:
[root@k8s-master statefulset]# kubectl delete -f statefulset.yml
statefulset.apps "web" deleted
[root@k8s-master statefulset]# kubectl apply -f statefulset.yml
statefulset.apps/web created
[root@k8s-master statefulset]#
访问依然不变:
[root@k8s-master statefulset]# kubectl attach testpod -c testpod -i -t
If you don't see a command prompt, try pressing enter.
/ # curl web-0.nginx-svc
web-0
/ # curl web-1.nginx-svc
web-1
/ # curl web-2.nginx-svc
web-2
/ #
也可以访问IP:
[root@k8s-master statefulset]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-0 1/1 Running 0 6m58s 10.244.36.94 k8s-node1 <none> <none>
web-1 1/1 Running 0 6m57s 10.244.169.144 k8s-node2 <none> <none>
web-2 1/1 Running 0 6m52s 10.244.36.95 k8s-node1 <none> <none>
[root@k8s-master statefulset]#
[root@k8s-master statefulset]# curl 10.244.36.94
web-0
[root@k8s-master statefulset]# curl 10.244.169.144
web-1
[root@k8s-master statefulset]# curl 10.244.36.95
web-2
[root@k8s-master statefulset]#
把副本数改为0,Pod就都没有了:
[root@k8s-master statefulset]# kubectl scale statefulset web --replicas 0
statefulset.apps/web scaled
[root@k8s-master statefulset]# kubectl get pods -o wide
No resources found in default namespace.
[root@k8s-master statefulset]#
再改回来:
[root@k8s-master statefulset]# kubectl scale statefulset web --replicas 3
[root@k8s-master statefulset]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-0 1/1 Running 0 6s 10.244.36.96 k8s-node1 <none> <none>
web-1 1/1 Running 0 4s 10.244.169.145 k8s-node2 <none> <none>
web-2 1/1 Running 0 2s 10.244.36.97 k8s-node1 <none> <none>
[root@k8s-master statefulset]#
8.4.5 statefulset的弹缩
首先,在对某个StatefulSet进行弹缩之前,需要先确认该应用是否适合弹缩。
用命令改变副本数
$ kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
示例:
设置2,就只有两个了
[root@k8s-master statefulset]# kubectl scale statefulset web --replicas 2
statefulset.apps/web scaled
[root@k8s-master statefulset]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-0 1/1 Running 0 3m3s 10.244.36.96 k8s-node1 <none> <none>
web-1 1/1 Running 0 3m1s 10.244.169.145 k8s-node2 <none> <none>
[root@k8s-master statefulset]#
设置4,就会再多创建两个:
[root@k8s-master statefulset]# kubectl scale statefulset web --replicas 4
[root@k8s-master statefulset]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-0 1/1 Running 0 3m57s 10.244.36.96 k8s-node1 <none> <none>
web-1 1/1 Running 0 3m55s 10.244.169.145 k8s-node2 <none> <none>
web-2 1/1 Running 0 8s 10.244.36.98 k8s-node1 <none> <none>
web-3 1/1 Running 0 6s 10.244.169.146 k8s-node2 <none> <none>
[root@k8s-master statefulset]#
通过编辑配置改变副本数
$ kubectl edit statefulsets.apps <stateful-set-name>
statefulset有序回收
[root@k8s-master statefulset]# kubectl scale statefulset web --replicas 0
statefulset.apps/web scaled
[root@k8s-master statefulset]# kubectl delete -f statefulset.yml
statefulset.apps "web" deleted
[root@k8s-master statefulset]# kubectl delete pvc --all
persistentvolumeclaim "pvc1" deleted
persistentvolumeclaim "pvc2" deleted
persistentvolumeclaim "pvc3" deleted
persistentvolumeclaim "test-claim" deleted
persistentvolumeclaim "www-web-0" deleted
persistentvolumeclaim "www-web-1" deleted
persistentvolumeclaim "www-web-2" deleted
persistentvolumeclaim "www-web-3" deleted
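StatefulSet 的有序回收是按序号从大到小逐个终止Pod的(默认的 OrderedReady 策略),和创建时从0开始的顺序正好相反。这个顺序可以用一段示意脚本表达(仅为演示,并非控制器的真实实现):

```python
# StatefulSet 缩容顺序的示意:按序号从大到小逐个终止
def scale_down_order(sts_name: str, current: int, target: int) -> list:
    """从 current 个副本缩到 target 个时,Pod被终止的先后顺序"""
    return [f"{sts_name}-{i}" for i in range(current - 1, target - 1, -1)]

print(scale_down_order("web", 3, 0))  # ['web-2', 'web-1', 'web-0']
print(scale_down_order("web", 4, 2))  # ['web-3', 'web-2']
```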