Kubernetes

Part One: Introduction to Kubernetes and Deployment

1 The Evolution of Application Deployment

Application deployment has gone through three main stages:

Traditional deployment: in the early days of the internet, applications were deployed directly on physical machines.

  • Pros: simple; no other technology needs to be involved
  • Cons: resource usage boundaries cannot be defined per application, making it hard to allocate compute resources sensibly, and applications easily interfere with one another

Virtualized deployment: multiple virtual machines run on one physical machine, each VM being an isolated environment.

  • Pros: application environments do not affect each other, providing a degree of security
  • Cons: every VM adds a full operating system, wasting some resources

Containerized deployment: similar to virtualization, but containers share the host operating system.

Containerized deployment brings many conveniences, but also raises new problems, for example:

  • When a container fails and stops, how do we start a replacement container immediately?
  • When concurrent traffic grows, how do we scale the number of containers horizontally?

2 Container Orchestration Tools

Container orchestration software emerged to solve these problems:

  • Swarm: Docker's own container orchestration tool
  • Mesos: an Apache tool for unified resource management; it must be used together with Marathon
  • Kubernetes: Google's open-source container orchestration tool

3 Introduction to Kubernetes

  • While Docker was rapidly developing as a high-level container engine, container technology had already been in use inside Google for many years.
  • Google's Borg system runs and manages many thousands of container applications.
  • The Kubernetes project originated from Borg; it distills the essence of Borg's design and incorporates the experience and lessons learned from operating Borg.
  • Kubernetes abstracts compute resources at a higher level and, by carefully composing containers, delivers the final application service to users.

In essence, Kubernetes is a server cluster: it runs specific programs on every node of the cluster to manage the containers on that node. Its goal is to automate resource management, and it provides the following main features:

  • Self-healing: when a container crashes, a replacement container is started within about a second
  • Elastic scaling: the number of running containers in the cluster can be adjusted automatically as needed
  • Service discovery: a service can locate the services it depends on automatically
  • Load balancing: if a service is backed by multiple containers, requests are load-balanced across them automatically
  • Version rollback: if a newly released version has problems, you can roll back to the previous version immediately
  • Storage orchestration: storage volumes can be created automatically based on the containers' needs
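Several of these features are consumed through a Deployment manifest. A minimal sketch (the name `web-demo` and all values are placeholders, not from this tutorial): the `replicas` field drives elastic scaling, and a `livenessProbe` drives self-healing restarts.

```shell
# Sketch: a Deployment manifest exercising elastic scaling (replicas) and
# self-healing (livenessProbe). Names and values are illustrative only.
cat > web-demo.yml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo
spec:
  replicas: 3                 # elastic scaling: change this number to scale out/in
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        livenessProbe:        # self-healing: restart the container if this check fails
          httpGet:
            path: /
            port: 80
          periodSeconds: 5
EOF
```

Applying it with `kubectl apply -f web-demo.yml` would let the controller keep three healthy replicas running.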

4 Kubernetes Architecture

1.4.1 Component Roles

A Kubernetes cluster consists of control-plane nodes (master) and worker nodes (node); different components are installed on each kind of node.

1 master: the cluster's control plane, responsible for cluster-wide decisions

  • ApiServer: the single entry point for resource operations; receives user commands and provides authentication, authorization, API registration and discovery, etc.
  • Scheduler: responsible for cluster resource scheduling; places Pods on the appropriate node according to the configured scheduling policy
  • ControllerManager: responsible for maintaining cluster state, e.g. deployment rollout, failure detection, automatic scaling and rolling updates
  • Etcd: stores the information of all resource objects in the cluster

2 node: the cluster's data plane, responsible for providing the runtime environment for containers

  • kubelet: maintains the container lifecycle, and also manages volumes (CVI) and networking (CNI)
  • Container runtime: manages images and actually runs Pods and containers (CRI)
  • kube-proxy: provides in-cluster service discovery and load balancing for Services

1.4.2 How the Components Interact

Take running a web service as an example:

  1. Once the Kubernetes environment starts, both master and node store their own information in the etcd database.
  2. The request to install the web service is first sent to the apiServer component on the master node.
  3. The apiServer calls the scheduler component to decide which node the service should be placed on.
     At this point the scheduler reads the node information from etcd, selects a node according to its algorithm, and reports the result back to the apiServer.
  4. The apiServer calls the controller-manager to have the web service deployed on that node.
  5. The kubelet on that node receives the instruction and tells docker to start a Pod running the web service.
  6. To access the web service, traffic is proxied to the Pod through kube-proxy.

1.4.3 Common Kubernetes Terms

  • Master: a cluster control node; every cluster needs at least one master responsible for managing the cluster
  • Node: a worker node; the master assigns containers to these nodes, and the container runtime on each node then runs them
  • Pod: the smallest unit Kubernetes manages; containers always run inside Pods, and one Pod can hold one or more containers
  • Controller: the mechanism for managing Pods, e.g. starting Pods, stopping Pods, scaling the number of Pods
  • Service: the unified entry point for a set of Pods; it fronts multiple Pods of the same kind
  • Label: used to classify Pods; Pods of the same kind carry the same labels
  • NameSpace: used to isolate the running environments of Pods

1.4.4 Kubernetes Layered Architecture

  • Core layer: Kubernetes' core functionality; exposes APIs for building higher-level applications and provides a plugin-based application execution environment internally
  • Application layer: deployment (stateless apps, stateful apps, batch jobs, clustered apps, etc.) and routing (service discovery, DNS resolution, etc.)
  • Management layer: system metrics (infrastructure, container and network metrics), automation (auto-scaling, dynamic provisioning, etc.) and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.)
  • Interface layer: the kubectl command-line tool, client SDKs and cluster federation
  • Ecosystem: the large ecosystem for container cluster management and scheduling above the interface layer, which splits into two parts:
      • Outside Kubernetes: logging, monitoring, configuration management, CI, CD, workflow, FaaS, OTS applications, ChatOps, etc.
      • Inside Kubernetes: CRI, CNI, CVI, image registry, Cloud Provider, cluster configuration and management, etc.

Part Two: Building a K8S Cluster

2.1 Container Runtimes in K8S

A K8S cluster can be created on one of three container runtimes:

containerd

The default runtime K8S uses when creating a cluster.

docker

Docker has the largest install base. Although K8S removed kubelet's built-in Docker support in version 1.24, clusters can still be created on Docker via cri-dockerd.

cri-o

CRI-O is the most direct way for Kubernetes to create containers; the cri-o plugin is used when creating the cluster.

For both docker and cri-o, the kubelet startup parameters must be configured accordingly.

2.2 Deploying the K8S Cluster

2.2.1 Environment Overview

Official Kubernetes documentation (Chinese): https://kubernetes.io/zh-cn/

Hostname                    IP              Role
harbor.timinglee.org        172.25.254.254  harbor registry
k8s-master.timinglee.org    172.25.254.100  master, cluster control-plane node
k8s-node1.timinglee.org     172.25.254.10   worker, cluster worker node
k8s-node2.timinglee.org     172.25.254.20   worker, cluster worker node

  • Disable selinux and the firewall on all nodes
  • Synchronize time and name resolution on all nodes
  • Install docker-ce on all nodes
  • Disable swap on all nodes, remembering to comment out its entry in /etc/fstab

2.2.2 Cluster Initialization

Perform the following steps on every cluster node.

2.2.2.1 Disable swap and set up local name resolution on all nodes
]# systemctl mask swap.target
]# swapoff -a
]# vim /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sun Feb 19 17:38:40 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root   /                       xfs     defaults        0 0
UUID=ddb06c77-c9da-4e92-afd7-53cd76e6a94a /boot                   xfs     defaults        0 0
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
/dev/cdrom      /media  iso9660 defaults        0 0
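Commenting out the swap entry can also be done non-interactively instead of editing with vim. A sketch, shown against a sample file so it can be dry-run safely (on a real node you would target /etc/fstab):

```shell
# Sketch: comment out the swap entry in an fstab-style file with sed.
# fstab.sample stands in for /etc/fstab here.
printf '%s\n' \
  '/dev/mapper/rhel-root   /      xfs    defaults   0 0' \
  '/dev/mapper/rhel-swap   swap   swap   defaults   0 0' > fstab.sample

sed -i '/swap/ s/^[^#]/#&/' fstab.sample   # prefix any uncommented swap line with '#'

grep swap fstab.sample                     # the swap line should now start with '#'
```

Combined with `swapoff -a`, this keeps swap disabled across reboots.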




[root@k8s-master ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.100      k8s-master.timinglee.org
172.25.254.10       k8s-node1.timinglee.org
172.25.254.20       k8s-node2.timinglee.org
172.25.254.254      reg.timinglee.org
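The hosts entries above can also be appended idempotently from a script, which is handy when preparing several nodes. A sketch against a sample file (point it at /etc/hosts on a real node):

```shell
# Sketch: append cluster name-resolution entries only if they are not
# already present. hosts.sample stands in for /etc/hosts here.
hosts=hosts.sample
printf '127.0.0.1   localhost\n' > "$hosts"

while read -r entry; do
  grep -qF "$entry" "$hosts" || echo "$entry" >> "$hosts"
done <<'EOF'
172.25.254.100      k8s-master.timinglee.org
172.25.254.10       k8s-node1.timinglee.org
172.25.254.20       k8s-node2.timinglee.org
172.25.254.254      reg.timinglee.org
EOF
```

Running the loop a second time adds nothing, so it is safe in provisioning scripts.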
2.2.2.2 Install docker on all nodes
[root@k8s-master ~]# vim /etc/yum.repos.d/docker.repo
[docker]
name=docker
baseurl=https://mirrors.aliyun.com/docker-ce/linux/rhel/9/x86_64/stable/
gpgcheck=0

[root@k8s-master ~]# dnf install docker-ce -y
2.2.2.3 Set docker's cgroup driver to systemd on all nodes
[root@k8s-master ~]# vim /etc/docker/daemon.json
{
        "registry-mirrors": ["https://reg.timinglee.org"],
        "exec-opts": ["native.cgroupdriver=systemd"],
        "log-driver": "json-file",
        "log-opts": {
                "max-size": "100m"
        },
        "storage-driver": "overlay2"
}
2.2.2.4 Copy the harbor registry certificate to all nodes and start docker
[root@k8s-master ~]# ls -l /etc/docker/certs.d/reg.timinglee.org/ca.crt
[root@k8s-master ~]# systemctl enable --now docker

#log in to the harbor registry
[root@k8s-master ~]# docker login reg.timinglee.org
[root@k8s-master ~]# docker info
Client: Docker Engine - Community
 Version:    27.1.2
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.16.2
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.29.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 27.1.2
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd			#cgroup driver changed to systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 8fc6bcff51318944179630522a095cc9dbf9f353
 runc version: v1.1.13-0-g58aa920
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 5.14.0-427.13.1.el9_4.x86_64
 Operating System: Red Hat Enterprise Linux 9.4 (Plow)
 OSType: linux
 Architecture: x86_64
 CPUs: 1
 Total Memory: 736.3MiB
 Name: k8s-master.timinglee.org
 ID: f3c291bf-287d-4cf6-8e69-5f21c79fa7c6
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Registry Mirrors:
  https://reg.timinglee.org/			#the registry mirror configured above
 Live Restore Enabled: false
2.2.2.5 Install the K8S deployment tools

#configure the package repository: add the K8S source
[root@k8s-master ~]# vim /etc/yum.repos.d/k8s.repo
[k8s]
name=k8s
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm
gpgcheck=0

#安装软件
[root@k8s-master ~]# dnf install kubelet-1.30.0 kubeadm-1.30.0 kubectl-1.30.0 -y
2.2.2.6 Enable kubectl command completion
[root@k8s-master ~]# dnf install bash-completion -y
[root@k8s-master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@k8s-master ~]# source  ~/.bashrc
2.2.2.7 Install cri-docker on all nodes

Kubernetes removed dockershim in version 1.24, so the cri-dockerd plugin is required to keep using docker.

Download: https://github.com/Mirantis/cri-dockerd

[root@k8s-master ~]# dnf install libcgroup-0.41-19.el8.x86_64.rpm \
> cri-dockerd-0.3.14-3.el8.x86_64.rpm -y

[root@k8s-master ~]# vim /lib/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify

#specify the network plugin and the pause (infra) container image
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=reg.timinglee.org/k8s/pause:3.9

ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl start cri-docker
[root@k8s-master ~]# ll /var/run/cri-dockerd.sock
srw-rw---- 1 root docker 0 Aug 26 22:14 /var/run/cri-dockerd.sock		#cri-dockerd socket file
2.2.2.8 Pull the required K8S images on the master node

#pull the images required by the k8s cluster
[root@k8s-master ~]# kubeadm config images pull \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.30.0 \
--cri-socket=unix:///var/run/cri-dockerd.sock

#tag and push the images to the harbor registry
[root@k8s-master ~]# docker images | awk '/google/{ print $1":"$2}' \
| awk -F "/" '{system("docker tag "$0" reg.timinglee.org/k8s/"$3)}'

[root@k8s-master ~]# docker images  | awk '/k8s/{system("docker push "$1":"$2)}'	
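The two awk one-liners above rewrite registry-qualified image names and push them. Their text transformation can be dry-run on sample `docker images` output without touching docker (sample image lines are illustrative):

```shell
# Dry-run sketch of the retagging logic above: system() is replaced by print
# so the generated docker commands are shown instead of executed.
printf '%s\n' \
  'registry.aliyuncs.com/google_containers/kube-apiserver  v1.30.0' \
  'registry.aliyuncs.com/google_containers/pause           3.9' > images.txt

awk '/google/{ print $1":"$2}' images.txt \
  | awk -F "/" '{print "docker tag "$0" reg.timinglee.org/k8s/"$3}'
```

The first awk joins repository and tag into `repo:tag`; the second splits on `/` so that `$3` is the bare image name, which is re-homed under `reg.timinglee.org/k8s/`.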
2.2.2.9 Initialize the cluster

#check that the kubelet service is running
[root@k8s-master ~]# systemctl status kubelet.service

#run the initialization command
[root@k8s-master ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 \
--image-repository reg.timinglee.org/k8s \
--kubernetes-version v1.30.0 \
--cri-socket=unix:///var/run/cri-dockerd.sock

#point KUBECONFIG at the cluster configuration file
[root@k8s-master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

#the node is not Ready yet: the network plugin is not installed, so those containers are not running
[root@k8s-master ~]# kubectl get node
NAME                       STATUS     ROLES           AGE     VERSION
k8s-master.timinglee.org   NotReady   control-plane   4m25s   v1.30.0
[root@k8s-master ~]# kubectl get pod -A
NAMESPACE     NAME                                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-647dc95897-2sgn8                           0/1     Pending   0          6m13s
kube-system   coredns-647dc95897-bvtxb                           0/1     Pending   0          6m13s
kube-system   etcd-k8s-master.timinglee.org                      1/1     Running   0          6m29s
kube-system   kube-apiserver-k8s-master.timinglee.org            1/1     Running   0          6m30s
kube-system   kube-controller-manager-k8s-master.timinglee.org   1/1     Running   0          6m29s
kube-system   kube-proxy-fq85m                                   1/1     Running   0          6m14s
kube-system   kube-scheduler-k8s-master.timinglee.org            1/1     Running   0          6m29s

If the cluster join token generated here gets lost, it can be regenerated:

[root@k8s-master ~]#   kubeadm token create --print-join-command
kubeadm join 172.25.254.100:6443 --token 5hwptm.zwn7epa6pvatbpwf --discovery-token-ca-cert-hash sha256:52f1a83b70ffc8744db5570288ab51987ef2b563bf906ba4244a300f61e9db23
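The `sha256:...` value in the join command is a digest of the cluster CA's public key, and it can be recomputed by hand. A sketch: a throwaway self-signed CA is generated here so the pipeline runs offline; on the master you would use /etc/kubernetes/pki/ca.crt instead of the demo `ca.crt`.

```shell
# Sketch: how --discovery-token-ca-cert-hash is derived from the CA cert.
# Generate a demo CA so this can be tested without a cluster.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Extract the public key, convert to DER, and take its sha256 digest.
hash=$(openssl x509 -pubkey -in ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:$hash"
```

Comparing this value against the one printed by `kubeadm token create --print-join-command` is a quick sanity check before joining nodes.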
2.2.2.10 Install the flannel network plugin

Official site: https://github.com/flannel-io/flannel

#download the flannel deployment yaml
[root@k8s-master ~]# wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

#pull the images:
[root@k8s-master ~]# docker pull docker.io/flannel/flannel:v0.25.5
[root@k8s-master ~]# docker pull docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel1

#push the images to the registry
[root@k8s-master ~]# docker tag flannel/flannel:v0.25.5 \
reg.timinglee.org/flannel/flannel:v0.25.5
[root@k8s-master ~]# docker push reg.timinglee.org/flannel/flannel:v0.25.5

[root@k8s-master ~]# docker tag flannel/flannel-cni-plugin:v1.5.1-flannel1 \
reg.timinglee.org/flannel/flannel-cni-plugin:v1.5.1-flannel1
[root@k8s-master ~]# docker push reg.timinglee.org/flannel/flannel-cni-plugin:v1.5.1-flannel1


#edit kube-flannel.yml to point the images at the local registry
[root@k8s-master ~]# vim kube-flannel.yml

#the following lines need to change
[root@k8s-master ~]# grep -n image kube-flannel.yml
146:        image: reg.timinglee.org/flannel/flannel:v0.25.5
173:        image: reg.timinglee.org/flannel/flannel-cni-plugin:v1.5.1-flannel1
184:        image: reg.timinglee.org/flannel/flannel:v0.25.5

#install the flannel network plugin
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
2.2.2.11 Adding worker nodes

On every worker node:

1 Confirm the following are in place

2 Swap disabled

3 Installed:

  • kubelet-1.30.0
  • kubeadm-1.30.0
  • kubectl-1.30.0
  • docker-ce
  • cri-dockerd

4 cri-dockerd unit file modified as shown earlier (network plugin and pause image)

5 Services started:

  • kubelet.service
  • cri-docker.service

Once all of the above is confirmed, the nodes can join the cluster:

[root@k8s-node1 & 2  ~]# kubeadm join 172.25.254.100:6443 --token 5hwptm.zwn7epa6pvatbpwf --discovery-token-ca-cert-hash sha256:52f1a83b70ffc8744db5570288ab51987ef2b563bf906ba4244a300f61e9db23 --cri-socket=unix:///var/run/cri-dockerd.sock

Check the status of all nodes on the master:

[root@k8s-master ~]# kubectl get nodes
NAME                       STATUS   ROLES           AGE   VERSION
k8s-master.timinglee.org   Ready    control-plane   98m   v1.30.0
k8s-node1.timinglee.org    Ready    <none>          21m   v1.30.0
k8s-node2.timinglee.org    Ready    <none>          21m   v1.30.0

If every node's STATUS is Ready, congratulations: your Kubernetes cluster is up!

Test that the cluster works:

#create a pod
[root@k8s-master ~]# kubectl run test --image nginx

#check pod status
[root@k8s-master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
test   1/1     Running   0          6m29s

#delete the pod
[root@k8s-master ~]# kubectl delete pod test

note: when creating a pod, confirm that the image actually exists in the registry, otherwise pod creation will fail

2.3 Command Examples

Detailed kubectl documentation: Kubectl Reference Docs

#show the cluster version
[root@k8s-master ~]# kubectl version
Client Version: v1.30.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.0
#show cluster info
[root@k8s-master ~]# kubectl  cluster-info
Kubernetes control plane is running at https://172.25.254.100:6443
CoreDNS is running at https://172.25.254.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
#create a deployment named web with 2 replicas
[root@k8s-master ~]# kubectl create deployment web --image nginx --replicas 2
#list the deployment
[root@k8s-master ~]# kubectl get  deployments.apps
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
web    3/3     3            3           69m
#get help on a resource
[root@k8s-master ~]# kubectl explain deployment
GROUP:      apps
KIND:       Deployment
VERSION:    v1

DESCRIPTION:
    Deployment enables declarative updates for Pods and ReplicaSets.

FIELDS:
  apiVersion    <string>
    APIVersion defines the versioned schema of this representation of an object.
    Servers should convert recognized schemas to the latest internal value, and
    may reject unrecognized values. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

  kind  <string>
    Kind is a string value representing the REST resource this object
    represents. Servers may infer this from the endpoint the client submits
    requests to. Cannot be updated. In CamelCase. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

  metadata      <ObjectMeta>
    Standard object's metadata. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

  spec  <DeploymentSpec>
    Specification of the desired behavior of the Deployment.

  status        <DeploymentStatus>
    Most recently observed status of the Deployment.

#get help on a resource's fields
[root@k8s-master ~]# kubectl explain deployment.spec
GROUP:      apps
KIND:       Deployment
VERSION:    v1

FIELD: spec <DeploymentSpec>


DESCRIPTION:
    Specification of the desired behavior of the Deployment.
    DeploymentSpec is the specification of the desired behavior of the
    Deployment.

FIELDS:
  minReadySeconds       <integer>
    Minimum number of seconds for which a newly created pod should be ready
    without any of its container crashing, for it to be considered available.
    Defaults to 0 (pod will be considered available as soon as it is ready)

  paused        <boolean>
    Indicates that the deployment is paused.

  progressDeadlineSeconds       <integer>
    The maximum time in seconds for a deployment to make progress before it is
    considered to be failed. The deployment controller will continue to process
    failed deployments and a condition with a ProgressDeadlineExceeded reason
    will be surfaced in the deployment status. Note that progress will not be
    estimated during the time a deployment is paused. Defaults to 600s.

  replicas      <integer>
    Number of desired pods. This is a pointer to distinguish between explicit
    zero and not specified. Defaults to 1.

  revisionHistoryLimit  <integer>
    The number of old ReplicaSets to retain to allow rollback. This is a pointer
    to distinguish between explicit zero and not specified. Defaults to 10.

  selector      <LabelSelector> -required-
    Label selector for pods. Existing ReplicaSets whose pods are selected by
    this will be the ones affected by this deployment. It must match the pod
    template's labels.

  strategy      <DeploymentStrategy>
    The deployment strategy to use to replace existing pods with new ones.

  template      <PodTemplateSpec> -required-
    Template describes the pods that will be created. The only allowed
    template.spec.restartPolicy value is "Always".
#edit the deployment configuration
[root@k8s-master ~]# kubectl edit deployments.apps web
...output omitted...
spec:
  progressDeadlineSeconds: 600
  replicas: 2

[root@k8s-master ~]# kubectl get deployments.apps
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
web    2/2     2            2           73m
#change the deployment configuration with a patch
[root@k8s-master ~]# kubectl patch  deployments.apps web -p '{"spec":{"replicas":4}}'
deployment.apps/web patched
[root@k8s-master ~]# kubectl get deployments.apps
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
web    4/4     4            4           74m
#delete the resource
[root@k8s-master ~]# kubectl delete deployments.apps web
deployment.apps "web" deleted
[root@k8s-master ~]# kubectl get deployments.apps
No resources found in default namespace.

2.3.1 Running and Debugging Commands

#run a pod
[root@k8s-master ~]# kubectl run testpod --image nginx
pod/testpod created
[root@k8s-master ~]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
testpod   1/1     Running   0          7s
#expose a port
[root@k8s-master ~]# kubectl get  services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d14h

[root@k8s-master ~]# kubectl expose pod testpod --port 80 --target-port 80
service/testpod exposed

[root@k8s-master ~]# kubectl get services
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   2d14h
testpod      ClusterIP   10.106.78.42   <none>        80/TCP    18s
[root@k8s-master ~]# curl  10.106.78.42
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
#show detailed resource information
[root@k8s-master ~]# kubectl describe pods testpod
#view the pod's logs
[root@k8s-master ~]# kubectl logs pods/testpod
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
2024/08/29 05:37:29 [notice] 1#1: using the "epoll" event method
2024/08/29 05:37:29 [notice] 1#1: nginx/1.27.1
2024/08/29 05:37:29 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14)
2024/08/29 05:37:29 [notice] 1#1: OS: Linux 5.14.0-427.13.1.el9_4.x86_64
2024/08/29 05:37:29 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1073741816:1073741816
2024/08/29 05:37:29 [notice] 1#1: start worker processes
2024/08/29 05:37:29 [notice] 1#1: start worker process 29
10.244.0.0 - - [29/Aug/2024:05:41:11 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.76.1" "-"
10.244.0.0 - - [29/Aug/2024:05:42:51 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.76.1" "-"
#run an interactive pod
[root@k8s-master ~]# kubectl run -it testpod --image busybox

If you don't see a command prompt, try pressing enter.
/ #
/ #	              #press ctrl+p,q to detach without stopping the pod

#run a non-interactive pod
[root@k8s-master ~]# kubectl run  nginx  --image nginx
pod/nginx created

#attach to an already running container that has an interactive environment
[root@k8s-master ~]# kubectl attach pods/testpod  -it
If you don't see a command prompt, try pressing enter.
/ #
/ #

#run a command inside an already running pod
[root@k8s-master ~]# kubectl exec  -it pods/nginx  /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx:/#
#copy a file into the pod
[root@k8s-master ~]# kubectl cp anaconda-ks.cfg nginx:/
[root@k8s-master ~]# kubectl exec  -it pods/nginx  /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx:/# ls
anaconda-ks.cfg  boot  docker-entrypoint.d   etc   lib    media  opt   root  sbin  sys  usr
bin              dev   docker-entrypoint.sh  home  lib64  mnt    proc  run   srv   tmp  var
root@nginx:/#

#copy a file from the pod to the local machine
[root@k8s-master ~]# kubectl cp  nginx:/anaconda-ks.cfg anaconda-ks.cfg
tar: Removing leading `/' from member names

2.3.2 Advanced Command Examples

#generate a yaml template with --dry-run
[root@k8s-master ~]# kubectl create deployment --image nginx webcluster --dry-run=client   -o yaml  > webcluster.yml


#create the resource from the yaml file
[root@k8s-master ~]# vim webcluster.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: webcluster
  name: webcluster
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webcluster
  template:
    metadata:
      labels:
        app: webcluster
    spec:
      containers:
      - image: nginx
        name: nginx

[root@k8s-master ~]# kubectl apply -f webcluster.yml
deployment.apps/webcluster created

[root@k8s-master ~]# kubectl get deployments.apps
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
webcluster   2/2     2            2           21s

[root@k8s-master ~]# kubectl delete -f webcluster.yml
deployment.apps "webcluster" deleted
#manage resource labels
[root@k8s-master ~]# kubectl run nginx --image  nginx
[root@k8s-master ~]# kubectl get pods  --show-labels
NAME                          READY   STATUS    RESTARTS   AGE   LABELS
nginx                         1/1     Running   0          12s   run=nginx

[root@k8s-master ~]# kubectl label pods nginx app=lee
[root@k8s-master ~]# kubectl get pods  --show-labels
NAME                          READY   STATUS    RESTARTS   AGE   LABELS
nginx                         1/1     Running   0          57s   app=lee,run=nginx

#overwrite a label
[root@k8s-master ~]# kubectl label pods nginx app=webcluster --overwrite

#remove a label
[root@k8s-master ~]# kubectl label pods nginx app-
pod/nginx unlabeled
[root@k8s-master ~]# kubectl get pods nginx --show-labels
NAME    READY   STATUS    RESTARTS   AGE     LABELS
nginx   1/1     Running   0          7m56s   run=nginx

#labels are how controllers identify their pod instances
[root@k8s-master ~]# kubectl get pods --show-labels
NAME                          READY   STATUS    RESTARTS   AGE     LABELS
nginx                         1/1     Running   0          11m     run=nginx
webcluster-7c584f774b-66zbd   1/1     Running   0          2m10s   app=webcluster,pod-template-hash=7c584f774b
webcluster-7c584f774b-9x2x2   1/1     Running   0          35m     app=webcluster,pod-template-hash=7c584f774b

#remove the label from a controller-managed pod
[root@k8s-master ~]# kubectl label pods webcluster-7c584f774b-66zbd app-
pod/webcluster-7c584f774b-66zbd unlabeled

#the controller starts a new pod to replace it
[root@k8s-master ~]# kubectl get pods --show-labels
NAME                          READY   STATUS    RESTARTS   AGE     LABELS
nginx                         1/1     Running   0          11m     run=nginx
webcluster-7c584f774b-66zbd   1/1     Running   0          2m39s   pod-template-hash=7c584f774b
webcluster-7c584f774b-9x2x2   1/1     Running   0          36m     app=webcluster,pod-template-hash=7c584f774b
webcluster-7c584f774b-hgprn   1/1     Running   0          2s      app=webcluster,pod-template-hash=7c584f774b

Part Three: What Is a Pod

  • A Pod is the smallest deployable unit that can be created and managed in Kubernetes.
  • A Pod represents one running process in the cluster; every Pod has a unique IP.
  • A Pod is like a pea pod: it holds one or more containers (typically docker containers).
  • Containers in a Pod share the IPC, Network and UTS namespaces.

3.1 Creating Standalone Pods (not recommended for production)

Pros:

High flexibility

  • Precise control over every Pod configuration parameter: container image, resource limits, environment variables, commands and arguments, etc., to satisfy specific application needs.

Good for learning and debugging

  • Very helpful for learning how Kubernetes works: creating Pods by hand teaches you Pod structure and configuration, and when debugging you can observe and adjust Pod settings directly.

Suited to special cases

  • For one-off tasks, quick proofs of concept, or specific setups in resource-constrained environments, manually created Pods can be an effective approach.

Cons:

Complex to manage

  • Creating and maintaining large numbers of Pods by hand is tedious and time-consuming, and automation such as scaling and failure recovery is hard to achieve.

No higher-level features

  • Standalone Pods get none of Kubernetes' higher-level features such as automated deployment, rolling updates and service discovery, which makes deployment and management inefficient.

Poor maintainability

  • Manually created Pods require manual intervention for version updates or configuration changes, which is error-prone and hard to keep consistent. Declarative configuration or Kubernetes deployment tooling makes maintenance and updates much easier.
#list all pods
[root@k8s-master ~]# kubectl get pods
No resources found in default namespace.

#create a pod named timinglee
[root@k8s-master ~]# kubectl run timinglee --image nginx
pod/timinglee created


[root@k8s-master ~]# kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
timinglee   1/1     Running   0          6s

#show more detailed pod information
[root@k8s-master ~]# kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP            NODE                      NOMINATED NODE   READINESS GATES
timinglee   1/1     Running   0          11s   10.244.1.17   k8s-node1.timinglee.org   <none>           <none>

3.2 Managing Pods with Controllers (recommended)

High availability and reliability

  • Automatic failure recovery: if a Pod fails or is deleted, the controller automatically creates a new Pod to maintain the desired replica count, keeping the application available and reducing outages caused by single-Pod failures.
  • Health checks and self-healing: controllers can be configured with health checks for Pods (liveness and readiness probes). If a Pod is unhealthy, the controller takes appropriate action, such as restarting it or deleting and recreating it, to keep the application running.

Scalability

  • Easy scaling: the number of Pods can be increased or decreased with a simple command or configuration change to match the workload, e.g. scaling out during high-traffic periods and scaling in to save resources during quiet periods.
  • Horizontal Pod Autoscaling (HPA): the Pod count can be adjusted automatically based on metrics (CPU utilization, memory usage, or application-specific metrics) for dynamic resource allocation and cost optimization.

Version management and updates

  • Rolling updates: controllers such as Deployment can gradually replace old Pods with new ones, keeping the application available throughout the update; the update rate and strategy are configurable to minimize user impact.
  • Rollback: if an update goes wrong, you can easily roll back to the previous stable version, keeping the application stable and reliable.

Declarative configuration

  • Concise configuration: deployment requirements are defined in declarative YAML or JSON files, which are easy to understand, maintain, version-control and collaborate on.
  • Desired-state management: you declare only the desired state (replica count, container image, etc.) and the controller continuously reconciles the actual state toward it; there is no need to manage individual Pod creation and deletion by hand.

Service discovery and load balancing

  • Automatic registration and discovery: Services automatically discover the Pods managed by a controller and route traffic to them, so service discovery and load balancing work without manually configuring a load balancer.
  • Traffic distribution: requests can be distributed across Pods according to different policies (round-robin, random, etc.), improving performance and availability.

Consistency across environments

  • Uniform deployment: the same controllers and configuration can deploy the application in different environments (development, test, production), ensuring consistent behavior, reducing drift and errors, and improving dev and ops efficiency.

Example:

#create a deployment, which starts the pods automatically
[root@k8s-master ~]# kubectl create deployment timinglee --image nginx
[root@k8s-master ~]# kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
timinglee-859fbf84d6-mrjvx   1/1     Running   0          37m

#scale the timinglee deployment out
[root@k8s-master ~]# kubectl scale deployment timinglee --replicas 6
deployment.apps/timinglee scaled
[root@k8s-master ~]# kubectl get pods
NAME                         READY   STATUS              RESTARTS   AGE
timinglee-859fbf84d6-8rgkz   0/1     ContainerCreating   0          1s
timinglee-859fbf84d6-ddndl   0/1     ContainerCreating   0          1s
timinglee-859fbf84d6-m4r9l   0/1     ContainerCreating   0          1s
timinglee-859fbf84d6-mrjvx   1/1     Running             0          37m
timinglee-859fbf84d6-tsn97   1/1     Running             0          20s
timinglee-859fbf84d6-xgskk   0/1     ContainerCreating   0          1s

#scale the timinglee deployment in
[root@k8s-master ~]# kubectl scale deployment timinglee --replicas 2
deployment.apps/timinglee scaled
[root@k8s-master ~]# kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
timinglee-859fbf84d6-mrjvx   1/1     Running   0          38m
timinglee-859fbf84d6-tsn97   1/1     Running   0          73s

3.3 Updating Application Versions

#create pods via a deployment
[root@k8s-master ~]# kubectl create  deployment timinglee --image myapp:v1 --replicas 2
deployment.apps/timinglee created

#expose the port
[root@k8s-master ~]# kubectl expose deployment timinglee --port 80 --target-port 80
service/timinglee exposed
[root@k8s-master ~]# kubectl get services
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   2d17h
timinglee    ClusterIP   10.110.195.120   <none>        80/TCP    8s

#access the service
[root@k8s-master ~]# curl  10.110.195.120
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl  10.110.195.120
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl  10.110.195.120

#view the rollout history
[root@k8s-master ~]# kubectl rollout history deployment timinglee
deployment.apps/timinglee
REVISION  CHANGE-CAUSE
1         <none>

#update the deployment's image version
[root@k8s-master ~]# kubectl set image deployments/timinglee myapp=myapp:v2
deployment.apps/timinglee image updated

#view the rollout history again
[root@k8s-master ~]# kubectl rollout history deployment timinglee
deployment.apps/timinglee
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

#test the served content
[root@k8s-master ~]# curl 10.110.195.120
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master ~]# curl 10.110.195.120

#roll back to a previous revision
[root@k8s-master ~]# kubectl rollout undo deployment timinglee --to-revision 1
deployment.apps/timinglee rolled back
[root@k8s-master ~]# curl 10.110.195.120
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

3.4 Deploying Applications with YAML Files

3.4.1 Advantages of YAML-based deployment

Declarative configuration

  • Clear desired state: deployment requirements (replica count, container configuration, network settings, etc.) are described declaratively, making the configuration easy to understand and maintain, and the application's intended state easy to inspect.
  • Repeatability and version control: configuration files can be version-controlled, ensuring consistent deployments across environments; rolling back to an earlier version or reusing the same configuration elsewhere is easy.
  • Team collaboration: files can be shared, reviewed and modified by the whole team, improving deployment reliability and stability.

Flexibility and extensibility

  • Rich configuration options: all kinds of Kubernetes resources (Deployment, Service, ConfigMap, Secret, etc.) can be configured in detail and highly customized to the application's needs.
  • Composition and extension: configurations for multiple resources can be combined in one or more YAML files to build complex deployment architectures, and new resources can be added or existing ones modified as requirements evolve.

Tooling integration

  • CI/CD integration: YAML configuration files integrate with continuous-integration/continuous-deployment pipelines, e.g. a code commit can automatically trigger a deployment that applies the configuration to different environments.
  • Command-line support: kubectl has first-class support for applying, updating and deleting YAML configurations, and other tools can validate and analyze the files for correctness and safety.

3.4.2 Manifest Fields

Field                                         Type     Description
apiVersion                                    String   K8S API version, currently usually v1; query with kubectl api-versions
kind                                          String   Resource type defined by this yaml file, e.g. Pod
metadata                                      Object   Metadata object; the fixed key is metadata
metadata.name                                 String   Name of the object, chosen by you, e.g. the Pod's name
metadata.namespace                            String   Namespace of the object, chosen by you
spec                                          Object   Detailed definition of the object; the fixed key is spec
spec.containers[]                             List     List of containers in the spec
spec.containers[].name                        String   Container name
spec.containers[].image                       String   Image name to use
spec.containers[].imagePullPolicy             String   Image pull policy, one of: (1) Always: always try to re-pull the image (2) IfNotPresent: use the local image if present (3) Never: only ever use the local image
spec.containers[].command[]                   List     Command to run at container start; defaults to the command baked into the image
spec.containers[].args[]                      List     Arguments for the startup command; multiple allowed
spec.containers[].workingDir                  String   Container working directory
spec.containers[].volumeMounts[]              List     Volume mounts inside the container
spec.containers[].volumeMounts[].name         String   Name of a volume the container may mount
spec.containers[].volumeMounts[].mountPath    String   Path where the volume is mounted
spec.containers[].volumeMounts[].readOnly     String   Read/write mode of the mount path, true or false; defaults to read-write
spec.containers[].ports[]                     List     Ports the container needs
spec.containers[].ports[].name                String   Port name
spec.containers[].ports[].containerPort       String   Port the container listens on
spec.containers[].ports[].hostPort            String   Host port to listen on; defaults to containerPort. Note that with hostPort set, two replicas of the container cannot run on the same host, because the host port would conflict
spec.containers[].ports[].protocol            String   Port protocol, TCP or UDP; defaults to TCP
spec.containers[].env[]                       List     Environment variables to set before the container runs
spec.containers[].env[].name                  String   Environment variable name
spec.containers[].env[].value                 String   Environment variable value
spec.containers[].resources                   Object   Resource limits and requests (this is where the container's resource ceilings start)
spec.containers[].resources.limits            Object   Upper bounds on the container's runtime resources
spec.containers[].resources.limits.cpu        String   CPU limit, in cores; 1 = 1000m
spec.containers[].resources.limits.memory     String   Memory limit, in MiB/GiB
spec.containers[].resources.requests          Object   Resources requested at startup and scheduling time
spec.containers[].resources.requests.cpu      String   CPU request, in cores; the amount available when the container starts
spec.containers[].resources.requests.memory   String   Memory request, in MiB/GiB; the amount available when the container starts
spec.restartPolicy                            String   Pod restart policy, defaults to Always: (1) Always: whenever the Pod terminates, however the container exited, kubelet restarts it (2) OnFailure: kubelet restarts the container only if it exits with a non-zero code; on a clean exit (code 0) it is not restarted (3) Never: when the Pod terminates, kubelet reports the exit code to the master and does not restart it
spec.nodeSelector                             Object   Node label selector, specified as key: value pairs
spec.imagePullSecrets                         Object   Secret to use when pulling images, specified as name: secretkey
spec.hostNetwork                              Boolean  Whether to use host networking; defaults to false. true means use the host's network instead of the docker bridge; with true set, a second replica cannot start on the same host
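A small manifest exercising several of the fields above (imagePullPolicy, env, resources, restartPolicy). All names and values are illustrative, not part of the cluster built in this tutorial:

```shell
# Sketch: a Pod manifest touching several fields from the table above.
cat > field-demo.yml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: field-demo
spec:
  restartPolicy: OnFailure        # restart only on a non-zero exit code
  containers:
  - name: app
    image: myapp:v1
    imagePullPolicy: IfNotPresent # use the local image if present
    env:
    - name: APP_MODE
      value: "demo"
    resources:
      requests:
        cpu: 100m                 # 0.1 core requested at scheduling time
        memory: 64Mi
      limits:
        cpu: 500m                 # hard ceiling of 0.5 core
        memory: 128Mi
EOF
```

`kubectl apply -f field-demo.yml` would create the Pod; `kubectl explain pod.spec.containers.resources` documents each field.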

3.4.3 Getting Help on Resource Fields

kubectl explain pod.spec.containers

3.4.4 Authoring Examples

3.4.4.1 Example 1: a simple single-container pod

Generate a yaml template from the command line:

[root@k8s-master ~]# kubectl run timinglee --image myapp:v1 --dry-run=client -o yaml > pod.yml
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timing			#pod label
  name: timinglee		#pod name
spec:
  containers:
  - image: myapp:v1		#pod镜像
    name: timinglee		#容器名称

3.4.4.2 Example 2: run a pod with multiple containers

Note: when several containers run in one pod they share resources, so they can also interfere with each other when they use the same resource, for example a port

#an example of a port conflict:
[root@k8s-master ~]# vim pod.yml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timing
  name: timinglee
spec:
  containers:
    - image:  nginx:latest
      name: web1

    - image: nginx:latest
      name: web2
      
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/timinglee created

[root@k8s-master ~]# kubectl get pods
NAME        READY   STATUS   RESTARTS      AGE
timinglee   1/2     Error    1 (14s ago)   18s

#check the logs
[root@k8s-master ~]# kubectl logs timinglee web2
2024/08/31 12:43:20 [emerg] 1#1: bind() to [::]:80 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
2024/08/31 12:43:20 [notice] 1#1: try again to bind() after 500ms
2024/08/31 12:43:20 [emerg] 1#1: still could not bind()
nginx: [emerg] still could not bind()

When running multiple containers in one pod, make sure the containers do not interfere with one another

[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod 
metadata:
  labels:
    run: timing
  name: timinglee
spec:
  containers:
    - image: nginx:latest
      name: web1

    - image: busybox:latest
      name: busybox
      command: ["/bin/sh","-c","sleep 1000000"]

[root@k8s-master ~]# kubectl apply -f pod.yml
pod/timinglee created
[root@k8s-master ~]# kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
timinglee   2/2     Running   0          19s

3.4.4.3 Example 3: understand pod-internal networking

Containers in the same pod share one network namespace

[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  containers:
    - image: myapp:v1
      name: myapp1

    - image: busyboxplus:latest
      name: busyboxplus
      command: ["/bin/sh","-c","sleep 1000000"]


[root@k8s-master ~]# kubectl apply -f pod.yml
pod/test created
[root@k8s-master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
test   2/2     Running   0          8s
[root@k8s-master ~]# kubectl exec test -c busyboxplus -- curl -s localhost
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

3.4.4.4 Example 4: port mapping

[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  containers:
    - image: myapp:v1
      name: myapp1
      ports:
      - name: http
        containerPort: 80
        hostPort: 80
        protocol: TCP

#test
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/test created

[root@k8s-master ~]# kubectl get pods  -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP           NODE                      NOMINATED NODE   READINESS GATES
test   1/1     Running   0          12s   10.244.1.2   k8s-node1.timinglee.org   <none>           <none>
[root@k8s-master ~]# curl  k8s-node1.timinglee.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

3.4.4.5 Example 5: setting environment variables

[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  containers:
    - image: busybox:latest
      name: busybox
      command: ["/bin/sh","-c","echo $NAME;sleep 3000000"]
      env:
      - name: NAME
        value: timinglee
        
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/test created
[root@k8s-master ~]# kubectl logs pods/test busybox
timinglee

3.4.4.6 Example 6: resource limits

Resource limits affect the pod's QoS class, its resource priority; the priority order is Guaranteed > Burstable > BestEffort

QoS stands for Quality of Service

| Resource settings | QoS class |
|------|------|
| No limits or requests set | BestEffort |
| Limits and requests set, but not equal | Burstable |
| Limits and requests set and equal | Guaranteed |
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  containers:
    - image: myapp:v1
      name: myapp
      resources:
        limits:						#upper bound on the resources the pod may use
          cpu: 500m
          memory: 100M
        requests:					#resources the pod expects to use; must not exceed limits
          cpu: 500m
          memory: 100M
   
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/test created

[root@k8s-master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
test   1/1     Running   0          3s

[root@k8s-master ~]# kubectl describe pods test
    Limits:
      cpu:     500m
      memory:  100M
    Requests:
      cpu:        500m
      memory:     100M
QoS Class:                   Guaranteed
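By contrast, setting requests lower than limits moves the pod down to the Burstable class. A minimal sketch, with illustrative values on the same myapp:v1 image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-burstable
spec:
  containers:
    - image: myapp:v1
      name: myapp
      resources:
        limits:            # upper bound
          cpu: 500m
          memory: 200M
        requests:          # lower than limits, so the QoS class becomes Burstable
          cpu: 250m
          memory: 100M
```

`kubectl describe pods test-burstable` should then report `QoS Class: Burstable`; with no resources block at all, it would report `BestEffort`.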

3.4.4.7 Example 7: container restart management

[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  restartPolicy: Always
  containers:
    - image: myapp:v1
      name: myapp
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/test created

[root@k8s-master ~]# kubectl get pods  -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
test   1/1     Running   0          6s    10.244.2.3   k8s-node2   <none>           <none>

#delete the container on its node; because restartPolicy is Always, the kubelet starts a replacement
[root@k8s-node2 ~]# docker rm -f ccac1d64ea81

3.4.4.8 Example 8: choose the node to run on

[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  nodeSelector:
    kubernetes.io/hostname: k8s-node1
  restartPolicy: Always
  containers:
    - image: myapp:v1
      name: myapp

[root@k8s-master ~]# kubectl apply -f pod.yml
pod/test created

[root@k8s-master ~]# kubectl get pods  -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
test   1/1     Running   0          21s   10.244.1.5   k8s-node1   <none>           <none>

3.4.4.9 Example 9: share the host network

[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  hostNetwork: true
  restartPolicy: Always
  containers:
    - image: busybox:latest
      name: busybox
      command: ["/bin/sh","-c","sleep 100000"]
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/test created
[root@k8s-master ~]# kubectl exec -it pods/test -c busybox -- /bin/sh
/ # ifconfig
cni0      Link encap:Ethernet  HWaddr E6:D4:AA:81:12:B4
          inet addr:10.244.2.1  Bcast:10.244.2.255  Mask:255.255.255.0
          inet6 addr: fe80::e4d4:aaff:fe81:12b4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:6259 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6495 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:506704 (494.8 KiB)  TX bytes:625439 (610.7 KiB)

docker0   Link encap:Ethernet  HWaddr 02:42:99:4A:30:DC
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 00:0C:29:6A:A8:61
          inet addr:172.25.254.20  Bcast:172.25.254.255  Mask:255.255.255.0
          inet6 addr: fe80::8ff3:f39c:dc0c:1f0e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:27858 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14454 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:26591259 (25.3 MiB)  TX bytes:1756895 (1.6 MiB)

flannel.1 Link encap:Ethernet  HWaddr EA:36:60:20:12:05
          inet addr:10.244.2.0  Bcast:0.0.0.0  Mask:255.255.255.255
          inet6 addr: fe80::e836:60ff:fe20:1205/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:40 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:163 errors:0 dropped:0 overruns:0 frame:0
          TX packets:163 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:13630 (13.3 KiB)  TX bytes:13630 (13.3 KiB)

veth9a516531 Link encap:Ethernet  HWaddr 7A:92:08:90:DE:B2
          inet6 addr: fe80::7892:8ff:fe90:deb2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:6236 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6476 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:592532 (578.6 KiB)  TX bytes:622765 (608.1 KiB)

/ # exit

四 The pod lifecycle

4.1 Init containers

Official docs: Pod | Kubernetes

  • A Pod can hold multiple containers running the application, and it can also have one or more init containers that start before the application containers.

  • Init containers are very much like regular containers, except for two points:

    • they always run to completion

    • each init container must succeed before the next one can run; init containers do not support readiness probes, because they must finish running before the Pod can become ready.

  • If a Pod's init container fails, Kubernetes restarts the Pod over and over until the init container succeeds. However, if the Pod's restartPolicy is Never, it is not restarted.

4.1.1 What init containers are for

  • Init containers can hold utilities or custom setup code that the application containers do not include.

  • They can run such tools safely, avoiding the security downgrade that shipping those tools in the application image would cause.

  • The creator and the deployer of an application image can work independently; there is no need to jointly build one combined application image.

  • Init containers can run with a filesystem view different from the app containers in the Pod, so they can be granted access to Secrets that the app containers cannot reach.

  • Because init containers must run to completion before the app containers start, they provide a mechanism to block or delay app container start-up until a set of preconditions is met. Once the preconditions are satisfied, all app containers in the Pod start in parallel.

4.1.2 Init container example

[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: initpod
  name: initpod
spec:
  containers:
    - image: myapp:v1
      name: myapp
  initContainers:
    - name: init-myservice
      image: busybox
      command: ["sh","-c","until test -e /testfile;do echo waiting for myservice; sleep 2;done"]

[root@k8s-master ~]# kubectl apply  -f pod.yml
pod/initpod created
[root@k8s-master ~]# kubectl get  pods
NAME      READY   STATUS     RESTARTS   AGE
initpod   0/1     Init:0/1   0          3s

[root@k8s-master ~]# kubectl logs pods/initpod init-myservice
waiting for myservice
waiting for myservice
waiting for myservice
waiting for myservice
waiting for myservice
waiting for myservice
[root@k8s-master ~]# kubectl exec pods/initpod -c init-myservice -- /bin/sh -c "touch /testfile"

[root@k8s-master ~]# kubectl get  pods
NAME      READY   STATUS    RESTARTS   AGE
initpod   1/1     Running   0          62s

4.2 Probes

A probe is a periodic diagnostic performed by the kubelet on a container:

  • ExecAction: run a specified command inside the container. The diagnostic succeeds if the command exits with code 0.

  • TCPSocketAction: perform a TCP check against the container's IP address on a specified port. The diagnostic succeeds if the port is open.

  • HTTPGetAction: perform an HTTP GET request against the container's IP address on a specified port and path. The diagnostic succeeds if the response status code is at least 200 and below 400.

Each probe yields one of three results:

  • Success: the container passed the diagnostic.

  • Failure: the container failed the diagnostic.

  • Unknown: the diagnostic itself failed, so no action is taken.

The kubelet can run three kinds of probes on a container and react to their results:

  • livenessProbe: indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, which is then subject to its restart policy. If no liveness probe is provided, the default state is Success.

  • readinessProbe: indicates whether the container is ready to serve requests. If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services matching the Pod. Before the initial delay, the readiness state defaults to Failure. If no readiness probe is provided, the default state is Success.

  • startupProbe: indicates whether the application in the container has started. If a startup probe is provided, all other probes are disabled until it succeeds. If the startup probe fails, the kubelet kills the container, and the container is restarted according to its restart policy. If no startup probe is provided, the default state is Success.

Differences between readinessProbe and livenessProbe

  • When a readinessProbe fails, the Pod's IP:Port is removed from the corresponding endpoint list.

  • When a livenessProbe fails, the container is killed and the Pod's restart policy decides what happens next

Differences between startupProbe and readinessProbe/livenessProbe

  • If all three probes exist, the startupProbe runs first and the other two are temporarily disabled until the pod satisfies the startupProbe's conditions; then the other two start, and if the conditions are not met the container is restarted according to the rules.

  • The other two probes keep probing on their configured schedule for as long as the container lives, whereas the startupProbe stops probing once its condition has been satisfied a single time.
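The interplay described above can be sketched in a single pod spec. A hedged sketch using the same myapp:v1 image; the thresholds and paths are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
    - image: myapp:v1
      name: myapp
      startupProbe:              # runs first; the other probes wait until it succeeds
        httpGet:
          path: /
          port: 80
        failureThreshold: 30     # allow up to 30 * 2s for slow start-up
        periodSeconds: 2
      livenessProbe:             # after start-up: restart the container if this fails
        tcpSocket:
          port: 80
        periodSeconds: 10
      readinessProbe:            # after start-up: gate Service endpoints on this
        httpGet:
          path: /
          port: 80
        periodSeconds: 5
```

Giving the startup probe a generous failureThreshold keeps a slow-starting application from being killed by the liveness probe before it has a chance to come up.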

4.2.1 Probe examples

4.2.1.1 Liveness probe example:
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: liveness
  name: liveness
spec:
  containers:
    - image: myapp:v1
      name: myapp
      livenessProbe:
        tcpSocket:					#check that the port is open
          port: 8080
        initialDelaySeconds: 3		#seconds to wait after the container starts before probing; default 0
        periodSeconds: 1			#interval between probes; default 10s
        timeoutSeconds: 1			#how long to wait for a response after probing; default 1s
        

#test:
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/liveness created
[root@k8s-master ~]# kubectl get pods
NAME       READY   STATUS             RESTARTS     AGE
liveness   0/1     CrashLoopBackOff   2 (7s ago)   22s

[root@k8s-master ~]# kubectl describe pods
Warning  Unhealthy  1s (x9 over 13s)  kubelet            Liveness probe failed: dial tcp 10.244.2.6:8080: connect: connection refused
4.2.1.2 Readiness probe example:
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: readiness
  name: readiness
spec:
  containers:
    - image: myapp:v1
      name: myapp
      readinessProbe:
        httpGet:
          path: /test.html
          port: 80
        initialDelaySeconds: 1
        periodSeconds: 3
        timeoutSeconds: 1


#test:
[root@k8s-master ~]# kubectl expose pod readiness --port 80 --target-port 80

[root@k8s-master ~]# kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
readiness   0/1     Running   0          5m25s

[root@k8s-master ~]# kubectl describe pods readiness
Warning  Unhealthy  26s (x66 over 5m43s)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 404

[root@k8s-master ~]# kubectl describe services readiness
Name:              readiness
Namespace:         default
Labels:            name=readiness
Annotations:       <none>
Selector:          name=readiness
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.100.171.244
IPs:               10.100.171.244
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:										#no endpoint exposed: the readiness probe is failing, so the condition for exposure is not met
Session Affinity:  None
Events:            <none>

[root@k8s-master ~]# kubectl exec pods/readiness -c myapp -- /bin/sh -c "echo test > /usr/share/nginx/html/test.html"

[root@k8s-master ~]# kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
readiness   1/1     Running   0          7m49s

[root@k8s-master ~]# kubectl describe services readiness
Name:              readiness
Namespace:         default
Labels:            name=readiness
Annotations:       <none>
Selector:          name=readiness
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.100.171.244
IPs:               10.100.171.244
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.2.8:80			#condition met: the port is now exposed
Session Affinity:  None
Events:            <none>

五 Controllers

Official docs:

Workload management | Kubernetes

Controllers are one of the means of managing pods

  • standalone pods: a pod that exits or is shut down unexpectedly is not recreated

  • controller-managed pods: throughout the controller's lifetime, the desired number of pod replicas is always maintained

A pod controller is a middle layer for managing pods. With a controller you only declare how many pods of what kind you want; it creates pods that satisfy those conditions and keeps every pod resource in the user's desired target state. If a pod resource fails while running, it reschedules pods according to the specified policy

When a controller is created, the desired state is written to etcd. The apiserver in k8s retrieves the desired state we saved in etcd and compares it with the pods' current state; on any difference, the control loop immediately drives the state back

六 Common controller types

| Controller | Purpose |
|------|------|
| Replication Controller | The original pod controller; deprecated, replaced by ReplicaSet |
| ReplicaSet | Ensures that a specified number of Pod replicas is running at any given time |
| Deployment | Provides declarative update capability for Pods and ReplicaSets |
| DaemonSet | Ensures that all (or some) specified nodes run one copy of a Pod |
| StatefulSet | The workload API object used to manage stateful applications |
| Job | Runs batch tasks that execute only once, ensuring that one or more Pods of the task finish successfully |
| CronJob | Creates Jobs on a time-based schedule |
| HPA (Horizontal Pod Autoscaler) | Adjusts the number of Pods behind a service automatically according to resource utilization, i.e. horizontal pod autoscaling |

七 The ReplicaSet controller

7.1 ReplicaSet features

  • ReplicaSet is the next generation of Replication Controller and is the officially recommended choice

  • The only difference between ReplicaSet and Replication Controller is selector support: ReplicaSet supports the newer set-based selector requirements

  • A ReplicaSet ensures that a specified number of Pod replicas is running at any given time

  • Although ReplicaSets can be used on their own, today they are mainly used by Deployments as the mechanism that orchestrates Pod creation, deletion and updates
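The set-based selectors mentioned above can be sketched as follows; a hedged example in which the label values are made up for illustration:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-setbased
spec:
  replicas: 2
  selector:
    matchExpressions:            # set-based selector: In / NotIn / Exists / DoesNotExist
      - key: app
        operator: In
        values: [myapp, myapp-canary]
  template:
    metadata:
      labels:
        app: myapp               # must satisfy the selector above
    spec:
      containers:
      - image: myapp:v1
        name: myapp
```

A plain matchLabels selector can only test exact key=value equality; matchExpressions lets one ReplicaSet match several label values at once.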

7.2 ReplicaSet field reference

| Field | Type | Description |
|------|------|------|
| spec | Object | Detailed definition of the object; the fixed value is Spec |
| spec.replicas | Integer | Number of pods to maintain |
| spec.selector | Object | A label query over pods, matched against the pod count |
| spec.selector.matchLabels | String | Name and value of the selector's label query, given as key: value |
| spec.template | Object | Description of the pod: labels, container info, and so on |
| spec.template.metadata | Object | Pod attributes |
| spec.template.metadata.labels | String | Pod labels |
| spec.template.spec | Object | Detailed definition of the object |
| spec.template.spec.containers | List | Container list of the Spec object |
| spec.template.spec.containers.name | String | Container name |
| spec.template.spec.containers.image | String | Container image |

7.3 ReplicaSet example

#generate the yml file
[root@k8s-master ~]# kubectl create deployment replicaset --image myapp:v1 --dry-run=client -o yaml > replicaset.yml

[root@k8s-master ~]# vim replicaset.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset		#resource name; must be lowercase, uppercase letters cause an error
spec:
  replicas: 2			#maintain 2 pod replicas
  selector:				#how matching pods are detected
    matchLabels:		#match by label
      app: myapp		#the label to match is app=myapp

  template:				#template used to create new pod replicas when the count falls short
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1
        name: myapp

[root@k8s-master ~]# kubectl apply -f replicaset.yml
replicaset.apps/replicaset created


[root@k8s-master ~]# kubectl get pods  --show-labels
NAME               READY   STATUS    RESTARTS   AGE   LABELS
replicaset-l4xnr   1/1     Running   0          96s   app=myapp
replicaset-t2s5p   1/1     Running   0          96s   app=myapp


#the replicaset matches pods by label
[root@k8s-master ~]# kubectl label pod replicaset-l4xnr app=timinglee --overwrite
pod/replicaset-l4xnr labeled
[root@k8s-master ~]# kubectl get pods  --show-labels
NAME               READY   STATUS    RESTARTS   AGE     LABELS
replicaset-gd5fh   1/1     Running   0          2s      app=myapp		#newly created pod
replicaset-l4xnr   1/1     Running   0          3m19s   app=timinglee
replicaset-t2s5p   1/1     Running   0          3m19s   app=myapp

#after restoring the label
[root@k8s2 pod]# kubectl label pod replicaset-example-q2sq9 app- 
[root@k8s2 pod]# kubectl get pod --show-labels
NAME                       READY   STATUS    RESTARTS   AGE     LABELS
replicaset-example-q2sq9   1/1     Running   0          3m14s   app=nginx
replicaset-example-th24v   1/1     Running   0          3m14s   app=nginx
replicaset-example-w7zpw   1/1     Running   0          3m14s   app=nginx

#the replicaset controls the replica count automatically, so pods self-heal
[root@k8s-master ~]# kubectl delete pods replicaset-t2s5p
pod "replicaset-t2s5p" deleted

[root@k8s-master ~]# kubectl get pods --show-labels
NAME               READY   STATUS    RESTARTS   AGE     LABELS
replicaset-l4xnr   1/1     Running   0          5m43s   app=myapp
replicaset-nxmr9   1/1     Running   0          15s     app=myapp


#clean up the resources
[root@k8s2 pod]# kubectl delete -f rs-example.yml

八 The Deployment controller

8.1 Features of the Deployment controller

  • To better solve service orchestration problems, kubernetes introduced the Deployment controller in version 1.2.

  • A Deployment does not manage pods directly; it manages Pods indirectly by managing ReplicaSets

  • Deployment manages ReplicaSet, and ReplicaSet manages Pods

  • A Deployment provides a declarative way to define Pods and ReplicaSets

  • Within a Deployment, each ReplicaSet corresponds to one version

Typical use cases:

  • creating Pods and ReplicaSets

  • rolling updates and rollbacks

  • scaling out and in

  • pausing and resuming

8.2 Deployment controller example

#generate the yaml file
[root@k8s-master ~]# kubectl create deployment deployment --image myapp:v1  --dry-run=client -o yaml > deployment.yml

[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1
        name: myapp
#create the pods
[root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment created

#check the pod info
[root@k8s-master ~]# kubectl get pods --show-labels
NAME                          READY   STATUS    RESTARTS   AGE   LABELS
deployment-5d886954d4-2ckqw   1/1     Running   0          23s   app=myapp,pod-template-hash=5d886954d4
deployment-5d886954d4-m8gpd   1/1     Running   0          23s   app=myapp,pod-template-hash=5d886954d4
deployment-5d886954d4-s7pws   1/1     Running   0          23s   app=myapp,pod-template-hash=5d886954d4
deployment-5d886954d4-wqnvv   1/1     Running   0          23s   app=myapp,pod-template-hash=5d886954d4

8.2.1 Version updates

During an update, a ReplicaSet for the new version is created; the new RS recreates the pods, and the old RS is then reclaimed
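A hedged sketch of how such an update is triggered: bumping the image in the template (myapp:v2 is assumed to exist, as in the surrounding examples) is all that is needed for a new RS to be rolled out.

```yaml
# deployment.yml, template part only; everything else stays as in 8.2
    spec:
      containers:
      - image: myapp:v2        # changing the template image triggers a rolling update
        name: myapp
```

After `kubectl apply -f deployment.yml`, `kubectl get rs` should show a new ReplicaSet scaling up while the old one scales down, and `kubectl rollout history deployment deployment` records the new revision.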

8.2.2 Version rollback

[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: myapp

  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1				#roll back to the previous version
        name: myapp


[root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment configured

#verify the rollback
[root@k8s-master ~]# kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
deployment-5d886954d4-dr74h   1/1     Running   0          8s    10.244.2.26   k8s-node2   <none>           <none>
deployment-5d886954d4-thpf9   1/1     Running   0          7s    10.244.1.29   k8s-node1   <none>           <none>
deployment-5d886954d4-vmwl9   1/1     Running   0          8s    10.244.1.28   k8s-node1   <none>           <none>
deployment-5d886954d4-wprpd   1/1     Running   0          6s    10.244.2.27   k8s-node2   <none>           <none>

[root@k8s-master ~]# curl  10.244.2.26
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

8.2.3 Rolling update strategy

[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  minReadySeconds: 5		#minimum readiness time: how long a new pod must stay ready before it counts as available
  replicas: 4
  strategy:					#update strategy
    rollingUpdate:
      maxSurge: 1			#how many pods above the desired count may exist during the update
      maxUnavailable: 0		#how many pods below the desired count may be unavailable
  selector:
    matchLabels:
      app: myapp

  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1
        name: myapp
[root@k8s2 pod]# kubectl apply -f deployment-example.yaml

8.2.4 Pause and resume

In a real production environment a change often touches more than one place; without a pause, each individual edit would immediately trigger a rollout

What we want is a single rollout once all the modifications are done, so we pause first to avoid triggering unnecessary online updates

[root@k8s2 pod]# kubectl rollout pause deployment deployment-example

[root@k8s2 pod]# vim deployment-example.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  minReadySeconds: 5
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  replicas: 6				
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx
        resources:
          limits:
            cpu: 0.5
            memory: 200Mi
          requests:
            cpu: 0.5
            memory: 200Mi

[root@k8s2 pod]# kubectl apply -f deployment-example.yaml

#adjusting the replica count is not affected by the pause
[root@k8s-master ~]# kubectl describe pods deployment-7f4786db9c-8jw22
Name:             deployment-7f4786db9c-8jw22
Namespace:        default
Priority:         0
Service Account:  default
Node:             k8s-node1/172.25.254.10
Start Time:       Mon, 02 Sep 2024 00:27:20 +0800
Labels:           app=myapp
                  pod-template-hash=7f4786db9c
Annotations:      <none>
Status:           Running
IP:               10.244.1.31
IPs:
  IP:           10.244.1.31
Controlled By:  ReplicaSet/deployment-7f4786db9c
Containers:
  myapp:
    Container ID:   docker://01ad7216e0a8c2674bf17adcc9b071e9bfb951eb294cafa2b8482bb8b4940c1d
    Image:          myapp:v2
    Image ID:       docker-pullable://myapp@sha256:5f4afc8302ade316fc47c99ee1d41f8ba94dbe7e3e7747dd87215a15429b9102
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 02 Sep 2024 00:27:21 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mfjjp (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  kube-api-access-mfjjp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  6m22s  default-scheduler  Successfully assigned default/deployment-7f4786db9c-8jw22 to k8s-node1
  Normal  Pulled     6m22s  kubelet            Container image "myapp:v2" already present on machine
  Normal  Created    6m21s  kubelet            Created container myapp
  Normal  Started    6m21s  kubelet            Started container myapp


#but the image and resource changes have not triggered a rollout yet
[root@k8s2 pod]# kubectl rollout history deployment deployment-example
deployment.apps/deployment-example
REVISION  CHANGE-CAUSE
3         <none>
4         <none>

#after resuming, the rollout is triggered
[root@k8s2 pod]# kubectl rollout resume deployment deployment-example

[root@k8s2 pod]# kubectl rollout history  deployment deployment-example
deployment.apps/deployment-example
REVISION  CHANGE-CAUSE
3         <none>
4         <none>
5         <none>

#clean up
[root@k8s2 pod]# kubectl delete -f deployment-example.yaml

九 The DaemonSet controller

9.1 DaemonSet features

A DaemonSet ensures that all (or some) nodes run one copy of a Pod. When a node joins the cluster, a Pod is added for it; when a node is removed from the cluster, that Pod is reclaimed. Deleting a DaemonSet deletes all the Pods it created

Typical uses of a DaemonSet:

  • running a cluster storage daemon on every node, e.g. glusterd or ceph.

  • running a log collection daemon on every node, e.g. fluentd or logstash.

  • running a monitoring daemon on every node, e.g. Prometheus Node Exporter or zabbix agent

  • a simple usage is to start one DaemonSet on all nodes for each type of daemon

  • a slightly more complex usage is to run multiple DaemonSets for a single daemon type, with different flags, and with different CPU and memory requirements for different hardware types

9.2 DaemonSet example

[root@k8s2 pod]# cat daemonset-example.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-example
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      tolerations:		#tolerate tainted nodes
      - effect: NoSchedule
        operator: Exists
      containers:
      - name: nginx
        image: nginx

[root@k8s-master ~]# kubectl get pods  -o wide
NAME              READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
daemonset-87h6s   1/1     Running   0          47s   10.244.0.8    k8s-master   <none>           <none>
daemonset-n4vs4   1/1     Running   0          47s   10.244.2.38   k8s-node2    <none>           <none>
daemonset-vhxmq   1/1     Running   0          47s   10.244.1.40   k8s-node1    <none>           <none>


#clean up
[root@k8s2 pod]# kubectl delete -f daemonset-example.yml

十 The Job controller

10.1 Job controller features

A Job is mainly responsible for batch processing (handling a specified number of tasks in one go) of short-lived, one-off tasks (each task runs only once and then ends)

A Job behaves as follows:

  • when a pod created by the Job finishes successfully, the Job records the number of pods that ended successfully

  • when the number of successfully finished pods reaches the specified count, the Job completes

10.2 Job controller example:

[root@k8s2 pod]# vim job.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 6		#total number of task completions: 6		
  parallelism: 2		#run 2 at a time in parallel
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]	#compute pi to 2000 digits
      restartPolicy: Never						#do not restart automatically after exit
  backoffLimit: 4								#retry up to 4 times after failure

[root@k8s2 pod]# kubectl apply -f job.yml

Notes on the restart policy:

  • with OnFailure, the job restarts the container when a pod fails,

    rather than creating a new pod, and the failed count does not change

  • with Never, the job creates a new pod when a pod fails;

    the failed pod does not disappear and is not restarted, and the failed count increases by 1

  • Always would mean restarting forever, so the job's task would be executed over and over again

十一 The CronJob controller

11.1 CronJob controller features

  • A CronJob creates Jobs on a time-based schedule.

  • The CronJob controller takes Job controller resources as its managed objects and manages pod resources through them,

  • A CronJob controls when its jobs run and how they repeat, in the style of periodic cron jobs on a Linux system.

  • A CronJob can run a job task at specific points in time, repeatedly.
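The schedule field uses standard five-field cron syntax (minute, hour, day of month, month, day of week). A sketch of some common values; the exact schedules are illustrative:

```yaml
spec:
  schedule: "* * * * *"       # every minute
  # schedule: "*/5 * * * *"   # every 5 minutes
  # schedule: "0 3 * * *"     # every day at 03:00
  # schedule: "0 0 * * 0"     # every Sunday at midnight
```

As with Linux cron, `*` means "any value" in that field, and `*/n` means "every n units".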

11.2 CronJob controller example

[root@k8s2 pod]# vim cronjob.yml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

[root@k8s2 pod]# kubectl apply -f cronjob.yml

十二 Microservice types

| Microservice type | Purpose |
|------|------|
| ClusterIP | The default: a virtual IP that k8s assigns to the service automatically, reachable only inside the cluster |
| NodePort | Exposes the Service on a port of each specified Node; a request to any NodeIP:nodePort is routed to the ClusterIP |
| LoadBalancer | On top of NodePort, uses the cloud provider to create an external load balancer that forwards requests to NodeIP:NodePort; only usable on cloud servers |
| ExternalName | Forwards the service to a specified domain name via a DNS CNAME record (set through spec.externalName) |
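As a sketch of the difference between the types, a NodePort variant of the timinglee Service used in the example below only changes the type; the nodePort value is illustrative and must lie in the default 30000-32767 range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: timinglee
spec:
  type: NodePort            # expose on every node
  ports:
  - port: 80                # ClusterIP port
    targetPort: 80          # container port
    nodePort: 30080         # port opened on each NodeIP (optional; auto-assigned if omitted)
  selector:
    app: timinglee
```

With this type, `curl <any-node-ip>:30080` from outside the cluster should reach the backing pods.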

Example:

#generate the controller yaml and create the controller
[root@k8s-master ~]# kubectl create deployment timinglee --image myapp:v1  --replicas 2 --dry-run=client -o yaml > timinglee.yaml

#generate the service yaml and append it to the existing yaml
[root@k8s-master ~]# kubectl expose deployment timinglee --port 80 --target-port 80 --dry-run=client -o yaml >> timinglee.yaml

[root@k8s-master ~]# vim timinglee.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: timinglee
  name: timinglee
spec:
  replicas: 2
  selector:
    matchLabels:
      app: timinglee
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: timinglee
    spec:
      containers:
      - image: myapp:v1
        name: myapp
---										#separate different resources with ---

apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee
  name: timinglee
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee

[root@k8s-master ~]# kubectl apply  -f timinglee.yaml
deployment.apps/timinglee created
service/timinglee created

[root@k8s-master ~]# kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   19h
timinglee    ClusterIP   10.99.127.134   <none>        80/TCP    16s

Services use iptables scheduling by default

[root@k8s-master ~]# kubectl get services  -o wide
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE    SELECTOR
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   19h    <none>
timinglee    ClusterIP   10.99.127.134   <none>        80/TCP    119s   app=timinglee			#cluster-internal IP

#the policy can be seen in the firewall rules
[root@k8s-master ~]# iptables -t nat -nL
KUBE-SVC-I7WXYK76FWYNTTGM  6    --  0.0.0.0/0            10.99.127.134        /* default/timinglee cluster IP */ tcp dpt:80

ipvs mode

  • A Service is implemented jointly by the kube-proxy component and iptables

  • When kube-proxy handles Services through iptables it must maintain a very large number of iptables rules on the host; with many Pods, constantly refreshing those rules consumes a lot of CPU

  • IPVS-mode services let a k8s cluster support Pods at a much larger scale

Configuring ipvs mode

1 Install ipvsadm on all nodes

[root@k8s-all-nodes pod]# yum install ipvsadm -y

2 Edit the proxy configuration on the master node

[root@k8s-master ~]# kubectl -n kube-system edit cm kube-proxy
    metricsBindAddress: ""
    mode: "ipvs"							#make kube-proxy use ipvs mode
    nftables:

3 Restart the kube-proxy pods. A pod keeps the configuration it was started with; changing the config file does not change the state of pods that are already running, so the pods have to be restarted

[root@k8s-master ~]# kubectl -n kube-system get  pods   | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'

[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 172.25.254.100:6443          Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.0.3:53                Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.2:9153              Masq    1      0          0
  -> 10.244.0.3:9153              Masq    1      0          0
TCP  10.97.59.25:80 rr
  -> 10.244.1.17:80               Masq    1      0          0
  -> 10.244.2.13:80               Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.0.3:53                Masq    1      0          0

After switching to ipvs mode, kube-proxy adds a virtual interface on the host, kube-ipvs0, and assigns all service IPs to it

[root@k8s-master ~]# ip a | tail
 inet6 fe80::c4fb:e9ff:feee:7d32/64 scope link
    valid_lft forever preferred_lft forever
8: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
 link/ether fe:9f:c8:5d:a6:c8 brd ff:ff:ff:ff:ff:ff
 inet 10.96.0.10/32 scope global kube-ipvs0
    valid_lft forever preferred_lft forever
 inet 10.96.0.1/32 scope global kube-ipvs0
    valid_lft forever preferred_lft forever
 inet 10.97.59.25/32 scope global kube-ipvs0
    valid_lft forever preferred_lft forever

十三微服务类型详解

13.1 clusterip

特点:

clusterip模式只能在集群内访问,并对集群内的pod提供健康检测和自动发现功能
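ClusterIP 的负载均衡在 ipvs 模式下默认采用 rr(轮询)算法(参见前文 ipvsadm -Ln 输出中的 rr 标记)。下面用一段纯 Bash 草图模拟轮询调度的行为,其中 endpoints、pick_endpoint 均为演示用的假设命名,并非 kube-proxy 的真实实现:

```shell
#!/bin/bash
# 模拟 ipvs rr(轮询)调度:依次把请求分配给各后端 pod
endpoints=(10.244.1.17 10.244.2.13)   # 假设的后端 pod IP 列表
rr_index=0

pick_endpoint() {
    local ep=${endpoints[$rr_index]}
    # 下一次请求轮到下一个后端,到末尾后回绕
    rr_index=$(( (rr_index + 1) % ${#endpoints[@]} ))
    echo "$ep"
}

# 连续 4 次请求会在两个后端之间轮流分配
for i in 1 2 3 4; do
    pick_endpoint
done
```

真实的 ipvs 还支持 wrr、lc、sh 等多种调度算法,可通过 kube-proxy 配置中的 ipvs.scheduler 字段指定(默认即 rr)。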

示例:

复制代码
[root@k8s2 service]# vim myapp.yml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee
  name: timinglee
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee
  type: ClusterIP


service创建后集群DNS提供解析
[root@k8s-master ~]# dig  timinglee.default.svc.cluster.local @10.96.0.10

; <<>> DiG 9.16.23-RH <<>> timinglee.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27827
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 057d9ff344fe9a3a (echoed)
;; QUESTION SECTION:
;timinglee.default.svc.cluster.local. IN        A

;; ANSWER SECTION:
timinglee.default.svc.cluster.local. 30 IN A    10.97.59.25

;; Query time: 8 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Wed Sep 04 13:44:30 CST 2024
;; MSG SIZE  rcvd: 127

13.2 ClusterIP中的特殊模式headless

headless(无头服务)

对于无头 `Services` 并不会分配 Cluster IP,kube-proxy 不会处理它们,平台也不会为它们进行负载均衡和路由。集群内的访问通过 DNS 解析直接指向业务 pod 的 IP,所有的调度由 DNS 单独完成
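无头服务的 DNS 会为每个后端 pod 生成独立记录,记录名的规则是把 pod IP 中的 "." 换成 "-" 再拼上服务域名(与下文 nslookup 输出中 10-244-2-16.timinglee-service.default.svc.cluster.local 的格式一致)。下面的 Bash 草图演示这一命名规则,pod_dns_name 为演示用的假设函数名:

```shell
#!/bin/bash
# 根据 pod IP 生成无头服务为其创建的 DNS 记录名:
# <ip中的点换成横线>.<service>.<namespace>.svc.cluster.local
pod_dns_name() {
    local ip=$1 svc=$2 ns=$3
    echo "$(echo "$ip" | tr . -).$svc.$ns.svc.cluster.local"
}

pod_dns_name 10.244.2.16 timinglee-service default
# → 10-244-2-16.timinglee-service.default.svc.cluster.local
```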

复制代码
[root@k8s-master ~]# vim timinglee.yaml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee
  name: timinglee
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee
  type: ClusterIP
  clusterIP: None


[root@k8s-master ~]# kubectl delete -f timinglee.yaml
[root@k8s-master ~]# kubectl apply -f timinglee.yaml
service/timinglee created

#测试
[root@k8s-master ~]# kubectl get services timinglee
NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
timinglee   ClusterIP   None         <none>        80/TCP    6s

[root@k8s-master ~]# dig  timinglee.default.svc.cluster.local @10.96.0.10

; <<>> DiG 9.16.23-RH <<>> timinglee.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51527
;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 81f9c97b3f28b3b9 (echoed)
;; QUESTION SECTION:
;timinglee.default.svc.cluster.local. IN        A

;; ANSWER SECTION:
timinglee.default.svc.cluster.local. 20 IN A    10.244.2.14		#直接解析到pod上
timinglee.default.svc.cluster.local. 20 IN A    10.244.1.18

;; Query time: 0 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Wed Sep 04 13:58:23 CST 2024
;; MSG SIZE  rcvd: 178


#开启一个busyboxplus的pod测试
[root@k8s-master ~]# kubectl run  test --image busyboxplus -it
If you don't see a command prompt, try pressing enter.
/ # nslookup timinglee-service
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      timinglee-service
Address 1: 10.244.2.16 10-244-2-16.timinglee-service.default.svc.cluster.local
Address 2: 10.244.2.17 10-244-2-17.timinglee-service.default.svc.cluster.local
Address 3: 10.244.1.22 10-244-1-22.timinglee-service.default.svc.cluster.local
Address 4: 10.244.1.21 10-244-1-21.timinglee-service.default.svc.cluster.local
/ # curl timinglee-service
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
/ # curl timinglee-service/hostname.html
timinglee-c56f584cf-b8t6m

13.3 nodeport

通过ipvs暴露端口,从而使外部主机可以通过 集群任意节点IP:<port> 来访问pod业务

其访问过程为:客户端访问任意节点的 节点IP:NodePort,再由该节点上的 kube-proxy(ipvs)转发到后端 pod

示例:

复制代码
[root@k8s-master ~]# vim timinglee.yaml
---

apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee-service
  name: timinglee-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee
  type: NodePort

[root@k8s-master ~]# kubectl apply -f timinglee.yaml
deployment.apps/timinglee created
service/timinglee-service created
[root@k8s-master ~]# kubectl get services  timinglee-service
NAME                TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
timinglee-service   NodePort   10.98.60.22   <none>        80:31771/TCP   8

nodeport在集群节点上绑定端口,一个端口对应一个服务
[root@k8s-master ~]# for i in {1..5}
> do
> curl 172.25.254.100:31771/hostname.html
> done
timinglee-c56f584cf-fjxdk
timinglee-c56f584cf-5m2z5
timinglee-c56f584cf-z2w4d
timinglee-c56f584cf-tt5g6
timinglee-c56f584cf-fjxdk

nodeport默认端口

nodeport默认端口是30000-32767,超出会报错

复制代码
[root@k8s-master ~]# vim timinglee.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee-service
  name: timinglee-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 33333
  selector:
    app: timinglee
  type: NodePort

[root@k8s-master ~]# kubectl apply -f timinglee.yaml
deployment.apps/timinglee created
The Service "timinglee-service" is invalid: spec.ports[0].nodePort: Invalid value: 33333: provided port is not in the valid range. The range of valid ports is 30000-32767
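apiserver 的这条范围校验逻辑可以用一小段 Bash 近似还原(仅为演示,validate_nodeport 为假设的函数名,并非 kubernetes 源码):

```shell
#!/bin/bash
# 模拟 apiserver 对 nodePort 的范围校验,默认范围 30000-32767
validate_nodeport() {
    local port=$1 min=${2:-30000} max=${3:-32767}
    if [ "$port" -ge "$min" ] && [ "$port" -le "$max" ]; then
        echo "valid"
    else
        echo "Invalid value: $port: The range of valid ports is $min-$max"
    fi
}

validate_nodeport 31771               # 默认范围内 → valid
validate_nodeport 33333               # 超出默认范围,报错
validate_nodeport 33333 30000 40000   # 扩大范围后 → valid
```

这也说明了为什么把范围调整为 30000-40000 之后,33333 这个端口就可以正常使用。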

如果需要使用这个范围以外的端口就需要特殊设定

复制代码
[root@k8s-master ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --service-node-port-range=30000-40000


在 kube-apiserver 的启动参数中添加 --service-node-port-range= 即可自定义端口范围

修改保存后 apiserver 会自动重启,整个重启过程无需人为干预,等 apiserver 正常启动后才能继续操作集群

13.4 loadbalancer

云平台会为我们分配vip并实现访问,如果是裸金属主机那么需要metallb来实现ip的分配

复制代码
[root@k8s-master ~]# vim timinglee.yaml

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee-service
  name: timinglee-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee
  type: LoadBalancer

[root@k8s2 service]# kubectl apply -f myapp.yml

默认无法分配外部访问IP
[root@k8s2 service]# kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        4d1h
myapp        LoadBalancer   10.107.23.134   <pending>     80:32537/TCP   4s

LoadBalancer模式适用云平台,裸金属环境需要安装metallb提供支持

13.5 metalLB

官网:Installation :: MetalLB, bare metal load-balancer for Kubernetes

metalLB功能

为LoadBalancer分配vip

部署方式

复制代码
1.设置ipvs模式
[root@k8s-master ~]# kubectl edit cm -n kube-system kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true

[root@k8s-master ~]# kubectl -n kube-system get  pods   | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'

2.下载部署文件
[root@k8s2 metallb]# wget https://raw.githubusercontent.com/metallb/metallb/v0.14.8/config/manifests/metallb-native.yaml

3.修改文件中镜像地址,与harbor仓库路径保持一致
[root@k8s-master ~]# vim metallb-native.yaml
...
image: metallb/controller:v0.14.8
image: metallb/speaker:v0.14.8

4.上传镜像到harbor
[root@k8s-master ~]# docker pull quay.io/metallb/controller:v0.14.8
[root@k8s-master ~]# docker pull quay.io/metallb/speaker:v0.14.8

[root@k8s-master ~]# docker tag quay.io/metallb/speaker:v0.14.8 reg.timinglee.org/metallb/speaker:v0.14.8
[root@k8s-master ~]# docker tag quay.io/metallb/controller:v0.14.8 reg.timinglee.org/metallb/controller:v0.14.8

[root@k8s-master ~]# docker push reg.timinglee.org/metallb/speaker:v0.14.8
[root@k8s-master ~]# docker push reg.timinglee.org/metallb/controller:v0.14.8


部署服务
[root@k8s2 metallb]# kubectl apply -f metallb-native.yaml
[root@k8s-master ~]# kubectl -n metallb-system get pods
NAME                          READY   STATUS    RESTARTS   AGE
controller-65957f77c8-25nrw   1/1     Running   0          30s
speaker-p94xq                 1/1     Running   0          29s
speaker-qmpct                 1/1     Running   0          29s
speaker-xh4zh                 1/1     Running   0          30s

配置分配地址段
[root@k8s-master ~]# vim configmap.yml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool						#地址池名称
  namespace: metallb-system
spec:
  addresses:
  - 172.25.254.50-172.25.254.99			#修改为自己本地地址段

---										#两个不同的kind中间必须加分隔
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool							#使用地址池 

[root@k8s-master ~]# kubectl apply -f configmap.yml
ipaddresspool.metallb.io/first-pool created
l2advertisement.metallb.io/example created


[root@k8s-master ~]# kubectl get services
NAME                TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
kubernetes          ClusterIP      10.96.0.1       <none>          443/TCP        21h
timinglee-service   LoadBalancer   10.109.36.123   172.25.254.50   80:31595/TCP   9m9s


#通过分配地址从集群外访问服务
[root@reg ~]# curl  172.25.254.50
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

13.6 externalname

  • ExternalName 类型的 service 不会被分配 ClusterIP,而是通过 DNS 的 CNAME 记录解析到固定域名,以此解决外部服务 IP 变化的问题

  • 一般应用于外部业务和pod沟通或外部业务迁移到pod内时

  • 在应用向集群迁移的过程中,externalname在过渡阶段就可以起作用了。

  • 集群外的资源迁移到集群时,在迁移的过程中ip可能会变化,但是域名+dns解析能完美解决此问题

示例:

复制代码
[root@k8s-master ~]# vim timinglee.yaml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee-service
  name: timinglee-service
spec:
  selector:
    app: timinglee
  type: ExternalName
  externalName: www.timinglee.org


[root@k8s-master ~]# kubectl apply -f timinglee.yaml

[root@k8s-master ~]# kubectl get services  timinglee-service
NAME                TYPE           CLUSTER-IP   EXTERNAL-IP         PORT(S)   AGE
timinglee-service   ExternalName   <none>       www.timinglee.org   <none>    2m58s

十四 Ingress-nginx

官网:

https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters

14.1 ingress-nginx功能

  • 一种全局的、为了代理不同后端 Service 而设置的负载均衡服务,支持7层

  • Ingress由两部分组成:Ingress controller和Ingress服务

  • Ingress Controller 会根据你定义的 Ingress 对象,提供对应的代理能力。

  • 业界常用的各种反向代理项目,比如 Nginx、HAProxy、Envoy、Traefik 等,都已经为Kubernetes 专门维护了对应的 Ingress Controller。

14.2 部署ingress

14.2.1 下载部署文件

复制代码
[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/baremetal/deploy.yaml

上传ingress所需镜像到harbor

复制代码
[root@k8s-master ~]# docker tag registry.k8s.io/ingress-nginx/controller:v1.11.2@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce reg.timinglee.org/ingress-nginx/controller:v1.11.2

[root@k8s-master ~]# docker tag registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 reg.timinglee.org/ingress-nginx/kube-webhook-certgen:v1.4.3

[root@k8s-master ~]# docker push reg.timinglee.org/ingress-nginx/controller:v1.11.2
[root@k8s-master ~]# docker push reg.timinglee.org/ingress-nginx/kube-webhook-certgen:v1.4.3

14.2.2 安装ingress

复制代码
[root@k8s-master ~]# vim deploy.yaml
445         image: ingress-nginx/controller:v1.11.2
546         image: ingress-nginx/kube-webhook-certgen:v1.4.3
599         image: ingress-nginx/kube-webhook-certgen:v1.4.3

[root@k8s-master ~]# kubectl -n ingress-nginx get pods
NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-ggqm6       0/1     Completed   0          82s
ingress-nginx-admission-patch-q4wp2        0/1     Completed   0          82s
ingress-nginx-controller-bb7d8f97c-g2h4p   1/1     Running     0          82s
[root@k8s-master ~]# kubectl -n ingress-nginx get svc
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.103.33.148   <none>        80:34512/TCP,443:34727/TCP   108s
ingress-nginx-controller-admission   ClusterIP   10.103.183.64   <none>        443/TCP                      108s


#修改微服务为loadbalancer
[root@k8s-master ~]# kubectl -n ingress-nginx edit svc ingress-nginx-controller
49   type: LoadBalancer

[root@k8s-master ~]# kubectl -n ingress-nginx get services
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.103.33.148   172.25.254.50   80:34512/TCP,443:34727/TCP   4m43s
ingress-nginx-controller-admission   ClusterIP      10.103.183.64   <none>          443/TCP                      4m43s

在ingress-nginx-controller中看到的对外IP就是ingress最终对外开放的ip

14.2.3 测试ingress

复制代码
#生成yaml文件
[root@k8s-master ~]# kubectl create ingress webcluster --rule '*/=timinglee-svc:80' --dry-run=client -o yaml > timinglee-ingress.yml

[root@k8s-master ~]# vim timinglee-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - backend:
          service:
            name: timinglee-svc
            port:
              number: 80
        path: /
        pathType: Prefix
        #pathType 可选值:Exact(精确匹配)、Prefix(前缀匹配)、ImplementationSpecific(由具体的 IngressClass 实现决定)
      
#建立ingress控制器
[root@k8s-master ~]# kubectl apply -f timinglee-ingress.yml
ingress.networking.k8s.io/webserver created

[root@k8s-master ~]# kubectl get ingress
NAME           CLASS   HOSTS   ADDRESS         PORTS   AGE
test-ingress   nginx   *       172.25.254.10   80      8m30s


[root@reg ~]# for n in {1..5}; do curl 172.25.254.50/hostname.html; done
timinglee-c56f584cf-8jhn6
timinglee-c56f584cf-8cwfm
timinglee-c56f584cf-8jhn6
timinglee-c56f584cf-8cwfm
timinglee-c56f584cf-8jhn6

14.3 ingress 的高级用法

14.3.1 基于路径的访问

复制代码
[root@k8s-master app]# kubectl create deployment myapp-v1 --image myapp:v1 --dry-run=client -o yaml > myapp-v1.yaml

[root@k8s-master app]# kubectl create deployment myapp-v2 --image myapp:v2 --dry-run=client -o yaml > myapp-v2.yaml


[root@k8s-master app]# vim myapp-v1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp-v1
  name: myapp-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-v1
  strategy: {}
  template:
    metadata:
      labels:
        app: myapp-v1
    spec:
      containers:
      - image: myapp:v1
        name: myapp

---

apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp-v1
  name: myapp-v1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp-v1


[root@k8s-master app]# vim myapp-v2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp-v2
  name: myapp-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-v2
  template:
    metadata:
      labels:
        app: myapp-v2
    spec:
      containers:
      - image: myapp:v2
        name: myapp
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp-v2
  name: myapp-v2
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp-v2

[root@k8s-master app]# kubectl expose deployment myapp-v1 --port 80 --target-port 80 --dry-run=client -o yaml >> myapp-v1.yaml

[root@k8s-master app]# kubectl expose deployment myapp-v2 --port 80 --target-port 80 --dry-run=client -o yaml >> myapp-v2.yaml


[root@k8s-master app]# kubectl get services
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   29h
myapp-v1     ClusterIP   10.104.84.65     <none>        80/TCP    13s
myapp-v2     ClusterIP   10.105.246.219   <none>        80/TCP    7s

2.建立ingress的yaml

复制代码
[root@k8s-master app]# vim ingress1.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /		#访问路径后加任何内容都被定向到/
  name: ingress1
spec:
  ingressClassName: nginx
  rules:
  - host: www.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /v1
        pathType: Prefix

      - backend:
          service:
            name: myapp-v2
            port:
              number: 80
        path: /v2
        pathType: Prefix

#测试:
[root@reg ~]# echo 172.25.254.50 www.timinglee.org >> /etc/hosts

[root@reg ~]# curl  www.timinglee.org/v1
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@reg ~]# curl  www.timinglee.org/v2
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>


#nginx.ingress.kubernetes.io/rewrite-target: / 的功能实现
[root@reg ~]# curl  www.timinglee.org/v2/aaaa
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
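rewrite-target: / 的改写效果可以用下面的 Bash 草图说明(rewrite_path 为演示用的假设函数名,真实的 ingress-nginx 是在生成的 nginx 配置中用 rewrite 指令完成改写的):

```shell
#!/bin/bash
# 模拟 rewrite-target 注解:命中 prefix 的请求路径统一改写为 target
rewrite_path() {
    local prefix=$1 target=$2 req=$3
    case $req in
        "$prefix"*) echo "$target" ;;   # 命中规则前缀 → 改写
        *)          echo "$req"   ;;    # 未命中 → 保持原样
    esac
}

rewrite_path /v2 / /v2/aaaa    # → /      (后端收到的是根路径)
rewrite_path /v2 / /other      # → /other
```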

14.3.2 基于域名的访问

复制代码
#在测试主机中设定解析
[root@reg ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.250 reg.timinglee.org
172.25.254.50 www.timinglee.org myappv1.timinglee.org myappv2.timinglee.org

# 建立基于域名的yml文件
[root@k8s-master app]# vim ingress2.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress2
spec:
  ingressClassName: nginx
  rules:
  - host: myappv1.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix
  - host: myappv2.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v2
            port:
              number: 80
        path: /
        pathType: Prefix
        
#利用文件建立ingress
[root@k8s-master app]# kubectl apply -f ingress2.yml
ingress.networking.k8s.io/ingress2 created

[root@k8s-master app]# kubectl describe ingress ingress2
Name:             ingress2
Labels:           <none>
Namespace:        default
Address:
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host                   Path  Backends
  ----                   ----  --------
  myappv1.timinglee.org
                         /   myapp-v1:80 (10.244.2.31:80)
  myappv2.timinglee.org
                         /   myapp-v2:80 (10.244.2.32:80)
Annotations:             nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    21s   nginx-ingress-controller  Scheduled for sync


#在测试主机中测试
[root@reg ~]# curl  www.timinglee.org/v1
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@reg ~]# curl  www.timinglee.org/v2
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

14.3.3 建立tls加密

复制代码
#建立证书
[root@k8s-master app]# openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -subj "/CN=nginxsvc/O=nginxsvc" -out tls.crt

#建立加密资源类型secret
[root@k8s-master app]# kubectl create secret tls  web-tls-secret --key tls.key --cert tls.crt
secret/web-tls-secret created
[root@k8s-master app]# kubectl get secrets
NAME             TYPE                DATA   AGE
web-tls-secret   kubernetes.io/tls   2      6s

secret通常在kubernetes中存放敏感数据,它并不是一种加密方式,在后面课程中会有专门讲解
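可以用 base64 直接验证"secret 只是编码、不是加密"这一点:

```shell
#!/bin/bash
# secret 中的数据只做了 base64 编码,任何人拿到都能直接解码还原
printf '%s' "timinglee" | base64          # → dGltaW5nbGVl
printf '%s' "dGltaW5nbGVl" | base64 -d    # → timinglee
```

因此真正的敏感数据还需要配合 RBAC 权限控制或 etcd 静态加密(EncryptionConfiguration)来保护。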

复制代码
#建立ingress3基于tls认证的yml文件
[root@k8s-master app]# vim ingress3.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress3
spec:
  tls:
  - hosts:
    - myapp-tls.timinglee.org
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix
        
#测试
[root@reg ~]# curl -k https://myapp-tls.timinglee.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

14.3.4 建立auth认证

复制代码
#建立认证文件
[root@k8s-master app]# dnf install httpd-tools -y
[root@k8s-master app]# htpasswd -cm auth lee
New password:
Re-type new password:
Adding password for user lee
[root@k8s-master app]# cat auth
lee:$apr1$BohBRkkI$hZzRDfpdtNzue98bFgcU10

#建立认证类型资源
[root@k8s-master app]# kubectl create secret generic auth-web --from-file auth
[root@k8s-master app]# kubectl describe secrets auth-web
Name:         auth-web
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
auth:  42 bytes

#建立ingress4基于用户认证的yaml文件
[root@k8s-master app]# vim ingress4.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
  name: ingress4
spec:
  tls:
  - hosts:
    - myapp-tls.timinglee.org
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix

#建立ingress4
[root@k8s-master app]# kubectl apply -f ingress4.yml
ingress.networking.k8s.io/ingress4 created
[root@k8s-master app]# kubectl describe ingress ingress4
Name:             ingress4
Labels:           <none>
Namespace:        default
Address:
Ingress Class:    nginx
Default backend:  <default>
TLS:
  web-tls-secret terminates myapp-tls.timinglee.org
Rules:
  Host                     Path  Backends
  ----                     ----  --------
  myapp-tls.timinglee.org
                           /   myapp-v1:80 (10.244.2.31:80)
Annotations:               nginx.ingress.kubernetes.io/auth-realm: Please input username and password
                           nginx.ingress.kubernetes.io/auth-secret: auth-web
                           nginx.ingress.kubernetes.io/auth-type: basic
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    14s   nginx-ingress-controller  Scheduled for sync


#测试:
[root@reg ~]# curl -k https://myapp-tls.timinglee.org
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx</center>
</body>
</html>

[root@reg ~]# curl -k https://myapp-tls.timinglee.org -ulee:lee
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

14.3.5 rewrite重定向

十五 Canary金丝雀发布

15.1 什么是金丝雀发布

金丝雀发布(Canary Release)也称为灰度发布,是一种软件发布策略。

主要目的是在将新版本的软件全面推广到生产环境之前,先在一小部分用户或服务器上进行测试和验证,以降低因新版本引入重大问题而对整个系统造成的影响。

是一种Pod的发布方式。金丝雀发布采取先添加、再删除的方式,保证Pod的总量不低于期望值。并且在更新部分Pod后,暂停更新,当确认新Pod版本运行正常后再进行其他版本的Pod的更新。
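"先添加、再删除"的过程可以用下面的 Bash 草图演示(desired、old、new 均为演示用的假设变量,并非任何真实 API 字段):

```shell
#!/bin/bash
# 模拟先加新 pod、再删旧 pod 的发布过程,保证总量始终不低于期望值
desired=4
old=4
new=0

while [ "$new" -lt "$desired" ]; do
    new=$((new + 1))        # 先添加一个新版本 pod
    old=$((old - 1))        # 确认新 pod 就绪后再删除一个旧版本 pod
    echo "old=$old new=$new total=$((old + new))"
done
# 每一步 total 都保持为 4,总量从未低于期望值
```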

15.2 Canary发布方式

其中基于 header 和基于 weight 的灰度方式使用最多

15.2.1 基于header(http包头)灰度

  • 通过Annotation扩展

  • 创建灰度ingress,配置灰度头部key以及value

  • 灰度流量验证完毕后,切换正式ingress到新版本

  • 之前我们做升级时可以通过控制器做滚动更新,默认比例为25%。利用header灰度可以使升级更为平滑,通过指定的key和value测试新的业务体系是否有问题。
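基于 header 的路由决策本质上是:请求头中 version 的值等于 2 就进入灰度后端,否则进入正式后端。下面是一个 Bash 草图(route_by_header 为演示用的假设函数名,并非 ingress-nginx 源码):

```shell
#!/bin/bash
# 模拟 canary-by-header: version / canary-by-header-value: "2" 的路由决策
route_by_header() {
    local version_header=$1
    if [ "$version_header" = "2" ]; then
        echo myapp-v2       # 带灰度头的请求进入新版本
    else
        echo myapp-v1       # 其余流量仍走正式版本
    fi
}

route_by_header ""    # 普通请求 → myapp-v1
route_by_header 2     # curl -H "version: 2" → myapp-v2
```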

示例:

复制代码
#建立版本1的ingress
[root@k8s-master app]# vim ingress7.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
  name: myapp-v1-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix
        
[root@k8s-master app]# kubectl describe ingress myapp-v1-ingress
Name:             myapp-v1-ingress
Labels:           <none>
Namespace:        default
Address:          172.25.254.10
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host                 Path  Backends
  ----                 ----  --------
  myapp.timinglee.org
                       /   myapp-v1:80 (10.244.2.31:80)
Annotations:           <none>
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    44s (x2 over 73s)  nginx-ingress-controller  Scheduled for sync


#建立基于header的ingress
[root@k8s-master app]# vim ingress8.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "version"
    nginx.ingress.kubernetes.io/canary-by-header-value: "2"
  name: myapp-v2-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v2
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-master app]# kubectl apply -f ingress8.yml
ingress.networking.k8s.io/myapp-v2-ingress created
[root@k8s-master app]# kubectl describe ingress myapp-v2-ingress
Name:             myapp-v2-ingress
Labels:           <none>
Namespace:        default
Address:
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host                 Path  Backends
  ----                 ----  --------
  myapp.timinglee.org
                       /   myapp-v2:80 (10.244.2.32:80)
Annotations:           nginx.ingress.kubernetes.io/canary: true
                       nginx.ingress.kubernetes.io/canary-by-header: version
                       nginx.ingress.kubernetes.io/canary-by-header-value: 2
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    21s   nginx-ingress-controller  Scheduled for sync

#测试:
[root@reg ~]# curl  myapp.timinglee.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@reg ~]# curl -H "version: 2" myapp.timinglee.org
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

15.2.2 基于权重的灰度发布

  • 通过Annotation扩展

  • 创建灰度ingress,配置灰度权重以及总权重

  • 灰度流量验证完毕后,切换正式ingress到新版本
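权重切分的预期效果:canary-weight: 10、canary-weight-total: 100 时,约 10% 的流量进入灰度版本。下面用确定性的取模近似随机抽样,仅演示比例关系(v1、v2 等均为假设的计数变量,真实的 ingress-nginx 按随机数切分):

```shell
#!/bin/bash
# 模拟 100 个请求按 10/100 的权重切分
weight=10
total=100
v1=0
v2=0

for i in $(seq 0 99); do
    if [ $((i % total)) -lt "$weight" ]; then
        v2=$((v2 + 1))      # 约 weight/total 比例的流量进入灰度
    else
        v1=$((v1 + 1))
    fi
done

echo "v1:$v1, v2:$v2"       # → v1:90, v2:10
```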

示例

复制代码
#基于权重的灰度发布
[root@k8s-master app]# vim ingress8.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"		#更改权重值
    nginx.ingress.kubernetes.io/canary-weight-total: "100"
  name: myapp-v2-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v2
            port:
              number: 80
        path: /
        pathType: Prefix

[root@k8s-master app]# kubectl apply -f ingress8.yml
ingress.networking.k8s.io/myapp-v2-ingress created

#测试:
[root@reg ~]# vim check_ingress.sh
#!/bin/bash
v1=0
v2=0

for (( i=0; i<100; i++))
do
    response=`curl -s myapp.timinglee.org |grep -c v1`

    v1=`expr $v1 + $response`
    v2=`expr $v2 + 1 - $response`

done
echo "v1:$v1, v2:$v2"

[root@reg ~]# sh check_ingress.sh
v1:90, v2:10

#更改完毕权重后继续测试可观察变化

十六 configmap

16.1 configmap的功能

  • configMap用于保存配置数据,以键值对形式存储。

  • configMap 资源提供了向 Pod 注入配置数据的方法。

  • 镜像和配置文件解耦,以便实现镜像的可移植性和可复用性。

  • 受etcd限制,configMap中单个文件大小不能超过1MiB

16.2 configmap的使用场景

  • 填充环境变量的值

  • 设置容器内的命令行参数

  • 填充卷的配置文件

16.3 configmap创建方式

16.3.1 字面值创建

复制代码
[root@k8s-master ~]# kubectl create cm lee-config --from-literal fname=timing --from-literal lname=lee
configmap/lee-config created

[root@k8s-master ~]# kubectl describe cm lee-config
Name:         lee-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data							#键值信息显示
====
fname:
----
timing
lname:
----
lee

BinaryData
====

Events:  <none> 

16.3.2 通过文件创建

复制代码
[root@k8s-master ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 114.114.114.114

[root@k8s-master ~]# kubectl create cm lee2-config --from-file /etc/resolv.conf
configmap/lee2-config created
[root@k8s-master ~]# kubectl describe cm lee2-config
Name:         lee2-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
resolv.conf:
----
# Generated by NetworkManager
nameserver 114.114.114.114


BinaryData
====

Events:  <none>

16.3.3 通过目录创建

复制代码
[root@k8s-master ~]# mkdir leeconfig
[root@k8s-master ~]# cp /etc/fstab /etc/rc.d/rc.local  leeconfig/
[root@k8s-master ~]# kubectl create cm lee3-config --from-file leeconfig/
configmap/lee3-config created
[root@k8s-master ~]# kubectl describe cm lee3-config
Name:         lee3-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
fstab:
----

#
# /etc/fstab
# Created by anaconda on Fri Jul 26 13:04:22 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=6577c44f-9c1c-44f9-af56-6d6b505fcfa8 /                       xfs     defaults        0 0
UUID=eec689b4-73d5-4f47-b999-9a585bb6da1d /boot                   xfs     defaults        0 0
UUID=ED00-0E42          /boot/efi               vfat    umask=0077,shortname=winnt 0 2
#UUID=be2f2006-6072-4c77-83d4-f2ff5e237f9f none                    swap    defaults        0 0

rc.local:
----
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local
mount /dev/cdrom /rhel9


BinaryData
====

Events:  <none>

16.3.4 通过yaml文件创建

复制代码
[root@k8s-master ~]# kubectl create cm lee4-config --from-literal db_host=172.25.254.100 --from-literal db_port=3306 --dry-run=client -o yaml > lee-config.yaml

[root@k8s-master ~]# vim lee-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: lee4-config
data:
  db_host: "172.25.254.100"
  db_port: "3306"

[root@k8s-master ~]# kubectl describe cm lee4-config
Name:         lee4-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
db_host:
----
172.25.254.100
db_port:
----
3306

BinaryData
====

Events:  <none>

16.3.5 configmap的使用方式

  • 通过环境变量的方式直接传递给pod

  • 通过pod的 命令行运行方式

  • 作为volume的方式挂载到pod内

16.3.5.1 使用configmap填充环境变量
复制代码
#将cm中的内容映射为指定变量
[root@k8s-master ~]# vim testpod1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - env
    env:
    - name: key1
      valueFrom:
        configMapKeyRef:
          name: lee4-config
          key: db_host
    - name: key2
      valueFrom:
        configMapKeyRef:
          name: lee4-config
          key: db_port
  restartPolicy: Never

[root@k8s-master ~]# kubectl apply -f testpod1.yml
pod/testpod created

[root@k8s-master ~]# kubectl logs pods/testpod
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
MYAPP_V1_SERVICE_HOST=10.104.84.65
HOSTNAME=testpod
SHLVL=1
MYAPP_V2_SERVICE_HOST=10.105.246.219
HOME=/
MYAPP_V1_PORT=tcp://10.104.84.65:80
MYAPP_V1_SERVICE_PORT=80
MYAPP_V2_SERVICE_PORT=80
MYAPP_V2_PORT=tcp://10.105.246.219:80
MYAPP_V1_PORT_80_TCP_ADDR=10.104.84.65
MYAPP_V2_PORT_80_TCP_ADDR=10.105.246.219
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
MYAPP_V1_PORT_80_TCP_PORT=80
MYAPP_V2_PORT_80_TCP_PORT=80
MYAPP_V1_PORT_80_TCP_PROTO=tcp
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MYAPP_V2_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
key1=172.25.254.100
key2=3306
MYAPP_V1_PORT_80_TCP=tcp://10.104.84.65:80
MYAPP_V2_PORT_80_TCP=tcp://10.105.246.219:80
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1


#把cm中的值直接映射为变量
[root@k8s-master ~]# vim testpod2.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - env
    envFrom:
    - configMapRef:
        name: lee4-config
  restartPolicy: Never

#查看日志
[root@k8s-master ~]# kubectl logs pods/testpod
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
MYAPP_V1_SERVICE_HOST=10.104.84.65
HOSTNAME=testpod
SHLVL=1
MYAPP_V2_SERVICE_HOST=10.105.246.219
HOME=/
db_port=3306
MYAPP_V1_SERVICE_PORT=80
MYAPP_V1_PORT=tcp://10.104.84.65:80
MYAPP_V2_SERVICE_PORT=80
MYAPP_V2_PORT=tcp://10.105.246.219:80
MYAPP_V1_PORT_80_TCP_ADDR=10.104.84.65
MYAPP_V2_PORT_80_TCP_ADDR=10.105.246.219
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
MYAPP_V1_PORT_80_TCP_PORT=80
age=18
MYAPP_V2_PORT_80_TCP_PORT=80
MYAPP_V1_PORT_80_TCP_PROTO=tcp
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
MYAPP_V2_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PROTO=tcp
MYAPP_V1_PORT_80_TCP=tcp://10.104.84.65:80
MYAPP_V2_PORT_80_TCP=tcp://10.105.246.219:80
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
name=lee
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
db_host=172.25.254.100


#在pod命令行中使用变量
[root@k8s-master ~]# vim testpod3.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - echo ${db_host} ${db_port}		#变量调用需使用 ${变量名} 格式
    envFrom:
    - configMapRef:
        name: lee4-config
  restartPolicy: Never

#查看日志
[root@k8s-master ~]# kubectl logs pods/testpod
172.25.254.100 3306
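testpod3 中 `echo ${db_host} ${db_port}` 的展开过程可以在本地 shell 中模拟验证:变量通过环境注入后,由 /bin/sh -c 在执行时展开(仅为本地演示,不依赖集群;变量值取自前文 lee4-config 中的 db_host/db_port):

```shell
# 模拟 pod 中 command 的变量展开:env 注入的变量由 /bin/sh -c 在运行时展开
db_host=172.25.254.100 db_port=3306 /bin/sh -c 'echo ${db_host} ${db_port}'
# 输出:172.25.254.100 3306
```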
16.3.5.2 通过数据卷使用configmap
[root@k8s-master ~]# vim testpod4.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - cat /config/db_host
    volumeMounts:					#调用卷策略
    - name: config-volume			#卷名称
      mountPath: /config
  volumes:							#声明卷的配置
  - name: config-volume				#卷名称
    configMap:
      name: lee4-config
  restartPolicy: Never

#查看日志
[root@k8s-master ~]# kubectl logs testpod
172.25.254.100
16.3.5.3 利用configMap填充pod的配置文件
#建立配置文件模板
[root@k8s-master ~]# vim nginx.conf
server {
  listen 8000;
  server_name _;
  root /usr/share/nginx/html;
  index index.html;
}

#利用模板生成cm
[root@k8s-master ~]# kubectl create cm nginx-conf --from-file nginx.conf
configmap/nginx-conf created
[root@k8s-master ~]# kubectl describe cm nginx-conf
Name:         nginx-conf
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
nginx.conf:
----
server {
  listen 8000;
  server_name _;
  root /usr/share/nginx/html;
  index index.html;
}


BinaryData
====

Events:  <none>

#建立nginx控制器文件
[root@k8s-master ~]# kubectl create deployment nginx --image nginx:latest --replicas 1 --dry-run=client -o yaml > nginx.yml

#设定nginx.yml中的卷
[root@k8s-master ~]# vim nginx.yml
[root@k8s-master ~]# cat nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        volumeMounts:
        - name: config-volume
          mountPath: /etc/nginx/conf.d

      volumes:
        - name: config-volume
          configMap:
            name: nginx-conf


#测试
[root@k8s-master ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP            NODE        NOMINATED NODE   READINESS GATES
nginx-8487c65cfc-cz5hd   1/1     Running   0          3m7s   10.244.2.38   k8s-node2   <none>           <none>
[root@k8s-master ~]# curl 10.244.2.38:8000
16.3.5.4 通过热更新cm修改配置
[root@k8s-master ~]# kubectl edit cm nginx-conf
apiVersion: v1
data:
  nginx.conf: |
    server {
      listen 8080;						#端口改为8080
      server_name _;
      root /usr/share/nginx/html;
      index index.html;
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2024-09-07T02:49:20Z"
  name: nginx-conf
  namespace: default
  resourceVersion: "153055"
  uid: 20bee584-2dab-4bd5-9bcb-78318404fa7a

#查看配置文件
[root@k8s-master ~]# kubectl exec pods/nginx-8487c65cfc-cz5hd -- cat /etc/nginx/conf.d/nginx.conf
server {
  listen 8080;
  server_name _;
  root /usr/share/nginx/html;
  index index.html;
}

十七 secrets配置管理

17.1 secrets的功能介绍

  • Secret 对象类型用来保存敏感信息,例如密码、OAuth 令牌和 ssh key。

  • 敏感信息放在 secret 中比放在 Pod 的定义或者容器镜像中来说更加安全和灵活

  • Pod 可以用两种方式使用 secret:

    • 作为 volume 中的文件被挂载到 pod 中的一个或者多个容器里。

    • 当 kubelet 为 pod 拉取镜像时使用。

  • Secret的类型:

    • Service Account:Kubernetes 自动创建包含访问 API 凭据的 secret,并自动修改 pod 以使用此类型的 secret。

    • Opaque:使用base64编码存储信息,可以通过base64 --decode解码获得原始数据,因此安全性弱。

    • kubernetes.io/dockerconfigjson:用于存储docker registry的认证信息

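Opaque 类型只做 base64 编码而非加密,其可逆性可以在本地直接验证(示例中的 bGVl 即后文 secret 中 password 的存储值):

```shell
# base64 只是编码,不是加密:任何人拿到 data 字段都能还原明文
echo -n lee | base64          # 输出:bGVl
echo -n bGVl | base64 -d      # 输出:lee
```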
17.2 secrets的创建

在创建secrets时我们可以用命令的方法或者yaml文件的方法

17.2.1从文件创建

[root@k8s-master secrets]# echo -n timinglee > username.txt
[root@k8s-master secrets]# echo -n lee > password.txt
[root@k8s-master secrets]# kubectl create secret generic userlist --from-file username.txt --from-file password.txt
secret/userlist created
[root@k8s-master secrets]# kubectl get secrets userlist -o yaml
apiVersion: v1
data:
  password.txt: bGVl
  username.txt: dGltaW5nbGVl
kind: Secret
metadata:
  creationTimestamp: "2024-09-07T07:30:42Z"
  name: userlist
  namespace: default
  resourceVersion: "177216"
  uid: 9d76250c-c16b-4520-b6f2-cc6a8ad25594
type: Opaque

编写yaml文件

[root@k8s-master secrets]# echo -n timinglee | base64
dGltaW5nbGVl
[root@k8s-master secrets]# echo -n lee | base64
bGVl

[root@k8s-master secrets]# kubectl create secret generic userlist --dry-run=client -o yaml > userlist.yml

[root@k8s-master secrets]# vim userlist.yml
apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: null
  name: userlist
type: Opaque
data:
  username: dGltaW5nbGVl
  password: bGVl

[root@k8s-master secrets]# kubectl apply -f userlist.yml
secret/userlist created

[root@k8s-master secrets]# kubectl describe secrets userlist
Name:         userlist
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password:  3 bytes
username:  9 bytes

17.3 Secret的使用方法

17.3.1 将Secret挂载到Volume中

[root@k8s-master secrets]# kubectl run  nginx --image nginx --dry-run=client -o yaml > pod1.yaml

#向固定路径映射
[root@k8s-master secrets]# vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: secrets
      mountPath: /secret
      readOnly: true

  volumes:
  - name: secrets
    secret:
      secretName: userlist

[root@k8s-master secrets]# kubectl apply -f pod1.yaml
pod/nginx created


[root@k8s-master secrets]# kubectl exec  pods/nginx -it -- /bin/bash
root@nginx:/# cat /secret/
cat: /secret/: Is a directory
root@nginx:/# cd /secret/
root@nginx:/secret# ls
password  username
root@nginx:/secret# cat password
leeroot@nginx:/secret# cat username
timingleeroot@nginx:/secret#

17.3.2 向指定路径映射 secret 密钥

#向指定路径映射
[root@k8s-master secrets]# vim pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx1
  name: nginx1
spec:
  containers:
  - image: nginx
    name: nginx1
    volumeMounts:
    - name: secrets
      mountPath: /secret
      readOnly: true

  volumes:
  - name: secrets
    secret:
      secretName: userlist
      items:
      - key: username
        path: my-users/username

[root@k8s-master secrets]# kubectl apply -f pod2.yaml
pod/nginx1 created
[root@k8s-master secrets]# kubectl exec  pods/nginx1 -it -- /bin/bash
root@nginx1:/# cd secret/
root@nginx1:/secret# ls
my-users
root@nginx1:/secret# cd my-users
root@nginx1:/secret/my-users# ls
username
root@nginx1:/secret/my-users# cat username 

17.3.3 将Secret设置为环境变量

[root@k8s-master secrets]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - image: busybox
    name: busybox
    command:
    - /bin/sh
    - -c
    - env
    env:
    - name: USERNAME
      valueFrom:
        secretKeyRef:
          name: userlist
          key: username
    - name: PASS
      valueFrom:
        secretKeyRef:
          name: userlist
          key: password
  restartPolicy: Never

[root@k8s-master secrets]# kubectl apply -f pod3.yaml
pod/busybox created
[root@k8s-master secrets]# kubectl logs pods/busybox
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
HOSTNAME=busybox
MYAPP_V1_SERVICE_HOST=10.104.84.65
MYAPP_V2_SERVICE_HOST=10.105.246.219
SHLVL=1
HOME=/root
MYAPP_V1_SERVICE_PORT=80
MYAPP_V1_PORT=tcp://10.104.84.65:80
MYAPP_V2_SERVICE_PORT=80
MYAPP_V2_PORT=tcp://10.105.246.219:80
MYAPP_V1_PORT_80_TCP_ADDR=10.104.84.65
USERNAME=timinglee
MYAPP_V2_PORT_80_TCP_ADDR=10.105.246.219
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
MYAPP_V1_PORT_80_TCP_PORT=80
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MYAPP_V2_PORT_80_TCP_PORT=80
MYAPP_V1_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
MYAPP_V2_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PROTO=tcp
MYAPP_V1_PORT_80_TCP=tcp://10.104.84.65:80
MYAPP_V2_PORT_80_TCP=tcp://10.105.246.219:80
PASS=lee
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/

17.3.4 存储docker registry的认证信息

建立私有仓库并上传镜像

#登陆仓库
[root@k8s-master secrets]# docker login  reg.timinglee.org
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credential-stores

Login Succeeded

#上传镜像
[root@k8s-master secrets]# docker tag timinglee/game2048:latest  reg.timinglee.org/timinglee/game2048:latest
[root@k8s-master secrets]# docker push reg.timinglee.org/timinglee/game2048:latest
The push refers to repository [reg.timinglee.org/timinglee/game2048]
88fca8ae768a: Pushed
6d7504772167: Pushed
192e9fad2abc: Pushed
36e9226e74f8: Pushed
011b303988d2: Pushed
latest: digest: sha256:8a34fb9cb168c420604b6e5d32ca6d412cb0d533a826b313b190535c03fe9390 size: 1364


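pod3.yml 中引用的 docker-auth 这个 secret 需要先行创建,例如 `kubectl create secret docker-registry docker-auth --docker-server=reg.timinglee.org --docker-username=<用户> --docker-password=<密码>`(用户名/密码为占位)。该 secret 实际存储的是 base64 编码的 .dockerconfigjson,其内容可以在本地粗略模拟(凭据仅为示例):

```shell
# 模拟 docker-registry 类型 secret 中 .dockerconfigjson 的内容
# 用户名/密码为示例占位,实际以私有仓库凭据为准
user=timinglee; pass=lee; server=reg.timinglee.org
auth=$(printf '%s:%s' "$user" "$pass" | base64)
printf '{"auths":{"%s":{"auth":"%s"}}}\n' "$server" "$auth"
# 输出:{"auths":{"reg.timinglee.org":{"auth":"dGltaW5nbGVlOmxlZQ=="}}}
```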
[root@k8s-master secrets]# vim pod3.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: game2048
  name: game2048
spec:
  containers:
  - image: reg.timinglee.org/timinglee/game2048:latest
    name: game2048
  imagePullSecrets:					#不设定docker认证时无法下载镜像
  - name: docker-auth

[root@k8s-master secrets]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
game2048   1/1     Running   0          4s

十八 volumes配置管理

  • 容器中文件在磁盘上是临时存放的,这给容器中运行的特殊应用程序带来一些问题

  • 当容器崩溃时,kubelet将重新启动容器,容器中的文件将会丢失,因为容器会以干净的状态重建。

  • 当在一个 Pod 中同时运行多个容器时,常常需要在这些容器之间共享文件。

  • Kubernetes 卷具有明确的生命周期,与使用它的 Pod 相同

  • 卷比 Pod 中运行的任何容器的存活期都长,在容器重新启动时数据也会得到保留

  • 当一个 Pod 不再存在时,卷也将不再存在。

  • Kubernetes 可以支持许多类型的卷,Pod 也能同时使用任意数量的卷。

  • 卷不能挂载到其他卷,也不能与其他卷有硬链接。 Pod 中的每个容器必须独立地指定每个卷的挂载位置。

18.1 kubernetes支持的卷的类型

官网:卷 | Kubernetes

k8s支持的卷的类型如下:

  • awsElasticBlockStore 、azureDisk、azureFile、cephfs、cinder、configMap、csi

  • downwardAPI、emptyDir、fc (fibre channel)、flexVolume、flocker

  • gcePersistentDisk、gitRepo (deprecated)、glusterfs、hostPath、iscsi、local、

  • nfs、persistentVolumeClaim、projected、portworxVolume、quobyte、rbd

  • scaleIO、secret、storageos、vsphereVolume

18.2 emptyDir卷

功能:

当Pod指定到某个节点上时,首先创建的是一个emptyDir卷,并且只要 Pod 在该节点上运行,卷就一直存在。卷最初是空的。 尽管 Pod 中的容器挂载 emptyDir 卷的路径可能相同也可能不同,但是这些容器都可以读写 emptyDir 卷中相同的文件。 当 Pod 因为某些原因被从节点上删除时,emptyDir 卷中的数据也会永久删除

emptyDir 的使用场景:

  • 缓存空间,例如基于磁盘的归并排序。

  • 耗时较长的计算任务提供检查点,以便任务能方便地从崩溃前状态恢复执行。

  • 在 Web 服务器容器服务数据时,保存内容管理器容器获取的文件。

示例:

[root@k8s-master volumes]# vim pod1.yml
apiVersion: v1
kind: Pod
metadata:
  name: vol1
spec:
  containers:
  - image: busyboxplus:latest
    name: vm1
    command:
    - /bin/sh
    - -c
    - sleep 30000000
    volumeMounts:
    - mountPath: /cache
      name: cache-vol
  - image: nginx:latest
    name: vm2
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: cache-vol
  volumes:
  - name: cache-vol
    emptyDir:
      medium: Memory
      sizeLimit: 100Mi

[root@k8s-master volumes]# kubectl apply -f pod1.yml

#查看pod中卷的使用情况
[root@k8s-master volumes]# kubectl describe pods vol1

#测试效果

[root@k8s-master volumes]# kubectl exec -it pods/vol1 -c vm1 -- /bin/sh
/ # cd /cache/
/cache # ls
/cache # curl localhost
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
/cache # echo timinglee > index.html
/cache # curl  localhost
timinglee
/cache # dd if=/dev/zero of=bigfile bs=1M count=101
dd: writing 'bigfile': No space left on device
101+0 records in
99+1 records out

18.3 hostPath卷

功能:

hostPath 卷能将主机节点文件系统上的文件或目录挂载到您的 Pod 中,不会因为pod关闭而被删除

hostPath 的一些用法

  • 运行一个需要访问 Docker 引擎内部机制的容器,挂载 /var/lib/docker 路径。

  • 在容器中运行 cAdvisor(监控) 时,以 hostPath 方式挂载 /sys。

  • 允许 Pod 指定给定的 hostPath 在运行 Pod 之前是否应该存在,是否应该创建以及应该以什么方式存在

hostPath的安全隐患

  • 具有相同配置(例如从 podTemplate 创建)的多个 Pod 会由于节点上文件的不同而在不同节点上有不同的行为。

  • 当 Kubernetes 按照计划添加资源感知的调度时,这类调度机制将无法考虑由 hostPath 使用的资源。

  • 基础主机上创建的文件或目录只能由 root 用户写入。您需要在 特权容器 中以 root 身份运行进程,或者修改主机上的文件权限以便容器能够写入 hostPath 卷。

示例:

[root@k8s-master volumes]# vim pod2.yml
apiVersion: v1
kind: Pod
metadata:
  name: vol1
spec:
  containers:
  - image: nginx:latest
    name: vm1
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: cache-vol
  volumes:
  - name: cache-vol
    hostPath:
      path: /data
      type: DirectoryOrCreate				#当/data目录不存在时自动建立

#测试:
[root@k8s-master volumes]# kubectl apply -f pod2.yml
pod/vol1 created
[root@k8s-master volumes]# kubectl get  pods  -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
vol1   1/1     Running   0          10s   10.244.2.48   k8s-node2   <none>           <none>

[root@k8s-master volumes]# curl  10.244.2.48
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>

[root@k8s-node2 ~]# echo timinglee > /data/index.html
[root@k8s-master volumes]# curl  10.244.2.48
timinglee

#当pod被删除后hostPath不会被清理
[root@k8s-master volumes]# kubectl delete -f pod2.yml
pod "vol1" deleted
[root@k8s-node2 ~]# ls /data/
index.html

18.4 nfs卷

NFS 卷允许将一个现有的 NFS 服务器上的目录挂载到 Kubernetes 中的 Pod 中。这对于在多个 Pod 之间共享数据或持久化存储数据非常有用

例如,如果有多个容器需要访问相同的数据集,或者需要将容器中的数据持久保存到外部存储,NFS 卷可以提供一种方便的解决方案。

18.4.1 部署一台nfs共享主机并在所有k8s节点中安装nfs-utils

#部署nfs主机
[root@reg ~]# dnf install nfs-utils -y
[root@reg ~]# systemctl enable --now nfs-server.service

[root@reg ~]# vim /etc/exports
/nfsdata   *(rw,sync,no_root_squash)

[root@reg ~]# exportfs -rv
exporting *:/nfsdata

[root@reg ~]# showmount  -e
Export list for reg.timinglee.org:
/nfsdata *

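/etc/exports 中各选项的含义可以简单注释如下(仅为说明,完整语义以 exports(5) 手册为准):

```shell
# /etc/exports 条目格式:<共享目录> <允许的客户端>(选项)
# rw             允许读写
# sync           写操作同步落盘后才返回
# no_root_squash 不把客户端 root 映射为匿名用户(有安全风险,实验环境使用)
line='/nfsdata   *(rw,sync,no_root_squash)'
echo "$line" | grep -o 'no_root_squash'
# 输出:no_root_squash
```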
#在k8s所有节点中安装nfs-utils
[root@k8s-master & node1 & node2  ~]# dnf install nfs-utils -y

18.4.2 部署nfs卷

[root@k8s-master volumes]# vim pod3.yml
apiVersion: v1
kind: Pod
metadata:
  name: vol1
spec:
  containers:
  - image: nginx:latest
    name: vm1
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: cache-vol
  volumes:
  - name: cache-vol
    nfs:
      server: 172.25.254.250
      path: /nfsdata

[root@k8s-master volumes]# kubectl apply -f pod3.yml
pod/vol1 created

#测试
[root@k8s-master volumes]# kubectl get pods   -o wide
NAME   READY   STATUS    RESTARTS   AGE    IP            NODE        NOMINATED NODE   READINESS GATES
vol1   1/1     Running   0          100s   10.244.2.50   k8s-node2   <none>           <none>
[root@k8s-master volumes]# curl  10.244.2.50
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>

##在nfs主机中
[root@reg ~]# echo timinglee > /nfsdata/index.html
[root@k8s-master volumes]# curl  10.244.2.50
timinglee

18.5 PersistentVolume持久卷

18.5.1 静态持久卷pv与静态持久卷声明pvc

PersistentVolume(持久卷,简称PV)
  • pv是集群内由管理员提供的网络存储的一部分。

  • PV也是集群中的一种资源。是一种volume插件,

  • 但是它的生命周期却是和使用它的Pod相互独立的。

  • PV这个API对象,捕获了诸如NFS、ISCSI、或其他云存储系统的实现细节

  • pv有两种提供方式:静态和动态

    • 静态PV:集群管理员创建多个PV,它们携带着真实存储的详细信息,它们存在于Kubernetes API中,并可用于存储使用

    • 动态PV:当管理员创建的静态PV都不匹配用户的PVC时,集群可能会尝试专门地供给volume给PVC。这种供给基于StorageClass

PersistentVolumeClaim(持久卷声明,简称PVC)
  • 是用户的一种存储请求

  • 它和Pod类似,Pod消耗Node资源,而PVC消耗PV资源

  • Pod能够请求特定的资源(如CPU和内存)。PVC能够请求指定的大小和访问的模式持久卷配置

  • PVC与PV的绑定是一对一的映射。如果没找到匹配的PV,PVC会无限期地处于unbound(未绑定)状态

volumes访问模式
  • ReadWriteOnce -- 该volume只能被单个节点以读写的方式映射

  • ReadOnlyMany -- 该volume可以被多个节点以只读方式映射

  • ReadWriteMany -- 该volume可以被多个节点以读写的方式映射

  • 在命令行中,访问模式可以简写为:

    • RWO - ReadWriteOnce

    • ROX - ReadOnlyMany

    • RWX - ReadWriteMany

volumes回收策略
  • Retain:保留,需要手动回收

  • Recycle:回收,自动删除卷中数据(在当前版本中已经废弃)

  • Delete:删除,相关联的存储资产,如AWS EBS,GCE PD,Azure Disk,or OpenStack Cinder卷都会被删除

注意:

只有NFS和HostPath支持回收利用

AWS EBS,GCE PD,Azure Disk,or OpenStack Cinder卷支持删除操作。

volumes状态说明
  • Available 卷是一个空闲资源,尚未绑定到任何申领

  • Bound 该卷已经绑定到某申领

  • Released 所绑定的申领已被删除,但是关联存储资源尚未被集群回收

  • Failed 卷的自动回收操作失败

静态pv实例:
#在nfs主机中建立实验目录
[root@reg ~]# mkdir  /nfsdata/pv{1..3}

#编写创建pv的yml文件,pv是集群资源,不在任何namespace中
[root@k8s-master pvc]# vim pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 172.25.254.250

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  capacity:
    storage: 15Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv2
    server: 172.25.254.250
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv3
spec:
  capacity:
    storage: 25Gi
  volumeMode: Filesystem
  accessModes:
  - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv3
    server: 172.25.254.250

[root@k8s-master pvc]# kubectl get  pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv1    5Gi        RWO            Retain           Available           nfs            <unset>                          4m50s
pv2    15Gi       RWX            Retain           Available           nfs            <unset>                          4m50s
pv3    25Gi       ROX            Retain           Available           nfs            <unset>                          4m50s

#建立pvc,pvc是pv使用的申请,需要保证和pod在一个namesapce中
[root@k8s-master pvc]# vim pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
spec:
  storageClassName: nfs
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 15Gi
[root@k8s-master pvc]# kubectl get pvc
NAME   STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc1   Bound    pv1      5Gi        RWO            nfs            <unset>                 5s
pvc2   Bound    pv2      15Gi       RWX            nfs            <unset>                 4s
pvc3   Bound    pv3      25Gi       ROX            nfs            <unset>                 4s

#在其他namespace中无法应用
[root@k8s-master pvc]# kubectl -n kube-system  get pvc
No resources found in kube-system namespace.

在pod中使用pvc

[root@k8s-master pvc]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: timinglee
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: vol1
  volumes:
  - name: vol1
    persistentVolumeClaim:
      claimName: pvc1

[root@k8s-master pvc]# kubectl get pods  -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
timinglee   1/1     Running   0          83s   10.244.2.54   k8s-node2   <none>           <none>
[root@k8s-master pvc]# kubectl exec -it pods/timinglee -- /bin/bash
root@timinglee:/# curl  localhost
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
root@timinglee:/# cd /usr/share/nginx/
root@timinglee:/usr/share/nginx# ls
html
root@timinglee:/usr/share/nginx# cd html/
root@timinglee:/usr/share/nginx/html# ls

[root@reg ~]# echo timinglee > /nfsdata/pv1/index.html

[root@k8s-master pvc]# kubectl exec -it pods/timinglee -- /bin/bash
root@timinglee:/# cd /usr/share/nginx/html/
root@timinglee:/usr/share/nginx/html# ls
index.html

十九 存储类storageclass

官网: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

19.1 StorageClass说明

  • StorageClass提供了一种描述存储类(class)的方法,不同的class可能会映射到不同的服务质量等级和备份策略或其他策略等。

  • 每个 StorageClass 都包含 provisioner、parameters 和 reclaimPolicy 字段, 这些字段会在StorageClass需要动态分配 PersistentVolume 时会使用到

19.2 StorageClass的属性

属性说明:存储类 | Kubernetes

Provisioner(存储分配器):用来决定使用哪个卷插件分配 PV,该字段必须指定。可以指定内部分配器,也可以指定外部分配器。外部分配器的代码地址为: kubernetes-incubator/external-storage,其中包括NFS和Ceph等。

Reclaim Policy(回收策略):通过reclaimPolicy字段指定创建的Persistent Volume的回收策略,回收策略包括:Delete 或者 Retain,没有指定默认为Delete。

19.3 存储分配器NFS Client Provisioner

源码地址:https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

  • NFS Client Provisioner是一个automatic provisioner,使用NFS作为存储,自动创建PV和对应的PVC,本身不提供NFS存储,需要外部先有一套NFS存储服务。

  • PV以 ${namespace}-${pvcName}-${pvName} 的命名格式提供(在NFS服务器上)

  • PV回收的时候以 archived-${namespace}-${pvcName}-${pvName} 的命名格式(在NFS服务器上)

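上述命名规则可以用 shell 变量直接拼出来核对(pvName 取自后文 test-claim 实际绑定的卷名,结果与 NFS 服务器上生成的目录名一致):

```shell
# NFS Client Provisioner 在 NFS 服务器上创建目录的命名格式
namespace=default
pvcName=test-claim
pvName=pvc-b1aef9cc-4be9-4d2a-8c5e-0fe7716247e2
echo "${namespace}-${pvcName}-${pvName}"
# 输出:default-test-claim-pvc-b1aef9cc-4be9-4d2a-8c5e-0fe7716247e2
```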
19.4 部署NFS Client Provisioner

19.4.1 创建sa并授权
[root@k8s-master storageclass]# vim rbac.yml
apiVersion: v1
kind: Namespace
metadata:
  name: nfs-client-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-client-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-client-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io


#查看rbac信息
[root@k8s-master storageclass]# kubectl apply -f rbac.yml
namespace/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
[root@k8s-master storageclass]# kubectl -n nfs-client-provisioner get sa
NAME                     SECRETS   AGE
default                  0         14s
nfs-client-provisioner   0         14s
19.4.2 部署应用
[root@k8s-master storageclass]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.25.254.250
            - name: NFS_PATH
              value: /nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.25.254.250
            path: /nfsdata

[root@k8s-master storageclass]# kubectl -n nfs-client-provisioner get deployments.apps nfs-client-provisioner
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           86s
19.4.3 创建存储类
[root@k8s-master storageclass]# vim class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
 
[root@k8s-master storageclass]# kubectl apply -f class.yaml
storageclass.storage.k8s.io/nfs-client created
[root@k8s-master storageclass]# kubectl get storageclasses.storage.k8s.io
NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  9s
19.4.4 创建pvc
[root@k8s-master storageclass]# vim pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G
[root@k8s-master storageclass]# kubectl apply -f pvc.yml
persistentvolumeclaim/test-claim created

[root@k8s-master storageclass]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
test-claim   Bound    pvc-7782a006-381a-440a-addb-e9d659b8fe0b   1Gi        RWX            nfs-client     <unset>                 21m
19.4.5 创建测试pod
[root@k8s-master storageclass]# vim pod.yml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim

[root@k8s-master storageclass]# kubectl apply -f pod.yml

[root@reg ~]# ls /nfsdata/default-test-claim-pvc-b1aef9cc-4be9-4d2a-8c5e-0fe7716247e2/
SUCCESS
19.4.6 设置默认存储类
  • 在未设定默认存储类时pvc必须指定使用类的名称

  • 在设定存储类后创建pvc时可以不用指定storageClassName

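设定默认存储类的标准做法是给 StorageClass 加上 is-default-class 注解(此处 nfs-client 为前文创建的存储类,kubectl patch 需在集群环境中执行,下面仅演示 patch 内容):

```shell
# 将 nfs-client 设为默认存储类所用的 patch 内容
patch='{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
# 实际执行(需要 kubectl 环境):
# kubectl patch storageclass nfs-client -p "$patch"
echo "$patch"
```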
    #一次性指定多个pvc
    [root@k8s-master pvc]# vim pvc.yml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc1
    spec:
      storageClassName: nfs-client
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc2
    spec:
      storageClassName: nfs-client
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi

    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc3
    spec:
      storageClassName: nfs-client
      accessModes:
        - ReadOnlyMany
      resources:
        requests:
          storage: 15Gi

    [root@k8s-master pvc]# kubectl apply -f pvc.yml
    persistentvolumeclaim/pvc1 created
    persistentvolumeclaim/pvc2 created
    persistentvolumeclaim/pvc3 created
    [root@k8s-master pvc]# kubectl get pvc
    NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc1         Bound    pvc-25a3c8c5-2797-4240-9270-5c51caa211b8   1Gi        RWO            nfs-client     <unset>                 4s
    pvc2         Bound    pvc-c7f34d1c-c8d3-4e7f-b255-e29297865353   10Gi       RWX            nfs-client     <unset>                 4s
    pvc3         Bound    pvc-5f1086ad-2999-487d-88d2-7104e3e9b221   15Gi       ROX            nfs-client     <unset>                 4s
    test-claim   Bound    pvc-b1aef9cc-4be9-4d2a-8c5e-0fe7716247e2   1Gi        RWX            nfs-client     <unset>                 9m9s

二十 statefulset控制器

20.1 功能特性

  • StatefulSet是为了解决有状态服务的管理问题而设计的

  • StatefulSet将应用状态抽象成了两种情况:

  • 拓扑状态:应用实例必须按照某种顺序启动。新创建的Pod必须和原来Pod的网络标识一样

  • 存储状态:应用的多个实例分别绑定了不同存储数据。

  • StatefulSet给所有的Pod进行了编号,编号规则是:(statefulset名称)-(序号),从0开始。

  • Pod被删除后重建,重建Pod的网络标识也不会改变,Pod的拓扑状态按照Pod的"名字+编号"的方式固定下来,并且为每个Pod提供了一个固定且唯一的访问入口,Pod对应的DNS记录。

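Pod 的固定命名与 DNS 记录格式(<statefulset名>-<序号>.<headless服务名>)可以这样拼出来,对应后文访问的 web-0.nginx-svc 等域名(仅为命名规则演示,不依赖集群):

```shell
# StatefulSet pod 编号及其 DNS 记录(无头服务 nginx-svc 见后文构建方法)
sts=web; svc=nginx-svc
for i in 0 1 2; do
  echo "${sts}-${i}.${svc}"
done
# 输出:web-0.nginx-svc、web-1.nginx-svc、web-2.nginx-svc
```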
20.2 StatefulSet的组成部分

  • Headless Service:用来定义pod网络标识,生成可解析的DNS记录

  • volumeClaimTemplates:创建pvc,指定pvc名称大小,自动创建pvc且pvc由存储类供应。

  • StatefulSet:管理pod的

20.3 构建方法

#建立无头服务
[root@k8s-master statefulset]# vim headless.yml
apiVersion: v1
kind: Service
metadata:
 name: nginx-svc
 labels:
  app: nginx
spec:
 ports:
 - port: 80
   name: web
 clusterIP: None
 selector:
  app: nginx
[root@k8s-master statefulset]# kubectl apply -f headless.yml


#建立statefulset
[root@k8s-master statefulset]# vim statefulset.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-svc"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html

  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      storageClassName: nfs-client
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
[root@k8s-master statefulset]# kubectl apply -f statefulset.yml
statefulset.apps/web configured
[root@k8s-master statefulset]# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          3m26s
web-1   1/1     Running   0          3m22s
web-2   1/1     Running   0          3m18s


[root@reg nfsdata]# ls /nfsdata/
default-test-claim-pvc-34b3d968-6c2b-42f9-bbc3-d7a7a02dcbac
default-www-web-0-pvc-0390b736-477b-4263-9373-a53d20cc8f9f
default-www-web-1-pvc-a5ff1a7b-fea5-4e77-afd4-cdccedbc278c
default-www-web-2-pvc-83eff88b-4ae1-4a8a-b042-8899677ae854

20.4 测试:

#为每个pod建立index.html文件

[root@reg nfsdata]# echo web-0 > default-www-web-0-pvc-0390b736-477b-4263-9373-a53d20cc8f9f/index.html
[root@reg nfsdata]# echo web-1 > default-www-web-1-pvc-a5ff1a7b-fea5-4e77-afd4-cdccedbc278c/index.html
[root@reg nfsdata]# echo web-2 > default-www-web-2-pvc-83eff88b-4ae1-4a8a-b042-8899677ae854/index.html

#建立测试pod访问web-0~2
[root@k8s-master statefulset]# kubectl run -it testpod --image busyboxplus
/ # curl  web-0.nginx-svc
web-0
/ # curl  web-1.nginx-svc
web-1
/ # curl  web-2.nginx-svc
web-2

# Delete and recreate the StatefulSet
[root@k8s-master statefulset]# kubectl delete -f statefulset.yml
statefulset.apps "web" deleted
[root@k8s-master statefulset]# kubectl apply  -f statefulset.yml
statefulset.apps/web created

# Access is unchanged after recreation (stable identities)
[root@k8s-master statefulset]# kubectl attach testpod -c testpod -i -t
If you don't see a command prompt, try pressing enter.
/ # curl  web-0.nginx-svc
web-0
/ # curl  web-1.nginx-svc
web-1
/ # curl  web-2.nginx-svc
web-2

20.5 Scaling a StatefulSet

Before scaling a StatefulSet, first confirm that the application can safely be scaled.

Change the replica count with a command:

$ kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>

Or change the replica count by editing the resource:

$ kubectl edit statefulsets.apps <stateful-set-name>
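The same change can also be made declaratively: edit the replicas field in the manifest and re-apply it. A minimal sketch (only the changed field shown, reusing the statefulset.yml from above):

```yaml
# statefulset.yml (fragment)
spec:
  replicas: 5        # was 3; "kubectl apply -f statefulset.yml" picks up the change
```

The declarative route keeps the manifest as the source of truth, so later applies do not silently revert an imperative scale.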

Ordered teardown of a StatefulSet:
[root@k8s-master statefulset]# kubectl scale statefulset web --replicas 0
statefulset.apps/web scaled
[root@k8s-master statefulset]# kubectl delete -f statefulset.yml
statefulset.apps "web" deleted
[root@k8s-master statefulset]# kubectl delete pvc --all
persistentvolumeclaim "test-claim" deleted
persistentvolumeclaim "www-web-0" deleted
persistentvolumeclaim "www-web-1" deleted
persistentvolumeclaim "www-web-2" deleted
persistentvolumeclaim "www-web-3" deleted
persistentvolumeclaim "www-web-4" deleted
persistentvolumeclaim "www-web-5" deleted

二十一 k8s Network Communication

21.1 Overall Communication Architecture

  • k8s plugs in other components through the CNI interface to implement network communication. Popular plugins include flannel and calico.

  • CNI plugin config location: # cat /etc/cni/net.d/10-flannel.conflist

  • Plugins use the following solutions:

  • Virtual bridge with virtual NICs: multiple containers share one virtual NIC for communication.

  • Multiplexing (MacVLAN): multiple containers share one physical NIC for communication.

  • Hardware switching (SR-IOV): one physical NIC is virtualized into multiple interfaces; this gives the best performance.

  • Container-to-container communication:

  • Multiple containers inside the same pod communicate over the lo interface.

  • Pods on the same node forward packets to each other through the cni bridge.

  • Pods on different nodes need a network plugin to communicate.

  • Pod-to-service communication: implemented with iptables or ipvs. ipvs cannot fully replace iptables, because ipvs only does load balancing and cannot do NAT.

  • Pod-to-external communication: iptables MASQUERADE.

  • Service-to-external-client communication: ingress, nodeport, loadbalancer.

21.2 The flannel Network Plugin

Plugin components:

21.2.1 How flannel Cross-Host Communication Works

  • When a container sends an IP packet, it travels through the veth pair to the cni bridge and is then routed to the local flannel.1 device for processing.

  • VTEP devices communicate with each other using layer-2 frames. The source VTEP receives the original IP packet, adds a destination MAC address, encapsulates it into an inner frame, and sends it to the destination VTEP.

  • The inner frame cannot travel on the host's layer-2 network as-is; the Linux kernel must further encapsulate it into an ordinary host frame so it can be carried out through the host's eth0.

  • Linux prepends a VXLAN header to the inner frame. The VXLAN header carries an important flag called the VNI, which is how a VTEP decides whether a given frame should be handled by itself.

  • The flannel.1 device only knows the MAC address of the remote flannel.1 device, not the corresponding host address. In the Linux kernel, forwarding decisions come from the FDB (forwarding database); the FDB entries for flannel.1 are maintained by the flanneld process.

  • The kernel then prepends a layer-2 frame header with the target node's MAC address, obtained from the host's ARP table.

  • The flannel.1 device can now send this frame out through eth0, across the host network, to the target node's eth0. The target host's kernel network stack sees the VXLAN header with VNI 1, de-encapsulates the packet, and hands the inner frame to the local flannel.1 device based on the VNI; flannel.1 unpacks it and forwards it via the routing table to the cni bridge, finally reaching the target container.

    # Default network routes
    [root@k8s-master ~]# ip r
    default via 172.25.254.2 dev eth0 proto static metric 100
    10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
    10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
    10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
    172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
    172.25.254.0/24 dev eth0 proto kernel scope link src 172.25.254.100 metric 100

    # Bridge forwarding database (FDB)
    [root@k8s-master ~]# bridge fdb
    01:00:5e:00:00:01 dev eth0 self permanent
    33:33:00:00:00:01 dev eth0 self permanent
    01:00:5e:00:00:fb dev eth0 self permanent
    33:33:ff:65:cb:fa dev eth0 self permanent
    33:33:00:00:00:fb dev eth0 self permanent
    33:33:00:00:00:01 dev docker0 self permanent
    01:00:5e:00:00:6a dev docker0 self permanent
    33:33:00:00:00:6a dev docker0 self permanent
    01:00:5e:00:00:01 dev docker0 self permanent
    01:00:5e:00:00:fb dev docker0 self permanent
    02:42:76:94:aa:bc dev docker0 vlan 1 master docker0 permanent
    02:42:76:94:aa:bc dev docker0 master docker0 permanent
    33:33:00:00:00:01 dev kube-ipvs0 self permanent
    82:14:17:b1:1d:d0 dev flannel.1 dst 172.25.254.20 self permanent
    22:7f:e7:fd:33:77 dev flannel.1 dst 172.25.254.10 self permanent
    33:33:00:00:00:01 dev cni0 self permanent
    01:00:5e:00:00:6a dev cni0 self permanent
    33:33:00:00:00:6a dev cni0 self permanent
    01:00:5e:00:00:01 dev cni0 self permanent
    33:33:ff:aa:13:2f dev cni0 self permanent
    01:00:5e:00:00:fb dev cni0 self permanent
    33:33:00:00:00:fb dev cni0 self permanent
    0e:49:e3:aa:13:2f dev cni0 vlan 1 master cni0 permanent
    0e:49:e3:aa:13:2f dev cni0 master cni0 permanent
    7a:1c:2d:5d:0e:9e dev vethf29f1523 master cni0
    5e:4e:96:a0:eb:db dev vethf29f1523 vlan 1 master cni0 permanent
    5e:4e:96:a0:eb:db dev vethf29f1523 master cni0 permanent
    33:33:00:00:00:01 dev vethf29f1523 self permanent
    01:00:5e:00:00:01 dev vethf29f1523 self permanent
    33:33:ff:a0:eb:db dev vethf29f1523 self permanent
    33:33:00:00:00:fb dev vethf29f1523 self permanent
    b2:f9:14:9f:71:29 dev veth18ece01e master cni0
    3a:05:06:21:bf:7f dev veth18ece01e vlan 1 master cni0 permanent
    3a:05:06:21:bf:7f dev veth18ece01e master cni0 permanent
    33:33:00:00:00:01 dev veth18ece01e self permanent
    01:00:5e:00:00:01 dev veth18ece01e self permanent
    33:33:ff:21:bf:7f dev veth18ece01e self permanent
    33:33:00:00:00:fb dev veth18ece01e self permanent

    # ARP table
    [root@k8s-master ~]# arp -n
    Address HWtype HWaddress Flags Mask Iface
    10.244.0.2 ether 7a:1c:2d:5d:0e:9e C cni0
    172.25.254.1 ether 00:50:56:c0:00:08 C eth0
    10.244.2.0 ether 82:14:17:b1:1d:d0 CM flannel.1
    10.244.1.0 ether 22:7f:e7:fd:33:77 CM flannel.1
    172.25.254.20 ether 00:0c:29:6a:a8:61 C eth0
    172.25.254.10 ether 00:0c:29:ea:52:cb C eth0
    10.244.0.3 ether b2:f9:14:9f:71:29 C cni0
    172.25.254.2 ether 00:50:56:fc:e0:b9 C eth0

21.2.2 Backend Modes Supported by flannel

| Mode | Description |
|------|-------------|
| vxlan | Packet encapsulation; the default mode |
| Directrouting | Direct routing: host-gw within the same subnet, vxlan across subnets |
| host-gw | Host gateway; best performance, but only works within a layer-2 network and cannot cross networks. With thousands of Pods it can cause broadcast storms; not recommended at that scale |
| UDP | Poor performance; not recommended |

Changing flannel's default backend mode

[root@k8s-master ~]# kubectl -n kube-flannel edit cm kube-flannel-cfg
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "host-gw"			# changed value
      }
    }
# Restart the flannel pods
[root@k8s-master ~]# kubectl -n kube-flannel delete pod --all
pod "kube-flannel-ds-bk8wp" deleted
pod "kube-flannel-ds-mmftf" deleted
pod "kube-flannel-ds-tmfdn" deleted

[root@k8s-master ~]# ip r
default via 172.25.254.2 dev eth0 proto static metric 100
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 172.25.254.10 dev eth0
10.244.2.0/24 via 172.25.254.20 dev eth0
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.25.254.0/24 dev eth0 proto kernel scope link src 172.25.254.100 metric 100
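Besides switching entirely to host-gw, flannel's vxlan backend also offers a DirectRouting option, which uses host-gw routes between nodes on the same subnet and falls back to vxlan encapsulation across subnets. A sketch of the corresponding net-conf.json in the same kube-flannel-cfg ConfigMap edited above:

```yaml
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "DirectRouting": true
      }
    }
```

As with the host-gw change, the kube-flannel pods must be deleted so they restart with the new configuration.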

21.3 The calico Network Plugin

Official documentation:

Installing on on-premises deployments | Calico Documentation

21.3.1 calico Overview

  • Pure layer-3 forwarding, with no NAT or overlay in between; the best forwarding efficiency.

  • Calico only requires layer-3 routing reachability. Its minimal dependencies let it fit all VM, Container, bare-metal, and mixed environments.

21.3.2 calico Network Architecture

  • Felix: listens to the etcd store for events. When a user creates a pod, Felix sets up its NIC, IP, and MAC, then writes an entry into the kernel routing table mapping that IP to that NIC. Likewise, if the user specifies an isolation policy, Felix creates it in the ACLs to enforce isolation.

  • BIRD: a standard routing daemon. It picks up route changes from the kernel and propagates them to all other hosts via the standard BGP routing protocol, so the rest of the network knows where each IP lives and how to route to it.

21.3.3 Deploying calico

Remove the flannel plugin

[root@k8s-master ~]# kubectl delete  -f kube-flannel.yml

Remove the flannel config file on all nodes to avoid conflicts

[root@k8s-master & node1-2 ~]# rm -rf /etc/cni/net.d/10-flannel.conflist

Download the deployment manifest

[root@k8s-master calico]# curl https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico-typha.yaml -o calico.yaml

Pull the images and push them to the registry:

[root@k8s-master ~]# docker pull docker.io/calico/cni:v3.28.1
[root@k8s-master ~]# docker pull docker.io/calico/node:v3.28.1
[root@k8s-master ~]# docker pull docker.io/calico/kube-controllers:v3.28.1
[root@k8s-master ~]# docker pull docker.io/calico/typha:v3.28.1

Edit the manifest settings

[root@k8s-master calico]# vim calico.yaml
4835           image: calico/cni:v3.28.1
4835           image: calico/cni:v3.28.1
4906           image: calico/node:v3.28.1
4932           image: calico/node:v3.28.1
5160           image: calico/kube-controllers:v3.28.1
5249         - image: calico/typha:v3.28.1

4970             - name: CALICO_IPV4POOL_IPIP
4971               value: "Never"

4999             - name: CALICO_IPV4POOL_CIDR
5000               value: "10.244.0.0/16"
5001             - name: CALICO_AUTODETECTION_METHOD
5002               value: "interface=eth0"

[root@k8s-master calico]# kubectl apply -f calico.yaml
[root@k8s-master calico]# kubectl -n kube-system get pods
NAME                                       READY   STATUS    RESTARTS       AGE
calico-kube-controllers-6849cb478c-g5h5p   1/1     Running   0              75s
calico-node-dzzjp                          1/1     Running   0              75s
calico-node-ltz7n                          1/1     Running   0              75s
calico-node-wzdnq                          1/1     Running   0              75s
calico-typha-fff9df85f-vm5ks               1/1     Running   0              75s
coredns-647dc95897-nchjr                   1/1     Running   1 (139m ago)   4d7h
coredns-647dc95897-wjbg2                   1/1     Running   1 (139m ago)   4d7h
etcd-k8s-master                            1/1     Running   1 (139m ago)   4d7h
kube-apiserver-k8s-master                  1/1     Running   1 (139m ago)   3d10h
kube-controller-manager-k8s-master         1/1     Running   3 (139m ago)   4d7h
kube-proxy-9g5z2                           1/1     Running   1 (139m ago)   3d10h
kube-proxy-cd5wk                           1/1     Running   1 (139m ago)   3d10h
kube-proxy-mvq4c                           1/1     Running   1 (139m ago)   3d10h
kube-scheduler-k8s-master                  1/1     Running   3 (139m ago)   4d7h

Test:

[root@k8s-master calico]# kubectl run  web --image myapp:v1
pod/web created
[root@k8s-master calico]# kubectl get pods  -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
web    1/1     Running   0          5s    10.244.169.129   k8s-node2   <none>           <none>
[root@k8s-master calico]# curl  10.244.169.129
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

二十二 k8s Scheduling

22.1 The Role of Scheduling in Kubernetes

  • Scheduling is the process of automatically assigning unscheduled Pods to nodes in the cluster

  • The scheduler uses the kubernetes watch mechanism to discover newly created Pods in the cluster that have not yet been bound to a Node

  • The scheduler places each unscheduled Pod it discovers onto a suitable Node to run

22.2 How Scheduling Works

  • Create the Pod

    • A user creates a Pod object through the Kubernetes API, specifying its resource requirements, container image, and other information.
  • The scheduler watches for Pods

    • The Kubernetes scheduler watches for unscheduled Pod objects in the cluster and selects the best node for each.
  • Select a node

    • The scheduler selects the best node via its algorithms and binds the Pod to that node. Selection criteria include the nodes' resource usage, the Pod's resource requests, and affinity/anti-affinity rules.
  • Bind the Pod to the node

    • The scheduler saves the Pod-to-node binding in the etcd database so the node can retrieve the Pod's scheduling information.
  • The node starts the Pod

    • The node's kubelet picks up the Pod's scheduling information (served by the API server from etcd) and starts the Pod. If the node fails or runs out of resources, the scheduler reschedules the Pod and binds it to another node.
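The resource requests the scheduler filters nodes on are declared in the Pod spec. A minimal sketch (the pod name and values here are illustrative, reusing the myapp:v1 image from earlier examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: request-demo
spec:
  containers:
  - name: demo
    image: myapp:v1
    resources:
      requests:           # the scheduler only considers nodes with this much free capacity
        cpu: "500m"
        memory: 256Mi
```

If no node can satisfy the requests, the pod stays Pending until resources free up.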

22.3 Types of Schedulers

  • Default Scheduler:

    • The built-in scheduler in Kubernetes; it schedules newly created Pods onto suitable nodes.
  • Custom Scheduler:

    • A user-implemented scheduler that defines its own scheduling policies and rules according to actual needs, for more flexible and varied scheduling.
  • Extended Scheduler:

    • A scheduler implementation that supports scheduler extenders, which add custom scheduling rules and policies for more flexible and varied scheduling.
  • kube-scheduler is the default scheduler in kubernetes; it runs automatically on the control-plane node once kubernetes is up
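A pod opts into a non-default scheduler through the schedulerName field; a minimal sketch (the name my-scheduler is a placeholder for whatever custom scheduler you deploy):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: custom-sched-demo
spec:
  schedulerName: my-scheduler   # placeholder; must match a running scheduler
  containers:
  - name: demo
    image: myapp:v1
```

If the named scheduler is not running, the pod simply stays Pending, since no scheduler will claim it.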

22.4 Common Scheduling Methods

22.4.1 nodeName

  • nodeName is the simplest form of node selection constraint, but it is generally not recommended

  • If nodeName is specified in the PodSpec, it takes precedence over all other node-selection methods

  • Some limitations of using nodeName to select nodes:

    • If the named node does not exist, the pod stays Pending.

    • If the named node lacks the resources to host the pod, the pod fails to schedule.

    • Node names in cloud environments are not always predictable or stable

Example:

# Generate the pod manifest
[root@k8s-master scheduler]# kubectl run  testpod  --image myapp:v1 --dry-run=client -o yaml > pod1.yml

# Set the scheduling constraint
[root@k8s-master scheduler]# vim pod1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  nodeName: k8s-node2
  containers:
  - image: myapp:v1
    name: testpod

# Create the pod
[root@k8s-master scheduler]# kubectl apply -f pod1.yml
pod/testpod created

[root@k8s-master scheduler]# kubectl get pods  -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
testpod   1/1     Running   0          18s   10.244.169.130   k8s-node2   <none>           <none>

nodeName: k8s3 # if the node cannot be found the pod stays Pending; nodeName has the highest priority and other scheduling methods are ignored

22.4.2 nodeSelector (Selecting Nodes by Label)

  • nodeSelector is the simplest recommended form of node selection constraint

  • Add a label to the chosen node:

    kubectl label nodes k8s-node1 lab=lee
  • The same label can be set on multiple nodes

Example:

# View node labels
[root@k8s-master scheduler]# kubectl get nodes --show-labels
NAME         STATUS   ROLES           AGE    VERSION   LABELS
k8s-master   Ready    control-plane   5d3h   v1.30.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-node1    Ready    <none>          5d3h   v1.30.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux
k8s-node2    Ready    <none>          5d3h   v1.30.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux


# Set a node label
[root@k8s-master scheduler]# kubectl label nodes k8s-node1 lab=timinglee
node/k8s-node1 labeled
[root@k8s-master scheduler]# kubectl get nodes k8s-node1 --show-labels
NAME        STATUS   ROLES    AGE    VERSION   LABELS
k8s-node1   Ready    <none>   5d3h   v1.30.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux,lab=timinglee

# Scheduling configuration
[root@k8s-master scheduler]# vim pod2.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  nodeSelector:
    lab: timinglee
  containers:
  - image: myapp:v1
    name: testpod

[root@k8s-master scheduler]# kubectl apply -f pod2.yml
pod/testpod created
[root@k8s-master scheduler]# kubectl get pods  -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
testpod   1/1     Running   0          4s    10.244.36.65   k8s-node1   <none>           <none>

The same label can be added to any number of nodes

22.5 affinity (Affinity)

Official documentation:

https://kubernetes.io/zh/docs/concepts/scheduling-eviction/assign-pod-node

22.5.1 Affinity and Anti-Affinity

  • nodeSelector provides a very simple way to constrain pods to nodes with particular labels. Affinity/anti-affinity greatly expands the types of constraints you can express.

  • You can constrain against labels of pods already running on a node, rather than the node's own labels, to decide which pods may or may not be placed together.

22.5.2 nodeAffinity (Node Affinity)

  • The pod runs on whichever node satisfies the specified conditions

  • requiredDuringSchedulingIgnoredDuringExecution: must be satisfied, but does not affect pods that are already scheduled

  • preferredDuringSchedulingIgnoredDuringExecution: preferred; the pod is still scheduled even when the preference cannot be satisfied

  • IgnoredDuringExecution means that if a Node's labels change while the Pod is running, so that the affinity rule is no longer satisfied, the Pod keeps running where it is.

  • nodeAffinity also supports several match-rule operators:

| Operator | Meaning |
|----------|---------|
| In | the label's value is in the list |
| NotIn | the label's value is not in the list |
| Gt | the label's value is greater than the given value (not supported for Pod affinity) |
| Lt | the label's value is less than the given value (not supported for pod affinity) |
| Exists | the given label exists |
| DoesNotExist | the given label does not exist |

nodeAffinity example

# Example 1
[root@k8s-master scheduler]# vim pod3.yml
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
           nodeSelectorTerms:
           - matchExpressions:
             - key: disk
               operator: In | NotIn			# In and NotIn give opposite results; use one of the two
               values:
                 - ssd
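For the soft variant, preferredDuringSchedulingIgnoredDuringExecution adds a weight to each preference. A minimal sketch reusing the same disk=ssd label (this example is not from the original notes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity-preferred
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1                  # 1-100; higher weights count more when scoring nodes
        preference:
          matchExpressions:
          - key: disk
            operator: In
            values:
            - ssd
```

Unlike the required form, this pod still schedules onto a node without the disk=ssd label when no matching node is available.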

22.5.3 podAffinity (Pod Affinity)

  • The pod runs on whichever node already hosts a POD matching the conditions

  • podAffinity determines which PODs a POD may be co-located with on the same node

  • podAntiAffinity determines which PODs a POD must not be co-located with on the same node. Both deal with POD-to-POD relationships inside the Kubernetes cluster.

  • Inter-pod affinity and anti-affinity are most useful when combined with higher-level collections such as ReplicaSets, StatefulSets, and Deployments.

  • Inter-pod affinity and anti-affinity require significant processing, which can noticeably slow down scheduling in large clusters.

podAffinity example

[root@k8s-master scheduler]# vim example4.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"

[root@k8s-master scheduler]# kubectl get pods  -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
nginx-deployment-658496fff-d58bk   1/1     Running   0          39s   10.244.169.133   k8s-node2   <none>           <none>
nginx-deployment-658496fff-g25nq   1/1     Running   0          39s   10.244.169.134   k8s-node2   <none>           <none>
nginx-deployment-658496fff-vnlxz   1/1     Running   0          39s   10.244.169.135   k8s-node2   <none>           <none>

22.5.4 podAntiAffinity (Pod Anti-Affinity)

podAntiAffinity example
[root@k8s-master scheduler]# vim example5.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      affinity:
        podAntiAffinity:		# anti-affinity
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"

[root@k8s-master scheduler]# kubectl get pods  -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
nginx-deployment-5f5fc7b8b9-hs9kz   1/1     Running   0          6s    10.244.169.136   k8s-node2   <none>           <none>
nginx-deployment-5f5fc7b8b9-ktzsh   0/1     Pending   0          6s    <none>           <none>      <none>           <none>
nginx-deployment-5f5fc7b8b9-txdt9   1/1     Running   0          6s    10.244.36.67     k8s-node1   <none>           <none>

22.6 Taints (Blocking Scheduling)

  • Taints are a Node attribute; once a Node is tainted, Kubernetes will by default not schedule Pods onto it

  • If a Pod is given Tolerations, and the Pod can tolerate the taints on a Node, Kubernetes ignores those taints and may (but is not required to) schedule the Pod there

  • Use the kubectl taint command to add a taint to a node:

    kubectl taint nodes <node> key=value:effect        # general form
    kubectl taint nodes node1 key=value:NoSchedule     # create a taint
    kubectl describe nodes server1 | grep Taints       # query taints
    kubectl taint nodes node1 key-                     # delete a taint

Possible values for [effect]:

| effect value | Meaning |
|--------------|---------|
| NoSchedule | PODs are not scheduled onto the tainted node |
| PreferNoSchedule | A soft version of NoSchedule: avoid scheduling here if possible |
| NoExecute | Running PODs on the node without a matching toleration are evicted |
Taints example
# Create a deployment and run it
[root@k8s-master scheduler]# vim example6.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx

[root@k8s-master scheduler]# kubectl apply -f example6.yml
deployment.apps/web created

[root@k8s-master scheduler]# kubectl get pod -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
web-7c56dcdb9b-9wwdg   1/1     Running   0          25s   10.244.36.68     k8s-node1   <none>           <none>
web-7c56dcdb9b-qsx6w   1/1     Running   0          25s   10.244.169.137   k8s-node2   <none>           <none>

# Set a NoSchedule taint
[root@k8s-master scheduler]# kubectl taint node k8s-node1 name=lee:NoSchedule
node/k8s-node1 tainted
[root@k8s-master scheduler]# kubectl describe nodes k8s-node1 | grep Tain
Taints:             name=lee:NoSchedule

# Scale the deployment up; new pods land only on the untainted node
[root@k8s-master scheduler]# kubectl get pod -o wide
NAME                   READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
web-7c56dcdb9b-4l759   1/1     Running   0          6s      10.244.169.140   k8s-node2   <none>           <none>
web-7c56dcdb9b-9wwdg   1/1     Running   0          6m35s   10.244.36.68     k8s-node1   <none>           <none>
web-7c56dcdb9b-bqd75   1/1     Running   0          6s      10.244.169.141   k8s-node2   <none>           <none>
web-7c56dcdb9b-m8kx8   1/1     Running   0          6s      10.244.169.138   k8s-node2   <none>           <none>
web-7c56dcdb9b-qsx6w   1/1     Running   0          6m35s   10.244.169.137   k8s-node2   <none>           <none>
web-7c56dcdb9b-rhft4   1/1     Running   0          6s      10.244.169.139   k8s-node2   <none>           <none>


# Set a NoExecute taint
[root@k8s-master scheduler]# kubectl taint node k8s-node1 name=lee:NoExecute
node/k8s-node1 tainted
[root@k8s-master scheduler]# kubectl describe nodes k8s-node1 | grep Tain
Taints:             name=lee:NoExecute

[root@k8s-master scheduler]# kubectl get pod -o wide
NAME                   READY   STATUS              RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
web-7c56dcdb9b-4l759   1/1     Running             0          108s    10.244.169.140   k8s-node2   <none>           <none>
web-7c56dcdb9b-bqd75   1/1     Running             0          108s    10.244.169.141   k8s-node2   <none>           <none>
web-7c56dcdb9b-m8kx8   1/1     Running             0          108s    10.244.169.138   k8s-node2   <none>           <none>
web-7c56dcdb9b-mhkhl   0/1     ContainerCreating   0          14s     <none>           k8s-node2   <none>           <none>
web-7c56dcdb9b-qsx6w   1/1     Running             0          8m17s   10.244.169.137   k8s-node2   <none>           <none>
web-7c56dcdb9b-rhft4   1/1     Running             0          108s    10.244.169.139   k8s-node2   <none>           <none>


# Remove the taint
[root@k8s-master scheduler]# kubectl taint node k8s-node1 name-
node/k8s-node1 untainted
[root@k8s-master scheduler]#
[root@k8s-master scheduler]# kubectl describe nodes k8s-node1 | grep Tain
Taints:             <none>

tolerations (Tolerating Taints)

  • The key, value, and effect defined in tolerations must match the taint set on the node:

  • If operator is Equal, the key and value must match the taint exactly.

  • If operator is Exists, value can be omitted

  • If operator is not specified, it defaults to Equal.

  • Two special cases:

  • When key is not specified, combined with Exists it matches every key and value, tolerating all taints.

  • When effect is not specified, it matches every effect

Taint-toleration example:

# Taint the nodes
[root@k8s-master scheduler]# kubectl taint node k8s-node1 name=lee:NoExecute
node/k8s-node1 tainted
[root@k8s-master scheduler]# kubectl taint node k8s-node2 nodetype=bad:NoSchedule
node/k8s-node2 tainted


[root@k8s-master scheduler]# vim example7.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
      
      tolerations:			# tolerate all taints
      - operator: Exists

#     tolerations:			# tolerate taints whose effect is NoSchedule
#     - operator: Exists
#       effect: NoSchedule

#     tolerations:			# tolerate a NoSchedule taint with the given key and value
#     - key: nodetype
#       value: bad
#       effect: NoSchedule

Use only one of the three toleration variants per test
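As a further variant (not in the original notes), a NoExecute toleration can carry tolerationSeconds, so an already-running pod is evicted only after the grace period expires:

```yaml
      tolerations:
      - key: nodetype
        operator: Equal
        value: bad
        effect: NoExecute
        tolerationSeconds: 60   # evicted 60s after the taint appears
```

This is the mechanism Kubernetes itself uses for the node.kubernetes.io/not-ready and unreachable taints, giving pods a window to survive brief node outages.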

二十三 Kubernetes API Access Control

Authentication

  • There are currently 8 authentication methods; one or more can be enabled, and as soon as one method succeeds, no further methods are tried. Typically X509 Client Certs and Service Account Tokens are enabled.

  • Kubernetes clusters have two kinds of users: Service Accounts managed by Kubernetes, and normal User Accounts. An "account" in k8s is not an account in the usual sense; it does not really exist as a stored identity and exists only formally.

Authorization

  • Authorization is reached only after authentication. The request's resource attributes are matched against all authorization policies to decide whether to allow or deny it. There are currently 6 authorization modes: AlwaysDeny, AlwaysAllow, ABAC, RBAC, Webhook, and Node. RBAC is enabled by default in the cluster.

Admission Control

  • A way of intercepting requests. It runs after authentication and authorization, as the last link in the permission chain, and can modify and validate the requested API resource objects.

23.1 UserAccount vs. ServiceAccount

  • User accounts are for humans. Service accounts are for processes running in pods.

  • User accounts are global: their names are unique across all namespaces of the cluster, and future user resources will not be namespaced. Service accounts are namespaced.

  • A cluster's user accounts may be synced from a corporate database; creating one requires special privileges and involves complex business processes. Service accounts are deliberately lightweight, allowing cluster users to create them for specific tasks (the principle of least privilege).

23.1.1 ServiceAccount

  • Service account controller

    • Manages the service accounts in each namespace

    • Ensures every active namespace contains a service account named "default"

  • Service account admission controller

    • Sets a pod's ServiceAccount to default when none is specified.

    • Ensures the ServiceAccount referenced by the pod exists, otherwise rejects the pod.

    • If the pod has no ImagePullSecrets, the ServiceAccount's ImagePullSecrets are added to the pod

    • Adds a volumeSource mounted at /var/run/secrets/kubernetes.io/serviceaccount to every container in the pod

    • Adds a volume to the pod containing a token for API access

23.1.2 ServiceAccount Example

Create a ServiceAccount named timinglee

[root@k8s-master ~]# kubectl create sa timinglee
serviceaccount/timinglee created
[root@k8s-master ~]# kubectl describe  sa timinglee
Name:                timinglee
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              <none>
Events:              <none>
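The same ServiceAccount can also be declared in YAML instead of using kubectl create sa; a minimal equivalent sketch:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: timinglee
  namespace: default
```

Keeping it declarative lets the account be versioned alongside the pods that use it.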

Create a secret

[root@k8s-master ~]# kubectl create secret docker-registry docker-login --docker-username admin --docker-password lee --docker-server reg.timinglee.org --docker-email lee@timinglee.org
secret/docker-login created
[root@k8s-master ~]# kubectl describe secrets docker-login
Name:         docker-login
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/dockerconfigjson

Data
====
.dockerconfigjson:  119 bytes

Inject the secret into the ServiceAccount

[root@k8s-master ~]# kubectl edit sa timinglee
apiVersion: v1
imagePullSecrets:
- name: docker-login
kind: ServiceAccount
metadata:
  creationTimestamp: "2024-09-08T15:44:04Z"
  name: timinglee
  namespace: default
  resourceVersion: "262259"
  uid: 7645a831-9ad1-4ae8-a8a1-aca7b267ea2d

[root@k8s-master ~]# kubectl describe sa timinglee
Name:                timinglee
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  docker-login
Mountable secrets:   <none>
Tokens:              <none>
Events:              <none>

Create a pod that pulls from the private registry

[root@k8s-master auth]# vim example1.yml
[root@k8s-master auth]# kubectl apply -f example1.yml
pod/testpod created
[root@k8s-master auth]# kubectl describe pod testpod
 Warning  Failed     5s               kubelet            Failed to pull image "reg.timinglee.org/lee/nginx:latest": Error response from daemon: unauthorized: unauthorized to access repository: lee/nginx, action: pull: unauthorized to access repository: lee/nginx, action: pull
  Warning  Failed     5s               kubelet            Error: ErrImagePull
  Normal   BackOff    3s (x2 over 4s)  kubelet            Back-off pulling image "reg.timinglee.org/lee/nginx:latest"
  Warning  Failed     3s (x2 over 4s)  kubelet            Error: ImagePullBackOff

The image pull fails when the pod is created, because pulling from the private docker registry requires authentication

Bind the ServiceAccount to the pod

[root@k8s-master auth]# vim example1.yml
apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  serviceAccountName: timinglee
  containers:
  - image: reg.timinglee.org/lee/nginx:latest
    name: testpod

[root@k8s-master auth]# kubectl apply -f example1.yml
pod/testpod created
[root@k8s-master auth]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
testpod   1/1     Running   0          2s

二十四 Authentication (Creating Authenticated Users in k8s)

24.1 Creating a UserAccount
# Create the certificate
[root@k8s-master auth]# cd /etc/kubernetes/pki/
[root@k8s-master pki]# openssl genrsa -out timinglee.key 2048
[root@k8s-master pki]# openssl req  -new -key timinglee.key -out timinglee.csr -subj "/CN=timinglee"
[root@k8s-master pki]# openssl x509 -req  -in timinglee.csr -CA ca.crt -CAkey ca.key -CAcreateserial  -out timinglee.crt -days 365
Certificate request self-signature ok

[root@k8s-master pki]# openssl x509 -in timinglee.crt -text -noout
Certificate:
    Data:
        Version: 1 (0x0)
        Serial Number:
            76:06:6c:a7:36:53:b9:3f:5a:6a:93:3a:f2:e8:82:96:27:57:8e:58
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: Sep  8 15:59:55 2024 GMT
            Not After : Sep  8 15:59:55 2025 GMT
        Subject: CN = timinglee
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:a6:6d:be:5d:7f:4c:bf:36:96:dc:4e:1b:24:64:
                    f7:4b:57:d3:45:ad:e8:b5:07:e7:78:2b:9e:6e:53:
                    2f:16:ff:00:f4:c8:41:2c:89:3d:86:7c:1b:16:08:
                    2e:2c:bc:2c:1e:df:60:f0:80:60:f9:79:49:91:1d:
                    9f:47:16:9a:d1:86:c7:4f:02:55:27:12:93:b7:f4:
                    07:fe:13:64:fd:78:32:8d:12:d5:c2:0f:be:67:65:
                    f2:56:e4:d1:f6:fe:f6:d5:7c:2d:1d:c8:90:2a:ac:
                    3f:62:85:9f:4a:9d:85:73:33:26:5d:0f:4a:a9:14:
                    12:d4:fb:b3:b9:73:d0:a3:be:58:41:cb:a0:62:3e:
                    1b:44:ef:61:b5:7f:4a:92:5b:e3:71:77:99:b4:ea:
                    4d:27:80:14:e9:95:4c:d5:62:56:d6:54:7b:f7:c2:
                    ea:0e:47:b2:19:75:59:22:00:bd:ea:83:6b:cd:12:
                    46:7a:4a:79:83:ee:bc:59:6f:af:8e:1a:fd:aa:b4:
                    bd:84:4d:76:38:e3:1d:ea:56:b5:1e:07:f5:39:ef:
                    56:57:a2:3d:91:c0:3f:38:ce:36:5d:c7:fe:5e:0f:
                    53:75:5a:f0:6e:37:71:4b:90:03:2f:2e:11:bb:a1:
                    a1:5b:dc:89:b8:19:79:0a:ee:e9:b5:30:7d:16:44:
                    4a:53
                Exponent: 65537 (0x10001)
    Signature Algorithm: sha256WithRSAEncryption
    Signature Value:
        62:db:0b:58:a9:59:57:91:7e:de:9e:bb:20:2f:24:fe:b7:7f:
        33:aa:d5:74:0e:f9:96:ce:1b:a9:65:08:7f:22:6b:45:ee:58:
        68:d8:26:44:33:5e:45:e1:82:b2:5c:99:41:6b:1e:fa:e8:1a:
        a2:f1:8f:44:22:e1:d6:58:5f:4c:28:3d:e0:78:21:ea:aa:85:
        08:a5:c8:b3:34:19:d3:c7:e2:fe:a2:a4:f5:68:18:53:5f:ff:
        7d:35:22:3c:97:3d:4e:ad:62:5f:bb:4d:88:fb:67:f4:d5:2d:
        81:c8:2c:6c:5e:0e:e2:2c:f5:e9:07:34:16:01:e2:bf:1f:cd:
        6a:66:db:b6:7b:92:df:13:a1:d0:58:d8:4d:68:96:66:e3:00:
        6e:ce:11:99:36:9c:b3:b5:81:bf:d1:5b:d7:f2:08:5e:7d:ea:
        97:fe:b3:80:d6:27:1c:89:e6:f2:f3:03:fc:dc:de:83:5e:24:
        af:46:a6:2a:8e:b1:34:67:51:2b:19:eb:4c:78:12:ac:00:4e:
        58:5e:fd:6b:4c:ce:73:dd:b3:91:73:4a:d6:6f:2c:86:25:f0:
        6a:fb:96:66:b3:39:a4:b0:d9:46:c2:fc:6b:06:b2:90:9c:13:
        e1:02:8b:6f:6e:ab:cf:e3:21:7e:a9:76:c1:38:15:eb:e6:2d:
        a5:6f:e5:ab

# Create the user in k8s
[root@k8s-master pki]# kubectl config set-credentials timinglee --client-certificate /etc/kubernetes/pki/timinglee.crt --client-key /etc/kubernetes/pki/timinglee.key --embed-certs=true
User "timinglee" set.

[root@k8s-master pki]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://172.25.254.100:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED
- name: timinglee
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED

# Create a cluster security context for the user
[root@k8s-master pki]# kubectl config set-context timinglee@kubernetes --cluster kubernetes --user timinglee
Context "timinglee@kubernetes" created.

# Switch users; in the cluster the user has only an identity, no authorization yet
[root@k8s-master ~]# kubectl config use-context timinglee@kubernetes
Switched to context "timinglee@kubernetes".
[root@k8s-master ~]# kubectl get pods
Error from server (Forbidden): pods is forbidden: User "timinglee" cannot list resource "pods" in API group "" in the namespace "default"

# Switch back to the cluster admin
[root@k8s-master ~]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".

# To delete the user if needed
[root@k8s-master pki]# kubectl config delete-user timinglee
deleted user timinglee from /etc/kubernetes/admin.conf

24.2 RBAC (Role Based Access Control)

24.2.1 Role-based access control authorization:
  • Allows administrators to configure authorization policies dynamically through the Kubernetes API. In RBAC, users are associated with permissions through roles.

  • RBAC only grants permissions and never denies them, so you only need to define what the user is allowed to do.

  • RBAC has three basic concepts:

  • Subject: the entity the rules act on; it covers the three kinds of subjects in k8s: user, group, and serviceAccount.

  • Role: a role is essentially a set of rules defining a group of operation permissions on Kubernetes API objects.

  • RoleBinding: defines the binding relationship between a "subject" and a "role".

  • RBAC includes four resource types: Role, ClusterRole, RoleBinding, and ClusterRoleBinding.

  • Role and ClusterRole

  • A Role is a collection of permissions; a Role can only grant access to resources in a single namespace.

  • A ClusterRole is similar to a Role, but can be used globally across the cluster.

  • Kubernetes also provides four predefined ClusterRoles for direct use:

  • cluster-admin, admin, edit, view
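
The allow-only semantics above can be sketched as a tiny model (hypothetical Python, not the real Kubernetes evaluator): each rule grants a set of verbs on a set of resources, and a request is permitted if at least one rule of any role bound to the subject matches it; there is nothing to "deny".

```python
# Hypothetical model of RBAC's additive semantics: permissions are the
# union of all bound rules, and anything not granted is forbidden.
from typing import List, Set, Tuple

Rule = Tuple[Set[str], Set[str]]  # (resources, verbs)

def is_allowed(bound_rules: List[Rule], resource: str, verb: str) -> bool:
    """Allowed iff any bound rule covers both the resource and the verb."""
    return any(resource in resources and verb in verbs
               for resources, verbs in bound_rules)

# Rules equivalent to a Role granting get/watch/list on pods only:
rules: List[Rule] = [({"pods"}, {"get", "watch", "list"})]

print(is_allowed(rules, "pods", "get"))       # True
print(is_allowed(rules, "services", "list"))  # False: never granted, so denied
```

This is why the later test with user timinglee can list pods but still gets Forbidden on services: no rule ever mentions services, and RBAC has no concept of an explicit deny.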

24.2.2 Implementing Role-based authorization
# Generate the role's yaml file
[root@k8s-master rbac]# kubectl create role myrole --dry-run=client --verb=get --resource pods -o yaml > myrole.yml

# Edit the file contents
[root@k8s-master rbac]# vim myrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: myrole
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - watch
  - list
  - create
  - update
  - patch
  - delete

# Create the role
[root@k8s-master rbac]# kubectl apply -f  myrole.yml
[root@k8s-master rbac]# kubectl describe role myrole
Name:         myrole
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  pods       []                 []              [get watch list create update patch delete]
# Create the role binding
[root@k8s-master rbac]# kubectl create rolebinding timinglee --role myrole --namespace default --user timinglee --dry-run=client -o yaml  > rolebinding-myrole.yml

[root@k8s-master rbac]# vim rolebinding-myrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: timinglee
  namespace: default		# a RoleBinding must specify a namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: myrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: timinglee
  
[root@k8s-master rbac]# kubectl apply -f rolebinding-myrole.yml
rolebinding.rbac.authorization.k8s.io/timinglee created
[root@k8s-master rbac]# kubectl get rolebindings.rbac.authorization.k8s.io timinglee
NAME        ROLE          AGE
timinglee   Role/myrole   9s

# Switch to the user to test the authorization
[root@k8s-master rbac]# kubectl config use-context timinglee@kubernetes
Switched to context "timinglee@kubernetes".

[root@k8s-master rbac]# kubectl get pods
No resources found in default namespace.
[root@k8s-master rbac]# kubectl get svc			# only pods were authorized, so svc still cannot be accessed
Error from server (Forbidden): services is forbidden: User "timinglee" cannot list resource "services" in API group "" in the namespace "default"

# Switch back to the admin
[root@k8s-master rbac]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
24.2.3 Implementing ClusterRole-based authorization
# Create the cluster role
[root@k8s-master rbac]# kubectl create clusterrole myclusterrole --resource=deployment --verb get --dry-run=client -o yaml > myclusterrole.yml
[root@k8s-master rbac]# vim myclusterrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: myclusterrole
rules:
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete

[root@k8s-master rbac]# kubectl apply -f myclusterrole.yml
[root@k8s-master rbac]# kubectl describe clusterrole myclusterrole
Name:         myclusterrole
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources         Non-Resource URLs  Resource Names  Verbs
  ---------         -----------------  --------------  -----
  deployments.apps  []                 []              [get list watch create update patch delete]
  pods              []                 []              [get list watch create update patch delete]

# Create the cluster role binding
[root@k8s-master rbac]# kubectl create clusterrolebinding  clusterrolebind-myclusterrole --clusterrole myclusterrole  --user timinglee --dry-run=client -o yaml > clusterrolebind-myclusterrole.yml
[root@k8s-master rbac]# vim clusterrolebind-myclusterrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: clusterrolebind-myclusterrole
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: myclusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: timinglee

[root@k8s-master rbac]# kubectl apply -f clusterrolebind-myclusterrole.yml
[root@k8s-master rbac]# kubectl describe clusterrolebindings.rbac.authorization.k8s.io clusterrolebind-myclusterrole
Name:         clusterrolebind-myclusterrole
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  myclusterrole
Subjects:
  Kind  Name       Namespace
  ----  ----       ---------
  User  timinglee

# Test (as user timinglee):
[root@k8s-master rbac]# kubectl get pods  -A
[root@k8s-master rbac]# kubectl get deployments.apps -A
[root@k8s-master rbac]# kubectl get svc -A
Error from server (Forbidden): services is forbidden: User "timinglee" cannot list resource "services" in API group "" at the cluster scope
24.2.4 Service account automation

Service account admission controller

  • If a pod has no ServiceAccount set, its ServiceAccount is set to default.

  • Ensures that the ServiceAccount referenced by the pod exists, otherwise the pod is rejected.

  • If the pod contains no ImagePullSecrets setting, the ImagePullSecrets from the ServiceAccount are added to the pod.

  • Adds a volume containing a token for API access to the pod.

  • Adds a volumeSource mounted at /var/run/secrets/kubernetes.io/serviceaccount to each container in the pod.
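
The token mount described above happens automatically, but it can be switched off per pod. A minimal sketch (the `automountServiceAccountToken` field is a real Pod spec field; the pod name and image here are placeholders):

```yaml
# Sketch: opt this pod out of the automatic token volume mount.
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod                    # placeholder name
spec:
  serviceAccountName: default
  automountServiceAccountToken: false   # admission controller skips the token volume
  containers:
  - name: app
    image: busybox                      # placeholder image
    command: ["sleep", "3600"]
```

With this set, /var/run/secrets/kubernetes.io/serviceaccount is not mounted into the containers, which is useful for pods that never talk to the API server.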

Service account controller

The service account controller manages the service accounts in each namespace and ensures that a service account named "default" exists in every active namespace.
