- Deployment
- StatefulSet
- DaemonSet
- ReplicaSet
- ReplicationController // the rc controller is being phased out in recent K8s versions
- Job
- CronJob
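As a quick illustration of how one of these workload controllers is declared, here is a minimal CronJob sketch (the name, schedule, and busybox image are assumptions for illustration, not taken from this environment):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello-cron                 # hypothetical name
spec:
  schedule: "*/5 * * * *"          # run every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["sh", "-c", "date; echo hello from the cronjob"]
          restartPolicy: OnFailure # Job pods must use OnFailure or Never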
K8s networking:
- How pods communicate within the platform:
    - CNI container network plugins
    - The CoreDNS component provides name resolution inside the platform
- How applications on the platform are reached by clients (see the NodePort Service sketch after this list):
    - Service // exposes the deployed application behind a single entry point
        - ClusterIP // an IP address internal to the platform
        - NodePort // a node port, picked from the 30000-32767 range by default, that exposes the pods behind the service; the service can then be reached via any cluster node's IP plus that port
        - ExternalIP // an external IP the administrator assigns manually; it must not belong to the node network or the cluster (service) IP range, and must not overlap with the pod address range
        - LoadBalancer // uses a load-balancing service provided by a public cloud to reach the pods
    - Ingress // application-layer (L7) routing, implemented by an ingress controller
        - Application gateways // e.g. Kong, NGINX; application-layer routing
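A minimal sketch of exposing such an application (all names and port numbers here are assumptions): a NodePort Service selecting pods labeled app: frontend. Inside the cluster, CoreDNS also makes it resolvable by name as <service>.<namespace>.svc.cluster.local.

apiVersion: v1
kind: Service
metadata:
  name: frontend-svc          # hypothetical name
spec:
  type: NodePort              # omit type (defaults to ClusterIP) for cluster-internal access only
  selector:
    app: frontend             # matches the pods' labels
  ports:
  - port: 80                  # port on the ClusterIP
    targetPort: 80            # port inside the container
    nodePort: 30080           # must be in the 30000-32767 range; omit it to let K8s pick one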
K8s storage:
- Any running container needs a certain amount of storage: space for the image data it uses, plus a copy-on-write layer in which modifications on top of the image layers are recorded
- The volume abstraction is used to define the storage a container needs
- In the K8s platform, volumes are a pod-level concept
Volumes by usage attributes (see the emptyDir sketch after this list):
- Ephemeral volumes // follow the pod's life cycle and are provisioned dynamically; when no storage driver is specified, an emptyDir is simply created on the filesystem of the node where the pod runs
- Persistent volumes // let you configure the pod's storage so that when the pod is removed, the volume is only unmounted and its data is not deleted; a persistent volume is usually not mounted into the container by volume name, but through a PVC that establishes the PVC-PV mapping, so the container's mount only needs the PVC name and the mount point
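A minimal sketch of an ephemeral volume (the pod and volume names are assumptions): the emptyDir is created on the node when the pod is scheduled and removed together with the pod.

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache        # where the volume appears inside the container
  volumes:
  - name: cache
    emptyDir: {}               # backed by the node's filesystem, deleted with the pod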
By storage driver — simply put, where the data in a volume is actually stored:
- A local filesystem, logical volume, block device, or partition on the node:
    - hostPath // the specified path must already exist on the worker node
    - emptyDir // an empty directory; its data is normally cleared when the pod is deleted
    - local // does not support dynamic provisioning
- Network file-sharing services: // the server side of the service must be set up in advance, and the client environment must be pre-configured on the clients, i.e. on the K8s nodes
    - NFS
    - iSCSI
- Storage drivers provided by public-cloud storage services (common vendors listed below):
    - AWS
    - Azure
    - vSphere
- Conventional cloud storage services:
    - Cinder // the OpenStack component that provides block storage
- Conventional distributed storage solutions:
    - Ceph
    - GlusterFS
- Existing files or data (stored locally or at some network location) provisioned directly as a volume and mounted into a pod // pay attention to whether updates to the volume are synced to the container in time; see the ConfigMap sketch after this list
    - ConfigMap
    - Secret
    - downwardAPI
    - gitRepo
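A minimal sketch of the last case, mounting a ConfigMap as a volume (the ConfigMap name and mount path are assumptions; the ConfigMap must already exist in the same namespace). Each key of the ConfigMap shows up as a file under the mount path, and later changes to the ConfigMap reach those files only after a short delay.

apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo         # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ls /etc/app && sleep 3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/app      # each ConfigMap key becomes a file here
  volumes:
  - name: config
    configMap:
      name: app-config         # hypothetical ConfigMap in the same namespace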
CSI // Container Storage Interface
Storage class: the storage engine that provisions volumes — i.e. where does a volume's storage space come from?
The K8s platform supports creating multiple storage classes at the same time, but only one of them can be the default storage class
Steps for using persistent volumes (see the static-provisioning sketch after this list):
- PV provisioning
    - Static provisioning
        - The PV's life cycle is not tied to the PVC: deleting the PVC does not delete the PV, but deleting the PV directly moves the PVC from the Bound state back to an unbound state
    - Dynamic provisioning
        - Not every storage class supports dynamic provisioning; check the documentation of the storage class in question
        - Dynamic provisioning removes the manual steps of creating a PV and having a PVC claim it: you only create the PVC, the PVC requests a PV matching its size, access mode, and other attributes from the specified storage class, the storage class creates the PV, and the PVC binds to it
        - For dynamically provisioned volumes, removing the PVC usually removes the corresponding PV automatically
- The PVC binds to a PV
    - Matching is based on the size and access-mode attributes declared in the PVC; a PV that satisfies them is bound to the PVC automatically
    - The binding between a PVC and a PV is exclusive (one-to-one)
    - If no PV matches the PVC, the PVC stays unbound (Pending)
- A pod declares that it uses a specific PVC
    - Under the pod's volumes attribute, a persistentVolumeClaim entry references the PVC
    - A PVC that is in use by a pod is not actually deleted; only after the pod is removed does the platform delete the PVC
- Reclaiming:
    - If a pod no longer needs the extra volume, edit the pod's configuration and remove the PVC section; after that the PVC can be deleted
    - Whether the PV bound to the PVC is deleted depends on the reclaim policy set when the PV was created: Retain (keep the data), Recycle (scrub the data before the volume is reused), or Delete (delete the PV once the bound PVC has been deleted)
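Because the hands-on part below only demonstrates dynamic provisioning, here is a minimal static-provisioning sketch (server, export path, size, and names are assumptions): an NFS-backed PV created by the administrator, plus a PVC that binds to it by matching size and access mode. Setting storageClassName to the empty string keeps the default storage class from dynamically provisioning a new volume for this claim.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-static            # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # keep the data when the claim is released
  nfs:
    server: node1                # hypothetical NFS server
    path: /nfs-share/static      # hypothetical export path, must already exist
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-static           # hypothetical name
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""           # empty string: bind to an existing PV, no dynamic provisioning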
Provisioning PVs with the NFS storage engine:
- Set up an NFS share
- Dynamically provision PVs
    - First install the nfs-csi-driver
    - Create a new storage class
        - Pointing at the NFS share
        - Declaring the mount options
    - Create a PVC directly
    - Use the PVC in a pod
Use node1 as the NFS server. To avoid eating into the node's own root filesystem, it is recommended to add a new disk, format it, and mount it at the NFS share directory:
Node1:
[root@node1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sr0 11:0 1 10.5G 0 rom
nvme0n1 259:0 0 50G 0 disk
├─nvme0n1p1 259:1 0 600M 0 part /boot/efi
├─nvme0n1p2 259:2 0 1G 0 part /boot
└─nvme0n1p3 259:3 0 48.4G 0 part
├─cs_bogon-root 253:0 0 44.5G 0 lvm /
└─cs_bogon-swap 253:1 0 3.9G 0 lvm
nvme0n2 259:4 0 50G 0 disk // newly added disk
[root@node1 ~]# vgcreate nfs-group /dev/nvme0n2
Volume group "nfs-group" successfully created
[root@node1 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
cs_bogon 1 2 0 wz--n- 48.41g 0
nfs-group 1 0 0 wz--n- <50.00g <50.00g
[root@node1 ~]# lvcreate -l +100%FREE -n nfs-server nfs-group
WARNING: xfs signature detected on /dev/nfs-group/nfs-server at offset 0. Wipe it? [y/n]: y
Wiping xfs signature on /dev/nfs-group/nfs-server.
Logical volume "nfs-server" created.
[root@node1 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root cs_bogon -wi-ao---- 44.49g
swap cs_bogon -wi-a----- <3.92g
nfs-server nfs-group -wi-a----- <50.00g
[root@node1 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
cs_bogon 1 2 0 wz--n- 48.41g 0
nfs-group 1 1 0 wz--n- <50.00g 0
[root@node1 ~]# mkfs.xfs /dev/nfs-group/nfs-server
meta-data=/dev/nfs-group/nfs-server isize=512 agcount=4, agsize=3276544 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=1 inobtcount=1 nrext64=0
data = bsize=4096 blocks=13106176, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=16384, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@node1 ~]# mkdir /nfs-share
[root@node1 ~]# mount -t xfs /dev/nfs-group/nfs-server /nfs-share/
[root@node1 ~]# df | grep nfs
/dev/mapper/nfs--group-nfs--server 52359168 398104 51961064 1% /nfs-share
[root@node1 ~]# yum -y install nfs-utils
[root@node1 ~]# vim /etc/exports
[root@node1 ~]# cat /etc/exports
/nfs-share 192.168.110.0/24(rw,sync,no_root_squash)
[root@node1 ~]# systemctl start nfs-server
[root@node1 ~]# vim /etc/fstab
[root@node1 ~]# tail -1 /etc/fstab
/dev/nfs-group/nfs-server /nfs-share xfs defaults 0 0 // appended at the end of the file
[root@node1 ~]# showmount -e localhost
Export list for localhost:
/nfs-share 192.168.110.0/24
[root@control ~]# showmount -e node1
Export list for node1:
/nfs-share 192.168.110.0/24
[root@node2 ~]# showmount -e node1
Export list for node1:
/nfs-share 192.168.110.0/24
2. Install the nfs-csi-driver into K8s (performed on the control node)
// Download the nfs-csi-driver package and upload it to the control node
[root@control ~]# ls csi-driver-nfs-master.zip
csi-driver-nfs-master.zip
[root@control ~]# unzip csi-driver-nfs-master.zip
[root@control ~]# cd csi-driver-nfs-master/
[root@control csi-driver-nfs-master]# ls
CHANGELOG cloudbuild.yaml code-of-conduct.md deploy docs go.sum LICENSE OWNERS pkg RELEASE.md SECURITY_CONTACTS test
charts cmd CONTRIBUTING.md Dockerfile go.mod hack Makefile OWNERS_ALIASES README.md release-tools support.md vendor
[root@control csi-driver-nfs-master]# ls deploy/v4.6.0/
crd-csi-snapshot.yaml csi-nfs-driverinfo.yaml csi-snapshot-controller.yaml rbac-snapshot-controller.yaml
csi-nfs-controller.yaml csi-nfs-node.yaml rbac-csi-nfs.yaml
# Find out which images are needed to deploy the nfs-csi-driver
[root@control csi-driver-nfs-master]# grep image deploy/v4.6.0/*.yaml
deploy/v4.6.0/csi-nfs-controller.yaml: image: registry.k8s.io/sig-storage/csi-provisioner:v4.0.0
deploy/v4.6.0/csi-nfs-controller.yaml: image: registry.k8s.io/sig-storage/csi-snapshotter:v6.3.3
deploy/v4.6.0/csi-nfs-controller.yaml: imagePullPolicy: IfNotPresent
deploy/v4.6.0/csi-nfs-controller.yaml: image: registry.k8s.io/sig-storage/livenessprobe:v2.12.0
deploy/v4.6.0/csi-nfs-controller.yaml: image: registry.k8s.io/sig-storage/nfsplugin:v4.6.0
deploy/v4.6.0/csi-nfs-controller.yaml: imagePullPolicy: IfNotPresent
deploy/v4.6.0/csi-nfs-node.yaml: image: registry.k8s.io/sig-storage/livenessprobe:v2.12.0
deploy/v4.6.0/csi-nfs-node.yaml: image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0
deploy/v4.6.0/csi-nfs-node.yaml: image: registry.k8s.io/sig-storage/nfsplugin:v4.6.0
deploy/v4.6.0/csi-nfs-node.yaml: imagePullPolicy: "IfNotPresent"
deploy/v4.6.0/csi-snapshot-controller.yaml: image: registry.k8s.io/sig-storage/snapshot-controller:v6.3.3
# Replace with a mirror registry reachable from mainland China
[root@control csi-driver-nfs-master]# sed -i "s/registry\.k8s\.io\/sig-storage/registry.aliyuncs.com\/google_containers/g" deploy/v4.6.0/*.yaml
# Aliyun does not mirror the nfsplugin image, so switch it to another registry that is reachable domestically:
## Edit line 108 of csi-nfs-controller.yaml, the line referencing the nfsplugin image:
[root@control csi-driver-nfs-master]# head -108 deploy/v4.6.0/csi-nfs-controller.yaml | tail -1
image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/nfsplugin:v4.6.0
## Edit line 96 of csi-nfs-node.yaml, the line referencing the nfsplugin image:
[root@control csi-driver-nfs-master]# head -96 deploy/v4.6.0/csi-nfs-node.yaml | tail -1
image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/nfsplugin:v4.6.0
# Install the nfs-csi-driver
## Use the provided install script, which applies the corresponding yaml files in order and creates the resources the nfs-csi-driver needs.
[root@control csi-driver-nfs-master]# ./deploy/install-driver.sh v4.6.0 local
use local deploy
Installing NFS CSI driver, version: v4.6.0 ...
serviceaccount/csi-nfs-controller-sa created
serviceaccount/csi-nfs-node-sa created
clusterrole.rbac.authorization.k8s.io/nfs-external-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/nfs-csi-provisioner-binding created
csidriver.storage.k8s.io/nfs.csi.k8s.io created
deployment.apps/csi-nfs-controller created
daemonset.apps/csi-nfs-node created
NFS CSI driver installed successfully.
[root@control csi-driver-nfs-master]# kubectl -n kube-system get pod -o wide -l app=csi-nfs-controller
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
csi-nfs-controller-5778b77f88-9tcvb 4/4 Running 0 73s 192.168.110.11 node1 <none> <none>
[root@control csi-driver-nfs-master]# kubectl -n kube-system get pod -o wide -l app=csi-nfs-node
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
csi-nfs-node-jm9tw 3/3 Running 0 87s 192.168.110.11 node1 <none> <none>
csi-nfs-node-pdc9x 3/3 Running 0 87s 192.168.110.10 control <none> <none>
csi-nfs-node-t25nx 3/3 Running 0 87s 192.168.110.22 node2 <none> <none>
# Create the storage class
[root@control csi-driver-nfs-master]# cp deploy/storageclass.yaml ~
[root@control csi-driver-nfs-master]# cd
[root@control ~]# vim storageclass.yaml
[root@control ~]# cat storageclass.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs.csi.k8s.io
parameters:
  server: node1
  share: /nfs-share
  # csi.storage.k8s.io/provisioner-secret is only needed for providing mountOptions in DeleteVolume
  # csi.storage.k8s.io/provisioner-secret-name: "mount-options"
  # csi.storage.k8s.io/provisioner-secret-namespace: "default"
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1
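(The step that actually creates the storage class, kubectl apply -f storageclass.yaml, is not shown in this transcript; the 18h AGE in the listing below suggests it had already been applied earlier.)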
[root@control ~]# kubectl get storageclasses
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-storage kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 20d
nfs-csi (default) nfs.csi.k8s.io Delete Immediate false 18h
# Create a PVC
[root@control ~]# cat nfs-csi-test.yml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-dynamic
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-csi
[root@control ~]# kubectl apply -f nfs-csi-test.yml
persistentvolumeclaim/pvc-nfs-dynamic created
[root@control ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
pvc-nfs-dynamic Bound pvc-394026c4-819a-4a68-8d56-7f70faba221b 1Gi RWX nfs-csi <unset> 3s
[root@control ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pvc-394026c4-819a-4a68-8d56-7f70faba221b 1Gi RWX Delete Bound default/pvc-nfs-dynamic nfs-csi <unset> 14s
# Create a workload that mounts the PV
[root@control ~]# cat test-nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-nginx
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 3
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: test-nginx
        image: mynginx:new_files
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nfs
          mountPath: /usr/share/nginx/html
          readOnly: false
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: pvc-nfs-dynamic
On the NFS server, add some files in advance
[root@node1 ~]# ls /nfs-share/
pvc-394026c4-819a-4a68-8d56-7f70faba221b
[root@node1 ~]# ls /nfs-share/pvc-394026c4-819a-4a68-8d56-7f70faba221b/
[root@node1 ~]# echo "content save on nfsserver" >> /nfs-share/pvc-394026c4-819a-4a68-8d56-7f70faba221b/index.html
[root@control ~]# vim test-nginx.yml
[root@control ~]# kubectl apply -f test-nginx.yml
deployment.apps/test-nginx created
[root@control ~]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
test-nginx 3/3 3 3 9s
[root@control ~]# kubectl expose deployment test-nginx
service/test-nginx exposed
[root@control ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17d
test-nginx ClusterIP 10.96.18.20 <none> 80/TCP 7s
[root@control ~]# curl 10.96.18.20
content save on nfsserver
[root@control ~]# curl 10.96.18.20
content save on nfsserver
[root@control ~]# curl 10.96.18.20
content save on nfsserver
# Delete the PVC
[root@control ~]# kubectl delete -f nfs-csi-test.yml
persistentvolumeclaim "pvc-nfs-dynamic" deleted
// the command hangs, because the PVC is still in use by pods
## open a new terminal and delete the pods (and service) that are using the PVC; the deletion then completes
[root@control ~]# kubectl delete svc test-nginx
service "test-nginx" deleted
[root@control ~]# kubectl delete deployments.apps test-nginx
deployment.apps "test-nginx" deleted
# Set the reclaim policy of dynamically provisioned volumes to Retain (keep the data)
[root@control ~]# cat storageclass.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs.csi.k8s.io
parameters:
  server: node1
  share: /nfs-share
  # csi.storage.k8s.io/provisioner-secret is only needed for providing mountOptions in DeleteVolume
  # csi.storage.k8s.io/provisioner-secret-name: "mount-options"
  # csi.storage.k8s.io/provisioner-secret-namespace: "default"
reclaimPolicy: Retain          # // changed here
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1
[root@control ~]# kubectl delete -f storageclass.yaml
storageclass.storage.k8s.io "nfs-csi" deleted
[root@control ~]# kubectl apply -f storageclass.yaml
storageclass.storage.k8s.io/nfs-csi created
[root@control ~]# kubectl apply -f nfs-csi-test.yml
persistentvolumeclaim/pvc-nfs-dynamic created
[root@control ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
pvc-nfs-dynamic Bound pvc-a1603ec6-4c0c-4c2b-a1a9-26b119fff8ce 1Gi RWX nfs-csi <unset> 7s
[root@control ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pvc-a1603ec6-4c0c-4c2b-a1a9-26b119fff8ce 1Gi RWX Retain Bound default/pvc-nfs-dynamic nfs-csi <unset> 10s
[root@control ~]# kubectl delete pvc pvc-nfs-dynamic
persistentvolumeclaim "pvc-nfs-dynamic" deleted
[root@control ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pvc-a1603ec6-4c0c-4c2b-a1a9-26b119fff8ce 1Gi RWX Retain Released default/pvc-nfs-dynamic nfs-csi <unset> 42s
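Note: with the Retain policy the PV stays in the Released state after its PVC is deleted, and the data remains on the NFS share. A Released PV is not reused automatically; before a new PVC can bind to it, an administrator has to clear the old claim reference (spec.claimRef) on the PV, or delete and recreate the PV.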