PV-PVC-SC Illustrated
Persistent Volumes (PV)
PV Overview
A PersistentVolume (PV) is an independent storage resource that provides persistent storage in Kubernetes. It can be backed by an external storage system (such as NFS, GlusterFS, or AWS EBS). A PV declares the capacity it offers and the access modes containers may use.
| Access Mode | Description |
| --- | --- |
| ReadWriteOnce (RWO) | The volume can be mounted read-write by a single node. |
| ReadOnlyMany (ROX) | The volume can be mounted read-only by many nodes. |
| ReadWriteMany (RWX) | The volume can be mounted read-write by many nodes. |
| ReadWriteOncePod (RWOP) | Only a single pod in the entire cluster can read from or write to the PV. |
| Reclaim Policy | Characteristics | Use Case | Behavior |
| --- | --- | --- | --- |
| Retain | Data and the storage resource are kept; manual intervention is required. The PV is marked Released. | When data cleanup must be controlled manually | After the PVC is deleted, the PV still exists; an administrator must clean up the data by hand and decide whether to delete or reuse the storage resource. |
| Delete | The PV and the underlying storage are deleted automatically | Dynamically provisioned PVs whose storage should be removed automatically | After the PVC is deleted, the PV is deleted and the underlying storage (e.g. cloud storage) is removed automatically |
| Recycle | Deprecated; performs a basic scrub | Deprecated; was used for simple cleanup | Performs a basic scrub (e.g. rm -rf) and then marks the PV Available again |
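To review each PV's reclaim policy and phase at a glance, a kubectl custom-columns query can be used; this is a convenience sketch, not part of the original steps:
bash
# List every PV with its reclaim policy and current phase
kubectl get pv -o custom-columns=NAME:.metadata.name,POLICY:.spec.persistentVolumeReclaimPolicy,PHASE:.status.phase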
Creating PVs
Preparation on the master231 node, which serves as the NFS server
bash
mkdir -p /zhiyong18/data/nfs-server/pv/linux/pv00{1..4}
cat > /etc/exports <<EOF
/zhiyong18/data/nfs-server/pv/linux/pv001 *(rw,no_root_squash)
/zhiyong18/data/nfs-server/pv/linux/pv002 *(rw,no_root_squash)
/zhiyong18/data/nfs-server/pv/linux/pv003 *(rw,no_root_squash)
/zhiyong18/data/nfs-server/pv/linux/pv004 *(rw,no_root_squash)
EOF
systemctl restart nfs-server
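As a quick sanity check (an extra step, not in the original notes), confirm the exports are visible; the showmount tool from the NFS client packages is assumed to be installed:
bash
# List the directories exported by the NFS server
showmount -e 10.0.0.231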
1. Create PVs from the following manifest (it defines three):
yaml
cat > 01-PV.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zhiyong18-linux-pv01
  labels:
    school: zhiyong18
spec:
  accessModes:
  - ReadWriteMany
  nfs:
    path: /zhiyong18/data/nfs-server/pv/linux/pv001
    server: 10.0.0.231
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zhiyong18-linux-pv02
  labels:
    school: zhiyong18
spec:
  accessModes:
  - ReadWriteMany
  nfs:
    path: /zhiyong18/data/nfs-server/pv/linux/pv002
    server: 10.0.0.231
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zhiyong18-linux-pv03
  labels:
    school: zhiyong18
spec:
  accessModes:
  - ReadWriteMany
  nfs:
    path: /zhiyong18/data/nfs-server/pv/linux/pv003
    server: 10.0.0.231
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 10Gi
EOF
2. Check the created PVs
bash
[root@master231~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS
zhiyong18-linux-pv01 2Gi RWX Retain Available
zhiyong18-linux-pv02 5Gi RWX Retain Available
zhiyong18-linux-pv03 10Gi RWX Retain Available
Persistent Volume Claims (PVC)
PVC Overview
A PersistentVolumeClaim (PVC) declares the storage an application needs: the required size, the access mode (read-only, read-write, etc.), and so on. A PVC obtains storage by binding to an existing PV, which in turn supplies the storage to containers. When several PVs match, the PVC binds to the best fit.
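Once a claim is bound, the PV it selected can be read back from the claim's spec; a small convenience sketch, using the claim name created in the next step:
bash
# Print the name of the PV that the claim bound to
kubectl get pvc zhiyong18-linux-pvc -o jsonpath='{.spec.volumeName}'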
Creating a PVC
Note: keep the three PVs created earlier; do not delete them.
1. PVC manifest: create one PVC without specifying a PV, then check which PV it ends up selecting
yaml
cat > 02-PVC.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zhiyong18-linux-pvc
spec:
  # Name the PV to bind explicitly (deliberately left commented out)
  # volumeName: zhiyong18-linux-pv03
  # Declare the access modes
  accessModes:
  - ReadWriteMany
  # Declare the amount of storage requested
  resources:
    requests:
      storage: 3Gi
    limits:
      storage: 4Gi
EOF
2. Result: the PVC automatically matches the best-fitting PV by size, and the PV bound to the PVC changes to the Bound state
bash
[root@master231~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
zhiyong18-linux-pv01 2Gi RWX Retain Available 20m
zhiyong18-linux-pv02 5Gi RWX Retain Bound default/zhiyong18-linux-pvc 20m
zhiyong18-linux-pv03 10Gi RWX Retain Available 20m
[root@master231~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
zhiyong18-linux-pvc Bound zhiyong18-linux-pv02 5Gi RWX 25s
If the PVC requests more capacity than any PV can provide, the claim is not bound and reports:
no persistent volumes available for this claim and no storage class is set
In other words, there is no suitable PV to bind to.
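To reproduce that message, a sketch with a hypothetical claim named too-big-pvc that requests more than the largest 10Gi PV:
bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: too-big-pvc   # hypothetical name, for illustration only
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi   # larger than any PV created above
EOF
# The claim stays Pending; the events explain why
kubectl describe pvc too-big-pvc | tail -n 3
kubectl delete pvc too-big-pvc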
Pods using a PVC as a storage volume
Using a PVC as a pod volume
Note: this continues from the PV and PVC created above; create pods that mount the PVC as a volume.
1. Write the manifest
yaml
cat > 03-deploy-pvc.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deloy-xiuxian
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
    spec:
      volumes:
      - name: data
        # The backing store for this volume is a PVC
        persistentVolumeClaim:
          # Name of the PVC to use
          claimName: zhiyong18-linux-pvc
          # Whether the mount is read-only; defaults to false
          readOnly: false
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2
        volumeMounts:
        - name: data
          mountPath: /zhiyong18-pvc
EOF
bash
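# Apply the manifest first (assumed step; the original capture starts at the pod listing)
kubectl apply -f 03-deploy-pvc.yaml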
[root@master231~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
deloy-xiuxian-6bb74fdf79-bd7p2 1/1 Running 0 9s
deloy-xiuxian-6bb74fdf79-dv47w 1/1 Running 0 9s
deloy-xiuxian-6bb74fdf79-ptbw2 1/1 Running 0 9s
2. Have the pods write data; multiple pods can write to the volume without issues
bash
kubectl exec deloy-xiuxian-6bb74fdf79-bd7p2 \
-- sh -c 'echo wenzhiyong-use-PVC > /zhiyong18-pvc/pvc1.txt'
kubectl exec deloy-xiuxian-6bb74fdf79-dv47w \
-- sh -c 'echo wenzhiyong-use-PVC > /zhiyong18-pvc/pvc2.txt'
kubectl exec deloy-xiuxian-6bb74fdf79-ptbw2 \
-- sh -c 'echo wenzhiyong-use-PVC3 >> /zhiyong18-pvc/pvc1.txt'
3. Enter any one pod and inspect the data
bash
/zhiyong18-pvc # cat *
wenzhiyong-use-PVC
wenzhiyong-use-PVC3
wenzhiyong-use-PVC
4. Verify where the data is stored on the host
bash
[root@master231~]# tree /zhiyong18/data/nfs-server/pv/linux/
/zhiyong18/data/nfs-server/pv/linux/
├── pv001
├── pv002
│ ├── pvc1.txt
│ └── pvc2.txt
├── pv003
└── pv004
A script to find which PV a pod uses
Run the script with a pod name as its first argument; it prints the NFS directory backing the PV that the pod uses.
bash
#!/bin/bash
# Print the NFS directory that backs the PV used by a given pod
podName=${1:-deloy-xiuxian-b954d54c5-m72zd}
# volumes[0] is the first volume; a pod may use several volumes
pvcName=$(kubectl get pods ${podName} -o jsonpath='{.spec.volumes[0].persistentVolumeClaim.claimName}')
pvName=$(kubectl get pvc ${pvcName} -o jsonpath='{.spec.volumeName}')
nfsPath=$(kubectl get pv ${pvName} -o jsonpath='{.spec.nfs.path}')
echo "${podName} ~~~ ${nfsPath}"
The result:
bash
[root@master231tmp]# bash 1.sh deloy-xiuxian-6bb74fdf79-dv47w
deloy-xiuxian-6bb74fdf79-dv47w ~~~ /zhiyong18/data/nfs-server/pv/linux/pv002
Scripts for quickly deleting PVs and PVCs
Method 1:
bash
for i in pv pvc; do kubectl get ${i} | awk -v obj=${i} 'NR>1 {print "kubectl delete --force " obj " " $1}'; done
kubectl delete --force pv pvc-4676c1ce-f5b6-4968-b772-406d2be09dc4
kubectl delete --force pv pvc-c7af277c-9f07-4084-a18b-a7ef63c76471
kubectl delete --force pv pvc-e91660e7-8ae1-45d7-ba0e-0ebd5289064b
kubectl delete --force pvc data-zhiyong18-linux-web-0
kubectl delete --force pvc data-zhiyong18-linux-web-1
kubectl delete --force pvc data-zhiyong18-linux-web-2
Method 2:
bash
for i in pv pvc; do kubectl get ${i} --no-headers | awk -v obj=${i} '{print "kubectl delete --force " obj " " $1}'; done
kubectl delete --force pv pvc-4676c1ce-f5b6-4968-b772-406d2be09dc4
kubectl delete --force pv pvc-c7af277c-9f07-4084-a18b-a7ef63c76471
kubectl delete --force pv pvc-e91660e7-8ae1-45d7-ba0e-0ebd5289064b
kubectl delete --force pvc data-zhiyong18-linux-web-0
kubectl delete --force pvc data-zhiyong18-linux-web-1
kubectl delete --force pvc data-zhiyong18-linux-web-2
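Both loops only print the delete commands; to actually execute them, pipe the output to a shell (use with care: --force skips graceful deletion):
bash
for i in pv pvc; do kubectl get ${i} --no-headers | awk -v obj=${i} '{print "kubectl delete --force " obj " " $1}'; done | bash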
Verifying the PV reclaim policy - Retain
==Note:== delete the pods before deleting the PV; otherwise the terminal hangs when deleting the PVC, because pods are still using the resource and the PVC is not removed immediately
1. Delete the PVC
bash
[root@master231~]# kubectl delete pvc zhiyong18-linux-pvc
persistentvolumeclaim "zhiyong18-linux-pvc" deleted
2. The PV status becomes Released, meaning the claim has been released. Checking the host directory shows the data still exists
bash
[root@master231~]# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/zhiyong18-linux-pv01 2Gi RWX Retain Available 80m
persistentvolume/zhiyong18-linux-pv02 5Gi RWX Retain Released default/zhiyong18-linux-pvc 80m
persistentvolume/zhiyong18-linux-pv03 10Gi RWX Retain Available 80m
bash
[root@master231~]# tree /zhiyong18/data/nfs-server/pv/linux/
/zhiyong18/data/nfs-server/pv/linux/
├── pv001
├── pv002
│ ├── pvc1.txt
│ └── pvc2.txt
├── pv003
└── pv004
3. Delete the PV that data was written to; the data produced by the pods is still retained
bash
[root@master231~]# kubectl delete pv zhiyong18-linux-pv02
persistentvolume "zhiyong18-linux-pv02" deleted
[root@master231~]# tree /zhiyong18/data/nfs-server/pv/linux/
/zhiyong18/data/nfs-server/pv/linux/
├── pv001
├── pv002
│ ├── pvc1.txt
│ └── pvc2.txt
├── pv003
└── pv004
[root@master231~]# kubectl get pv zhiyong18-linux-pv02
Error from server (NotFound): persistentvolumes "zhiyong18-linux-pv02" not found
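An aside, not part of the original walkthrough: when a Retain PV is kept instead of deleted, it stays Released and cannot be re-bound until its stale claim reference is cleared. A minimal sketch, with a placeholder PV name:
bash
# Remove the stale claimRef so the Released PV becomes Available again
# (<pv-name> is a placeholder; run this only after handling the old data)
kubectl patch pv <pv-name> --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'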
Verifying the PV reclaim policy - Recycle
==Note:== once the PVC is deleted and the PV is released, K8S automatically creates a busybox pod to perform the cleanup; if that busybox image cannot be pulled, the cleanup never completes
1. Temporarily change the reclaim policy of zhiyong18-linux-pv03 to Recycle
bash
kubectl patch pv zhiyong18-linux-pv03 -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}'
# Output: persistentvolume/zhiyong18-linux-pv03 patched
[root@master231~]# kubectl get pv zhiyong18-linux-pv03
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS
zhiyong18-linux-pv03 10Gi RWX Recycle Available
2. Create a PVC that explicitly uses the PV zhiyong18-linux-pv03
yaml
cat > 04-PVC-Recycle.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zhiyong18-linux-pvc
spec:
  # Bind to the PV that has the Recycle policy
  volumeName: zhiyong18-linux-pv03
  # Declare the access modes
  accessModes:
  - ReadWriteMany
  # Declare the amount of storage requested
  resources:
    requests:
      storage: 3Gi
    limits:
      storage: 4Gi
EOF
3. Create pods that use the PVC again; the 03-deploy-pvc.yaml manifest from the earlier step works as-is
bash
kubectl apply -f 03-deploy-pvc.yaml
[root@master231~]# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM
persistentvolume/zhiyong18-linux-pv01 2Gi RWX Retain Available
persistentvolume/zhiyong18-linux-pv03 10Gi RWX Recycle Bound default/zhiyong18-linux-pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/zhiyong18-linux-pvc Bound zhiyong18-linux-pv03 10Gi RWX 47s
4. Write data inside a pod, then delete the pods, the PVC, and the PV, checking each object's status along the way
bash
kubectl exec deloy-xiuxian-6bb74fdf79-bv6rq \
-- sh -c 'echo PVC-Recycle > /zhiyong18-pvc/Recycle.txt'
kubectl exec deloy-xiuxian-6bb74fdf79-bv6rq \
-- cat /zhiyong18-pvc/Recycle.txt
PVC-Recycle
[root@master231~]# !tree
tree /zhiyong18/data/nfs-server/pv/linux/
/zhiyong18/data/nfs-server/pv/linux/
├── pv001
├── pv002
│ ├── pvc1.txt
│ └── pvc2.txt
├── pv003
│ └── Recycle.txt
└── pv004
5. After the pods are deleted, the PV remains Bound. After the PVC is deleted, the PV moves to Released. At this point the data left behind by the pods still exists
bash
kubectl delete -f 03-deploy-pvc.yaml
[root@master231~]# kubectl get pv zhiyong18-linux-pv03
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM
zhiyong18-linux-pv03 10Gi RWX Recycle Bound default/zhiyong18-linux-pvc
[root@master231~]# kubectl delete pvc zhiyong18-linux-pvc
persistentvolumeclaim "zhiyong18-linux-pvc" deleted
[root@master231~]# kubectl get pv zhiyong18-linux-pv03
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM
zhiyong18-linux-pv03 10Gi RWX Recycle Released default/zhiyong18-linux-pvc
[root@master231~]# kubectl delete pv zhiyong18-linux-pv03
persistentvolume "zhiyong18-linux-pv03" deleted
[root@master231~]# tree /zhiyong18/data/nfs-server/pv/linux/
/zhiyong18/data/nfs-server/pv/linux/
├── pv001
├── pv002
│ ├── pvc1.txt
│ └── pvc2.txt
├── pv003
│ └── Recycle.txt
└── pv004
6. Check the automatically created pod. This is the recycler pod; its name shows that it cleans up the PV zhiyong18-linux-pv03
bash
[root@master231~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
recycler-for-zhiyong18-linux-pv03 0/1 ErrImagePull 0 3m36s 10.100.2.4 worker232
The events show the image being pulled: Back-off pulling image "busybox:1.27".
You can import any image that contains sh ahead of time and tag it busybox:1.27. Once the image is available, the data is scrubbed and the recycler pod disappears shortly afterwards. That is the PV Recycle policy.
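A sketch of pre-loading the image, assuming the node runs containerd and a Docker-equipped machine is available to fetch and retag the image:
bash
# On any machine with internet access: pull a busybox image and retag it as 1.27
docker pull busybox:latest
docker tag busybox:latest busybox:1.27
docker save busybox:1.27 -o busybox-1.27.tar
# On the worker node (containerd runtime): import into the k8s.io namespace
ctr -n k8s.io images import busybox-1.27.tar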
bash
[root@master231~]# tree /zhiyong18/data/nfs-server/pv/linux/
/zhiyong18/data/nfs-server/pv/linux/
├── pv001
├── pv002
│ ├── pvc1.txt
│ └── pvc2.txt
├── pv003
└── pv004
4 directories, 0 files
[root@master231~]# kubectl get pods
No resources found in default namespace.