Kubernetes (k8s) PVC

Concepts

PVC stands for PersistentVolumeClaim. A PVC is a user's claim on storage, and it is analogous to a Pod: a Pod consumes node resources, while a PVC consumes PV resources; a Pod can request CPU and memory, while a PVC can request a specific storage size and access modes. Users who actually consume the storage do not need to care about the underlying implementation details; they simply use a PVC.

Environment

This walkthrough uses one master and three nodes, with the following host entries:

192.168.8.81 k8master1.meng.com k8master1 k8kubapi.meng.com k8kubapi

192.168.8.84 k8node1.meng.com k8node1

192.168.8.85 k8node2.meng.com k8node2

192.168.8.86 k8node3.meng.com k8node3

Preparation

Check the status of the master and the nodes:

bash
[root@k8master1 ~]# kubectl get nodes
NAME                 STATUS   ROLES           AGE   VERSION
k8master1.meng.com   Ready    control-plane   22d   v1.28.1
k8node1.meng.com     Ready    <none>          22d   v1.28.1
k8node2.meng.com     Ready    <none>          22d   v1.28.1
k8node3.meng.com     Ready    <none>          22d   v1.28.1

The NFS client must be installed on every node; otherwise PVs may fail to mount.

Install and start the required services as follows:

bash
yum -y install nfs-utils rpcbind
systemctl start rpcbind.service 
systemctl start nfs.service 
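Optionally, enable the services so they persist across reboots (on CentOS/RHEL, nfs.service is an alias for the nfs-server unit; strictly, only the master needs the server side):

bash
# enable and start both services in one step (systemd)
systemctl enable --now rpcbind nfs-server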

On the master node, create the directories to be exported and change their ownership:

bash
mkdir -p /nfs1/data/01 /nfs1/data/02 /nfs1/data/03
chown -R nfsnobody:nfsnobody /nfs1/

Export the new directories over NFS:

bash
[root@k8master1 ~]# vim /etc/exports

Add the following lines:

bash
/nfs1/data/01  192.168.8.0/24(rw,sync,root_squash,all_squash)
/nfs1/data/02  192.168.8.0/24(rw,sync,root_squash,all_squash)
/nfs1/data/03  192.168.8.0/24(rw,sync,root_squash,all_squash)
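If the NFS server is already running, the new entries can be applied without a restart (standard nfs-utils commands):

bash
exportfs -ra    # re-export everything listed in /etc/exports
exportfs -v     # print the active export list to verify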

From the nodes, verify access to the NFS shares exported by the master; run this test on every node:

bash
[root@k8node1 containerd]# showmount -e 192.168.8.81
Export list for 192.168.8.81:
/nfs1/data/03 192.168.8.0/24
/nfs1/data/02 192.168.8.0/24
/nfs1/data/01 192.168.8.0/24
[root@k8node1 containerd]# mount -t nfs 192.168.8.81:/nfs1/data/01 /mnt
[root@k8node1 /]# umount /mnt

Deploying the PVs

Create three PVs, sized 10M, 1Gi, and 3Gi:

bash
[root@k8master1 ~]# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01-10m
spec:
  capacity:
    storage: 10M
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs1/data/01
    server: 192.168.8.81
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02-1gi
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs1/data/02
    server: 192.168.8.81
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03-3gi
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs1/data/03
    server: 192.168.8.81

Apply the manifest to create the PVs:

bash
[root@k8master1 ~]# kubectl apply -f pv.yaml
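kubectl prints a confirmation line per object; expect output like:

bash
persistentvolume/pv01-10m created
persistentvolume/pv02-1gi created
persistentvolume/pv03-3gi created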

Check the PV status:

bash
[root@k8master1 ~]# kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv01-10m   10M        RWX            Retain           Available           nfs                     4s
pv02-1gi   1Gi        RWX            Retain           Available           nfs                     4s
pv03-3gi   3Gi        RWX            Retain           Available           nfs                     4s 
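To inspect any single PV in detail, including its NFS source and recent events (output omitted here):

bash
kubectl describe pv pv02-1gi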

Creating the PVC

bash
[root@k8master1 ~]# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs

Apply the YAML file to create the PVC:

bash
[root@k8master1 ~]# kubectl apply -f pvc.yaml

Check the PVs again; one of them is now bound:

bash
[root@k8master1 ~]# kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON   AGE
pv01-10m   10M        RWX            Retain           Available                       nfs                     42h
pv02-1gi   1Gi        RWX            Retain           Bound       default/nginx-pvc   nfs                     42h
pv03-3gi   3Gi        RWX            Retain           Available                       nfs                     42h

Note that pv02-1gi is the one bound to our PVC. Why this one? The claim requested 200Mi: pv01-10m is too small to satisfy it, and pv03-3gi is larger than necessary, so Kubernetes automatically chose the smallest matching PV.
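The binding can also be confirmed from the PVC side; the output below is illustrative, matching the state above:

bash
[root@k8master1 ~]# kubectl get pvc
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    pv02-1gi   1Gi        RWX            nfs            2m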

Creating a Pod Bound to the PVC

bash
[root@k8master1 ~]# cat pvc-deploy2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy-pvc
  name: nginx-deploy-pvc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deploy-pvc
  template:
    metadata:
      labels:
        app: nginx-deploy-pvc
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
          name: web                  # named port referenced by the Service below
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
        - name: html
          persistentVolumeClaim:
            claimName: nginx-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-pvc
  labels:
    app: nginx-pvc
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: web                  # container port name (or number)
    nodePort: 30004
  selector:
    app: nginx-deploy-pvc            # must match the Pod template labels
[root@k8master1 ~]#

Apply the file to create the Pods; this creates the Deployment and the Service and exposes it on the nodes:

bash
[root@k8master1 ~]# kubectl apply -f pvc-deploy2.yaml

Now create a file in the 02 export directory, the one backing pv02-1gi:

bash
[root@k8master1 ~]# cd /nfs1/data/02
[root@k8master1 02]# echo 111 > index.html

Exec into one of the Pods; the newly created index.html is visible because the NFS volume is mounted at Nginx's web root:

bash
[root@k8master1 02]# kubectl exec -it nginx-deploy-pvc-7c466b8668-cfpsg  -- bash
root@nginx-deploy-pvc-7c466b8668-cfpsg:/# cd /usr/share/nginx/html/
root@nginx-deploy-pvc-7c466b8668-cfpsg:/usr/share/nginx/html# ls
index.html
root@nginx-deploy-pvc-7c466b8668-cfpsg:/usr/share/nginx/html#
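Because the PV is ReadWriteMany on shared NFS, both replicas serve identical content. A quick check, sketched with the app label from the Deployment above:

bash
# print index.html from every replica; all of them read the same NFS export
for p in $(kubectl get pods -l app=nginx-deploy-pvc -o name); do
  kubectl exec "$p" -- cat /usr/share/nginx/html/index.html
done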

Curl one of the Pods to check that it serves the file; if it is not reachable, refer to the companion article "K8S遇到的问题" (problems encountered with K8s).

bash
[root@k8master1 ~]# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS        AGE   IP            NODE               NOMINATED NODE   READINESS GATES
demoapp-7c58cd6bb-7qlrd             1/1     Running   1 (16m ago)     22d   10.244.1.15   k8node1.meng.com   <none>           <none>
demoapp-7c58cd6bb-f84kp             1/1     Running   1 (4m50s ago)   22d   10.244.3.12   k8node3.meng.com   <none>           <none>
demoapp-7c58cd6bb-ldrzf             1/1     Running   1 (8m50s ago)   22d   10.244.2.13   k8node2.meng.com   <none>           <none>
mypod                               1/1     Running   1 (8m50s ago)   21d   10.244.2.12   k8node2.meng.com   <none>           <none>
nginx-deploy-pvc-64b6b6bb47-csrnr   1/1     Running   1 (4m50s ago)   20m   10.244.3.11   k8node3.meng.com   <none>           <none>
nginx-deploy-pvc-64b6b6bb47-wcsbc   1/1     Running   1 (16m ago)     20m   10.244.1.16   k8node1.meng.com   <none>           <none>
[root@k8master1 ~]# curl 10.244.1.16
111
[root@k8master1 ~]#

Nginx can now be reached in a browser through the exposed NodePort on the IP of any node, master or worker.
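For example, with the nodePort of 30004 set in the Service above, any node IP from the environment list works:

bash
# the NodePort is served on every node, regardless of where the Pods run
curl http://192.168.8.81:30004
curl http://192.168.8.84:30004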

Creating the Deployment and Service Manually

bash
kubectl create deploy nginx-deploy-pvc --image=nginx:latest --port=80 --replicas=2
kubectl expose deploy nginx-deploy-pvc --name=nginx-pvc --type=NodePort --port=80 --target-port=80 
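Note that kubectl create deploy has no flag for attaching volumes, so this imperative form does not mount the PVC. One way to add it afterwards is a strategic merge patch (an illustrative sketch, not part of the original walkthrough):

bash
# patch the Deployment to mount the existing PVC at Nginx's web root
kubectl patch deploy nginx-deploy-pvc --patch '
spec:
  template:
    spec:
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: nginx-pvc
      containers:
      - name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
'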