Cloud Native | Kubernetes Data Storage: StorageClass

In a large Kubernetes cluster there may be thousands of PVCs, which means administrators would have to create just as many PVs in advance. As a project grows, new PVCs keep being submitted, so administrators have to keep adding new PVs that satisfy them; otherwise new Pods fail to start because their PVCs cannot bind to a PV. On top of that, the capacity requested through a PVC is rarely the only requirement an application has of its storage: different applications also differ in performance needs such as read/write speed and concurrency.

To solve this, Kubernetes introduces another resource object: StorageClass. With a StorageClass, an administrator can describe storage as named classes, for example fast storage or slow storage. Kubernetes can tell the concrete characteristics of each kind of storage from its StorageClass description, and applications can then request storage that matches their own requirements.

What is a StorageClass

Kubernetes provides a mechanism for creating PVs automatically, called Dynamic Provisioning, and the core of that mechanism is the StorageClass API object. A StorageClass defines two things, illustrated by the hypothetical example after this list:

1. The attributes of the PV, such as the storage type and the volume size.

2. The storage plugin (provisioner) that is used to create PVs of this kind.
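For example, an administrator might publish a "fast" and a "slow" class. The sketch below is purely hypothetical (the class names, the ebs.csi.aws.com provisioner and the type parameters are illustrative and are not used elsewhere in this post):

```yaml
# Hypothetical illustration: two classes backed by the AWS EBS CSI driver,
# differing only in the PV attributes the plugin is asked to create.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: ebs.csi.aws.com   # the storage plugin that creates the PVs
parameters:
  type: gp3                    # SSD-backed volumes
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: ebs.csi.aws.com
parameters:
  type: sc1                    # cold HDD volumes
```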

**The user creates a StatefulSet:** the StatefulSet defines volumeClaimTemplates and sets storageClassName: managed-nfs-storage.

**Kubernetes generates the PVCs:** when each Pod starts, a PVC is created for it automatically (an example of such a generated claim follows these steps).

**The StorageClass triggers the provisioner:** Kubernetes sees that the PVC references managed-nfs-storage, looks up that StorageClass, and calls the provisioner associated with it.

**The provisioner creates a PV:** following the StorageClass configuration, the provisioner creates a directory on the NFS server, dynamically generates a PV, and binds it to the PVC.

**The Pod mounts the PV:** once the PV is bound, the Pod mounts it at the specified path.
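To make the second step concrete: for a volumeClaimTemplate named www on a StatefulSet named web (the example used at the end of this post), the claim generated for the first Pod is called www-web-0 and is roughly equivalent to writing the following PVC by hand (a reconstruction for illustration, not something applied in the original walkthrough):

```yaml
# Roughly what the StatefulSet controller generates for Pod web-0
# from a volumeClaimTemplate named "www"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: www-web-0                          # <template name>-<pod name>
spec:
  storageClassName: managed-nfs-storage    # the class referenced by the template
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```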

Creating a StorageClass

Create the NFS share

```bash
[root@k8s-master storageclass]# cat /etc/exports
/data/nfs-demo 192.168.9.0/24(rw,sync,no_root_squash,no_subtree_check)
[root@k8s-master storageclass]# sudo exportfs -r
[root@k8s-master storageclass]# sudo exportfs -v
/data/nfs-demo  192.168.9.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
```

Configure the ServiceAccount and RBAC permissions

```yaml
# rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]   #允许管理PV
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]   #允许管理PVC
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]     #允许访问storageclass
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]       #允许记录事件
  verbs: ["create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner       # binds the ClusterRole to the ServiceAccount
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner           # the ServiceAccount created above
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```

```bash
[root@k8s-master storageclass]# kubectl apply -f rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
[root@k8s-master storageclass]#  kubectl get role,rolebinding
NAME                                                                   CREATED AT
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner   2025-04-25T02:59:25Z

NAME                                                                          ROLE                                         AGE
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner   Role/leader-locking-nfs-client-provisioner   13s
```

Create the StorageClass for the NFS storage

```bash
[root@k8s-master storageclass]# kubectl apply -f nfs-StorageClass.yaml 
storageclass.storage.k8s.io/managed-nfs-storage created
[root@k8s-master storageclass]# kubectl get sc
NAME                  PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   nfs-storage   Delete          Immediate           false                  8s
# the StorageClass definition used for dynamic PV provisioning
[root@k8s-master storageclass]# cat nfs-StorageClass.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-storage
parameters:
  archiveOnDelete: "false"

Create the NFS provisioner

```bash
[root@k8s-master storageclass]# kubectl get deploy,pod
NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nfs-client-provisioner   1/1     1            1           18s

NAME                                         READY   STATUS    RESTARTS   AGE
pod/nfs-client-provisioner-cfbb6c87c-ztfvf   1/1     Running   0          18s
# The Deployment that runs the NFS provisioner.
[root@k8s-master storageclass]# cat nfs-provisioner.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate       # delete the old Pod before creating the new one
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner      # the ServiceAccount created earlier
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: nfs-storage           # must match "provisioner:" in the StorageClass
        - name: NFS_SERVER
          value: 192.168.9.178 
        - name: NFS_PATH
          value: /data/nfs-demo 
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.9.178  
          path: /data/nfs-demo
```

Create a test Pod to check that the volume mounts correctly
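The Pod below mounts a PVC named test-claim, which is not shown in the original session. A claim consistent with the kubectl get pvc output later in this post (1Mi, ReadWriteMany, managed-nfs-storage) would look roughly like this:

```yaml
# test-claim.yaml - reconstructed sketch of the claim the test Pod binds to
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage   # hands the request to the nfs-storage provisioner
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```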

```bash
[root@k8s-master storageclass]# cat test-pod.yaml 
apiVersion: v1
kind: Pod
metadata: 
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim

# on the NFS server, the provisioner created a per-claim directory; the file written by the Pod is there
[root@k8s-master default-test-claim-pvc-54d78e0c-7c7e-460c-a288-df1ecc3e346d]# ll
total 0
-rw-r--r--. 1 root root 0 Apr 26 23:16 SUCCESS
```

StatefulSet + volumeClaimTemplates: automatic PV creation

**StatefulSet:** gives every Pod a stable, unique identity and suits applications that need persistent data or ordered deployment and scaling; through the volumeClaimTemplates field it automatically generates a unique PVC for each Pod.

**StorageClass:** the blueprint for dynamic storage: it names the storage backend, the provisioner and the reclaim policy. It is what triggers dynamic PV creation: when a PVC references a StorageClass by name, Kubernetes uses that StorageClass's configuration to call the associated provisioner and create the PV automatically.

**Provisioner:** performs the dynamic creation: according to the StorageClass configuration it creates the PV in the storage backend and binds it to the PVC.

| Component | Core function | Role in dynamic storage |
| --- | --- | --- |
| StatefulSet | Manages stateful applications and generates a unique PVC for each Pod | Triggers dynamic PVC creation |
| StorageClass | Defines the storage type and configuration template (e.g. NFS, cloud storage) | Tells Kubernetes how to create the PV |
| Provisioner | Dynamically creates the PV in the storage backend according to the StorageClass configuration | Performs the actual storage allocation and binding |

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
  labels: 
    app: nginx-headless
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector: 
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx-headless"
  replicas: 2
  template:
    metadata:
      labels: 
        app: nginx
    spec: 
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
    spec: 
      accessModes: 
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
```
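The volumeClaimTemplates above select the StorageClass through the old volume.beta.kubernetes.io/storage-class annotation. On current Kubernetes versions the same intent is normally expressed with spec.storageClassName; a sketch of the equivalent template:

```yaml
# Equivalent template using spec.storageClassName instead of the beta annotation
volumeClaimTemplates:
- metadata:
    name: www
  spec:
    storageClassName: managed-nfs-storage
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
```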


```bash
[root@k8s-master storageclass]# kubectl get pod -l app=nginx
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          50s
web-1   1/1     Running   0          23s
[root@k8s-master storageclass]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-54d78e0c-7c7e-460c-a288-df1ecc3e346d   1Mi        RWX            managed-nfs-storage   4h11m
www-web-0    Bound    pvc-138a31f9-f5c0-40a2-a6a8-7d49af0bc377   1Gi        RWO            managed-nfs-storage   60s
www-web-1    Bound    pvc-6104a805-da71-4a0c-b9ae-048747412b69   1Gi        RWO            managed-nfs-storage   33s
[root@k8s-master storageclass]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-138a31f9-f5c0-40a2-a6a8-7d49af0bc377   1Gi        RWO            Delete           Bound    default/www-web-0    managed-nfs-storage            65s
pvc-54d78e0c-7c7e-460c-a288-df1ecc3e346d   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            4h11m
pvc-6104a805-da71-4a0c-b9ae-048747412b69   1Gi        RWO            Delete           Bound    default/www-web-1    managed-nfs-storage            38s
 
[root@k8s-master storageclass]# ll /data/nfs-demo/
total 0
drwxrwxrwx. 2 root root 21 Apr 26 23:19 default-test-claim-pvc-54d78e0c-7c7e-460c-a288-df1ecc3e346d
drwxrwxrwx. 2 root root  6 Apr 27 03:18 default-www-web-0-pvc-138a31f9-f5c0-40a2-a6a8-7d49af0bc377
drwxrwxrwx. 2 root root  6 Apr 27 03:19 default-www-web-1-pvc-6104a805-da71-4a0c-b9ae-048747412b69
```