k8s-持久化存储

在k8s中为什么要做持久化存储?

在k8s中部署的应用都是以pod容器的形式运行的,假如我们部署MySQL、Redis等数据库,需要对这些数据库产生的数据做备份。因为Pod是有生命周期的,如果pod不挂载数据卷,那pod被删除或重启后这些数据会随之消失,如果想要长久的保留这些数据就要用到pod数据持久化存储。

1、k8s持久化存储:emptyDir

查看k8s支持哪些存储

bash 复制代码
[root@k8s-master01 ~]# kubectl explain pods.spec.volumes

FIELDS:
  awsElasticBlockStore   <Object>
  azureDisk              <Object>
  azureFile              <Object>
  cephfs                 <Object>
  cinder                 <Object>
  configMap              <Object>
  csi                    <Object>
  downwardAPI            <Object>
  emptyDir               <Object>
  ephemeral              <Object>
  fc                     <Object>
  flexVolume             <Object>
  flocker                <Object>
  gcePersistentDisk      <Object>
  gitRepo                <Object>
  glusterfs              <Object>
  hostPath               <Object>
  iscsi                  <Object>
  name                   <string> -required-
  nfs                    <Object>
  persistentVolumeClaim  <Object>
  photonPersistentDisk   <Object>
  portworxVolume         <Object>
  projected              <Object>
  quobyte                <Object>
  rbd                    <Object>
  scaleIO                <Object>
  secret                 <Object>
  storageos              <Object>
  vsphereVolume          <Object>

其中常用的存储卷类型有:emptyDir、hostPath、nfs、persistentVolumeClaim、glusterfs、cephfs、configMap、secret。

我们想要使用存储卷,需要经历如下步骤

1、定义pod的volume,这个volume指明它要关联到哪个存储上的

2、在容器中使用volumeMounts挂载对应的存储

经过以上两步才能正确的使用存储卷

emptyDir类型的Volume是在Pod分配到Node上时被创建,Kubernetes会在Node上自动分配一个目录,因此无需指定宿主机Node上对应的目录文件。这个目录的初始内容为空,当Pod从Node上移除时,emptyDir中的数据会被永久删除。emptyDir Volume主要用于某些应用程序无需永久保存的临时目录、多个容器的共享目录等。

创建一个pod,挂载临时目录emptyDir

Emptydir的官方网址:

https://kubernetes.io/docs/concepts/storage/volumes#emptydir

bash 复制代码
[root@k8s-master01 ~]# cat emptydir.yaml

apiVersion: v1
kind: Pod
metadata:
 name: pod-empty
spec:
 containers:
 - name: container-empty
   image: nginx
   imagePullPolicy: IfNotPresent
   volumeMounts:
   - mountPath: /cache
     name: cache-volume  ##与volumes中的name保持一致
 volumes:
 - emptyDir: {}
   name: cache-volume

更新资源清单文件

bash 复制代码
[root@k8s-master01 ~]# kubectl apply -f emptydir.yaml

pod/pod-empty created

查看本机临时目录存在的位置,可用如下方法:

查看pod调度到哪个节点

bash 复制代码
[root@k8s-master01 ~]# kubectl get pod pod-empty -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
pod-empty   1/1     Running   0          27s   172.16.69.230   k8s-worker02   <none>           <none>

查看pod的uid

bash 复制代码
[root@k8s-master01 ~]# kubectl get pod pod-empty -o yaml | grep uid
  uid: 45a10614-495b-4745-be0f-c7492b90e2b7

登录到k8s-worker02上

bash 复制代码
[root@k8s-worker02 ~]# tree /var/lib/kubelet/pods/45a10614-495b-4745-be0f-c7492b90e2b7/
/var/lib/kubelet/pods/45a10614-495b-4745-be0f-c7492b90e2b7/
├── containers
│   └── container-empty
│       └── 43f2b9b9
├── etc-hosts
├── plugins
│   └── kubernetes.io~empty-dir
│       ├── cache-volume
│       │   └── ready
│       └── wrapped_kube-api-access-njjrv
│           └── ready
└── volumes
    ├── kubernetes.io~empty-dir
    │   └── cache-volume
    └── kubernetes.io~projected
        └── kube-api-access-njjrv
            ├── ca.crt -> ..data/ca.crt
            ├── namespace -> ..data/namespace
            └── token -> ..data/token

11 directories, 7 files

测试
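先进入pod的容器再做读写测试。进入容器的方式示意如下(假设容器内带有bash):

```bash
# 进入pod-empty的容器,获得交互式shell
kubectl exec -it pod-empty -- /bin/bash
```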

bash 复制代码
###模拟产生数据测试
root@pod-empty:/cache# touch file{1..10}
root@pod-empty:/cache# ls
10}file{1.  aaa  file1  file10  file2  file3  file4  file5  file6  file7  file8  file9
root@pod-empty:/cache# exit
exit

###node查看

bash 复制代码
[root@k8s-worker02 ~]# tree /var/lib/kubelet/pods/45a10614-495b-4745-be0f-c7492b90e2b7/
/var/lib/kubelet/pods/45a10614-495b-4745-be0f-c7492b90e2b7/
├── containers
│   └── container-empty
│       └── 43f2b9b9
├── etc-hosts
├── plugins
│   └── kubernetes.io~empty-dir
│       ├── cache-volume
│       │   └── ready
│       └── wrapped_kube-api-access-njjrv
│           └── ready
└── volumes
    ├── kubernetes.io~empty-dir
    │   └── cache-volume
    │       ├── 10}file{1.
    │       ├── aaa
    │       ├── file1
    │       ├── file10
    │       ├── file2
    │       ├── file3
    │       ├── file4
    │       ├── file5
    │       ├── file6
    │       ├── file7
    │       ├── file8
    │       └── file9
    └── kubernetes.io~projected
        └── kube-api-access-njjrv
            ├── ca.crt -> ..data/ca.crt
            ├── namespace -> ..data/namespace
            └── token -> ..data/token

12 directories, 18 files

###模拟删除pod测试
[root@k8s-master01 ~]# kubectl delete pod pod-empty
pod "pod-empty" deleted
[root@k8s-worker02 ~]# tree /var/lib/kubelet/pods/45a10614-495b-4745-be0f-c7492b90e2b7/
/var/lib/kubelet/pods/45a10614-495b-4745-be0f-c7492b90e2b7/ [error opening dir]

0 directories, 0 files

2、k8s持久化存储:hostPath

hostPath Volume是指Pod挂载宿主机上的目录或文件,使容器可以直接使用宿主机的文件系统进行存储。hostPath(宿主机路径)是节点级别的存储卷:pod被删除后,这个存储卷依然保留在节点上,不会被删除;因此只要pod被重新调度到同一个节点,对应的数据依然存在。

查看hostPath存储卷的用法

bash 复制代码
[root@k8s-master01 ~]# kubectl explain pods.spec.volumes.hostPath
KIND:     Pod
VERSION:  v1

RESOURCE: hostPath <Object>

DESCRIPTION:
     HostPath represents a pre-existing file or directory on the host machine
     that is directly exposed to the container. This is generally used for
     system agents or other privileged things that are allowed to see the host
     machine. Most containers will NOT need this. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath

     Represents a host path mapped into a pod. Host path volumes do not support
     ownership management or SELinux relabeling.

FIELDS:
   path <string> -required-
   type <string>

创建一个pod,挂载hostPath存储卷

bash 复制代码
[root@k8s-master01 ~]# cat test-hostpath.yaml

apiVersion: v1
kind: Pod
metadata:
 name: test-hostpath
spec:
  containers:
  - image: nginx
    name: test-nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
     path: /data
     type: DirectoryOrCreate

注意:

DirectoryOrCreate表示本地有/data目录,就用本地的,本地没有就会在pod调度到的节点自动创建一个

更新资源清单文件,并查看pod调度到了哪个物理节点

bash 复制代码
[root@k8s-master01 ~]# kubectl apply -f  test-hostpath.yaml 
pod/test-hostpath created

[root@k8s-master01 ~]# kubectl get pod test-hostpath -o wide 
NAME            READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
test-hostpath   1/1     Running   0          80s   172.16.69.248   k8s-worker02   <none>           <none>

测试

bash 复制代码
###没有首页
[root@k8s-master01 ~]# curl 172.16.69.248
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.25.3</center>
</body>
</html>
###生成首页
[root@k8s-worker02 ~]# cd /data/
[root@k8s-worker02 data]# ls
[root@k8s-worker02 data]# echo 1111 > index.html

[root@k8s-master01 ~]# curl 172.16.69.248
1111

hostpath存储卷缺点

单节点,pod删除之后重新创建必须调度到同一个node节点,数据才不会丢失

如何让pod调度到同一个node节点呢?在yaml文件中通过nodeName指定即可:

yaml 复制代码
apiVersion: v1
kind: Pod
metadata:
  name: test-hostpath
spec:
  nodeName: k8s-worker02
  containers:
  - image: nginx
    name: test-nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /data
      type: DirectoryOrCreate

测试

bash 复制代码
[root@k8s-master01 ~]# kubectl apply -f test-hostpath.yaml
pod/test-hostpath configured
[root@k8s-master01 ~]# kubectl delete pod test-hostpath
pod "test-hostpath" deleted
[root@k8s-master01 ~]# kubectl apply -f test-hostpath.yaml
pod/test-hostpath created
[root@k8s-master01 ~]# kubectl get pod test-hostpath -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
test-hostpath   1/1     Running   0          9s    172.16.69.219   k8s-worker02   <none>           <none>
[root@k8s-master01 ~]# curl 172.16.69.219
1111

3、k8s持久化存储:nfs

上节说的hostPath存储,存在单点故障,pod挂载hostPath时,只有调度到同一个节点,数据才不会丢失。那可以使用nfs作为持久化存储。

**搭建nfs服务**

以k8s的控制节点作为NFS服务端

```shell
[root@k8s-master03 ~]# yum install -y nfs-utils
```

在宿主机创建NFS需要的共享目录

```shell
[root@k8s-master03 ~]# mkdir /data/volumes -pv
mkdir: 已创建目录 "/data"
mkdir: 已创建目录 "/data/volumes"
```

配置nfs共享服务器上的/data/volumes目录

```shell
[root@k8s-master03 ~]# systemctl enable --now nfs
[root@k8s-master03 ~]# vim /etc/exports
/data/volumes 192.168.115.0/24(rw,no_root_squash)
```

#使NFS配置生效

```shell
[root@k8s-master03 ~]# exportfs -avr
exporting 192.168.115.0/24:/data/volumes
```

**所有的worker节点安装nfs-utils**

```shell
yum install nfs-utils -y
systemctl enable --now nfs
```

#在k8s-worker01和k8s-worker02上手动挂载试试:

```shell
[root@k8s-worker01 ~]# mount 192.168.115.163:/data/volumes /mnt
[root@k8s-worker01 ~]# df -Th | grep nfs
192.168.115.163:/data/volumes nfs4      116G  6.7G  109G    6% /mnt
#nfs可以被正常挂载
#手动卸载:
[root@k8s-worker01 ~]# umount /mnt
```

创建Pod,挂载NFS共享出来的目录

Pod挂载nfs的官方地址:https://kubernetes.io/zh/docs/concepts/storage/volumes/

```yaml
[root@k8s-master01 ~]# cat test-nfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs
spec:
  containers:
  - name: test-nfs
    image: nginx
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
      protocol: TCP
    volumeMounts:
    - name: nfs-volumes
      mountPath: /usr/share/nginx/html
  volumes:
  - name: nfs-volumes
    nfs:
      path: /data/volumes        #共享目录
      server: 192.168.115.163    ##nfs服务器地址
```

更新资源清单文件

```shell
[root@k8s-master01 ~]# kubectl apply -f test-nfs.yaml
pod/test-nfs created
```

查看pod是否创建成功

```shell
[root@k8s-master01 ~]# kubectl get pods -o wide | grep nfs
test-nfs   1/1   Running   0   55s   172.16.79.68   k8s-worker01   <none>   <none>
```

**测试**

```shell
[root@k8s-master01 ~]# curl 172.16.79.68
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.25.3</center>
</body>
</html>
```
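此时共享目录中还没有首页,所以返回403。与hostPath一节的做法类似,可以在NFS服务器的共享目录中写入首页后再访问验证,示意如下(目录和IP沿用上文):

```bash
# 在NFS服务器(192.168.115.163)的共享目录中写入首页
echo "hello nfs" > /data/volumes/index.html

# 再次访问pod,应返回刚写入的内容
curl 172.16.79.68
```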

PVC 基本概念与工作原理

PersistentVolumeClaim (PVC) 是用户对存储资源的抽象请求,通过声明式配置定义所需的存储大小、访问模式等属性。PVC 与 PersistentVolume (PV) 绑定后,Pod 可通过 PVC 挂载存储资源,实现数据持久化。


PVC 核心特性

动态供给

通过 StorageClass 动态创建 PV,无需管理员手动预配置 PV。例如定义以下 StorageClass:

yaml 复制代码
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://glusterfs-cluster.example.com"

访问模式

支持三种模式:

  • ReadWriteOnce (RWO):单节点读写
  • ReadOnlyMany (ROX):多节点只读
  • ReadWriteMany (RWX):多节点读写

生命周期阶段

  • Available:未绑定的空闲 PV
  • Bound:已与 PVC 绑定
  • Released:PVC 删除但 PV 未回收
  • Failed:自动回收失败
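可以用kubectl观察PV当前所处的阶段(STATUS列),示意如下:

```bash
# STATUS列即PV所处的阶段(Available/Bound/Released/Failed)
kubectl get pv

# 持续观察状态变化
kubectl get pv -w
```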

PVC 使用示例

创建 PVC

yaml 复制代码
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: fast

Pod 挂载 PVC

yaml 复制代码
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: storage
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: mypvc

PVC 与分布式存储集成

GlusterFS 示例

  1. 部署 GlusterFS 集群并创建 Volume
  2. 定义 StorageClass 使用 GlusterFS 插件
  3. PVC 申请时自动创建 GlusterFS 后端 PV

CephFS 示例

yaml 复制代码
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: "10.16.154.78:6789"
  adminId: admin
  adminSecretName: ceph-secret

数据高可用建议

  • 分布式存储系统(如 Ceph/GlusterFS)提供副本机制
  • 定期备份 PV 数据至对象存储(如 S3/MinIO)
  • 监控 PV/PVC 状态并设置告警规则

通过 PVC 抽象存储细节,结合分布式存储后端,可实现跨节点数据共享与高可用性。

PV和PVC工作原理补充说明

PV(PersistentVolume)是集群中的存储资源,PVC(PersistentVolumeClaim)是用户对存储资源的请求。生命周期分为供应、绑定、使用和回收四个阶段。

静态供应 集群管理员手动创建PV,明确指定存储容量、访问模式等细节。这些PV会持久存在于Kubernetes API中,供PVC匹配使用。

动态供应 当没有匹配的静态PV时,集群会根据PVC中指定的StorageClass自动创建PV。需要预先配置好StorageClass和相关存储插件。

PVC绑定细节

PVC通过以下字段匹配PV:

  • 存储容量需求(resources.requests.storage)
  • 访问模式(ReadWriteOnce/ReadOnlyMany/ReadWriteMany)
  • StorageClass名称(若指定)
  • 标签选择器(selector)

绑定成功后,PV的状态变为Bound,且被独占锁定。若没有可用PV,PVC会保持Pending状态。
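本文后面的示例没有用到标签选择器;带selector的PVC写法大致如下(示意,标签键值为假设):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-with-selector
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      release: stable      # 只绑定带有该标签的PV(标签键值为假设)
```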

使用PVC的典型流程

创建NFS共享目录(假设NFS服务器已就绪):

shell 复制代码
mkdir -p /nfs/data/pv1
chmod 777 /nfs/data/pv1
echo "/nfs/data/pv1 *(rw,no_root_squash)" >> /etc/exports
exportfs -a

定义PV示例(static-pv.yaml):

yaml 复制代码
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /nfs/data/pv1
    server: nfs-server-ip

定义PVC示例(pvc-claim.yaml):

yaml 复制代码
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

回收策略对比

Retain策略

  • PVC删除后PV状态变为Released
  • 需要手动清理PV(删除或重新定义)才能再次使用
  • 数据永久保留,适用于关键数据场景

Delete策略

  • 自动删除PV及后端存储数据
  • 适用于临时数据或可丢失数据
  • 依赖StorageClass的配置支持

Recycle策略(已弃用)

  • 基本数据擦除(如rm -rf /volume/*)
  • 现推荐使用动态供应配合Delete策略

在Pod中使用PVC

示例Pod定义(pod-with-pvc.yaml):

yaml 复制代码
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: nfs-vol
      mountPath: /usr/share/nginx/html
  volumes:
  - name: nfs-vol
    persistentVolumeClaim:
      claimName: nfs-pvc

验证步骤:

  1. 应用PV/PVC定义:kubectl apply -f static-pv.yaml -f pvc-claim.yaml
  2. 检查绑定状态:kubectl get pv,pvc
  3. 创建Pod:kubectl apply -f pod-with-pvc.yaml
  4. 验证数据持久性:删除Pod后重新创建,检查数据是否保留(命令示意见下)
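上述验证步骤对应的命令大致如下(示意,文件名与Pod名沿用上文):

```shell
# 应用PV和PVC定义
kubectl apply -f static-pv.yaml -f pvc-claim.yaml

# 检查绑定状态,PV/PVC应为Bound
kubectl get pv,pvc

# 创建使用PVC的Pod
kubectl apply -f pod-with-pvc.yaml

# 验证数据持久性:写入数据后删除并重建Pod,检查数据是否保留
kubectl exec nginx-pod -- sh -c 'echo hello > /usr/share/nginx/html/index.html'
kubectl delete pod nginx-pod
kubectl apply -f pod-with-pvc.yaml
kubectl exec nginx-pod -- cat /usr/share/nginx/html/index.html
```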

动态供应配置要点

  1. 创建StorageClass(nfs-sc.yaml):
yaml 复制代码
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-sc
provisioner: example.com/nfs
parameters:
  archiveOnDelete: "false"
  2. PVC中指定StorageClass:
yaml 复制代码
spec:
  storageClassName: nfs-sc
  accessModes: [...]

注意:动态供应需要对应的provisioner组件,不同存储系统(如AWS EBS、Ceph RBD等)需要安装各自的插件。

#在宿主机创建NFS需要的共享目录

```shell
[root@k8s-master03 ~]# mkdir /data/volume_test/v{1,2,3,4,5,6,7,8,9,10} -p

#配置nfs共享宿主机上的/data/volume_test/v1..v10目录
[root@k8s-master03 ~]# cat /etc/exports
/data/volumes 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v1 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v2 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v3 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v4 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v5 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v6 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v7 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v8 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v9 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v10 192.168.115.0/24(rw,no_root_squash)

#重新加载配置,使配置生效
[root@k8s-master03 ~]# exportfs -arv
```

#查看定义pv需要的字段

```shell
[root@k8s-master01 ~]# kubectl explain pv
KIND:     PersistentVolume
VERSION:  v1

DESCRIPTION:
     PersistentVolume (PV) is a storage resource provisioned by an
     administrator. It is analogous to a node. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes

FIELDS:
   apiVersion <string>
   kind       <string>
   metadata   <Object>
   spec       <Object>

#查看定义nfs类型的pv需要的字段
[root@k8s-master01 ~]# kubectl explain pv.spec.nfs
KIND:     PersistentVolume
VERSION:  v1

RESOURCE: nfs <Object>

FIELDS:
   path     <string> -required-
   readOnly <boolean>
   server   <string> -required-
```

```yaml
[root@k8s-master01 ~]# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v1
spec:
  capacity:
    storage: 1Gi                  #pv的存储空间容量
  accessModes: ["ReadWriteOnce"]
  nfs:
    path: /data/volume_test/v1    #把nfs的存储空间创建成pv
    server: 192.168.115.163       #nfs服务器的地址
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v2
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteMany"]
  nfs:
    path: /data/volume_test/v2
    server: 192.168.115.163
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v3
spec:
  capacity:
    storage: 3Gi
  accessModes: ["ReadOnlyMany"]
  nfs:
    path: /data/volume_test/v3
    server: 192.168.115.163
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v4
spec:
  capacity:
    storage: 4Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v4
    server: 192.168.115.163
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v5
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v5
    server: 192.168.115.163
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v6
spec:
  capacity:
    storage: 6Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v6
    server: 192.168.115.163
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v7
spec:
  capacity:
    storage: 7Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v7
    server: 192.168.115.163
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v8
spec:
  capacity:
    storage: 8Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v8
    server: 192.168.115.163
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v9
spec:
  capacity:
    storage: 9Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v9
    server: 192.168.115.163
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: v10
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  nfs:
    path: /data/volume_test/v10
    server: 192.168.115.163
```

更新资源清单文件

```shell
[root@k8s-master01 ~]# kubectl apply -f pv.yaml
persistentvolume/v1 created
persistentvolume/v2 created
persistentvolume/v3 created
persistentvolume/v4 created
persistentvolume/v5 created
persistentvolume/v6 created
persistentvolume/v7 created
persistentvolume/v8 created
persistentvolume/v9 created
persistentvolume/v10 created
```

查看pv资源

```shell
[root@k8s-master01 ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   REASON   AGE
v1     1Gi        RWO            Retain           Available                                             17m
v10    10Gi       RWO,RWX        Retain           Available                                             17m
v2     2Gi        RWX            Retain           Bound       default/my-pvc                            17m
v3     3Gi        ROX            Retain           Available                                             17m
v4     4Gi        RWO,RWX        Retain           Bound       default/my-pvc1                           17m
v5     5Gi        RWO,RWX        Retain           Available                                             17m
v6     6Gi        RWO,RWX        Retain           Available                                             17m
v7     7Gi        RWO,RWX        Retain           Available                                             17m
v8     8Gi        RWO,RWX        Retain           Available                                             17m
v9     9Gi        RWO,RWX        Retain           Available                                             17m

[root@k8s-master01 ~]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-pvc    Bound    v2       2Gi        RWX                           12m
my-pvc1   Bound    v4       4Gi        RWO,RWX                       4m11s
```
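上面输出中的my-pvc和my-pvc1是预先创建好的两个PVC;按其绑定结果(容量、访问模式)推测,资源清单大致如下(示意):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc1
spec:
  accessModes: ["ReadWriteOnce", "ReadWriteMany"]
  resources:
    requests:
      storage: 4Gi
```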

```yaml
[root@k8s-master01 ~]# cat pod-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-pvc
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: nginx-html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: nginx-html
    persistentVolumeClaim:
      claimName: my-pvc
```

更新资源清单文件

```shell
[root@k8s-master01 ~]# kubectl apply -f pod-pvc.yaml
pod/pod-pvc created
```

查看pod状态

```shell
[root@k8s-master01 ~]# kubectl get pod -o wide | grep pvc
pod-pvc   1/1   Running   0   16s   172.16.79.127   k8s-worker01   <none>   <none>
```

bash 复制代码
[root@k8s-master01 ~]# curl 172.16.79.127
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.25.3</center>
</body>
</html>

####在nfs服务器端写入index.html
[root@k8s-master03 ~]# cd /data/volume_test/v2
[root@k8s-master03 v2]# echo pvc > index.html
[root@k8s-master01 ~]# curl 172.16.79.127
pvc

删除pod-pvc 这个pod,发现pvc还是存在的

bash 复制代码
[root@k8s-master01 ~]# kubectl delete pod pod-pvc
[root@k8s-master01 ~]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-pvc    Bound    v2       2Gi        RWX                           36s
my-pvc1   Bound    v4       4Gi        RWO,RWX                       20m

[root@k8s-master01 ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   REASON   AGE
v1     1Gi        RWO            Retain           Available                                             54s
v10    10Gi       RWO,RWX        Retain           Available                                             33m
v2     2Gi        RWX            Retain           Bound       default/my-pvc                            54s
v3     3Gi        ROX            Retain           Available                                             33m
v4     4Gi        RWO,RWX        Retain           Bound       default/my-pvc1                           33m
v5     5Gi        RWO,RWX        Retain           Available                                             33m
v6     6Gi        RWO,RWX        Retain           Available                                             33m
v7     7Gi        RWO,RWX        Retain           Available                                             33m
v8     8Gi        RWO,RWX        Retain           Available                                             33m
v9     9Gi        RWO,RWX        Retain           Available                                             33m

删除pvc

bash 复制代码
[root@k8s-master01 ~]# kubectl delete pvc my-pvc
persistentvolumeclaim "my-pvc" deleted
[root@k8s-master01 ~]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-pvc1   Bound    v4       4Gi        RWO,RWX                       20m

#发现pv的状态发生了变化
[root@k8s-master01 ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   REASON   AGE
v1     1Gi        RWO            Retain           Available                                             2m9s
v10    10Gi       RWO,RWX        Retain           Available                                             34m
v2     2Gi        RWX            Retain           Released    default/my-pvc                            2m9s
v3     3Gi        ROX            Retain           Available                                             34m
v4     4Gi        RWO,RWX        Retain           Bound       default/my-pvc1                           34m
v5     5Gi        RWO,RWX        Retain           Available                                             34m
v6     6Gi        RWO,RWX        Retain           Available                                             34m
v7     7Gi        RWO,RWX        Retain           Available                                             34m
v8     8Gi        RWO,RWX        Retain           Available                                             34m
v9     9Gi        RWO,RWX        Retain           Available                                             34m

NFS服务器上/data/volume_test/v2目录下的index.html还是存在的,数据并没有丢失。

注:使用pvc和pv的注意事项

1、我们每次创建pvc的时候,都需要事先划分好pv,这样可能不方便;此时可以使用存储类(StorageClass),在创建pvc的时候动态创建pv,pv不需要事先存在。

2、pvc和pv绑定后,如果使用默认的回收策略Retain,那么删除pvc之后,pv会处于Released状态。想要继续使用这个pv,需要手动删除pv(kubectl delete pv pv_name);删除pv并不会删除pv里的数据,当我们重新创建pvc时,它还会和最匹配的pv绑定,数据还是原来的数据,不会丢失。
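以上面的v2为例,重新使用处于Released状态的pv,大致操作如下(示意):

```bash
# 删除处于Released状态的pv对象(不会删除NFS上的数据)
kubectl delete pv v2

# 重新创建pv(kubectl apply -f pv.yaml 会重建缺失的v2,其余不变)
kubectl apply -f pv.yaml

# 再次创建pvc后,仍会与容量/访问模式最匹配的pv绑定
kubectl get pv,pvc
```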

StorageClass 运行原理

StorageClass 的核心机制是通过动态供应(Dynamic Provisioning)自动创建 PV,无需管理员手动预创建。其工作原理可分为以下关键环节:

动态供应流程

当用户创建 PVC 时,若指定了 StorageClass 名称,Kubernetes 会触发以下动作:

  1. PVC 向 StorageClass 发起存储请求,包含所需容量、访问模式等参数。
  2. StorageClass 根据预定义的 provisioner 字段调用对应的存储插件(如 Ceph RBD、AWS EBS)。
  3. 存储插件根据 parameters 配置(如磁盘类型、区域)在底层存储系统中分配资源,并自动创建 PV。
  4. 新创建的 PV 与 PVC 自动绑定,供 Pod 使用。

关键字段解析

  • provisioner:指定存储驱动(如 kubernetes.io/aws-ebs),决定由哪个插件创建 PV。
  • parameters:存储驱动的配置参数(如 AWS EBS 的 type: gp2)。
  • reclaimPolicy:定义 PV 回收策略(Delete、Retain),默认为 Delete。

与静态供应的对比

  • 静态供应需手动创建 PV,StorageClass 实现自动化。
  • 动态供应按需创建 PV,避免资源浪费,适合大规模集群。

配置示例

以下是一个 AWS EBS 的 StorageClass 定义示例:

yaml 复制代码
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Retain
allowVolumeExpansion: true

字段说明

  • type: gp2:指定 AWS EBS 的固态硬盘类型。
  • allowVolumeExpansion: true:允许 PVC 动态扩容。
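开启 allowVolumeExpansion 后,扩容只需调大 PVC 的容量请求即可,示意如下(以前文的 mypvc 为例,需要存储后端支持在线扩容):

```bash
# 将mypvc请求的存储调大到20Gi(只能扩容,不能缩容)
kubectl patch pvc mypvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```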

使用场景

有状态应用

数据库(如 MySQL)通过 StorageClass 自动分配持久化存储,确保数据可靠性。

CI/CD 流水线

构建任务临时申请存储资源,任务完成后自动释放 PV。

多云环境

通过不同 StorageClass 抽象底层存储差异(如 AWS EBS 与 Azure Disk)。

管理 PVC 和 Pod 的存储资源

在 Kubernetes 中,PersistentVolumeClaim (PVC) 和 PersistentVolume (PV) 用于管理存储资源。PVC 是用户对存储的请求,而 PV 是集群中的实际存储资源。StorageClass 用于定义不同类型的存储,如 Gold/Fast、Silver/Standard、Bronze/Slow。

配置 StorageClass

StorageClass 允许集群管理员定义不同类型的存储。例如:

  • Gold/Fast:高性能存储,适用于高 I/O 负载应用。
  • Silver/Standard:中等性能存储,适用于常规应用。
  • Bronze/Slow:低成本存储,适用于备份或低优先级应用。
yaml 复制代码
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2

自动生成 PV 并绑定 PVC

当用户创建 PVC 时,系统可以根据 StorageClass 自动生成 PV 并绑定。例如,以下 PVC 请求 30GiB 的存储:

yaml 复制代码
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  storageClassName: gold
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi

支持的存储后端

Kubernetes 支持多种存储后端,包括:

  • GlusterFS:分布式文件系统。
  • NFS:网络文件系统。
  • iSCSI:基于块的存储协议。
  • AWS EBS:亚马逊弹性块存储。
  • Ceph RBD:Ceph 块设备。
  • GCE Persistent Disk:谷歌云持久化磁盘。

存储资源管理

存储管理员负责配置和维护存储后端。例如:

  • 部署 GlusterFS 集群并配置 Kubernetes 集成。
  • 在 AWS 中创建 EBS 卷并确保其可用于集群。
  • 配置 Ceph RBD 以供 Kubernetes 使用。

示例:使用 AWS EBS

以下示例展示如何通过 StorageClass 使用 AWS EBS:

yaml 复制代码
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aws-ebs
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4

用户可以通过 PVC 请求存储:

yaml 复制代码
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-pvc
spec:
  storageClassName: aws-ebs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi

动态卷配置

动态卷配置允许按需创建 PV,无需手动干预。StorageClass 中的 provisioner 字段指定了用于动态配置的插件。例如,AWS EBS 的 provisioner 是 kubernetes.io/aws-ebs。

存储容量管理

PVC 可以请求特定大小的存储,如 10GB、20GiB、30GiB 或 100GB。Kubernetes 会根据 StorageClass 的配置自动分配符合条件的 PV。

通过合理配置 StorageClass 和 PVC,用户可以灵活管理存储资源,而存储管理员可以集中管理后端存储系统。

StorageClass 的核心概念与作用

StorageClass 是 Kubernetes 中用于动态分配 PersistentVolume(PV)的机制,通过定义存储后端类型、回收策略等参数,实现 PV 的自动化创建与管理。

  • 核心字段

    • provisioner:指定存储后端的驱动(如 nfs-client),用于自动创建 PV。
    • parameters:存储后端的配置参数(如 NFS 服务器地址、路径等)。
    • reclaimPolicy:定义 PV 的回收策略(Delete、Retain)。
  • 命名规则

    StorageClass 的名称具有唯一性,用户通过名称请求特定类型的存储资源。创建后不可修改名称或关键参数。

  • 默认 StorageClass

    集群管理员可设置默认 StorageClass,当 PVC 未指定 storageClassName 时,自动使用默认类创建 PV。
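把某个 StorageClass 设为默认类,可以通过给它加上对应注解实现,示意如下(StorageClass 名称为示例):

```bash
# 将名为nfs-storage的StorageClass标记为默认StorageClass
kubectl patch storageclass nfs-storage -p \
  '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```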

NFS 存储的动态配置实现

以 NFS 为例,需部署对应的 Provisioner(如 nfs-client-provisioner),实现 PV 的自动化创建。

  • Provisioner 工作原理

    Provisioner 监听 PVC 请求,根据 StorageClass 配置自动在 NFS 服务器上创建目录并生成 PV,无需手动定义 PV。

  • 示例 StorageClass 配置

yaml 复制代码
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: example.com/nfs-client
parameters:
  server: 192.168.1.100
  path: /data/nfs
reclaimPolicy: Retain

操作流程

  1. 部署 NFS Provisioner

    通过 Helm 或手动部署 nfs-client-provisioner,需指定 NFS 服务器地址及共享路径。

  2. 创建 StorageClass

    定义 Provisioner 和 NFS 参数,标记为默认类(可选)。

  3. 用户创建 PVC

    PVC 中引用 StorageClass 名称,触发自动创建 PV 并绑定。

yaml 复制代码
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

注意事项

  • 回收策略Delete 会在删除 PVC 时移除 PV 及 NFS 上的数据;Retain 保留数据需手动清理。
  • 权限配置:确保 NFS 服务器共享目录具有 Provisioner 所需的读写权限。
  • 多 StorageClass:集群可配置多个 StorageClass 以支持不同存储后端(如 SSD、HDD)。

通过 StorageClass 和 Provisioner 的配合,Kubernetes 实现了存储资源的按需分配,显著简化了有状态应用的存储管理。

Kubernetes Persistent Volume (PV) and Persistent Volume Claim (PVC) Workflow

The process of provisioning and using persistent storage in Kubernetes involves multiple steps, with interactions between administrators, users, and Kubernetes components. Below is a structured breakdown of the workflow:

Cluster Admin Sets Up Persistent Storage Infrastructure

A cluster administrator deploys a PersistentVolume (PV) provisioner if one is not already available. This could be a cloud provider's storage provisioner (e.g., AWS EBS, GCE PD) or an on-premises solution like Ceph or NFS.

The admin defines one or more StorageClass resources, which specify:

  • The provisioner to use
  • Parameters like disk type, replication policy, or performance settings
  • Optionally marks one StorageClass as the default

创建可用的NFS Server

在Kubernetes集群外部或内部部署NFS服务器,确保共享目录可被集群节点访问。例如,在Linux系统中安装NFS服务并配置共享目录:

bash 复制代码
# 安装NFS服务
sudo apt-get install nfs-kernel-server

# 创建共享目录并设置权限
sudo mkdir -p /data/nfs
sudo chmod 777 /data/nfs

# 编辑/etc/exports文件,添加共享配置
echo "/data/nfs *(rw,sync,no_root_squash)" | sudo tee -a /etc/exports

# 重启NFS服务
sudo systemctl restart nfs-kernel-server
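NFS服务重启后,可以用showmount确认共享目录已经导出(示意):

```bash
# 查看本机NFS导出的共享目录列表
showmount -e localhost
```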

创建Service Account

为NFS Provisioner创建专用的Service Account和RBAC权限,确保其在集群中具有动态创建PV的权限:

yaml 复制代码
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

部署NFS Provisioner

使用Deployment或StatefulSet部署NFS Provisioner,指定NFS服务器地址和共享路径:

yaml 复制代码
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-provisioner
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-provisioner
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
        - name: nfs-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs
            - name: NFS_SERVER
              value: <NFS_SERVER_IP>
            - name: NFS_PATH
              value: /data/nfs
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
      volumes:
        - name: nfs-client-root
          nfs:
            server: <NFS_SERVER_IP>
            path: /data/nfs

创建StorageClass

定义StorageClass资源,关联NFS Provisioner并配置回收策略:

yaml 复制代码
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: example.com/nfs
parameters:
  archiveOnDelete: "false"
reclaimPolicy: Delete
volumeBindingMode: Immediate

关键字段说明:

  • provisioner: 必须与NFS Provisioner中定义的PROVISIONER_NAME一致。
  • reclaimPolicy: 可选Delete(删除PVC时自动清理PV和NFS数据)或Retain(保留数据)。
  • volumeBindingMode: Immediate表示立即绑定PV,WaitForFirstConsumer延迟到Pod使用时绑定。

测试PVC自动供给

创建PVC引用StorageClass,验证自动PV创建功能:

yaml 复制代码
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

成功创建后,NFS共享目录下会自动生成<namespace>-<pvcname>-<pvname>格式的挂载点,PV会与PVC自动绑定。
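可以用下面这类命令确认动态供给是否生效(示意,PVC名称与共享目录沿用上文):

```bash
# PVC应处于Bound状态,VOLUME列为自动生成的pvc-xxxx名称
kubectl get pvc test-pvc
kubectl get pv

# 在NFS服务器上查看自动生成的子目录
ls /data/nfs
```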

User Requests Storage via a PersistentVolumeClaim (PVC)

A user creates a PersistentVolumeClaim (PVC) that references a specific StorageClass (or omits it to use the default). The PVC includes:

  • Desired access mode (e.g., ReadWriteOnce, ReadOnlyMany)
  • Required storage size
Kubernetes and the Provisioner Handle PV Creation

Kubernetes checks the PVC's StorageClass and forwards the request to the associated provisioner. The provisioner:

  • Dynamically provisions storage (e.g., allocates a cloud disk or an NFS share)
  • Creates a PersistentVolume (PV) matching the PVC's requirements
  • Binds the PV to the PVC
User Deploys a Pod Using the PVC

The user creates a Pod with a volume referencing the PVC by name. Kubernetes ensures the Pod has access to the bound PV's storage.

Key Components

  • PersistentVolume (PV): Represents actual storage resources in the cluster (pre-provisioned or dynamically created).
  • PersistentVolumeClaim (PVC): A user's request for storage, acting as an abstraction over PVs.
  • StorageClass: Defines storage provisioning behavior, allowing dynamic PV creation.
  • Provisioner: The backend storage system that creates PVs on demand.

Example YAML Definitions

StorageClass (Admin-Side)
yaml 复制代码
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Delete
volumeBindingMode: Immediate
PersistentVolumeClaim (User-Side)
yaml 复制代码
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: fast
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Pod Using PVC
yaml 复制代码
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: nginx
      volumeMounts:
        - mountPath: "/data"
          name: my-volume
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: my-pvc

This workflow ensures dynamic, scalable storage provisioning while abstracting infrastructure details from end users.

bash 复制代码
#查看定义的storageclass需要的字段

[root@k8s-master01 ~]# kubectl explain storageclass
KIND:     StorageClass
VERSION:  storage.k8s.io/v1

DESCRIPTION:
     StorageClass describes the parameters for a class of storage for which
     PersistentVolumes can be dynamically provisioned.

     StorageClasses are non-namespaced; the name of the storage class according
     to etcd is in ObjectMeta.Name.

FIELDS:
   allowVolumeExpansion <boolean>
   allowedTopologies    <[]Object>
   apiVersion           <string>
   kind                 <string>
   metadata             <Object>
   mountOptions         <[]string>
   parameters           <map[string]string>
   provisioner          <string> -required-
   reclaimPolicy        <string>
   volumeBindingMode    <string>

#provisioner:供应商(也称作制备器),storageclass需要有一个供应者,用来确定我们使用什么样的存储来创建pv

Volume Plugin 配置示例

Kubernetes 卷类型及版本要求

以下是 Kubernetes 支持的卷类型及其对应的最低版本要求:

gcePersistentDisk

支持的最低 Kubernetes 版本为 1.11,适用于 Google Compute Engine 的持久磁盘。

awsElasticBlockStore

支持的最低 Kubernetes 版本为 1.11,适用于 Amazon Web Services 的弹性块存储。

Cinder

支持的最低 Kubernetes 版本为 1.11,适用于 OpenStack 的块存储服务。

glusterfs

支持的最低 Kubernetes 版本为 1.11,适用于 GlusterFS 分布式文件系统。

rbd

支持的最低 Kubernetes 版本为 1.11,适用于 Ceph 块设备存储。

Azure File

支持的最低 Kubernetes 版本为 1.11,适用于 Azure 文件存储。

Azure Disk

支持的最低 Kubernetes 版本为 1.11,适用于 Azure 磁盘存储。

Portworx

支持的最低 Kubernetes 版本为 1.11,适用于 Portworx 容器原生存储。

FlexVolume

支持的最低 Kubernetes 版本为 1.13,适用于可扩展的插件式存储驱动。

CSI (Container Storage Interface)

Alpha 版本支持从 Kubernetes 1.14 开始,Beta 版本支持从 Kubernetes 1.16 开始,提供标准化的存储插件接口。

注意事项

  • 使用卷类型时需确保 Kubernetes 集群版本满足最低要求。
  • CSI 是未来的推荐标准,建议在新项目中使用 CSI 兼容的存储驱动。
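以 AWS EBS 的 CSI 驱动为例,基于 CSI 的 StorageClass 大致如下(示意,provisioner 名称以实际安装的 CSI 驱动为准):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-csi-sc
provisioner: ebs.csi.aws.com          # AWS EBS CSI驱动的provisioner名称
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
```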
Internal Provisioner

用于在 Kubernetes 集群内动态分配存储,通常与 StorageClass 资源配合使用。

AWSElasticBlockStore (AWS EBS)

AWS EBS 卷的 YAML 示例:

yaml 复制代码
apiVersion: v1
kind: PersistentVolume
metadata:
  name: aws-ebs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: "vol-1234567890abcdef0"
    fsType: ext4
AzureFile

Azure 文件存储示例:

yaml 复制代码
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azure-file-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  azureFile:
    secretName: azure-storage-secret
    shareName: k8s-share
    readOnly: false
AzureDisk

Azure 磁盘配置:

yaml 复制代码
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azure-disk-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  azureDisk:
    diskName: test-disk
    diskURI: /subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.Compute/disks/test-disk
CephFS

Ceph 文件系统配置:

yaml 复制代码
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - 10.16.154.78:6789
    path: /k8s
    user: admin
    secretRef:
      name: ceph-secret
Cinder (OpenStack)

OpenStack Cinder 卷示例:

yaml 复制代码
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cinder-pv
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  cinder:
    volumeID: 12345-6789-0abc-def1
    fsType: ext4
Glusterfs

GlusterFS 卷示例:

yaml 复制代码
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfs-pv
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: k8s-volume
    readOnly: false
NFS

NFS 卷配置:

yaml 复制代码
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.42
    path: /exports/k8s
PortworxVolume

Portworx 卷示例:

yaml 复制代码
apiVersion: v1
kind: PersistentVolume
metadata:
  name: portworx-pv
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  portworxVolume:
    volumeID: "px-vol-12345"
    fsType: ext4
Local

本地存储卷示例:

yaml 复制代码
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1

注意事项

  • 所有配置需确保 Kubernetes 集群已安装必要的驱动和依赖项。

  • 密钥(如 azure-storage-secretceph-secret)需在卷分配前单独创建。

  • 注意:卷扩容功能(allowVolumeExpansion)仅用于扩容卷,不能用于缩小卷。

5.2 安装nfs provisioner,用于配合存储类动态生成pv

1、创建运行nfs-provisioner需要的sa账号

bash 复制代码
[root@k8s-master01 newnfs]# cat sa.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: newnfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: newnfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: newnfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: newnfs
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: newnfs
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: newnfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io


[root@k8s-master01 newnfs]# kubectl apply -f sa.yaml
namespace/newnfs created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

扩展:什么是sa?

sa的全称是serviceaccount。

serviceaccount是为了方便Pod里面的进程调用Kubernetes API或其他外部服务而设计的。

指定了serviceaccount之后,我们把pod创建出来了,我们在使用这个pod时,这个pod就有了我们指定的账户的权限了。

2、安装nfs-provisioner程序

bash 复制代码
[root@k8s-master03 ~]# mkdir /data/nfs_pro -p

#把/data/nfs_pro变成nfs共享的目录

[root@k8s-master03 ~]# cat /etc/exports
/data/volumes 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v1 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v2 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v3 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v4 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v5 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v6 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v7 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v8 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v9 192.168.115.0/24(rw,no_root_squash)
/data/volume_test/v10 192.168.115.0/24(rw,no_root_squash)



[root@k8s-master01 newnfs]# cat nfs.yaml 
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: newnfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate        #设置升级策略为删除再创建(默认为滚动更新)
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner  #上一步创建的ServiceAccount名称
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME  # Provisioner的名称,以后设置的storageclass要和这个保持一致
              value: storage-nfs
            - name: NFS_SERVER        # NFS服务器地址,需和valumes参数中配置的保持一致
              value: 192.168.115.163
            - name: NFS_PATH          # NFS服务器数据存储目录,需和volumes参数中配置的保持一致
              value: /data/volumes
            - name: ENABLE_LEADER_ELECTION
              value: "true"
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.115.163       # NFS服务器地址
            path: /data/volumes        # NFS共享目录
更新资源清单文件

```shell
[root@k8s-master01 ~]# kubectl apply -f nfs.yaml

deployment.apps/nfs-client-provisioner created
```

查看nfs-provisioner是否正常运行

[root@k8s-master01 newnfs]# kubectl -n newnfs get pods | grep nfs
nfs-client-provisioner-5486f75d5-qjjnh   1/1     Running   0          8m1s
5.3 创建storageclass,动态供给pv

bash 复制代码
[root@k8s-master01 newnfs]# cat sc.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  namespace: newnfs
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"  ## 是否设置为默认的storageclass
provisioner: storage-nfs                                   ## 动态卷分配者名称,必须和上面创建的deploy中环境变量"PROVISIONER_NAME"变量值一致
parameters:
  archiveOnDelete: "true"                                 ## 设置为"false"时删除PVC不会保留数据,"true"则保留数据
mountOptions: 
  - hard                                                  ## 指定为硬挂载方式
  - nfsvers=4                                             ## 指定NFS版本,这个需要根据NFS Server版本号设置nfs

[root@k8s-master01 ~]# kubectl apply -f sc.yaml
storageclass.storage.k8s.io/nfs-storage created

查看storageclass是否创建成功

bash 复制代码
[root@k8s-master01 nfs]# kubectl -n newnfs get sc
NAME          PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage   storage-nfs   Delete          Immediate           false                  8m30s

显示内容如上,说明storageclass创建成功了

注意:storageclass中provisioner处写的值(本例为storage-nfs)必须跟安装nfs provisioner时env下的PROVISIONER_NAME的value值保持一致,如下:

yaml 复制代码
env:
  - name: PROVISIONER_NAME
    value: storage-nfs

5.4 创建pvc,通过storageclass动态生成pv

bash 复制代码
[root@k8s-master01 newnfs]# cat pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storage-pvc
  namespace: newnfs
spec:
  storageClassName: nfs-storage    ## 需要与上面创建的storageclass的名称一致
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi

[root@k8s-master01 newnfs]# kubectl apply  -f pvc.yaml 
persistentvolumeclaim/storage-pvc created

查看是否动态生成了pv,pvc是否创建成功,并和pv绑定

bash 复制代码
[root@k8s-master01 newnfs]# kubectl -n newnfs get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
storage-pvc   Bound    pvc-a56945be-6150-4591-b3e0-5e4a698b8e3a   1Mi        RWO            nfs-storage    8m8s
[root@k8s-master01 newnfs]# kubectl -n newnfs get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
pvc-a56945be-6150-4591-b3e0-5e4a698b8e3a   1Mi        RWO            Delete           Bound    newnfs/storage-pvc   nfs-storage             26m

通过上面可以看到storage-pvc这个pvc已经成功创建了,绑定的pv是pvc-a56945be-6150-4591-b3e0-5e4a698b8e3a,这个pv是由storageclass调用nfs provisioner自动生成的。
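可以再创建一个pod挂载这个PVC,验证动态生成的pv能正常使用,示意如下(namespace与claimName沿用上文):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-sc-pod
  namespace: newnfs
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: storage-pvc    # 上一步创建的PVC
```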

步骤总结:

1、供应商:创建一个nfs provisioner

2、创建storageclass,storageclass指定刚才创建的供应商

3、创建pvc,这个pvc指定storageclass

**报错修复**:

bash 复制代码
[root@k8s-master01 manifests]# kubectl -n newnfs get pvc
NAME          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
storage-pvc   Pending
[root@k8s-master01 manifests]# kubectl -n newnfs logs nfs-client-provisioner-5486f75d5-qjjnh
E0131 09:17:58.845719       1 controller.go:766] Unexpected error getting claim reference to claim "default/test-claim1": selfLink was empty, can't make reference

bash 复制代码
[root@k8s-master01 manifests]# cd /etc/kubernetes/manifests
[root@k8s-master01 manifests]# cat kube-apiserver.yaml 
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.115.161:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.115.161
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.10.0.0/16
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --feature-gates=RemoveSelfLink=false    ###增加该行,1.20版本以后
    image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.115.161
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 192.168.115.161
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 192.168.115.161
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
status: {}