Using Ceph with Kubernetes (k8s)

Get the Ceph cluster information and the admin user's key

# Get the cluster information
[ceph: root@ceph1 /]# ceph mon dump
epoch 3
fsid 0731be72-c206-11ef-82cf-000c29044707  # you will need this ID later (as the clusterID)
last_changed 2024-12-24T15:06:23.221115+0000
created 2024-12-24T14:51:53.318298+0000
min_mon_release 16 (pacific)
election_strategy: 1
0: [v2:192.168.1.121:3300/0,v1:192.168.1.121:6789/0] mon.ceph1
1: [v2:192.168.1.123:3300/0,v1:192.168.1.123:6789/0] mon.ceph2
2: [v2:192.168.1.125:3300/0,v1:192.168.1.125:6789/0] mon.ceph3
dumped monmap epoch 3

# Get the admin user's key
[ceph: root@ceph1 /]# ceph auth get-key client.admin ;echo
AQCIympnrjJoDBAAzrX5BVT2oGcQ0GKc4CoPjA==
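
Using client.admin is fine for a lab setup. In production you would normally create a dedicated, least-privileged Ceph user for the CSI driver instead; a minimal sketch (the user name "kubernetes" and the pool name "test" are assumptions, adjust to your pool):

ceph auth get-or-create client.kubernetes \
    mon 'profile rbd' \
    osd 'profile rbd pool=test' \
    mgr 'profile rbd pool=test'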

Download and import the images

Download the required images ahead of time, so that container startup is not held up by slow or failed image pulls.

You can download them on one node and then copy the images to the other nodes.
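
A minimal sketch of doing this with containerd's ctr tool (assumes containerd as the runtime and a node named k8s-node1; adjust image references to your environment):

# pull on the node that can reach the registry (--plain-http assumes an HTTP registry; drop it for TLS)
ctr -n k8s.io images pull --plain-http harbor:80/ceph/cephcsi:canary
# save to a tarball, copy it out, and import on the other nodes
ctr -n k8s.io images export cephcsi.tar harbor:80/ceph/cephcsi:canary
scp cephcsi.tar k8s-node1:/root/
ssh k8s-node1 "ctr -n k8s.io images import /root/cephcsi.tar"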

Create the Ceph CSI provisioner

Create a ceph directory; all of the YAML files that follow will be kept there.

[root@k8s-master ~]# mkdir ceph && cd ceph
Create secret.yaml
[root@k8s-master ceph]# cat secret.yaml 
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
stringData:
  # Key values correspond to a user name and its key, as defined in the
  # ceph cluster. User ID should have required access to the 'pool'
  # specified in the storage class
  userID: admin
  userKey: AQCIympnrjJoDBAAzrX5BVT2oGcQ0GKc4CoPjA==
[root@k8s-master ceph]# kubectl create -f secret.yaml 
secret/csi-rbd-secret created
[root@k8s-master ceph]# kubectl get -f secret.yaml 
NAME             TYPE     DATA   AGE
csi-rbd-secret   Opaque   2      6s
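
As a sanity check, the stored key can be decoded and compared with the output of ceph auth get-key:

kubectl get secret csi-rbd-secret -o jsonpath='{.data.userKey}' | base64 -d; echo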

Create csi-config-map.yaml; the clusterID is the fsid and the monitors list comes from the "ceph mon dump" output above
[root@k8s-master ceph]# cat csi-config-map.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: "ceph-csi-config"
data:
  config.json: |-
    [
      {
        "clusterID": "0731be72-c206-11ef-82cf-000c29044707",
        "monitors": [
          "192.168.1.121:6789",
          "192.168.1.123:6789",
          "192.168.1.125:6789"
        ]
      }
    ]
[root@k8s-master ceph]# kubectl create -f csi-config-map.yaml 
configmap/ceph-csi-config created
[root@k8s-master ceph]# kubectl get -f csi-config-map.yaml 
NAME              DATA   AGE
ceph-csi-config   1      37s
Create ceph-conf.yaml
[root@k8s-master ceph]# cat ceph-conf.yaml 
---
# This is a sample configmap that helps define a Ceph configuration as required
# by the CSI plugins.

# Sample ceph.conf available at
# https://github.com/ceph/ceph/blob/master/src/sample.ceph.conf Detailed
# documentation is available at
# https://docs.ceph.com/en/latest/rados/configuration/ceph-conf/
apiVersion: v1
kind: ConfigMap
data:
  ceph.conf: |
    [global]
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx

  # keyring is a required key and its value should be empty
  keyring: |
metadata:
  name: ceph-config
[root@k8s-master ceph]# kubectl create -f ceph-conf.yaml 
configmap/ceph-config created
[root@k8s-master ceph]# kubectl get -f ceph-conf.yaml 
NAME          DATA   AGE
ceph-config   2      5s
Create csi-kms-config-map.yaml (an empty KMS configuration; the CSI pods mount this ConfigMap even when volume encryption is not used)
cat > csi-kms-config-map.yaml <<EOF
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    {}
metadata:
  name: ceph-csi-encryption-kms-config
EOF
[root@k8s-master ceph]# kubectl apply -f csi-kms-config-map.yaml 
configmap/ceph-csi-encryption-kms-config created
Create the RBAC objects for the node plugin and the provisioner
[root@k8s-master ceph]# cat csi-nodeplugin-rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-csi-nodeplugin
  # replace with non-default namespace name
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-nodeplugin
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
  # allow to read Vault Token and connection options from the Tenants namespace
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["serviceaccounts/token"]
    verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-nodeplugin
subjects:
  - kind: ServiceAccount
    name: rbd-csi-nodeplugin
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: ClusterRole
  name: rbd-csi-nodeplugin
  apiGroup: rbac.authorization.k8s.io
[root@k8s-master ceph]# cat csi-provisioner-rbac.yaml 
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-csi-provisioner
  # replace with non-default namespace name
  namespace: default

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-external-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "update", "delete", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims/status"]
    verbs: ["update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get", "list", "watch", "update", "patch", "create"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots/status"]
    verbs: ["get", "list", "patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["create", "get", "list", "watch", "update", "delete", "patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents/status"]
    verbs: ["update", "patch"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts/token"]
    verbs: ["create"]
  - apiGroups: ["groupsnapshot.storage.k8s.io"]
    resources: ["volumegroupsnapshotclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["groupsnapshot.storage.k8s.io"]
    resources: ["volumegroupsnapshotcontents"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["groupsnapshot.storage.k8s.io"]
    resources: ["volumegroupsnapshotcontents/status"]
    verbs: ["update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-provisioner-role
subjects:
  - kind: ServiceAccount
    name: rbd-csi-provisioner
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: ClusterRole
  name: rbd-external-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # replace with non-default namespace name
  namespace: default
  name: rbd-external-provisioner-cfg
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-provisioner-role-cfg
  # replace with non-default namespace name
  namespace: default
subjects:
  - kind: ServiceAccount
    name: rbd-csi-provisioner
    # replace with non-default namespace name
    namespace: default
roleRef:
  kind: Role
  name: rbd-external-provisioner-cfg
  apiGroup: rbac.authorization.k8s.io
[root@k8s-master ceph]# kubectl create -f csi-nodeplugin-rbac.yaml -f csi-provisioner-rbac.yaml 
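
You can confirm the service accounts and bindings exist (output omitted):

kubectl get serviceaccount rbd-csi-nodeplugin rbd-csi-provisioner
kubectl get clusterrolebinding rbd-csi-nodeplugin rbd-csi-provisioner-role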
Create csi-rbdplugin-provisioner.yaml (the provisioner Deployment with its sidecar containers, plus a metrics Service)
[root@k8s-master ceph]#  cat csi-rbdplugin-provisioner.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: csi-rbdplugin-provisioner
  # replace with non-default namespace name
  namespace: default
  labels:
    app: csi-metrics
spec:
  selector:
    app: csi-rbdplugin-provisioner
  ports:
    - name: http-metrics
      port: 8080
      protocol: TCP
      targetPort: 8680

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: csi-rbdplugin-provisioner
  # replace with non-default namespace name
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: csi-rbdplugin-provisioner
  template:
    metadata:
      labels:
        app: csi-rbdplugin-provisioner
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - csi-rbdplugin-provisioner
              topologyKey: "kubernetes.io/hostname"
      serviceAccountName: rbd-csi-provisioner
      priorityClassName: system-cluster-critical
      containers:
        - name: csi-rbdplugin
          # for stable functionality replace canary with latest release version
          image: harbor:80/ceph/cephcsi:canary
          imagePullPolicy: IfNotPresent
          args:
            - "--nodeid=$(NODE_ID)"
            - "--type=rbd"
            - "--controllerserver=true"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--csi-addons-endpoint=$(CSI_ADDONS_ENDPOINT)"
            - "--v=5"
            - "--drivername=rbd.csi.ceph.com"
            - "--pidlimit=-1"
            - "--rbdhardmaxclonedepth=8"
            - "--rbdsoftmaxclonedepth=4"
            - "--enableprofiling=false"
            - "--setmetadata=true"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            # - name: KMS_CONFIGMAP_NAME
            #   value: encryptionConfig
            - name: CSI_ENDPOINT
              value: unix:///csi/csi-provisioner.sock
            - name: CSI_ADDONS_ENDPOINT
              value: unix:///csi/csi-addons.sock
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - mountPath: /dev
              name: host-dev
            - mountPath: /sys
              name: host-sys
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - name: ceph-csi-config
              mountPath: /etc/ceph-csi-config/
            - name: ceph-csi-encryption-kms-config
              mountPath: /etc/ceph-csi-encryption-kms-config/
            - name: keys-tmp-dir
              mountPath: /tmp/csi/keys
            - name: ceph-config
              mountPath: /etc/ceph/
            - name: oidc-token
              mountPath: /run/secrets/tokens
              readOnly: true
        - name: csi-provisioner
          image: harbor:80/ceph/csi-provisioner:v5.0.1
          imagePullPolicy: IfNotPresent
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=1"
            - "--timeout=150s"
            - "--retry-interval-start=500ms"
            - "--leader-election=true"
            - "--feature-gates=HonorPVReclaimPolicy=true"
            - "--prevent-volume-mode-conversion=true"
            # if fstype is not specified in storageclass, ext4 is default
            - "--default-fstype=ext4"
            - "--extra-create-metadata=true"
            - "--immediate-topology=false"
            - "--http-endpoint=$(POD_IP):8090"
          env:
            - name: ADDRESS
              value: unix:///csi/csi-provisioner.sock
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          ports:
            - containerPort: 8090
              name: http-endpoint
              protocol: TCP
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-snapshotter
          image: harbor:80/ceph/csi-snapshotter:v8.2.0
          imagePullPolicy: IfNotPresent
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=1"
            - "--timeout=150s"
            - "--leader-election=true"
            - "--extra-create-metadata=true"
            - "--feature-gates=CSIVolumeGroupSnapshot=true"
            - "--http-endpoint=$(POD_IP):8092"
          env:
            - name: ADDRESS
              value: unix:///csi/csi-provisioner.sock
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          ports:
            - containerPort: 8092
              name: http-endpoint
              protocol: TCP
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-attacher
          image: harbor:80/ceph/csi-attacher:v4.6.1
          imagePullPolicy: IfNotPresent
          args:
            - "--v=1"
            - "--csi-address=$(ADDRESS)"
            - "--leader-election=true"
            - "--retry-interval-start=500ms"
            - "--default-fstype=ext4"
            - "--http-endpoint=$(POD_IP):8093"
          env:
            - name: ADDRESS
              value: /csi/csi-provisioner.sock
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          ports:
            - containerPort: 8093
              name: http-endpoint
              protocol: TCP
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-resizer
          image: harbor:80/ceph/csi-resizer:v1.11.1
          imagePullPolicy: IfNotPresent
          args:
            - "--csi-address=$(ADDRESS)"
            - "--v=1"
            - "--timeout=150s"
            - "--leader-election"
            - "--retry-interval-start=500ms"
            - "--handle-volume-inuse-error=false"
            - "--feature-gates=RecoverVolumeExpansionFailure=true"
            - "--http-endpoint=$(POD_IP):8091"
          env:
            - name: ADDRESS
              value: unix:///csi/csi-provisioner.sock
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          ports:
            - containerPort: 8091
              name: http-endpoint
              protocol: TCP
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-rbdplugin-controller
          # for stable functionality replace canary with latest release version
          image: harbor:80/ceph/cephcsi:canary
          imagePullPolicy: IfNotPresent
          args:
            - "--type=controller"
            - "--v=5"
            - "--drivername=rbd.csi.ceph.com"
            - "--drivernamespace=$(DRIVER_NAMESPACE)"
            - "--setmetadata=true"
          env:
            - name: DRIVER_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: ceph-csi-config
              mountPath: /etc/ceph-csi-config/
            - name: keys-tmp-dir
              mountPath: /tmp/csi/keys
            - name: ceph-config
              mountPath: /etc/ceph/
        - name: liveness-prometheus
          image: harbor:80/ceph/cephcsi:canary
          imagePullPolicy: IfNotPresent
          args:
            - "--type=liveness"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--metricsport=8680"
            - "--metricspath=/metrics"
            - "--polltime=60s"
            - "--timeout=3s"
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi-provisioner.sock
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          ports:
            - containerPort: 8680
              name: http-metrics
              protocol: TCP
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
      volumes:
        - name: host-dev
          hostPath:
            path: /dev
        - name: host-sys
          hostPath:
            path: /sys
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: socket-dir
          emptyDir: {
            medium: "Memory"
          }
        - name: ceph-config
          configMap:
            name: ceph-config
        - name: ceph-csi-config
          configMap:
            name: ceph-csi-config
        - name: ceph-csi-encryption-kms-config
          configMap:
            name: ceph-csi-encryption-kms-config
        - name: keys-tmp-dir
          emptyDir: {
            medium: "Memory"
          }
        - name: oidc-token
          projected:
            sources:
              - serviceAccountToken:
                  path: oidc-token
                  expirationSeconds: 3600
                  audience: ceph-csi-kms
Create csi-rbdplugin.yaml (the per-node DaemonSet, plus a liveness-metrics Service)
[root@k8s-master ceph]#  cat csi-rbdplugin.yaml
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-rbdplugin
  # replace with non-default namespace name
  namespace: default
spec:
  selector:
    matchLabels:
      app: csi-rbdplugin
  template:
    metadata:
      labels:
        app: csi-rbdplugin
    spec:
      serviceAccountName: rbd-csi-nodeplugin
      hostNetwork: true
      hostPID: true
      priorityClassName: system-node-critical
      # to use e.g. Rook orchestrated cluster, and mons' FQDN is
      # resolved through k8s service, set dns policy to cluster first
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: csi-rbdplugin
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          # for stable functionality replace canary with latest release version
          image: harbor:80/ceph/cephcsi:canary
          imagePullPolicy: IfNotPresent
          args:
            - "--nodeid=$(NODE_ID)"
            - "--pluginpath=/var/lib/kubelet/plugins"
            - "--stagingpath=/var/lib/kubelet/plugins/kubernetes.io/csi/"
            - "--type=rbd"
            - "--nodeserver=true"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--csi-addons-endpoint=$(CSI_ADDONS_ENDPOINT)"
            - "--v=5"
            - "--drivername=rbd.csi.ceph.com"
            - "--enableprofiling=false"
            # If topology based provisioning is desired, configure required
            # node labels representing the nodes topology domain
            # and pass the label names below, for CSI to consume and advertise
            # its equivalent topology domain
            # - "--domainlabels=failure-domain/region,failure-domain/zone"
            #
            # Options to enable read affinity.
            # If enabled Ceph CSI will fetch labels from kubernetes node and
            # pass `read_from_replica=localize,crush_location=type:value` during
            # rbd map command. refer:
            # https://docs.ceph.com/en/latest/man/8/rbd/#kernel-rbd-krbd-options
            # for more details.
            # - "--enable-read-affinity=true"
            # - "--crush-location-labels=topology.io/zone,topology.io/rack"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            # - name: KMS_CONFIGMAP_NAME
            #   value: encryptionConfig
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: CSI_ADDONS_ENDPOINT
              value: unix:///csi/csi-addons.sock
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - mountPath: /dev
              name: host-dev
            - mountPath: /sys
              name: host-sys
            - mountPath: /run/mount
              name: host-mount
            - mountPath: /etc/selinux
              name: etc-selinux
              readOnly: true
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - name: ceph-csi-config
              mountPath: /etc/ceph-csi-config/
            - name: ceph-csi-encryption-kms-config
              mountPath: /etc/ceph-csi-encryption-kms-config/
            - name: plugin-dir
              mountPath: /var/lib/kubelet/plugins
              mountPropagation: "Bidirectional"
            - name: mountpoint-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
            - name: keys-tmp-dir
              mountPath: /tmp/csi/keys
            - name: ceph-logdir
              mountPath: /var/log/ceph
            - name: ceph-config
              mountPath: /etc/ceph/
            - name: oidc-token
              mountPath: /run/secrets/tokens
              readOnly: true
        - name: driver-registrar
          # This is necessary only for systems with SELinux, where
          # non-privileged sidecar containers cannot access unix domain socket
          # created by privileged CSI driver container.
          securityContext:
            privileged: true
            allowPrivilegeEscalation: true
          image: harbor:80/ceph/csi-node-driver-registrar:v2.11.1
          imagePullPolicy: IfNotPresent
          args:
            - "--v=1"
            - "--csi-address=/csi/csi.sock"
            - "--kubelet-registration-path=/var/lib/kubelet/plugins/rbd.csi.ceph.com/csi.sock"
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
        - name: liveness-prometheus
          securityContext:
            privileged: true
            allowPrivilegeEscalation: true
          image: harbor:80/ceph/cephcsi:canary
          imagePullPolicy: IfNotPresent
          args:
            - "--type=liveness"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--metricsport=8680"
            - "--metricspath=/metrics"
            - "--polltime=60s"
            - "--timeout=3s"
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/rbd.csi.ceph.com
            type: DirectoryOrCreate
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/plugins
            type: Directory
        - name: mountpoint-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: DirectoryOrCreate
        - name: ceph-logdir
          hostPath:
            path: /var/log/ceph
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: Directory
        - name: host-dev
          hostPath:
            path: /dev
        - name: host-sys
          hostPath:
            path: /sys
        - name: etc-selinux
          hostPath:
            path: /etc/selinux
        - name: host-mount
          hostPath:
            path: /run/mount
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: ceph-config
          configMap:
            name: ceph-config
        - name: ceph-csi-config
          configMap:
            name: ceph-csi-config
        - name: ceph-csi-encryption-kms-config
          configMap:
            name: ceph-csi-encryption-kms-config
        - name: keys-tmp-dir
          emptyDir: {
            medium: "Memory"
          }
        - name: oidc-token
          projected:
            sources:
              - serviceAccountToken:
                  path: oidc-token
                  expirationSeconds: 3600
                  audience: ceph-csi-kms
---
# This is a service to expose the liveness metrics
apiVersion: v1
kind: Service
metadata:
  name: csi-metrics-rbdplugin
  # replace with non-default namespace name
  namespace: default
  labels:
    app: csi-metrics
spec:
  ports:
    - name: http-metrics
      port: 8080
      protocol: TCP
      targetPort: 8680
  selector:
    app: csi-rbdplugin
[root@k8s-master ceph]# kubectl create -f csi-rbdplugin-provisioner.yaml 
[root@k8s-master ceph]# kubectl create -f csi-rbdplugin.yaml 

Change the csi-rbdplugin-provisioner Deployment to 2 replicas. The manifest defaults to 3 replicas with required pod anti-affinity (at most one pod per node), so on a cluster with only two schedulable nodes the third replica would never be scheduled.

[root@k8s-master ceph]# kubectl edit deployment csi-rbdplugin-provisioner
deployment.apps/csi-rbdplugin-provisioner edited
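
Equivalently, the Deployment can be scaled without opening an editor:

kubectl scale deployment csi-rbdplugin-provisioner --replicas=2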

Check the pods; both the provisioner and node-plugin pods should be in the Running state.

[root@k8s-master ceph]# kubectl get pods 
NAME                                         READY   STATUS    RESTARTS   AGE
csi-rbdplugin-7k6qz                          3/3     Running   0          104s
csi-rbdplugin-provisioner-785bfdc896-blrbg   7/7     Running   0          6m9s
csi-rbdplugin-provisioner-785bfdc896-dsq5x   7/7     Running   0          6m9s
csi-rbdplugin-v6pg7                          3/3     Running   0          104s

Create the StorageClass
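
The pool referenced by the StorageClass must already exist on the Ceph side. If it does not, a minimal sketch to create and initialize an RBD pool (the name "test" matches the pool used below; 64 placement groups is an arbitrary starting point):

ceph osd pool create test 64
rbd pool init test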

cat > ceph-sc.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: csi-rbd-sc     # StorageClass name
provisioner: rbd.csi.ceph.com   # CSI driver name
parameters:
   clusterID: 0731be72-c206-11ef-82cf-000c29044707    # Ceph cluster fsid (from "ceph mon dump")
   pool: test      # RBD pool (must already exist in the cluster)
   imageFeatures: layering   # RBD image features
   csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
   csi.storage.k8s.io/provisioner-secret-namespace: default
   csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
   csi.storage.k8s.io/controller-expand-secret-namespace: default
   csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
   csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete   # reclaim policy for dynamically created PVs
allowVolumeExpansion: true   # allow PVCs to be expanded later
mountOptions:           # mount options applied to PVs created by this StorageClass
   - discard
EOF
[root@k8s-master ceph]# kubectl create -f ceph-sc.yaml 
storageclass.storage.k8s.io/csi-rbd-sc created
[root@k8s-master ceph]# kubectl get -f ceph-sc.yaml 
NAME         PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
csi-rbd-sc   rbd.csi.ceph.com   Delete          Immediate           true                   5s
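
Optionally, mark it as the cluster's default StorageClass so that PVCs which omit storageClassName will use it:

kubectl patch storageclass csi-rbd-sc -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'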

Create a PVC

cat > ceph-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-pvc     # PVC name
spec:
  accessModes:
    - ReadWriteOnce     # access mode
  resources:
    requests:
      storage: 1Gi      # requested size
  storageClassName: csi-rbd-sc
EOF
[root@k8s-master ceph]# kubectl create -f ceph-pvc.yaml 
persistentvolumeclaim/ceph-pvc created
[root@k8s-master ceph]# kubectl get -f ceph-pvc.yaml 
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
ceph-pvc   Bound    pvc-2b838204-18e6-41ee-b8e1-bd72bb01bea7   1Gi        RWO            csi-rbd-sc     <unset>                 5s

Create a pod that uses the Ceph storage

cat > ceph-pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod
spec:
  containers:
  - name: ceph-ng
    image: nginx:1.23.2
    volumeMounts:
    - name: ceph-mnt
      mountPath: /mnt
      readOnly: false
  volumes:
  - name: ceph-mnt
    persistentVolumeClaim:
      claimName: ceph-pvc
EOF
[root@k8s-master ceph]# kubectl create -f ceph-pod.yaml 
pod/ceph-pod created
[root@k8s-master ceph]# kubectl get -f ceph-pod.yaml 
NAME       READY   STATUS    RESTARTS   AGE
ceph-pod   1/1     Running   0          26s

Check the PV

[root@k8s-master ceph]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-2b838204-18e6-41ee-b8e1-bd72bb01bea7   1Gi        RWO            Delete           Bound    default/ceph-pvc   csi-rbd-sc     <unset>                          2m17s
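
Because the StorageClass sets allowVolumeExpansion: true, the volume can later be grown in place by raising the PVC's request, for example:

kubectl patch pvc ceph-pvc -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'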

Check the RBD image on the Ceph side

[ceph: root@ceph1 /]# rbd ls test   
csi-vol-5a376b0e-53dc-4458-b71b-6d4cde486f6e
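
To inspect the image's size and enabled features:

rbd info test/csi-vol-5a376b0e-53dc-4458-b71b-6d4cde486f6e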

Check the mount inside the pod

[root@k8s-master ceph]# kubectl exec -it ceph-pod -- /bin/sh
# df -Th
Filesystem              Type     Size  Used Avail Use% Mounted on
overlay                 overlay   50G  8.5G   42G  17% /
tmpfs                   tmpfs     64M     0   64M   0% /dev
tmpfs                   tmpfs    4.9G     0  4.9G   0% /sys/fs/cgroup
/dev/rbd0               ext4     974M   24K  958M   1% /mnt
shm                     tmpfs     64M     0   64M   0% /dev/shm
/dev/mapper/centos-root xfs       50G  8.5G   42G  17% /etc/hosts
tmpfs                   tmpfs    9.7G   12K  9.7G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                   tmpfs    4.9G     0  4.9G   0% /proc/acpi
tmpfs                   tmpfs    4.9G     0  4.9G   0% /proc/scsi
tmpfs                   tmpfs    4.9G     0  4.9G   0% /sys/firmware
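
A quick end-to-end write test (the file name is arbitrary):

kubectl exec ceph-pod -- sh -c 'echo hello-ceph > /mnt/test.txt && cat /mnt/test.txt'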