Kubernetes Cloud-Native Storage with Rook Ceph: A Practical Exploration

Besides manually deploying a standalone Ceph cluster and wiring it up to Kubernetes, Rook Ceph supports deploying a Ceph cluster directly on a Kubernetes cluster.

With the Rook Ceph cloud-native storage orchestration platform, you can run highly available Ceph storage inside the Kubernetes cluster and provide block, object, and file storage services to Kubernetes applications.

Rook high-availability architecture:

1. Preparations

1.1 Prerequisites

1. Kubernetes version

Rook can be installed on any existing Kubernetes cluster as long as the cluster meets the minimum version requirement and Rook is granted the required privileges. The earlier Rook v1.9.7 release supported Kubernetes v1.17 or higher; the current v1.15 release supports Kubernetes v1.25 through v1.30.

2. CPU architecture

Supported architectures: amd64 / x86_64 and arm64.

3. Ceph deployment prerequisites

To configure a Ceph storage cluster, at least one of the following types of local storage is required:

  • Raw devices (no partitions or formatted filesystem)
  • Raw partitions (no formatted filesystem)
  • LVM logical volumes (no formatted filesystem)
  • Persistent volumes provided by a block-mode storage class

Use the following command to confirm whether a partition or device already has a filesystem:

shell
root@k8s-1:~# lsblk -f
NAME   FSTYPE   LABEL    UUID                                 FSAVAIL FSUSE% MOUNTPOINT                          
vda                                                                          
├─vda1                                                                       
└─vda2 ext4              2ec0411c-1071-4316-bed7-6f0afdf54814     22G    39% /
vdb                                                                          
vdc 

If the FSTYPE field is not empty, the corresponding device already carries a filesystem. In this example, vdb and vdc are available to Rook, while vda and its partitions already have a filesystem and cannot be used.

If you need to wipe an existing disk for use by Ceph, use the command below (be careful in production):

bash
# yum install gdisk
sgdisk --zap-all /dev/sdd
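If the disk previously held Ceph or LVM data, a more thorough wipe may be needed. A minimal sketch following common practice (/dev/vdb is an example device; adjust to your environment):

bash
# remove filesystem/RAID/LVM signatures
wipefs --all /dev/vdb
# zero the first 100 MB to clear residual metadata
dd if=/dev/zero of=/dev/vdb bs=1M count=100 oflag=direct,dsync
# re-read the partition table
partprobe /dev/vdb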
4. LVM requirements

Ceph OSDs depend on LVM in the following cases:

  • Encryption is enabled (encryptedDevice: "true" in the cluster CR)
  • A metadata device is specified
  • osdsPerDevice is greater than 1

OSDs do not require LVM in the following cases:

  • OSDs are created on raw devices or partitions
  • OSDs are created on PVCs using storageClassDeviceSets

If LVM is required, it must be available on the hosts that will run OSDs. Some Linux distributions do not ship the lvm2 package; it is required on all storage nodes of the Kubernetes cluster in order to run Ceph OSDs. Without it, even if Rook successfully creates the Ceph OSDs, the OSD pods on a node will fail to start after that node reboots. Install LVM with your distribution's package manager, for example:

CentOS:

shell
sudo yum install -y lvm2

Ubuntu:

shell
sudo apt-get install -y lvm2
5. Kernel requirements
RBD

Ceph requires a Linux kernel with the RBD module built in. Many recent Linux distributions include this module, but not all. Test your Kubernetes nodes by running modprobe rbd. If the rbd module is not found, you need to rebuild the kernel with the rbd module, install a newer kernel, or choose a different Linux distribution.

Load the rbd module on the machines used as storage nodes with the following commands:

bash
# load the rbd and nbd modules
]# modprobe rbd
]# modprobe nbd

# load the rbd and nbd modules automatically at boot
]# echo "rbd" >> /etc/modules-load.d/rook-ceph.conf
]# echo "nbd" >> /etc/modules-load.d/rook-ceph.conf

# check that the rbd module is loaded
]# lsmod | grep rbd

# the expected output looks similar to the following:
]# lsmod | grep rbd
rbd                   106496  0
libceph               327680  1 rbd

Rook's default RBD configuration specifies only the layering feature, for broad compatibility with older kernels. If your Kubernetes nodes run kernel 5.4 or newer, you can enable additional feature flags in the storage class; fast-diff and object-map are particularly useful.

shell
imageFeatures: layering,fast-diff,object-map,deep-flatten,exclusive-lock
CephFS

If you create RWX volumes from a Ceph shared filesystem (CephFS), the recommended minimum kernel version is 4.17. On kernels older than 4.17 the requested PVC size is not enforced; storage quotas are only enforced on newer kernels.

1.2 Deployment environment

| Hostname | IP | OS (kernel) | CPU | Memory | System disk | Data disks | Role |
| --- | --- | --- | --- | --- | --- | --- | --- |
| k8s-1 | 192.168.1.55 | ubuntu 20.04.2, 5.4.0-81-generic | 4 | 8 | 40 | - | KubeSphere/k8s-control-plane |
| k8s-2 | 192.168.1.56 | ubuntu 20.04.2, 5.4.0-81-generic | 4 | 8 | 40 | - | KubeSphere/k8s-control-plane |
| k8s-3 | 192.168.1.57 | ubuntu 20.04.2, 5.4.0-81-generic | 4 | 8 | 40 | - | KubeSphere/k8s-control-plane |
| k8s-4 | 192.168.1.58 | ubuntu 20.04.2, 5.4.0-81-generic | 4 | 8 | 40 | 50Gx2 | k8s-worker/Ceph |
| k8s-5 | 192.168.1.59 | ubuntu 20.04.2, 5.4.0-81-generic | 4 | 8 | 40 | 50Gx2 | k8s-worker/Ceph |
| k8s-6 | 192.168.1.60 | ubuntu 20.04.2, 5.4.0-81-generic | 4 | 8 | 40 | 50Gx2 | k8s-worker/Ceph |

Note

The NoSchedule taint on the master nodes was removed in this article so they can also host test workloads; in production it is recommended to deploy dedicated worker nodes for business applications.
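For reference, removing the control-plane taint can look roughly like this (a sketch; the exact taint key depends on how the cluster was built, kubeadm-based clusters typically use node-role.kubernetes.io/control-plane):

bash
# allow regular workloads to be scheduled onto the control-plane nodes
kubectl taint nodes k8s-1 k8s-2 k8s-3 node-role.kubernetes.io/control-plane:NoSchedule-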

The software versions used in this test environment are:

  • OS: ubuntu 20.04.2 x86_64
  • Kernel: 5.4.0-81-generic
  • Kubernetes: v1.27.4
  • Containerd: 1.6.4
  • Rook: v1.15.2
  • Ceph: 18.2.4 reef (stable)

1.3 Rook deployment plan

To better reflect real production requirements, the following strategies were added when planning and deploying the storage infrastructure:

  • Node expansion: three dedicated nodes are added to the Kubernetes cluster to host the Ceph storage services exclusively, keeping storage operations efficient and stable.
  • Component isolation: all Rook and Ceph components and data volumes are deployed on these dedicated nodes, giving clean isolation and focused management.
  • Node labeling: each storage node gets a dedicated label, e.g. role=storage-node, so that Kubernetes can schedule the Ceph management components onto them. Non-storage nodes get a label such as role=rook-ceph and host the Ceph CSI plugin, so that business pods running on those nodes can consume the persistent storage provided by Ceph.
  • Storage media: each storage node gets two dedicated Ceph data disks, /dev/vdb and /dev/vdc. For best performance these disks are consumed by the Ceph OSDs as raw devices, without partitioning or formatting.

Important notes:

  • The configuration and deployment experience in this article is useful for understanding how Rook-Ceph is installed and operated. It is strongly recommended not to apply the configuration described here directly to any kind of production environment.
  • In production you also need to consider high-performance media such as SSD or NVMe disks, plan failure domains carefully, define a detailed storage-node strategy, and perform thorough system tuning.

2. Node planning

Set node labels according to the plan: the three master nodes host the Ceph CSI plugin (and the test workloads), while the three worker nodes serve as storage nodes.

bash
# labels for the nodes that run business workloads, used to place the Ceph CSI plugin
kubectl label nodes k8s-1 role=rook-ceph
kubectl label nodes k8s-2 role=rook-ceph
kubectl label nodes k8s-3 role=rook-ceph
# check the labels
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl get node k8s-3 --show-labels
NAME    STATUS   ROLES           AGE   VERSION   LABELS
k8s-3   Ready    control-plane   22h   v1.27.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-3,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=,role=rook-ceph

# labels for the worker nodes, which serve as storage nodes; the Ceph management components are also scheduled onto them
kubectl label nodes k8s-4 role=storage-node
kubectl label nodes k8s-5 role=storage-node
kubectl label nodes k8s-6 role=storage-node
# check the labels
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl get node k8s-6 --show-labels
NAME    STATUS   ROLES    AGE   VERSION   LABELS
k8s-6   Ready    <none>   22h   v1.27.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-6,kubernetes.io/os=linux,role=storage-node

Note:

In a test environment the Ceph management components can also run on the k8s control-plane nodes; for production it is recommended to dedicate three nodes with good storage performance to the Ceph cluster. The CSI plugin is deployed only on the business nodes that need storage.

3. Installing and configuring the Rook Ceph Operator

The following deploys Rook Ceph via the Rook Ceph Operator.

3.1 Download the deployment manifests

This article uses v1.15.2, the latest release at the time of writing.

mkdir rook-ceph; cd rook-ceph
wget https://github.com/rook/rook/archive/refs/tags/v1.15.2.tar.gz
tar xvf v1.15.2.tar.gz
cd rook-1.15.2/deploy/examples/

3.2 Modify image addresses (optional)

If access to dockerhub, quay.io, and registry.k8s.io is restricted, download the images Rook-Ceph needs offline, import them into a local registry, and point the deployment at those addresses.

bash
# back up the original operator.yaml
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# cp operator.yaml operator.yaml.bak

# check the default image addresses
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# grep -n "docker.io\|quay.io\|registry.k8s.io" operator.yaml 
130:  # ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.12.2"	# note the leading v in the tag, otherwise the image cannot be pulled
131:  # ROOK_CSI_REGISTRAR_IMAGE: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.11.1"
132:  # ROOK_CSI_RESIZER_IMAGE: "registry.k8s.io/sig-storage/csi-resizer:v1.11.1"
133:  # ROOK_CSI_PROVISIONER_IMAGE: "registry.k8s.io/sig-storage/csi-provisioner:v5.0.1"
134:  # ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1"
135:  # ROOK_CSI_ATTACHER_IMAGE: "registry.k8s.io/sig-storage/csi-attacher:v4.6.1"
513:  # ROOK_CSIADDONS_IMAGE: "quay.io/csiaddons/k8s-sidecar:v0.9.1"
607:          image: docker.io/rook/ceph:v1.15.2

# Uncomment the image lines and replace the registry prefix, using the line numbers found above.
sed -i '130,135s/^.*#/ /g' operator.yaml
sed -i '513,513s/^.*#/ /g' operator.yaml

# replace the registry prefix
sed -i 's#registry.k8s.io#10.210.10.210#g' operator.yaml
sed -i 's#quay.io#10.210.10.210#g' operator.yaml
sed -i 's#docker.io#10.210.10.210#g' operator.yaml

# check the image addresses after the replacement
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# grep -n "10.210.10.210" operator.yaml 
130:  ROOK_CSI_CEPH_IMAGE: "10.210.10.210/cephcsi/cephcsi:v3.12.2"
131:  ROOK_CSI_REGISTRAR_IMAGE: "10.210.10.210/sig-storage/csi-node-driver-registrar:v2.11.1"
132:  ROOK_CSI_RESIZER_IMAGE: "10.210.10.210/sig-storage/csi-resizer:v1.11.1"
133:  ROOK_CSI_PROVISIONER_IMAGE: "10.210.10.210/sig-storage/csi-provisioner:v5.0.1"
134:  ROOK_CSI_SNAPSHOTTER_IMAGE: "10.210.10.210/sig-storage/csi-snapshotter:v8.0.1"
135:  ROOK_CSI_ATTACHER_IMAGE: "10.210.10.210/sig-storage/csi-attacher:v4.6.1"
513:  ROOK_CSIADDONS_IMAGE: "10.210.10.210/csiaddons/k8s-sidecar:v0.9.1"
607:          image: 10.210.10.210/rook/ceph:v1.15.2

Note:

10.210.10.210 above is the address of a local Harbor registry; make sure the corresponding projects contain images of the matching component versions. The container images used in this deployment are:

txt
quay.io/cephcsi/cephcsi:v3.12.2
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.11.1
registry.k8s.io/sig-storage/csi-resizer:v1.11.1
registry.k8s.io/sig-storage/csi-provisioner:v5.0.1
registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1
registry.k8s.io/sig-storage/csi-attacher:v4.6.1
quay.io/csiaddons/k8s-sidecar:v0.9.1
docker.io/rook/ceph:v1.15.2
quay.io/ceph/ceph:v18.2.4

You can pull the images above in advance, re-tag them, and push them to the internal registry.
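A minimal sketch of that mirroring step (assuming Docker is available on the build host and 10.210.10.210 is the internal registry):

bash
REGISTRY=10.210.10.210
for img in \
  quay.io/cephcsi/cephcsi:v3.12.2 \
  registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.11.1 \
  registry.k8s.io/sig-storage/csi-resizer:v1.11.1 \
  registry.k8s.io/sig-storage/csi-provisioner:v5.0.1 \
  registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1 \
  registry.k8s.io/sig-storage/csi-attacher:v4.6.1 \
  quay.io/csiaddons/k8s-sidecar:v0.9.1 \
  docker.io/rook/ceph:v1.15.2 \
  quay.io/ceph/ceph:v18.2.4; do
    # drop the source registry prefix and keep the project/name:tag part
    target="${REGISTRY}/${img#*/}"
    docker pull "${img}"
    docker tag "${img}" "${target}"
    docker push "${target}"
done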

3.3 Custom configuration

Edit operator.yaml and adjust the affinity-related settings below so they match the labels set earlier:

bash
# all rook-ceph management components are scheduled onto nodes with this label
CSI_PROVISIONER_NODE_AFFINITY: "role=storage-node"

# the other k8s nodes get the Ceph CSI plugin
CSI_PLUGIN_NODE_AFFINITY: "role=rook-ceph"

3.4 Deploy the Rook Operator

bash
# deploy the Rook operator
kubectl create -f crds.yaml -f common.yaml -f operator.yaml

# check the rook-ceph-operator pod status
kubectl -n rook-ceph get pod -o wide

root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl -n rook-ceph get pod -o wide
rook-ceph-operator-b86bf6d58-t24kr  1/1     Running     0    31m   172.25.173.11    k8s-5   <none>   <none>
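Optionally, wait for the operator to become Ready and confirm the affinity settings took effect (a quick check; rook-ceph-operator-config is the operator settings ConfigMap created from operator.yaml):

bash
kubectl -n rook-ceph wait --for=condition=Ready pod -l app=rook-ceph-operator --timeout=300s
kubectl -n rook-ceph get cm rook-ceph-operator-config -o yaml | grep NODE_AFFINITY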

4. Creating the Ceph cluster

4.1 Modify the cluster configuration file

Edit the cluster configuration file cluster.yaml and add the node affinity configuration: uncomment the block below and fill in the key and values.

bash
placement:
  all:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: role
            operator: In
            values:
            - storage-node

Also in cluster.yaml, configure the storage nodes and the OSD disks.

bash
storage: # cluster level storage configuration and selection
  useAllNodes: false  # be sure to change this for production; the default uses all nodes
  useAllDevices: false # be sure to change this for production; the default uses all disks
  #deviceFilter:
  config:
    storeType: bluestore	# added
  nodes:	# node names and their disks
    - name: "k8s-4"
      devices:
        - name: "vdb"
        - name: "vdc"
    - name: "k8s-5"
      devices:
        - name: "vdb"
        - name: "vdc"
    - name: "k8s-6"
      devices:
        - name: "vdb"
        - name: "vdc"

4.2 Create the Ceph cluster

  1. Create the cluster

    kubectl create -f cluster.yaml

  2. Check resource status and make sure all related pods are Running

bash
$ kubectl -n rook-ceph get pod -o wide

# confirm the pods on the storage nodes are running
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl get pod -n rook-ceph -o wide | grep k8s-[4-6]
csi-cephfsplugin-provisioner-547c7c6d47-bpxqx     6/6     Running     2 (7m41s ago)   12m     172.25.133.206   k8s-6   <none>           <none>
csi-cephfsplugin-provisioner-547c7c6d47-tdbp8     6/6     Running     1 (11m ago)     12m     172.25.38.82     k8s-4   <none>           <none>
csi-rbdplugin-provisioner-574464dbfc-qb5s5        6/6     Running     1 (11m ago)     12m     172.25.38.81     k8s-4   <none>           <none>
csi-rbdplugin-provisioner-574464dbfc-szwnq        6/6     Running     2 (11m ago)     12m     172.25.173.15    k8s-5   <none>           <none>
rook-ceph-crashcollector-k8s-4-5cc6d79567-bb4zj   1/1     Running     0               7m36s   172.25.38.90     k8s-4   <none>           <none>
rook-ceph-crashcollector-k8s-5-77cb9fb554-ccrtd   1/1     Running     0               7m35s   172.25.173.23    k8s-5   <none>           <none>
rook-ceph-crashcollector-k8s-6-67684cd7b9-5rqfs   1/1     Running     0               8m7s    172.25.133.209   k8s-6   <none>           <none>
rook-ceph-exporter-k8s-4-5c7d9db66f-c8sdr         1/1     Running     0               7m32s   172.25.38.91     k8s-4   <none>           <none>
rook-ceph-exporter-k8s-5-98949968d-g9vm9          1/1     Running     0               7m31s   172.25.173.24    k8s-5   <none>           <none>
rook-ceph-exporter-k8s-6-57d8d78887-vwfd5         1/1     Running     0               8m7s    172.25.133.210   k8s-6   <none>           <none>
rook-ceph-mgr-a-69f4c7d775-9t8r8                  3/3     Running     0               8m24s   172.25.173.17    k8s-5   <none>           <none>
rook-ceph-mgr-b-6cd44fbf66-q4vpf                  3/3     Running     0               8m23s   172.25.38.84     k8s-4   <none>           <none>
rook-ceph-mon-a-5465966849-vcptl                  2/2     Running     0               16m     172.25.173.14    k8s-5   <none>           <none>
rook-ceph-mon-b-55d8f7bc5-bk5hq                   2/2     Running     0               15m     172.25.133.208   k8s-6   <none>           <none>
rook-ceph-mon-c-6cdc476964-l6p8r                  2/2     Running     0               10m     172.25.38.83     k8s-4   <none>           <none>
rook-ceph-operator-b86bf6d58-t24kr                1/1     Running     0               16m     172.25.173.11    k8s-5   <none>           <none>
rook-ceph-osd-0-6f8874b447-xknfs                  2/2     Running     0               7m36s   172.25.38.88     k8s-4   <none>           <none>
rook-ceph-osd-1-c7d85858b-7w4hb                   2/2     Running     0               7m35s   172.25.173.22    k8s-5   <none>           <none>
rook-ceph-osd-2-6ddd8d9bb6-c2t72                  2/2     Running     0               7m35s   172.25.133.213   k8s-6   <none>           <none>
rook-ceph-osd-3-6f8dc5577c-glw5x                  2/2     Running     0               7m36s   172.25.38.89     k8s-4   <none>           <none>
rook-ceph-osd-4-6b578f8d-6x75s                    2/2     Running     0               7m35s   172.25.173.21    k8s-5   <none>           <none>
rook-ceph-osd-5-7d8b74f77b-5xzds                  2/2     Running     0               7m35s   172.25.133.212   k8s-6   <none>           <none>
rook-ceph-osd-prepare-k8s-4-rltxh                 0/1     Completed   0               8m1s    172.25.38.87     k8s-4   <none>           <none>
rook-ceph-osd-prepare-k8s-5-rvlrv                 0/1     Completed   0               8m1s    172.25.173.20    k8s-5   <none>           <none>
rook-ceph-osd-prepare-k8s-6-z9n47                 0/1     Completed   0               8m      172.25.133.211   k8s-6   <none>           <none>

# once the osd pods are running, the disks on the storage nodes are configured and FSTYPE shows ceph_bluestore
root@k8s-4:~# lsblk -f       
NAME   FSTYPE         LABEL    UUID                                 FSAVAIL FSUSE% MOUNTPOINT
vda                                                                                
├─vda1                                                                             
└─vda2 ext4                    2ec0411c-1071-4316-bed7-6f0afdf54814   21.9G    39% /
vdb    ceph_bluestore                                                              
vdc    ceph_bluestore    

# pods on the other nodes are also running
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl get pod -n rook-ceph -o wide | grep k8s-[1-3]
csi-cephfsplugin-kl7zn                            3/3     Running     0               12m     192.168.1.59     k8s-3   <none>           <none>
csi-cephfsplugin-v6w7j                            3/3     Running     0               12m     192.168.1.56     k8s-2   <none>           <none>
csi-cephfsplugin-zvgld                            3/3     Running     0               12m     192.168.1.55     k8s-1   <none>           <none>
csi-rbdplugin-mpv46                               3/3     Running     0               12m     192.168.1.55     k8s-1   <none>           <none>
csi-rbdplugin-t7drc                               3/3     Running     0               12m     192.168.1.56     k8s-2   <none>           <none>
csi-rbdplugin-tzd48                               3/3     Running     0               12m     192.168.1.59     k8s-3   <none>           <none>
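Besides the pods, the CephCluster resource reports the overall provisioning state; once the cluster is up its PHASE should show Ready:

bash
kubectl -n rook-ceph get cephcluster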

5. Creating the Rook toolbox

The Rook toolbox is used to manage the Ceph cluster. It ships the ceph client commands, so you can check cluster status, OSD topology, and so on.

bash
# create the toolbox pod
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl create -f toolbox.yaml 
deployment.apps/rook-ceph-tools created

root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl -n rook-ceph get pod | grep tools
rook-ceph-tools-f5cd9fc5b-2lm7t                   1/1     Running     0              3m41s

root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash

# check the ceph cluster status
bash-5.1$ ceph -s
  cluster:
    id:     6e960b5f-ad26-408a-bd48-f4d522d6757b
    health: HEALTH_WARN
            clock skew detected on mon.c, mon.b
 
  services:
    mon: 3 daemons, quorum a,c,b (age 104m)
    mgr: a(active, since 103m), standbys: b
    osd: 6 osds: 6 up (since 101m), 6 in (since 103m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   160 MiB used, 120 GiB / 120 GiB avail
    pgs:     1 active+clean

# check the osd topology
bash-5.1$ ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME       STATUS  REWEIGHT  PRI-AFF
-1         0.11691  root default                             
-5         0.03897      host k8s-4                           
 0    hdd  0.01949          osd.0       up   1.00000  1.00000
 3    hdd  0.01949          osd.3       up   1.00000  1.00000
-3         0.03897      host k8s-5                           
 1    hdd  0.01949          osd.1       up   1.00000  1.00000
 4    hdd  0.01949          osd.4       up   1.00000  1.00000
-7         0.03897      host k8s-6                           
 2    hdd  0.01949          osd.2       up   1.00000  1.00000
 5    hdd  0.01949          osd.5       up   1.00000  1.00000
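The HEALTH_WARN above is a clock-skew warning on the mons, which usually means the node clocks are not synchronized. A quick way to check (assuming chrony is the time service, as on a default Ubuntu 20.04 install):

bash
# details of the warning, from inside the toolbox
ceph health detail
# on each node, check time synchronization
timedatectl status
chronyc tracking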

6. Using the storage

Rook Ceph provides three storage types; see the official guides for details.

Rook allows storage pools to be created and customized through custom resource definitions (CRDs). Both Replicated and Erasure Coded pool types are supported.

6.1 Create and use a block storage pool

Create the block storage pool

You can refer to the official example manifest:

bash
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# grep -v  "^\s*#\|^\s*$" pool.yaml 
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph # namespace:cluster
spec:
  failureDomain: host
  replicated:
    size: 3
    requireSafeReplicaSize: true
  parameters:
    compression_mode: none
  mirroring:
    enabled: false
    mode: image
  statusCheck:
    mirror:
      disabled: false
      interval: 60s

Here we create a Ceph block pool with 3 replicas. Edit the CephBlockPool CR manifest, vi ceph-block-replicapool.yaml:

yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3

Create the CephBlockPool resource:

bash
kubectl create -f ceph-block-replicapool.yaml

Check the pool resource:

bash
kubectl get cephBlockPool -n rook-ceph -o wide
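You can also confirm the pool from the toolbox (pool name as defined above):

bash
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd pool ls detail | grep replicapool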
Create the StorageClass

Edit the StorageClass manifest, vi storageclass-rook-ceph-block.yaml:

yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com # csi-provisioner-name
parameters:
  clusterID: rook-ceph # namespace:cluster
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/fstype: ext4
allowVolumeExpansion: true
reclaimPolicy: Delete
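Apply the StorageClass (file name as above):

bash
kubectl create -f storageclass-rook-ceph-block.yaml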

Check the StorageClass:

bash
root@k8s-1:~/rook-ceph/storage-pool# kubectl get sc
NAME                PROVISIONER                 RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block     rook-ceph.rbd.csi.ceph.com  Delete          Immediate           true                   7m12s

The corresponding example manifests are located under rook-1.15.2/deploy/examples/csi/rbd and can be used directly.

bash
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples/csi/rbd# ll
total 60
drwxrwxr-x 2 root root 4096 Sep 29 09:41 ./
drwxrwxr-x 5 root root 4096 Sep 20 04:21 ../
-rw-rw-r-- 1 root root  489 Sep 20 04:21 pod-ephemeral.yaml
-rw-rw-r-- 1 root root  315 Sep 20 04:21 pod.yaml
-rw-rw-r-- 1 root root  266 Sep 20 04:21 pvc-clone.yaml
-rw-rw-r-- 1 root root  308 Sep 20 04:21 pvc-restore.yaml
-rw-rw-r-- 1 root root  196 Sep 20 04:21 pvc.yaml
-rw-rw-r-- 1 root root  362 Sep 20 04:21 raw-block-pod.yaml
-rw-rw-r-- 1 root root  226 Sep 20 04:21 raw-block-pvc.yaml
-rw-rw-r-- 1 root root  578 Sep 20 04:21 snapshotclass.yaml
-rw-rw-r-- 1 root root  205 Sep 20 04:21 snapshot.yaml
-rw-rw-r-- 1 root root 3984 Sep 20 04:21 storageclass-ec.yaml
-rw-rw-r-- 1 root root 2441 Sep 20 04:21 storageclass-test.yaml
-rw-rw-r-- 1 root root 4278 Sep 20 04:21 storageclass.yaml
Create a test application that uses the block storage

Use the test pod provided by Rook to mount and exercise the block storage.

bash
# PVC manifest
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples/csi/rbd# cat pvc.yaml 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block

# create the PVC
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples/csi/rbd# kubectl create -f pvc.yaml 
persistentvolumeclaim/rbd-pvc created
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples/csi/rbd# kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
rbd-pvc        Bound    pvc-4fd6ca2f-1b19-453f-90d8-3dc428773d9a   1Gi        RWO            rook-ceph-block   69s

# test pod manifest: an nginx web server with the PVC mounted for its html files
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples/csi/rbd# cat pod.yaml 
---
apiVersion: v1
kind: Pod
metadata:
  name: csirbd-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc
        readOnly: false
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples/csi/rbd# kubectl create -f pod.yaml 
pod/csirbd-demo-pod created

root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples/csi/rbd# kubectl get pod -o wide
NAME              READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
csirbd-demo-pod   1/1     Running   0          61s   172.25.13.81   k8s-3   <none>           <none>

Check the RBD volume mounted inside the pod:

bash
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples/csi/rbd# kubectl exec -it csirbd-demo-pod -- df -Th
Filesystem     Type     Size  Used Avail Use% Mounted on
overlay        overlay   40G   16G   22G  42% /
tmpfs          tmpfs     64M     0   64M   0% /dev
tmpfs          tmpfs    3.9G     0  3.9G   0% /sys/fs/cgroup
shm            tmpfs     64M     0   64M   0% /dev/shm
/dev/vda2      ext4      40G   16G   22G  42% /etc/hosts
/dev/rbd0      ext4     974M   24K  958M   1% /var/lib/www/html
tmpfs          tmpfs    7.7G   12K  7.7G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs          tmpfs    3.9G     0  3.9G   0% /proc/acpi
tmpfs          tmpfs    3.9G     0  3.9G   0% /proc/scsi
tmpfs          tmpfs    3.9G     0  3.9G   0% /sys/firmware
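As a quick functional check, write a file into the mounted path and read it back (the file content here is just an example):

bash
kubectl exec csirbd-demo-pod -- sh -c 'echo "hello rbd" > /var/lib/www/html/index.html'
kubectl exec csirbd-demo-pod -- cat /var/lib/www/html/index.html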

6.2 Create and use a file storage pool

The related example manifests are located at:

bash
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# ls -alh filesystem*
-rw-rw-r-- 1 root root 4.5K Sep 20 04:21 filesystem-ec.yaml
-rw-rw-r-- 1 root root 1.2K Sep 20 04:21 filesystem-mirror.yaml
-rw-rw-r-- 1 root root 1.8K Sep 20 04:21 filesystem-test.yaml
-rw-rw-r-- 1 root root 6.9K Sep 20 04:21 filesystem.yaml
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# cd csi/
cephfs/ nfs/    rbd/  

root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# cd csi/cephfs/
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples/csi/cephfs# ll
total 56
drwxrwxr-x 2 root root 4096 Sep 29 10:38 ./
drwxrwxr-x 5 root root 4096 Sep 20 04:21 ../
-rw-rw-r-- 1 root root  616 Sep 20 04:21 groupsnapshotclass.yaml
-rw-rw-r-- 1 root root  360 Sep 20 04:21 groupsnapshot.yaml
-rw-rw-r-- 1 root root 1681 Sep 20 04:21 kube-registry.yaml
-rw-rw-r-- 1 root root  488 Sep 20 04:21 pod-ephemeral.yaml
-rw-rw-r-- 1 root root  321 Sep 20 04:21 pod.yaml
-rw-rw-r-- 1 root root  268 Sep 20 04:21 pvc-clone.yaml
-rw-rw-r-- 1 root root  310 Sep 20 04:21 pvc-restore.yaml
-rw-rw-r-- 1 root root  230 Sep 20 04:21 pvc.yaml
-rw-rw-r-- 1 root root  587 Sep 20 04:21 snapshotclass.yaml
-rw-rw-r-- 1 root root  214 Sep 20 04:21 snapshot.yaml
-rw-rw-r-- 1 root root 1751 Sep 20 04:21 storageclass-ec.yaml
-rw-rw-r-- 1 root root  770 Sep 29 10:38 storageclass.yaml
Create the file storage pool

This time the pool is created with erasure coding:

bash
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# cp filesystem-ec.yaml filesystem-ec.yaml.bak
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# vim filesystem-ec.yaml
# Edit the node affinity section: uncomment it and change the key/values. Mind the YAML syntax and keep the fields aligned and strictly indented, otherwise creating the resource will fail.
    placement:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: mds
              operator: In
              values:
              - mds-node
      topologySpreadConstraints:
      tolerations:
      - key: mds-node
        operator: Exists

# generate a yaml file without the comment lines
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# grep -v  "^\s*#\|^\s*$" filesystem-ec.yaml > myfs.yaml

# label the nodes for mds pod scheduling (the value must match the affinity values above)
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl label node k8s-6 mds=mds-node
node/k8s-6 labeled
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl label node k8s-5 mds=mds-node
node/k8s-5 labeled
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl label node k8s-4 mds=mds-node
node/k8s-4 labeled

# create the filesystem instance
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl create -f myfs.yaml 
cephfilesystem.ceph.rook.io/myfs-ec created
cephfilesystemsubvolumegroup.ceph.rook.io/myfs-csi created

# check the mds pods
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl get pod -n rook-ceph -o wide | grep mds
rook-ceph-mds-myfs-ec-a-85589777d5-7kdvp  2/2 Running   0  11s   172.25.173.36    k8s-5   <none>           <none>
rook-ceph-mds-myfs-ec-b-54c56f4459-v9vtx  2/2 Running   0  10s   172.25.133.236   k8s-6   <none>           <none>

# the related pools are created automatically once the filesystem is created
root@k8s-1:~/rook-ceph/storage-pool# kubectl exec -itn rook-ceph rook-ceph-tools-f5cd9fc5b-2lm7t -- bash
bash-5.1$ ceph osd pool ls
.mgr
replicapool
myfs-ec-metadata
myfs-ec-data0
myfs-ec-erasurecoded

# the k8s resource object
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl get CephFilesystem -n rook-ceph
NAME       ACTIVEMDS   AGE   PHASE
myfs-ec   1           52m   Ready
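The filesystem itself can also be inspected from the toolbox (myfs-ec as created above):

bash
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph fs status myfs-ec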
Create the StorageClass

bash
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples/csi/cephfs# grep -v  "^\s*#\|^\s*$" storageclass-ec.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com # csi-provisioner-name
parameters:
  clusterID: rook-ceph # namespace:cluster
  fsName: myfs-ec	# the filesystem instance
  pool: myfs-ec-erasurecoded	# the data pool
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:

root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples/csi/cephfs# kubectl create -f storageclass-ec.yaml 
storageclass.storage.k8s.io/rook-cephfs created
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples/csi/cephfs# kubectl get sc
NAME                PROVISIONER           RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block     rook-ceph.rbd.csi.ceph.com       Delete          Immediate           true                   3h23m
rook-cephfs         rook-ceph.cephfs.csi.ceph.com    Delete          Immediate           true                   3s
Create a test application that uses the file storage

Use the test pod provided by Rook to mount and exercise the file storage.

bash
# create the PVC and pod
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples/csi/cephfs# kubectl create -f pvc.yaml 
persistentvolumeclaim/cephfs-pvc created
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples/csi/cephfs# kubectl create -f pod.yaml 
pod/csicephfs-demo-pod created

# check the mounts inside the pod
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples/csi/cephfs# kubectl exec -it csicephfs-demo-pod -- df -Th
Filesystem            Type     Size  Used Avail Use% Mounted on
/dev/vda2             ext4      40G   19G   19G  52% /etc/hosts
10.103.253.118:6789,10.101.12.34:6789,10.106.31.157:6789:/volumes/csi/csi-vol-6af1ea76-8b27-4ec3-b844-724d39dee8bd/78776168-fc0b-440a-8ee8-5a87f831dc21 ceph     1.0G     0  1.0G   0% /var/lib/www/html                                                                                   

6.3 Create and use an object storage pool

Create the object store and object gateway
bash
# example manifests for object storage
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# ls -alh object*.yaml
-rw-rw-r-- 1 root root 3.3K Sep 20 04:21 object-a.yaml
-rw-rw-r-- 1 root root  169 Sep 20 04:21 object-bucket-claim-a.yaml
-rw-rw-r-- 1 root root  587 Sep 20 04:21 object-bucket-claim-delete.yaml
-rw-rw-r-- 1 root root  489 Sep 20 04:21 object-bucket-claim-notification.yaml
-rw-rw-r-- 1 root root  587 Sep 20 04:21 object-bucket-claim-retain.yaml
-rw-rw-r-- 1 root root 3.3K Sep 20 04:21 object-b.yaml
-rw-rw-r-- 1 root root 3.6K Sep 20 04:21 object-ec.yaml
-rw-rw-r-- 1 root root  777 Sep 20 04:21 object-external.yaml
-rw-rw-r-- 1 root root 1.9K Sep 20 04:21 object-multisite-pull-realm-test.yaml
-rw-rw-r-- 1 root root 2.0K Sep 20 04:21 object-multisite-pull-realm.yaml
-rw-rw-r-- 1 root root 1.5K Sep 20 04:21 object-multisite-test.yaml
-rw-rw-r-- 1 root root 1.6K Sep 20 04:21 object-multisite.yaml
-rw-rw-r-- 1 root root 6.5K Sep 20 04:21 object-openshift.yaml
-rw-rw-r-- 1 root root 1.5K Sep 20 04:21 object-shared-pools-test.yaml
-rw-rw-r-- 1 root root 1.5K Sep 20 04:21 object-shared-pools.yaml
-rw-rw-r-- 1 root root  685 Sep 20 04:21 object-test.yaml
-rw-rw-r-- 1 root root 1.1K Sep 20 04:21 object-user.yaml
-rw-rw-r-- 1 root root 6.6K Sep 29 13:32 object.yaml

root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# cp object.yaml object.yaml.bak
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# vim object.yaml
# Edit the node affinity section: uncomment it and change the key/values. Mind the YAML syntax and keep the fields aligned and strictly indented, otherwise creating the resource will fail.
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: rgw
            operator: In
            values:
            - rgw-node
    topologySpreadConstraints:
    tolerations:
    - key: rgw-node
      operator: Exists
    podAffinity:
    podAntiAffinity:

root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# grep -v  "^\s*#\|^\s*$" object.yaml > myobject.yaml

# label the nodes for rgw pod scheduling
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl label node k8s-6 rgw=rgw-node
node/k8s-6 labeled
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl label node k8s-5 rgw=rgw-node
node/k8s-5 labeled
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl label node k8s-4 rgw=rgw-node
node/k8s-4 labeled

# create the object store pools and the object gateway
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl create -f myobject.yaml 

root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl -n rook-ceph -o wide get pod -l app=rook-ceph-rgw
NAME           READY   STATUS    RESTARTS   AGE   IP   NODE    NOMINATED NODE   READINESS GATES
rook-ceph-rgw-my-store-a-767c8f8dd9-2zpvl   2/2     Running   0    21m   172.25.173.40   k8s-5   <none>     <none>

root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl -n rook-ceph get svc | grep rgw
rook-ceph-rgw-my-store                   ClusterIP   10.103.162.53    <none>        80/TCP              3m8s

# the related pools
root@k8s-1:~/rook-ceph/storage-pool# kubectl exec -itn rook-ceph rook-ceph-tools-f5cd9fc5b-2lm7t -- bash
bash-5.1$ ceph osd pool ls
...
.rgw.root
my-store.rgw.buckets.non-ec
my-store.rgw.otp
my-store.rgw.buckets.index
my-store.rgw.meta
my-store.rgw.log
my-store.rgw.control
my-store.rgw.buckets.data

# the k8s resource object
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl get CephObjectStore -n rook-ceph
NAME       PHASE   ENDPOINT                                         SECUREENDPOINT   AGE
my-store   Ready   http://rook-ceph-rgw-my-store.rook-ceph.svc:80                    14m
Create the StorageClass
bash
# related manifests
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# ls -alh | grep bucket-
-rw-rw-r-- 1 root root  659 Sep 20 04:21 bucket-notification-endpoint.yaml
-rw-rw-r-- 1 root root 1.2K Sep 20 04:21 bucket-notification.yaml
-rw-rw-r-- 1 root root  847 Sep 20 04:21 bucket-topic.yaml
-rw-rw-r-- 1 root root  169 Sep 20 04:21 object-bucket-claim-a.yaml
-rw-rw-r-- 1 root root  587 Sep 20 04:21 object-bucket-claim-delete.yaml
-rw-rw-r-- 1 root root  489 Sep 20 04:21 object-bucket-claim-notification.yaml
-rw-rw-r-- 1 root root  587 Sep 20 04:21 object-bucket-claim-retain.yaml
-rw-rw-r-- 1 root root  271 Sep 20 04:21 storageclass-bucket-a.yaml
-rw-rw-r-- 1 root root  708 Sep 20 04:21 storageclass-bucket-delete.yaml
-rw-rw-r-- 1 root root  715 Sep 20 04:21 storageclass-bucket-retain.yaml

# create a storage class based on the CephObjectStore created above
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# cat storageclass-bucket-retain.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: rook-ceph-retain-bucket
provisioner: rook-ceph.ceph.rook.io/bucket # driver:namespace:cluster
# set the reclaim policy to retain the bucket when its OBC is deleted
reclaimPolicy: Retain
parameters:
   objectStoreName: my-store # port 80 assumed
   objectStoreNamespace: rook-ceph # namespace:cluster
   # To accommodate brownfield cases reference the existing bucket name here instead
   # of in the ObjectBucketClaim (OBC). In this case the provisioner will grant
   # access to the bucket by creating a new user, attaching it to the bucket, and
   # providing the credentials via a Secret in the namespace of the requesting OBC.
   #bucketName:
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl create -f storageclass-bucket-retain.yaml 
storageclass.storage.k8s.io/rook-ceph-retain-bucket created

root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl get sc
NAME                      PROVISIONER                                          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block           rook-ceph.rbd.csi.ceph.com                           Delete          Immediate           true                   4h11m
rook-ceph-retain-bucket   rook-ceph.ceph.rook.io/bucket                        Retain          Immediate           false                  16s
rook-cephfs               rook-ceph.cephfs.csi.ceph.com                        Delete          Immediate           true                   41m
Create a bucket and test it with s3cmd

Using the rook-ceph-retain-bucket storage class created above, create an Object Bucket Claim (OBC) to request a bucket.

bash
# look at the OBC definition
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# cat object-bucket-claim-retain.yaml 
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-retain-bucket
spec:
  # To create a new bucket specify either `bucketName` or
  # `generateBucketName` here. Both cannot be used. To access
  # an existing bucket the bucket name needs to be defined in
  # the StorageClass referenced here, and both `bucketName` and
  # `generateBucketName` must be omitted in the OBC.
  #bucketName:
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-retain-bucket
  additionalConfig:
    # To set for quota for OBC
    #maxObjects: "1000"
    #maxSize: "2G"

# create the OBC; this also creates the object bucket
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl create -f object-bucket-claim-retain.yaml 
objectbucketclaim.objectbucket.io/ceph-retain-bucket created
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl get obc
NAME                 AGE
ceph-retain-bucket   2m55s

# a Secret and a ConfigMap are created in the same namespace: the Secret holds the credentials for accessing the bucket, and the ConfigMap holds the bucket endpoint information
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl get cm 
NAME                 DATA   AGE
ceph-retain-bucket   5      2m
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl get secret
NAME                 TYPE     DATA   AGE
ceph-retain-bucket   Opaque   2      4m20s

Note

The delete / retain in the example file names refers to the PV reclaim policy; pick the matching file when creating the resources.

Test the object bucket with s3cmd:

bash
# get the bucket URL and the access/secret keys
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl get secret ceph-retain-bucket -oyaml
apiVersion: v1
data:
  AWS_ACCESS_KEY_ID: N0w2TkszTFVLVEgwTTBWUjE0ODU=
  AWS_SECRET_ACCESS_KEY: OW5OZExxUVo5eGZua1VubGtCTjUzbWwwVENsMFdBc3c4YklnRG52VQ==
kind: Secret
metadata:
...

root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl get cm ceph-retain-bucket -oyaml
apiVersion: v1
data:
  BUCKET_HOST: rook-ceph-rgw-my-store.rook-ceph.svc
  BUCKET_NAME: ceph-bkt-4f101ee2-eafa-4c99-8608-03162e0878fa
  BUCKET_PORT: "80"
  BUCKET_REGION: ""
  BUCKET_SUBREGION: ""
kind: ConfigMap
...

# extract the access key and secret key from the secret
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# export AWS_ACCESS_KEY_ID=$(kubectl -n default get secret ceph-retain-bucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# export AWS_SECRET_ACCESS_KEY=$(kubectl -n default get secret ceph-retain-bucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# echo $AWS_ACCESS_KEY_ID
7L6NK3LUKTH0M0VR1485
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# echo $AWS_SECRET_ACCESS_KEY
9nNdLqQZ9xfnkUnlkBN53ml0TCl0WAsw8bIgDnvU

# create the s3cmd configuration file
cat > ~/.s3cfg << EOF
[default]
access_key = 7L6NK3LUKTH0M0VR1485
host_base = rook-ceph-rgw-my-store.rook-ceph.svc
secret_key = 9nNdLqQZ9xfnkUnlkBN53ml0TCl0WAsw8bIgDnvU
use_https = False
EOF

# create a configmap from the configuration file
root@k8s-1:~/rook-ceph# kubectl create configmap s3cmd-config --from-file=/root/.s3cfg
configmap/s3cmd-config created

# create the s3cmd pod and test the connection to the bucket
root@k8s-1:~/rook-ceph# kubectl create -f s3cmd-pod.yaml 
pod/s3cmd-pod created
root@k8s-1:~/rook-ceph# kubectl exec -it s3cmd-pod -- sh
/ # cat /root/.s3cfg 
[default]
access_key = 7L6NK3LUKTH0M0VR1485
host_base = rook-ceph-rgw-my-store.rook-ceph.svc
secret_key = 9nNdLqQZ9xfnkUnlkBN53ml0TCl0WAsw8bIgDnvU
use_https = False
/ # s3cmd ls
2024-09-29 06:06  s3://ceph-bkt-4f101ee2-eafa-4c99-8608-03162e0878fa
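# (sketch) a simple upload/download round trip from inside the pod; the file name is just an example,
# the bucket name is the one listed by `s3cmd ls` above
/ # echo "hello object store" > /tmp/hello.txt
/ # s3cmd put /tmp/hello.txt s3://ceph-bkt-4f101ee2-eafa-4c99-8608-03162e0878fa/
/ # s3cmd get s3://ceph-bkt-4f101ee2-eafa-4c99-8608-03162e0878fa/hello.txt /tmp/hello.down.txt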

The s3cmd-pod manifest is as follows:

yaml
root@k8s-1:~/rook-ceph# cat s3cmd-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: s3cmd-pod
spec:
  containers:
  - name: s3cmd-container
    image: harbor.rd.unicloud.com/s3cmd/s3cmd:latest  # an image that contains s3cmd
    command: [ "sleep", "99999" ]  # keep the pod running so you can exec in and debug
    volumeMounts:
    - name: s3config
      mountPath: /root/  # mount the config file at s3cmd's default config path
  volumes:
  - name: s3config
    configMap:
      name: s3cmd-config
      items:
      - key: .s3cfg
        path: .s3cfg

Note

s3cmd is also available as a standalone binary; see s3cmd/INSTALL.md at master · s3tools/s3cmd (github.com) or my earlier article. Once installed, it can be run directly on a node to connect to the object bucket.

7. Ceph Dashboard

Similar to the Kubernetes Dashboard, Ceph provides a Dashboard for managing and inspecting the cluster from a web UI: overall cluster health, the status of the Mgr, Mon, OSD and other Ceph daemons, pool and PG status, daemon logs, and more.

7.1 Check the ceph-mgr-dashboard information

The cluster configuration file cluster.yaml enables the Dashboard by default, so the Rook Ceph operator turns on the ceph-mgr dashboard module when deploying the cluster.

bash
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl get svc -n rook-ceph
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
rook-ceph-exporter        ClusterIP   10.108.171.16    <none>        9926/TCP            12h
rook-ceph-mgr             ClusterIP   10.107.248.122   <none>        9283/TCP            12h
rook-ceph-mgr-dashboard   ClusterIP   10.101.97.136    <none>        8443/TCP            12h
rook-ceph-mon-a           ClusterIP   10.101.12.34     <none>        6789/TCP,3300/TCP   12h
rook-ceph-mon-b           ClusterIP   10.106.31.157    <none>        6789/TCP,3300/TCP   12h
rook-ceph-mon-c           ClusterIP   10.103.253.118   <none>        6789/TCP,3300/TCP   12h

7.2 Expose the dashboard outside the cluster

The Ceph Dashboard can be reached from outside the K8s cluster via NodePort or Ingress.

bash
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# cat dashboard-external-https.yaml
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph # namespace:cluster
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph # namespace:cluster
spec:
  ports:
    - name: dashboard
      port: 8443
      protocol: TCP
      targetPort: 8443
  selector:
    app: rook-ceph-mgr
    mgr_role: active
    rook_cluster: rook-ceph # namespace:cluster
  sessionAffinity: None
  type: NodePort

root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl create -f dashboard-external-https.yaml 
service/rook-ceph-mgr-dashboard-external-https created
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl get svc -n rook-ceph rook-ceph-mgr-dashboard-external-https
NAME                                     TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
rook-ceph-mgr-dashboard-external-https   NodePort   10.105.83.54   <none>        8443:30669/TCP   46s

7.3 Get the login credentials

Logging in to the Dashboard requires authentication. Rook creates a default user named admin and stores a randomly generated password in a secret named rook-ceph-dashboard-password; retrieve it with the command below.

bash
root@k8s-1:~/rook-ceph/rook-1.15.2/deploy/examples# kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
GY2saqAx$(9\qh|W$>)K

7.4 Log in to the Dashboard

Access the dashboard via any k8s node IP plus the NodePort; in this article the URL is https://192.168.1.55:30669. The default user is admin and the password is obtained with the command above.

Ceph filesystems:

Ceph object gateways:

Note

After logging in, the admin password can be changed on the user management page.

8. References

  1. rook/rook: Storage Orchestration for Kubernetes (github.com)
  2. Rook - Rook Ceph Documentation
  3. Kubernetes 持久化存储之 Rook Ceph 探究 (kubesphere.io)
  4. 在 Kubernetes 中使用 Rook 构建云原生存储环境 (kubesphere.io)