Ceph 14.2.10 on aarch64: mapping an RBD block device from a client outside the cluster

Testing on a machine inside the cluster

Cleaned-up command sequence from the shell history (the history also contained a failed `ceph pool create` — the correct subcommand is `ceph osd pool create` — and a `mount` attempted before `mkfs.xfs`, which fails; both are corrected below):

ceph osd pool create block-pool 64 64
ceph osd pool application enable block-pool rbd
rbd create vdisk1 --size 4G --pool block-pool --image-format 2 --image-feature layering
rbd map block-pool/vdisk1
mkfs.xfs /dev/rbd1
mkdir /mnt/vdisk1
mount /dev/rbd1 /mnt/vdisk1
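Before mapping, it can be worth confirming the image really was created with only the `layering` feature, since the kernel client refuses images with unsupported features. A quick check, assuming the pool and image names used above:

```shell
# Show image size, format, and enabled features;
# for krbd the feature list should contain only "layering"
rbd info block-pool/vdisk1
```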

Directly reusing the in-cluster commands on an outside client (fails)

[root@ceph-client mnt]# rbd map block-pool/vdisk1
unable to get monitor info from DNS SRV with service name: ceph-mon
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: 2023-11-14 15:51:13.809 ffff8d4e7010 -1 failed for service _ceph-mon._tcp
(2) No such file or directory

The error shows the client cannot locate the monitors: there is no local ceph.conf and no DNS SRV records to fall back on.

Specifying the monitor IPs with -m:

[root@ceph-client mnt]# rbd map block-pool/vdisk1 -m 172.17.163.105,172.17.112.206
rbd: sysfs write failed
2023-11-14 15:55:34.873 ffff79be78c0 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
rbd: couldn't connect to the cluster!
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (22) Invalid argument

[root@ceph-client mnt]# rbd map block-pool/vdisk1 -m 172.17.163.105
rbd: sysfs write failed
2023-11-14 15:58:45.753 ffffb09918c0 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
rbd: couldn't connect to the cluster!
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (22) Invalid argument

dmesg shows "no secret set (for auth_x protocol)": the kernel client has no cephx key for this cluster, so a client keyring needs to be created and distributed.

[91158.067305] libceph: no secret set (for auth_x protocol)
[91158.068206] libceph: error -22 on auth protocol 2 init

Authorization

On a cluster node, confirm the pool and image names (e.g. with `ceph osd pool ls` and `rbd ls block-pool`), then create credentials for the client:

[root@ceph-0 ~]# ceph auth get-or-create client.blockuser mon 'allow r' osd 'allow * pool=block-pool'
[client.blockuser]
	key = AQDNLFNlZXSwERAA9uYYz7UdIKmuO1bSiSmEVg==

Display the credentials:

[root@ceph-0 ~]# ceph auth get client.blockuser

exported keyring for client.blockuser
[client.blockuser]
key = AQDNLFNlZXSwERAA9uYYz7UdIKmuO1bSiSmEVg==
caps mon = "allow r"
caps osd = "allow * pool=block-pool"
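If the capabilities ever need tightening (for example, restricting the OSD caps from `allow *` to read/write/execute on this pool only), they can be updated on the existing identity. A sketch, assuming the same `client.blockuser` name:

```shell
# Replace the caps on an existing client identity (tighter OSD caps shown as an example)
ceph auth caps client.blockuser mon 'allow r' osd 'allow rwx pool=block-pool'
# Verify the new caps took effect
ceph auth get client.blockuser
```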

Export the keyring to a file:

[root@ceph-0 ~]# ceph auth get client.blockuser -o /etc/ceph/ceph.client.blockuser.keyring

exported keyring for client.blockuser

Test the keyring (success):

[root@ceph-0 ~]# ceph --user blockuser -s
  cluster:
    id:     ff72b496-d036-4f1b-b2ad-55358f3c16cb
    health: HEALTH_ERR
            mon ceph-0 is very low on available space

  services:
    mon: 4 daemons, quorum ceph-3,ceph-1,ceph-0,ceph-2 (age 30h)
    mgr: ceph-0(active, since 3d), standbys: ceph-1, ceph-3, ceph-2
    mds: 4 up:standby
    osd: 4 osds: 3 up (since 2d), 3 in (since 2d)
    rgw: 4 daemons active (ceph-0, ceph-1, ceph-2, ceph-3)

  task status:

  data:
    pools:   5 pools, 192 pgs
    objects: 201 objects, 6.4 MiB
    usage:   3.2 GiB used, 297 GiB / 300 GiB avail
    pgs:     192 active+clean

Copy the keyring to the client:

[root@ceph-0 ~]# scp /etc/ceph/ceph.client.blockuser.keyring root@ceph-client:/etc/ceph/

Verify once on the client (the -m option must be given; success):

[root@ceph-client ceph]# ceph --user blockuser -s -m ceph-0
  cluster:
    id:     ff72b496-d036-4f1b-b2ad-55358f3c16cb
    health: HEALTH_ERR
            mon ceph-0 is very low on available space

  services:
    mon: 4 daemons, quorum ceph-3,ceph-1,ceph-0,ceph-2 (age 30h)
    mgr: ceph-0(active, since 3d), standbys: ceph-1, ceph-3, ceph-2
    mds: 4 up:standby
    osd: 4 osds: 3 up (since 2d), 3 in (since 2d)
    rgw: 4 daemons active (ceph-0, ceph-1, ceph-2, ceph-3)

  task status:

  data:
    pools:   5 pools, 192 pgs
    objects: 201 objects, 6.4 MiB
    usage:   3.2 GiB used, 297 GiB / 300 GiB avail
    pgs:     192 active+clean
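To avoid passing -m on every command, a minimal /etc/ceph/ceph.conf can be placed on the client instead; the tools then pick up the monitors from `mon_host`. A sketch using the fsid and monitor hostnames from the status output above (adjust to your environment):

```ini
; /etc/ceph/ceph.conf on the client (minimal sketch)
[global]
fsid = ff72b496-d036-4f1b-b2ad-55358f3c16cb
mon_host = ceph-0,ceph-1,ceph-2,ceph-3
```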

Map the block device on the client (passing the monitor addresses):

[root@ceph-client ceph]# rbd map block-pool/vdisk1 --user blockuser -m ceph-0,ceph-1,ceph-2,ceph-3
/dev/rbd0

The /dev/rbd0 block device has been mapped successfully.
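The mapping can be double-checked, and later released, with the kernel rbd tooling. A quick sketch:

```shell
# List the kernel's current RBD mappings (pool, image, device)
rbd showmapped
# When finished with the device, unmount it first, then release the mapping:
# rbd unmap /dev/rbd0
```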

Format the block device:

mkfs.xfs /dev/rbd0 -f

Mount the block device (success)
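The mount command itself is not shown above; a minimal sketch mirroring the in-cluster test (the mount-point path is an assumption):

```shell
mkdir -p /mnt/vdisk1           # create the mount point (name assumed)
mount /dev/rbd0 /mnt/vdisk1    # mount the freshly formatted XFS filesystem
df -h /mnt/vdisk1              # confirm the filesystem is mounted and sized ~4G
```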

References

分布式存储系统之Ceph集群RBD基础使用 - Linux-1874 - 博客园 (cnblogs.com)
