https://docs.ceph.com/en/reef/rbd/rbd-exclusive-locks/
From the official docs:
The serialization provided by exclusive-lock only guarantees that RBD-internal structures (object-map, journal, etc.) are not modified concurrently. It does NOT mean two machines can both mount an ext4/xfs filesystem on the image and safely read/write it. The docs also stress that the feature exists to prevent "uncoordinated writes".
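To check whether exclusive-lock is actually enabled on an image, and to toggle it, something like the following should work (pool/image names match the ones used below; this is a sketch, not from the original notes):

```shell
# Show which features are enabled on the image
# (look for "exclusive-lock" in the features line).
rbd info rbdpool/disk1 | grep features

# The feature can be toggled per image. Note: object-map and fast-diff
# depend on exclusive-lock, so they may need to be disabled first.
rbd feature disable rbdpool/disk1 exclusive-lock
rbd feature enable rbdpool/disk1 exclusive-lock
```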
Both clients ran
rbd map rbdpool/disk1
Afterwards the image can be inspected with
rbd info rbdpool/disk1 # show image properties
rbd status rbdpool/disk1 # show watchers / status
rbd lock list rbdpool/disk1 # show lock ownership; only one client holds the lock at a time
Initially node2 held the lock. Then on node3 we ran
dd if=/dev/zero of=/dev/rbd0 bs=4M count=1 oflag=direct
and the lock moved to node3.
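The handover is easy to observe live. One way (not in the original notes, just a convenience):

```shell
# Refresh lock ownership every second while issuing writes from the
# other node; with the default cooperative exclusive-lock the owner
# changes as soon as the other client needs to write.
watch -n 1 'rbd lock list rbdpool/disk1'
```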
To make the lock truly exclusive (no automatic handover), map with:
rbd map rbdpool/disk1 --exclusive
Mapping from another client then fails:
root@node3:/# rbd map rbdpool/disk1
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (30) Read-only file system
The image can still be mapped read-only, though:
root@node3:/# rbd map rbdpool/disk1 --read-only
/dev/rbd0
However, a mapping created this way does not show up in rbd status.
A separate problem we hit:
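rbd status lists watchers on the image, so a mapping that doesn't register a watcher won't appear there. To see what is actually mapped on the local node, ask the kernel side directly (standard rbd subcommands, added here for completeness):

```shell
# List kernel mappings on this host (device, pool, image, snapshot),
# independent of lock or watcher state.
rbd device list
# Older alias for the same thing:
rbd showmapped
```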
root@node3:/# rbd map rbdpool/disk1
modinfo: ERROR: Module alias rbd not found.
modprobe: FATAL: Module rbd not found in directory /lib/modules/5.15.0-139-generic
rbd: failed to load rbd kernel module (1)
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (2) No such file or directory
Fix: run on the host
# load the rbd kernel module
modprobe rbd
# verify the module loaded (any output means success)
lsmod | grep rbd
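modprobe only lasts until reboot. To load rbd automatically at boot, the standard systemd-modules-load mechanism can be used (assuming a systemd-based distro such as the Ubuntu kernel shown above):

```shell
# Load the rbd module on every boot via systemd-modules-load
echo rbd > /etc/modules-load.d/rbd.conf
```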
How to carve up the disks for OSDs:
cat > /tmp/osd_node3_only.yml <<'YAML'
service_type: osd
service_id: node3_dedicated_osd
placement:
  hosts:
    - node3
spec:
  data_devices:
    paths:
      - /dev/sdb
  db_devices:
    paths:
      - /dev/sdc
  # fixed sizes (unit suffixes are allowed)
  block_db_size: '5G'
  block_wal_size: '5G'
YAML
ceph orch apply -i /tmp/osd_node3_only.yml --dry-run # preview what would be created
ceph orch apply -i /tmp/osd_node3_only.yml # actually deploy
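After applying, a few standard commands to verify the OSDs actually came up (added here as a suggested checklist, not part of the original notes):

```shell
# Show the osd services and how many daemons each has running
ceph orch ls osd
# Per-daemon status (host, state, version)
ceph orch ps --daemon-type osd
# Confirm the new OSD appears in the CRUSH tree under node3
ceph osd tree
```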