- Add two extra disks to each server (ideally of the same type).
- Set the hostnames of the three hosts to ceph01.openlab.edu, ceph02.openlab.edu, and ceph03.openlab.edu, for example as shown below.
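A minimal sketch using systemd's hostnamectl (run the matching command on each node):

```bash
# On the first node; use ceph02.openlab.edu / ceph03.openlab.edu on the others
hostnamectl set-hostname ceph01.openlab.edu
```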
- Log in to all hosts and configure the /etc/hosts file:
```
192.168.136.55 ceph01.openlab.edu ceph01
192.168.136.56 ceph02.openlab.edu ceph02
192.168.136.57 ceph03.openlab.edu ceph03
```
- Disable the firewall and SELinux on all hosts; a sketch of the usual commands follows.
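A minimal sketch, assuming firewalld is the active firewall and SELinux is currently enforcing:

```bash
# Stop and disable the firewall
systemctl disable --now firewalld
# Make SELinux permissive now and keep it disabled after reboots
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```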
- Configure time synchronization (repeat the same steps on the other hosts):
```bash
[root@ceph01 ~]# vim /etc/chrony.conf
pool ntp.aliyun.com iburst
[root@ceph01 ~]# systemctl restart chronyd
[root@ceph01 ~]# systemctl enable chronyd
```
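To confirm that a node is actually syncing, chrony's source list can be checked (an optional step, not in the original notes):

```bash
# The aliyun pool should be listed, with '^*' marking the selected server
chronyc sources -v
```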
- Download and install cephadm (this step is not needed on CentOS).
This step only needs to be performed on one host; in this lab it is done on ceph01.
```bash
[root@ceph01 ~]# yum install git -y
# Clone the Git repository to the local machine
[root@ceph01 ~]# git clone https://gitee.com/yftyxa/openeuler-cephadm.git
[root@ceph01 ~]# cp openeuler-cephadm/cephadm /usr/sbin && chmod a+x /usr/sbin/cephadm
```
- Add the yum repository required by Ceph.
Run the following command on all hosts to add the Ceph yum repository:
```bash
cat >> /etc/yum.repos.d/ceph.repo <<EOF
[ceph]
name=ceph x86_64
baseurl=https://repo.huaweicloud.com/ceph/rpm-pacific/el8/x86_64
enabled=1
gpgcheck=0
[ceph-noarch]
name=ceph noarch
baseurl=https://repo.huaweicloud.com/ceph/rpm-pacific/el8/noarch
enabled=1
gpgcheck=0
[ceph-source]
name=ceph SRPMS
baseurl=https://repo.huaweicloud.com/ceph/rpm-pacific/el8/SRPMS
enabled=1
gpgcheck=0
EOF
```
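Optionally, rebuild the yum cache and confirm the new repositories are visible (not part of the original steps):

```bash
# Refresh metadata and list the ceph repos
yum clean all && yum makecache
yum repolist | grep -i ceph
```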
- Install docker-ce on all hosts (podman can be installed instead):
```bash
# vim /etc/yum.repos.d/docker.repo
[docker-ce]
name=docker-ce
baseurl=http://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/rhel/8/x86_64/stable/
gpgcheck=0
[root@ceph01 ~]# yum install docker-ce -y
[root@ceph01 ~]# systemctl enable --now docker
```
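A quick sanity check that the container runtime is ready (optional):

```bash
# The docker daemon must be running before cephadm can pull ceph images
systemctl is-active docker
docker version
```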
- Install Ceph
- Initialize the Ceph cluster on ceph01:
```bash
[root@ceph01 ~]# cephadm bootstrap --mon-ip 192.168.136.55 --allow-fqdn-hostname --initial-dashboard-user admin --initial-dashboard-password Huawei@123 --dashboard-password-noupdate
```
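On success, bootstrap deploys a mon and a mgr on this node and prints the dashboard URL along with the admin credentials. To see which daemons cephadm started on this host (an optional check):

```bash
# List the cephadm-managed daemons on the local host
[root@ceph01 ~]# cephadm ls
```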


- Add node hosts to the cluster
Launch Ceph's administration container:
```bash
[root@ceph01 ~]# cephadm shell
```
Check the cluster status:
```bash
[ceph: root@ceph01 /]# ceph -s
  cluster:
    id:     b5500fba-e910-11ee-bbc5-000c290990e2
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph01.openlab.edu (age 15m)
    mgr: ceph01.openlab.edu.obowtz(active, since 12m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
```
Generate the cluster public key and copy it to the remaining hosts:
```bash
[ceph: root@ceph01 /]# ceph cephadm get-pub-key > ~/ceph.pub
[ceph: root@ceph01 /]# ssh-copy-id -f -i ~/ceph.pub root@ceph02
[ceph: root@ceph01 /]# ssh-copy-id -f -i ~/ceph.pub root@ceph03
```
Use the following commands to add all hosts to the cluster:
```bash
[ceph: root@ceph01 /]# ceph orch host add ceph02.openlab.edu 192.168.136.56
Added host 'ceph02.openlab.edu' with addr '192.168.136.56'
[ceph: root@ceph01 /]# ceph orch host add ceph03.openlab.edu 192.168.136.57
Added host 'ceph03.openlab.edu' with addr '192.168.136.57'
[ceph: root@ceph01 /]# ceph orch host ls
HOST                ADDR            LABELS  STATUS
ceph01.openlab.edu  192.168.136.55  _admin
ceph02.openlab.edu  192.168.136.56
ceph03.openlab.edu  192.168.136.57
3 hosts in cluster
```
3) Ceph cluster initialization
Use the following command to disable automatic scaling of the mon service:
```bash
[ceph: root@ceph01 /]# ceph orch apply mon --unmanaged=true
Scheduled mon update...
```
After this change, the "PLACEMENT" value for the mon service becomes "unmanaged".
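This can be verified from the orchestrator's service listing (an optional check):

```bash
# The PLACEMENT column for the mon service should now read "unmanaged"
[ceph: root@ceph01 /]# ceph orch ls mon
```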

Add the "_admin" label to ceph02 and ceph03:
```bash
[ceph: root@ceph01 /]# ceph orch host label add ceph02.openlab.edu _admin
Added label _admin to host ceph02.openlab.edu
[ceph: root@ceph01 /]# ceph orch host label add ceph03.openlab.edu _admin
Added label _admin to host ceph03.openlab.edu
```

Use the following commands to deploy the mon and mgr components to the labeled nodes:
```bash
[ceph: root@ceph01 /]# ceph orch apply mon --placement="label:_admin"
Scheduled mon update...
[ceph: root@ceph01 /]# ceph orch apply mgr --placement="label:_admin"
Scheduled mgr update...
```
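After a minute or two the orchestrator redeploys the daemons; a quick way to confirm (optional, output omitted):

```bash
# ceph -s should now report "mon: 3 daemons" and one active mgr with two standbys
[ceph: root@ceph01 /]# ceph -s
```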

Install ceph-common on the client so that ceph commands can be run without entering cephadm shell every time:
```bash
[root@ceph01 /]# yum install ceph-common
```
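Once ceph-common is installed and /etc/ceph holds the cluster config and admin keyring (cephadm maintains these on hosts labeled _admin), ceph commands run directly on the host:

```bash
# Works without entering the cephadm shell container
[root@ceph01 ~]# ceph -s
```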

4) Add OSDs
Add the disks on all hosts as OSDs:
```bash
[root@ceph01 ~]# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...
```
Once the devices have been provisioned, the six OSDs (two disks on each of the three hosts) are listed:
```bash
[root@ceph01 ~]# ceph osd ls
0
1
2
3
4
5
```
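As a final sanity check, the cluster should now reach HEALTH_OK and show two OSDs per host (an optional verification, output omitted):

```bash
# Overall status, then the CRUSH tree of hosts and their OSDs
[root@ceph01 ~]# ceph -s
[root@ceph01 ~]# ceph osd tree
```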

