Installing Cloudpods on CentOS 7.9 with Ceph Storage (Part 2)

1. Ceph installation

1.1 Environment preparation

Configure /etc/hosts:

bash
# vim /etc/hosts
10.121.x.x node01
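Optionally confirm the name resolves from the hosts file (a sanity check, not in the original steps):
# getent hosts node01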

Set up passwordless SSH login:

bash
# ssh-keygen -t rsa
# ssh-copy-id -i /root/.ssh/id_rsa node01
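A quick check that key-based login works (this should print the hostname without prompting for a password):
# ssh node01 hostname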

Disable SELinux and firewalld:

bash
# setenforce 0
# sed -i "s#SELINUX=enforcing#SELINUX=disabled#g" /etc/selinux/config
# systemctl stop firewalld
# systemctl disable firewalld
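Optionally verify both are off (SELinux reports Permissive until the next reboot applies the config change):
# getenforce
# systemctl is-active firewalld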

Configure the Ceph yum repository:

bash
# cd /etc/yum.repos.d/
# vim ceph.repo
[noarch]
name=noarch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
enabled=1
gpgcheck=0

[x86_64]
name=x86_64
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=0
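
After saving the repo file, optionally refresh the yum metadata to confirm the mirrors are reachable:

bash
# yum clean all
# yum makecache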

1.2 Ceph installation

This installation uses the ceph-deploy tool:

bash
# yum -y install python-setuptools ceph-deploy
Create a working directory to hold the config files and keyrings generated by ceph-deploy:
# mkdir /root/ceph-deploy
# cd ceph-deploy/
Create the Ceph cluster:
# ceph-deploy new node01
Install the packages manually:
# yum install -y ceph ceph-mon ceph-mgr ceph-radosgw ceph-mds
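Optionally confirm the installed release matches the Nautilus repo configured above:
# ceph --version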
Initialize the monitor:
# ceph-deploy mon create-initial
# ls -l
total 44
-rw------- 1 root root   113 Aug 21 18:39 ceph.bootstrap-mds.keyring
-rw------- 1 root root   113 Aug 21 18:39 ceph.bootstrap-mgr.keyring
-rw------- 1 root root   113 Aug 21 18:39 ceph.bootstrap-osd.keyring
-rw------- 1 root root   113 Aug 21 18:39 ceph.bootstrap-rgw.keyring
-rw------- 1 root root   151 Aug 21 18:39 ceph.client.admin.keyring
-rw-r--r-- 1 root root   198 Aug 21 18:37 ceph.conf
-rw-r--r-- 1 root root 16040 Aug 21 18:39 ceph-deploy-ceph.log
-rw------- 1 root root    73 Aug 21 18:37 ceph.mon.keyring
Push the config file and admin keyring to the node:
# ceph-deploy admin node01
Subsequent pushes require the --overwrite-conf flag:
# ceph-deploy --overwrite-conf admin node01
Create a manager daemon for monitoring:
# ceph-deploy mgr create node01
# ceph -s
  cluster:
    id:     cc039d05-2643-4967-a89c-39fa7cdfa695
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
            mon is allowing insecure global_id reclaim

  services:
    mon: 1 daemons, quorum node01 (age 2m)
    mgr: node01(active, since 26s)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
Add the OSDs (the three 3.7T disks sdb, sdc, sdd below):
# lsblk -l
NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdd             8:48   0   3.7T  0 disk
sdb             8:16   0   3.7T  0 disk
sr0            11:0    1  1024M  0 rom
sde             8:64   0 893.3G  0 disk
sde2            8:66   0     1G  0 part /boot
sde3            8:67   0 892.3G  0 part
centos00-root 253:1    0 888.3G  0 lvm  /
centos00-swap 253:0    0     4G  0 lvm
sde1            8:65   0     2M  0 part
sdc             8:32   0   3.7T  0 disk
# ceph-deploy osd create node01 --data /dev/sdb
# ceph-deploy osd create node01 --data /dev/sdc
# ceph-deploy osd create node01 --data /dev/sdd
# ceph osd tree
ID CLASS WEIGHT   TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       10.91606 root default
-3       10.91606     host node01
 0   hdd  3.63869         osd.0       up  1.00000 1.00000
 1   hdd  3.63869         osd.1       up  1.00000 1.00000
 2   hdd  3.63869         osd.2       up  1.00000 1.00000
Edit ceph.conf and add the following parameters:
# vim ceph.conf 
osd pool default min_size = 1  # minimum replicas required for I/O
osd pool default size = 1      # default replica count for new pools (single node, no redundancy)
public_network = 10.121.218.0/24
mon_allow_pool_delete = true
# ceph-deploy --overwrite-conf admin node01
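Pushing ceph.conf only copies the file; if the running monitor does not pick up the new defaults, restart it to force a re-read (assuming the default ceph-mon@<hostname> systemd unit naming):
# systemctl restart ceph-mon@node01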
# ceph -s
  cluster:
    id:     cc039d05-2643-4967-a89c-39fa7cdfa695
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim

  services:
    mon: 1 daemons, quorum node01 (age 4m)
    mgr: node01(active, since 3m)
    osd: 3 osds: 3 up (since 93s), 3 in (since 93s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 11 TiB / 11 TiB avail
    pgs:
Note:
	Research showed the fix for the "insecure global_id reclaim" warning is to disable the insecure mode:
	# ceph config set mon auth_allow_insecure_global_id_reclaim false
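	Re-checking afterwards should show that warning cleared (caveat: clients old enough to still require insecure global_id reclaim will no longer be able to connect):
	# ceph health detail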

1.3 Create the pool

bash
# ceph osd pool create cloudpods 64 64
# rbd pool init cloudpods
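
Here 64 is the pool's pg_num/pgp_num, a common power-of-two choice for a small three-OSD cluster. To confirm the pool picked up the single-replica default set earlier:

bash
# ceph osd pool get cloudpods size
# ceph osd pool ls detail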

Warning:

ceph -s now reports that the pool has no replicas configured:

bash
# ceph -s
  cluster:
    id:     cc039d05-2643-4967-a89c-39fa7cdfa695
    health: HEALTH_WARN
            1 pool(s) have no replicas configured

  services:
    mon: 1 daemons, quorum node01 (age 3m)
    mgr: node01(active, since 3m)
    osd: 3 osds: 3 up (since 3m), 3 in (since 10m)

  data:
    pools:   1 pools, 64 pgs
    objects: 1 objects, 19 B
    usage:   3.0 GiB used, 11 TiB / 11 TiB avail
    pgs:     64 active+clean
To suppress this warning:
# ceph config set global mon_warn_on_pool_no_redundancy false
Then reboot the server.

2. Configure Ceph in Cloudpods

In the web console, go to Storage → Block Storage → Create:

Then associate the storage with the host, otherwise it will show as offline:
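
If the climc CLI is available on the Cloudpods controller, the storage status can also be checked from the shell (the subcommand name follows Cloudpods' usual resource-action pattern; adjust for your version):

bash
# climc storage-list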

Verify:

Create a VM and select Ceph RBD as the disk storage.
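
Once the VM is created, its disk should appear as an RBD image in the cloudpods pool. The image name below is a placeholder; use a name from the rbd ls output:

bash
# rbd ls cloudpods
# rbd info cloudpods/<image-name>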
