Using Ceph Storage with Kubernetes

1. Setting Up a Ceph Cluster

1) Preparation

| No. | Hostname | IP            |
|-----|----------|---------------|
| 1   | ceph1    | 192.168.1.121 |
| 2   | ceph2    | 192.168.1.123 |
| 3   | ceph3    | 192.168.1.125 |

Disable SELinux and firewalld, and configure the hostname and /etc/hosts on every machine, as sketched below
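
A minimal sketch of these steps (hostnames and IPs taken from the table above; run the matching hostnamectl line on each node):

bash
# Disable the firewall and SELinux (the sed makes the SELinux change survive reboots)
systemctl disable --now firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# Set the hostname -- ceph1 here; use ceph2/ceph3 on the other nodes
hostnamectl set-hostname ceph1

# Make all nodes resolvable by name
cat >> /etc/hosts <<EOF
192.168.1.121 ceph1
192.168.1.123 ceph2
192.168.1.125 ceph3
EOF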

Prepare at least one dedicated, empty disk for each machine
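
You can confirm the disk is present and blank (no partitions or filesystem) with lsblk; /dev/sdb matches the disk used later in this post:

bash
[root@ceph1 ~]# lsblk -f /dev/sdb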

Install the chrony time-sync service on all machines

bash
[root@ceph1 ~]# mv /etc/yum.repos.d/* /home
[root@ceph1 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@ceph1 ~]# yum clean all && yum repolist && yum install -y chrony
[root@ceph1 ~]# systemctl start chronyd
[root@ceph1 ~]# systemctl enable chronyd
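
A quick check that time sync is actually working:

bash
[root@ceph1 ~]# chronyc sources -v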

Install docker-ce on all machines

bash
[root@ceph1 ~]# yum install -y dnf
[root@ceph1 ~]# sudo dnf -y install dnf-plugins-core
[root@ceph1 ~]# sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
[root@ceph1 ~]# sed -i 's+https://download.docker.com+https://mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
[root@ceph1 ~]# sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
[root@ceph1 ~]# sudo systemctl enable --now docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
    "registry-mirrors": [
        "https://docker.1ms.run",
        "https://doublezonline.cloud"
    ]
}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker
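
A quick check that the registry mirrors took effect:

bash
[root@ceph1 ~]# docker info | grep -A 3 'Registry Mirrors'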

Install python3 and lvm2 (on all three machines)

bash
[root@ceph1 ~]# yum install -y python3 lvm2

Install cephadm (run on ceph1)

bash
[root@ceph1 ~]# cat>>/etc/yum.repos.d/ceph.repo<<EOF
> [ceph]
> name=ceph
> baseurl=http://mirrors.aliyun.com/ceph/rpm-pacific/el8/x86_64/
> gpgcheck=0
> priority=1
> [ceph-noarch]
> name=cephnoarch
> baseurl=http://mirrors.aliyun.com/ceph/rpm-pacific/el8/noarch/
> gpgcheck=0
> priority=1
> [ceph-source]
> name=Ceph source packages
> baseurl=http://mirrors.aliyun.com/ceph/rpm-pacific/el8/SRPMS
> gpgcheck=0
> priority=1
> EOF
[root@ceph1 ~]# yum clean all && yum repolist && yum install -y cephadm
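
A quick sanity check of the install:

bash
[root@ceph1 ~]# cephadm version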

Deploy Ceph with cephadm

bash
[root@ceph1 ~]# cephadm bootstrap --mon-ip 192.168.1.121
Ceph Dashboard is now available at:

	     URL: https://ceph1:8443/
	    User: admin
	Password: 8dfpxjf9cz

    New password (the dashboard forces a change at first login): wang1021
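
The password can also be set from the CLI inside the cephadm shell (entered in the next step); a sketch, with an arbitrary temp-file name:

bash
[ceph: root@ceph1 /]# echo 'wang1021' > /tmp/pass.txt
[ceph: root@ceph1 /]# ceph dashboard ac-user-set-password admin -i /tmp/pass.txt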

Access the dashboard

Add hosts
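
The [ceph: root@ceph1 /]# prompt in the blocks below means the commands run inside the cephadm shell container, entered with:

bash
[root@ceph1 ~]# cephadm shell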

bash
# Export the cluster's SSH public key (the key pair was generated at bootstrap)
[ceph: root@ceph1 /]# ceph cephadm get-pub-key > ~/ceph.pub
# Copy it to the other two machines for passwordless login
[ceph: root@ceph1 /]# ssh-copy-id -f -i ~/ceph.pub root@ceph2
[ceph: root@ceph1 /]# ssh-copy-id -f -i ~/ceph.pub root@ceph3

Then add the hosts in the browser, on the dashboard
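
The hosts can also be added from the CLI instead of the dashboard (a sketch using the IPs from the table above):

bash
[ceph: root@ceph1 /]# ceph orch host add ceph2 192.168.1.123
[ceph: root@ceph1 /]# ceph orch host add ceph3 192.168.1.125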


Create OSDs (inside the cephadm shell, on ceph1)

Assume the newly added disk on each of the three machines is /dev/sdb

bash
[ceph: root@ceph1 /]# ceph orch daemon add osd ceph1:/dev/sdb
[ceph: root@ceph1 /]# ceph orch daemon add osd ceph2:/dev/sdb && ceph orch daemon add osd ceph3:/dev/sdb

List the devices:

bash
[ceph: root@ceph1 /]# ceph orch device ls
HOST   PATH      TYPE  DEVICE ID                                             SIZE  AVAILABLE  REFRESHED  REJECT REASONS                                                           
ceph1  /dev/sdb  hdd                                                         100G             38s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected  
ceph1  /dev/sr0  hdd   VMware_Virtual_SATA_CDRW_Drive_00000000000000000001  4494M             38s ago    Has a FileSystem, Insufficient space (<5GB)                              
ceph2  /dev/sdb  hdd                                                         100G             41s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected  
ceph2  /dev/sr0  hdd   VMware_Virtual_SATA_CDRW_Drive_00000000000000000001  4494M             41s ago    Has a FileSystem, Insufficient space (<5GB)                              
ceph3  /dev/sdb  hdd                                                         100G             40s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected  
ceph3  /dev/sr0  hdd   VMware_Virtual_SATA_CDRW_Drive_00000000000000000001  4494M             40s ago    Has a FileSystem, Insufficient space (<5GB)                

At this point the OSDs are also visible in the dashboard. (The REJECT REASONS above appear because the disks are now occupied by the LVM volumes of the OSDs just created.)

Create a pool

Check the cluster status

bash
[ceph: root@ceph1 /]# ceph -s
  cluster:
    id:     0731be72-c206-11ef-82cf-000c29044707
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 2m)
    mgr: ceph1.eyxwak(active, since 10m), standbys: ceph2.hxchae
    osd: 3 osds: 3 up (since 2m), 3 in (since 3m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   871 MiB used, 299 GiB / 300 GiB avail
    pgs:     1 active+clean
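
The creation of the test pool is not shown above; presumably something like the following, using the pool name test that appears below (the PG count of 64 is an assumption):

bash
[ceph: root@ceph1 /]# ceph osd pool create test 64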

Enable the rbd application on the test pool

bash
[ceph: root@ceph1 /]# ceph osd pool application enable test rbd
enabled application 'rbd' on pool 'test'
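
To confirm which applications are enabled on the pool:

bash
[ceph: root@ceph1 /]# ceph osd pool application get test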

The next post covers how Kubernetes uses Ceph
