Using Ceph Storage with Kubernetes

1. Building the Ceph Cluster

1) Preparation

| Machine No. | Hostname | IP |
|-------------|----------|---------------|
| 1 | ceph1 | 192.168.1.121 |
| 2 | ceph2 | 192.168.1.123 |
| 3 | ceph3 | 192.168.1.125 |

Disable SELinux and firewalld, and configure the hostname and /etc/hosts on every node.
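
The per-node prep can be sketched as follows; the SELinux/firewalld/hostname commands are shown as comments since they need a real CentOS node, and `hosts.example` stands in for `/etc/hosts` so the snippet is safe to run anywhere:

```shell
# On each real node (not executed here):
#   setenforce 0
#   sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
#   systemctl disable --now firewalld
#   hostnamectl set-hostname ceph1        # ceph2 / ceph3 accordingly
# Name resolution for all three nodes; append these lines to /etc/hosts
# on every machine (written to a local file here for illustration):
cat <<'EOF' | tee -a hosts.example
192.168.1.121 ceph1
192.168.1.123 ceph2
192.168.1.125 ceph3
EOF
```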

Prepare at least one dedicated, unused disk for each machine.

Install the chrony time-synchronization service on all machines.

```bash
[root@ceph1 ~]# mv /etc/yum.repos.d/* /home
[root@ceph1 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@ceph1 ~]# yum clean all && yum repolist && yum install -y chrony
[root@ceph1 ~]# systemctl start chronyd
[root@ceph1 ~]# systemctl enable chronyd
```

Install docker-ce on all machines.

```bash
[root@ceph1 ~]# yum install -y dnf
[root@ceph1 ~]# sudo dnf -y install dnf-plugins-core
[root@ceph1 ~]# sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
[root@ceph1 ~]# sed -i 's+https://download.docker.com+https://mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
[root@ceph1 ~]# sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
[root@ceph1 ~]# sudo systemctl enable --now docker
[root@ceph1 ~]# sudo tee /etc/docker/daemon.json <<-'EOF'
{
    "registry-mirrors": [
        "https://docker.1ms.run",
        "https://doublezonline.cloud"
    ]
}
EOF
[root@ceph1 ~]# sudo systemctl daemon-reload && sudo systemctl restart docker
```
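
A malformed daemon.json keeps dockerd from starting, so it is worth syntax-checking the file before the restart. A minimal sketch that validates a local copy with python3 (point it at /etc/docker/daemon.json on a real node):

```shell
# Write the mirror config to a local file and syntax-check it before
# restarting docker; python3 -m json.tool fails on invalid JSON.
cat > daemon.json <<'EOF'
{
    "registry-mirrors": [
        "https://docker.1ms.run",
        "https://doublezonline.cloud"
    ]
}
EOF
python3 -m json.tool daemon.json > /dev/null && echo "daemon.json OK"
```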

Install python3 and lvm2 on all three machines.

```bash
[root@ceph1 ~]# yum install -y python3 lvm2
```

Install cephadm (run on ceph1).

```bash
[root@ceph1 ~]# cat >> /etc/yum.repos.d/ceph.repo << EOF
> [ceph]
> name=ceph
> baseurl=http://mirrors.aliyun.com/ceph/rpm-pacific/el8/x86_64/
> gpgcheck=0
> priority=1
> [ceph-noarch]
> name=cephnoarch
> baseurl=http://mirrors.aliyun.com/ceph/rpm-pacific/el8/noarch/
> gpgcheck=0
> priority=1
> [ceph-source]
> name=Ceph source packages
> baseurl=http://mirrors.aliyun.com/ceph/rpm-pacific/el8/SRPMS
> gpgcheck=0
> priority=1
> EOF
[root@ceph1 ~]# yum clean all && yum repolist && yum install -y cephadm
```

Deploy Ceph with cephadm.

```bash
[root@ceph1 ~]# cephadm bootstrap --mon-ip 192.168.1.121
Ceph Dashboard is now available at:

	     URL: https://ceph1:8443/
	    User: admin
	Password: 8dfpxjf9cz
```

The dashboard forces a password change on first login; the password was changed to wang1021.

Access the dashboard.

Add hosts.

```bash
# Generate the cluster's SSH key pair
[ceph: root@ceph1 /]# ceph cephadm get-pub-key > ~/ceph.pub
# Set up passwordless login to the other two machines
[ceph: root@ceph1 /]# ssh-copy-id -f -i ~/ceph.pub root@ceph2
[ceph: root@ceph1 /]# ssh-copy-id -f -i ~/ceph.pub root@ceph3
```

Then add the hosts in the dashboard.
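
The same step can also be done from the CLI with `ceph orch host add` instead of the dashboard. A sketch that only prints the commands (drop the `echo` to actually run them inside the cephadm shell on ceph1):

```shell
# Print (rather than execute) the orchestrator commands that register
# the remaining hosts with the cluster.
for host in ceph2 ceph3; do
    echo "ceph orch host add ${host}"
done
```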


Create OSDs (inside the cephadm shell, on ceph1).

Assume the newly added disk on each of the three machines is /dev/sdb.

```bash
[ceph: root@ceph1 /]# ceph orch daemon add osd ceph1:/dev/sdb
[ceph: root@ceph1 /]# ceph orch daemon add osd ceph2:/dev/sdb && ceph orch daemon add osd ceph3:/dev/sdb
```
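
Since every host contributes the same device, the three additions can be driven by one loop. A sketch that prints the commands (remove the `echo` to execute them inside the cephadm shell):

```shell
# One OSD per host, all backed by /dev/sdb per the assumption above.
for host in ceph1 ceph2 ceph3; do
    echo "ceph orch daemon add osd ${host}:/dev/sdb"
done
```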

View the device list:

```bash
[ceph: root@ceph1 /]# ceph orch device ls
HOST   PATH      TYPE  DEVICE ID                                             SIZE  AVAILABLE  REFRESHED  REJECT REASONS
ceph1  /dev/sdb  hdd                                                         100G             38s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
ceph1  /dev/sr0  hdd   VMware_Virtual_SATA_CDRW_Drive_00000000000000000001  4494M             38s ago    Has a FileSystem, Insufficient space (<5GB)
ceph2  /dev/sdb  hdd                                                         100G             41s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
ceph2  /dev/sr0  hdd   VMware_Virtual_SATA_CDRW_Drive_00000000000000000001  4494M             41s ago    Has a FileSystem, Insufficient space (<5GB)
ceph3  /dev/sdb  hdd                                                         100G             40s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
ceph3  /dev/sr0  hdd   VMware_Virtual_SATA_CDRW_Drive_00000000000000000001  4494M             40s ago    Has a FileSystem, Insufficient space (<5GB)
```

The new OSDs are now visible on the dashboard as well.

Create a pool
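
A minimal sketch of this step, assuming the pool name `test` (the pool the rbd step below refers to) and 64 placement groups; the `echo` keeps it inert, so drop it to run the command inside the cephadm shell:

```shell
# Create a replicated pool named "test" with 64 PGs; pg_num/pgp_num are
# illustrative values, pick ones appropriate for your OSD count.
echo "ceph osd pool create test 64 64"
```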

Check the cluster status:

```bash
[ceph: root@ceph1 /]# ceph -s
  cluster:
    id:     0731be72-c206-11ef-82cf-000c29044707
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 2m)
    mgr: ceph1.eyxwak(active, since 10m), standbys: ceph2.hxchae
    osd: 3 osds: 3 up (since 2m), 3 in (since 3m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   871 MiB used, 299 GiB / 300 GiB avail
    pgs:     1 active+clean
```

Enable the rbd application on the test pool:

```bash
[ceph: root@ceph1 /]# ceph osd pool application enable test rbd
enabled application 'rbd' on pool 'test'
```

The next article covers how to use Ceph from Kubernetes.
