Using Ceph Storage with Kubernetes

1. Setting Up a Ceph Cluster

1) Preparation

| No. | Hostname | IP            |
|-----|----------|---------------|
| 1   | ceph1    | 192.168.1.121 |
| 2   | ceph2    | 192.168.1.123 |
| 3   | ceph3    | 192.168.1.125 |

Disable SELinux and firewalld, and configure the hostname and /etc/hosts on every node.
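
A minimal sketch of that preparation, run on all three nodes (hostnames and IPs taken from the table above; set each node's own hostname accordingly):

bash
# Set the hostname (use ceph2/ceph3 on the other nodes)
[root@ceph1 ~]# hostnamectl set-hostname ceph1
# Disable the firewall and SELinux
[root@ceph1 ~]# systemctl disable --now firewalld
[root@ceph1 ~]# setenforce 0
[root@ceph1 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# Make all three nodes resolvable by name
[root@ceph1 ~]# cat >> /etc/hosts <<EOF
192.168.1.121 ceph1
192.168.1.123 ceph2
192.168.1.125 ceph3
EOF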

Prepare at least one dedicated spare disk on each machine.
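
You can confirm the spare disk is present and empty with lsblk (here /dev/sdb is assumed, matching the OSD step later):

bash
[root@ceph1 ~]# lsblk /dev/sdb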

Install the chrony time-synchronization service on all machines:

bash
[root@ceph1 ~]# mv /etc/yum.repos.d/* /home
[root@ceph1 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@ceph1 ~]# yum clean all && yum repolist && yum install -y chrony
[root@ceph1 ~]# systemctl start chronyd
[root@ceph1 ~]# systemctl enable chronyd
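
To confirm time synchronization is working on each node:

bash
[root@ceph1 ~]# chronyc sources -v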

Install docker-ce on all machines:

bash
[root@ceph1 ~]# yum install -y dnf
[root@ceph1 ~]# dnf -y install dnf-plugins-core
[root@ceph1 ~]# dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
[root@ceph1 ~]# sed -i 's+https://download.docker.com+https://mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
[root@ceph1 ~]# dnf install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
[root@ceph1 ~]# systemctl enable --now docker
# Configure registry mirrors so image pulls work from mainland China
[root@ceph1 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
    "registry-mirrors": [
        "https://docker.1ms.run",
        "https://doublezonline.cloud"
    ]
}
EOF
[root@ceph1 ~]# systemctl daemon-reload && systemctl restart docker
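
A quick check that Docker is up and the mirror configuration was picked up:

bash
[root@ceph1 ~]# docker info | grep -A 2 'Registry Mirrors'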

Install python3 and lvm2 on all machines (do this on all three):

bash
[root@ceph1 ~]# yum install -y python3 lvm2

Install cephadm (run on ceph1):

bash
[root@ceph1 ~]# cat >> /etc/yum.repos.d/ceph.repo <<EOF
> [ceph]
> name=ceph
> baseurl=http://mirrors.aliyun.com/ceph/rpm-pacific/el8/x86_64/
> gpgcheck=0
> priority=1
> [ceph-noarch]
> name=cephnoarch
> baseurl=http://mirrors.aliyun.com/ceph/rpm-pacific/el8/noarch/
> gpgcheck=0
> priority=1
> [ceph-source]
> name=Ceph source packages
> baseurl=http://mirrors.aliyun.com/ceph/rpm-pacific/el8/SRPMS
> gpgcheck=0
> priority=1
> EOF
[root@ceph1 ~]# yum clean all && yum repolist && yum install -y cephadm
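
Verify the install before bootstrapping (note: cephadm version may pull the Ceph container image on first run):

bash
[root@ceph1 ~]# cephadm version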

Deploy Ceph with cephadm:

bash
[root@ceph1 ~]# cephadm bootstrap --mon-ip 192.168.1.121
Ceph Dashboard is now available at:

	     URL: https://ceph1:8443/
	    User: admin
	Password: 8dfpxjf9cz

    # New password set on first dashboard login: wang1021

Access the dashboard at the URL above and log in as admin.

Add hosts
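
The commands below run inside the cephadm shell (note the [ceph: root@ceph1 /]# prompt), which you enter with:

bash
[root@ceph1 ~]# cephadm shell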

bash
# Export the cluster's SSH public key
[ceph: root@ceph1 /]# ceph cephadm get-pub-key > ~/ceph.pub
# Install it on the other two machines for passwordless login
[ceph: root@ceph1 /]# ssh-copy-id -f -i ~/ceph.pub root@ceph2
[ceph: root@ceph1 /]# ssh-copy-id -f -i ~/ceph.pub root@ceph3

In the browser dashboard, add the two new hosts.
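
If you prefer the CLI to the dashboard, the same step can be done with ceph orch host add (IPs from the preparation table), then confirmed with ceph orch host ls:

bash
[ceph: root@ceph1 /]# ceph orch host add ceph2 192.168.1.123
[ceph: root@ceph1 /]# ceph orch host add ceph3 192.168.1.125
[ceph: root@ceph1 /]# ceph orch host ls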


Create OSDs (inside the cephadm shell, on ceph1)

Assume the newly added disk on each of the three machines is /dev/sdb:

bash
[ceph: root@ceph1 /]# ceph orch daemon add osd ceph1:/dev/sdb
[ceph: root@ceph1 /]# ceph orch daemon add osd ceph2:/dev/sdb && ceph orch daemon add osd ceph3:/dev/sdb
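
Alternatively, cephadm can create OSDs on every eligible unused device in one shot; either way, ceph osd tree should then show three OSDs up:

bash
[ceph: root@ceph1 /]# ceph orch apply osd --all-available-devices
[ceph: root@ceph1 /]# ceph osd tree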

List the devices:

bash
[ceph: root@ceph1 /]# ceph orch device ls
HOST   PATH      TYPE  DEVICE ID                                             SIZE  AVAILABLE  REFRESHED  REJECT REASONS                                                           
ceph1  /dev/sdb  hdd                                                         100G             38s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected  
ceph1  /dev/sr0  hdd   VMware_Virtual_SATA_CDRW_Drive_00000000000000000001  4494M             38s ago    Has a FileSystem, Insufficient space (<5GB)                              
ceph2  /dev/sdb  hdd                                                         100G             41s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected  
ceph2  /dev/sr0  hdd   VMware_Virtual_SATA_CDRW_Drive_00000000000000000001  4494M             41s ago    Has a FileSystem, Insufficient space (<5GB)                              
ceph3  /dev/sdb  hdd                                                         100G             40s ago    Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected  
ceph3  /dev/sr0  hdd   VMware_Virtual_SATA_CDRW_Drive_00000000000000000001  4494M             40s ago    Has a FileSystem, Insufficient space (<5GB)                

The REJECT REASONS column shows the disks are now in use by the new OSDs (LVM detected); the OSDs are also visible on the dashboard at this point.

Create a pool

Check the cluster status:

bash
[ceph: root@ceph1 /]# ceph -s
  cluster:
    id:     0731be72-c206-11ef-82cf-000c29044707
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 2m)
    mgr: ceph1.eyxwak(active, since 10m), standbys: ceph2.hxchae
    osd: 3 osds: 3 up (since 2m), 3 in (since 3m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   871 MiB used, 299 GiB / 300 GiB avail
    pgs:     1 active+clean
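
To create the test pool used in the next step (the pg count is left to the pg autoscaler, which is on by default in Pacific):

bash
[ceph: root@ceph1 /]# ceph osd pool create test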

Enable the rbd application on the test pool:

bash
[ceph: root@ceph1 /]# ceph osd pool application enable test rbd
enabled application 'rbd' on pool 'test'
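
As a quick smoke test, create and list an RBD image on the pool (the image name vol1 is just an example):

bash
[ceph: root@ceph1 /]# rbd create test/vol1 --size 1G
[ceph: root@ceph1 /]# rbd ls test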

The next article will cover how Kubernetes uses Ceph.
