Deploying Ceph Pacific on ARM

Background

The partner lab's Huawei private cloud previously used a single-point NFS server as its storage backend. Two considerations motivated a change: the business now needs object storage (OSS), and the k8s cluster and other machines need a scalable distributed file system.

Deploying Ceph

Initial machine configuration plan

IP          Specs                             Hostname               Role
10.17.3.14  4 cores / 8 GB / 1 TB data disk   ceph-node01.xx.local   mon1 mgr1 node01
10.17.3.15  4 cores / 8 GB / 1 TB data disk   ceph-node02.xx.local   mon2 mgr2 node02
10.17.3.16  4 cores / 8 GB / 1 TB data disk   ceph-node03.xx.local   mon3 mgr3 node03

Run on all nodes:

Any disk on a node that will be used as a Ceph OSD must be unmounted first.
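A quick way to check and unmount, assuming the OSD disk is /dev/vdb as in the rest of this walkthrough (the /data mount point is only an example):

lsblk /dev/vdb                      # check for mounted partitions
umount /data 2>/dev/null || true    # unmount if anything is mounted (example mount point)
sed -i '/\/dev\/vdb/d' /etc/fstab   # drop any fstab entry so it stays unmounted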

Configure node time synchronization

apt install apt-transport-https ca-certificates curl software-properties-common chrony -y
# add the internal NTP server to /etc/chrony/chrony.conf:
vim /etc/chrony/chrony.conf
server ntp.xx.xx.cn minpoll 4 maxpoll 10 iburst # internal NTP server
systemctl restart chronyd

root@ceph-node01:/etc/ceph-cluster# chronyc sources -v
210 Number of sources = 1
 
  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample              
===============================================================================
^* 100.xx.0.35                  4   8   377   177  +3929ns[+1073ns] +/-  273ms

root@ceph-node01:/etc/ceph-cluster# tail -n 3 /etc/hosts
10.17.3.14 ceph-node01.xx.local ceph-node01
10.17.3.15 ceph-node02.xx.local ceph-node02
10.17.3.16 ceph-node03.xx.local ceph-node03

Deploying with ceph-deploy

# import the Ceph release key (via a SOCKS5 proxy) and add the pacific repo
curl -x socks5://10.17.3.154:7891 -LO https://download.ceph.com/keys/release.asc
apt-key add release.asc
echo "deb https://download.ceph.com/debian-pacific/ bionic main" | tee /etc/apt/sources.list.d/ceph.list
# create an unprivileged deployment account
groupadd -r -g 2088 cephadmin && useradd -r -m -s /bin/bash -u 2088 -g 2088 cephadmin && echo "cephadmin:xx" | chpasswd
echo "cephadmin ALL=(ALL:ALL) NOPASSWD: ALL" >> /etc/sudoers
su cephadmin
 
apt install ceph-common -y
mkdir -pv /etc/ceph-cluster   # working directory for ceph-deploy and the generated keyrings
 
 
ceph-deploy install --release pacific ceph-node01
ceph-deploy install --release pacific ceph-node02
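The logs below show repeated root password prompts, meaning passwordless SSH was not yet in place. One way to prepare the deploy node (the pip install is an assumption; the logged path /usr/local/bin/ceph-deploy suggests a pip-installed ceph-deploy 2.0.1; the walkthrough invokes ceph-deploy via sudo, so it connects as root):

pip install ceph-deploy==2.0.1                 # assumption: ceph-deploy installed via pip
sudo ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
for host in ceph-node01 ceph-node02 ceph-node03; do
  sudo ssh-copy-id root@"$host"                # avoids the password prompts in the logs below
done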


Initialize the cluster from the deploy node

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy new --cluster-network 10.17.3.0/24 --public-network 10.17.3.0/24 ceph-node1.xx.local
sudo: unable to resolve host ceph-node01: Resource temporarily unavailable
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy new --cluster-network 10.17.3.0/24 --public-network 10.17.3.0/24 ceph-node1.xx.local
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffffb6791c20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph-node1.xx.local']
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0xffffb6772410>
[ceph_deploy.cli][INFO  ]  public_network                : 10.17.3.0/24
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : 10.17.3.0/24
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-node1.xx.local][DEBUG ] connected to host: ceph-node01
[ceph-node1.xx.local][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph-node1.xx.local
[ceph_deploy.new][WARNIN] could not connect via SSH
[ceph_deploy.new][INFO  ] will connect again with password prompt
root@ceph-node1.xx.local's password:
Permission denied, please try again.
root@ceph-node1.xx.local's password:
[ceph-node1.xx.local][DEBUG ] connected to host: ceph-node1.xx.local
[ceph-node1.xx.local][DEBUG ] detect platform information from remote host
[ceph-node1.xx.local][DEBUG ] detect machine type
[ceph_deploy.new][INFO  ] adding public keys to authorized_keys
[ceph-node1.xx.local][DEBUG ] append contents to file
root@ceph-node1.xx.local's password:
root@ceph-node1.xx.local's password:
[ceph-node1.xx.local][DEBUG ] connected to host: ceph-node1.xx.local
[ceph-node1.xx.local][DEBUG ] detect platform information from remote host
[ceph-node1.xx.local][DEBUG ] detect machine type
[ceph-node1.xx.local][DEBUG ] find the location of an executable
[ceph-node1.xx.local][INFO  ] Running command: /bin/ip link show
[ceph-node1.xx.local][INFO  ] Running command: /bin/ip addr show
[ceph-node1.xx.local][DEBUG ] IP addresses found: [u'10.108.101.32', u'10.104.61.120', u'10.98.52.88', u'10.244.24.0', u'10.244.24.1', u'10.99.115.16', u'10.106.43.191', u'10.104.75.139', u'10.105.7.41', u'10.100.142.181', u'10.97.252.180', u'10.110.23.237', u'10.98.213.254', u'10.96.0.1', u'10.101.27.103', u'10.99.3.237', u'10.97.241.24', u'10.17.3.14', u'10.110.31.40', u'10.109.24.221', u'10.97.44.182', u'10.99.46.158', u'10.100.68.217', u'10.96.87.174', u'10.97.255.233', u'10.111.118.0', u'10.96.0.10', u'10.96.23.220', u'10.105.34.53', u'10.106.170.182', u'10.106.145.33']
[ceph_deploy.new][DEBUG ] Resolving host ceph-node1.xx.local
[ceph_deploy.new][DEBUG ] Monitor ceph-node1 at 10.17.3.14
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-node1']
[ceph_deploy.new][DEBUG ] Monitor addrs are [u'10.17.3.14']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
cephadmin@ceph-node01:/etc/ceph-cluster$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
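The generated ceph.conf should look roughly like the sketch below; the fsid, initial member, and mon address are taken from the surrounding logs, and the auth lines are the usual ceph-deploy defaults, so the exact file may differ:

[global]
fsid = 5a6fdfb7-81a1-40f6-97b7-c92f96de9ac5
public_network = 10.17.3.0/24
cluster_network = 10.17.3.0/24
mon_initial_members = ceph-node1
mon_host = 10.17.3.14
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx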

Initialize the nodes (install Ceph packages)

sudo ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node1.xx.local ceph-node2.xx.local ceph-node3.xx.local
 
sudo: unable to resolve host ceph-node01: Resource temporarily unavailable
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node1.xx.local ceph-node2.xx.local ceph-node3.xx.local
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff9f33dc80>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  dev_commit                    : None
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : False
[ceph_deploy.cli][INFO  ]  func                          : <function install at 0xffff9f3fac50>
[ceph_deploy.cli][INFO  ]  install_mgr                   : False
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['ceph-node1.xx.local', 'ceph-node2.xx.local', 'ceph-node3.xx.local']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  install_tests                 : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : False
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  nogpgcheck                    : True
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : None
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version mimic on cluster ceph hosts ceph-node1.xx.local ceph-node2.xx.local ceph-node3.xx.local
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-node1.xx.local ...
root@ceph-node1.xx.local's password:
root@ceph-node1.xx.local's password:
[ceph-node1.xx.local][DEBUG ] connected to host: ceph-node1.xx.local
[ceph-node1.xx.local][DEBUG ] detect platform information from remote host
[ceph-node1.xx.local][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node1.xx.local][INFO  ] installing Ceph on ceph-node1.xx.local
[ceph-node1.xx.local][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[ceph-node1.xx.local][DEBUG ] Hit:1 https://mirrors.ustc.edu.cn/kubernetes/core:/stable:/v1.30/deb  InRelease
[ceph-node1.xx.local][DEBUG ] Hit:2 http://ports.ubuntu.com/ubuntu-ports bionic InRelease
[ceph-node1.xx.local][DEBUG ] Hit:3 http://ports.ubuntu.com/ubuntu-ports bionic-updates InRelease
[ceph-node1.xx.local][DEBUG ] Hit:4 http://ports.ubuntu.com/ubuntu-ports bionic-backports InRelease
[ceph-node1.xx.local][DEBUG ] Hit:5 http://ports.ubuntu.com/ubuntu-ports bionic-security InRelease
[ceph-node1.xx.local][DEBUG ] Hit:6 https://download.ceph.com/debian-pacific bionic InRelease
[ceph-node1.xx.local][DEBUG ] Reading package lists...
[ceph-node1.xx.local][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
[ceph-node1.xx.local][DEBUG ] Reading package lists...
[ceph-node1.xx.local][DEBUG ] Building dependency tree...
[ceph-node1.xx.local][DEBUG ] Reading state information...
[ceph-node1.xx.local][DEBUG ] ca-certificates is already the newest version (20230311ubuntu0.18.04.1).
[ceph-node1.xx.local][DEBUG ] apt-transport-https is already the newest version (1.6.17).
[ceph-node1.xx.local][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 340 not upgraded.
[ceph-node1.xx.local][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[ceph-node1.xx.local][DEBUG ] Hit:1 https://mirrors.ustc.edu.cn/kubernetes/core:/stable:/v1.30/deb  InRelease
[ceph-node1.xx.local][DEBUG ] Hit:2 http://ports.ubuntu.com/ubuntu-ports bionic InRelease
[ceph-node1.xx.local][DEBUG ] Hit:3 https://download.ceph.com/debian-pacific bionic InRelease
[ceph-node1.xx.local][DEBUG ] Hit:4 http://ports.ubuntu.com/ubuntu-ports bionic-updates InRelease
[ceph-node1.xx.local][DEBUG ] Hit:5 http://ports.ubuntu.com/ubuntu-ports bionic-backports InRelease
[ceph-node1.xx.local][DEBUG ] Hit:6 http://ports.ubuntu.com/ubuntu-ports bionic-security InRelease
[ceph-node1.xx.local][DEBUG ] Reading package lists...
[ceph-node1.xx.local][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ceph ceph-osd ceph-mds ceph-mon radosgw
[ceph-node1.xx.local][DEBUG ] Reading package lists...
[ceph-node1.xx.local][DEBUG ] Building dependency tree...
[ceph-node1.xx.local][DEBUG ] Reading state information...
[ceph-node1.xx.local][DEBUG ] The following packages were automatically installed and are no longer required:
[ceph-node1.xx.local][DEBUG ]   formencode-i18n libpython2.7 python-asn1crypto python-bcrypt python-bs4
[ceph-node1.xx.local][DEBUG ]   python-ceph-argparse python-certifi python-cffi-backend python-chardet
[ceph-node1.xx.local][DEBUG ]   python-cherrypy3 python-cryptography python-dnspython python-enum34
[ceph-node1.xx.local][DEBUG ]   python-formencode python-idna python-ipaddress python-jinja2 python-logutils
[ceph-node1.xx.local][DEBUG ]   python-mako python-markupsafe python-openssl python-paste python-pastedeploy
[ceph-node1.xx.local][DEBUG ]   python-pecan python-pkg-resources python-prettytable python-rbd
[ceph-node1.xx.local][DEBUG ]   python-requests python-simplegeneric python-simplejson python-singledispatch
[ceph-node1.xx.local][DEBUG ]   python-six python-tempita python-urllib3 python-waitress python-webob
[ceph-node1.xx.local][DEBUG ]   python-webtest python-werkzeug
[ceph-node1.xx.local][DEBUG ] Use 'apt autoremove' to remove them.
[ceph-node1.xx.local][DEBUG ] The following additional packages will be installed:
[ceph-node1.xx.local][DEBUG ]   ceph-base ceph-common ceph-mgr ceph-mgr-modules-core libcephfs2 libjaeger
[ceph-node1.xx.local][DEBUG ]   liblua5.3-0 librabbitmq4 librados2 libradosstriper1 librbd1 librdkafka1
[ceph-node1.xx.local][DEBUG ]   librdmacm1 librgw2 libsqlite3-mod-ceph python3-bcrypt python3-bs4
[ceph-node1.xx.local][DEBUG ]   python3-ceph-argparse python3-ceph-common python3-cephfs python3-cherrypy3
[ceph-node1.xx.local][DEBUG ]   python3-dateutil python3-distutils python3-jwt python3-lib2to3
[ceph-node1.xx.local][DEBUG ]   python3-logutils python3-mako python3-markupsafe python3-paste
[ceph-node1.xx.local][DEBUG ]   python3-pastedeploy python3-pecan python3-prettytable python3-rados
[ceph-node1.xx.local][DEBUG ]   python3-rbd python3-rgw python3-simplegeneric python3-singledispatch
[ceph-node1.xx.local][DEBUG ]   python3-tempita python3-waitress python3-webob python3-webtest
[ceph-node1.xx.local][DEBUG ]   python3-werkzeug
[ceph-node1.xx.local][DEBUG ] Suggested packages:
[ceph-node1.xx.local][DEBUG ]   python3-influxdb python3-crypto python3-beaker python-mako-doc httpd-wsgi
[ceph-node1.xx.local][DEBUG ]   libapache2-mod-python libapache2-mod-scgi libjs-mochikit python-pecan-doc
[ceph-node1.xx.local][DEBUG ]   python-waitress-doc python-webob-doc python-webtest-doc ipython3
[ceph-node1.xx.local][DEBUG ]   python3-lxml python3-termcolor python3-watchdog python-werkzeug-doc
[ceph-node1.xx.local][DEBUG ] Recommended packages:
[ceph-node1.xx.local][DEBUG ]   nvme-cli smartmontools ceph-fuse ceph-mgr-dashboard
[ceph-node1.xx.local][DEBUG ]   ceph-mgr-diskprediction-local ceph-mgr-k8sevents ceph-mgr-cephadm
[ceph-node1.xx.local][DEBUG ]   python3-lxml python3-routes python3-simplejson python3-pastescript
[ceph-node1.xx.local][DEBUG ]   python3-pyinotify
[ceph-node1.xx.local][DEBUG ] The following packages will be REMOVED:
[ceph-node1.xx.local][DEBUG ]   python-cephfs python-rados python-rgw
[ceph-node1.xx.local][DEBUG ] The following NEW packages will be installed:
[ceph-node1.xx.local][DEBUG ]   ceph-mgr-modules-core libjaeger liblua5.3-0 librabbitmq4 librdkafka1
[ceph-node1.xx.local][DEBUG ]   librdmacm1 libsqlite3-mod-ceph python3-bcrypt python3-bs4
[ceph-node1.xx.local][DEBUG ]   python3-ceph-argparse python3-ceph-common python3-cephfs python3-cherrypy3
[ceph-node1.xx.local][DEBUG ]   python3-dateutil python3-distutils python3-jwt python3-lib2to3
[ceph-node1.xx.local][DEBUG ]   python3-logutils python3-mako python3-markupsafe python3-paste
[ceph-node1.xx.local][DEBUG ]   python3-pastedeploy python3-pecan python3-prettytable python3-rados
[ceph-node1.xx.local][DEBUG ]   python3-rbd python3-rgw python3-simplegeneric python3-singledispatch
[ceph-node1.xx.local][DEBUG ]   python3-tempita python3-waitress python3-webob python3-webtest
[ceph-node1.xx.local][DEBUG ]   python3-werkzeug
[ceph-node1.xx.local][DEBUG ] The following packages will be upgraded:
[ceph-node1.xx.local][DEBUG ]   ceph ceph-base ceph-common ceph-mds ceph-mgr ceph-mon ceph-osd libcephfs2
[ceph-node1.xx.local][DEBUG ]   librados2 libradosstriper1 librbd1 librgw2 radosgw
[ceph-node1.xx.local][DEBUG ] 13 upgraded, 34 newly installed, 3 to remove and 327 not upgraded.
[ceph-node1.xx.local][DEBUG ] Need to get 70.2 MB of archives.
[ceph-node1.xx.local][DEBUG ] After this operation, 117 MB of additional disk space will be used.
[ceph-node1.xx.local][DEBUG ] Get:1 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 librdmacm1 arm64 17.1-1ubuntu0.2 [49.1 kB]
[ceph-node1.xx.local][DEBUG ] Get:2 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 liblua5.3-0 arm64 5.3.3-1ubuntu0.18.04.1 [105 kB]
[ceph-node1.xx.local][DEBUG ] Get:3 http://ports.ubuntu.com/ubuntu-ports bionic-updates/universe arm64 librabbitmq4 arm64 0.8.0-1ubuntu0.18.04.2 [30.3 kB]
[ceph-node1.xx.local][DEBUG ] Get:4 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 librdkafka1 arm64 0.11.3-1build1 [245 kB]
[ceph-node1.xx.local][DEBUG ] Get:5 https://download.ceph.com/debian-pacific bionic/main arm64 libradosstriper1 arm64 16.2.15-1bionic [387 kB]
[ceph-node1.xx.local][DEBUG ] Get:6 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-dateutil all 2.6.1-1 [52.3 kB]
[ceph-node1.xx.local][DEBUG ] Get:7 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 python3-bcrypt arm64 3.1.4-2 [25.3 kB]
[ceph-node1.xx.local][DEBUG ] Get:8 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 python3-cherrypy3 all 8.9.1-2 [160 kB]
[ceph-node1.xx.local][DEBUG ] Get:9 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 python3-lib2to3 all 3.6.9-1~18.04 [77.4 kB]
[ceph-node1.xx.local][DEBUG ] Get:10 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 python3-distutils all 3.6.9-1~18.04 [144 kB]
[ceph-node1.xx.local][DEBUG ] Get:11 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 python3-jwt all 1.5.3+ds1-1ubuntu0.1 [16.6 kB]
[ceph-node1.xx.local][DEBUG ] Get:12 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 python3-logutils all 0.3.3-5 [16.7 kB]
[ceph-node1.xx.local][DEBUG ] Get:13 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-markupsafe arm64 1.0-1build1 [13.2 kB]
[ceph-node1.xx.local][DEBUG ] Get:14 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 python3-mako all 1.0.7+ds1-1ubuntu0.2 [59.4 kB]
[ceph-node1.xx.local][DEBUG ] Get:15 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 python3-simplegeneric all 0.8.1-1 [11.5 kB]
[ceph-node1.xx.local][DEBUG ] Get:16 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 python3-singledispatch all 3.4.0.3-2 [7,022 B]
[ceph-node1.xx.local][DEBUG ] Get:17 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-webob all 1:1.7.3-2fakesync1 [64.3 kB]
[ceph-node1.xx.local][DEBUG ] Get:18 https://download.ceph.com/debian-pacific bionic/main arm64 radosgw arm64 16.2.15-1bionic [9,564 kB]
[ceph-node1.xx.local][DEBUG ] Get:19 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-bs4 all 4.6.0-1 [67.8 kB]
[ceph-node1.xx.local][DEBUG ] Get:20 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-waitress all 1.0.1-1 [53.4 kB]
[ceph-node1.xx.local][DEBUG ] Get:21 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-tempita all 0.5.2-2 [13.9 kB]
[ceph-node1.xx.local][DEBUG ] Get:22 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-paste all 2.0.3+dfsg-4ubuntu1 [456 kB]
[ceph-node1.xx.local][DEBUG ] Get:23 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-pastedeploy all 1.5.2-4 [13.4 kB]
[ceph-node1.xx.local][DEBUG ] Get:24 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-webtest all 2.0.28-1ubuntu1 [27.9 kB]
[ceph-node1.xx.local][DEBUG ] Get:25 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 python3-pecan all 1.2.1-2 [86.1 kB]
[ceph-node1.xx.local][DEBUG ] Get:26 http://ports.ubuntu.com/ubuntu-ports bionic-updates/universe arm64 python3-werkzeug all 0.14.1+dfsg1-1ubuntu0.2 [175 kB]
[ceph-node1.xx.local][DEBUG ] Get:27 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-prettytable all 0.7.2-3 [19.7 kB]
 
.....
[ceph-node3.xx.local][DEBUG ] Setting up ceph (16.2.15-1bionic) ...
[ceph-node3.xx.local][DEBUG ] Processing triggers for systemd (237-3ubuntu10.31) ...
[ceph-node3.xx.local][DEBUG ] Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
[ceph-node3.xx.local][DEBUG ] Processing triggers for ureadahead (0.100.0-21) ...
[ceph-node3.xx.local][DEBUG ] ureadahead will be reprofiled on next reboot
[ceph-node3.xx.local][DEBUG ] Processing triggers for libc-bin (2.27-3ubuntu1) ...
[ceph-node3.xx.local][INFO  ] Running command: ceph --version
[ceph-node3.xx.local][DEBUG ] ceph version 16.2.15 (618f440892089921c3e944a991122ddc44e60516) pacific (stable)

Add the ceph-mon service to the cluster (mon initialization)

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy  mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff82b5ceb0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0xffff82bc9cd0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-node01
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-node01 ...
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: Ubuntu 18.04 bionic
[ceph-node01][DEBUG ] determining if provided host has same hostname in remote
[ceph-node01][DEBUG ] get remote short hostname
[ceph-node01][DEBUG ] deploying mon to ceph-node01
[ceph-node01][DEBUG ] get remote short hostname
[ceph-node01][DEBUG ] remote hostname: ceph-node01
[ceph-node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node01][DEBUG ] create the mon path if it does not exist
[ceph-node01][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-node01/done
[ceph-node01][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-node01][DEBUG ] create the init path if it does not exist
[ceph-node01][INFO  ] Running command: systemctl enable ceph.target
[ceph-node01][INFO  ] Running command: systemctl enable ceph-mon@ceph-node01
[ceph-node01][INFO  ] Running command: systemctl start ceph-mon@ceph-node01
[ceph-node01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node01.asok mon_status
[ceph-node01][DEBUG ] ********************************************************************************
[ceph-node01][DEBUG ] status for monitor: mon.ceph-node01
[ceph-node01][DEBUG ] {
[ceph-node01][DEBUG ]   "election_epoch": 3,
[ceph-node01][DEBUG ]   "extra_probe_peers": [],
[ceph-node01][DEBUG ]   "feature_map": {
[ceph-node01][DEBUG ]     "mon": [
[ceph-node01][DEBUG ]       {
[ceph-node01][DEBUG ]         "features": "0x3f01cfbdfffdffff",
[ceph-node01][DEBUG ]         "num": 1,
[ceph-node01][DEBUG ]         "release": "luminous"
[ceph-node01][DEBUG ]       }
[ceph-node01][DEBUG ]     ]
[ceph-node01][DEBUG ]   },
[ceph-node01][DEBUG ]   "features": {
[ceph-node01][DEBUG ]     "quorum_con": "4540138314316775423",
[ceph-node01][DEBUG ]     "quorum_mon": [
[ceph-node01][DEBUG ]       "kraken",
[ceph-node01][DEBUG ]       "luminous",
[ceph-node01][DEBUG ]       "mimic",
[ceph-node01][DEBUG ]       "osdmap-prune",
[ceph-node01][DEBUG ]       "nautilus",
[ceph-node01][DEBUG ]       "octopus",
[ceph-node01][DEBUG ]       "pacific",
[ceph-node01][DEBUG ]       "elector-pinging"
[ceph-node01][DEBUG ]     ],
[ceph-node01][DEBUG ]     "required_con": "2449958747317026820",
[ceph-node01][DEBUG ]     "required_mon": [
[ceph-node01][DEBUG ]       "kraken",
[ceph-node01][DEBUG ]       "luminous",
[ceph-node01][DEBUG ]       "mimic",
[ceph-node01][DEBUG ]       "osdmap-prune",
[ceph-node01][DEBUG ]       "nautilus",
[ceph-node01][DEBUG ]       "octopus",
[ceph-node01][DEBUG ]       "pacific",
[ceph-node01][DEBUG ]       "elector-pinging"
[ceph-node01][DEBUG ]     ]
[ceph-node01][DEBUG ]   },
[ceph-node01][DEBUG ]   "monmap": {
[ceph-node01][DEBUG ]     "created": "2024-10-08T10:12:42.715558Z",
[ceph-node01][DEBUG ]     "disallowed_leaders: ": "",
[ceph-node01][DEBUG ]     "election_strategy": 1,
[ceph-node01][DEBUG ]     "epoch": 1,
[ceph-node01][DEBUG ]     "features": {
[ceph-node01][DEBUG ]       "optional": [],
[ceph-node01][DEBUG ]       "persistent": [
[ceph-node01][DEBUG ]         "kraken",
[ceph-node01][DEBUG ]         "luminous",
[ceph-node01][DEBUG ]         "mimic",
[ceph-node01][DEBUG ]         "osdmap-prune",
[ceph-node01][DEBUG ]         "nautilus",
[ceph-node01][DEBUG ]         "octopus",
[ceph-node01][DEBUG ]         "pacific",
[ceph-node01][DEBUG ]         "elector-pinging"
[ceph-node01][DEBUG ]       ]
[ceph-node01][DEBUG ]     },
[ceph-node01][DEBUG ]     "fsid": "5a6fdfb7-81a1-40f6-97b7-c92f96de9ac5",
[ceph-node01][DEBUG ]     "min_mon_release": 16,
[ceph-node01][DEBUG ]     "min_mon_release_name": "pacific",
[ceph-node01][DEBUG ]     "modified": "2024-10-08T10:12:42.715558Z",
[ceph-node01][DEBUG ]     "mons": [
[ceph-node01][DEBUG ]       {
[ceph-node01][DEBUG ]         "addr": "10.17.3.14:6789/0",
[ceph-node01][DEBUG ]         "crush_location": "{}",
[ceph-node01][DEBUG ]         "name": "ceph-node01",
[ceph-node01][DEBUG ]         "priority": 0,
[ceph-node01][DEBUG ]         "public_addr": "10.17.3.14:6789/0",
[ceph-node01][DEBUG ]         "public_addrs": {
[ceph-node01][DEBUG ]           "addrvec": [
[ceph-node01][DEBUG ]             {
[ceph-node01][DEBUG ]               "addr": "10.17.3.14:3300",
[ceph-node01][DEBUG ]               "nonce": 0,
[ceph-node01][DEBUG ]               "type": "v2"
[ceph-node01][DEBUG ]             },
[ceph-node01][DEBUG ]             {
[ceph-node01][DEBUG ]               "addr": "10.17.3.14:6789",
[ceph-node01][DEBUG ]               "nonce": 0,
[ceph-node01][DEBUG ]               "type": "v1"
[ceph-node01][DEBUG ]             }
[ceph-node01][DEBUG ]           ]
[ceph-node01][DEBUG ]         },
[ceph-node01][DEBUG ]         "rank": 0,
[ceph-node01][DEBUG ]         "weight": 0
[ceph-node01][DEBUG ]       }
[ceph-node01][DEBUG ]     ],
[ceph-node01][DEBUG ]     "removed_ranks: ": "",
[ceph-node01][DEBUG ]     "stretch_mode": false,
[ceph-node01][DEBUG ]     "tiebreaker_mon": ""
[ceph-node01][DEBUG ]   },
[ceph-node01][DEBUG ]   "name": "ceph-node01",
[ceph-node01][DEBUG ]   "outside_quorum": [],
[ceph-node01][DEBUG ]   "quorum": [
[ceph-node01][DEBUG ]     0
[ceph-node01][DEBUG ]   ],
[ceph-node01][DEBUG ]   "quorum_age": 77,
[ceph-node01][DEBUG ]   "rank": 0,
[ceph-node01][DEBUG ]   "state": "leader",
[ceph-node01][DEBUG ]   "stretch_mode": false,
[ceph-node01][DEBUG ]   "sync_provider": []
[ceph-node01][DEBUG ] }
[ceph-node01][DEBUG ] ********************************************************************************
[ceph-node01][INFO  ] monitor: mon.ceph-node01 is running
[ceph-node01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node01.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.ceph-node01
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] find the location of an executable
[ceph-node01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node01.asok mon_status
[ceph_deploy.mon][INFO  ] mon.ceph-node01 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpWWGCyS
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] get remote short hostname
[ceph-node01][DEBUG ] fetch remote file
[ceph-node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph-node01.asok mon_status
[ceph-node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node01/keyring auth get client.admin
[ceph-node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node01/keyring auth get client.bootstrap-mds
[ceph-node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node01/keyring auth get client.bootstrap-mgr
[ceph-node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node01/keyring auth get client.bootstrap-osd
[ceph-node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node01/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpWWGCyS

Verify the generated files

cephadmin@ceph-node01:/etc/ceph-cluster$ ls /etc/ceph/
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring  rbdmap  tmpAi40Po  tmpSILILE  tmpwq6jcL
cephadmin@ceph-node01:/etc/ceph-cluster$ ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-mgr.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-rgw.keyring  ceph.client.admin.keyring  ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring

Distribute the ceph admin keyring to all machines

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy admin ceph-node01 ceph-node02 ceph-node03
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy admin ceph-node01 ceph-node02 ceph-node03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff99fbb0f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph-node01', 'ceph-node02', 'ceph-node03']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0xffff9a0d5c50>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node01
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node02
The authenticity of host 'ceph-node02 (10.17.3.15)' can't be established.
ECDSA key fingerprint is SHA256:G3fJV27edH5tu4HNY0ArPdlNDPO9eaIEQKOdd1MAcdo.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph-node02' (ECDSA) to the list of known hosts.
root@ceph-node02's password:
root@ceph-node02's password:
[ceph-node02][DEBUG ] connected to host: ceph-node02

Deploy the ceph-mgr node; node02 and node03 will be added later

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy mgr create ceph-node01
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy mgr create ceph-node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('ceph-node01', 'ceph-node01')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff9f0271e0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0xffff9f11b350>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph-node01:ceph-node01
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-node01
[ceph-node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node01][WARNIN] mgr keyring does not exist yet, creating one
[ceph-node01][DEBUG ] create a keyring file
[ceph-node01][DEBUG ] create path recursively if it doesn't exist
[ceph-node01][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-node01 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-node01/keyring
[ceph-node01][INFO  ] Running command: systemctl enable ceph-mgr@ceph-node01
[ceph-node01][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-node01.service → /lib/systemd/system/ceph-mgr@.service.
[ceph-node01][INFO  ] Running command: systemctl start ceph-mgr@ceph-node01
[ceph-node01][INFO  ] Running command: systemctl enable ceph.target

Verify on the corresponding node that the mgr service is running

cephadmin@ceph-node01:/etc/ceph-cluster$ ps -ef |grep ceph-
root        4243       1  0 17:36 ?        00:00:00 /usr/bin/python2.7 /usr/bin/ceph-crash
ceph       11656       1  0 18:39 ?        00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id ceph-node01 --setuser ceph --setgroup ceph
root       11707    5223  0 18:39 pts/1    00:00:00 tail -f /var/log/ceph/ceph-mon.ceph-node01.log
ceph       12301       1  9 18:45 ?        00:00:05 /usr/bin/ceph-mgr -f --cluster ceph --id ceph-node01 --setuser ceph --setgroup ceph
cephadm+   12529    9641  0 18:46 pts/0    00:00:00 grep --color=auto ceph-

Push the cluster admin credentials to node01, node02, and node03

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy admin ceph-node01
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy admin ceph-node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff83ba20f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph-node01']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0xffff83cbcc50>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node01
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

Initialize the storage nodes, i.e. the nodes that hold the data and run the bulk of the cluster's OSDs

# run for each storage node
ceph-deploy install --release pacific ceph-node02
ceph-deploy install --release pacific ceph-node03
root@ceph-node01:/etc/ceph-cluster# ceph-deploy install --release pacific ceph-node01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy install --release pacific ceph-node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffffa437daf0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  dev_commit                    : None
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : True
[ceph_deploy.cli][INFO  ]  func                          : <function install at 0xffffa4439c50>
[ceph_deploy.cli][INFO  ]  install_mgr                   : False
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['ceph-node01']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  install_tests                 : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : False
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  nogpgcheck                    : False
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : pacific
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version pacific on cluster ceph hosts ceph-node01
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-node01 ...
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node01][INFO  ] installing Ceph on ceph-node01
[ceph-node01][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[ceph-node01][DEBUG ] Hit:1 https://mirrors.ustc.edu.cn/kubernetes/core:/stable:/v1.30/deb  InRelease
[ceph-node01][DEBUG ] Hit:2 http://ports.ubuntu.com/ubuntu-ports bionic InRelease
[ceph-node01][DEBUG ] Hit:3 http://ports.ubuntu.com/ubuntu-ports bionic-updates InRelease
[ceph-node01][DEBUG ] Hit:4 https://download.ceph.com/debian-pacific bionic InRelease
[ceph-node01][DEBUG ] Hit:5 http://ports.ubuntu.com/ubuntu-ports bionic-backports InRelease
[ceph-node01][DEBUG ] Hit:6 http://ports.ubuntu.com/ubuntu-ports bionic-security InRelease
[ceph-node01][DEBUG ] Reading package lists...
[ceph-node01][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
[ceph-node01][DEBUG ] Reading package lists...
[ceph-node01][DEBUG ] Building dependency tree...
[ceph-node01][DEBUG ] Reading state information...
[ceph-node01][DEBUG ] ca-certificates is already the newest version (20230311ubuntu0.18.04.1).
[ceph-node01][DEBUG ] apt-transport-https is already the newest version (1.6.17).
[ceph-node01][DEBUG ] The following packages were automatically installed and are no longer required:
[ceph-node01][DEBUG ]   formencode-i18n libpython2.7 python-asn1crypto python-bcrypt python-bs4
[ceph-node01][DEBUG ]   python-ceph-argparse python-certifi python-cffi-backend python-chardet
[ceph-node01][DEBUG ]   python-cherrypy3 python-cryptography python-dnspython python-enum34
[ceph-node01][DEBUG ]   python-formencode python-idna python-ipaddress python-jinja2 python-logutils
[ceph-node01][DEBUG ]   python-mako python-markupsafe python-openssl python-paste python-pastedeploy
[ceph-node01][DEBUG ]   python-pecan python-pkg-resources python-prettytable python-rbd
[ceph-node01][DEBUG ]   python-requests python-simplegeneric python-simplejson python-singledispatch
[ceph-node01][DEBUG ]   python-six python-tempita python-urllib3 python-waitress python-webob
[ceph-node01][DEBUG ]   python-webtest python-werkzeug
[ceph-node01][DEBUG ] Use 'apt autoremove' to remove them.
[ceph-node01][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 327 not upgraded.
[ceph-node01][INFO  ] Running command: wget -O release.asc https://download.ceph.com/keys/release.asc
[ceph-node01][WARNIN] --2024-10-08 19:02:09--  https://download.ceph.com/keys/release.asc
[ceph-node01][WARNIN] Resolving download.ceph.com (download.ceph.com)... 158.69.68.124, 2607:5300:201:2000::3:58a1
[ceph-node01][WARNIN] Connecting to download.ceph.com (download.ceph.com)|158.69.68.124|:443... connected.
[ceph-node01][WARNIN] HTTP request sent, awaiting response... 200 OK
[ceph-node01][WARNIN] Length: 1645 (1.6K) [application/octet-stream]
[ceph-node01][WARNIN] Saving to: 'release.asc'
[ceph-node01][WARNIN]
[ceph-node01][WARNIN]      0K .                                                     100%  439M=0s
[ceph-node01][WARNIN]
[ceph-node01][WARNIN] 2024-10-08 19:02:10 (439 MB/s) - 'release.asc' saved [1645/1645]

List the disks on each node

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy disk list ceph-node01.xx.local
cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy disk list ceph-node02.xx.local
cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy disk list ceph-node03.xx.local
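Equivalently, a quick local check directly on each node (the column selection here is just one convenient choice):

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/vdb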

Wipe the disks

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy disk zap ceph-node01 /dev/vdb
cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy disk zap ceph-node02 /dev/vdb
cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy disk zap ceph-node03 /dev/vdb

Output

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy disk zap ceph-node01 /dev/vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff93fe9f50>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ceph-node01
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0xffff940514d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/vdb']
[ceph_deploy.osd][DEBUG ] zapping /dev/vdb on ceph-node01
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node01][DEBUG ] zeroing last few blocks of device
[ceph-node01][DEBUG ] find the location of an executable
[ceph-node01][INFO  ] Running command: /usr/sbin/ceph-volume lvm zap /dev/vdb
[ceph-node01][WARNIN] --> Zapping: /dev/vdb
[ceph-node01][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node01][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/vdb bs=1M count=10 conv=fsync
[ceph-node01][WARNIN]  stderr: 10+0 records in
[ceph-node01][WARNIN] 10+0 records out
[ceph-node01][WARNIN]  stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0246339 s, 426 MB/s
[ceph-node01][WARNIN] --> Zapping successful for: <Raw Device: /dev/vdb>

Add OSDs, with the data, metadata (block.db), and WAL (block.wal) all co-located on the same device

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy osd create ceph-node01.xx.local --data /dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy osd create ceph-node01.xx.local --data /dev/vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff811c2aa0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-node01.xx.local
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0xffff81225450>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/vdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
[ceph-node01.xx.local][DEBUG ] connected to host: ceph-node01.xx.local
[ceph-node01.xx.local][DEBUG ] detect platform information from remote host
[ceph-node01.xx.local][DEBUG ] detect machine type
[ceph-node01.xx.local][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node01.xx.local
[ceph-node01.xx.local][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node01.xx.local][WARNIN] osd keyring does not exist yet, creating one
[ceph-node01.xx.local][DEBUG ] create a keyring file
[ceph-node01.xx.local][DEBUG ] find the location of an executable
[ceph-node01.xx.local][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph-node01.xx.local][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node01.xx.local][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 66fd9200-a35e-4a36-85a2-a512b09826de
[ceph-node01.xx.local][WARNIN] Running command: vgcreate --force --yes ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9 /dev/vdb
[ceph-node01.xx.local][WARNIN]  stdout: Physical volume "/dev/vdb" successfully created.
[ceph-node01.xx.local][WARNIN]  stdout: Volume group "ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9" successfully created
[ceph-node01.xx.local][WARNIN] Running command: lvcreate --yes -l 262143 -n osd-block-66fd9200-a35e-4a36-85a2-a512b09826de ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9
[ceph-node01.xx.local][WARNIN]  stdout: Logical volume "osd-block-66fd9200-a35e-4a36-85a2-a512b09826de" created.
[ceph-node01.xx.local][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node01.xx.local][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ceph-node01.xx.local][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9/osd-block-66fd9200-a35e-4a36-85a2-a512b09826de
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[ceph-node01.xx.local][WARNIN] Running command: /bin/ln -s /dev/ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9/osd-block-66fd9200-a35e-4a36-85a2-a512b09826de /var/lib/ceph/osd/ceph-0/block
[ceph-node01.xx.local][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ceph-node01.xx.local][WARNIN]  stderr: 2024-10-08T19:14:26.763+0800 ffff8e3ea1f0 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-node01.xx.local][WARNIN] 2024-10-08T19:14:26.763+0800 ffff8e3ea1f0 -1 AuthRegistry(0xffff8805c4d0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-node01.xx.local][WARNIN]  stderr: got monmap epoch 1
[ceph-node01.xx.local][WARNIN] --> Creating keyring file for osd.0
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ceph-node01.xx.local][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 66fd9200-a35e-4a36-85a2-a512b09826de --setuser ceph --setgroup ceph
[ceph-node01.xx.local][WARNIN]  stderr: 2024-10-08T19:14:27.315+0800 ffffb41ab010 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
[ceph-node01.xx.local][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-node01.xx.local][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9/osd-block-66fd9200-a35e-4a36-85a2-a512b09826de --path /var/lib/ceph/osd/ceph-0 --no-mon-config
[ceph-node01.xx.local][WARNIN] Running command: /bin/ln -snf /dev/ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9/osd-block-66fd9200-a35e-4a36-85a2-a512b09826de /var/lib/ceph/osd/ceph-0/block
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-node01.xx.local][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-0-66fd9200-a35e-4a36-85a2-a512b09826de
[ceph-node01.xx.local][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-66fd9200-a35e-4a36-85a2-a512b09826de.service → /lib/systemd/system/ceph-volume@.service.
[ceph-node01.xx.local][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@0
[ceph-node01.xx.local][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /lib/systemd/system/ceph-osd@.service.
[ceph-node01.xx.local][WARNIN] Running command: /bin/systemctl start ceph-osd@0
[ceph-node01.xx.local][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[ceph-node01.xx.local][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[ceph-node01.xx.local][INFO  ] checking OSD status...
[ceph-node01.xx.local][DEBUG ] find the location of an executable
[ceph-node01.xx.local][INFO  ] Running command: /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node01.xx.local is now ready for osd use.
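The same command is presumably repeated for the other two nodes (the ceph -s output below shows the OSD count climbing). If block.db/block.wal were to live on a faster device instead of being co-located, ceph-deploy 2.0.1's --block-db/--block-wal options (visible in the option dump above) would be used; the NVMe partition names here are only illustrative:

sudo ceph-deploy osd create ceph-node02.xx.local --data /dev/vdb
sudo ceph-deploy osd create ceph-node03.xx.local --data /dev/vdb
# optional split layout (illustrative device names):
# sudo ceph-deploy osd create ceph-node01.xx.local --data /dev/vdb \
#     --block-db /dev/nvme0n1p1 --block-wal /dev/nvme0n1p2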

Verify

root@ceph-node01:/etc/ceph-cluster# ceph -s
  cluster:
    id:     5a6fdfb7-81a1-40f6-97b7-c92f96de9ac5
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
            OSD count 2 < osd_pool_default_size 3
  
  services:
    mon: 1 daemons, quorum ceph-node01 (age 37m)
    mgr: ceph-node01(active, since 31m)
    osd: 2 osds: 2 up (since 21s), 2 in (since 30s)
  
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   580 MiB used, 2.0 TiB / 2.0 TiB avail
    pgs:
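The OSD-count warning clears once the remaining OSDs are in. The insecure global_id reclaim warning is specific to Pacific and can be silenced once all clients are upgraded. The quorum output below already shows three mons, so node02 and node03 were presumably added along these lines (the doc skips that step):

# silence the Pacific global_id reclaim warning (only after all clients are upgraded)
ceph config set mon auth_allow_insecure_global_id_reclaim false
# add the remaining monitors and managers (assumed commands)
sudo ceph-deploy mon add ceph-node02
sudo ceph-deploy mon add ceph-node03
sudo ceph-deploy mgr create ceph-node02 ceph-node03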

Check the mon quorum status

root@ceph-node01:~# ceph quorum_status --format json-pretty
 
{
    "election_epoch": 20,
    "quorum": [
        0,
        1,
        2
    ],
    "quorum_names": [
        "ceph-node01",
        "ceph-node02",
        "ceph-node03"
    ],
    "quorum_leader_name": "ceph-node01",
    "quorum_age": 77,
    "features": {
        "quorum_con": "4540138314316775423",
        "quorum_mon": [
            "kraken",
            "luminous",
            "mimic",
            "osdmap-prune",
            "nautilus",
            "octopus",
            "pacific",
            "elector-pinging"
        ]
    },
    "monmap": {
        "epoch": 3,
        "fsid": "5a6fdfb7-81a1-40f6-97b7-c92f96de9ac5",
        "modified": "2024-10-08T11:29:51.477381Z",
        "created": "2024-10-08T10:12:42.715558Z",
        "min_mon_release": 16,
        "min_mon_release_name": "pacific",
        "election_strategy": 1,
        "disallowed_leaders: ": "",
        "stretch_mode": false,
        "tiebreaker_mon": "",
        "removed_ranks: ": "",
        "features": {
            "persistent": [
                "kraken",
                "luminous",
                "mimic",
                "osdmap-prune",
                "nautilus",
                "octopus",
                "pacific",
                "elector-pinging"
            ],
            "optional": []
        },
        "mons": [
            {
                "rank": 0,
                "name": "ceph-node01",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "10.17.3.14:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "10.17.3.14:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "10.17.3.14:6789/0",
                "public_addr": "10.17.3.14:6789/0",
                "priority": 0,
                "weight": 0,
                "crush_location": "{}"
            },
            {
                "rank": 1,
                "name": "ceph-node02",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "10.17.3.15:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "10.17.3.15:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "10.17.3.15:6789/0",
                "public_addr": "10.17.3.15:6789/0",
                "priority": 0,
                "weight": 0,
                "crush_location": "{}"
            },
            {
                "rank": 2,
                "name": "ceph-node03",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "10.17.3.16:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "10.17.3.16:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "10.17.3.16:6789/0",
                "public_addr": "10.17.3.16:6789/0",
                "priority": 0,
                "weight": 0,
                "crush_location": "{}"
            }
        ]
    }
}
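
The full JSON is verbose; for day-to-day checks it is usually enough to pull out the leader and quorum membership, or to use the built-in one-line summary. A small sketch, assuming jq is installed (apt install jq):

bash
ceph quorum_status -f json | jq -r '.quorum_leader_name'   # -> ceph-node01
ceph quorum_status -f json | jq -r '.quorum_names[]'       # mons in quorum
ceph mon stat                                              # one-line mon summary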

Deploy ceph-dashboard

bash
# Check whether the dashboard module package is installed
root@ceph-node01:~# dpkg -l |grep ceph-mgr
ii  ceph-mgr                              16.2.15-1bionic                    arm64        manager for the ceph distributed storage system
ii  ceph-mgr-modules-core                 16.2.15-1bionic                    all          ceph manager modules which are always enabled
root@ceph-node01:~# apt install ceph-mgr-dashboard
# List installed modules and the ones that can be enabled
root@ceph-node01:~# ceph mgr module ls > ceph-mgr-module.json
# Enable the dashboard module
root@ceph-node01:~# ceph mgr module enable dashboard
# Disable SSL
root@ceph-node01:~# ceph config set mgr mgr/dashboard/ssl false
# Configure the listen address
root@ceph-node01:~# ceph config set mgr mgr/dashboard/ceph-node01/server_addr 10.17.3.14
# Configure the listen port
root@ceph-node01:~# ceph config set mgr mgr/dashboard/ceph-node01/server_port 9009
# Verify the port is listening; if it is not, restart the mgr service
root@ceph-node01:~# systemctl restart ceph-mgr@ceph-node01.service
root@ceph-node01:~# systemctl status ceph-mgr@ceph-node01.service
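
After the restart, the mgr should report the dashboard endpoint and the port should be listening. The output will look roughly like this:

bash
root@ceph-node01:~# ceph mgr services
{
    "dashboard": "http://10.17.3.14:9009/"
}
root@ceph-node01:~# ss -tnlp | grep 9009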

Open port 9009 in the security group, then test in a browser.

Set the ceph-dashboard password

bash
root@ceph-node01:/etc/ceph-cluster# echo "cephdashboard" > ceph-dashboard-passwd.txt
root@ceph-node01:/etc/ceph-cluster# cat ceph-dashboard-passwd.txt
cephdashboard
root@ceph-node01:/etc/ceph-cluster# ceph dashboard set-login-credentials ceph -i ceph-dashboard-passwd.txt
******************************************************************
***          WARNING: this command is deprecated.              ***
*** Please use the ac-user-* related commands to manage users. ***
******************************************************************
Username and password updated
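
As the warning says, set-login-credentials is deprecated; the supported ac-user-* equivalent looks roughly like the sketch below (if the dashboard's password policy rejects a weak password, --force-password overrides it):

bash
# Create a dashboard admin user from the password file
ceph dashboard ac-user-create ceph -i ceph-dashboard-passwd.txt administrator
# Or change the password of an existing user
ceph dashboard ac-user-set-password ceph -i ceph-dashboard-passwd.txt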

Configure the radosgw object storage gateway

bash
apt-cache madison radosgw
cd /etc/ceph-cluster/

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy --overwrite-conf rgw create ceph-node01
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy --overwrite-conf rgw create ceph-node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  rgw                           : [('ceph-node01', 'rgw.ceph-node01')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff890a19b0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function rgw at 0xffff89142950>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts ceph-node01:rgw.ceph-node01
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to ceph-node01
[ceph-node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node01][WARNIN] rgw keyring does not exist yet, creating one
[ceph-node01][DEBUG ] create a keyring file
[ceph-node01][DEBUG ] create path recursively if it doesn't exist
[ceph-node01][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.ceph-node01 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.ceph-node01/keyring
[ceph-node01][INFO  ] Running command: systemctl enable ceph-radosgw@rgw.ceph-node01
[ceph-node01][WARNIN] Created symlink /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.ceph-node01.service → /lib/systemd/system/ceph-radosgw@.service.
[ceph-node01][INFO  ] Running command: systemctl start ceph-radosgw@rgw.ceph-node01
[ceph-node01][INFO  ] Running command: systemctl enable ceph.target
[ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host ceph-node01 and default port 7480
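
On its first start, radosgw creates its metadata pools automatically, which is why ceph -s below reports 5 pools. The listing should look roughly like this:

bash
root@ceph-node01:~# ceph osd lspools
1 device_health_metrics
2 .rgw.root
3 default.rgw.log
4 default.rgw.control
5 default.rgw.meta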

Verify the service

bash
cephadmin@ceph-node01:/etc/ceph-cluster$ systemctl status ceph-radosgw@rgw.ceph-node01.service
● ceph-radosgw@rgw.ceph-node01.service - Ceph rados gateway
   Loaded: loaded (/lib/systemd/system/ceph-radosgw@.service; indirect; vendor preset: enabled)
   Active: active (running) since Wed 2024-10-09 10:31:20 CST; 57s ago
 Main PID: 20282 (radosgw)
    Tasks: 602
   CGroup: /system.slice/system-ceph\x2dradosgw.slice/ceph-radosgw@rgw.ceph-node01.service
           └─20282 /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-node01 --setuser ceph --setgroup ceph
cephadmin@ceph-node01:/etc/ceph-cluster$
cephadmin@ceph-node01:/etc/ceph-cluster$ ps -ef |grep radosgw
ceph       20282       1  0 10:31 ?        00:00:00 /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-node01 --setuser ceph --setgroup ceph
cephadm+   21020   20167  0 10:32 pts/0    00:00:00 grep --color=auto radosgw

Verify rgw from a client

bash
cephadmin@ceph-node01:/etc/ceph-cluster$ curl http://10.17.3.14:7480/
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>cephadmin@ceph-node01:/etc/ceph-cluster$
cephadmin@ceph-node01:/etc/ceph-cluster$
cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph -s
  cluster:
    id:     5a6fdfb7-81a1-40f6-97b7-c92f96de9ac5
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim
  
  services:
    mon: 3 daemons, quorum ceph-node01,ceph-node02,ceph-node03 (age 39m)
    mgr: ceph-node01(active, since 15h)
    osd: 3 osds: 3 up (since 15h), 3 in (since 15h)
    rgw: 1 daemon active (1 hosts, 1 zones)
  
  data:
    pools:   5 pools, 129 pgs
    objects: 195 objects, 4.9 KiB
    usage:   872 MiB used, 3.0 TiB / 3.0 TiB avail
    pgs:     129 active+clean

Client: S3 Browser

Download

bash
https://s3browser.com/
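
S3 Browser authenticates with an access/secret key pair, so create an rgw user first (the uid and display-name below are arbitrary examples):

bash
radosgw-admin user create --uid=s3user --display-name="s3 test user"
# The JSON output contains access_key and secret_key; point S3 Browser at
# endpoint 10.17.3.14:7480 (plain HTTP) with those credentials.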

