The current OpenStack lab has one controller node (which doubles as a storage node), one network node, two compute nodes, plus an NFS server. In this lab we add a dedicated storage node.
Provision a new server, install it with the "Server with GUI" profile from the CentOS-7-x86_64-DVD-1804.iso image, and after installation configure its IP as 192.168.23.81.
1. Disable the firewall and SELinux
systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl stop NetworkManager
systemctl disable NetworkManager
Disable SELinux:
setenforce 0
vim /etc/selinux/config
Change: SELINUX=disabled
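The config-file edit above can also be scripted with sed instead of vim; a minimal sketch, run here against a temporary copy so it is safe to test anywhere (point CFG at /etc/selinux/config on the real node):

```shell
# Sketch: flip the SELINUX= line to disabled with sed.
# CFG stands in for /etc/selinux/config.
CFG=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CFG"

# Rewrite the SELINUX= line whatever its current value is.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$CFG"

grep '^SELINUX=' "$CFG"   # prints: SELINUX=disabled
```

Note that setenforce 0 only switches to permissive mode for the running system; the file edit is what keeps SELinux disabled after a reboot.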
2. Upload the installation files
Upload openstack_R.tar.gz to the /home directory:
tar -xzvf openstack_R.tar.gz
Remove all existing yum repo files:
cd /etc/yum.repos.d/
rm -f *.repo
Create a new local yum repo with the following content:
vim openstack.repo
[base]
name=CentOS-$releasever - Base
baseurl=file:///home/openstack
enabled=1
gpgcheck=0
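The repo file can also be generated non-interactively with a heredoc instead of vim; a sketch writing into a temp directory (use /etc/yum.repos.d on the storage node):

```shell
# Sketch: write the local repo definition without an editor.
# REPO_DIR stands in for /etc/yum.repos.d.
REPO_DIR=$(mktemp -d)
cat > "$REPO_DIR/openstack.repo" <<'EOF'
[base]
name=CentOS-$releasever - Base
baseurl=file:///home/openstack
enabled=1
gpgcheck=0
EOF

# The quoted 'EOF' keeps $releasever literal so yum expands it later.
grep '^baseurl=' "$REPO_DIR/openstack.repo"   # prints: baseurl=file:///home/openstack
```

After the file is in place, run `yum clean all && yum repolist` to confirm the local repo is picked up.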
3. Edit the hosts file
Append the following entries to /etc/hosts:
192.168.23.86 controller
192.168.23.88 network
192.168.23.91 computer01
192.168.23.92 computer02
192.168.23.81 storage01
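Appending the entries can be made idempotent so rerunning the setup does not duplicate lines; a sketch against a temp file (point HOSTS at /etc/hosts on the node):

```shell
# Sketch: append each host entry only if its hostname is not present yet.
# HOSTS stands in for /etc/hosts.
HOSTS=$(mktemp)
for entry in \
    '192.168.23.86 controller' \
    '192.168.23.88 network' \
    '192.168.23.91 computer01' \
    '192.168.23.92 computer02' \
    '192.168.23.81 storage01'
do
    name=${entry##* }                       # hostname is the last field
    grep -qw "$name" "$HOSTS" || echo "$entry" >> "$HOSTS"
done
grep -c . "$HOSTS"   # prints: 5 (stays 5 even if the loop reruns)
```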
4. Switch from graphical login to multi-user (text) mode
systemctl set-default multi-user.target
5. Add a dedicated disk as an LVM physical volume, then install the OpenStack cinder-volume service
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
yum install openstack-cinder
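Since pvcreate and vgcreate destroy whatever is on the target disk, a small dry-run wrapper that only prints the commands can help double-check the device name first; a sketch (the device and VG name are the values used in this lab):

```shell
# Sketch: print the destructive LVM commands for review before running them.
# Confirm with `lsblk` that /dev/sdb really is the new blank disk, then
# pipe this output to `sh` as root.
DEVICE=/dev/sdb
VG=cinder-volumes

cat <<EOF
pvcreate $DEVICE
vgcreate $VG $DEVICE
EOF
```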
6. Edit the /etc/cinder/cinder.conf file
[DEFAULT]
backup_swift_url=http://192.168.23.86:8080/v1/AUTH_
backup_swift_container=volumebackups
backup_driver=cinder.backup.drivers.swift
enable_v3_api=True
storage_availability_zone=nova
default_availability_zone=nova
default_volume_type=iscsi
auth_strategy=keystone
enabled_backends=lvm
osapi_volume_listen=0.0.0.0
osapi_volume_workers=4
debug=False
log_dir=/var/log/cinder
transport_url=rabbit://guest:guest@192.168.23.86:5672/
control_exchange=openstack
api_paste_config=/etc/cinder/api-paste.ini
glance_host=192.168.23.86
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection=mysql+pymysql://cinder:123456@192.168.23.86/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
backend=cinder.keymgr.conf_key_mgr.ConfKeyManager
[keystone_authtoken]
www_authenticate_uri=http://192.168.23.86:5000/
auth_uri=http://192.168.23.86:5000/
auth_type=password
auth_url=http://192.168.23.86:35357
username=cinder
password=123456
user_domain_name=Default
project_name=services
project_domain_name=Default
[matchmaker_redis]
[nova]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
ssl=False
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
policy_file=/etc/cinder/policy.json
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[sample_remote_file_source]
[service_user]
[ssl]
[vault]
[lvm]
volume_backend_name=lvm
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_ip_address=192.168.23.81
iscsi_helper=lioadm
lio_target_automatic_acls=True
lio_target_manual_acls=False
lio_target_iqn=iqn.2010-10.org.openstack
lio_target_portal_address=192.168.23.81
lio_target_portal_port=3260
lio_target_use_chap_auth=False
lio_target_protocol=iscsi
volume_group=cinder-volumes
volumes_dir=/var/lib/cinder/volumes
1) This file is largely a copy of the controller's cinder.conf;
2) The main changes: the original backend pool sections were removed and a new backend pool section, [lvm], was added;
3) enabled_backends was changed to: enabled_backends=lvm
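The edits in points 2) and 3) above can be scripted; a sketch using sed and a heredoc on a temp copy (point CONF at /etc/cinder/cinder.conf copied from the controller; the stand-in content and the abridged [lvm] section are placeholders for the full file):

```shell
# Sketch: switch enabled_backends to lvm and append the [lvm] backend section.
# CONF stands in for /etc/cinder/cinder.conf.
CONF=$(mktemp)
printf '[DEFAULT]\nenabled_backends=nfs,lvm2\n' > "$CONF"   # stand-in content

# Point enabled_backends at the new backend only.
sed -i 's/^enabled_backends=.*/enabled_backends=lvm/' "$CONF"

# Append the new backend section (abridged here).
cat >> "$CONF" <<'EOF'
[lvm]
volume_backend_name=lvm
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group=cinder-volumes
EOF

grep '^enabled_backends=' "$CONF"   # prints: enabled_backends=lvm
```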
7. On the controller node, update the controller's iptables rules (important) to allow traffic from the new storage node
iptables -L -n
Insert the rules at the appropriate positions in the chain:
iptables -I INPUT 16 -p tcp --dport 3306 -s 192.168.23.81 -j ACCEPT # allow the new storage node to reach MySQL
iptables -I INPUT 5 -p tcp -m multiport --dports 5671,5672 -s 192.168.23.81 -j ACCEPT # allow the new storage node to reach AMQP (RabbitMQ)
iptables -I INPUT 25 -p tcp -m multiport --dports 6000,6001,6002,873 -s 192.168.23.81 -j ACCEPT # allow the new storage node to reach Swift storage and rsync
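The three rules can be generated from a small table as a dry run, which makes the insert positions and port lists easy to review before touching the firewall (this sketch uses -m multiport throughout for uniformity; the single-port MySQL rule works either way):

```shell
# Sketch: print the three ACCEPT rules for review (dry run).
# Run the printed commands on the controller as root once verified.
SRC=192.168.23.81
printf '%s\n' \
    '16 3306 mysql' \
    '5 5671,5672 amqp' \
    '25 6000,6001,6002,873 swift-and-rsync' |
while read -r pos ports svc; do
    echo "iptables -I INPUT $pos -p tcp -m multiport --dports $ports -s $SRC -j ACCEPT   # $svc"
done
```

If the controller manages its firewall with the iptables-services package, persist the rules afterwards with `service iptables save` (an assumption about how this lab's firewall is managed).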
8. Start the cinder-volume service
systemctl start openstack-cinder-volume.service
systemctl enable openstack-cinder-volume.service
systemctl status openstack-cinder-volume.service
9. Common operations
1) View logs: more /var/log/cinder/volume.log
journalctl -u openstack-cinder-volume
2) Inspect volume groups: vgdisplay
lvdisplay
lvs -a
3) On the controller node, check the services: cinder service-list
+------------------+-----------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+-----------------+------+---------+-------+----------------------------+-----------------+
| cinder-backup | controller | nova | enabled | up | 2026-01-24T01:22:23.000000 | - |
| cinder-scheduler | controller | nova | enabled | up | 2026-01-24T01:22:23.000000 | - |
| cinder-volume | controller@lvm | nova | enabled | up | 2026-01-24T01:22:17.000000 | - |
| cinder-volume | controller@lvm2 | nova | enabled | up | 2026-01-24T01:22:17.000000 | - |
| cinder-volume | controller@nfs | nova | enabled | up | 2026-01-24T01:22:23.000000 | - |
| cinder-volume | storage01@lvm | nova | enabled | up | 2026-01-24T01:22:21.000000 | - |
+------------------+-----------------+------+---------+-------+----------------------------+-----------------+
An extra cinder-volume service (storage01@lvm) now appears, confirming the installation succeeded.
4) List the volume types
cinder type-list
+--------------------------------------+-------+-------------+-----------+
| ID | Name | Description | Is_Public |
+--------------------------------------+-------+-------------+-----------+
| 082b2a5b-62d0-4a2e-a50c-7794b1998ad8 | iscsi | - | True |
| baec423a-f817-4925-a3cf-1d0cd816ab54 | nfs   | -           | True      |
+--------------------------------------+-------+-------------+-----------+
There are two types: iscsi and nfs.
5) Show the filter criteria for the iscsi type
cinder type-show iscsi
+---------------------------------+--------------------------------------+
| Property | Value |
+---------------------------------+--------------------------------------+
| description | None |
| extra_specs | volume_backend_name : lvm |
| id | 082b2a5b-62d0-4a2e-a50c-7794b1998ad8 |
| is_public | True |
| name | iscsi |
| os-volume-type-access:is_public | True |
| qos_specs_id                    | None                                 |
+---------------------------------+--------------------------------------+
The extra_specs entry volume_backend_name : lvm means volumes of this type can only be scheduled to storage pools whose volume_backend_name is lvm.
List the storage pools:
cinder get-pools --detail | grep name | grep -v vendor
| name | storage01@lvm#lvm |
| pool_name | lvm |
| volume_backend_name | lvm |
| name | controller@lvm#lvm |
| pool_name | lvm |
| volume_backend_name | lvm |
| name | controller@lvm2#lvm |
| pool_name | lvm |
| volume_backend_name | lvm |
| name | controller@nfs#nfs |
| volume_backend_name | nfs |
Three pools report volume_backend_name=lvm: storage01@lvm#lvm, controller@lvm#lvm, and controller@lvm2#lvm, so volumes of the iscsi type can only be placed in one of these three pools.
By default the scheduler chooses among them based on free space.
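The capacity-based scheduling can be illustrated with a simplified sketch: among the pools whose volume_backend_name matches the type (lvm), the one reporting the most free space wins. The pool names below are from this lab; the free-GiB numbers are made up for illustration.

```shell
# Simplified sketch of capacity-based pool selection.
# Columns: pool name, volume_backend_name, free GiB (illustrative numbers).
printf '%s\n' \
    'storage01@lvm#lvm lvm 120' \
    'controller@lvm#lvm lvm 40' \
    'controller@lvm2#lvm lvm 80' \
    'controller@nfs#nfs nfs 500' |
awk '$2 == "lvm" && $3 > best { best = $3; pool = $1 } END { print pool }'
# prints: storage01@lvm#lvm
```

The real scheduler's capacity weigher also accounts for reserved and thin-provisioned space; this only shows the basic free-space comparison.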