OpenStack Image and Install Scripts, Explained
Preface
After publishing my earlier article "2024广东省职业技能大赛云计算——私有云(OpenStack)平台搭建" on CSDN, I noticed many readers still could not get the build to work. Many people around me, when first learning, also just typed the commands verbatim and had no idea where they had gone wrong, so this article walks through the details.
Image Contents
Mounting the image provided at the competition reveals two directories:
shell
[root@controller ~]# ls
chinaskills_cloud_iaas_v2.0.3.iso
[root@controller ~]# mount -o loop chinaskills_cloud_iaas_v2.0.3.iso /opt
[root@controller ~]# ls /opt/
iaas-repo images
First is the iaas-repo directory, which holds the packages and dependencies needed later to install each OpenStack component.
shell
#the base directory holds the packages needed to install each OpenStack component
#the repodata directory holds the repository metadata
[root@controller ~]# ls /opt/iaas-repo/
base repodata
The images directory holds virtual machine images that can be used to create instances once the platform is up.
shell
[root@controller ~]# ls /opt/images/
amphora-x64-haproxy.qcow2 CentOS-7-x86_64-2009.qcow2 MySQL_5.6_XD.qcow2
CentOS7_1804.tar cirros-0.3.4-x86_64-disk.img
Scripts and Variables
Scripts
shell
#running this command actually just downloads some files onto the local system
[root@controller ~]# yum -y install openstack-iaas
#these files fall into two kinds: a file declaring global variables, and the install scripts for the individual components
#the global variable file, openrc.sh, lives in /etc/openstack/
[root@controller ~]# ls /etc/openstack/
openrc.sh
#the component install scripts live in /usr/local/bin/
[root@controller ~]# ls /usr/local/bin/
create_dual_intermediate_CA.sh iaas-install-mysql.sh
iaas-install-aodh.sh iaas-install-neutron-compute.sh
iaas-install-barbican.sh iaas-install-neutron-controller.sh
iaas-install-ceilometer-compute.sh iaas-install-nova-compute.sh
iaas-install-ceilometer-controller.sh iaas-install-nova-controller.sh
iaas-install-cinder-compute.sh iaas-install-octavia.sh
iaas-install-cinder-controller.sh iaas-install-placement.sh
iaas-install-cloudkitty.sh iaas-install-swift-compute.sh
iaas-install-dashboard.sh iaas-install-swift-controller.sh
iaas-install-fwaas-and-vpnaas.sh iaas-install-trove.sh
iaas-install-glance.sh iaas-install-zun-compute.sh
iaas-install-heat.sh iaas-install-zun-controller.sh
iaas-install-keystone.sh iaas-pre-host.sh
iaas-install-manila-compute.sh openrc.sh
iaas-install-manila-controller.sh
#as for why things are designed this way, allow me to explain
As we all know, a script runs a prepared sequence of commands automatically, which saves effort.
shell
#typing the same series of commands by hand is tedious
[root@controller ~]# echo "1"
1
[root@controller ~]# echo "2"
2
[root@controller ~]# echo "3"
3
[root@controller ~]# echo "4"
4
[root@controller ~]#
#but if we write them into a script file beforehand
[root@controller ~]# cat echo.sh
#!/bin/bash
echo "1"
echo "2"
echo "3"
echo "4"
#running the script does the same job in one step
[root@controller ~]# ./echo.sh
1
2
3
4
By the same logic, to build OpenStack with scripts we just write the commands of a manual install into script files and run them. But if the script is meant to be used widely, we cannot simply copy the commands in verbatim, because we cannot guarantee that certain values stay the same across environments.
For example, suppose one spot in the script tests connectivity to the peer host by pinging it, and that host's IP is 192.168.100.10.
Writing it literally as ping -c 4 192.168.100.10 works fine in the current environment. But hand the script to someone else and habits differ: A sets the peer host's IP to 10.0.0.10, B sets it to 172.16.0.10, and the command breaks.
Places like this, which vary with the environment, appear more than once in the script, and editing each one by hand on every run is not practical. That is the problem the global variable file solves.
Variables
Variables hardly need an introduction; we can set one right in the terminal:
shell
[root@controller ~]# VALUE=5
[root@controller ~]# echo $VALUE
5
With a small shift in thinking, we can put the assignment in a file instead:
shell
[root@controller ~]# vi value.sh
VALUE=1
[root@controller ~]# source value.sh
[root@controller ~]# echo $VALUE
1
So we simply replace every value in the script that changes with the situation, such as IPs and hostnames, with a variable name, and declare the variable's value in the global variable file.
Back to the earlier example: pinging the peer host becomes:
ping -c 4 $HOST_IP
and the global variable file declares the value to match the actual environment:
HOST_IP=192.168.100.10
By declaring variables in one file and replacing the changeable spots in the script with them, the script becomes far more reusable.
Hence, when building OpenStack with these scripts, the very first step is to fill in this global variable file. When the component install scripts fail later on, the cause is almost always a value that was not set correctly in the variable file.
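The whole pattern can be sketched in miniature. Everything below is hypothetical (a throwaway /tmp/vars.sh standing in for openrc.sh), but the mechanics are exactly those of the real scripts:

```shell
# Stand-in for the global variable file (openrc.sh in the real setup).
cat > /tmp/vars.sh <<'EOF'
HOST_IP=192.168.100.10
HOST_NAME=controller
EOF

# Stand-in for an install script: it never hardcodes values,
# it only references variable names declared in the file above.
. /tmp/vars.sh   # same effect as 'source' in bash
echo "would ping $HOST_IP and set hostname to $HOST_NAME"
```

Changing the environment now means editing only /tmp/vars.sh; the script body never needs to be touched.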
shell
#in the competition, apart from each node's IP, hostname, network segment, and partitions, the passwords and such are all specified by the task (basically all 000000); just fill them in as required
[root@controller ~]# grep -vE '^\s*#|^\s*$' /etc/openstack/openrc.sh
HOST_IP=
HOST_PASS=
HOST_NAME=
HOST_IP_NODE=
HOST_PASS_NODE=
HOST_NAME_NODE=
network_segment_IP=
RABBIT_USER=
RABBIT_PASS=
DB_PASS=
DOMAIN_NAME=
ADMIN_PASS=
DEMO_PASS=
KEYSTONE_DBPASS=
GLANCE_DBPASS=
GLANCE_PASS=
PLACEMENT_DBPASS=
PLACEMENT_PASS=
NOVA_DBPASS=
NOVA_PASS=
NEUTRON_DBPASS=
NEUTRON_PASS=
METADATA_SECRET=
INTERFACE_NAME=
Physical_NAME=
minvlan=
maxvlan=
CINDER_DBPASS=
CINDER_PASS=
BLOCK_DISK=
SWIFT_PASS=
OBJECT_DISK=
STORAGE_LOCAL_NET_IP=
TROVE_DBPASS=
TROVE_PASS=
HEAT_DBPASS=
HEAT_PASS=
CEILOMETER_DBPASS=
CEILOMETER_PASS=
AODH_DBPASS=
AODH_PASS=
ZUN_DBPASS=
ZUN_PASS=
KURYR_PASS=
OCTAVIA_DBPASS=
OCTAVIA_PASS=
MANILA_DBPASS=
MANILA_PASS=
SHARE_DISK=
CLOUDKITTY_DBPASS=
CLOUDKITTY_PASS=
BARBICAN_DBPASS=
BARBICAN_PASS=
Once the variables are set to match your environment, you can run the ready-made component install scripts directly. Next I will walk through the scripts that install the base components.
Script Walkthrough
iaas-pre-host.sh
shell
#!/bin/bash
source /etc/openstack/openrc.sh
Sources the global variable file; every script does this at the top, so I will not repeat it below.
shell
#Welcome page
cat > /etc/motd <<EOF
################################
# Welcome to OpenStack #
################################
EOF
What this block does: display a welcome banner after login.
The contents of /etc/motd are shown after you log in, whether locally or remotely, so this heredoc makes the banner appear on every login.
shell
#selinux
sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
The first line sets SELinux to disabled in its configuration file.
Since a config-file change only takes effect after a reboot, the second line (setenforce 0) turns SELinux off immediately for the current session.
shell
#firewalld
systemctl stop firewalld
systemctl disable firewalld >> /dev/null 2>&1
What this block does: stop the firewall and disable it at boot.
About >> /dev/null 2>&1:
- >> /dev/null appends the command's output to /dev/null, the Linux null device; everything written to it is discarded, which is why it is nicknamed the "black hole".
- In 2>&1, 2 is stderr, 1 is stdout, > redirects, and & marks a file descriptor rather than a file name, so stderr is sent wherever stdout is going. Since that destination is /dev/null, both disappear.
So the whole suffix >> /dev/null 2>&1 keeps the command's output, success or error, off the screen.
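The effect is easy to verify with a throwaway failing command (the path below is made up):

```shell
# ls on a nonexistent path normally prints an error on stderr.
# With the redirection, stdout goes to /dev/null and 2>&1 sends
# stderr to the same place, so nothing reaches the screen.
out=$( { ls /no/such/path >> /dev/null 2>&1; } 2>&1 ) || true
echo "captured output: '$out'"   # empty: both streams were discarded
```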
shell
#NetworkManager
systemctl stop NetworkManager >> /dev/null 2>&1
systemctl disable NetworkManager >> /dev/null 2>&1
yum remove -y NetworkManager firewalld
systemctl restart network
What this block does: stop NetworkManager, disable it at boot, remove it, then restart the network service.
Why: NetworkManager and the network service both manage networking, and enabling both causes conflicts that can keep the network service from starting, so one of them is normally disabled.
NetworkManager also clears routes when a link goes down; a custom route not recorded in NetworkManager's configuration will simply be wiped out.
NetworkManager suits systems with a desktop environment; our OpenStack hosts are servers with no desktop, so we keep the network service and shut NetworkManager down.
Other notes: yum remove uninstalls packages; here the script bluntly removes both NetworkManager and the firewalld firewall.
shell
#iptables
yum install iptables-services -y
if [ 0 -ne $? ]; then
echo -e "\033[31mThe installation source configuration errors\033[0m"
exit 1
fi
systemctl restart iptables
iptables -F
iptables -X
iptables -Z
/usr/sbin/iptables-save
systemctl stop iptables
systemctl disable iptables
What this block does: install and configure the iptables service.
iptables is a firewall service; compared with firewalld:

| | iptables | firewalld |
|---|---|---|
| Model | chain-based rules | zones and services |
| Default policy | accept | deny |
| Trade-off | a rule change requires reloading all rules | rules can be updated dynamically without breaking existing sessions |

CentOS 7 ships with firewalld, which replaced iptables because it is easier to use and more capable.
The reason for dropping the more capable firewalld here and going back to iptables: OpenStack's networking features (such as security groups) are implemented on top of iptables, so there is no way around it 😂
Explanation of the if statement:
shell
if [ condition ]; then
    commands
fi
This is the basic form of an if statement in shell: if the condition holds, then the commands run; fi marks the end of the if block.
The condition 0 -ne $? inside the brackets:
-ne means "not equal". $? is a special variable holding the exit status of the last executed command: 0 means success, non-zero means failure.
Put together, the condition reads: the last command ($?) did not (-ne) succeed (0).
So the meaning of the whole statement is clear: if the last command, i.e. the iptables installation, failed (returned non-zero), print a message ("The installation source configuration errors") to warn the user, and exit the script (exit 1).
If it succeeded, the condition is false and execution simply continues with the commands that follow.
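The `$?` idiom can be tried on its own; below, a subshell exiting with status 3 stands in for a failed yum install (the echo strings are mine):

```shell
true
echo "exit status of 'true': $?"   # 0 means success

status=0
( exit 3 ) || status=$?   # stand-in for a failing command
if [ 0 -ne $status ]; then
    echo "last command failed with status $status"
fi
```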
- iptables -F: flush all firewall rules
- iptables -X: delete user-defined chains
- iptables -Z: zero the packet and byte counters
- /usr/sbin/iptables-save: dump the active ruleset (with no redirection this only prints the rules to stdout; it does not persist them by itself)
Finally the service is stopped and disabled at boot.
Explanation of the echo statement:
The -e flag enables interpretation of escape sequences such as \n (newline) and \t (tab).
The quoted text has the shape \033[31m...text...\033[0m.
\033[31m sets the output color to red; 31 is the code for red.
Color control works by emitting the ESC character (octal \033 in ASCII) followed by [, a color code, and m.
The trailing \033[0m resets the color back to the terminal default.
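Tried in a bash terminal (the sample strings are mine):

```shell
# \033 is ESC (octal 33); "\033[31m" switches to red, "\033[0m" resets.
echo -e "\033[31mthis text would be red\033[0m"
# printf interprets the same escapes without needing any flag:
printf '\033[32m%s\033[0m\n' "and this green"
```

The escape itself is a single byte, so `printf '\033[31m'` emits five bytes in total: ESC, [, 3, 1, m.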
shell
# install package
sed -i -e 's/#UseDNS yes/UseDNS no/g' -e 's/GSSAPIAuthentication yes/GSSAPIAuthentication no/g' /etc/ssh/sshd_config
yum upgrade -y
yum install python-openstackclient openstack-selinux openstack-utils crudini expect lsof net-tools vim -y
Edit the SSH configuration to disable DNS lookups and GSSAPI authentication, then upgrade every installed package to its latest version and install the packages OpenStack needs.
Among these, crudini and expect are tools the scripts themselves rely on later.
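The multi-expression sed form used above can be rehearsed on a scratch copy of the two lines it targets (the file path here is a stand-in, not the real /etc/ssh/sshd_config):

```shell
cat > /tmp/sshd_config.demo <<'EOF'
#UseDNS yes
GSSAPIAuthentication yes
EOF
# -i edits in place; each -e adds one substitution, applied in order.
sed -i -e 's/#UseDNS yes/UseDNS no/g' \
       -e 's/GSSAPIAuthentication yes/GSSAPIAuthentication no/g' \
       /tmp/sshd_config.demo
cat /tmp/sshd_config.demo
```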
shell
#hosts
if [[ `ip a |grep -w $HOST_IP ` != '' ]];then
hostnamectl set-hostname $HOST_NAME
elif [[ `ip a |grep -w $HOST_IP_NODE ` != '' ]];then
hostnamectl set-hostname $HOST_NAME_NODE
else
hostnamectl set-hostname $HOST_NAME
fi
sed -i -e "/$HOST_NAME/d" -e "/$HOST_NAME_NODE/d" /etc/hosts
echo "$HOST_IP $HOST_NAME" >> /etc/hosts
echo "$HOST_IP_NODE $HOST_NAME_NODE" >> /etc/hosts
What this block does: set the hostname according to the IP addresses you filled into the openrc.sh variable file, and add the matching host mappings to /etc/hosts.
The if condition pipes the output of ip a through grep; the -w flag matches whole words only. On the controller node the first condition finds the controller's IP among the interfaces, the result is non-empty (!= ''), and the hostname is set to $HOST_NAME. On the compute node the first condition fails and the second one matches instead. Provided the variables are set correctly, this if statement neatly does what we need: set the right hostname on each node.
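What `-w` buys over a plain grep can be seen on a fabricated `ip a` line:

```shell
addr_line="inet 192.168.100.10/24 brd 192.168.100.255"
# Plain grep matches substrings, so .1 "matches" inside .10:
echo "$addr_line" | grep -q 192.168.100.1 && echo "plain grep: false positive"
# -w requires a whole-word match, so only the exact address hits:
echo "$addr_line" | grep -qw 192.168.100.10 && echo "-w: exact address found"
echo "$addr_line" | grep -qw 192.168.100.1 || echo "-w: .1 correctly rejected"
```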
shell
#ssh
if [[ ! -s ~/.ssh/id_rsa.pub ]];then
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa -q -b 2048
fi
name=`hostname`
if [[ $name == $HOST_NAME ]];then
expect -c "set timeout -1;
spawn ssh-copy-id -i /root/.ssh/id_rsa $HOST_NAME_NODE;
expect {
*password:* {send -- $HOST_PASS_NODE\r;
expect {
*denied* {exit 2;}
eof}
}
*(yes/no)* {send -- yes\r;exp_continue;}
eof {exit 1;}
}
"
else
expect -c "set timeout -1;
spawn ssh-copy-id -i /root/.ssh/id_rsa $HOST_NAME;
expect {
*password:* {send -- $HOST_PASS\r;
expect {
*denied* {exit 2;}
eof}
}
*(yes/no)* {send -- yes\r;exp_continue;}
eof {exit 1;}
}
"
fi
What this block does: set up passwordless SSH between the two nodes, so later scripts can run commands across hosts.
It uses the expect tool installed earlier, a tool and language for automating interactive tasks: scripted logins, password entry, answering interactive prompts, and so on.
expect -c runs an inline block of expect code.
set timeout -1 disables the timeout entirely, so a slow response never causes output to be missed.
spawn starts a process and keeps watching its output.
When the expect command sees the password: prompt, it uses send to type for the user. The many \r characters are there because send does not press Enter by itself; \r simulates the user hitting Return.
shell
#chrony
yum install -y chrony
if [[ $name == $HOST_NAME ]];then
sed -i '3,6s/^/#/g' /etc/chrony.conf
sed -i '7s/^/server controller iburst/g' /etc/chrony.conf
echo "allow $network_segment_IP" >> /etc/chrony.conf
echo "local stratum 10" >> /etc/chrony.conf
else
sed -i '3,6s/^/#/g' /etc/chrony.conf
sed -i '7s/^/server controller iburst/g' /etc/chrony.conf
fi
systemctl restart chronyd
systemctl enable chronyd
What this block does: install the chrony time-sync service, configure it differently per node, then restart it and enable it at boot.
Concretely, the controller is made the primary time source for the compute node: it allows clients from the configured subnet ($network_segment_IP) and sets local stratum 10, and both nodes point at server controller for synchronization.
shell
#DNS
if [[ $name == $HOST_NAME ]];then
yum install bind -y
sed -i -e '13,14s/^/\/\//g' \
-e '19s/^/\/\//g' \
-e '37,42s/^/\/\//g' \
-e 's/recursion yes/recursion no/g' \
-e 's/dnssec-enable yes/dnssec-enable no/g' \
-e 's/dnssec-validation yes/dnssec-validation no/g' /etc/named.conf
systemctl start named.service
systemctl enable named.service
fi
printf "\033[35mPlease Reboot or Reconnect the terminal\n\033[0m"
What this block does: on the controller node, install the bind DNS service, adjust its configuration, start it, and enable it at boot.
It then prints a message asking the user to reboot or reconnect the terminal.
iaas-install-mysql.sh
shell
#!/bin/bash
source /etc/openstack/openrc.sh
ping $HOST_IP -c 4 >> /dev/null 2>&1
if [ 0 -ne $? ]; then
echo -e "\033[31m Warning\nPlease make sure the network configuration is correct!\033[0m"
exit 1
fi
What this block does: ping the host's own IP ($HOST_IP) to check that the network configuration is correct; continue if it responds, otherwise print a warning and exit.
shell
# MariaDB
yum install -y mariadb-10.3.20 mariadb-server-10.3.20 python2-PyMySQL
sed -i "/^symbolic-links/a\default-storage-engine = innodb\ninnodb_file_per_table\ncollation-server = utf8_general_ci\ninit-connect = 'SET NAMES utf8'\ncharacter-set-server = utf8\nmax_connections=10000" /etc/my.cnf
crudini --set /usr/lib/systemd/system/mariadb.service Service LimitNOFILE 10000
crudini --set /usr/lib/systemd/system/mariadb.service Service LimitNPROC 10000
systemctl daemon-reload
systemctl enable mariadb.service
systemctl restart mariadb.service
What this block does: install the MariaDB database service, tune a few parameters in its configuration files, and restart it.
systemctl daemon-reload reloads the system manager's unit files.
It is needed because the two crudini commands just modified mariadb's unit file under the systemd directory, and reloading makes the change take effect.
shell
expect -c "
spawn /usr/bin/mysql_secure_installation
expect \"Enter current password for root (enter for none):\"
send \"\r\"
expect \"Set root password?\"
send \"y\r\"
expect \"New password:\"
send \"$DB_PASS\r\"
expect \"Re-enter new password:\"
send \"$DB_PASS\r\"
expect \"Remove anonymous users?\"
send \"y\r\"
expect \"Disallow root login remotely?\"
send \"n\r\"
expect \"Remove test database and access to it?\"
send \"y\r\"
expect \"Reload privilege tables now?\"
send \"y\r\"
expect eof
"
What this block does: initialize the database through the expect tool.
The tool was introduced above; just one addition:
expect eof is the standard way to end the interaction. It was also used in the SSH setup earlier, though the nesting there made it less obvious.
One aside: you might ask why, when the script reaches this database-initialization step, it does not just let the user press Enter a few times instead of all this machinery. But a good script should not keep demanding trivial input from the user; that hurts the experience. All this effort goes toward ease of use, and it is entirely worth it.
shell
# RabbitMQ
yum install rabbitmq-server -y
systemctl start rabbitmq-server.service
systemctl enable rabbitmq-server.service
rabbitmqctl add_user $RABBIT_USER $RABBIT_PASS
rabbitmqctl set_permissions $RABBIT_USER ".*" ".*" ".*"
Install the RabbitMQ message queue service, which carries messages between all OpenStack components.
The two rabbitmqctl lines create the user and password defined in openrc.sh ($RABBIT_USER / $RABBIT_PASS, typically openstack / 000000 in the competition) and grant it full permissions.
shell
# Memcache
yum install memcached python-memcached -y
sed -i -e 's/OPTIONS.*/OPTIONS="-l 127.0.0.1,::1,'$HOST_NAME'"/g' /etc/sysconfig/memcached
systemctl start memcached.service
systemctl enable memcached.service
What this block does: install the Memcached caching service and adjust its configuration accordingly.
Keystone uses it to cache tokens, and session data produced when logging into the dashboard is also kept in memcached.
It can also cache database query results, cutting down database round-trips and speeding up page loads.
shell
# ETCD
yum install etcd -y
cp -a /etc/etcd/etcd.conf{,.bak}
sed -i -e 's/#ETCD_LISTEN_PEER_URLS.*/ETCD_LISTEN_PEER_URLS="http:\/\/'$HOST_IP':2380"/g' \
-e 's/^ETCD_LISTEN_CLIENT_URLS.*/ETCD_LISTEN_CLIENT_URLS="http:\/\/'$HOST_IP':2379"/g' \
-e 's/^ETCD_NAME="default"/ETCD_NAME="'$HOST_NAME'"/g' \
-e 's/#ETCD_INITIAL_ADVERTISE_PEER_URLS.*/ETCD_INITIAL_ADVERTISE_PEER_URLS="http:\/\/'$HOST_IP':2380"/g' \
-e 's/^ETCD_ADVERTISE_CLIENT_URLS.*/ETCD_ADVERTISE_CLIENT_URLS="http:\/\/'$HOST_IP':2379"/g' \
-e 's/#ETCD_INITIAL_CLUSTER=.*/ETCD_INITIAL_CLUSTER="'$HOST_NAME'=http:\/\/'$HOST_IP':2380"/g' \
-e 's/#ETCD_INITIAL_CLUSTER_TOKEN.*/ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"/g' \
-e 's/#ETCD_INITIAL_CLUSTER_STATE.*/ETCD_INITIAL_CLUSTER_STATE="new"/g' /etc/etcd/etcd.conf
systemctl start etcd
systemctl enable etcd
What this block does: install and configure the etcd service.
etcd records the services currently running, along with their addresses and ports, letting components discover and connect to one another and work together.
iaas-install-keystone.sh
shell
#!/bin/bash
source /etc/openstack/openrc.sh
#keystone mysql
mysql -uroot -p$DB_PASS -e "create database IF NOT EXISTS keystone ;"
mysql -uroot -p$DB_PASS -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '$KEYSTONE_DBPASS' ;"
mysql -uroot -p$DB_PASS -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '$KEYSTONE_DBPASS' ;"
Create the keystone database and grant it local and remote access privileges.
shell
#install keystone
yum install openstack-keystone httpd mod_wsgi -y
What this block does: install the keystone packages.
Keystone provides unified identity services for the other components: authentication, token issuance and validation, the service catalog, user authorization, and so on.
Every OpenStack service authenticates and authorizes through keystone, so keystone is the first core component to install.
shell
#/etc/keystone/keystone.conf
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:$KEYSTONE_DBPASS@$HOST_NAME/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet
Taking the first line as an example: openstack-config --set is the command, /etc/keystone/keystone.conf is the configuration file being edited, database is the section (shown as [database] in the file), and connection is the key whose value is set to the final argument. The resulting section in the file looks like:
shell
[database]
connection=mysql+pymysql://keystone:$KEYSTONE_DBPASS@$HOST_NAME/keystone
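Since openstack-config is only available on the node, the result can be mimicked locally: write the section the command would produce, then read the key back with sed, the way one might verify it afterwards (the file name and password below are made up):

```shell
cat > /tmp/keystone.conf.demo <<'EOF'
[database]
connection = mysql+pymysql://keystone:secret@controller/keystone
EOF
# Print the 'connection' value from the [database] section only.
sed -n '/^\[database\]/,/^\[/s/^connection *= *//p' /tmp/keystone.conf.demo
```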
shell
su -s /bin/sh -c "keystone-manage db_sync" keystone
Populate (sync) the keystone database schema, running as the keystone user.
shell
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
Generate the Fernet key repository used to encrypt and decrypt authentication tokens, owned by the keystone user and group.
shell
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Set up the credential key repository, used to encrypt the credentials keystone stores, again owned by the keystone user and group.
shell
keystone-manage bootstrap --bootstrap-password $ADMIN_PASS \
--bootstrap-admin-url http://$HOST_NAME:5000/v3/ \
--bootstrap-internal-url http://$HOST_NAME:5000/v3/ \
--bootstrap-public-url http://$HOST_NAME:5000/v3/ \
--bootstrap-region-id RegionOne
Bootstrap the keystone admin user with the given password, and register keystone's API endpoint URLs and region ID.
shell
sed -i "s/#ServerName www.example.com:80/ServerName $HOST_NAME/g" /etc/httpd/conf/httpd.conf
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl enable httpd.service
systemctl restart httpd.service
Set the web server's ServerName to the controller's hostname, symlink keystone's WSGI configuration into Apache, then enable and restart httpd.
shell
export OS_USERNAME=admin
export OS_PASSWORD=$ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://$HOST_NAME:5000/v3
export OS_IDENTITY_API_VERSION=3
Export temporary environment variables for the admin user.
shell
openstack domain create --description "Default Domain" $DOMAIN_NAME
openstack project create --domain $DOMAIN_NAME --description "Admin project" myadmin
openstack user create --domain $DOMAIN_NAME --password $ADMIN_PASS myadmin
openstack role add --project myadmin --user myadmin admin
Create the default domain, the myadmin project and user, and grant the user the admin role.
shell
export OS_USERNAME=myadmin
export OS_PASSWORD=$ADMIN_PASS
export OS_PROJECT_NAME=myadmin
export OS_USER_DOMAIN_NAME=$DOMAIN_NAME
export OS_PROJECT_DOMAIN_NAME=$DOMAIN_NAME
export OS_AUTH_URL=http://$HOST_NAME:5000/v3
export OS_IDENTITY_API_VERSION=3
Export environment variables for the myadmin user.
shell
openstack project delete admin
openstack project set --name admin --domain $DOMAIN_NAME --description "Admin Project" --enable myadmin
Delete the bootstrap admin project, then rename the myadmin project to admin (openstack project set renames and re-describes the existing project rather than creating a new one).
shell
export OS_PROJECT_NAME=admin
Point the environment at the admin project.
shell
openstack user delete admin
openstack user set --name admin --domain $DOMAIN_NAME --project admin --project-domain $DOMAIN_NAME --password $ADMIN_PASS --enable myadmin
Delete the bootstrap admin user, then rename the myadmin user to admin, binding it to the admin project and default domain and setting its password.
shell
export OS_USERNAME=admin
Set the username back to admin.
shell
openstack role add --project admin --user admin admin
Grant the admin role to the admin user on the admin project.
shell
openstack project create --domain $DOMAIN_NAME --description "Service Project" service
openstack project create --domain $DOMAIN_NAME --description "Demo Project" demo
Create the service project and the demo project.
shell
openstack user create --domain $DOMAIN_NAME --password $DEMO_PASS demo
openstack role create user
openstack role add --project demo --user demo user
Create a demo user, create the user role, and assign the role to the user.
shell
cat > /etc/keystone/admin-openrc.sh <<-EOF
export OS_PROJECT_DOMAIN_NAME=$DOMAIN_NAME
export OS_USER_DOMAIN_NAME=$DOMAIN_NAME
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=$ADMIN_PASS
export OS_AUTH_URL=http://$HOST_NAME:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
Write the administrator's environment variables into admin-openrc.sh, to be used as the platform administrator.
shell
cat > /etc/keystone/demo-openrc.sh <<-EOF
export OS_PROJECT_DOMAIN_NAME=$DOMAIN_NAME
export OS_USER_DOMAIN_NAME=$DOMAIN_NAME
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=$DEMO_PASS
export OS_AUTH_URL=http://$HOST_NAME:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
Write the ordinary user's environment variables into demo-openrc.sh, to be used as a regular platform user.
shell
source /etc/keystone/admin-openrc.sh
Source the environment file to gain administrator credentials.
shell
openstack token issue
Request a token to verify keystone is working.
iaas-install-glance.sh
shell
#!/bin/bash
source /etc/openstack/openrc.sh
source /etc/keystone/admin-openrc.sh
#glance mysql
mysql -uroot -p$DB_PASS -e "create database IF NOT EXISTS glance ;"
mysql -uroot -p$DB_PASS -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$GLANCE_DBPASS' ;"
mysql -uroot -p$DB_PASS -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$GLANCE_DBPASS' ;"
Create the glance database and grant it local and remote access privileges.
shell
#glance user role service endpoint
openstack user create --domain $DOMAIN_NAME --password $GLANCE_PASS glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://$HOST_NAME:9292
openstack endpoint create --region RegionOne image internal http://$HOST_NAME:9292
openstack endpoint create --region RegionOne image admin http://$HOST_NAME:9292
Create the glance user, service, and endpoints.
shell
#glance install
yum install -y openstack-glance
Install the glance package.
shell
#/etc/glance/glance-api.conf
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:$GLANCE_DBPASS@$HOST_NAME/glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken www_authenticate_uri http://$HOST_NAME:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://$HOST_NAME:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers $HOST_NAME:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name $DOMAIN_NAME
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name $DOMAIN_NAME
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password $GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
What this block does: configure glance's keystone authentication.
Glance consists of glance-api and glance-registry.
glance-api accepts requests to create, delete, and read cloud images, listening on port 9292.
glance-registry handles the MySQL interaction, storing and retrieving image metadata, listening on port 9191.
shell
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store $DOMAIN_NAME'_store' file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
Configure the image store backends and the directory where images are kept.
shell
#/etc/glance/glance-registry.conf
openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:$GLANCE_DBPASS@$HOST_NAME/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken www_authenticate_uri http://$HOST_NAME:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://$HOST_NAME:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers $HOST_NAME:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name $DOMAIN_NAME
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name $DOMAIN_NAME
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password $GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
Configure glance-registry's database connection and keystone authentication.
shell
#su glance mysql
su -s /bin/sh -c "glance-manage db_sync" glance
Populate the glance database schema, running as the glance user.
shell
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl restart openstack-glance-api.service openstack-glance-registry.service
Enable the glance services at boot and restart them.
iaas-install-placement.sh
shell
#!/bin/bash
source /etc/openstack/openrc.sh
source /etc/keystone/admin-openrc.sh
#placement mysql
mysql -uroot -p$DB_PASS -e "CREATE DATABASE placement;"
mysql -uroot -p$DB_PASS -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '$PLACEMENT_DBPASS';"
mysql -uroot -p$DB_PASS -e "GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '$PLACEMENT_DBPASS';"
Create the placement database and grant it local and remote access privileges.
shell
#placement user role service endpoint
openstack user create --domain $DOMAIN_NAME --password $PLACEMENT_PASS placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://$HOST_NAME:8778
openstack endpoint create --region RegionOne placement internal http://$HOST_NAME:8778
openstack endpoint create --region RegionOne placement admin http://$HOST_NAME:8778
Create the placement user, service, and endpoints.
shell
#placement install
yum install openstack-placement-api python2-pip -y
What this block does: install the placement resource tracking and scheduling service.
Placement was split out of nova in the Stein (S) release; it collects the available resources of each node and records them in its database.
Nova's scheduler service queries it; placement listens on port 8778.
shell
#/etc/placement/placement.conf
openstack-config --set /etc/placement/placement.conf api auth_strategy keystone
openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_url http://$HOST_NAME:5000/v3
openstack-config --set /etc/placement/placement.conf keystone_authtoken memcached_servers $HOST_NAME:11211
openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_type password
openstack-config --set /etc/placement/placement.conf keystone_authtoken project_domain_name $DOMAIN_NAME
openstack-config --set /etc/placement/placement.conf keystone_authtoken user_domain_name $DOMAIN_NAME
openstack-config --set /etc/placement/placement.conf keystone_authtoken project_name service
openstack-config --set /etc/placement/placement.conf keystone_authtoken username placement
openstack-config --set /etc/placement/placement.conf keystone_authtoken password $PLACEMENT_PASS
openstack-config --set /etc/placement/placement.conf placement_database connection mysql+pymysql://placement:$PLACEMENT_DBPASS@$HOST_NAME/placement
Configure keystone authentication and the database connection.
shell
#su placement mysql
su -s /bin/sh -c "placement-manage db sync" placement
#/etc/httpd/conf.d/00-placement-api.conf
cat >> /etc/httpd/conf.d/00-placement-api.conf <<EOF
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
EOF
systemctl restart httpd
Populate the placement database, then append an Apache access-control snippet for /usr/bin (needed because Apache 2.4's default policy would otherwise deny access to the placement WSGI script) and restart httpd.
iaas-install-nova-controller.sh
shell
#!/bin/bash
source /etc/openstack/openrc.sh
source /etc/keystone/admin-openrc.sh
#neutron mysql
mysql -uroot -p$DB_PASS -e "create database IF NOT EXISTS nova ;"
mysql -uroot -p$DB_PASS -e "create database IF NOT EXISTS nova_api ;"
mysql -uroot -p$DB_PASS -e "create database IF NOT EXISTS nova_cell0 ;"
mysql -uroot -p$DB_PASS -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$NOVA_DBPASS' ;"
mysql -uroot -p$DB_PASS -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$NOVA_DBPASS' ;"
mysql -uroot -p$DB_PASS -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '$NOVA_DBPASS' ;"
mysql -uroot -p$DB_PASS -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '$NOVA_DBPASS' ;"
mysql -uroot -p$DB_PASS -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '$NOVA_DBPASS' ;"
mysql -uroot -p$DB_PASS -e "GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '$NOVA_DBPASS' ;"
What this block does: create nova's three databases and grant each local and remote access privileges.
The nova_api database stores global data such as flavors, instance groups, and quotas.
The nova_cell0 database stores records of instances that failed scheduling.
shell
#nova user role service endpoint
openstack user create --domain $DOMAIN_NAME --password $NOVA_PASS nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://$HOST_NAME:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://$HOST_NAME:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://$HOST_NAME:8774/v2.1
Create the nova user and grant it the admin role, then create the compute service and endpoints.
shell
#nova install
yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y
What this block does: install the nova packages.
Nova provides the compute service, maintaining and managing the cloud's compute resources.
nova-api receives and handles HTTP requests, and talks to the other components over the message queue or HTTP.
nova-conductor sits between nova-compute and the database; it performs database operations on nova-compute's behalf so compute nodes never touch the database directly, improving security.
nova-novncproxy is nova's VNC proxy, providing instance consoles.
nova-scheduler is nova's scheduler, deciding which compute node will host an instance.
shell
#/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $HOST_IP
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:$RABBIT_PASS@$HOST_NAME
Enable the compute and metadata APIs, set the management IP, enable neutron networking with a no-op firewall driver, and configure the RabbitMQ connection.
shell
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:$NOVA_DBPASS@$HOST_NAME/nova_api
Database connection for the nova_api database.
shell
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:$NOVA_DBPASS@$HOST_NAME/nova
Database connection for the nova database.
shell
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken www_authenticate_uri http://$HOST_NAME:5000/
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://$HOST_NAME:5000/
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers $HOST_NAME:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name $DOMAIN_NAME
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name $DOMAIN_NAME
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password $NOVA_PASS
Configure keystone authentication.
shell
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen $HOST_IP
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address $HOST_IP
Configure the listen and client addresses for nova's web console (VNC).
shell
openstack-config --set /etc/nova/nova.conf glance api_servers http://$HOST_NAME:9292
Point nova at the glance service.
shell
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
Configure the lock file path, used to serialize access to shared resources and prevent concurrent-access conflicts.
shell
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name $DOMAIN_NAME
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name $DOMAIN_NAME
openstack-config --set /etc/nova/nova.conf placement auth_url http://$HOST_NAME:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password $PLACEMENT_PASS
Configure the credentials for the placement service.
shell
openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 300
Configure host discovery: scan for new compute nodes every 300 seconds.
shell
#su nova mysql
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
Running as the nova user: populate the API database, map cell0, create cell1, sync the main database, and list the cells.
shell
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Restart the nova services and enable them at boot.
shell
openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 10 m1.tiny
openstack flavor create --id 2 --vcpus 1 --ram 1024 --disk 20 m1.small
openstack flavor create --id 3 --vcpus 2 --ram 2048 --disk 40 m1.medium
Create three flavors (instance types).
iaas-install-nova-compute.sh
shell
#!/bin/bash
source /etc/openstack/openrc.sh
#nova-compute install
yum install openstack-nova-compute -y
Install the nova-compute component; it manages the life cycle of virtual machine instances and is the essential daemon on a compute node.
shell
#/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $HOST_IP
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME
Set the management IP, enable neutron with a no-op firewall driver, enable the compute and metadata APIs, and configure the RabbitMQ connection.
shell
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
Set the authentication strategy to keystone.
shell
openstack-config --set /etc/nova/nova.conf keystone_authtoken www_authenticate_uri http://$HOST_NAME:5000/
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://$HOST_NAME:5000/
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers $HOST_NAME:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name $DOMAIN_NAME
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name $DOMAIN_NAME
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password $NOVA_PASS
Configure keystone authentication.
shell
openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc server_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address $HOST_IP_NODE
Configure the VNC listen address and the proxy client address
shell
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://$HOST_IP:6080/vnc_auto.html
Configure the URL of the noVNC web console
shell
openstack-config --set /etc/nova/nova.conf glance api_servers http://$HOST_NAME:9292
Point nova at the Glance image service
shell
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
Set the lock file path
shell
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name $DOMAIN_NAME
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name $DOMAIN_NAME
openstack-config --set /etc/nova/nova.conf placement auth_url http://$HOST_NAME:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password $PLACEMENT_PASS
Configure the Placement connection credentials
shell
virt_num=`egrep -c '(vmx|svm)' /proc/cpuinfo`
if [ $virt_num = '0' ];then
crudini --set /etc/nova/nova.conf libvirt virt_type qemu
fi
Check whether the CPU supports hardware virtualization (the vmx/svm flags); if not (count is 0), fall back to the qemu virtualization type
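The check above can be isolated into a tiny function to make the logic easy to try anywhere. This is a sketch of the same decision, fed fabricated cpuinfo text instead of the real /proc/cpuinfo:

```shell
# Decide nova's virt_type from cpuinfo flags: kvm needs the vmx
# (Intel VT-x) or svm (AMD-V) flag; otherwise fall back to plain qemu.
decide_virt_type() {
  if echo "$1" | egrep -q '(vmx|svm)'; then
    echo kvm
  else
    echo qemu
  fi
}

decide_virt_type 'flags : fpu vme sse'   # no HW virt -> qemu
decide_virt_type 'flags : fpu vmx sse'   # Intel VT-x -> kvm
```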
shell
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl restart libvirtd.service openstack-nova-compute.service
Enable libvirtd and nova-compute to start at boot, then restart both services
shell
ssh $HOST_IP "source /etc/keystone/admin-openrc.sh && openstack compute service list --service nova-compute"
ssh $HOST_IP 'source /etc/keystone/admin-openrc.sh && su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova'
SSH to the controller node to list the nova-compute services and run cell_v2 host discovery, so the new compute node is registered
iaas-install-neutron-controller.sh
shell
#!/bin/bash
source /etc/openstack/openrc.sh
source /etc/keystone/admin-openrc.sh
#neutron mysql
mysql -uroot -p$DB_PASS -e "create database IF NOT EXISTS neutron ;"
mysql -uroot -p$DB_PASS -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '$NEUTRON_DBPASS' ;"
mysql -uroot -p$DB_PASS -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '$NEUTRON_DBPASS' ;"
Create the neutron database and grant local and remote access
shell
#neutron user role service endpoint
openstack user create --domain $DOMAIN_NAME --password $NEUTRON_PASS neutron
Create the neutron user
shell
openstack role add --project service --user neutron admin
Grant it the admin role in the service project
shell
openstack service create --name neutron --description "OpenStack Networking" network
Create the network service entry
shell
openstack endpoint create --region RegionOne network public http://$HOST_NAME:9696
openstack endpoint create --region RegionOne network internal http://$HOST_NAME:9696
openstack endpoint create --region RegionOne network admin http://$HOST_NAME:9696
Create the three API endpoints
shell
#neutron install
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
What this block does: install the Neutron packages
Neutron is OpenStack's networking service; it creates, manages, and connects virtual networks so that instances and other resources in the cloud can communicate
shell
#network
if [[ `ip a |grep -w $INTERFACE_IP |grep -w $INTERFACE_NAME` = '' ]];then
cat > /etc/sysconfig/network-scripts/ifcfg-$INTERFACE_NAME <<EOF
DEVICE=$INTERFACE_NAME
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
EOF
systemctl restart network
fi
Configure the external (provider) network interface that gives instances outside connectivity
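The guard condition deserves a closer look: the ifcfg file is only written when `ip a` does not already show the expected IP on the expected interface. A sketch of that test, fed fabricated `ip a` output (the sample line and interface name are made up for illustration):

```shell
# Returns success (0) when the interface does NOT already carry the IP,
# i.e. when the ifcfg file still needs to be generated.
nic_unconfigured() {  # $1 = `ip a` output, $2 = INTERFACE_IP, $3 = INTERFACE_NAME
  [ -z "$(echo "$1" | grep -w "$2" | grep -w "$3")" ]
}

sample='2: eth1: <BROADCAST,MULTICAST,UP> ... inet 192.168.200.10/24 scope global eth1'
nic_unconfigured "$sample" 192.168.200.10 eth1 && echo write-ifcfg || echo already-configured
nic_unconfigured "$sample" 192.168.200.20 eth1 && echo write-ifcfg || echo already-configured
```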
shell
#/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:$NEUTRON_DBPASS@$HOST_NAME/neutron
Configure the database connection
shell
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true
Enable the ML2 core plugin, the router service plugin, and overlapping IP ranges
shell
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME
Configure the RabbitMQ message-queue connection
shell
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
Set the authentication strategy to keystone
shell
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://$HOST_NAME:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://$HOST_NAME:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers $HOST_NAME:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name $DOMAIN_NAME
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name $DOMAIN_NAME
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password $NEUTRON_PASS
Configure the keystone authentication credentials
shell
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes true
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes true
Have neutron notify nova whenever the status or data of a port it manages changes
shell
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://$HOST_NAME:5000
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name $DOMAIN_NAME
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name $DOMAIN_NAME
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password $NOVA_PASS
Configure the credentials neutron uses to talk to nova
shell
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
Set the lock file path
#/etc/neutron/plugins/ml2/ml2_conf.ini
shell
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan,gre,local
Enable the flat, vlan, vxlan, gre, and local network type drivers
shell
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
Set vxlan as the tenant (self-service) network type
shell
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
What this block does: register the Linux bridge and L2 Population mechanism drivers with the ML2 plugin
A Linux bridge joins several interfaces on one host into a software switch; the bridged interfaces can be physical or virtual
L2 Population improves the scalability of VXLAN networks
ML2 (Modular Layer 2) is a core plugin introduced in the Havana release; its power, ease of management, and extensibility let it supersede the earlier monolithic core plugins
shell
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
Register the port security extension driver with the ML2 plugin
shell
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks $Physical_NAME
Map flat networks onto the named physical network, i.e. our external network
shell
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vlan network_vlan_ranges $Physical_NAME:$minvlan:$maxvlan
Map VLAN networks onto the same physical network, restricted to the VLAN ID range set in the environment file
shell
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges $minvlan:$maxvlan
Restrict VXLAN networks to the given VNI range
shell
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset true
Enable ipset in the security group driver so the Linux kernel's IP sets can speed up firewall rule matching
shell
#/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:$INTERFACE_NAME
Map the physical interface to the provider network for the Linux bridge agent
shell
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
Enable VXLAN
shell
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip $HOST_IP
Set the local endpoint IP for VXLAN tunnels
shell
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
Enable L2 Population for VXLAN to improve network performance and scalability
shell
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
Enable security groups, the virtual-network access control mechanism that restricts instances' network access
shell
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Set the firewall driver, the component that implements security group rules (here, iptables)
shell
#br_netfilter
modprobe br_netfilter
What this block does: load the br_netfilter kernel module
It lets the Linux kernel pass bridged network packets through the netfilter framework.
shell
echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
What this block does: append two kernel parameters to /etc/sysctl.conf
so that iptables firewall rules are applied to traffic crossing Linux bridges
shell
sysctl -p
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
Apply the parameters with sysctl -p, then read them back to verify
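One caveat: `>>` appends unconditionally, so re-running the script leaves duplicate lines in /etc/sysctl.conf. A small idempotent helper would avoid that. This is a sketch, not part of the original script, demonstrated against a temp file:

```shell
# Append a line to a file only if that exact line is not already there.
append_once() {  # append_once <line> <file>
  grep -qxF -- "$1" "$2" 2>/dev/null || echo "$1" >> "$2"
}

conf=$(mktemp)
append_once 'net.bridge.bridge-nf-call-iptables = 1' "$conf"
append_once 'net.bridge.bridge-nf-call-iptables = 1' "$conf"  # second call is a no-op
grep -c 'bridge-nf-call-iptables' "$conf"                     # -> 1
```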
shell
#/etc/neutron/l3_agent.ini
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver linuxbridge
Set the L3 agent's interface driver to linuxbridge
shell
#/etc/neutron/dhcp_agent.ini
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
Set the DHCP agent's interface driver to linuxbridge so it can plug into the kernel network stack
shell
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
What this block does: make the DHCP agent use Dnsmasq
Dnsmasq is a small, convenient DNS and DHCP server
shell
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true
Enable isolated metadata so instances on networks without a router can still reach the metadata service
shell
#/etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host $HOST_NAME
Point the metadata agent at the controller node, where the nova metadata service runs
shell
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret $METADATA_SECRET
Set the shared secret that secures traffic between the metadata agent and the nova metadata service
shell
#/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf neutron auth_url http://$HOST_NAME:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name $DOMAIN_NAME
openstack-config --set /etc/nova/nova.conf neutron user_domain_name $DOMAIN_NAME
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password $NEUTRON_PASS
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy true
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret $METADATA_SECRET
Configure neutron credentials and the metadata proxy secret in the nova configuration file
shell
#su neutron mysql
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Create the plugin.ini symbolic link that the packaging expects
shell
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Populate the neutron database schema, running neutron-db-manage as the neutron user
shell
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent neutron-l3-agent
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent neutron-l3-agent
Restart the nova-api service, then enable the neutron services at boot and start them
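Once the services are up, a quick sanity check is worthwhile. This is a verification step I suggest rather than part of the script, and it only works on the controller of a live deployment with admin credentials loaded:

```shell
# Load admin credentials, then list the neutron agents.
# Every agent (L3, DHCP, metadata, Linux bridge) should report alive.
source /etc/keystone/admin-openrc.sh
openstack network agent list
```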
iaas-install-neutron-compute.sh
shell
#!/bin/bash
source /etc/openstack/openrc.sh
#neutron install
yum install -y openstack-neutron-linuxbridge ebtables ipset
Install the Neutron agent packages
shell
#network
if [[ `ip a |grep -w $INTERFACE_IP |grep -w $INTERFACE_NAME` = '' ]];then
cat > /etc/sysconfig/network-scripts/ifcfg-$INTERFACE_NAME <<EOF
DEVICE=$INTERFACE_NAME
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
EOF
systemctl restart network
fi
Configure the external (provider) network interface that gives instances outside connectivity
shell
#/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME
Configure the RabbitMQ message-queue connection
shell
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
Set the authentication strategy to keystone
shell
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://$HOST_NAME:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://$HOST_NAME:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers $HOST_NAME:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name $DOMAIN_NAME
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name $DOMAIN_NAME
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password $NEUTRON_PASS
Configure the keystone authentication credentials
shell
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
Set the lock file path
shell
#/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:$INTERFACE_NAME
Map the physical interface to the provider network for the Linux bridge agent
shell
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
Enable VXLAN
shell
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip $HOST_IP_NODE
Set the local endpoint IP for VXLAN tunnels (the compute node's IP)
shell
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
Enable L2 Population for VXLAN
shell
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
Enable security groups
shell
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Set the firewall driver
shell
#br_netfilter
modprobe br_netfilter
Load the br_netfilter kernel module
shell
echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
Append the two kernel parameters to /etc/sysctl.conf
shell
sysctl -p
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
Apply the parameters with sysctl -p, then read them back to verify
shell
#/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf neutron url http://$HOST_NAME:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://$HOST_NAME:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name $DOMAIN_NAME
openstack-config --set /etc/nova/nova.conf neutron user_domain_name $DOMAIN_NAME
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password $NEUTRON_PASS
Configure neutron credentials in the nova configuration file
shell
systemctl restart openstack-nova-compute.service
systemctl restart neutron-linuxbridge-agent.service
systemctl enable neutron-linuxbridge-agent.service
Restart the nova-compute service, then restart the Linux bridge agent and enable it at boot
iaas-install-dashboard.sh
shell
#!/bin/bash
source /etc/openstack/openrc.sh
source /etc/keystone/admin-openrc.sh
#dashboard install
yum install openstack-dashboard -y
What this block does: install the dashboard (Horizon) component
The dashboard provides a web UI through which cloud administrators and users can manage OpenStack resources and services
shell
#/etc/openstack-dashboard/local_settings
sed -i '/^OPENSTACK_HOST/s#127.0.0.1#'$HOST_NAME'#' /etc/openstack-dashboard/local_settings
Replace 127.0.0.1 with the controller's hostname as the OpenStack host
shell
sed -i "/^ALLOWED_HOSTS/s#\[.*\]#['*']#" /etc/openstack-dashboard/local_settings
Allow access from all hosts
shell
sed -i '/TIME_ZONE/s#UTC#Asia/Shanghai#' /etc/openstack-dashboard/local_settings
Set the time zone to Asia/Shanghai
shell
sed -i '/^#SESSION_ENGINE/s/#//' /etc/openstack-dashboard/local_settings
Uncomment the SESSION_ENGINE line
shell
sed -i "/^SESSION_ENGINE/s#'.*'#'django.contrib.sessions.backends.cache'#" /etc/openstack-dashboard/local_settings
Switch the session backend to the cache for faster session access
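These in-place `sed` edits are easy to try on a throwaway file first. The sketch below reproduces them against a fabricated two-line sample of local_settings (the sample content is made up; only the sed expressions come from the script):

```shell
# Demonstrate the substitute and uncomment edits on a temp copy.
f=$(mktemp)
cat > "$f" <<'EOF'
OPENSTACK_HOST = "127.0.0.1"
#SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'
EOF
sed -i '/^OPENSTACK_HOST/s#127.0.0.1#controller#' "$f"   # swap in the hostname
sed -i '/^#SESSION_ENGINE/s/#//' "$f"                    # drop the leading #
sed -i "/^SESSION_ENGINE/s#'.*'#'django.contrib.sessions.backends.cache'#" "$f"
cat "$f"
```

Note how the `#` delimiter in `s#…#…#` avoids escaping the `/` inside values, while the uncomment edit uses the default `/` delimiter because its pattern contains none.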
shell
cat >> /etc/openstack-dashboard/local_settings <<EOF
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "$DOMAIN_NAME"
Specify the OpenStack API versions, the default role, multi-domain support, and the default domain
shell
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '$HOST_NAME:11211',
}
}
Use Memcached on port 11211 as the dashboard's cache backend
shell
WEBROOT = '/dashboard/'
EOF
Set the dashboard's web root path
shell
#/etc/httpd/conf.d/openstack-dashboard.conf
sed -e '4iWSGIApplicationGroup %{GLOBAL}' /etc/httpd/conf.d/openstack-dashboard.conf
Insert WSGIApplicationGroup %{GLOBAL} so the dashboard runs in the global WSGI application group (note that without -i, sed only prints the result to stdout; the file is regenerated in the next step anyway)
shell
#rebuild dashboard
cd /usr/share/openstack-dashboard && python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf
Regenerate the Apache configuration for the dashboard so it can be served correctly by httpd
shell
ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf
Create a symbolic link to the configuration directory
shell
sed -i "s:WSGIScriptAlias / :WSGIScriptAlias /dashboard :" /etc/httpd/conf.d/openstack-dashboard.conf
Serve the dashboard under the /dashboard path
shell
sed -i "s:Alias /static:Alias /dashboard/static:" /etc/httpd/conf.d/openstack-dashboard.conf
Fix the static files alias so pages load their assets correctly
shell
systemctl restart httpd.service memcached.service
Restart the httpd and memcached services
shell
#/root/logininfo.txt
printf "\033[35mThe horizon service is ready,Now you can visit the following;\n\033[0m"
echo 浏览器访问:http://$HOST_IP/dashboard
echo 域:$DOMAIN_NAME
echo 用户名:admin
echo 密码:"${ADMIN_PASS}"
echo 信息输出到root目录下的logininfo.txt中了。
Print the login information to the screen for the user
shell
cat >> /root/logininfo.txt << EOF
浏览器访问:http://$HOST_IP/dashboard
域:$DOMAIN_NAME
用户名:admin
密码:"${ADMIN_PASS}"
EOF
Save the login information to a file so it is not lost
iaas-install-cinder-controller.sh
shell
#!/bin/bash
source /etc/openstack/openrc.sh
source /etc/keystone/admin-openrc.sh
#cinder mysql
mysql -uroot -p$DB_PASS -e "create database cinder;"
mysql -uroot -p$DB_PASS -e "grant all privileges on cinder.* to 'cinder'@'%' identified by '$CINDER_DBPASS';"
mysql -uroot -p$DB_PASS -e "grant all privileges on cinder.* to 'cinder'@'localhost' identified by '$CINDER_DBPASS';"
Create the cinder database and grant local and remote access
shell
#cinder user role service endpoint
openstack user create --domain $DOMAIN_NAME --password $CINDER_PASS cinder
openstack role add --project service --user cinder admin
Create the cinder user and grant it the admin role
shell
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
Create the v2 and v3 volume services
shell
openstack endpoint create --region RegionOne volumev2 public http://$HOST_NAME:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://$HOST_NAME:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://$HOST_NAME:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://$HOST_NAME:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://$HOST_NAME:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://$HOST_NAME:8776/v3/%\(project_id\)s
Create the endpoints for both volume API versions
shell
#cinder install
yum install openstack-cinder -y
What this block does: install the cinder packages
Cinder provides persistent block storage, much like attaching a hard disk to a virtual machine
It lets users create, manage, and attach volumes on which instances store data and operating systems
shell
#/etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:$CINDER_DBPASS@$HOST_NAME/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://$HOST_NAME:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://$HOST_NAME:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers $HOST_NAME:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name $DOMAIN_NAME
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name $DOMAIN_NAME
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password $CINDER_PASS
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip $HOST_IP
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
Configure the database and RabbitMQ connections, keystone credentials, management IP, and lock file path
shell
#/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne
Tell nova which region's cinder endpoints to use
shell
#su cinder mysql
su -s /bin/sh -c "cinder-manage db sync" cinder
Populate the cinder database schema as the cinder user
shell
systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service httpd
Restart the nova-api service, then enable the cinder API and scheduler services at boot and restart them along with httpd
shell
cinder service-list
List the cinder services to verify they are up
iaas-install-cinder-compute.sh
shell
#!/bin/bash
source /etc/openstack/openrc.sh
#cinder install
yum install lvm2 device-mapper-persistent-data openstack-cinder targetcli python-keystone -y
Install the packages (LVM tools, cinder, and the LIO target CLI)
shell
systemctl enable lvm2-lvmetad.service
systemctl restart lvm2-lvmetad.service
Enable the LVM metadata service at boot and restart it
shell
#Create a disk for cinder volumes
pvcreate /dev/$BLOCK_DISK
Create a physical volume
shell
vgcreate cinder-volumes /dev/$BLOCK_DISK
Create the cinder-volumes volume group
shell
partprobe
Re-read the partition table so the kernel sees the changes
shell
#sed -i '/^ filter/d' /etc/lvm/lvm.conf
#sed -i '/^devices/a\ filter = ["a/sdb/", "a/sda/", "r/.*/"]' /etc/lvm/lvm.conf
#sed -i "s/sdz/$BLOCK_DISK/g" /etc/lvm/lvm.conf
#partprobe
#/etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:$CINDER_DBPASS@$HOST_NAME/cinder
Configure the database connection
shell
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://$RABBIT_USER:$RABBIT_PASS@$HOST_NAME
Configure the RabbitMQ connection
shell
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
Set keystone as the authentication service
shell
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip $HOST_IP_NODE
Set the compute node's IP as cinder's management address
shell
openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm
Enable the lvm storage backend
shell
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://$HOST_NAME:9292
Point cinder at the Glance API address and port
shell
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://$HOST_NAME:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://$HOST_NAME:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers $HOST_NAME:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name $DOMAIN_NAME
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name $DOMAIN_NAME
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password $CINDER_PASS
Configure the keystone authentication credentials
shell
openstack-config --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
openstack-config --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
Use the LVM driver to manage cinder volumes in the cinder-volumes group
shell
openstack-config --set /etc/cinder/cinder.conf lvm iscsi_protocol iscsi
Use the iSCSI protocol to access cinder volumes
shell
openstack-config --set /etc/cinder/cinder.conf lvm iscsi_helper lioadm
Use LIO (the Linux iSCSI target, via lioadm) as the iSCSI helper for accessing cinder volumes
shell
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
Set the lock file path
shell
systemctl enable openstack-cinder-volume.service target.service
systemctl restart openstack-cinder-volume.service target.service
Enable the cinder-volume and target services at boot and restart them
shell
ssh $HOST_IP "source /etc/keystone/admin-openrc.sh && cinder service-list"
SSH to the controller node and check the cinder service list