openstack-train-ovs-ceph 部署

第一章 Openstack简介

https://baike.baidu.com/item/OpenStack/342467?fr=aladdin

Openstack框架图(图略)

OpenStack 可以理解为对 KVM 等虚拟化资源进行统一管理和调度的云平台。

第二章 Openstack云平台部署

服务器规划

由于使用 VMware Workstation 模拟环境,eth0 所在网络作为管理网络(NAT 或仅主机模式均可,本文按 NAT 模式规划),eth1 所在网络配置为桥接模式,作为业务/外部网络。

| 主机名 | IP | 磁盘 | CPU | 内存 |
| --- | --- | --- | --- | --- |
| controller | eth0:10.0.0.10,eth1:不配置IP | sda:100G | 4C | 8G |
| compute01 | eth0:10.0.0.11,eth1:不配置IP | sda:100G | 2C | 4G |
| compute02 | eth0:10.0.0.12,eth1:不配置IP | sda:100G | 2C | 4G |


修改网卡配置文件

| 网卡名称 | 模式 | 作用 | IP |
| --- | --- | --- | --- |
| eth0 | NAT模式 | 作为OpenStack管理、集群、存储网络 | 10.0.0.0/24 |
| eth1 | 桥接模式 | 作为OpenStack租户、业务、外部网络 | 不配置IP |

controller节点:

cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<EOF
BOOTPROTO=static
DEVICE=eth0
ONBOOT=yes
IPADDR=10.0.0.10
NETMASK=255.255.255.0
GATEWAY=10.0.0.254
DNS1=114.114.114.114
EOF
cat > /etc/sysconfig/network-scripts/ifcfg-eth1 <<EOF
BOOTPROTO=none
DEVICE=eth1
ONBOOT=yes
EOF

compute01节点:

cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<EOF
BOOTPROTO=static
DEVICE=eth0
ONBOOT=yes
IPADDR=10.0.0.11
NETMASK=255.255.255.0
GATEWAY=10.0.0.254
DNS1=114.114.114.114
EOF
cat > /etc/sysconfig/network-scripts/ifcfg-eth1 <<EOF
BOOTPROTO=none
DEVICE=eth1
ONBOOT=yes
EOF

compute02节点:

cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<EOF
BOOTPROTO=static
DEVICE=eth0
ONBOOT=yes
IPADDR=10.0.0.12
NETMASK=255.255.255.0
GATEWAY=10.0.0.254
DNS1=114.114.114.114
EOF
cat > /etc/sysconfig/network-scripts/ifcfg-eth1 <<EOF
BOOTPROTO=none
DEVICE=eth1
ONBOOT=yes
EOF

2.1 配置系统环境及基础服务的安装

2.1.1 配置系统环境

controller:内存8G,ip:10.0.0.10

compute01:内存4G,cpu开启虚拟化(必开),ip:10.0.0.11

compute02:内存4G,cpu开启虚拟化(必开),ip:10.0.0.12

2.1.1.1 设置服务器主机名

控制节点生成密钥并设置主机名

[root@localhost ~]# ssh-keygen 

hostnamectl set-hostname controller
hostnamectl set-hostname compute01
hostnamectl set-hostname compute02
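
可选示例:控制节点生成密钥后,可将公钥分发到两台计算节点,便于后续 scp、ssh 免交互操作(此处以 IP 方式分发,假设各节点允许 root 密码登录):

[root@controller ~]# ssh-copy-id root@10.0.0.11
[root@controller ~]# ssh-copy-id root@10.0.0.12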

2.1.1.2 配置hosts解析(所有节点都需要配置)

[root@controller ~]# cat >> /etc/hosts << EOF
10.0.0.10 controller
10.0.0.11 compute01
10.0.0.12 compute02
EOF
[root@controller ~]# scp /etc/hosts compute01:/etc/
[root@controller ~]# scp /etc/hosts compute02:/etc/

2.1.1.3 关闭防火墙、NetworkManager网络管理工具及selinux

systemctl stop firewalld
systemctl disable firewalld
systemctl stop NetworkManager
systemctl disable NetworkManager
sed -i  's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
setenforce 0

2.1.1.4 配置http本地源

首先上传openstack-train.tar.gz压缩包在进行如下配置

控制节点:

mkdir -p /var/www/html/
tar fx openstack-train.tar.gz -C /var/www/html/
rm -rf /etc/yum.repos.d/*
cat << EOF >> /etc/yum.repos.d/openstack-train.repo 
[openstack]
name=openstack
baseurl=file:///var/www/html/openstack-train
gpgcheck=0
enabled=1
EOF
yum clean all
yum makecache

yum install -y httpd
systemctl start httpd && systemctl enable httpd

计算节点:

rm -rf /etc/yum.repos.d/*
cat << EOF >> /etc/yum.repos.d/openstack-train.repo 
[openstack]
name=openstack
baseurl=http://10.0.0.10/openstack-train
gpgcheck=0
enabled=1
EOF
yum clean all
yum makecache
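
可选验证:配置完成后,可在各节点确认本地源是否可用:

yum repolist enabled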

所有节点安装基础工具包

yum install -y unzip wget lrzsz net-tools vim tree lsof tcpdump telnet screen bash-completion fio stress sysstat tree strace

2.1.1.5 安装openstack客户端及selinux管理工具

控制节点计算节点服务器都需要安装

安装openstack客户端

yum install python-openstackclient -y

安装 openstack-selinux 软件包以便自动管理 OpenStack 服务的安全策略

yum install openstack-selinux -y

2.1.2 配置时钟同步服务

[root@controller ~]# timedatectl set-timezone Asia/Shanghai
[root@controller ~]# vim /etc/chrony.conf
# Allow NTP client access from local network.
allow 10.0.0.0/24
[root@controller ~]# systemctl restart chronyd
[root@controller ~]# ss -autlp|grep ntp
udp  UNCONN  0  0  *:ntp  *:*  users:(("chronyd",pid=9801,fd=3))

[root@compute01 ~]# vim /etc/chrony.conf 
server 10.0.0.10 iburst
[root@compute01 ~]# systemctl restart chronyd
[root@compute01 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address     Stratum Poll Reach LastRx Last sample
===============================================================================
^? controller           2  6   1   1  -990us[ -990us] +/- 9124us

2.1.3 优化参数(所有节点)

修改ulimit

[root@controller ~]# ulimit -n 655350
[root@controller ~]# ulimit -n
655350
[root@controller ~]# vim /etc/security/limits.conf
root       soft   core          1000
root       hard   core          unlimited
root       soft   nofile        655350
root       hard   nofile        655350
root       soft   nproc         655350
root       hard   nproc         655350
root       soft   memlock       unlimited
root       hard   memlock       unlimited
root       soft   stack         unlimited
root       hard   stack         unlimited
root       soft   sigpending    1024000
root       hard   sigpending    1024000

[root@controller ~]# scp /etc/security/limits.conf compute01:/etc/security/
[root@controller ~]# scp /etc/security/limits.conf compute02:/etc/security/

优化打开文件句柄数

[root@controller ~]# vim /etc/systemd/system.conf
DefaultLimitNOFILE=655350
DefaultLimitNPROC=655350
[root@controller ~]# scp /etc/systemd/system.conf compute01:/etc/systemd/
[root@controller ~]# scp /etc/systemd/system.conf compute02:/etc/systemd/

[root@controller ~]# ulimit -n 655350
[root@controller ~]# bash -c "ulimit -u 655350"

最好是所有节点都重启一次

2.1.4 安装数据库并配置及安全初始化

2.1.4.1 安装MySQL服务

[root@controller ~]# yum install mariadb mariadb-server python2-PyMySQL -y

2.1.4.2 在mariadb配置文件创建openstack配置文件,添加如下内容

[root@controller ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 10.0.0.10
default-storage-engine = innodb
innodb_file_per_table = on # 独立表空间,每个表都有自己的独立表空间,表在不同数据库中移动。
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
binlog_format = ROW
expire_logs_days = 7
max_binlog_size = 512M
innodb_autoinc_lock_mode = 2

2.1.4.3 启动mariadb服务,并设置开机自启动

[root@controller ~]# systemctl start mariadb && systemctl enable mariadb

2.1.4.4 进行mariadb安全设置

数据库root用户密码暂时不设置

[root@controller ~]# mysql_secure_installation
当前密码:回车
是否设置root密码:n
匿名用户:y
Root远程登陆禁止:y
移除test库和访问权限:y
重载权限表:y

2.1.4.5 优化mariadb最大连接

登录mariadb查看最大连接

[root@controller ~]# mysql
MariaDB [(none)]> show variables like 'max_connections';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 594   |  # 实际生效的最大连接数小于配置文件中设置的 4096
+-----------------+-------+
1 row in set (0.002 sec)

发现实际生效的最大连接数并不是我们在配置文件中设置的4096。。。这是因为Mariadb有默认的打开文件数限制。

配置Mariadb打开文件数,修改/usr/lib/systemd/system/mariadb.service文件,在[Service]下新增如下配置:

[root@controller ~]# vim /usr/lib/systemd/system/mariadb.service 
LimitNOFILE=655350
LimitNPROC=655350

重新加载系统服务并重启Mariadb服务

[root@controller ~]# systemctl --system daemon-reload
[root@controller ~]# systemctl restart mariadb.service

再次查看Mariadb最大连接数

MariaDB [(none)]> show variables like 'max_connections';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 4096  |
+-----------------+-------+
1 row in set (0.002 sec)
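
可选验证:也可以直接查看 mysqld 进程实际生效的打开文件数限制,确认 LimitNOFILE 配置已生效(若进程名不是 mysqld 请相应替换):

[root@controller ~]# grep "Max open files" /proc/$(pidof mysqld)/limits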

2.1.5 安装并配置消息队列

2.1.5.1 安装并启动rabbitmq

[root@controller ~]# yum install rabbitmq-server -y
[root@controller ~]# systemctl start rabbitmq-server && systemctl enable rabbitmq-server

2.1.5.2 在rabbitmq里创建openstack用户

[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
[root@controller ~]# rabbitmqctl list_users
Listing users
openstack    [administrator]
guest  [administrator]

2.1.5.3 设置新建用户的状态为管理用户用于登录使用

[root@controller ~]# rabbitmqctl set_user_tags openstack administrator
Setting tags for user "openstack" to [administrator]

2.1.5.4 查看指定用户权限信息

[root@controller ~]# rabbitmqctl list_user_permissions openstack
Listing permissions for user "openstack"

2.1.5.5 配置openstack用户读写权限

[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" 

[root@controller ~]# rabbitmqctl list_user_permissions openstack
Listing permissions for user "openstack"
/	.*	.*	.*

2.1.5.6 启用rabbitmq web界面

[root@controller ~]# rabbitmq-plugins enable rabbitmq_management
The following plugins have been enabled:
  amqp_client
  cowlib
  cowboy
  rabbitmq_web_dispatch
  rabbitmq_management_agent
  rabbitmq_management

Applying plugin configuration to rabbit@controller... started 6 plugins.

访问http://10.0.0.10:15672/


查看所有的队列:rabbitmqctl list_queues
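
可选验证:确认 rabbitmq 的 5672(AMQP)与 15672(Web 管理)端口已监听:

[root@controller ~]# ss -tnlp | egrep '5672|15672'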

2.1.6 安装memcached(用于缓存令牌等认证信息)

[root@controller ~]# yum install memcached python-memcached -y

修改memcached监听地址

配置服务以使用控制器节点的管理IP地址。这是为了允许其他节点通过管理网络进行访问:

[root@controller ~]# vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1,10.0.0.10"
[root@controller ~]# systemctl start memcached && systemctl enable memcached && systemctl status memcached

2.2 认证keystone服务安装

keystone的两大功能:用户认证和服务目录

用户认证相关:

user: 用户

project:项目也叫租户

token: 令牌,认证成功了分配一个令牌

role: 角色

服务目录相关:

service:服务

endpoint: 端点,服务对应的 URL。每个服务都有 3 个 URL:

public url 可以全局访问 # 所有租户访问云平台组件

internal url 只能被局域网访问,openstack 服务之间访问 # openstack内部组件互相通信的

admin url 从常规的访问分离出来,只能 admin 使用 # 管理员用户用于管理的

2.2.1 创库授权

创建数据库及授予keystone数据库访问权限

[root@controller ~]# mysql
CREATE DATABASE keystone; 

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS'; 

FLUSH PRIVILEGES;
quit
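
可选验证:授权完成后,可用 keystone 用户通过管理 IP 登录数据库,确认远程授权已生效:

[root@controller ~]# mysql -h 10.0.0.10 -ukeystone -pKEYSTONE_DBPASS -e "show databases;"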

2.2.2 安装keystone服务相关软件包

[root@controller ~]# yum install python2-qpid-proton openstack-keystone httpd mod_wsgi -y

2.2.3 修改配置文件

[root@controller ~]# cp /etc/keystone/keystone.conf{,.bak}
[root@controller ~]# grep -Ev '^$|#' /etc/keystone/keystone.conf.bak >/etc/keystone/keystone.conf
# 在[database]中配置数据库访问、在该[token]部分中,配置Fernet令牌提供者
[root@controller ~]# vim /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@10.0.0.10/keystone 
[token]
provider = fernet

2.2.4 初始化数据库

[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone 
[root@controller ~]# mysql keystone -e 'show tables';	#验证是否导入成功

2.2.5 初始化Fernet-key密钥存储库

[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone 

检查是否生成对应的 key 文件

[root@controller ~]# ls /etc/keystone/ | egrep "fernet-key"
[root@controller ~]# tree /etc/keystone/fernet-keys/

2.2.6 引导身份服务

[root@controller ~]# keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://10.0.0.10:5000/v3/ \
  --bootstrap-internal-url http://10.0.0.10:5000/v3/ \
  --bootstrap-public-url http://10.0.0.10:5000/v3/ \
  --bootstrap-region-id RegionOne

2.2.7 配置Apache httpd服务

编辑/etc/httpd/conf/httpd.conf文件并配置ServerName选项为控制节点

[root@controller ~]# sed -i "s/#ServerName www.example.com:80/ServerName 10.0.0.10/" /etc/httpd/conf/httpd.conf

创建/usr/share/keystone/wsgi-keystone.conf文件软链接到/etc/httpd/conf.d/下

[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

启动http服务

[root@controller ~]# systemctl restart httpd
[root@controller ~]# netstat -auntlp|grep 5000
tcp6       0      0 :::5000                 :::*                    LISTEN      14414/httpd

2.2.8 创建临时环境变量

$ export OS_USERNAME=admin
$ export OS_PASSWORD=ADMIN_PASS
$ export OS_PROJECT_NAME=admin
$ export OS_USER_DOMAIN_NAME=Default
$ export OS_PROJECT_DOMAIN_NAME=Default
$ export OS_AUTH_URL=http://10.0.0.10:5000/v3
$ export OS_IDENTITY_API_VERSION=3

替换ADMIN_PASS为"引导身份服务"设置的密码

验证:验证openstack cli是否能够正常使用

[root@controller ~]# openstack service list
[root@controller ~]# openstack endpoint list 


2.2.9 创建domain(域), projects(项目), users(用户), 与roles(角色)

project/user等基于domain存在;

在"认证引导"章节中,初始化admin用户即生成"default" domain

[root@controller ~]# openstack domain list

如果需要生成新的domain,

[root@controller ~]# openstack domain create --description "An Example Domain" example
[root@controller ~]# openstack domain list

project属于某个domain;

以创建service项目为例,service项目属于"default" domain

[root@controller ~]# openstack project create --domain default --description "Service Project" service

在default域创建myproject项目:

[root@controller ~]# openstack project create --domain default --description "Demo Project" myproject

user属于某个domain

创建myuser用户在default域

[root@controller ~]# openstack user create --domain default --password MYUSER_PASS myuser

创建普通用户角色(区别于admin用户)

[root@controller ~]# openstack role create myrole

将myrole角色添加到myproject项目和myuser用户:

[root@controller ~]# openstack role add --project myproject --user myuser myrole
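
可选验证:确认角色已正确分配给对应的用户和项目:

[root@controller ~]# openstack role assignment list --user myuser --project myproject --names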

2.2.10 验证操作

取消临时变量OS_AUTH_URL和OS_PASSWORD环境变量

[root@controller ~]# unset OS_AUTH_URL OS_PASSWORD

[root@controller ~]# openstack service list
Missing value auth-url required for auth plugin password #需要密码的

以admin用户身份请求身份验证令牌 密码:ADMIN_PASS

openstack --os-auth-url http://controller:5000/v3 \
 --os-project-domain-name Default --os-user-domain-name Default \
 --os-project-name admin --os-username admin token issue

myuser验证 密码:MYUSER_PASS

openstack --os-auth-url http://controller:5000/v3 \
 --os-project-domain-name Default --os-user-domain-name Default \
 --os-project-name myproject --os-username myuser token issue

2.2.11 创建openstack客户端环境脚本

创建和编辑admin-openrc文件并添加以下内容

[root@controller ~]# vim admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1='[\u@\h \W(keystone_admin)]# '

替换ADMIN_PASS为您admin在身份服务中为用户选择的密码。

创建和编辑demo-openrc文件并添加以下内容

[root@controller ~]# vim demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=MYUSER_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

替换MYUSER_PASS为您demo在身份服务中为用户选择的密码。

加载环境变量

[root@controller ~]# source admin-openrc

请求身份验证令牌

[root@controller ~]# openstack service list

2.3 镜像glance服务安装

镜像服务 (glance) 允许用户发现、注册和获取虚拟机镜像。它提供了一个 REST API,允许您查询虚拟机镜像的 metadata 并获取一个现存的镜像。您可以将虚拟机镜像存储到各种位置,从简单的文件系统到对象存储系统----例如 OpenStack 对象存储, 并通过镜像服务使用。

2.3.1 数据库创库授权

[root@controller ~]# mysql
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY 'GLANCE_DBPASS';
FLUSH PRIVILEGES;
quit

2.3.2 在keystone创建glance用户关联角色

创建glance用户

[root@controller ~]# openstack user create --domain default --password GLANCE_PASS glance

将glance用户加入service项目授予admin角色

[root@controller ~]# openstack role add --project service --user glance admin

2.3.3 在keystone上创建服务实体和注册api

创建glance服务实体

openstack service create --name glance \
 --description "OpenStack Image" image

创建镜像服务api端点

openstack endpoint create --region RegionOne \
  image public http://10.0.0.10:9292
openstack endpoint create --region RegionOne \
  image internal http://10.0.0.10:9292
openstack endpoint create --region RegionOne \
  image admin http://10.0.0.10:9292

2.3.4 安装服务和相应软件包

[root@controller ~]# yum install openstack-glance -y

2.3.5 修改相应服务的配置文件

[root@controller ~]# cp /etc/glance/glance-api.conf{,.bak}
[root@controller ~]# grep '^[a-Z\[]' /etc/glance/glance-api.conf.bak >/etc/glance/glance-api.conf
[root@controller ~]# vim /etc/glance/glance-api.conf
# 配置连接数据库
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@10.0.0.10/glance
# 配置认证服务访问
[keystone_authtoken]
www_authenticate_uri  = http://10.0.0.10:5000
auth_url = http://10.0.0.10:5000
memcached_servers = 10.0.0.10:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
flavor = keystone
# 配置glance镜像存储
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

2.3.6 同步数据库并验证

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
[root@controller ~]# mysql glance -e "show tables";

2.3.7 启动服务并设置为开机启动

[root@controller ~]# systemctl enable openstack-glance-api.service
[root@controller ~]# systemctl start openstack-glance-api.service 
[root@controller ~]# netstat -lntup|grep 9292

2.3.8 验证操作

上传cirros-0.4.0-x86_64-disk.img镜像

[root@controller ~]# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

执行命令上传镜像

使用qcow2磁盘格式、bare容器格式上传到镜像服务;

设置为公共可见,这样所有的项目都可以访问

[root@controller ~]# glance image-create --name "cirros" \
 --file cirros-0.4.0-x86_64-disk.img \
 --disk-format qcow2 --container-format bare \
 --visibility public --progress

对比镜像md5值

[root@controller ~]# ll /var/lib/glance/images/ab550b5b-8220-4c2e-936e-f4773b763e89
-rw-r----- 1 glance glance 12716032 Oct 24 22:17 /var/lib/glance/images/ab550b5b-8220-4c2e-936e-f4773b763e89
[root@controller ~]#
[root@controller ~]# md5sum /var/lib/glance/images/ab550b5b-8220-4c2e-936e-f4773b763e89
443b7623e27ecf03dc9e01ee93f67afe  /var/lib/glance/images/ab550b5b-8220-4c2e-936e-f4773b763e89
[root@controller ~]#
[root@controller ~]# md5sum /root/cirros-0.4.0-x86_64-disk.img
443b7623e27ecf03dc9e01ee93f67afe  /root/cirros-0.4.0-x86_64-disk.img

验证上传镜像

[root@controller ~]# openstack image list  # glance image-list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| ab550b5b-8220-4c2e-936e-f4773b763e89 | cirros | active |
+--------------------------------------+--------+--------+

如果能看到上面结果,说明glance服务部署成功

官网下载image

https://docs.openstack.org/image-guide/obtain-images.html

2.4 placement服务安装

2.4.1 数据库创库授权

[root@controller ~]# mysql
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
  IDENTIFIED BY 'PLACEMENT_DBPASS';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
  IDENTIFIED BY 'PLACEMENT_DBPASS';
FLUSH PRIVILEGES;
quit

2.4.2 创建placement用户

使用您选择的创建Placement服务用户PLACEMENT_PASS:

[root@controller ~]# openstack user create --domain default --password PLACEMENT_PASS placement

2.4.3 将placement用户加入service项目授予admin角色

[root@controller ~]# openstack role add --project service --user placement admin

2.4.4 创建placement服务实体:

[root@controller ~]# openstack service create --name placement --description "Placement API" placement

2.4.5 创建placement-API服务端点:

openstack endpoint create --region RegionOne placement public http://10.0.0.10:8778
openstack endpoint create --region RegionOne placement internal http://10.0.0.10:8778
openstack endpoint create --region RegionOne placement admin http://10.0.0.10:8778

2.4.6 安装placement

[root@controller ~]# yum install openstack-placement-api -y

2.4.7 配置placement.conf

[root@controller ~]# cp /etc/placement/placement.conf{,.bak} 
[root@controller ~]# grep -Ev '^$|#' /etc/placement/placement.conf.bak >/etc/placement/placement.conf 
[root@controller ~]# vim /etc/placement/placement.conf
[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@10.0.0.10/placement
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://10.0.0.10:5000/v3
memcached_servers = 10.0.0.10:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS

2.4.8 导入数据库placement

[root@controller ~]# su -s /bin/sh -c "placement-manage db sync" placement

验证数据库:
[root@controller ~]# mysql -e "use placement;show tables;"
[root@controller ~]# systemctl restart httpd

2.4.9 验证安装

检查状态,确保一切正常

[root@controller ~]# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results            |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
| Check: Incomplete Consumers      |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
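
可选验证:placement API 监听在 8778 端口,可直接请求其根路径,正常应返回版本信息的 JSON:

[root@controller ~]# curl http://10.0.0.10:8778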

2.5 计算服务nova控制节点安装-10.0.0.10

nova-api:#接受并响应所有的计算服务请求,管理虚拟机(云主机)生命周期

nova-compute(多个):#真正管理虚拟机

nova-scheduler: #nova调度器(挑选出最合适的nova-compute来创建虚机)

nova-conductor:#帮助nova-compute代理修改数据库中虚拟机的状态(计算节点访问数据库中间件)

nova-novncproxy:#web版的vnc来直接操作云主机

2.5.1 数据库创库授权

创建nova、nova_api、nova_cell0三个数据库并授权

[root@controller ~]# mysql
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
FLUSH PRIVILEGES;
quit

2.5.2 在keystone创建系统用户(nova)关联角色

创建用户

openstack user create --domain default \
 --password NOVA_PASS nova

将admin角色添加到nova用户

openstack role add --project service --user nova admin

2.5.3 在keystone上创建服务实体和注册api

创建nova服务实体

openstack service create --name nova \
 --description "OpenStack Compute" compute

创建compute服务api端点

openstack endpoint create --region RegionOne \
  compute public http://10.0.0.10:8774/v2.1
openstack endpoint create --region RegionOne \
  compute internal http://10.0.0.10:8774/v2.1
openstack endpoint create --region RegionOne \
  compute admin http://10.0.0.10:8774/v2.1

2.5.4 安装服务相应软件包

yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-novncproxy openstack-nova-scheduler -y

2.5.5 修改相应服务的配置文件

[root@controller ~]# cp /etc/nova/nova.conf{,.bak}
[root@controller ~]# grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
# 配置允许compute和metadata api
enabled_apis = osapi_compute,metadata

# 配置 rabbitMQ 访问
transport_url = rabbit://openstack:RABBIT_PASS@10.0.0.10:5672/

# 设置管理 IP
my_ip = 10.0.0.10

# 配置 nova 使用 neutron 的网络服务
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

# 配置认证使用 keystone
[api]
auth_strategy = keystone

# 配置 API 数据库
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@10.0.0.10/nova_api

# 配置数据库
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@10.0.0.10/nova

# 配置 glance 镜像
[glance]
api_servers = http://10.0.0.10:9292

# 配置 keystone 认证相关
[keystone_authtoken]
www_authenticate_uri = http://10.0.0.10:5000/
auth_url = http://10.0.0.10:5000/
memcached_servers = 10.0.0.10:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

# 配置 lock_path 路径
[oslo_concurrency]
lock_path = /var/lib/nova/tmp

# 配置 placement 相关
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://10.0.0.10:5000/v3
username = placement
password = PLACEMENT_PASS

# 配置 VNC
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

2.5.6 配置00-placement-api.conf

由于packaging bug的原因,需要手动添加placement API配置到如下路径

/etc/httpd/conf.d/00-placement-api.conf

[root@controller ~]# echo "

#Placement API
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
" >> /etc/httpd/conf.d/00-placement-api.conf

重启httpd服务,启动placement-api监听端口

[root@controller ~]# systemctl restart httpd

2.5.7 同步数据库

导入nova-api数据库

[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova

注册cell0数据库

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

创建cell1单元格

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
d8844378-ef53-4e2a-8544-37cb4516cec5

导入nova库

[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova

忽略输出信息

验证nova cell0和cell1是否正确注册

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova


验证数据库

[root@controller ~]# mysql nova_api -e "show tables;"
[root@controller ~]# mysql nova -e "show tables;"
[root@controller ~]# mysql nova_cell0 -e "show tables;"

2.5.8 启动服务

systemctl enable \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
systemctl start \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service

检测:

[root@controller ~]# openstack compute service list
[root@controller ~]# nova service-list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
+----+----------------+------------+----------+---------+-------+----------------------------+
|  3 | nova-conductor | controller | internal | enabled | up    | 2023-10-25T01:04:04.000000 |
|  5 | nova-scheduler | controller | internal | enabled | up    | 2023-10-25T01:04:08.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+
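
可选检查:也可以使用 nova-status 确认 cells v2、placement 等配置是否正常:

[root@controller ~]# nova-status upgrade check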

2.6 网络服务neutron控制节点安装-10.0.0.10

网络:在实际的物理环境下,使用交换机或者集线器把多个计算机连接起来形成了网络。在neutron的世界里,网络也是将多个不同的云主机连接起来

子网:在实际的物理环境下,在一个网络中。我们可以将网络划分成为逻辑子网。在neutron的世界里,子网也隶属于网络下。

端口:在实际的物理环境下,每个子网或者每个网络,都有很多的端口,比如交换机端口来供计算机连接,在neutron的世界里端口也隶属于子网下,云主机的网卡会对应到一个端口上。

路由器:在实际的网络环境下,不同网络或者不同逻辑子网之间如果需要进行通信,需要通过路由器进行路由。在neutron的世界里路由也是这个作用。用来连接不同的网络或者子网。

neutron-server 端口(9696) api:接受和响应外部的网络管理请求

neutron-openvswitch-agent / neutron-linuxbridge-agent: 负责在节点上创建网桥并接入虚拟机端口(本文使用openvswitch agent)

neutron-dhcp-agent: 负责分配IP

neutron-metadata-agent: 配合nova-metadata-api实现虚拟机的定制化操作

neutron-l3-agent: 实现三层路由、浮动IP等功能(网络层,配合vxlan等隧道网络使用)

2.6.1 数据库创库授权

[root@controller ~]# mysql
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
FLUSH PRIVILEGES;
quit

2.6.2 创建nova用户及添加角色

openstack user create --domain default --password NEUTRON_PASS neutron
openstack role add --project service --user neutron admin

2.6.3 在keystone上创建服务和注册api

创建服务实体

openstack service create --name neutron \
 --description "OpenStack Networking" network

创建网络服务api端点

openstack endpoint create --region RegionOne \
  network public http://10.0.0.10:9696
openstack endpoint create --region RegionOne \
  network internal http://10.0.0.10:9696
openstack endpoint create --region RegionOne \
  network admin http://10.0.0.10:9696

2.6.4 配置内核转发参数

[root@controller ~]# cat >> /etc/sysctl.conf << EOF
# 用于控制系统是否开启对数据包源地址的校验,关闭
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
# 开启二层转发设备
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
EOF

加载内核模块:作用:桥接流量转发到iptables链

[root@controller ~]# modprobe br_netfilter
[root@controller ~]# modinfo br_netfilter

生效内核配置

[root@controller ~]# sysctl -p
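
可选:modprobe 加载的模块重启后会失效,如需开机自动加载 br_netfilter,可按如下方式配置(计算节点同理):

[root@controller ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf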

2.6.5 安装服务相应软件包

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch openvswitch ebtables -y

2.6.6 修改相应服务的配置文件

2.6.6.1 /etc/neutron/neutron.conf

[root@controller ~]# cp /etc/neutron/neutron.conf{,.bak}
[root@controller ~]# grep '^[a-Z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
[root@controller ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:RABBIT_PASS@10.0.0.10
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@10.0.0.10/neutron
[keystone_authtoken]
www_authenticate_uri = http://10.0.0.10:5000
auth_url = http://10.0.0.10:5000
memcached_servers = 10.0.0.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[nova]
auth_url = http://10.0.0.10:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS

2.6.6.2 /etc/neutron/plugins/ml2/ml2_conf.ini

[root@controller ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
[root@controller ~]# grep '^[a-Z\[]' /etc/neutron/plugins/ml2/ml2_conf.ini.bak >/etc/neutron/plugins/ml2/ml2_conf.ini
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]
[ml2]
type_drivers = flat,vlan,vxlan,gre
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
# mechanism_drivers机制驱动,常用linuxbridge或openvswitch,本文使用openvswitch
# l2population 当用的网络类型为vxlan或者gre的时候就需要打开,此配置是阻隔网络广播风暴的。目的是减少vxlan内隧道里的arp泛洪报文
extension_drivers = port_security
[ml2_type_flat]
flat_networks = physnet1
[ml2_type_vxlan]
vni_ranges = 1:1000
[ml2_type_vlan]
network_vlan_ranges = default:3001:4000
[securitygroup]
enable_ipset = true
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

2.6.6.3 /etc/neutron/plugins/ml2/openvswitch_agent.ini

注意修改网卡的名称和ip地址

[root@controller ~]# cp /etc/neutron/plugins/ml2/openvswitch_agent.ini{,.bak}
[root@controller ~]# grep '^[a-Z\[]' /etc/neutron/plugins/ml2/openvswitch_agent.ini.bak >/etc/neutron/plugins/ml2/openvswitch_agent.ini
[root@controller ~]# vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[DEFAULT]
[ovs]
tunnel_bridge = br-tun #隧道桥的名字

local_ip = 10.0.0.10
integration_bridge = br-int #网桥
tenant_network_type = vxlan #租户网络类型
tunnel_type = vxlan #隧道类型
tunnel_id_ranges = 1:1000 #vlan-id范围
enable_tunneling = true
bridge_mappings = physnet1:br-eth1 #外网网桥
prevent_arp_spoofing = true

[agent]
tunnel_types = vxlan
l2_population = true
prevent_arp_spoofing = True

2.6.6.4 /etc/neutron/l3_agent.ini 三层代理,路由功能

[root@controller ~]# cp /etc/neutron/l3_agent.ini{,.bak}
[root@controller ~]# grep '^[a-Z\[]' /etc/neutron/l3_agent.ini.bak >/etc/neutron/l3_agent.ini
[root@controller ~]# vim /etc/neutron/l3_agent.ini
[DEFAULT]
verbose = true
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver #网口驱动
external_network_bridge = br-eth1 #外部网桥名称
[agent]
[network_log]
[ovs]

2.6.6.5 /etc/neutron/dhcp_agent.ini 分配ip的服务

[root@controller ~]# cp /etc/neutron/dhcp_agent.ini{,.bak}
[root@controller ~]# grep '^[a-Z\[]' /etc/neutron/dhcp_agent.ini.bak >/etc/neutron/dhcp_agent.ini
[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver #openvswitch#网口驱动
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq #分配ip的驱动,dnsmasq是一个进程
enable_isolated_metadata = true #开启网络元数据进行记录

2.6.6.6 /etc/neutron/metadata_agent.ini

[root@controller ~]# cp /etc/neutron/metadata_agent.ini{,.bak}
[root@controller ~]# grep '^[a-Z\[]' /etc/neutron/metadata_agent.ini.bak >/etc/neutron/metadata_agent.ini
[root@controller ~]# vim /etc/neutron/metadata_agent.ini 
[DEFAULT]
nova_metadata_host = 10.0.0.10 #nova的ip地址
metadata_proxy_shared_secret = METADATA_SECRET #neutron和nova对接使用的口令、密码

2.6.6.7 再次修改/etc/nova/nova.conf

在 [neutron] 部分,配置访问参数,启用元数据代理并设置密码

配置计算服务能够正常使用网络服务

[root@controller ~]# vim /etc/nova/nova.conf
[neutron]
url = http://10.0.0.10:9696
auth_url = http://10.0.0.10:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET

2.6.7 同步数据库

网络服务初始化脚本需要/etc/neutron/plugin.ini指向ML2插件配置文件的符号链接 /etc/neutron/plugins/ml2/ml2_conf.ini。如果此符号链接不存在,请使用以下命令创建它

[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

同步数据库

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

查看neutron库是否有表

[root@controller ~]# mysql neutron -e "show tables;"

2.6.8 启动服务

启动网络服务,并将其配置为在系统引导时启动。

对于两个网络选项:

systemctl enable neutron-server.service \
 neutron-openvswitch-agent neutron-dhcp-agent.service \
 neutron-metadata-agent.service openvswitch.service

systemctl restart neutron-server.service \
 neutron-openvswitch-agent neutron-dhcp-agent.service \
 neutron-metadata-agent.service openvswitch.service

对于vxlan网络。还需要启用l3-agent服务

systemctl enable neutron-l3-agent.service
systemctl restart neutron-l3-agent.service

验证

[root@controller ~]# neutron agent-list

2.6.9 创建网桥

重新启动nova-api服务

[root@controller ~]# systemctl restart openstack-nova-api.service

[root@controller ~]# ovs-vsctl show

新建一个外部网络桥接

[root@controller ~]# ovs-vsctl add-br br-eth1

将外部网络桥接映射到网卡,这里绑定第二张网卡,属于业务网卡

[root@controller ~]# ovs-vsctl add-port br-eth1 eth1

[root@controller ~]# ovs-vsctl show

扩展

为防止出现缺少lib库的报错,可提前安装:

yum install -y libibverbs

如需删除隧道网络port(示例中的vxlan端口名以实际环境为准):

ovs-vsctl del-port br-tun vxlan-ac10001f

2.7 安装horizon web界面

horizon 为 OpenStack 提供一个基于 Web 的操作界面。

使用Django框架基于openstack api开发

支持session存储在db、memcached

支持集群

[root@controller ~]# yum install openstack-dashboard -y

备份原有配置文件后进行修改

[root@controller ~]# cp /etc/openstack-dashboard/local_settings{,.bak}
[root@controller ~]# vim /etc/openstack-dashboard/local_settings

WEBROOT = '/dashboard/'

# 配置仪表板以在控制器节点上使用OpenStack服务
OPENSTACK_HOST = "10.0.0.10"

# 在Dashboard configuration部分中,允许主机访问Dashboard
ALLOWED_HOSTS = ['*']

# 配置memcached会话存储服务
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '10.0.0.10:11211',
    },
}

# 启用Identity API版本3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

# 启用对域的支持(可以不配置,这行就写登录填写域的,default)
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

# 配置API版本
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
#    "compute": 2,
}

# 将Default配置为通过仪表板创建的用户的默认域(可以不配置,这行就写登录填写域的,default)
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

# 将用户配置为通过仪表板创建的用户的默认角色
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# 启用卷备份
OPENSTACK_CINDER_FEATURES = {
    'enable_backup': True,
}

# 打开路由、配额
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': True,
    'enable_quotas': True,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    #'enable_lb': False,
    #'enable_firewall': False,
    #'enable_vpn': False,
    #'enable_fip_topology_check': True,
}

# 配置时区
TIME_ZONE = "Asia/Shanghai"

修改openstack-dashboard文件

[root@controller ~]# vim /etc/httpd/conf.d/openstack-dashboard.conf
# 在第三行WSGISocketPrefix run/wsgi下面加一行代码: 
WSGIApplicationGroup %{GLOBAL}
[root@controller ~]# systemctl restart httpd.service memcached.service

启动比较慢。每次启动都会先Deleting再copy,可以查看/var/log/messages日志文件

无法访问的状态下进行下面排错

查看日志进行排错

[root@controller ~]# vim /var/log/httpd/error_log 

访问http://10.0.0.10/dashboard

2.8 计算服务nova计算节点安装-10.0.0.11

下面操作在计算节点操作,一定要注意在计算节点安装。

VMware一定要开启Intel VT(虚拟化引擎)支持

nova-compute一般运行在计算节点,通过messages queue接收并管理VM的生命周期

nova-compute调用libvirtd来管理虚拟机

2.8.1 检查计算节点是否支持虚拟化

确定您的计算节点是否支持虚拟机的硬件加速

egrep -c '(vmx|svm)' /proc/cpuinfo
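
参考判断:如果上述命令返回 0,说明不支持硬件加速,后面 2.8.3 中 nova.conf 的 [libvirt] 段需要设置 virt_type = qemu;返回大于 0 则可使用默认的 kvm。示例判断命令如下(仅供参考):

[root@compute01 ~]# if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then echo "virt_type = qemu"; else echo "virt_type = kvm"; fi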

2.8.2 安装nova-compute

[root@compute01 ~]# yum install openstack-nova-compute -y

2.8.3 修改配置文件

[root@compute01 ~]# cp /etc/nova/nova.conf{,.bak}
[root@compute01 ~]# grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
[root@compute01 ~]# vim /etc/nova/nova.conf
[DEFAULT]
# 只允许 compute 和 metadata API
enabled_apis = osapi_compute,metadata

# 配置 rabbitMQ
transport_url = rabbit://openstack:RABBIT_PASS@10.0.0.10

# 配置管理 IP
my_ip = 10.0.0.11

# 配置使用网络功能
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

# 配置 API 的认证使用 keystone
[api]
auth_strategy = keystone

# 配置镜像服务
[glance]
api_servers = http://10.0.0.10:9292

# keystone 相关配置
[keystone_authtoken]
www_authenticate_uri = http://10.0.0.10:5000/
auth_url = http://10.0.0.10:5000/
memcached_servers = 10.0.0.10:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

# 在缺少硬件虚拟化支持的环境(如嵌套虚拟机)中使用qemu;物理机支持硬件加速时不要配置qemu(使用默认的kvm),否则会导致虚拟机卡顿
[libvirt]
virt_type = qemu

# 配置锁文件路径
[oslo_concurrency]
lock_path = /var/lib/nova/tmp

# placement 相关配置
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://10.0.0.10:5000/v3
username = placement
password = PLACEMENT_PASS

# VNC 配置
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://10.0.0.10:6080/vnc_auto.html

2.8.4 开机启动 libvirtd 及 nova-compute 服务

[root@compute01 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
[root@compute01 ~]# systemctl start libvirtd.service openstack-nova-compute.service

2.8.5 检查compute计算节点

向cell数据库添加计算节点

在任意控制节点操作

获取管理员凭据以启用仅管理员CLI命令,然后确认数据库中是否存在计算主机:

[root@controller ~]# source admin-openrc
[root@controller ~]# openstack compute service list --service nova-compute
+----+--------------+-----------+------+---------+-------+----------------------------+
| ID | Binary       | Host      | Zone | Status  | State | Updated At                 |
+----+--------------+-----------+------+---------+-------+----------------------------+
|  7 | nova-compute | compute01 | nova | enabled | up    | 2023-10-25T02:55:10.000000 |
+----+--------------+-----------+------+---------+-------+----------------------------+

2.8.6 发现计算节点

2.8.6.1 手动发现计算节点

nova-compute服务注册成功后,此时执行nova hypervisor-list仍看不到对应的计算节点,这是因为计算节点还没有被发现并添加到cell数据库里面

[root@controller ~]# nova hypervisor-list
+----+---------------------+-------+--------+
| ID | Hypervisor hostname | State | Status |
+----+---------------------+-------+--------+
+----+---------------------+-------+--------+

手工发现计算节点主机,即添加到cell数据库

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 25b06a73-93b8-48a1-b866-92b18e8d4ab7
Checking host mapping for compute host 'compute01': 4bc11718-ddc1-4776-a5a9-1e84c02a63a7
Creating host mapping for compute host 'compute01': 4bc11718-ddc1-4776-a5a9-1e84c02a63a7
Found 1 unmapped computes in cell: 25b06a73-93b8-48a1-b866-92b18e8d4ab7

2.8.6.2 自动发现计算节点

在控制节点操作;

为避免新加入计算节点时,手动执行注册操作"nova-manage cell_v2 discover_hosts",可设置控制节点定时自动发现主机;

涉及控制节点nova.conf文件的[scheduler]标签;

如下设置自动发现时间为5min,可根据实际环境调节

添加新计算节点时,必须在控制节点上执行"nova-manage cell_v2 discover_hosts"来注册这些新计算节点;或者,可以在/etc/nova/nova.conf的[scheduler]段设置适当的自动发现间隔来代替手动执行该命令。

[root@controller ~]# vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300

[root@controller ~]# systemctl restart openstack-nova-conductor openstack-nova-api openstack-nova-novncproxy openstack-nova-scheduler

2.8.7 验证

登陆dashboard,管理员-->计算-->虚拟机管理器

如果已注册成功,在"虚拟机管理器"标签可以看到计算节点,并能展示出各计算节点的资源;如果未注册或注册失败,则"虚拟机管理器"标签下无主机。

或者:

[root@controller ~]# openstack hypervisor list
+----+---------------------+-----------------+-----------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP   | State |
+----+---------------------+-----------------+-----------+-------+
|  1 | compute01           | QEMU            | 10.0.0.11 | up    |
+----+---------------------+-----------------+-----------+-------+

2.9 网络服务neutron计算节点安装-10.0.0.11

计算节点上操作,一定要注意是在计算节点:

[root@compute01 ~]# yum install openvswitch openstack-neutron-openvswitch

2.9.1 /etc/neutron/neutron.conf

[root@compute01 ~]# cp /etc/neutron/neutron.conf{,.bak}
[root@compute01 ~]# grep '^[a-Z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
[root@compute01 ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@10.0.0.10
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://10.0.0.10:5000
auth_url = http://10.0.0.10:5000
memcached_servers = 10.0.0.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

2.9.2 /etc/neutron/plugins/ml2/openvswitch_agent.ini

注意网卡名称和ip地址

[root@compute01 ~]# cp /etc/neutron/plugins/ml2/openvswitch_agent.ini{,.bak}
[root@compute01 ~]# grep '^[a-Z\[]' /etc/neutron/plugins/ml2/openvswitch_agent.ini.bak >/etc/neutron/plugins/ml2/openvswitch_agent.ini
[root@compute01 ~]# vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[DEFAULT]
[ovs]
tunnel_bridge = br-tun
local_ip = 10.0.0.11
integration_bridge = br-int
tenant_network_type = vxlan
tunnel_type = vxlan
tunnel_id_ranges = 1:1000
enable_tunneling = true
bridge_mappings = physnet1:br-eth1
[agent]
tunnel_types = vxlan
l2_population = true
prevent_arp_spoofing = True
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = true

配置内核参数:所有节点配置

通过确认以下所有sysctl值均已正确设置,确保Linux操作系统内核支持网桥过滤器

bridge:是否允许桥接;

如果"sysctl -p"加载不成功,报" No such file or directory"错误,需要加载内核模块"br_netfilter";

命令"modinfo br_netfilter"查看内核模块信息;

命令"modprobe br_netfilter"加载内核模块

[root@compute01 ~]# cat >> /etc/sysctl.conf << EOF
# 用于控制系统是否开启对数据包源地址的校验,关闭
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
# 开启二层转发设备
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
EOF

加载内核模块:作用:桥接流量转发到iptables链

[root@compute01 ~]# modprobe br_netfilter

生效内核配置

[root@compute01 ~]# sysctl -p

2.9.3 /etc/nova/nova.conf

[root@compute01 ~]# vim /etc/nova/nova.conf
[neutron]
auth_url = http://10.0.0.10:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS

重启nova

[root@compute01 ~]# systemctl restart openstack-nova-compute.service

启动openvswitch

[root@compute01 ~]# systemctl restart openvswitch neutron-openvswitch-agent
[root@compute01 ~]# systemctl enable openvswitch neutron-openvswitch-agent

创建网桥

[root@compute01 ~]# ovs-vsctl show

新建一个外部网络桥接

[root@compute01 ~]# ovs-vsctl add-br br-eth1

将外部网络桥接映射到网卡,这里绑定第二张网卡,属于业务网卡

[root@compute01 ~]# ovs-vsctl add-port br-eth1 eth1
[root@compute01 ~]# ovs-vsctl show

到控制节点验证

[root@controller ~]# neutron agent-list


2.10 启动一个实例(运维基础操作)

控制节点操作

加载openstack环境变量

[root@controller ~]# source /root/admin-openrc 

2.10.1 创建路由器

[root@controller ~]# openstack router create Ext-Router

2.10.2 创建Vxlan网络和子网

[root@controller ~]# openstack network create --provider-network-type vxlan Intnal-01

[root@controller ~]# openstack subnet create Intsubnal --network Intnal-01 --subnet-range 192.168.1.0/24 --gateway 192.168.1.254 --dns-nameserver 114.114.114.114

2.10.3 将内部网络关联到路由器

[root@controller ~]# openstack router add subnet Ext-Router Intsubnal

2.10.4 创建Flat网络和子网(模拟外部网络)

[root@controller ~]# openstack network create --provider-physical-network physnet1 --provider-network-type flat  --external Extnal

[root@controller ~]# openstack subnet create Extsubnal --network Extnal --subnet-range 192.168.100.0/24  --allocation-pool start=192.168.100.150,end=192.168.100.200 --gateway 192.168.100.1 --dns-nameserver 114.114.114.114 --no-dhcp

2.10.5 将外部网络关联到路由器

[root@controller ~]# openstack router set Ext-Router --external-gateway Extnal

2.10.6 创建all-rule安全组并设置安全组规则

[root@controller ~]# openstack security group create all-rule

删除创建后的自带规则(uuid需要替换)

[root@controller ~]# openstack security group rule list 7c047c32-e49f-4a94-8a9c-3cbad4b78e0c|sed -n '4,5p'|awk '{print $2}'|xargs openstack security group rule delete

开放安全组icmp、udp、tcp所有协议

openstack security group list |grep all-rule|awk '{print $2}' |xargs openstack security group rule create --protocol icmp --ingress 
openstack security group list |grep all-rule|awk '{print $2}' |xargs openstack security group rule create --protocol icmp --egress 
openstack security group list |grep all-rule|awk '{print $2}' |xargs openstack security group rule create --protocol udp --ingress 
openstack security group list |grep all-rule|awk '{print $2}' |xargs openstack security group rule create --protocol udp --egress 
openstack security group list |grep all-rule|awk '{print $2}' |xargs openstack security group rule create --protocol tcp --ingress 
openstack security group list |grep all-rule|awk '{print $2}' |xargs openstack security group rule create --protocol tcp --egress 

查看安全组规则

[root@controller ~]# openstack security group list|grep all-rule|awk '{print $2}'|xargs openstack security group rule list
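
可选写法:上面反复使用 list|grep|awk 获取安全组 ID,也可以先把 ID 取到变量里再循环创建规则,效果等价(假设安全组名称 all-rule 唯一):

SG_ID=$(openstack security group show all-rule -f value -c id)
for direction in ingress egress; do
  for proto in icmp udp tcp; do
    openstack security group rule create --protocol ${proto} --${direction} ${SG_ID}
  done
done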

2.10.7 上传镜像(之前上传过不用上传了)

[root@controller ~]# openstack image create cirros --disk-format qcow2 --file cirros-0.4.0-x86_64-disk.img

2.10.8 创建云主机类型

[root@controller ~]# openstack flavor create --vcpus 1 --ram 512 --disk 1 1C-512MB-1G

2.10.9 创建云主机

neutron net-list #Intnal-01 uuid
openstack server create --flavor 1C-512MB-1G --image cirros --security-group all-rule --nic net-id=$(vxlan网络id) vm01

当启动实例之后快速查看compute日志,系统会快速下载安装系统,然后系统会自动进行格式转换。qcow2转换成raw。


2.10.10 分配浮动地址并绑定到云主机

openstack floating ip create Extnal

将分配的浮动IP绑定云主机

openstack server add floating ip vm01 $(分配出的地址)
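
可选写法:也可以把新建的浮动 IP 直接保存到变量中再绑定,避免手工复制地址(示例):

FIP=$(openstack floating ip create Extnal -f value -c floating_ip_address)
openstack server add floating ip vm01 ${FIP}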

2.10.11 VNC查看实例

[root@controller ~]# openstack console url show vm01


[root@controller ~]# openstack server list
[root@controller ~]# nova list 
[root@controller ~]# nova get-vnc-console 144ed249-ac05-4529-a093-cc058536f98c novnc


计算节点查看

[root@compute01 ~]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 1     instance-00000001              running

2.10.12 修复vnc控制台

vnc控制台报错Something went wrong, connection is closed 修复


查看/var/log/nova/nova-novncproxy.log日志,出现:

[root@controller ~(keystone_admin)]# tail -f /var/log/nova/nova-novncproxy.log 
code 400, message Client must support 'binary' or 'base64' protocol 

因协议问题;修改/usr/share/novnc/core/websock.js文件,加入'binary' or 'base64'协议。

[root@controller ~(keystone_admin)]# vim /usr/share/novnc/core/websock.js 
    open(uri, protocols) {
        //this.attach(new WebSocket(uri, protocols));
        this.attach(new WebSocket(uri,['binary', 'base64']));
    } 


[root@controller ~]# systemctl restart openstack-nova-novncproxy.service

再次访问测试一下虚拟机控制台;

如果没有成功的话,直接访问http://ip:6080/core/websock.js查看页面里的代码是不是你改过之后的,如果不是说明你改的没生效,清一下浏览器缓存,或者换个浏览器多试试。

查看修改的内容没有出现,清理浏览器缓存;

http://10.0.0.10:6080/core/websock.js
    open(uri, protocols) {
        this.attach(new WebSocket(uri, protocols));
    }

清理缓存后

    open(uri, protocols) {
        //this.attach(new WebSocket(uri, protocols));
        this.attach(new WebSocket(uri,['binary', 'base64']));
    }

2.10.13 ssh测试

[C:\~]$ ssh cirros@192.168.100.185
$ sudo -i
# ip a|grep 192.
    inet 192.168.1.96/24 brd 192.168.1.255 scope global eth0

2.10.14 课外自学

添加密钥对

[root@controller ~]# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

删除密钥对

[root@controller ~]# openstack keypair delete mykey

[root@controller ~]# openstack server create --flavor m1.nano --image cirros \
 --nic net-id=7c39d0a5-a468-4934-be8c-39e4703b2c86,v4-fixed-ip=10.0.0.180 \
 --availability-zone nova:compute01:compute01 vm01

第三章 Openstack扩容与删除计算节点

增加一台服务器

配置ip为 :10.0.0.12

主机名为:compute02:hostnamectl set-hostname compute02

绑定hosts解析

[root@controller ~]# scp /etc/hosts 10.0.0.12:/etc/hosts

3.1 扩容计算节点

确定您的计算节点是否支持虚拟机的硬件加速

egrep -c '(vmx|svm)' /proc/cpuinfo

3.1.1 配置yum源并安装基础工具包

rm -rf /etc/yum.repos.d/*
cat << EOF >> /etc/yum.repos.d/openstack-train.repo 
[openstack]
name=openstack
baseurl=http://10.0.0.10/openstack-train
gpgcheck=0
enabled=1
EOF
yum clean all
yum makecache

基础工具包安装

yum install -y unzip wget lrzsz net-tools vim tree lsof tcpdump telnet screen bash-completion fio stress sysstat tree strace

3.1.2 时间同步

[root@compute02 ~]# vim /etc/chrony.conf 
server 10.0.0.10 iburst
[root@compute02 ~]# systemctl restart chronyd
[root@compute02 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* controller                    3   6    17     0  -1968ns[  +21us] +/-   48ms

3.1.3 安装openstack客户端和openstack-selinux

yum install python-openstackclient openstack-selinux -y

3.1.4 安装配置nova-compute

安装nova-compute

如果安装过程中有依赖报错,可先执行 yum install python2-qpid-proton 再继续;没有报错则无需安装

[root@compute02 ~]# yum install openstack-nova-compute -y

修改nova配置文件

[root@compute02 ~]# cp /etc/nova/nova.conf{,.bak}
[root@compute02 ~]# grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
[root@compute02 ~]# vim /etc/nova/nova.conf
[DEFAULT]
# 只允许 compute 和 metadata API
enabled_apis = osapi_compute,metadata

# 配置 rabbitMQ
transport_url = rabbit://openstack:RABBIT_PASS@10.0.0.10

# 配置管理 IP
my_ip = 10.0.0.12

# 配置使用网络功能
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

# 配置 API 的认证使用 keystone
[api]
auth_strategy = keystone

# 配置镜像服务
[glance]
api_servers = http://10.0.0.10:9292

# keystone 相关配置
[keystone_authtoken]
www_authenticate_uri = http://10.0.0.10:5000/
auth_url = http://10.0.0.10:5000/
memcached_servers = 10.0.0.10:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

# 运行在缺少虚拟化支持的旧硬件上,物理机不需要此配置
[libvirt]
virt_type = qemu

# 配置锁文件路径
[oslo_concurrency]
lock_path = /var/lib/nova/tmp

# placement 相关配置
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://10.0.0.10:5000/v3
username = placement
password = PLACEMENT_PASS

# VNC 配置
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://10.0.0.10:6080/vnc_auto.html

启动

[root@compute02 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
[root@compute02 ~]# systemctl start libvirtd.service openstack-nova-compute.service

在控制节点上查看,应能多出来一个10.0.0.12计算节点(compute02)的nova-compute服务

[root@controller ~]# nova service-list
[root@controller ~]# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
+----+----------------+------------+----------+---------+-------+----------------------------+
|  3 | nova-conductor | controller | internal | enabled | up    | 2023-10-25T13:43:22.000000 |
|  5 | nova-scheduler | controller | internal | enabled | up    | 2023-10-25T13:43:21.000000 |
|  7 | nova-compute   | compute01  | nova     | enabled | up    | 2023-10-25T13:43:15.000000 |
|  8 | nova-compute   | compute02  | nova     | enabled | up    | 2023-10-25T13:43:20.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+

向cell数据库添加计算节点

在控制节点操作

获取管理员凭据以启用仅管理员CLI命令,然后确认数据库中是否存在计算主机:

[root@controller ~]# openstack compute service list --service nova-compute
+----+--------------+-----------+------+---------+-------+----------------------------+
| ID | Binary       | Host      | Zone | Status  | State | Updated At                 |
+----+--------------+-----------+------+---------+-------+----------------------------+
|  7 | nova-compute | compute01 | nova | enabled | up    | 2023-10-25T13:46:35.000000 |
|  8 | nova-compute | compute02 | nova | enabled | up    | 2023-10-25T13:46:30.000000 |
+----+--------------+-----------+------+---------+-------+----------------------------+

由于配置了自动发现计算节点 每隔 5 分钟会自动扫描新增的节点自动添加到cell数据库中

登陆dashboard,管理员-->计算-->虚拟机管理器

如果已注册成功,在"虚拟机管理器"标签下可发现计算节点,并能展示出各计算节点的资源;

如果未注册或注册失败,则"虚拟机管理器"标签下无主机。


[root@controller ~]# nova hypervisor-list
+--------------------------------------+---------------------+-------+---------+
| ID                                   | Hypervisor hostname | State | Status  |
+--------------------------------------+---------------------+-------+---------+
| 4bc11718-ddc1-4776-a5a9-1e84c02a63a7 | compute01           | up    | enabled |
| 48db0f79-2fbb-4d69-96d0-4e22ce4adf48 | compute02           | up    | enabled |
+--------------------------------------+---------------------+-------+---------+

3.1.5 安装配置neutron

计算节点上操作,一定要注意是在计算节点:

[root@compute02 ~]# yum install openvswitch openstack-neutron-openvswitch

3.1.5.1 /etc/neutron/neutron.conf

[root@compute02 ~]# cp /etc/neutron/neutron.conf{,.bak}
[root@compute02 ~]# grep '^[a-Z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
[root@compute02 ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@10.0.0.10
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://10.0.0.10:5000
auth_url = http://10.0.0.10:5000
memcached_servers = 10.0.0.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

3.1.5.2 /etc/neutron/plugins/ml2/openvswitch_agent.ini

注意网卡名称和ip地址

[root@compute02 ~]# cp /etc/neutron/plugins/ml2/openvswitch_agent.ini{,.bak}
[root@compute02 ~]# grep '^[a-Z\[]' /etc/neutron/plugins/ml2/openvswitch_agent.ini.bak >/etc/neutron/plugins/ml2/openvswitch_agent.ini
[root@compute02 ~]# vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[DEFAULT]
[ovs]
tunnel_bridge = br-tun
local_ip = 10.0.0.12
integration_bridge = br-int
tenant_network_type = vxlan
tunnel_type = vxlan
tunnel_id_ranges = 1:1000
enable_tunneling = true
bridge_mappings = physnet1:br-eth1
[agent]
tunnel_types = vxlan
l2_population = true
prevent_arp_spoofing = True
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = true

配置内核参数:所有节点配置

通过确认以下所有sysctl值均已正确设置,确保Linux操作系统内核支持网桥过滤器:

bridge:是否允许桥接;

如果"sysctl -p"加载不成功,报" No such file or directory"错误,需要加载内核模块"br_netfilter";

命令"modinfo br_netfilter"查看内核模块信息;

命令"modprobe br_netfilter"加载内核模块

[root@compute02 ~]# cat >> /etc/sysctl.conf << EOF
# 用于控制系统是否开启对数据包源地址的校验,关闭
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
# 开启二层转发设备
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
EOF

加载内核模块:作用:桥接流量转发到iptables链

[root@compute02 ~]# modprobe br_netfilter

生效内核配置

[root@compute02 ~]# sysctl -p

3.1.5.3 /etc/nova/nova.conf

[root@compute02 ~]# vim /etc/nova/nova.conf
[neutron]
auth_url = http://10.0.0.10:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS

重启nova

[root@compute02 ~]# systemctl restart openstack-nova-compute.service

启动openvswitch

[root@compute02 ~]# systemctl restart openvswitch neutron-openvswitch-agent
[root@compute02 ~]# systemctl enable openvswitch neutron-openvswitch-agent

3.1.5.4 创建网桥

新建一个外部网络桥接

[root@compute02 ~]# ovs-vsctl show
[root@compute02 ~]# ovs-vsctl add-br br-eth1

将外部网络桥接映射到网卡,这里绑定第二张网卡,属于业务网卡

[root@compute02 ~]# ovs-vsctl add-port br-eth1 eth1
[root@compute02 ~]# ovs-vsctl show

到控制节点验证,出现compute02为成功。

[root@controller ~]# neutron agent-list
[root@controller ~]# nova service-list


3.1.6 测试启动实例到compute02节点

[root@controller ~]# neutron net-list

[root@controller ~]# openstack server create --flavor 1C-512MB-1G --image cirros --security-group all-rule --nic net-id=06ecab21-d1d7-42f0-aecc-f94b83793a24 --availability-zone nova:compute02:compute02 vm02

分配浮动地址

openstack floating ip create Extnal

将分配的浮动IP绑定云主机

openstack server add floating ip vm02 $(分配出的地址)

3.2 删除计算节点

3.2.1 删除计算节点的id

查看计算节点,主要是id列

[root@controller ~]# nova service-list


选择要删除的计算节点

删除compute02计算节点

[root@controller ~]# nova service-delete 68d81bbe-f4f9-4e75-af35-2e24337903b5
ERROR (Conflict): Unable to delete compute service that is hosting instances. Migrate or delete the instances first. (HTTP 409) (Request-ID: req-3990748d-1e5f-4a74-91d0-a048de4855cb)
# 报错:需要删除compute02计算节点上的实例

[root@controller ~]# nova delete 617d1901-8931-477b-99b1-0186a00249c4
Request to delete server 617d1901-8931-477b-99b1-0186a00249c4 has been accepted.

再次测试删除

[root@controller ~]# nova service-delete 68d81bbe-f4f9-4e75-af35-2e24337903b5
[root@controller ~]# nova service-list

![img](file:///C:\Users\ADMINI~1\AppData\Local\Temp\ksohtml9212\wps15.jpg)

3.2.2 删除数据库中对应的信息

3.2.2.1 删除nova库services表数据

[root@controller ~]# mysql
MariaDB [(none)]> use nova
MariaDB [nova]> select host from services;
+------------+
| host       |
+------------+
| 0.0.0.0    |
| 0.0.0.0    |
| compute01  |
| compute02  |
| controller |
| controller |
+------------+
6 rows in set (0.000 sec)

删除

MariaDB [nova]> delete from nova.services where host='compute02';
Query OK, 1 row affected (0.021 sec)

再次查看没有compute02节点了

MariaDB [nova]> select host from services;

3.2.2.2 删除nova库compute_nodes表数据

MariaDB [nova]> select host from compute_nodes;
+-----------+
| host      |
+-----------+
| compute01 |
| compute02 |
+-----------+
2 rows in set (0.001 sec)

删除

MariaDB [nova]> delete from nova.compute_nodes where host='compute02';
Query OK, 1 row affected (0.031 sec)

再查看没有compute02节点了

MariaDB [nova]> select host from compute_nodes;
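
Instead of deleting rows by hand, the leftover placement record for compute02 can usually be removed through the Placement API; a hedged sketch assuming the osc-placement client plugin is available (the delete fails if the provider still has allocations, and <provider-uuid> is a placeholder):

[root@controller ~]# source admin-openrc
[root@controller ~]# openstack resource provider list                    # note the uuid of the compute02 provider
[root@controller ~]# openstack resource provider delete <provider-uuid>  # remove the stale record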

3.2.3 删除compute02节点再添加

[root@compute02 ~]# systemctl restart libvirtd.service openstack-nova-compute.service
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

再次添加后compute02节点的/var/log/nova/nova-compute.log可能会报错:ResourceProviderCreationFailed: Failed to create resource provider compute02

解决方法:

检查控制节点数据库:

For the same compute node, the uuid in the two tables below should be identical. If they differ, align them on the uuid from the nova compute_nodes table (more precisely, use whichever uuid the error message reports);

In my case the error reported the uuid from nova's compute_nodes table, so the placement resource_providers table has to be updated to match nova.compute_nodes.

MariaDB [(none)]> select uuid,name from placement.resource_providers;
+--------------------------------------+-----------+
| uuid                                 | name      |
+--------------------------------------+-----------+
| c3a9aec8-9508-4edc-a1e5-cfc59e8ee935 | compute02 |
| 148e67ca-33c7-4c6d-b631-3e280ce10a81 | compute01 |
+--------------------------------------+-----------+
2 rows in set (0.000 sec)

MariaDB [(none)]> select uuid,host from nova.compute_nodes;
+--------------------------------------+-----------+
| uuid                                 | host      |
+--------------------------------------+-----------+
| 4c78068e-a236-472d-8071-817ffbbd04f3 | compute01 |
| ebefd159-5ce6-4416-b5e4-55d2a8646d91 | compute02 |
+--------------------------------------+-----------+
2 rows in set (0.000 sec)

MariaDB [(none)]> UPDATE placement.resource_providers set uuid='4c78068e-a236-472d-8071-817ffbbd04f3'  where name='compute01';
MariaDB [(none)]> UPDATE placement.resource_providers set uuid='ebefd159-5ce6-4416-b5e4-55d2a8646d91'  where name='compute02';

MariaDB [(none)]> select uuid,name from placement.resource_providers;
+--------------------------------------+-----------+
| uuid                                 | name      |
+--------------------------------------+-----------+
| ebefd159-5ce6-4416-b5e4-55d2a8646d91 | compute02 |
| 4c78068e-a236-472d-8071-817ffbbd04f3 | compute01 |
+--------------------------------------+-----------+
2 rows in set (0.000 sec)

MariaDB [(none)]> select uuid,host from nova.compute_nodes;
+--------------------------------------+-----------+
| uuid                                 | host      |
+--------------------------------------+-----------+
| 4c78068e-a236-472d-8071-817ffbbd04f3 | compute01 |
| ebefd159-5ce6-4416-b5e4-55d2a8646d91 | compute02 |
+--------------------------------------+-----------+
2 rows in set (0.000 sec)

第四章 Cinder块存储部署

OpenStack 块存储服务(cinder)为虚拟机添加持久的存储,块存储提供一个基础设施为了管理卷,以及和 OpenStack 计算服务交互,为实例提供卷。此服务也会激活管理卷的快照和卷类型的功能。块存储服务通常包含下列组件:

cinder-api #外部链接

接受 API 请求,并将其路由到 cinder-volume 执行。

cinder-volume #管理卷

与块存储服务和例如 cinder-scheduler 的进程进行直接交互。它也可以与这些进程通过一个消息队列进行交互。 cinder-volume 服务响应送到块存储服务的读写请求来维持状态。它也可以和多种存储提供者在驱动架构下进行交互。

cinder-scheduler #调度这个卷放在哪里

选择最优存储提供节点来创建卷。其与 nova-scheduler 组件类似

cinder-backup #卷的备份

cinder-backup 服务提供任何种类备份卷到一个备份存储提供者。就像cinder-volume服务,它与多种存储提供者在驱动架构下进行交互。

消息队列

在块存储的进程之间路由信息。

4.1 Controller node: create the cinder database and grant privileges

[root@controller ~]# mysql
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
  IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY 'CINDER_DBPASS';
FLUSH PRIVILEGES;
quit

[root@controller ~]# source admin-openrc

4.2 在keystone创建系统用户(cinder)关联角色

创建一个 cinder 用户并添加 admin 角色到 cinder 用户上

openstack user create --domain default --password CINDER_PASS cinder
openstack role add --project service --user cinder admin

4.3 注册cinderv2和cinderv3服务实体

openstack service create --name cinderv2 \
  --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 \
  --description "OpenStack Block Storage" volumev3

4.4 注册Block Storage服务 v2\v3 API端点

openstack endpoint create --region RegionOne \
  volumev2 public http://10.0.0.10:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne \
  volumev2 internal http://10.0.0.10:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne \
  volumev2 admin http://10.0.0.10:8776/v2/%\(project_id\)s

openstack endpoint create --region RegionOne \
  volumev3 public http://10.0.0.10:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne \
  volumev3 internal http://10.0.0.10:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne \
  volumev3 admin http://10.0.0.10:8776/v3/%\(project_id\)s

4.5 安装服务相应软件包

[root@controller ~]# yum install openstack-cinder -y

4.6 修改相应服务的配置文件

[root@controller ~]# cp /etc/cinder/cinder.conf{,.bak}
[root@controller ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak >/etc/cinder/cinder.conf
[root@controller ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@10.0.0.10
auth_strategy = keystone
my_ip = 10.0.0.10
enabled_backends = ceph_hdd
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@10.0.0.10/cinder
[keystone_authtoken]
www_authenticate_uri = http://10.0.0.10:5000
auth_url = http://10.0.0.10:5000
memcached_servers = 10.0.0.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

4.7 同步数据库

[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder

检查数据库表

[root@controller ~]# mysql cinder -e "show tables;"

4.8 nova对接cinder

配置nova配置文件使用块设备存储,编辑文件 /etc/nova/nova.conf 并添加如下到其中:

[root@controller ~]# vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne

4.9 启动服务并查看volume状态

重启compute api服务

systemctl restart openstack-nova-api.service

启动cinder块存储服务

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume

[root@controller ~]# cinder type-list
[root@controller ~]# openstack volume service list
+------------------+---------------------+------+---------+-------+----------------------------+
| Binary           | Host                | Zone | Status  | State | Updated At                 |
+------------------+---------------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller          | nova | enabled | up    | 2023-10-26T02:54:10.000000 |
| cinder-volume    | controller@ceph_hdd | nova | enabled | down  | 2023-10-26T02:49:40.000000 |
+------------------+---------------------+------+---------+-------+----------------------------+

At this point the backend is configured as ceph, but the ceph cluster has not yet been deployed or integrated with cinder-volume, so the cinder-volume service shows "down"; check the cinder-volume log for details. It will come up once ceph is deployed and integrated (chapters 5 and 6).
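
To confirm that the "down" state is only the missing backend and not a configuration mistake, the usual checks are (log path assumes the RDO packaging used in this document):

tail -n 50 /var/log/cinder/volume.log      # rbd driver errors are expected until ceph is integrated
openstack volume service list              # re-run after chapter 6; cinder-volume should then report "up"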

第五章 ceph集群部署

主机名 ip 磁盘 CPU memory
storage01 10.0.0.111 sda:100G ,sdb:50G,sdc:50G,sdd:50G 2C 4G
storage02 10.0.0.112 sda:100G ,sdb:50G,sdc:50G,sdd:50G 2C 4G
storage03 10.0.0.113 sda:100G ,sdb:50G,sdc:50G,sdd:50G 2C 4G
网卡名称 模式 作用 IP
eth0 仅主机模式 作为open stack管理、集群、存储网络 10.0.0.0/24

5.1 修改主机名称、hosts文件、免密

5.1.1 修改主机名

hostnamectl set-hostname storage01
hostnamectl set-hostname storage02
hostnamectl set-hostname storage03

5.1.2 新增hosts解析

[root@controller ~]# vim /etc/hosts
10.0.0.10 controller
10.0.0.11 compute01
10.0.0.12 compute02

# 新增
10.0.0.111 storage01
10.0.0.112 storage02
10.0.0.113 storage03

5.1.3 处理免密脚本

[root@controller ~]# yum install expect -y
[root@controller ~]# vim auto_ssh.sh
#!/usr/bin/expect  
set timeout 10  

#执行该脚本传入进来的三个参数
set username [lindex $argv 0]  
set password [lindex $argv 1]  
set hostname [lindex $argv 2]

#此处为你想要传递给对端机器进行授权的密钥存放位置  
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub $username@$hostname
expect {
            #first connect, no public key in ~/.ssh/known_hosts
            "Are you sure you want to continue connecting (yes/no)?" { #第一次ssh匹配此处逻辑
            send "yes\r"
            expect "password:"
                send "$password\r"
            }
            #already has public key in ~/.ssh/known_hosts
            "password:" { #第二次匹配此处逻辑
                send "$password\r"
            }
            "Now try logging into the machine" {
                #it has authorized, do nothing! #已经授权无密钥登录,则匹配此处逻辑,do nothing
            }
        }
expect eof

[root@controller ~]# chmod 777 auto_ssh.sh 
[root@controller ~]# ./auto_ssh.sh root 123456 10.0.0.10
[root@controller ~]# ./auto_ssh.sh root 123456 10.0.0.11
[root@controller ~]# ./auto_ssh.sh root 123456 10.0.0.12
[root@controller ~]# ./auto_ssh.sh root 123456 10.0.0.111
[root@controller ~]# ./auto_ssh.sh root 123456 10.0.0.112
[root@controller ~]# ./auto_ssh.sh root 123456 10.0.0.113
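
The six invocations above can also be driven from a loop; a small sketch assuming every node uses the same root password:

for host in 10.0.0.10 10.0.0.11 10.0.0.12 10.0.0.111 10.0.0.112 10.0.0.113; do
    ./auto_ssh.sh root 123456 $host
done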

5.1.4 配置免密

修改ssh配置文件

sed -i 's/#   StrictHostKeyChecking ask/StrictHostKeyChecking no/' /etc/ssh/ssh_config
sed -i 's/GSSAPIAuthentication yes/GSSAPIAuthentication no/' /etc/ssh/ssh_config
systemctl restart sshd

配置免密、hosts文件、指纹认证文件

# 所有云平台节点root用户免密
scp /root/.ssh/id_rsa.pub compute01:/root/.ssh/id_rsa.pub
scp /root/.ssh/id_rsa.pub compute02:/root/.ssh/id_rsa.pub
scp /root/.ssh/id_rsa.pub controller:/root/.ssh/id_rsa.pub

scp /root/.ssh/id_rsa compute01:/root/.ssh/id_rsa
scp /root/.ssh/id_rsa compute02:/root/.ssh/id_rsa
scp /root/.ssh/id_rsa controller:/root/.ssh/id_rsa

scp /root/.ssh/id_rsa.pub compute01:/root/.ssh/authorized_keys
scp /root/.ssh/id_rsa.pub compute02:/root/.ssh/authorized_keys
scp /root/.ssh/id_rsa.pub controller:/root/.ssh/authorized_keys

# 存储节点免密
scp /root/.ssh/id_rsa.pub storage01:/root/.ssh/id_rsa.pub
scp /root/.ssh/id_rsa.pub storage02:/root/.ssh/id_rsa.pub
scp /root/.ssh/id_rsa.pub storage03:/root/.ssh/id_rsa.pub

scp /root/.ssh/id_rsa storage01:/root/.ssh/id_rsa
scp /root/.ssh/id_rsa storage02:/root/.ssh/id_rsa
scp /root/.ssh/id_rsa storage03:/root/.ssh/id_rsa

scp /root/.ssh/id_rsa.pub storage01:/root/.ssh/authorized_keys
scp /root/.ssh/id_rsa.pub storage02:/root/.ssh/authorized_keys
scp /root/.ssh/id_rsa.pub storage03:/root/.ssh/authorized_keys

# 拷贝指纹
scp /root/.ssh/known_hosts compute01:/root/.ssh/
scp /root/.ssh/known_hosts compute02:/root/.ssh/
scp /root/.ssh/known_hosts controller:/root/.ssh/
scp /root/.ssh/known_hosts storage01:/root/.ssh/
scp /root/.ssh/known_hosts storage02:/root/.ssh/
scp /root/.ssh/known_hosts storage03:/root/.ssh/

# 拷贝hosts文件
scp /etc/hosts compute01:/etc/
scp /etc/hosts compute02:/etc/
scp /etc/hosts controller:/etc/
scp /etc/hosts storage01:/etc/
scp /etc/hosts storage02:/etc/
scp /etc/hosts storage03:/etc/

5.2 时钟同步、关闭selinux及防火墙

存储节点基础配置

systemctl stop firewalld
systemctl disable firewalld
systemctl stop NetworkManager
systemctl disable NetworkManager

[root@storage01 ~]# vim /etc/chrony.conf 
server 10.0.0.10 iburst
[root@storage01 ~]# systemctl restart chronyd
[root@storage01 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* controller                    3   6    17     0  -1968ns[  +21us] +/-   48ms

5.3 配置yum源并安装基础工具包

rm -rf /etc/yum.repos.d/*
cat << EOF >> /etc/yum.repos.d/openstack-train.repo 
[openstack]
name=openstack
baseurl=http://10.0.0.10/openstack-train
gpgcheck=0
enabled=1
EOF
yum clean all
yum makecache

yum install -y unzip wget lrzsz net-tools vim tree lsof tcpdump telnet screen bash-completion fio stress sysstat tree strace

5.4 安装ceph并生成配置文件

[root@storage01 ~]# yum install ceph -y
[root@storage02 ~]# yum install ceph -y
[root@storage03 ~]# yum install ceph -y

[root@storage01 ~]# uuidgen
fab9e39e-ca46-4868-bee5-1b4f8353f85e

[root@storage01 ~]# vim /etc/ceph/ceph.conf
[global]
fsid = fab9e39e-ca46-4868-bee5-1b4f8353f85e
mon initial members = storage01
mon host = 10.0.0.111,10.0.0.112,10.0.0.113
public network = 10.0.0.0/24
cluster network = 10.0.0.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 32
osd pool default pgp num = 32
osd crush chooseleaf type = 1

[mon]
mon allow pool delete = true

5.5 创建ceph-mon

5.5.1 初始化添加storage01 mon节点

创建mon密钥环

[root@storage01 ~]# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'

创建管理密钥环

[root@storage01 ~]# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'

创建osd引导密钥环,每个存储节点添加

[root@storage01 ~]# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
[root@storage02 ~]# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
[root@storage03 ~]# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'

将mon密钥环添加给管理和osd引导密钥环中

[root@storage01 ~]# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
[root@storage01 ~]# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring

修改用户组

[root@storage01 ~]# chown ceph:ceph /tmp/ceph.mon.keyring

生成monmap(monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap)

[root@storage01 ~]# monmaptool --create --add storage01 10.0.0.111 --fsid fab9e39e-ca46-4868-bee5-1b4f8353f85e /tmp/monmap

创建mon路径并授权

The default cluster name is ceph and the current node name is storage01

[root@storage01 ~]# sudo -u ceph mkdir /var/lib/ceph/mon/ceph-storage01

初始化mon节点守护进程

sudo -u ceph ceph-mon [--cluster {cluster-name}] --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

With the default cluster name ceph, initialize using the monmap and the mon keyring, for example:

[root@storage01 ~]# sudo -u ceph ceph-mon --mkfs -i storage01 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

为防止重新被安装创建一个空的done文件

[root@storage01 ~]# sudo -u ceph touch /var/lib/ceph/mon/ceph-storage01/done

Note: replace the node name, node IP address and cluster fsid with the actual values when creating the mon directory; the command format is shown below. If the directory owner is not ceph, change it to ceph.

sudo -u ceph mkdir /var/lib/ceph/mon/{cluster-name}-{hostname}

启动服务

[root@storage01 ~]# systemctl start ceph-mon@storage01
[root@storage01 ~]# systemctl status ceph-mon@storage01
[root@storage01 ~]# systemctl enable ceph-mon@storage01

查看集群健康状态,有如下告警:

[root@storage01 ~]# ceph -s
mon is allowing insecure global_id reclaim
1 monitors have not enabled msgr2


消除告警

[root@storage01 ~]# ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false
[root@storage01 ~]# ceph mon enable-msgr2
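
A quick way to confirm that both warnings are gone:

[root@storage01 ~]# ceph -s              # should report HEALTH_OK
[root@storage01 ~]# ceph health detail   # lists any warning that remains
[root@storage01 ~]# ceph mon dump        # the mon should now expose both v1 (6789) and v2 (3300) addresses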

拷贝文件,将/etc/ceph/下所有文件拷贝至其他节点

[root@storage01 ~]# ll /etc/ceph/
[root@storage01 ~]# scp /etc/ceph/* storage02:/etc/ceph/
[root@storage01 ~]# scp /etc/ceph/* storage03:/etc/ceph/

5.5.2 添加其他storage02 mon节点(可选)

如果在其他节点也要启用mon服务,即添加新的mon节点(mon节点一般为奇数个,中小型集群推荐3个),可按如下步骤:

登录到新节点

[root@storage01 ~]# ssh storage02

创建节点mon目录

[root@storage02 ~]# sudo -u ceph mkdir /var/lib/ceph/mon/ceph-storage02

在临时目录获取监视器密钥环和监视器运行图

[root@storage02 ~]# ceph auth get mon. -o /tmp/ceph.mon.keyring
[root@storage02 ~]# ceph mon getmap -o /tmp/ceph.mon.map

修改监视器密钥环属主和属组为ceph

[root@storage02 ~]# chown ceph.ceph /tmp/ceph.mon.keyring
[root@storage02 ~]# chown ceph.ceph /tmp/ceph.mon.map 

初始化节点mon

[root@storage02 ~]# sudo -u ceph ceph-mon --mkfs -i storage02 --monmap /tmp/ceph.mon.map --keyring /tmp/ceph.mon.keyring

启动服务,查看服务并使能开机启动

[root@storage02 ~]# systemctl start ceph-mon@storage02
[root@storage02 ~]# systemctl status ceph-mon@storage02
[root@storage02 ~]# systemctl enable ceph-mon@storage02

验证

[root@storage02 ~]# ceph -s

为防止重新被安装创建一个空的done文件

[root@storage02 ~]# sudo touch /var/lib/ceph/mon/ceph-storage02/done

5.5.3 添加其他storage03 mon节点(可选)

如果在其他节点也要启用mon服务,即添加新的mon节点(mon节点一般为奇数个,中小型集群推荐3个),可按如下步骤:

登录到新节点

[root@storage01 ~]# ssh storage03

创建节点mon目录

[root@storage03 ~]# sudo -u ceph mkdir /var/lib/ceph/mon/ceph-storage03

在临时目录获取监视器密钥环和监视器运行图

[root@storage03 ~]# ceph auth get mon. -o /tmp/ceph.mon.keyring
[root@storage03 ~]# ceph mon getmap -o /tmp/ceph.mon.map

修改监视器密钥环属主和属组为ceph

[root@storage03 ~]# chown ceph.ceph /tmp/ceph.mon.keyring
[root@storage03 ~]# chown ceph.ceph /tmp/ceph.mon.map

初始化节点mon

[root@storage03 ~]# sudo -u ceph ceph-mon --mkfs -i storage03 --monmap /tmp/ceph.mon.map --keyring /tmp/ceph.mon.keyring

启动服务,查看服务并使能开机启动

[root@storage03 ~]# systemctl start ceph-mon@storage03
[root@storage03 ~]# systemctl status ceph-mon@storage03
[root@storage03 ~]# systemctl enable ceph-mon@storage03

验证

[root@storage03 ~]# ceph -s

为防止重新被安装创建一个空的done文件

[root@storage03 ~]# sudo touch /var/lib/ceph/mon/ceph-storage03/done

5.6 Manager(可部署一个)

Managers: the Ceph manager daemon (ceph-mgr) continuously tracks runtime metrics and the state of the cluster, including storage utilization, performance counters and system load. It also hosts python-based modules that expose cluster information to the outside, including a web dashboard and a REST API. A cluster normally runs two manager daemons in an active/standby pair; mgr can be configured on any node, or on several nodes. Steps are as follows:

本次部署为3个mgr

5.6.1 创建mgr密钥环

[root@storage01 ~]# ceph auth get-or-create mgr.storage01 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[root@storage02 ~]# ceph auth get-or-create mgr.storage02 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[root@storage03 ~]# ceph auth get-or-create mgr.storage03 mon 'allow profile mgr' osd 'allow *' mds 'allow *'

5.6.2 创建mgr节点目录,名称为集群名称+节点名称

[root@storage01 ~]# sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-storage01
[root@storage02 ~]# sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-storage02
[root@storage03 ~]# sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-storage03

5.6.3 Export the mgr keyring (the file name defaults to keyring)

[root@storage01 ~]# ceph auth get mgr.storage01 -o /var/lib/ceph/mgr/ceph-storage01/keyring
[root@storage02 ~]# ceph auth get mgr.storage02 -o /var/lib/ceph/mgr/ceph-storage02/keyring
[root@storage03 ~]# ceph auth get mgr.storage03 -o /var/lib/ceph/mgr/ceph-storage03/keyring

5.6.4 启动服务

systemctl daemon-reload
systemctl start ceph-mgr@storage01
systemctl status ceph-mgr@storage01
systemctl enable ceph-mgr@storage01

systemctl daemon-reload
systemctl start ceph-mgr@storage02
systemctl status ceph-mgr@storage02
systemctl enable ceph-mgr@storage02

systemctl daemon-reload
systemctl start ceph-mgr@storage03
systemctl status ceph-mgr@storage03
systemctl enable ceph-mgr@storage03

5.7 部署ceph osd

5.7.1 生成并拷贝密钥文件

[root@storage01 ~]# ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring
[root@storage01 ~]# scp -r /var/lib/ceph/bootstrap-osd/ceph.keyring storage02:/var/lib/ceph/bootstrap-osd/ceph.keyring
[root@storage01 ~]# scp -r /var/lib/ceph/bootstrap-osd/ceph.keyring storage03:/var/lib/ceph/bootstrap-osd/ceph.keyring

5.7.2 创建日志盘并创建osd

parted -s /dev/sdb mklabel gpt
parted -a optimal /dev/sdb mkpart primary $(( 0 * 33 ))% $(( 0 * 33 + 33 ))%
parted -a optimal /dev/sdb mkpart primary $(( 1 * 33 ))% $(( 1 * 33 + 33 ))%
parted -a optimal /dev/sdb mkpart primary $(( 2 * 33 ))% $(( 2 * 33 + 33 ))%

ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdc --block.db /dev/sdb1
ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdd --block.db /dev/sdb2
ceph-volume --cluster ceph lvm create --bluestore --data /dev/sde --block.db /dev/sdb3
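
The same three create commands have to be run on every storage node; a loop sketch assuming the same layout (DB partitions on sdb, data disks sdc/sdd/sde -- adjust the device names to the disks actually present on your nodes):

i=1
for disk in sdc sdd sde; do
    ceph-volume --cluster ceph lvm create --bluestore --data /dev/$disk --block.db /dev/sdb$i
    i=$((i + 1))
done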

5.7.3 查看osd状态以及启动osd

ceph-volume lvm list

对准备就绪的osd进行激活:

ceph-volume lvm activate {ID} {FSID}

或激活单个节点的所有osd

ceph-volume lvm activate --all

启动osd(OSD编号会从0开始自动增加,可逐个节点创建OSD,OSD守护进程也是用该ID作为标识)

systemctl start ceph-osd@0
systemctl status ceph-osd@0
systemctl enable ceph-osd@0

5.7.4 查看集群以及磁盘状态

[root@storage01 ~]# ceph osd tree
[root@storage01 ~]# ceph -s
[root@storage01 ~]# ceph mon stat
[root@storage01 ~]# ceph auth list

5.8 测试创建块存储

查看pool

[root@storage01 ~]# ceph osd lspools

[root@storage01 ~]# ceph osd pool create rbd_volumes 32

[root@storage01 ~]# rbd create -p rbd_volumes --image rbd-demo.img --size 1G
[root@storage01 ~]# rbd create rbd_volumes/rbd-demo-1.img --size 1G
[root@storage01 ~]# rbd -p rbd_volumes ls
rbd-demo-1.img
rbd-demo.img

[root@storage01 ~]# rbd info -p rbd_volumes --image rbd-demo.img
rbd image 'rbd-demo.img':
	size 1 GiB in 256 objects
	order 22 (4 MiB objects)
	snapshot_count: 0
	id: ac53321230bc
	block_name_prefix: rbd_data.ac53321230bc
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features: 
	flags: 
	create_timestamp: Tue Oct 31 21:51:28 2023
	access_timestamp: Tue Oct 31 21:51:28 2023
	modify_timestamp: Tue Oct 31 21:51:28 2023

删除

[root@storage01 ~]# rbd rm -p rbd_volumes --image rbd-demo-1.img
Removing image: 100% complete...done.
[root@storage01 ~]# rbd -p rbd_volumes ls
rbd-demo.img
[root@storage01 ~]# rbd rm -p rbd_volumes --image rbd-demo.img

删除测试pool

[root@storage01 ~]# ceph osd pool rm rbd_volumes rbd_volumes --yes-i-really-really-mean-it

第六章 OpenStack对接Ceph平台

Note: delete the existing instances and images before integrating with ceph; otherwise, after the integration the old resources will point at a backend that no longer exists and cannot be cleaned up.

6.1 创建后端需要的存储池

Run on the storage01 node: create the volumes, images, backups and vms storage pools

ceph osd pool create volumes 32
ceph osd pool create images 32
ceph osd pool create backups 32
ceph osd pool create vms 32

6.2 创建后端用户

6.2.1 创建密钥

Run on the storage01 node

Change into the ceph directory and create keys on ceph for the cinder, glance and cinder-backup users, allowing them to access the corresponding storage pools (nova reuses the client.cinder key)

cd /etc/ceph/

6.2.2 创建用户client.cinder

对volumes存储池有rwx权限,对vms存储池有rwx权限,对images池有rx权限

[root@storage01 ceph]# ceph auth get-or-create client.cinder mon "allow r" osd "allow class-read object_prefix rbd_children,allow rwx pool=volumes,allow rwx pool=vms,allow rx pool=images"

class-read:x的子集,授予用户调用类读取方法的能力

object_prefix 通过对象名称前缀。下例将访问限制为任何池中名称仅以 rbd_children 为开头的对象

6.2.3 创建用户client.glance

对images存储池有rwx权限

[root@storage01 ceph]# ceph auth get-or-create client.glance mon "allow r" osd "allow class-read object_prefix rbd_children,allow rwx pool=images"

6.2.4 创建用户client.cinder-backup

对backups存储池有rwx权限

[root@storage01 ceph]# ceph auth get-or-create client.cinder-backup mon "profile rbd" osd "profile rbd pool=backups"

使用 rbd profile 为新的 cinder-backup 用户帐户定义访问权限。然后,客户端应用使用这一帐户基于块来访问利用了 RADOS 块设备的 Ceph 存储。

6.3 安装ceph客户端并创建目录

主要作用是OpenStack可调用Ceph资源

Run on the controller, compute01 and compute02 nodes

[root@controller ~]# yum install -y ceph-common
[root@compute01 ~]# yum install -y ceph-common
[root@compute02 ~]# yum install -y ceph-common

控制节点和计算节点创建存放ceph密钥的目录

mkdir /etc/ceph/

6.4 导出密钥环

On the storage01 node, export the glance, cinder and cinder-backup keyrings

ceph auth get client.glance -o ceph.client.glance.keyring
ceph auth get client.cinder -o ceph.client.cinder.keyring
ceph auth get client.cinder-backup -o ceph.client.cinder-backup.keyring

6.5 拷贝密钥环

Run on the storage01 node

6.5.1 控制节点准备

拷贝glance密钥

scp ceph.client.glance.keyring root@controller:/etc/ceph/

拷贝cinder密钥

scp ceph.client.cinder.keyring root@controller:/etc/ceph/

拷贝cinder-backup密钥

scp ceph.client.cinder-backup.keyring root@controller:/etc/ceph/

拷贝ceph集群认证配置文件

scp ceph.conf root@controller:/etc/ceph/

6.5.2 计算节点准备

拷贝cinder密钥

scp ceph.client.cinder.keyring root@compute01:/etc/ceph/
scp ceph.client.cinder.keyring root@compute02:/etc/ceph/

拷贝cinder-backup密钥(backup服务节点)

scp ceph.client.cinder-backup.keyring root@compute01:/etc/ceph/
scp ceph.client.cinder-backup.keyring root@compute02:/etc/ceph/

拷贝ceph集群认证配置文件

scp ceph.conf root@compute01:/etc/ceph/
scp ceph.conf root@compute02:/etc/ceph/

6.6 计算节点添加libvirt密钥

6.6.1 compute01添加密钥

生成密钥(PS:注意,如果有多个计算节点,它们的UUID必须一致)

[root@compute01 ~]# cd /etc/ceph/
[root@compute01 ceph]# uuidgen 
ad92b6a2-6e02-439b-8b73-5e007ffa1010

cat > secret.xml << EOF
<secret ephemeral='no' private='no'>
  <uuid>ad92b6a2-6e02-439b-8b73-5e007ffa1010</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

执行命令写入secret

[root@compute01 ~]# virsh secret-define --file secret.xml
Secret ad92b6a2-6e02-439b-8b73-5e007ffa1010 created

加入key,将key值复制出来

[root@compute01 ~]# cat ceph.client.cinder.keyring
AQAEJfBkIU0dFhAAK3Zk3fcdVisjQzOGIyMDEg==

[root@compute01 ~]# virsh secret-set-value --secret ad92b6a2-6e02-439b-8b73-5e007ffa1010 --base64 $(cat ceph.client.cinder.keyring | grep key | awk -F ' ' '{print $3}')

查看添加后端密钥

virsh secret-list

6.6.2 compute02添加密钥

Define the libvirt secret (PS: do NOT run uuidgen again here -- all compute nodes must use the same UUID as compute01)

[root@compute02 ~]# cd /etc/ceph/

cat > secret.xml << EOF
<secret ephemeral='no' private='no'>
  <uuid>ad92b6a2-6e02-439b-8b73-5e007ffa1010</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

Write the secret into libvirt

[root@compute02 ceph]# virsh secret-define --file secret.xml
Secret ad92b6a2-6e02-439b-8b73-5e007ffa1010 created

Set the key, taken from the keyring file

[root@compute02 ceph]# cat ceph.client.cinder.keyring
AQAEJfBkIU0dFhAAK3Zk3fcdVisjQzOGIyMDEg==

[root@compute02 ceph]# virsh secret-set-value --secret ad92b6a2-6e02-439b-8b73-5e007ffa1010 --base64 $(cat ceph.client.cinder.keyring | grep key | awk -F ' ' '{print $3}')

查看添加后端密钥

virsh secret-list
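
An optional cross-check that the libvirt secret on each compute node really matches the cinder key (the uuid below is the one used throughout this chapter):

virsh secret-get-value --secret ad92b6a2-6e02-439b-8b73-5e007ffa1010      # must print the same key as the keyring
ceph -n client.cinder --keyring /etc/ceph/ceph.client.cinder.keyring -s   # proves the key can reach the cluster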

6.7 配置glance集成ceph存储

controller节点

Change the owner of the glance keyring

[root@controller ~]# chown glance.glance /etc/ceph/ceph.client.glance.keyring

修改配置文件

vim /etc/glance/glance-api.conf
[glance_store]
#stores = file,http
#default_store = file
#filesystem_store_datadir = /var/lib/glance/images/
stores = rbd,file,http
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

重启生效ceph配置

[root@controller ~]# systemctl restart openstack-glance-api.service 

上传镜像

[root@controller ~]# glance image-create --name "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility=public --progress
[root@controller ~]# glance image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| 7b9b1783-c1e8-42a7-b60d-bf627ab77742 | cirros |
+--------------------------------------+--------+

到storage01节点验证镜像

[root@storage01 ~]# rbd -p images ls
7b9b1783-c1e8-42a7-b60d-bf627ab77742

6.8 配置cinder集成ceph存储

Change the owner of the cinder keyring (controller node)

[root@controller ~]# chown cinder.cinder /etc/ceph/ceph.client.cinder.keyring

修改配置文件(controller节点)

[root@controller ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
enabled_backends = ceph_hdd #ceph_hdd对应下面对接的标签[ceph_hdd]

[ceph_hdd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = ad92b6a2-6e02-439b-8b73-5e007ffa1010
volume_backend_name = ceph_hdd #对应cinder type-list类型

重启服务生效配置

systemctl restart openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume

创建卷类型(controller节点)

openstack volume type create ceph_hdd

设置卷类型元数据(controller节点)

cinder type-key ceph_hdd set volume_backend_name=ceph_hdd

查看存储类型(controller节点)

openstack volume type list

创建卷测试(controller节点)

openstack volume create ceph-hdd-disk01 --type ceph_hdd --size 1

查看所有卷

[root@controller ~]# cinder list
[root@controller ~]# openstack volume list
+--------------------------------------+-----------------+-----------+------+-------------+
| ID                                   | Name            | Status    | Size | Attached to |
+--------------------------------------+-----------------+-----------+------+-------------+
| d038edcc-b505-45fc-af97-dcf3e3385433 | ceph-hdd-disk01 | available |    1 |             |
+--------------------------------------+-----------------+-----------+------+-------------+

查看volumes存储池是否存在卷(ceph节点)

[root@storage01 ~]# rbd -p volumes ls
volume-d038edcc-b505-45fc-af97-dcf3e3385433

6.9 配置卷备份集成ceph存储

controller节点

修改配置文件

[root@controller ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 4194304
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true

重启生效配置

[root@controller ~]# systemctl restart openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume openstack-cinder-backup

创建卷备份(controller节点)

[root@controller ~]# openstack volume backup create --name ceph-hdd-disk01_backup ceph-hdd-disk01
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | 43784c72-a247-4f00-a842-97880939554c |
| name  | ceph-hdd-disk01_backup               |
+-------+--------------------------------------+

验证卷备份(storage01节点)

[root@storage01 ~]# rbd -p backups ls
volume-d038edcc-b505-45fc-af97-dcf3e3385433.backup.43784c72-a247-4f00-a842-97880939554c

6.10 配置nova-compute集成ceph存储

compute01、compute02节点(所有计算节点操作)

修改配置文件

vim /etc/nova/nova.conf
[DEFAULT]
force_raw_images = true

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = ad92b6a2-6e02-439b-8b73-5e007ffa1010

重启nova服务生效配置

systemctl restart openstack-nova-compute

6.11 测试启动实例

6.11.1 创建实例

命令创建实例测试(controller节点)

[root@controller ~]# neutron net-list
[root@controller ~]# openstack server create --flavor 1C-512MB-1G --image cirros --security-group all-rule --nic net-id=$(vxlan网络id) --availability-zone nova:compute01:compute01 vm01

nova list --all

[root@controller ~]# openstack server create --flavor 1C-512MB-1G --image cirros --security-group all-rule --nic net-id=$(vxlan网络id) --availability-zone nova:compute02:compute02 vm02

在web界面创建虚拟机

项目------>计算------>实例------>创建实例

Fill in the instance name and the image source, and set "Create New Volume" to No (choosing No makes nova place the disk in the ceph vms pool configured above instead of on a local disk)

实例类型选择

网络选择

创建完成之后查看实例的UUID,可以通过页面的 项目------>计算------>实例------>选择实例名称------>概况------>ID

或者通过

[root@controller ~]# nova list --all|grep vm01
[root@controller ~]# openstack server list
+--------------------------------------+------+--------+-----------------------+--------+-------------+
| ID                                   | Name | Status | Networks              | Image  | Flavor      |
+--------------------------------------+------+--------+-----------------------+--------+-------------+
| b202b3b5-e347-47c4-9010-04153e068dbd | vm01 | ACTIVE | Intnal-01=192.168.1.6 | cirros | 1C-512MB-1G |
+--------------------------------------+------+--------+-----------------------+--------+-------------+

在ceph虚拟机池中列出镜像,能看到镜像存储在ceph中

[root@storage01 ~]# rbd -p vms ls
b202b3b5-e347-47c4-9010-04153e068dbd_disk

链接浮动IP

[root@controller ~]# openstack floating ip list
[root@controller ~]# openstack server add floating ip vm01 $(分配出的地址)

6.11.2 连接ceph卷测试存储

项目------>卷------>选择一个卷的下拉菜单------>管理连接------>选择连接到实例

连接完成就可以在卷的页面看到了

格式化挂载

[root@controller ~]# ssh cirros@192.168.100.199
cirros@192.168.100.199's password: gocubsgo
$ sudo su - root
# ls -l /dev/vdb 
brw-------    1 root     root      253,  16 Feb 24 14:04 /dev/vdb
# mkfs.ext4 /dev/vdb #格式化
# mount /dev/vdb /mnt/	#挂载
# df -h /mnt/
Filesystem                Size      Used Available Use% Mounted on
/dev/vdb                  9.7G     22.5M      9.2G   0% /mnt

写入文件测试

# cd /mnt/
# mkdir 1 2 3 4 5
# touch a.txt
# echo 1234567890 >a.txt

卸载

# umount /mnt/

还可以把卷分离连接到其他的设备上。

Always detach the volume first: Edit Volume ---> Manage Attachments ---> Detach Volume

6.12 消除ceph告警

[root@storage01 ~]# ceph osd pool application enable volumes rbd
[root@storage01 ~]# ceph osd pool application enable images rbd
[root@storage01 ~]# ceph osd pool application enable backups rbd
[root@storage01 ~]# ceph osd pool application enable vms rbd

第七章 Openstack创建虚机流程

7.1 个人总结

一、nova 内部调用

0. The dashboard or CLI first obtains an auth token from keystone through the RESTful API.

1. The user submits a create-instance request from the web UI or the command line.

2.nova-api接收到创建虚机的请求;写入到数据库。

3.通过消息队列把请求传给nova-scheduler

4.nova-scheduler接到这个任务之后首先查看配置,也就是创建虚机指定的实例类型"1C-1G-10G",nova-scheduler就会选择满足虚机配置的对应计算节点去创建(那么这个时候nova怎么知道哪一个计算节点满足创建虚机的需求?计算节点会每间隔1分钟默认向nova-scheduler汇报计算节点本身还有多少可用的cpu、内存、磁盘/var/log/nova/nova-compute.log),nova-scheduler会自动挑出来一个满足创建虚机的计算节点来进行创建虚机,假设选择了一个compute01的nova-compute

5.nova-scheduler通过消息队列把请求指派给compute01

6. The message queue delivers the request to the nova-compute service on the compute01 node.

7.计算节点nova-compute服务需要知道创建虚机的配置(实例类型),nova-compute服务需要连接数据库才能获取到创建虚机的具体信息

8.nova-compute找nova-conductor通过nova-conductor连接数据库查询这台虚机的信息,最后返回给compute01计算节点进行创建。

二、nova 与其他服务之间的调用

1.在创建虚机的时候会把这个请求带上一个token的keystone认证否则无法进行下面的任何操作

2.nova-compute服务首先调用glance下载镜像

3.nova-compute根据实例类型启动虚机1C-1G-10G

4. nova-compute asks neutron to have the dhcp agent allocate an IP; once the IP is allocated, the L2 agent (openvswitch/linuxbridge) wires up the bridge and port. Calls inside neutron also go through the message queue.

5.nova-compute找cinder-api分配卷,验证token合法,验证合法情况下cinder-api会把任务传给cinder-scheduler,最后cinder-scheduler选择cinder-volume创建卷,这个卷作为云主机的磁盘文件

6.nova-compute启动实例

7.nova-compute找nova-conductor修改数据库处理虚机孵化及活动状态

When the newly created VM is about to boot into the OS, nova-metadata-api and neutron-metadata-agent (consumed by cloud-init) set the hostname and inject the key pair.

创建虚机的日志

/var/lib/nova/instances/xxxxxx-xxxx-xxxx-xxxxxxxxx/console.log
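
When an instance fails to spawn, tailing the logs in the same order as the call chain above usually pinpoints the failing service; paths assume the default RDO packaging used in this document:

tail -f /var/log/nova/nova-api.log               # controller: was the request accepted?
tail -f /var/log/nova/nova-scheduler.log         # controller: was a host selected?
tail -f /var/log/nova/nova-conductor.log         # controller: database access on behalf of compute
tail -f /var/log/nova/nova-compute.log           # compute node: spawn, neutron and cinder errors
tail -f /var/log/neutron/openvswitch-agent.log   # compute node: port binding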

7.2 Excerpt (from Baidu)

![img](file:///C:\Users\ADMINI~1\AppData\Local\Temp\ksohtml9212\wps16.jpg)

1.界面或命令行cli通过RESTful API向keystone获取认证信息。

2.keystone通过用户请求认证信息,并生成auth-token返回给对应的认证请求。

3.界面或命令行通过RESTful API向nova-api发送一个boot instance的请求(携带auth-token)。

4.nova-api接受请求后向keystone发送认证请求,查看token是否为有效用户和token。

5.keystone验证token是否有效,如有效则返回有效的认证和对应的角色(注:有些操作需要有角色权限才能操作)。

6.通过认证后nova-api和数据库通讯。

7.初始化新建虚拟机的数据库记录。

8.nova-api通过rpc.call向nova-scheduler请求是否有创建虚拟机的资源(Host ID)。

9.nova-scheduler进程侦听消息队列,获取nova-api的请求。

10.nova-scheduler通过查询nova数据库中计算资源的情况,并通过调度算法计算符合虚拟机创建需要的主机。

11.对于有符合虚拟机创建的主机,nova-scheduler更新数据库中虚拟机对应的物理主机信息。

12.nova-scheduler通过rpc.cast向nova-compute发送对应的创建虚拟机请求的消息。

13.nova-compute会从对应的消息队列中获取创建虚拟机请求的消息。

14.nova-compute通过rpc.call向nova-conductor请求获取虚拟机消息。(Flavor)

15.nova-conductor从消息队队列中拿到nova-compute请求消息。

16.nova-conductor根据消息查询虚拟机对应的信息。

17.nova-conductor从数据库中获得虚拟机对应信息。

18.nova-conductor把虚拟机信息通过消息的方式发送到消息队列中。

19.nova-compute从对应的消息队列中获取虚拟机信息消息。

20. nova-compute obtains an auth token through keystone's RESTful API, then requests the image needed for the VM from glance-api over HTTP.

21.glance-api向keystone认证token是否有效,并返回验证结果。

22.token验证通过,nova-compute获得虚拟机镜像信息(URL)。

23. nova-compute obtains an auth token through keystone's RESTful API, then requests the network information needed for the VM from neutron-server over HTTP.

24.neutron-server向keystone认证token是否有效,并返回验证结果。

25.token验证通过,nova-compute获得虚拟机网络信息。

26. nova-compute obtains an auth token through keystone's RESTful API, then requests the persistent storage information needed for the VM from cinder-api over HTTP.

27.cinder-api向keystone认证token是否有效,并返回验证结果。

28.token验证通过,nova-compute获得虚拟机持久化存储信息。

29.nova-compute根据instance的信息调用配置的虚拟化驱动来创建虚拟机。

第八章 利用kvm制作centos镜像并上传到glance

8.1 制作镜像

8.1.1 基础设置

利用centos7.6操作系统kvm制作一个qcow2格式的镜像

配置源

tar fx openstack-train.tar.gz
rm -rf /etc/yum.repos.d/*
cat << EOF >> /etc/yum.repos.d/openstack-train.repo 
[openstack]
name=openstack
baseurl=file:///root/openstack-train
gpgcheck=0
enabled=1
EOF
yum clean all
yum makecache

基础工具包安装

yum install -y unzip wget lrzsz net-tools vim tree lsof tcpdump telnet screen bash-completion fio stress sysstat tree strace

安装kvm

[root@localhost ~]# yum install qemu-kvm libvirt virt-install virt-manager

上传iso文件"CentOS-7.6-x86_64-DVD-1810.iso"到/kvm

8.1.2 创建虚拟机

[root@localhost ~]# mkdir /kvm
[root@localhost ~]# virt-install --virt-type kvm --os-type=linux --os-variant rhel7 --name centos --memory 1024 --vcpus 1 --disk /kvm/centos01.raw,format=raw,size=10 --cdrom /kvm/CentOS-7.6-x86_64-DVD-1810.iso --network network=default --graphics vnc,listen=0.0.0.0 --noautoconsole

接下来就是正常安装操作系统(标准分区、北京时区东八区、最小化安装)

8.1.3 定制操作系统

安装完操作系统之后做一些基本设置,比如:关闭selinux、firewalld、NetworkManager。

定制操作系统,openstack利用此镜像创建实例。

ssh登录

首先查登录系统查看下ip地址

[root@localhost ~]# ssh 192.168.122.125
root@192.168.122.125's password: 

关闭selinux、防火墙、NetworkManager

systemctl stop firewalld.service
systemctl disable firewalld.service
sed -i -r 's/^(SELINUX=).*/\1disabled/' /etc/selinux/config
iptables -F
iptables -L
iptables -X
systemctl stop NetworkManager.service
systemctl disable NetworkManager.service

修改网卡配置文件

cat > /etc/sysconfig/network-scripts/ifcfg-eth0 << EOF
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
EOF

修改grub菜单

grubby --update-kernel=ALL --args="console=ttyS0,115200n8" #115200=波特率,n8=8位
sed -i "s/quiet\"$/quiet console=tty0 console=ttyS0,115200n8\"/" /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

修改ssh配置文件

sed -i "s/#UseDNS yes/UseDNS no/g" /etc/ssh/sshd_config
systemctl reload sshd

安装acpid(用户空间的服务进程)

yum install acpid -y
systemctl enable acpid

安装qemu-guest-agent(密码透传工具)

yum install qemu-guest-agent -y
systemctl enable qemu-guest-agent

安装cloud-utils-growpart(实现标准分区虚拟机启动时 /分区自动扩展)

yum install cloud-utils-growpart -y

安装cloud-init(设置虚拟机元数据信息,主机名透传、根分区自动扩展等等)

yum install cloud-init -y
sed -i "/^disable_root/s/1$/0/" /etc/cloud/cloud.cfg
sed -i "/^ssh_pwauth/s/0$/1/" /etc/cloud/cloud.cfg
sed -i "/^syslog_fix_perms/a\\\ndatasource:\n  OpenStack:\n    timeout: 5\n    max_wait: 10\n" /etc/cloud/cloud.cfg
sed -i "/hostname$/d" /etc/cloud/cloud.cfg

删除默认DNS解析

[root@localhost ~]# vi /etc/resolv.conf 
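
Before converting the image it is also common to strip host-specific state so every instance cloned from it boots cleanly; a minimal in-guest cleanup sketch (generic CentOS 7 commands, adjust as needed):

yum clean all
rm -f /etc/udev/rules.d/70-persistent-net.rules   # avoid stale NIC naming rules
rm -f /etc/ssh/ssh_host_*                         # host keys regenerate on first boot
> /etc/machine-id                                 # each clone gets a fresh machine-id
history -c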

8.1.4 Format conversion

Convert the raw image to qcow2

[root@localhost ~]# ll /kvm/centos01.raw 
-rw------- 1 qemu qemu 10737418240 Nov  4 13:25 /kvm/centos01.raw

[root@localhost ~]# qemu-img info /kvm/centos01.raw 
image: /kvm/centos01.raw
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: 1.4G
[root@localhost ~]# 
[root@localhost ~]# qemu-img convert -f raw /kvm/centos01.raw -O qcow2 /kvm/centos01.qcow2
[root@localhost ~]# 
[root@localhost ~]# qemu-img info /kvm/centos01.qcow2 
image: /kvm/centos01.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 1.8G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
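
The converted image can also be uploaded from the command line instead of the dashboard steps in 8.2.1; a sketch run on the controller, assuming the qcow2 file has been copied there, and setting the hw_qemu_guest_agent property up front (8.2.1 sets it afterwards):

openstack image create "CentOS-1810" \
  --file /kvm/centos01.qcow2 \
  --disk-format qcow2 --container-format bare \
  --public --property hw_qemu_guest_agent=yes
openstack image list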

8.2 Upload the centos7.6 image and launch a test instance

上传并管理镜像官网地址

https://docs.openstack.org/zh_CN/user-guide/dashboard-manage-images.html

8.2.1 上传centos7.6镜像

依次点击:项目------>计算------>镜像------>创建镜像

![img](file:///C:\Users\ADMINI~1\AppData\Local\Temp\ksohtml9212\wps17.jpg)

Installing qemu-guest-agent inside the image is not enough for password injection; the image metadata property hw_qemu_guest_agent=yes must also be set:

[root@controller ~]# glance image-list
[root@controller ~]# glance image-show 24230bd8-f1b3-4823-8cef-53a29135ca3d
[root@controller ~]# glance image-update 24230bd8-f1b3-4823-8cef-53a29135ca3d --property hw_qemu_guest_agent=yes
[root@controller ~]# glance image-show 24230bd8-f1b3-4823-8cef-53a29135ca3d

8.2.2 启动centos7.6实例

8.2.2.1 首先创建flavor

查看

[root@controller ~]# openstack flavor list

Create a flavor with 1 vCPU, 1024 MB RAM and a 20 GB disk

[root@controller ~]# openstack flavor create --vcpus 1 --ram 1024 --disk 20 1C-1G-20G

查看

[root@controller ~]# openstack flavor list

8.2.2.2 启动云主机并测试

[root@controller ~]# openstack network list

创建实例到指定计算节点

For the instance source select "CentOS-1810", and for the flavor select "1C-1G-20G".

[root@controller ~]# neutron net-list
[root@controller ~]# openstack server create --flavor 1C-1G-20G --image CentOS-1810 --nic net-id=$uuid --security-group all-rule --availability-zone nova:compute02:compute02 vm02
Parameter reference:
--flavor                flavor (instance type)
--image                 image
--nic net-id=...        network id; a fixed address can be added, e.g. --nic net-id=e4798bba-ad5c-4b1d-a091-7f3655ad1284,v4-fixed-ip=xxx.xxx.xxx.xxx
--availability-zone nova:compute02:compute02    pins the instance to the compute02 node

测试连接

![img](file:///C:\Users\ADMINI~1\AppData\Local\Temp\ksohtml9212\wps18.jpg)

[C:\~]$ ssh root@192.168.100.185
[root@vm02 ~]# ifconfig |grep 192.168
        inet 192.168.1.90  netmask 255.255.255.0  broadcast 192.168.1.255

8.2.2.3 验证云主机功能

检查硬盘是否自动扩容标准分区,进入操作系统df -h查看根分区

密码透传

[root@controller ~]# nova set-password $vm-uuid

第九章 vxlan网络

9.1 vxlan简要概括

A switch supports at most 4096 VLAN IDs (usable range 1-4094, VLAN 1 being the default), and scaling beyond a single switch requires stacking.

That limit makes it impractical to build per-tenant private networks with VLANs alone.

So tenant networks are built with VXLAN instead.

VXLAN (Virtual eXtensible LAN) is a network virtualization technique, an extension of VLAN that addresses the scalability problems of large cloud deployments. It stretches layer-2 segments across a layer-3 network: traffic is encapsulated and carried to a layer-3 gateway, which removes the mobility restrictions of VMs (virtual machines) and lets them reach servers on external IP subnets.

The number of segments it supports is 4096*4096 = 16777216 (a 24-bit VNI).

9.2 修改配置

控制节点修改openstack-dashboard配置

[root@controller ~]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True, # 开启路由

On the compute nodes, add the following under the [DEFAULT] section of nova.conf:

vif_plugging_timeout = 10
vif_plugging_is_fatal = False

如果不配置可能会失败,失败后返回的报错日志

Build of instance b202b3b5-e347-47c4-9010-04153e068dbd aborted: Failed to allocate the network(s), not rescheduling.

问题现象

查看错误的虚拟机的port状态为down状态,nova-compute有报错 提示尝试更新port状态达到最大次数。

问题原因

在虚拟机孵化的过程中,nova会向neutron发送一个vif的请求动作,然后自身开始创建tap、加载网桥,生成流表,以及写入image的动作,neutron server检查到nova完成这些之后,会将该虚拟机port状态从down置为up状态,如果在默认的等待时间内(300s)neutron server没有将这个port的状态置为up,则nova会抛出异常报错。

9.3 创建网络并启动虚拟机测试

创建两个私有网络

创建一个路由器,两个私有网络连接到路由器上,路由器并连接外部网络。

在两个私有网络下面启动虚拟机并相互ping以及ping百度测试;

[C:\~]$ ping 192.168.100.199
[root@controller ~]# ping 192.168.100.199

[root@controller ~]# ssh cirros@192.168.100.199
Warning: Permanently added '192.168.100.199' (ECDSA) to the list of known hosts.
cirros@192.168.100.199's password: 
$ sudo -i
# 
$ ip a 
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:98:70:a4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.6/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe98:70a4/64 scope link 
       valid_lft forever preferred_lft forever

9.4 验证

[root@controller ~]# ip netns 
qdhcp-7283e381-3635-4b9d-9b62-5f55932fff02 (id: 1)
qrouter-074961a7-1e3b-459c-8ba2-a7f8d5a6e483 (id: 0)
[root@controller ~]# 
[root@controller ~]# ip netns exec qrouter-074961a7-1e3b-459c-8ba2-a7f8d5a6e483 /bin/bash

进入到网络命名空间查看qg接口

![img](file:///C:\Users\ADMINI~1\AppData\Local\Temp\ksohtml9212\wps19.jpg)

在命名空间查看iptable规则

[root@controller ~]# iptables -t nat -L -n

Destination NAT (DNAT): traffic sent to 192.168.100.199 is translated to 192.168.1.6

Source NAT (SNAT)

![img](file:///C:\Users\ADMINI~1\AppData\Local\Temp\ksohtml9212\wps20.jpg)

通过上面的规则实现ip地址映射

[root@compute01 ~]# tcpdump icmp

在私有网络虚机里面ping 10.0.0.31

All VMs on the Intnal-01 network (subnet Intsubnal, 192.168.1.0/24) reach the outside through the 192.168.100.197 gateway

![img](file:///C:\Users\ADMINI~1\AppData\Local\Temp\ksohtml9212\wps21.jpg)

私有网络里的虚机要出去上网必须需要外部网关

[root@controller ~]# neutron router-list
[root@controller ~]# neutron router-port-list 074961a7-1e3b-459c-8ba2-a7f8d5a6e483

管理员------>网络------>路由------>接口------>类型------>外部网关------>固定ip和命名空间的qg接口对上ok

![img](file:///C:\Users\ADMINI~1\AppData\Local\Temp\ksohtml9212\wps22.jpg)

9.5 vxlan 网络流程

![img](file:///C:\Users\ADMINI~1\AppData\Local\Temp\ksohtml9212\wps23.jpg)

第十章 主机聚合及项目配额调整

Host aggregate: group physical machines of different models into logical sets. For example, hosts with different CPU instruction sets cannot live-migrate instances between each other, so hosts with the same CPU instruction set are placed in one aggregate;

10.1 创建主机聚合

管理员------>计算------>主机聚合------>创建主机聚合

[root@controller ~]# openstack aggregate create compute-i7
[root@controller ~]# openstack aggregate add host compute-i7 compute01
[root@controller ~]# openstack aggregate add host compute-i7 compute02
[root@controller ~]# openstack aggregate set --zone compute-i7-az compute-i7

[root@controller ~]# openstack aggregate list
+----+------------+-------------------+
| ID | Name       | Availability Zone |
+----+------------+-------------------+
|  4 | compute-i7 | compute-i7-az     |
+----+------------+-------------------+
[root@controller ~]# openstack aggregate show compute-i7
+-------------------+----------------------------+
| Field             | Value                      |
+-------------------+----------------------------+
| availability_zone | compute-i7-az              |
| created_at        | 2023-11-01T10:10:59.000000 |
| deleted           | False                      |
| deleted_at        | None                       |
| hosts             | compute01, compute02       |
| id                | 4                          |
| name              | compute-i7                 |
| properties        |                            |
| updated_at        | None                       |
+-------------------+----------------------------+

删除

[root@controller ~]# openstack aggregate remove host compute-i7 compute01
[root@controller ~]# openstack aggregate remove host compute-i7 compute02
[root@controller ~]# openstack aggregate delete compute-i7

10.2 配额调整

By default a project can launch only 10 instances, use 20 vCPUs and 50 GB of RAM

Project ------> Compute ------> Overview: the default tenant quotas are quite low

修改配额

身份管理------>项目------>找到admin后面的下拉菜单------>修改配额

修改配额后点击确定

最后,再次到概况里面查看修改成功

https://docs.openstack.org/python-openstackclient/pike/cli/command-objects/quota.html

计算配额调整

[root@controller ~]# openstack project list
[root@controller ~]# openstack project list
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| 1deabd48eaca4c44a941a2a6b843e94e | admin     |
| 7a6d7e0185fd42508dac8b23ef03b511 | myproject |
| c6d528ec121845eab0f530070b1ec422 | service   |
+----------------------------------+-----------+
 
[root@controller ~]# nova quota-defaults
+----------------------+-------+
| Quota                | Limit |
+----------------------+-------+
| instances            | 10    |
| cores                | 20    |
| ram                  | 51200 |
| metadata_items       | 128   |
| key_pairs            | 100   |
| server_groups        | 10    |
| server_group_members | 10    |
+----------------------+-------+
[root@controller ~]# nova quota-class-update 1deabd48eaca4c44a941a2a6b843e94e --instances 50
[root@controller ~]# nova quota-class-update 1deabd48eaca4c44a941a2a6b843e94e --cores 200
[root@controller ~]# nova quota-class-update 1deabd48eaca4c44a941a2a6b843e94e --ram 102400
[root@controller ~]# nova quota-defaults 
+----------------------+--------+
| Quota                | Limit  |
+----------------------+--------+
| instances            | 50     |
| cores                | 200    |
| ram                  | 102400 |
| metadata_items       | 128    |
| key_pairs            | 100    |
| server_groups        | 10     |
| server_group_members | 10     |
+----------------------+--------+

卷配额调整

[root@controller ~]# openstack project list
[root@controller ~]# cinder quota-defaults 1deabd48eaca4c44a941a2a6b843e94e
[root@controller ~]# cinder quota-update --volumes 100 1deabd48eaca4c44a941a2a6b843e94e
[root@controller ~]# cinder quota-update --snapshots 100 1deabd48eaca4c44a941a2a6b843e94e

第十一章 测试调整实例大小

As the workload and user base grow, the resources originally allocated to an instance may no longer be enough, and more CPU and memory have to be added.

Resizing an instance means changing its hardware resources (flavor).

Note: a resize restarts the virtual machine; the flavor cannot be changed while the instance keeps running (there is no true online resize).

Next, resize the instance from the dashboard:

项目------>计算------>实例------>找到需要调整实例的虚拟机下拉的菜单------>调整实例大小

选择需要扩容到的配置规格(flavor)------>调整大小

迁移过程页面显示,也可以查看后台计算节点compute日志------>需要点击调整大小迁移

11.1命令行cli迁移

[root@controller ~]# nova list --all
[root@controller ~]# openstack flavor list
[root@controller ~]# openstack flavor create --vcpus 1 --ram 1024 --disk 1 1C-1G-1G

[root@controller ~]# nova help resize
usage: nova resize [--poll] <server> <flavor>

Resize a server.

Positional arguments:
  <server>  Name or ID of server.
  <flavor>  Name or ID of new flavor.

Optional arguments:
  --poll    Report the server resize progress until it completes.

[root@controller ~]# nova resize c68b069f-8527-4a97-9dab-d9aa6515e307 399382d2-8ccc-4f5e-986e-a6174f883f71

![img](file:///C:\Users\ADMINI~1\AppData\Local\Temp\ksohtml9212\wps24.jpg)

在日志中错误提示还是挺明显的,根据错误提示,解决相应的故障就行了。

openstack的虚拟机在线调整大小的原理:

其实就相当于做了一个云主机在不同宿主机(计算节点)之间的迁移,所以前提是至少需要有两个计算节点。

如果是单机部署的openstack(即控制节点和计算节点都在一台机器上),有且只有一个计算节点,那么是无法完成在线调整虚拟机大小的。

同时要注意的是:

要在相关迁移云主机间进行无密码访问,由于OpenStack是由Nova组件来管理云主机,所以需要对Nova用户进行无密码访问。

11.2 修改计算节点的nova.conf文件

The stock nova.conf can be used as a reference; add the following:

[DEFAULT]
allow_resize_to_same_host = true
# note: in Train, enabled_filters normally belongs to the [filter_scheduler] section and is read by nova-scheduler on the controller
enabled_filters=AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

Restart the nova-compute service on the compute nodes

systemctl restart openstack-nova-compute

11.3 nova用户的ssh双向认证关系

Perform the following on the compute node hosting the instance and on the other compute node(s) involved in the migration.

To be clear:

There may be many compute nodes, but only the node hosting the instance to be resized and the node(s) it may move to need this setup; it is not required on every compute node (although configuring all of them also works).

Check on both compute nodes that the nova user exists and has a usable login shell (the packaged default is usually /sbin/nologin; change it with "usermod -s /bin/bash nova", otherwise the resize ssh calls between compute nodes will fail):

[root@compute01 ~]# cat /etc/passwd|grep nova
nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/bin/bash
[root@compute02 ~]# cat /etc/passwd|grep nova
nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/bin/bash

compute节点

mkdir /var/lib/nova/.ssh/

控制节点拷贝密钥

scp /root/.ssh/id_rsa.pub compute01:/var/lib/nova/.ssh/id_rsa.pub
scp /root/.ssh/id_rsa.pub compute02:/var/lib/nova/.ssh/id_rsa.pub

scp /root/.ssh/id_rsa compute01:/var/lib/nova/.ssh/id_rsa
scp /root/.ssh/id_rsa compute02:/var/lib/nova/.ssh/id_rsa

scp /root/.ssh/id_rsa.pub compute01:/var/lib/nova/.ssh/authorized_keys
scp /root/.ssh/id_rsa.pub compute02:/var/lib/nova/.ssh/authorized_keys


chown -R nova.nova /var/lib/nova/.ssh/

If the migration still fails, switch to the nova user on compute01 and compute02 and run the following once to accept the host keys:

ssh compute01
ssh compute02
ssh 10.0.0.11
ssh 10.0.0.12
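
If logging in interactively on every pair of nodes is impractical, the host-key prompt can also be suppressed just for the nova user; a hedged convenience sketch (relax or remove it according to your security policy):

cat > /var/lib/nova/.ssh/config << EOF
Host *
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF
chown nova.nova /var/lib/nova/.ssh/config
chmod 600 /var/lib/nova/.ssh/config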

11.4 再次测试调整大小

[root@controller ~]# nova resize c68b069f-8527-4a97-9dab-d9aa6515e307 399382d2-8ccc-4f5e-986e-a6174f883f71

迁移到dashboard查看------>需要点击调整大小/迁移

11.5 登录虚拟机验证

[root@controller ~]# ssh 192.168.100.199
root@192.168.100.199's password: 
[root@localhost ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1999          69        1838           8          91        1800
Swap:             0           0           0
[root@localhost ~]# cat /proc/cpuinfo |grep processor
processor	: 0
processor	: 1

第十二章 实例迁移

12.1 实例冷迁移

关机状态下进行迁移,开机也可以实现,但是虚拟机会发生重启

管理员------>计算------>实例------>找到需要迁移的虚拟机确认状态、电源状态、所在主机------>下拉菜单------>迁移实例------>确认迁移

还需要点一下 确认\调整大小迁移

迁移完成之后开机

项目------>计算------>实例------>实例名称------>启动实例

[root@controller ~]# openstack server list
[root@controller ~]# openstack server stop 6eb2f571-fc3f-4271-9d3c-b5cf17e26989
[root@controller ~]# nova migrate 6eb2f571-fc3f-4271-9d3c-b5cf17e26989 --host compute01

ssh连接测试

[root@controller ~]# ssh 192.168.100.199
root@192.168.100.199's password: 
[root@vm01 ~]# ifconfig |grep 192

12.2 实例热迁移

为保证业务的连续性而产生的实例热迁移;

热迁移:将两台cpu型号相同的物理机上面运行的虚拟机实现开机状态、业务不下线热迁移;根据网络环境可能会有几秒的掉ping;

管理员------>计算------>实例------>找到需要迁移的虚拟机确认状态、电源状态、所在主机------>下拉菜单------>实例热迁移------>迁移到计算节点

默认迁移失败的,所以需要配置一下。

12.2.1 配置libvirtd文件

compute01、compute02节点

修改 /etc/libvirt/libvirtd.conf 配置监听地址

[root@compute01 ~]# vim /etc/libvirt/libvirtd.conf
[root@compute01 ~]# egrep -vn "^$|^#" /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
tcp_port = "16509"
listen_addr = "10.0.0.11"	# 设置为计算节点的ip地址
auth_tcp = "none"

修改/etc/sysconfig/libvirtd 开启监听

[root@compute01 ~]# vim /etc/sysconfig/libvirtd
LIBVIRTD_ARGS="--listen"	# 取消注释

启动相关计算节点服务

systemctl restart libvirtd.service
systemctl restart openstack-nova-compute.service
systemctl restart neutron-openvswitch-agent

验证

[root@compute01 ~]# netstat -nltp|grep 16509 
tcp        0      0 10.0.0.11:16509         0.0.0.0:*               LISTEN      29684/libvirtd 
[root@compute02 ~]# netstat -nltp|grep 16509 
tcp        0      0 10.0.0.12:16509         0.0.0.0:*               LISTEN      29803/libvirtd

测试连接

[root@compute01 ~]# virsh -c qemu+tcp://compute02/system
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh #

12.2.2 测试迁移

查看需要迁移的云主机详细信息

[root@controller ~]# openstack server list
[root@controller ~]# openstack server show $uuid

打开vnc控制台,ping网关地址测试是否掉ping

[root@controller ~]# nova get-vnc-console 6eb2f571-fc3f-4271-9d3c-b5cf17e26989 novnc

计算节点查看日志

[root@compute01 ~]# tail -f /var/log/nova/nova-compute.log
[root@compute02 ~]# tail -f /var/log/nova/nova-compute.log

热迁移

[root@controller ~]# nova live-migration 6eb2f571-fc3f-4271-9d3c-b5cf17e26989 compute02

迁移成功会导致控制台丢失,需要再次重新获取

[root@controller ~]# nova get-vnc-console 6eb2f571-fc3f-4271-9d3c-b5cf17e26989 novnc

Three pings were lost, i.e. live migration drops a few packets and has a slight impact on the workload.

![img](file:///C:\Users\ADMINI~1\AppData\Local\Temp\ksohtml9212\wps25.jpg)

在前端web页面测试下迁移操作进行测试

管理员------>计算------>实例------>对应迁移的云主机下拉菜单------>实例热迁移

查看云主机所在的计算节点是否发生了变化

[root@controller ~]# openstack server show $UUID

第十三章 服务器断电故障恢复

Simulate a power failure: in VMware Workstation simply power off the VM from the close (X) button; on a physical server, pull the power cable.

![img](file:///C:\Users\ADMINI~1\AppData\Local\Temp\ksohtml9212\wps26.jpg)

When the host is powered back on, the instances do not start automatically; they have to be started manually.

nova list --all --host=compute02
nova start $uuid
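
To start every instance that was running on the failed host in one pass, the listing and the start can be combined; a sketch assuming the openstackclient filters below are available in this release:

for id in $(openstack server list --all-projects --host compute02 --status SHUTOFF -f value -c ID); do
    openstack server start $id
done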

After the power failure the instances' disks are still locked; starting an instance then shows the following page:

![img](file:///C:\Users\ADMINI~1\AppData\Local\Temp\ksohtml9212\wps27.jpg)

First, run nova list --all --host=computeXX to get the UUIDs of all instances on the failed host

Then list the image IDs in the ceph vms pool

Compare the two lists to find the instances whose disks still hold stale locks

查看 ceph的vms存储池

[root@storage01 ~]# rbd ls vms
185455c4-682e-4a81-9ab1-fe74deafe2bf_disk
8325609a-8b46-4d4b-90d9-77a0b96f7ef7_disk

查看锁

[root@storage01 ~]# rbd lock ls vms/185455c4-682e-4a81-9ab1-fe74deafe2bf_disk
There is 1 exclusive lock on this image.
Locker        ID                  Address                
client.624769 auto 94193066146688 10.0.0.12:0/326754146 

释放锁

[root@storage01 ~]# rbd lock remove vms/185455c4-682e-4a81-9ab1-fe74deafe2bf_disk "auto 94193066146688" client.624769 
[root@storage01 ~]# echo $?
0 

参考图片

![img](file:///C:\Users\ADMINI~1\AppData\Local\Temp\ksohtml9212\wps28.jpg)

控制节点再次硬重启实例,并查vnc查看虚拟机状态

[root@controller ~]# nova reboot --hard uuid

![img](file:///C:\Users\ADMINI~1\AppData\Local\Temp\ksohtml9212\wps29.jpg)

第十四章 Openstack虚机绑定vip

Instances that run HA services need to share a VIP. By default Neutron's port security drops traffic for addresses not assigned to the port, so the VIP has to be explicitly allowed before it can be used.

vip地址

192.168.1.50

192.168.1.51

14.1 创建port占用ip

[root@controller ~]# neutron net-list
[root@controller ~]# neutron subnet-list
[root@controller ~]# neutron port-create --fixed-ip subnet_id=d4d4076b-02c9-4ad4-8ce7-143aede1933a,ip_address=192.168.1.50 7283e381-3635-4b9d-9b62-5f55932fff02

subnet id以及subnet所在的网络id

14.2 绑定

查看 虚拟机port-id

[root@controller ~]# neutron port-list|grep 192.168.1.90
| ddfe66e7-2453-4c42-9480-b1cf1cb48621 |      | 1deabd48eaca4c44a941a2a6b843e94e | fa:16:3e:5c:1e:19 | {"subnet_id": "d4d4076b-02c9-4ad4-8ce7-143aede1933a", "ip_address": "192.168.1.90"}    |
[root@controller ~]# neutron port-list|grep 192.168.1.220
| 975d2413-0e7e-42c7-97a8-f767d8d8ed35 |      | 1deabd48eaca4c44a941a2a6b843e94e | fa:16:3e:ea:f4:79 | {"subnet_id": "d4d4076b-02c9-4ad4-8ce7-143aede1933a", "ip_address": "192.168.1.220"}   |

[root@controller ~]# neutron port-show 975d2413-0e7e-42c7-97a8-f767d8d8ed35
[root@controller ~]# neutron port-show ddfe66e7-2453-4c42-9480-b1cf1cb48621

一个虚拟机绑定一个vip

neutron port-update 975d2413-0e7e-42c7-97a8-f767d8d8ed35 --allowed-address-pairs type=dict list=true ip_address=192.168.1.50
neutron port-update ddfe66e7-2453-4c42-9480-b1cf1cb48621 --allowed-address-pairs type=dict list=true ip_address=192.168.1.50

一个虚拟机绑定多个vip

neutron port-update 975d2413-0e7e-42c7-97a8-f767d8d8ed35 --allowed-address-pairs type=dict list=true ip_address=192.168.1.0/24
neutron port-update ddfe66e7-2453-4c42-9480-b1cf1cb48621 --allowed-address-pairs type=dict list=true ip_address=192.168.1.0/24
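
allowed-address-pairs only tells port security to let the extra address through; the VIP itself must still be brought up inside the guests, normally by keepalived or a similar tool. A quick manual test from inside one instance (the interface name eth0 is an assumption):

ip addr add 192.168.1.50/32 dev eth0    # bring the VIP up by hand
# verify the VIP answers from the other instance, then clean up before handing it to keepalived:
ip addr del 192.168.1.50/32 dev eth0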

14.3 查看验证

Now run neutron port-show against the port IDs to verify the allowed address pairs

[root@controller ~]# neutron port-show 975d2413-0e7e-42c7-97a8-f767d8d8ed35
[root@controller ~]# neutron port-show ddfe66e7-2453-4c42-9480-b1cf1cb48621

删除虚拟机绑定的vip

neutron port-update --no-allowed-address-pairs 8fd37220-5806-444e-a70a-c5a064862b0d
neutron port-show 8fd37220-5806-444e-a70a-c5a064862b0d

第十五章 直通盘挂载

Pros: IO-intensive workloads need high 4k random read/write performance (several thousand IOPS or more), and a local pass-through disk can provide it.

Cons:

1: the physical disk is used directly and cannot be part of a RAID set (RAID needs to read and write several disks at once)

2: the virtual machine can no longer be migrated

After hot-adding a new disk to the VMware VM while it is running, lsblk does not show it by default; rescanning the SCSI hosts as below makes it visible:

for CHANNEL in `ls /sys/class/scsi_host/ | xargs` ; do echo "- - -" > /sys/class/scsi_host/${CHANNEL}/scan ; done

计算节点操作

mkdir /data
mkfs.xfs /dev/sdb 
mount /dev/sdb /data
cd /data/

qemu-img create -f qcow2 vm01-disk01.qcow2 50G
Formatting 'vm01-disk01.qcow2', fmt=qcow2 size=53687091200 cluster_size=65536 lazy_refcounts=off refcount_bits=16

virsh attach-disk --domain 185455c4-682e-4a81-9ab1-fe74deafe2bf /data/vm01-disk01.qcow2 --target vdb --targetbus virtio --sourcetype file --live --config --cache none --driver qemu --subdriver qcow2
Disk attached successfully

测试随机写

fio -ioengine=libaio -direct=1 -iodepth 1 -thread -rw=randwrite -filename=/dev/vdb -bs=4k -numjobs=8 -runtime=60 -group_reporting -name=4KB_randwrite
fio -ioengine=libaio -direct=1 -iodepth 1 -thread -rw=randwrite -filename=/dev/vdc -bs=4k -numjobs=8 -runtime=60 -group_reporting -name=4KB_randwrite

ceph分布式存储磁盘测试

![img](file:///C:\Users\ADMINI~1\AppData\Local\Temp\ksohtml9212\wps30.jpg)

直通盘测试

![img](file:///C:\Users\ADMINI~1\AppData\Local\Temp\ksohtml9212\wps31.jpg)

第十六章 物理机宕机虚拟机迁移

If a compute node fails and cannot be recovered, restore the virtual machines' workloads with the following steps.

查询虚拟机的uuid

[root@controller ~]# nova list --all

On the controller node, update the nova database: set the VM's state back to active and point its host/launched_on/node fields in the instances table at a healthy compute node.

select task_state,vm_state,power_state,display_name,deleted,host,launched_on,node from nova.instances where uuid="c8c7d866-f41f-49ff-aa35-33441227859d";
UPDATE nova.instances SET task_state=NULL,vm_state='active',host='compute01',launched_on='compute01',node='compute01' where uuid="c8c7d866-f41f-49ff-aa35-33441227859d";
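
If you prefer to run the statements non-interactively, a minimal sketch using the mysql client (assumes local root access to the nova database; you will be prompted for the password):

mysql -u root -p nova -e "UPDATE instances SET task_state=NULL, vm_state='active', host='compute01', launched_on='compute01', node='compute01' WHERE uuid='c8c7d866-f41f-49ff-aa35-33441227859d';"
mysql -u root -p nova -e "SELECT host, node, vm_state FROM instances WHERE uuid='c8c7d866-f41f-49ff-aa35-33441227859d';"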

On the node the database now points to (compute01 here), create the /var/lib/nova/instances/<uuid> directory:

cd /var/lib/nova/instances/
mkdir c8c7d866-f41f-49ff-aa35-33441227859d
chown nova.nova c8c7d866-f41f-49ff-aa35-33441227859d

(Only required on older OpenStack releases) Enter the c8c7d866-f41f-49ff-aa35-33441227859d directory and create the disk config file:

touch disk.config
chown qemu.qemu disk.config

Hard-reboot the VM:

[root@controller ~]# nova reboot --hard c8c7d866-f41f-49ff-aa35-33441227859d

Refresh the dashboard, or run nova list --all, to confirm the node field has changed.

Get the VNC console:

[root@controller ~]# nova get-vnc-console c8c7d866-f41f-49ff-aa35-33441227859d novnc

After the failed node is repaired, delete the instance's UUID directory on it and remove the stale libvirt domain so the old copy cannot start and overwrite the disk:

cd /var/lib/nova/instances/
rm -rf c8c7d866-f41f-49ff-aa35-33441227859d
virsh list --all | grep <instance-name>
virsh shutdown <domain>
virsh undefine <domain>

Chapter 17 Booting VMs from volumes

With the standard OpenStack-to-Ceph integration, VMs booted by Nova directly from an image have a problem: cinder list does not show the VM's root volume, and nova volume-attachments $UUID does not show the /dev/vda disk either, even though the disk is visible on the Ceph backend via rbd ls -p vms. This is not a small problem: if the VM later needs to be migrated across storage pools, neither Nova nor Cinder can see its root disk, and the block_device_mapping table in the nova database has no record of it, so the migration cannot be done. Therefore, whenever possible, create VMs from the CLI so that they boot from a volume.

[root@controller ~]# openstack server list
+--------------------------------------+------+--------+-------------------------+--------+-------------+
| ID                                   | Name | Status | Networks                | Image  | Flavor      |
+--------------------------------------+------+--------+-------------------------+--------+-------------+
| 1eedd19e-72d0-4912-ad83-3d0a045d4906 | vm01 | ACTIVE | Intnal-01=192.168.1.107 | cirros | 1C-512MB-1G |
+--------------------------------------+------+--------+-------------------------+--------+-------------+

List all volumes attached to the server; the VM's vda disk does not appear:

[root@controller ~]# nova volume-attachments 1eedd19e-72d0-4912-ad83-3d0a045d4906
+----+--------+-----------+-----------+-----+-----------------------+
| ID | DEVICE | SERVER ID | VOLUME ID | TAG | DELETE ON TERMINATION |
+----+--------+-----------+-----------+-----+-----------------------+
+----+--------+-----------+-----------+-----+-----------------------+

List all volumes; the VM's vda disk does not appear here either:

[root@controller ~]# openstack volume list

On each compute node, change the RBD pool where Nova stores instance disks from vms to volumes.

Edit nova.conf and restart nova-compute:

vim /etc/nova/nova.conf
[libvirt]
images_rbd_pool = volumes
systemctl restart openstack-nova-compute.service

Create the VM from the CLI: first create a volume from the image, then boot the VM from that volume.

glance image-list # get the image UUID
cinder type-list # get the volume type UUID
cinder create --name vm01 --image-id $UUID  1 --volume-type $UUID

neutron net-list
openstack flavor list
nova boot vm01 --flavor $flavor-UUID --nic net-id=$net-UUID --security-group all-rule --block-device source=volume,size=1,id=$volume-UUID,dest=volume,bootindex=0 --meta boot_from_ebs=EBS --meta image_name=cirros --meta hotplug="cpu,mem"
Parameter        Meaning
source=volume    boot the VM from a volume
source=image     boot from an image; cinder list still cannot see the disk, so this is not used
id               the volume ID
hotplug          CPU/memory hot-plug
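
The whole volume-boot flow can also be scripted. A minimal sketch using names from this environment (cirros image, Intnal-01 network, 1C-512MB-1G flavor, all-rule security group); the ceph_hdd volume type is an assumed example, so substitute your own:

IMAGE_ID=$(openstack image show cirros -f value -c id)
TYPE_ID=$(openstack volume type show ceph_hdd -f value -c id)
NET_ID=$(openstack network show Intnal-01 -f value -c id)
FLAVOR_ID=$(openstack flavor show 1C-512MB-1G -f value -c id)

# Create a 1 GiB bootable volume from the image
VOL_ID=$(openstack volume create --image "$IMAGE_ID" --type "$TYPE_ID" --size 1 -f value -c id vm01)

# Wait until the volume is available before booting from it
until [ "$(openstack volume show "$VOL_ID" -f value -c status)" = "available" ]; do sleep 3; done

# Boot from the volume so the root disk is visible to cinder list / nova volume-attachments
nova boot vm01 --flavor "$FLAVOR_ID" --nic net-id="$NET_ID" --security-group all-rule \
  --block-device source=volume,size=1,id="$VOL_ID",dest=volume,bootindex=0 \
  --meta image_name=cirros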

Verify:

[root@controller ~]# openstack server list
[root@controller ~]# nova volume-attachments 30af2949-1bcc-493e-b6b0-a0be04977976
[root@controller ~]# openstack volume list
[root@controller ~]# cinder list --all
[root@controller ~]# nova get-vnc-console 30af2949-1bcc-493e-b6b0-a0be04977976 novnc

![img](file:///C:\Users\ADMINI~1\AppData\Local\Temp\ksohtml9212\wps32.jpg)

Chapter 18 Expanding Ceph storage with a new SSD pool

18.1 Configure hostnames and passwordless SSH

Set the hostname on each new storage node:

hostnamectl set-hostname storage04
hostnamectl set-hostname storage05
hostnamectl set-hostname storage06

Set up passwordless SSH from the controller:

[root@controller ~]# ./auto_ssh.sh root 123456 10.0.0.114
[root@controller ~]# ./auto_ssh.sh root 123456 10.0.0.115
[root@controller ~]# ./auto_ssh.sh root 123456 10.0.0.116

Configure /etc/hosts name resolution:

[root@controller ~]# vim /etc/hosts
10.0.0.10 controller
10.0.0.11 compute01
10.0.0.12 compute02

10.0.0.111 storage01
10.0.0.112 storage02
10.0.0.113 storage03

10.0.0.114 storage04
10.0.0.115 storage05
10.0.0.116 storage06

scp /etc/hosts controller:/etc/hosts 
scp /etc/hosts compute01:/etc/hosts 
scp /etc/hosts compute02:/etc/hosts 
scp /etc/hosts storage01:/etc/hosts 
scp /etc/hosts storage02:/etc/hosts 
scp /etc/hosts storage03:/etc/hosts 
scp /etc/hosts storage04:/etc/hosts 
scp /etc/hosts storage05:/etc/hosts 
scp /etc/hosts storage06:/etc/hosts 

Distribute the SSH key pair to the new nodes:

scp /root/.ssh/id_rsa.pub storage04:/root/.ssh/id_rsa.pub
scp /root/.ssh/id_rsa.pub storage05:/root/.ssh/id_rsa.pub
scp /root/.ssh/id_rsa.pub storage06:/root/.ssh/id_rsa.pub

scp /root/.ssh/id_rsa storage04:/root/.ssh/id_rsa
scp /root/.ssh/id_rsa storage05:/root/.ssh/id_rsa
scp /root/.ssh/id_rsa storage06:/root/.ssh/id_rsa

scp /root/.ssh/id_rsa.pub storage04:/root/.ssh/authorized_keys
scp /root/.ssh/id_rsa.pub storage05:/root/.ssh/authorized_keys
scp /root/.ssh/id_rsa.pub storage06:/root/.ssh/authorized_keys

Distribute the known_hosts fingerprints:

cat /etc/hosts|grep 10|awk '{print $2}' > /root/hosts
for i in `cat /root/hosts` ;do scp /root/.ssh/known_hosts ${i}:/root/.ssh/;done

18.2 Configure the yum repository and install base tools

rm -rf /etc/yum.repos.d/*
cat << EOF >> /etc/yum.repos.d/openstack-train.repo 
[openstack]
name=openstack
baseurl=http://10.0.0.10/openstack-train
gpgcheck=0
enabled=1
EOF
yum clean all
yum makecache

yum install -y unzip wget lrzsz net-tools vim tree lsof tcpdump telnet screen bash-completion fio stress sysstat tree strace

18.3 Time synchronization; disable SELinux and the firewall

systemctl stop firewalld
systemctl disable firewalld
systemctl stop NetworkManager
systemctl disable NetworkManager

[root@storage04 ~]# vim /etc/chrony.conf 
server 10.0.0.10 iburst
[root@storage04 ~]# systemctl restart chronyd
[root@storage04 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* controller                    3   6    17     0  -1968ns[  +21us] +/-   48ms

18.4 Install Ceph and copy the configuration files

Install Ceph on the nodes being added:

[root@storage04 ~]# yum install ceph -y
[root@storage05 ~]# yum install ceph -y
[root@storage06 ~]# yum install ceph -y

[root@storage01 ~]# scp -r /etc/ceph/ceph.conf storage04:/etc/ceph/
[root@storage01 ~]# scp -r /etc/ceph/ceph.conf storage05:/etc/ceph/
[root@storage01 ~]# scp -r /etc/ceph/ceph.conf storage06:/etc/ceph/

[root@storage01 ~]# scp -r /var/lib/ceph/bootstrap-osd/ceph.keyring storage04:/var/lib/ceph/bootstrap-osd/ceph.keyring
[root@storage01 ~]# scp -r /var/lib/ceph/bootstrap-osd/ceph.keyring storage05:/var/lib/ceph/bootstrap-osd/ceph.keyring
[root@storage01 ~]# scp -r /var/lib/ceph/bootstrap-osd/ceph.keyring storage06:/var/lib/ceph/bootstrap-osd/ceph.keyring

[root@storage01 ~]# scp /etc/ceph/ceph.client.admin.keyring storage04:/etc/ceph/
[root@storage01 ~]# scp /etc/ceph/ceph.client.admin.keyring storage05:/etc/ceph/
[root@storage01 ~]# scp /etc/ceph/ceph.client.admin.keyring storage06:/etc/ceph/

18.5 Set cluster flags

In production, set these flags before adding OSDs so that data rebalancing does not consume resources and affect the VMs running on top of the cluster.

# prevent PGs from being remapped when the new OSDs join the cluster
ceph osd set norebalance

# prevent data backfill
ceph osd set nobackfill

# prevent OSDs that unexpectedly go down from being marked out
ceph osd set noout

# disable scrub
ceph osd set noscrub

# disable deep-scrub
ceph osd set nodeep-scrub

# prevent the new OSDs from being marked in after joining the CRUSH map
ceph osd set noin
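
A quick way to confirm the flags are in place before creating OSDs (they appear in the osdmap flags line):

ceph osd dump | grep flags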

18.6 Create the new OSDs and start them

ceph-volume lvm create --data /dev/nvme0n1
ceph-volume lvm create --data /dev/nvme0n2
ceph-volume lvm create --data /dev/nvme0n3

In this version the default object store backend is BlueStore, which performs better than the older FileStore and is optimized for SSDs.

If NVMe SSDs are available, the following command places the DB and WAL on separate devices for acceleration:

sudo ceph-volume lvm prepare --data {dev/vg/lv} --block.wal {partition} --block.db {partition}

The WAL and DB can share one device or use separate devices; in practice their capacity should be no less than 5% of the data capacity.
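
A hypothetical example of such a layout: data on /dev/sdd, with the DB and WAL on two partitions of a faster NVMe device (device names are placeholders). Per the 5% guideline, a 1 TB data disk would want roughly 50 GB reserved for the DB.

sudo ceph-volume lvm prepare --data /dev/sdd --block.db /dev/nvme1n1p1 --block.wal /dev/nvme1n1p2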

List the OSDs prepared by ceph-volume:

ceph-volume lvm list

Activate the prepared OSDs:

ceph-volume lvm activate {ID} {FSID}

Or activate all OSDs on the node at once:

ceph-volume lvm activate --all

Start the OSDs.

OSD IDs are assigned automatically starting from 0 and increase as OSDs are created node by node; each OSD daemon is identified by its ID.

Start the service (substitute the new OSD's ID for 0):

systemctl start ceph-osd@0
systemctl status ceph-osd@0
systemctl enable ceph-osd@0

After all nodes have their OSDs added, verify with the following commands.

Check the OSD tree:

ceph osd tree

Check the cluster status:

ceph -s

18.7 Create an SSD failure domain

Create a root bucket for the SSD disks:

ceph osd crush add-bucket ceph_nvme_ssd root

Move the new SSD hosts under this root:

ceph osd crush move storage04 root=ceph_nvme_ssd
ceph osd crush move storage05 root=ceph_nvme_ssd
ceph osd crush move storage06 root=ceph_nvme_ssd

18.8 Create the SSD pool

[root@storage01 ~]# ceph osd pool create ceph_nvme_ssd 32
pool 'ceph_nvme_ssd' created
[root@storage01 ~]# ceph osd pool application enable ceph_nvme_ssd rbd
enabled application 'rbd' on pool 'ceph_nvme_ssd'
[root@storage01 ~]# ceph osd pool application get ceph_nvme_ssd 
{
    "rbd": {}
}
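
Optionally sanity-check the new pool's settings (values depend on your cluster):

ceph osd pool get ceph_nvme_ssd size    # replica count
ceph osd pool get ceph_nvme_ssd pg_num  # placement groups, 32 as created above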

18.9 Create the SSD volume type

Create the volume type:

cinder type-create ceph_nvme_ssd

Associate the volume type with the backend:

cinder type-key ceph_nvme_ssd set volume_backend_name=ceph_nvme_ssd

List volume types and their extra specs:

cinder extra-specs-list

To remove the backend association from the volume type (the key set above is volume_backend_name):

cinder type-key ceph_nvme_ssd unset volume_backend_name

Check the backend pools Cinder can see:

cinder get-pools --detail

18.10 Update the Cinder configuration

On the controller node:

[root@controller ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
enabled_backends = ceph_hdd,ceph_nvme_ssd

[ceph_nvme_ssd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = ceph_nvme_ssd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = ad92b6a2-6e02-439b-8b73-5e007ffa1010
volume_backend_name = ceph_nvme_ssd
[root@controller ~]# systemctl restart openstack-cinder-volume openstack-cinder-api openstack-cinder-scheduler
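
To confirm the new backend registered and is reporting up (cinder-volume runs one service per backend, named host@backend):

openstack volume service list
cinder get-pools --detail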

18.11 Create a CRUSH map rule

View pool details:

[root@storage01 ~]# ceph osd pool ls detail

List CRUSH rule names:

[root@storage01 ~]# ceph osd crush rule ls

Dump the default CRUSH rule:

[root@storage01 ~]# ceph osd crush rule dump replicated_rule

Create the replicated_rule_ssd CRUSH rule rooted at the ceph_nvme_ssd bucket with host as the failure domain, and assign it to the SSD pool:

[root@storage01 ~]# ceph osd crush rule create-replicated replicated_rule_ssd ceph_nvme_ssd host
[root@storage01 ~]# ceph osd pool set ceph_nvme_ssd crush_rule replicated_rule_ssd
[root@storage01 ~]# ceph osd crush rule dump replicated_rule_ssd

Confirm the SSD pool now uses the new CRUSH rule:

[root@storage01 ~]# ceph osd pool get ceph_nvme_ssd crush_rule
crush_rule: replicated_rule_ssd
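
As a sanity check, the PGs of the SSD pool should now map only to the SSD OSDs (osd.9 through osd.17 in this cluster):

ceph pg ls-by-pool ceph_nvme_ssd | head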

18.12 Test image creation in the pool

[root@storage01 ~]# rbd create -p ceph_nvme_ssd --image rbd-001.img --size 10G
[root@storage01 ~]# rbd create ceph_nvme_ssd/rbd-002.img --size 10G
[root@storage01 ~]# rbd -p ceph_nvme_ssd ls
rbd-001.img
rbd-002.img

[root@storage01 ~]# rbd info -p ceph_nvme_ssd --image rbd-001.img
rbd image 'rbd-001.img':
	size 10 GiB in 2560 objects
	order 22 (4 MiB objects)
	snapshot_count: 0
	id: 175fc8319bdf90
	block_name_prefix: rbd_data.175fc8319bdf90
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features: 
	flags: 
	create_timestamp: Sun Dec  3 13:09:53 2023
	access_timestamp: Sun Dec  3 13:09:53 2023
	modify_timestamp: Sun Dec  3 13:09:53 2023



[root@storage01 ~]# rbd rm -p ceph_nvme_ssd --image rbd-002.img
Removing image: 100% complete...done.

[root@storage01 ~]# rbd -p ceph_nvme_ssd ls
rbd-001.img

18.13 Update the cinder keyring permissions

Inspect the cinder keyring; it has no caps for the SSD pool yet:

[root@storage01 ~]# cat /etc/ceph/ceph.client.cinder.keyring 
[client.cinder]
	key = AQA3CUFl/t3FGRAA+Xbs/FxKB0NVtOwbkRymug==
	caps mon = "allow r"
	caps osd = "allow class-read object_prefix rbd_children,allow rwx pool=volumes,allow rwx pool=vms,allow rx pool=images"

Update the client.cinder caps to add rwx permission on the SSD pool:

[root@storage01 ~]# ceph auth caps client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children,allow rwx pool=volumes,allow rwx pool=vms,allow rx pool=images,allow rwx pool=ceph_nvme_ssd'

Export the updated keyring:

[root@storage01 ~]# ceph auth get client.cinder -o /etc/ceph/ceph.client.cinder.keyring
[root@storage01 ~]# cat /etc/ceph/ceph.client.cinder.keyring 
[client.cinder]
	key = AQA3CUFl/t3FGRAA+Xbs/FxKB0NVtOwbkRymug==
	caps mon = "allow r"
	caps osd = "allow class-read object_prefix rbd_children,allow rwx pool=volumes,allow rwx pool=vms,allow rx pool=images,allow rwx pool=ceph_nvme_ssd"

Copy it to the compute nodes:

[root@storage01 ~]# scp /etc/ceph/ceph.client.cinder.keyring root@compute01:/etc/ceph/
[root@storage01 ~]# scp /etc/ceph/ceph.client.cinder.keyring root@compute02:/etc/ceph/
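
From a compute node, verify that the updated cinder key can reach the new pool (this uses the keyring just copied; the pool should list without permission errors):

rbd --id cinder -p ceph_nvme_ssd ls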

18.14 Unset the cluster flags

# allow PGs to be remapped again now that the new OSDs are in place
ceph osd unset norebalance

# allow data backfill again
ceph osd unset nobackfill

# allow OSDs that go down to be marked out again
ceph osd unset noout

# re-enable scrub
ceph osd unset noscrub

# re-enable deep-scrub
ceph osd unset nodeep-scrub

# allow the new OSDs to be marked in and start serving data
ceph osd unset noin

18.15 Test by creating a VM

glance image-list
cinder type-list
cinder create --name vm01 --image-id $UUID 20 --volume-type $UUID

[root@storage01 ~]# rbd -p ceph_nvme_ssd ls
volume-8770f8db-e4c0-4be5-865f-99868542bc45

neutron net-list
openstack flavor list
nova boot vm01 --flavor $UUID --nic net-id=$UUID --security-group all-rule --block-device source=volume,size=1,id=$UUID,dest=volume,bootindex=0 --meta boot_from_ebs=EBS --meta image_name=cirros --meta hotplug="cpu,mem"

18.16 Check in Ceph

[root@storage01 ~]# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME          STATUS REWEIGHT PRI-AFF 
-22       0.43918 root ceph_nvme_ssd                         
-13       0.14639     host storage04                         
  9   ssd 0.04880         osd.9          up  1.00000 1.00000 
 10   ssd 0.04880         osd.10         up  1.00000 1.00000 
 11   ssd 0.04880         osd.11         up  1.00000 1.00000 
-16       0.14639     host storage05                         
 12   ssd 0.04880         osd.12         up  1.00000 1.00000 
 13   ssd 0.04880         osd.13         up  1.00000 1.00000 
 14   ssd 0.04880         osd.14         up  1.00000 1.00000 
-19       0.14639     host storage06                         
 15   ssd 0.04880         osd.15         up  1.00000 1.00000 
 16   ssd 0.04880         osd.16         up  1.00000 1.00000 
 17   ssd 0.04880         osd.17         up  1.00000 1.00000 
 -1       1.16908 root default                               
 -3       0.38969     host storage01                         
  0   hdd 0.12990         osd.0          up  1.00000 1.00000 
  3   hdd 0.12990         osd.3          up  1.00000 1.00000 
  6   hdd 0.12990         osd.6          up  1.00000 1.00000 
 -5       0.38969     host storage02                         
  1   hdd 0.12990         osd.1          up  1.00000 1.00000 
  4   hdd 0.12990         osd.4          up  1.00000 1.00000 
  7   hdd 0.12990         osd.7          up  1.00000 1.00000 
 -7       0.38969     host storage03                         
  2   hdd 0.12990         osd.2          up  1.00000 1.00000 
  5   hdd 0.12990         osd.5          up  1.00000 1.00000 
  8   hdd 0.12990         osd.8          up  1.00000 1.00000 

[root@storage01 ~]# ceph df 
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED 
    hdd       1.2 TiB     878 GiB     310 GiB      319 GiB         26.67 
    ssd       450 GiB     437 GiB     4.2 GiB       13 GiB          2.93 
    TOTAL     1.6 TiB     1.3 TiB     314 GiB      332 GiB         20.19 
 
POOLS:
    POOL              ID     PGS     STORED      OBJECTS     USED        %USED     MAX AVAIL 
    volumes            3      32     1.4 GiB         379     4.3 GiB      0.52       272 GiB 
    images             4      32     1.4 GiB         192     4.3 GiB      0.52       272 GiB 
    backups            5      32        19 B           3     192 KiB         0       272 GiB 
    vms                6      32     1.4 GiB         515     4.5 GiB      0.54       272 GiB 
    ceph_nvme_ssd      7      32     1.4 GiB         382     4.1 GiB      0.99       138 GiB 

Chapter 19 Octavia load balancing
