Complete OpenStack Deployment (Step by Step)

Abstract

This guide walks through a complete, command-by-command OpenStack deployment on a two-node setup (controller and compute): the Keystone identity service, the Glance image service, the Placement service, the Nova compute service, and the initial network preparation for Neutron.


Part 1: Keystone Component Deployment

Perform these steps on the controller node only.

1. Install and configure Keystone

# 1. Install the Keystone packages
#    mod_wsgi: plugin that gives the web server WSGI support
#    httpd: the Apache package
#    openstack-keystone: the Keystone package
[root@controller ~]# yum install -y openstack-keystone httpd mod_wsgi

# Check the keystone user
[root@controller ~]# cat /etc/passwd | grep keystone
keystone:x:163:163:OpenStack Keystone Daemons:/var/lib/keystone:/sbin/nologin

# Check the keystone group
[root@controller ~]# cat /etc/group | grep keystone
keystone:x:163:

# 2. Create the keystone database and grant privileges
[root@controller ~]# mysql -uroot -p000000
# Create the database
MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.000 sec)
# Grant the keystone user local login privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.001 sec)
# Grant the keystone user login privileges from any remote host
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.000 sec)
# Leave the database
MariaDB [(none)]> quit
Bye

# 3. Edit the Keystone configuration file
[root@controller ~]# vi /etc/keystone/keystone.conf
# In the [database] section, add the database connection string
connection=mysql+pymysql://keystone:000000@controller/keystone
# In the [token] section, uncomment the provider line to set the token encryption method
provider = fernet

# 4. Initialize the keystone database
# Sync the database
#   su keystone: switch to the keystone user
#   '-s /bin/sh': the shell used to run the command
#   '-c': the command to run
[root@controller ~]# su keystone -s /bin/sh -c "keystone-manage db_sync"

# Check the database
[root@controller ~]# mysql -uroot -p000000
# Switch to the keystone database
MariaDB [(none)]> use keystone;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
# List the tables in the keystone database
MariaDB [keystone]> show tables;
+------------------------------------+
| Tables_in_keystone                 |
+------------------------------------+
| access_rule                        |
| access_token                       |
| application_credential             |
| application_credential_access_rule |
| application_credential_role        |
# A long list of tables like this means the database sync succeeded.

2. Initialize the Keystone component

# 1. Initialize the Fernet key repositories
[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# This creates /etc/keystone/fernet-keys and generates two fernet keys in it,
# used for encryption and decryption respectively
[root@controller fernet-keys]# pwd
/etc/keystone/fernet-keys
[root@controller fernet-keys]# du -sh *
4.0K	0
4.0K	1

[root@controller keystone]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
# This creates /etc/keystone/credential-keys and generates two fernet keys in it,
# used to encrypt/decrypt user credentials
[root@controller credential-keys]# pwd
/etc/keystone/credential-keys
[root@controller credential-keys]# du -sh *
4.0K	0
4.0K	1

# 2. Initialize the identity information
# OpenStack ships with a default admin user, but it has no password or other
# credentials needed to log in yet. Use `keystone-manage bootstrap` to create them:
#   --bootstrap-password      sets the admin password
#   --bootstrap-admin-url     sets the admin service endpoint
#   --bootstrap-internal-url  sets the internal service endpoint
#   --bootstrap-public-url    sets the public service endpoint
#   --bootstrap-region-id     sets the region ID
[root@controller ~]# keystone-manage bootstrap --bootstrap-password 000000 \
> --bootstrap-admin-url http://controller:5000/v3 \
> --bootstrap-internal-url http://controller:5000/v3 \
> --bootstrap-public-url http://controller:5000/v3 \
> --bootstrap-region-id RegionOne
# After this command runs, the keystone database holds the credentials needed to log in.

# 3. Configure the web service
# (1) Add WSGI support to Apache
# Symlink wsgi-keystone.conf into /etc/httpd/conf.d/ as an Apache config file
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
[root@controller ~]# ls /etc/httpd/conf.d/
autoindex.conf  README  userdir.conf  welcome.conf  wsgi-keystone.conf

# (2) Edit the Apache configuration and set ServerName to the web server's IP address or domain name
[root@controller ~]# vi /etc/httpd/conf/httpd.conf
96 ServerName controller

# (3) Start Apache
[root@controller ~]# systemctl start httpd
[root@controller ~]# systemctl enable httpd
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.

3. Log in with the admin credentials

# Create a file to store the identity credentials
[root@controller ~]# vi admin-login
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

# Import the environment variables
[root@controller ~]# source admin-login

# Review the resulting environment
[root@controller ~]# export -p
declare -x OS_AUTH_URL="http://controller:5000/v3"
declare -x OS_IDENTITY_API_VERSION="3"
declare -x OS_IMAGE_API_VERSION="2"
declare -x OS_PASSWORD="000000"
declare -x OS_PROJECT_DOMAIN_NAME="Default"
declare -x OS_PROJECT_NAME="admin"
declare -x OS_USERNAME="admin"
declare -x OS_USER_DOMAIN_NAME="Default"
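As an extra sanity check (this step is my addition, not part of the original transcript), you can ask Keystone to issue a token with the credentials you just sourced; any error here points at a problem in admin-login or in the Keystone setup itself:

# Request a token using the sourced credentials; a table with a token id means login works
[root@controller ~]# openstack token issue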

4. Verify the Keystone service

# Create a project named 'project' in the default domain
[root@controller ~]# openstack project create --domain default project
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | e3a549077f354998aa1a75677cfde62e |
| is_domain   | False                            |
| name        | project                          |
| options     | {}                               |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+

# List the existing projects
[root@controller ~]# openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 4188570a34464b938ed3fa7e08681df8 | admin   |
| e3a549077f354998aa1a75677cfde62e | project |
+----------------------------------+---------+

# Create a role named user
[root@controller ~]# openstack role create user
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | None                             |
| domain_id   | None                             |
| id          | 700ec993d3cf456fa591c03e72f37856 |
| name        | user                             |
| options     | {}                               |
+-------------+----------------------------------+

# List the current roles
[root@controller ~]# openstack role list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| 47670bbd6cc1472ab42db560637c7ebe | reader |
| 5eee0910aeb844a1b82f48100da7adc9 | admin  |
| 700ec993d3cf456fa591c03e72f37856 | user   |
| bc2c8147bbd643629a020a6bd9591eca | member |
+----------------------------------+--------+

# List the existing domains
[root@controller ~]# openstack domain list
+---------+---------+---------+--------------------+
| ID      | Name    | Enabled | Description        |
+---------+---------+---------+--------------------+
| default | Default | True    | The default domain |
+---------+---------+---------+--------------------+

# List the existing users
[root@controller ~]# openstack user list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| f4f16d960e0643d7b5a35db152c87dae | admin |
+----------------------------------+-------+
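The transcript above creates the project and the user role but never ties them together. A minimal way to exercise both, assuming a throwaway user named myuser (a hypothetical name, not from the original), would be:

# Create a test user in the default domain (myuser is a made-up name)
[root@controller ~]# openstack user create --domain default --password 000000 myuser
# Grant myuser the user role on the project; no output means success
[root@controller ~]# openstack role add --project project --user myuser user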

Part 2: Glance Deployment

Install the OpenStack image service; perform these steps on the controller node only.

1. Install Glance

# 1. Install the Glance packages
# The stock repository is missing some packages, so download the Aliyun repo file to this path first
[root@controller yum.repos.d]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@controller ~]# yum install -y openstack-glance
# Installation automatically creates the glance user and group
[root@controller ~]# cat /etc/passwd | grep glance
glance:x:161:161:OpenStack Glance Daemons:/var/lib/glance:/sbin/nologin
[root@controller ~]# cat /etc/group | grep glance
glance:x:161:

# 2. Create the glance database and grant privileges
# Connect to the database
[root@controller ~]# mysql -uroot -p000000
# Create the glance database
MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.001 sec)
# Grant the glance user local and remote login privileges on the new database
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.001 sec)
# Leave the database
MariaDB [(none)]> quit
Bye

2. Configure Glance

Glance's configuration file is /etc/glance/glance-api.conf; editing it wires Glance up to the database and to Keystone.

# 1. Back up the configuration file
[root@controller ~]# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak

# 2. Strip comments and blank lines from the configuration file
# grep searches a file for matching lines. -E: use extended regular expressions;
# -v: invert the match (keep only lines that do NOT match)
# ^ anchors the start of a line, $ the end; | separates alternatives
[root@controller ~]# grep -Ev '^$|#' /etc/glance/glance-api.conf.bak > /etc/glance/glance-api.conf

# 3. Edit the configuration file
# default_store = file: use the local filesystem as the default storage backend
# filesystem_store_datadir = /var/lib/glance/images/ : directory where image files are actually stored
[root@controller ~]# vi /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:000000@controller/glance

[glance_store]
stores = file
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
username = glance
password = 000000
project_name = project
user_domain_name = Default
project_domain_name = Default

[paste_deploy]
flavor = keystone

# 4. Initialize the database
# Sync the database: populate it with the table definitions shipped with the installation
[root@controller ~]# su glance -s /bin/sh -c "glance-manage db_sync"

# Check the database
[root@controller ~]# mysql -uroot -p000000
MariaDB [(none)]> use glance
MariaDB [glance]> show tables;
+----------------------------------+
| Tables_in_glance                 |
+----------------------------------+
| alembic_version                  |
| image_locations                  |
| image_members                    |
| image_properties                 |
| image_tags                       |
| images                           |
.....

3. Initialize the Glance component

With Glance installed and configured, create the glance user with a password, assign it a role, and create the service and its service endpoints.

(1) Create the glance user and assign a role

# Import the environment variables to log in
[root@controller ~]# source admin-login

# In the default domain, create a user named glance with password 000000
[root@controller ~]# openstack user create --domain default --password 000000 glance
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 81238b556a444c8f80cb3d7dc72a24d3 |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

# List the current projects
[root@controller ~]# openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 4188570a34464b938ed3fa7e08681df8 | admin   |
| e3a549077f354998aa1a75677cfde62e | project |
+----------------------------------+---------+
# List the existing users
[root@controller ~]# openstack user list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| f4f16d960e0643d7b5a35db152c87dae | admin  |
| 81238b556a444c8f80cb3d7dc72a24d3 | glance |
+----------------------------------+--------+

# Grant the glance user the admin role on the project project
[root@controller ~]# openstack role add --project project --user glance admin
# Show the glance user's details
[root@controller ~]# openstack user show glance
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 81238b556a444c8f80cb3d7dc72a24d3 |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

(2) Create the Glance service and its endpoints

# 1. Create the service
# Create a service named glance with type image
[root@controller ~]# openstack service create --name glance image
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | 324a07034ea4453692570e3edf73cf2c |
| name    | glance                           |
| type    | image                            |
+---------+----------------------------------+

# 2. Create the image service endpoints
# There are three kinds of service endpoints: public, internal, and admin.
# Create the public endpoint
[root@controller ~]# openstack endpoint create --region RegionOne glance public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ab3208eb36fd4a8db9c90b9113da9bbb |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 324a07034ea4453692570e3edf73cf2c |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

# Create the internal endpoint
[root@controller ~]# openstack endpoint create --region RegionOne glance internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 54994f15e8184e099334760060b9e2a9 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 324a07034ea4453692570e3edf73cf2c |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

# Create the admin endpoint
[root@controller ~]# openstack endpoint create --region RegionOne glance admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 97ae61936255471f9f55858cc0443e61 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 324a07034ea4453692570e3edf73cf2c |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

# List the service endpoints
[root@controller ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                       |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------+
| 0d31919afb564c8aa52ec5eddf474a55 | RegionOne | keystone     | identity     | True    | admin     | http://controller:5000/v3 |
| 243f1e7ace4f444cba2978b900aeb165 | RegionOne | keystone     | identity     | True    | internal  | http://controller:5000/v3 |
| 54994f15e8184e099334760060b9e2a9 | RegionOne | glance       | image        | True    | internal  | http://controller:9292    |
| 702df46845be40fb9e75fb988314ee90 | RegionOne | keystone     | identity     | True    | public    | http://controller:5000/v3 |
| 97ae61936255471f9f55858cc0443e61 | RegionOne | glance       | image        | True    | admin     | http://controller:9292    |
| ab3208eb36fd4a8db9c90b9113da9bbb | RegionOne | glance       | image        | True    | public    | http://controller:9292    |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------+

(3) Start the Glance service

# Enable the Glance service at boot
[root@controller ~]# systemctl enable openstack-glance-api
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-api.service to /usr/lib/systemd/system/openstack-glance-api.service.
# Start the Glance service
[root@controller ~]# systemctl start openstack-glance-api

4. Verify the Glance service

# Method 1: check port usage (is 9292 in use?)
[root@controller ~]# netstat -tnlup | grep 9292
tcp        0      0 0.0.0.0:9292            0.0.0.0:*               LISTEN      5740/python2

# Method 2: check the service status (active (running) means the service is up)
[root@controller ~]# systemctl status openstack-glance-api
● openstack-glance-api.service - OpenStack Image Service (code-named Glance) API server
   Loaded: loaded (/usr/lib/systemd/system/openstack-glance-api.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2022-10-19 17:09:13 CST;
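A third check, by analogy with the Placement verification later in this guide (this step is my addition, not from the original transcript), is to query the Glance API endpoint directly; a JSON list of API versions means the service is answering requests:

# Query the image API directly over HTTP
[root@controller ~]# curl http://controller:9292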

5. Build an image with Glance

# Install the lrzsz transfer tool
[root@controller ~]# yum install -y lrzsz

# Upload cirros-0.5.1-x86_64-disk.img to /root
[root@controller ~]# rz
z waiting to receive.**B0100000023be50
[root@controller ~]# ls
admin-login      cirros-0.5.1-x86_64-disk.img

# Create an image with Glance
[root@controller ~]# openstack image create --file cirros-0.5.1-x86_64-disk.img --disk-format qcow2 --container-format bare --public cirros
# List the images
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| a859fddb-3ec1-4cd8-84ec-482112af929b | cirros | active |
+--------------------------------------+--------+--------+
# Delete the image
[root@controller ~]# openstack image delete a859fddb-3ec1-4cd8-84ec-482112af929b
# Create the image again
[root@controller ~]# openstack image create --file cirros-0.5.1-x86_64-disk.img --disk-format qcow2 --container-format bare --public cirros
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | 1d3062cd89af34e419f7100277f38b2b                     |
| container_format | bare                                                 |
| created_at       | 2022-10-19T09:20:03Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/7096885c-0a58-4086-8014-b92affceb0e8/file |
| id               | 7096885c-0a58-4086-8014-b92affceb0e8                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros                                               |
| owner            | 4188570a34464b938ed3fa7e08681df8                     |
| properties       | os_hash_algo='sha512', os_hash_value='553d220ed58cfee7dafe003c446a9f197ab5edf8ffc09396c74187cf83873c877e7ae041cb80f3b91489acf687183adcd689b53b38e3ddd22e627e7f98a09c46', os_hidden='False' |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 16338944                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2022-10-19T09:20:03Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+

# View the physical image file
# /var/lib/glance/images/ is the image storage directory defined in glance-api.conf
[root@controller ~]# ll /var/lib/glance/images/
total 15956
-rw-r----- 1 glance glance 16338944 Oct 19 17:20 7096885c-0a58-4086-8014-b92affceb0e8
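Before uploading an image like this, it can be worth confirming that the file really is in the format passed to --disk-format, since a mismatch produces a broken image. A quick check (my addition; assumes the qemu-img tool is installed, e.g. from the qemu-img package):

# Inspect the image header; 'file format: qcow2' should match --disk-format
[root@controller ~]# qemu-img info cirros-0.5.1-x86_64-disk.img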

Part 3: Placement Service Deployment

Starting with the Stein release, resource tracking was split out of Nova into a standalone component.

1. Install the Placement packages

# Install the Placement package
# Installation automatically creates the placement user and group
[root@controller ~]# yum install -y openstack-placement-api

# Confirm that the user and group were created
[root@controller ~]# cat /etc/passwd | grep placement
placement:x:993:990:OpenStack Placement:/:/bin/bash
[root@controller ~]# cat /etc/group | grep placement
placement:x:990:

# Log in to the database
[root@controller ~]# mysql -uroot -p000000

# Create the placement database
MariaDB [(none)]> create database placement;
Query OK, 1 row affected (0.000 sec)

# Grant privileges on the database
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.000 sec)
MariaDB [(none)]> quit
Bye

2. Configure the Placement service

# Back up the configuration file
[root@controller ~]# cp /etc/placement/placement.conf /etc/placement/placement.conf.bak
[root@controller ~]# ls /etc/placement/
placement.conf  placement.conf.bak  policy.json

# Strip comments and blank lines
[root@controller ~]# grep -Ev '^$|#' /etc/placement/placement.conf.bak > /etc/placement/placement.conf
[root@controller ~]# cat /etc/placement/placement.conf
[DEFAULT]
[api]
[cors]
[keystone_authtoken]
[oslo_policy]
[placement]
[placement_database]
[profiler]

# Edit the configuration file
[root@controller ~]# vi /etc/placement/placement.conf
[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = 000000

[placement_database]
connection = mysql+pymysql://placement:000000@controller/placement

# Edit the Apache configuration file
# Inside the <VirtualHost> block, add the <Directory> stanza shown below
[root@controller ~]# vi /etc/httpd/conf.d/00-placement-api.conf
Listen 8778

<VirtualHost *:8778>
  WSGIProcessGroup placement-api
  ...snip...
  <Directory /usr/bin>
    <IfVersion >= 2.4>
      Require all granted
    </IfVersion>
  </Directory>
</VirtualHost>

# Check the Apache version (the IfVersion test above targets 2.4 and later)
[root@controller ~]# httpd -v
Server version: Apache/2.4.6 (CentOS)
Server built:   Jan 25 2022 14:08:43

# Sync the database, populating it with the table definitions
[root@controller ~]# su placement -s /bin/sh -c "placement-manage db sync"

# Verify the database sync
[root@controller ~]# mysql -uroot -p000000
MariaDB [(none)]> use placement;
MariaDB [placement]> show tables;
+------------------------------+
| Tables_in_placement          |
+------------------------------+
| alembic_version              |
| allocations                  |
| consumers                    |
| inventories                  |
| placement_aggregates         |
| projects                     |
| resource_classes             |
| resource_provider_aggregates |
| resource_provider_traits     |
| resource_providers           |
| traits                       |
| users                        |
+------------------------------+
12 rows in set (0.000 sec)
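One step the transcript above skips: Apache only reads 00-placement-api.conf when it starts, so after editing it the web server has to be restarted for the new placement vhost to load (my addition):

# Restart Apache so the placement vhost configuration takes effect
[root@controller ~]# systemctl restart httpd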

3. Initialize the Placement component

# Import the environment variables to log in
[root@controller ~]# source admin-login

# Create the placement user
[root@controller ~]# openstack user create --domain default --password 000000 placement
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | e0d6a46f9b1744d8a7ab0332ab45d59c |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

# Assign the placement user the admin role
[root@controller ~]# openstack role add --project project --user placement admin

# Create the placement service
[root@controller ~]# openstack service create --name placement placement
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | da038496edf04ce29d7d3d6b8e647755 |
| name    | placement                        |
| type    | placement                        |
+---------+----------------------------------+
# List the services created so far
[root@controller ~]# openstack service list
+----------------------------------+-----------+-----------+
| ID                               | Name      | Type      |
+----------------------------------+-----------+-----------+
| 324a07034ea4453692570e3edf73cf2c | glance    | image     |
| 5d25b4ed1443497599707e043866eaae | keystone  | identity  |
| da038496edf04ce29d7d3d6b8e647755 | placement | placement |
+----------------------------------+-----------+-----------+

# Create the service endpoints
# Placement has three endpoints: public, internal, and admin.
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | da0c279c9a394d0f80e7a33acb9e0d8d |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | da038496edf04ce29d7d3d6b8e647755 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 79ca63ffd52d4d96b418cdf962c1e3ca |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | da038496edf04ce29d7d3d6b8e647755 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | fbee454f73d64bb18a52d8696c7aa596 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | da038496edf04ce29d7d3d6b8e647755 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+

# Check the result
[root@controller ~]# openstack endpoint list

4. Verify the Placement component

(1) Two ways to check that Placement is running

# Method 1: check port usage (is 8778 in use?)
[root@controller ~]# netstat -tnlup | grep 8778
tcp6       0      0 :::8778                 :::*                    LISTEN      1018/httpd

# Method 2: talk to the service endpoint
[root@controller ~]# curl http://controller:8778
{"versions": [{"status": "CURRENT", "min_version": "1.0", "max_version": "1.36", "id": "v1.0", "links": [{"href": "", "rel": "self"}]}]}
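A third option (my addition; available since Placement became a standalone project) is its own status tool, which mirrors the nova-status check used later in this guide:

# Run Placement's built-in upgrade/health check
[root@controller ~]# placement-status upgrade check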

(2) Post-install checklist

# 1. Was the placement user created on the controller node?
[root@controller ~]# cat /etc/passwd | grep placement
placement:x:993:990:OpenStack Placement:/:/bin/bash

# 2. Was the placement group created on the controller node?
[root@controller ~]# cat /etc/group | grep placement
placement:x:990:

# 3. Was the placement database created?
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| performance_schema |
| placement          |
+--------------------+

# 4. What privileges does the placement user have on the database?
MariaDB [(none)]> show grants for placement@'%';
MariaDB [(none)]> show grants for placement@'localhost';

# 5. List the placement tables
MariaDB [(none)]> use placement;
MariaDB [placement]> show tables;
+------------------------------+
| Tables_in_placement          |
+------------------------------+
| alembic_version              |
| allocations                  |
| consumers                    |

# 6. Was the placement user created in Keystone?
[root@controller ~]# openstack user list
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| f4f16d960e0643d7b5a35db152c87dae | admin     |
| 81238b556a444c8f80cb3d7dc72a24d3 | glance    |
| e0d6a46f9b1744d8a7ab0332ab45d59c | placement |
+----------------------------------+-----------+

# 7. Does the placement user have the admin role?
# Look up the user and role IDs, then match them in the role assignment list
[root@controller ~]# openstack user list
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| f4f16d960e0643d7b5a35db152c87dae | admin     |
| 81238b556a444c8f80cb3d7dc72a24d3 | glance    |
| e0d6a46f9b1744d8a7ab0332ab45d59c | placement |
+----------------------------------+-----------+
[root@controller ~]# openstack role list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| 47670bbd6cc1472ab42db560637c7ebe | reader |
| 5eee0910aeb844a1b82f48100da7adc9 | admin  |
| 700ec993d3cf456fa591c03e72f37856 | user   |
| bc2c8147bbd643629a020a6bd9591eca | member |
+----------------------------------+--------+
[root@controller ~]# openstack role assignment list
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+
| Role                             | User                             | Group | Project                          | Domain | System | Inherited |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+
| 5eee0910aeb844a1b82f48100da7adc9 | 81238b556a444c8f80cb3d7dc72a24d3 |       | e3a549077f354998aa1a75677cfde62e |        |        | False     |
| 5eee0910aeb844a1b82f48100da7adc9 | e0d6a46f9b1744d8a7ab0332ab45d59c |       | e3a549077f354998aa1a75677cfde62e |        |        | False     |
| 5eee0910aeb844a1b82f48100da7adc9 | f4f16d960e0643d7b5a35db152c87dae |       | 4188570a34464b938ed3fa7e08681df8 |        |        | False     |
| 5eee0910aeb844a1b82f48100da7adc9 | f4f16d960e0643d7b5a35db152c87dae |       |                                  |        | all    | False     |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+

# 8. Was the placement service created?
[root@controller ~]# openstack service list
+----------------------------------+-----------+-----------+
| ID                               | Name      | Type      |
+----------------------------------+-----------+-----------+
| 324a07034ea4453692570e3edf73cf2c | glance    | image     |
| 5d25b4ed1443497599707e043866eaae | keystone  | identity  |
| da038496edf04ce29d7d3d6b8e647755 | placement | placement |
+----------------------------------+-----------+-----------+

# 9. Check the placement service endpoints
[root@controller ~]# openstack endpoint list

# 10. Is the placement port answering?
[root@controller ~]# curl http://controller:8778
{"versions": [{"status": "CURRENT", "min_version": "1.0", "max_version": "1.36", "id": "v1.0", "links": [{"href": "", "rel": "self"}]}]}

Part 4: Nova Compute Service Deployment

Nova handles creating, deleting, starting, and stopping cloud server instances.

1. Install and configure Nova on the controller node

(1) Install the Nova packages

# Install the Nova packages
[root@controller ~]# yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-scheduler openstack-nova-novncproxy

# Check the nova user
[root@controller ~]# cat /etc/passwd | grep nova
nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/sbin/nologin

# Check the nova group
[root@controller ~]# cat /etc/group | grep nova
nobody:x:99:nova
nova:x:162:nova

(2) Create the Nova databases and grant privileges

Nova is backed by three databases: nova_api, nova_cell0, and nova.

# Log in to the database
[root@controller ~]# mysql -uroot -p000000

# Create the three databases
MariaDB [(none)]> create database nova_api;
MariaDB [(none)]> create database nova_cell0;
MariaDB [(none)]> create database nova;
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| nova               |
| nova_api           |
| nova_cell0         |

# Grant the nova user local and remote privileges on each database
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '000000';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '000000';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '000000';

(3) Edit the Nova configuration file

Nova's configuration file is /etc/nova/nova.conf; editing it wires Nova up to the databases, Keystone, and the other components. The relevant sections:

api_database: connection to the nova_api database

database: connection to the nova database

api and keystone_authtoken: interaction with Keystone

placement: interaction with the Placement component

glance: interaction with the Glance component

oslo_concurrency: provides thread and process locks for OpenStack code; lock_path sets the directory the locks live in

DEFAULT: message queue, firewall, and other global settings

vnc: VNC connection settings

In the configuration file, $ dereferences a variable: for example, $my_ip expands to the value assigned to my_ip.

# Back up the configuration file
[root@controller ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak

# Strip comments and blank lines from the configuration file
[root@controller ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf

# Edit the configuration file
[root@controller ~]# vi /etc/nova/nova.conf
[DEFAULT]
enable_apis = osapi_compute,metadata
transport_url = rabbit://rabbitmq:000000@controller:5672
my_ip = 192.168.10.10
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[api_database]
connection = mysql+pymysql://nova:000000@controller/nova_api

[database]
connection = mysql+pymysql://nova:000000@controller/nova

[glance]
api_servers = http://controller:9292

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = nova
password = 000000

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = 000000
region_name = RegionOne

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

(4) Initialize the databases

Populate the databases with the table definitions that ship with the installation.

# Initialize the nova_api database
[root@controller ~]# su nova -s /bin/sh -c "nova-manage api_db sync"

# Create the 'cell1' cell, which uses the nova database
[root@controller ~]# su nova -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1"

# Map cell0 so that its table structure stays consistent with the nova database
[root@controller ~]# su nova -s /bin/sh -c "nova-manage cell_v2 map_cell0"

# Initialize the nova database; because of the mapping, cell0 gets the same schema
# (warnings here can be ignored for now)
[root@controller ~]# su nova -s /bin/sh -c "nova-manage db sync"

(5) Verify that the cells are registered correctly

If both cells, cell0 and cell1, are present, everything is normal.

cell0: system management

cell1: cloud server management; newly discovered compute nodes are mapped into it

[root@controller ~]# nova-manage cell_v2 list_cells
+-------+--------------------------------------+----------------------------------------+-------------------------------------------------+----------+
|  Name |                 UUID                 |             Transport URL              |               Database Connection               | Disabled |
+-------+--------------------------------------+----------------------------------------+-------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |                 none:/                 | mysql+pymysql://nova:****@controller/nova_cell0 |  False   |
| cell1 | 83ad6d17-f245-4310-8729-fccaa033edf2 | rabbit://rabbitmq:****@controller:5672 |    mysql+pymysql://nova:****@controller/nova    |  False   |
+-------+--------------------------------------+----------------------------------------+-------------------------------------------------+----------+

2. Initialize and verify Nova on the controller node

(1) Initialize the Nova component

# Import the environment variables to log in
[root@controller ~]# source admin-login

# Create a user named nova in the default domain
[root@controller ~]# openstack user create --domain default --password 000000 nova
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 2f5041ed122d4a50890c34ea02881b47 |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

# Assign the nova user the admin role
[root@controller ~]# openstack role add --project project --user nova admin

# Create the nova service with type compute
[root@controller ~]# openstack service create --name nova compute
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | e7cccf0a4d2549139801ac51bb8546db |
| name    | nova                             |
| type    | compute                          |
+---------+----------------------------------+

# Create the service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne nova public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | c60a9641abbb47b391751c9a0b0d6828 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e7cccf0a4d2549139801ac51bb8546db |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne nova internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 49b042b01ad44784888e65366d61dede |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e7cccf0a4d2549139801ac51bb8546db |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 6dd22acff2ab4c2195cefee39f371cc0 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e7cccf0a4d2549139801ac51bb8546db |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+

[root@controller ~]# openstack endpoint list | grep nova
| 49b042b01ad44784888e65366d61dede | RegionOne | nova         | compute      | True    | internal  | http://controller:8774/v2.1 |
| 6dd22acff2ab4c2195cefee39f371cc0 | RegionOne | nova         | compute      | True    | admin     | http://controller:8774/v2.1 |
| c60a9641abbb47b391751c9a0b0d6828 | RegionOne | nova         | compute      | True    | public    | http://controller:8774/v2.1 |

# Enable the controller-node Nova services at boot
[root@controller ~]# systemctl enable openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-api.service to /usr/lib/systemd/system/openstack-nova-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-scheduler.service to /usr/lib/systemd/system/openstack-nova-scheduler.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-conductor.service to /usr/lib/systemd/system/openstack-nova-conductor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service to /usr/lib/systemd/system/openstack-nova-novncproxy.service.

# Start the Nova services
[root@controller ~]# systemctl start openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy

(2) Verify the controller-node Nova services

Nova listens on ports 8774 and 8775; if those ports are in use, the Nova services are running.

If both nova-conductor and nova-scheduler show State "up" on the controller node, the services are healthy.

# Method 1: check port usage
[root@controller ~]# netstat -nutpl | grep 877
tcp        0      0 0.0.0.0:8774            0.0.0.0:*               LISTEN      2487/python2
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      2487/python2
tcp6       0      0 :::8778                 :::*                    LISTEN      1030/httpd

# Method 2: list the compute services
[root@controller ~]# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
+----+----------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor | controller | internal | enabled | up    | 2022-10-28T10:53:26.000000 |
|  4 | nova-scheduler | controller | internal | enabled | up    | 2022-10-28T10:53:28.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+

3. Install and configure Nova on the compute node

Nova needs the nova-compute module installed on the compute node; every cloud server is spawned on a compute node by this module.

(1) Install the Nova packages

# Copy the Aliyun repo file over from the controller node
[root@compute yum.repos.d]# scp root@192.168.10.10:/etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/
root@192.168.10.10's password:
CentOS-Base.repo                                                        100% 2523     1.1MB/s   00:00
[root@compute yum.repos.d]# ls
CentOS-Base.repo  OpenStack.repo  repo.bak
# Install the Nova compute module
[root@compute yum.repos.d]# yum install -y openstack-nova-compute

# Check the user
[root@compute ~]# cat /etc/passwd | grep nova
nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/sbin/nologin

# Check the groups
[root@compute ~]# cat /etc/group | grep nova
nobody:x:99:nova
qemu:x:107:nova
libvirt:x:987:nova
nova:x:162:nova

(2) Edit the Nova configuration file

Nova's configuration file is /etc/nova/nova.conf; edit it to connect Nova to the databases, Keystone, and the other components.

Main differences from the controller node:

my_ip = 192.168.10.20 (the compute node's own address)

the [libvirt] section adds virt_type = qemu (see the note after the configuration below for how to decide between qemu and kvm)

# Back up the configuration file
[root@compute ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak

# Strip comments and blank lines
[root@compute ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf

# Edit the configuration file
[root@compute ~]# vi /etc/nova/nova.conf
[DEFAULT]
enable_apis = osapi_compute,metadata
transport_url = rabbit://rabbitmq:000000@controller:5672
my_ip = 192.168.10.20
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[glance]
api_servers = http://controller:9292

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = nova
password = 000000

[libvirt]
virt_type = qemu

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = 000000
region_name = RegionOne

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.10.10:6080/vnc_auto.html
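Whether virt_type should be qemu or kvm depends on whether the node's CPU exposes hardware virtualization. The upstream install guides suggest a check like the following (my addition, not part of the original transcript). If it prints 0, the node has no VT-x/AMD-V (typical for nested VMs) and qemu is the right choice; otherwise kvm usually performs better:

# Count hardware-virtualization CPU flags; 0 means use virt_type = qemu
[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo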

(3) Start the Nova service on the compute node

# Enable at boot
[root@compute ~]# systemctl enable libvirtd openstack-nova-compute
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.

# Start
[root@compute ~]# systemctl start libvirtd openstack-nova-compute

# Check the service status from the controller node
[root@controller ~]# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
+----+----------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor | controller | internal | enabled | up    | 2022-10-28T11:19:57.000000 |
|  4 | nova-scheduler | controller | internal | enabled | up    | 2022-10-28T11:19:49.000000 |
|  6 | nova-compute   | compute    | nova     | enabled | up    | 2022-10-28T11:19:56.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+

4. Discover the compute node and verify the service

Every compute node that joins the system must be discovered once from the controller node; only discovered compute nodes get mapped into a cell.

(1) Discover the compute node

Note: run these on the controller node.

# Log in
[root@controller ~]# source admin-login

# As the nova user, discover unregistered compute nodes
# A discovered compute node is associated with cell1 automatically, and can then
# be managed through cell1
[root@controller ~]# su nova -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose"
Found 2 cell mappings.
Getting computes from cell 'cell1': 83ad6d17-f245-4310-8729-fccaa033edf2
Checking host mapping for compute host 'compute': 13af5106-c1c1-4b3f-93f5-cd25e030f39d
Creating host mapping for compute host 'compute': 13af5106-c1c1-4b3f-93f5-cd25e030f39d
Found 1 unmapped computes in cell: 83ad6d17-f245-4310-8729-fccaa033edf2
Skipping cell0 since it does not contain hosts.

# Set up automatic discovery
# 1. Run the discovery command every 60 seconds
[root@controller ~]# vi /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 60
# 2. Restart the nova-api service so the setting takes effect
# (note: the periodic discovery task runs in nova-scheduler, so restarting
# openstack-nova-scheduler may be needed as well)
[root@controller ~]# systemctl restart openstack-nova-api

(2) Verify the Nova service

All on the controller node.

# Method 1: list the compute services
[root@controller ~]# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
+----+----------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor | controller | internal | enabled | up    | 2022-10-28T12:02:46.000000 |
|  4 | nova-scheduler | controller | internal | enabled | up    | 2022-10-28T12:02:38.000000 |
|  6 | nova-compute   | compute    | nova     | enabled | up    | 2022-10-28T12:02:40.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+

# Method 2: list the OpenStack services and endpoints
[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| glance    | image     | RegionOne                               |
|           |           |   internal: http://controller:9292      |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:9292         |
|           |           | RegionOne                               |
|           |           |   public: http://controller:9292        |
|           |           |                                         |
| keystone  | identity  | RegionOne                               |
|           |           |   admin: http://controller:5000/v3      |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:5000/v3   |
|           |           | RegionOne                               |
|           |           |   public: http://controller:5000/v3     |
|           |           |                                         |
| placement | placement | RegionOne                               |
|           |           |   internal: http://controller:8778      |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8778        |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8778         |
|           |           |                                         |
| nova      | compute   | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1 |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1    |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1   |
|           |           |                                         |
+-----------+-----------+-----------------------------------------+

# Method 3: run the Nova status check tool
[root@controller ~]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Cinder API              |
| Result: Success                |
| Details: None                  |
+--------------------------------+

5. Post-install checklist

# 1. Check the nova user and group on the controller node
[root@controller ~]# cat /etc/passwd | grep nova
nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/sbin/nologin
[root@controller ~]# cat /etc/group | grep nova
nobody:x:99:nova
nova:x:162:nova

# 2. Check the nova user and group on the compute node
[root@compute ~]# cat /etc/passwd | grep nova
nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/sbin/nologin
[root@compute ~]# cat /etc/group | grep nova
nobody:x:99:nova
qemu:x:107:nova
libvirt:x:987:nova
nova:x:162:nova

# 3. Check the databases on the controller node
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| nova               |
| nova_api           |
| nova_cell0         |

# 4. Check the nova user's database privileges
MariaDB [(none)]> show grants for nova@'%';
+-----------------------------------------------------------------------------------------------------+
| Grants for nova@%                                                                                   |
+-----------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'nova'@'%' IDENTIFIED BY PASSWORD '*032197AE5731D4664921A6CCAC7CFCE6A0698693' |
| GRANT ALL PRIVILEGES ON `nova`.* TO 'nova'@'%'                                                      |
| GRANT ALL PRIVILEGES ON `nova_api`.* TO 'nova'@'%'                                                  |
| GRANT ALL PRIVILEGES ON `nova_cell0`.* TO 'nova'@'%'                                                |
+-----------------------------------------------------------------------------------------------------+
4 rows in set (0.000 sec)

MariaDB [(none)]> show grants for nova@'localhost';
+-------------------------------------------------------------------------------------------------------------+
| Grants for nova@localhost                                                                                   |
+-------------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'nova'@'localhost' IDENTIFIED BY PASSWORD '*032197AE5731D4664921A6CCAC7CFCE6A0698693' |
| GRANT ALL PRIVILEGES ON `nova`.* TO 'nova'@'localhost'                                                      |
| GRANT ALL PRIVILEGES ON `nova_cell0`.* TO 'nova'@'localhost'                                                |
| GRANT ALL PRIVILEGES ON `nova_api`.* TO 'nova'@'localhost'                                                  |
+-------------------------------------------------------------------------------------------------------------+

# 5. Check that the nova, nova_api, and nova_cell0 database tables were synced
MariaDB [(none)]> use nova
Database changed

MariaDB [nova]> show tables;
+--------------------------------------------+
| Tables_in_nova                             |
+--------------------------------------------+
| agent_builds                               |
| aggregate_hosts                            |
| aggregate_metadata                         |

# 6. Check that the nova user exists
[root@controller ~]# openstack user list
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| f4f16d960e0643d7b5a35db152c87dae | admin     |
| 81238b556a444c8f80cb3d7dc72a24d3 | glance    |
| e0d6a46f9b1744d8a7ab0332ab45d59c | placement |
| 2f5041ed122d4a50890c34ea02881b47 | nova      |

# 7. Check that the nova user has the admin role
[root@controller ~]# openstack role list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| 47670bbd6cc1472ab42db560637c7ebe | reader |
| 5eee0910aeb844a1b82f48100da7adc9 | admin  |
| 700ec993d3cf456fa591c03e72f37856 | user   |
| bc2c8147bbd643629a020a6bd9591eca | member |
+----------------------------------+--------+
[root@controller ~]# openstack role assignment list
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+
| Role                             | User                             | Group | Project                          | Domain | System | Inherited |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+
| 5eee0910aeb844a1b82f48100da7adc9 | 2f5041ed122d4a50890c34ea02881b47 |       | e3a549077f354998aa1a75677cfde62e |        |        | False     |

# 8. Check that the nova service entity was created
[root@controller ~]# openstack service list
+----------------------------------+-----------+-----------+
| ID                               | Name      | Type      |
+----------------------------------+-----------+-----------+
| 324a07034ea4453692570e3edf73cf2c | glance    | image     |
| 5d25b4ed1443497599707e043866eaae | keystone  | identity  |
| da038496edf04ce29d7d3d6b8e647755 | placement | placement |
| e7cccf0a4d2549139801ac51bb8546db | nova      | compute   |

# 9. Check the nova service endpoints
[root@controller ~]# openstack endpoint list | grep nova
| 49b042b01ad44784888e65366d61dede | RegionOne | nova         | compute      | True    | internal  | http://controller:8774/v2.1 |
| 6dd22acff2ab4c2195cefee39f371cc0 | RegionOne | nova         | compute      | True    | admin     | http://controller:8774/v2.1 |
| c60a9641abbb47b391751c9a0b0d6828 | RegionOne | nova         | compute      | True    | public    | http://controller:8774/v2.1 |

# 10. Check that the nova service is running properly
[root@controller ~]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Cinder API              |
| Result: Success                |
| Details: None                  |
+--------------------------------+

  

5: Network service (Neutron) deployment

Neutron creates and manages virtual network devices, including bridges, networks, and ports.

1、Preparing the initial network environment

(1) Set the external NIC to promiscuous mode

In promiscuous mode a NIC captures every packet that passes through it, not only the packets addressed to it. To forward traffic for the virtual networks, Neutron needs the external NIC set to promiscuous mode.

 

# Set the controller node
[root@controller ~]# ifconfig ens34 promisc
[root@controller ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.10  netmask 255.255.255.0  broadcast 192.168.10.255
        ...(output omitted)

ens34: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500    <---- PROMISC has been added

# Set the compute node
[root@compute ~]# ifconfig ens34 promisc
[root@compute ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.20  netmask 255.255.255.0  broadcast 192.168.10.255
        ...(output omitted)

ens34: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500

 

If PROMISC appears in the NIC flags, promiscuous mode was set successfully and every packet passing through the NIC can be received by it.

Next, make promiscuous mode take effect automatically after a reboot (the commands below append the setting to /etc/profile, which runs when a user logs in):

# Run on the controller node
[root@controller ~]# echo 'ifconfig ens34 promisc' >> /etc/profile
[root@controller ~]# tail -1 /etc/profile
ifconfig ens34 promisc

# Run on the compute node
[root@compute ~]# echo 'ifconfig ens34 promisc' >> /etc/profile
[root@compute ~]# tail -1 /etc/profile
ifconfig ens34 promisc
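Because /etc/profile only runs at login, the NIC stays in normal mode from boot until the first login. A more reliable alternative is a small systemd unit; this is a suggested sketch, not part of the original steps, and the unit name promisc-ens34.service is arbitrary:

# /etc/systemd/system/promisc-ens34.service (create on both nodes)
[Unit]
Description=Enable promiscuous mode on ens34
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ip link set dev ens34 promisc on

[Install]
WantedBy=multi-user.target

# Enable the unit so it runs at every boot
[root@controller ~]# systemctl daemon-reload
[root@controller ~]# systemctl enable promisc-ens34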

(2) Load the bridge netfilter module

Netfilter is a software framework in the Linux kernel for managing network packets; it performs network address translation and can also modify and filter packets.

 

# 1. Modify the kernel parameter configuration file
# On the controller node
[root@controller ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
[root@controller ~]# tail -n 2 /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

# On the compute node
[root@compute ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
> net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
[root@compute ~]# tail -n 2 /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

# 2. Load the br_netfilter module on each node
[root@controller ~]# modprobe br_netfilter
[root@compute ~]# modprobe br_netfilter

# 3. Apply and check the parameters on each node
[root@controller ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

[root@compute ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
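Note that modprobe loads the module only for the current boot. A minimal way to make the load persistent on a systemd distribution such as CentOS 7 (the file name neutron.conf under /etc/modules-load.d/ is an arbitrary choice):

# Run on both the controller and compute nodes
[root@controller ~]# echo 'br_netfilter' > /etc/modules-load.d/neutron.conf
[root@compute ~]# echo 'br_netfilter' > /etc/modules-load.d/neutron.conf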

2、Installing and configuring the Neutron service on the controller node

(1) Install the Neutron packages

openstack-neutron: the neutron-server package

openstack-neutron-ml2: the ML2 plug-in package

openstack-neutron-linuxbridge: the Linux bridge and network provider packages

# Install the packages
# dnsmasq-utils-2.76-17.el7_9.3.x86_64.rpm is missing from the Aliyun repository, so download it first
[root@controller ~]# yum install -y wget
[root@controller ~]# wget http://mirror.centos.org/centos/7/updates/x86_64/Packages/dnsmasq-utils-2.76-17.el7_9.3.x86_64.rpm
[root@controller ~]# ls
admin-login      cirros-0.5.1-x86_64-disk.img    dnsmasq-utils-2.76-17.el7_9.3.x86_64.rpm
[root@controller ~]# rpm -ivh dnsmasq-utils-2.76-17.el7_9.3.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:dnsmasq-utils-2.76-17.el7_9.3    ################################# [100%]

[root@controller ~]# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge

# Check the neutron user
[root@controller ~]# cat /etc/passwd | grep neutron
neutron:x:990:987:OpenStack Neutron Daemons:/var/lib/neutron:/sbin/nologin

# Check the neutron group
[root@controller ~]# cat /etc/group | grep neutron
neutron:x:987:

(2) Create the neutron database and grant privileges

# 1. Log in and create the database
[root@controller ~]# mysql -uroot -p000000
MariaDB [(none)]> create database neutron;
Query OK, 1 row affected (0.000 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| neutron            |

# 2. Grant the neutron user local and remote administrative access to the database
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.000 sec)

(3) Modify the Neutron configuration files

1. Configure the Neutron component

Modify the [DEFAULT] and [keystone_authtoken] sections to enable interaction with Keystone.

Modify the [database] section to configure the database connection.

Modify the [DEFAULT] section to configure the message queue connection, the core plug-in, and related options.

Modify [oslo_concurrency] to configure the lock path.

# Back up the configuration file
[root@controller ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
# Strip comments and blank lines from the configuration file
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf

# Edit the configuration file
[root@controller ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
transport_url = rabbit://rabbitmq:000000@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:000000@controller/neutron

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = neutron
password = 000000

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = project
username = nova
password = 000000
region_name = RegionOne
server_proxyclient_address = 192.168.10.10
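These edits can also be scripted instead of made interactively in vi. A sketch using the crudini utility, assuming it has been installed (yum install -y crudini); two options are shown and the rest follow the same pattern:

[root@controller ~]# crudini --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
[root@controller ~]# crudini --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:000000@controller/neutron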

2. Modify the Layer 2 module (ML2) plug-in configuration file

ML2 is Neutron's core plug-in.

# Back up the configuration file
[root@controller ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
# Strip comments and blank lines from the configuration file
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini

# Edit the configuration file
[root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,local,vlan,gre,vxlan,geneve
tenant_network_types = local,flat
mechanism_drivers = linuxbridge
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[securitygroup]
enable_ipset = true

# Create the mapping link that enables the ML2 plug-in
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@controller ~]# ll /etc/neutron/
lrwxrwxrwx  1 root root       37 Nov  4 20:01 plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini

3. Modify the Linux bridge agent configuration file

The ML2 configuration file set the mechanism driver mechanism_drivers to linuxbridge, so the bridge agent itself must now be configured.

# 1. Back up the configuration file
[root@controller ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak

# 2. Strip comments and blank lines
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@controller ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]

# 3. Edit the configuration file
[root@controller ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[linux_bridge]
physical_interface_mappings = provider:ens34

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

4. Modify the DHCP agent configuration file

The DHCP agent provides automatic IP address assignment for cloud instances.

# 1. Back up the configuration file and strip blank lines and comments
[root@controller ~]# cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini
[root@controller ~]# cat /etc/neutron/dhcp_agent.ini
[DEFAULT]

# 2. Edit the configuration file
[root@controller ~]# vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

5. Modify the metadata agent configuration file

Instances on the compute node need to exchange metadata with the nova-api module on the controller node while they run; this interaction goes through neutron-metadata-agent.

# 1. Back up the configuration file and strip blank lines and comments
[root@controller ~]# cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini
[root@controller ~]# cat /etc/neutron/metadata_agent.ini
[DEFAULT]
[cache]

# 2. Edit the configuration file
[root@controller ~]# vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
[cache]
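METADATA_SECRET is a placeholder shared secret. In a real deployment you would generate a random string and use the identical value in the Nova configuration in the next step, for example:

# Generate a random shared secret (output differs on every run)
[root@controller ~]# openssl rand -hex 10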

6. Modify the Nova configuration file

Nova sits at the core of the cloud platform, so its configuration file must state how to interact with Neutron.

# Note which file this is appended to
[root@controller ~]# echo '
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = project
username = neutron
password = 000000
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
' >> /etc/nova/nova.conf

(4) Initialize the database

Synchronize the neutron database, populating it with the table definitions shipped with the installation.

# Synchronize the database
[root@controller neutron]# su neutron -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head"

# Verify the database
[root@controller neutron]# mysql -uroot -p000000
MariaDB [(none)]> use neutron;
Database changed
MariaDB [neutron]> show tables;
+-----------------------------------------+
| Tables_in_neutron                       |
+-----------------------------------------+
| address_scopes                          |
| agents                                  |
| alembic_version                         |
| allowedaddresspairs                     |
| arista_provisioned_nets                 |
| arista_provisioned_tenants              |
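If the synchronization succeeded, the neutron database contains a large number of tables. A quick way to count them (an optional convenience check, not part of the original steps):

[root@controller ~]# mysql -uroot -p000000 -N -e 'SHOW TABLES FROM neutron;' | wc -l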

3、Initializing the Neutron component

All of these tasks are performed on the controller node.

(1) Create the neutron user and assign it a role

# Load the admin credentials
[root@controller ~]# source admin-login

# Create the neutron user in the default domain
[root@controller ~]# openstack user create --domain default --password 000000 neutron
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 67bd1f9c48174e3e96bb41e0f76687ca |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

# Assign the admin role to the neutron user
[root@controller ~]# openstack role add --project project --user neutron admin

# Verify
[root@controller ~]# openstack role assignment list
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+
| Role                             | User                             | Group | Project                          | Domain | System | Inherited |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+
| 5eee0910aeb844a1b82f48100da7adc9 | 2f5041ed122d4a50890c34ea02881b47 |       | e3a549077f354998aa1a75677cfde62e |        |        | False     |
| 5eee0910aeb844a1b82f48100da7adc9 | 67bd1f9c48174e3e96bb41e0f76687ca |       | e3a549077f354998aa1a75677cfde62e |        |        | False     |

(2) Create the neutron service and its service endpoints

# Create the neutron service with type network
[root@controller ~]# openstack service create --name neutron network
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | 459c365a11c74e5894b718b5406022a8 |
| name    | neutron                          |
| type    | network                          |
+---------+----------------------------------+

# Create the three service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne neutron public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 1d59d497c89c4fa9b8789d685fab9fe5 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 459c365a11c74e5894b718b5406022a8 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne neutron internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 44de22606819441aa845b370a9304bf5 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 459c365a11c74e5894b718b5406022a8 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne neutron admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 75e7eaf8bc664a2c901b7ad58141bedc |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 459c365a11c74e5894b718b5406022a8 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+

(3) Start the Neutron services on the controller node

Because the Nova configuration file was modified, restart the Nova service before starting the Neutron services.

# Restart the Nova service
[root@controller ~]# systemctl restart openstack-nova-api

# Enable the Neutron services at boot
[root@controller ~]# systemctl enable neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service.

[root@controller neutron]# systemctl start neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent

4、Checking the Neutron services on the controller node

# Method 1: check which process occupies port 9696
[root@controller neutron]# netstat -tnlup|grep 9696
tcp        0      0 0.0.0.0:9696            0.0.0.0:*               LISTEN      4652/server.log

# Method 2: query the service endpoint
[root@controller neutron]# curl http://controller:9696
{"versions": [{"status": "CURRENT", "id": "v2.0", "links": [{"href": "http://controller:9696/v2.0/", "rel": "self"}]}]}

# Method 3: check the service status
# Loaded: enabled means the service is set to start at boot
# Active: active (running) means the service is currently running
[root@controller neutron]# systemctl status neutron-server
● neutron-server.service - OpenStack Neutron Server
   Loaded: loaded (/usr/lib/systemd/system/neutron-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2022-11-11 16:31:20 CST; 5min ago
 Main PID: 4652 (/usr/bin/python)

5、Installing and configuring the Neutron service on the compute node

All steps are performed on the compute node.

(1) Install the Neutron package

# Install the package on the compute node; it includes the bridge and network provider software
[root@compute ~]# yum install -y openstack-neutron-linuxbridge

# Check the neutron user and group
[root@compute ~]# cat /etc/passwd | grep neutron
neutron:x:989:986:OpenStack Neutron Daemons:/var/lib/neutron:/sbin/nologin
[root@compute ~]# cat /etc/group | grep neutron
neutron:x:986:

(2) Modify the Neutron configuration files

The Neutron component, the Linux bridge agent, and the Nova component all need to be configured.

1. The Neutron configuration file

# Back up the configuration file
[root@compute ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
# Strip blank lines and comments
[root@compute ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
[root@compute ~]# cat /etc/neutron/neutron.conf
[DEFAULT]
[cors]
[database]
[keystone_authtoken]

# Edit the Neutron configuration file
[root@compute ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://rabbitmq:000000@controller:5672
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = project
username = neutron
password = 000000

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

2. The Linux bridge agent configuration file

# Back up the bridge agent configuration file and strip blank lines and comments
[root@compute ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
[root@compute ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

# Edit the bridge agent configuration file
[root@compute ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[linux_bridge]
physical_interface_mappings = provider:ens34

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

3. The Nova configuration file

# In the Nova configuration file, add two lines to the [DEFAULT] section and add a [neutron] section
[root@compute ~]# vi /etc/nova/nova.conf
[DEFAULT]
enable_apis = osapi_compute,metadata
transport_url = rabbit://rabbitmq:000000@controller:5672
my_ip = 192.168.10.20
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
vif_plugging_is_fatal = false
vif_plugging_timeout = 0

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = project
username = neutron
password = 000000

(3) Start the Neutron service on the compute node

[root@compute ~]# systemctl restart openstack-nova-compute
[root@compute ~]# systemctl enable neutron-linuxbridge-agent
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
[root@compute ~]# systemctl start neutron-linuxbridge-agent

6、Checking the Neutron services

There are two ways to check the running state of the Neutron components; both are executed on the controller node.

# Method 1: list the network agents
# Four records should be returned, all in the UP state
[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 0e2c0f8f-8fa7-4b64-8df2-6f1aedaa7c2b | Linux bridge agent | compute    | None              | :-)   | UP    | neutron-linuxbridge-agent |
| c6688165-593d-4c5e-b25c-5ff2b6c75866 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
| dc335348-5639-40d1-b121-3abfc9aefc8e | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| ddc49378-aea8-4f2e-b1b4-568fa4c85038 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

# Method 2: run the Neutron status check tool
[root@controller ~]# neutron-status upgrade check
+---------------------------------------------------------------------+
| Upgrade Check Results                                               |
+---------------------------------------------------------------------+
| Check: Gateway external network                                     |
| Result: Success                                                     |
| Details: L3 agents can use multiple networks as external gateways.  |
+---------------------------------------------------------------------+
| Check: External network bridge                                      |
| Result: Success                                                     |
| Details: L3 agents are using integration bridge to connect external |
|   gateways                                                          |
+---------------------------------------------------------------------+
| Check: Worker counts configured                                     |
| Result: Warning                                                     |
| Details: The default number of workers has changed. Please see      |
|   release notes for the new values, but it is strongly              |
|   encouraged for deployers to manually set the values for           |
|   api_workers and rpc_workers.                                      |
+---------------------------------------------------------------------+

7、Installation completion checks

# 1. The external NIC on the controller node is in promiscuous mode (PROMISC)
[root@controller ~]# ip a
3: ens34: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

# 2. The external NIC on the compute node is in promiscuous mode (PROMISC)
[root@compute ~]# ip a
3: ens34: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

# 3. The neutron user and group were created on the controller node
[root@controller ~]# cat /etc/passwd | grep neutron
neutron:x:990:987:OpenStack Neutron Daemons:/var/lib/neutron:/sbin/nologin
[root@controller ~]# cat /etc/group | grep neutron
neutron:x:987:

# 4. The neutron user and group were created on the compute node
[root@compute ~]# cat /etc/passwd | grep neutron
neutron:x:989:986:OpenStack Neutron Daemons:/var/lib/neutron:/sbin/nologin
[root@compute ~]# cat /etc/group | grep neutron
neutron:x:986:

# 5. The neutron database was created on the controller node
[root@controller ~]# mysql -uroot -p000000
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| neutron            |

# 6. Check the neutron user's privileges on the database
MariaDB [(none)]> show grants for neutron;
+--------------------------------------------------------------------------------------------------------+
| Grants for neutron@%                                                                                   |
+--------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'neutron'@'%' IDENTIFIED BY PASSWORD '*032197AE5731D4664921A6CCAC7CFCE6A0698693' |
| GRANT ALL PRIVILEGES ON `neutron`.* TO 'neutron'@'%'                                                   |
+--------------------------------------------------------------------------------------------------------+

# 7. Check that the neutron database tables were synchronized
MariaDB [(none)]> use neutron;
Database changed
MariaDB [neutron]> show tables;
+-----------------------------------------+
| Tables_in_neutron                       |
+-----------------------------------------+
| address_scopes                          |
| agents                                  |
| alembic_version                         |
| allowedaddresspairs                     |
| arista_provisioned_nets                 |
| arista_provisioned_tenants              |

# 8. Check the openstack user list
[root@controller ~]# openstack user list | grep neutron
| 67bd1f9c48174e3e96bb41e0f76687ca | neutron   |

# 9. Check that the neutron user has the admin role
[root@controller ~]# openstack role list | grep admin
| 5eee0910aeb844a1b82f48100da7adc9 | admin  |
[root@controller ~]# openstack role assignment list | grep 67bd1f9c48174e3e96bb41e0f76687ca
| 5eee0910aeb844a1b82f48100da7adc9 | 67bd1f9c48174e3e96bb41e0f76687ca |       | e3a549077f354998aa1a75677cfde62e |        |        | False     |

# 10. Check that the neutron service entity was created
[root@controller ~]# openstack service list
+----------------------------------+-----------+-----------+
| ID                               | Name      | Type      |
+----------------------------------+-----------+-----------+
| 324a07034ea4453692570e3edf73cf2c | glance    | image     |
| 459c365a11c74e5894b718b5406022a8 | neutron   | network   |

# 11. Check that the three neutron endpoints were created
[root@controller ~]# openstack endpoint list | grep neutron
| 1d59d497c89c4fa9b8789d685fab9fe5 | RegionOne | neutron      | network      | True    | public    | http://controller:9696      |
| 44de22606819441aa845b370a9304bf5 | RegionOne | neutron      | network      | True    | internal  | http://controller:9696      |
| 75e7eaf8bc664a2c901b7ad58141bedc | RegionOne | neutron      | network      | True    | admin     | http://controller:9696      |

# 12. List the network agents to check that the services are running
[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 0e2c0f8f-8fa7-4b64-8df2-6f1aedaa7c2b | Linux bridge agent | compute    | None              | :-)   | UP    | neutron-linuxbridge-agent |
| c6688165-593d-4c5e-b25c-5ff2b6c75866 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
| dc335348-5639-40d1-b121-3abfc9aefc8e | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| ddc49378-aea8-4f2e-b1b4-568fa4c85038 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

6: Dashboard service (Horizon) deployment

The dashboard provides a graphical interface for managing the OpenStack platform; it is essentially a web front-end console.

1、Installing the dashboard package

Install the dashboard package on the compute node.

[root@compute ~]# yum install -y openstack-dashboard 

2、Modifying the Horizon configuration file

# Edit the Horizon configuration file
[root@compute ~]# vi /etc/openstack-dashboard/local_settings
# Location of the controller node
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
# Enable multi-domain support
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
# Configure the API versions
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3
}
# Default domain for users created through the dashboard
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
# Default role for users created through the dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# Configure the Layer 2 network options
OPENSTACK_NEUTRON_NETWORK = {
    'enable_auto_allocated_network': False,
    'enable_distributed_router': False,
    'enable_fip_topology_check': False,
    'enable_ha_router': False,
    'enable_ipv6': False,
    'enable_quotas': False,
    'enable_rbac_policy': False,
    'enable_router': False,
    'default_dns_nameservers': [],
    'supported_provider_types': ['*'],
    'segmentation_id_range': {},
    'extra_provider_types': {},
    'supported_vnic_types': ['*'],
    'physical_networks': [],
}
# Set the time zone
TIME_ZONE = "Asia/Shanghai"

# Allow access from any host
ALLOWED_HOSTS = ['*']

# Use the memcached service for caching
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    },
}
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
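local_settings is an ordinary Python module, so a typo in it breaks the dashboard at request time. An optional sanity check that the file at least parses:

[root@compute ~]# python -m py_compile /etc/openstack-dashboard/local_settings && echo 'syntax OK'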

3、Rebuilding the dashboard configuration file for Apache

The dashboard is a web application and must run under a server such as Apache, so Apache has to be told how to run the service.

# Enter the dashboard site directory
[root@compute ~]# cd /usr/share/openstack-dashboard/
[root@compute openstack-dashboard]# ls
manage.py  manage.pyc  manage.pyo  openstack_dashboard  static

# Generate the dashboard web service file
[root@compute openstack-dashboard]# python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf
[root@compute openstack-dashboard]# cat /etc/httpd/conf.d/openstack-dashboard.conf
<VirtualHost *:80>
    ServerAdmin webmaster@openstack.org
    ServerName  openstack_dashboard
    DocumentRoot /usr/share/openstack-dashboard/
    LogLevel warn
    ErrorLog /var/log/httpd/openstack_dashboard-error.log
    CustomLog /var/log/httpd/openstack_dashboard-access.log combined
    WSGIScriptReloading On
    WSGIDaemonProcess openstack_dashboard_website processes=3
    WSGIProcessGroup openstack_dashboard_website
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi.py
    <Location "/">
        Require all granted
    </Location>
    Alias /static /usr/share/openstack-dashboard/static
    <Location "/static">
        SetHandler None
    </Location>
</Virtualhost>

This generates a configuration file for the dashboard in Apache's configuration directory.

4、Creating symbolic links for the policy files

The /etc/openstack-dashboard directory holds the default policies the dashboard uses when interacting with the other components.

# View the default interaction policies
[root@compute ~]# cd /etc/openstack-dashboard/
[root@compute openstack-dashboard]# ls
cinder_policy.json  keystone_policy.json  neutron_policy.json  nova_policy.json
glance_policy.json  local_settings        nova_policy.d

# Link the policies into the dashboard project so they take effect
[root@compute openstack-dashboard]# ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf
[root@compute openstack-dashboard]# ll /usr/share/openstack-dashboard/openstack_dashboard/
total 240
drwxr-xr-x  3 root root  4096 Nov 18 15:00 api
lrwxrwxrwx  1 root root    24 Nov 18 15:33 conf -> /etc/openstack-dashboard

5、Starting the service and verifying it

# Enable Apache at boot and restart it
[root@compute ~]# systemctl enable httpd
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@compute ~]# systemctl restart httpd

Browse to the compute node's IP address to open the dashboard.

Log in with user name admin and password 000000.
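The dashboard can also be probed from the command line before opening a browser; an HTTP 200 response or a redirect to the login page means Apache is serving it (192.168.10.20 is the compute node address used throughout this document):

[root@controller ~]# curl -I http://192.168.10.20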

7: Block storage service (Cinder) deployment

The Cinder service is deployed and configured on both the controller node and the compute node.

1、Installing and configuring Cinder on the controller node

(1) Install the Cinder packages

The openstack-cinder package contains the cinder-api and cinder-scheduler modules.

# Install the Cinder packages
[root@controller ~]# yum install -y openstack-cinder

# Check the cinder user and group
[root@controller ~]# cat /etc/passwd | grep cinder
cinder:x:165:165:OpenStack Cinder Daemons:/var/lib/cinder:/sbin/nologin
[root@controller ~]# cat /etc/group | grep cinder
nobody:x:99:nova,cinder
cinder:x:165:cinder

(2) Create the cinder database and grant privileges

# Log in to the database
[root@controller ~]# mysql -uroot -p000000

# Create the cinder database
MariaDB [(none)]> CREATE DATABASE cinder;
Query OK, 1 row affected (0.004 sec)

# Grant the cinder user local and remote access
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.007 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.000 sec)

(3) Modify the Cinder configuration file

# Back up the configuration file
[root@controller ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
# Strip blank lines and comments
[root@controller ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf

# Edit the configuration file
[root@controller ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://rabbitmq:000000@controller:5672

[database]
connection = mysql+pymysql://cinder:000000@controller/cinder

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
username = cinder
password = 000000
project_name = project
user_domain_name = Default
project_domain_name = Default

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

(4) Modify the Nova configuration file

Cinder interacts with Nova, so the Nova configuration file needs a [cinder] section.

[root@controller ~]# vi /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne

(5) Initialize the cinder database

# Run the initialization, synchronizing the database
[root@controller ~]# su cinder -s /bin/sh -c "cinder-manage db sync"
Deprecated: Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT".

# Verify by listing the tables in the cinder database
MariaDB [cinder]> show tables;
+----------------------------+
| Tables_in_cinder           |
+----------------------------+
| attachment_specs           |
| backup_metadata            |
| backups                    |

(6) Create the cinder user and assign it a role

# Load the admin credentials
[root@controller ~]# source admin-login

# Create the cinder user on the platform
[root@controller ~]# openstack user create --domain default --password 000000 cinder
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | b9a2bdfcbf3b445ab0db44c9e35af678 |
| name                | cinder                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

# Assign the admin role to the cinder user
[root@controller ~]# openstack role add --project project --user cinder admin

(7) Create the Cinder service and endpoints

# Create the service
[root@controller ~]# openstack service create --name cinderv3 volumev3
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | 90dc0dcf9879493d98144b481ea0df2b |
| name    | cinderv3                         |
| type    | volumev3                         |
+---------+----------------------------------+

# Create the service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%(project_id)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 6bb167be751241d1922a81b6b4c18898         |
| interface    | public                                   |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 90dc0dcf9879493d98144b481ea0df2b         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%(project_id)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | e8ad2286c57443a5970e9d17ca33076a         |
| interface    | internal                                 |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 90dc0dcf9879493d98144b481ea0df2b         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%(project_id)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | dd6d3b221e244cd5a5bb6a2b33159c1d         |
| interface    | admin                                    |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 90dc0dcf9879493d98144b481ea0df2b         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
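One shell pitfall with these endpoint URLs: unquoted parentheses are a syntax error in bash. If the commands above fail with "unexpected token `('", quote the URL (or escape the parentheses), for example:

[root@controller ~]# openstack endpoint create --region RegionOne volumev3 public 'http://controller:8776/v3/%(project_id)s'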

(8) Start the Cinder services

# Restart the Nova service (its configuration file was changed)
[root@controller ~]# systemctl restart openstack-nova-api

# Enable the services at boot
[root@controller ~]# systemctl enable openstack-cinder-api openstack-cinder-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-api.service to /usr/lib/systemd/system/openstack-cinder-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-scheduler.service to /usr/lib/systemd/system/openstack-cinder-scheduler.service.

# Start them immediately
[root@controller ~]# systemctl start openstack-cinder-api openstack-cinder-scheduler

(9) Check the Cinder services on the controller node

# Method 1: check which process occupies port 8776
[root@controller ~]# netstat -nutpl | grep 8776
tcp        0      0 0.0.0.0:8776            0.0.0.0:*               LISTEN      15517/python2

# Method 2: check the volume service list; the service should be in the up state
[root@controller ~]# openstack volume service list
+------------------+------------+------+---------+-------+----------------------------+
| Binary           | Host       | Zone | Status  | State | Updated At                 |
+------------------+------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up    | 2022-11-18T11:08:47.000000 |
+------------------+------------+------+---------+-------+----------------------------+

2、Setting up the storage node

(1) Add a disk to the compute node

 

(2) Create a volume group

Cinder uses LVM to manage block devices (volumes).

# 1. Check how the system disks are attached
[root@compute ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   40G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   39G  0 part
  ├─centos-root 253:0    0   35G  0 lvm  /
  └─centos-swap 253:1    0    4G  0 lvm  [SWAP]
sdb               8:16   0   40G  0 disk     <---- sdb has no partitions and is not mounted
sr0              11:0    1 1024M  0 rom

# 2. Create the LVM physical volume and volume group
# 2.1 Initialize the disk as a physical volume
[root@compute ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.

# 2.2 Combine physical volumes into a volume group
# Format: vgcreate <volume group name> <physical volumes...>
[root@compute ~]# vgcreate cinder-volumes /dev/sdb
  Volume group "cinder-volumes" successfully created

# 2.3 Adjust the LVM configuration
# In the devices section of the configuration file, add a filter that accepts only /dev/sdb
# a means accept, r means reject
[root@compute ~]# vi /etc/lvm/lvm.conf
devices {
        filter = ["a/sdb/","r/.*/"]

# 3. Start the LVM metadata service
[root@compute ~]# systemctl enable lvm2-lvmetad
[root@compute ~]# systemctl start lvm2-lvmetad
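The physical volume and volume group can be confirmed with the standard LVM reporting commands:

# Both commands should list the new objects backed by /dev/sdb
[root@compute ~]# pvs /dev/sdb
[root@compute ~]# vgs cinder-volumes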

3、Installing and configuring the storage node

All steps are performed on the compute node.

(1) Install the Cinder-related packages

openstack-cinder: the Cinder package

targetcli: a command-line tool for managing Linux storage resources

python-keystone: the plug-in for connecting to Keystone

[root@compute ~]# yum install -y openstack-cinder targetcli python-keystone 

(2) Modify the configuration file

# Back up the configuration file
[root@compute ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
# Strip blank lines and comments
[root@compute ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf

# Edit the configuration file
# The value of volume_group must match the group created in "Create a volume group": cinder-volumes
[root@compute ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://rabbitmq:000000@controller:5672
enabled_backends = lvm
glance_api_servers = http://controller:9292

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm

[database]
connection = mysql+pymysql://cinder:000000@controller/cinder

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
username = cinder
password = 000000
project_name = project
user_domain_name = Default
project_domain_name = Default

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

(3) Start the Cinder service on the storage node

[root@compute ~]# systemctl enable openstack-cinder-volume target
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-volume.service to /usr/lib/systemd/system/openstack-cinder-volume.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service to /usr/lib/systemd/system/target.service.
[root@compute ~]# systemctl start openstack-cinder-volume target

4、Checking the Cinder service

# Method 1: check the volume service list
# Shows the state of each Cinder module
[root@controller ~]# openstack volume service list
+------------------+-------------+------+---------+-------+----------------------------+
| Binary           | Host        | Zone | Status  | State | Updated At                 |
+------------------+-------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller  | nova | enabled | up    | 2022-11-18T12:15:46.000000 |
| cinder-volume    | compute@lvm | nova | enabled | up    | 2022-11-18T12:15:43.000000 |
+------------------+-------------+------+---------+-------+----------------------------+

# Method 2: check the volume status in the dashboard
# 1. A Volumes entry appears in the left navigation bar
# 2. The project overview shows three pie charts: volumes, volume snapshots, and volume storage

5、Creating a volume with Cinder

(1) Create a volume from the command line

Run the commands on the controller node.

[root@controller ~]# openstack volume create --size 8 volume1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2022-11-25T06:26:14.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 690449e4-f950-4949-a0d4-7184226a2447 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | volume1                              |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 8                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | __DEFAULT__                          |
| updated_at          | None                                 |
| user_id             | f4f16d960e0643d7b5a35db152c87dae     |
+---------------------+--------------------------------------+

[root@controller ~]# openstack volume list
+--------------------------------------+---------+-----------+------+-------------+
| ID                                   | Name    | Status    | Size | Attached to |
+--------------------------------------+---------+-----------+------+-------------+
| 690449e4-f950-4949-a0d4-7184226a2447 | volume1 | available |    8 |             |
+--------------------------------------+---------+-----------+------+-------------+
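Once an instance exists, the volume can be attached to it from the controller node. A sketch assuming a hypothetical instance named vm1 (no instance has been created at this point in the document):

# Attach volume1 to the instance vm1; its status then changes from available to in-use
[root@controller ~]# openstack server add volume vm1 volume1
[root@controller ~]# openstack volume list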

(2) Create a volume from the dashboard