I. Introduction to Nova
Nova is the core service of OpenStack, responsible for maintaining and managing the compute resources of the cloud environment. OpenStack is an IaaS cloud operating system, and virtual machine lifecycle management is implemented through Nova.
Purpose and functions:
1) Instance lifecycle management
2) Compute resource management
3) Network and authentication management
4) REST-style API
5) Asynchronous, eventually consistent communication
6) Hypervisor transparency: supports Xen, XenServer/XCP, KVM, UML, VMware vSphere and Hyper-V
As the architecture diagram shows, Nova sits at the center of the OpenStack architecture, and the other components all support it: Glance provides images for VMs; Cinder and Swift provide VMs with block storage and object storage respectively; and Neutron provides VMs with network connectivity.
The Nova architecture is shown in the diagram below.
Nova's architecture is fairly complex and contains many components. These components run as sub-services (background daemon processes) and can be divided into the following categories:
1. API
nova-api is the gateway to the entire Nova component; it receives and responds to clients' API calls. All requests to Nova are first handled by nova-api. nova-api exposes a number of HTTP REST API endpoints to the outside world, and in keystone we can query the endpoints of nova-api.
Clients can send requests to the address specified by an endpoint to ask nova-api to perform operations. Of course, as end users we do not send REST API requests directly; the OpenStack CLI, the Dashboard, and other components that need to interact with Nova use these APIs.
nova-api processes each incoming HTTP API request as follows:
(1) Check that the parameters passed in by the client are legal and valid
(2) Call other Nova sub-services to handle the client's HTTP request
(3) Format the results returned by the other Nova sub-services and return them to the client
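To make this concrete, here is a minimal sketch (using Python's requests library; the endpoint address and token are assumed placeholder values, not taken from a real deployment) of the kind of REST call that the CLI or Dashboard issues against nova-api:

import requests

NOVA_ENDPOINT = "http://controller:8774/v2.1"  # from the keystone catalog (assumed)
TOKEN = "gAAAA..."                             # a token previously issued by keystone (placeholder)

# List the servers in the current project. nova-api validates the request,
# delegates the real work to the other nova sub-services, and formats the reply.
resp = requests.get(NOVA_ENDPOINT + "/servers",
                    headers={"X-Auth-Token": TOKEN, "Accept": "application/json"})
resp.raise_for_status()
for server in resp.json()["servers"]:
    print(server["id"], server["name"])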
Which requests does nova-api accept?
Simply put, nova-api can respond to any operation related to the virtual machine lifecycle, and most of those operations can be found in the Dashboard. Open the Instance management page to see them.
Besides OpenStack's own API, nova-api also supports the Amazon EC2 API. In other words, if a customer previously used Amazon EC2 and built tools around the EC2 API to manage their VMs, those tools can be migrated to OpenStack without modification, because nova-api is compatible with the EC2 API.
2. Compute Core
(1) nova-scheduler:
The VM scheduling service, which decides on which compute node a VM runs. When creating an instance, the user states resource requirements, for example how much CPU, memory, and disk is needed. OpenStack defines these requirements in a flavor; the user only needs to specify which flavor to use.
Available flavors are managed under System -> Flavors.
The following describes how nova-scheduler performs scheduling. In /etc/nova/nova.conf, Nova configures nova-scheduler through the driver parameter:
driver = filter_scheduler
Filter scheduler
The filter scheduler is nova-scheduler's default scheduler, and it schedules in two steps:
1) Use filters to select the compute nodes (running nova-compute) that satisfy the requirements
2) Use weighting to pick the best (highest-weighted) compute node on which to create the instance.
Nova also allows third-party schedulers; just configure scheduler_driver. This once again shows OpenStack's openness. The scheduler can apply multiple filters in sequence, and the nodes that survive filtering are then weighted to select the most suitable one.
An example of the scheduling process:
1) At the start there are six compute nodes, Host1-Host6
2) Several filters are applied in turn; Host2 and Host4 fail and are eliminated
3) Host1, Host3, Host5 and Host6 are weighted; Host5 scores highest and is selected
When the filter scheduler performs scheduling, it asks each filter to judge every compute node, and the filter returns True or False. After the chain of filters has run, nova-scheduler is left with the compute nodes capable of hosting the instance.
If several compute nodes pass the filters, which one is finally chosen?
The scheduler scores every remaining compute node, and the highest score wins. Scoring is the weighting step, so what does the scheduler compute the weight from?
Currently nova-scheduler's default implementation computes the weight from a compute node's free memory: the more free memory, the higher the weight, so the instance is deployed to the compute node with the most free memory.
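As a simplified illustration (plain Python, not nova's actual filter and weigher classes), the two-step logic looks roughly like this: drop every host that fails any filter, then pick the highest-weighted survivor, weighting by free RAM as the default weigher does:

def ram_filter(host, flavor):
    # The host must have enough free RAM for the requested flavor.
    return host["free_ram_mb"] >= flavor["ram_mb"]

def core_filter(host, flavor):
    # The host must have enough free vCPUs.
    return host["free_vcpus"] >= flavor["vcpus"]

def schedule(hosts, flavor, filters=(ram_filter, core_filter)):
    # Step 1: keep only the hosts that pass every filter.
    candidates = [h for h in hosts if all(f(h, flavor) for f in filters)]
    if not candidates:
        raise RuntimeError("No valid host was found")
    # Step 2: weigh the survivors; more free RAM -> higher weight.
    return max(candidates, key=lambda h: h["free_ram_mb"])

hosts = [
    {"name": "Host1", "free_ram_mb": 2048, "free_vcpus": 4},
    {"name": "Host2", "free_ram_mb": 512,  "free_vcpus": 1},
    {"name": "Host5", "free_ram_mb": 8192, "free_vcpus": 8},
]
flavor = {"ram_mb": 1024, "vcpus": 2}
print(schedule(hosts, flavor)["name"])   # -> Host5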
(2) nova-compute:
nova-compute is the core service that manages virtual machines, and it runs on the compute nodes. It implements lifecycle management of the instances on its node by calling the Hypervisor API. Every operation OpenStack performs on an instance is ultimately carried out by nova-compute; nova-compute and the Hypervisor together implement OpenStack's management of the instance lifecycle.
By default, OpenStack stores virtual machines under /var/lib/nova/instances.
Supporting multiple hypervisors through a driver architecture
The hypervisor is the virtualization manager running on a compute node, the lowest-level program for managing VMs. Each virtualization technology provides its own hypervisor; common ones include KVM, Xen, and VMware. nova-compute defines a unified interface for these hypervisors; a hypervisor only has to implement this interface to plug into the OpenStack system as a driver. The Nova driver architecture is sketched below.
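In code form, the idea is roughly the following (an illustrative sketch; the class and method names are hypothetical, not nova's real virt driver API):

import abc

class ComputeDriver(abc.ABC):
    """The unified interface nova-compute codes against (illustrative)."""

    @abc.abstractmethod
    def spawn(self, instance):
        """Create and boot an instance on this hypervisor."""

    @abc.abstractmethod
    def destroy(self, instance):
        """Tear an instance down."""

class LibvirtDriver(ComputeDriver):
    """One pluggable implementation per virtualization technology."""

    def spawn(self, instance):
        print("libvirt: defining and starting %s" % instance)

    def destroy(self, instance):
        print("libvirt: destroying %s" % instance)

# nova-compute only ever talks to the ComputeDriver interface, so a new
# hypervisor can be plugged in without changing the calling code.
driver = LibvirtDriver()
driver.spawn("instance-0001")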
(3) nova-conductor:
nova-compute frequently needs to update the database, for example to update or fetch a VM's state. For reasons of security and scalability, nova-compute does not access the database directly; instead, it delegates this task to nova-conductor.
This has two notable benefits:
1) Better system security
2) Better system scalability
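A rough sketch of this indirection, using the oslo.messaging library that the OpenStack services are built on (the topic, method name, and arguments here are illustrative assumptions, not nova's actual RPC interface):

import oslo_messaging
from oslo_config import cfg

# Connect to the message bus configured in [DEFAULT]/transport_url.
transport = oslo_messaging.get_rpc_transport(cfg.CONF)
target = oslo_messaging.Target(topic='conductor')
client = oslo_messaging.RPCClient(transport, target)

ctxt = {}  # request context (user, project, ...)
# call() is synchronous: it blocks until nova-conductor, which holds the DB
# credentials, runs the query and sends the result back over the queue.
instance = client.call(ctxt, 'instance_get_by_uuid', instance_uuid='<uuid>')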
3. Console Interface
openstack-nova-api: the gateway to Nova
openstack-nova-conductor: mediates database access for nova-compute
openstack-nova-console: provides several ways to access the consoles of VMs
openstack-nova-novncproxy: provides VM consoles through a web browser
openstack-nova-scheduler: decides which compute node a VM is started on
openstack-nova-placement-api: tracks resource usage
openstack-nova-spicehtml5proxy: SPICE access through an HTML5 browser
openstack-nova-xvpnvncproxy: VNC access through a Java client
openstack-nova-consoleauth: provides token authentication for console access requests
openstack-nova-cert: provides x509 certificate support
4. Database
Nova has data that needs to be stored in a database, typically MySQL, installed on the controller node. Nova uses a database named "nova".
5. Message Queue
As we saw earlier, Nova contains many sub-services that need to coordinate and communicate with each other. To decouple them, Nova uses a Message Queue as the information hub between sub-services. That is why the architecture diagram shows no direct connections between sub-services: they communicate through the Message Queue.
By default, OpenStack uses RabbitMQ as the Message Queue. MQ is a core foundational component of OpenStack, and we will cover it in detail later.
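The decoupling pattern is easy to see with a minimal, self-contained example (using the pika client, version 1.0 or later; the hostname and credentials are assumed values matching the lab setup later in this article): the producer publishes a message and moves on, and whichever service is listening on the queue picks it up, with no direct connection between the two.

import pika

# Connect to the broker on the controller (hostname and credentials assumed).
credentials = pika.PlainCredentials("openstack", "RABBIT_PASS")
conn = pika.BlockingConnection(
    pika.ConnectionParameters(host="controller", credentials=credentials))
channel = conn.channel()
channel.queue_declare(queue="demo.scheduler")

# Producer side: publish a task and return immediately.
channel.basic_publish(exchange="", routing_key="demo.scheduler",
                      body="create instance")

# Consumer side: fetch one message (a real service registers a callback
# with basic_consume and processes messages as they arrive).
method, header, body = channel.basic_get(queue="demo.scheduler", auto_ack=True)
print(body)          # b'create instance'
conn.close()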
II. How the Nova Components Work Together
Nova physical deployment
Let's look at how the lab environment is actually deployed. Run ps -elf | grep nova on the compute node and the controller node to see which nova sub-services are running.
The compute node (compute) runs only the nova-compute sub-service.
The controller node (controller) runs several nova-* sub-services.
RabbitMQ and MySQL are also placed on the controller node. Careful readers may have noticed that nova-compute also runs on our controller node. This means that devstack-controller is both a controller node and a compute node, so VMs can run on it too.
This demonstrates the deployment flexibility of OpenStack's distributed architecture: all services can be put on one physical machine as an all-in-one test environment, or spread across multiple physical machines for better performance and high availability.
You can also run nova service-list to see which nodes the nova-* sub-services are distributed across.

The creation of an instance flows through the components as follows:
1) A client (an OpenStack end user, or another program) sends a request to the API (nova-api): "create a VM for me"
2) After the necessary processing, the API sends a message to Messaging (RabbitMQ): "have the Scheduler create a VM"
3) The Scheduler (nova-scheduler) picks up the API's message from Messaging, runs its scheduling algorithm, and selects node A from the available compute nodes
4) The Scheduler sends a message to Messaging: "create this VM on compute node A"
5) The Compute service (nova-compute) on node A picks up the Scheduler's message from Messaging and starts the VM on the node's hypervisor
6) While the VM is being created, whenever Compute needs to query or update the database, it sends a message to the Conductor (nova-conductor) via Messaging, and the Conductor performs the database access
These are the core steps of creating a VM. They show how the nova-* sub-services cooperate and reflect the distributed design of the whole OpenStack system; understanding this design is very helpful for understanding OpenStack in depth.
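For instance, the Scheduler-to-Compute message in step 4 is fire-and-forget. A rough oslo.messaging sketch (the topic, server, and method names are illustrative assumptions): because the scheduler does not need an answer, it uses cast() rather than call():

import oslo_messaging
from oslo_config import cfg

transport = oslo_messaging.get_rpc_transport(cfg.CONF)
# Address the compute service on the node the scheduler picked.
target = oslo_messaging.Target(topic='compute', server='node-a')
client = oslo_messaging.RPCClient(transport, target)

# cast() returns immediately; nova-compute on node-a consumes the message
# from the queue and starts the instance on its hypervisor.
client.cast({}, 'build_and_run_instance', instance_uuid='<uuid>')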
III. The Detailed Process of Nova Creating a Virtual Machine

1. The GUI or CLI requests authentication from keystone via the RESTful API.
2. Keystone authenticates the request and returns an auth-token for it.
3. The GUI or CLI sends a boot-instance request (carrying the auth-token) to nova-api via the RESTful API.
4. After accepting the request, nova-api sends a validation request to keystone to check that the token and user are valid.
5. Keystone verifies the token and, if it is valid, returns the validated identity and the corresponding roles (note: some operations require specific role permissions).
6. Once authenticated, nova-api communicates with the database.
7. It initializes a database record for the new virtual machine.
8. nova-api asks nova-scheduler via rpc.call whether resources are available to create the virtual machine (Host ID).
9. nova-scheduler listens on the message queue and picks up nova-api's request.
10. nova-scheduler queries the nova database for the state of the compute resources and uses its scheduling algorithm to find a host that satisfies the VM's requirements.
11. If a suitable host is found, nova-scheduler updates the VM's physical host information in the database.
12. nova-scheduler sends the create-VM request to nova-compute via rpc.cast.
13. nova-compute picks up the create-VM request from its message queue.
14. nova-compute requests the VM's information (flavor) from nova-conductor via rpc.call.
15. nova-conductor picks up nova-compute's request from the message queue.
16. nova-conductor looks up the VM information specified in the message.
17. nova-conductor obtains the VM's information from the database.
18. nova-conductor puts the VM information on the message queue.
19. nova-compute picks up the VM information from its message queue.
20. nova-compute obtains an auth token via keystone's RESTful API and requests the image needed to create the VM from glance-api over HTTP.
21. glance-api validates the token with keystone and returns the result.
22. With the token verified, nova-compute obtains the VM's image information (URL).
23. nova-compute obtains an auth token via keystone's RESTful API and requests the network information needed to create the VM from neutron-server over HTTP.
24. neutron-server validates the token with keystone and returns the result.
25. With the token verified, nova-compute obtains the VM's network information.
26. nova-compute obtains an auth token via keystone's RESTful API and requests the persistent storage information needed by the VM from cinder-api over HTTP.
27. cinder-api validates the token with keystone and returns the result.
28. With the token verified, nova-compute obtains the VM's persistent storage information.
29. nova-compute calls the configured virtualization driver with the instance information to create the virtual machine.
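Steps 1 through 3 can be condensed into a short sketch (Python requests; the addresses, credentials, and flavor/network IDs are placeholders, and only the image ID is the cirros image that appears later in this article): authenticate against keystone, then send the boot request to nova-api carrying the issued token.

import requests

KEYSTONE = "http://controller:5000/v3"
NOVA = "http://controller:8774/v2.1"

# Steps 1-2: request a token from keystone (password authentication).
auth = {"auth": {
    "identity": {"methods": ["password"],
                 "password": {"user": {"name": "admin",
                                       "domain": {"name": "Default"},
                                       "password": "ADMIN_PASS"}}},
    "scope": {"project": {"name": "admin",
                          "domain": {"name": "Default"}}}}}
resp = requests.post(KEYSTONE + "/auth/tokens", json=auth)
resp.raise_for_status()
token = resp.headers["X-Subject-Token"]   # the auth-token keystone issued

# Step 3: send the boot-instance request to nova-api, carrying the token.
server = {"server": {"name": "demo-vm",
                     "imageRef": "d8e9a113-edef-41a6-9778-622edf76de39",
                     "flavorRef": "1",                        # placeholder flavor ID
                     "networks": [{"uuid": "<network-id>"}]}}  # placeholder network
resp = requests.post(NOVA + "/servers", json=server,
                     headers={"X-Auth-Token": token})
print(resp.status_code, resp.json())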
IV. Completely Removing a nova-compute Node
1. On the controller node, list the hosts and compute services; node1 is the one to remove:
openstack host list
nova service-list
2. Bring node1's compute service down, then disable it:
systemctl stop openstack-nova-compute
nova service-list
nova service-disable node1 nova-compute
nova service-list
3. Clean up the database (the nova database)
(1) Check the current state of the database:
[root@node1 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 90
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> use nova;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [nova]> select host from nova.services;
+---------+
| host    |
+---------+
| 0.0.0.0 |
| 0.0.0.0 |
| node1   |
| node1   |
| node1   |
| node1   |
| node2   |
+---------+
7 rows in set (0.00 sec)

MariaDB [nova]> select hypervisor_hostname from compute_nodes;
+---------------------+
| hypervisor_hostname |
+---------------------+
| node1               |
| node2               |
+---------------------+
2 rows in set (0.00 sec)
(2) Delete node1's records from the database:
MariaDB [nova]> delete from nova.services where host="node1";
Query OK, 4 rows affected (0.01 sec)

MariaDB [nova]> delete from compute_nodes where hypervisor_hostname="node1";
Query OK, 1 row affected (0.00 sec)

MariaDB [nova]> select host from nova.services;
+---------+
| host    |
+---------+
| 0.0.0.0 |
| 0.0.0.0 |
| node2   |
+---------+
3 rows in set (0.00 sec)

MariaDB [nova]> select hypervisor_hostname from compute_nodes;
+---------------------+
| hypervisor_hostname |
+---------------------+
| node2               |
+---------------------+
1 row in set (0.00 sec)
V. Installing and Configuring the Nova Compute Service
(A) Installing and configuring the controller node (ren3)
1. Create the Nova databases and user, and grant privileges
(1) Log in to the database as the root user:
[root@ren3 ~]# mysql -u root -proot
(2) Create the nova_api, nova, and nova_cell0 databases:
MariaDB [(none)]> CREATE DATABASE nova_api;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE DATABASE nova_cell0;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| nova               |
| nova_api           |
| nova_cell0         |
| performance_schema |
+--------------------+
8 rows in set (0.01 sec)
(3) Grant appropriate privileges on the databases:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.05 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> select user,password,host from mysql.user;
+----------+-------------------------------------------+-----------+
| user     | password                                  | host      |
+----------+-------------------------------------------+-----------+
| root     | *81F5E21E35407D884A6CD4A731AEBFB6AF209E1B | localhost |
| root     | *81F5E21E35407D884A6CD4A731AEBFB6AF209E1B | 127.0.0.1 |
| root     | *81F5E21E35407D884A6CD4A731AEBFB6AF209E1B | ::1       |
| glance   | *C0CE56F2C0C7234791F36D89700B02691C1CAB8E | localhost |
| keystone | *442DFE587A8B6BE1E9538855E8187C1EFB863A73 | localhost |
| keystone | *442DFE587A8B6BE1E9538855E8187C1EFB863A73 | %         |
| glance   | *C0CE56F2C0C7234791F36D89700B02691C1CAB8E | %         |
| nova     | *B79B482785488AB91D97EAFCAD7BA8839EF65AD3 | localhost |
| nova     | *B79B482785488AB91D97EAFCAD7BA8839EF65AD3 | %         |
+----------+-------------------------------------------+-----------+
9 rows in set (0.00 sec)
Exit the database.
2. Load the environment variables
[root@ren3 ~]# cat openrc
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://ren3:35357/v3
export OS_IDENTITY_API_VERSION=3
[root@ren3 ~]# source openrc
3. Create the compute service credentials
(1) Create the nova user:
[root@ren3 ~]# openstack user create --domain default --password=nova nova
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | fb741e18c1f242cebc6a512c679a07a7 |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
(2) Add the admin role to the nova user:
[root@ren3 ~]# openstack role add --project service --user nova admin
(3) Create the nova service entity:
[root@ren3 ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | 207e39b4617e4ac2ad6ec90e055342e3 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
[root@ren3 ~]# openstack service list
+----------------------------------+----------+----------+
| ID                               | Name     | Type     |
+----------------------------------+----------+----------+
| 207e39b4617e4ac2ad6ec90e055342e3 | nova     | compute  |
| a7cf08799d4b4b509530ae6c21453b08 | glance   | image    |
| ab70227ae28c4fb7a774ed4808489e76 | keystone | identity |
+----------------------------------+----------+----------+
4. Create the Compute API service endpoints:
[root@ren3 ~]# openstack endpoint create --region RegionOne \
  compute public http://ren3:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | a10ffd61f7a0467682325e930b384222 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 207e39b4617e4ac2ad6ec90e055342e3 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://ren3:8774/v2.1            |
+--------------+----------------------------------+
[root@ren3 ~]# openstack endpoint create --region RegionOne \
  compute internal http://ren3:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 79cf3dc0d9784eaf83fec84a74dd468d |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 207e39b4617e4ac2ad6ec90e055342e3 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://ren3:8774/v2.1            |
+--------------+----------------------------------+
[root@ren3 ~]# openstack endpoint create --region RegionOne \
  compute admin http://ren3:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | d413124d690d44b8853f6c5b1dce9af1 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 207e39b4617e4ac2ad6ec90e055342e3 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://ren3:8774/v2.1            |
+--------------+----------------------------------+
5. Create the placement user:
[root@ren3 ~]# openstack user create --domain default --password=placement placement
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 53e13c535c8648f58abcae149a44d816 |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
6. Add the placement user to the service project with the admin role:
[root@ren3 ~]# openstack role add --project service --user placement admin
7. Create the placement service entity:
[root@ren3 ~]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Placement API                    |
| enabled     | True                             |
| id          | d421cabebb114dab9b5f1cd63900b910 |
| name        | placement                        |
| type        | placement                        |
+-------------+----------------------------------+
[root@ren3 ~]# openstack service list |grep placement
| d421cabebb114dab9b5f1cd63900b910 | placement | placement |
8. Create the Placement API endpoints:
[root@ren3 ~]# openstack endpoint create --region RegionOne placement public http://ren3:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | bbf36665cd90488e9269a12bea3b839c |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | d421cabebb114dab9b5f1cd63900b910 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://ren3:8778                 |
+--------------+----------------------------------+
[root@ren3 ~]# openstack endpoint create --region RegionOne placement internal http://ren3:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | b50f00b3e45c4741856f300dce012dc2 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | d421cabebb114dab9b5f1cd63900b910 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://ren3:8778                 |
+--------------+----------------------------------+
[root@ren3 ~]# openstack endpoint create --region RegionOne placement admin http://ren3:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 6e64d3f4f9cd4bfbaea80c975e92d03a |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | d421cabebb114dab9b5f1cd63900b910 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://ren3:8778                 |
+--------------+----------------------------------+
[root@ren3 ~]# openstack endpoint list | grep placement
| 6e64d3f4f9cd4bfbaea80c975e92d03a | RegionOne | placement | placement | True | admin    | http://ren3:8778 |
| b50f00b3e45c4741856f300dce012dc2 | RegionOne | placement | placement | True | internal | http://ren3:8778 |
| bbf36665cd90488e9269a12bea3b839c | RegionOne | placement | placement | True | public   | http://ren3:8778 |
9. Install the Nova packages:
[root@ren3 ~]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y
10. Edit the /etc/nova/nova.conf file
(1) In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
(2) In the [api_database] and [database] sections, configure database access:
[api_database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
(3) In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
(4) In the [api] and [keystone_authtoken] sections, configure Identity service access:
[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
(5) In the [DEFAULT] section, set the my_ip option to the management interface IP address of the controller node:
[DEFAULT]
# ...
my_ip = 10.0.0.11
(6) In the [DEFAULT] section, enable support for the Networking service:
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
(7) In the [vnc] section, configure the VNC proxy to use the management interface IP address of the controller node:
[vnc]
enabled = true
# ...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
(8) In the [glance] section, configure the location of the Image service API:
[glance]
# ...
api_servers = http://controller:9292
(9) In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
(10) In the [placement] section, configure the Placement API:
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
The modified configuration file:
[root@ren3 ~]# cd /etc/nova/
[root@ren3 nova]# ls
api-paste.ini nova.conf policy.json release rootwrap.conf
[root@ren3 nova]# cp nova.conf nova.conf.bak
[root@ren3 nova]# vim nova.conf
[DEFAULT]
my_ip = 192.168.11.3
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:admin@ren3
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@ren3/nova_api
[barbican]
[cache]
[cells]
[cinder]
#os_region_name = RegionOne
[cloudpipe]
[conductor]
[console]
[consoleauth]
[cors]
[cors.subdomain]
[crypto]
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@ren3/nova
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://ren3:9292
[guestfs]
[healthcheck]
[hyperv]
[image_file_url]
[ironic]
[key_manager]
[keystone_authtoken]
auth_uri = http://ren3:5000
auth_url = http://ren3:35357
memcached_servers = ren3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[libvirt]
#virt_type = qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
#url = http://ren3:9696
#auth_url = http://ren3:35357
#auth_type = password
#project_domain_name = default
#user_domain_name = default
#region_name = RegionOne
#project_name = service
#username = neutron
#password = neutron
#service_metadata_proxy = true
#metadata_proxy_shared_secret = METADATA_SECRET
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
auth_type = password
auth_url = http://ren3:35357/v3
project_name = service
project_domain_name = Default
username = placement
password = placement
user_domain_name = Default
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
#novncproxy_base_url = http://192.168.11.3:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
11. Due to a packaging bug, you must enable access to the Placement API by adding the following configuration to /etc/httpd/conf.d/00-nova-placement-api.conf:
[root@ren3 ~]# vim /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
Restart the httpd service:
[root@ren3 ~]# systemctl restart httpd
12. Sync the database data (nova, nova-api, and nova-cell0):
[root@ren3 ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@ren3 ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
[root@ren3 ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
e02ac651-4481-48ce-aed3-ff4dfc75b026
[root@ren3 ~]# su -s /bin/sh -c "nova-manage db sync" nova
13. Verify that nova cell0 and cell1 are registered correctly:
[root@ren3 ~]# nova-manage cell_v2 list_cells
+-------+--------------------------------------+
| Name  | UUID                                 |
+-------+--------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 |
| cell1 | e02ac651-4481-48ce-aed3-ff4dfc75b026 |
+-------+--------------------------------------+
[root@ren3 ~]# mysql -u nova -pNOVA_DBPASS

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| nova               |
| nova_api           |
| nova_cell0         |
+--------------------+
4 rows in set (0.00 sec)

MariaDB [nova]> show tables;
(110 tables:)
agent_builds, aggregate_hosts, aggregate_metadata, aggregates, allocations,
block_device_mapping, bw_usage_cache, cells, certificates, compute_nodes,
console_auth_tokens, console_pools, consoles, dns_domains, fixed_ips,
floating_ips, instance_actions, instance_actions_events, instance_extra,
instance_faults, instance_group_member, instance_group_policy,
instance_groups, instance_id_mappings, instance_info_caches,
instance_metadata, instance_system_metadata, instance_type_extra_specs,
instance_type_projects, instance_types, instances, inventories, key_pairs,
migrate_version, migrations, networks, pci_devices, project_user_quotas,
provider_fw_rules, quota_classes, quota_usages, quotas, reservations,
resource_provider_aggregates, resource_providers, s3_images,
security_group_default_rules, security_group_instance_association,
security_group_rules, security_groups, services, shadow_agent_builds,
shadow_aggregate_hosts, shadow_aggregate_metadata, shadow_aggregates,
shadow_block_device_mapping, shadow_bw_usage_cache, shadow_cells,
shadow_certificates, shadow_compute_nodes, shadow_console_pools,
shadow_consoles, shadow_dns_domains, shadow_fixed_ips, shadow_floating_ips,
shadow_instance_actions, shadow_instance_actions_events,
shadow_instance_extra, shadow_instance_faults, shadow_instance_group_member,
shadow_instance_group_policy, shadow_instance_groups,
shadow_instance_id_mappings, shadow_instance_info_caches,
shadow_instance_metadata, shadow_instance_system_metadata,
shadow_instance_type_extra_specs, shadow_instance_type_projects,
shadow_instance_types, shadow_instances, shadow_key_pairs,
shadow_migrate_version, shadow_migrations, shadow_networks,
shadow_pci_devices, shadow_project_user_quotas, shadow_provider_fw_rules,
shadow_quota_classes, shadow_quota_usages, shadow_quotas,
shadow_reservations, shadow_s3_images, shadow_security_group_default_rules,
shadow_security_group_instance_association, shadow_security_group_rules,
shadow_security_groups, shadow_services, shadow_snapshot_id_mappings,
shadow_snapshots, shadow_task_log, shadow_virtual_interfaces,
shadow_volume_id_mappings, shadow_volume_usage_cache, snapshot_id_mappings,
snapshots, tags, task_log, virtual_interfaces, volume_id_mappings,
volume_usage_cache
110 rows in set (0.00 sec)

MariaDB [nova]> use nova_api
MariaDB [nova_api]> show tables;
(27 tables:)
aggregate_hosts, aggregate_metadata, aggregates, allocations, build_requests,
cell_mappings, flavor_extra_specs, flavor_projects, flavors, host_mappings,
instance_group_member, instance_group_policy, instance_groups,
instance_mappings, inventories, key_pairs, migrate_version,
placement_aggregates, project_user_quotas, quota_classes, quota_usages,
quotas, request_specs, reservations, resource_classes,
resource_provider_aggregates, resource_providers
27 rows in set (0.00 sec)

MariaDB [nova_api]> use nova_cell0
MariaDB [nova_cell0]> show tables;
(the same 110 tables as in the nova database)
110 rows in set (0.00 sec)
14. Start the Nova services:
[root@ren3 ~]# systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@ren3 ~]# systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@ren3 ~]# systemctl status openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service |grep active |wc -l
5
[root@ren3 ~]# firewall-cmd --list-ports
4369/tcp 5672/tcp 15672/tcp 25672/tcp 3306/tcp 11211/tcp 80/tcp 35357/tcp 5000/tcp 9292/tcp 9191/tcp
[root@ren3 ~]# ss -tnl
State   Recv-Q Send-Q   Local Address:Port     Peer Address:Port
LISTEN  0      128      *:8775                 *:*
LISTEN  0      128      *:9191                 *:*
LISTEN  0      128      192.168.11.3:5672      *:*
LISTEN  0      128      *:25672                *:*
LISTEN  0      128      192.168.11.3:3306      *:*
LISTEN  0      128      192.168.11.3:11211     *:*
LISTEN  0      128      127.0.0.1:11211        *:*
LISTEN  0      128      *:9292                 *:*
LISTEN  0      128      *:4369                 *:*
LISTEN  0      128      *:22                   *:*
LISTEN  0      128      *:15672                *:*
LISTEN  0      100      127.0.0.1:25           *:*
LISTEN  0      100      *:6080                 *:*
LISTEN  0      128      *:8774                 *:*
LISTEN  0      128      :::5000                :::*
LISTEN  0      128      :::8778                :::*
LISTEN  0      128      ::1:11211              :::*
LISTEN  0      128      :::80                  :::*
LISTEN  0      128      :::22                  :::*
LISTEN  0      100      ::1:25                 :::*
LISTEN  0      128      :::35357               :::*
[root@ren3 ~]# netstat -anp |grep 8775
tcp     0   0 0.0.0.0:8775   0.0.0.0:*   LISTEN   18175/python2
[root@ren3 ~]# netstat -anp |grep 6080
tcp     0   0 0.0.0.0:6080   0.0.0.0:*   LISTEN   18179/python2
[root@ren3 ~]# firewall-cmd --add-port=8774/tcp --permanent
success
[root@ren3 ~]# firewall-cmd --add-port=8775/tcp --permanent
success
[root@ren3 ~]# firewall-cmd --add-port=8778/tcp --permanent
success
[root@ren3 ~]# firewall-cmd --add-port=6080/tcp --permanent
success
[root@ren3 ~]# firewall-cmd --reload
success
(B) Installing and configuring the compute node (ren4)
1. Install the Nova packages
The first installation attempt failed, so the dependencies had to be resolved first.
On the controller node:
[root@ren3 ~]# ls
anaconda-ks.cfg  openrc  openstack-ocata  --description  openstack_app.tar.gz  yum-repo.sh
[root@ren3 ~]# cd openstack-ocata/
[root@ren3 openstack-ocata]# ls
cirros-0.3.3-x86_64-disk.img  openstack-compute-yilai
[root@ren3 openstack-ocata]# scp -r openstack-compute-yilai/ ren4:/root/
qemu-img-ev-2.9.0-16.el7_4.8.1.x86_64      100% 2276KB  50.4MB/s   00:00
qemu-kvm-common-ev-2.9.0-16.el7_4.8.1      100%  913KB  28.2MB/s   00:00
qemu-kvm-ev-2.9.0-16.el7_4.8.1.x86_64      100% 2914KB  39.2MB/s   00:00
On the compute node:
[root@ren4 ~]# ls
anaconda-ks.cfg  openstack-compute-yilai  yum-repo.sh
[root@ren4 ~]# cd openstack-compute-yilai/
[root@ren4 openstack-compute-yilai]# ls
qemu-img-ev-2.9.0-16.el7_4.8.1.x86_64.rpm
qemu-kvm-common-ev-2.9.0-16.el7_4.8.1.x86_64.rpm
qemu-kvm-ev-2.9.0-16.el7_4.8.1.x86_64.rpm
[root@ren4 openstack-compute-yilai]# yum localinstall ./* -y
[root@ren4 ~]# yum install openstack-nova-compute -y
2. Edit the /etc/nova/nova.conf file
(1) In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
(2) In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
(3) In the [api] and [keystone_authtoken] sections, configure Identity service access:
[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
(4) In the [DEFAULT] section, configure the my_ip option:
[DEFAULT]
# ...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
(5) In the [DEFAULT] section, enable support for the Networking service:
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
(6) In the [vnc] section, enable and configure remote console access:
[vnc]
# ...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
(7) In the [glance] section, configure the location of the Image service API:
[glance]
# ...
api_servers = http://controller:9292
(8) In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
(9) In the [placement] section, configure the Placement API:
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
(10) Configure the [libvirt] section to use QEMU:
[libvirt]
# ...
virt_type = qemu
The modified configuration file:
[root@ren4 ~]# cd /etc/nova/
[root@ren4 nova]# ls
api-paste.ini nova.conf policy.json release rootwrap.conf
[root@ren4 nova]# cp nova.conf nova.conf.bak
[root@ren4 nova]# vim nova.conf
[DEFAULT]
my_ip = 192.168.11.4
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:admin@ren3
[api]
auth_strategy = keystone
[api_database]
#connection = mysql+pymysql://nova:NOVA_DBPASS@ren3/nova_api
[barbican]
[cache]
[cells]
[cinder]
#os_region_name = RegionOne
[cloudpipe]
[conductor]
[console]
[consoleauth]
[cors]
[cors.subdomain]
[crypto]
[database]
#connection = mysql+pymysql://nova:NOVA_DBPASS@ren3/nova
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://ren3:9292
[guestfs]
[healthcheck]
[hyperv]
[image_file_url]
[ironic]
[key_manager]
[keystone_authtoken]
auth_uri = http://ren3:5000
auth_url = http://ren3:35357
memcached_servers = ren3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[libvirt]
virt_type = qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
#url = http://ren3:9696
#auth_url = http://ren3:35357
#auth_type = password
#project_domain_name = default
#user_domain_name = default
#region_name = RegionOne
#project_name = service
#username = neutron
#password = neutron
#service_metadata_proxy = true
#metadata_proxy_shared_secret = METADATA_SECRET
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
auth_type = password
auth_url = http://ren3:35357/v3
project_name = service
project_domain_name = Default
username = placement
password = placement
user_domain_name = Default
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.11.3:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
3. Start the services
[root@ren4 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
[root@ren4 ~]# systemctl start libvirtd.service openstack-nova-compute.service
[root@ren4 ~]# ss -tnl
State   Recv-Q Send-Q   Local Address:Port     Peer Address:Port
LISTEN  0      128      *:111                  *:*
LISTEN  0      128      *:22                   *:*
LISTEN  0      100      127.0.0.1:25           *:*
LISTEN  0      128      :::111                 :::*
LISTEN  0      128      :::22                  :::*
LISTEN  0      100      ::1:25                 :::*
[root@ren4 ~]# systemctl status libvirtd.service openstack-nova-compute.service |grep active |wc -l
2
[root@ren4 ~]# firewall-cmd --add-port=111/tcp
success
[root@ren4 ~]# firewall-cmd --add-port=111/tcp --permanent
success
4. Add the compute node to the cell database (run on the controller node)
(1) Load the environment variables and confirm the hypervisor is visible:
[root@ren3 ~]# . openrc
[root@ren3 ~]# openstack hypervisor list
+----+---------------------+-----------------+--------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP      | State |
+----+---------------------+-----------------+--------------+-------+
|  1 | ren4                | QEMU            | 192.168.11.4 | up    |
+----+---------------------+-----------------+--------------+-------+
(2) Discover the compute host:
[root@ren3 ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': e02ac651-4481-48ce-aed3-ff4dfc75b026
Found 1 computes in cell: e02ac651-4481-48ce-aed3-ff4dfc75b026
Checking host mapping for compute host 'ren4': 84f6b8ad-b130-41e3-bcf1-1b25cfad1d77
Creating host mapping for compute host 'ren4': 84f6b8ad-b130-41e3-bcf1-1b25cfad1d77
Whenever you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register them. Alternatively, set an appropriate discovery interval in /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300
5. Verify the Compute service
(1) Verify that the service components started successfully:
[root@ren3 ~]# openstack compute service list
+----+------------------+------+----------+---------+-------+----------------------------+
| ID | Binary           | Host | Zone     | Status  | State | Updated At                 |
+----+------------------+------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | ren3 | internal | enabled | up    | 2019-10-12T11:42:35.000000 |
|  2 | nova-conductor   | ren3 | internal | enabled | up    | 2019-10-12T11:42:34.000000 |
|  3 | nova-scheduler   | ren3 | internal | enabled | up    | 2019-10-12T11:42:35.000000 |
|  6 | nova-compute     | ren4 | nova     | enabled | up    | 2019-10-12T11:42:29.000000 |
+----+------------------+------+----------+---------+-------+----------------------------+
[root@ren3 ~]# nova service-list
+----+------------------+------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | ren3 | internal | enabled | up    | 2019-10-14T02:10:33.000000 | -               |
| 2  | nova-conductor   | ren3 | internal | enabled | up    | 2019-10-14T02:10:42.000000 | -               |
| 3  | nova-scheduler   | ren3 | internal | enabled | up    | 2019-10-14T02:10:33.000000 | -               |
| 6  | nova-compute     | ren4 | nova     | enabled | up    | 2019-10-14T02:10:40.000000 | -               |
+----+------------------+------+----------+---------+-------+----------------------------+-----------------+
This output should show three service components enabled on the controller node and one on the compute node.
(2) List the API endpoints in the Identity service to verify connectivity with it:
[root@ren3 ~]# openstack catalog list
+-----------+-----------+-----------------------------------+
| Name      | Type      | Endpoints                         |
+-----------+-----------+-----------------------------------+
| nova      | compute   | RegionOne                         |
|           |           |   internal: http://ren3:8774/v2.1 |
|           |           | RegionOne                         |
|           |           |   public: http://ren3:8774/v2.1   |
|           |           | RegionOne                         |
|           |           |   admin: http://ren3:8774/v2.1    |
|           |           |                                   |
| glance    | image     | RegionOne                         |
|           |           |   internal: http://ren3:9292      |
|           |           | RegionOne                         |
|           |           |   admin: http://ren3:9292         |
|           |           | RegionOne                         |
|           |           |   public: http://ren3:9292        |
|           |           |                                   |
| keystone  | identity  | RegionOne                         |
|           |           |   public: http://ren3:5000/v3/    |
|           |           | RegionOne                         |
|           |           |   internal: http://ren3:5000/v3/  |
|           |           | RegionOne                         |
|           |           |   admin: http://ren3:35357/v3/    |
|           |           |                                   |
| placement | placement | RegionOne                         |
|           |           |   admin: http://ren3:8778         |
|           |           | RegionOne                         |
|           |           |   internal: http://ren3:8778      |
|           |           | RegionOne                         |
|           |           |   public: http://ren3:8778        |
|           |           |                                   |
+-----------+-----------+-----------------------------------+
(3) Verify the image in the Image service:
[root@ren3 ~]# glance image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| d8e9a113-edef-41a6-9778-622edf76de39 | cirros |
+--------------------------------------+--------+
[root@ren3 ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| d8e9a113-edef-41a6-9778-622edf76de39 | cirros | active |
+--------------------------------------+--------+--------+
(4) Check that the cells and the Placement API are working properly:
[root@ren3 ~]# nova-status upgrade check
+---------------------------+
| Upgrade Check Results     |
+---------------------------+
| Check: Cells v2           |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Placement API      |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Resource Providers |
| Result: Success           |
| Details: None             |
+---------------------------+