I. Introduction to Nova
Nova and Swift were the two earliest OpenStack components. Nova is split into controller nodes and compute nodes: compute nodes create virtual machines through nova-compute, which calls KVM via libvirt, and the Nova services communicate with each other through RabbitMQ message queues. Its components and functions are as follows:
1.1 nova-api: accepts and responds to end-user Compute API requests; it is the entry point to the Nova service.
1.2 nova-scheduler
The role of the nova-scheduler module in OpenStack is to decide which host (compute node) a virtual machine is created on. Scheduling an instance onto a physical node takes two steps:
Filtering (filter): filter out the hosts that are able to host the new virtual machine.
Weighting (weight): rank the filtered hosts by weight and pick the best one; by default hosts are weighted by available resources.
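For reference, these two steps map onto scheduler options in nova.conf. The snippet below is only an illustrative sketch based on the upstream Ocata-era option names and defaults (nothing in it needs to be changed for the deployment that follows); verify the exact option names against your installed version:

[filter_scheduler]
# Filtering step: only hosts that pass every enabled filter remain candidates
enabled_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
# Weighting step: rank the remaining hosts; a positive multiplier prefers hosts with more free RAM
ram_weight_multiplier = 1.0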
II. Deploying the Nova controller node
Official documentation: https://docs.openstack.org/ocata/zh_CN/install-guide-rdo/nova-controller-install.html
1. Prepare the databases
1. Use the database client to connect to the database server as the root user, and create the nova_api, nova, and nova_cell0 databases:
[root@openstack-1 ~]# mysql -uroot -pcentos
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
2. Grant the proper privileges on the databases:
# GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova123';
# GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova123';
# GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova123';
3. Verify the databases on controller 1.
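For example (a hypothetical check, assuming the database is reached through the same VIP domain used later in nova.conf), log in with the nova account created above and confirm the three databases are visible:

[root@openstack-1 ~]# mysql -unova -pnova123 -h openstack-vip.net -e 'SHOW DATABASES;'
# the listing should include nova_api, nova, and nova_cell0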
2. Create the nova user on controller 1
1. Source the admin credentials to gain access to admin-only CLI commands:
[root@openstack-1 ~]# . admin.sh     # source the admin credentials
[root@openstack-1 ~]# openstack user create --domain default --password-prompt nova     # create the nova user; the password is also nova
2. Add the admin role to the nova user:
# openstack role add --project service --user nova admin
3. Create the nova service entity:
# openstack service create --name nova --description "OpenStack Compute" compute
4. Create the public, internal, and admin Compute API endpoints. Note that openstack-vip.net in the commands below is the domain name resolved to the VIP address in the hosts file; adjust it to match your own setup.
$ openstack endpoint create --region RegionOne \
  compute public http://openstack-vip.net:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 3c1caa473bfe4390a11e7177894bcc7b |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne \
  compute internal http://openstack-vip.net:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | e3c918de680746a586eac1f2d9bc10ab |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne \
  compute admin http://openstack-vip.net:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 38f7af91666a47cfb97b4dc790b94424 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
5. If you create a duplicate endpoint, delete it:
# openstack endpoint list        # look up the ID first
# openstack endpoint delete ID   # delete the duplicate by its ID
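To spot duplicates faster, the listing can usually be narrowed to a single service (the --service filter is a standard openstackclient option; check openstack endpoint list --help if your client version differs):

# openstack endpoint list --service nova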
6. Create the placement user and grant it access; the password is also placement.
$ openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | fa742015a6494a949f67629884fc7ec8 |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
7. Add the admin role to the placement user in the service project:
$ openstack role add --project service --user placement admin
8. Create the Placement API entry in the service catalog:
$ openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Placement API                    |
| enabled     | True                             |
| id          | 2d1a27022e6e4185b86adac4444c495f |
| name        | placement                        |
| type        | placement                        |
+-------------+----------------------------------+
9. Create the Placement API service endpoints:
$ openstack endpoint create --region RegionOne placement public http://controller:8778
# Note: controller must be changed to the domain name that resolves to the VIP; replace it in all three commands.
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 2b1b2637908b4137a9c2e0470487cbc0 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 02bcda9a150a4bd7993ff4879df971ab |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 3d71177b9e0f406f98cbff198d74b182 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
III. Configuration on the haproxy server
1. Edit the haproxy configuration file so that it listens on the two ports and forwards the VIP to controller 1: vim /etc/haproxy/haproxy.cfg
listen openstack_nova_port_8774
  bind 192.168.7.248:8774
  mode tcp
  log global
  server 192.168.7.100 192.168.7.100:8774 check inter 3000 fall 3 rise 5
  server 192.168.7.101 192.168.7.101:8774 check inter 3000 fall 3 rise 5 backup

listen openstack_nova_port_8778
  bind 192.168.7.248:8778
  mode tcp
  log global
  server 192.168.7.100 192.168.7.100:8778 check inter 3000 fall 3 rise 5
  server 192.168.7.101 192.168.7.101:8778 check inter 3000 fall 3 rise 5 backup
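Before restarting, haproxy can validate the edited file with its standard check mode (an optional step, not part of the original procedure):

[root@mysql1 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg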
2. Restart the haproxy service:
[root@mysql1 ~]# systemctl restart haproxy
IV. Install the packages on all controller nodes
1. Install the packages:
# yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api
2. Edit the ``/etc/nova/nova.conf`` file and complete the following actions:
In the ``[DEFAULT]`` section, enable only the compute and metadata APIs:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
3. In the ``[api_database]`` and ``[database]`` sections, configure database access:
[api_database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
Replace NOVA_DBPASS with the nova database password, and replace controller with the domain name that the hosts file resolves to the VIP.
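Using the values from earlier in this guide (the nova123 password from the GRANT statements and the openstack-vip.net VIP domain), the two lines would look roughly like this; adjust them if your password or domain differ:

[api_database]
connection = mysql+pymysql://nova:nova123@openstack-vip.net/nova_api

[database]
connection = mysql+pymysql://nova:nova123@openstack-vip.net/nova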
4. In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
5. In the nova configuration file, configure Keystone authentication:
[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller:5000        # replace controller with the domain name that resolves to the VIP
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova                          # the nova user password
6. In the ``[DEFAULT]`` section, enable support for the Networking service:
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver    # disable Nova's built-in firewalling (security groups are handled by Neutron)
7. In the ``[vnc]`` section, configure the VNC proxy to use the management interface IP address of the controller node:
[vnc]
enabled = true
# ...
vncserver_listen = 192.168.7.100                 # listen on the local IP address, or on all local addresses with 0.0.0.0
vncserver_proxyclient_address = 192.168.7.100
8. In the ``[glance]`` section, configure the location of the Image service API:
[glance]
# ...
api_servers = http://openstack-vip.net:9292    # the domain name that resolves to the VIP (openstack-vip.net)
9. In the ``[oslo_concurrency]`` section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
10. Configure access to the Placement API:
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://openstack-vip.net:35357/v3
username = placement
password = placement    # the password of the placement user created earlier
11. Configure Apache to allow access to the Placement API. Edit /etc/httpd/conf.d/00-nova-placement-api.conf and append the following at the bottom:
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>
12. Restart the httpd service:
# systemctl restart httpd
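As an optional sanity check (not part of the original steps, and assuming the haproxy forwarding for port 8778 from section III is already in place), the Placement API root should now answer through the VIP:

[root@openstack-1 ~]# curl http://openstack-vip.net:8778
# should return a short JSON document listing the supported Placement API versions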
13. Populate the nova databases; run all four of the following commands:
# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
# su -s /bin/sh -c "nova-manage db sync" nova
14. Verify that cell0 and cell1 are registered correctly:
# nova-manage cell_v2 list_cells
+-------+--------------------------------------+
| Name  | UUID                                 |
+-------+--------------------------------------+
| cell1 | 109e1d4b-536a-40d0-83c6-5f121b82b650 |
| cell0 | 00000000-0000-0000-0000-000000000000 |
+-------+--------------------------------------+
15. Start the Nova services and enable them to start at boot:
# systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
Have the haproxy server listen on the new port 6080 (used by the noVNC proxy).
Edit the haproxy configuration file: vim /etc/haproxy/haproxy.cfg
listen openstack_nova_port_6080
  bind 192.168.7.248:6080
  mode tcp
  log global
  server 192.168.7.100 192.168.7.100:6080 check inter 3000 fall 3 rise 5
  server 192.168.7.101 192.168.7.101:6080 check inter 3000 fall 3 rise 5 backup
Restart the haproxy service:
[root@mysql1 ~]# systemctl restart haproxy
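Optionally, confirm on the haproxy server that all three Nova-related ports are now bound (a generic check, not part of the original steps):

[root@mysql1 ~]# ss -tnl | egrep '8774|8778|6080'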
You can then log in to the web page to check the current connections:
192.168.7.106:15672 is the back-end RabbitMQ management interface.
Back up the Nova configuration on this controller and sync it to the other controllers to make Nova highly available
1. Archive the files under the /etc/nova directory so they can be copied to the other controller servers:
[root@openstack-1 ~]# cd /etc/nova
[root@openstack-1 nova]# tar zcvf nova-controller.tar.gz ./*
2. Copy the archive to the other controller hosts:
[root@openstack-1 ~]# scp /etc/nova/nova-controller.tar.gz 192.168.7.101:/etc/nova/
3. On the other controller, extract the copied archive and then adjust the IP addresses:
[root@openstack-2 ~]# cd /etc/nova
[root@openstack-2 nova]# tar -xvf nova-controller.tar.gz
4. Edit the /etc/nova/nova.conf configuration file and change the IP addresses to this host's own IP address:
[vnc]
enabled = true
# ...
vncserver_listen = 192.168.7.101                 # listen on the local IP address, or on all local addresses with 0.0.0.0
vncserver_proxyclient_address = 192.168.7.101
5. Configure Apache to allow access to the Placement API. Edit /etc/httpd/conf.d/00-nova-placement-api.conf and append the following at the bottom:
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>
6. Start the httpd service and enable it at boot:
# systemctl start httpd
# systemctl enable httpd
7. Start the Nova services and enable them at boot:
# systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
It is best to put the restart commands into a script kept in a dedicated directory:
[root@openstack-1 ~]# mkdir scripts && cd scripts        # create a dedicated directory for scripts
[root@openstack-1 scripts]# vim nova-restart.sh          # write a script that restarts Nova
The Nova restart script:
#!/bin/bash
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Run the script; in day-to-day work this saves a lot of time:
# bash nova-restart.sh
8. Verify the Nova services. Note that the hostnames of the controller nodes must all be different, otherwise errors will occur.
[root@linux-host1 ~]# nova service-list
+----+------------------+-------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                    | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-------------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | linux-host1.exmaple.com | internal | enabled | up    | 2016-10-28T07:51:45.000000 | -               |
| 2  | nova-scheduler   | linux-host1.exmaple.com | internal | enabled | up    | 2016-10-28T07:51:45.000000 | -               |
| 3  | nova-conductor   | linux-host1.exmaple.com | internal | enabled | up    | 2016-10-28T07:51:44.000000 | -               |
| 6  | nova-compute     | linux-host2.exmaple.com | nova     | enabled | up    | 2016-10-28T07:51:40.000000 | -               |
+----+------------------+-------------------------+----------+---------+-------+----------------------------+-----------------+
V. Installing and deploying Nova compute nodes
1. Configuration on the compute node server
1. Install the package:
# yum install openstack-nova-compute -y
Edit the compute node's local hosts file so that it resolves the VIP domain name:
# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.7.248 openstack-vip.net
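A quick, optional way to confirm the entry resolves as expected:

# getent hosts openstack-vip.net
# should print the 192.168.7.248 line added above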
2. Edit the ``/etc/nova/nova.conf`` file and complete the following actions:
In the ``[DEFAULT]`` section, enable only the compute and metadata APIs:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
3. In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
4. Configure the API and Keystone authentication sections:
[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller:5000        # replace controller with the VIP domain name resolved in the local hosts file
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova                          # the nova user password
5. In the ``[DEFAULT]`` section, enable support for the Networking service:
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
6. In the ``[vnc]`` section, enable and configure remote console access:
[vnc]
# ...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.7.103                  # change to this compute node's own IP address
novncproxy_base_url = http://controller:6080/vnc_auto.html     # replace controller with the VIP domain name from the local hosts file
The server component listens on all IP addresses, while the proxy component listens only on the IP address of the compute node's management network interface. The base URL indicates where a web browser can reach the remote console of instances running on this compute node.
7. In the ``[glance]`` section, configure the location of the Image service API:
[glance]
# ...
api_servers = http://controller:9292    # replace controller with the VIP domain name resolved in the local hosts file
8. In the ``[oslo_concurrency]`` section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
9. Configure Placement API access in the ``[placement]`` section:
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3    # replace controller with the VIP domain name
username = placement
password = PLACEMENT_PASS                # the placement user's password
10. Determine whether your compute node supports hardware acceleration for virtual machines:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
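If it helps, the rule from the official guide can be expressed as a small shell sketch (not part of the original steps): a result of zero means no hardware acceleration, so libvirt must fall back to QEMU; one or more means KVM can be used.

#!/bin/bash
# Decide which virt_type to use based on the CPU virtualization flags
if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
    echo "No hardware acceleration: set virt_type = qemu in the [libvirt] section"
else
    echo "Hardware acceleration available: virt_type = kvm can be used"
fi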
11. Edit the ``[libvirt]`` section of the /etc/nova/nova.conf file as follows. If the command above returned zero, the node does not support hardware acceleration and libvirt must use QEMU instead of KVM; if it returned one or more, the node does support hardware acceleration and KVM can be used, which relies on the CPU's virtualization extensions and starts instances faster:
[libvirt]
# ...
virt_type = qemu
12. Start the Compute service and its dependencies, and enable them at boot:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
Testing the result:
1. First, discover the compute hosts on the controller:
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
2. Then list the registered hypervisors; if the compute node appears, registration succeeded:
$ openstack hypervisor list
+----+---------------------+-----------------+-----------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP   | State |
+----+---------------------+-----------------+-----------+-------+
|  1 | compute1            | QEMU            | 10.0.0.31 | up    |
+----+---------------------+-----------------+-----------+-------+
Making the compute nodes highly available
Configuration on the second compute node
1. First install the package:
# yum install openstack-nova-compute
Edit the hosts file on the second compute node:
# vim /etc/hosts
192.168.7.248 openstack-vip.net
2. Archive the Nova configuration from the first compute node:
[root@computer-1 ~]# cd /etc/nova
[root@computer-1 nova]# tar -zcvf nova-computer.tar.gz ./*
3. Copy the archive from the first compute node to the second compute node:
[root@computer-1 ~]# scp /etc/nova/nova-computer.tar.gz 192.168.7.104:/etc/nova/
4. Extract it on the second compute node:
# cd /etc/nova && tar -xvf nova-computer.tar.gz
5. Edit the /etc/nova/nova.conf configuration file:
[vnc]
# ...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.7.104                  # change to this compute node's own IP address
novncproxy_base_url = http://controller:6080/vnc_auto.html     # replace controller with the VIP domain name from the local hosts file
6. Then start the Nova compute services on this node and enable them at boot:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
7. After starting Nova, check the logs for errors:
[root@openstack-1 ~]# tail /var/log/nova/*.log
8. Run this on the controller: whenever you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register them. Alternatively, you can set an appropriate discovery interval in /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300
Testing the result:
1. First, on the controller, discover the compute hosts:
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
2. Then list the registered hypervisors to confirm the compute nodes were created successfully:
$ openstack hypervisor list
+----+---------------------+-----------------+-----------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP   | State |
+----+---------------------+-----------------+-----------+-------+
|  1 | compute1            | QEMU            | 10.0.0.31 | up    |
+----+---------------------+-----------------+-----------+-------+
3. On the controller, list the Compute service components:
$ openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | controller | internal | enabled | up    | 2016-02-09T23:11:15.000000 |
|  2 | nova-scheduler   | controller | internal | enabled | up    | 2016-02-09T23:11:15.000000 |
|  3 | nova-conductor   | controller | internal | enabled | up    | 2016-02-09T23:11:16.000000 |
|  4 | nova-compute     | compute1   | nova     | enabled | up    | 2016-02-09T23:11:20.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
4. On the controller, check the Nova status:
# nova-status upgrade check
+---------------------------+
| Upgrade Check Results     |
+---------------------------+
| Check: Cells v2           |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Placement API      |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Resource Providers |
| Result: Success           |
| Details: None             |
+---------------------------+
5. Test with the openstack command on the controller:
$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 9a76d9f9-9620-4f2e-8c69-6c5691fae163 | cirros | active |
+--------------------------------------+--------+--------+