Introduction to Nova
Nova currently consists mainly of API, Compute, Conductor and Scheduler:
API: receives external requests; it is the entry point into Nova;
Compute: talks to the hypervisor and manages the life cycle of virtual machines;
Scheduler: selects the most suitable compute node from the available pool, according to various policies, when a new VM is created;
Conductor: provides a unified access layer in front of the database.
Compute Service (Nova) is the most central service in OpenStack, responsible for maintaining and managing the compute resources of the cloud environment.
OpenStack is an IaaS cloud operating system, and VM life-cycle management is implemented through Nova.
As the diagram above shows, Nova sits at the center of the OpenStack architecture, and the other components all support it:
Glance provides images for VMs
Cinder and Swift provide block storage and object storage for VMs, respectively
Neutron provides network connectivity for VMs
The Nova architecture is shown below.
It is fairly complex and contains many components.
These components run as sub-services (background daemon processes) and can be grouped into the following categories:
API section
nova-api
The interface through which other modules reach Nova over HTTP; the API is the only way to access Nova from the outside.
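As a quick sanity check, the API root can be queried without a token and lists the available API versions. (A hypothetical one-liner; it assumes the endpoint on port 8774 that is registered later in this article.)
# Pretty-print the version document served by nova-api
curl -s http://192.168.56.11:8774/ | python -m json.tool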
Compute Core section
nova-scheduler
The VM scheduling service; it decides which compute node a VM will run on.
The scheduler places a VM on a physical node in two steps (see the configuration sketch below):
1. Filtering (Filter): filter out the hosts that meet the requirements, e.g. those with enough free resources.
2. Weighting: rank the filtered hosts by weight and pick the best one.
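For reference, both steps are tunable in the [DEFAULT] section of nova.conf. The values below are believed to be the Mitaka (nova 13.x) defaults and are shown purely as an illustration, not as a required change:
[DEFAULT]
# A host must pass every filter in this list to be considered for the VM
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
# Surviving hosts are then ranked by the weighers; by default more free RAM wins
ram_weight_multiplier = 1.0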
nova-compute
The core service that manages VMs; it implements VM life-cycle management by calling the hypervisor API.
Hypervisor
The virtualization manager that runs on the compute node, the lowest-level program in VM management. Each virtualization technology provides its own hypervisor; common ones are KVM, Xen and VMware.
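On a KVM compute node, for example, you can confirm that the kernel modules backing the hypervisor are loaded (a quick optional check, not required by the installation steps below):
# kvm_intel (or kvm_amd on AMD CPUs) must be present for hardware virtualization
lsmod | grep kvm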
nova-conductor
nova-compute frequently needs to update the database, for example to record VM state changes. For security and scalability reasons, nova-compute does not access the database directly (in early versions compute nodes could, so once a compute node was compromised, the whole database was at risk).
Instead it delegates this work to nova-conductor, which is discussed in more detail later.
Console Interface section
nova-console
Users can reach a VM's console in several ways:
nova-novncproxy: VNC access through a web browser
nova-spicehtml5proxy: SPICE access through an HTML5 browser
nova-xvpvncproxy: VNC access through a Java client
nova-consoleauth
Provides token authentication for console access requests.
nova-cert
The X509 certificate service (mainly kept for EC2 API compatibility).
Database section
Nova stores some of its data in a database, generally MySQL. It uses two databases, named "nova" and "nova_api".
Note that a host having enough hardware resources does not necessarily mean it will satisfy the Nova scheduler.
Scenarios in which a host may still be judged to have no resources (see the log check below):
1. Network failure: the host or the network node has a problem
2. nova-compute itself is down
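If a boot request later fails with "No valid host was found", the scheduler log on the controller node records which filter eliminated each host. A minimal sketch, assuming the default RDO log location:
# Show the most recent filtering decisions made by nova-scheduler
grep -i filter /var/log/nova/nova-scheduler.log | tail -n 20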
Installing and configuring Nova on the controller node
1. Install the packages on the controller node
[root@linux-node1 ~]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler -y
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * epel: mirror.premi.st
 * extras: mirrors.163.com
 * updates: mirrors.163.com
Package 1:openstack-nova-api-13.1.2-1.el7.noarch already installed and latest version
Package 1:openstack-nova-conductor-13.1.2-1.el7.noarch already installed and latest version
Package 1:openstack-nova-console-13.1.2-1.el7.noarch already installed and latest version
Package 1:openstack-nova-novncproxy-13.1.2-1.el7.noarch already installed and latest version
Package 1:openstack-nova-scheduler-13.1.2-1.el7.noarch already installed and latest version
Nothing to do
[root@linux-node1 ~]#
2. Configuration: database connections
/etc/nova/nova.conf
In the [api_database] and [database] sections, configure the database connections:
[api_database]
...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
The result of the change:
[root@linux-node1 ~]# grep -n '^[a-Z]' /etc/nova/nova.conf
2168:connection = mysql+pymysql://nova:nova@192.168.56.11/nova_api
3128:connection = mysql+pymysql://nova:nova@192.168.56.11/nova
[root@linux-node1 ~]#
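This assumes the nova and nova_api databases already exist and that the nova user was granted access with the password used in the connection strings above. If not, a minimal creation sketch (run as the MySQL root user):
mysql -u root -p -e "CREATE DATABASE nova;"
mysql -u root -p -e "CREATE DATABASE nova_api;"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';"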
Sync the schema into the MySQL databases; warnings here can be ignored:
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage db sync" nova
[root@linux-node1 ~]#
[root@linux-node1 ~]# mysql -h192.168.56.11 -unova -pnova -e "use nova;show tables;"
+--------------------------------------------+
| Tables_in_nova                             |
+--------------------------------------------+
| agent_builds                               |
| aggregate_hosts                            |
| aggregate_metadata                         |
| aggregates                                 |
| allocations                                |
| block_device_mapping                       |
| bw_usage_cache                             |
| cells                                      |
| certificates                               |
| compute_nodes                              |
| console_pools                              |
| consoles                                   |
| dns_domains                                |
| fixed_ips                                  |
| floating_ips                               |
| instance_actions                           |
| instance_actions_events                    |
| instance_extra                             |
| instance_faults                            |
| instance_group_member                      |
| instance_group_policy                      |
| instance_groups                            |
| instance_id_mappings                       |
| instance_info_caches                       |
| instance_metadata                          |
| instance_system_metadata                   |
| instance_type_extra_specs                  |
| instance_type_projects                     |
| instance_types                             |
| instances                                  |
| inventories                                |
| key_pairs                                  |
| migrate_version                            |
| migrations                                 |
| networks                                   |
| pci_devices                                |
| project_user_quotas                        |
| provider_fw_rules                          |
| quota_classes                              |
| quota_usages                               |
| quotas                                     |
| reservations                               |
| resource_provider_aggregates               |
| resource_providers                         |
| s3_images                                  |
| security_group_default_rules               |
| security_group_instance_association        |
| security_group_rules                       |
| security_groups                            |
| services                                   |
| shadow_agent_builds                        |
| shadow_aggregate_hosts                     |
| shadow_aggregate_metadata                  |
| shadow_aggregates                          |
| shadow_block_device_mapping                |
| shadow_bw_usage_cache                      |
| shadow_cells                               |
| shadow_certificates                        |
| shadow_compute_nodes                       |
| shadow_console_pools                       |
| shadow_consoles                            |
| shadow_dns_domains                         |
| shadow_fixed_ips                           |
| shadow_floating_ips                        |
| shadow_instance_actions                    |
| shadow_instance_actions_events             |
| shadow_instance_extra                      |
| shadow_instance_faults                     |
| shadow_instance_group_member               |
| shadow_instance_group_policy               |
| shadow_instance_groups                     |
| shadow_instance_id_mappings                |
| shadow_instance_info_caches                |
| shadow_instance_metadata                   |
| shadow_instance_system_metadata            |
| shadow_instance_type_extra_specs           |
| shadow_instance_type_projects              |
| shadow_instance_types                      |
| shadow_instances                           |
| shadow_key_pairs                           |
| shadow_migrate_version                     |
| shadow_migrations                          |
| shadow_networks                            |
| shadow_pci_devices                         |
| shadow_project_user_quotas                 |
| shadow_provider_fw_rules                   |
| shadow_quota_classes                       |
| shadow_quota_usages                        |
| shadow_quotas                              |
| shadow_reservations                        |
| shadow_s3_images                           |
| shadow_security_group_default_rules        |
| shadow_security_group_instance_association |
| shadow_security_group_rules                |
| shadow_security_groups                     |
| shadow_services                            |
| shadow_snapshot_id_mappings                |
| shadow_snapshots                           |
| shadow_task_log                            |
| shadow_virtual_interfaces                  |
| shadow_volume_id_mappings                  |
| shadow_volume_usage_cache                  |
| snapshot_id_mappings                       |
| snapshots                                  |
| tags                                       |
| task_log                                   |
| virtual_interfaces                         |
| volume_id_mappings                         |
| volume_usage_cache                         |
+--------------------------------------------+
[root@linux-node1 ~]# mysql -h192.168.56.11 -unova -pnova -e "use nova_api;show tables;"
+--------------------+
| Tables_in_nova_api |
+--------------------+
| build_requests     |
| cell_mappings      |
| flavor_extra_specs |
| flavor_projects    |
| flavors            |
| host_mappings      |
| instance_mappings  |
| migrate_version    |
| request_specs      |
+--------------------+
[root@linux-node1 ~]#
3. Configuration: Keystone
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://192.168.56.11:5000
auth_url = http://192.168.56.11:35357
memcached_servers = 192.168.56.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
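Optionally, the nova service credentials themselves can be verified with a one-off token request. A sketch; the domain, project and password values follow the configuration above:
openstack --os-auth-url http://192.168.56.11:35357/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name service --os-username nova --os-password nova \
  token issue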
4. Configuration: RabbitMQ
Configure RabbitMQ access; the Nova components use it to talk to each other.
In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure access to the RabbitMQ message queue:
[DEFAULT]
...
rpc_backend = rabbit

[oslo_messaging_rabbit]
...
rabbit_host = 192.168.56.11
rabbit_userid = openstack
rabbit_password = openstack
You can also uncomment the rabbit_port line below these settings; the default is 5672, as the grep output later shows.
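This assumes the openstack RabbitMQ user was created when RabbitMQ itself was set up. If in doubt, it can be checked on the controller node:
# The openstack user should be listed and have permissions on the default vhost
rabbitmqctl list_users
rabbitmqctl list_permissions -p /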
5. Configuration: Nova's own functional modules
In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
...
enabled_apis = osapi_compute,metadata
In the [DEFAULT] section; this is not configured here and was marked with strikethrough:
[DEFAULT]
...
my_ip = 10.0.0.11
In the [DEFAULT] section, enable the Networking service.
By default, the Compute service uses its built-in firewall service. Since the Networking service includes its own firewall,
you must disable the Compute built-in firewall by using the nova.virt.firewall.NoopFirewallDriver driver:
[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
In the [vnc] section, configure the VNC proxy to use the management interface IP address of the controller node (this was $my_ip originally):
[vnc]
...
vncserver_listen = 192.168.56.11
vncserver_proxyclient_address = 192.168.56.11
In the [glance] section, configure the location of the Image service API:
[glance]
...
api_servers = http://192.168.56.11:9292
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
[root@linux-node1 ~]# grep -n '^[a-Z]' /etc/nova/nova.conf
267:enabled_apis=osapi_compute,metadata
382:auth_strategy=keystone
1561:firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
1684:use_neutron=True
2119:rpc_backend=rabbit
2168:connection = mysql+pymysql://nova:nova@192.168.56.11/nova_api
3128:connection = mysql+pymysql://nova:nova@192.168.56.11/nova
3354:api_servers= http://192.168.56.11:9292
3523:auth_uri = http://192.168.56.11:5000
3524:auth_url = http://192.168.56.11:35357
3525:memcached_servers = 192.168.56.11:11211
3526:auth_type = password
3527:project_domain_name = default
3528:user_domain_name = default
3529:project_name = service
3530:username = nova
3531:password = nova
4307:lock_path=/var/lib/nova/tmp
4458:rabbit_host=192.168.56.11
4464:rabbit_port=5672
4476:rabbit_userid=openstack
4480:rabbit_password=openstack
5427:vncserver_listen=192.168.56.11
5451:vncserver_proxyclient_address=192.168.56.11
[root@linux-node1 ~]#
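Note that line 1561 in the grep output above still shows the default IptablesFirewallDriver; with Neutron in use it should read firewall_driver = nova.virt.firewall.NoopFirewallDriver, as described in step 5.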
6. Start the Nova services
Start the Nova services and configure them to start when the system boots.
The commands:
systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
The execution looks like this:
[root@linux-node1 ~]# systemctl enable openstack-nova-api.service \
> openstack-nova-consoleauth.service openstack-nova-scheduler.service \
> openstack-nova-conductor.service openstack-nova-novncproxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-api.service to /usr/lib/systemd/system/openstack-nova-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-consoleauth.service to /usr/lib/systemd/system/openstack-nova-consoleauth.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-scheduler.service to /usr/lib/systemd/system/openstack-nova-scheduler.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-conductor.service to /usr/lib/systemd/system/openstack-nova-conductor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service to /usr/lib/systemd/system/openstack-nova-novncproxy.service.
[root@linux-node1 ~]# systemctl start openstack-nova-api.service \
> openstack-nova-consoleauth.service openstack-nova-scheduler.service \
> openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@linux-node1 ~]#
The following steps were already performed earlier, so they are skipped here and were marked with strikethrough:
1. Source the admin credentials to gain access to admin-only commands:
source admin-openrc
2. To create the service credentials, complete these steps.
Create the nova user:
openstack user create --domain default \
  --password-prompt nova
3. Add the admin role to the nova user:
$ openstack role add --project service --user nova admin
Next, create the nova service entity (this step is actually executed below):
openstack service create --name nova --description "OpenStack Compute" compute
The execution looks like this:
[root@linux-node1 ~]# source admin-openstack.sh
[root@linux-node1 ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | e1e90d1948fb4384a8d2b09edb2c0cf6 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
[root@linux-node1 ~]#
Create the public endpoint. The command:
openstack endpoint create --region RegionOne compute public http://192.168.56.11:8774/v2.1/%\(tenant_id\)s
[root@linux-node1 ~]# openstack endpoint create --region RegionOne compute public http://192.168.56.11:8774/v2.1/%\(tenant_id\)s
+--------------+----------------------------------------------+
| Field        | Value                                        |
+--------------+----------------------------------------------+
| enabled      | True                                         |
| id           | 7017bf5b4990451296c6b51aff13e6f4             |
| interface    | public                                       |
| region       | RegionOne                                    |
| region_id    | RegionOne                                    |
| service_id   | e1e90d1948fb4384a8d2b09edb2c0cf6             |
| service_name | nova                                         |
| service_type | compute                                      |
| url          | http://192.168.56.11:8774/v2.1/%(tenant_id)s |
+--------------+----------------------------------------------+
[root@linux-node1 ~]#
Create the internal endpoint. The command:
openstack endpoint create --region RegionOne compute internal http://192.168.56.11:8774/v2.1/%\(tenant_id\)s
The operation:
[root@linux-node1 ~]# openstack endpoint create --region RegionOne compute internal http://192.168.56.11:8774/v2.1/%\(tenant_id\)s
+--------------+----------------------------------------------+
| Field        | Value                                        |
+--------------+----------------------------------------------+
| enabled      | True                                         |
| id           | b3325481dd704aeb94c02c48b97e0991             |
| interface    | internal                                     |
| region       | RegionOne                                    |
| region_id    | RegionOne                                    |
| service_id   | e1e90d1948fb4384a8d2b09edb2c0cf6             |
| service_name | nova                                         |
| service_type | compute                                      |
| url          | http://192.168.56.11:8774/v2.1/%(tenant_id)s |
+--------------+----------------------------------------------+
[root@linux-node1 ~]#
Create the admin endpoint. The command:
openstack endpoint create --region RegionOne compute admin http://192.168.56.11:8774/v2.1/%\(tenant_id\)s
The operation:
[root@linux-node1 ~]# openstack endpoint create --region RegionOne compute admin http://192.168.56.11:8774/v2.1/%\(tenant_id\)s
+--------------+----------------------------------------------+
| Field        | Value                                        |
+--------------+----------------------------------------------+
| enabled      | True                                         |
| id           | 8ce0f94f658141ce94aa339b43c48eea             |
| interface    | admin                                        |
| region       | RegionOne                                    |
| region_id    | RegionOne                                    |
| service_id   | e1e90d1948fb4384a8d2b09edb2c0cf6             |
| service_name | nova                                         |
| service_type | compute                                      |
| url          | http://192.168.56.11:8774/v2.1/%(tenant_id)s |
+--------------+----------------------------------------------+
[root@linux-node1 ~]#
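All three endpoints can then be listed in one go as a quick check (the --service filter is part of the standard openstack CLI; output omitted here):
openstack endpoint list --service compute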
Check the result:
[root@linux-node1 ~]# openstack host list
+----------------------+-------------+----------+
| Host Name            | Service     | Zone     |
+----------------------+-------------+----------+
| linux-node1.nmap.com | scheduler   | internal |
| linux-node1.nmap.com | consoleauth | internal |
| linux-node1.nmap.com | conductor   | internal |
+----------------------+-------------+----------+
[root@linux-node1 ~]#
Installing and configuring Nova on the compute node
nova-compute manages KVM through libvirt; the compute node is where the virtual machines actually run.
The compute node machine must have VT-x enabled.
Before doing anything on the compute node, sync its clock first:
[root@linux-node2 ~]# ntpdate time1.aliyun.com
18 Feb 12:53:43 ntpdate[3184]: adjust time server 115.28.122.198 offset -0.005554 sec
[root@linux-node2 ~]# date
Sat Feb 18 12:53:45 CST 2017
[root@linux-node2 ~]#
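To keep the clocks from drifting apart again (the service-list check at the end of this article relies on the nodes agreeing on the time), a simple crontab entry on every node would do; a hypothetical example:
# Sync against the same NTP server every 5 minutes
*/5 * * * * /usr/sbin/ntpdate time1.aliyun.com > /dev/null 2>&1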
1. Install the package
[root@linux-node2 ~]# yum install openstack-nova-compute -y
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * epel: mirrors.tuna.tsinghua.edu.cn
 * extras: mirrors.163.com
 * updates: mirrors.aliyun.com
Package 1:openstack-nova-compute-13.1.2-1.el7.noarch already installed and latest version
Nothing to do
[root@linux-node2 ~]#
About novncproxy
novncproxy listens on port 6080; check it on the controller node:
[root@linux-node1 ~]# netstat -lntp | grep 6080
tcp        0      0 0.0.0.0:6080            0.0.0.0:*               LISTEN      13967/python2
[root@linux-node1 ~]#
[root@linux-node1 ~]# ps aux | grep 13967
nova     13967  0.0  1.6 379496 66716 ?        Ss   11:24   0:06 /usr/bin/python2 /usr/bin/nova-novncproxy --web /usr/share/novnc/
root     16309  0.0  0.0 112644   968 pts/0    S+   13:07   0:00 grep --colour=auto 13967
[root@linux-node1 ~]#
2. Modify the compute node's configuration file
The changes on the compute node are almost identical to the controller node's, so you can copy the controller's configuration file over and adjust a few places:
1. The compute node does not configure a database connection. (Leaving the database settings in the copied file actually still works, but it is not good practice.)
2. The compute node's [vnc] section has one extra line.
[root@linux-node1 ~]# ls /etc/nova/ -l
total 224
-rw-r----- 1 root nova   3673 Oct 10 21:20 api-paste.ini
-rw-r----- 1 root nova 184346 Feb 18 11:20 nova.conf
-rw-r----- 1 root nova  27914 Oct 10 21:20 policy.json
-rw-r--r-- 1 root root     72 Oct 13 20:01 release
-rw-r----- 1 root nova    966 Oct 10 21:20 rootwrap.conf
[root@linux-node1 ~]#
[root@linux-node1 ~]# rsync -avz /etc/nova/nova.conf root@192.168.56.12:/etc/nova/
root@192.168.56.12's password:
sending incremental file list

sent 41 bytes  received 12 bytes  15.14 bytes/sec
total size is 184346  speedup is 3478.23
[root@linux-node1 ~]#
[root@linux-node2 ~]# ll /etc/nova/
total 224
-rw-r----- 1 root nova   3673 Oct 10 21:20 api-paste.ini
-rw-r----- 1 root nova 184346 Feb 18 11:20 nova.conf
-rw-r----- 1 root nova  27914 Oct 10 21:20 policy.json
-rw-r--r-- 1 root root     72 Oct 13 20:01 release
-rw-r----- 1 root nova    966 Oct 10 21:20 rootwrap.conf
[root@linux-node2 ~]#
In the [vnc] section, enable and configure remote console access:
[vnc]
...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
The server component listens on all IP addresses, while the proxy component listens only on the management interface IP address of the compute node.
The base URL is simply the location where a web browser can reach the remote consoles of instances on this compute node.
Here it is changed to the following (the exact values are confirmed by the grep output further down):
[vnc]
...
enabled = true
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.56.12
novncproxy_base_url = http://192.168.56.11:6080/vnc_auto.html
Determine whether the compute node supports hardware acceleration for virtual machines:
[root@linux-node2 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
4
[root@linux-node2 ~]#
If this command returns a value of one or greater, your compute node supports hardware acceleration and no extra configuration is needed.
If it returns zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
In that case, edit the [libvirt] section in /etc/nova/nova.conf as follows:
[libvirt]
...
virt_type = qemu
Also uncomment the virt_type line below. Since this node supports hardware acceleration (see the egrep check above), it is set to kvm:
[libvirt]
...
virt_type = kvm
With that, the [libvirt] section is done as well.
[root@linux-node2 ~]# grep -n '^[a-Z]' /etc/nova/nova.conf
267:enabled_apis=osapi_compute,metadata
382:auth_strategy=keystone
1561:firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
1684:use_neutron=True
2119:rpc_backend=rabbit
3354:api_servers= http://192.168.56.11:9292
3523:auth_uri = http://192.168.56.11:5000
3524:auth_url = http://192.168.56.11:35357
3525:memcached_servers = 192.168.56.11:11211
3526:auth_type = password
3527:project_domain_name = default
3528:user_domain_name = default
3529:project_name = service
3530:username = nova
3531:password = nova
3683:virt_type=kvm
4307:lock_path=/var/lib/nova/tmp
4458:rabbit_host=192.168.56.11
4464:rabbit_port=5672
4476:rabbit_userid=openstack
4480:rabbit_password=openstack
5385:enabled=true
5427:vncserver_listen=0.0.0.0
5451:vncserver_proxyclient_address=192.168.56.12
5532:novncproxy_base_url=http://192.168.56.11:6080/vnc_auto.html
[root@linux-node2 ~]#
3. Start the services and check their status
Start the Compute service and its dependencies, and configure them to start automatically at boot:
[root@linux-node2 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
[root@linux-node2 ~]# systemctl start libvirtd.service openstack-nova-compute.service
[root@linux-node2 ~]#
[root@linux-node1 ~]# source admin-openstack.sh
[root@linux-node1 ~]# openstack host list
+----------------------+-------------+----------+
| Host Name            | Service     | Zone     |
+----------------------+-------------+----------+
| linux-node1.nmap.com | scheduler   | internal |
| linux-node1.nmap.com | consoleauth | internal |
| linux-node1.nmap.com | conductor   | internal |
| linux-node2.nmap.com | compute     | nova     |
+----------------------+-------------+----------+
[root@linux-node1 ~]#
List the Nova services on the controller node. The Updated_at timestamps should be almost identical; if they diverge too much, creating virtual machines may fail:
[root@linux-node1 ~]# nova service-list
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                 | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-scheduler   | linux-node1.nmap.com | internal | enabled | up    | 2017-02-18T05:41:14.000000 | -               |
| 2  | nova-consoleauth | linux-node1.nmap.com | internal | enabled | up    | 2017-02-18T05:41:14.000000 | -               |
| 3  | nova-conductor   | linux-node1.nmap.com | internal | enabled | up    | 2017-02-18T05:41:13.000000 | -               |
| 7  | nova-compute     | linux-node2.nmap.com | nova     | enabled | up    | 2017-02-18T05:41:13.000000 | -               |
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
[root@linux-node1 ~]#
The following command tests whether Nova can talk to Glance:
[root@linux-node1 ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| 9969eaa3-0296-48cc-a42e-a02251b778a6 | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+
[root@linux-node1 ~]#