Environment:
Two hosts running CentOS 7 minimal.
IP addresses:
10.132.226.103/24 (controller)
10.132.226.104/24 (compute1)
1. Configure host names.
Edit the /etc/hostname file on each host, replace its contents with controller and compute1 respectively, save, and reboot the hosts.
2. Configure name resolution:
Edit the /etc/hosts file on both hosts and add the following entries:
10.132.226.103 controller
10.132.226.104 compute1
3. Configure the Network Time Protocol (NTP)
1) Install chrony on the controller node:
[root@controller ~]# yum install -y chrony
2) Configure chrony on the controller node by editing the /etc/chrony.conf file:
#~~~~~~~~~~ add the allow directive ~~~~~~~~~~#
# Allow NTP client access from local network.
#allow 192.168.0.0/16
allow 10.132.226.104
3) Enable and start the chronyd service:
[root@controller ~]# systemctl enable chronyd.service
[root@controller ~]# systemctl start chronyd.service
4) Install chrony on the compute node:
[root@compute1 ~]# yum install -y chrony
5) Configure the compute node by editing the /etc/chrony.conf file:
[root@compute1 ~]# vim /etc/chrony.conf
#~~~~~~~~ comment out the other remote time-sync servers and add the controller node ~~~~~~~#
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server controller iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
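In sketch form, the whole time-sync relationship comes down to one changed line in each node's /etc/chrony.conf (using the addresses above):

```
# controller — let the compute node sync its clock from this host
allow 10.132.226.104

# compute1 — sync from the controller instead of the public pool
server controller iburst
```

After restarting chronyd on both nodes, running chronyc sources on compute1 should list controller as a time source.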
4. Install the OpenStack packages (required on both the controller and compute nodes)
1) Changing the repositories to the Aliyun mirror (otherwise errors may occur; even after switching, downloads are still very slow).
Note: switching to the Aliyun mirror caused problems later in the installation, so it is not recommended.
2) Enable the OpenStack repository
# yum install centos-release-openstack-queens
Amendment: after enabling the OpenStack repository, you can edit the CentOS-OpenStack-queens.repo file to switch mirrors:
[centos-openstack-queens]
name=CentOS-7 - OpenStack queens
#baseurl=http://mirror.centos.org/centos/7/cloud/$basearch/openstack-queens/
baseurl=http://mirrors.163.com/centos/7.5.1804/cloud/x86_64/openstack-queens/   ===> NetEase mirror
3) Upgrade the packages on all nodes
# yum upgrade
4) Install the OpenStack client, as well as the openstack-selinux package (to automatically manage security policies for the OpenStack services)
# yum install python-openstackclient
# yum install openstack-selinux
5. Install the SQL database (MariaDB) (controller node only)
1) Install:
# yum install mariadb mariadb-server python2-PyMySQL
2) Create and edit the /etc/my.cnf.d/openstack.cnf file
Create a [mysqld] section and set the bind-address key to the management IP address of the controller node, to allow access by the other nodes over the management network. Set additional keys to enable useful options and the UTF-8 character set:
[mysqld]
bind-address = 10.132.226.103

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
3) Start the MariaDB service and enable it at boot:
# systemctl enable mariadb.service
# systemctl start mariadb.service
4) Run the mysql_secure_installation script to perform the initial database hardening
# mysql_secure_installation
Addition: it is not yet clear whether the other nodes will need remote access to the controller's database, so create a root account that can log in remotely now:
MariaDB [(none)]> grant all privileges on *.* to 'root'@'%' identified by '***';
If identified by is omitted, the statement fails with a "no matching rows" error.
6. Install and configure the message queue (controller node only)
1) Install and start
[root@controller ~]# yum install -y rabbitmq-server
[root@controller ~]# systemctl enable rabbitmq-server.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service to /usr/lib/systemd/system/rabbitmq-server.service.
[root@controller ~]# systemctl start rabbitmq-server.service
2) Add an OpenStack user
[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
Password: the RabbitMQ message queue OpenStack user's password is RABBIT_PASS
3) Allow the openstack user configure, write, and read access
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
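The three ".*" arguments are regular expressions that scope, in order, the configure, write, and read permissions on the default vhost "/". As a sketch (the commands are printed rather than executed here, since they need a live broker):

```shell
# Sketch only: the same grant with the vhost spelled out, plus a command
# to verify the result; printed, not run against a live RabbitMQ broker.
GRANT_CMD="rabbitmqctl set_permissions -p / openstack '.*' '.*' '.*'"
VERIFY_CMD="rabbitmqctl list_permissions -p /"
echo "$GRANT_CMD"
echo "$VERIFY_CMD"
```

Running the list_permissions command on the controller should show the openstack user with ".*" in all three columns.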
7. Install and configure Memcached (controller node)
1) Install
[root@controller ~]# yum install -y memcached python-memcached
2) Edit the /etc/sysconfig/memcached file and complete the following: configure the service to use the management IP address of the controller node (10.132.226.103), to enable access by the other nodes via the management network:
OPTIONS="-l 127.0.0.1,::1,controller,compute1"   ===> modify this line
Note: the installation guide adds only controller here; I added compute1 on my own initiative, so if problems arise, check this setting first! (The -l option lists local addresses for memcached to bind, so naming the remote host compute1 here is most likely ineffective; remote nodes gain access because the controller's own address is bound.)
3) Enable memcached at boot and start it
[root@controller ~]# systemctl enable memcached.service
Created symlink from /etc/systemd/system/multi-user.target.wants/memcached.service to /usr/lib/systemd/system/memcached.service.
[root@controller ~]# systemctl start memcached.service
8. Install and configure Etcd (controller node):
OpenStack services may use Etcd, a distributed and reliable key-value store, for distributed key locking, storing configuration, tracking service liveness, and other scenarios.
1) Install
[root@controller ~]# yum install etcd
2) Edit the /etc/etcd/etcd.conf file and modify the following entries
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.132.226.103:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.132.226.103:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.132.226.103:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.132.226.103:2379"
ETCD_INITIAL_CLUSTER="controller=http://10.132.226.103:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
3) Start the etcd service and enable it at boot
[root@controller etcd]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@controller etcd]# systemctl start etcd
9. Install and configure Keystone (controller node)
1) Database setup: create the keystone database and grant appropriate privileges
MariaDB [(none)]> create database keystone;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller' IDENTIFIED BY 'KEYSTONE_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
Query OK, 0 rows affected (0.00 sec)
Note: the keystone database password is KEYSTONE_DBPASS
2) Install and configure the components
[root@controller etcd]# yum install openstack-keystone httpd mod_wsgi
3) Edit the /etc/keystone/keystone.conf file
(1) In the [database] section, configure database access
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
Note: comment out or remove any other connection options
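Sketched as a shell fragment, the connection URL breaks down into driver, credentials, host, and database name; assembling it from its parts (with the same KEYSTONE_DBPASS placeholder) makes typos easier to avoid:

```shell
# Assemble the SQLAlchemy connection URL from its parts; the password is the
# KEYSTONE_DBPASS placeholder used throughout this guide.
DB_USER=keystone
DB_PASS=KEYSTONE_DBPASS
DB_HOST=controller
DB_NAME=keystone
CONNECTION="mysql+pymysql://${DB_USER}:${DB_PASS}@${DB_HOST}/${DB_NAME}"
echo "$CONNECTION"   # mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
```

The same pattern applies to every connection option in the later sections, with only the user, password, and database name changing.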
(2) In the [token] section, configure the Fernet token provider:
[token]
#~~~~~~ several lines omitted ~~~~~~#
provider = fernet
4) Populate the Identity service database:
[root@controller etcd]# su -s /bin/sh -c "keystone-manage db_sync" keystone
Note: once this finishes, check whether tables were generated in the keystone database; if none were, the configuration is probably wrong. The log at /var/log/keystone/keystone.log may help.
5) Initialize the Fernet key repositories:
[root@controller etcd]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller etcd]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
6) Bootstrap the Identity service:
[root@controller etcd]# keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
> --bootstrap-admin-url http://controller:5000/v3/ \
> --bootstrap-internal-url http://controller:5000/v3/ \
> --bootstrap-public-url http://controller:5000/v3/ \
> --bootstrap-region-id RegionOne
Note: the admin user's password here is ADMIN_PASS
7) Configure the Apache HTTP server
(1) Edit the /etc/httpd/conf/httpd.conf file and configure the ServerName option to reference the controller node
ServerName controller
(2) Create a link to the /usr/share/keystone/wsgi-keystone.conf file:
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
(3) Start Apache HTTP
[root@controller ~]# systemctl enable httpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@controller ~]# systemctl start httpd.service
8) Configure the administrative account
[root@controller ~]# export OS_USERNAME=admin
[root@controller ~]# export OS_PASSWORD=ADMIN_PASS
[root@controller ~]# export OS_PROJECT_NAME=admin
[root@controller ~]# export OS_USER_DOMAIN_NAME=Default
[root@controller ~]# export OS_PROJECT_DOMAIN_NAME=Default
[root@controller ~]# export OS_AUTH_URL=http://controller:35357/v3
[root@controller ~]# export OS_IDENTITY_API_VERSION=3
Note: ADMIN_PASS is the password of the keystone admin user
10. Create a domain, projects, users, and roles (controller node)
The Identity service provides authentication services for each OpenStack service, using a combination of domains, projects, users, and roles.
1) Although a "default" domain already exists from the keystone-manage bootstrap step in this guide, the formal way to create a new domain is:
[root@controller fernet-keys]# openstack domain create --description "An Example Domain" example
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | An Example Domain |
| enabled | True |
| id | 59a70c50c06e478cb08916c7ef070f56 |
| name | example |
| tags | [] |
+-------------+----------------------------------+
2) Create the service project:
[root@controller keystone]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | default |
| enabled | True |
| id | 873afbbb1c8d4282b92e987bc5d3027c |
| is_domain | False |
| name | service |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+
3) Create the demo project and user: regular (non-admin) tasks should use an unprivileged project and user
(1) Create the demo project:
[root@controller keystone]# openstack project create --domain default --description "Demo Project" demo
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Demo Project |
| domain_id | default |
| enabled | True |
| id | b9d32240eedf44568aef4d6cfa5ee086 |
| is_domain | False |
| name | demo |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+
Note: when creating additional users for the demo project, there is no need to create the project again
(2) Create the demo user:
[root@controller keystone]# openstack user create --domain default --password-prompt demo
User Password:          ===> the password is demo123
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | b59804bf67b74986ae3b56190be59d45 |
| name | demo |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
(3) Create the user role:
[root@controller keystone]# openstack role create user
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | 6c7aced4ae9143cbab48b122470b4e3a |
| name | user |
+-----------+----------------------------------+
(4) Add the user role to the demo project and user:
[root@controller keystone]# openstack role add --project demo --user demo user
11. Verify operation (controller node):
Verify the Identity service before installing the other services
1) Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:
[root@controller keystone]# unset OS_AUTH_URL OS_PASSWORD
2) As the admin user, request an authentication token:
[root@controller keystone]# openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue
Password:          ===> the password here is ADMIN_PASS
+------------+----------------------------------------------------------------+
| Field | Value |
+------------+----------------------------------------------------------------+
| expires | 2018-07-25T15:11:22+0000 |
| id | gAAAAABbWIUKJK3_S_FDcnJ1OdedaHbbeRX-ty8Ey1l93-RFOxT0pD9dn_LCyY |
| | RVPCB6MoHOXFfaPx6ud7HDowDTzSOQDI5CPaMj6Zh6Utu6uNjPEZk9yluFIbZH |
| | gKDRVO5fTxEcV2hAtSCLtW8HNq9xQ0Qij0s-o5VSW-MwmzI_w35abMLiTX4 |
| project_id | a6fa6483ff3d4fa0b08c63a4c42ac82e |
| user_id | 113a25c4fa5144d5870062af8a8b72b5 |
+------------+----------------------------------------------------------------+
Note: the keystone admin user's password is ADMIN_PASS
3) As the demo user, request an authentication token:
[root@controller keystone]# openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name demo --os-username demo token issue
Password:
+------------+----------------------------------------------------------------+
| Field | Value |
+------------+----------------------------------------------------------------+
| expires | 2018-07-25T15:20:58+0000 |
| id | gAAAAABbWIdKca9AKlNiolngK-0dBTliDXSYAiT0HtLFSNwvMBJ4UaphfqzP33 |
| | nOw_nyO0JWInl40NYgipX_tcS72lP9wWvUnd9bF3Gz9lCFjjqfzMAwXOCfJNRn |
| | GxvqJ_Wit_0Jxo77XpeZ2xIGTfcarJATGVDpnxOSwLzZTsP1J21p95pT_Ng |
| project_id | b9d32240eedf44568aef4d6cfa5ee086 |
| user_id | b59804bf67b74986ae3b56190be59d45 |
+------------+----------------------------------------------------------------+
12. Create client environment scripts (controller node)
The previous sections used a combination of environment variables and command options to interact with the Identity service through the openstack client. To increase the efficiency of client operations, OpenStack supports simple client environment scripts, also known as OpenRC files. These scripts typically contain common options for all clients, but also support unique options.
1) Create and edit the admin-openrc file with the following contents:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
2) Create and edit the demo-openrc file with the following contents:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
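The two OpenRC files differ only in the project, user, and password values, so a hypothetical helper can generate either one (make_openrc is my own name, not an OpenStack tool; the password argument is the usual placeholder):

```shell
# Hypothetical helper: emit an OpenRC file for a given user/project name
# and password placeholder, keeping the two files structurally identical.
make_openrc() {
  cat <<EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=$1
export OS_USERNAME=$1
export OS_PASSWORD=$2
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
}
make_openrc demo DEMO_PASS > demo-openrc
```

Running make_openrc admin ADMIN_PASS > admin-openrc produces the admin variant the same way.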
3) Using the scripts: load the admin-openrc file to populate the environment variables with the location of the Identity service and the admin project and user credentials, then request an authentication token
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack token issue
+------------+----------------------------------------------------------------+
| Field | Value |
+------------+----------------------------------------------------------------+
| expires | 2018-07-25T15:29:34+0000 |
| id | gAAAAABbWIlO5vVfBz1qlVO2iMI3Eg1nt498wav_gj2Ks3RI47LmU-fRFmiJNh |
| | BU8zN9FcvzW5scu2ngG7u81P2SqSYtH4kLfSSwiEiDC4NUtD2e8MCI3Sm6MyBq |
| | XsJRMyIMTw1yL28oHPT_x9NKUwNVUe2Vko4vM49HK4X5rFm0mS0j2uXUp2c |
| project_id | a6fa6483ff3d4fa0b08c63a4c42ac82e |
| user_id | 113a25c4fa5144d5870062af8a8b72b5 |
+------------+----------------------------------------------------------------+
13. Install and configure Glance (controller node)
1) Create the glance database and user
MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller' IDENTIFIED BY 'GLANCE_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
Note: the glance database user's password is GLANCE_DBPASS
2) Create the glance OpenStack user
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack user create --domain default --password-prompt glance
User Password:          ===> glance123
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 9a23f1aa715247eda3e2d6773966cec6 |
| name | glance |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
Note: the OpenStack glance user's password is glance123
3) Add the admin role to the glance user and service project:
[root@controller ~]# openstack role add --project service --user glance admin
4) Create the glance service entity
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | 55aab5302a5940709a49a899d678b578 |
| name | glance |
| type | image |
+-------------+----------------------------------+
5) Create the Image service API endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | e6efcd617e5e497a8be2d28e902f16b7 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 55aab5302a5940709a49a899d678b578 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 3d8f89a5b3eb418c916512cd35cf54c7 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 55aab5302a5940709a49a899d678b578 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 17c3f4400aa64188a190d13fb5e067f0 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 55aab5302a5940709a49a899d678b578 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
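The three endpoint-create calls above differ only in the interface argument, so a loop can generate them; the commands are printed here as a sketch rather than run, since they need a live Identity service:

```shell
# Sketch: generate the three endpoint-create commands (public/internal/admin)
# for the image service; printed, not executed.
CMDS=$(for iface in public internal admin; do
  echo "openstack endpoint create --region RegionOne image $iface http://controller:9292"
done)
echo "$CMDS"
```

The same pattern recurs for the compute and placement endpoints later on, with only the service name and URL changing.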
6) Install glance
[root@controller ~]# yum install -y openstack-glance
7) Edit the /etc/glance/glance-api.conf file
(1) In the [database] section, add the connection option; comment out or remove any other connection options
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
(2) In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access
#~~~~~~~~~ several lines omitted ~~~~~~~~#
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance123   ===> the OpenStack glance user's password
#~~~~~~~~~ several lines omitted ~~~~~~~~#
[paste_deploy]
flavor = keystone
Note: comment out or remove any other options in the [keystone_authtoken] section
(3) In the [glance_store] section, configure the local file system store and the location of image files by adding the following:
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
8) Edit the /etc/glance/glance-registry.conf file and complete the following
(1) In the [database] section, add the connection option; comment out or remove any other connection options
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
(2) In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance123   ===> the OpenStack glance user's password
[paste_deploy]
flavor = keystone
Note: comment out or remove any other options in the [keystone_authtoken] section
9) Populate the Image service database
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1336: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade
  expire_on_commit=expire_on_commit, _conf=conf)
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
..........
Note: INFO output here generally means the population succeeded; you can also check afterwards whether tables were generated in the glance database.
10) Start the Image services and configure them to start at boot
[root@controller ~]# systemctl enable openstack-glance-api.service openstack-glance-registry.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-api.service to /usr/lib/systemd/system/openstack-glance-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-registry.service to /usr/lib/systemd/system/openstack-glance-registry.service.
[root@controller ~]# systemctl start openstack-glance-api.service openstack-glance-registry.service
11) Verify operation
(1) Source the environment script
[root@controller ~]# . admin-openrc
(2) Download the source image
[root@controller ~]# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
(3) Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility so that all projects can access it; then confirm the upload and verify its attributes:
[root@controller ~]# openstack image create "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | 443b7623e27ecf03dc9e01ee93f67afe |
| container_format | bare |
| created_at | 2018-07-26T07:30:24Z |
| disk_format | qcow2 |
| file | /v2/images/7feafdcd-374a-4dde-9604-3f2123b62756/file |
| id | 7feafdcd-374a-4dde-9604-3f2123b62756 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | a6fa6483ff3d4fa0b08c63a4c42ac82e |
| protected | False |
| schema | /v2/schemas/image |
| size | 12716032 |
| status | active |
| tags | |
| updated_at | 2018-07-26T07:30:25Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 7feafdcd-374a-4dde-9604-3f2123b62756 | cirros | active |
+--------------------------------------+--------+--------+
15. Pre-Nova environment configuration (controller node)
Before installing and configuring the Compute service, you must create the databases, service credentials, and API endpoints
1) Create the databases and grant privileges
MariaDB [(none)]> CREATE DATABASE nova_api;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> CREATE DATABASE nova_cell0;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'controller' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'controller' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
Note: the nova database user's password is NOVA_DBPASS
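The nine GRANT statements follow a regular pattern over three databases and three host specifications, so generating them avoids the typos that are easy to make when entering them by hand (printed here rather than piped into mysql):

```shell
# Sketch: emit the nine GRANT statements for the three nova databases;
# printed to a file rather than executed against the database.
for db in nova_api nova nova_cell0; do
  for host in localhost '%' controller; do
    echo "GRANT ALL PRIVILEGES ON ${db}.* TO 'nova'@'${host}' IDENTIFIED BY 'NOVA_DBPASS';"
  done
done > nova-grants.sql
cat nova-grants.sql
```

The generated file could then be fed to the database with mysql -u root -p < nova-grants.sql instead of typing each statement interactively.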
2) Create the Compute service credentials
(1) Create the nova user:
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack user create --domain default --password-prompt nova
User Password:          ===> nova123
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | efccc87d880a4ec2b9f0daa72ed58aa6 |
| name | nova |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
Note: the nova user's password is nova123
(2) Add the admin role to the nova user
[root@controller ~]# openstack role add --project service --user nova admin
(3) Create the nova service entity
[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | fcb02966d25c415c974856ae9d306c51 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
3) Create the Compute API service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 6a26d75718ae44038911e9ab73078f3f |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | fcb02966d25c415c974856ae9d306c51 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 1b7d8a52ed404dd985eb569375fe449c |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | fcb02966d25c415c974856ae9d306c51 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 2e80238a040440fcb2fb5801f7cf5853 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | fcb02966d25c415c974856ae9d306c51 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
4) Create a Placement service user
[root@controller ~]# openstack user create --domain default --password-prompt placement
User Password:          ===> the password is placement123
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | f5c22dc2fabe49ebbd733e97d4fb9a5f |
| name | placement |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
Note: the placement user's password is placement123
5) Add the placement user to the service project with the admin role
[root@controller ~]# openstack role add --project service --user placement admin
6) Create the Placement API entry in the service catalog:
[root@controller ~]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Placement API |
| enabled | True |
| id | a26c6f356a204d8eb071fc3c8085339b |
| name | placement |
| type | placement |
+-------------+----------------------------------+
7) Create the Placement API service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 845d90d6bc19454d827058658a77de2d |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | a26c6f356a204d8eb071fc3c8085339b |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 85db113b78324c25b3d3fe98969ec401 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | a26c6f356a204d8eb071fc3c8085339b |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | cbe88b1e991f4782a23a3536e3d92556 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | a26c6f356a204d8eb071fc3c8085339b |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
16. Install and configure Nova (controller node)
1) Install
[root@controller ~]# yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
2) Edit the /etc/nova/nova.conf file
(1) In the [DEFAULT] section, enable only the compute and metadata APIs, and configure RabbitMQ message queue access, by adding the following
[DEFAULT]
## Enable only the compute and metadata APIs
enabled_apis=osapi_compute,metadata
## Configure RabbitMQ message queue access
transport_url = rabbit://openstack:RABBIT_PASS@controller
## Configure the my_ip option to use the management interface IP address of the controller node
my_ip=10.132.226.103
## Enable support for the Networking service
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver   ===> enable this firewall driver
## By default, Compute uses an internal firewall driver. Since the Networking service includes its own
## firewall driver, the Compute firewall driver must be disabled with nova.virt.firewall.NoopFirewallDriver.
(2) In the [api_database] and [database] sections, configure database access
[api_database]
#connection=mysql://nova:nova@localhost/nova
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
(3) In the [api] and [keystone_authtoken] sections, configure Identity service access
[api]
auth_strategy=keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova123
(4) In the [vnc] section, configure the VNC proxy to use the management interface IP address of the controller node:
[vnc]
enabled=true
server_listen = $my_ip
server_proxyclient_address=$my_ip
(5) In the [glance] section, configure the location of the Image service API
[glance]
api_servers=http://controller:9292
(6) In the [oslo_concurrency] section, configure the lock path
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
(7) In the [placement] section, configure the Placement API
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement123   ===> the placement user's password
3) Add the following to /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>
4) Restart the httpd service
[root@controller ~]# systemctl restart httpd
5) Populate the nova-api database
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
Note: at this point, check whether tables were generated in the nova_api database
6) Register the cell0 database
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
7) Create the cell1 cell
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
c2a802c6-4d8e-470f-953a-d4744ab507a7   ===> output
8) Populate the nova database
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.')
  result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.')
  result = self._query(query)
Note: check whether tables were generated in the nova database
9) Verify that nova cell0 and cell1 are registered correctly
[root@controller ~]# nova-manage cell_v2 list_cells
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| Name | UUID | Transport URL | Database Connection |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova:****@controller/nova_cell0 |
| cell1 | c2a802c6-4d8e-470f-953a-d4744ab507a7 | rabbit://openstack:****@controller | mysql+pymysql://nova:****@controller/nova |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
10) Start the Nova services and configure them to start at boot
[root@controller ~]# systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-api.service to /usr/lib/systemd/system/openstack-nova-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-consoleauth.service to /usr/lib/systemd/system/openstack-nova-consoleauth.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-scheduler.service to /usr/lib/systemd/system/openstack-nova-scheduler.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-conductor.service to /usr/lib/systemd/system/openstack-nova-conductor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service to /usr/lib/systemd/system/openstack-nova-novncproxy.service.
[root@controller ~]# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
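For repeatability, the enable/start pairs above can be scripted. The sketch below is a dry run: it only echoes the commands (using the five service names from this guide) so it can be reviewed before being run on a real controller.

```shell
# Dry-run sketch: print the enable/start commands for the five controller-side
# Nova services from this guide, rather than executing them.
services="openstack-nova-api openstack-nova-consoleauth openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy"
for svc in $services; do
  echo "systemctl enable ${svc}.service"
  echo "systemctl start ${svc}.service"
done
```

Dropping the `echo` would execute the commands for real.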
17. Install Nova on the compute node
1) Install
[root@compute1 ~]# yum install -y openstack-nova-compute
Note: the installation ran into a problem here; see the next post for the workaround.
2) Edit the /etc/nova/nova.conf file and complete the following steps
(1) Modify the relevant options in the [DEFAULT] section
[DEFAULT]
## Enable only the compute and metadata APIs
enabled_apis = osapi_compute,metadata
## Configure RabbitMQ message queue access
transport_url = rabbit://openstack:RABBIT_PASS@controller
## Configure the my_ip option
my_ip = 10.132.226.104   ## the compute node's IP
## Enable support for the Networking service
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
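Before merging this fragment into the live nova.conf, it can be written to a scratch file and sanity-checked. This is only a sketch using this guide's example values (RABBIT_PASS is a placeholder, 10.132.226.104 is this guide's compute node IP); /tmp stands in for the real config path.

```shell
# Sketch: write the [DEFAULT] fragment to a scratch file (not the real nova.conf)
# and verify the my_ip line before merging it by hand.
cat > /tmp/nova-default.sample <<'EOF'
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 10.132.226.104
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
EOF
grep -q '^my_ip = 10.132.226.104$' /tmp/nova-default.sample && echo "fragment looks good"
```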
(2) In the [api] and [keystone_authtoken] sections, configure Identity service access:
[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova123
(3) In the [glance] section, configure the location of the Image service API (the original text labeled this [vnc], but api_servers belongs in [glance]):
[glance]
api_servers = http://controller:9292
(4) In the [oslo_concurrency] section, configure the lock path
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
(5) In the [placement] section, configure the Placement API
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement123
3) Determine whether the compute node supports hardware acceleration for virtual machines
# egrep -c '(vmx|svm)' /proc/cpuinfo
8
If this command returns a value of one or greater, the compute node supports hardware acceleration and usually no extra configuration is needed.
If it returns zero, the compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM. Edit the [libvirt] section of /etc/nova/nova.conf as follows:
[libvirt]
#...
virt_type = qemu
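The zero/non-zero decision above can be wrapped in a small helper. This is a sketch: it only prints the virt_type value to use, it does not modify nova.conf.

```shell
# Sketch: pick virt_type for the [libvirt] section based on the CPU flag count.
count=$(egrep -c '(vmx|svm)' /proc/cpuinfo || true)
if [ "${count:-0}" -ge 1 ]; then
  virt_type=kvm    # hardware acceleration available
else
  virt_type=qemu   # fall back to plain QEMU emulation
fi
echo "virt_type = ${virt_type}"
```

The printed value is what you would put under [libvirt] on this node.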
4) Start the Compute service and its dependencies and configure them to start at boot
[root@compute1 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
[root@compute1 ~]# systemctl start libvirtd.service openstack-nova-compute.service
Note: if the nova-compute service fails to start, check /var/log/nova/nova-compute.log. The error message "AMQP server on controller:5672 is unreachable" likely indicates that the firewall on the controller node is blocking access to port 5672. Configure the firewall to open port 5672 on the controller node, then restart the nova-compute service on the compute node.
5) (On the controller node) Add the compute node to the cell database
(1) Source the admin credentials and confirm the compute host is in the database
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack compute service list --service nova-compute
+----+--------------+----------+------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+--------------+----------+------+---------+-------+----------------------------+
| 9 | nova-compute | compute1 | nova | enabled | up | 2018-07-27T09:51:19.000000 |
+----+--------------+----------+------+---------+-------+----------------------------+
(2) Discover the compute hosts
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': c2a802c6-4d8e-470f-953a-d4744ab507a7
Checking host mapping for compute host 'compute1': 8554856e-43e2-4234-892d-e461b385afec
Creating host mapping for compute host 'compute1': 8554856e-43e2-4234-892d-e461b385afec
Found 1 unmapped computes in cell: c2a802c6-4d8e-470f-953a-d4744ab507a7
[root@controller ~]# vim /etc/nova/nova.conf
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': c2a802c6-4d8e-470f-953a-d4744ab507a7
Found 0 unmapped computes in cell: c2a802c6-4d8e-470f-953a-d4744ab507a7
Note: whenever you add a new compute node, you must run nova-manage cell_v2 discover_hosts on the controller node to register it. Alternatively, you can set an appropriate discovery interval in /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300
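As with the earlier fragments, this option can be staged in a scratch file and read back before touching the live nova.conf; the sketch below uses a /tmp path as a stand-in and the 300-second interval from the text.

```shell
# Sketch: stage the [scheduler] discovery interval in a scratch file,
# then read the value back with awk as a quick check.
cat > /tmp/nova-scheduler.sample <<'EOF'
[scheduler]
discover_hosts_in_cells_interval = 300
EOF
awk -F' = ' '$1 == "discover_hosts_in_cells_interval" {print $2}' /tmp/nova-scheduler.sample
# prints 300
```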
6) Verify operation
(1) Source the admin credentials and list the service components to verify the successful launch and registration of each process:
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-consoleauth | controller | internal | enabled | up | 2018-07-27T11:37:44.000000 |
| 2 | nova-scheduler | controller | internal | enabled | up | 2018-07-27T11:37:44.000000 |
| 3 | nova-conductor | controller | internal | enabled | up | 2018-07-27T11:37:40.000000 |
| 9 | nova-compute | compute1 | nova | enabled | up | 2018-07-27T11:37:42.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
(2) List the API endpoints in the Identity service to verify connectivity with the Identity service:
[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name | Type | Endpoints |
+-----------+-----------+-----------------------------------------+
| glance | image | RegionOne |
| | | admin: http://controller:9292 |
| | | RegionOne |
| | | internal: http://controller:9292 |
| | | RegionOne |
| | | public: http://controller:9292 |
| | | |
| keystone | identity | RegionOne |
| | | public: http://controller:5000/v3/ |
| | | RegionOne |
| | | admin: http://controller:5000/v3/ |
| | | RegionOne |
| | | internal: http://controller:5000/v3/ |
| | | |
| placement | placement | RegionOne |
| | | public: http://controller:8778 |
| | | RegionOne |
| | | internal: http://controller:8778 |
| | | RegionOne |
| | | admin: http://controller:8778 |
| | | |
| nova | compute | RegionOne |
| | | internal: http://controller:8774/v2.1 |
| | | RegionOne |
| | | admin: http://controller:8774/v2.1 |
| | | RegionOne |
| | | public: http://controller:8774/v2.1 |
| | | |
+-----------+-----------+-----------------------------------------+
(3) List images in the Image service to verify connectivity with the Image service:
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 7feafdcd-374a-4dde-9604-3f2123b62756 | cirros | active |
+--------------------------------------+--------+--------+
(4) Check that the cells and the Placement API are working successfully
[root@controller ~]# nova-status upgrade check
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Option "os_region_name" from group "placement" is deprecated. Use option "region-name" from group "placement".
+--------------------------------+
| Upgrade Check Results |
+--------------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Resource Providers |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: API Service Version |
| Result: Success |
| Details: None |
+--------------------------------+
18. Environment setup before installing Neutron (controller node)
1) Create the database and the database user (neutron)
MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller' IDENTIFIED BY 'NEUTRON_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
Note: the neutron database user's password is NEUTRON_DBPASS
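The three GRANT statements differ only in the host part, so they can be generated in a loop. The sketch below just prints the SQL (NEUTRON_DBPASS stays as the placeholder from the text); the output could be piped into the mysql client by hand if desired.

```shell
# Sketch: emit the GRANT statements for the neutron DB user, one per host spec.
for host in localhost controller '%'; do
  printf "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%s' IDENTIFIED BY 'NEUTRON_DBPASS';\n" "$host"
done
```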
2) Create the OpenStack neutron user
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack user create --domain default --password-prompt neutron
User Password:          ===> neutron123
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | e66f255ee370425e81b031d6a9a9e558 |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
Note: the OpenStack neutron user's password is neutron123
3) Add the admin role to the neutron user
[root@controller ~]# openstack role add --project service --user neutron admin
4) Create the neutron service entity
[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | fac621943283457d9ae8b84361d9841a |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+
5) Create the Networking service API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | fb25b8b704e346ef97ed3bb862c6e3ab |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | fac621943283457d9ae8b84361d9841a |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 1d37082dd81b4796bad283e3d5184301 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | fac621943283457d9ae8b84361d9841a |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4550b213679f40668a2661d63bfdb843 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | fac621943283457d9ae8b84361d9841a |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
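The three endpoint-create commands differ only in the interface argument, so a loop keeps them consistent. Printed here as a dry run (actually running them requires sourced admin credentials on the controller).

```shell
# Dry-run sketch: print the three endpoint-create commands for the network service.
for iface in public internal admin; do
  echo "openstack endpoint create --region RegionOne network ${iface} http://controller:9696"
done
```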
19. Install Neutron: configure self-service networks (controller node)
1) Install the components
[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
2) Edit the /etc/neutron/neutron.conf file
(1) In the [database] section, configure database access
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
(2) In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in, the router service, and overlapping IP addresses:
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
(3) In the [DEFAULT] section, configure RabbitMQ message queue access
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
(4) In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access
[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron123   ===> the OpenStack neutron user's password
(5) In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes
[DEFAULT]
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova123
(6) In the [oslo_concurrency] section, configure the lock path
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
3) Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file
(1) Modify the [ml2] section
[ml2]
## In the [ml2] section, enable flat, VLAN, and VXLAN networks:
type_drivers = flat,vlan,vxlan
## In the [ml2] section, enable VXLAN self-service networks:
tenant_network_types = vxlan
## In the [ml2] section, enable the Linux bridge and layer-2 population mechanisms:
mechanism_drivers = linuxbridge,l2population
## After configuring the ML2 plug-in, removing values from the type_drivers option can lead to database inconsistency.
## In the [ml2] section, enable the port security extension driver:
extension_drivers = port_security
(2) In the [ml2_type_flat] section, configure the provider virtual network as a flat network
[ml2_type_flat]
flat_networks = provider
(3) In the [ml2_type_vxlan] section, configure the VXLAN network identifier range for self-service networks
[ml2_type_vxlan]
vni_ranges = 1:1000
(4) In the [securitygroup] section, enable ipset to improve the efficiency of security group rules
[securitygroup]
enable_ipset = true
4) Configure the Linux bridge agent: edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file
(1) In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:
[linux_bridge]
physical_interface_mappings = provider:enp0s31f6   ## enp0s31f6 is the network interface name; check it with the "ip addr" command
(2) In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay traffic, and enable layer-2 population
[vxlan]
enable_vxlan = true
local_ip = 10.132.226.103   # ===> controller node IP
l2_population = true
(3) In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
(4) Use the sysctl command to set the following options to 1
[root@controller ~]# sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
[root@controller ~]# sysctl net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-ip6tables = 1
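Settings made with sysctl this way do not survive a reboot. One common way to persist them is a drop-in file under /etc/sysctl.d/; the sketch below writes to a scratch path in /tmp as a stand-in for the real /etc/sysctl.d/99-neutron.conf (a hypothetical filename).

```shell
# Sketch: persist the bridge-nf settings; /tmp/99-neutron.conf stands in for
# the real /etc/sysctl.d/99-neutron.conf (apply later with: sysctl -p <file>).
cat > /tmp/99-neutron.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
grep -c '= 1$' /tmp/99-neutron.conf
# prints 2
```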
5) Configure the layer-3 (L3) agent: edit the /etc/neutron/l3_agent.ini file and add the following
[DEFAULT]
interface_driver = linuxbridge
6) Configure the DHCP agent: edit the /etc/neutron/dhcp_agent.ini file and add the following
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
20. Install and configure Neutron (controller node, continued)
1) Configure the metadata agent: edit the /etc/neutron/metadata_agent.ini file and add the following
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
2) Configure the Compute service to use the Networking service
(1) Edit the /etc/nova/nova.conf file and add the following to the [neutron] section
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron123   # ===> the neutron user's password
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
3) Create a symbolic link
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
4) Populate the database
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  Running upgrade for neutron ...
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> kilo, kilo_initial
INFO  [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225, nsxv_vdr_metadata.py
INFO  [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151, neutrodb_ipam
......
5) Restart the Compute API service
[root@controller ~]# systemctl restart openstack-nova-api.service
6) Start the Networking services and configure them to start at boot
[root@controller ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service.
[root@controller ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
7) Start the layer-3 service and configure it to start at boot
[root@controller ~]# systemctl enable neutron-l3-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-l3-agent.service to /usr/lib/systemd/system/neutron-l3-agent.service.
[root@controller ~]# systemctl start neutron-l3-agent.service
21. Install Neutron (compute node)
1) Install the components
[root@compute1 ~]# yum install openstack-neutron-linuxbridge ebtables ipset
2) Edit /etc/neutron/neutron.conf and add the following
[DEFAULT]
## Configure RabbitMQ message queue access:
transport_url = rabbit://openstack:RABBIT_PASS@controller
## Configure Identity service access:
auth_strategy = keystone

## Configure Identity service access:
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron123   ===> the neutron user's password

## Configure the lock path
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
3) Configure the self-service networking option
(1) Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and add the following
[linux_bridge]
### Map the provider virtual network to the provider physical network interface:
physical_interface_mappings = provider:enp0s31f6   ## enp0s31f6 is the interface name; check it with the "ip addr" command

[securitygroup]
### Enable security groups and configure the Linux bridge iptables firewall driver:
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]
### Enable VXLAN overlay networks, configure the IP address of the physical interface that handles overlay traffic, and enable layer-2 population:
enable_vxlan = true
local_ip = 10.132.226.104   ===> this node's IP address
l2_population = true
(2) Verify that all of the following sysctl values are set to 1, to ensure your Linux kernel supports network bridge filters:
[root@compute1 ~]# modprobe bridge
[root@compute1 ~]# modprobe br_netfilter
[root@compute1 ~]# sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
[root@compute1 ~]# sysctl net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-ip6tables = 1
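modprobe likewise does not persist across reboots; one way to reload the bridge and br_netfilter modules at boot is an entry under /etc/modules-load.d/. The sketch below writes to a scratch path in /tmp as a stand-in for a hypothetical /etc/modules-load.d/neutron.conf.

```shell
# Sketch: persist the kernel module list; /tmp/neutron-modules.conf stands in
# for the real /etc/modules-load.d/neutron.conf (one module name per line).
cat > /tmp/neutron-modules.conf <<'EOF'
bridge
br_netfilter
EOF
wc -l < /tmp/neutron-modules.conf
```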
4) Edit the /etc/nova/nova.conf file and add the following
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron123
5) Restart the Compute service
[root@compute1 ~]# systemctl restart openstack-nova-compute.service
6) Start the Linux bridge agent and configure it to start at boot:
[root@compute1 ~]# systemctl enable neutron-linuxbridge-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
[root@compute1 ~]# systemctl start neutron-linuxbridge-agent.service
7) Verify from the controller node
[root@controller ~]# openstack extension list --network
+-----------------------------------------------+---------------------------+------------------------------------------------+
| Name | Alias | Description |
+-----------------------------------------------+---------------------------+------------------------------------------------+
| Default Subnetpools | default-subnetpools | Provides ability to mark and use a subnetpool as the default. |
| Availability Zone | availability_zone | The availability zone extension. |
| Network Availability Zone | network_availability_zone | Availability zone support for network. |
| Auto Allocated Topology Services | auto-allocated-topology | Auto Allocated Topology Services. |
| Neutron L3 Configurable external gateway mode | ext-gw-mode | Extension of the router abstraction for specifying whether SNAT should occur on the external gateway |
| Port Binding | binding | Expose port bindings of a virtual port to external application |
| agent | agent | The agent management extension. |
| Subnet Allocation | subnet_allocation | Enables allocation of subnets from a subnet pool |
| L3 Agent Scheduler | l3_agent_scheduler | Schedule routers among l3 agents |
| Tag support | tag | Enables to set tag on resources. |
| Neutron external network | external-net | Adds external network attribute to network resource. |
| Tag support for resources with standard attribute: trunk, policy, security_group, floatingip | standard-attr-tag | Enables to set tag on resources with standard attribute. |
| Neutron Service Flavors | flavors | Flavor specification for Neutron advanced services. |
| Network MTU | net-mtu | Provides MTU attribute for a network resource. |
| Network IP Availability | network-ip-availability | Provides IP availability data for each network and subnet. |
| Quota management support | quotas | Expose functions for quotas management per tenant |
| If-Match constraints based on revision_number | revision-if-match | Extension indicating that If-Match based on revision_number is supported. |
| HA Router extension | l3-ha | Adds HA capability to routers. |
| Provider Network | provider | Expose mapping of virtual networks to physical networks |
| Multi Provider Network | multi-provider | Expose mapping of virtual networks to multiple physical networks |
| Quota details management support | quota_details | Expose functions for quotas usage statistics per project |
| Address scope | address-scope | Address scopes extension. |
| Neutron Extra Route | extraroute | Extra routes configuration for L3 router |
| Network MTU (writable) | net-mtu-writable | Provides a writable MTU attribute for a network resource. |
| Subnet service types | subnet-service-types | Provides ability to set the subnet service_types field |
| Resource timestamps | standard-attr-timestamp | Adds created_at and updated_at fields to all Neutron resources that have Neutron standard attributes. |
| Neutron Service Type Management | service-type | API for retrieving service providers for Neutron advanced services |
| Router Flavor Extension | l3-flavors | Flavor support for routers. |
| Port Security | port-security | Provides port security |
| Neutron Extra DHCP options | extra_dhcp_opt | Extra options configuration for DHCP. For example PXE boot options to DHCP clients can be specified (e.g. tftp-server, server-ip-address, bootfile-name) |
| Resource revision numbers | standard-attr-revisions | This extension will display the revision number of neutron resources. |
| Pagination support | pagination | Extension that indicates that pagination is enabled. |
| Sorting support | sorting | Extension that indicates that sorting is enabled. |
| security-group | security-group | The security groups extension. |
| DHCP Agent Scheduler | dhcp_agent_scheduler | Schedule networks among dhcp agents |
| Router Availability Zone | router_availability_zone | Availability zone support for router. |
| RBAC Policies | rbac-policies | Allows creation and modification of policies that control tenant access to resources. |
| Tag support for resources: subnet, subnetpool, port, router | tag-ext | Extends tag support to more L2 and L3 resources. |
| standard-attr-description | standard-attr-description | Extension to add descriptions to standard attributes |
| IP address substring filtering | ip-substring-filtering | Provides IP address substring filtering when listing ports |
| Neutron L3 Router | router | Router abstraction for basic L3 forwarding between L2 Neutron networks and access to external networks via a NAT gateway. |
| Allowed Address Pairs | allowed-address-pairs | Provides allowed address pairs |
| project_id field enabled | project-id | Extension that indicates that project_id field is enabled. |
| Distributed Virtual Router | dvr | Enables configuration of Distributed Virtual Routers. |
+-----------------------------------------------+---------------------------+------------------------------------------------+
[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 0244254a-c782-4505-ad9a-7a7751216526 | Linux bridge agent | compute1   | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 1cb1bf35-bcea-4ed1-8eff-a69f439ed4a0 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 3e2a99a4-07c6-4ba5-b627-a321408ed40c | L3 agent           | controller | nova              | :-)   | UP    | neutron-l3-agent          |
| 5d5359c2-79f6-47a0-846d-c672eac7537f | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| c2fded1e-d70f-4a85-9c3c-1955e7853f79 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
22. Install the Dashboard (Horizon)
1) Install
[root@controller ~]# yum install openstack-dashboard
2) Edit the /etc/openstack-dashboard/local_settings file
(1) Configure the Dashboard to use the OpenStack services on the controller node:
OPENSTACK_HOST = "controller"
(2) Allow other hosts to access the Dashboard
ALLOWED_HOSTS = ['*']   # '*' accepts connections from all hosts
(3) Configure the memcached session storage service
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
Note: comment out any other session storage configuration here.
(4) Enable Identity API version 3:
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
(5) Enable support for domains:
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
(6) Configure the API versions:
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
(7) Configure Default as the default domain for users created through the Dashboard:
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
(8) Configure user as the default role for users created through the Dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
(9) Optionally, configure the time zone
TIME_ZONE = "UTC"
3) Restart the web server and the session storage service
[root@controller ~]# systemctl restart httpd.service memcached.service
4) Verify: browse to http://10.132.226.103/dashboard to reach the Horizon login page
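The login page can also be checked non-interactively from any machine that can reach the controller. This is a dry run that just prints the command (the URL is this guide's controller address); run it manually and expect HTTP status 200 when Horizon is up.

```shell
# Dry-run sketch: print a curl command that checks the Horizon login page;
# run it by hand from a host with network access to the controller.
echo "curl -s -o /dev/null -w '%{http_code}\n' http://10.132.226.103/dashboard"
```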
Deployment successful!