Configuring OpenStack Rocky on CentOS 7.5


1. Installing CentOS 7 and basic configuration

Most of the installation uses the defaults; only the storage and software-selection steps below need attention:

1.1 Storage configuration

 

 

Installation Destination --> I will configure partitioning --> Done --> choose LVM as the partitioning scheme --> add the four mount points below --> Done --> Accept Changes. Specifically:

- /boot: usually 1 GB; device type Standard Partition; file system ext3

- swap: usually 4 GB; device type LVM; file system swap, naturally

- / (root): around 100 GB; device type LVM; file system ext3

- /home: the remaining (largest) space; device type LVM; file system ext3

 

(Note: later, when I wanted to attach an extra E: drive to a Windows 7 instance, there was no separate storage node, so I tried to carve the remaining space out of the controller node, only to find the disk had already been fully allocated. I therefore suggest giving /home only 300-500 GB and leaving the rest unallocated, so it can later be attached as sdb.)

1.2 Software selection

The default Minimal Install is sufficient:

 

 

1.3 Network configuration

- Controller node: ens44f0 address 10.47.181.26, gateway 10.47.181.1, DNS 10.30.1.9; ens44f1 is left unused for now;

- Compute node: ens44f0 address 10.47.181.27, gateway 10.47.181.1, DNS 10.30.1.9; ens44f1 is left unused for now;

- Set the controller node's hostname to controller and the compute node's hostname to compute.

- To configure the IP address manually later: [root@controller /]# vi /etc/sysconfig/network-scripts/ifcfg-ens44f0. (Take special care: ONBOOT must be set to yes, and BOOTPROTO must be changed from dhcp to none or static; beyond that, just set IPADDR0=10.47.181.26, PREFIX0=24, GATEWAY0=10.47.181.1 and DNS1=10.30.1.9.) After editing the file, restart networking with [root@controller /]# service network restart

- To edit the hostname file manually: [root@controller /]# vi /etc/hostname. To display the current hostname: [root@controller /]# hostname

- Edit the hosts file: [root@controller /]# vi /etc/hosts, adding the entries for this walkthrough's controller and compute nodes:

10.47.181.26 controller

10.47.181.27 compute

- The root password is set to root

1.4 Disabling the firewall and SELinux

(Run on both the controller and compute nodes.)

[root@controller /]# systemctl stop firewalld

[root@controller /]# systemctl disable firewalld

[root@controller /]# setenforce 0

[root@controller /]# sed -i 's/=enforcing/=disabled/' /etc/selinux/config
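
The sed one-liner above performs a plain text substitution in /etc/selinux/config; its effect can be sketched on sample input (the sample line mirrors the stock config file):

```shell
# The same substitution the command above applies to /etc/selinux/config.
printf 'SELINUX=enforcing\n' | sed 's/=enforcing/=disabled/'
# → SELINUX=disabled  (persists across reboots; setenforce 0 only covers the running session)
```

The pattern matches any `=enforcing` in the file; in the stock config only the SELINUX= line carries that value, so the broad pattern is safe.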

1.5 Adjusting the yum repositories

(Configure both the controller and compute nodes the same way.)

- First back up the existing *.repo files;

- Create a new file, [root@controller /]# vi /etc/yum.repos.d/zte-mirror.repo, with the following content:

[base]
name=CentOS-$releasever - Base
baseurl=http://mirrors.zte.com.cn/centos/7/os/$basearch/
gpgcheck=1
enabled=1
gpgkey=http://mirrors.zte.com.cn/centos/RPM-GPG-KEY-CentOS-7

[epel]
name=CentOS-$releasever - Epel
baseurl=http://mirrors.zte.com.cn/epel/7/$basearch/
gpgcheck=0
enabled=1

[extras]
name=CentOS-$releasever - Extras
baseurl=http://mirrors.zte.com.cn/centos/7/extras/$basearch/
gpgcheck=0
enabled=1

[updates]
name=CentOS-$releasever - Updates
baseurl=http://mirrors.zte.com.cn/centos/7/updates/$basearch/
gpgcheck=0
enabled=1

[openstack-rocky]
name=CentOS-$releasever - Rocky
baseurl=http://mirrors.zte.com.cn/centos/7/cloud/x86_64/openstack-rocky/
gpgcheck=0
enabled=1

- After saving, run the following in order:

[root@controller /]# yum clean all

[root@controller /]# yum makecache

[root@controller /]# yum update

[root@controller /]# reboot

(After one reboot the deleted *.repo files came back; in that case delete them again, keeping only zte-mirror.repo, and rerun clean all and makecache.)
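
The backup step mentioned at the top of this section can be sketched as a small script (the backup/ subdirectory name is an assumption, not from the original text):

```shell
# Move every *.repo except zte-mirror.repo into a backup/ subdirectory,
# so only the mirror file remains active.
backup_repos() {
  repo_dir=$1
  mkdir -p "$repo_dir/backup"
  for f in "$repo_dir"/*.repo; do
    [ "$(basename "$f")" = "zte-mirror.repo" ] && continue  # keep the mirror file in place
    mv "$f" "$repo_dir/backup/"
  done
}
# On each node: backup_repos /etc/yum.repos.d
```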

1.6 Installing Chrony/NTP time synchronization

1.6.1 Installing Chrony on the controller node

- Install: [root@controller /]# yum install chrony

- Configure: [root@controller /]# vi /etc/chrony.conf

Comment out the existing server lines and add the two settings below (note: pointing the controller at itself like this may be why synchronization was later observed to fail in 1.6.3; the official guide points the controller at an upstream NTP server instead):

server controller iburst

allow 10.47.0.0/16

- Start the service:

[root@controller /]# systemctl start chronyd

[root@controller /]# systemctl enable chronyd

1.6.2 Installing Chrony on the compute node

Apart from chrony.conf, everything is the same as above:

Comment out the existing server lines and add one setting:

server controller iburst

1.6.3 Installing NTP on the controller node

After installing Chrony as above, the clocks were observed not to be synchronizing; rather than chase the root cause now, I fell back to NTP, which I already know well, and shut chronyd down (to do so: [root@controller /]# systemctl stop chronyd and [root@controller /]# systemctl disable chronyd).

- Install: [root@controller ~]# yum install ntp

- Configure: [root@controller ~]# vi /etc/ntp.conf

Comment out the existing server lines and add the following two lines:

server 127.127.1.0

fudge 127.127.1.0 stratum 10

- Configure: [root@controller ~]# vi /etc/sysconfig/ntpd

Add the setting: SYNC_HWCLOCK=yes

- Start the service:

[root@controller /]# systemctl start ntpd

[root@controller /]# systemctl enable ntpd

1.6.4 Installing NTP on the compute node

- Install: [root@compute ~]# yum install ntp

- Configure: [root@compute ~]# vi /etc/ntp.conf

Comment out the existing server lines and add the following line:

server controller

- Configure: [root@compute ~]# vi /etc/sysconfig/ntpd

Add the setting: SYNC_HWCLOCK=yes

- Start the service:

[root@compute /]# systemctl start ntpd

[root@compute /]# systemctl enable ntpd

- Check the synchronization status: [root@compute /]# ntpq -p

     remote        refid      st t  when poll reach   delay   offset  jitter
==============================================================================
*controller     LOCAL(0)        6 u    25   64    77    0.160    1.140   0.741

 

1.7 Installing the OpenStack client and openstack-selinux

(Install on both the controller and compute nodes.)

[root@controller /]# yum install python-openstackclient

[root@controller /]# yum install openstack-selinux

2. Controller node installation

2.1 Installing the database

- Install: [root@controller /]# yum install mariadb mariadb-server python2-PyMySQL

- Create a new file: [root@controller /]# vi /etc/my.cnf.d/openstack.cnf

with the content:

[mysqld]

bind-address = 10.47.181.26

default-storage-engine = innodb

innodb_file_per_table = on

max_connections = 4096

collation-server = utf8_general_ci

character-set-server = utf8

- Start the service:

[root@controller /]# systemctl enable mariadb.service

[root@controller /]# systemctl start mariadb.service

- Set the database root password to dbrootpass via the script [root@controller /]# mysql_secure_installation. Answer Y to all the other prompts. The first prompt asks for the current password, which is empty, so just press Enter.

- Raise the maximum number of connections:

1) Check the current connection count (Threads): [root@controller ~]# mysqladmin -uroot -pdbrootpass status

Uptime: 431  Threads: 214  Questions: 24884  Slow queries: 0  Opens: 67  Flush tables: 1  Open tables: 61  Queries per second avg: 57.735

2) Check the default maximum connections: [root@controller ~]# mysql -uroot -pdbrootpass

MariaDB [(none)]> show variables like "max_connections";

+-----------------+-------+

| Variable_name   | Value |

+-----------------+-------+

| max_connections | 214   |

+-----------------+-------+

3) Edit: [root@controller ~]# vi /etc/my.cnf

Add one line under [mysqld]: max_connections=1000 (note: the max_connections = 4096 already set in /etc/my.cnf.d/openstack.cnf is read later and takes precedence, which is why step 7 below reports 4096)

4) Edit: [root@controller ~]# vi /usr/lib/systemd/system/mariadb.service

Add two lines under the [Service] section:

LimitNOFILE=10000

LimitNPROC=10000

5) Restart the database:

[root@controller ~]# systemctl --system daemon-reload

[root@controller ~]# systemctl restart mariadb.service

6) Verify again:

[root@controller ~]# mysqladmin -uroot -pdbrootpass status

Uptime: 1012  Threads: 238  Questions: 55067  Slow queries: 0  Opens: 70  Flush tables: 1  Open tables: 64  Queries per second avg: 54.414

7) [root@controller ~]# mysql -uroot -pdbrootpass

MariaDB [(none)]> show variables like "max_connections";

+-----------------+-------+

| Variable_name   | Value |

+-----------------+-------+

| max_connections | 4096  |

+-----------------+-------+
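
A side note on step 4: edits to the unit file under /usr/lib/systemd are lost when the mariadb package is updated. A systemd drop-in carries the same two limits and survives updates; a sketch, where the drop-in directory and file name are assumptions:

```shell
# Write the same limits as a drop-in instead of editing mariadb.service itself.
write_limits_dropin() {
  dropin_dir=$1                      # e.g. /etc/systemd/system/mariadb.service.d
  mkdir -p "$dropin_dir"
  cat > "$dropin_dir/limits.conf" <<'EOF'
[Service]
LimitNOFILE=10000
LimitNPROC=10000
EOF
}
# On the controller:
#   write_limits_dropin /etc/systemd/system/mariadb.service.d
#   systemctl daemon-reload && systemctl restart mariadb.service
```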

2.2 Installing the message queue

- Install: [root@controller /]# yum install rabbitmq-server

- Start the service:

[root@controller /]# systemctl enable rabbitmq-server.service

[root@controller /]# systemctl start rabbitmq-server.service

- Add an openstack user with the password rabbitpass:

[root@controller /]# rabbitmqctl add_user openstack rabbitpass

- Give the openstack user full permissions:

[root@controller /]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

which returns: Setting permissions for user "openstack" in vhost "/" ...

2.3 Installing Memcached

- Install: [root@controller /]# yum install memcached python-memcached

- Edit: [root@controller /]# vi /etc/sysconfig/memcached

Append the controller node to the existing OPTIONS line, as follows:

OPTIONS="-l 127.0.0.1,::1,controller"

- Start the service:

[root@controller /]# systemctl enable memcached.service

[root@controller /]# systemctl start memcached.service

2.4 Installing Etcd

- Install: [root@controller /]# yum install etcd

- Edit: [root@controller /]# vi /etc/etcd/etcd.conf

Change the following settings under the #[Member] section:

ETCD_LISTEN_PEER_URLS="http://10.47.181.26:2380"

ETCD_LISTEN_CLIENT_URLS="http://10.47.181.26:2379"

ETCD_NAME="controller"

Change the following settings under the #[Clustering] section:

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.47.181.26:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://10.47.181.26:2379"

ETCD_INITIAL_CLUSTER="controller=http://10.47.181.26:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"

ETCD_INITIAL_CLUSTER_STATE="new"

Later I replaced the IP addresses above with localhost, and the service still started normally.

- Enable at boot and start the service:

[root@controller /]# systemctl enable etcd

[root@controller /]# systemctl start etcd

2.5 Installing Keystone

2.5.1 Creating the keystone database objects

(The password is keystonedbpass.)

- [root@controller /]# mysql -uroot -pdbrootpass

- MariaDB [(none)]> CREATE DATABASE keystone;

- MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystonedbpass';

- MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystonedbpass';

- MariaDB [(none)]> exit

2.5.2 Installing Keystone

- Install: [root@controller /]# yum install openstack-keystone httpd mod_wsgi

- Edit: [root@controller /]# vi /etc/keystone/keystone.conf

Under the [database] section set:

connection = mysql+pymysql://keystone:keystonedbpass@controller/keystone

Under the [token] section set:

provider = fernet

- Sync the database: [root@controller /]# su -s /bin/sh -c "keystone-manage db_sync" keystone

- Initialize the Fernet key repositories:

[root@controller /]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

[root@controller /]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

- Bootstrap the identity service (the admin user's password is set to adminpass): [root@controller /]# keystone-manage bootstrap --bootstrap-password adminpass --bootstrap-admin-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne

2.5.3 Configuring the Apache HTTP server

- Edit: [root@controller /]# vi /etc/httpd/conf/httpd.conf and set:

ServerName controller

- Create a symlink to the config file: [root@controller /]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

- Start the httpd service:

[root@controller /]# systemctl enable httpd.service

[root@controller /]# systemctl start httpd.service

The first start attempt failed; after re-running setenforce 0 (the SELinux step from the beginning of this document), httpd.service started successfully.

- Prepare an environment-variable script, [root@controller /]# vi admin-openrc.sh, with the following content:

export OS_USERNAME=admin

export OS_PASSWORD=adminpass

export OS_PROJECT_NAME=admin

export OS_USER_DOMAIN_NAME=Default

export OS_PROJECT_DOMAIN_NAME=Default

export OS_AUTH_URL=http://controller:5000/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

After saving, load it: [root@controller /]# source admin-openrc.sh
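
A small sanity check that the variables from admin-openrc.sh actually landed in the environment (the helper function below is illustrative, not part of OpenStack):

```shell
# Report the first missing OS_* variable, or confirm the set is complete.
check_openrc() {
  for v in OS_USERNAME OS_PASSWORD OS_PROJECT_NAME OS_AUTH_URL OS_IDENTITY_API_VERSION; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "missing: $v"
      return 1
    fi
  done
  echo "openrc looks complete"
}
# Usage on the controller: source admin-openrc.sh && check_openrc
```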

2.5.4 Creating the service project

- Create the project: [root@controller /]# openstack project create --domain default --description "Service Project" service

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | Service Project                  |

| domain_id   | default                          |

| enabled     | True                             |

| id          | d16834db814a423aa6354644c20b6384 |

| is_domain   | False                            |

| name        | service                          |

| parent_id   | default                          |

| tags        | []                               |

+-------------+----------------------------------+

- Verify:

[root@controller /]# openstack user list

+----------------------------------+-------+

| ID                               | Name  |

+----------------------------------+-------+

| cd365f993a51434d9443230e1faa1d44 | admin |

+----------------------------------+-------+

[root@controller /]# openstack token issue

+------------+--------------------------------------------------------------+

| Field      | Value                                                        |

+------------+--------------------------------------------------------------+

| expires    | 2018-10-27T02:17:39+0000                                     |

| id         | gAAAAABb07yzbeKvZPi_uZT0UKkqA7sLaDvJ3sZEFebqDk3Tnk......     |

| project_id | b8471b54426d4b0ba497592862054d5a                             |

| user_id    | cd365f993a51434d9443230e1faa1d44                             |

+------------+--------------------------------------------------------------+

(The id is very long; I trimmed it before pasting it here.)

2.6 Installing Glance

2.6.1 Creating the glance database objects

(The password is glancedbpass.)

- [root@controller /]# mysql -uroot -pdbrootpass

- MariaDB [(none)]> CREATE DATABASE glance;

- MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO glance@'localhost' IDENTIFIED BY 'glancedbpass';

- MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO glance@'%' IDENTIFIED BY 'glancedbpass';

- MariaDB [(none)]> exit

2.6.2 Creating the user, role, service and endpoints

- Load the environment script: [root@controller /]# source admin-openrc.sh

- Create the glance user: [root@controller ~]# openstack user create --domain default --password-prompt glance

User Password: (enter the user password, userpass)

Repeat User Password: (enter userpass again)

+---------------------+----------------------------------+

| Field               | Value                            |

+---------------------+----------------------------------+

| domain_id           | default                          |

| enabled             | True                             |

| id                  | fee4fcb2d77b4df19d28dcf3e2163dd6 |

| name                | glance                           |

| options             | {}                               |

| password_expires_at | None                             |

+---------------------+----------------------------------+

- Add the admin role to the glance user: [root@controller ~]# openstack role add --project service --user glance admin

- Create the glance service: [root@controller ~]# openstack service create --name glance --description "OpenStack Image" image

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | OpenStack Image                  |

| enabled     | True                             |

| id          | 9fa19cf860ac4f9c9f8a494df611a2c2 |

| name        | glance                           |

| type        | image                            |

+-------------+----------------------------------+

- Create the public image endpoint: [root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | 880e0f6663a34b5ab17928a8a5d5ac17 |

| interface    | public                           |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | 9fa19cf860ac4f9c9f8a494df611a2c2 |

| service_name | glance                           |

| service_type | image                            |

| url          | http://controller:9292           |

+--------------+----------------------------------+

- Create the internal image endpoint: [root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | 1d05c65ce1d9434f940e7d5c18ec6f32 |

| interface    | internal                         |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | 9fa19cf860ac4f9c9f8a494df611a2c2 |

| service_name | glance                           |

| service_type | image                            |

| url          | http://controller:9292           |

+--------------+----------------------------------+

- Create the admin image endpoint: [root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | fca8e745877a4416b9b23f0a70407338 |

| interface    | admin                            |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | 9fa19cf860ac4f9c9f8a494df611a2c2 |

| service_name | glance                           |

| service_type | image                            |

| url          | http://controller:9292           |

+--------------+----------------------------------+
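
The three endpoint-create commands above differ only in the interface argument, so they can be generated with a loop; the sketch below only prints the commands (a dry run) — remove the echo to actually run them on the controller:

```shell
# Emit one `openstack endpoint create` command per interface.
for iface in public internal admin; do
  echo "openstack endpoint create --region RegionOne image $iface http://controller:9292"
done
```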

2.6.3 Installing Glance

- Install: [root@controller ~]# yum install openstack-glance

- Edit: [root@controller ~]# vi /etc/glance/glance-api.conf

Change the following under the [database] section:

connection = mysql+pymysql://glance:glancedbpass@controller/glance

Change the following under the [keystone_authtoken] section:

www_authenticate_uri = http://controller:5000

auth_url = http://controller:5000 (be very careful: the stock file has auth_uri, which must be renamed to auth_url)

memcached_servers = controller:11211

auth_type = password

and add the following new settings:

project_domain_name = Default

user_domain_name = Default

project_name = service

username = glance

password = userpass

Uncomment the following under the [paste_deploy] section:

flavor = keystone

Uncomment the following under the [glance_store] section:

stores = file,http

default_store = file

filesystem_store_datadir = /var/lib/glance/images (the directory where image files are stored)

- Edit: [root@controller ~]# vi /etc/glance/glance-registry.conf

Change the following under the [database] section:

connection = mysql+pymysql://glance:glancedbpass@controller/glance

Change the following under the [keystone_authtoken] section:

www_authenticate_uri = http://controller:5000

auth_url = http://controller:5000 (be very careful: the stock file has auth_uri, which must be renamed to auth_url)

memcached_servers = controller:11211

auth_type = password

and add the following new settings:

project_domain_name = Default

user_domain_name = Default

project_name = service

username = glance

password = userpass

Uncomment the following under the [paste_deploy] section:

flavor = keystone

- Sync the database: [root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance

......

Database is synced successfully.

- Start the services:

[root@controller ~]# systemctl start openstack-glance-api.service openstack-glance-registry.service

[root@controller ~]# systemctl enable openstack-glance-api.service openstack-glance-registry.service

- Verify:

1) This controller node cannot reach the Internet yet, so download cirros-0.3.2-x86_64-disk.img from https://download.cirros-cloud.net/ in a browser on a PC that has Internet access, then upload it to the controller node:

[root@controller ~]# ll

total 12888

-rw-r--r--  1 root root      264 Oct 27 09:36 admin-openrc.sh

-rw-------. 1 root root     2063 Oct 26 16:37 anaconda-ks.cfg

-rw-r--r--  1 root root 13167616 Oct 27 10:30 cirros-0.3.2-x86_64-disk.img

2) Load the environment variables: [root@controller /]# source admin-openrc.sh

3) Create the image: [root@controller ~]# openstack image create "cirros" --file cirros-0.3.2-x86_64-disk.img --disk-format qcow2 --container-format bare --public

+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

| Field            | Value                                                                                                                                                                                      |

+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

| checksum         | 64d7c1cd2b6f60c92c14662941cb7913                                                                                                                                                           |

| container_format | bare                                                                                                                                                                                       |

| created_at       | 2018-10-27T02:43:53Z                                                                                                                                                                       |

| disk_format      | qcow2                                                                                                                                                                                      |

| file             | /v2/images/b50f92a7-f49b-4908-9144-568f98dbbb8f/file                                                                                                                                       |

| id               | b50f92a7-f49b-4908-9144-568f98dbbb8f                                                                                                                                                       |

| min_disk         | 0                                                                                                                                                                                          |

| min_ram          | 0                                                                                                                                                                                          |

| name             | cirros                                                                                                                                                                                     |

| owner            | b8471b54426d4b0ba497592862054d5a                                                                                                                                                           |

| properties       | os_hash_algo='sha512', os_hash_value='de74eeff61ad129d3945dead39dbdb02c942702e423628c6fbb35cf18747141d4ebdae914ffebaf6e18dcb174d4066010df8829960c6b95f8777d4f5fb5567f2', os_hidden='False' |

| protected        | False                                                                                                                                                                                      |

| schema           | /v2/schemas/image                                                                                                                                                                          |

| size             | 13167616                                                                                                                                                                                   |

| status           | active                                                                                                                                                                                     |

| tags             |                                                                                                                                                                                            |

| updated_at       | 2018-10-27T02:43:54Z                                                                                                                                                                       |

| virtual_size     | None                                                                                                                                                                                       |

| visibility       | public                                                                                                                                                                                     |

+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

4) List the images: [root@controller ~]# openstack image list

+--------------------------------------+--------+--------+

| ID                                   | Name   | Status |

+--------------------------------------+--------+--------+

| b50f92a7-f49b-4908-9144-568f98dbbb8f | cirros | active |

+--------------------------------------+--------+--------+

2.7 Installing Nova

2.7.1 Creating the nova database objects

(The passwords are novadbpass and placementdbpass.)

- [root@controller /]# mysql -uroot -pdbrootpass

- MariaDB [(none)]> CREATE DATABASE nova_api;

- MariaDB [(none)]> CREATE DATABASE nova;

- MariaDB [(none)]> CREATE DATABASE nova_cell0;

- MariaDB [(none)]> CREATE DATABASE placement;

- MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'novadbpass';

- MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'novadbpass';

- MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'novadbpass';

- MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'novadbpass';

- MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'novadbpass';

- MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'novadbpass';

- MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'placementdbpass';

- MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'placementdbpass';

- MariaDB [(none)]> exit
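
The GRANT statements above follow one pattern per database/user pair, so generating them avoids the quoting slips that are easy to make at the prompt. A sketch (the function name is illustrative; database, user and password values match the document):

```shell
# Print the pair of GRANT statements for one (database, user, password) triple.
grant_sql() {
  db=$1 user=$2 pass=$3
  for host in localhost '%'; do
    echo "GRANT ALL PRIVILEGES ON ${db}.* TO '${user}'@'${host}' IDENTIFIED BY '${pass}';"
  done
}
grant_sql nova_api   nova      novadbpass
grant_sql nova       nova      novadbpass
grant_sql nova_cell0 nova      novadbpass
grant_sql placement  placement placementdbpass
```

Paste the printed SQL into the MariaDB prompt (or pipe it into mysql -uroot -pdbrootpass).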

2.7.2 Creating the users, roles, services and endpoints

- Create the nova user: [root@controller ~]# openstack user create --domain default --password-prompt nova

User Password: (enter the user password, userpass)

Repeat User Password: (enter userpass again)

+---------------------+----------------------------------+

| Field               | Value                            |

+---------------------+----------------------------------+

| domain_id           | default                          |

| enabled             | True                             |

| id                  | 2a0232df17b04e18ba0f4840eabdcb30 |

| name                | nova                             |

| options             | {}                               |

| password_expires_at | None                             |

+---------------------+----------------------------------+

- Add the admin role to the nova user: [root@controller ~]# openstack role add --project service --user nova admin

- Create the nova service: [root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | OpenStack Compute                |

| enabled     | True                             |

| id          | 3f06ee745943444e8d8bdafb853ee589 |

| name        | nova                             |

| type        | compute                          |

+-------------+----------------------------------+

- Create the public compute endpoint: [root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | 3497ffc263dc478280b262771deda363 |

| interface    | public                           |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | 3f06ee745943444e8d8bdafb853ee589 |

| service_name | nova                             |

| service_type | compute                          |

| url          | http://controller:8774/v2.1      |

+--------------+----------------------------------+

- Create the internal compute endpoint: [root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | bb05c8c80f0144ec8f99321079aae2f6 |

| interface    | internal                         |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | 3f06ee745943444e8d8bdafb853ee589 |

| service_name | nova                             |

| service_type | compute                          |

| url          | http://controller:8774/v2.1      |

+--------------+----------------------------------+

- Create the admin compute endpoint: [root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | 24a99f4bd80044b697badf2eeee521d0 |

| interface    | admin                            |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | 3f06ee745943444e8d8bdafb853ee589 |

| service_name | nova                             |

| service_type | compute                          |

| url          | http://controller:8774/v2.1      |

+--------------+----------------------------------+

- Create the placement user: [root@controller ~]# openstack user create --domain default --password-prompt placement

User Password: (enter the user password, userpass)

Repeat User Password: (enter userpass again)

+---------------------+----------------------------------+

| Field               | Value                            |

+---------------------+----------------------------------+

| domain_id           | default                          |

| enabled             | True                             |

| id                  | 44bee82aab8c4edcbb2bdc27df93ef07 |

| name                | placement                        |

| options             | {}                               |

| password_expires_at | None                             |

+---------------------+----------------------------------+

- Add the admin role to the placement user: [root@controller ~]# openstack role add --project service --user placement admin

- Create the placement service: [root@controller ~]# openstack service create --name placement --description "Placement API" placement

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | Placement API                    |

| enabled     | True                             |

| id          | 515d392c6a72479491f5f893a77d2cb2 |

| name        | placement                        |

| type        | placement                        |

+-------------+----------------------------------+

- Create the public placement endpoint: [root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | 72c78deee3fc47cc9fee0c9d22c1e0a1 |

| interface    | public                           |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | 515d392c6a72479491f5f893a77d2cb2 |

| service_name | placement                        |

| service_type | placement                        |

| url          | http://controller:8778           |

+--------------+----------------------------------+

- Create the internal placement endpoint: [root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | 3e1c1336907247c9825bbb6c7cf98273 |

| interface    | internal                         |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | 515d392c6a72479491f5f893a77d2cb2 |

| service_name | placement                        |

| service_type | placement                        |

| url          | http://controller:8778           |

+--------------+----------------------------------+

- Create the admin placement endpoint: [root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | b3451088d3774e94a08d9aa1561fb78d |

| interface    | admin                            |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | 515d392c6a72479491f5f893a77d2cb2 |

| service_name | placement                        |

| service_type | placement                        |

| url          | http://controller:8778           |

+--------------+----------------------------------+

2.7.3 Installing Nova

- Install: [root@controller ~]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api

- Edit: [root@controller ~]# vi /etc/nova/nova.conf

Change the following under the [DEFAULT] section:

enabled_apis=osapi_compute,metadata

transport_url=rabbit://openstack:rabbitpass@controller

my_ip=10.47.181.26

use_neutron=true

firewall_driver=nova.virt.firewall.NoopFirewallDriver

In the [api_database] section, set:

connection=mysql+pymysql://nova:novadbpass@controller/nova_api

In the [database] section, set:

connection=mysql+pymysql://nova:novadbpass@controller/nova

In the [placement_database] section, set:

connection=mysql+pymysql://placement:placementdbpass@controller/placement

In the [api] section, uncomment:

auth_strategy=keystone

In the [keystone_authtoken] section, set:

auth_url=http://controller:5000/v3 (be careful: the stock file uses uri here; it must be changed to url)

memcached_servers=controller:11211

auth_type=password

and add:

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = userpass

In the [vnc] section, set:

enabled=true

server_listen=$my_ip

server_proxyclient_address=$my_ip

In the [glance] section, set:

api_servers=http://controller:9292

In the [oslo_concurrency] section, uncomment:

lock_path=/var/lib/nova/tmp

In the [placement] section, set:

region_name=RegionOne

project_domain_name=Default

project_name=service

auth_type=password

user_domain_name=Default

auth_url=http://controller:5000/v3

username=placement

password=userpass

• Tip: this configuration file is littered with #-comment lines, which makes it hard to find the active settings quickly. The command below prints only the lines beginning with a lowercase letter or '[' (quoting the pattern avoids accidental glob expansion), which is handy for checking the result:

[root@controller ~]# grep '^[a-z\[]' /etc/nova/nova.conf

[DEFAULT]

instances_path=/home/novainstances

my_ip=10.47.181.26

use_neutron=true

firewall_driver=nova.virt.firewall.NoopFirewallDriver

enabled_apis=osapi_compute,metadata

transport_url=rabbit://openstack:rabbitpass@controller

[api]

auth_strategy=keystone

[api_database]

connection=mysql+pymysql://nova:novadbpass@controller/nova_api

[barbican]

[cache]

[cells]

[cinder]

[compute]

[conductor]

[console]

[consoleauth]

[cors]

[database]

connection=mysql+pymysql://nova:novadbpass@controller/nova

[devices]

[ephemeral_storage_encryption]

[filter_scheduler]

[glance]

api_servers=http://controller:9292

[guestfs]

[healthcheck]

[hyperv]

[ironic]

[key_manager]

[keystone]

[keystone_authtoken]

auth_url=http://controller:5000/v3

memcached_servers=controller:11211

auth_type=password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = userpass

[libvirt]

[matchmaker_redis]

[metrics]

[mks]

[neutron]

url=http://controller:9696

service_metadata_proxy=true

metadata_proxy_shared_secret = METADATA_SECRET

auth_type=password

auth_url=http://controller:5000

project_name=service

project_domain_name=default

username=neutron

user_domain_name=default

password=userpass

region_name=RegionOne

[notifications]

[osapi_v21]

[oslo_concurrency]

lock_path=/var/lib/nova/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[pci]

[placement]

auth_type=password

auth_url=http://controller:5000/v3

project_name=service

project_domain_name=Default

username=placement

user_domain_name=Default

password=userpass

region_name=RegionOne

[placement_database]

connection=mysql+pymysql://placement:placementdbpass@controller/placement

[powervm]

[profiler]

[quota]

[rdp]

[remote_debug]

[scheduler]

discover_hosts_in_cells_interval=300

[serial_console]

[service_user]

[spice]

[upgrade_levels]

[vault]

[vendordata_dynamic_auth]

[vmware]

[vnc]

enabled=true

server_listen=0.0.0.0

server_proxyclient_address=$my_ip

novncproxy_base_url=http://10.47.181.26:6080/vnc_auto.html

[workarounds]

[wsgi]

[xenserver]

[xvp]

[zvm]
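The same filter can be sanity-checked on a toy file (the `/tmp/sample.conf` path and its contents below are illustrative, not from the real nova.conf):

```shell
# Build a miniature nova.conf-style file and filter it the same way:
# only section headers and active (uncommented) settings survive.
cat > /tmp/sample.conf <<'EOF'
[DEFAULT]
#enabled_apis=osapi_compute
enabled_apis=osapi_compute,metadata

[api]
auth_strategy=keystone
EOF
grep '^[a-z\[]' /tmp/sample.conf
```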

• Edit: [root@controller ~]# vi /etc/httpd/conf.d/00-nova-placement-api.conf

Append the following:

<Directory /usr/bin>

  <IfVersion >= 2.4>

    Require all granted

  </IfVersion>

  <IfVersion < 2.4>

    Order allow,deny

    Allow from all

  </IfVersion>

</Directory>

• Restart httpd: [root@controller ~]# systemctl restart httpd

• Sync the nova_api database: [root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova

• Register the cell0 database: [root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

• Create the cell1 cell: [root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

d697d4f8-5df5-4725-860a-858b79fa989f

• Sync the nova database: [root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova (two warnings appear; this is normal)

• Verify that cell0 and cell1 are registered: [root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+

|  Name |                 UUID                 |           Transport URL            |               Database Connection               | Disabled |

+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+

| cell0 | 00000000-0000-0000-0000-000000000000 |               none:/               | mysql+pymysql://nova:****@controller/nova_cell0 |  False   |

| cell1 | d697d4f8-5df5-4725-860a-858b79fa989f | rabbit://openstack:****@controller |    mysql+pymysql://nova:****@controller/nova    |  False   |

+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+

(This output differs from the official documentation; to be investigated later.)

• Start the services:

[root@controller ~]# systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service openstack-nova-consoleauth.service

[root@controller ~]# systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service openstack-nova-consoleauth.service

(Note: the official documentation omits openstack-nova-conductor here, which makes instance creation fail; it also omits openstack-nova-consoleauth.service, without which the instance console cannot be opened from the dashboard.)

(Important: at this point, jump to the compute node and install its compute service first (section 3.1), then return here to check whether the compute node shows up in the database.)

2.8 Verify the compute node

• Confirm that the node has been discovered in the database:

[root@controller ~]# . admin-openrc.sh

[root@controller ~]# openstack compute service list --service nova-compute

An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-15c38b56-c9bd-4bb1-a648-a5419789458b)

This error turned out to be caused by unsynchronized clocks. After installing the familiar NTP and letting the clocks sync, the check was repeated and succeeded:

+----+--------------+---------+------+---------+-------+----------------------------+

| ID | Binary       | Host    | Zone | Status  | State | Updated At                 |

+----+--------------+---------+------+---------+-------+----------------------------+

| 10 | nova-compute | compute | nova | enabled | up    | 2018-10-29T00:19:46.000000 |

+----+--------------+---------+------+---------+-------+----------------------------+

• Discover nodes manually: [root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Found 2 cell mappings.

Skipping cell0 since it does not contain hosts.

Getting computes from cell 'cell1': d697d4f8-5df5-4725-860a-858b79fa989f

Checking host mapping for compute host 'compute': 3bc5b488-5013-4585-a3f7-c084cc80098e

Creating host mapping for compute host 'compute': 3bc5b488-5013-4585-a3f7-c084cc80098e

Found 1 unmapped computes in cell: d697d4f8-5df5-4725-860a-858b79fa989f

• To discover nodes automatically instead, configure: [root@controller ~]# vi /etc/nova/nova.conf

In the [scheduler] section, set:

discover_hosts_in_cells_interval=300

2.9 Install Neutron

2.9.1 Create the Neutron database objects

(database password: neutrondbpass)

• [root@controller ~]# mysql -uroot -pdbrootpass

• MariaDB [(none)]> CREATE DATABASE neutron;

• MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutrondbpass';

• MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutrondbpass';

• MariaDB [(none)]> exit
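The interactive statements above can also be generated into a reviewable file and replayed non-interactively. This is an optional convenience sketch; the /tmp path and variable names are mine, and the replay command assumes the same root password as above (`mysql -uroot -pdbrootpass < /tmp/neutron_db.sql`):

```shell
# Write the Neutron DB bootstrap SQL to a file for review and replay.
DB=neutron
PASS=neutrondbpass
cat > /tmp/${DB}_db.sql <<EOF
CREATE DATABASE IF NOT EXISTS ${DB};
GRANT ALL PRIVILEGES ON ${DB}.* TO '${DB}'@'localhost' IDENTIFIED BY '${PASS}';
GRANT ALL PRIVILEGES ON ${DB}.* TO '${DB}'@'%' IDENTIFIED BY '${PASS}';
EOF
cat /tmp/${DB}_db.sql
```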

2.9.2 Create the user, role, and service

• Create the neutron user: [root@controller ~]# openstack user create --domain default --password-prompt neutron

User Password: (enter the user password: userpass)

Repeat User Password: (repeat userpass)

+---------------------+----------------------------------+

| Field               | Value                            |

+---------------------+----------------------------------+

| domain_id           | default                          |

| enabled             | True                             |

| id                  | daa00fa366c34859adfec17275f70311 |

| name                | neutron                          |

| options             | {}                               |

| password_expires_at | None                             |

+---------------------+----------------------------------+

• Add the admin role to the neutron user: [root@controller ~]# openstack role add --project service --user neutron admin

• Create the neutron service: [root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network

+-------------+----------------------------------+

| Field       | Value                            |

+-------------+----------------------------------+

| description | OpenStack Networking             |

| enabled     | True                             |

| id          | 77d6cdb4d2cf4e33a7fd57377493a1e9 |

| name        | neutron                          |

| type        | network                          |

+-------------+----------------------------------+

• Create the public network endpoint: [root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | f379acbc323948558b938ae413aa0513 |

| interface    | public                           |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | 77d6cdb4d2cf4e33a7fd57377493a1e9 |

| service_name | neutron                          |

| service_type | network                          |

| url          | http://controller:9696           |

+--------------+----------------------------------+

• Create the internal network endpoint: [root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | 1656c27c8f4b4295a36e93fb8f217558 |

| interface    | internal                         |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | 77d6cdb4d2cf4e33a7fd57377493a1e9 |

| service_name | neutron                          |

| service_type | network                          |

| url          | http://controller:9696           |

+--------------+----------------------------------+

• Create the admin network endpoint: [root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696

+--------------+----------------------------------+

| Field        | Value                            |

+--------------+----------------------------------+

| enabled      | True                             |

| id           | 24b0401b9c794fdb9a4198d2e2146f85 |

| interface    | admin                            |

| region       | RegionOne                        |

| region_id    | RegionOne                        |

| service_id   | 77d6cdb4d2cf4e33a7fd57377493a1e9 |

| service_name | neutron                          |

| service_type | network                          |

| url          | http://controller:9696           |

+--------------+----------------------------------+

2.9.3 Configure the provider network

(This and the self-service network in the next section are mutually exclusive. This walkthrough does not use provider networking; skip straight to the next section.)

• Install: [root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

• Edit: [root@controller ~]# vi /etc/neutron/neutron.conf

In the [database] section, set:

connection = mysql+pymysql://neutron:neutrondbpass@controller/neutron

In the [DEFAULT] section, set:

core_plugin = ml2

service_plugins =

transport_url = rabbit://openstack:rabbitpass@controller

auth_strategy = keystone

notify_nova_on_port_status_changes = true

notify_nova_on_port_data_changes = true

In the [keystone_authtoken] section, set:

www_authenticate_uri = http://controller:5000

auth_url = http://controller:5000 (note: change uri to url)

memcached_servers = controller:11211

auth_type = password

and add:

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = userpass

In the [nova] section, set:

auth_url = http://controller:5000

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = nova

password = userpass

In the [oslo_concurrency] section, set:

lock_path = /var/lib/neutron/tmp

• Edit: [root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini

In the [ml2] section, set:

type_drivers = flat,vlan

tenant_network_types =

mechanism_drivers = linuxbridge

extension_drivers = port_security

In the [ml2_type_flat] section, set:

flat_networks = provider

In the [securitygroup] section, set:

enable_ipset = true

• Edit: [root@controller ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

In the [linux_bridge] section, set:

physical_interface_mappings = provider:ens44f0 (using the first NIC for now; to be revisited later)

In the [vxlan] section, set:

enable_vxlan = false

In the [securitygroup] section, set:

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

enable_security_group = true

• Edit: [root@controller ~]# vi /etc/neutron/dhcp_agent.ini

In the [DEFAULT] section, set:

interface_driver = linuxbridge

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = true

2.9.4 Configure the self-service network

(This and the provider network above are mutually exclusive; this walkthrough uses the self-service network.)

• Install: [root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

• Edit: [root@controller ~]# vi /etc/neutron/neutron.conf

In the [database] section, set:

connection = mysql+pymysql://neutron:neutrondbpass@controller/neutron

In the [DEFAULT] section, set:

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = true

transport_url = rabbit://openstack:rabbitpass@controller

auth_strategy = keystone

notify_nova_on_port_status_changes = true

notify_nova_on_port_data_changes = true

In the [keystone_authtoken] section, set:

www_authenticate_uri = http://controller:5000

auth_url = http://controller:5000 (note: change uri to url)

memcached_servers = controller:11211

auth_type = password

and add:

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = userpass

In the [nova] section, set:

auth_url = http://controller:5000

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = nova

password = userpass

In the [oslo_concurrency] section, set:

lock_path = /var/lib/neutron/tmp

Note: the settings below deviate noticeably from the official documentation, which uses linuxbridge; Open vSwitch is more widely used now. They also lay the groundwork for the later configuration of VLANs, routing, and external connectivity via floating IPs.

• Edit: [root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini

In the [ml2] section, set:

type_drivers = local,flat,vlan,gre,vxlan

tenant_network_types = vlan

mechanism_drivers = openvswitch,l2population

extension_drivers = port_security

In the [ml2_type_flat] section, set:

flat_networks = *

In the [ml2_type_vlan] section, set:

network_vlan_ranges = default:3001:4000

Add an [ovs] section with:

physical_interface_mappings = default:ens44f1

In the settings above, default is the label from [ml2_type_vlan] (i.e. the physical network name); any string will do, and it is mapped here to the physical NIC ens44f1.

In the [securitygroup] section, set:

enable_ipset = true

• Configure: [root@controller ~]# vi /etc/neutron/plugins/ml2/openvswitch_agent.ini

In the [agent] section, set:

tunnel_types = vxlan

l2_population = True

prevent_arp_spoofing = True

In the [ovs] section, set:

phynic_mappings = default:ens44f1

local_ip = 10.47.181.26

bridge_mappings = default:br-eth

In the [securitygroup] section, set:

enable_security_group = false

• Edit: [root@controller ~]# vi /etc/neutron/l3_agent.ini

In the [DEFAULT] section, set:

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

• Edit: [root@controller ~]# vi /etc/neutron/dhcp_agent.ini

In the [DEFAULT] section, set:

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = true
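All the ini edits above can also be scripted rather than done by hand. The `set_ini` helper below is a home-grown sketch (crudini from the openstack-utils package does the same job more robustly if it is installed), demonstrated on a throwaway file rather than the live config:

```shell
# set_ini FILE SECTION KEY VALUE: drop any existing assignment of KEY
# inside [SECTION], then insert the new one right after the header.
set_ini() {
  sed -i "/^\[$2\]\$/,/^\[/{/^$3[[:space:]]*=/d;}" "$1"
  sed -i "/^\[$2\]\$/a $3 = $4" "$1"
}

# Demonstration on a scratch file, not on the real ml2_conf.ini.
cat > /tmp/ml2_demo.ini <<'EOF'
[ml2]
tenant_network_types = vxlan
[ml2_type_vlan]
network_vlan_ranges =
EOF
set_ini /tmp/ml2_demo.ini ml2 tenant_network_types vlan
set_ini /tmp/ml2_demo.ini ml2_type_vlan network_vlan_ranges default:3001:4000
cat /tmp/ml2_demo.ini
```

Note that the `sed a` form used here is a GNU sed extension, which is fine on CentOS.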

2.9.5 Remaining configuration

• Configure: [root@controller ~]# vi /etc/neutron/metadata_agent.ini

In the [DEFAULT] section, set:

nova_metadata_host = controller

metadata_proxy_shared_secret = METADATA_SECRET

• Configure: [root@controller ~]# vi /etc/nova/nova.conf

In the [neutron] section, set:

url=http://controller:9696

auth_url=http://controller:5000

auth_type=password

project_domain_name=default

user_domain_name=default

region_name=RegionOne

project_name=service

username=neutron

password=userpass

service_metadata_proxy=true

metadata_proxy_shared_secret = METADATA_SECRET

• Create the symlink: [root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

• Sync the database: [root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

• Configure Open vSwitch: add a bridge and attach the physical NIC to it:

[root@controller ~]# ovs-vsctl add-br br-eth

[root@controller ~]# ovs-vsctl add-port br-eth ens44f1

• Start the services (if one fails to start, a reboot usually clears it):

[root@controller ~]# systemctl restart openstack-nova-api

[root@controller ~]# systemctl start neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

[root@controller ~]# systemctl enable neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

• Since the self-service network was chosen, this service must be started as well:

[root@controller ~]# systemctl start neutron-l3-agent.service

[root@controller ~]# systemctl enable neutron-l3-agent.service

Important: at this point, jump to the compute node and install Neutron there (section 3.2).

2.10 Install the dashboard

2.10.1 Installation

• Install: [root@controller ~]# yum install openstack-dashboard

• Edit: [root@controller ~]# vi /etc/openstack-dashboard/local_settings

Set the following:

OPENSTACK_HOST = "controller"

ALLOWED_HOSTS = ['*', 'localhost']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache' (added)

CACHES = {

    'default': {

        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',

        'LOCATION':'controller:11211', (added)

    },

}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST (unchanged)

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {

    "identity": 3,

    "image": 2,

    "volume": 2,

}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

OPENSTACK_NEUTRON_NETWORK = {

    'enable_router': False,

    'enable_quotas': False,

    'enable_distributed_router': False,

    'enable_ha_router': False,

    'enable_lb': False, (added)

    'enable_firewall': False, (added)

    'enable_vpn': False, (added)

    'enable_fip_topology_check': False,

    ......

}

• Edit: [root@controller ~]# vi /etc/httpd/conf.d/openstack-dashboard.conf

Add one line: WSGIApplicationGroup %{GLOBAL}

• Restart the services: [root@controller ~]# systemctl restart httpd.service memcached.service

• Verify: open http://10.47.181.26/dashboard in a browser

Domain: Default, user name: admin, password: adminpass

2.10.2 Basic management of images, networks, and instances

Not covered in detail here; see the network configuration in section 4.4.

3. Installing the compute node

3.1 Install Nova

(Note: this differs from installing Nova on the controller node!)

• Install: [root@compute /]# yum install openstack-nova-compute

The first attempt failed with: Requires: qemu-kvm-rhev >= 2.10.0

So a new repository was added to the yum source file zte-mirror.repo:

[Virt]

name=CentOS-$releasever - Virt

baseurl=http://mirrors.zte.com.cn/centos/7.5.1804/virt/x86_64/kvm-common/

gpgcheck=0

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

Then run yum clean all, yum makecache, and yum update in turn, and reinstall openstack-nova-compute; problem solved.

• Edit: [root@compute /]# vi /etc/nova/nova.conf

In the [DEFAULT] section, set:

enabled_apis=osapi_compute,metadata

transport_url=rabbit://openstack:rabbitpass@controller

my_ip=10.47.181.27

use_neutron=true

firewall_driver=nova.virt.firewall.NoopFirewallDriver

In the [api] section, uncomment:

auth_strategy=keystone

In the [keystone_authtoken] section, set:

auth_url=http://controller:5000/v3 (be careful: the stock file uses uri; change it to url)

memcached_servers=controller:11211

auth_type=password

and add:

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = userpass

In the [vnc] section, set:

enabled=true

server_listen=0.0.0.0

server_proxyclient_address=$my_ip

novncproxy_base_url=http://10.47.181.26:6080/vnc_auto.html (note: a hostname cannot be used here, because this URL is opened in a browser on your PC, and the PC's DNS will not resolve the controller hostname)

In the [glance] section, set:

api_servers=http://controller:9292

In the [oslo_concurrency] section, uncomment:

lock_path=/var/lib/nova/tmp

In the [placement] section, set:

region_name=RegionOne

project_domain_name=Default

project_name=service

auth_type=password

user_domain_name=Default

auth_url=http://controller:5000/v3

username=placement

password=userpass

• Check for hardware virtualization support: [root@compute /]# egrep -c '(vmx|svm)' /proc/cpuinfo

56

(Note: a result of 0 means no hardware support; in that case set virt_type=qemu under [libvirt] in /etc/nova/nova.conf.)
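The check is just a count of CPU flag lines, which can be seen in isolation on canned /proc/cpuinfo-style content (the sample file and flag values below are illustrative):

```shell
# Two CPU entries whose flags include vmx -> count 2.
# A count of 0 would mean no hardware virtualization support.
cat > /tmp/cpuinfo.sample <<'EOF'
flags : fpu vme de pse tsc msr pae vmx ssse3
flags : fpu vme de pse tsc msr pae vmx ssse3
EOF
egrep -c '(vmx|svm)' /tmp/cpuinfo.sample
```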

• Start the services:

[root@compute /]# systemctl start libvirtd.service openstack-nova-compute.service

[root@compute /]# systemctl enable libvirtd.service openstack-nova-compute.service

(Now return to section 2.8 and check, on the controller, whether this compute node is discovered.)

3.2 Install Neutron

3.2.1 Base installation and configuration

• Install: [root@compute ~]# yum install openstack-neutron-openvswitch ebtables ipset

• Edit: [root@compute ~]# vi /etc/neutron/neutron.conf

In the [DEFAULT] section, set:

transport_url = rabbit://openstack:rabbitpass@controller

auth_strategy = keystone

In the [keystone_authtoken] section, set:

www_authenticate_uri = http://controller:5000

auth_url = http://controller:5000 (note: change uri to url)

memcached_servers = controller:11211

auth_type = password

and add:

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = userpass

In the [oslo_concurrency] section, set:

lock_path = /var/lib/neutron/tmp

3.2.2 Configure the provider network

(The provider and self-service networks are mutually exclusive; this walkthrough uses self-service, so skip straight to the next section.)

• Edit: [root@compute ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

In the [linux_bridge] section, set:

physical_interface_mappings = provider:ens44f0

In the [vxlan] section, set:

enable_vxlan = false

In the [securitygroup] section, set:

enable_security_group = true

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

3.2.3 Configure the self-service network

(The provider and self-service networks are mutually exclusive; this walkthrough uses self-service.)

These settings deviate noticeably from the official documentation, which uses linuxbridge; the more widely used Open vSwitch is chosen here, also to prepare for the later configuration of VLANs, routing, and external connectivity via floating IPs.

• Configure: [root@compute ~]# vi /etc/neutron/plugins/ml2/openvswitch_agent.ini

In the [agent] section, set:

tunnel_types = vxlan

l2_population = True

prevent_arp_spoofing = True

In the [ovs] section, set:

phynic_mappings = default:ens44f1

local_ip = 10.47.181.27 (this node's own IP; do not copy the controller's 10.47.181.26)

bridge_mappings = default:br-eth

In the [securitygroup] section, set:

enable_security_group = false

Then restart the neutron-openvswitch-agent service:

[root@compute ~]# systemctl restart neutron-openvswitch-agent.service

Configure OVS: add a bridge and attach the physical NIC to it:

[root@compute ~]# ovs-vsctl add-br br-eth

[root@compute ~]# ovs-vsctl add-port br-eth ens44f1

3.2.4 Remaining configuration

• Edit: [root@compute ~]# vi /etc/nova/nova.conf

In the [neutron] section, set:

url=http://controller:9696

auth_url=http://controller:5000

auth_type=password

project_domain_name=default

user_domain_name=default

region_name=RegionOne

project_name=service

username=neutron

password=userpass

• Start the services:

[root@compute ~]# systemctl restart openstack-nova-compute

[root@compute ~]# systemctl start neutron-openvswitch-agent.service

[root@compute ~]# systemctl enable neutron-openvswitch-agent.service

• Verify on the controller node: [root@controller ~]# openstack network agent list

This initially returned an HTTP 500 error; analysis showed that the MySQL maximum connection limit had been exceeded. After raising it as described in section 2.1 and retrying, the output looked like this:

[root@controller ~]# openstack network agent list

+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |

+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

| 186f4029-341a-4a2f-a662-e0b388f517ab | Linux bridge agent | controller | None              | XXX   | UP    | neutron-linuxbridge-agent |

| 33705d2b-3cb0-4003-b1c1-f1434bee8abc | Open vSwitch agent | controller | None              | :-)   | UP    | neutron-openvswitch-agent |

| 4b73a101-e17c-4de2-85ff-213cff13df4a | Linux bridge agent | compute    | None              | XXX   | UP    | neutron-linuxbridge-agent |

| 5668bf57-6d63-4883-bf9a-fa3596447bf8 | L3 agent           | controller | nova              | :-)   | UP    | neutron-l3-agent          |

| d4524c6d-7ca7-4c7a-adc4-a1663c5be9df | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |

| e690b9d6-3022-4a66-921c-5af7292a4038 | Open vSwitch agent | compute    | None              | :-)   | UP    | neutron-openvswitch-agent |

| f358e884-2158-42e4-a927-edaafa2ff650 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |

+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

 

