The OpenStack Cloud Computing Journey - Mitaka Release


1.1 Introduction to Cloud Computing

Cloud computing is a model of Internet-based computing in which shared software and hardware resources and information are provided on demand to computers, terminals, and other devices.

Cloud computing is the next major shift after the 1980s transition from mainframes to the client-server model. Users no longer need to understand the details of the infrastructure "in the cloud", have the corresponding expertise, or control it directly. Cloud computing describes a new Internet-based model for adding, consuming, and delivering IT services, usually involving dynamically scalable and often virtualized resources provided over the Internet.

1.1.1 Characteristics of Cloud Computing

Cloud services on the Internet share certain characteristics with clouds and the water cycle in nature, so "cloud" is a rather fitting metaphor. According to the definition from the (US) National Institute of Standards and Technology (NIST):

A cloud computing service should exhibit the following characteristics:

🥦 On-demand self-service.

🥦 Broad network access from any device, anywhere.

🥦 Resource pooling shared among many users.

🥦 Rapid elasticity and flexible redeployment.

🥦 Measured and monitored service.

The following characteristics are also generally recognized:

🍀 Rapid resource deployment and service delivery based on virtualization.

🍀 A lighter processing burden on user terminals.

🍀 Less dependence on the user's own IT expertise.

1.1.2 Cloud Computing Service Models

The definition of cloud computing identifies three service models:

- Service model details

Software as a Service (SaaS):

      Software-as-a-service

Consumers use the application, but do not control the operating system, hardware, or network infrastructure it runs on. The underlying idea is software as a service: the vendor supplies the software on a rental basis rather than as a purchase, most commonly by handing the customer a set of account credentials.

Examples: Microsoft CRM and Salesforce.com

Platform as a Service (PaaS):

      Platform-as-a-service

Consumers use a host to run their applications. They control the environment the applications run in (and have some control over the host), but do not control the operating system, hardware, or network infrastructure. The platform is typically an application infrastructure.

Example: Google App Engine

Infrastructure as a Service (IaaS):

      Infrastructure-as-a-service

Consumers use "fundamental computing resources" such as processing power, storage, network components, or middleware. They control the operating system, storage, the deployed applications, and some network components (such as firewalls and load balancers), but do not control the cloud infrastructure underneath.

Examples: Amazon AWS and Rackspace

For more on these three service models, see: https://www.zhihu.com/question/21641778

1.1.3 Types of Cloud Computing

- Cloud type examples

🔊 Public Cloud

In short, a public cloud is offered to customers over the network by a third-party provider. "Public" does not necessarily mean "free", although it can mean free or fairly cheap, nor does it mean that user data is visible to everyone: public cloud providers normally enforce access-control mechanisms for their users. As a solution, the public cloud is both elastic and cost-effective.

🔊 Private Cloud

A private cloud has many of the advantages of a public cloud environment, such as elasticity and suitability for delivering services. The difference is that in a private cloud both the data and the applications are managed inside the organization, and, unlike a public cloud service, it is not constrained in the same way by network bandwidth, security concerns, or regulatory restrictions. In addition, a private cloud gives the provider and the user greater control over the cloud infrastructure and improves security and resilience, because both users and networks are subject to dedicated restrictions.

🔊 Hybrid Cloud

A hybrid cloud combines public and private clouds. In this model, users typically outsource non-business-critical workloads and process them in the public cloud, while keeping control of business-critical services and data themselves.

1.1.4 Why Choose Cloud Computing

1. It effectively eliminates hardware single points of failure

2. Hardware resources can be added or removed on demand

3. BGP lines solve the north-south (cross-carrier) interconnection problem

4. Bandwidth can be added or removed on demand

5. More attractive payment models

For details, see "The Road to Cloud Computing: Why Choose the Cloud":

https://www.cnblogs.com/cmt/archive/2013/02/27/why-into-cloud.html

1.2 Introduction to OpenStack

OpenStack is cloud computing software originally developed jointly by NASA and Rackspace, released under the Apache License 2.0 as a free and open-source project.

OpenStack is Infrastructure-as-a-Service (IaaS) software that lets anyone build and offer cloud computing services on their own.

In addition, OpenStack is used to build "private clouds" behind the firewall, giving the departments of an organization or enterprise shared resources.

1.2.1 Market Trends

Rackspace's OpenStack-based private cloud business brings in USD 700 million in revenue per year and is growing at more than 20%.

Although OpenStack is still not fully mature in some areas, it is backed by a large number of organizations worldwide and a large base of contributing developers, and it is evolving quickly. Internationally, many public, private, and hybrid clouds have already been built on OpenStack, for example Rackspace Cloud, HP Cloud, MercadoLibre's IT infrastructure cloud, AT&T's Cloud Architect, and Dell's OpenStack solutions. Interest is also warming up in China: Teamsun, AutoNavi (Amap), JD.com, Alibaba, Baidu, ZTE, Huawei, and others have taken a strong interest in OpenStack and participate in the project.

Since its founding in 2010, OpenStack has published ten releases. The Icehouse release alone involved 120 organizations and 1,202 code contributors, and the most recent release is Juno. OpenStack may well take the leading position in Infrastructure-as-a-Service (IaaS) resource management and become the standard "cloud operating system" for managing public, private, and hybrid clouds.

1.2.2 Major Users

NASA (the US National Aeronautics and Space Administration)

The DAIR (Digital Accelerator for Innovation and Research) project of CANARIE, Canada's semi-public research network, which offers a cloud research and development environment to universities and small and medium-sized businesses.

HP Cloud (running on Ubuntu Linux)

MercadoLibre's IT infrastructure cloud, which currently manages more than 6,000 virtual machines with OpenStack.

AT&T's "Cloud Architect", offering public cloud services in Dallas, San Diego, and New Jersey.

1.2.3 OpenStack Project Overview

- Project relationship diagram

Detailed description of each component:

Service type / Project name / Description:

Dashboard — Horizon (web interface)
    Provides a web-based self-service portal to interact with the underlying OpenStack services, such as launching an instance, assigning IP addresses, and configuring access control.

Compute — Nova (compute)
    Manages the lifecycle of compute instances in an OpenStack environment: spawning, scheduling, and decommissioning virtual machines on demand.

Networking — Neutron (network service)
    Provides network-connectivity-as-a-service for other OpenStack services, such as OpenStack Compute, and an API for users to define networks and attach to them. Its plug-in based architecture supports many network providers and technologies.

Storage: Object Storage — Swift (object store)
    Stores and retrieves arbitrary unstructured data objects through a RESTful, HTTP-based API. It is highly fault tolerant, relying on data replication and a scale-out architecture. It is not implemented like a file server with mountable directories; instead it writes objects and files to multiple drives, ensuring the data is replicated across the servers in the cluster.

Storage: Block Storage — Cinder (block storage)
    Provides persistent block storage to running instances. Its pluggable driver architecture facilitates the creation and management of block storage devices.

Shared services: Identity service — Keystone (authentication)
    Provides authentication and authorization for the other OpenStack services, and a catalog of endpoints for all OpenStack services.

Shared services: Image service — Glance (image service)
    Stores and retrieves virtual machine disk images; OpenStack Compute uses it during instance provisioning.

Shared services: Telemetry — Ceilometer (metering)
    Provides monitoring and metering of the OpenStack cloud for billing, benchmarking, scalability, and statistical purposes.

Higher-level services: Orchestration — Heat
    Orchestrates composite cloud applications using either the native HOT (Heat Orchestration Template) format or the AWS CloudFormation template format, through both an OpenStack-native REST API and a CloudFormation-compatible Query API.

1.2.4 System Environment

The host environments used in this document are all set up following the official recommendations: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/environment.html

Description of the controller node:

[root@controller ~]# cat /etc/redhat-release 
CentOS Linux release 7.2.1511 (Core) 
[root@controller ~]# uname -r
3.10.0-327.el7.x86_64
[root@controller ~]# sestatus 
SELinux status:                 disabled
[root@controller ~]# systemctl status firewalld.service 
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
[root@controller ~]# hostname -I
10.0.0.11 172.16.1.11 
[root@controller ~]# tail -3  /etc/hosts
10.0.0.11   controller
10.0.0.31   compute1
10.0.0.32   compute2

The compute1 and compute2 nodes are configured the same way as controller.

System installation reference: http://www.cnblogs.com/clsn/p/8338099.html#_label1

System tuning notes: http://www.cnblogs.com/clsn/p/8338099.html#_label4

Note: the network interface names are changed (a sketch of one way to do this follows).
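A minimal sketch of one common way to get classic ethX interface names back on CentOS 7. This is an assumption, not part of the original environment notes, and the ifcfg-ens33 file name is a hypothetical example; adapt it to your own machine:

# Hypothetical example only: disable predictable NIC naming so interfaces come up as eth0, eth1, ...
sed -i 's/^GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 /' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
# Rename the interface config file (ens33 is an assumed original name), fix the device name inside it, then reboot.
mv /etc/sysconfig/network-scripts/ifcfg-ens33 /etc/sysconfig/network-scripts/ifcfg-eth0
sed -i 's/ens33/eth0/g' /etc/sysconfig/network-scripts/ifcfg-eth0
reboot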

1.3 OpenStack Base Service Configuration

Note: the users and passwords used throughout this document follow, and stay strictly consistent with, the document below.

https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/environment-security.html

Installation workflow for OpenStack services (keystone excepted); a generic sketch of the pattern follows the list:

1) Create the database and grant privileges;

2) Create the user in keystone and grant roles;

3) Create the service entity in keystone and register the API endpoints;

4) Install the packages;

5) Edit the configuration file (database connection and so on);

6) Sync the database;

7) Start the service.
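As a reference only, here is a generic sketch of this pattern for a hypothetical service called SERVICE. The placeholders SERVICE, SERVICE_TYPE, SERVICE_DBPASS, SERVICE_PASS, and PORT are illustrative; the real commands for each service appear in their own sections below:

# 1) Create the database and grant privileges (placeholders, not a real service)
mysql -u root -p -e "CREATE DATABASE SERVICE;
GRANT ALL PRIVILEGES ON SERVICE.* TO 'SERVICE'@'localhost' IDENTIFIED BY 'SERVICE_DBPASS';
GRANT ALL PRIVILEGES ON SERVICE.* TO 'SERVICE'@'%' IDENTIFIED BY 'SERVICE_DBPASS';"
# 2) Create the keystone user and grant it the admin role on the service project
openstack user create --domain default --password SERVICE_PASS SERVICE
openstack role add --project service --user SERVICE admin
# 3) Create the service entity and register the API endpoints
openstack service create --name SERVICE --description "OpenStack SERVICE" SERVICE_TYPE
openstack endpoint create --region RegionOne SERVICE_TYPE public http://controller:PORT
openstack endpoint create --region RegionOne SERVICE_TYPE internal http://controller:PORT
openstack endpoint create --region RegionOne SERVICE_TYPE admin http://controller:PORT
# 4) yum install the packages; 5) edit the service's config file; 6) sync its database; 7) start and enable its services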

1.3.1 OpenStack Service Deployment Order

1.3.2 Configure a Local Yum Repository

First, mount the installation image at /mnt

mount /dev/cdrom /mnt
echo 'mount /dev/cdrom /mnt' > /etc/rc.d/rc.local
chmod +x  /etc/rc.d/rc.local 

Create the repo file

cat >/etc/yum.repos.d/local.repo<<-'EOF'
[local]
name=local
baseurl=file:///mnt
gpgcheck=0

[openstack]
name=openstack-mitaka
baseurl=file:///opt/repo
gpgcheck=0
EOF

Build the yum cache

[root@controller repo]# yum makecache

1.3.3 Install the NTP Time Service

Official documentation: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/environment-ntp.html

Control node (provides the time service for the other machines to synchronize against)

Install the software

yum install chrony -y

Configure the control node (modify line 22 of the configuration file)

[root@controller ~]# vim /etc/chrony.conf 
···
# Allow NTP client access from local network.
allow 10/8

Start the service and enable it at boot

systemctl enable chronyd.service
systemctl start chronyd.service

Compute nodes (configure the chrony client)

Install the software

yum install chrony -y

Around line 3 of the configuration file, remove the unused upstream servers.

Modify it with a sed command

sed -ri.bak '/server/s/^/#/g;2a server 10.0.0.11 iburst' /etc/chrony.conf

   Configuration file after the change:

[root@compute1 ~]# vim /etc/chrony.conf 
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 10.0.0.11 iburst

Start the service and enable it at boot

systemctl enable chronyd.service
systemctl start chronyd.service

1.3.4 OpenStack Packages (install these when adding a new compute node)

   Official documentation: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/environment-packages.html

Install the OpenStack client:

yum -y install python-openstackclient 

RHEL and CentOS enable SELinux by default

# Install the openstack-selinux package so that security policies for the OpenStack services are managed automatically
    yum -y install openstack-selinux

1.3.5 Install the SQL Database (on the controller node)

Official documentation: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/environment-sql-database.html

Install the MariaDB packages:

[root@controller ~]# yum -y install mariadb mariadb-server python2-PyMySQL

Create the configuration file

cat > /etc/my.cnf.d/openstack.cnf <<-'EOF'
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
EOF

Start MariaDB

systemctl enable mariadb.service
systemctl start mariadb.service

Run the MariaDB secure initialization

To keep the database service secure, run the ``mysql_secure_installation`` script. In particular, set a suitable password for the database root user.

[root@controller ~]# mysql_secure_installation
···
Enter current password for root (enter for none): 
OK, successfully used password, moving on...
Set root password? [Y/n] n
 ... skipping.
Remove anonymous users? [Y/n] Y
 ... Success!
Disallow root login remotely? [Y/n] Y
 ... Success!
Remove test database and access to it? [Y/n] Y - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!
Reload privilege tables now? [Y/n] Y
 ... Success!

Thanks for using MariaDB! 

1.3.6 NoSQL Database

Official documentation: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/environment-nosql-database.html

The Telemetry service uses a NoSQL database to store information; typically this database runs on the controller node.

The guide uses MongoDB.

It is used for billing by ceilometer. Because this deployment is a private cloud and a private cloud does not need billing, it is not installed here.

1.3.7 Deploy the Message Queue

Official documentation: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/environment-messaging.html

Install the message queue software

[root@controller ~]# yum -y install rabbitmq-server

Start the message queue service and configure it to start at boot:

systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

Add the openstack user:

[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
Replace RABBIT_PASS with a suitable password.

Grant the ``openstack`` user configure, write, and read access:

[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
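Optionally, you can confirm that the user and its permissions exist; these are standard rabbitmqctl subcommands, though the exact output format depends on the RabbitMQ version:

rabbitmqctl list_users
rabbitmqctl list_permissions -p /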

1.3.8 Deploy the Memcached Service

Official documentation: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/environment-memcached.html

Install the memcached packages

[root@controller ~]# yum -y install memcached python-memcached

Edit the memcached configuration file

[root@controller ~]# cat  /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 10.0.0.11"  <-- changed here: set to the memcached host address or network segment

Start the Memcached service and enable it at boot.

systemctl enable memcached.service
systemctl start memcached.service

1.3.9 Verify That the Services Deployed Above Are Working

Check the listening ports

[root@controller ~]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      17164/beam          
tcp        0      0 10.0.0.11:3306          0.0.0.0:*               LISTEN      16985/mysqld        
tcp        0      0 10.0.0.11:11211         0.0.0.0:*               LISTEN      17962/memcached     
tcp        0      0 0.0.0.0:4369            0.0.0.0:*               LISTEN      1/systemd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1402/sshd           
tcp6       0      0 :::5672                 :::*                    LISTEN      17164/beam          
tcp6       0      0 :::22                   :::*                    LISTEN      1402/sshd           
udp        0      0 0.0.0.0:123             0.0.0.0:*                           1681/chronyd        
udp        0      0 127.0.0.1:323           0.0.0.0:*                           1681/chronyd        
udp        0      0 10.0.0.11:11211         0.0.0.0:*                           17962/memcached     
udp6       0      0 ::1:323                 :::*                                1681/chronyd    

Port reference

chronyd service        123 (serves time to other machines), 323 (synchronization with the upstream server)

MariaDB database       3306 (database access)

rabbitmq message queue 4369, 25672 (used by the HA architecture), 5672 (port applications write to)

memcached (tokens)     11211
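A quick sanity check that all four services are listening, simply filtering the same netstat output shown above by program name:

netstat -lntup | egrep 'chronyd|mysqld|beam|memcached'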

  At this point the OpenStack base configuration is complete.

1.4 Configure the Keystone Identity Service

Official documentation: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/keystone-install.html

Identity management: provides a single point of integration for authorization management and the service catalog.

Catalog service: works like a call center's front desk, directing each request to the right service.

   Install and configure the OpenStack Identity service, code-named keystone, on the controller node. For performance reasons, this configuration deploys Fernet tokens and the Apache HTTP server to handle requests.

1.4.1 Create the Database

Use a database client to connect to the database server as the root user:

[root@controller ~]# mysql -u root -p

Create the keystone database:

MariaDB [(none)]> CREATE DATABASE keystone;

``keystone``數據庫授予恰當的權限:

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';

Exit the database client when finished.

MariaDB [(none)]> exit

1.4.2 Install Keystone

yum -y install openstack-keystone httpd mod_wsgi

The packages installed are the keystone service itself, the httpd web server, and mod_wsgi, the middleware that connects Python programs to the web service.

   On understanding CGI and WSGI: https://www.zhihu.com/question/19998865

1.4.3 Modify the Configuration File

Back up the configuration file

[root@controller ~]# cp /etc/keystone/keystone.conf{,.bak}

Strip the configuration file down to its effective settings

[root@controller ~]# egrep -v '^#|^$' /etc/keystone/keystone.conf.bak  >/etc/keystone/keystone.conf

Edit the configuration file by hand

In the ``[DEFAULT]`` section, define the value of the initial administration token

[DEFAULT]
admin_token = ADMIN_TOKEN

[database] 部分,配置數據庫訪問 

[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

``[token]``部分,配置Fernet UUID令牌的提供者

[token]
provider = fernet

For notes on the different token types, see: https://www.abcdocker.com/abcdocker/1797

[Automation] Automated configuration of config files (used heavily in this document)

Install the automated configuration tool openstack-utils

yum install openstack-utils.noarch -y
[root@controller ~]# rpm -ql openstack-utils
/usr/bin/openstack-config

Automated configuration commands

cp /etc/keystone/keystone.conf{,.bak}
grep '^[a-Z\[]' /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token  ADMIN_TOKEN
openstack-config --set /etc/keystone/keystone.conf database connection  mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
openstack-config --set /etc/keystone/keystone.conf token provider  fernet

1.4.4 Initialize the Identity Service Database (database sync)

[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone

Verify that the database synced successfully

[root@controller ~]# mysql keystone -e 'show tables'

1.4.5 Initialize the Fernet Keys

[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

After the command runs, a fernet-keys directory is generated under /etc/keystone/:

[root@controller ~]# ls /etc/keystone/
default_catalog.templates  keystone.conf.bak   policy.json
fernet-keys                keystone-paste.ini  sso_callback_template.html
keystone.conf              logging.conf

1.4.6 Configure the Apache HTTP Server

Edit the ``/etc/httpd/conf/httpd.conf`` file and configure the ``ServerName`` option

echo 'ServerName controller' >>/etc/httpd/conf/httpd.conf

Create the configuration file /etc/httpd/conf.d/wsgi-keystone.conf

   Note: keystone is a special case in this respect; the other services create their own configuration files.

[root@controller ~]# cat /etc/httpd/conf.d/wsgi-keystone.conf
Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

1.4.7 Start the Apache HTTP Service and Enable It at Boot

systemctl enable httpd.service
systemctl start httpd.service 

1.4.8 Create the Service Entity and API Endpoints

Official documentation: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/keystone-services.html

a. Configure the environment variables

Configure the authentication token

export OS_TOKEN=ADMIN_TOKEN

Configure the endpoint URL

export OS_URL=http://controller:35357/v3

Configure the Identity API version

export OS_IDENTITY_API_VERSION=3

Check the environment variables

[root@controller ~]# env |grep OS

Command set:

export OS_TOKEN=ADMIN_TOKEN
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
env |grep OS

b. Create the service entity and API endpoints

Creation command

openstack service create --name keystone --description "OpenStack Identity" identity

Execution output

[root@controller ~]#  openstack service create \
>   --name keystone --description "OpenStack Identity" identity
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Identity               |
| enabled     | True                             |
| id          | f08ec36b2b7340d6976fcb2bbd24e83b |
| name        | keystone                         |
| type        | identity                         |
+-------------+----------------------------------+

c. Create the Identity service API endpoints

   Command set

openstack endpoint create --region RegionOne identity public http://controller:5000/v3
openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
openstack endpoint create --region RegionOne identity admin http://controller:35357/v3

Execution output

[root@controller ~]# openstack endpoint create --region RegionOne \
>   identity public http://controller:5000/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | e27dd713753f47b8a1062ac50ca33845 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | f08ec36b2b7340d6976fcb2bbd24e83b |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://controller:5000/v3        |
+--------------+----------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne \
>   identity internal http://controller:5000/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 71b7435fa2df4c58bb6ca5cc38a434a7 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | f08ec36b2b7340d6976fcb2bbd24e83b |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://controller:5000/v3        |
+--------------+----------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne \
>   identity admin http://controller:35357/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | cf58eee084c04777a520d487adc1a88f |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | f08ec36b2b7340d6976fcb2bbd24e83b |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://controller:35357/v3       |
+--------------+----------------------------------+

1.4.9 Create Domains, Projects, Users, and Roles

Official documentation: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/keystone-users.html

a. Create the ``default`` domain

openstack domain create --description "Default Domain" default

b. For administrative operations, create an administrative project, user, and role in your environment

Create the admin project

openstack project create --domain default --description "Admin Project" admin

Create the admin user

openstack user create --domain default --password-prompt admin

Create the admin role

openstack role create admin

Add the ``admin`` role to the admin project and user

openstack role add --project admin --user admin admin

Command set:

openstack domain create --description "Default Domain" default
openstack project create --domain default --description "Admin Project" admin
openstack user create --domain default --password ADMIN_PASS admin
openstack role create admin
openstack role add --project admin --user admin admin

c. Create the service project

[root@controller ~]#  openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | df6407ae93bb407d876f2ee4787ede79 |
| enabled     | True                             |
| id          | cd2107aa3a8f4066a871ca029641cfd7 |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | df6407ae93bb407d876f2ee4787ede79 |
+-------------+----------------------------------+

Verify all of the operations above

Command set:

openstack service list 
openstack endpoint list | grep keystone |wc -l 
openstack domain list 
openstack project list 
openstack user list 
openstack role list 

   List the services

[root@controller ~]# openstack service list 
+----------------------------------+----------+----------+
| ID                               | Name     | Type     |
+----------------------------------+----------+----------+
| f08ec36b2b7340d6976fcb2bbd24e83b | keystone | identity |
+----------------------------------+----------+----------+

   List the current domains

[root@controller ~]# openstack domain list 
+----------------------------------+---------+---------+----------------+
| ID                               | Name    | Enabled | Description    |
+----------------------------------+---------+---------+----------------+
| df6407ae93bb407d876f2ee4787ede79 | default | True    | Default Domain |
+----------------------------------+---------+---------+----------------+

   List the projects

[root@controller ~]# openstack project list 
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| cd2107aa3a8f4066a871ca029641cfd7 | service |
| d0dfbdbc115b4a728c24d28bc1ce1e62 | admin   |
+----------------------------------+---------+

   List the current users

[root@controller ~]# openstack user list 
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| d8f4a1d74f52482d8ebe2184692d2c1c | admin |
+----------------------------------+-------+

   List the current roles

[root@controller ~]# openstack role list 
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 4de514c418ee480d898773e4f543b79d | admin |
+----------------------------------+-------+

Notes on domains, projects, users, and roles:

Domain  — a collection of projects and users; in a public or private cloud it usually represents a customer.

Group   — a collection of some of the users within a domain.

Project — a collection of IT infrastructure resources, such as virtual machines, volumes, and images.

Role    — an authorization; represents a user's permissions on the resources of a project.

Token   — a time-limited identity credential issued to a user for a given target (a project or a domain).
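To make these relationships concrete, a hypothetical example (the demo project, demo user, and user role below are illustrative and are not created elsewhere in this document): a user can only operate on a project's resources after a role has been assigned to that user on that project.

openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password DEMO_PASS demo
openstack role create user
openstack role add --project demo --user demo user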

1.4.10 Create the OpenStack Client Environment Script

Official documentation: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/keystone-openrc.html

Edit the admin-openrc file and add the following content

[root@controller ~]# vi admin-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

   [Important] Always load the environment variable script

Use the script to set the environment variables

[root@controller ~]# source admin-openrc 
[root@controller ~]# env|grep OS
HOSTNAME=controller
OS_USER_DOMAIN_NAME=default
OS_IMAGE_API_VERSION=2
OS_PROJECT_NAME=admin
OS_IDENTITY_API_VERSION=3
OS_PASSWORD=ADMIN_PASS
OS_AUTH_URL=http://controller:35357/v3
OS_USERNAME=admin
OS_PROJECT_DOMAIN_NAME=default
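With the variables loaded, you can optionally verify that authentication works end to end by requesting a token; this is the standard keystone verification step (unset the bootstrap variables first if they are still set in this shell):

unset OS_TOKEN OS_URL
openstack token issue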

1.5 Deploy the Image Service (Glance)

Official documentation: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/glance.html

1.5.1 Create the Database and Grant Privileges

Reference: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/glance-install.html

# Log in to the MySQL database
[root@controller ~]# mysql

Create the glance database:

CREATE DATABASE glance;

``glance``數據庫授予恰當的權限:

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';

1.5.2 Create the Glance User and Grant Roles

[Important] Load the environment variables

Note: every openstack management command depends on these environment variables

[root@controller ~]# . admin-openrc

Create the glance user

openstack user create --domain default --password GLANCE_PASS glance

Add the admin role to the glance user and the service project

openstack role add --project service --user glance admin

1.5.3 創建鏡像服務的 API 端點,並注冊

創建``glance``服務實體

openstack service create --name glance --description "OpenStack Image" image

   Execution output

[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | 30357ca18e5046b98dbc0dd4f1e7d69c |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+

Create the Image service API endpoints

Command set

openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292

Execution output

[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 671486d2528448e9a4067ab04a15e015 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 30357ca18e5046b98dbc0dd4f1e7d69c |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 8ff6131b7e1b4234bb4f34daecbbd615 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 30357ca18e5046b98dbc0dd4f1e7d69c |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4a1b3341a0604dbfb710eaa63aab626a |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 30357ca18e5046b98dbc0dd4f1e7d69c |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

1.5.4 Install the Glance Packages

yum install openstack-glance -y

Service description:

glance-api       handles image upload, download, listing, and deletion

glance-registry  stores and updates image metadata (the configuration information an image needs)

1.5.5 Modify the Glance Configuration Files

/etc/glance/glance-api.conf      # accepts Image API calls for image discovery, retrieval, and storage.

/etc/glance/glance-registry.conf # stores, processes, and retrieves metadata about images, such as size and type.

1. Edit the /etc/glance/glance-api.conf file

[database] 部分,配置數據庫訪問

    [database]
    ...
    connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken] [paste_deploy] 部分,配置認證服務訪問

    [keystone_authtoken]
    ...
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = glance
    password = GLANCE_PASS

    [paste_deploy]
    ...
    flavor = keystone

[glance_store] 部分,配置本地文件系統存儲和鏡像文件位置

    [glance_store]
    ...
    stores = file,http
    default_store = file
    filesystem_store_datadir = /var/lib/glance/images/

   Command set

cp /etc/glance/glance-api.conf{,.bak}
grep '^[a-Z\[]' /etc/glance/glance-api.conf.bak >/etc/glance/glance-api.conf
openstack-config --set /etc/glance/glance-api.conf  database  connection  mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-api.conf  glance_store stores  file,http
openstack-config --set /etc/glance/glance-api.conf  glance_store default_store  file
openstack-config --set /etc/glance/glance-api.conf  glance_store filesystem_store_datadir  /var/lib/glance/images/
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken project_name  service
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken username  glance
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken password  GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf  paste_deploy flavor  keystone

2. Edit the ``/etc/glance/glance-registry.conf`` file

In the [database] section, configure database access

    [database]
    ...
    connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken] [paste_deploy] 部分,配置認證服務訪問

    [keystone_authtoken]
    ...
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = glance
    password = GLANCE_PASS
 
    [paste_deploy]
    ...
    flavor = keystone

   Command set

cp /etc/glance/glance-registry.conf{,.bak}
grep '^[a-Z\[]' /etc/glance/glance-registry.conf.bak > /etc/glance/glance-registry.conf
openstack-config --set /etc/glance/glance-registry.conf  database  connection  mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken project_name  service
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken username  glance
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken password  GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf  paste_deploy flavor  keystone

1.5.6 Sync the Database

[root@controller ~]#  su -s /bin/sh -c "glance-manage db_sync" glance

Note: ignore any deprecation messages in the output.

Check whether the database synced successfully

[root@controller ~]# mysql glance -e "show tables" |wc -l 
21

1.5.7 Start the Glance Services

Start the Image services and configure them to start at boot

systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start  openstack-glance-api.service openstack-glance-registry.service

1.5.8 Verify the Glance Service

a. Set the environment variables

. admin-openrc

b. Download the source image

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

c. Upload the image to the Image service using the QCOW2 disk format and bare container format, and make it public so that all projects can access it

openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public

The execution output looks like this

[root@controller ~]# openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6                     |
| container_format | bare                                                 |
| created_at       | 2018-01-23T10:20:19Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/9d92c601-0824-493a-bc6e-cecb10e9a4c6/file |
| id               | 9d92c601-0824-493a-bc6e-cecb10e9a4c6                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros                                               |
| owner            | d0dfbdbc115b4a728c24d28bc1ce1e62                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13287936                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2018-01-23T10:20:20Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+

List the images

[root@controller ~]# openstack image list 
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 9d92c601-0824-493a-bc6e-cecb10e9a4c6 | cirros | active |
+--------------------------------------+--------+--------+

Image location: after upload, the image file is stored under its id.

[root@controller ~]# ll -h  /var/lib/glance/images/ 
total 13M
-rw-r----- 1 glance glance 13M Jan 23 18:20 9d92c601-0824-493a-bc6e-cecb10e9a4c6

At this point the glance service configuration is complete.

1.6 Deploy the Compute Service (Nova)

Official documentation: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/nova.html

1.6.1 Install and Configure the Controller Node

Reference: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/nova-controller-install.html

1) Create the database and grant privileges

Use a database client to connect to the database server as the root user

mysql -u root -p

Create the nova_api and nova databases:

CREATE DATABASE nova_api;
CREATE DATABASE nova;

Grant proper access to the databases

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

2) Create the user in keystone and grant roles

Load the environment variables

[root@controller ~]#  . admin-openrc

Create the user

openstack user create --domain default --password NOVA_PASS nova

Associate the role

openstack role add --project service --user nova admin

3) Create the service entity in keystone and register the API endpoints

Create the service entity

openstack service create --name nova --description "OpenStack Compute" compute

Register the API endpoints

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s

4) Install the packages

yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler   

Package description

nova-api          # provides the API endpoints

nova-scheduler    # scheduling

nova-conductor    # performs database operations on behalf of the compute nodes

nova-consoleauth  # authorizes tokens for the web-based VNC consoles

nova-novncproxy   # provides the web-based (noVNC) VNC proxy

nova-compute      # drives libvirtd to manage the virtual machine lifecycle

5) Modify the configuration file

Edit the ``/etc/nova/nova.conf`` file and complete the following actions:

In the ``[DEFAULT]`` section, enable only the compute and metadata APIs

    [DEFAULT]
    ...
    enabled_apis = osapi_compute,metadata

``[api_database]````[database]``部分,配置數據庫的連接:

    [api_database]
    ...
    connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

    [database]
    ...
    connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[DEFAULT] [oslo_messaging_rabbit]”部分,配置 RabbitMQ 消息隊列訪問

    [DEFAULT]
    ...
    rpc_backend = rabbit

    [oslo_messaging_rabbit]
    ...
    rabbit_host = controller
    rabbit_userid = openstack
    rabbit_password = RABBIT_PASS

[DEFAULT] [keystone_authtoken] 部分,配置認證服務訪問

    [DEFAULT]
    ...
    auth_strategy = keystone

    [keystone_authtoken]
    ...
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = nova
    password = NOVA_PASS

[DEFAULT]部分,配置``my_ip`` 來使用控制節點的管理接口的IP 地址。

    [DEFAULT]
    ...
    my_ip = 10.0.0.11

[DEFAULT] 部分,使能 Networking 服務:

    [DEFAULT]
    ...
    use_neutron = True
    firewall_driver = nova.virt.firewall.NoopFirewallDriver

``[vnc]``部分,配置VNC代理使用控制節點的管理接口IP地址

    [vnc]
    ...
    vncserver_listen = $my_ip
    vncserver_proxyclient_address = $my_ip

[glance] 區域,配置鏡像服務 API 的位置:

    [glance]
    ...
    api_servers = http://controller:9292

[oslo_concurrency] 部分,配置鎖路徑:

    [oslo_concurrency]
    ...
    lock_path = /var/lib/nova/tmp

Command set

cp /etc/nova/nova.conf{,.bak}
grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf  DEFAULT enabled_apis  osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/nova/nova.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/nova/nova.conf  DEFAULT my_ip  10.0.0.11
openstack-config --set /etc/nova/nova.conf  DEFAULT use_neutron  True
openstack-config --set /etc/nova/nova.conf  DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf  api_database connection  mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
openstack-config --set /etc/nova/nova.conf  database  connection  mysql+pymysql://nova:NOVA_DBPASS@controller/nova
openstack-config --set /etc/nova/nova.conf  glance api_servers  http://controller:9292
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_uri  http://controller:5000
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  memcached_servers  controller:11211
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_type  password
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  user_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_name  service
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  username  nova
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  password  NOVA_PASS
openstack-config --set /etc/nova/nova.conf  oslo_concurrency lock_path  /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_host  controller
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_userid  openstack
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_password  RABBIT_PASS
openstack-config --set /etc/nova/nova.conf  vnc vncserver_listen  '$my_ip'
openstack-config --set /etc/nova/nova.conf  vnc vncserver_proxyclient_address  '$my_ip'

6) Sync the database

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova

Note: ignore any deprecation messages in the output

[root@controller ~]# mysql nova_api -e 'show tables' |wc -l 
10
[root@controller ~]# mysql nova -e 'show tables' |wc -l 
110

7) Start the services

Enable the services to start at boot

systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Start the services

systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

# Check the service status

[root@controller ~]# systemctl status openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service |grep 'active (running)' |wc -l

5

1.6.2 Install and Configure the Compute Node

Reference: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/nova-compute-install.html

1) Install the packages

yum -y install openstack-nova-compute

2) Modify the configuration file

Edit the ``/etc/nova/nova.conf`` file and complete the following actions

In the ``[DEFAULT]`` and [oslo_messaging_rabbit] sections, configure the ``RabbitMQ`` message queue connection:

    [DEFAULT]
    ...
    rpc_backend = rabbit

    [oslo_messaging_rabbit]
    ...
    rabbit_host = controller
    rabbit_userid = openstack
    rabbit_password = RABBIT_PASS

[DEFAULT] [keystone_authtoken] 部分,配置認證服務訪問:

    [DEFAULT]
    ...
    auth_strategy = keystone

    [keystone_authtoken]
    ...
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = nova
    password = NOVA_PASS

[DEFAULT] 部分,配置 my_ip 選項:

    [DEFAULT]
    ...
    my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

   Note: replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node, for example 10.0.0.31 for the first node in the example architecture.

[DEFAULT] 部分,使能 Networking 服務:

    [DEFAULT]
    ...
    use_neutron = True
    firewall_driver = nova.virt.firewall.NoopFirewallDriver

 ``[vnc]``部分,啟用並配置遠程控制台訪問

    [vnc]
    ...
    enabled = True
    vncserver_listen = 0.0.0.0
    vncserver_proxyclient_address = $my_ip
    novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance] 區域,配置鏡像服務 API 的位置:

    [glance]
    ...
    api_servers = http://controller:9292

[oslo_concurrency] 部分,配置鎖路徑:

    [oslo_concurrency]
    ...
    lock_path = /var/lib/nova/tmp

Command set

cp /etc/nova/nova.conf{,.bak}
grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf  DEFAULT enabled_apis  osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/nova/nova.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/nova/nova.conf  DEFAULT my_ip  10.0.0.31
openstack-config --set /etc/nova/nova.conf  DEFAULT use_neutron  True
openstack-config --set /etc/nova/nova.conf  DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf  glance api_servers  http://controller:9292
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_uri  http://controller:5000
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  memcached_servers  controller:11211
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_type  password
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  user_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_name  service
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  username  nova
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  password  NOVA_PASS
openstack-config --set /etc/nova/nova.conf  oslo_concurrency lock_path  /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_host  controller
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_userid  openstack
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_password  RABBIT_PASS
openstack-config --set /etc/nova/nova.conf  vnc enabled  True
openstack-config --set /etc/nova/nova.conf  vnc vncserver_listen  0.0.0.0
openstack-config --set /etc/nova/nova.conf  vnc vncserver_proxyclient_address  '$my_ip'
openstack-config --set /etc/nova/nova.conf  vnc novncproxy_base_url  http://controller:6080/vnc_auto.html

3) Start the services

Determine whether your compute node supports hardware acceleration for virtual machines

[root@compute1 ~]#  egrep -c '(vmx|svm)' /proc/cpuinfo
1

Note: if this command returns a value of 1 or greater, your compute node supports hardware acceleration and no additional configuration is needed.
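If it returns 0, the node does not support hardware acceleration; in that case the official guide configures libvirt to use QEMU instead of KVM, which in the openstack-config style used throughout this document would be:

openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu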

Start the services and enable them at boot

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
# Check the status
systemctl status libvirtd.service openstack-nova-compute.service

Check the compute node status from the controller node

[root@controller ~]# source admin-openrc 
[root@controller ~]# openstack compute service list 
+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-scheduler   | controller | internal | enabled | up    | 2018-01-23T12:02:04.000000 |
|  2 | nova-conductor   | controller | internal | enabled | up    | 2018-01-23T12:02:03.000000 |
|  3 | nova-consoleauth | controller | internal | enabled | up    | 2018-01-23T12:02:05.000000 |
|  6 | nova-compute     | compute1   | nova     | enabled | up    | 2018-01-23T12:02:05.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+

1.6.3 Verify the Services

Before moving on to the next step, verify that the services deployed so far are working.

Note: load the environment variable script before running these commands

# Check the Identity service
openstack user list 
# Check the Image service
openstack image list 
# Check the Compute service
openstack compute service list

1.7 The Networking Service (Neutron)

Official documentation: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/neutron.html

1.7.1 Install and Configure the Controller Node

All of the following commands are executed on the controller host

   Reference: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/neutron-controller-install.html

1) Create the database and grant privileges

Connect to the database server

mysql

Create the ``neutron`` database

CREATE DATABASE neutron;

``neutron`` 數據庫授予合適的訪問權限

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';

2) Create the user in keystone and grant roles

Create the ``neutron`` user

openstack user create --domain default --password NEUTRON_PASS neutron

Add the ``admin`` role to the ``neutron`` user

openstack role add --project service --user neutron admin

3) Create the service entity in keystone and register the API endpoints

Create the ``neutron`` service entity

openstack service create --name neutron --description "OpenStack Networking" network

Create the Networking service API endpoints

openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696    

4) Install the packages

   Here I chose "Networking Option 1: provider (public) networks", which is the simpler networking mode.

   Official documentation: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/neutron-controller-install-option1.html

Install the packages

yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

5) Modify the configuration files

① Configure the server component: edit the ``/etc/neutron/neutron.conf`` file and complete the following actions

In the [database] section, configure database access

    [database]
    ...
    connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

``[DEFAULT]``部分,啟用ML2插件並禁用其他插件

    [DEFAULT]
    ...
    core_plugin = ml2
    service_plugins =

[DEFAULT] [oslo_messaging_rabbit]”部分,配置 RabbitMQ 消息隊列的連接

    [DEFAULT]
    ...
    rpc_backend = rabbit

    [oslo_messaging_rabbit]
    ...
    rabbit_host = controller
    rabbit_userid = openstack
    rabbit_password = RABBIT_PASS

[DEFAULT] [keystone_authtoken] 部分,配置認證服務訪問

    [DEFAULT]
    ...
    auth_strategy = keystone

    [keystone_authtoken]
    ...
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = neutron
    password = NEUTRON_PASS

``[DEFAULT]````[nova]``部分,配置網絡服務來通知計算節點的網絡拓撲變化

    [DEFAULT]
    ...
    notify_nova_on_port_status_changes = True
    notify_nova_on_port_data_changes = True

    [nova]
    ...
    auth_url = http://controller:35357
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    region_name = RegionOne
    project_name = service
    username = nova
    password = NOVA_PASS

[oslo_concurrency] 部分,配置鎖路徑

    [oslo_concurrency]
    ...
    lock_path = /var/lib/neutron/tmp

Command set

cp /etc/neutron/neutron.conf{,.bak}
grep '^[a-Z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf  DEFAULT core_plugin  ml2
openstack-config --set /etc/neutron/neutron.conf  DEFAULT service_plugins
openstack-config --set /etc/neutron/neutron.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/neutron/neutron.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/neutron/neutron.conf  DEFAULT notify_nova_on_port_status_changes  True
openstack-config --set /etc/neutron/neutron.conf  DEFAULT notify_nova_on_port_data_changes  True
openstack-config --set /etc/neutron/neutron.conf  database connection  mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_name  service
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken username  neutron
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken password  NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf  nova auth_url  http://controller:35357
openstack-config --set /etc/neutron/neutron.conf  nova auth_type  password 
openstack-config --set /etc/neutron/neutron.conf  nova project_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  nova user_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  nova region_name  RegionOne
openstack-config --set /etc/neutron/neutron.conf  nova project_name  service
openstack-config --set /etc/neutron/neutron.conf  nova username  nova
openstack-config --set /etc/neutron/neutron.conf  nova password  NOVA_PASS
openstack-config --set /etc/neutron/neutron.conf  oslo_concurrency lock_path  /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_host  controller
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_userid  openstack
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_password  RABBIT_PASS

   ② Configure the Modular Layer 2 (ML2) plug-in

Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the following actions

In the ``[ml2]`` section, enable flat and VLAN networks

    [ml2]
    ...
    type_drivers = flat,vlan

``[ml2]``部分,禁用私有網絡

    [ml2]
    ...
    tenant_network_types =

``[ml2]``部分,啟用Linuxbridge機制

    [ml2]
    ...
    mechanism_drivers = linuxbridge

``[ml2]`` 部分,啟用端口安全擴展驅動

    [ml2]
    ...
    extension_drivers = port_security    

``[ml2_type_flat]``部分,配置公共虛擬網絡為flat網絡

    [ml2_type_flat]
    ...
    flat_networks = provider

``[securitygroup]``部分,啟用 ipset 增加安全組規則的高效性

    [securitygroup]
    ...
    enable_ipset = True

   Command set

cp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/ml2_conf.ini.bak >/etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2 type_drivers  flat,vlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2 tenant_network_types 
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2 mechanism_drivers  linuxbridge
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2 extension_drivers  port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2_type_flat flat_networks  provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  securitygroup enable_ipset  True

   ③ Configure the Linux bridge agent

Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and complete the following actions

In the ``[linux_bridge]`` section, map the provider (public) virtual network to the physical network interface

    [linux_bridge]
    physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

   Note: replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying physical network interface, for example eth0

``[vxlan]``部分,禁止VXLAN覆蓋網絡 

    [vxlan]
    enable_vxlan = False

``[securitygroup]``部分,啟用安全組並配置

    [securitygroup]
    ...
    enable_security_group = True
    firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

   Command set

cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  linux_bridge physical_interface_mappings  provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup enable_security_group  True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  vxlan enable_vxlan  False

   ④ Configure the DHCP agent

Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following actions

In the ``[DEFAULT]`` section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can reach the metadata service over the network

    [DEFAULT]
    ...
    interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
    dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    enable_isolated_metadata = True

Command set

openstack-config --set /etc/neutron/dhcp_agent.ini  DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini  DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini  DEFAULT enable_isolated_metadata true

⑤ Configure the metadata agent

Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following actions

In the ``[DEFAULT]`` section, configure the metadata host and the shared secret

    [DEFAULT]
    ...
    nova_metadata_ip = controller
    metadata_proxy_shared_secret = METADATA_SECRET

Command set

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip  controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret  METADATA_SECRET

   ⑥ Configure the Compute (nova) service to use Networking

Edit the ``/etc/nova/nova.conf`` file again and complete the following actions

In the ``[neutron]`` section, configure the access parameters, enable the metadata proxy, and configure the secret

    [neutron]
    ...
    url = http://controller:9696
    auth_url = http://controller:35357
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    region_name = RegionOne
    project_name = service
    username = neutron
    password = NEUTRON_PASS

    service_metadata_proxy = True
    metadata_proxy_shared_secret = METADATA_SECRET

   Command set

openstack-config --set /etc/nova/nova.conf  neutron url  http://controller:9696
openstack-config --set /etc/nova/nova.conf  neutron auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  neutron auth_type  password
openstack-config --set /etc/nova/nova.conf  neutron project_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron user_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron region_name  RegionOne
openstack-config --set /etc/nova/nova.conf  neutron project_name  service
openstack-config --set /etc/nova/nova.conf  neutron username  neutron
openstack-config --set /etc/nova/nova.conf  neutron password  NEUTRON_PASS
openstack-config --set /etc/nova/nova.conf  neutron service_metadata_proxy  True
openstack-config --set /etc/nova/nova.conf  neutron metadata_proxy_shared_secret  METADATA_SECRET

6) Sync the database

The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini.

If the symbolic link does not exist, create it with the following command

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Sync the database

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

7) Start the services

Restart the Compute API service

systemctl restart openstack-nova-api.service

Start the Networking services and configure them to start when the system boots.

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl status neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
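Optionally, once the agents are up you can check that they have registered with the neutron server (assuming the admin credentials are loaded; on Mitaka the neutron CLI provides this):

. admin-openrc
neutron agent-list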

1.7.2 Install and Configure the Compute Node

Official documentation: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/neutron-compute-install.html

1) Install the components

yum -y install openstack-neutron-linuxbridge ebtables ipset

2) Modify the configuration files

On the compute node, choose the same networking option as on the controller node: Networking Option 1, provider (public) networks

① Configure the common component: edit the ``/etc/neutron/neutron.conf`` file and complete the following actions

In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure the RabbitMQ message queue connection

    [DEFAULT]
    ...
    rpc_backend = rabbit

    [oslo_messaging_rabbit]
    ...
    rabbit_host = controller
    rabbit_userid = openstack
    rabbit_password = RABBIT_PASS    

[DEFAULT] [keystone_authtoken] 部分,配置認證服務訪問

    [DEFAULT]
    ...
    auth_strategy = keystone

    [keystone_authtoken]
    ...
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = neutron
    password = NEUTRON_PASS    

   [oslo_concurrency] 部分,配置鎖路徑

    [oslo_concurrency]
    ...
    lock_path = /var/lib/neutron/tmp

   命令集

cp /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/neutron/neutron.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_name  service
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken username  neutron
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken password  NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf  oslo_concurrency lock_path  /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_host  controller
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_userid  openstack
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_password  RABBIT_PASS

配置Linuxbridge代理

編輯``/etc/neutron/plugins/ml2/linuxbridge_agent.ini``文件並且完成以下操作

``[linux_bridge]``部分,將公共虛擬網絡和公共物理網絡接口對應起來

    [linux_bridge]
    physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME        

   注意:將``PROVIDER_INTERFACE_NAME`` 替換為底層的物理公共網絡接口,例如eth0

在``[vxlan]``部分,禁止VXLAN覆蓋網絡

    [vxlan]        
    enable_vxlan = False    

   ``[securitygroup]``部分,啟用安全組並配置   

    [securitygroup]        
    ...        
    enable_security_group = True        
    firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

命令集

cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  linux_bridge physical_interface_mappings  provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup enable_security_group  True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  vxlan enable_vxlan  False

   ③ 為計算節點配置網絡服務

編輯``/etc/nova/nova.conf``文件並完成下面的操作

``[neutron]`` 部分,配置訪問參數

    [neutron]
    ...
    url = http://controller:9696
    auth_url = http://controller:35357
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    region_name = RegionOne
    project_name = service
    username = neutron
    password = NEUTRON_PASS

命令集

openstack-config --set /etc/nova/nova.conf  neutron url  http://controller:9696
openstack-config --set /etc/nova/nova.conf  neutron auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  neutron auth_type  password
openstack-config --set /etc/nova/nova.conf  neutron project_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron user_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron region_name  RegionOne
openstack-config --set /etc/nova/nova.conf  neutron project_name  service
openstack-config --set /etc/nova/nova.conf  neutron username  neutron
openstack-config --set /etc/nova/nova.conf  neutron password  NEUTRON_PASS

3)啟動服務

重啟計算服務

systemctl restart openstack-nova-compute.service

啟動Linuxbridge代理並配置它開機自啟動

systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
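啟動后可以順便確認一下代理的運行狀態(可選的檢查示例):

systemctl status neutron-linuxbridge-agent.service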

1.7.3 驗證操作

官方驗證方法

 https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/neutron-verify.html

 https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/neutron-verify-option1.html

 # 在這里,我只驗證網絡代理的狀態,代理狀態正常即說明網絡服務正常

[root@controller ~]# neutron agent-list 
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 3ab2f17f-737e-4c3f-86f0-2289c56a541b | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |
| 4f64caf6-a9b0-4742-b0d1-0d961063200a | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-agent |
| 630540de-d0a0-473b-96b5-757afc1057de | Linux bridge agent | compute1   |                   | :-)   | True           | neutron-linuxbridge-agent |
| 9989ddcb-6aba-4b7f-9bd7-7d61f774f2bb | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+

1.8 Dashboardhorizon-web界面)安裝

官方文檔:https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/horizon.html

1.8.1 安裝並配置組件(單獨主機安裝)

參考文獻:https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/horizon-install.html#install-and-configure-components

安裝軟件包

[root@compute1 ~]# yum -y install openstack-dashboard

   由於Dashboard服務需要用到httpd服務,如果安裝在控制節點,可能會影響到Keystone服務的正常運行,所以這里選擇單獨安裝,與官方文檔略有不同。

1.8.2 修改配置文件

編輯文件 /etc/openstack-dashboard/local_settings 並完成如下動作

controller 節點上配置儀表盤以使用 OpenStack 服務

    OPENSTACK_HOST = "controller"
    # 指向認證服務的地址=keystone

允許所有主機訪問儀表板

ALLOWED_HOSTS = ['*', ]

配置 memcached 會話存儲服務

    SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

    CACHES = {
        'default': {
             'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
             'LOCATION': 'controller:11211',
        }
    }

啟用第3版認證API:

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

啟用對域的支持

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

配置API版本

    OPENSTACK_API_VERSIONS = {
        "identity": 3,
        "image": 2,
        "volume": 2,
    }

通過儀表盤創建用戶時的默認域配置為 default :

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"

通過儀表盤創建的用戶默認角色配置為 user

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

如果您選擇網絡選項1,需要禁用支持3層網絡服務

    OPENSTACK_NEUTRON_NETWORK = {
        ...
        'enable_router': False,
        'enable_quotas': False,
        'enable_distributed_router': False,
        'enable_ha_router': False,
        'enable_lb': False,
        'enable_firewall': False,
        'enable_vpn': False,
        'enable_fip_topology_check': False,
    }

可以選擇性地配置時區

TIME_ZONE = "Asia/Shanghai"
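如果想用命令快速完成上面幾個單行配置項的修改,可以參考下面的 sed 示例(僅是一種思路,假設默認的 local_settings 與 RDO 官方包一致;CACHES、OPENSTACK_API_VERSIONS、OPENSTACK_NEUTRON_NETWORK 以及默認被注釋掉的多行配置仍建議手工修改,或直接使用下文給出的最終配置文件):

cp /etc/openstack-dashboard/local_settings{,.bak}
sed -i 's#^OPENSTACK_HOST = .*#OPENSTACK_HOST = "controller"#' /etc/openstack-dashboard/local_settings
sed -i "s#^ALLOWED_HOSTS = .*#ALLOWED_HOSTS = ['*', ]#" /etc/openstack-dashboard/local_settings
sed -i 's#^OPENSTACK_KEYSTONE_URL = .*#OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST#' /etc/openstack-dashboard/local_settings
sed -i 's#^OPENSTACK_KEYSTONE_DEFAULT_ROLE = .*#OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"#' /etc/openstack-dashboard/local_settings
sed -i 's#^TIME_ZONE = .*#TIME_ZONE = "Asia/Shanghai"#' /etc/openstack-dashboard/local_settings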

   最終配置文件

wget https://files.cnblogs.com/files/clsn/local_settings.zip

  文件詳情:

  1 # -*- coding: utf-8 -*-
  2 
  3 import os
  4 
  5 from django.utils.translation import ugettext_lazy as _
  6 
  7 
  8 from openstack_dashboard import exceptions
  9 from openstack_dashboard.settings import HORIZON_CONFIG
 10 
 11 DEBUG = False
 12 TEMPLATE_DEBUG = DEBUG
 13 
 14 
 15 # WEBROOT is the location relative to Webserver root
 16 # should end with a slash.
 17 WEBROOT = '/dashboard/'
 18 #LOGIN_URL = WEBROOT + 'auth/login/'
 19 #LOGOUT_URL = WEBROOT + 'auth/logout/'
 20 #
 21 # LOGIN_REDIRECT_URL can be used as an alternative for
 22 # HORIZON_CONFIG.user_home, if user_home is not set.
 23 # Do not set it to '/home/', as this will cause circular redirect loop
 24 #LOGIN_REDIRECT_URL = WEBROOT
 25 
 26 # If horizon is running in production (DEBUG is False), set this
 27 # with the list of host/domain names that the application can serve.
 28 # For more information see:
 29 # https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
 30 ALLOWED_HOSTS = ['*', ]
 31 
 32 # Set SSL proxy settings:
 33 # Pass this header from the proxy after terminating the SSL,
 34 # and don't forget to strip it from the client's request.
 35 # For more information see:
 36 # https://docs.djangoproject.com/en/1.8/ref/settings/#secure-proxy-ssl-header
 37 #SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
 38 
 39 # If Horizon is being served through SSL, then uncomment the following two
 40 # settings to better secure the cookies from security exploits
 41 #CSRF_COOKIE_SECURE = True
 42 #SESSION_COOKIE_SECURE = True
 43 
 44 # The absolute path to the directory where message files are collected.
 45 # The message file must have a .json file extension. When the user logins to
 46 # horizon, the message files collected are processed and displayed to the user.
 47 #MESSAGES_PATH=None
 48 
 49 # Overrides for OpenStack API versions. Use this setting to force the
 50 # OpenStack dashboard to use a specific API version for a given service API.
 51 # Versions specified here should be integers or floats, not strings.
 52 # NOTE: The version should be formatted as it appears in the URL for the
 53 # service API. For example, The identity service APIs have inconsistent
 54 # use of the decimal point, so valid options would be 2.0 or 3.
 55 OPENSTACK_API_VERSIONS = {
 56 #    "data-processing": 1.1,
 57     "identity": 3,
 58     "image": 2,
 59     "volume": 2,
 60     "compute": 2,
 61 }
 62 
 63 # Set this to True if running on multi-domain model. When this is enabled, it
 64 # will require user to enter the Domain name in addition to username for login.
 65 OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
 66 
 67 # Overrides the default domain used when running on single-domain model
 68 # with Keystone V3. All entities will be created in the default domain.
 69 # NOTE: This value must be the ID of the default domain, NOT the name.
 70 # Also, you will most likely have a value in the keystone policy file like this
 71 #    "cloud_admin": "rule:admin_required and domain_id:<your domain id>"
 72 # This value must match the domain id specified there.
 73 OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'
 74 
 75 # Set this to True to enable panels that provide the ability for users to
 76 # manage Identity Providers (IdPs) and establish a set of rules to map
 77 # federation protocol attributes to Identity API attributes.
 78 # This extension requires v3.0+ of the Identity API.
 79 #OPENSTACK_KEYSTONE_FEDERATION_MANAGEMENT = False
 80 
 81 # Set Console type:
 82 # valid options are "AUTO"(default), "VNC", "SPICE", "RDP", "SERIAL" or None
 83 # Set to None explicitly if you want to deactivate the console.
 84 #CONSOLE_TYPE = "AUTO"
 85 
 86 # If provided, a "Report Bug" link will be displayed in the site header
 87 # which links to the value of this setting (ideally a URL containing
 88 # information on how to report issues).
 89 #HORIZON_CONFIG["bug_url"] = "http://bug-report.example.com"
 90 
 91 # Show backdrop element outside the modal, do not close the modal
 92 # after clicking on backdrop.
 93 #HORIZON_CONFIG["modal_backdrop"] = "static"
 94 
 95 # Specify a regular expression to validate user passwords.
 96 #HORIZON_CONFIG["password_validator"] = {
 97 #    "regex": '.*',
 98 #    "help_text": _("Your password does not meet the requirements."),
 99 #}
100 
101 # Disable simplified floating IP address management for deployments with
102 # multiple floating IP pools or complex network requirements.
103 #HORIZON_CONFIG["simple_ip_management"] = False
104 
105 # Turn off browser autocompletion for forms including the login form and
106 # the database creation workflow if so desired.
107 #HORIZON_CONFIG["password_autocomplete"] = "off"
108 
109 # Setting this to True will disable the reveal button for password fields,
110 # including on the login form.
111 #HORIZON_CONFIG["disable_password_reveal"] = False
112 
113 LOCAL_PATH = '/tmp'
114 
115 # Set custom secret key:
116 # You can either set it to a specific value or you can let horizon generate a
117 # default secret key that is unique on this machine, e.i. regardless of the
118 # amount of Python WSGI workers (if used behind Apache+mod_wsgi): However,
119 # there may be situations where you would want to set this explicitly, e.g.
120 # when multiple dashboard instances are distributed on different machines
121 # (usually behind a load-balancer). Either you have to make sure that a session
122 # gets all requests routed to the same dashboard instance or you set the same
123 # SECRET_KEY for all of them.
124 SECRET_KEY='65941f1393ea1c265ad7'
125 
126 # We recommend you use memcached for development; otherwise after every reload
127 # of the django development server, you will have to login again. To use
128 # memcached set CACHES to something like
129 SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
130 CACHES = {
131     'default': {
132         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
133         'LOCATION': 'controller:11211',
134     },
135 }
136 
137 #CACHES = {
138 #    'default': {
139 #        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
140 #    },
141 #}
142 
143 # Send email to the console by default
144 EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
145 # Or send them to /dev/null
146 #EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'
147 
148 # Configure these for your outgoing email host
149 #EMAIL_HOST = 'smtp.my-company.com'
150 #EMAIL_PORT = 25
151 #EMAIL_HOST_USER = 'djangomail'
152 #EMAIL_HOST_PASSWORD = 'top-secret!'
153 
154 # For multiple regions uncomment this configuration, and add (endpoint, title).
155 #AVAILABLE_REGIONS = [
156 #    ('http://cluster1.example.com:5000/v2.0', 'cluster1'),
157 #    ('http://cluster2.example.com:5000/v2.0', 'cluster2'),
158 #]
159 
160 OPENSTACK_HOST = "controller"
161 OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
162 OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
163 
164 # Enables keystone web single-sign-on if set to True.
165 #WEBSSO_ENABLED = False
166 
167 # Determines which authentication choice to show as default.
168 #WEBSSO_INITIAL_CHOICE = "credentials"
169 
170 # The list of authentication mechanisms which include keystone
171 # federation protocols and identity provider/federation protocol
172 # mapping keys (WEBSSO_IDP_MAPPING). Current supported protocol
173 # IDs are 'saml2' and 'oidc'  which represent SAML 2.0, OpenID
174 # Connect respectively.
175 # Do not remove the mandatory credentials mechanism.
176 # Note: The last two tuples are sample mapping keys to a identity provider
177 # and federation protocol combination (WEBSSO_IDP_MAPPING).
178 #WEBSSO_CHOICES = (
179 #    ("credentials", _("Keystone Credentials")),
180 #    ("oidc", _("OpenID Connect")),
181 #    ("saml2", _("Security Assertion Markup Language")),
182 #    ("acme_oidc", "ACME - OpenID Connect"),
183 #    ("acme_saml2", "ACME - SAML2"),
184 #)
185 
186 # A dictionary of specific identity provider and federation protocol
187 # combinations. From the selected authentication mechanism, the value
188 # will be looked up as keys in the dictionary. If a match is found,
189 # it will redirect the user to a identity provider and federation protocol
190 # specific WebSSO endpoint in keystone, otherwise it will use the value
191 # as the protocol_id when redirecting to the WebSSO by protocol endpoint.
192 # NOTE: The value is expected to be a tuple formatted as: (<idp_id>, <protocol_id>).
193 #WEBSSO_IDP_MAPPING = {
194 #    "acme_oidc": ("acme", "oidc"),
195 #    "acme_saml2": ("acme", "saml2"),
196 #}
197 
198 # Disable SSL certificate checks (useful for self-signed certificates):
199 #OPENSTACK_SSL_NO_VERIFY = True
200 
201 # The CA certificate to use to verify SSL connections
202 #OPENSTACK_SSL_CACERT = '/path/to/cacert.pem'
203 
204 # The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the
205 # capabilities of the auth backend for Keystone.
206 # If Keystone has been configured to use LDAP as the auth backend then set
207 # can_edit_user to False and name to 'ldap'.
208 #
209 # TODO(tres): Remove these once Keystone has an API to identify auth backend.
210 OPENSTACK_KEYSTONE_BACKEND = {
211     'name': 'native',
212     'can_edit_user': True,
213     'can_edit_group': True,
214     'can_edit_project': True,
215     'can_edit_domain': True,
216     'can_edit_role': True,
217 }
218 
219 # Setting this to True, will add a new "Retrieve Password" action on instance,
220 # allowing Admin session password retrieval/decryption.
221 #OPENSTACK_ENABLE_PASSWORD_RETRIEVE = False
222 
223 # The Launch Instance user experience has been significantly enhanced.
224 # You can choose whether to enable the new launch instance experience,
225 # the legacy experience, or both. The legacy experience will be removed
226 # in a future release, but is available as a temporary backup setting to ensure
227 # compatibility with existing deployments. Further development will not be
228 # done on the legacy experience. Please report any problems with the new
229 # experience via the Launchpad tracking system.
230 #
231 # Toggle LAUNCH_INSTANCE_LEGACY_ENABLED and LAUNCH_INSTANCE_NG_ENABLED to
232 # determine the experience to enable.  Set them both to true to enable
233 # both.
234 #LAUNCH_INSTANCE_LEGACY_ENABLED = True
235 #LAUNCH_INSTANCE_NG_ENABLED = False
236 
237 # A dictionary of settings which can be used to provide the default values for
238 # properties found in the Launch Instance modal.
239 #LAUNCH_INSTANCE_DEFAULTS = {
240 #    'config_drive': False,
241 #}
242 
243 # The Xen Hypervisor has the ability to set the mount point for volumes
244 # attached to instances (other Hypervisors currently do not). Setting
245 # can_set_mount_point to True will add the option to set the mount point
246 # from the UI.
247 OPENSTACK_HYPERVISOR_FEATURES = {
248     'can_set_mount_point': False,
249     'can_set_password': False,
250     'requires_keypair': False,
251 }
252 
253 # The OPENSTACK_CINDER_FEATURES settings can be used to enable optional
254 # services provided by cinder that is not exposed by its extension API.
255 OPENSTACK_CINDER_FEATURES = {
256     'enable_backup': False,
257 }
258 
259 # The OPENSTACK_NEUTRON_NETWORK settings can be used to enable optional
260 # services provided by neutron. Options currently available are load
261 # balancer service, security groups, quotas, VPN service.
262 OPENSTACK_NEUTRON_NETWORK = {
263     'enable_router': False,
264     'enable_quotas': False,
265     'enable_ipv6': False,
266     'enable_distributed_router': False,
267     'enable_ha_router': False,
268     'enable_lb': False,
269     'enable_firewall': False,
270     'enable_vpn': False,
271     'enable_fip_topology_check': False,
272 
273     # Neutron can be configured with a default Subnet Pool to be used for IPv4
274     # subnet-allocation. Specify the label you wish to display in the Address
275     # pool selector on the create subnet step if you want to use this feature.
276     'default_ipv4_subnet_pool_label': None,
277 
278     # Neutron can be configured with a default Subnet Pool to be used for IPv6
279     # subnet-allocation. Specify the label you wish to display in the Address
280     # pool selector on the create subnet step if you want to use this feature.
281     # You must set this to enable IPv6 Prefix Delegation in a PD-capable
282     # environment.
283     'default_ipv6_subnet_pool_label': None,
284 
285     # The profile_support option is used to detect if an external router can be
286     # configured via the dashboard. When using specific plugins the
287     # profile_support can be turned on if needed.
288     'profile_support': None,
289     #'profile_support': 'cisco',
290 
291     # Set which provider network types are supported. Only the network types
292     # in this list will be available to choose from when creating a network.
293     # Network types include local, flat, vlan, gre, and vxlan.
294     'supported_provider_types': ['*'],
295 
296     # Set which VNIC types are supported for port binding. Only the VNIC
297     # types in this list will be available to choose from when creating a
298     # port.
299     # VNIC types include 'normal', 'macvtap' and 'direct'.
300     # Set to empty list or None to disable VNIC type selection.
301     'supported_vnic_types': ['*'],
302 }
303 
304 # The OPENSTACK_HEAT_STACK settings can be used to disable password
305 # field required while launching the stack.
306 OPENSTACK_HEAT_STACK = {
307     'enable_user_pass': True,
308 }
309 
310 # The OPENSTACK_IMAGE_BACKEND settings can be used to customize features
311 # in the OpenStack Dashboard related to the Image service, such as the list
312 # of supported image formats.
313 #OPENSTACK_IMAGE_BACKEND = {
314 #    'image_formats': [
315 #        ('', _('Select format')),
316 #        ('aki', _('AKI - Amazon Kernel Image')),
317 #        ('ami', _('AMI - Amazon Machine Image')),
318 #        ('ari', _('ARI - Amazon Ramdisk Image')),
319 #        ('docker', _('Docker')),
320 #        ('iso', _('ISO - Optical Disk Image')),
321 #        ('ova', _('OVA - Open Virtual Appliance')),
322 #        ('qcow2', _('QCOW2 - QEMU Emulator')),
323 #        ('raw', _('Raw')),
324 #        ('vdi', _('VDI - Virtual Disk Image')),
325 #        ('vhd', _('VHD - Virtual Hard Disk')),
326 #        ('vmdk', _('VMDK - Virtual Machine Disk')),
327 #    ],
328 #}
329 
330 # The IMAGE_CUSTOM_PROPERTY_TITLES settings is used to customize the titles for
331 # image custom property attributes that appear on image detail pages.
332 IMAGE_CUSTOM_PROPERTY_TITLES = {
333     "architecture": _("Architecture"),
334     "kernel_id": _("Kernel ID"),
335     "ramdisk_id": _("Ramdisk ID"),
336     "image_state": _("Euca2ools state"),
337     "project_id": _("Project ID"),
338     "image_type": _("Image Type"),
339 }
340 
341 # The IMAGE_RESERVED_CUSTOM_PROPERTIES setting is used to specify which image
342 # custom properties should not be displayed in the Image Custom Properties
343 # table.
344 IMAGE_RESERVED_CUSTOM_PROPERTIES = []
345 
346 # OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
347 # in the Keystone service catalog. Use this setting when Horizon is running
348 # external to the OpenStack environment. The default is 'publicURL'.
349 #OPENSTACK_ENDPOINT_TYPE = "publicURL"
350 
351 # SECONDARY_ENDPOINT_TYPE specifies the fallback endpoint type to use in the
352 # case that OPENSTACK_ENDPOINT_TYPE is not present in the endpoints
353 # in the Keystone service catalog. Use this setting when Horizon is running
354 # external to the OpenStack environment. The default is None.  This
355 # value should differ from OPENSTACK_ENDPOINT_TYPE if used.
356 #SECONDARY_ENDPOINT_TYPE = "publicURL"
357 
358 # The number of objects (Swift containers/objects or images) to display
359 # on a single page before providing a paging element (a "more" link)
360 # to paginate results.
361 API_RESULT_LIMIT = 1000
362 API_RESULT_PAGE_SIZE = 20
363 
364 # The size of chunk in bytes for downloading objects from Swift
365 SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024
366 
367 # Specify a maximum number of items to display in a dropdown.
368 DROPDOWN_MAX_ITEMS = 30
369 
370 # The timezone of the server. This should correspond with the timezone
371 # of your entire OpenStack installation, and hopefully be in UTC.
372 TIME_ZONE = "Asia/Shanghai"
373 
374 # When launching an instance, the menu of available flavors is
375 # sorted by RAM usage, ascending. If you would like a different sort order,
376 # you can provide another flavor attribute as sorting key. Alternatively, you
377 # can provide a custom callback method to use for sorting. You can also provide
378 # a flag for reverse sort. For more info, see
379 # http://docs.python.org/2/library/functions.html#sorted
380 #CREATE_INSTANCE_FLAVOR_SORT = {
381 #    'key': 'name',
382 #     # or
383 #    'key': my_awesome_callback_method,
384 #    'reverse': False,
385 #}
386 
387 # Set this to True to display an 'Admin Password' field on the Change Password
388 # form to verify that it is indeed the admin logged-in who wants to change
389 # the password.
390 #ENFORCE_PASSWORD_CHECK = False
391 
392 # Modules that provide /auth routes that can be used to handle different types
393 # of user authentication. Add auth plugins that require extra route handling to
394 # this list.
395 #AUTHENTICATION_URLS = [
396 #    'openstack_auth.urls',
397 #]
398 
399 # The Horizon Policy Enforcement engine uses these values to load per service
400 # policy rule files. The content of these files should match the files the
401 # OpenStack services are using to determine role based access control in the
402 # target installation.
403 
404 # Path to directory containing policy.json files
405 POLICY_FILES_PATH = '/etc/openstack-dashboard'
406 
407 # Map of local copy of service policy files.
408 # Please insure that your identity policy file matches the one being used on
409 # your keystone servers. There is an alternate policy file that may be used
410 # in the Keystone v3 multi-domain case, policy.v3cloudsample.json.
411 # This file is not included in the Horizon repository by default but can be
412 # found at
413 # http://git.openstack.org/cgit/openstack/keystone/tree/etc/ \
414 # policy.v3cloudsample.json
415 # Having matching policy files on the Horizon and Keystone servers is essential
416 # for normal operation. This holds true for all services and their policy files.
417 #POLICY_FILES = {
418 #    'identity': 'keystone_policy.json',
419 #    'compute': 'nova_policy.json',
420 #    'volume': 'cinder_policy.json',
421 #    'image': 'glance_policy.json',
422 #    'orchestration': 'heat_policy.json',
423 #    'network': 'neutron_policy.json',
424 #    'telemetry': 'ceilometer_policy.json',
425 #}
426 
427 # TODO: (david-lyle) remove when plugins support adding settings.
428 # Note: Only used when trove-dashboard plugin is configured to be used by
429 # Horizon.
430 # Trove user and database extension support. By default support for
431 # creating users and databases on database instances is turned on.
432 # To disable these extensions set the permission here to something
433 # unusable such as ["!"].
434 #TROVE_ADD_USER_PERMS = []
435 #TROVE_ADD_DATABASE_PERMS = []
436 
437 # Change this patch to the appropriate list of tuples containing
438 # a key, label and static directory containing two files:
439 # _variables.scss and _styles.scss
440 #AVAILABLE_THEMES = [
441 #    ('default', 'Default', 'themes/default'),
442 #    ('material', 'Material', 'themes/material'),
443 #]
444 
445 LOGGING = {
446     'version': 1,
447     # When set to True this will disable all logging except
448     # for loggers specified in this configuration dictionary. Note that
449     # if nothing is specified here and disable_existing_loggers is True,
450     # django.db.backends will still log unless it is disabled explicitly.
451     'disable_existing_loggers': False,
452     'handlers': {
453         'null': {
454             'level': 'DEBUG',
455             'class': 'logging.NullHandler',
456         },
457         'console': {
458             # Set the level to "DEBUG" for verbose output logging.
459             'level': 'INFO',
460             'class': 'logging.StreamHandler',
461         },
462     },
463     'loggers': {
464         # Logging from django.db.backends is VERY verbose, send to null
465         # by default.
466         'django.db.backends': {
467             'handlers': ['null'],
468             'propagate': False,
469         },
470         'requests': {
471             'handlers': ['null'],
472             'propagate': False,
473         },
474         'horizon': {
475             'handlers': ['console'],
476             'level': 'DEBUG',
477             'propagate': False,
478         },
479         'openstack_dashboard': {
480             'handlers': ['console'],
481             'level': 'DEBUG',
482             'propagate': False,
483         },
484         'novaclient': {
485             'handlers': ['console'],
486             'level': 'DEBUG',
487             'propagate': False,
488         },
489         'cinderclient': {
490             'handlers': ['console'],
491             'level': 'DEBUG',
492             'propagate': False,
493         },
494         'keystoneclient': {
495             'handlers': ['console'],
496             'level': 'DEBUG',
497             'propagate': False,
498         },
499         'glanceclient': {
500             'handlers': ['console'],
501             'level': 'DEBUG',
502             'propagate': False,
503         },
504         'neutronclient': {
505             'handlers': ['console'],
506             'level': 'DEBUG',
507             'propagate': False,
508         },
509         'heatclient': {
510             'handlers': ['console'],
511             'level': 'DEBUG',
512             'propagate': False,
513         },
514         'ceilometerclient': {
515             'handlers': ['console'],
516             'level': 'DEBUG',
517             'propagate': False,
518         },
519         'swiftclient': {
520             'handlers': ['console'],
521             'level': 'DEBUG',
522             'propagate': False,
523         },
524         'openstack_auth': {
525             'handlers': ['console'],
526             'level': 'DEBUG',
527             'propagate': False,
528         },
529         'nose.plugins.manager': {
530             'handlers': ['console'],
531             'level': 'DEBUG',
532             'propagate': False,
533         },
534         'django': {
535             'handlers': ['console'],
536             'level': 'DEBUG',
537             'propagate': False,
538         },
539         'iso8601': {
540             'handlers': ['null'],
541             'propagate': False,
542         },
543         'scss': {
544             'handlers': ['null'],
545             'propagate': False,
546         },
547     },
548 }
549 
550 # 'direction' should not be specified for all_tcp/udp/icmp.
551 # It is specified in the form.
552 SECURITY_GROUP_RULES = {
553     'all_tcp': {
554         'name': _('All TCP'),
555         'ip_protocol': 'tcp',
556         'from_port': '1',
557         'to_port': '65535',
558     },
559     'all_udp': {
560         'name': _('All UDP'),
561         'ip_protocol': 'udp',
562         'from_port': '1',
563         'to_port': '65535',
564     },
565     'all_icmp': {
566         'name': _('All ICMP'),
567         'ip_protocol': 'icmp',
568         'from_port': '-1',
569         'to_port': '-1',
570     },
571     'ssh': {
572         'name': 'SSH',
573         'ip_protocol': 'tcp',
574         'from_port': '22',
575         'to_port': '22',
576     },
577     'smtp': {
578         'name': 'SMTP',
579         'ip_protocol': 'tcp',
580         'from_port': '25',
581         'to_port': '25',
582     },
583     'dns': {
584         'name': 'DNS',
585         'ip_protocol': 'tcp',
586         'from_port': '53',
587         'to_port': '53',
588     },
589     'http': {
590         'name': 'HTTP',
591         'ip_protocol': 'tcp',
592         'from_port': '80',
593         'to_port': '80',
594     },
595     'pop3': {
596         'name': 'POP3',
597         'ip_protocol': 'tcp',
598         'from_port': '110',
599         'to_port': '110',
600     },
601     'imap': {
602         'name': 'IMAP',
603         'ip_protocol': 'tcp',
604         'from_port': '143',
605         'to_port': '143',
606     },
607     'ldap': {
608         'name': 'LDAP',
609         'ip_protocol': 'tcp',
610         'from_port': '389',
611         'to_port': '389',
612     },
613     'https': {
614         'name': 'HTTPS',
615         'ip_protocol': 'tcp',
616         'from_port': '443',
617         'to_port': '443',
618     },
619     'smtps': {
620         'name': 'SMTPS',
621         'ip_protocol': 'tcp',
622         'from_port': '465',
623         'to_port': '465',
624     },
625     'imaps': {
626         'name': 'IMAPS',
627         'ip_protocol': 'tcp',
628         'from_port': '993',
629         'to_port': '993',
630     },
631     'pop3s': {
632         'name': 'POP3S',
633         'ip_protocol': 'tcp',
634         'from_port': '995',
635         'to_port': '995',
636     },
637     'ms_sql': {
638         'name': 'MS SQL',
639         'ip_protocol': 'tcp',
640         'from_port': '1433',
641         'to_port': '1433',
642     },
643     'mysql': {
644         'name': 'MYSQL',
645         'ip_protocol': 'tcp',
646         'from_port': '3306',
647         'to_port': '3306',
648     },
649     'rdp': {
650         'name': 'RDP',
651         'ip_protocol': 'tcp',
652         'from_port': '3389',
653         'to_port': '3389',
654     },
655 }
656 
657 # Deprecation Notice:
658 #
659 # The setting FLAVOR_EXTRA_KEYS has been deprecated.
660 # Please load extra spec metadata into the Glance Metadata Definition Catalog.
661 #
662 # The sample quota definitions can be found in:
663 # <glance_source>/etc/metadefs/compute-quota.json
664 #
665 # The metadata definition catalog supports CLI and API:
666 #  $glance --os-image-api-version 2 help md-namespace-import
667 #  $glance-manage db_load_metadefs <directory_with_definition_files>
668 #
669 # See Metadata Definitions on: http://docs.openstack.org/developer/glance/
670 
671 # TODO: (david-lyle) remove when plugins support settings natively
672 # Note: This is only used when the Sahara plugin is configured and enabled
673 # for use in Horizon.
674 # Indicate to the Sahara data processing service whether or not
675 # automatic floating IP allocation is in effect.  If it is not
676 # in effect, the user will be prompted to choose a floating IP
677 # pool for use in their cluster.  False by default.  You would want
678 # to set this to True if you were running Nova Networking with
679 # auto_assign_floating_ip = True.
680 #SAHARA_AUTO_IP_ALLOCATION_ENABLED = False
681 
682 # The hash algorithm to use for authentication tokens. This must
683 # match the hash algorithm that the identity server and the
684 # auth_token middleware are using. Allowed values are the
685 # algorithms supported by Python's hashlib library.
686 #OPENSTACK_TOKEN_HASH_ALGORITHM = 'md5'
687 
688 # Hashing tokens from Keystone keeps the Horizon session data smaller, but it
689 # doesn't work in some cases when using PKI tokens.  Uncomment this value and
690 # set it to False if using PKI tokens and there are 401 errors due to token
691 # hashing.
692 #OPENSTACK_TOKEN_HASH_ENABLED = True
693 
694 # AngularJS requires some settings to be made available to
695 # the client side. Some settings are required by in-tree / built-in horizon
696 # features. These settings must be added to REST_API_REQUIRED_SETTINGS in the
697 # form of ['SETTING_1','SETTING_2'], etc.
698 #
699 # You may remove settings from this list for security purposes, but do so at
700 # the risk of breaking a built-in horizon feature. These settings are required
701 # for horizon to function properly. Only remove them if you know what you
702 # are doing. These settings may in the future be moved to be defined within
703 # the enabled panel configuration.
704 # You should not add settings to this list for out of tree extensions.
705 # See: https://wiki.openstack.org/wiki/Horizon/RESTAPI
706 REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES',
707                               'LAUNCH_INSTANCE_DEFAULTS']
708 
709 # Additional settings can be made available to the client side for
710 # extensibility by specifying them in REST_API_ADDITIONAL_SETTINGS
711 # !! Please use extreme caution as the settings are transferred via HTTP/S
712 # and are not encrypted on the browser. This is an experimental API and
713 # may be deprecated in the future without notice.
714 #REST_API_ADDITIONAL_SETTINGS = []
715 
716 # DISALLOW_IFRAME_EMBED can be used to prevent Horizon from being embedded
717 # within an iframe. Legacy browsers are still vulnerable to a Cross-Frame
718 # Scripting (XFS) vulnerability, so this option allows extra security hardening
719 # where iframes are not used in deployment. Default setting is True.
720 # For more information see:
721 # http://tinyurl.com/anticlickjack
722 #DISALLOW_IFRAME_EMBED = True

注:上傳配置文件時需要注意配置文件權限問題

[root@compute1 ~]# ll /etc/openstack-dashboard/local_settings 
-rw-r----- 1 root apache 26505 Jan 24 11:10 /etc/openstack-dashboard/local_settings
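如果上傳后屬主或權限不對,可以參考下面的命令恢復(屬組需要是 apache,否則 httpd 可能讀取不到該文件):

chown root:apache /etc/openstack-dashboard/local_settings
chmod 640 /etc/openstack-dashboard/local_settings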

1.8.3 啟動服務

systemctl restart httpd.service
systemctl enable  httpd.service

1.8.4 驗證操作

     使用瀏覽器訪問 http://10.0.0.31/dashboard ,推薦使用火狐瀏覽器。
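如果頁面打不開,可以先在命令行用 curl 確認 httpd 是否正常響應(檢查示例,返回 200 或 302 一般說明 dashboard 已正常提供服務):

curl -s -o /dev/null -w "%{http_code}\n" http://10.0.0.31/dashboard/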

 

信息說明:第一次連接時速度較慢,耐心等待。

域:default

用戶名:admin

密碼:ADMIN_PASS

至此 horizon 安裝完成

1.9 啟動第一台實例

官方文檔:https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/launch-instance.html

1.9.1 創建虛擬網絡

公有網絡參考:https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/launch-instance-networks-provider.html

- 公共網絡拓撲圖-概述

 

- 連接性

加載環境變量

. admin-openrc

創建網絡

neutron net-create --shared --provider:physical_network provider --provider:network_type flat provider  

在網絡上創建出一個子網

語法說明:

neutron subnet-create --name provider --allocation-pool start=START_IP_ADDRESS,end=END_IP_ADDRESS --dns-nameserver DNS_RESOLVER --gateway PROVIDER_NETWORK_GATEWAY provider PROVIDER_NETWORK_CIDR

   參數說明

# 使用提供者物理網絡的子網CIDR標記替換``PROVIDER_NETWORK_CIDR``。
# 將``START_IP_ADDRESS``和``END_IP_ADDRESS``使用你想分配給實例的子網網段的第一個和最后一個IP地址。這個范圍不能包括任何已經使用的IP地址。
# 將 DNS_RESOLVER 替換為DNS解析服務的IP地址。在大多數情況下,你可以從主機``/etc/resolv.conf`` 文件選擇一個使用。
# 將``PROVIDER_NETWORK_GATEWAY`` 替換為公共網絡的網關,一般的網關IP地址以 ”.1” 結尾。

配置示例:

neutron subnet-create --name provider --allocation-pool start=10.0.0.101,end=10.0.0.250 --dns-nameserver 223.5.5.5 --gateway 10.0.0.254 provider 10.0.0.0/24

配置過程

[root@controller ~]# neutron subnet-create --name provider \
>   --allocation-pool start=10.0.0.101,end=10.0.0.250 \
>   --dns-nameserver 223.5.5.5 --gateway 10.0.0.254 \
>   provider 10.0.0.0/24
Created a new subnet:
+-------------------+----------------------------------------------+
| Field             | Value                                        |
+-------------------+----------------------------------------------+
| allocation_pools  | {"start": "10.0.0.101", "end": "10.0.0.250"} |
| cidr              | 10.0.0.0/24                                  |
| created_at        | 2018-01-24T03:41:27                          |
| description       |                                              |
| dns_nameservers   | 223.5.5.5                                    |
| enable_dhcp       | True                                         |
| gateway_ip        | 10.0.0.254                                   |
| host_routes       |                                              |
| id                | d507bf57-28e6-4af5-b54b-d969e76f4fd6         |
| ip_version        | 4                                            |
| ipv6_address_mode |                                              |
| ipv6_ra_mode      |                                              |
| name              | provider                                     |
| network_id        | 54f942f7-cc28-4292-a4d6-e37b8833e35f         |
| subnetpool_id     |                                              |
| tenant_id         | d0dfbdbc115b4a728c24d28bc1ce1e62             |
| updated_at        | 2018-01-24T03:41:27                          |
+-------------------+----------------------------------------------+

1.9.2 創建m1.nano規格的主機

官方文檔:   https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/launch-instance.html#create-m1-nano-flavor

默認的最小規格的主機需要512 MB內存。對於環境中計算節點內存不足4 GB的,我們推薦創建只需要64 MB的``m1.nano``規格的主機。

若單純為了測試的目的,請使用``m1.nano``規格的主機來加載CirrOS鏡像

配置命令

[root@controller ~]# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field                      | Value   |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled   | False   |
| OS-FLV-EXT-DATA:ephemeral  | 0       |
| disk                       | 1       |
| id                         | 0       |
| name                       | m1.nano |
| os-flavor-access:is_public | True    |
| ram                        | 64      |
| rxtx_factor                | 1.0     |
| swap                       |         |
| vcpus                      | 1       |
+----------------------------+---------+

1.9.3 生成密鑰並創建密鑰對

生成密鑰,並使用

ssh-keygen -q -N "" -f ~/.ssh/id_rsa
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

   分配密鑰

[root@controller ~]# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| fingerprint | 4f:77:29:9d:4c:96:5c:45:e3:7c:5d:fa:0f:b0:bc:59 |
| name        | mykey                                           |
| user_id     | d8f4a1d74f52482d8ebe2184692d2c1c                |
+-------------+-------------------------------------------------+

檢查密鑰對

[root@controller ~]# openstack keypair list 
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | 4f:77:29:9d:4c:96:5c:45:e3:7c:5d:fa:0f:b0:bc:59 |
+-------+-------------------------------------------------+

1.9.4 增加安全組規則

允許 ICMP (ping)

openstack security group rule create --proto icmp default

允許安全 shell (SSH) 的訪問

openstack security group rule create --proto tcp --dst-port 22 default
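添加完成后,可以列出 default 安全組的規則進行確認(檢查示例):

openstack security group rule list default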

1.9.5 啟動第一台雲主機

官方文檔:https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/launch-instance-provider.html

啟動之前先進行基礎環境的檢查

一個實例類型(flavor)指定了虛擬機資源的大致分配,包括處理器、內存和存儲。

openstack flavor list 

列出可用鏡像

openstack image list

列出可用網絡

openstack network list

列出可用的安全組

openstack security group list

獲取網絡id

[root@controller ~]# openstack network list 
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 54f942f7-cc28-4292-a4d6-e37b8833e35f | provider | d507bf57-28e6-4af5-b54b-d969e76f4fd6 |
+--------------------------------------+----------+--------------------------------------+

啟動雲主機,注意net-id為創建的network ID

openstack server create --flavor m1.nano  --image cirros \
  --nic net-id=54f942f7-cc28-4292-a4d6-e37b8833e35f  --security-group default \
  --key-name mykey clsn

檢查雲主機的狀況

[root@controller ~]# nova list 
+--------------------------------------+---------------+--------+------------+-------------+---------------------+
| ID                                   | Name          | Status | Task State | Power State | Networks            |
+--------------------------------------+---------------+--------+------------+-------------+---------------------+
| aa5bcbb8-64a7-44c8-b302-6e1ccd1af6ef | www.nmtui.com | ACTIVE | -          | Running     | provider=10.0.0.102 |
+--------------------------------------+---------------+--------+------------+-------------+---------------------+
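如果需要從命令行獲取實例的 VNC 控制台地址,也可以使用類似下面的命令(示例,實例名以實際環境為准):

openstack console url show clsn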

1.9.6 WEB端進行查看

瀏覽器訪問:http://10.0.0.31/dashboard/

查看雲主機狀態

 

使用控制台登陸

 

使用控制台登陸

 

  用戶名為:cirros,密碼為:cubswin:) 

1.9.7 使用web界面創建一個實例

1、選擇啟動實例

 

2、設置主機名稱,點下一項

 

3、選擇一個鏡像

 

4、選擇一個配置

 

5、網絡

 

6、安全組

 

7、密鑰對

 

8、啟動實例

 

9、創建完成

 

10、查看主機列表

[root@controller ~]# nova list 
+--------------------------------------+----------------+--------+------------+-------------+---------------------+
| ID                                   | Name           | Status | Task State | Power State | Networks            |
+--------------------------------------+----------------+--------+------------+-------------+---------------------+
| ff46e8a7-9085-4afb-b7b7-193f37efb86d | clsn           | ACTIVE | -          | Running     | provider=10.0.0.103 |
| d275ceac-535a-4c05-92ab-3040ed9fb9d8 | clsn-openstack | ACTIVE | -          | Running     | provider=10.0.0.104 |
| aa5bcbb8-64a7-44c8-b302-6e1ccd1af6ef | www.nmtui.com  | ACTIVE | -          | Running     | provider=10.0.0.102 |
+--------------------------------------+----------------+--------+------------+-------------+---------------------+

11、密鑰連接測試

[root@controller ~]# ssh cirros@10.0.0.104
The authenticity of host '10.0.0.104 (10.0.0.104)' can't be established.
RSA key fingerprint is 9d:ca:25:cd:23:c9:f8:73:c6:26:84:53:46:56:67:63.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.104' (RSA) to the list of known hosts.
$ hostname
clsn-openstack

   至此雲主機創建完成。

1.10 cinder塊存儲服務

官方文檔:https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/cinder.html

1.10.1 環境准備

compute1計算節點添加兩塊硬盤,分別為:

    sdb      8:16   0  30G  0 disk 
    sdc      8:32   0  20G  0 disk

1.10.2 安裝並配置控制節點

1)在數據庫中,創庫,授權

創建 cinder 數據庫

CREATE DATABASE cinder;

允許 cinder 數據庫合適的訪問權限

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';

2)在keystone中創建用戶並授權

創建一個 cinder

openstack user create --domain default --password  CINDER_PASS cinder

添加 admin 角色到 cinder 用戶上。

openstack role add --project service --user cinder admin

3)在keystone中創建服務實體,和注冊API接口

創建 cinder cinderv2 服務實體

openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

創建塊設備存儲服務的 API 入口點。注意:需要注冊兩個版本

# v1版本注冊

openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s

 # v2版本注冊

 openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
 openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s

4)安裝軟件包

yum -y install openstack-cinder

5)修改配置文件

編輯 /etc/cinder/cinder.conf,同時完成如下動作

[database] 部分,配置數據庫訪問

[database]
...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[DEFAULT] [oslo_messaging_rabbit]”部分,配置 RabbitMQ 消息隊列訪問

[DEFAULT]
...
rpc_backend = rabbit

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

   [DEFAULT] [keystone_authtoken] 部分,配置認證服務訪問

[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS

[DEFAULT 部分,配置``my_ip`` 來使用控制節點的管理接口的IP 地址

[DEFAULT]
...
my_ip = 10.0.0.11

[oslo_concurrency] 部分,配置鎖路徑

[oslo_concurrency]
...
lock_path = /var/lib/cinder/tmp

配置計算服務使用塊設備存儲

編輯文件 /etc/nova/nova.conf 並添加如下到其中

vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
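命令集(與上面的手工修改等價,僅供參考;my_ip 使用本文控制節點的 10.0.0.11):

cp /etc/cinder/cinder.conf{,.bak}
grep '^[a-Z\[]' /etc/cinder/cinder.conf.bak >/etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf  database connection  mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
openstack-config --set /etc/cinder/cinder.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/cinder/cinder.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/cinder/cinder.conf  DEFAULT my_ip  10.0.0.11
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken project_name  service
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken username  cinder
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken password  CINDER_PASS
openstack-config --set /etc/cinder/cinder.conf  oslo_concurrency lock_path  /var/lib/cinder/tmp
openstack-config --set /etc/cinder/cinder.conf  oslo_messaging_rabbit rabbit_host  controller
openstack-config --set /etc/cinder/cinder.conf  oslo_messaging_rabbit rabbit_userid  openstack
openstack-config --set /etc/cinder/cinder.conf  oslo_messaging_rabbit rabbit_password  RABBIT_PASS
openstack-config --set /etc/nova/nova.conf  cinder os_region_name  RegionOne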

6)同步數據庫

su -s /bin/sh -c "cinder-manage db sync" cinder
# 忽略輸出中任何不推薦使用的信息。

7)啟動服務

重啟計算API 服務

systemctl restart openstack-nova-api.service
systemctl status openstack-nova-api.service

啟動塊設備存儲服務,並將其配置為開機自啟

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service

1.10.3 安裝並配置一個存儲節點

參考:https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/cinder-storage-install.html

1)安裝lvm軟件

安裝支持的工具包

yum -y install lvm2

啟動LVMmetadata服務並且設置該服務隨系統啟動

systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
systemctl status lvm2-lvmetad.service

2)創建物理卷

   將之前添加的兩塊硬盤創建物理卷

pvcreate /dev/sdb
pvcreate /dev/sdc

   執行過程

[root@compute1 ~]#  pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
[root@compute1 ~]#  pvcreate /dev/sdc
  Physical volume "/dev/sdc" successfully created.

3)創建 LVM 卷組 

vgcreate cinder-volumes-sata /dev/sdb 
vgcreate cinder-volumes-ssd /dev/sdc

查看創建出來的卷組

[root@compute1 ~]# vgs
  VG                  #PV #LV #SN Attr   VSize  VFree 
  cinder-volumes-sata   1   0   0 wz--n- 30.00g 30.00g
  cinder-volumes-ssd    1   0   0 wz--n- 20.00g 20.00g

刪除卷組方法

# vgremove vg-name

4)修改配置文件

只有實例可以訪問塊存儲卷組。不過,底層的操作系統管理這些設備並將其與卷關聯。

   默認情況下,LVM卷掃描工具會掃描``/dev`` 目錄,查找包含卷的塊存儲設備。

   如果項目在他們的卷上使用LVM,掃描工具檢測到這些卷時會嘗試緩存它們,可能會在底層操作系統和項目卷上產生各種問題。

編輯``/etc/lvm/lvm.conf``文件並完成下面的操作

devices {
...
# 在130行下增加如下行
filter = [ "a/sdb/", "a/sdc/", "r/.*/"]

5)安裝軟件並配置組件

yum -y install openstack-cinder targetcli python-keystone

6)配置文件修改

編輯 /etc/cinder/cinder.conf,同時完成如下動作

[database] 部分,配置數據庫訪問

[database]
...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[DEFAULT] [oslo_messaging_rabbit]”部分,配置 RabbitMQ 消息隊列訪問

[DEFAULT]
...
rpc_backend = rabbit

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

[DEFAULT] [keystone_authtoken] 部分,配置認證服務訪問

[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS

[DEFAULT] 部分,配置 my_ip 選項

[DEFAULT]
...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

注意:將其中的``MANAGEMENT_INTERFACE_IP_ADDRESS``替換為存儲節點上的管理網絡接口的IP 地址

``[lvm]``部分,配置LVM后端以LVM驅動結束,卷組``cinder-volumes`` iSCSI 協議和正確的 iSCSI服務

[lvm]
...
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

[DEFAULT] 部分,啟用 LVM 后端

[DEFAULT]
...
enabled_backends = lvm

[DEFAULT] 區域,配置鏡像服務 API 的位置

[DEFAULT]
...
glance_api_servers = http://controller:9292

[oslo_concurrency] 部分,配置鎖路徑

[oslo_concurrency]
...
lock_path = /var/lib/cinder/tmp
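命令集(僅供參考,與下文給出的最終配置文件相對應;glance_api_servers 按官方文檔寫法指向 controller):

cp /etc/cinder/cinder.conf{,.bak}
grep '^[a-Z\[]' /etc/cinder/cinder.conf.bak >/etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf  database connection  mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
openstack-config --set /etc/cinder/cinder.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/cinder/cinder.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/cinder/cinder.conf  DEFAULT my_ip  10.0.0.31
openstack-config --set /etc/cinder/cinder.conf  DEFAULT enabled_backends  lvm
openstack-config --set /etc/cinder/cinder.conf  DEFAULT glance_api_servers  http://controller:9292
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken project_name  service
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken username  cinder
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken password  CINDER_PASS
openstack-config --set /etc/cinder/cinder.conf  oslo_concurrency lock_path  /var/lib/cinder/tmp
openstack-config --set /etc/cinder/cinder.conf  oslo_messaging_rabbit rabbit_host  controller
openstack-config --set /etc/cinder/cinder.conf  oslo_messaging_rabbit rabbit_userid  openstack
openstack-config --set /etc/cinder/cinder.conf  oslo_messaging_rabbit rabbit_password  RABBIT_PASS
openstack-config --set /etc/cinder/cinder.conf  lvm volume_driver  cinder.volume.drivers.lvm.LVMVolumeDriver
openstack-config --set /etc/cinder/cinder.conf  lvm volume_group  cinder-volumes-sata
openstack-config --set /etc/cinder/cinder.conf  lvm iscsi_protocol  iscsi
openstack-config --set /etc/cinder/cinder.conf  lvm iscsi_helper  lioadm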

配置文件最終內容

 1 [root@compute1 ~]# cat /etc/cinder/cinder.conf
 2 [DEFAULT]
 3 glance_api_servers = http://10.0.0.32:9292
 4 enabled_backends = lvm
 5 rpc_backend = rabbit
 6 auth_strategy = keystone
 7 my_ip = 10.0.0.31
 8 [BACKEND]
 9 [BRCD_FABRIC_EXAMPLE]
10 [CISCO_FABRIC_EXAMPLE]
11 [COORDINATION]
12 [FC-ZONE-MANAGER]
13 [KEYMGR]
14 [cors]
15 [cors.subdomain]
16 [database]
17 connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
18 [keystone_authtoken]
19 auth_uri = http://controller:5000
20 auth_url = http://controller:35357
21 memcached_servers = controller:11211
22 auth_type = password
23 project_domain_name = default
24 user_domain_name = default
25 project_name = service
26 username = cinder
27 password = CINDER_PASS
28 [matchmaker_redis]
29 [oslo_concurrency]
30 lock_path = /var/lib/cinder/tmp
31 [oslo_messaging_amqp]
32 [oslo_messaging_notifications]
33 [oslo_messaging_rabbit]
34 rabbit_host = controller
35 rabbit_userid = openstack
36 rabbit_password = RABBIT_PASS
37 [oslo_middleware]
38 [oslo_policy]
39 [oslo_reports]
40 [oslo_versionedobjects]
41 [ssl]
42 [lvm]
43 volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
44 volume_group = cinder-volumes-sata
45 iscsi_protocol = iscsi
46 iscsi_helper = lioadm

7)啟動服務

systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
systemctl status openstack-cinder-volume.service target.service

8)驗證檢查狀態

[root@controller ~]#  cinder service-list
+------------------+--------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |     Host     | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+--------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |  controller  | nova | enabled |   up  | 2018-01-25T11:01:41.000000 |        -        |
|  cinder-volume   | compute1@lvm | nova | enabled |   up  | 2018-01-25T11:01:40.000000 |        -        |
+------------------+--------------+------+---------+-------+----------------------------+-----------------+

1.10.4 添加ssd盤配置信息

修改配置文件

[root@compute1 ~]#  vim /etc/cinder/cinder.conf
# 修改內容如下
[DEFAULT]
···
enabled_backends = lvm,ssd

[lvm]
···
volume_backend_name = sata

[ssd]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes-ssd
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = ssd

重啟服務

[root@compute1 ~]# systemctl restart openstack-cinder-volume.service

檢查cinder服務狀態

[root@controller ~]#  cinder service-list
+------------------+--------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |     Host     | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+--------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |  controller  | nova | enabled |   up  | 2018-01-25T11:45:42.000000 |        -        |
|  cinder-volume   | compute1@lvm | nova | enabled |   up  | 2018-01-25T11:45:21.000000 |        -        |
|  cinder-volume   | compute1@ssd | nova | enabled |   up  | 2018-01-25T11:45:42.000000 |        -        |
+------------------+--------------+------+---------+-------+----------------------------+-----------------+
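在按 sata/ssd 類型創建卷之前,一般還需要先創建對應的卷類型,並把它們與上面配置的 volume_backend_name 綁定(參考示例,類型名稱可自行調整):

cinder type-create sata
cinder type-key sata set volume_backend_name=sata
cinder type-create ssd
cinder type-key ssd set volume_backend_name=ssd
cinder type-list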

1.10.5 Dashboard中如何創建硬盤

1)使用瀏覽器登錄dashboard:http://10.0.0.31/dashboard

   選擇創建卷

 

2)創建一個sata類型的卷

 

3)創建過程

 

   創建完成

 

4)創建ssd類型卷

 

5)再查看創建的硬盤

 

在命令行中查看添加的塊存儲

[root@compute1 ~]# lvs
  LV                                          VG                  Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  volume-0ea47012-c0fb-4dc4-90e7-89427fe9e675 cinder-volumes-sata -wi-a----- 1.00g                                                    
  volume-288efecb-6bf0-4409-9564-81b0a6edc9b8 cinder-volumes-sata -wi-a----- 1.00g                                                    
  volume-ab347594-6402-486d-87a1-19358aa92a08 cinder-volumes-sata -wi-a----- 1.00g                                                    
  volume-33ccbb43-8bd3-4006-849d-73fe6176ea90 cinder-volumes-ssd  -wi-a----- 1.00g                                                    
  volume-cfd0ac03-f03f-4fe2-b369-76dba946934d cinder-volumes-ssd  -wi-a----- 1.00g     

1.10.6 添加硬盤到虛擬機

  

連接到一個實例
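除了在 Web 界面上操作,也可以用命令行把卷掛載到實例(示例,實例名與卷名以實際環境為准):

openstack server add volume <實例名或ID> <卷名或ID>
# 例如(名稱僅作示例):openstack server add volume clsn-openstack sata-disk01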

 

登陸虛擬機

[root@controller ~]# ssh cirros@172.16.1.101
$ lsblk 
NAME   MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
vda    253:0    0      1G  0 disk 
`-vda1 253:1    0 1011.9M  0 part /
vdb    253:16   0      1G  0 disk

格式化磁盤

$ sudo mkfs.ext3  /dev/vdb  
$ sudo mount /dev/vdb  /mnt/

   創建文件測試

$ cd /mnt/
$ sudo touch clsn
$ ls
clsn        lost+found

1.11 添加一台新的計算節點

1.11.1 主機基礎環境配置

要求:主機配置與之前的計算節點相同,推薦4G以上內存。

1)配置本地yum倉庫(提高安裝速度)

cd /opt/ && wget http://10.0.0.1:8080/openstack/openstack_rpm.tar.gz
tar xf openstack_rpm.tar.gz
echo  'mount /dev/cdrom /mnt'  >>/etc/rc.d/rc.local
mount /dev/cdrom /mnt
chmod +x /etc/rc.d/rc.local
cat >/etc/yum.repos.d/local.repo<<-'EOF'
[local]
name=local
baseurl=file:///mnt
gpgcheck=0

[openstack]
name=openstack-mitaka
baseurl=file:///opt/repo
gpgcheck=0
EOF
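倉庫配置完成后,可以確認一下兩個本地源是否可用(檢查示例):

yum clean all
yum repolist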

2)配置NTP時間服務

# 安裝軟件
yum install chrony -y 
# 修改配置信息,同步chrony服務
sed -ri.bak '/server/s/^/#/g;2a server 10.0.0.11 iburst' /etc/chrony.conf
# 啟動,設置自啟動
systemctl enable chronyd.service
systemctl start chronyd.service
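可以用 chronyc 確認時間源已經指向控制節點(檢查示例):

chronyc sources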

3)安裝OpenStack的包操作

#安裝 OpenStack 客戶端:
yum -y install python-openstackclient
#安裝 openstack-selinux 軟件包
yum -y install openstack-selinux    

1.11.2 安裝配置計算服務

安裝nova軟件包

yum -y install openstack-nova-compute
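按官方安裝指南,建議先確認該節點是否支持硬件虛擬化加速;如果下面的命令返回 0,則需要在 nova.conf 的 [libvirt] 部分把 virt_type 設置為 qemu(示例):

egrep -c '(vmx|svm)' /proc/cpuinfo
# 若返回 0,可追加:openstack-config --set /etc/nova/nova.conf  libvirt virt_type  qemu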

命令集修改配置文件

yum install openstack-utils -y
cp /etc/nova/nova.conf{,.bak}
grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf  DEFAULT enabled_apis  osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/nova/nova.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/nova/nova.conf  DEFAULT my_ip  10.0.0.32
openstack-config --set /etc/nova/nova.conf  DEFAULT use_neutron  True
openstack-config --set /etc/nova/nova.conf  DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf  glance api_servers  http://controller:9292
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_uri  http://controller:5000
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  memcached_servers  controller:11211
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_type  password
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  user_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_name  service
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  username  nova
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  password  NOVA_PASS
openstack-config --set /etc/nova/nova.conf  oslo_concurrency lock_path  /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_host  controller
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_userid  openstack
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_password  RABBIT_PASS
openstack-config --set /etc/nova/nova.conf  vnc enabled  True
openstack-config --set /etc/nova/nova.conf  vnc vncserver_listen  0.0.0.0
openstack-config --set /etc/nova/nova.conf  vnc vncserver_proxyclient_address  '$my_ip'
openstack-config --set /etc/nova/nova.conf  vnc novncproxy_base_url  http://controller:6080/vnc_auto.html

1.11.3 配置neutron網絡

安裝neutron相關組件

yum -y install openstack-neutron-linuxbridge ebtables ipset

修改neutron配置

cp /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/neutron/neutron.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_name  service
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken username  neutron
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken password  NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf  oslo_concurrency lock_path  /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_host  controller
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_userid  openstack
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_password  RABBIT_PASS

配置Linuxbridge代理

cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  linux_bridge physical_interface_mappings  provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup enable_security_group  True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  vxlan enable_vxlan  False

再次配置 nova 服務

openstack-config --set /etc/nova/nova.conf  neutron url  http://controller:9696
openstack-config --set /etc/nova/nova.conf  neutron auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  neutron auth_type  password
openstack-config --set /etc/nova/nova.conf  neutron project_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron user_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron region_name  RegionOne
openstack-config --set /etc/nova/nova.conf  neutron project_name  service
openstack-config --set /etc/nova/nova.conf  neutron username  neutron
openstack-config --set /etc/nova/nova.conf  neutron password  NEUTRON_PASS

1.11.4 啟動計算節點

#啟動nova服務,設置開機自啟動

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

# 啟動Linuxbridge代理並配置它開機自啟動

systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

# 查看狀態

systemctl status libvirtd.service openstack-nova-compute.service
systemctl status neutron-linuxbridge-agent.service

1.11.5 驗證之前的操作

在控制節點驗證配置

neutron agent-list

 驗證網絡配置

[root@controller ~]# neutron agent-list 
+----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+
| id                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                  |
+----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+
| 3ab2f17f-737e-4c3f-  | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent      |
| 86f0-2289c56a541b    |                    |            |                   |       |                |                         |
| 4f64caf6-a9b0-4742-b | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-    |
| 0d1-0d961063200a     |                    |            |                   |       |                | agent                   |
| 630540de-d0a0-473b-  | Linux bridge agent | compute1   |                   | :-)   | True           | neutron-linuxbridge-    |
| 96b5-757afc1057de    |                    |            |                   |       |                | agent                   |
| 9989ddcb-6aba-4b7f-  | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent  |
| 9bd7-7d61f774f2bb    |                    |            |                   |       |                |                         |
| af40d1db-ff24-4201-b | Linux bridge agent | compute2   |                   | :-)   | True           | neutron-linuxbridge-    |
| 0f2-175fc1542f26     |                    |            |                   |       |                | agent                   |
+----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+

   驗證計算節點

[root@controller ~]# openstack compute service list 
+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-scheduler   | controller | internal | enabled | up    | 2018-01-24T06:06:02.000000 |
|  2 | nova-conductor   | controller | internal | enabled | up    | 2018-01-24T06:06:04.000000 |
|  3 | nova-consoleauth | controller | internal | enabled | up    | 2018-01-24T06:06:03.000000 |
|  6 | nova-compute     | compute1   | nova     | enabled | up    | 2018-01-24T06:06:05.000000 |
|  7 | nova-compute     | compute2   | nova     | enabled | up    | 2018-01-24T06:06:00.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
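
As an additional check, the new node should also be listed as a hypervisor:

[root@controller ~]# openstack hypervisor list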

1.12 Glance鏡像服務遷移

glance服務遷移到其他節點上,減輕控制節點壓力,提高性能。

1.12.1 數據庫遷移

本次glance遷移到compute2節點上

安裝數據庫

yum -y install mariadb mariadb-server python2-PyMySQL

修改數據庫配置文件

[root@compute2 ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 10.0.0.32
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

啟動數據庫,並設置開機自啟動

systemctl enable mariadb.service
systemctl start mariadb.service

【重要】為了保證數據庫服務的安全性,運行``mysql_secure_installation``腳本

mysql_secure_installation

1.12.2 鏡像glance 數據庫遷移

在控制節點的數據庫將glance庫導出,文件傳到計算節點

[root@controller ~]# mysqldump -B glance  > glance.sql
[root@controller ~]# rsync -avz  glance.sql  10.0.0.32:/opt/

以下操作在compute2節點上進行操作

導入數據庫:

[root@compute2  ~]# mysql 
MariaDB [(none)]> source /opt/glance.sql

   重新創建glance授權用戶

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%'  IDENTIFIED BY 'GLANCE_DBPASS';
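
Before repointing the services, it helps to confirm the new grants work over the network; a quick test from another node (GLANCE_DBPASS stands for the real password):

mysql -h 10.0.0.32 -u glance -pGLANCE_DBPASS -e 'SHOW TABLES FROM glance;'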

1.12.3 安裝glance服務

參考文檔https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/glance.html

安裝glance相關軟件包

yum -y install openstack-glance

編輯配置文件 /etc/glance/glance-api.conf

Note: change the database connection address so it points at the database on compute2.

   批量修改命令集:

yum install openstack-utils -y
cp /etc/glance/glance-api.conf{,.bak}
grep '^[a-Z\[]' /etc/glance/glance-api.conf.bak >/etc/glance/glance-api.conf
openstack-config --set /etc/glance/glance-api.conf  database  connection  mysql+pymysql://glance:GLANCE_DBPASS@10.0.0.32/glance
openstack-config --set /etc/glance/glance-api.conf  glance_store stores  file,http
openstack-config --set /etc/glance/glance-api.conf  glance_store default_store  file
openstack-config --set /etc/glance/glance-api.conf  glance_store filesystem_store_datadir  /var/lib/glance/images/
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken project_name  service
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken username  glance
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken password  GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf  paste_deploy flavor  keystone

   編輯配置文件 /etc/glance/glance-registry.conf

Note: change the database connection address so it points at the database on compute2.

   批量修改命令集:

yum install openstack-utils -y
cp /etc/glance/glance-registry.conf{,.bak}
grep '^[a-Z\[]' /etc/glance/glance-registry.conf.bak > /etc/glance/glance-registry.conf
openstack-config --set /etc/glance/glance-registry.conf  database  connection  mysql+pymysql://glance:GLANCE_DBPASS@10.0.0.32/glance
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken project_name  service
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken username  glance
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken password  GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf  paste_deploy flavor  keystone

1.12.4 遷移原有鏡像文件

將原glance上的鏡像文件,傳輸到compute2

[root@controller ~]# cd  /var/lib/glance/images/
[root@controller ~]# rsync -avz `pwd`/ 10.0.0.32:`pwd`/ 

【注意權限】傳輸過后,在compute2上查看權限

[root@compute2 ~]# cd  /var/lib/glance/images/
[root@compute2 ~]# chown glance:glance *
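
To confirm the image files were transferred intact, compare checksums on the two hosts; for example:

[root@controller images]# md5sum *
[root@compute2 images]# md5sum *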

1.12.5 修改現有keystone glance服務注冊信息

備份數據庫endpoint表數據

[root@controller ~]# mysqldump keystone endpoint > endpoint.sql

修改keystone注冊信息

cp endpoint.sql{,.bak}
sed -i 's#http://controller:9292#http://10.0.0.32:9292#g' endpoint.sql

重新將修改后的sql文件導入數據庫

[root@controller ~]# mysql keystone < endpoint.sql
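
The change can be verified from the service catalog; the image endpoints should now point at 10.0.0.32:

[root@controller ~]# openstack endpoint list | grep 9292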

1.12.6 修改nova節點配置文件

所有的節點上的配置文件都進行修改

sed -i 's#api_servers = http://controller:9292#api_servers = http://10.0.0.32:9292#g' /etc/nova/nova.conf

控制節點重啟

systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

   計算節點重啟

systemctl restart   openstack-nova-compute.service

   停掉glance原節點的服務

systemctl stop openstack-glance-api.service  openstack-glance-registry.service

1.12.7 驗證操作

copmute2節點啟動glance服務

systemctl start openstack-glance-api.service  openstack-glance-registry.service

查看鏡像列表

[root@controller ~]# openstack image list 
+--------------------------------------+----------+--------+
| ID                                   | Name     | Status |
+--------------------------------------+----------+--------+
| 68222030-a808-4d05-978f-1d4a6f85f7dd | clsn-img | active |
| 9d92c601-0824-493a-bc6e-cecb10e9a4c6 | cirros   | active |
+--------------------------------------+----------+--------+

查看web界面中的鏡像信息

 

1.13 添加一個新的網段並讓它能夠上網

1.13.1 環境准備

1) Add a new network adapter to every OpenStack host (perform on all machines).

   For the adapter, choose a LAN segment and make sure all machines are in the same LAN segment.

 

   2)主機修改配置,啟動eth1網卡(所有節點操作)

   查看網卡設備

[root@compute1 ~]# ls /proc/sys/net/ipv4/conf/
all  brq2563bcef-c6  brq54f942f7-cc  default  eth0  eth1  lo
[root@compute1 ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth{0,1}

   修改網卡配置

[root@compute1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
NAME=eth1
DEVICE=eth1
ONBOOT=yes
IPADDR=172.16.1.31
NETMASK=255.255.255.0

   啟動網卡

[root@compute1 ~]# ifup eth1

1.13.2 配置neutron服務

Add another flat network; here it is named net172.

[root@controller ~]# vim /etc/neutron/plugin.ini 
[DEFAULT]
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider,net172
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
[securitygroup]
enable_ipset = True

   修改橋接配置,添加eth1信息

[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:eth0,net172:eth1
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = False

   將橋接配置文件發往各個節點

[root@controller ~]# rsync -avz /etc/neutron/plugins/ml2/linuxbridge_agent.ini 10.0.0.31:/etc/neutron/plugins/ml2/linuxbridge_agent.ini
····

1.13.3 重啟服務

控制節點重啟網絡服務

[root@controller ~]# systemctl restart  neutron-server.service  neutron-linuxbridge-agent.service

在其他計算節點重啟網絡服務

[root@compute1 ~]# systemctl restart neutron-linuxbridge-agent.service 

查看當前網絡狀態

[root@controller ~]# neutron agent-list
+----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+
| id                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                  |
+----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+
| 3ab2f17f-737e-4c3f-  | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent      |
| 86f0-2289c56a541b    |                    |            |                   |       |                |                         |
| 4f64caf6-a9b0-4742-b | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-    |
| 0d1-0d961063200a     |                    |            |                   |       |                | agent                   |
| 630540de-d0a0-473b-  | Linux bridge agent | compute1   |                   | :-)   | True           | neutron-linuxbridge-    |
| 96b5-757afc1057de    |                    |            |                   |       |                | agent                   |
| 9989ddcb-6aba-4b7f-  | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent  |
| 9bd7-7d61f774f2bb    |                    |            |                   |       |                |                         |
| af40d1db-ff24-4201-b | Linux bridge agent | compute2   |                   | :-)   | True           | neutron-linuxbridge-    |
| 0f2-175fc1542f26     |                    |            |                   |       |                | agent                   |
+----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+

1.13.4 配置iptables服務器作子網網關

主機信息

[root@route ~]# uname -r 
3.10.0-327.el7.x86_64
[root@route ~]# hostname -I 
10.0.0.2 172.16.1.2

   配置內核轉發

[root@route ~]# echo 'net.ipv4.ip_forward=1' >>/etc/sysctl.conf
[root@route ~]# sysctl -p 
net.ipv4.ip_forward = 1

   配置iptables轉發規則

iptables -t nat -A POSTROUTING -s 172.16.1.0/24 -o eth0 -j MASQUERADE
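
The rule above lives only in memory and disappears on reboot; one optional way to persist it on CentOS 7 (assuming the iptables-services package) is:

yum install -y iptables-services
iptables-save > /etc/sysconfig/iptables
systemctl enable iptables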

1.13.5 web界面創建子網

1)選擇創建網絡

 

   2) Configure the subnet.

      For the gateway, use the iptables server built above; instances will reach the Internet through it.

 

   3) Set the subnet's IP address range; once this is done the subnet is created.

 

   4)創建一個新的實例測試子網

      注意:在創建時,網絡選擇剛剛創建的net172網絡

   實例創建完成

 

   5)登陸控制台

   查看網關信息

 

   檢測網絡連通性

 

   至此一個新的子網創建成功

1.14 Cinder服務對接NFS配置

NFS服務介紹參考文檔:http://www.cnblogs.com/clsn/p/7694456.html

1.14.1 NFS服務部署

注意:實驗環境使用控制節點做nfs服務器,在生產環境中,需配置高性能存儲服務器。

安裝nfs相關軟件包

yum install nfs-utils rpcbind -y

配置nfs服務

[root@controller ~]# cat /etc/exports
/data 10.0.0.0/24(rw,async,no_root_squash,no_all_squash)
# 創建目錄
[root@controller ~]# mkdir /data

啟動nfs服務,並設置開機自啟動

systemctl restart rpcbind 
systemctl restart nfs
systemctl enable rpcbind  nfs
systemctl status rpcbind  nfs

1.14.2 測試NFS的可用性

在計算節點查看nfs信息

[root@compute1 ~]# showmount -e 10.0.0.11
Export list for 10.0.0.11:
/data 10.0.0.0/24

   進行掛載測試

[root@compute1 ~]# mount 10.0.0.11:/data /srv

   寫入文件

[root@compute1 ~]# cd /srv/
[root@compute1 srv]# touch clsn 

   在服務端查看文件是否寫入成功。

[root@controller data]# ll
total 0
-rw-r--r-- 1 root root 0 Jan 26 15:35 clsn

1.14.3 修改Cinder節點配置文件

   首先我們需要知道,cinder是通過在cinder.conf配置文件來配置驅動從而使用不同的存儲介質的, 所以如果我們使用NFS作為存儲介質,那么就需要配置成NFS的驅動,

   那么問題來了,如何找到NFS的驅動呢?請看下面查找步驟:

[root@controller ~]# cd /usr/lib/python2.7/site-packages/cinder   # 切換到cinder的模塊包里
[root@controller cinder]# cd volume/drivers/   # 找到卷的驅動
[root@controller drivers]# grep Nfs nfs.py   # 過濾下Nfs就能找到
class NfsDriver(driver.ExtendVD, remotefs.RemoteFSDriver):  # 這個class定義的類就是Nfs的驅動名字了

驅動找到了,現在修改cinder配置添加nfs服務器信息

[root@compute1 ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
···
enabled_backends = lvm,ssd,nfs

[nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
volume_backend_name = nfs

   nfs信息文件

[root@compute1 ~]# cat /etc/cinder/nfs_shares
10.0.0.11:/data
# 修改權限
chown root:cinder /etc/cinder/nfs_shares
chmod 640 /etc/cinder/nfs_shares

1.14.4 重啟服務

重啟cinder-volume服務

[root@compute1 ~]# systemctl restart openstack-cinder-volume

   查看掛載信息

[root@compute1 ~]# df -h
Filesystem       Size  Used Avail Use% Mounted on
/dev/sda2         48G  4.0G   45G   9% /
devtmpfs         480M     0  480M   0% /dev
tmpfs            489M     0  489M   0% /dev/shm
tmpfs            489M   13M  477M   3% /run
tmpfs            489M     0  489M   0% /sys/fs/cgroup
/dev/sr0         4.1G  4.1G     0 100% /mnt
tmpfs             98M     0   98M   0% /run/user/0
10.0.0.11:/data   48G  2.9G   46G   6% /var/lib/cinder/mnt/490717a467bd12d34ec324c86a4f35b3

在控制節點驗證服務是否正常

[root@controller ~]#  cinder service-list
+------------------+--------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |     Host     | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+--------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |  controller  | nova | enabled |   up  | 2018-01-26T13:18:45.000000 |        -        |
|  cinder-volume   | compute1@lvm | nova | enabled |   up  | 2018-01-26T13:18:42.000000 |        -        |
|  cinder-volume   | compute1@nfs | nova | enabled |   up  | 2018-01-26T13:18:42.000000 |        -        |
|  cinder-volume   | compute1@ssd | nova | enabled |   up  | 2018-01-26T13:18:42.000000 |        -        |
|  cinder-volume   | compute2@lvm | nova | enabled |   up  | 2018-01-26T13:18:50.000000 |        -        |
+------------------+--------------+------+---------+-------+----------------------------+-----------------+
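
The dashboard step in the next section needs a volume type that maps to this backend. If an nfs type does not exist yet, it can be created from the CLI; a sketch, assuming the type name nfs and the volume_backend_name = nfs set above:

[root@controller ~]# cinder type-create nfs
[root@controller ~]# cinder type-key nfs set volume_backend_name=nfs
[root@controller ~]# cinder extra-specs-list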

1.14.5 添加NFS存儲卷

1)創建nfs類型卷

 

   2)創建成功

   3)查看卷的詳細信息

 

   nfs服務端,查找到標識一致的文件

[root@controller ~]# ll /data/
total 0
-rw-r--r-- 1 root root          0 Jan 26 15:35 clsn
-rw-rw-rw- 1 root root 1073741824 Jan 26 21:23 volume-8c55c9bf-6ab2-4828-a14e-76bd525ba4ad

   至此Cinder對接NFS就完成了

1.15 OpenStack中的VXLAN網絡

The deployment so far is based on "Networking Option 1: Provider networks"; this section adds VXLAN by following "Networking Option 2: Self-service networks".

1.15.1 前期准備

1)添加網卡eth2 (所有節點操作)

 

2)配置網卡,配置網段172.16.0.x

cp /etc/sysconfig/network-scripts/ifcfg-eth{1,2}
vim  /etc/sysconfig/network-scripts/ifcfg-eth2
TYPE=Ethernet
BOOTPROTO=none
NAME=eth2
DEVICE=eth2
ONBOOT=yes
IPADDR=172.16.0.X
NETMASK=255.255.255.0

3)啟動網卡

ifup eth2

1.15.2 修改控制節點配置

參考文檔:https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/neutron-controller-install-option2.html

1)安裝組件

yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

   2)修改配置文件

修改 /etc/neutron/neutron.conf

[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True

配置 Modular Layer 2 (ML2) 插件,修改/etc/neutron/plugins/ml2/ml2_conf.ini

``[ml2]``部分,啟用flatVLAN以及VXLAN網絡

[ml2]
...
type_drivers = flat,vlan,vxlan        

``[ml2]``部分,啟用VXLAN私有網絡

[ml2]
...
tenant_network_types = vxlan

``[ml2]``部分,啟用Linuxbridgelayer2機制:

[ml2]
...
mechanism_drivers = linuxbridge,l2population

``[ml2_type_vxlan]``部分,為私有網絡配置VXLAN網絡識別的網絡范圍

[ml2_type_vxlan]
...
vni_ranges = 1:1000

配置Linuxbridge代理,修改 /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[vxlan]
enable_vxlan = True
local_ip = 172.16.0.11
l2_population = True

配置layer3代理,編輯``/etc/neutron/l3_agent.ini``文件並完成以下操作

``[DEFAULT]``部分,配置Linuxbridge接口驅動和外部網絡網橋

[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =

同步數據庫

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

啟動服務

systemctl restart neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
# 啟動l3網絡
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service

   檢查網絡狀態

[root@controller ~]# neutron agent-list
+----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+
| id                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                  |
+----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+
| 3ab2f17f-737e-4c3f-  | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent      |
| 86f0-2289c56a541b    |                    |            |                   |       |                |                         |
| 4f64caf6-a9b0-4742-b | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-    |
| 0d1-0d961063200a     |                    |            |                   |       |                | agent                   |
| 630540de-d0a0-473b-  | Linux bridge agent | compute1   |                   | :-)   | True           | neutron-linuxbridge-    |
| 96b5-757afc1057de    |                    |            |                   |       |                | agent                   |
| 9989ddcb-6aba-4b7f-  | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent  |
| 9bd7-7d61f774f2bb    |                    |            |                   |       |                |                         |
| af40d1db-ff24-4201-b | Linux bridge agent | compute2   |                   | :-)   | True           | neutron-linuxbridge-    |
| 0f2-175fc1542f26     |                    |            |                   |       |                | agent                   |
| b08be87c-4abe-48ce-  | L3 agent           | controller | nova              | :-)   | True           | neutron-l3-agent        |
| 983f-0bb08208f6de    |                    |            |                   |       |                |                         |
+----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+

1.15.3 修改配置計算節點文件

配置Linuxbridge代理,添加配置

vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[vxlan]
enable_vxlan = True
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = True
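
Expressed in the openstack-config style used elsewhere in this document, the compute-node change could look like the sketch below; the local_ip value is an assumption and must be replaced with each node's own eth2 address (e.g. 172.16.0.31 on compute1):

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  vxlan enable_vxlan  True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  vxlan local_ip  172.16.0.31
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  vxlan l2_population  True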

重啟服務

systemctl restart neutron-linuxbridge-agent.service

   再次檢查網絡狀態

[root@controller ~]# neutron agent-list
+----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+
| id                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                  |
+----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+
| 3ab2f17f-737e-4c3f-  | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent      |
| 86f0-2289c56a541b    |                    |            |                   |       |                |                         |
| 4f64caf6-a9b0-4742-b | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-    |
| 0d1-0d961063200a     |                    |            |                   |       |                | agent                   |
| 630540de-d0a0-473b-  | Linux bridge agent | compute1   |                   | :-)   | True           | neutron-linuxbridge-    |
| 96b5-757afc1057de    |                    |            |                   |       |                | agent                   |
| 9989ddcb-6aba-4b7f-  | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent  |
| 9bd7-7d61f774f2bb    |                    |            |                   |       |                |                         |
| af40d1db-ff24-4201-b | Linux bridge agent | compute2   |                   | :-)   | True           | neutron-linuxbridge-    |
| 0f2-175fc1542f26     |                    |            |                   |       |                | agent                   |
| b08be87c-4abe-48ce-  | L3 agent           | controller | nova              | :-)   | True           | neutron-l3-agent        |
| 983f-0bb08208f6de    |                    |            |                   |       |                |                         |
+----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+

1.15.4 修改dashboard開啟路由界面顯示

該操作是在web界面開啟route功能

vim /etc/openstack-dashboard/local_settings
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
····

重啟dashboard服務

systemctl restart httpd.service

1.15.5 配置VXLAN網絡

1)查看現在網絡拓撲

 

   2)編輯網絡配置,開啟外部網絡

 

   3)配置網絡

 

   4)配置子網

 

   5)創建路由器

      創建路由時,注意配置外部網絡連接.

路由器實質為創建命名空間

查看命名空間列表

[root@controller ~]# ip netns
qdhcp-ac1f482b-5c37-4da2-8922-c8d02e3ad27b
qrouter-546678a3-7277-42a6-9ddd-a060e3d3198d
qdhcp-2563bcef-c6b0-43f1-9b17-1eca15472893
qdhcp-54f942f7-cc28-4292-a4d6-e37b8833e35f

 

    進入命名空間

 

[root@controller ~]# ip netns exec qrouter-546678a3-7277-42a6-9ddd-a060e3d3198d /bin/bash

6)為路由器添加接口連接子網

 

   7)創建一台實例,使用配置的VXLAN網絡

      注意選擇配置vxlan的網絡配置

 

   8)為創建的實例配置浮動IP

 

   配置浮動IP后的實例

 

1.15.6 連接浮動IP測試

Connect to the instance over SSH. Because the customized image changed the root password earlier, you can log in directly as root.

[root@compute2 ~]# ssh root@10.0.0.115
root@10.0.0.115's password: 
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:fc:70:31 brd ff:ff:ff:ff:ff:ff
    inet 1.1.1.101/24 brd 1.1.1.255 scope global eth0
    inet6 fe80::f816:3eff:fefc:7031/64 scope link 
       valid_lft forever preferred_lft forever
# ping baidu.com -c1
PING baidu.com (111.13.101.208): 56 data bytes
64 bytes from 111.13.101.208: seq=0 ttl=127 time=5.687 ms

--- baidu.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 5.687/5.687/5.687 ms

   查看當前網絡拓撲

 

   到此VXLAN網絡已實現

1.16 Using the OpenStack APIs

   官方API列表:https://docs.openstack.org/pike/api/

   官方提供了豐富的API接口,方便用戶的使用。可以使用curl命令調用API

curl is a Linux tool that can send and receive data over many protocols, including HTTP. OpenStack's APIs are exposed as plain URLs (for example the Identity endpoint http://controller:35357/v3), so they can be called directly with curl.

1.16.1 獲取token方法

獲取token

[root@controller ~]# openstack token issue |awk '/ id /{print $4}' 
gAAAAABaa0MpXNGCHgaytnvyPMbIF3IecIu9jA4WeMaL1kLWueNYs_Q1APXwdXDU7K34wdLg0I1spUIzDhAkst-Qdrizn_L3N5YBlApUrkY7gSw96MkKpTTDjUhIgm0eAD85Ayi6TL_1HmJJQIhm5ERY91zcKi9dvl73jj0dFNDWRqD9Cc9_oPA

Assign the token to a shell variable:

token=`openstack token issue |awk '/ id /{print $4}'`
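
The token can also be requested without the openstack client by POSTing to the Identity API directly; a sketch, where admin/ADMIN_PASS are placeholders for the real credentials and the token is returned in the X-Subject-Token response header:

curl -si -H "Content-Type: application/json" \
  -d '{"auth":{"identity":{"methods":["password"],"password":{"user":{"name":"admin","domain":{"id":"default"},"password":"ADMIN_PASS"}}},"scope":{"project":{"name":"admin","domain":{"id":"default"}}}}' \
  http://controller:35357/v3/auth/tokens | awk '/X-Subject-Token/{print $2}'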

1.16.2 常用獲取命令

參考:http://www.qstack.com.cn/archives/168.html

使用api端口查看鏡像列表

curl -H "X-Auth-Token:$token"  -H "Content-Type: application/json"  http://10.0.0.32:9292/v2/images

獲取roles列表

curl -H "X-Auth-Token:$token" -H "Content-Type: application/json" http://10.0.0.11:35357/v3/roles

獲取主機列表

curl -H "X-Auth-Token:$token" -H "Content-Type: application/json" http://10.0.0.11:8774/v2.1/servers

獲取網絡列表

curl -H "X-Auth-Token:$token" -H "Content-Type: application/json" http://10.0.0.11:9696/v2.0/networks

獲取子網列表

curl -H "X-Auth-Token:$token" -H "Content-Type: application/json" http://10.0.0.11:9696/v2.0/subnets

下載一個鏡像

curl -o clsn.qcow2 -H "X-Auth-Token:$token" -H "Content-Type: application/json" http://10.0.0.11:9292/v2/images/eb9e7015-d5ef-48c7-bd65-88a144c59115/file
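
The list calls above return single-line JSON; piping the output through Python's built-in formatter makes it readable, for example:

curl -s -H "X-Auth-Token:$token" http://10.0.0.11:9696/v2.0/networks | python -m json.tool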

1.17 附錄

1.17.1 附錄-常見錯誤

1、配置用戶時的錯誤

【錯誤】Multiple service matches found for 'identity', use an ID to be more specific.

解決辦法:

openstack endpoint list # 查看列表

    openstack endpoint delete  'id'  # 利用ID刪除API 端點

    openstack service list  # 查看服務列表

2、用戶管理時錯誤

HTTP 503錯誤:

    glance日志位置: /var/log/glance/

    If the user was deleted and then recreated, associate the role with it again:

    openstack role add --project service --user glance admin

3、未加載環境變量時出錯

[root@controller ~]# openstack user list

Missing parameter(s):

Set a username with --os-username, OS_USERNAME, or auth.username

Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url

Set a scope, such as a project or domain, set a project scope with --os-project-name, OS_PROJECT_NAME or auth.project_name, set a domain scope with --os-domain-name, OS_DOMAIN_NAME or auth.domain_name

1.17.2 附錄-OpenStack組件使用的默認端口號

OpenStack service: default port(s) (port type)

Block Storage (cinder): 8776 (publicurl and adminurl)
Compute (nova) endpoints: 8774 (publicurl and adminurl)
Compute API (nova-api): 8773, 8775
Compute ports for access to virtual machine consoles: 5900-5999
Compute VNC proxy for browsers (openstack-nova-novncproxy): 6080
Compute VNC proxy for traditional VNC clients (openstack-nova-xvpvncproxy): 6081
Proxy port for HTML5 console used by Compute service: 6082
Data processing service (sahara) endpoint: 8386 (publicurl and adminurl)
Identity service (keystone) administrative endpoint: 35357 (adminurl)
Identity service public endpoint: 5000 (publicurl)
Image service (glance) API: 9292 (publicurl and adminurl)
Image service registry: 9191
Networking (neutron): 9696 (publicurl and adminurl)
Object Storage (swift): 6000, 6001, 6002
Orchestration (heat) endpoint: 8004 (publicurl and adminurl)
Orchestration AWS CloudFormation-compatible API (openstack-heat-api-cfn): 8000
Orchestration AWS CloudWatch-compatible API (openstack-heat-api-cloudwatch): 8003
Telemetry (ceilometer): 8777 (publicurl and adminurl)

1.17.3 附錄-其他基礎服務使用的默認端口號

Service: default port (used by)

HTTP: 80 (OpenStack dashboard (Horizon) when it is not configured to use secure access)
HTTP alternate: 8080 (OpenStack Object Storage (swift) service)
HTTPS: 443 (any OpenStack service that is enabled for SSL, especially the secure-access dashboard)
rsync: 873 (OpenStack Object Storage; required)
iSCSI target: 3260 (OpenStack Block Storage; required)
MySQL database service: 3306 (most OpenStack components)
Message Broker (AMQP traffic): 5672, 25672 (OpenStack Block Storage, Networking, Orchestration, and Compute)
NTP (chrony): 123, 323 (time synchronization)
memcached: 11211 (caching service)

1.17.4 附錄-openstack新建雲主機流程圖

虛擬機啟動過程文字表述如下:

1.	界面或命令行通過RESTful API向keystone獲取認證信息。
2.	keystone通過用戶請求認證信息,並生成auth-token返回給對應的認證請求。
3.	界面或命令行通過RESTful API向nova-api發送一個boot instance的請求(攜帶auth-token)。
4.	nova-api接受請求后向keystone發送認證請求,查看token是否為有效用戶和token。
5.	keystone驗證token是否有效,如有效則返回有效的認證和對應的角色(注:有些操作需要有角色權限才能操作)。
6.	通過認證后nova-api和數據庫通訊。
7.	初始化新建虛擬機的數據庫記錄。
8.	nova-api通過rpc.call向nova-scheduler請求是否有創建虛擬機的資源(Host ID)。
9.	nova-scheduler進程偵聽消息隊列,獲取nova-api的請求。
10.	nova-scheduler通過查詢nova數據庫中計算資源的情況,並通過調度算法計算符合虛擬機創建需要的主機。
11.	對於有符合虛擬機創建的主機,nova-scheduler更新數據庫中虛擬機對應的物理主機信息。
12.	nova-scheduler通過rpc.cast向nova-compute發送對應的創建虛擬機請求的消息。
13.	nova-compute會從對應的消息隊列中獲取創建虛擬機請求的消息。
14.	nova-compute通過rpc.call向nova-conductor請求獲取虛擬機消息。(Flavor)
15.	nova-conductor從消息隊隊列中拿到nova-compute請求消息。
16.	nova-conductor根據消息查詢虛擬機對應的信息。
17.	nova-conductor從數據庫中獲得虛擬機對應信息。
18.	nova-conductor把虛擬機信息通過消息的方式發送到消息隊列中。
19.	nova-compute從對應的消息隊列中獲取虛擬機信息消息。
20.	nova-compute通過keystone的RESTful API拿到認證的token,並通過HTTP請求glance-api獲取創建虛擬機所需要的鏡像。
21.	glance-api向keystone認證token是否有效,並返回驗證結果。
22.	token驗證通過,nova-compute獲得虛擬機鏡像信息(URL)。
23.	nova-compute通過keystone的RESTful API拿到認證的token,並通過HTTP請求neutron-server獲取創建虛擬機所需要的網絡信息。
24.	neutron-server向keystone認證token是否有效,並返回驗證結果。
25.	token驗證通過,nova-compute獲得虛擬機網絡信息。
26.	nova-compute通過keystone的RESTful API拿到認證的token,並通過HTTP請求cinder-api獲取創建虛擬機所需要的持久化存儲信息。
27.	cinder-api向keystone認證token是否有效,並返回驗證結果。
28.	token驗證通過,nova-compute獲得虛擬機持久化存儲信息。
29.	nova-compute根據instance的信息調用配置的虛擬化驅動來創建虛擬機。

1.17.5 附錄-MetaData IP 169.254.169.254說明

Reference: http://server.51cto.com/sVirtual-516706.htm

OpenStack metadata

要理解如何實現的,我們需要先了解OpenStackmetadatametadata字面上是元數據,主要用來給客戶提供一個可以修改設置OpenStack instence(雲主機)的機制,就像我們想在虛擬機放置一個公鑰這樣的需求,或者設置主機名等都可以通過metadata來實現。讓我來梳理一下思路:

 1.OpenStack有一個叫做Metadata的東東。

 2.我們創建虛擬機時候設置的主機名、密鑰對,都保存在Metadata中。

 3.虛擬機創建后,在啟動的時候獲取Metadata,並進行系統配置。

虛擬機如何取到Metadata?

那么虛擬機到底是怎么取到這個metadata?讓我們在虛擬機試試這個。

$ curl http://169.254.169.254
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04
latest
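
Individual values live under the dated (or latest) sub-paths; from inside an instance, for example:

$ curl http://169.254.169.254/latest/meta-data/hostname
$ curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key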

為啥是169.254.169.254?

或許你和我有一樣的疑問,為啥這個metadata的ip地址是169.254.169.254?

這個就要提到Amazon了。因為metadata是亞馬遜提出來的。然后大家再給亞馬遜定制各種操作系統鏡像的時候獲取metadataapi地址就寫的是169.254.169.254

為了這些鏡像也能在OpenStack上運行,為了兼容它。OpenStack就保留了這個地址。其實早期的OpenStack版本是通過iptables NAT來映射169.254.169.254到真實APIIP地址上。

不過現在更靈活了,直接在虛擬機里面增加了一條路由條目來實現,讓虛擬機順利的訪問到這個IP地址。關於這個IP的產生需要了解到‘命名空間’的概念,關於命名空間可以參考這篇博文: http://blog.csdn.net/preterhuman_peak/article/details/40857117

進入命名空間

[root@controller ~]# ip  netns  exec qdhcp-54f942f7-cc28-4292-a4d6-e37b8833e35f  /bin/bash 
[root@controller ~]# 
[root@controller ~]# ifconfig 
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 3  bytes 1728 (1.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 1728 (1.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ns-432508f9-da: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.101  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::f816:3eff:fedb:5a54  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:db:5a:54  txqueuelen 1000  (Ethernet)
        RX packets 3609  bytes 429341 (419.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 777  bytes 89302 (87.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

命名空間中的進程

[root@controller ~]# netstat  -lntup 
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      31094/python2       
tcp        0      0 10.0.0.101:53           0.0.0.0:*               LISTEN      41418/dnsmasq       
tcp        0      0 169.254.169.254:53      0.0.0.0:*               LISTEN      41418/dnsmasq       
tcp6       0      0 fe80::f816:3eff:fedb:53 :::*                    LISTEN      41418/dnsmasq       
udp        0      0 10.0.0.101:53           0.0.0.0:*                           41418/dnsmasq       
udp        0      0 169.254.169.254:53      0.0.0.0:*                           41418/dnsmasq       
udp        0      0 0.0.0.0:67              0.0.0.0:*                           41418/dnsmasq       
udp6       0      0 fe80::f816:3eff:fedb:53 :::*                                41418/dnsmasq      

1.17.6 附錄-將控制節點秒變計算節點

1)在控制節點操作

yum -y install openstack-nova-compute

2)修改nova配置文件

[root@controller ~]# vim /etc/nova/nova.conf
[vnc]
...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

3)啟動計算節點服務

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
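
Afterwards the controller should appear as an additional nova-compute service:

[root@controller ~]# openstack compute service list | grep nova-compute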

1.17.7 附錄-如何把實例轉換為鏡像

需求說明:將一台配置好的服務器,做成鏡像,利用該鏡像創建新的實例

1)對實例進行拍攝快照

 

   設置快照名稱

 

   快照創建文件

 

   The result, however, is listed as a snapshot rather than a regular image, which is not ideal; the steps below convert it into an image.

2) Check the snapshot's identifier (image ID).

 

   3)在glace服務端查看鏡像文件

[root@compute2 ~]# ll /var/lib/glance/images/ -h 
total 1.9G
-rw-r----- 1 glance glance 1.1G Jan 26 16:27 1473524b-df75-45f5-afc2-83ab3e6915cc
-rw-r----- 1 glance glance  22M Jan 26 21:33 1885a4c7-d400-4d97-964c-eddcbeb245a3
-rw-r----- 1 glance glance 857M Jan 26 09:37 199bae53-fc7b-4eeb-a02a-83e17ae73e20
-rw-r----- 1 glance glance  13M Jan 25 11:31 68222030-a808-4d05-978f-1d4a6f85f7dd
-rw-r----- 1 glance glance  13M Jan 23 18:20 9d92c601-0824-493a-bc6e-cecb10e9a4c6

    將生成的鏡像文件移動到其他目錄

[root@compute2 ~]# mv /var/lib/glance/images/1885a4c7-d400-4d97-964c-eddcbeb245a3  /root

   4)在web界面刪除剛剛生成的快照

 

   5)將鏡像文件重新上傳

[root@compute2 ~]# . admin-openrc 
[root@compute2 ~]# openstack image create "clsn-image-upload"   --file 1885a4c7-d400-4d97-964c-eddcbeb245a3   --disk-format qcow2 --container-format bare   --public
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | 45fdc3a04021042855890712f31de1f9                     |
| container_format | bare                                                 |
| created_at       | 2018-01-26T13:46:15Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/ab30d820-94e5-4567-8110-605759745112/file |
| id               | ab30d820-94e5-4567-8110-605759745112                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | clsn-image-upload                                    |
| owner            | d0dfbdbc115b4a728c24d28bc1ce1e62                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 22085632                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2018-01-26T13:46:40Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+

   6) View the newly uploaded image again.

   7)使用新鏡像創建一台實例

 

   至此實例轉換為鏡像完成

1.18 參考文獻

[1] [openstack官方參考文檔] https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/

[2] https://zh.wikipedia.org/wiki/%e9%9b%b2%e7%ab%af%e9%81%8b%e7%ae%97

[3] http://www.ruanyifeng.com/blog/2017/07/iaas-paas-saas.html

[4] https://wiki.openstack.org/wiki/Main_Page

[5] https://zh.wikipedia.org/wiki/OpenStack

[6] https://www.cnblogs.com/pythonxiaohu/p/5861409.html

[7] https://linux.cn/article-5019-1.html

[8] https://www.cnblogs.com/endoresu/p/5018688.html

[9] https://developer.openstack.org/api-ref/compute/


免責聲明!

本站轉載的文章為個人學習借鑒使用,本站對版權不負任何法律責任。如果侵犯了您的隱私權益,請聯系本站郵箱yoyou2525@163.com刪除。



 
粵ICP備18138465號   © 2018-2025 CODEPRJ.COM