OpenStack Deployment in Practice


OpenStack
Introduction to OpenStack
OpenStack is a free, open-source platform that helps service providers deliver infrastructure services similar to Amazon EC2 and S3. OpenStack currently has three core projects: Compute (Nova), Object Storage (Swift), and Image Service (Glance). Each project can be installed and run independently. This document will help you learn OpenStack quickly.
OpenStack Background
OpenStack began in July 2010 as a joint effort by Rackspace Cloud and NASA, combining Rackspace's Cloud Files platform with NASA's Nebula platform, with the goal of letting any organization create and offer cloud computing services.
More than 150 companies now participate in the project, including Citrix Systems, Dell, AMD, Intel, Cisco, and HP. OpenStack's first release, Austin, was the first open-source cloud computing platform, built from Rackspace's cloud server and cloud files technology together with NASA's Nebula technology. Seemingly in response, Amazon began offering new users a free year of AWS. After the Austin release, Microsoft announced that Windows Server 2008 R2 Hyper-V could be integrated with OpenStack: Microsoft would provide architectural and technical guidance to the project and would write the code needed for OpenStack to run on Microsoft's virtualization platform, with that code to be published to the project.
What Is OpenStack? The Core Projects
As introduced above, OpenStack's three core projects are Compute (Nova), Object Storage (Swift), and Image Service (Glance), each of which can be installed and run independently. There are also two newer projects: Identity (Keystone) and Dashboard (Horizon).
OpenStack Compute is a cloud controller used to start virtual instances for a user or a group; it is also used to configure networking for each instance, or for projects that contain multiple instances.
OpenStack Object Storage is a system for storing objects in a large-capacity system with built-in redundancy and fault tolerance. Object storage has many applications: backing up or archiving data, storing graphics or video (streaming data to a user's browser), storing secondary or tertiary static data, developing new applications that integrate with data storage, storing data when capacity is hard to predict, and building elastic, flexible cloud-storage web applications.
OpenStack Image Service is a lookup and retrieval system for virtual machine images. It can be configured three ways: using OpenStack Object Storage to store images; using Amazon S3 storage directly; or using S3 storage with OpenStack Object Storage as an intermediary for S3 access.
Five releases have shipped so far:
1. Austin
2. Bexar
3. Cactus
4. Diablo
5. Essex
OpenStack Features
OpenStack lets us build our own IaaS and offer Amazon Web Services-like services to users:
1. Ordinary users can sign up for cloud services and view their usage and billing.
2. Developers and operators can create and store custom images of their applications, and use those images to launch and monitor instances.
3. Platform administrators can configure and operate infrastructure such as networking and storage.
OpenStack's strength is its modularity: the platform is made up of independent components, each nova component can be installed on its own server, components share no state, and they communicate asynchronously through a message queue (MQ). You can also pick the components you need to build a customized service, which makes improvement easy. The Apache license permits enterprise use.
OpenStack Architecture
In the Compute (Nova) software architecture, each nova-xxx component is a daemon written in Python; the daemons exchange information and carry out requests through a queue and the nova database, while users interact with the other components through the web service exposed by nova-api. Glance is a relatively independent piece of infrastructure; nova talks to it through glance-api.
What the Nova components do:
- nova-api is the center of Nova. It serves all external calls; besides OpenStack's own API, it offers a partially EC2-compatible API, so EC2 management tools can also be used for day-to-day administration of nova.
- nova-compute creates, terminates, migrates, and resizes virtual machine instances. Its operation is simple to describe: receive requests from the queue, execute them through the relevant system commands, then update state in the database.
- nova-volume manages the creation, attachment, and detachment of volumes mapped to instances.
- nova-network receives networking tasks from the queue and then manipulates the virtual machines' networking, for example creating bridged networks or changing iptables rules.
- nova-scheduler provides scheduling: it decides which machine with free resources should start a new instance.
- The queue passes messages between the daemons. Any message queue server that speaks AMQP will do; RabbitMQ is currently the official recommendation.
- The SQL database stores the cloud infrastructure's data, including instance data, network data, and so on.
The user dashboard is an optional project. It provides a web interface for ordinary users and administrators to manage and configure their compute resources.
All compute nodes exchange images and network traffic with the control node, so the control node is the bottleneck of the whole architecture; a single-node configuration is mainly for proofs of concept or lab environments. Multi-node: add a node that runs nova-volume on its own, run nova-network on the compute nodes, and choose DHCP or VLAN mode according to your network hardware so that management traffic and public traffic are separated.
OpenStack in the Enterprise
More and more enterprises are not just talking about OpenStack but deploying it in production, including Rackspace's Puppet-based public cloud. Since development began, OpenStack has been seen as the Linux of cloud computing, and its push for open-source services has won the support of many companies. More than 100 organizations contribute to the code base or participate in the project in other ways. Sina's cloud computing group is working with OpenStack to build an IaaS platform that can manage and provision a variety of virtualization technologies, and will be a significant contributor to the open-source code base.
Lab Environment
1. Install Oracle VM VirtualBox
2. Prepare a CentOS image
3. Create the virtual machine openstack-node1
Note: network adapter configuration


Lab Procedure
Configure NIC 1
vim /etc/sysconfig/network-scripts/ifcfg-eth0

Configure NIC 2
vim /etc/sysconfig/network-scripts/ifcfg-eth1
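The screenshots of the two interface files are missing here. A minimal sketch of typical contents for this lab, assuming eth0 is the VirtualBox host-only adapter carrying the static management address 192.168.56.111 (the address used throughout this guide) and eth1 is a NAT adapter using DHCP for outbound access; adjust to your own topology:

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- host-only management network
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.56.111
NETMASK=255.255.255.0

# /etc/sysconfig/network-scripts/ifcfg-eth1 -- NAT, outbound internet access
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp
```

After editing both files, `service network restart` applies the changes.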

1.3.3 Internal name resolution
(1) Set the host's hostname
vim /etc/sysconfig/network
Change the following setting:
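The screenshot is missing; the setting changed here is presumably the HOSTNAME line, using the same name that is added to /etc/hosts in the next step:

```ini
NETWORKING=yes
HOSTNAME=open-node1.example.com
```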

(2) Set up name resolution for the host node
vim /etc/hosts
Add the following line:
192.168.56.111 open-node1.example.com
Output:

1.3.4 Kernel parameter tuning
vim /etc/sysctl.conf
Change it to the following:
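The screenshot of the sysctl changes is missing. The parameters a lab like this typically sets here are IP forwarding and relaxed reverse-path filtering, which OpenStack networking needs; a sketch (verify against your lab manual):

```ini
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
```

`sysctl -p` applies the settings immediately; rebooting also picks them up.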


reboot (restart the machine)

2 Installing the Lab Software
2.1 Base packages
rpm -ivh http://mirrors.ustc.edu.cn/fedora/epel//6/x86_64/epel-release-6-8.noarch.rpm
python-pip
2.2 Installing packages with yum
yum install -y python-pip gcc gcc-c++ make libtool patch automake python-devel libxslt-devel MySQL-python openssl-devel libudev-devel git wget libvirt-python libvirt qemu-kvm gedit python-numdisplay python-eventlet device-mapper bridge-utils libffi-devel libffi python-crypto lrzsz swig

2.2.1 Install Red Hat's RDO repository
vim /etc/yum.repos.d/rdo-release.repo
Paste the following:
[openstack-icehouse]
name=OpenStack Icehouse Repository
baseurl=http://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6/
enabled=1
gpgcheck=0
gpgkey=

2.2.2 Install keystone
yum install openstack-keystone python-keystoneclient
2.2.3 Install glance
yum install openstack-glance python-glanceclient python-crypto

2.2.4 Install the Nova control node packages
yum install openstack-nova-api openstack-nova-cert \
openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler python-novaclient

2.2.5 Install the neutron control node packages
yum install openstack-neutron openstack-neutron-ml2 python-neutronclient openstack-neutron-linuxbridge
2.2.6 Install horizon
yum install -y httpd mod_wsgi memcached python-memcached openstack-dashboard
2.2.7 Install cinder
yum install openstack-cinder python-cinderclient


3 Deploying the Base Services
3.1 Database service (MySQL)
3.1.1 Install MySQL
[root@open-node1 ~]# yum install mysql-server
[root@open-node1 ~]# cp /usr/share/mysql/my-medium.cnf /etc/my.cnf
[root@open-node1 ~]# vim /etc/my.cnf
Add the following configuration:

[mysqld]
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

[root@open-node1 ~]# chkconfig mysqld on
[root@open-node1 ~]# /etc/init.d/mysqld start
3.1.2 Log in to the database

[root@open-node1 ~]# mysql -u root
mysql> show databases;

3.1.3 Create the keystone database and grant privileges

mysql> create database keystone;
mysql> grant all on keystone.* to keystone@'192.168.56.0/255.255.255.0' identified by 'keystone';
3.1.4 Create the glance database and grant privileges

mysql> create database glance;
mysql> grant all on glance.* to glance@'192.168.56.0/255.255.255.0' identified by 'glance';
3.1.5 Create the nova database and grant privileges
mysql> create database nova;
mysql> grant all on nova.* to nova@'192.168.56.0/255.255.255.0' identified by 'nova';

3.1.6 Create the neutron database and grant privileges

mysql> create database neutron;
mysql> grant all on neutron.* to neutron@'192.168.56.0/255.255.255.0' identified by 'neutron';

3.1.7 Create the cinder database and grant privileges
mysql> create database cinder;
mysql> grant all on cinder.* to cinder@'192.168.56.0/255.255.255.0' identified by 'cinder';
mysql> show databases;

Output:

| Database |
+--------------------+
| information_schema |
| cinder |
| glance |
| keystone |
| mysql |
| neutron |
| nova |
| test |
+--------------------+
8 rows in set (0.00 sec)
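The five create/grant pairs above all follow one pattern (database name, user, and password are identical), so the SQL can also be generated in one pass. This sketch only prints the statements; pipe its output to `mysql -u root` to apply them:

```shell
# Print CREATE/GRANT statements for each OpenStack service database.
# Convention (from the steps above): db name, user, and password are identical.
for svc in keystone glance nova neutron cinder; do
  printf "CREATE DATABASE IF NOT EXISTS %s;\n" "$svc"
  printf "GRANT ALL ON %s.* TO %s@'192.168.56.0/255.255.255.0' IDENTIFIED BY '%s';\n" \
    "$svc" "$svc" "$svc"
done
```
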

3.2 Message broker service (RabbitMQ)
3.2.1 RabbitMQ
3.2.2 Install RabbitMQ
mysql> \q
[root@open-node1 ~]# yum -y install ncurses-devel
[root@open-node1 ~] # yum install -y erlang rabbitmq-server
[root@open-node1 ~] # chkconfig rabbitmq-server on
Stop the firewall:
(1)[root@open-node1 ~]# /etc/init.d/iptables stop
Output:
iptables: Setting chains to policy ACCEPT: nat mangle filte[ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules: [ OK ]

(2)[root@open-node1 ~]# chkconfig iptables off
(3)[root@open-node1 ~]# chkconfig --list | grep iptables
Output:
iptables 0:off 1:off 2:off 3:off 4:off 5:off 6:off

3.2.3 Enable the web monitoring plugin
Once enabled, the web management UI is available at http://IP:15672/. The rabbitmq-server installed by yum does not put the rabbitmq-plugins command on the search path, so it must be run by absolute path.
[root@open-node1 ~]# /usr/lib/rabbitmq/bin/rabbitmq-plugins list
[root@open-node1 ~]# /usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management
[root@open-node1 ~]# /etc/init.d/rabbitmq-server restart
Open a local browser and go to http://IP:15672/; here the IP is 192.168.56.111, so browse to
http://192.168.56.111:15672 to reach the RabbitMQ management UI shown below (log in with the default guest/guest account).


4 Identity Service (Keystone)
Keystone provides authentication, policy, and token services to the other OpenStack services, and manages Domains, Projects, Users, Groups, and Roles. It has been part of the project since the Essex release.
4.1.1 Download the source packages
[root@open-node1 ~]# cd /usr/local/src
[root@open-node1 src]#
wget https://launchpad.net/keystone/icehouse/2014.1.3/+download/keystone-2014.1.3.tar.gz
wget https://launchpad.net/nova/icehouse/2014.1.3/+download/nova-2014.1.3.tar.gz
wget https://launchpad.net/glance/icehouse/2014.1.3/+download/glance-2014.1.3.tar.gz
wget https://launchpad.net/horizon/icehouse/2014.1.3/+download/horizon-2014.1.3.tar.gz
wget https://launchpad.net/neutron/icehouse/2014.1.3/+download/neutron-2014.1.3.tar.gz
wget https://launchpad.net/cinder/icehouse/2014.1.3/+download/cinder-2014.1.3.tar.gz


tar zxf keystone-2014.1.3.tar.gz
tar zxf nova-2014.1.3.tar.gz
tar zxf glance-2014.1.3.tar.gz
tar zxf neutron-2014.1.3.tar.gz
tar zxf horizon-2014.1.3.tar.gz
tar zxf cinder-2014.1.3.tar.gz

4.1.2 Install keystone
[root@open-node1 src]# cd keystone-2014.1.3
4.2.2 Create the configuration files
[root@open-node1 keystone-2014.1.3]#
cp etc/keystone-paste.ini /etc/keystone
[root@open-node1 keystone-2014.1.3]#
cp etc/policy.v3cloudsample.json /etc/keystone

[root@open-node1 keystone-2014.1.3]# cd /etc/keystone
[root@open-node1 keystone]# ll

[root@open-node1 keystone]# mv policy.v3cloudsample.json policy.v3cloud.json
4.2.3 Configure keystone
(1) Configure the admin_token
[root@open-node1 ~]# ADMIN_TOKEN=$(openssl rand -hex 10)
[root@open-node1 ~]# echo $ADMIN_TOKEN
7de805df0ce7a2e127ab
The value above is the randomly generated token
[root@open-node1 ~]# vim /etc/keystone/keystone.conf

:set number  (show line numbers in vim)
[DEFAULT]
(remove the leading # from each line before editing)
13 admin_token=7de805df0ce7a2e127ab
374 debug=true
439 log_file=keystone.log
444 log_dir=/var/log/keystone
(3) Configure the database
Set the connection option in the [database] section:
619 connection=mysql://keystone:keystone@192.168.56.111/keystone
(4) Verify the configuration
[root@open-node1 ~]#
grep "^[a-z]" /etc/keystone/keystone.conf
Output (from a different run, so the token differs):
admin_token=9991692567660be100ff
debug=true
log_file=keystone.log
log_dir=/var/log/keystone
connection=mysql://keystone:keystone@192.168.56.111/keystone

[root@open-node1 keystone]# cd
4.2.4 Set up PKI tokens
[root@open-node1 ~]#
keystone-manage pki_setup --keystone-user root --keystone-group root
Output:
Generating RSA private key, 2048 bit long modulus
..........................+++
..............................+++
e is 65537 (0x10001)
Generating RSA private key, 2048 bit long modulus
.................................+++
.....................................+++
e is 65537 (0x10001)
Using configuration from /etc/keystone/ssl/certs/openssl.conf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName :PRINTABLE:'US'
stateOrProvinceName :ASN.1 12:'Unset'
localityName :ASN.1 12:'Unset'
organizationName :ASN.1 12:'Unset'
commonName :ASN.1 12:'www.example.com'
Certificate is to be certified until Sep 18 09:24:56 2026 GMT (3650 days)

Write out database with 1 new entries
Data Base Updated

[root@open-node1 ~]# chown -R root:root /etc/keystone/ssl    (set ownership)
[root@open-node1 ~]# chmod -R o-rwx /etc/keystone/ssl    (restrict permissions)


4.2.5 Sync the database
[root@open-node1 ~]# keystone-manage db_sync
[root@open-node1 ~]# mysql -h 192.168.56.111 -ukeystone -pkeystone -e " use keystone;show tables;"
Output:
+-----------------------+
| Tables_in_keystone |
+-----------------------+
| assignment |
| credential |
| domain |
| endpoint |
| group |
| migrate_version |
| policy |
| project |
| region |
| role |
| service |
| token |
| trust |
| trust_role |
| user |
| user_group_membership |
+-----------------------+

4.3 Managing keystone

4.3.1 Start keystone

[root@open-node1 ~]# keystone-all --config-file=/etc/keystone/keystone.conf

Output:
2016-09-21 18:15:20.071 11955 INFO keystone.common.environment.eventlet_server [-] Starting /usr/bin/keystone-all on 0.0.0.0:35357
2016-09-21 18:15:20.093 11955 INFO keystone.common.environment.eventlet_server [-] Starting /usr/bin/keystone-all on 0.0.0.0:5000
2016-09-21 18:15:20.094 11955 INFO eventlet.wsgi.server [-] (11955) wsgi starting up on http://0.0.0.0:35357/
2016-09-21 18:15:20.095 11955 INFO eventlet.wsgi.server [-] (11955) wsgi starting up on http://0.0.0.0:5000/

Press Ctrl+C to terminate the foreground process, then start it in the background:
[root@open-node1 ~]# nohup keystone-all --config-file=/etc/keystone/keystone.conf &
[1] 10992
Tail the log file to check progress:
[root@open-node1 ~]# tail -f /var/log/keystone/keystone.log
Output:
[root@open-node1 ~]# nohup: ignoring input and appending output to `nohup.out'
[root@open-node1 ~]# tail -f /var/log/keystone/keystone.log
Output:
2016-09-23 21:12:11.490 2199 DEBUG keystone-all [-] token.expiration = 3600 log_opt_values /usr/lib/python2.6/site-packages/oslo/config/cfg.py:1953
2016-09-23 21:12:11.490 2199 DEBUG keystone-all [-] token.provider = None log_opt_values /usr/lib/python2.6/site-packages/oslo/config/cfg.py:1953
2016-09-23 21:12:11.490 2199 DEBUG keystone-all [-] token.revocation_cache_time = 3600 log_opt_values /usr/lib/python2.6/site-packages/oslo/config/cfg.py:1953
2016-09-23 21:12:11.490 2199 DEBUG keystone-all [-] token.revoke_by_id = True log_opt_values /usr/lib/python2.6/site-packages/oslo/config/cfg.py:1953
2016-09-23 21:12:11.491 2199 DEBUG keystone-all [-] ******************************************************************************** log_opt_values /usr/lib/python2.6/site-packages/oslo/config/cfg.py:1955
2016-09-23 21:12:12.632 2199 WARNING keystone.openstack.common.versionutils [-] Deprecated: keystone.middleware.core.XmlBodyMiddleware is deprecated as of Icehouse in favor of support for "application/json" only and may be removed in K.
2016-09-23 21:12:12.690 2199 INFO keystone.common.environment.eventlet_server [-] Starting /usr/bin/keystone-all on 0.0.0.0:35357
2016-09-23 21:12:12.723 2199 INFO keystone.common.environment.eventlet_server [-] Starting /usr/bin/keystone-all on 0.0.0.0:5000
2016-09-23 21:12:12.724 2199 INFO eventlet.wsgi.server [-] (2199) wsgi starting up on http://0.0.0.0:35357/
2016-09-23 21:12:12.725 2199 INFO eventlet.wsgi.server [-] (2199) wsgi starting up on http://0.0.0.0:5000/
4.3.3 Create the admin user
[root@open-node1 ~]# export OS_SERVICE_TOKEN=$ADMIN_TOKEN
[root@open-node1~]#
export OS_SERVICE_TOKEN=7de805df0ce7a2e127ab
[root@open-node1~]#
export OS_SERVICE_ENDPOINT=http://192.168.56.111:35357/v2.0
[root@open-node1 ~]# keystone role-list
Output:
+----------------------------------+----------+
| id | name |
+----------------------------------+----------+
| 1894a90878d3e92bab9fe2ff9ee4384b | _member_ |
+----------------------------------+----------+
(1) Create the admin user
[root@open-node1 ~]#
keystone user-create --name=admin --pass=admin
Output:
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | 4ab52cb511186e4d56841b7fcf6894ed |
| name | admin |
| username | admin |
+----------+----------------------------------+

(2) Create the admin role
[root@open-node1 ~]# keystone role-create --name=admin
Output:
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| id | 49f2c254f067137641678456382d94b8 |
| name | admin |
+----------+----------------------------------+

(3) Create the admin tenant
[root@open-node1 ~]#
keystone tenant-create --name=admin --description="Admin Tenant"
Output:
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Admin Tenant |
| enabled | True |
| id | cfacb264b2b3a4bbc5994460e80e5042 |
| name | admin |
+-------------+----------------------------------+


(4) Link the admin user, role, and tenant.
[root@open-node1 ~]# keystone user-role-add --user=admin --tenant=admin --role=admin

(5) Link the admin user, the _member_ role, and the admin tenant
[root@open-node1 ~]# keystone user-role-add --user=admin --role=_member_ --tenant=admin

Check the users, roles, and tenants just created:
[root@open-node1 ~]# keystone user-list
+----------------------------------+-------+---------+-------+
| id | name | enabled | email |
+----------------------------------+-------+---------+-------+
| 4ab52cb511186e4d56841b7fcf6894ed | admin | True | |
+----------------------------------+-------+---------+-------+

[root@open-node1 ~]# keystone role-list

+----------------------------------+----------+
| id | name |
+----------------------------------+----------+
| 1894a90878d3e92bab9fe2ff9ee4384b | _member_ |
| 49f2c254f067137641678456382d94b8 | admin |
+----------------------------------+----------+

[root@open-node1 ~]# keystone tenant-list

+----------------------------------+-------+---------+
| id | name | enabled |
+----------------------------------+-------+---------+
| cfacb264b2b3a4bbc5994460e80e5042 | admin | True |
+----------------------------------+-------+---------+

4.3.4 Create an ordinary user
[root@open-node1 ~]# keystone user-create --name=demo --pass=demo
Output:
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | c5886edf6229406080c0ac7cfdbb5e94 |
| name | demo |
| username | demo |
+----------+----------------------------------+

[root@open-node1 ~]# keystone tenant-create --name=demo --description="Demo Tenant"
Output:
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Demo Tenant |
| enabled | True |
| id | 8701ad62c68b889ab6b046480f97444b |
| name | demo |
+-------------+----------------------------------+

[root@open-node1 ~]# keystone user-role-add --user=demo --role=_member_ --tenant=demo
4.3.5 Create keystone's service and endpoint
[root@open-node1 ~]# keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
Output:
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Identity |
| enabled | True |
| id | 28365312bcd00630c36f820630c29bce |
| name | keystone |
| type | identity |
+-------------+----------------------------------+

List the services:
[root@open-node1 ~]# keystone service-list
+----------------------------------+----------+----------+--------------------+
| id | name | type | description |
+----------------------------------+----------+----------+--------------------+
| 28365312bcd00630c36f820630c29bce | keystone | identity | OpenStack Identity |
+----------------------------------+----------+----------+--------------------+

Creating the endpoint below requires the service ID generated when the service was created. Note that this ID is random; it must match the service ID from your own service-create output above (the sample outputs in this guide were captured across several runs, so the IDs shown do not all match each other).

[root@open-node1 ~]# keystone endpoint-create \
> --service-id=28365312bcd00630c36f820630c29bce \    (use your own service ID)
> --publicurl=http://192.168.56.111:5000/v2.0 \
> --internalurl=http://192.168.56.111:5000/v2.0 \
> --adminurl=http://192.168.56.111:35357/v2.0
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://192.168.56.111:35357/v2.0 |
| id | db02d960816d42c58cd4fce08f2ca4c0 |
| internalurl | http://192.168.56.111:5000/v2.0 |
| publicurl | http://192.168.56.111:5000/v2.0 |
| region | regionOne |
| service_id | d29483b5f2ed49528c6fc6d72d5bdc99 |
+-------------+----------------------------------+
[root@open-node1 ~]# keystone endpoint-list
Output:
+----------------------------------+-----------+---------------------------------+---------------------------------+----------------------------------+----------------------------------+
| id | region | publicurl | internalurl | adminurl | service_id |
+----------------------------------+-----------+---------------------------------+---------------------------------+----------------------------------+----------------------------------+
| db02d960816d42c58cd4fce08f2ca4c0 | regionOne | http://192.168.56.111:5000/v2.0 | http://192.168.56.111:5000/v2.0 | http://192.168.56.111:35357/v2.0 | d29483b5f2ed49528c6fc6d72d5bdc99 |
+----------------------------------+-----------+---------------------------------+---------------------------------+----------------------------------+----------------------------------+

[root@open-node1 ~]# keystone --help | grep list
Output:
ec2-credentials-list
endpoint-list List configured service endpoints.
role-list List all roles.
service-list List all services in Service Catalog.
tenant-list List all tenants.
user-list List users.
user-role-list List roles granted to a user.
4.4 Verify the keystone installation
[root@open-node1 ~]#
unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
4.4.1 Verification tests
[root@open-node1 ~]#
keystone --os_username=admin --os_password=admin --os-auth-url=http://192.168.56.111:35357/v2.0 token-get

[root@open-node1 ~]#
keystone --os-username=admin --os-password=admin --os-tenant-name=admin --os-auth-url=http://192.168.56.111:35357/v2.0 token-get
4.4.2 Configure environment variables
[root@open-node1 ~]# vim keystone-admin


Paste the following:
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.56.111:35357/v2.0

[root@open-node1 ~]# keystone token-get
[root@open-node1 ~]# source keystone-admin
[root@open-node1 ~]# keystone token-get

[root@open-node1 ~]# vim keystone-demo
Paste the following:
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.56.111:35357/v2.0
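Switching between the two identities is just a matter of sourcing the right file. A small illustrative helper (the `os_login` function name is my own invention; it assumes the keystone-admin and keystone-demo files above live in $HOME):

```shell
# Source the rc file for the given identity (admin or demo) and report it.
os_login() {
  . "$HOME/keystone-$1" || return 1
  echo "Acting as $OS_USERNAME in tenant $OS_TENANT_NAME"
}

# Example: os_login demo, then run keystone commands as that user.
```
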


[root@open-node1 ~]# source keystone-admin
[root@open-node1 ~]#
keystone user-role-list --user admin --tenant admin
[root@open-node1 ~]#
keystone user-role-list --user demo --tenant demo


[root@open-node1 ~]# source keystone-demo
[root@open-node1 ~]#
keystone user-role-list --user demo --tenant demo
Output:
You are not authorized to perform the requested action, admin_required. (HTTP 403)
5 Image Service (Glance)

5.1 Install Glance
[root@open-node1 ~]# cd /usr/local/src/glance-2014.1.3
[root@open-node1 glance-2014.1.3]# python setup.py install
5.2 Prepare the Glance configuration
5.2.1 Create the configuration directories
[root@open-node1 ~]#mkdir /etc/glance
[root@open-node1 ~]#mkdir /var/log/glance
[root@open-node1 ~]#mkdir /var/lib/glance
[root@open-node1 ~]#mkdir /var/run/glance
5.2.2 Copy the configuration files
[root@open-node1 ~]#cd /usr/local/src/glance-2014.1.3/etc
[root@open-node1 etc]# cp * /etc/glance
[root@open-node1 etc]# cd /etc/glance/
5.2.3 Rename some of the configuration files
[root@open-node1 ~]#mv logging.cnf.sample logging.cnf

[root@open-node1 ~]#
mv property-protections-policies.conf.sample property-protections-policies.conf

[root@open-node1 ~]#
mv property-protections-roles.conf.sample property-protections-roles.conf

5.3 Configure the MySQL database
5.3.1 Configuration files
[root@open-node1 glance]# vim /etc/glance/glance-api.conf

connection=mysql://glance:glance@192.168.56.111/glance

[root@open-node1 glance]# vim glance-registry.conf


connection=mysql://glance:glance@192.168.56.111/glance


5.3.2 Sync the database
[root@open-node1 glance]# glance-manage db_sync
[root@open-node1 glance]#
mysql -h 192.168.56.111 -u glance -pglance -e "use glance;show tables;"
Output:
+------------------+
| Tables_in_glance |
+------------------+
| image_locations |
| image_members |
| image_properties |
| image_tags |
| images |
| migrate_version |
| task_info |
| tasks |
+------------------+

5.4 Configure RabbitMQ

[root@open-node1 ~]# vim /etc/glance/glance-api.conf
Change the following settings:
notifier_strategy = rabbit
rabbit_host = 192.168.56.111
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
rabbit_notification_exchange = glance
5.5 Configure Keystone (edit per the lab manual; the items shown in red there are errors)
[root@open-node1 ~]# vim /etc/glance/glance-api.conf
648 admin_tenant_name=admin

[root@open-node1 ~]# vim /etc/glance/glance-registry.conf
178 admin_tenant_name=admin

[root@open-node1 ~]# diff /usr/local/src/glance-2014.1.3/etc/glance-api.conf /etc/glance/glance-api.conf

5.6 Start Glance
5.6.1 Start glance from the command line
[root@open-node1 ~]#

glance-api --config-file=/etc/glance/glance-api.conf

2016-10-02 15:20:24.269 10641 INFO glance.wsgi.server [-] Starting 1 workers
2016-10-02 15:20:24.274 10641 INFO glance.wsgi.server [-] Started child 10648
2016-10-02 15:20:24.280 10648 INFO glance.wsgi.server [-] (10648) wsgi starting up on http://0.0.0.0:9292/

[root@open-node1 ~]#
glance-registry --config-file=/etc/glance/glance-registry.conf


2016-10-02 15:25:38.859 10763 INFO glance.wsgi.server [-] Starting 1 workers
2016-10-02 15:25:38.863 10763 INFO glance.wsgi.server [-] Started child 10768
2016-10-02 15:25:38.869 10768 INFO glance.wsgi.server [-] (10768) wsgi starting up on http://0.0.0.0:9191/

5.6.2 Start glance with init scripts
[root@open-node1 ~]#git clone https://github.com/unixhot/openstack-inc.git

[root@open-node1 ~]#cd openstack-inc/control/init.d

[root@open-node1 init.d]#
cp openstack-keystone openstack-glance-* /etc/init.d/

cp: overwrite `/etc/init.d/openstack-keystone'? y
cp: overwrite `/etc/init.d/openstack-glance-api'? y
cp: overwrite `/etc/init.d/openstack-glance-registry'? Y

[root@open-node1 ~]# chmod +x /etc/init.d/openstack-glance-*
[root@open-node1 ~]# chkconfig --add openstack-glance-api
[root@open-node1 ~]#
chkconfig --add openstack-glance-registry
[root@open-node1 ~]# chkconfig openstack-glance-api on
[root@open-node1 ~]# chkconfig openstack-glance-registry on
[root@open-node1 ~]# /etc/init.d/openstack-glance-api start

Starting openstack-glance-api: [ OK ]

[root@open-node1 ~]#
/etc/init.d/openstack-glance-registry start

Starting openstack-glance-registry: [ OK ]

[root@open-node1 ~]# chkconfig --add openstack-keystone
[root@open-node1 ~]# chkconfig openstack-keystone on
[root@open-node1 ~]# ps aux | grep keystone

root 2419 0.0 2.8 398216 55736 pts/0 S 09:27 0:01 /usr/bin/python /usr/bin/keystone-all --config-file=/etc/keystone/keystone.conf
root 11044 0.0 0.0 103252 828 pts/0 S+ 15:36 0:00 grep keystone

[root@open-node1 ~]# /etc/init.d/openstack-keystone start

Starting keystone: [ OK ]

[root@open-node1 ~]# ps aux | grep keystone

root 2419 0.0 2.8 398216 55736 pts/0 S 09:27 0:01 /usr/bin/python /usr/bin/keystone-all --config-file=/etc/keystone/keystone.conf
root 11074 0.0 0.0 103252 828 pts/0 S+ 15:36 0:00 grep keystone
5.7 Testing Glance
Glance is a lookup and retrieval system for virtual machine images. It supports many image formats (AKI, AMI, ARI, ISO, QCOW2, Raw, VDI, VHD, VMDK) and provides functions to create and upload images, delete images, and edit basic image metadata. It has been part of the project since the Bexar release.
As the IaaS storage service it integrates with OpenStack Compute to store its images; it can also store documents, long-lived data such as logs, and website images and thumbnails.
OpenStack architecture, part 3: the Glance architecture
The OpenStack image service provides discovery, registration, and retrieval of OpenStack Nova virtual machine images. Through Glance, virtual machine images can be stored on a variety of back ends, such as simple file storage or object storage (for example OpenStack's Swift project).
Glance component architecture
• In Glance's current reference implementation, the Registry Server simply stores metadata in a SQL database.
• At the front end, the API Server provides service to multiple clients.
• Multiple storage back ends can be used. Glance currently supports S3, Swift, simple file storage, and read-only HTTPS storage.
• Other back ends, such as distributed storage systems (Sheepdog or Ceph), may be supported later.
Glance component architecture traits:
1. Component-based architecture: new features can be added quickly
2. High availability: supports heavy load
3. Fault tolerance: independent processes avoid cascading failures
4. Open standards: a reference implementation for community-driven APIs
OpenStack functions
1. The Dashboard provides resource-pool management, reorganizing physical resources into pools.
2. Command-line live migration of virtual machines, plus lifecycle management such as creating, starting, suspending, resuming, shutting down, migrating, and destroying virtual machines.
3. Common runtime environments can be saved as virtual machine templates, so a series of identical or similar environments can be created conveniently; user templates must be created manually, similar to Eucalyptus.
4. Where compute resources allow: high availability, dynamic load balancing, and backup and restore.
5. All physical and virtual machines are monitored, with reports generated and alerts raised when necessary; monitoring and reporting can reportedly be implemented with external components.

5.7.1 Register glance in keystone

[root@open-node1 ~]# glance image-list
[root@open-node1 ~]# source keystone-admin
[root@open-node1 ~]# glance image-list


[root@open-node1 ~]# keystone service-create --name=glance --type=image

+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | |
| enabled | True |
| id | 4c167f51163d462aae11f9112144836b |
| name | glance |
| type | image |
+-------------+----------------------------------+


[root@open-node1 ~]# keystone endpoint-create \
> --service-id=4c167f51163d462aae11f9112144836b \    (use your own service ID)
> --publicurl=http://192.168.56.111:9292 \
> --internalurl=http://192.168.56.111:9292 \
> --adminurl=http://192.168.56.111:9292
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://192.168.56.111:9292 |
| id | 85f05ad0fa0e4510846bd0c1a6bee7c9 |
| internalurl | http://192.168.56.111:9292 |
| publicurl | http://192.168.56.111:9292 |
| region | regionOne |
| service_id | 73e4849e6c6f49fdbee4b0bce8247fe4 |
+-------------+----------------------------------+

[root@open-node1 ~]# keystone service-list

+----------------------------------+----------+----------+--------------------+
| id | name | type | description |
+----------------------------------+----------+----------+--------------------+
| 73e4849e6c6f49fdbee4b0bce8247fe4 | glance | image | |
| 136b5312bcd044b2836f820630c29bce | keystone | identity | OpenStack Identity |
+----------------------------------+----------+----------+--------------------+

[root@open-node1 ~]# keystone endpoint-list

[root@open-node1 ~]# glance image-list

+----+------+-------------+------------------+------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+----+------+-------------+------------------+------+--------+

5.7.2 Test glance with an image
Download an image:
[root@openstack-node1 ~]# wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
Then upload it:
[root@openstack-node1 ~]# glance image-create --name "cirros-0.3.0-x86_64" --disk-format qcow2 --container-format bare --is-public True --file cirros-0.3.0-x86_64-disk.img
The figure below lists the image's metadata.

[root@openstack-node1 ~]# glance image-list
[root@openstack-node1 ~]# cd /var/lib/glance/images
[root@openstack-node1 images]# ll
total 9536
-rw-r----- 1 root root 9761280 Sep 27 17:05 9446e7de-5e5b-40cf-8be5-b2eb089f2447
Images are stored on disk under their image ID.


6 Compute Service (Nova)
Nova is a controller that manages the entire lifecycle of virtual machine instances for individual users or groups, providing virtual services on demand. It is responsible for creating, booting, shutting down, suspending, pausing, resizing, migrating, rebooting, and destroying virtual machines, and for configuring specifications such as CPU and memory. It has been part of the project since the Austin release.
How Nova works: a user requests resources through Horizon (the dashboard), which calls nova-api. OpenStack first authenticates the user, a function handled by the Keystone module, and then the scheduler (nova-scheduler) decides which compute node the new virtual machine will be created on. All tasks communicate asynchronously through the MQ.
Cloud administrators can also manage and create virtual machines with Euca2ools, because OpenStack supports the EC2 and S3 interfaces.
OpenStack architecture, part 2: the Swift architecture
OpenStack Object Storage (Swift) is one of the sub-projects of the OpenStack open-source cloud project; its predecessor is the Rackspace Cloud Files project. It stores objects in a large-capacity system with built-in redundancy and fault tolerance. Object storage has many uses: backing up or archiving data, storing graphics or video, storing secondary or tertiary static data, developing new storage-integrated applications, storing data when capacity is hard to predict, and building elastic, flexible cloud-storage web applications.
Swift functions
Swift uses commodity servers to build redundant, scalable, distributed object-storage clusters with capacities up to the petabyte level. Swift offers the same service as AWS S3.
How OpenStack creates an instance: the user sends a request to nova-api, either through the OpenStack API or the EC2 API; in the OpenStack API case the request reaches server.py's controller.create(); the caller then waits until the instance's state becomes running, during which the network service allocates an IP to the instance.

When installing the control node, install all of the nova services except nova-compute.
6.1 Install Nova
(already installed via yum in section 2.2.4, so this step can be skipped)
[root@openstack-node1 ~]# cd /usr/local/src/nova-2014.1.3
[root@openstack-node1 nova-2014.1.3]# python setup.py install
6.2 Create the configuration files
6.2.1 Create the directories
[root@openstack-node1 nova-2014.1.3]# mkdir /etc/nova
[root@openstack-node1 nova-2014.1.3]# mkdir /var/log/nova
[root@openstack-node1 nova-2014.1.3]# mkdir -p /var/lib/nova/instances
[root@openstack-node1 nova-2014.1.3]# mkdir /var/run/nova
6.2.2 Copy some of the configuration files
[root@openstack-node1 nova-2014.1.3]# cd etc/nova/
[root@openstack-node1 nova]# cp -a * /etc/nova/
cp: overwrite `/etc/nova/api-paste.ini'? y
cp: overwrite `/etc/nova/policy.json'? y
cp: overwrite `/etc/nova/rootwrap.conf'? y
[root@openstack-node1 nova]# mv logging_sample.conf logging.conf
6.3 Configure nova
6.3.1 Configure the database
[root@openstack-node1 nova]# vim /etc/nova/nova.conf
2475 connection=mysql://nova:nova@192.168.56.111/nova
6.3.2 Sync the database
[root@openstack-node1 ~]# nova-manage db sync
Check the sync result:
[root@openstack-node1 ~]# mysql -h 192.168.56.111 -unova -pnova -e "use nova;show tables;"
+--------------------------------------------+
| Tables_in_nova |
+--------------------------------------------+
| agent_builds |
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| block_device_mapping |
| bw_usage_cache |
| cells |
| certificates |
| compute_nodes |
| console_pools |
| consoles |
| dns_domains |
| fixed_ips |
| floating_ips |
| instance_actions |
| instance_actions_events |
| instance_faults |
| instance_group_member |
| instance_group_metadata |
| instance_group_policy |
| instance_groups |
| instance_id_mappings |
| instance_info_caches |
| instance_metadata |
| instance_system_metadata |
| instance_type_extra_specs |
| instance_type_projects |
| instance_types |
| instances |
| iscsi_targets |
| key_pairs |
| migrate_version |
| migrations |
| networks |
| pci_devices |
| project_user_quotas |
| provider_fw_rules |
| quota_classes |
| quota_usages |
| quotas |
| reservations |
| s3_images |
| security_group_default_rules |
| security_group_instance_association |
| security_group_rules |
| security_groups |
| services |
| shadow_agent_builds |
| shadow_aggregate_hosts |
| shadow_aggregate_metadata |
| shadow_aggregates |
| shadow_block_device_mapping |
| shadow_bw_usage_cache |
| shadow_cells |
| shadow_certificates |
| shadow_compute_nodes |
| shadow_console_pools |
| shadow_consoles |
| shadow_dns_domains |
| shadow_fixed_ips |
| shadow_floating_ips |
| shadow_instance_actions |
| shadow_instance_actions_events |
| shadow_instance_faults |
| shadow_instance_group_member |
| shadow_instance_group_metadata |
| shadow_instance_group_policy |
| shadow_instance_groups |
| shadow_instance_id_mappings |
| shadow_instance_info_caches |
| shadow_instance_metadata |
| shadow_instance_system_metadata |
| shadow_instance_type_extra_specs |
| shadow_instance_type_projects |
| shadow_instance_types |
| shadow_instances |
| shadow_iscsi_targets |
| shadow_key_pairs |
| shadow_migrate_version |
| shadow_migrations |
| shadow_networks |
| shadow_pci_devices |
| shadow_project_user_quotas |
| shadow_provider_fw_rules |
| shadow_quota_classes |
| shadow_quota_usages |
| shadow_quotas |
| shadow_reservations |
| shadow_s3_images |
| shadow_security_group_default_rules |
| shadow_security_group_instance_association |
| shadow_security_group_rules |
| shadow_security_groups |
| shadow_services |
| shadow_snapshot_id_mappings |
| shadow_snapshots |
| shadow_task_log |
| shadow_virtual_interfaces |
| shadow_volume_id_mappings |
| shadow_volume_usage_cache |
| shadow_volumes |
| snapshot_id_mappings |
| snapshots |
| task_log |
| virtual_interfaces |
| volume_id_mappings |
| volume_usage_cache |
| volumes |
+--------------------------------------------+
6.3.3 RabbitMQ Configuration
[root@openstack-node1 ~]# vim /etc/nova/nova.conf
72 rabbit_host=192.168.56.111
83 rabbit_port=5672
92 rabbit_userid=guest
95 rabbit_password=guest
189 rpc_backend=rabbit
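Nova only needs the broker to be reachable on the configured host and port; before starting the services, a plain TCP probe is a quick sanity check. A minimal sketch (the host and port below are the values from this nova.conf; adjust to your environment):

```python
import socket

def broker_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to the RabbitMQ port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe the broker configured above:
# broker_reachable("192.168.56.111", 5672)
```

This only confirms the port is open, not that the guest credentials are valid; a failed probe usually means rabbitmq-server is not running or a firewall is in the way.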
6.3.4 VNC Configuration
2036 novncproxy_base_url=http://192.168.56.111:6080/vnc_auto.html
2044 vncserver_listen=0.0.0.0
2048 vncserver_proxyclient_address=192.168.56.111
2051 vnc_enabled=true
2054 vnc_keymap=en-us
6.3.5 Keystone Configuration
544 auth_strategy=keystone
2687 auth_host=192.168.56.111
2690 auth_port=35357
2694 auth_protocol=http
2697 auth_uri=http://192.168.56.111:5000
2701 auth_version=v2.0
2728 admin_user=admin
2731 admin_password=admin
2735 admin_tenant_name=admin
6.3.6 Other Configuration
302 state_path=/var/lib/nova
885 instances_path=$state_path/instances
1576 lock_path=/var/lib/nova/tmp
6.3.7 Verify the Configuration
[root@openstack-node1 ~]# grep "^[a-z]" /etc/nova/nova.conf
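nova.conf uses INI syntax, so the same values that grep prints can also be read programmatically, which is handy for scripted checks. A minimal sketch using Python's stdlib configparser (the file path and option names follow the configuration above; interpolation is disabled because OpenStack configs use their own %(...)s placeholders):

```python
from configparser import ConfigParser

def read_option(path, option, section="DEFAULT"):
    """Read a single option from an INI-style config such as nova.conf."""
    # interpolation=None: leave %(tenant_id)s-style values untouched
    cp = ConfigParser(interpolation=None)
    with open(path) as f:
        cp.read_file(f)
    return cp.get(section, option)

# Example: read_option("/etc/nova/nova.conf", "rabbit_host")
```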

6.4 Create the Nova Service and Endpoint
[root@openstack-node1 ~]# source keystone-admin
[root@openstack-node1 ~]# keystone service-list

+----------------------------------+----------+----------+--------------------+
| id | name | type | description |
+----------------------------------+----------+----------+--------------------+
| 2fc2f88956d445eeb1e1d61d0b79c6e8 | glance | image | |
| 07484bbbeccd447ab8513e707af84944 | keystone | identity | OpenStack Identity |
+----------------------------------+----------+----------+--------------------+
6.4.1 Create the Nova Service
[root@openstack-node1 ~]# keystone service-create --name=nova --type=compute
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | |
| enabled | True |
| id | 9602f000ea7046dca8c98049bd05add6 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
6.4.2 Create the Nova Endpoint
[root@openstack-node1 ~]# keystone endpoint-create --service-id=9602f000ea7046dca8c98049bd05add6 --publicurl=http://192.168.56.111:8774/v2/%\(tenant_id\)s --internalurl=http://192.168.56.111:8774/v2/%\(tenant_id\)s --adminurl=http://192.168.56.111:8774/v2/%\(tenant_id\)s

+-------------+---------------------------------------------+
| Property | Value |
+-------------+---------------------------------------------+
| adminurl | http://192.168.56.111:8774/v2/%(tenant_id)s |
| id | 975e5ecc6f6f439880dff67aeda45cc6 |
| internalurl | http://192.168.56.111:8774/v2/%(tenant_id)s |
| publicurl | http://192.168.56.111:8774/v2/%(tenant_id)s |
| region | regionOne |
| service_id | 9602f000ea7046dca8c98049bd05add6 |
+-------------+---------------------------------------------+
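The %(tenant_id)s in the endpoint URLs is a Python string-formatting placeholder: when a client authenticates, Keystone fills in that tenant's ID while building the service catalog, so each tenant gets its own Compute URL. The substitution itself is ordinary %-formatting (the tenant ID below is a made-up example):

```python
# Endpoint template exactly as registered above
template = "http://192.168.56.111:8774/v2/%(tenant_id)s"

def expand_endpoint(template, tenant_id):
    """Fill the tenant placeholder, roughly as Keystone does for the catalog."""
    return template % {"tenant_id": tenant_id}

# expand_endpoint(template, "a1b2c3") -> "http://192.168.56.111:8774/v2/a1b2c3"
```

This is also why the shell command escapes the parentheses (`%\(tenant_id\)s`): the literal placeholder must reach Keystone unexpanded.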
6.4.3 List the Keystone Services
[root@openstack-node1 ~]# keystone service-list

+----------------------------------+----------+----------+--------------------+
| id | name | type | description |
+----------------------------------+----------+----------+--------------------+
| 2fc2f88956d445eeb1e1d61d0b79c6e8 | glance | image | |
| 07484bbbeccd447ab8513e707af84944 | keystone | identity | OpenStack Identity |
| 9602f000ea7046dca8c98049bd05add6 | nova | compute | |
+----------------------------------+----------+----------+--------------------+
The Nova service has been created successfully.
Note: to delete an unwanted service, use keystone service-delete followed by its id,
for example: keystone service-delete 9602f000ea7046dca8c98049bd05add6
6.5 Start the Nova Services
[root@openstack-node1 ~]# mkdir /var/lib/nova/tmp
[root@openstack-node1 ~]# cd openstack-inc/control/init.d
[root@openstack-node1 init.d]# cp openstack-nova-* /etc/init.d/
[root@openstack-node1 init.d]# chmod +x /etc/init.d/openstack-nova-*
[root@openstack-node1 ~]# for i in {api,cert,conductor,console,consoleauth,novncproxy,scheduler};do chkconfig --add openstack-nova-$i;done
[root@openstack-node1 ~]# for i in {api,cert,conductor,console,consoleauth,novncproxy,scheduler};do chkconfig openstack-nova-$i on;done
[root@openstack-node1 ~]# for i in {api,cert,conductor,console,consoleauth,novncproxy,scheduler};do service openstack-nova-$i start;done

[root@openstack-node1 ~]# ps aux |grep nova

The output shows that openstack-nova-novncproxy failed to start.
The following pins websockify to a compatible version and starts novncproxy again:
[root@openstack-node1 ~]# /etc/init.d/openstack-nova-novncproxy start
[root@openstack-node1 ~]# /etc/init.d/openstack-nova-novncproxy status
[root@openstack-node1 ~]# pip install websockify==0.5.1
[root@openstack-node1 ~]# /etc/init.d/openstack-nova-novncproxy start
[root@openstack-node1 ~]# /etc/init.d/openstack-nova-novncproxy status
6.6 Install noVNC and Start the Service
[root@openstack-node1 ~]# cd /usr/local/src
[root@openstack-node1 src]# wget https://github.com/kanaka/noVNC/archive/v0.5.tar.gz
[root@openstack-node1 src]# tar zxf v0.5.tar.gz
[root@openstack-node1 src]# mv noVNC-0.5/ /usr/share/novnc
[root@openstack-node1 ~]# /etc/init.d/openstack-nova-novncproxy start
6.7 Verify the Nova Installation
[root@openstack-node1 ~]# nova host-list

+-----------------------------+-------------+----------+
| host_name | service | zone |
+-----------------------------+-------------+----------+
| openstack-node1.example.com | consoleauth | internal |
| openstack-node1.example.com | conductor | internal |
| openstack-node1.example.com | console | internal |
| openstack-node1.example.com | cert | internal |
| openstack-node1.example.com | scheduler | internal |
+-----------------------------+-------------+----------+


[root@openstack-node1 ~]# nova flavor-list

+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
Nova configuration is complete.
7 Dashboard(Horizon)
Horizon is the web management portal for the various OpenStack services. It simplifies user operations such as launching instances, allocating IP addresses, and configuring access control. It has been integrated into the project since the Essex release.
Cinder provides persistent block storage for running instances. Its pluggable driver architecture simplifies the creation and management of block devices, such as creating and deleting volumes and attaching and detaching volumes from instances. It has been integrated into the project since the Folsom release.
7.1 Horizon Installation
7.2 Horizon Configuration
[root@openstack-node1 ~]# cd /usr/local/src/
[root@openstack-node1 src]# mv horizon-2014.1.3 /var/www/
[root@openstack-node1 src]# cd /var/www/horizon-2014.1.3/openstack_dashboard/local
[root@openstack-node1 local]# mv local_settings.py.example local_settings.py
Modify the following:
128 OPENSTACK_HOST = "192.168.56.111"
7.3 Apache Configuration
Related topic: session handling in a clustered deployment.

[root@openstack-node1 ~]# chown -R apache:apache /var/www/horizon-2014.1.3/
[root@openstack-node1 ~]# vim /etc/httpd/conf.d/horizon.conf
<VirtualHost *:80>
ServerAdmin admin@unixhot.com
ServerName 192.168.56.111
DocumentRoot /var/www/horizon-2014.1.3/
ErrorLog /var/log/httpd/horizon_error.log
LogLevel info
CustomLog /var/log/httpd/horizon_access.log combined
WSGIScriptAlias / /var/www/horizon-2014.1.3/openstack_dashboard/wsgi/django.wsgi
WSGIDaemonProcess horizon user=apache group=apache processes=3 threads=10 home=/var/www/horizon-2014.1.3
WSGIApplicationGroup horizon
SetEnv APACHE_RUN_USER apache
SetEnv APACHE_RUN_GROUP apache
WSGIProcessGroup horizon
Alias /media /var/www/horizon-2014.1.3/openstack_dashboard/static
<Directory /var/www/horizon-2014.1.3/>
Options FollowSymLinks MultiViews
AllowOverride None
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
WSGISocketPrefix /var/run/horizon
7.4 Start Apache
[root@openstack-node1 ~]# chown -R apache:apache /var/www/horizon-2014.1.3/
[root@openstack-node1 ~]# /etc/init.d/httpd restart
Stopping httpd: [ OK ]
Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using openstack-node1.example.com for ServerName
[ OK ]
If the warning "httpd: Could not reliably determine the server's fully qualified domain name, using openstack-node1.example.com for ServerName" appears, edit /etc/httpd/conf/httpd.conf and uncomment the
ServerName www.example.com:80 line.
8 Networking Services(Neutron)
Neutron provides network virtualization for the cloud and supplies network connectivity to the other OpenStack services. It gives users an interface to define networks, subnets, and routers, and to configure DHCP, DNS, load balancing, and L3 services; supported network types include GRE and VLAN. Its plugin architecture supports many mainstream network vendors and technologies, such as Open vSwitch. It has been integrated into the project since the Folsom release.
8.1 Neutron Installation
[root@openstack-node1 ~]# cd /usr/local/src/neutron-2014.1.3
8.2 Neutron Configuration
8.2.1 Initialize the Configuration Files
Copy the template configuration files into the configuration directory.
[root@openstack-node1 neutron-2014.1.3]# cp -a etc/* /etc/neutron/
cp: overwrite `/etc/neutron/dhcp_agent.ini'? y
cp: overwrite `/etc/neutron/fwaas_driver.ini'? y
cp: overwrite `/etc/neutron/l3_agent.ini'? y
cp: overwrite `/etc/neutron/lbaas_agent.ini'? y
cp: overwrite `/etc/neutron/metadata_agent.ini'? y
cp: overwrite `/etc/neutron/neutron.conf'? y
cp: overwrite `/etc/neutron/policy.json'? y
cp: overwrite `/etc/neutron/rootwrap.conf'? y
8.2.2 Neutron Database Configuration
[root@openstack-node1 neutron-2014.1.3]# cd
[root@openstack-node1 ~]# vim /etc/neutron/neutron.conf

411 connection=mysql://neutron:neutron@192.168.56.111:3306/neutron
8.2.3 Keystone Configuration
[root@openstack-node1 ~]# vim /etc/neutron/neutron.conf

70 auth_strategy = keystone

8.2.4 RabbitMQ Configuration
[root@openstack-node1 ~]# vim /etc/neutron/neutron.conf

8.2.5 Nova Settings in neutron.conf
[root@openstack-node1 ~]# vim /etc/neutron/neutron.conf

327 nova_admin_auth_url = http://192.168.56.111:35357/v2.0
8.2.6 Network and Logging Configuration
Network settings:
53 core_plugin = ml2
62 service_plugins = router,lbaas
Logging settings:
3 verbose = true
6 debug = true
29 log_file = neutron.log
30 log_dir = /var/log/neutron
Verify the configuration:
[root@openstack-node1 neutron]# grep "^[a-z]" /etc/neutron/neutron.conf
verbose = true
debug = true
lock_path = $state_path/lock
log_file = neutron.log
log_dir = /var/log/neutron
core_plugin = ml2
service_plugins = router,lbaas
auth_strategy = keystone
rabbit_host = 192.168.56.111
rabbit_password = guest
rabbit_port = 5672
rabbit_userid = guest
rabbit_virtual_host = /
notification_driver = neutron.openstack.common.notifier.rpc_notifier
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://192.168.56.111:8774/v2
nova_admin_username = admin
nova_admin_password = admin
nova_admin_auth_url = http://192.168.56.111:35357/v2.0
auth_host = 192.168.56.111
auth_port = 35357
auth_protocol = http
admin_tenant_name = admin
admin_user = admin
admin_password = admin
signing_dir = $state_path/keystone-signing
connection=mysql://neutron:neutron@192.168.56.111:3306/neutron
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
8.2.7 Nova Settings in nova.conf
[root@openstack-node1 ~]# vim /etc/nova/nova.conf
253 my_ip=192.168.56.111
1200 network_api_class=nova.network.neutronv2.api.API
1321 linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
1464 notify_nova_on_port_status_changes=true

1466 neutron_url=http://192.168.56.111:9696
1474 neutron_admin_username=admin
1478 neutron_admin_password=admin
1488 neutron_admin_tenant_name=admin
1496 neutron_admin_auth_url=http://192.168.56.111:5000/v2.0
1503 neutron_auth_strategy=keystone
1536 security_group_api=neutron
1966 vif_plugging_is_fatal=false
1973 vif_plugging_timeout=10
1982 firewall_driver=nova.virt.firewall.NoopFirewallDriver
2872 vif_driver=nova.virt.libvirt.vif.NeutronLinuxBridgeVIFDriver
After making these changes, restart the Nova services.
[root@openstack-node1 neutron]# cd
[root@openstack-node1 ~]# vim /etc/nova/nova.conf
[root@openstack-node1 ~]# for i in {api,cert,conductor,consoleauth,novncproxy,scheduler};do /etc/init.d/openstack-nova-$i restart;done
Stopping openstack-nova-api: [ OK ]
Starting openstack-nova-api: [ OK ]
Stopping openstack-nova-cert: [ OK ]
Starting openstack-nova-cert: [ OK ]
Stopping openstack-nova-conductor: [ OK ]
Starting openstack-nova-conductor: [ OK ]
Stopping openstack-nova-consoleauth: [ OK ]
Starting openstack-nova-consoleauth: [ OK ]
Stopping openstack-nova-novncproxy: [ OK ]
Starting openstack-nova-novncproxy: [ OK ]
Stopping openstack-nova-scheduler: [ OK ]
Starting openstack-nova-scheduler: [ OK ]
8.2.8 Create the Neutron Service and Endpoint
[root@openstack-node1 ~]# keystone service-create --name neutron --type network
/usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.
_warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning)
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | |
| enabled | True |
| id | d91b0b73659a58c03e5fdd0874b9e4c4 |
| name | neutron |
| type | network |
+-------------+----------------------------------+
[root@openstack-node1 ~]# keystone endpoint-create --service_id=d91b0b73659a58c03e5fdd0874b9e4c4 --publicurl=http://192.168.56.111:9696 --adminurl=http://192.168.56.111:9696 --internalurl=http://192.168.56.111:9696
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://192.168.56.111:9696 |
| id | cda66403995b8574936b074fd277a946 |
| internalurl | http://192.168.56.111:9696 |
| publicurl | http://192.168.56.111:9696 |
| region | regionOne |
| service_id | d91b0b73659a58c03e5fdd0874b9e4c4 |
+-------------+----------------------------------+

[root@openstack-node1 ~]# keystone service-list

+----------------------------------+----------+----------+--------------------+
| id | name | type | description |
+----------------------------------+----------+----------+--------------------+
| 35d71cc7cece4b07ae64f26f32cf4ce4 | glance | image | |
| 7b829b9cdd4b440294067009cf7ae206 | keystone | identity | OpenStack Identity |
| d91b0b73659a58c03e5fdd0874b9e4c4 | neutron | network | |
| c91735236d1c4a2ca2c0094806a4fbf9 | nova | compute | |
+----------------------------------+----------+----------+--------------------+

[root@openstack-node1 ~]# keystone endpoint-list

+----------------------------------+-----------+---------------------------------------------+---------------------------------------------+---------------------------------------------+----------------------------------+
| id | region | publicurl | internalurl | adminurl | service_id |
+----------------------------------+-----------+---------------------------------------------+---------------------------------------------+---------------------------------------------+----------------------------------+
| 06d8b0042ba04935aa8654f8063ad1dc | regionOne | http://192.168.56.111:5000/v2.0 | http://192.168.56.111:5000/v2.0 | http://192.168.56.111:35357/v2.0 | 7b829b9cdd4b440294067009cf7ae206 |
| cda66403995b8574936b074fd277a946 | regionOne | http://192.168.56.111:9696 | http://192.168.56.111:9696 | http://192.168.56.111:9696 | d91b0b73659a58c03e5fdd0874b9e4c4 |
| 9356cedf4f1947d1ab2076ec9a89f5ec | regionOne | http://192.168.56.111:9292 | http://192.168.56.111:9292 | http://192.168.56.111:9292 | 35d71cc7cece4b07ae64f26f32cf4ce4 |
| e858b4c5c8e04d6fb0458aac83316acd | regionOne | http://192.168.56.111:8774/v2/%(tenant_id)s | http://192.168.56.111:8774/v2/%(tenant_id)s | http://192.168.56.111:8774/v2/%(tenant_id)s | c91735236d1c4a2ca2c0094806a4fbf9 |
+----------------------------------+-----------+---------------------------------------------+---------------------------------------------+---------------------------------------------+----------------------------------+
8.3 Neutron Plugin
8.3.1 Neutron ML2 Configuration
[root@openstack-node1 ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini

5 type_drivers = flat,vlan,gre,vxlan
12 tenant_network_types = flat
17 mechanism_drivers = linuxbridge
29 flat_networks = physnet1
62 enable_security_group = True
8.3.2 Linux Bridge Configuration
[root@openstack-node1 ~]# vim /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini

20 network_vlan_ranges = physnet1
31 physical_interface_mappings = physnet1:eth0
78 enable_security_group = True
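physical_interface_mappings ties the provider network name referenced by flat_networks (physnet1 above) to a host NIC (eth0). The value is a comma-separated list of name:interface pairs. A sketch of how such a mapping string is interpreted (a hypothetical helper for illustration, not Neutron's own parser):

```python
def parse_interface_mappings(raw):
    """Split 'physnet1:eth0,physnet2:eth1' into {'physnet1': 'eth0', ...}."""
    mappings = {}
    for pair in raw.split(","):
        pair = pair.strip()
        if not pair:
            continue
        name, interface = pair.split(":", 1)
        mappings[name.strip()] = interface.strip()
    return mappings
```

If the interface name here does not match a real NIC on the host, the Linux bridge agent will fail to wire instance traffic, so this line must reflect the actual hardware.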

8.4 Start Neutron
[root@openstack-node1 ~]# neutron-server --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --config-file=/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
2016-10-08 19:20:12.893 12522 INFO neutron.plugins.ml2.managers [-] Loaded mechanism driver names: ['linuxbridge']
2016-10-08 19:20:12.894 12522 INFO neutron.plugins.ml2.managers [-] Registered mechanism drivers: ['linuxbridge']
2016-10-08 19:20:12.915 12522 WARNING neutron.openstack.common.db.sqlalchemy.session [-] This application has not enabled MySQL traditional mode, which means silent data corruption may occur. Please encourage the application developers to enable this mode.
^C2016-10-08 19:20:18.490 12522 DEBUG neutron.openstack.common.lockutils [-] Semaphore / lock released "_create_instance" inner /usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py:252

[root@openstack-node1 ~]# neutron-linuxbridge-agent \
  --config-file=/etc/neutron/neutron.conf \
  --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini \
  --config-file=/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini

[root@openstack-node1 ~]# cd openstack-inc/control/init.d
[root@openstack-node1 init.d]# cp openstack-neutron-* /etc/init.d/
[root@openstack-node1 init.d]# cd /etc/init.d/
[root@openstack-node1 init.d]# chmod +x /etc/init.d/openstack-neutron-*
[root@openstack-node1 init.d]# chkconfig --add openstack-neutron-server
[root@openstack-node1 init.d]# chkconfig --add openstack-neutron-linuxbridge-agent
[root@openstack-node1 init.d]# /etc/init.d/openstack-neutron-server start
Starting openstack-neutron-server: [ OK ]
[root@openstack-node1 init.d]# /etc/init.d/openstack-neutron-linuxbridge-agent start
Starting openstack-neutron-linuxbridge-agent: [ OK ]
8.5 Test the Neutron Installation
[root@openstack-node1 init.d]# cd
[root@openstack-node1 ~]# neutron agent-list
+--------------------------------------+--------------------+-----------------------------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+-----------------------------+-------+----------------+
| 8f55528a-6a46-4a25-91c2-a617f2f2950a | Linux bridge agent | openstack-node1.example.com | :-) | True |
+--------------------------------------+--------------------+-----------------------------+-------+----------------+
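The alive column shows ":-)" for a live agent (dead agents are typically marked "xxx"), so this output is a useful scripted health check. A minimal sketch that turns such a +---+ bordered CLI table into row dicts (a hypothetical helper for illustration):

```python
def parse_cli_table(text):
    """Parse a '+---+' bordered CLI table into a list of row dicts."""
    # Data and header rows start with '|'; border rows start with '+'.
    rows = [line for line in text.splitlines() if line.strip().startswith("|")]

    def split(line):
        return [cell.strip() for cell in line.strip().strip("|").split("|")]

    header = split(rows[0])
    return [dict(zip(header, split(row))) for row in rows[1:]]

# all(r["alive"] == ":-)" for r in parse_cli_table(output)) -> cluster healthy
```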


Summary


