OpenStack In-Depth Study Notes
I. Introduction to OpenStack and cloud computing
(1) Cloud computing service models
IaaS: Infrastructure as a Service, e.g. cloud hosts (virtual machines);
PaaS: Platform as a Service, e.g. Docker;
SaaS: Software as a Service, e.g. hosted enterprise email, CDN;
| Layer (under traditional IT you manage all of these yourself) | IaaS | PaaS | SaaS |
|---|---|---|---|
| 1. Application | You manage | You manage | Provider manages |
| 2. Data | You manage | You manage | Provider manages |
| 3. Runtime | You manage | Provider manages | Provider manages |
| 4. Middleware | You manage | Provider manages | Provider manages |
| 5. Operating system | Provider manages | Provider manages | Provider manages |
| 6. Virtualization | Provider manages | Provider manages | Provider manages |
| 7. Servers | Provider manages | Provider manages | Provider manages |
| 8. Storage | Provider manages | Provider manages | Provider manages |
| 9. Networking | Provider manages | Provider manages | Provider manages |
(2) OpenStack definition and components
OpenStack is an open-source cloud computing management platform project. Through a set of complementary services it delivers an Infrastructure-as-a-Service (IaaS) solution, and every service exposes an API for integration.
Each major OpenStack release series is named alphabetically (A-Z), with a version number derived from the year and the release's order within that year. From the first release, Austin (2010.1), to the latest stable release, Rocky (2018.8), there have been 18 major releases; the 19th, Stein, is still in development. Release policy: a new release roughly every six months. Official documentation: https://docs.openstack.org
1. OpenStack architecture diagram:

Meaning of the modules in the architecture diagram:
Horizon: UI; Neutron: networking; Cinder: block storage; Nova: compute; Glance: images; VM: virtual machine; Keystone: identity and authorization; Ceilometer: metering/monitoring; Swift: object storage; Heat: orchestration
2. Core OpenStack components and their roles:
① Compute: Nova
A set of controllers that manages the full lifecycle of virtual machine instances for individual users or groups; responsible for creating, booting, shutting down, suspending, pausing, resizing, migrating, rebooting, and destroying VMs.
② Image service: Glance
Provides virtual machine image lookup and retrieval; supports multiple image formats (AKI, AMI, ARI, ISO, QCOW2, RAW, VMDK) and can create/upload images, delete images, and edit basic image metadata.
③ Identity service: Keystone
Provides authentication, service policy, and service tokens for the other OpenStack services; manages Domains, Projects, Users, Groups, and Roles.
④ Networking and address management: Neutron
Provides network virtualization for the cloud and network connectivity for the other OpenStack services. Gives users an interface to define Networks, Subnets, and Routers and to configure DHCP, DNS, load balancing, and L3 services; supports GRE and VLAN networks.
⑤ Block storage: Cinder
Provides stable block storage for running instances. Its plugin-driver architecture simplifies creating and managing block devices: creating and deleting volumes, and attaching and detaching volumes on instances.
⑥ UI: Horizon
The web management portal for the various OpenStack services, simplifying operations such as launching instances, assigning IP addresses, and configuring access control.
3. SOA architecture overview
SOA means splitting the business apart so that tens of millions of users can be served concurrently; every page/subsite is backed by its own cluster, e.g.:
Home page www.jd.com/index.html (5 servers, nginx+php+mysql)
Flash sales miaosha.jd.com/index.html (15 servers, nginx+php+mysql)
Coupons juan.jd.com/index.html (15 servers, nginx+php+mysql)
Membership plus.jd.com/index.html (15 servers, nginx+php+mysql)
Login login.jd.com/index.html (15 servers, nginx+php+mysql)
II. Installing the OpenStack base services
(1) hosts file configuration
On both the controller and the compute nodes:
[root@computer1 /]# cat /etc/hosts
10.0.0.11 controller
10.0.0.31 computer1
10.0.0.32 computer2
[root@computer1 /]#
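A quick sanity check that name resolution works on every node (a minimal sketch using the hostnames configured above):
# confirm all three hosts resolve and answer ping
for h in controller computer1 computer2; do
    ping -c 1 -W 1 "$h" >/dev/null 2>&1 && echo "$h OK" || echo "$h FAILED"
done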
(2) yum repository configuration
On both the controller and the compute nodes:
[root@computer1 etc]# mount /dev/cdrom /mnt/
[root@computer1 etc]# cat /etc/rc.local
mount /dev/cdrom /mnt/
[root@computer1 etc]# chmod +x /etc/rc.local
Upload the OpenStack RPM bundle from the course materials to /opt and unpack it.
[root@computer1 opt]# cat /etc/yum.repos.d/local.repo
[local]
name=local
baseurl=file:///mnt
gpgcheck=0
[openstack]
name=openstack
baseurl=file:///opt/repo
gpgcheck=0
[root@controller /]# yum makecache
(3) Installing time synchronization (chrony)
Controller node:
[root@controller /]# yum install chrony -y
[root@controller /]# vim /etc/chrony.conf
allow 10.0.0.0/24
Compute node:
[root@computer1 /]# yum install chrony -y
[root@computer1 /]# vim /etc/chrony.conf
server 10.0.0.11 iburst
On both the controller and the compute nodes:
[root@computer1 /]# systemctl restart chronyd.service
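To confirm synchronization actually works, a quick check on the compute node (a sketch; chronyc ships with the chrony package):
chronyc sources -v    # 10.0.0.11 should appear, ideally marked ^* (selected source)
timedatectl           # 'NTP synchronized: yes' once the clock has converged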
(4) Installing the OpenStack client and openstack-selinux
On both the controller and the compute nodes:
[root@computer1 /]# yum install python-openstackclient.noarch openstack-selinux.noarch
(5) Installing and configuring MariaDB
Controller node only:
[root@controller /]# yum install mariadb mariadb-server.x86_64 python2-PyMySQL.noarch
[root@controller /]# cat >> /etc/my.cnf.d/openstack.cnf << EOF
> [mysqld]
> bind-address = 10.0.0.11
> default-storage-engine = innodb
> innodb_file_per_table
> max_connections = 4096
> collation-server = utf8_general_ci
> character-set-server = utf8
> EOF
[root@controller /]#
[root@controller /]# systemctl start mariadb.service
[root@controller /]# systemctl status mariadb.service
[root@controller /]# systemctl enable mariadb
Secure the MySQL installation:
[root@controller /]# mysql_secure_installation
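Optionally verify that the settings from openstack.cnf took effect (a sketch; add -p to the mysql calls if you set a root password during mysql_secure_installation):
mysql -e "SHOW VARIABLES LIKE 'max_connections';"       # expect 4096
mysql -e "SHOW VARIABLES LIKE 'character_set_server';"  # expect utf8
netstat -lntp | grep 3306                               # mysqld should be bound to 10.0.0.11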
(6) Installing RabbitMQ and creating a user
(Controller node only)
[root@controller /]# yum install rabbitmq-server.noarch -y
[root@controller /]# systemctl start rabbitmq-server.service
[root@controller /]# systemctl status rabbitmq-server.service
[root@controller /]# systemctl enable rabbitmq-server.service
[root@controller /]# rabbitmq-plugins enable rabbitmq_management
[root@controller /]# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
[root@controller /]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
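A quick way to confirm the user, its permissions, and the listening ports (a sketch; 15672 is the management UI port opened by the plugin enabled above):
rabbitmqctl list_users               # 'openstack' should be listed
rabbitmqctl list_permissions -p /    # openstack  .*  .*  .*
netstat -lntp | grep -E '5672|15672' # AMQP (5672) and management (15672) ports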
(7) Installing memcached
(Controller node only):
[root@controller /]# yum install memcached.x86_64 python-memcached.noarch -y
[root@controller /]# vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 10.0.0.11,::1"
[root@controller /]# systemctl start memcached.service
[root@controller /]# systemctl enable memcached.service
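A minimal reachability check for memcached (a sketch using nc from the nmap-ncat package):
echo stats | nc 10.0.0.11 11211 | head -3    # should print STAT lines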
III. Installing the OpenStack identity service (keystone)
(1) Keystone concepts
Keystone's main functions: authentication, authorization, and the service catalog.
① Authentication: essentially account management; every OpenStack user is registered in keystone.
② Authorization: glance, nova, neutron, cinder, and the other services all share keystone's account management, much like many websites supporting QQ login.
③ Service catalog: every new service must be registered in keystone; through keystone, users can discover which services exist and what each service's URL is, and then access those services directly.
(2) Keystone configuration
1. Create the database and grant privileges
Database grant syntax:
GRANT privileges ON database_object TO user
GRANT privileges ON database_object TO user IDENTIFIED BY 'password'
[root@controller ~]# mysql
MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
-> IDENTIFIED BY 'KEYSTONE_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
-> IDENTIFIED BY 'KEYSTONE_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]>
2. Install the keystone packages
By analogy: nginx + fastcgi ---> php    # the fastcgi plugin lets nginx talk to PHP
Here: httpd + wsgi ---> python          # the wsgi module lets httpd talk to Python
[root@controller ~]# yum install openstack-keystone httpd mod_wsgi -y
3. Edit the configuration file
[root@controller ~]# \cp /etc/keystone/keystone.conf{,.bak}
[root@controller ~]# grep -Ev '^$|#' /etc/keystone/keystone.conf.bak >/etc/keystone/keystone.conf
[root@controller ~]# vim /etc/keystone/keystone.conf
Method 1: open the configuration file and add the settings directly.
Define the initial admin token value:
[DEFAULT]
admin_token = ADMIN_TOKEN
Configure database access:
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
Configure the Fernet token provider:
[token]
provider = fernet    # fernet issues temporary random keys (tokens)
Method 2: use openstack-config:
[root@controller keystone]# yum install openstack-utils -y
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token ADMIN_TOKEN
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet
4. Sync the database
su -s /bin/sh -c "keystone-manage db_sync" keystone    # su to the keystone user and run the command with /bin/sh
-s: the shell/interpreter to use
-c: the command to execute
keystone: the system user to switch to
[root@controller keystone]# su -s /bin/sh -c "keystone-manage db_sync" keystone
Check that the tables were created (the result of the database sync):
[root@controller keystone]# mysql keystone -e "show tables"
Check the sync log:
[root@controller keystone]# vim /var/log/keystone/keystone.log
5. Initialize fernet
[root@controller keystone]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller keystone]# ll /etc/keystone/
drwx------ 2 keystone keystone 24 Jan 4 22:32 fernet-keys    # the key directory produced by the initialization
6. Configure httpd (Apache)
Speed up startup by setting the server name:
[root@controller keystone]# echo "ServerName controller" >>/etc/httpd/conf/httpd.conf
The following file does not exist by default; create it:
[root@controller keystone]# vim /etc/httpd/conf.d/wsgi-keystone.conf
# listen on both keystone ports
Listen 5000
Listen 35357
# define the two virtual host sites:
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
[root@controller keystone]#
Verify:
[root@controller keystone]# md5sum /etc/httpd/conf.d/wsgi-keystone.conf
8f051eb53577f67356ed03e4550315c2 /etc/httpd/conf.d/wsgi-keystone.conf
7. Start httpd
[root@controller keystone]# systemctl start httpd.service
[root@controller keystone]# systemctl enable httpd.service
netstat -lntp    # confirm ports 5000 and 35357 are listening
8. Create the service and register the API
This is done in keystone. Keystone itself is not yet fully set up and cannot authenticate registrations for itself or the other services, so the initial admin token is used to register the service and its API endpoints.
Export the environment variables:
[root@controller ~]# export OS_TOKEN=ADMIN_TOKEN
[root@controller ~]# export OS_URL=http://controller:35357/v3
[root@controller ~]# export OS_IDENTITY_API_VERSION=3
env|grep OS
Create the identity service:
[root@controller ~]# openstack service create \
> --name keystone --description "OpenStack Identity" identity
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Identity |
| enabled | True |
| id | b251b397df344ed58b77879709a82340 |
| name | keystone |
| type | identity |
+-------------+----------------------------------+
Register the API endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne \
> identity public http://controller:5000/v3    # the publicly used URL
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 034a286a309c4d998c2918cb9ad6f161 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | b251b397df344ed58b77879709a82340 |
| service_name | keystone |
| service_type | identity |
| url | http://controller:5000/v3 |
+--------------+----------------------------------+
[root@controller ~]#
[root@controller ~]# openstack endpoint create --region RegionOne \
> identity internal http://controller:5000/v3    # the URL used by internal components
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | dedefe5fe8424132b9ced6c0ead9291c |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | b251b397df344ed58b77879709a82340 |
| service_name | keystone |
| service_type | identity |
| url | http://controller:5000/v3 |
+--------------+----------------------------------+
[root@controller ~]#
[root@controller ~]# openstack endpoint create --region RegionOne \
> identity admin http://controller:35357/v3    # the URL used by administrators
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 64af2fb03db945d79d77e3c4b67b75ab |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | b251b397df344ed58b77879709a82340 |
| service_name | keystone |
| service_type | identity |
| url | http://controller:35357/v3 |
+--------------+----------------------------------+
[root@controller ~]#
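With the bootstrap token environment still set, the service and its three endpoints can be listed as a quick check (a sketch):
openstack service list                     # keystone / identity should appear
openstack endpoint list | grep identity    # expect public, internal, and admin endpoints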
9. Create the domain, project, user, and role
[root@controller ~]# openstack domain create --description "Default Domain" default
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Default Domain |
| enabled | True |
| id | 30c30c794d4a4e92ae4474320e75bf47 |
| name | default |
+-------------+----------------------------------+
[root@controller ~]#
[root@controller ~]# openstack project create --domain default \
> --description "Admin Project" admin
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Admin Project |
| domain_id | 30c30c794d4a4e92ae4474320e75bf47 |
| enabled | True |
| id | 17b0da567cc341c7b33205572bd0470b |
| is_domain | False |
| name | admin |
| parent_id | 30c30c794d4a4e92ae4474320e75bf47 |
+-------------+----------------------------------+
[root@controller ~]#
[root@controller ~]# openstack user create --domain default \
> --password ADMIN_PASS admin    # the password is given with --password, unlike the official docs (which prompt for it)
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | 30c30c794d4a4e92ae4474320e75bf47 |
| enabled | True |
| id | a7b53c25b6c94a78a6efe00bc9150c33 |
| name | admin |
+-----------+----------------------------------+
[root@controller ~]#
[root@controller ~]# openstack role create admin
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | 043b3090d03f436eab223f9f1cedf815 |
| name | admin |
+-----------+----------------------------------+
# associate the project, user, and role
[root@controller ~]# openstack role add --project admin --user admin admin
# on the admin project, grant the admin user the admin role
[root@controller ~]# openstack project create --domain default \
> --description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | 30c30c794d4a4e92ae4474320e75bf47 |
| enabled | True |
| id | 317c63946e484b518dc0d99774ff6772 |
| is_domain | False |
| name | service |
| parent_id | 30c30c794d4a4e92ae4474320e75bf47 |
+-------------+----------------------------------+
10. Test authentication
[root@controller ~]# unset OS_TOKEN OS_URL    # unset the bootstrap environment variables
# authenticate without the environment variables, passing credentials explicitly:
[root@controller ~]# openstack --os-auth-url http://controller:35357/v3 \
> --os-project-domain-name default --os-user-domain-name default \
> --os-project-name admin --os-username admin --os-password ADMIN_PASS token issue
[root@controller ~]# openstack --os-auth-url http://controller:35357/v3 \
> --os-project-domain-name default --os-user-domain-name default \
> --os-project-name admin --os-username admin --os-password ADMIN_PASS user list
(3) Create the environment variable script
[root@controller ~]# vim admin-openrc    # the variable names are uppercase here; the equivalent CLI options are lowercase, with the same effect
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
[root@controller ~]# source admin-openrc
[root@controller ~]# openstack user list
[root@controller ~]# openstack token issue    # check that a token can be issued
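The two commands above make a convenient smoke test; a minimal sketch wrapping them in a script (assumes admin-openrc in /root as created above):
#!/bin/bash
# keystone smoke test: load credentials, request a token, list users
source /root/admin-openrc
openstack token issue >/dev/null && echo "token: OK" || echo "token: FAILED"
openstack user list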
IV. The image service (glance)
(1) Image service overview
1. Concepts
The image service (glance) lets users discover, upload, and download virtual machine images.
Component: glance-api
Accepts image API calls, such as image discovery, retrieval, and storage.
Component: glance-registry
Stores, processes, and retrieves image metadata; metadata includes items such as size and type.
(2) Glance configuration
1. The generic OpenStack installation steps
a: create the database and grant privileges
b: create the service user in keystone and assign it a role
c: create the service and register the API in keystone
d: install the service packages
e: edit the service configuration files
f: sync the database
g: start the services
2. Glance configuration steps
① Create the database and grant privileges
[root@controller ~]# mysql
MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
-> IDENTIFIED BY 'GLANCE_DBPASS';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
-> IDENTIFIED BY 'GLANCE_DBPASS';
Query OK, 0 rows affected (0.00 sec)
② Create the glance user in keystone and assign it the admin role
[root@controller ~]# openstack user create --domain default --password GLANCE_PASS glance
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | 30c30c794d4a4e92ae4474320e75bf47 |
| enabled | True |
| id | dc68fd42c718411085a1cbc1379a662e |
| name | glance |
+-----------+----------------------------------+
[root@controller ~]# openstack role add --project service --user glance admin
③ Create the service and register the API in keystone
[root@controller ~]# openstack service create --name glance \
> --description "OpenStack Image" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | 7f258ec0b235433188c5664c9e710d7c |
| name | glance |
| type | image |
+-------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> image public http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 2b8484d91ec94bd8a5aafd56ea7a1cfe |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 7f258ec0b235433188c5664c9e710d7c |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> image internal http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | aec16a57566a4bccae96f9c63885c0b5 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 7f258ec0b235433188c5664c9e710d7c |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> image admin http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | ceba791635b341d79c1c47182c22c4df |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 7f258ec0b235433188c5664c9e710d7c |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]#
④ Install the service packages
[root@controller ~]# yum install openstack-glance -y
⑤ Edit the service configuration files
[root@controller ~]# cp /etc/glance/glance-api.conf{,.bak}
[root@controller ~]# grep '^[a-Z\[]' /etc/glance/glance-api.conf.bak >/etc/glance/glance-api.conf
Configure the api configuration file:
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
Configure the registry configuration file:
cp /etc/glance/glance-registry.conf{,.bak}
grep '^[a-Z\[]' /etc/glance/glance-registry.conf.bak > /etc/glance/glance-registry.conf
openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
⑥ Sync the database
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
[root@controller ~]# mysql glance -e "show tables"
⑦ Start the services
[root@controller ~]# systemctl start openstack-glance-api.service openstack-glance-registry.service
[root@controller ~]# systemctl enable openstack-glance-api.service openstack-glance-registry.service
[root@controller ~]# systemctl status openstack-glance-api.service openstack-glance-registry.service
⑧ Verify
Upload the image file (cirros-0.3.4-x86_64-disk.img) to the root directory.
Upload the image to glance (container format 'bare' means the image has no container wrapper):
openstack image create "cirros" \
--file cirros-0.3.4-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public
Confirm the upload succeeded:
[root@controller images]# pwd
/var/lib/glance/images
[root@controller images]# ll
total 12980
-rw-r----- 1 glance glance 13287936 Jan 4 23:29 456d7600-3bd1-4fb5-aa84-144a61c0eb07
[root@controller images]#
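Besides checking the file on disk, the upload can be verified through the API (a sketch):
openstack image list           # 'cirros' should be listed with status 'active'
openstack image show cirros    # inspect disk_format, size, and visibility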
V. The compute service (nova)
(1) Nova overview
Nova is the core service of OpenStack cloud computing. Its main components are:
nova-api: accepts and responds to all compute service requests; manages the virtual machine (cloud host) lifecycle
nova-compute (one per hypervisor): actually manages the virtual machine lifecycle
nova-scheduler: the nova scheduler (picks the most suitable nova-compute to create a VM)
nova-conductor: proxies nova-compute's updates of VM state in the database
nova-network: managed VM networking in early OpenStack releases (deprecated in favor of neutron)
nova-consoleauth: issues access tokens for the web-based VNC console
nova-novncproxy: the web-based VNC proxy/client
nova-api-metadata: accepts metadata requests sent from virtual machines
(2) Nova configuration
1. The generic OpenStack configuration workflow
a: create the database and grant privileges
b: create the service user in keystone and assign it a role
c: create the service and register the API in keystone
d: install the service packages
e: edit the service configuration files
f: sync the database
g: start the services
2. Nova controller node setup
① Create the databases and grant privileges
[root@controller ~]# mysql
CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
② Create the nova service user in keystone and assign it the admin role
openstack user create --domain default \
--password NOVA_PASS nova
openstack role add --project service --user nova admin
③ Create the service and register the API in keystone
openstack service create --name nova \
--description "OpenStack Compute" compute
openstack endpoint create --region RegionOne \
compute public http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
compute internal http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
compute admin http://controller:8774/v2.1/%\(tenant_id\)s
④ Install the service packages
yum install openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler -y
⑤ Edit the configuration file
cp /etc/nova/nova.conf{,.bak}
grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.0.0.11
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address '$my_ip'
Verify:
[root@controller ~]# md5sum /etc/nova/nova.conf
47ded61fdd1a79ab91bdb37ce59ef192 /etc/nova/nova.conf
⑥ Sync the databases
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
⑦ Start the services
systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
Check:
[root@controller ~]# openstack compute service list
nova service-list
glance image-list
openstack image list
openstack compute service list
3. Nova compute node setup
① nova-compute calls libvirtd to create virtual machines
Install the packages:
yum install openstack-nova-compute -y
yum install openstack-utils.noarch -y
② Configure
[root@computer1 ~]# cp /etc/nova/nova.conf{,.bak}
[root@computer1 ~]# grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.0.0.31
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS
openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
Verify:
[root@computer1 ~]# md5sum /etc/nova/nova.conf
45cab6030a9ab82761e9f697d6d79e14 /etc/nova/nova.conf
③ Start the services
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
④ Verify (the admin environment variables must be in effect)
On the controller node:
[root@controller ~]# openstack compute service list
VI. The networking service (neutron)
(1) Overview
OpenStack Networking (neutron) allows you to create and attach network interface devices managed by other OpenStack services. Its plugin-based implementation accommodates different networking equipment and software, giving the OpenStack architecture and deployments flexibility.
Its main components are:
neutron-server: accepts and responds to external network management requests
neutron-linuxbridge-agent: creates the bridge interfaces
neutron-dhcp-agent: hands out IP addresses
neutron-metadata-agent: works with the nova metadata API to customize instances
neutron-l3-agent: provides layer-3 (network layer) services such as VXLAN
(2) Neutron configuration
1. Controller node
① Create the database and grant privileges
[root@controller ~]# mysql
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
② Create the neutron service user in keystone and assign it the admin role
openstack user create --domain default --password NEUTRON_PASS neutron
openstack role add --project service --user neutron admin
③ Create the service and register the API in keystone
openstack service create --name neutron \
--description "OpenStack Networking" network
openstack endpoint create --region RegionOne \
network public http://controller:9696
openstack endpoint create --region RegionOne \
network internal http://controller:9696
openstack endpoint create --region RegionOne \
network admin http://controller:9696
④ Install the service packages
yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables -y
⑤ Edit the service configuration files
File: /etc/neutron/neutron.conf
cp /etc/neutron/neutron.conf{,.bak}
grep '^[a-Z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password NOVA_PASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS
Verify:
[root@controller ~]# md5sum /etc/neutron/neutron.conf
e399b7958cd22f47becc6d8fd6d3521a /etc/neutron/neutron.conf
File: /etc/neutron/plugins/ml2/ml2_conf.ini
cp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/ml2_conf.ini.bak >/etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
Verify:
[root@controller ~]# md5sum /etc/neutron/plugins/ml2/ml2_conf.ini
2640b5de519fafcd675b30e1bcd3c7d5 /etc/neutron/plugins/ml2/ml2_conf.ini
File: /etc/neutron/plugins/ml2/linuxbridge_agent.ini
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan False
Verify:
[root@controller ~]# md5sum /etc/neutron/plugins/ml2/linuxbridge_agent.ini
3f474907a7f438b34563e4d3f3c29538 /etc/neutron/plugins/ml2/linuxbridge_agent.ini
File: /etc/neutron/dhcp_agent.ini
cp /etc/neutron/dhcp_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak >/etc/neutron/dhcp_agent.ini
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
Verify:
[root@controller ~]# md5sum /etc/neutron/dhcp_agent.ini
d39579607b2f7d92e88f8910f9213520 /etc/neutron/dhcp_agent.ini
File: /etc/neutron/metadata_agent.ini
cp /etc/neutron/metadata_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak >/etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET
Verify:
[root@controller ~]# md5sum /etc/neutron/metadata_agent.ini
e1166b0dfcbcf4507d50860d124335d6 /etc/neutron/metadata_agent.ini
File: /etc/nova/nova.conf (edited again, on the controller):
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret METADATA_SECRET
Verify:
[root@controller ~]# md5sum /etc/nova/nova.conf
6334f359655efdbcf083b812ab94efc1 /etc/nova/nova.conf
⑥ Sync the database
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
⑦ Start the services
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
2. Compute node
① Install the packages
yum install openstack-neutron-linuxbridge ebtables ipset -y
② Configure
cp /etc/neutron/neutron.conf{,.bak}
grep '^[a-Z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS
Verify:
[root@computer1 ~]# md5sum /etc/neutron/neutron.conf
77ffab503797be5063c06e8b956d6ed0 /etc/neutron/neutron.conf
File: /etc/neutron/plugins/ml2/linuxbridge_agent.ini
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan False
驗證:
[root@computer1 ~]# md5sum /etc/neutron/plugins/ml2/linuxbridge_agent.ini
3f474907a7f438b34563e4d3f3c29538 /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@computer1 ~]#
File: /etc/nova/nova.conf (edited again, on the compute node):
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS
Verify:
[root@computer1 ~]# md5sum /etc/nova/nova.conf
328cd5f0745e26a420e828b0dfc2934e /etc/nova/nova.conf
Check from the controller node:
[root@controller ~]# neutron agent-list
③ Start the services
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
VII. The dashboard service (horizon)
(1) Overview
The Dashboard (horizon) is a web interface that lets cloud administrators and users manage the various OpenStack resources and services. It is built on the Python Django framework, has no database of its own, and renders its pages entirely by calling the other services' APIs. In this deployment the dashboard is installed on a compute node (the official docs install it on the controller).
(2) Installation and configuration
① Install the packages
yum install openstack-dashboard python-memcached -y
② Configure
Import the prepared configuration file (local_settings) from the course materials:
[root@computer1 ~]# cat local_settings >/etc/openstack-dashboard/local_settings
The resulting configuration:
[root@computer1 ~]# grep -Ev '^$|#' local_settings
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard import exceptions
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = False
TEMPLATE_DEBUG = DEBUG
WEBROOT = '/dashboard/'
ALLOWED_HOSTS = ['*', ]
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
"compute": 2,
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'
LOCAL_PATH = '/tmp'
SECRET_KEY='65941f1393ea1c265ad7'
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
},
}
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_KEYSTONE_BACKEND = {
'name': 'native',
'can_edit_user': True,
'can_edit_group': True,
'can_edit_project': True,
'can_edit_domain': True,
'can_edit_role': True,
}
OPENSTACK_HYPERVISOR_FEATURES = {
'can_set_mount_point': False,
'can_set_password': False,
'requires_keypair': False,
}
OPENSTACK_CINDER_FEATURES = {
'enable_backup': False,
}
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': False,
'enable_quotas': False,
'enable_ipv6': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
'default_ipv4_subnet_pool_label': None,
'default_ipv6_subnet_pool_label': None,
'profile_support': None,
'supported_provider_types': ['*'],
'supported_vnic_types': ['*'],
}
OPENSTACK_HEAT_STACK = {
'enable_user_pass': True,
}
IMAGE_CUSTOM_PROPERTY_TITLES = {
"architecture": _("Architecture"),
"kernel_id": _("Kernel ID"),
"ramdisk_id": _("Ramdisk ID"),
"image_state": _("Euca2ools state"),
"project_id": _("Project ID"),
"image_type": _("Image Type"),
}
IMAGE_RESERVED_CUSTOM_PROPERTIES = []
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024
DROPDOWN_MAX_ITEMS = 30
TIME_ZONE = "Asia/Shanghai"
POLICY_FILES_PATH = '/etc/openstack-dashboard'
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'handlers': {
'null': {
'level': 'DEBUG',
'class': 'logging.NullHandler',
},
'console': {
'level': 'INFO',
'class': 'logging.StreamHandler',
},
},
'loggers': {
'django.db.backends': {
'handlers': ['null'],
'propagate': False,
},
'requests': {
'handlers': ['null'],
'propagate': False,
},
'horizon': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'openstack_dashboard': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'novaclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'cinderclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'keystoneclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'glanceclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'neutronclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'heatclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'ceilometerclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'swiftclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'openstack_auth': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'nose.plugins.manager': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'django': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'iso8601': {
'handlers': ['null'],
'propagate': False,
},
'scss': {
'handlers': ['null'],
'propagate': False,
},
},
}
SECURITY_GROUP_RULES = {
'all_tcp': {
'name': _('All TCP'),
'ip_protocol': 'tcp',
'from_port': '1',
'to_port': '65535',
},
'all_udp': {
'name': _('All UDP'),
'ip_protocol': 'udp',
'from_port': '1',
'to_port': '65535',
},
'all_icmp': {
'name': _('All ICMP'),
'ip_protocol': 'icmp',
'from_port': '-1',
'to_port': '-1',
},
'ssh': {
'name': 'SSH',
'ip_protocol': 'tcp',
'from_port': '22',
'to_port': '22',
},
'smtp': {
'name': 'SMTP',
'ip_protocol': 'tcp',
'from_port': '25',
'to_port': '25',
},
'dns': {
'name': 'DNS',
'ip_protocol': 'tcp',
'from_port': '53',
'to_port': '53',
},
'http': {
'name': 'HTTP',
'ip_protocol': 'tcp',
'from_port': '80',
'to_port': '80',
},
'pop3': {
'name': 'POP3',
'ip_protocol': 'tcp',
'from_port': '110',
'to_port': '110',
},
'imap': {
'name': 'IMAP',
'ip_protocol': 'tcp',
'from_port': '143',
'to_port': '143',
},
'ldap': {
'name': 'LDAP',
'ip_protocol': 'tcp',
'from_port': '389',
'to_port': '389',
},
'https': {
'name': 'HTTPS',
'ip_protocol': 'tcp',
'from_port': '443',
'to_port': '443',
},
'smtps': {
'name': 'SMTPS',
'ip_protocol': 'tcp',
'from_port': '465',
'to_port': '465',
},
'imaps': {
'name': 'IMAPS',
'ip_protocol': 'tcp',
'from_port': '993',
'to_port': '993',
},
'pop3s': {
'name': 'POP3S',
'ip_protocol': 'tcp',
'from_port': '995',
'to_port': '995',
},
'ms_sql': {
'name': 'MS SQL',
'ip_protocol': 'tcp',
'from_port': '1433',
'to_port': '1433',
},
'mysql': {
'name': 'MYSQL',
'ip_protocol': 'tcp',
'from_port': '3306',
'to_port': '3306',
},
'rdp': {
'name': 'RDP',
'ip_protocol': 'tcp',
'from_port': '3389',
'to_port': '3389',
},
}
REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES',
'LAUNCH_INSTANCE_DEFAULTS']
③ Start the service
[root@computer1 ~]# systemctl start httpd.service
④ Browse to http://10.0.0.31/dashboard
⑤ If an Internal Server Error appears
Fix:
[root@computer1 ~]# vim /etc/httpd/conf.d/openstack-dashboard.conf
Add the following line after 'WSGISocketPrefix run/wsgi':
WSGIApplicationGroup %{GLOBAL}
[root@computer1 ~]# systemctl restart httpd.service
⑥ Log in to the dashboard
Domain: default
User name: admin
Password: ADMIN_PASS
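A quick reachability check before opening the browser (a sketch; the dashboard lives on computer1, i.e. 10.0.0.31):
curl -s -o /dev/null -w '%{http_code}\n' http://10.0.0.31/dashboard/    # expect 200, or a 302 redirect to the login page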
VIII. Launching an instance
(1) Steps to launch an instance
Launching an instance for the first time requires:
1: create an OpenStack network
2: create a flavor (the instance hardware profile)
3: create a key pair (passwordless login from the controller)
4: create security group rules
5: launch an instance (from the command line or from the web UI)
(2) Launching an instance from the command line
① Create the network
neutron net-create --shared --provider:physical_network provider \
--provider:network_type flat oldboy
# the physical_network name 'provider' here must match flat_networks in:
#[root@controller ~]# cat /etc/neutron/plugins/ml2/ml2_conf.ini | grep flat_networks
#flat_networks = provider
Create the subnet:
neutron subnet-create --name oldgirl \
--allocation-pool start=10.0.0.101,end=10.0.0.250 \
--dns-nameserver 223.5.5.5 --gateway 10.0.0.254 \
oldboy 10.0.0.0/24
② Create the flavor (hardware profile):
List existing flavors (hardware profiles):
[root@controller ~]# openstack flavor list
Create a flavor:
[root@controller ~]# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
③ Create the key pair
Generate a key pair, then upload the public key to OpenStack:
[root@controller ~]# ssh-keygen -q -N "" -f ~/.ssh/id_rsa
[root@controller ~]# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
④ Create the security group rules
openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default
# opens two protocols: ICMP, and TCP port 22
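Verification sketch (assuming the installed openstack client supports the security group rule commands):
openstack security group rule list default    # the ICMP and TCP/22 rules should be listed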
⑤ Launch an instance (command line):
List available images:
[root@controller ~]# openstack image list
Get the network ID:
[root@controller ~]# neutron net-list
dd7500f9-1cb1-42df-8025-a232ef90d54c
Launch the instance:
openstack server create --flavor m1.nano --image cirros \
--nic net-id=dd7500f9-1cb1-42df-8025-a232ef90d54c --security-group default \
--key-name mykey oldboy
# the instance is named oldboy; the key pair name matches the one created above: mykey
Check:
[root@controller images]# openstack server list
[root@controller images]# nova list
Install the libvirt tooling to inspect the VM list (on the compute node):
yum install libvirt -y
virsh list       # list running virtual machines
netstat -lntp    # check that port 5900 (VNC) is listening
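To reach the new instance's console from a browser, the noVNC URL can be fetched on the controller (a sketch; 'oldboy' is the instance created above):
nova get-vnc-console oldboy novnc    # prints the http://controller:6080/vnc_auto.html?token=... URL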
Notes:
① If 'controller' cannot be resolved, add '10.0.0.11 controller' to your workstation's hosts file.
② If the instance hangs at the GRUB screen:
Edit the configuration on the compute node:
vim /etc/nova/nova.conf
[libvirt]
cpu_mode = none
virt_type = qemu
[root@computer1 ~]# systemctl restart openstack-nova-compute
(3) Launching an instance from the web UI
① Click 'Compute'
② Click 'Instances'
③ Click 'Launch Instance' in the upper right
④ Details: Instance Name; Count: the number of instances to create
⑤ Select the image source and click the plus sign
⑥ Flavor: the instance's hardware profile; click the plus sign next to the desired flavor
⑦ Network: choose the already-created default network
⑧ Leave the remaining settings at their defaults
Add '10.0.0.11 controller' to the Windows hosts file    # (avoids resolution errors)
After the instance is created, its console can be used to enter the virtual machine.
When opening the console to enter the VM, you may see the error:
'Booting from hard disk... GRUB'
Fix:
Edit the nova configuration on the compute node:
vim /etc/nova/nova.conf
[libvirt]
cpu_mode=none
virt_type=qemu
systemctl restart openstack-nova-compute
Then hard-reboot the instance.
IX. Adding a compute node
(1) Steps to add a compute node
1: configure the yum repositories
2: synchronize time
3: install the OpenStack base packages
4: install nova-compute
5: install neutron-linuxbridge-agent
6: start nova-compute and neutron-linuxbridge-agent
7: verify
(2) Configure yum
mount /dev/cdrom /mnt
rz    # upload openstack_rpm.tar.gz to /opt, then unpack it
Generate the repo configuration file:
echo '[local]
name=local
baseurl=file:///mnt
gpgcheck=0
[openstack]
name=openstack
baseurl=file:///opt/repo
gpgcheck=0' >/etc/yum.repos.d/local.repo
yum makecache
echo 'mount /dev/cdrom /mnt' >>/etc/rc.local
chmod +x /etc/rc.d/rc.local
(3) Time sync and the OpenStack base packages
Time synchronization:
vim /etc/chrony.conf
Change line 3 to:
server 10.0.0.11 iburst
systemctl restart chronyd
Install the OpenStack client and openstack-selinux:
yum install python-openstackclient.noarch openstack-selinux.noarch -y
(4) Install nova-compute and networking
yum install openstack-nova-compute -y
yum install openstack-utils.noarch -y
\cp /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.0.0.32
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS
openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS
Install neutron-linuxbridge-agent:
yum install openstack-neutron-linuxbridge ebtables ipset -y
\cp /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS
File: /etc/neutron/plugins/ml2/linuxbridge_agent.ini
\cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan False
(5) Start the services
[root@computer2 /]# systemctl start libvirtd openstack-nova-compute neutron-linuxbridge-agent
[root@computer2 /]# systemctl status libvirtd openstack-nova-compute neutron-linuxbridge-agent
Verify from the controller node:
nova service-list
neutron agent-list
(6) Check the new compute node
Create a VM to confirm the new compute node is usable:
① Create a host aggregate: Admin -> Host Aggregates -> Create Host Aggregate -> aggregate info (name, availability zone: oldboyboy) -> manage hosts within the aggregate (compute2) -> Create    # groups nova-compute hosts
② Launch an instance: Project -> Instances -> Launch Instance -> Details (choose the newly created availability zone oldboyboy, i.e. the host aggregate name) -> Source -> Flavor -> Networks -> Network Ports -> Security Groups -> Key Pair -> Configuration -> Metadata -> Launch    # the Admin panel shows which host the instance belongs to
Note: if the instance hangs at the GRUB screen:
vim /etc/nova/nova.conf
[libvirt]
cpu_mode = none
virt_type = qemu
systemctl restart openstack-nova-compute
X. Users, projects, and roles in OpenStack
(1) Diagram of the project/user/role relationship

1. Create a domain
openstack domain create --description "Default Domain" default
2. Create a project
openstack project create --domain default --description "Admin Project" admin
3. Create a user
openstack user create --domain default --password ADMIN_PASS admin
4. Create a role
openstack role create admin
5. Associate the role (grant authorization)
openstack role add --project admin --user admin admin
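The assignment can be confirmed afterwards (a sketch; assumes the installed client supports role assignment listing):
openstack role assignment list --user admin --project admin --names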
(2) Creating roles (admin, user) under Identity management
① Create the roles first
② Create the project
Quotas are adjusted within the project.
③ Create the users
Regular users do not see the Admin panel.
admin role: administrator of all projects
user role: a user of a single project
Only administrators can see all instances.
(3) Project quota management
Admin -> Projects -> Manage Members -> Project Quotas: set the desired quotas.
XI. Migrating the glance image service
(1) Background
As OpenStack manages more and more compute nodes, the pressure on the controller keeps growing; with every service installed on it, the controller's OpenStack services are at constant risk of all going down together.
OpenStack is designed around an SOA architecture. We have already moved horizon off the controller; next we migrate the glance image service, and the remaining services can later be moved the same way, until the controller keeps only keystone, which is the SOA best practice.
This time we migrate the glance image service from the controller to compute2.
(2) Main steps of the glance migration
1: stop the glance services on the controller
2: back up and move the glance database
3: install and configure glance on the new node
4: migrate the existing glance image files
5: update glance's API address in keystone
6: update glance's API address in every node's nova configuration
7: test: upload an image and create an instance
(3) Procedure
① Stop the services on the controller:
[root@controller ~]# systemctl stop openstack-glance-api.service openstack-glance-registry.service
[root@controller ~]# systemctl disable openstack-glance-api.service openstack-glance-registry.service
② Back up the database on the controller:
[root@controller ~]# mysqldump -uroot -B glance >glance.sql
[root@controller ~]# scp glance.sql 10.0.0.32:/root
(4) Database migration
On compute2:
yum install mariadb-server.x86_64 python2-PyMySQL -y
systemctl start mariadb
systemctl enable mariadb
mysql_secure_installation
Import the glance database backed up from the controller:
mysql < glance.sql
[root@computer2 ~]# mysql
mysql>
show databases;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
(5) Install the glance service
Install on compute2:
yum install openstack-glance -y
Send the configuration files from the controller to compute2:
[root@controller ~]# scp /etc/glance/glance-api.conf 10.0.0.32:/etc/glance/
[root@controller ~]# scp /etc/glance/glance-registry.conf 10.0.0.32:/etc/glance/
In both copied files, keep the rest of the content and change the database host IP to 10.0.0.32:
connection = mysql+pymysql://glance:GLANCE_DBPASS@10.0.0.32/glance
systemctl start openstack-glance-api.service openstack-glance-registry.service
systemctl enable openstack-glance-api.service openstack-glance-registry.service
(六)Migrate the images
[root@computer2 glance]# scp -rp 10.0.0.11:/var/lib/glance/images/* /var/lib/glance/images/
[root@computer2 images]# chown -R glance:glance /var/lib/glance/images/
Check on the control node:
source admin-openrc
openstack endpoint list | grep image
The images listed are still the same as before.
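To be sure the image files survived the copy intact, compare checksums on both hosts (a sketch):
md5sum /var/lib/glance/images/* #run on both controller and computer2; the sums should match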
(七)Update the glance API address in keystone
On the control node:
Inspect the relevant table first:
mysql keystone:
select * from endpoint;
[root@controller ~]# mysqldump -uroot keystone endpoint >endpoint.sql
[root@controller ~]# cp endpoint.sql /opt/
Rewrite the endpoint addresses in the dump:
[root@controller ~]# sed -i 's#http://controller:9292#http://10.0.0.32:9292#g' endpoint.sql
Import the modified dump:
[root@controller ~]# mysql keystone < endpoint.sql
Check the glance endpoint addresses:
[root@controller ~]# openstack endpoint list|grep image
[root@controller ~]# openstack image list
(八)Update the nova configuration file on every node
sed -i 's#http://controller:9292#http://10.0.0.32:9292#g' /etc/nova/nova.conf
grep '9292' /etc/nova/nova.conf
systemctl restart openstack-nova-api.service openstack-nova-compute.service
On the control node restart: openstack-nova-api.service
On the compute nodes restart: openstack-nova-compute.service
Control node:
[root@controller ~]# nova service-list
+----+------------------+------------+-----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+------------+-----------+---------+-------+----------------------------+-----------------+
| 1 | nova-conductor | controller | internal | enabled | up | 2020-01-05T16:53:07.000000 | - |
| 2 | nova-consoleauth | controller | internal | enabled | up | 2020-01-05T16:53:10.000000 | - |
| 3 | nova-scheduler | controller | internal | enabled | up | 2020-01-05T16:53:10.000000 | - |
| 6 | nova-compute | computer1 | nova | enabled | up | 2020-01-05T16:53:08.000000 | - |
| 7 | nova-compute | computer2 | oldboyboy | enabled | up | 2020-01-05T16:53:08.000000 | - |
+----+------------------+------------+-----------+---------+-------+----------------------------+-----------------+
(九)Test: upload an image and create an instance
1、Upload an image:
Project - Images - Create Image (centos-cloud)
Image storage location: /var/lib/glance/images
Inspect an image: qemu-img info /var/lib/glance/images/...
2、Create an instance:
Project - Instances - Launch Instance
In the web UI, images can also be uploaded under Project.
qemu-img info ... #inspect image details
While the instance boots, watch the nova side: the image is downloaded from glance and converted to the right format. Download location:
/var/lib/nova/instances/_base/
十二、The cinder block storage service
(一)Installing the cinder block service
1、Introduction to the cinder block storage service
Cinder adds disks to cloud hosts: the block storage service provides block storage to instances. How storage is allocated and consumed is decided by the block storage driver, or by the drivers of a multi-backend configuration. Many drivers are available: NAS/SAN, NFS, LVM, Ceph, and so on. Main components:
cinder-api: receives and answers external block storage requests
cinder-volume: provides the storage space
cinder-scheduler: the scheduler; decides which cinder-volume will provide the requested space
cinder-backup: backs up volumes
2、Generic installation steps for an OpenStack service
a: create the database and grant privileges
b: create the service user in keystone and associate a role
c: create the service in keystone and register the API
d: install the service packages
e: edit the service configuration files
f: sync the database
g: start the services
(二)Installing the block storage service nodes
1、Cinder on the control node
① Create the database and grant privileges
[root@controller ~]# mysql
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
② Create the cinder system user in keystone (same pattern as glance, nova, neutron) and associate the admin role
openstack user create --domain default --password CINDER_PASS cinder
openstack role add --project service --user cinder admin
③ Create the services and register the API in keystone (source admin-openrc first)
openstack service create --name cinder \
--description "OpenStack Block Storage" volume
openstack service create --name cinderv2 \
--description "OpenStack Block Storage" volumev2
openstack endpoint create --region RegionOne \
volume public http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
volume internal http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
volume admin http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
volumev2 public http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
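Before moving on, it is worth confirming that both services and all six endpoints registered (a quick check):
openstack service list | grep volume
openstack endpoint list | grep volume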
④ Install the service packages
[root@controller ~]# yum install openstack-cinder -y
⑤ Edit the service configuration file
cp /etc/cinder/cinder.conf{,.bak}
grep -Ev '^$|#' /etc/cinder/cinder.conf.bak >/etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 10.0.0.11
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password CINDER_PASS
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS
openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne
⑥ Sync the database
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
⑦ Start the services
systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
[root@controller ~]# systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service
⑧ Check:
[root@controller ~]# cinder service-list
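cinder service-list should show cinder-scheduler enabled and up. If the db sync succeeded, the cinder database also holds tables now (a quick sanity check, assuming passwordless root access to MariaDB as used earlier):
mysql cinder -e 'show tables;' | head -5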
(三)Installing cinder on the storage node
To save resources, install it on a compute node: add two extra disks to computer1, one 30 GB and one 10 GB.
1、How LVM works:

2、Storage node installation steps
① Install the LVM packages on the compute node
yum install lvm2 -y
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
② Create the volume groups
echo '- - -' >/sys/class/scsi_host/host0/scan
#the command above rescans the SCSI bus for new disks
fdisk -l
Create the physical volumes:
pvcreate /dev/sdb
pvcreate /dev/sdc
Create a volume group from each physical volume:
vgcreate cinder-ssd /dev/sdb
vgcreate cinder-sata /dev/sdc
Check the results:
pvs
vgs
lvs
③ Edit /etc/lvm/lvm.conf
Insert a line below line 130:
Only sdb and sdc are accepted; all other devices are rejected:
filter = [ "a/sdb/", "a/sdc/","r/.*/"]
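The filter line belongs inside the devices section of /etc/lvm/lvm.conf; in context it looks roughly like this:
devices {
        filter = [ "a/sdb/", "a/sdc/", "r/.*/" ]
}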
④ Install the cinder packages
yum install openstack-cinder targetcli python-keystone -y
⑤ Edit the configuration file
[root@computer1 ~]# cat /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.31
glance_api_servers = http://10.0.0.32:9292
enabled_backends = ssd,sata #unlike the official docs, two disk backends are configured here
[BACKEND]
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[COORDINATION]
[FC-ZONE-MANAGER]
[KEYMGR]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[ssl]
[ssd]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-ssd
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = ssd
[sata]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-sata
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = sata
⑥ Start the services
systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
⑦ Verify on the control node
[root@controller ~]# cinder service-list
⑧ Create a volume to verify:
Project - Compute - Volumes - Create Volume; then run lvs on computer1 to see the newly created volume.
⑨ Attach the volume:
Step 1: Volumes - Edit Volume - Manage Attachments - attach it to the target instance;
Step 2: check inside that instance:
sudo su
fdisk -l
Step 3: format and mount the new volume:
In the web UI: Project - Volumes - Manage Attachments - attach to the created instance.
Click the instance shown next to the volume and open its console:
mkfs.ext4 /dev/vdb
mount /dev/vdb /mnt
df -h
⑩ Extend the volume
Step 1:
umount /mnt
Step 2:
Project - Compute - Volumes - Edit Volume - Detach Volume
Project - Compute - Volumes - Edit Volume - Extend Volume (to 2G; confirm with lvs on computer1)
Project - Compute - Volumes - Edit Volume - Manage Attachments - reattach it to the instance
In the instance console:
mount /dev/vdb /mnt
df -h
resize2fs /dev/vdb
df -h
Check storage usage:
[root@computer1 ~]# vgs
⑪ Create volume types
The backend names already defined:
volume_backend_name = ssd
volume_backend_name = sata
Admin - Volumes - Create Volume Type - name it - View Extra Specs - for each type, enter the key (volume_backend_name) and the matching value above.
When creating a volume the type can now be chosen:
Project - Volumes - Create Volume - pick one of the volume types just defined. Confirm with lvs.
十三、Adding another flat network
Add a NIC to each of the three machines, on a LAN segment, network 172.16.0.0/24.
(一)Why add another flat network
Our current OpenStack environment has a single network bridged to eth0. Its usable IP range is limited, which caps the number of instances at the number of available IPs. Once the private cloud grows, one network no longer meets our needs, so here we learn how to add another. Because our environment runs on VMware Workstation, which cannot simulate VLANs, we keep using the flat network type.
(二)Add the eth1 NIC
Add a NIC to each virtual machine, on the LAN segment 172.16.0.0/24.
Copy ifcfg-eth0 to ifcfg-eth1, change eth1's address to one in the 172.16.0.0/24 range, then bring the NIC up with ifup eth1.
[root@computer1 network-scripts]# scp ifcfg-eth1 10.0.0.11:`pwd`
(三)Control node configuration
1: Control node
a:
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_type_flat]
flat_networks = provider,net172_16 #add the new network name net172_16
b:
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth0,net172_16:eth1 #each network name must map to its NIC
c: restart
systemctl restart neutron-server.service neutron-linuxbridge-agent.service
(四)Compute node configuration
a:
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth0,net172_16:eth1
b: restart
systemctl restart neutron-linuxbridge-agent.service
Verify on the control node:
neutron agent-list
(五)Create the network
1、From the command line:
neutron net-create --shared --provider:physical_network net172_16 \
--provider:network_type flat net172_16
neutron subnet-create --name oldgirl \
--allocation-pool start=172.16.0.1,end=172.16.0.250 \
--dns-nameserver 223.5.5.5 --gateway 172.16.0.254 \
net172_16 172.16.0.0/24
2、From the web UI:
Admin - Networks - Create Network (provider network; type: Flat) - create the subnet
Create an instance: Project - Instances - Launch Instance (the new network can be selected during creation)
Note: set up a Linux machine to act as the router.
For instances on net172_16 to reach the internet, the router server needs:
eth0 and eth1 configured; eth1 gets 172.16.0.254, the instances' gateway address, and has no gateway of its own.
Edit the kernel parameters to enable forwarding:
vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
Apply the kernel setting:
sysctl -p
Flush the firewall's filter table:
iptables -F
#add the NAT (masquerade) rule
iptables -t nat -A POSTROUTING -s 172.16.0.0/24 -j MASQUERADE
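To check that the router box is actually masquerading, inspect the NAT table and test from an instance on net172_16 (a sketch):
iptables -t nat -nL POSTROUTING #should list the MASQUERADE rule for 172.16.0.0/24
ping -c 3 223.5.5.5 #run inside a net172_16 instance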
十四、Attaching cinder to NFS shared storage
(一)Background
Cinder can use NFS as a backend store; in this respect the cinder service closely resembles nova:
nova: provides no virtualization itself, but supports several hypervisors: kvm, xen, qemu, lxc
cinder: provides no storage itself, but supports several storage technologies: lvm, nfs, glusterFS, ceph
Other backend types can be attached later using much the same method.
(二)Configuration
① Prerequisite: install NFS on the control node
Install:
[root@controller ~]# yum install nfs-utils.x86_64 -y
Configure:
[root@controller ~]# mkdir /data
[root@controller ~]# vim /etc/exports
/data 10.0.0.0/24(rw,async,no_root_squash,no_all_squash)
Start:
[root@controller ~]# systemctl restart rpcbind.socket
[root@controller ~]# systemctl restart nfs
② Storage node configuration
[root@computer1 ~]# yum install nfs-utils -y
Edit /etc/cinder/cinder.conf:
[DEFAULT]
enabled_backends = sata,ssd,nfs #add the nfs backend; its section follows below
[nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
volume_backend_name = nfs
vi /etc/cinder/nfs_shares
10.0.0.11:/data
[root@computer1 ~]# showmount -e 10.0.0.11
Export list for 10.0.0.11:
/data 10.0.0.0/24
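Before handing the share to cinder, it can be mounted by hand once to confirm NFS works end to end (a sketch; the mount point is arbitrary):
mkdir -p /tmp/nfstest
mount -t nfs 10.0.0.11:/data /tmp/nfstest
df -h /tmp/nfstest
umount /tmp/nfstest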
Restart cinder-volume:
systemctl restart openstack-cinder-volume.service
Check on the control node: cinder service-list
③ Troubleshooting
Check the volume log:
[root@computer1 ~]# vim /var/log/cinder/volume.log
If it reports a permissions error, fix the ownership:
[root@computer1 ~]# chown -R cinder:cinder /var/lib/cinder/mnt/
④ Create a volume type and attach to an instance
Admin - Volumes - Create Volume Type - View Extra Specs - set the key and value
Project - Volumes - Create Volume - Manage Attachments - attach to an instance
[root@computer1 ~]# qemu-img info /var/lib/cinder/mnt/490717a467bd12d34ec324c86a4f35b3/volume-b5f95e9f-7c11-4014-a2a0-26fc756bcdc3
image: /var/lib/cinder/mnt/490717a467bd12d34ec324c86a4f35b3/volume-b5f95e9f-7c11-4014-a2a0-26fc756bcdc3
file format: raw
virtual size: 2.0G (2147483648 bytes)
disk size: 0
[root@computer1 ~]#
[root@controller ~]# ll /data/
total 0
-rw-rw-rw- 1 qemu qemu 2147483648 Jan 8 22:48 volume-b5f95e9f-7c11-4014-a2a0-26fc756bcdc3
Instance files location:
[root@computer1 5ad1db06-c52b-49aa-893d-51d60892c7a5]# ll
total 2536
-rw------- 1 qemu qemu 25100 Jan 8 22:53 console.log
-rw-r--r-- 1 qemu qemu 2555904 Jan 8 22:54 disk
-rw-r--r-- 1 nova nova 79 Jan 8 01:19 disk.info
-rw-r--r-- 1 nova nova 2529 Jan 8 01:19 libvirt.xml
[root@computer1 5ad1db06-c52b-49aa-893d-51d60892c7a5]# qemu-img info disk
image: disk
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 2.4M
cluster_size: 65536
backing file: /var/lib/nova/instances/_base/01c2721b07aea0ded3af18fafca0af9de5ed767c
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
[root@computer1 5ad1db06-c52b-49aa-893d-51d60892c7a5]# pwd
/var/lib/nova/instances/5ad1db06-c52b-49aa-893d-51d60892c7a5
⑤ Troubleshooting
Check the error log:
[root@controller cinder]# cat /var/log/cinder/api.log
2020-01-08 23:06:08.748 3023 ERROR cinder.image.glance CommunicationError: Error finding address for http://10.0.0.11:9292/v1/images/456d7600-3bd1-4fb5-aa84-144a61c0eb07: HTTPConnectionPool(host='10.0.0.11', port=9292): Max retries exceeded with url: /v1/images/456d7600-3bd1-4fb5-aa84-144a61c0eb07 (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x6a58990>: Failed to establish a new connection: [Errno 111] ECONNREFUSED',))
2020-01-08 23:06:08.748 3023 ERROR cinder.image.glance
Where keystone says glance lives:
[root@controller ~]# openstack endpoint list | grep image
| 2b8484d91ec94bd8a5aafd56ea7a1cfe | RegionOne | glance | image | True | public | http://10.0.0.32:9292 |
| aec16a57566a4bccae96f9c63885c0b5 | RegionOne | glance | image | True | internal | http://10.0.0.32:9292 |
| ceba791635b341d79c1c47182c22c4df | RegionOne | glance | image | True | admin | http://10.0.0.32:9292 |
[root@controller ~]#
Add the configuration:
[root@controller ~]# cat /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.11
glance_api_servers = http://10.0.0.32:9292 #the default points at the control node; glance now lives on 10.0.0.32
[root@controller ~]# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
[root@controller ~]# systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service
[root@controller ~]# cinder service-list
Create an instance that lives on a volume:
Project - Instances - Launch Instance - Source (create a new volume) - volume size (must match the flavor's disk size)
Where a volume-backed instance lives on disk:
[root@computer1 024b11f6-c490-460b-93b3-b915149fa76e]# ll -h
total 24K
-rw------- 1 qemu qemu 19K Jan 8 23:19 console.log
-rw-r--r-- 1 nova nova 2.5K Jan 8 23:18 libvirt.xml
[root@computer1 024b11f6-c490-460b-93b3-b915149fa76e]# pwd
/var/lib/nova/instances/024b11f6-c490-460b-93b3-b915149fa76e
十五、Cold migration of OpenStack cloud hosts
(一)Migration prerequisites
Prerequisites:
① at least two compute nodes
② both compute nodes in the same availability zone (host aggregate)
③ enough spare compute capacity on the target node
(二)Set up SSH trust for the nova user between compute nodes
On all compute nodes:
usermod -s /bin/bash nova
Compute node 2:
[root@computer2 ~]# su - nova
Last login: Wed Jan 8 23:40:51 CST 2020 on pts/1
-bash-4.2$
-bash-4.2$
-bash-4.2$ ssh-keygen -q -N "" -f ~/.ssh/id_rsa
/var/lib/nova/.ssh/id_rsa already exists.
Overwrite (y/n)? yes
-bash-4.2$ ls .ssh/
id_rsa id_rsa.pub
-bash-4.2$ cp -fa .ssh/id_rsa.pub .ssh/authorized_keys
-bash-4.2$ ll .ssh/
total 12
-rw-r--r-- 1 nova nova 396 Jan 8 23:45 authorized_keys
-rw------- 1 nova nova 1675 Jan 8 23:45 id_rsa
-rw-r--r-- 1 nova nova 396 Jan 8 23:45 id_rsa.pub
-bash-4.2$ ssh nova@10.0.0.32
The authenticity of host '10.0.0.32 (10.0.0.32)' can't be established.
ECDSA key fingerprint is SHA256:GYtp4W43k6E/1PUlY9PGAT6HR+oI6j4E4HJF19ZuCHU.
ECDSA key fingerprint is MD5:3f:b3:8b:8e:21:38:6f:51:ba:f4:67:ca:2a:bc:e1:34.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.32' (ECDSA) to the list of known hosts.
Last login: Wed Jan 8 23:44:51 2020
-bash-4.2$
-bash-4.2$
-bash-4.2$
-bash-4.2$ scp -rp .ssh root@10.0.0.31:`pwd`
Compute node 1:
The .ssh directory was sent by root, so on compute node 1 the files are owned by root; fix the ownership:
[root@computer1 ~]# chown -R nova:nova /var/lib/nova
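Both directions should now work without a password; a quick check (a sketch; the very first connection may still prompt to accept a host key):
su - nova -c 'ssh nova@10.0.0.32 hostname' #run on computer1
su - nova -c 'ssh nova@10.0.0.31 hostname' #run on computer2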
(三)Control node configuration:
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
[root@controller ~]# systemctl restart openstack-nova-scheduler.service
[root@controller ~]# systemctl status openstack-nova-scheduler.service
(四)Configuration on both compute nodes:
[root@computer1 ~]# vim /etc/nova/nova.conf
[DEFAULT]
allow_resize_to_same_host = True
[root@computer1 ~]# systemctl restart openstack-nova-compute.service
[root@computer1 ~]# systemctl status openstack-nova-compute.service
(五)Create an instance to verify
Create an instance:
Project - Instances - Launch Instance
Admin - Instances - Edit Instance - Migrate Instance - confirm.
netstat -lntp #check for the instance's console port to confirm the migration succeeded
十六、Customizing cloud hosts
(一)The OpenStack instance creation workflow
Flow chart:

Downloaded images are cached at: /var/lib/nova/instances/_base
(二)Customizing a cloud host
Question: why do cloud hosts booted from the same image template get hostnames that match their instance names?
Question: why can we log in to our cloud hosts from the control node without a password?
Question: why does the control node's nova configuration need the last two lines shown below (service_metadata_proxy and metadata_proxy_shared_secret)?
vi /etc/nova/nova.conf
......
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
Question: why do neutron-metadata and dhcp-agent need the following settings?
vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
[root@controller ~]# ssh cirros@10.0.0.111
$ cat .ssh/authorized_keys
$ curl http://169.254.169.254/latest/meta-data/
$ curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key/
$ ls .ssh/authorized_keys
.ssh/authorized_keys
$ ping 169.254.169.254
$ route -n
[root@controller ~]# ip netns #list the network namespace "containers"
qdhcp-df321bea-c8fd-4920-9a65-5f89bc036357 (id: 1)
qdhcp-dd7500f9-1cb1-42df-8025-a232ef90d54c (id: 0)
Enter a namespace like this:
[root@controller ~]# ip netns exec qdhcp-df321bea-c8fd-4920-9a65-5f89bc036357 /bin/bash
[root@controller ~]# ifconfig
[root@controller ~]# route -n
[root@controller ~]# ip a
[root@controller ~]# netstat -lntp
[root@controller ~]# ps -ef | grep 19193
(三)Cloud host customization flow

十七、OpenStack layer-3 networking with VXLAN
(一)Questions to consider
Question: why does a cloud host bought from a public cloud, reached over a public IP, show only a private IP inside?
Question: every public cloud user can create several VPC networks; how do vendors isolate so many of them?
With VLAN, at most 4094 isolated networks are possible (IDs 1-4094).
With VXLAN, up to 4096*4096-2, roughly 16.78 million, isolated networks are possible (a 24-bit VNI).
(二)Configuration steps
1、Control node configuration
① VXLAN control node configuration, step 1
Add a NIC to every node to carry the VXLAN tunnel traffic:
[root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth2
TYPE=Ethernet
BOOTPROTO=none
NAME=eth2
DEVICE=eth2
ONBOOT=yes
IPADDR=172.16.1.11
NETMASK=255.255.255.0
GATEWAY=172.16.1.254
DNS1=223.5.5.5
[root@controller ~]#ifup eth2
② VXLAN control node configuration, step 2
Edit /etc/neutron/neutron.conf on the control node.
[DEFAULT]
...
core_plugin = ml2
service_plugins =
Change it to:
[root@controller ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
③ VXLAN control node configuration, step 3
Change /etc/neutron/plugins/ml2/ml2_conf.ini to:
[DEFAULT]
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider,net172_16
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
vni_ranges = 1:100000
[securitygroup]
enable_ipset = True
④ VXLAN control node configuration, step 4
Change /etc/neutron/plugins/ml2/linuxbridge_agent.ini to:
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:eth0,net172_16:eth1
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = True
local_ip = 172.16.1.11
l2_population = True
⑤ VXLAN control node configuration, step 5
Change /etc/neutron/l3_agent.ini to:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
[AGENT]
Start the services:
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
2、Compute node configuration
① VXLAN compute node configuration
Change /etc/neutron/plugins/ml2/linuxbridge_agent.ini to:
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:eth0,net172_16:eth1
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = True
local_ip = 172.16.1.31
l2_population = True
systemctl restart neutron-linuxbridge-agent.service
Check on the control node:
[root@controller ~]# neutron agent-list
3、Web UI operations
Step 1: mark the oldboy provider network as external. Admin - Networks - oldboy - Edit Network - check External Network.
Step 2: create an internal test network. Project - Networks - Create Network (test, 192.168.1.0/24)
Step 3: enable the dashboard's router feature. On computer1 (where horizon now runs):
[root@computer1 ~]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': True,
Restart apache:
[root@computer1 ~]# systemctl restart httpd
Step 4: as the oldboy user, create a network. Project - Networks - Create Network (the test internal network)
Step 5: as the oldboy user, create a router. Project - Network - Routers (testr); add the external gateway and an internal interface to the router.
Step 6: as admin, create two instances on the internal network and ping www.baidu.com
Instances created on the test network cannot, in this setup, be reached from outside (the 10.0.0.0/24 side).
To make one reachable, bind it a floating IP from the external network: choose Associate Floating IP next to the instance, then connect to the instance through the floating IP.
(三)Network diagram:

List the network namespaces:
ip netns
Enter a namespace:
ip netns exec ... /bin/bash
Check the addresses:
ifconfig
十八、What to know for custom development
The OpenStack APIs follow RESTful principles: calling an API is just sending an HTTP request. Through the APIs you can create and delete instances, users, volumes, and so on. Once you know the OpenStack APIs, you can use any programming language to build your own dashboard or other custom features. To learn more about RESTful APIs, see https://blog.csdn.net/hjc1984117/article/details/77334616
(一)Obtaining a token
1、Method 1: request a token with curl:
curl -i -X POST -H "Content-type: application/json" \
-d '{
"auth": {
"identity": {
"methods": [
"password"
],
"password": {
"user": {
"domain": {
"name": "default"
},
"name": "admin",
"password": "123456"
}
}
},
"scope": {
"project": {
"domain": {
"name": "default"
},
"name": "admin"
}
}
}
}' http://10.0.0.11:5000/v3/auth/tokens
2、Method 2:
openstack token issue|awk 'NR==5{print $4}'
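The curl calls below reference $token, so store the extracted value in a shell variable first (a sketch; matching the id row by name is slightly more robust than a fixed line number):
token=$(openstack token issue | awk '$2=="id"{print $4}')
echo $token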
(二)Calling the glance API
List the glance images (piping through a JSON pretty-printer makes the output easier to read):
curl -H "X-Auth-Token:$token" -H "Content-Type: application/json" http://10.0.0.32:9292/v2/images
(三)Deleting a glance image
Get the image id:
openstack image list
Delete the image:
curl -X DELETE -H "X-Auth-Token:$token" -H "Content-Type: application/json" http://10.0.0.32:9292/v2/images/160a5601-6092-445a-8e1b-fbb63e3c7434
(四)Calling the nova API
neutron net-list #get the network id
openstack flavor list #get the flavor id
openstack image list #get the image id
Launch an instance (you can watch the web UI do the same thing with wireshark: an HTTP POST on TCP port 8774):
curl -H "Content-Type:application/json" -H "X-Auth-Token:$token" -d '
{
"server": {
"name": "vvvvvvvvvvv",
"imageRef": "91d3c4d8-085d-45cc-9d4c-3cd89bf63e28",
"availability_zone": "nova",
"key_name": "mykey",
"flavorRef": "382ecb64-cbb6-43ba-bb84-b5d489a78845",
"OS-DCF:diskConfig": "AUTO",
"max_count": 1,
"min_count": 1,
"networks": [{
"uuid": "d35f62b8-dbfd-4804-8784-12e74e2fda9d"
}],
"security_groups": [{
"name": "e3430acf-6650-4ed2-8d67-aa10de80a78c"
}]
}
}' http://10.0.0.11:8774/v2.1/faa9a9bf8d524fd7932f49b82be953ff/servers
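To confirm the build request was accepted, the same servers endpoint can be queried with GET (the project id is the one from the URL above):
curl -H "X-Auth-Token:$token" http://10.0.0.11:8774/v2.1/faa9a9bf8d524fd7932f49b82be953ff/servers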
1、Deleting an instance
nova list #get the instance ID
curl -X DELETE -H "Content-Type:application/json" -H "X-Auth-Token:$token" http://10.0.0.11:8774/v2.1/faa9a9bf8d524fd7932f49b82be953ff/servers/85d25f05-e683-4782-9da1-b0f45978f462 #the trailing UUID is the instance id
