I previously wrote "openstack mitaka 配置詳解", but I recently found that the Aliyun mirrors no longer provide the Mitaka repositories, so I started learning the Ocata release instead and wrote up the process in this document.
Official OpenStack Ocata documentation: https://docs.openstack.org/ocata/install-guide-rdo/environment.html
If you would rather not install step by step, you can run the install script here: http://www.cnblogs.com/yaohong/p/7251852.html
1: Environment
1.1 Host networking
- OS version: CentOS 7
- Controller node: 1 CPU, 4 GB RAM, 5 GB storage
- Compute node: 1 CPU, 2 GB RAM, 10 GB storage
Notes:
1: Install two machines from a CentOS 7 image (see http://www.cnblogs.com/yaohong/p/7240387.html for details), making sure to configure two network cards on each and to size the memory of both machines as above.
2: Set the hostnames to controller and compute1 respectively:
#hostnamectl set-hostname hostname
3: Edit the /etc/hosts file on both controller and compute1:
#vi /etc/hosts
4: Verify:
Ping each node from the other, and ping an external site (e.g. baidu.com).
1.2 Network Time Protocol (NTP)
[Install NTP on the controller node]
NTP keeps the nodes' clocks synchronized; if the clocks drift apart, you may be unable to create instances.
#yum install chrony (install the package)
#vi /etc/chrony.conf and add:
server NTP_SERVER iburst
allow <your subnet> (optional; allows hosts in that subnet to query this NTP server)
#systemctl enable chronyd.service (start at boot)
#systemctl start chronyd.service (start the NTP service)
[Install NTP on the compute node]
# yum install chrony
#vi /etc/chrony.conf — remove all server entries and add one that references the controller node:
server controller iburst
# systemctl enable chronyd.service (start at boot)
# systemctl start chronyd.service (start the NTP service)
[Verify NTP]
Run #chronyc sources on both the controller and compute node; output like the following indicates success.
1.3 OpenStack packages
[Install the OpenStack packages on both the controller and compute nodes]
Install the latest OpenStack repository:
#yum install centos-release-openstack-ocata
#yum install https://rdoproject.org/repos/rdo-release.rpm
#yum upgrade (upgrade the packages on the hosts)
#yum install python-openstackclient (install the required OpenStack client)
#yum install openstack-selinux
1.4 SQL database
Installed on the controller node. Depending on the distribution, the guide uses MariaDB or MySQL; OpenStack services also support other SQL databases.
#yum install mariadb mariadb-server python2-PyMySQL
#vi /etc/my.cnf.d/openstack.cnf (on CentOS, MariaDB reads extra configuration from /etc/my.cnf.d/)
Add:
[mysqld]
bind-address = 192.168.1.73 (the IP address of the machine running MySQL; here, the controller's address)
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
character-set-server = utf8
#systemctl enable mariadb.service (start the database service at boot)
#systemctl start mariadb.service (start the database service)
Secure the MySQL installation:
#mysql_secure_installation (see pitfall 1 in http://www.cnblogs.com/yaohong/p/7352386.html)
1.5 Message queue
The message queue plays a critical role (as the traffic hub) in the overall OpenStack architecture. Precisely because OpenStack is deployed flexibly, with loosely coupled modules and a flat architecture, it depends heavily on the message queue (not necessarily RabbitMQ;
other message queue products work as well), so the queue's message throughput and HA capability directly affect OpenStack's performance. If RabbitMQ is not running, your entire OpenStack platform is unusable. RabbitMQ uses port 5672.
#yum install rabbitmq-server
#systemctl enable rabbitmq-server.service (start at boot)
#systemctl start rabbitmq-server.service (start the service)
#rabbitmqctl add_user openstack RABBIT_PASS (create the openstack user; replace RABBIT_PASS with a password of your own)
#rabbitmqctl set_permissions openstack ".*" ".*" ".*" (grant the new user permissions; without them the user cannot receive or deliver messages)
1.6 Memcached
Memcached is an optional component. It uses port 11211.
[Controller node]
#yum install memcached python-memcached
Change OPTIONS in /etc/sysconfig/memcached to:
OPTIONS="-l 127.0.0.1,::1,controller"
#systemctl enable memcached.service
#systemctl start memcached.service
2: Identity service
2.1 Install and configure
Log in to the database and create the keystone database.
[Controller node only]
#mysql -u root -p
#CREATE DATABASE keystone;
Grant a user proper access with a password:
#GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
#GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS'; (replace KEYSTONE_DBPASS with a password of your choice)
Install and configure the components:
#yum install openstack-keystone httpd mod_wsgi
#vi /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone (replace KEYSTONE_DBPASS with the password you set above)
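Besides the database connection, the official Ocata guide configures the Fernet token provider in the same file; a minimal sketch:

```ini
; /etc/keystone/keystone.conf (fragment, per the official Ocata install guide)
[token]
provider = fernet
```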
Populate the Identity service database:
# su -s /bin/sh -c "keystone-manage db_sync" keystone (be sure to check that the tables were actually created in the database)
Initialize the Fernet keys:
#keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
(the official guide also initializes credential keys at this point: keystone-manage credential_setup --keystone-user keystone --keystone-group keystone)
Bootstrap the Identity service:
#keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:35357/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
Configure Apache:
#vi /etc/httpd/conf/httpd.conf
ServerName controller (set ServerName to the hostname to prevent startup errors)
Create a link to the /usr/share/keystone/wsgi-keystone.conf file:
#ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Start httpd:
#systemctl enable httpd.service
#systemctl start httpd.service
Configure the administrative account:
#vi admin and add:
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
2.2 Create a domain, projects, users, and roles
Create the Service project:
#openstack project create --domain default \
--description "Service Project" service
Create the Demo project:
#openstack project create --domain default \
--description "Demo Project" demo
Create the demo user:
#openstack user create --domain default \
--password-prompt demo
Create the user role:
#openstack role create user
Add the user role to the demo project and user:
#openstack role add --project demo --user demo user
2.3 Verify
#vi /etc/keystone/keystone-paste.ini
Remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections.
Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:
#unset OS_AUTH_URL OS_PASSWORD
As the admin user, request an authentication token:
#openstack --os-auth-url http://controller:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue
You may hit an error here:
Since it is an HTTP error, go back to the Apache HTTP server configuration, restart the Apache service, and re-export the administrative environment variables:
# systemctl restart httpd.service
$ export OS_USERNAME=admin
$ export OS_PASSWORD=ADMIN_PASS
$ export OS_PROJECT_NAME=admin
$ export OS_USER_DOMAIN_NAME=Default
$ export OS_PROJECT_DOMAIN_NAME=Default
$ export OS_AUTH_URL=http://controller:35357/v3
$ export OS_IDENTITY_API_VERSION=3
Once done, run the request again:
#openstack --os-auth-url http://controller:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue
After entering the password, correct output means the configuration is right.
Figure 2.4: verifying the Identity service as admin
As the demo user, request an authentication token:
#openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name demo --os-username demo token issue
2.4 Create OpenStack client environment scripts
The environment variables can be put into scripts:
#vi admin-openrc and add:
export OS_PROJECT_DOMAIN_NAME=default
#vi demo-openrc and add:
export OS_PROJECT_DOMAIN_NAME=default
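The tails of the two scripts are cut off above. Based on the variables exported in section 2.1 and the pattern in the official guide, the complete files would look roughly like this (ADMIN_PASS and DEMO_PASS are placeholders for the passwords you chose):

```shell
# admin-openrc (sketch; matches the variables from section 2.1)
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

# demo-openrc (same pattern for the demo user, against port 5000)
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
```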
#. admin-openrc (load admin-openrc to set the Identity service endpoint and the admin project and user credentials)
#openstack token issue (request an authentication token)
Figure 2.6: requesting an authentication token
3: Image service
3.1 Install and configure
Create the glance database.
Log in to MySQL:
#mysql -u root -p (connect to the database server as root)
#CREATE DATABASE glance; (create the glance database)
Grant access:
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS'; (grant proper access to the glance database; replace GLANCE_DBPASS with a password of your choice)
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
Source the environment variables:
#. admin-openrc
Create the glance user:
#openstack user create --domain default --password-prompt glance
Add the admin role to the glance user and service project:
#openstack role add --project service --user glance admin
Create the glance service entity:
#openstack service create --name glance \
--description "OpenStack Image" image
Figure 3.1: creating the glance service entity
Create the Image service API endpoints:
#openstack endpoint create --region RegionOne \
image public http://controller:9292
Figure 3.2: creating the public image API endpoint
#openstack endpoint create --region RegionOne \
image internal http://controller:9292
Figure 3.3: creating the internal image API endpoint
#openstack endpoint create --region RegionOne \
image admin http://controller:9292
Figure 3.4: creating the admin image API endpoint
Install:
#yum install openstack-glance
#vi /etc/glance/glance-api.conf and configure:
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
#vi /etc/glance/glance-registry.conf and configure:
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
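The two fragments above only cover [database]; the official Ocata guide also fills in [keystone_authtoken] and [paste_deploy] in both files, plus [glance_store] in glance-api.conf. A sketch, with GLANCE_PASS as a placeholder for the glance user's password:

```ini
; /etc/glance/glance-api.conf (additional fragments, per the official Ocata guide)
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```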
Populate the database:
#su -s /bin/sh -c "glance-manage db_sync" glance
Enable and start glance:
#systemctl enable openstack-glance-api.service \
openstack-glance-registry.service
# systemctl start openstack-glance-api.service \
openstack-glance-registry.service
3.2 Verify
Source the environment variables:
#. admin-openrc
Download a small test image:
#wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
If wget is missing: yum -y install wget, then run wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img again
Upload the image:
#openstack image create "cirros" \
--file cirros-0.3.5-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public
Figure 3.5: uploading the image
Check:
#openstack image list
Figure 3.6: confirming the image upload
If the image is listed, glance is configured correctly.
4: Compute service
4.1 Install and configure the controller node
Create the nova databases:
#mysql -u root -p (connect to the database server as root)
#CREATE DATABASE nova_api;
#CREATE DATABASE nova;
#CREATE DATABASE nova_cell0; (create the nova_api, nova, and nova_cell0 databases)
Grant proper access to the databases:
#GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS'; (replace NOVA_DBPASS with a password of your choice)
#GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
#GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
#GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
#GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
#GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
Source the environment variables:
#. admin-openrc
Create the nova user:
#openstack user create --domain default \
--password-prompt nova
Add the admin role to the nova user:
#openstack role add --project service --user nova admin
Create the nova service entity:
#openstack service create --name nova \
--description "OpenStack Compute" compute
Create the Compute service API endpoints:
#openstack endpoint create --region RegionOne \
compute public http://controller:8774/v2.1
#openstack endpoint create --region RegionOne \
compute internal http://controller:8774/v2.1
#openstack endpoint create --region RegionOne \
compute admin http://controller:8774/v2.1
Create the placement user, add the admin role, and create the Placement service entity and API endpoints:
#openstack user create --domain default --password-prompt placement
#openstack role add --project service --user placement admin
#openstack service create --name placement --description "Placement API" placement
#openstack endpoint create --region RegionOne placement public http://controller:8778
#openstack endpoint create --region RegionOne placement internal http://controller:8778
#openstack endpoint create --region RegionOne placement admin http://controller:8778
Install:
# yum install openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler openstack-nova-placement-api
#vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 10.0.0.11 (the controller node's management IP)
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
#vi /etc/httpd/conf.d/00-nova-placement-api.conf
Add:
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>
Restart the httpd service:
#systemctl restart httpd
Populate the nova-api database:
#su -s /bin/sh -c "nova-manage api_db sync" nova
Register the cell0 database:
#su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell:
#su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Populate the nova database:
#su -s /bin/sh -c "nova-manage db sync" nova
Verify that nova cell0 and cell1 are registered correctly:
#nova-manage cell_v2 list_cells
Enable and start the Compute services:
#systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
4.2 Install and configure the compute node
#yum install openstack-nova-compute
Edit:
#vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS (the compute node's management IP)
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
#egrep -c '(vmx|svm)' /proc/cpuinfo (check whether your compute node supports hardware acceleration for virtual machines)
If it returns 0, edit #vi /etc/nova/nova.conf
[libvirt]
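The [libvirt] fragment above is cut off; in the official Ocata guide, the setting for nodes without hardware acceleration is:

```ini
; /etc/nova/nova.conf on the compute node (fragment, per the official Ocata guide)
[libvirt]
virt_type = qemu
```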
Start the Compute service and its dependency, and configure them to start automatically at boot:
#systemctl enable libvirtd.service openstack-nova-compute.service
#systemctl start libvirtd.service openstack-nova-compute.service
Add the compute node to the cell database
(run these commands on the controller node)
#. admin-openrc
# openstack hypervisor list
#su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
To have new compute nodes discovered automatically instead of rerunning discover_hosts by hand, set in vi /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300
4.3 Verify
Verify on the controller node.
Source the environment variables:
#. admin-openrc
#openstack compute service list
If the services are listed and up, the configuration is correct.
#openstack catalog list
#openstack image list
#nova-status upgrade check
5: Networking service
5.1 Install and configure the controller node
Create the neutron database:
#mysql -u root -p
#CREATE DATABASE neutron;
Grant proper access to the neutron database, replacing NEUTRON_DBPASS with a suitable password:
#GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
#GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
Source the environment variables:
#. admin-openrc
Create the neutron user:
#openstack user create --domain default --password-prompt neutron
#openstack role add --project service --user neutron admin (add the admin role to the neutron user)
Create the neutron service entity:
#openstack service create --name neutron \
--description "OpenStack Networking" network
Create the Networking service API endpoints:
#openstack endpoint create --region RegionOne \
network public http://controller:9696
#openstack endpoint create --region RegionOne \
network internal http://controller:9696
#openstack endpoint create --region RegionOne \
network admin http://controller:9696
Install the components (this setup uses self-service/vxlan networks):
#yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
#vi /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the ML2 plug-in:
#vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
Configure the Linux bridge agent:
#vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME (the name of the second network card)
[vxlan]
enable_vxlan = true
local_ip = 192.168.1.146 (this node's local network IP)
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the layer-3 agent:
#vi /etc/neutron/l3_agent.ini
[DEFAULT]
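The l3_agent.ini fragment above shows only the section header; for the Linux bridge setup used here, the official Ocata guide sets just the interface driver:

```ini
; /etc/neutron/l3_agent.ini (fragment, per the official Ocata guide)
[DEFAULT]
interface_driver = linuxbridge
```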
Configure the DHCP agent:
#vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Configure the metadata agent:
#vi /etc/neutron/metadata_agent.ini
[DEFAULT]
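The metadata_agent.ini fragment above is truncated; per the official Ocata guide it needs the controller's address and a shared secret (METADATA_SECRET is a placeholder you choose yourself; it must match the value configured in nova.conf):

```ini
; /etc/neutron/metadata_agent.ini (fragment, per the official Ocata guide)
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
```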
Configure the Compute service to use the Networking service:
#vi /etc/nova/nova.conf
[neutron]
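The [neutron] fragment above is truncated; on the controller, the official Ocata guide fills it in roughly as follows (NEUTRON_PASS and METADATA_SECRET are placeholders; METADATA_SECRET must match the metadata agent's value):

```ini
; /etc/nova/nova.conf on the controller (fragment, per the official Ocata guide)
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
```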
Create a symlink to the plug-in configuration:
#ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the database:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the Compute API service:
#systemctl restart openstack-nova-api.service
Enable and start the Networking services:
#systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
#systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
Enable the layer-3 service and set it to start at boot:
# systemctl enable neutron-l3-agent.service
#systemctl start neutron-l3-agent.service
5.2 Install and configure the compute node
#yum install openstack-neutron-linuxbridge ebtables ipset
#vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the Linux bridge agent (vxlan):
#vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
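The fragment above is truncated; on the compute node this file mirrors the controller's, with local_ip set to this node's own address. A sketch (PROVIDER_INTERFACE_NAME and OVERLAY_INTERFACE_IP_ADDRESS are placeholders for the second NIC's name and this compute node's IP):

```ini
; /etc/neutron/plugins/ml2/linuxbridge_agent.ini on the compute node (fragment)
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```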
Configure Compute to use the Networking service:
#vi /etc/nova/nova.conf
[neutron]
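The [neutron] fragment above is truncated; on the compute node, the official Ocata guide fills it in roughly as follows (NEUTRON_PASS is a placeholder):

```ini
; /etc/nova/nova.conf on the compute node (fragment, per the official Ocata guide)
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
```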
Restart the Compute service:
#systemctl restart openstack-nova-compute.service
#systemctl enable neutron-linuxbridge-agent.service
#systemctl start neutron-linuxbridge-agent.service
5.3 Verify
Source the environment variables:
#. admin-openrc
#openstack extension list --network
#openstack network agent list
6: Dashboard
6.1 Configure
#yum install openstack-dashboard
#vi /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
TIME_ZONE = "TIME_ZONE" (replace TIME_ZONE with your time zone identifier, e.g. "Asia/Shanghai")
Restart the services:
#systemctl restart httpd.service memcached.service
6.2 Log in
In a browser, open http://<controller IP>/dashboard/auth/login
Domain: default
Username: admin or demo
Password: the one you set
Figure 6.1: the login page