OpenStack Train installation


Viewing releases

https://releases.openstack.org/

OpenStack installation:

Overview

OpenStack is an open-source cloud computing management platform made up of a collection of open-source software projects. It was initiated jointly by NASA (the US National Aeronautics and Space Administration) and Rackspace, and is released under the Apache License.
OpenStack provides scalable, elastic cloud computing services for both private and public clouds. The project's goal is a cloud management platform that is simple to deploy, massively scalable, feature-rich, and based on unified standards.
OpenStack covers networking, virtualization, operating systems, servers, and more. It is a cloud platform under active development; by maturity and importance, its projects are divided into core projects, incubated projects, supporting projects, and related projects. Each project has its own committee and project technical lead, and the classification is not fixed: an incubated project can be promoted to a core project as it matures and grows in importance.
Core components
1. Compute (Nova): a set of controllers that manage the full life cycle of virtual machine instances for individual users or groups, providing virtual services on demand. It handles creating, booting, shutting down, suspending, pausing, resizing, migrating, rebooting, and destroying VMs, and configures specs such as CPU and memory.
2. Object Storage (Swift): a system for object storage in large, scalable deployments, with built-in redundancy and fault tolerance. It stores and retrieves files, and can provide image storage for Glance and volume backups for Cinder.
3. Image Service (Glance): a lookup and retrieval system for virtual machine images. It supports multiple image formats (AKI, AMI, ARI, ISO, QCOW2, Raw, VDI, VHD, VMDK) and can create, upload, and delete images and edit basic image metadata.
4. Identity Service (Keystone): provides authentication, service rules, and service tokens for the other OpenStack services, and manages Domains, Projects, Users, Groups, and Roles.
5. Networking (Neutron): provides network virtualization for the cloud and network connectivity for the other OpenStack services. Users can define Networks, Subnets, and Routers, and configure DHCP, DNS, load balancing, and L3 services. Networks support GRE and VLAN, and the plugin architecture supports many mainstream vendors and technologies, such as Open vSwitch.
6. Block Storage (Cinder): provides stable block storage for running instances. Its plugin driver architecture simplifies creating and managing block devices, such as creating and deleting volumes and attaching and detaching volumes from instances.
7. Dashboard (Horizon): a web management portal for the OpenStack services that simplifies operations such as launching instances, allocating IP addresses, and configuring access control.
8. Metering (Ceilometer): collects almost every event that occurs inside OpenStack and provides the data for billing, monitoring, and other services.
9. Orchestration (Heat): provides template-driven orchestration to automate the deployment of cloud infrastructure environments (compute, storage, and network resources).
10. Database Service (Trove): provides scalable and reliable relational and non-relational database engines in an OpenStack environment.

Preparation

Prepare two CentOS 7 servers: configure IP addresses and hostnames, synchronize the system time, disable the firewall and SELinux, and add IP-to-hostname mappings.

1. Set the hostnames to controller and compute respectively

# hostnamectl set-hostname <hostname>

2. Edit /etc/hosts on both controller and compute

# vi /etc/hosts
172.30.154.44 controller
172.30.154.47 compute

3. Verify

Ping the two nodes from each other, and ping an external site (e.g. www.baidu.com) to confirm outbound connectivity
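The mutual-ping check can be scripted. A minimal sketch (the host names are the ones mapped in /etc/hosts above; `getent` is assumed to be available, as on any standard CentOS 7 install):

```shell
# Confirm each name resolves before testing connectivity
# (getent consults /etc/hosts as well as DNS).
check_host() {
    getent hosts "$1" > /dev/null
}

for h in controller compute; do
    if check_host "$h"; then
        echo "$h: resolves"
    else
        echo "$h: does NOT resolve"
    fi
done

# Then verify reachability from each node, for example:
#   ping -c 3 controller
#   ping -c 3 compute
#   ping -c 3 www.baidu.com    # external connectivity
```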

4. Disable the firewall

systemctl stop firewalld.service           # stop firewalld
systemctl disable firewalld.service        # keep firewalld from starting at boot

5. Disable SELinux

# setenforce 0                 # takes effect immediately, until reboot
# vi /etc/selinux/config
# change SELINUX=enforcing to SELINUX=disabled
The config file change takes effect after a reboot
6. Raise the maximum number of simultaneously open files to 65535
[root@controller ~]# ulimit -n 65535
[root@controller ~]# ulimit -n
65535
[root@controller ~]# vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
7. Set up time synchronization

Synchronize the clocks of both nodes (the original leaves the details as a web search)
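One common approach on CentOS 7 is chrony. A sketch only — the pool address and subnet below are assumptions to adapt to your environment:

```shell
# Install and enable on both nodes:
#   yum install -y chrony
#   systemctl enable chronyd && systemctl start chronyd

# /etc/chrony.conf on controller: sync from a public pool, serve the LAN
server pool.ntp.org iburst
allow 172.30.154.0/24

# /etc/chrony.conf on compute: sync from the controller instead
#server controller iburst

# Afterwards, verify with:  chronyc sources
```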

8. Reboot the servers

reboot

Deploying the services

Install the EPEL repository

[root@controller ~]# yum install epel-release -y
[root@compute ~]# yum install epel-release -y

Install the OpenStack repository (Train release)

[root@controller ~]# yum install centos-release-openstack-train -y
[root@compute ~]# yum install centos-release-openstack-train -y

Install the OpenStack client and the openstack-selinux package

[root@controller ~]# yum install -y python2-openstackclient openstack-selinux
[root@compute ~]# yum install -y python2-openstackclient openstack-selinux

Install the MariaDB database and memcached

[root@controller ~]# yum install mariadb mariadb-server python2-PyMySQL memcached -y

Install the message queue service

[root@controller ~]# yum install rabbitmq-server -y

Install the Keystone service

[root@controller bin]# yum install openstack-keystone httpd mod_wsgi -y

Install the Glance service

[root@controller ~]# yum install openstack-glance -y

Install the Placement service

[root@controller ~]# yum install openstack-placement-api -y

Install the Nova services on controller

[root@controller ~]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler  -y

Install the Nova compute service on compute

[root@compute ~]# yum install openstack-nova-compute -y

Install the Neutron services on controller

[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables ipset iproute -y

Install the Neutron agent on compute

[root@compute ~]# yum install openstack-neutron-linuxbridge ebtables ipset iproute -y

Install the Dashboard component

[root@controller ~]# yum install openstack-dashboard -y

Install Cinder on controller

[root@controller ~]# yum install openstack-cinder -y

Install Cinder and LVM on compute

[root@compute ~]# yum install lvm2 device-mapper-persistent-data openstack-cinder targetcli python2-keystone -y

Enable hardware acceleration (load the KVM kernel module; Intel CPUs assumed here)

[root@controller ~]# modprobe kvm-intel
[root@compute ~]# modprobe kvm-intel

Install dependencies

[root@controller ~]# yum -y install libibverbs

Configuring the message queue service

Start the service

[root@controller ~]# systemctl start rabbitmq-server.service
[root@controller ~]# systemctl enable rabbitmq-server.service

Add a user

[root@controller ~]# rabbitmqctl add_user openstack openstack

Grant permissions (the administrator tag is applied to the openstack user created above, since no other user exists)

[root@controller ~]# rabbitmqctl set_user_tags openstack administrator
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Enable the management UI

[root@controller ~]# rabbitmq-plugins enable rabbitmq_management

Configuring the memcached service

Edit the configuration file

[root@controller ~]# vim /etc/sysconfig/memcached 
PORT="11211"
USER="memcached"
MAXCONN="65535"
CACHESIZE="1024"
OPTIONS="-l 127.0.0.1,::1,controller"

Start the service

[root@controller ~]# systemctl start memcached.service
[root@controller ~]# systemctl enable memcached.service

Configuring the database service

Edit the configuration file

[root@controller ~]# vim /etc/my.cnf.d/mariadb-server.cnf
bind-address = 172.30.154.44
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Start the service

[root@controller ~]# systemctl start mariadb.service
[root@controller ~]# systemctl enable mariadb.service

Create the databases

MariaDB [(none)]> create database keystone;
MariaDB [(none)]> create database glance;
MariaDB [(none)]> create database nova;
MariaDB [(none)]> create database nova_api;
MariaDB [(none)]> create database nova_cell0;
MariaDB [(none)]> create database neutron;
MariaDB [(none)]> create database cinder;
MariaDB [(none)]> create database placement;

Grant privileges to the service users

MariaDB [(none)]> grant all privileges on keystone.* to 'keystone'@'localhost' identified by '123456';
MariaDB [(none)]> grant all privileges on keystone.* to 'keystone'@'%' identified by '123456';

MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'localhost' identified by '123456';
MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'%' identified by '123456';

MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'localhost' identified by '123456';
MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'%' identified by '123456';

MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'localhost' identified by '123456';
MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'%' identified by '123456';

MariaDB [(none)]> grant all privileges on nova_cell0.* to 'nova'@'localhost' identified by '123456';
MariaDB [(none)]> grant all privileges on nova_cell0.* to 'nova'@'%' identified by '123456';

MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'localhost' identified by '123456';
MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'%' identified by '123456';

MariaDB [(none)]> grant all privileges on cinder.* to 'cinder'@'localhost' identified by '123456';
MariaDB [(none)]> grant all privileges on cinder.* to 'cinder'@'%' identified by '123456';

MariaDB [(none)]> grant all privileges on placement.* to 'placement'@'localhost' identified by '123456';
MariaDB [(none)]> grant all privileges on placement.* to 'placement'@'%' identified by '123456';

MariaDB [(none)]> flush privileges;
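The CREATE/GRANT statements above follow the same pattern for every database, so they can be generated with a small shell function and piped into mysql. A sketch (123456 matches the password used above; substitute your own):

```shell
# Emit CREATE DATABASE + GRANT statements for one database/user pair.
gen_grants() {
    db=$1; user=$2; pass=$3
    printf "CREATE DATABASE IF NOT EXISTS %s;\n" "$db"
    for h in localhost '%'; do
        printf "GRANT ALL PRIVILEGES ON %s.* TO '%s'@'%s' IDENTIFIED BY '%s';\n" \
            "$db" "$user" "$h" "$pass"
    done
}

# All databases used in this guide, with their service users.
all_grants() {
    pass=$1
    gen_grants keystone   keystone  "$pass"
    gen_grants glance     glance    "$pass"
    gen_grants nova       nova      "$pass"
    gen_grants nova_api   nova      "$pass"
    gen_grants nova_cell0 nova      "$pass"
    gen_grants neutron    neutron   "$pass"
    gen_grants cinder     cinder    "$pass"
    gen_grants placement  placement "$pass"
    echo "FLUSH PRIVILEGES;"
}

all_grants 123456            # review the SQL first, then:
# all_grants 123456 | mysql -u root -p
```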

Secure the database service by running the mysql_secure_installation script

[root@controller ~]# mysql_secure_installation

Configuring the Keystone service

Edit the configuration file

[root@controller ~]# vim /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:your_password@controller/keystone
[token]
provider = fernet

Synchronize the database

[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone

Initialize the key repositories

[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

[root@controller ~]# keystone-manage bootstrap --bootstrap-password openstack --bootstrap-admin-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne

Configure the httpd service

# Edit the configuration file
[root@controller ~]# vi /etc/httpd/conf/httpd.conf
ServerName controller

# Create a symlink
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

# Start the service
[root@controller ~]# systemctl start httpd
[root@controller ~]# systemctl enable httpd

Create the admin environment script

[root@controller ~]# vi admin-openrc 
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_VOLUME_API_VERSION=2

Verify the environment variables

[root@controller ~]# source admin-openrc
[root@controller ~]# openstack token issue
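If `openstack token issue` fails with an authentication or URL error, a frequent cause is a variable that was never exported. A minimal sketch that checks for the key variables defined in admin-openrc:

```shell
# Report any unset (or empty) credential variables.
check_env() {
    missing=0
    for v in OS_USERNAME OS_PASSWORD OS_AUTH_URL OS_PROJECT_NAME; do
        eval "val=\"\${$v:-}\""
        if [ -z "$val" ]; then
            echo "missing: $v"
            missing=1
        fi
    done
    return $missing
}

if check_env; then
    echo "admin environment looks complete"
else
    echo "admin environment incomplete -- source admin-openrc first"
fi
```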

Create the service project

[root@controller ~]# openstack project create --domain default --description "Service Project" service

Create the demo project

[root@controller ~]# openstack project create --domain default --description "Demo Project" demo

Create the demo user

[root@controller ~]# openstack user create --domain default --password-prompt demo

Create the user role

[root@controller ~]# openstack role create user

Add the user role to the demo project and user

[root@controller ~]# openstack role add --project demo --user demo user

Create the demo environment script

[root@controller ~]# vi demo-openrc 
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Configuring the Glance service

Create and configure the glance user

[root@controller ~]# openstack user create --domain default --password-prompt glance
[root@controller ~]# openstack role add --project service --user glance admin

Create the glance service entity

[root@controller ~]# openstack service create --name glance  --description "OpenStack Image" image

Create the glance service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne  image admin http://controller:9292
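Every service gets the same URL registered for the public, internal, and admin interfaces, so the three commands can be generated in a loop. A sketch that just prints the commands (pipe the output to sh to run them); the same function works later for placement, nova, neutron, and cinder:

```shell
# Print the three endpoint-create commands for a service/URL pair.
endpoint_cmds() {
    svc=$1; url=$2
    for iface in public internal admin; do
        echo "openstack endpoint create --region RegionOne $svc $iface $url"
    done
}

endpoint_cmds image http://controller:9292
# endpoint_cmds image http://controller:9292 | sh    # execute them
```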

Edit the configuration file

[root@controller ~]# vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:your_password@controller/glance
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Synchronize the database

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance

Start the service

[root@controller ~]# systemctl enable openstack-glance-api.service
[root@controller ~]# systemctl start openstack-glance-api.service

Upload an image

[root@controller ~]# glance image-create --name Centos7 --disk-format qcow2 --container-format bare --progress < CentOS-7-x86_64-GenericCloud-1907.qcow2

List images

[root@controller ~]# openstack image list

Configuring the Placement service on controller

Create and configure the placement user

[root@controller ~]# openstack user create --domain default --password-prompt placement
[root@controller ~]# openstack role add --project service --user placement admin

Create the placement service entity

[root@controller ~]# openstack service create --name placement   --description "Placement API" placement

Create the placement service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne   placement public http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne   placement internal http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne   placement admin http://controller:8778

Edit the configuration file

[root@controller ~]# vim /etc/placement/placement.conf
[placement_database]
connection = mysql+pymysql://placement:your_password@controller/placement

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = placement

Synchronize the database

[root@controller ~]# su -s /bin/sh -c "placement-manage db sync" placement

Restart the service

[root@controller ~]# systemctl restart httpd  

Configuring the Nova service on controller

Create and configure the nova user

[root@controller ~]# openstack user create --domain default --password-prompt nova
[root@controller ~]# openstack role add --project service --user nova admin

Create the nova service entity

[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute

Create the nova service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

Edit the configuration file

[root@controller nova]# cp -a /etc/nova/nova.conf{,.bak2}
[root@controller nova]# grep -Ev '^$|#' /etc/nova/nova.conf.bak2 > /etc/nova/nova.conf
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
my_ip = 172.30.154.44
transport_url = rabbit://openstack:openstack@controller:5672/
auth_strategy=keystone
block_device_allocate_retries = 600

[api_database]
connection = mysql+pymysql://nova:your_password@controller/nova_api

[database]
connection = mysql+pymysql://nova:your_password@controller/nova

[api]
auth_strategy = keystone 

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement

[root@controller ~]# vi /etc/httpd/conf.d/00-placement-api.conf

Add the following:

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

Restart the httpd service

[root@controller ~]# systemctl restart httpd

Synchronize the databases

[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova

Verify

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

Start the services

[root@controller ~]# systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@controller ~]# systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Configuring the Nova service on compute

Edit the configuration file

[root@compute nova]# cp -a /etc/nova/nova.conf{,.bak2}
[root@compute nova]# grep -Ev '^$|#' /etc/nova/nova.conf.bak2 > /etc/nova/nova.conf
[root@compute ~]# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@controller
my_ip = 172.30.154.47
auth_strategy=keystone
block_device_allocate_retries = 600

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova

[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement

[libvirt]
virt_type = kvm
# use qemu instead when the compute node is itself a virtual machine
#virt_type = qemu

Start the services

[root@compute ~]# systemctl start libvirtd.service openstack-nova-compute.service
[root@compute ~]# systemctl enable libvirtd.service openstack-nova-compute.service   

Register the compute node in the cell database on controller

List the nova-compute services

[root@controller ~]# openstack compute service list --service nova-compute

Register the host in the database

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Configuring the Neutron service on controller

Create and configure the neutron user

[root@controller ~]# openstack user create --domain default --password-prompt neutron
[root@controller ~]# openstack role add --project service --user neutron admin

Create the neutron service entity

[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network

Create the neutron service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne  network internal http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne  network admin http://controller:9696

Edit the configuration files (Linux bridge networking)

[root@controller ~]# vi /etc/neutron/neutron.conf

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:your_password@controller/neutron

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

[root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
#type_drivers = flat,vlan,vxlan
type_drivers = flat,vlan
#tenant_network_types = vxlan
tenant_network_types =
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
#vni_ranges = 1:1000

[securitygroup]
enable_ipset = true

[root@controller ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
# ens160 is the node's external-facing physical NIC
physical_interface_mappings = provider:ens160

[vxlan]
enable_vxlan = false
#enable_vxlan = true
#local_ip = 172.30.154.44
#l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[root@controller ~]# vi /etc/neutron/l3_agent.ini

[DEFAULT]
interface_driver = linuxbridge

[root@controller ~]# vi /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

[root@controller ~]# vi /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_host = controller
#metadata_proxy_shared_secret = 000000
metadata_proxy_shared_secret = METADATA_SECRET

[root@controller ~]# vi /etc/nova/nova.conf

[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET

Create a symlink

[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Synchronize the database

[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Start the services

# Restart the nova-api service
[root@controller ~]# systemctl restart openstack-nova-api.service

# Linux bridge agents
[root@controller ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
[root@controller ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

Configuring the Neutron service on compute

Edit the configuration files (Linux bridge)

[root@compute ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

[root@compute ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:ens160

[vxlan]
enable_vxlan = false
#local_ip = 192.168.29.149
#l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[root@compute ~]# vi /etc/nova/nova.conf

[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron

Start the services

# Restart the nova-compute service
[root@compute ~]# systemctl stop openstack-nova-compute.service
[root@compute ~]# systemctl start openstack-nova-compute.service
# Note: a plain systemctl restart here has been seen to produce errors

# Linux bridge agent
[root@compute ~]# systemctl start neutron-linuxbridge-agent.service
[root@compute ~]# systemctl enable neutron-linuxbridge-agent.service

Verify

[root@controller ~]# openstack network agent list

# Check the logs
[root@compute ~]# tail /var/log/nova/nova-compute.log

Configuring the Dashboard component

Edit the configuration file

[root@controller ~]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
WEBROOT = '/dashboard/'

Regenerate the Apache configuration for the dashboard

[root@controller ~]# cd /usr/share/openstack-dashboard
[root@controller openstack-dashboard]# python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf
[root@controller openstack-dashboard]# ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf
[root@controller openstack-dashboard]# vim /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}
# Comment out the generated directives and add the replacements below
#WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi.py
#Alias /static /usr/share/openstack-dashboard/static

WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
Alias /dashboard/static /usr/share/openstack-dashboard/static

Restart the services

[root@controller ~]# systemctl restart httpd.service memcached.service

Access the web UI

Browse to http://<controller-ip>/dashboard

Installing and configuring a storage node (similar to a compute node)

Without a dedicated storage node, data is stored by default on the compute node's local disk, under /var/lib/nova/instances

Local disk has the advantage of good performance; the drawback is inflexibility

In this walkthrough the storage role is simply co-located on an existing node (from Cinder's point of view, that node is just a storage node)

Configure the Cinder service on compute (this assumes a second disk, separate from the system disk)

Format the cinder disk

[root@compute ~]# mkfs.xfs -f /dev/sdb

Create the LVM physical volume and volume group

[root@compute ~]# pvcreate /dev/sdb
[root@compute ~]# vgcreate cinder-volumes /dev/sdb

Add a filter to the LVM configuration so that only the system disk and the cinder disk are scanned (accept sda and sdb, reject everything else; otherwise the host would also scan LVs that belong to instances)

[root@compute ~]# vim /etc/lvm/lvm.conf
filter = [ "a/sda/", "a/sdb/", "r/.*/" ]

Start the LVM metadata service

[root@compute ~]# systemctl enable lvm2-lvmetad.service
[root@compute ~]# systemctl start lvm2-lvmetad.service

Edit the configuration file

[root@compute ~]# cp -a /etc/cinder/cinder.conf{,.bak}
[root@compute ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
[root@compute ~]# vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 172.30.154.47
enabled_backends = lvm
glance_api_servers = http://controller:9292

[database]
connection = mysql+pymysql://cinder:your_password@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
#auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

# Add an [lvm] section yourself if one is not present
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
#volumes_dir = $state_path/volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
#iscsi_ip_address = 172.30.154.47    # storage node IP

Start the services

[root@compute ~]# systemctl start openstack-cinder-volume.service target.service
[root@compute ~]# systemctl enable openstack-cinder-volume.service target.service

Configure the tgt service on compute (this service is not mentioned in the official docs and its exact role is unclear, but it appears in the source code and omitting it produced errors, so it is installed here)

Install scsi-target-utils

[root@compute ~]# yum --enablerepo=epel -y install scsi-target-utils libxslt

Configure

vim /etc/tgt/tgtd.conf
Add the following line:
include /var/lib/cinder/volumes/*

Start the tgtd service
# Enable at boot
systemctl enable tgtd

# Start
systemctl start tgtd

Configuring the Cinder service on controller

Create and configure the cinder user

[root@controller ~]# openstack user create --domain default --password-prompt cinder
[root@controller ~]# openstack role add --project service --user cinder admin

Create the cinder service entities

[root@controller ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
[root@controller ~]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

Create the cinder service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne  volumev2 public http://controller:8776/v2/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne  volumev2 internal  http://controller:8776/v2/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne  volumev2 admin  http://controller:8776/v2/%\(project_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne  volumev3 public http://controller:8776/v3/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne  volumev3 internal  http://controller:8776/v3/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne  volumev3 admin http://controller:8776/v3/%\(project_id\)s

Edit the configuration file

[root@controller ~]# cp -a /etc/cinder/cinder.conf{,.bak}
[root@controller ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
[root@controller ~]# vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 172.30.154.44

[database]
connection = mysql+pymysql://cinder:your_password@controller/cinder
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
#auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[root@controller ~]# vim /etc/nova/nova.conf

[cinder]
os_region_name = RegionOne

Restart the Nova API service

systemctl restart openstack-nova-api.service

Synchronize the database

[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder

Start the services

[root@controller ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service 
[root@controller ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service 

Check the service status

[root@controller ~]# openstack volume service list

Create a volume

# 1 GB in size
[root@controller ~]# cinder create --name demo_volume 1

Attach the volume

# Look up the volume id
[root@controller ~]# cinder list
# Attach the volume to an instance
[root@controller ~]# nova volume-attach mycentos e9804810-9dce-47f6-84f7-25a8da672800

Creating an instance

Download an image

Download a cloud image (the author notes the stock images on the official sites have bugs and does not recommend them)

The simplest approach is to use a standard image. The major Linux distributions all provide cloud images that work directly in OpenStack; download links:

CentOS 6: http://cloud.centos.org/centos/6/images/

CentOS 7: http://cloud.centos.org/centos/7/images/

Ubuntu 14.04: http://cloud-images.ubuntu.com/trusty/current/

Ubuntu 16.04: http://cloud-images.ubuntu.com/xenial/current/

Build a custom CentOS 7 image for OpenStack (recommended)

Follow this blog post to build one:
https://www.jianshu.com/p/137e6f3f0369

Upload the image

openstack image create "centos" --file CentOS-7-x86_64-Azure-1707.qcow2 --disk-format qcow2 --container-format bare --public

openstack image create --disk-format qcow2 --container-format bare --public --file /root/CentOS-7-x86_64-Minimal-1708.iso CentOS-7-x86_64

Check the uploaded image

openstack image list

Create a network

openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider

Parameters:
--share allows all projects to use the virtual network
--external defines an external network (use --internal to create an internal one instead)
--provider-physical-network provider and --provider-network-type flat attach the network to the flat provider network

Create a subnet

openstack subnet create --network provider  --allocation-pool start=10.71.11.50,end=10.71.11.60 --dns-nameserver 114.114.114.114 --gateway 10.71.11.254 --subnet-range 10.71.11.0/24 provider
openstack network list

Create a flavor

openstack flavor create --id 1 --vcpus 4 --ram 128 --disk 1 m2.nano
openstack flavor list

Generate a key pair on the controller; the public key must be added to the Compute service before an instance can be launched

ssh-keygen -q -N ""
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

Add security group rules to allow ICMP (ping) and secure shell (SSH)

openstack security group rule create --proto icmp default
openstack security group list

Allow secure shell (SSH) access

openstack security group rule create --proto tcp --dst-port 22 default

Create a virtual machine

Option 1: follow the wizard in the web UI

Option 2:

# flavor and key pair are the ones created above; replace net-id with your own
# network id from `openstack network list`, and the image name with one from
# `openstack image list`
openstack server create --flavor m2.nano --image cirros --nic net-id=6ef57ba4-b18b-4d37-9696-ca6d740ae586 --security-group default --key-name mykey cirros

Check the instance status

openstack server list


免責聲明!

本站轉載的文章為個人學習借鑒使用,本站對版權不負任何法律責任。如果侵犯了您的隱私權益,請聯系本站郵箱yoyou2525@163.com刪除。



 
粵ICP備18138465號   © 2018-2025 CODEPRJ.COM