OpenStack environment deployment (references: http://www.cnblogs.com/kevingrace/p/5707003.html and https://docs.openstack.org/mitaka/zh_CN)
Note: before changing any service's configuration file, copy it first so a bad edit can be rolled back instead of hacked at blindly.
1. OS: CentOS 7
2. Node count: four for now
1. Controller node: controller1 IP: 192.168.2.201 public: 124.65.181.122
2. Compute node: nova1 IP: 192.168.2.202 public: 124.65.181.122
3. Block storage node: cinder1 IP: 192.168.2.222
4. Shared file service node: manila1 IP: 192.168.2.223
3. Name resolution; disable iptables and SELinux (all nodes)
Name resolution: vi /etc/hosts
192.168.2.201 controller1
192.168.2.202 nova1
192.168.2.222 cinder1
192.168.2.223 manila1
Note: you can edit the hosts file on controller1 once and then copy it to each of the other nodes: scp /etc/hosts <node IP>:/etc/hosts
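The hosts setup above can be scripted. A minimal sketch, assuming root SSH access between the nodes already works; the lines that would touch /etc/hosts and call scp are left commented so the block is safe to run anywhere (it only writes a temp file):

```shell
# Build the hosts entries used in this guide in one place.
HOSTS_FILE=$(mktemp)
cat >> "$HOSTS_FILE" <<'EOF'
192.168.2.201 controller1
192.168.2.202 nova1
192.168.2.222 cinder1
192.168.2.223 manila1
EOF
# Append locally, then push to the other nodes (commented out here):
# cat "$HOSTS_FILE" >> /etc/hosts
# for node in nova1 cinder1 manila1; do scp /etc/hosts "$node":/etc/hosts; done
grep -c '^192\.168\.2\.' "$HOSTS_FILE"   # expect 4 entries
```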
Disable SELinux
Permanently: vi /etc/selinux/config
SELINUX=disabled
Temporarily: setenforce 0
Disable iptables
Permanently: systemctl disable firewalld.service
Temporarily: systemctl stop firewalld.service
4. Configure the Network Time Protocol (NTP)
Controller node:
yum install chrony
Edit: vi /etc/chrony.conf
allow 192.168.2.0/24 #subnet allowed to sync time from this node
systemctl enable chronyd.service #start on boot
systemctl start chronyd.service
timedatectl set-timezone Asia/Shanghai #set the time zone
timedatectl status #check
Other nodes:
yum install chrony
Edit: vi /etc/chrony.conf
server controller1 iburst #hostname/IP of the time server
systemctl enable chronyd.service #start on boot
systemctl start chronyd.service
timedatectl set-timezone Asia/Shanghai #set the time zone
chronyc sources
Verify time synchronization
Run the same command on every node: chronyc sources
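In `chronyc sources` output, a line beginning with `^*` marks the source chronyd is currently synchronized to. A small check for that marker, fed with sample output here since a live chronyd may not be running:

```shell
# Print "synced" if any line of `chronyc sources` starts with ^* .
is_synced() {
  if grep -q '^\^\*'; then echo synced; else echo not-synced; fi
}
sample='^* controller1    2   6   377    33   +12us[ +20us] +/-   15ms'
echo "$sample" | is_synced     # on a real node: chronyc sources | is_synced
```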
5. Upgrade packages and the system (all nodes)
yum install centos-release-openstack-mitaka
Upgrade packages: yum upgrade #if a new kernel is installed, reboot to use it
Client: yum install python-openstackclient
Security policy: yum install openstack-selinux
6. Database, MySQL (controller node)
Install packages: yum install mariadb mariadb-server MySQL-python
Copy the sample config: cp /usr/share/mariadb/my-medium.cnf /etc/my.cnf #or /usr/share/mysql/my-medium.cnf /etc/my.cnf
Edit: vi /etc/my.cnf
[mysqld]
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
Enable on boot: systemctl enable mariadb.service
Link: ln -s /usr/lib/systemd/system/mariadb.service /etc/systemd/system/multi-user.target.wants/mariadb.service
Initialize the database: mysql_install_db --datadir="/var/lib/mysql" --user="mysql"
Start the database: systemctl start mariadb.service
Set the root password and secure the installation: mysql_secure_installation
Now log in to the database and create one database per core service, granting the matching privileges. The passwords must match the connection strings used later in each service's config, i.e. the <service>123 form (nova also needs the nova_api database referenced by nova.conf):
CREATE DATABASE keystone; #identity service
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone123';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone123';
CREATE DATABASE glance; #image service
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance123';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance123';
CREATE DATABASE nova; #compute service
CREATE DATABASE nova_api; #compute API database (used by nova.conf [api_database])
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova123';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova123';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova123';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova123';
CREATE DATABASE neutron; #network service
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron123';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron123';
CREATE DATABASE cinder; #block storage service
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder123';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder123';
Flush privileges: flush privileges;
Check: show databases;
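The per-service statements above can be generated in one loop instead of typed out by hand. A sketch; the passwords follow the `<service>123` form so they match the `connection =` strings in the service configs later, and the nova user also covers the `nova_api` database that nova.conf references:

```shell
# Emit CREATE/GRANT SQL for every service database; pipe into mysql.
gen_sql() {
  for db in keystone glance nova nova_api neutron cinder; do
    user=${db%%_*}    # nova_api reuses the nova user
    cat <<EOF
CREATE DATABASE IF NOT EXISTS $db;
GRANT ALL PRIVILEGES ON $db.* TO '$user'@'localhost' IDENTIFIED BY '${user}123';
GRANT ALL PRIVILEGES ON $db.* TO '$user'@'%' IDENTIFIED BY '${user}123';
EOF
  done
}
gen_sql        # to apply:  gen_sql | mysql -uroot -p
```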
7. Message queue, RabbitMQ (controller node)
Install packages: yum install rabbitmq-server
Start rabbitmq (port 5672):
systemctl enable rabbitmq-server.service
Link:
ln -s /usr/lib/systemd/system/rabbitmq-server.service /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service
Start: systemctl start rabbitmq-server.service
#If startup fails, run the binary in /usr/sbin directly (./rabbitmq-server) to see the detailed error
#A common cause is that the system hostname does not match the entry in /etc/hosts; rabbitmq-server.service cannot resolve itself otherwise
Note: to verify it started, check the listening ports: netstat -anpt
Add the openstack user and password: rabbitmqctl add_user openstack openstack123 #openstack123 is a password of your choosing
Grant the openstack user permissions: rabbitmqctl set_permissions openstack ".*" ".*" ".*" #allow configure, write, and read access
List available plugins: rabbitmq-plugins list
Enable a plugin: rabbitmq-plugins enable rabbitmq_management #rabbitmq_management provides the web management UI
Restart the rabbitmq service: systemctl restart rabbitmq-server.service
Port: lsof -i:15672
Test access at http://192.168.2.201:15672; the default username and password are both guest
8. Identity service, keystone (ports 5000 and 35357) #run on the controller node
1. Install packages: yum install openstack-keystone httpd mod_wsgi memcached python-memcached
Note: memcached provides the identity service's cache
2. First generate a random value: openssl rand -hex 10
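`openssl rand -hex 10` emits 10 random bytes rendered as 20 hex characters; that value becomes admin_token below. A sketch of capturing it for reuse:

```shell
# Generate the admin token once and keep it in a variable.
ADMIN_TOKEN=$(openssl rand -hex 10)
echo "$ADMIN_TOKEN"      # 20 lowercase hex characters, e.g. b6f89e3f5d766bb71bf8
```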
3. Copy the keystone config first so a bad edit can be traced: cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
Edit vi /etc/keystone/keystone.conf:
[DEFAULT]
admin_token = b6f89e3f5d766bb71bf8 #the random value generated above
token_format = UUID
[database]
connection = mysql+pymysql://keystone:keystone123@controller1/keystone
[memcache]
servers = controller1:11211
[token]
provider = uuid
driver = keystone.token.persistence.backends.sql.Token
Note: by default keystone stores tokens in SQL, with a token lifetime of one day (24h). Every command (request) issued by every OpenStack component is token-validated, and each access creates a token, so the token table grows very quickly; over time the dead rows pile up, and in a private cloud the table can reach tens or hundreds of thousands of mostly-expired entries. That many invalid tokens makes SQL queries against the token table slow and degrades performance. Either clean the token table periodically with a scheduled script, or store tokens in memcache and let its expiry mechanism drop unused entries automatically. (This guide uses the second approach.)
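If you stay on the SQL token backend instead, Mitaka-era keystone ships a `keystone-manage token_flush` command for exactly this cleanup. A sketch of the scheduled-script option; the cron file name and hourly schedule are my choice, not from this guide:

```shell
# Cron entry in /etc/cron.d format (schedule, user, command) that purges
# expired tokens hourly. Built as a string here; writing it to
# /etc/cron.d/keystone-token-flush on the controller would activate it.
CRON_LINE='0 * * * * keystone /usr/bin/keystone-manage token_flush >/dev/null 2>&1'
echo "$CRON_LINE"
```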
4. Create the database tables with the sync command: su -s /bin/sh -c "keystone-manage db_sync" keystone
Check the tables: mysql -h 192.168.2.201 -u keystone -pkeystone123 #password inline, logs straight into the keystone database
#echo -n redhat | openssl md5   generates an md5-hashed password
#update users set passwd='e2798af12a7a0f4f70b4d69efbc25f4d' where userid = '1';
5. Start apache and memcached
Start memcached:
systemctl enable memcached
Note: if this prints "Created symlink from /etc/systemd/system/multi-user.target.wants/memcached.service to /usr/lib/systemd/system/memcached.service." it has just created the start-on-boot link; run the command again afterwards.
systemctl start memcached #start memcached
### /usr/bin/memcached -d -uroot #if port 11211 is not listening, start it this way instead
To verify, check that the default port 11211 is open
6. Configure httpd; edit /etc/httpd/conf/httpd.conf:
ServerName controller1:80
Create /etc/httpd/conf.d/wsgi-keystone.conf with the following content:
Listen 5000
Listen 35357
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
Start httpd:
systemctl enable httpd
systemctl start httpd
Filter the listeners: netstat -lntup | grep httpd #or list all open ports with netstat -anpt
7. Create the keystone user
Temporarily set the admin_token environment variables, used to create users
Set the authentication token: export OS_TOKEN=b6f89e3f5d766bb71bf8 #must be the random value written into /etc/keystone/keystone.conf
Set the endpoint URL: export OS_URL=http://controller1:35357/v3
Set the identity API version: export OS_IDENTITY_API_VERSION=3
8. Create the service entity for identity: openstack service create --name keystone --description "Openstack Identity" identity
(Note: entity ID e6aa9c8d2e504978a77d09d09d8213d4, name keystone, type identity) #informational only, you can ignore it
#On error, re-run keystone-manage db_sync and try again
9. Create the identity service API endpoints (public, internal, admin):
openstack endpoint create --region RegionOne identity public http://controller1:5000/v3
openstack endpoint create --region RegionOne identity internal http://controller1:5000/v3
openstack endpoint create --region RegionOne identity admin http://controller1:5000/v3
List the endpoints:
10. Create the 'default' domain: openstack domain create --description "Default Domain" default
List the domains:
11. Create the admin project, admin user, and admin role; then add the admin role to the admin project and user
Project: openstack project create --domain default --description "Admin Project" admin
User: openstack user create --domain default --password-prompt admin #enter a password at the prompt; this guide uses admin123
Role: openstack role create admin
Add: openstack role add --project admin --user admin admin #--project admin is the project, --user admin is the user
Note: a rough sketch of the OpenStack logical model ======================================================
1. Create a domain; everything below lives inside it, so the domain is the outer frame
2. admin is the project for administrative tasks; demo is the project for routine tasks; service is the project holding each service's dedicated user
3. The service project contains one service entity per module
4. Each module exposes three endpoint variants: public, internal, and admin
5. Apart from the service project's dedicated users, each project generally maps to one user and one role
6. Each module's user is named after the OpenStack service it belongs to (keystone, glance, nova, etc.)
7. Each module's user is generally bound to one role
8. The basic structure: domain → project → user → role, with each service entity exposing the endpoint variants
Other commands:
List domains: openstack domain list
List API endpoints: openstack endpoint list
List projects: openstack project list
List users: openstack user list
List roles: openstack role list
Show a config file without comments or blank lines: cat <config file path> | grep -v "^#" | grep -v "^$"
( Some common problems: http://www.cnblogs.com/kevingrace/p/5811167.html )
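The comment-stripping one-liner above, wrapped as a function and demonstrated on a throwaway file (no real config file is assumed here):

```shell
# Show only the effective (non-comment, non-blank) lines of a config file.
effective_config() { grep -v '^#' "$1" | grep -v '^$'; }

CFG=$(mktemp)
printf '# a comment\n\nkey = value\n' > "$CFG"
effective_config "$CFG"      # prints: key = value
```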
Troubleshooting: if listing resources produces output like the following
1. [root@controller1 ~]# openstack project list
Could not find requested endpoint in Service Catalog. or
__init__() got an unexpected keyword argument 'token' or
The resource could not be found. (HTTP 404)
redo the token authentication (after unset OS_TOKEN OS_URL)
12. Create the service project: openstack project create --domain default --description "Service Project" service
13. Create the demo project: openstack project create --domain default --description "Demo Project" demo
List the projects:
Create the demo user: openstack user create --domain default --password-prompt demo #enter a password at the prompt; this guide uses demo123
Create the user role: openstack role create user
Add: openstack role add --project demo --user demo user
List the users:
List the roles:
14. Verify by requesting a token (keystone is configured correctly only if this succeeds): unset OS_TOKEN OS_URL
As the admin user, request an authentication token: openstack --os-auth-url http://controller1:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
15. Create the environment variable scripts:
Edit admin-openrc:
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin123
export OS_AUTH_URL=http://controller1:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Edit demo-openrc:
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo123
export OS_AUTH_URL=http://controller1:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Switch to the admin environment: . admin-openrc
Switch to the demo environment: . demo-openrc
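A sketch of writing and verifying the admin script; the values match admin-openrc above, but the file is created as a temp file here so the block is safe to run anywhere:

```shell
# Write the admin credentials file, then source it in a subshell to check
# the variables load without polluting the current environment.
RC=$(mktemp)
cat > "$RC" <<'EOF'
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin123
export OS_AUTH_URL=http://controller1:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
( . "$RC" && echo "$OS_USERNAME/$OS_PROJECT_NAME" )   # prints: admin/admin
```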
Image service, glance (ports: API 9292; registry 9191)
1. Install packages: yum install openstack-glance python-glance python-glanceclient
2. Edit /etc/glance/glance-api.conf #note: copy the config file first so a bad edit can be rolled back
[database]
connection = mysql+pymysql://glance:glance123@controller1/glance
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[keystone_authtoken]
auth_uri = http://controller1:5000
auth_url = http://controller1:35357
memcached_servers = controller1:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance123
[paste_deploy]
flavor = keystone
3. Edit /etc/glance/glance-registry.conf #note: copy the config file first so a bad edit can be rolled back
[database]
connection = mysql+pymysql://glance:glance123@controller1/glance
[glance_store]
[keystone_authtoken]
auth_uri = http://controller1:5000
auth_url = http://controller1:35357
memcached_servers = controller1:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance123
[paste_deploy]
flavor = keystone
Create the database tables and initialize the database: su -s /bin/sh -c "glance-manage db_sync" glance #informational output can be ignored
Test the database login and list the tables: mysql -h controller1 -uglance -pglance123
4. Switch environments: . admin-openrc
Create the glance user: openstack user create --domain default --password-prompt glance #this guide uses glance123 as the glance user's password
List the users:
Add the admin role to the glance user and service project: openstack role add --project service --user glance admin
Enable on boot: systemctl enable openstack-glance-api openstack-glance-registry
Start: systemctl start openstack-glance-api openstack-glance-registry
Confirm the services are up by checking their ports: netstat -lnutp |grep 9191
5. Create the glance service entity: openstack service create --name glance --description "OpenStack Image service" image
List the entities:
Create the image service API endpoints:
openstack endpoint create --region RegionOne image public http://controller1:9292
openstack endpoint create --region RegionOne image internal http://controller1:9292
openstack endpoint create --region RegionOne image admin http://controller1:9292
List the endpoints:
6. Test
Download a source image: wget -q http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Note: if wget is not found, run yum install wget -y
Upload: glance image-create --name "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress
List the images:
#####If the upload fails with a 500 error, run su -s /bin/sh -c "glance-manage db_sync" glance
Compute service
Packages to install on the controller node:
yum install -y openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
Note: see the accompanying OpenStack technical reference document for what each package does
On the controller node, edit /etc/nova/nova.conf (the annotated lines also cover the case where the controller doubles as a compute node)
[DEFAULT]#enable only the compute and metadata APIs
my_ip=192.168.2.201 #controller node IP
enabled_apis=osapi_compute,metadata
auth_strategy=keystone
allow_resize_to_same_host=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
network_api_class=nova.network.neutronv2.api.API
use_neutron=true
rpc_backend=rabbit
[api_database]#database connection
connection=mysql+pymysql://nova:nova123@controller1/nova_api
[database]
connection=mysql+pymysql://nova:nova123@controller1/nova
[glance]#location of the image service API
...
api_servers= http://controller1:9292
[keystone_authtoken]#identity service access
...
auth_uri=http://controller1:5000
auth_url = http://controller1:35357
memcached_servers = controller1:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova123
[libvirt]
...
virt_type=kvm #add this line only if the controller also acts as a compute node
[neutron]#network settings
...
url=http://controller1:9696
auth_url = http://controller1:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron123
service_metadata_proxy = True
metadata_proxy_shared_secret = neutron123 #must match metadata_proxy_shared_secret in metadata_agent.ini
[oslo_messaging_rabbit]#message queue access
...
rabbit_host=controller1
rabbit_userid=openstack
rabbit_password=openstack123 #the password chosen for the openstack user
[vnc]#VNC proxy
...
keymap=en-us #add only if the controller also acts as a compute node
vncserver_listen=$my_ip
vncserver_proxyclient_address=$my_ip
novncproxy_base_url=http://124.65.181.122:6080/vnc_auto.html #add only if the controller also acts as a compute node
Sync the compute databases:
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova
Create the nova user: openstack user create --domain default --password-prompt nova #note: this guide uses nova123
List the users:
Add the admin role to the nova user: openstack role add --project service --user nova admin
Start the nova services:
systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Create the nova service entity: openstack service create --name nova --description "OpenStack Compute" compute
List the entities:
Create the compute service API endpoints:
openstack endpoint create --region RegionOne compute public http://controller1:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://controller1:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://controller1:8774/v2.1/%\(tenant_id\)s
List the endpoints:
Check:
Packages to install on the compute node: yum install -y openstack-nova-compute sysfsutils
On the compute node, edit /etc/nova/nova.conf
[DEFAULT]
my_ip=192.168.2.202 #compute node 1's IP
enabled_apis=osapi_compute,metadata
auth_strategy=keystone
firewall_driver=nova.virt.firewall.NoopFirewallDriver
network_api_class=nova.network.neutronv2.api.API
use_neutron=true
rpc_backend=rabbit
[api_database]
connection=mysql+pymysql://nova:nova123@controller1/nova_api
[database]
connection=mysql+pymysql://nova:nova123@controller1/nova
[glance]
api_servers= http://controller1:9292
[keystone_authtoken]
auth_uri=http://controller1:5000
auth_url = http://controller1:35357
memcached_servers = controller1:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova123 #the compute service password chosen earlier
[libvirt]
virt_type=qemu
[neutron]
url=http://controller1:9696
auth_url = http://controller1:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron123 #the network service password chosen earlier
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_rabbit]
rabbit_host=controller1
rabbit_userid=openstack
rabbit_password=openstack123
[vnc]
keymap=en-us
vncserver_listen=0.0.0.0 #listen on all IPs
vncserver_proxyclient_address=$my_ip
novncproxy_base_url=http://192.168.2.201:6080/vnc_auto.html #controller node IP
Start the services:
systemctl enable libvirtd openstack-nova-compute
systemctl start libvirtd openstack-nova-compute
Test that glance works: (resolved; details below)
Test that keystone works:
Network service
Install on the controller node: yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
Install on the compute node: yum install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset
1. On the controller node, edit the following config files
1. Edit /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
rpc_backend = rabbit
[database]
connection = mysql+pymysql://neutron:neutron123@controller1/neutron
[keystone_authtoken]
auth_uri = http://controller1:5000
auth_url = http://controller1:35357
memcached_servers = controller1:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron123
[nova]#notify compute of network topology changes
auth_url = http://controller1:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova123
[oslo_concurrency]
lock_path = /var/log/neutron/tmp
[oslo_messaging_rabbit]
rabbit_host = controller1
rabbit_userid = openstack
rabbit_password = openstack123
2. Edit /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security #enable port security
[ml2_type_flat]#flat provider network configuration
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
3. Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:enp5s0 #NIC name
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = true
local_ip = 192.168.2.201
l2_population = true
4. Edit /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
5. Edit /etc/neutron/l3_agent.ini, adding the following
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
6. Edit /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = controller1
metadata_proxy_shared_secret = neutron123
1. Create the link: ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
2. Create the neutron user: openstack user create --domain default --password-prompt neutron #this guide uses neutron123 as the user's password
List the users:
3. Add the admin role to the neutron user: openstack role add --project service --user neutron admin
4. Populate the database: su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
5. Create the neutron service entity: openstack service create --name neutron --description "OpenStack Network" network
List the entities:
6. Create the network service API endpoints:
openstack endpoint create --region RegionOne network public http://controller1:9696
openstack endpoint create --region RegionOne network internal http://controller1:9696
openstack endpoint create --region RegionOne network admin http://controller1:9696
List the endpoints:
7. Restart and check (note: compute and networking are linked through the neutron settings in nova.conf, so the nova API must be restarted)
systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
8. Start the network services
Enable on boot: systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
Start: systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
Compute node configuration:
1. Edit /etc/neutron/neutron.conf #the file can be copied from controller1 to compute1 and adjusted
[DEFAULT]
state_path = /var/lib/neutron
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
nova_url = http://controller1:8774/v2.1
rpc_backend = rabbit
[database]
connection = mysql+pymysql://neutron:neutron123@controller1/neutron
[keystone_authtoken]
auth_uri = http://controller1:5000
auth_url = http://controller1:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = neutron123
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
[nova]
auth_url = http://controller1:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = nova123
[oslo_concurrency]
lock_path = $state_path/lock
[oslo_messaging_rabbit]
rabbit_host = controller1
rabbit_port = 5672
rabbit_userid = openstack
rabbit_password = openstack123
2. Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[agent]
prevent_arp_spoofing = true
[linux_bridge]
bridge_mappings = provider:em1
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = false
7. Test against the public network:
Check that the neutron-server process started correctly:
Problems: if testing on the controller node shows the following
1. [root@controller1 ~]# neutron agent-list
404-{u'error': {u'message': u'The resource could not be found.', u'code': 404, u'title': u'Not Found'}}
Neutron server returns request_ids: ['req-649eb926-7200-4a3d-ad91-b212ee5ef767']
run: unset OS_TOKEN OS_URL #reset the token environment
2. [root@controller1 ~]# neutron agent-list
Unable to establish connection to http://controller1:9696/v2.0/agents.json
restart the services: systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
Creating a virtual machine
1. Create the bridged network
Pick the project to create the VM under; here we use admin: . admin-openrc (switch to demo-openrc for the demo project)
Run: neutron net-create flat --shared --provider:physical_network provider --provider:network_type flat #provider is the physical network name mapped to the NIC in the config file
Create the subnet: neutron subnet-create flat 192.168.2.0/24 --name flat-subnet --allocation-pool start=192.168.2.100,end=192.168.2.200 --dns-nameserver 192.168.2.1 --gateway 192.168.2.1
Note: use the host machine's internal gateway; the DNS server and gateway can both be set to the host's internal IP, and 192.168.2.100-200 is the IP range handed out to VMs
List the subnets:
Note: how to delete a created network
1. Check for routers: neutron router-list
2. Clear the router gateway: neutron router-gateway-clear <router name> (pick it from the router list)
3. Delete the router interface: neutron router-interface-delete <router name> <interface> (the interface is the one named when it was created)
4. Delete the router: neutron router-delete <router name>
5. Delete the subnet: neutron subnet-delete <subnet name> (the subnet created alongside that router)
6. Delete the network: neutron net-delete <network name>
Note: list networks: neutron net-list
list subnets: neutron subnet-list
list routers: neutron router-list
If there are no routers, just delete the subnet directly.
Create the virtual machine
1. Create a key
[root@controller1 ~]# . demo-openrc #this creates the VM under the demo account; switch environments to create it under admin instead
[root@controller1 ~]# ssh-keygen -q -N ""
2. Add the public key for the VM:
[root@controller1 ~]# nova keypair-add --pub-key /root/.ssh/id_rsa.pub mykey
3. Create security group rules:
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 #allow ping
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 #allow ssh
4. Create the VM
List the available flavors (VM types):
List the images:
List the networks:
Create the VM: nova boot --flavor m1.tiny --image cirros --nic net-id=f3a7aa1e-9799-47cd-a1d4-fb1e4d191f2d --security-group default --key-name mykey hello-instance
Note: --flavor m1.tiny #the flavor (VM type) to use
--image cirros #cirros is the image name; use your own
--key-name mykey #the keypair name; use your own
hello-instance #the VM name; use your own
List the instances:
Run the command that prints the VM's web console URL: (open the URL to reach the login screen)
Log in via noVNC in a browser: (Chrome)
Note: the cloud instance's username is cirros and the default password is cubswin:) (shown in the console prompt)
Delete a VM from the controller node: nova delete <ID> (from the instance list)
You can also ssh into the instance from the controller node: ssh cirros@<IP> (the IP is shown in the instance list). If ssh fails, tighten the generated key file's permissions (e.g. to 700). Log in with the image's default user first, then switch with su as needed.
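ssh rejects private keys that are group- or other-readable ("UNPROTECTED PRIVATE KEY"), which is why tightening the key file's mode fixes the failure above. Demonstrated on a temp file rather than a real key:

```shell
# Restrict a key file to owner read/write only, then confirm the mode.
KEY=$(mktemp)
chmod 600 "$KEY"             # for a real key: chmod 600 ~/.ssh/id_rsa
stat -c '%a' "$KEY"          # prints: 600
```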
Other CentOS image sources: http://cloud.centos.org/centos/
Image used here: http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
Reference on remote connectivity: https://sanwen8.cn/p/171lmWW.html
Prefer logging in over ssh from the controller node. CentOS 6.x images generally default to the "centos-user" user and CentOS 7.x to "centos". Because a public key was injected when the VM was created, no password is needed; once logged in, set one with: sudo passwd <username>
After ssh-ing into the instance, you can also log in via noVNC as root using the password you just set.
Install the dashboard and log in to the web UI (controller node):
1. Install the package: yum install openstack-dashboard -y
2. Edit /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "192.168.2.201" #or write controller1
ALLOWED_HOSTS = ['*', ] #allow all hosts to reach the dashboard
Add this line: SESSION_ENGINE = 'django.contrib.sessions.backends.file' #session storage backend
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '192.168.2.201:11211', #memcached runs on the controller node
}
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST #enable the v3 identity API
OPENSTACK_API_VERSIONS = { #API versions
"identity": 3,
"volume": 2,
"compute": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" #default role for users created through the dashboard
OPENSTACK_NEUTRON_NETWORK = { #this deployment uses the provider network option, so layer-3 services are disabled
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
}
TIME_ZONE = "Asia/Shanghai" #time zone
3. Restart the web server and session storage: systemctl restart httpd.service memcached.service
4. Test the login at http://192.168.2.201/dashboard
This shows the projects, users, etc. created earlier
View the cloud instances
Add and delete security rules:
To give the created VMs outbound internet access, configure the following:
1. Install the package: yum install squid #on the controller node
2. Edit /etc/squid/squid.conf as follows #back up the config file before editing
change http_access deny all to http_access allow all #any user may use the proxy
change http_port 3128 to http_port 192.168.2.201:3128 #squid's proxy IP and port (i.e. the host machine's IP)
3. Test the config before starting:
Start the service:
Check that port 3128 is open: #netstat -nltp lists all listening tcp ports
4. Configure the squid proxy on the VM (cloud instance)
Edit /etc/profile and append at the end: export http_proxy=http://192.168.2.201:3128
Reload the file: source /etc/profile
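The two steps above, sketched against a temp file standing in for the instance's /etc/profile so the block runs anywhere:

```shell
# Append the proxy export, then source it in a subshell and read it back.
PROFILE=$(mktemp)
echo 'export http_proxy=http://192.168.2.201:3128' >> "$PROFILE"
( . "$PROFILE" && echo "$http_proxy" )    # prints: http://192.168.2.201:3128
```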
5. Test that the VM can reach the outside:
Access a site: curl http://www.baidu.com
Use yum online normally: yum list
Installing block storage (cinder)
Create the cinder user: [root@controller1 ~]# openstack user create --domain default --password-prompt cinder
List the users
Add the admin role to the cinder user: [root@controller1 ~]# openstack role add --project service --user cinder admin
Create the service entities (block storage requires two):
openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
List the entities:
Create the block storage API endpoints:
volume entity:
openstack endpoint create --region RegionOne volume public http://controller1:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume internal http://controller1:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume admin http://controller1:8776/v1/%\(tenant_id\)s
volumev2 entity:
openstack endpoint create --region RegionOne volumev2 public http://controller1:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller1:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller1:8776/v2/%\(tenant_id\)s
List the API endpoints:
Install the package: yum install openstack-cinder
Edit /etc/cinder/cinder.conf:
[DEFAULT]
...
my_ip = 192.168.2.201
auth_strategy = keystone
rpc_backend = rabbit
[database]
...
connection = mysql+pymysql://cinder:cinder123@controller1/cinder
[oslo_messaging_rabbit]
...
rabbit_host = controller1
rabbit_userid = openstack
rabbit_password = openstack123
[keystone_authtoken]
...
auth_uri = http://controller1:5000
auth_url = http://controller1:35357
memcached_servers = controller1:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder123
[oslo_concurrency]
...
lock_path = /var/lib/cinder/tmp
Initialize the block storage database: su -s /bin/sh -c "cinder-manage db sync" cinder
Configure the compute nodes to use block storage (/etc/nova/nova.conf):
[cinder]
...
os_region_name=RegionOne
Restart the compute API service: systemctl restart openstack-nova-api.service
Start block storage and enable it on boot:
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
On the storage node:
Make sure the package is installed: [root@cinder1 ~]# yum install lvm2
Start the service: [root@cinder1 ~]# service lvm2-lvmetad start
In the "devices" section of /etc/lvm/lvm.conf, add a filter that accepts the /dev/sdb device and rejects everything else:
devices {
...
filter = [ "a/sda/","a/sdb/","r/.*/"]