Following on from the previous post, "Installing and Deploying OpenStack [Liberty] on CentOS 7.4 (Part 1)", this post continues with the remaining components.
I. Adding the Block Storage Service
1. Service overview:
The OpenStack Block Storage service provides block storage to instances. Storage allocation and consumption are determined by the block storage driver, or by the drivers in a multi-backend configuration. Many drivers are available: NAS/SAN, NFS, iSCSI, Ceph, and so on. The Block Storage API and scheduler services normally run on the controller node; depending on the driver in use, the volume service can run on the controller node, on compute nodes, or on standalone storage nodes. The Block Storage service (cinder) adds persistent storage to virtual machines: it provides an infrastructure for managing volumes and interacts with the Compute service to supply volumes to instances. The service also enables management of volume snapshots and volume types.

The Block Storage service typically consists of the following components:
- cinder-api: accepts API requests and routes them to cinder-volume for action.
- cinder-volume: interacts directly with the Block Storage service and with processes such as cinder-scheduler; it can also interact with them through a message queue. cinder-volume responds to read and write requests sent to the Block Storage service to maintain state, and it can interact with a variety of storage providers through a driver architecture.
- cinder-scheduler daemon: selects the optimal storage provider node on which to create a volume; similar in concept to nova-scheduler.
- cinder-backup daemon: backs up volumes of any type to a backup storage provider; like cinder-volume, it can interact with a variety of storage providers through a driver architecture.
- Message queue: routes information between the Block Storage processes.
2. Deployment prerequisites: before installing and configuring the Block Storage service, you must create a database, service credentials, and API endpoints.
[root@controller ~]# mysql -u root -p123456   # create the database and grant access privileges
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> \q
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack user create --domain default --password-prompt cinder
User Password:          # password: 123456
Repeat User Password:
[root@controller ~]# openstack role add --project service --user cinder admin   # add the admin role to the cinder user; this command produces no output
[root@controller ~]# openstack service create --name cinder --description "OpenStack Block Storage" volume   # create the cinder and cinderv2 service entities; the Block Storage service requires both
[root@controller ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
[root@controller ~]# openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s   # create the Block Storage API endpoints; each service entity requires its own endpoints
[root@controller ~]#openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
[root@controller ~]#openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
[root@controller ~]#openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
[root@controller ~]#openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
[root@controller ~]#openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
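The six endpoint commands above differ only in the service name, interface, and API version. As an illustrative sketch (not part of the original walkthrough), the same command lines can be generated with a small loop, which makes typos less likely:

```shell
# Illustrative sketch: generate the six cinder endpoint-creation commands
# (public/internal/admin for both the v1 and v2 APIs) instead of typing
# each one by hand. Run the printed commands, or pipe them to a shell.
cmds=""
for pair in v1:volume v2:volumev2; do
  path=${pair%%:*}                 # API path: v1 or v2
  svc=${pair#*:}                   # service name: volume or volumev2
  for iface in public internal admin; do
    cmds="$cmds
openstack endpoint create --region RegionOne $svc $iface http://controller:8776/$path/%\\(tenant_id\\)s"
  done
done
echo "$cmds"
```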
3. Service installation
Controller node:
[root@controller ~]# yum install -y openstack-cinder python-cinderclient
[root@controller ~]# egrep -v "^$|^#" /etc/cinder/cinder.conf   # edit cinder.conf as follows
[DEFAULT]
rpc_backend = rabbit        # configure RabbitMQ message queue access
auth_strategy = keystone    # configure Identity service access
my_ip = 192.168.1.101       # the IP address of the management network interface on the controller node
verbose = True              # enable verbose logging
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[cors]
[cors.subdomain]
[database]
connection = mysql://cinder:123456@controller/cinder   # configure database access
[fc-zone-manager]
[keymgr]
[keystone_authtoken]        # configure Identity service access; comment out or remove any other options in [keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp   # configure the lock path
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]     # configure RabbitMQ message queue access
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[profiler]
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder   # populate the Block Storage database
[root@controller ~]# grep -A 1 "\[cinder\]" /etc/nova/nova.conf   # configure Compute to use Block Storage: edit /etc/nova/nova.conf and add the following
[cinder]
os_region_name = RegionOne
[root@controller ~]# systemctl restart openstack-nova-api.service
[root@controller ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
[root@controller ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
Storage node:
[root@block1 ~]# yum install lvm2 -y
[root@block1 ~]# systemctl enable lvm2-lvmetad.service
[root@block1 ~]# systemctl start lvm2-lvmetad.service
[root@block1 ~]# pvcreate /dev/sdb   # create the LVM physical volume /dev/sdb
  Physical volume "/dev/sdb" successfully created
[root@block1 ~]# vgcreate cinder-volumes /dev/sdb   # create the LVM volume group cinder-volumes; the Block Storage service creates logical volumes in this group
  Volume group "cinder-volumes" successfully created
[root@block1 ~]# vim /etc/lvm/lvm.conf   # edit /etc/lvm/lvm.conf: in the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices
devices {
        filter = [ "a/sda/", "a/sdb/", "r/.*/"]   # if the storage node also uses LVM on the operating system disk, add that device to the filter as well
}
[root@block1 ~]# yum install openstack-cinder targetcli python-oslo-policy -y
[root@block1 ~]# egrep -v "^$|^#" /etc/cinder/cinder.conf   # edit cinder.conf as follows
[DEFAULT]
rpc_backend = rabbit        # configure RabbitMQ message queue access
auth_strategy = keystone    # configure Identity service access
my_ip = 192.168.1.103       # the IP address of the management network interface on the storage node
enabled_backends = lvm      # enable the LVM back end
glance_host = controller    # configure the location of the Image service
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[cors]
[cors.subdomain]
[database]
connection = mysql://cinder:123456@controller/cinder   # configure database access
[fc-zone-manager]
[keymgr]
[keystone_authtoken]        # configure Identity service access; comment out or remove any other options in [keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp   # configure the lock path
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]     # configure RabbitMQ message queue access
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[profiler]
[lvm]                       # configure the LVM back end with the LVM driver, the cinder-volumes volume group, the iSCSI protocol, and the appropriate iSCSI service
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[root@block1 ~]# systemctl enable openstack-cinder-volume.service target.service
[root@block1 ~]# systemctl start openstack-cinder-volume.service target.service
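The order of entries in the lvm.conf filter matters: LVM tries them left to right and the first matching entry decides, which is why the reject-everything rule must come last. The following function is my own illustrative sketch of that evaluation, not real LVM code:

```shell
# Sketch of how LVM's device filter is evaluated: entries are tried in
# order, "a/…/" accepts and "r/…/" rejects, and the first matching entry
# wins (illustrative only, not LVM's actual implementation).
lvm_filter() {
  dev=$1
  for rule in "a/sda/" "a/sdb/" "r/.*/"; do
    action=${rule%%/*}               # "a" (accept) or "r" (reject)
    pattern=${rule#?/}               # strip the leading "a/" or "r/"
    pattern=${pattern%/}             # strip the trailing "/"
    if printf '%s' "$dev" | grep -Eq "$pattern"; then
      echo "$action"
      return
    fi
  done
  echo "a"                           # no entry matched: accept by default
}
lvm_filter /dev/sdb   # accepted, because a/sdb/ matches before r/.*/
lvm_filter /dev/sdc   # rejected, because only the final r/.*/ matches
```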
Verification:
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# cinder service-list   # list the service components to verify that each process started successfully
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled | up | 2014-10-18T01:30:54.000000 | None |
| cinder-volume | block1@lvm | nova | enabled | up | 2014-10-18T01:30:57.000000 | None |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
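With both components up, a quick end-to-end smoke test (my own suggestion, not part of the original walkthrough; the volume name and credentials file are illustrative) is to create and then delete a small volume. The steps are printed here rather than executed:

```shell
# Optional smoke test: create and delete a 1 GB volume to confirm the LVM
# back end works end to end (volume name and credentials file are
# illustrative). This block only prints the steps.
steps=$(cat <<'EOF'
source demo-openrc.sh
cinder create --display-name test-vol 1
cinder list        # status should move from "creating" to "available"
cinder delete test-vol
EOF
)
echo "$steps"
```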
II. Adding the Object Storage Service
1. Service overview
The OpenStack Object Storage service (swift) provides object storage and retrieval through a REST API. Before deploying Object Storage, your environment must include at least the Identity service (keystone). Object Storage is a multi-tenant object storage system that scales to very large sizes and can manage large amounts of unstructured data at low cost through a RESTful HTTP API.

It consists of the following components:
- Proxy server (swift-proxy-server): accepts Object Storage API and raw HTTP requests to upload files, modify metadata, and create containers. It can also serve file and container listings to web browsers. To improve performance, the proxy server can use an optional cache, usually deployed with memcached.
- Account server (swift-account-server): manages the accounts defined with Object Storage.
- Container server (swift-container-server): manages the mapping of containers (folders) within Object Storage.
- Object server (swift-object-server): manages the actual objects, such as files, on the storage nodes.
- Various periodic processes: perform housekeeping tasks on the large data store; the replication services ensure consistency and availability across the cluster, and other periodic processes include auditors, updaters, and reapers.
- WSGI middleware: handles authentication, using the OpenStack Identity service.
- swift client: lets users submit commands to the REST API through a command-line client; authorized roles include admin user, reseller user, or swift user.
- swift-init: a script that initializes the building of the ring files, takes daemon names as parameters, and offers commands; documented at http://docs.openstack.org/developer/swift/admin_guide.html#managing-services.
- swift-recon: a command-line tool used to retrieve various metrics and telemetry about a cluster that have been collected by the swift-recon middleware.
- swift-ring-builder: the storage ring build and rebalance utility; documented at http://docs.openstack.org/developer/swift/admin_guide.html#managing-the-rings.
2. Deployment prerequisites: before configuring the Object Storage service, you must create service credentials and API endpoints.
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack user create --domain default --password-prompt swift   # create the swift user
User Password:          # password: 123456
Repeat User Password:
[root@controller ~]# openstack role add --project service --user swift admin   # add the admin role to the swift user
[root@controller ~]# openstack service create --name swift --description "OpenStack Object Storage" object-store   # create the swift service entity
[root@controller ~]# openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(tenant_id\)s   # create the Object Storage API endpoints
[root@controller ~]#openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(tenant_id\)s
[root@controller ~]#openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
3. Service installation
Controller node:
[root@controller ~]# yum install -y openstack-swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached
[root@controller ~]# vim /etc/swift/proxy-server.conf   # the configuration file may differ between distributions; you may need to add these sections and options rather than modify existing ones!
[DEFAULT]                 # in the [DEFAULT] section, configure the bind port, user, and configuration directory
bind_port = 8080
user = swift
swift_dir = /etc/swift
[pipeline:main]           # in the [pipeline:main] section, enable the appropriate modules
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
[app:proxy-server]        # in the [app:proxy-server] section, enable automatic account creation
use = egg:swift#proxy
account_autocreate = true
[filter:keystoneauth]     # in the [filter:keystoneauth] section, configure the operator roles
use = egg:swift#keystoneauth
operator_roles = admin,user
[filter:authtoken]        # in the [filter:authtoken] section, configure Identity service access
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = 123456
delay_auth_decision = true
[filter:cache]            # in the [filter:cache] section, configure the memcached location
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211
Storage nodes (perform these steps on every storage node):
[root@object1 ~]# yum install xfsprogs rsync -y   # install the supporting utility packages
[root@object1 ~]# mkfs.xfs /dev/sdb               # format the /dev/sdb and /dev/sdc devices as XFS
[root@object1 ~]# mkfs.xfs /dev/sdc
[root@object1 ~]# mkdir -p /srv/node/sdb          # create the mount point directory structure
[root@object1 ~]# mkdir -p /srv/node/sdc
[root@object1 ~]# tail -2 /etc/fstab              # edit /etc/fstab so that it contains the following
/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
[root@object1 ~]# mount /srv/node/sdb             # mount the devices
[root@object1 ~]# mount /srv/node/sdc
[root@object1 ~]# cat /etc/rsyncd.conf            # edit /etc/rsyncd.conf so that it contains the following
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.1.104                           # the management network interface of this node
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
[root@object1 ~]# systemctl enable rsyncd.service
[root@object1 ~]# systemctl start rsyncd.service
[root@object1 ~]# yum install openstack-swift-account openstack-swift-container openstack-swift-object -y
[root@object1 ~]# vim /etc/swift/account-server.conf
[DEFAULT]                 # in the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory
bind_ip = 192.168.1.104
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
[pipeline:main]           # in the [pipeline:main] section, enable the appropriate modules
pipeline = healthcheck recon account-server
[filter:recon]            # in the [filter:recon] section, configure the recon (meters) cache directory
use = egg:swift#recon
recon_cache_path = /var/cache/swift
[root@object1 ~]# vim /etc/swift/container-server.conf
[DEFAULT]                 # same [DEFAULT] settings as above, with the container port
bind_ip = 192.168.1.104
bind_port = 6001
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
[pipeline:main]
pipeline = healthcheck recon container-server
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
[root@object1 ~]# vim /etc/swift/object-server.conf
[DEFAULT]                 # same [DEFAULT] settings as above, with the object port
bind_ip = 192.168.1.104
bind_port = 6000
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
[pipeline:main]
pipeline = healthcheck recon object-server
[filter:recon]            # in the [filter:recon] section, configure the recon (meters) cache directory and lock directory
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock
[root@object1 ~]# chown -R swift:swift /srv/node
[root@object1 ~]# restorecon -R /srv/node
[root@object1 ~]# mkdir -p /var/cache/swift
[root@object1 ~]# chown -R root:swift /var/cache/swift
Create and distribute the initial rings
Controller node:
[root@controller ~]# cd /etc/swift/
[root@controller swift]# swift-ring-builder account.builder create 10 3 1   # create the base account.builder file
[root@controller swift]# swift-ring-builder account.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6002 --device sdb --weight 100   # add each node to the ring
[root@controller swift]# swift-ring-builder account.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6002 --device sdc --weight 100
[root@controller swift]# swift-ring-builder account.builder add --region 1 --zone 1 --ip 192.168.1.105 --port 6002 --device sdb --weight 100
[root@controller swift]# swift-ring-builder account.builder add --region 1 --zone 1 --ip 192.168.1.105 --port 6002 --device sdc --weight 100
[root@controller swift]# swift-ring-builder account.builder   # verify the ring contents
account.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices: id region zone ip address port replication ip replication port name weight partitions balance meta
0 1 1 192.168.1.104 6002 192.168.1.104 6002 sdb 100.00 768 0.00
1 1 1 192.168.1.104 6002 192.168.1.104 6002 sdc 100.00 768 0.00
2 1 1 192.168.1.105 6002 192.168.1.105 6002 sdb 100.00 768 0.00
3 1 1 192.168.1.105 6002 192.168.1.105 6002 sdc 100.00 768 0.00
[root@controller swift]# swift-ring-builder account.builder rebalance   # rebalance the ring
Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
[root@controller swift]# swift-ring-builder container.builder create 10 3 1   # create the base container.builder file
[root@controller swift]# swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6001 --device sdb --weight 100   # add each node to the ring
[root@controller swift]# swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6001 --device sdc --weight 100
[root@controller swift]# swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.1.105 --port 6001 --device sdb --weight 100
[root@controller swift]# swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.1.105 --port 6001 --device sdc --weight 100
[root@controller swift]# swift-ring-builder container.builder   # verify the ring contents
container.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices: id region zone ip address port replication ip replication port name weight partitions balance meta
0 1 1 192.168.1.104 6001 192.168.1.104 6001 sdb 100.00 768 0.00
1 1 1 192.168.1.104 6001 192.168.1.104 6001 sdc 100.00 768 0.00
2 1 1 192.168.1.105 6001 192.168.1.105 6001 sdb 100.00 768 0.00
3 1 1 192.168.1.105 6001 192.168.1.105 6001 sdc 100.00 768 0.00
[root@controller swift]# swift-ring-builder container.builder rebalance   # rebalance the ring
Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
[root@controller swift]# swift-ring-builder object.builder create 10 3 1   # create the base object.builder file
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6000 --device sdb --weight 100   # add each node to the ring
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6000 --device sdc --weight 100
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.1.105 --port 6000 --device sdb --weight 100
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.1.105 --port 6000 --device sdc --weight 100
[root@controller swift]# swift-ring-builder object.builder   # verify the ring contents
object.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices: id region zone ip address port replication ip replication port name weight partitions balance meta
0 1 1 192.168.1.105 6000 192.168.1.105 6000 sdb 100.00 768 0.00
1 1 1 192.168.1.105 6000 192.168.1.105 6000 sdc 100.00 768 0.00
2 1 1 192.168.1.104 6000 192.168.1.104 6000 sdb 100.00 768 0.00
3 1 1 192.168.1.104 6000 192.168.1.104 6000 sdc 100.00 768 0.00
[root@controller swift]# swift-ring-builder object.builder rebalance   # rebalance the ring
Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
[root@controller swift]# scp account.ring.gz container.ring.gz object.ring.gz 192.168.1.104:/etc/swift/   # copy account.ring.gz, container.ring.gz, and object.ring.gz to the /etc/swift directory on every storage node and on any additional nodes running the proxy service
[root@controller swift]# scp account.ring.gz container.ring.gz object.ring.gz 192.168.1.105:/etc/swift/
[root@controller swift]# vim /etc/swift/swift.conf   # edit /etc/swift/swift.conf and complete the following
[swift-hash]              # in the [swift-hash] section, configure the hash path prefix and suffix for your environment; use unique, random values (the printable-ASCII strings below are placeholders), keep them secret, and never change or lose them
swift_hash_path_suffix = 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
swift_hash_path_prefix = 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
[storage-policy:0]        # in the [storage-policy:0] section, configure the default storage policy
name = Policy-0
default = yes
[root@controller swift]# chown -R root:swift /etc/swift
[root@controller swift]# systemctl enable openstack-swift-proxy.service memcached.service   # on the controller node and on any other nodes running the proxy service, start the Object Storage proxy service and its dependencies and configure them to start at boot
[root@controller swift]# systemctl start openstack-swift-proxy.service memcached.service
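The numbers in the ring listings above follow directly from the `create 10 3 1` arguments: 10 is the partition power (2^10 = 1024 partitions), 3 is the replica count, and 1 is the minimum number of hours before a partition can be reassigned. With four equally weighted devices, each device holds 1024 x 3 / 4 = 768 partitions, matching the builder output:

```shell
# Reproduce the ring arithmetic behind "swift-ring-builder … create 10 3 1"
# with four devices of equal weight.
part_power=10
replicas=3
devices=4
partitions=$((2 ** part_power))                    # 2^10 = 1024
per_device=$((partitions * replicas / devices))    # 1024 * 3 / 4 = 768
echo "$partitions partitions, $per_device per device"
```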
Storage nodes:
[root@object1 ~]# systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
[root@object1 ~]# systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
[root@object1 ~]# systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
[root@object1 ~]# systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
[root@object1 ~]# systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
[root@object1 ~]# systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
Verify operation:
Controller node:
[root@controller swift]# cd
[root@controller ~]# echo "export OS_AUTH_VERSION=3" | tee -a admin-openrc.sh demo-openrc.sh   # configure the Object Storage service client to use version 3 of the Identity API
[root@controller ~]# swift stat   # show the service status
        Account: AUTH_444fce5db34546a7907af45df36d6e99
     Containers: 0
        Objects: 0
          Bytes: 0
X-Put-Timestamp: 1518798659.41272
    X-Timestamp: 1518798659.41272
     X-Trans-Id: tx304f1ed71c194b1f90dd2-005a870740
   Content-Type: text/plain; charset=utf-8
[root@controller ~]# swift upload container1 demo-openrc.sh   # upload a test file
demo-openrc.sh
[root@controller ~]# swift list   # list containers
container1
[root@controller ~]# swift download container1 demo-openrc.sh   # download the test file
demo-openrc.sh [auth 0.295s, headers 0.339s, total 0.339s, 0.005 MB/s]
III. Adding the Orchestration Service
1. Service overview
The Orchestration service provides template-based orchestration for describing a cloud application by running OpenStack API calls to generate running cloud applications. The software integrates other core OpenStack components into a single-file template system. The templates let you create most OpenStack resource types, such as instances, floating IPs, volumes, security groups, and users, and they also offer advanced features such as instance high availability, instance auto-scaling, and nested stacks. This enables the core OpenStack projects to serve a very large user base, and lets deployers integrate with the Orchestration service directly or through custom plug-ins.

The Orchestration service consists of the following components:
- heat command-line client: a CLI that communicates with heat-api to run the AWS CloudFormation API; end developers can also use the Orchestration REST API directly.
- heat-api component: an OpenStack-native REST API that processes API requests by sending them to heat-engine over remote procedure calls (RPC).
- heat-api-cfn component: an AWS Query API that is compatible with AWS CloudFormation; it also processes API requests by sending them to heat-engine over RPC.
- heat-engine: orchestrates the launching of templates and provides events back to the API consumer.
2. Deployment prerequisites: before installing and configuring the Orchestration service, you must create a database, service credentials, and API endpoints. Orchestration also requires additional information in the Identity service.
On the controller node:
[root@controller ~]# mysql -u root -p123456   # create the database and set privileges
MariaDB [(none)]> CREATE DATABASE heat;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> \q
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack user create --domain default --password-prompt heat   # create the heat user
User Password:          # password: 123456
Repeat User Password:
[root@controller ~]# openstack role add --project service --user heat admin   # add the admin role to the heat user
[root@controller ~]# openstack service create --name heat --description "Orchestration" orchestration   # create the heat and heat-cfn service entities
[root@controller ~]# openstack service create --name heat-cfn --description "Orchestration" cloudformation
[root@controller ~]# openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s   # create the Orchestration API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
[root@controller ~]# openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
[root@controller ~]# openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
[root@controller ~]# openstack domain create --description "Stack projects and users" heat   # create the heat domain that contains projects and users for stacks
[root@controller ~]# openstack user create --domain heat --password-prompt heat_domain_admin   # create the heat_domain_admin user to manage projects and users in the heat domain
User Password:          # password: 123456
Repeat User Password:
[root@controller ~]# openstack role add --domain heat --user heat_domain_admin admin   # add the admin role to the heat_domain_admin user in the heat domain, to enable administrative stack-management privileges for that user
[root@controller ~]# openstack role create heat_stack_owner   # create the heat_stack_owner role
[root@controller ~]# openstack role add --project demo --user demo heat_stack_owner   # add the heat_stack_owner role to the demo project and user, to enable stack management by the demo user
[root@controller ~]# openstack role create heat_stack_user   # create the heat_stack_user role; Orchestration automatically assigns it to users created during stack deployment. By default this role restricts API operations. To avoid conflicts, do not also add the heat_stack_owner role to those users.
3. Service deployment
Controller node:
[root@controller ~]# yum install -y openstack-heat-api openstack-heat-api-cfn openstack-heat-engine python-heatclient
[root@controller ~]# vim /etc/heat/heat.conf   # edit /etc/heat/heat.conf and complete the following
[database]
connection = mysql://heat:123456@controller/heat   # configure database access
[DEFAULT]
rpc_backend = rabbit                               # configure RabbitMQ message queue access
heat_metadata_server_url = http://controller:8000  # configure the metadata and wait condition URLs
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin             # configure the stack domain and administrative credentials
stack_domain_admin_password = 123456
stack_user_domain_name = heat
verbose = True                                     # enable verbose logging
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[keystone_authtoken]                               # configure Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = heat
password = 123456
[trustee]                                          # configure Identity service access
auth_plugin = password
auth_url = http://controller:35357
username = heat
password = 123456
user_domain_id = default
[clients_keystone]                                 # configure Identity service access
auth_uri = http://controller:5000
[ec2authtoken]                                     # configure Identity service access
auth_uri = http://controller:5000/v3
[root@controller ~]# su -s /bin/sh -c "heat-manage db_sync" heat   # populate the Orchestration database
[root@controller ~]# systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
[root@controller ~]# systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
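Once the services are running, launching a stack requires a template. As a minimal illustrative example (the image name "cirros" and flavor "m1.tiny" are assumptions about your environment, not part of the original walkthrough), a Heat Orchestration Template for Liberty can be as small as:

```shell
# Write a minimal HOT template; version 2015-10-15 corresponds to Liberty.
cat > test-stack.yaml <<'EOF'
heat_template_version: 2015-10-15
description: Minimal stack that boots one server
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: cirros      # assumed image name
      flavor: m1.tiny    # assumed flavor name
EOF
# Launch it with:  heat stack-create -f test-stack.yaml teststack
# and watch it with:  heat stack-list
```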
Verify operation
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# heat service-list   # the output should show four heat-engine components on the controller node
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
| hostname | binary | engine_id | host | topic | updated_at | status |
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
| controller | heat-engine | 0d26b5d3-ec8a-44ad-9003-b2be72ccfaa7 | controller | engine | 2017-02-16T11:59:41.000000 | up |
| controller | heat-engine | 587b87e2-9e91-4cac-a8b2-53f51898a9c5 | controller | engine | 2017-02-16T11:59:41.000000 | up |
| controller | heat-engine | 8891e45b-beda-49b2-bfc7-29642f072eac | controller | engine | 2017-02-16T11:59:41.000000 | up |
| controller | heat-engine | b0ef7bbb-cfb9-4000-a214-db9049b12a25 | controller | engine | 2017-02-16T11:59:41.000000 | up |
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
IV. Adding the Telemetry Service
1. Service overview
The Telemetry service provides the following functions:
1. Efficiently polls metering data about the relevant OpenStack services.
2. Collects event and metering data by monitoring notifications sent from the individual services.
3. Publishes collected data to various targets, including data stores and message queues.

The Telemetry service consists of the following components:
- Compute agent (ceilometer-agent-compute): runs on each compute node and polls for resource utilization statistics. There may be other types of agents in the future, but for now the community is focused on the compute agent.
- Central agent (ceilometer-agent-central): runs on a central management server to poll for utilization statistics of resources that are not tied to instances or compute nodes. Multiple agents can be started to scale the service horizontally.
- Notification agent (ceilometer-agent-notification): runs on one or more central management servers and consumes messages from the message queue(s) to build event and metering data.
- Collector (ceilometer-collector, responsible for persisting the received data): runs on one or more central management servers and dispatches collected telemetry data to a data store or an external consumer without modifying it.
- API server (ceilometer-api): runs on one or more central management servers and provides data access from the data store.

The Telemetry alarming service triggers alarms when the collected metering or event data breaks defined rules. It consists of the following components:
- API server (aodh-api): runs on one or more central management servers and provides access to the alarm information stored in the data store.
- Alarm evaluator (aodh-evaluator): runs on one or more central management servers and determines when an alarm fires because the associated statistic trend crossed a threshold over a sliding time window.
- Notification listener (aodh-listener): runs on a central management server and detects when to fire alarms; alarms are generated according to rules predefined against events captured by the Telemetry service's notification agents.
- Alarm notifier (aodh-notifier): runs on one or more central management servers and allows alarms to be set on a group of instances based on evaluated thresholds.

These services communicate over the OpenStack messaging bus; only the collector and the API server have access to the data store.
2. Deployment prerequisites: before installing and configuring the Telemetry service, you must create a database, service credentials, and API endpoints. Unlike the other services, however, Telemetry uses a NoSQL database.
Controller node:
[root@controller ~]# yum install -y mongodb-server mongodb
[root@controller ~]# vim /etc/mongod.conf   # edit /etc/mongod.conf and modify or add the following
bind_ip = 192.168.1.101
smallfiles = true   # by default, MongoDB creates several 1 GB journal files under /var/lib/mongodb/journal; setting smallfiles reduces each journal file to 128 MB and limits their total size to 512 MB
[root@controller ~]# systemctl enable mongod.service
[root@controller ~]# systemctl start mongod.service
[root@controller ~]# mongo --host controller --eval 'db = db.getSiblingDB("ceilometer"); db.createUser({user: "ceilometer",pwd: "123456",roles: [ "readWrite", "dbAdmin" ]})'   # create the ceilometer database
MongoDB shell version: 2.6.12
connecting to: controller:27017/test
Successfully added user: { "user" : "ceilometer", "roles" : [ "readWrite", "dbAdmin" ] }
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack user create --domain default --password-prompt ceilometer   # create the ceilometer user
User Password:          # password: 123456
Repeat User Password:
[root@controller ~]# openstack role add --project service --user ceilometer admin   # add the admin role to the ceilometer user
[root@controller ~]# openstack service create --name ceilometer --description "Telemetry" metering   # create the ceilometer service entity
[root@controller ~]# openstack endpoint create --region RegionOne metering public http://controller:8777   # create the Telemetry API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne metering internal http://controller:8777
[root@controller ~]# openstack endpoint create --region RegionOne metering admin http://controller:8777
3. Service deployment
Controller node:
[root@controller ~]# yum install openstack-ceilometer-api openstack-ceilometer-collector openstack-ceilometer-notification openstack-ceilometer-central openstack-ceilometer-alarm python-ceilometerclient -y
[root@controller ~]# vim /etc/ceilometer/ceilometer.conf   # edit /etc/ceilometer/ceilometer.conf and modify or add the following
[DEFAULT]
rpc_backend = rabbit        # configure RabbitMQ message queue access
auth_strategy = keystone    # configure Identity service access
verbose = True
[oslo_messaging_rabbit]     # configure RabbitMQ message queue access
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[keystone_authtoken]        # configure Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = 123456
[service_credentials]       # configure service credentials
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = 123456
os_endpoint_type = internalURL
os_region_name = RegionOne
[root@controller ~]# systemctl enable openstack-ceilometer-api.service openstack-ceilometer-notification.service openstack-ceilometer-central.service openstack-ceilometer-collector.service openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service
[root@controller ~]# systemctl start openstack-ceilometer-api.service openstack-ceilometer-notification.service openstack-ceilometer-central.service openstack-ceilometer-collector.service openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service
4. Enable Image service meters
[root@controller ~]# vim /etc/glance/glance-api.conf   # edit both /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf, and modify or add the following in each
[DEFAULT]                   # configure notifications and RabbitMQ message queue access
notification_driver = messagingv2
rpc_backend = rabbit
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[root@controller ~]# systemctl restart openstack-glance-api.service openstack-glance-registry.service   # restart the Image service
5. Enable Compute service meters
[root@controller ~]# yum install -y openstack-ceilometer-compute python-ceilometerclient python-pecan
[root@controller ~]# vim /etc/ceilometer/ceilometer.conf   # edit /etc/ceilometer/ceilometer.conf and modify or add the following
[DEFAULT]
rpc_backend = rabbit        # configure RabbitMQ message queue access
auth_strategy = keystone    # configure Identity service access
verbose = True
[oslo_messaging_rabbit]     # configure RabbitMQ message queue access
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[keystone_authtoken]        # configure Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = 123456
[service_credentials]       # configure service credentials
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = 123456
os_endpoint_type = internalURL
os_region_name = RegionOne
[root@controller ~]# vim /etc/nova/nova.conf   # edit /etc/nova/nova.conf and modify or add the following
[DEFAULT]
instance_usage_audit = True          # configure notifications
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = messagingv2
[root@controller ~]# systemctl enable openstack-ceilometer-compute.service   # start the agent and configure it to start at boot
[root@controller ~]# systemctl start openstack-ceilometer-compute.service
[root@controller ~]# systemctl restart openstack-nova-compute.service   # restart the Compute service
6. Enable Block Storage meters
Perform these steps on the controller and Block Storage nodes.
[root@controller ~]# vim /etc/cinder/cinder.conf   # edit /etc/cinder/cinder.conf and complete the following
[DEFAULT]
notification_driver = messagingv2
[root@controller ~]# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service   # restart the Block Storage services on the controller node!
On the storage node:
[root@block1 ~]# systemctl restart openstack-cinder-volume.service   # restart the Block Storage service on the storage node!
7. Enable Object Storage meters
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack role create ResellerAdmin
[root@controller ~]# openstack role add --project service --user ceilometer ResellerAdmin
[root@controller ~]# yum install -y python-ceilometermiddleware
[root@controller ~]# vim /etc/swift/proxy-server.conf   # edit /etc/swift/proxy-server.conf and modify or add the following
[filter:keystoneauth]
operator_roles = admin, user, ResellerAdmin   # add the ResellerAdmin role
[pipeline:main]                               # add ceilometer to the pipeline
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging ceilometer proxy-server
[filter:ceilometer]                           # configure notifications
paste.filter_factory = ceilometermiddleware.swift:filter_factory
control_exchange = swift
url = rabbit://openstack:123456@controller:5672/
driver = messagingv2
topic = notifications
log_level = WARN
[root@controller ~]# systemctl restart openstack-swift-proxy.service   # restart the Object Storage proxy service
8. Verification
Run on the controller node
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# ceilometer meter-list | grep image   # list the available meters, filtered for the Image service
+---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
| Name | Type | Unit | Resource ID | User ID | Project ID |
+---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
| image | gauge | image | 68259f9f-c5c1-4975-9323-cef301cedb2b | None | b1d045eb3d62421592616d56a69c4de3 |
| image.size                      | gauge      | B         | 68259f9f-c5c1-4975-9323-cef301cedb2b                                  | None                             | b1d045eb3d62421592616d56a69c4de3 |
+---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
[root@controller ~]# glance image-list | grep 'cirros' | awk '{ print $2 }'   # download the CirrOS image from the Image service
68259f9f-c5c1-4975-9323-cef301cedb2b
[root@controller ~]# glance image-download 68259f9f-c5c1-4975-9323-cef301cedb2b > /tmp/cirros.img
[root@controller ~]# ceilometer meter-list | grep image   # list the available meters again to validate detection of the image download
| image | gauge | image | 68259f9f-c5c1-4975-9323-cef301cedb2b | 7bafc586c1f442c6b4c92f42ba90efd4 | b1d045eb3d62421592616d56a69c4de3 |
| image.download | delta | B | 68259f9f-c5c1-4975-9323-cef301cedb2b | 7bafc586c1f442c6b4c92f42ba90efd4 | b1d045eb3d62421592616d56a69c4de3 |
| image.serve | delta | B | 68259f9f-c5c1-4975-9323-cef301cedb2b | 7bafc586c1f442c6b4c92f42ba90efd4 | b1d045eb3d62421592616d56a69c4de3 |
| image.size | gauge | B | 68259f9f-c5c1-4975-9323-cef301cedb2b | 7bafc586c1f442c6b4c92f42ba90efd4 | b1d045eb3d62421592616d56a69c4de3 |
[root@controller ~]# ceilometer statistics -m image.download -p 60   # retrieve usage statistics from the image.download meter
+--------+----------------------------+----------------------------+------------+------------+------------+------------+-------+----------+----------------------------+----------------------------+
| Period | Period Start | Period End | Max | Min | Avg | Sum | Count | Duration | Duration Start | Duration End |
+--------+----------------------------+----------------------------+------------+------------+------------+------------+-------+----------+----------------------------+----------------------------+
| 60 | 2018-02-16T12:47:46.351000 | 2018-02-16T12:48:46.351000 | 13287936.0 | 13287936.0 | 13287936.0 | 13287936.0 | 1 | 0.0 | 2018-02-16T12:48:23.052000 | 2018-02-16T12:48:23.052000 |
+--------+----------------------------+----------------------------+------------+------------+------------+------------+-------+----------+----------------------------+----------------------------+
[root@controller ~]# ll /tmp/cirros.img   # confirm that the size of the downloaded image file matches the reported usage
-rw-r--r-- 1 root root 13287936 Feb 16 20:48 /tmp/cirros.img