Distributed Storage with Ceph (2): Using Ceph as the OpenStack Storage Backend


Integrating Ceph with an OpenStack environment

I. RBD provides storage for the following data:

(1) images: stores Glance images;
(2) volumes: stores Cinder volumes, i.e. the new volume created when launching an instance with "create new volume" selected;
(3) vms: stores instance disks when launching an instance without creating a new volume;

II. Implementation steps:

(1) The client nodes also need the cent user:
useradd cent && echo "123" | passwd --stdin cent
echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
chmod 440 /etc/sudoers.d/ceph
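If the deploy node does not yet have passwordless SSH access to this client as the cent user, ceph-deploy will not be able to reach it. A minimal sketch of that prerequisite, run as cent on the deploy node (the controller hostname matches this guide's all-in-one node; adjust to your environment):

ssh-keygen -N '' -f ~/.ssh/id_rsa      # skip if a key already exists
ssh-copy-id cent@controller
ssh cent@controller sudo whoami        # should print "root" with no password prompt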
(2) On the OpenStack nodes that will use Ceph (such as the compute and storage nodes), install the previously downloaded packages:
yum localinstall ./* -y
Or: install the client packages on each node that needs to access the Ceph cluster:
yum install python-rbd
yum install ceph-common
If the clients were installed from the local RPMs above, these two packages are already included there.
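Either way, a quick sanity check confirms the client bits are usable before moving on (assuming python2 is the system Python on these CentOS nodes):

ceph --version              # ceph-common CLI is installed
python -c 'import rbd'      # the python-rbd binding imports cleanly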
(3) On the deploy node, install Ceph onto the OpenStack nodes:
ceph-deploy install controller
ceph-deploy admin controller
(4) On the client nodes, run:
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
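With the admin keyring pushed and readable, the controller should now be able to reach the cluster; a quick check:

ceph -s     # monitors reachable, cluster status visible from the client
ceph df     # pool and usage listing also works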
(5) Create the pools (run on any one Ceph node):
ceph osd pool create images 1024
ceph osd pool create vms 1024
ceph osd pool create volumes 1024
List the pools:
ceph osd lspools
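If the cluster is running Ceph Luminous (12.x) or newer, each pool should also be tagged with the application that will use it, otherwise ceph status keeps warning about untagged pools; on older releases this command does not exist and the step can be skipped:

ceph osd pool application enable images rbd
ceph osd pool application enable vms rbd
ceph osd pool application enable volumes rbd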
(6) Create the glance and cinder users in the Ceph cluster (run on any one Ceph node):
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
  
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
  
Nova reuses the cinder user, so no separate user is created for it.
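The new users and their capabilities can be double-checked with:

ceph auth get client.glance
ceph auth get client.cinder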
(7) Export the keyrings (run on any one Ceph node):
ceph auth get-or-create client.glance > /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder > /etc/ceph/ceph.client.cinder.keyring
Copy them with scp to the other nodes (the Ceph cluster nodes and the OpenStack nodes that use Ceph, such as the compute and storage nodes; this walkthrough uses an all-in-one environment, so copying to the controller node is enough):
[root@yunwei ceph]# ls
ceph.client.admin.keyring  ceph.client.cinder.keyring  ceph.client.glance.keyring  ceph.conf  rbdmap  tmpR3uL7W
[root@yunwei ceph]#
[root@yunwei ceph]# scp ceph.client.glance.keyring ceph.client.cinder.keyring controller:/etc/ceph/
(8) Change the keyring ownership (run on all client nodes):
chown glance:glance /etc/ceph/ceph.client.glance.keyring
chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
(9) Configure the libvirt secret (only the nova-compute nodes need this, but do it on every compute node). First generate a UUID for the secret:
uuidgen
940f0485-e206-4b49-b878-dcd0cb9c70a4
In the /etc/ceph/ directory (the directory itself does not matter; /etc/ceph just keeps things tidy):
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
<uuid>940f0485-e206-4b49-b878-dcd0cb9c70a4</uuid>
<usage type='ceph'>
 <name>client.cinder secret</name>
</usage>
</secret>
EOF

 

Copy secret.xml to all compute nodes and run:

virsh secret-define --file secret.xml
ceph auth get-key client.cinder > ./client.cinder.key
virsh secret-set-value --secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 --base64 $(cat ./client.cinder.key)

 

In the end, client.cinder.key and secret.xml are identical on every compute node. Note the UUID generated earlier: 940f0485-e206-4b49-b878-dcd0cb9c70a4
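As a sanity check, libvirt should now hold the key; reading it back should return the same base64 string as client.cinder.key:

virsh secret-list
virsh secret-get-value 940f0485-e206-4b49-b878-dcd0cb9c70a4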
If virsh secret-define fails with an error like the following:
[root@controller ceph]# virsh secret-define --file secret.xml
error: Failed to set attributes from secret.xml
error: internal error: a secret with UUID d448a6ee-60f3-42a3-b6fa-6ec69cab2378 is already defined for use with client.cinder secret
 
[root@controller ~]# virsh secret-list
UUID                                  Usage
--------------------------------------------------------------------------------
d448a6ee-60f3-42a3-b6fa-6ec69cab2378  ceph client.cinder secret

 

[root@controller ~]# virsh secret-undefine d448a6ee-60f3-42a3-b6fa-6ec69cab2378
Secret d448a6ee-60f3-42a3-b6fa-6ec69cab2378 deleted

 

[root@controller ~]# virsh secret-list
UUID                                  Usage
--------------------------------------------------------------------------------

 

[root@controller ceph]# virsh secret-define --file secret.xml
Secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 created

 

[root@controller ~]# virsh secret-list
UUID                                  Usage
--------------------------------------------------------------------------------
940f0485-e206-4b49-b878-dcd0cb9c70a4  ceph client.cinder secret
 
virsh secret-set-value --secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 --base64 $(cat ./client.cinder.key)

 

(10) Configure Glance; make the following changes on all controller nodes:
vim /etc/glance/glance-api.conf
[DEFAULT]
default_store = rbd
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
[image_format]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]
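Not strictly required, but the Ceph documentation also suggests exposing image locations so that RBD-backed Cinder and Nova can make copy-on-write clones of Glance images instead of full copies; if you want that, additionally set the following under [DEFAULT] in glance-api.conf (be aware that it exposes backend location details through the image API):

show_image_direct_url = True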

 

Then restart the Glance API service on all controller nodes:
systemctl restart openstack-glance-api.service
systemctl status openstack-glance-api.service

Create an image to verify:

[root@controller ~]# openstack image create "cirros" --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare --public
  
[root@controller ~]# rbd ls images
9ce5055e-4217-44b4-a237-e7b577a20dac
An image ID in the rbd ls output means the Glance integration works.
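You can also inspect the object directly in the pool; the RBD image name is the Glance image ID from the output above, e.g.:

rbd info images/9ce5055e-4217-44b4-a237-e7b577a20dac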
(11) Configure Cinder:
vim /etc/cinder/cinder.conf
[DEFAULT]
my_ip = 172.16.254.63
glance_api_servers = http://controller:9292
auth_strategy = keystone
enabled_backends = ceph
state_path = /var/lib/cinder
transport_url = rabbit://openstack:admin@controller
[backend]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[ssl]
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 940f0485-e206-4b49-b878-dcd0cb9c70a4
volume_backend_name=ceph

 

Restart the Cinder services:
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service

Create a volume to verify:

[root@controller gfs]# rbd ls volumes
volume-43b7c31d-a773-4604-8e4a-9ed78ec18996
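For reference, a volume like the one above is created through the normal OpenStack CLI; a rough sketch (the type name "ceph" and the 1 GB size are arbitrary examples, and a dedicated volume type is optional when ceph is the only enabled backend):

openstack volume type create ceph
openstack volume type set --property volume_backend_name=ceph ceph
openstack volume create --type ceph --size 1 test-volume    # should appear in the volumes pool as volume-<id>
openstack volume list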

(12) Configure Nova:

vim /etc/nova/nova.conf
[DEFAULT]
my_ip=172.16.254.63
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:admin@controller

[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
os_region_name = RegionOne
[cloudpipe]
[conductor]
[console]
[consoleauth]
[cors]
[cors.subdomain]
[crypto]
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[image_file_url]
[ironic]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[libvirt]
virt_type = qemu
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 940f0485-e206-4b49-b878-dcd0cb9c70a4
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
auth_type = password
auth_url = http://controller:35357/v3
project_name = service
project_domain_name = Default
username = placement
password = placement
user_domain_name = Default
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://172.16.254.63:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]

 

Restart the Nova services:
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service  openstack-nova-compute.service openstack-nova-cert.service
systemctl status openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service  openstack-nova-compute.service openstack-nova-cert.service

Launch an instance to verify:
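No output was captured for this step in the original notes; as a rough sketch (the flavor, image and network names below are assumptions for your environment), boot an instance without a new volume and check the vms pool:

openstack network list                       # pick a network ID for the instance
openstack server create --flavor m1.tiny --image cirros --nic net-id=<network-id> demo-vm
rbd ls vms                                   # the instance's root disk appears as <instance-uuid>_disk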

