I. Cinder Overview
Cinder is the block storage service in an OpenStack environment; its job is to provide block storage to the virtual machines running on OpenStack. Block storage means block devices like the hard disks, USB drives, and SD cards we use every day, except that the volumes here are not physical disks like those: think of them as something closer to a cloud disk. Cinder consists of three components: cinder-api, cinder-scheduler, and cinder-volume. cinder-api and cinder-scheduler are usually deployed on the controller node, while cinder-volume is usually deployed on a storage node. cinder-api receives requests and drops each one onto the appropriate message queue; cinder-scheduler schedules a cinder-volume backend to carry out the volume management; cinder-volume receives the requests dispatched by cinder-scheduler on the controller node and performs the actual volume management.
Cinder architecture (diagram)
How Cinder handles a request: the client sends a request to cinder-api, which receives it and places it on the appropriate message queue; cinder-scheduler then takes the request off the queue, makes a scheduling decision, places the result back on the queue, and also records it in the database; once the scheduling result is on the queue, the selected cinder-volume takes the message off the queue and executes it locally, writes the resulting state to the database, and puts the execution status on the queue so that cinder-api can return the result to the client. That is the overall workflow.
II. Installing and Configuring the Cinder Service
1. Installing and configuring cinder-api and cinder-scheduler on the controller node
Create the cinder database, plus a database user and the grants it needs
[root@node02 ~]# mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 318
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE cinder;
Query OK, 1 row affected (0.01 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
Query OK, 0 rows affected (0.01 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]>
Verification: from another node, log in to the database as the cinder user to confirm the new user can connect.
[root@node01 ~]# mysql -ucinder -pcinder -hnode02
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 319
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| cinder             |
| information_schema |
| test               |
+--------------------+
3 rows in set (0.00 sec)

MariaDB [(none)]> exit
Bye
[root@node01 ~]#
On the controller node, source the admin credentials, then create a cinder user in the default domain with the password set to cinder
[root@node01 ~]# source admin.sh
[root@node01 ~]# openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | 47c0915c914c49bb8670703e4315a80f |
| enabled             | True                             |
| id                  | a795dd0941e942da85291177fe434f60 |
| name                | cinder                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@node01 ~]#
Add the cinder user to the service project and grant it the admin role
[root@node01 ~]# openstack role add --project service --user cinder admin
[root@node01 ~]#
Create the cinderv2 and cinderv3 services
[root@node01 ~]# openstack service create --name cinderv2 \
> --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 1145b9f35e3f419bb707f0d500bc2e3b |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+
[root@node01 ~]# openstack service create --name cinderv3 \
> --description "OpenStack Block Storage" volumev3
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | aa142423921d408db8ba8dc7e48784f0 |
| name        | cinderv3                         |
| type        | volumev3                         |
+-------------+----------------------------------+
[root@node01 ~]#
Note: Cinder exposes two API versions, and service endpoints must be created for both of them.
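As an extra sanity check (not part of the original transcript), you can list the registered services and confirm both volume API versions are present:

[root@node01 ~]# openstack service list
# expect one row with type volumev2 (cinderv2) and one with type volumev3 (cinderv3)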
Register public, internal, and admin endpoints for both cinderv2 and cinderv3
cinderv2 public endpoint
[root@node01 ~]# openstack endpoint create --region RegionOne \
> volumev2 public http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | b60100d78284490a886b8b134f730b76         |
| interface    | public                                   |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 1145b9f35e3f419bb707f0d500bc2e3b         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
[root@node01 ~]#
cinderv2 internal endpoint
[root@node01 ~]# openstack endpoint create --region RegionOne \
> volumev2 internal http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 8c5b226eab8e4882af33e09e91f5a478         |
| interface    | internal                                 |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 1145b9f35e3f419bb707f0d500bc2e3b         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
[root@node01 ~]#
cinderv2 admin endpoint
[root@node01 ~]# openstack endpoint create --region RegionOne \
> volumev2 admin http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | b0d35d4b02954fe0a335f8e94365471d         |
| interface    | admin                                    |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 1145b9f35e3f419bb707f0d500bc2e3b         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
[root@node01 ~]#
cinderv3 public endpoint
[root@node01 ~]# openstack endpoint create --region RegionOne \
> volumev3 public http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | dff608b23a6b45cebb44d54c7ff68718         |
| interface    | public                                   |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | aa142423921d408db8ba8dc7e48784f0         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@node01 ~]#
cinderv3 internal endpoint
[root@node01 ~]# openstack endpoint create --region RegionOne \
> volumev3 internal http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 240ea84001e740c7b1514122e38cdca8         |
| interface    | internal                                 |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | aa142423921d408db8ba8dc7e48784f0         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@node01 ~]#
cinderv3 admin endpoint
[root@node01 ~]# openstack endpoint create --region RegionOne \
> volumev3 admin http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 19b83da138564fb586ddd0f9edf57c67         |
| interface    | admin                                    |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | aa142423921d408db8ba8dc7e48784f0         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@node01 ~]#
Install the openstack-cinder package
[root@node01 ~]# yum install openstack-cinder -y
Edit the configuration file /etc/cinder/cinder.conf:
In the [database] section, configure the connection to the cinder database.
In the [DEFAULT] section, set the authentication strategy to keystone and configure the RabbitMQ connection address.
In the [keystone_authtoken] section, configure the credentials used to authenticate against keystone.
In the [DEFAULT] section, set my_ip to the local IP address.
In the [oslo_concurrency] section, configure the lock file path.
Final cinder.conf configuration
[root@node01 ~]# grep -i ^"[a-z\[]" /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack123@node02
auth_strategy = keystone
my_ip = 192.168.0.41
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:cinder@node02/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = node02:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[sample_remote_file_source]
[service_user]
[ssl]
[vault]
[root@node01 ~]#
Populate the database
[root@node01 ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
Deprecated: Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT".
[root@node01 ~]#
Verification: check whether tables were created in the cinder database.
MariaDB [(none)]> use cinder
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [cinder]> show tables;
+----------------------------+
| Tables_in_cinder           |
+----------------------------+
| attachment_specs           |
| backup_metadata            |
| backups                    |
| cgsnapshots                |
| clusters                   |
| consistencygroups          |
| driver_initiator_data      |
| encryption                 |
| group_snapshots            |
| group_type_projects        |
| group_type_specs           |
| group_types                |
| group_volume_type_mapping  |
| groups                     |
| image_volume_cache_entries |
| messages                   |
| migrate_version            |
| quality_of_service_specs   |
| quota_classes              |
| quota_usages               |
| quotas                     |
| reservations               |
| services                   |
| snapshot_metadata          |
| snapshots                  |
| transfers                  |
| volume_admin_metadata      |
| volume_attachment          |
| volume_glance_metadata     |
| volume_metadata            |
| volume_type_extra_specs    |
| volume_type_projects       |
| volume_types               |
| volumes                    |
| workers                    |
+----------------------------+
35 rows in set (0.00 sec)

MariaDB [cinder]>
Edit /etc/nova/nova.conf on the controller node and, in the [cinder] section, configure nova to use the cinder service
[cinder]
os_region_name = RegionOne
Restart the nova service
[root@node01 ~]# systemctl restart openstack-nova-api.service
[root@node01 ~]# ss -tnl
State      Recv-Q Send-Q Local Address:Port   Peer Address:Port
LISTEN     0      128    *:9292               *:*
LISTEN     0      128    *:22                 *:*
LISTEN     0      100    127.0.0.1:25         *:*
LISTEN     0      100    *:6080               *:*
LISTEN     0      128    *:9696               *:*
LISTEN     0      128    *:8774               *:*
LISTEN     0      128    *:8775               *:*
LISTEN     0      128    *:9191               *:*
LISTEN     0      128    :::80                :::*
LISTEN     0      128    :::22                :::*
LISTEN     0      100    ::1:25               :::*
LISTEN     0      128    :::5000              :::*
LISTEN     0      128    :::8778              :::*
[root@node01 ~]#
Note: after the restart, make sure the ports nova-api listens on (8774/8775) are back in the LISTEN state.
Start the cinder-api and cinder-scheduler services and enable them at boot
[root@node01 ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
[root@node01 ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-api.service to /usr/lib/systemd/system/openstack-cinder-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-scheduler.service to /usr/lib/systemd/system/openstack-cinder-scheduler.service.
[root@node01 ~]# ss -tnl
State      Recv-Q Send-Q Local Address:Port   Peer Address:Port
LISTEN     0      128    *:9292               *:*
LISTEN     0      128    *:22                 *:*
LISTEN     0      100    127.0.0.1:25         *:*
LISTEN     0      100    *:6080               *:*
LISTEN     0      128    *:9696               *:*
LISTEN     0      128    *:8774               *:*
LISTEN     0      128    *:8775               *:*
LISTEN     0      128    *:9191               *:*
LISTEN     0      128    *:8776               *:*
LISTEN     0      128    :::80                :::*
LISTEN     0      128    :::22                :::*
LISTEN     0      100    ::1:25               :::*
LISTEN     0      128    :::5000              :::*
LISTEN     0      128    :::8778              :::*
[root@node01 ~]#
Note: port 8776 in the LISTEN state indicates the cinder service started normally; alternatively, check whether the cinder processes are running to judge whether the services came up, as shown below.
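The process check from the original screenshot is not preserved; a minimal equivalent would be:

[root@node01 ~]# ps aux | grep [c]inder
# expect a cinder-api and a cinder-scheduler process, both running as the cinder user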
This completes the installation and configuration of cinder-api and cinder-scheduler on the controller node.
2. Installing and configuring the cinder-volume service on the storage node
Storage node environment notes: configure the yum repository, time synchronization, hostname resolution, and so on; for the base environment setup, see https://www.cnblogs.com/qiuhom-1874/p/13886693.html. For this demonstration I attached three 20G disks to node05 for later testing.
With the base environment ready, install the packages the cinder-volume service needs
[root@node05 ~]# yum install lvm2 device-mapper-persistent-data -y
Start the lvm2-lvmetad service and enable it at boot
[root@node05 ~]# systemctl start lvm2-lvmetad.service
[root@node05 ~]# systemctl enable lvm2-lvmetad.service
[root@node05 ~]#
View the disks on node05, for example with the check sketched below
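The disk listing was a screenshot in the original; a check along these lines should show the attached disks (assuming they show up as sdb, sdc, and sdd):

[root@node05 ~]# lsblk -d -o NAME,SIZE,TYPE
# expect sda (the system disk) plus sdb, sdc, and sdd at 20G each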
Create a PV from the whole /dev/sdb disk
[root@node05 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
[root@node05 ~]#
Create a VG
[root@node05 ~]# vgcreate cinder-volumes /dev/sdb
  Volume group "cinder-volumes" successfully created
[root@node05 ~]#
Edit the LVM configuration file /etc/lvm/lvm.conf so that LVM only scans the three attached disks and the local /dev/sda; by default LVM scans every device under /dev. A sketch of the filter follows.
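The original showed this edit as a screenshot; a filter in the devices section of /etc/lvm/lvm.conf along these lines should match the description (the names sdb/sdc/sdd for the three attached disks are assumptions based on the setup above):

devices {
        filter = [ "a/sda/", "a/sdb/", "a/sdc/", "a/sdd/", "r/.*/" ]
}

Each a/.../ entry accepts a matching device and the final r/.*/ rejects everything else, so LVM will not scan any other device under /dev.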
Install the openstack-cinder, targetcli, and python-keystone packages
[root@node05 ~]# yum install openstack-cinder targetcli python-keystone -y
Edit the configuration file /etc/cinder/cinder.conf:
In the [database] section, configure the connection to the cinder database.
In the [DEFAULT] section, configure the RabbitMQ connection, set the authentication strategy to keystone, and set my_ip to the IP of the local interface used to communicate with the controller node.
In the [keystone_authtoken] section, configure the credentials used to authenticate against keystone.
In the [lvm] section, configure the LVM volume driver, the VG, and the transport protocol. The stock configuration file has no [lvm] section, so append one at the end of the file with the required settings:
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
Note: volume_group here must match the name of the VG created earlier.
In the [DEFAULT] section, enable the lvm backend and configure the address of the glance service.
In the [oslo_concurrency] section, configure the lock path.
Final cinder.conf configuration
[root@node05 ~]# grep -i ^"[a-z\[]" /etc/cinder/cinder.conf
[DEFAULT]
my_ip = 192.168.0.45
transport_url = rabbit://openstack:openstack123@node02
auth_strategy = keystone
enabled_backends = lvm
glance_api_servers = http://controller:9292
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:cinder@node02/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = node02:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[sample_remote_file_source]
[service_user]
[ssl]
[vault]
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[root@node05 ~]#
Start the services and enable them at boot
[root@node05 ~]# systemctl start openstack-cinder-volume.service target.service
[root@node05 ~]# systemctl enable openstack-cinder-volume.service target.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-volume.service to /usr/lib/systemd/system/openstack-cinder-volume.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service to /usr/lib/systemd/system/target.service.
[root@node05 ~]#
Verification: check whether the service started properly.
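The screenshot with this check is not preserved; something like the following would do:

[root@node05 ~]# ps aux | grep [c]inder-volume
# a cinder-volume process in the output means the service is running
[root@node05 ~]# systemctl status openstack-cinder-volume.service target.service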
Note: seeing the cinder-volume process running indicates the service started without problems.
This completes the configuration on the storage node.
Verification: on the controller node, source the admin credentials and run openstack volume service list to see whether the service components are listed.
[root@node01 ~]# source admin.sh
[root@node01 ~]# openstack volume service list
+------------------+---------------------+------+---------+-------+----------------------------+
| Binary           | Host                | Zone | Status  | State | Updated At                 |
+------------------+---------------------+------+---------+-------+----------------------------+
| cinder-volume    | node05.test.org@lvm | nova | enabled | up    | 2020-11-02T13:58:10.000000 |
| cinder-scheduler | node01.test.org     | nova | enabled | up    | 2020-11-02T13:58:10.000000 |
+------------------+---------------------+------+---------+-------+----------------------------+
[root@node01 ~]#
Note: seeing both cinder-scheduler and cinder-volume listed means the components installed on the controller node and the storage node are working.
Verification: source the demo user credentials and create a volume to see whether it is created normally.
[root@node01 ~]# source demo.sh
[root@node01 ~]# openstack volume create --size 1 volume1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2020-11-02T14:16:37.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | f41c1ad9-8fb3-426c-bb63-33dafabfd47d |
| multiattach         | False                                |
| name                | volume1                              |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | None                                 |
| updated_at          | None                                 |
| user_id             | 5453d68782a34429a7dab7da9c51f0d9     |
+---------------------+--------------------------------------+
[root@node01 ~]#
List the current user's volumes
[root@node01 ~]# openstack volume list
+--------------------------------------+---------+-----------+------+-------------+
| ID                                   | Name    | Status    | Size | Attached to |
+--------------------------------------+---------+-----------+------+-------------+
| f41c1ad9-8fb3-426c-bb63-33dafabfd47d | volume1 | available | 1    |             |
+--------------------------------------+---------+-----------+------+-------------+
[root@node01 ~]#
On the storage node, check the logical volumes to see whether the volume we just created is there (a sketch of the check follows).
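The screenshot is not preserved; since the LVM driver names each LV volume-<volume id>, a check along these lines should show the new volume (the ID matches the create output above):

[root@node05 ~]# lvs cinder-volumes
# expect an LV named volume-f41c1ad9-8fb3-426c-bb63-33dafabfd47d of size 1.00g in VG cinder-volumes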
Attach the volume to a virtual machine instance
[root@node01 ~]# openstack server list
+--------------------------------------+-----------+---------+----------------------------------------------+--------+---------+
| ID                                   | Name      | Status  | Networks                                     | Image  | Flavor  |
+--------------------------------------+-----------+---------+----------------------------------------------+--------+---------+
| 057103fc-97eb-4f5b-910d-beddccd3bd22 | test_vm-3 | SHUTOFF | provider-net=192.168.0.124                   | cirros | m1.nano |
| 32622be2-47dc-47c8-b0ef-c5c5c85eb9ba | test_vm-1 | SHUTOFF | provider-net=192.168.0.102                   | cirros | m1.nano |
| 5523730d-9dc4-4827-b53a-43f3c860b838 | test_vm-2 | SHUTOFF | provider-net=192.168.0.119                   | cirros | m1.nano |
| 3f220e22-50ce-4068-9b0b-cd9c07446e6c | demo_vm2  | SHUTOFF | demo_selfservice_net=10.0.1.2, 192.168.0.104 | cirros | m1.nano |
| a9f76200-0636-48ab-9eda-69526dab0653 | demo_vm1  | SHUTOFF | provider-net=192.168.0.103                   | cirros | m1.nano |
+--------------------------------------+-----------+---------+----------------------------------------------+--------+---------+
[root@node01 ~]# openstack volume list
+--------------------------------------+---------+-----------+------+-------------+
| ID                                   | Name    | Status    | Size | Attached to |
+--------------------------------------+---------+-----------+------+-------------+
| f41c1ad9-8fb3-426c-bb63-33dafabfd47d | volume1 | available | 1    |             |
+--------------------------------------+---------+-----------+------+-------------+
[root@node01 ~]# openstack server add volume demo_vm1 volume1
[root@node01 ~]# openstack volume list
+--------------------------------------+---------+--------+------+-----------------------------------+
| ID                                   | Name    | Status | Size | Attached to                       |
+--------------------------------------+---------+--------+------+-----------------------------------+
| f41c1ad9-8fb3-426c-bb63-33dafabfd47d | volume1 | in-use | 1    | Attached to demo_vm1 on /dev/vdb  |
+--------------------------------------+---------+--------+------+-----------------------------------+
[root@node01 ~]#
Note: once a volume has been attached to an instance, listing the volumes again shows its status as in-use, along with which instance it is attached to and the device name it was recognized as.
Verification: connect to the corresponding VM and check whether a /dev/vdb device was attached to it.
Partition /dev/vdb on demo_vm1
[root@node01 ~]# ssh cirros@192.168.0.103
$ sudo su -
# fdisk -l /dev/vdb
Disk /dev/vdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
# fdisk /dev/vdb

Welcome to fdisk (util-linux 2.27).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xcc2cfc07.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-2097151, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-2097151, default 2097151): +300M

Created a new partition 1 of type 'Linux' and of size 300 MiB.

Command (m for help): p
Disk /dev/vdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xcc2cfc07

Device     Boot Start    End Sectors  Size Id Type
/dev/vdb1        2048 616447  614400  300M 83 Linux

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

#
Format /dev/vdb1 and mount it at /mnt
# mkfs.ext4 /dev/vdb1
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 307200 1k blocks and 76912 inodes
Filesystem UUID: bc228b69-bc7d-47ff-81bd-2b5a2291aa02
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

# mount /dev/vdb1 /mnt
# df -Th
Filesystem           Type       Size   Used Available Use% Mounted on
/dev                 devtmpfs  19.2M      0     19.2M   0% /dev
/dev/vda1            ext3     978.9M  24.1M    914.0M   3% /
tmpfs                tmpfs     23.2M      0     23.2M   0% /dev/shm
tmpfs                tmpfs     23.2M  88.0K     23.1M   0% /run
/dev/vdb1            ext4     282.5M   2.0M    261.5M   1% /mnt
#
Note: the filesystem was created successfully and mounted at /mnt.
Copy a file into /mnt to see whether data can be stored normally.
# ls -l /mnt
total 12
drwx------    2 root     root         12288 Nov  1 15:22 lost+found
# cp /etc/passwd /mnt
# ls -l /mnt
total 13
drwx------    2 root     root         12288 Nov  1 15:22 lost+found
-rw-------    1 root     root           586 Nov  1 15:24 passwd
# cat /mnt/passwd
root:x:0:0:root:/root:/bin/sh
daemon:x:1:1:daemon:/usr/sbin:/bin/sh
bin:x:2:2:bin:/bin:/bin/sh
sys:x:3:3:sys:/dev:/bin/sh
sync:x:4:100:sync:/bin:/bin/sync
mail:x:8:8:mail:/var/spool/mail:/bin/sh
proxy:x:13:13:proxy:/bin:/bin/sh
www-data:x:33:33:www-data:/var/www:/bin/sh
backup:x:34:34:backup:/var/backups:/bin/sh
operator:x:37:37:Operator:/var:/bin/sh
haldaemon:x:68:68:hald:/:/bin/sh
dbus:x:81:81:dbus:/var/run/dbus:/bin/sh
ftp:x:83:83:ftp:/home/ftp:/bin/sh
nobody:x:99:99:nobody:/home:/bin/sh
sshd:x:103:99:Operator:/var:/bin/sh
cirros:x:1000:1000:non-root user:/home/cirros:/bin/sh
#
Note: files can be written to the volume and read back without any problems.
Restart the VM to see whether the volume is automatically mounted back on it. First stop the virtual machine.
Then start the virtual machine again; the equivalent CLI steps are sketched below.
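The stop and start steps were screenshots in the original; the equivalent CLI commands would be:

[root@node01 ~]# openstack server stop demo_vm1
[root@node01 ~]# openstack server start demo_vm1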
Check whether the VM still has the block device from before: was it mounted automatically, and was any data lost?
[root@node01 ~]# ssh cirros@192.168.0.103
$ sudo su -
# df -Th
Filesystem           Type       Size   Used Available Use% Mounted on
/dev                 devtmpfs  19.2M      0     19.2M   0% /dev
/dev/vda1            ext3     978.9M  24.1M    914.0M   3% /
tmpfs                tmpfs     23.2M      0     23.2M   0% /dev/shm
tmpfs                tmpfs     23.2M  88.0K     23.1M   0% /run
# fdisk -l /dev/vdb
Disk /dev/vdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xcc2cfc07

Device     Boot Start    End Sectors  Size Id Type
/dev/vdb1        2048 616447  614400  300M 83 Linux
# mount /dev/vdb1 /mnt
# ls -l /mnt
total 13
drwx------    2 root     root         12288 Nov  1 15:22 lost+found
-rw-------    1 root     root           586 Nov  1 15:24 passwd
# cat /mnt/passwd
root:x:0:0:root:/root:/bin/sh
daemon:x:1:1:daemon:/usr/sbin:/bin/sh
bin:x:2:2:bin:/bin:/bin/sh
sys:x:3:3:sys:/dev:/bin/sh
sync:x:4:100:sync:/bin:/bin/sync
mail:x:8:8:mail:/var/spool/mail:/bin/sh
proxy:x:13:13:proxy:/bin:/bin/sh
www-data:x:33:33:www-data:/var/www:/bin/sh
backup:x:34:34:backup:/var/backups:/bin/sh
operator:x:37:37:Operator:/var:/bin/sh
haldaemon:x:68:68:hald:/:/bin/sh
dbus:x:81:81:dbus:/var/run/dbus:/bin/sh
ftp:x:83:83:ftp:/home/ftp:/bin/sh
nobody:x:99:99:nobody:/home:/bin/sh
sshd:x:103:99:Operator:/var:/bin/sh
cirros:x:1000:1000:non-root user:/home/cirros:/bin/sh
#
Note: after the reboot the filesystem is not mounted automatically, because we never configured an automatic mount; the block device itself, however, is re-attached to the VM automatically. All we need is to configure the device to mount at boot, as sketched below.
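A minimal way to make the mount persistent inside the guest is an /etc/fstab entry (a sketch, assuming a standard Linux guest; using the filesystem UUID printed by mkfs above is more robust than the device name):

/dev/vdb1  /mnt  ext4  defaults  0 0   # hypothetical fstab line for the attached volume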
This completes the installation, configuration, and testing of the cinder service. It also means that virtual machines running on our existing OpenStack environment can now have truly persistent storage: as long as the storage node does not fail, the volumes stored on it stay intact, and the data the VMs write to those volumes will not be lost.