Cloud Computing Series, Part 9: Installing Cinder and Using NFS as the Cinder Backend


Preface

In the previous posts we covered how to set up the keystone, glance, nova, neutron, and horizon services of OpenStack. One thing still missing from that lineup is a storage service, so in this post we turn to the Block Storage service.

The Cinder Block Storage Service

The Block Storage service (cinder) provides block storage to instances. Storage allocation and consumption are handled by a volume driver, or by multiple drivers in a multi-backend configuration. Many drivers are available: NAS/SAN, NFS, iSCSI, Ceph, and so on. Typically, the Block Storage API and scheduler services run on the controller node. Depending on the driver in use, the volume service can run on the controller node, on a compute node, or on a standalone storage node.
It is made up of the following components:
1. cinder-api:
Accepts API requests and routes them to cinder-volume for execution.
2. cinder-volume
Interacts directly with the Block Storage service and with processes such as cinder-scheduler, and can also communicate with them through a message queue. The cinder-volume service responds to read and write requests sent to the Block Storage service to maintain state, and it can interact with a variety of storage drivers.
3. cinder-scheduler daemon
Selects the optimal storage provider node on which to create a volume; it plays a role similar to the nova-scheduler component.
4. cinder-backup daemon

The cinder-backup service backs up volumes of any type to a backup storage provider. Like cinder-volume, it interacts with a variety of storage providers through a driver architecture.

5. Message queue

Routes messages between the Block Storage processes.
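For orientation, here is how those components map onto the systemd units we manage in the steps below (a sketch based on the RDO package names used in this deployment):

# Controller node (linux-node1): API and scheduler
#   openstack-cinder-api.service, openstack-cinder-scheduler.service
# Storage node (linux-node2): volume service plus the iSCSI target
#   openstack-cinder-volume.service, target.service
[root@linux-node1 ~]# systemctl list-units 'openstack-cinder*'   # list the cinder units on a node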

Without the cinder service, an instance's disk lives under /var/lib/nova/instances/<instance ID>, as shown below:

[root@linux-node2 instances]# ll -rt /var/lib/nova/instances/
total 8
drwxr-xr-x. 2 nova nova   69 Feb  8 20:26 afda0b61-a8f8-4e27-bf42-b20503496fe1   # by default, the local disk serves as the storage backing
-rw-r--r--. 1 nova nova   45 Feb  8 21:36 compute_nodes
drwxr-xr-x. 2 nova nova  100 Feb  8 21:36 _base
drwxr-xr-x. 2 nova nova 4096 Feb  8 21:36 locks

Deploying and Installing Cinder

We can follow the official guide: http://docs.openstack.org/newton/install-guide-rdo/cinder-controller-install.html
We install it on the linux-node1 node.

1. Create the database and database user
We already did this step when installing keystone, but here it is again for completeness:

mysql> CREATE DATABASE cinder;
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
  IDENTIFIED BY 'cinder';
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY 'cinder';
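
As a quick sanity check, you can log in with the new credentials and confirm the grants work (the cinder database should be listed):

[root@linux-node1 ~]# mysql -h 192.168.56.11 -ucinder -pcinder -e "SHOW DATABASES;"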

2. Create the OpenStack user
We also created this OpenStack user when installing keystone; here is how to create it again for completeness:

[root@linux-node1 ~]# source admin_openrc
[root@linux-node1 ~]# openstack user create --domain default --password-prompt cinder
[root@linux-node1 ~]# openstack role add --project service --user cinder admin
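
A quick hedged check that the user exists and the role assignment took effect:

[root@linux-node1 ~]# openstack user show cinder
[root@linux-node1 ~]# openstack role assignment list --user cinder --project service --names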

3. Install the Cinder packages

[root@linux-node1 ~]# yum install openstack-cinder

4. Configure Cinder

[root@linux-node1 ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@192.168.56.11
auth_strategy = keystone

[database]
connection = mysql+pymysql://cinder:cinder@192.168.56.11/cinder

[keystone_authtoken]
auth_uri = http://192.168.56.11:5000
auth_url = http://192.168.56.11:35357
memcached_servers = 192.168.56.11:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = cinder

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

5. Sync the database

[root@linux-node1 ~]#  su -s /bin/sh -c "cinder-manage db sync" cinder
[root@linux-node1 ~]# mysql -h 192.168.56.11 -ucinder -pcinder
MariaDB [(none)]> use cinder;
MariaDB [cinder]> show tables;
[root@linux-node1 ~]# mysql -h 192.168.56.11 -ucinder -pcinder -e "use  cinder;show tables" |wc -l   # verify the output is 34 lines in total
34

6. Configure the Compute service to use Block Storage

[root@linux-node1 ~]# vim /etc/nova/nova.conf

[cinder]
os_region_name = RegionOne

7. Restart the Compute API service, then enable and start the Cinder services

[root@linux-node1 ~]# systemctl restart openstack-nova-api.service
[root@linux-node1 ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
[root@linux-node1 ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

8. Check that the port is listening and that the log shows no errors:

[root@linux-node1 ~]# netstat -lnpt  |grep 8776
tcp        0      0 0.0.0.0:8776            0.0.0.0:*               LISTEN      6256/python2
[root@linux-node1 ~]# tail /var/log/cinder/api.log 

9. Register the service and endpoints
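
Note that the endpoint commands below assume the cinder and cinderv2 service entities already exist. If they do not, create them first; these commands match the Newton install guide:

openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2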

openstack endpoint create --region RegionOne  volume public http://192.168.56.11:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume internal http://192.168.56.11:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne  volume admin http://192.168.56.11:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne  volumev2 public http://192.168.56.11:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne  volumev2 internal http://192.168.56.11:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne  volumev2 admin http://192.168.56.11:8776/v2/%\(tenant_id\)s

[root@linux-node1 ~]# openstack endpoint create --region RegionOne  volume public http://192.168.56.11:8776/v1/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | 69fa1e44b92a4511b87e6bba900a9d7a           |
| interface    | public                                     |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | 79eaa15817444e518f08a31555a1cb36           |
| service_name | cinder                                     |
| service_type | volume                                     |
| url          | http://192.168.56.11:8776/v1/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne volume internal http://192.168.56.11:8776/v1/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | 4cc826bb78f848979303b478d7bb66ab           |
| interface    | internal                                   |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | 79eaa15817444e518f08a31555a1cb36           |
| service_name | cinder                                     |
| service_type | volume                                     |
| url          | http://192.168.56.11:8776/v1/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne  volume admin http://192.168.56.11:8776/v1/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | c70163f3372449ef8978514aa19d5cad           |
| interface    | admin                                      |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | 79eaa15817444e518f08a31555a1cb36           |
| service_name | cinder                                     |
| service_type | volume                                     |
| url          | http://192.168.56.11:8776/v1/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne  volumev2 public http://192.168.56.11:8776/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | 028b68c6a48a49be81760c3359c3be3f           |
| interface    | public                                     |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | 5452eb159d5a420187697669fbb0fb31           |
| service_name | cinderv2                                   |
| service_type | volumev2                                   |
| url          | http://192.168.56.11:8776/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne  volumev2 internal http://192.168.56.11:8776/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | 96aaa6d2023e457bafce320a3116fafa           |
| interface    | internal                                   |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | 5452eb159d5a420187697669fbb0fb31           |
| service_name | cinderv2                                   |
| service_type | volumev2                                   |
| url          | http://192.168.56.11:8776/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne  volumev2 admin http://192.168.56.11:8776/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | d1cfc448bbad4d6db86e5bf79da4fb29           |
| interface    | admin                                      |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | 5452eb159d5a420187697669fbb0fb31           |
| service_name | cinderv2                                   |
| service_type | volumev2                                   |
| url          | http://192.168.56.11:8776/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack service list    # check whether the services registered successfully
+----------------------------------+----------+----------+
| ID                               | Name     | Type     |
+----------------------------------+----------+----------+
| 5452eb159d5a420187697669fbb0fb31 | cinderv2 | volumev2 |
| 75791c905b92412ca4390b3970726f75 | glance   | image    |
| 79eaa15817444e518f08a31555a1cb36 | cinder   | volume   |
| 84f33de0de8c4da18cfb7f213b63a638 | nova     | compute  |
| c4dadf8bf2f74561b7408a5089541432 | neutron  | network  |
| d24e9eacb30a4c9fa6d1109c856f6b11 | keystone | identity |
+----------------------------------+----------+----------+
[root@linux-node1 ~]# openstack endpoint list  |grep cinder         # check whether the endpoints registered successfully
| 028b68c6a48a49be81760c3359c3be3f | RegionOne | cinderv2     | volumev2     | True    | public    | http://192.168.56.11:8776/v2/%(tenant_id)s   |
| 4cc826bb78f848979303b478d7bb66ab | RegionOne | cinder       | volume       | True    | internal  | http://192.168.56.11:8776/v1/%(tenant_id)s   |
| 69fa1e44b92a4511b87e6bba900a9d7a | RegionOne | cinder       | volume       | True    | public    | http://192.168.56.11:8776/v1/%(tenant_id)s   |
| 96aaa6d2023e457bafce320a3116fafa | RegionOne | cinderv2     | volumev2     | True    | internal  | http://192.168.56.11:8776/v2/%(tenant_id)s   |
| c70163f3372449ef8978514aa19d5cad | RegionOne | cinder       | volume       | True    | admin     | http://192.168.56.11:8776/v1/%(tenant_id)s   |
| d1cfc448bbad4d6db86e5bf79da4fb29 | RegionOne | cinderv2     | volumev2     | True    | admin     | http://192.168.56.11:8776/v2/%(tenant_id)s   |
Installing the Storage Node

Before installing the storage node, understand the design: on the storage node we use LVM to carve out a volume group, and then expose volumes from that group to instances over iSCSI.
I install the storage node on linux-node2, as follows:

1. Install LVM and enable it at boot; most CentOS installs already ship the LVM tools.
[root@linux-node2 ~]# yum install lvm2
[root@linux-node2 ~]#  systemctl enable lvm2-lvmetad.service
[root@linux-node2 ~]# systemctl start lvm2-lvmetad.service
2. Create the LVM physical volume and volume group
[root@linux-node2 ~]#  pvcreate /dev/sdb
[root@linux-node2 ~]#  vgcreate cinder-volumes /dev/sdb
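
You can confirm the physical volume and volume group were created with pvs and vgs:

[root@linux-node2 ~]# pvs                  # /dev/sdb should appear as a physical volume
[root@linux-node2 ~]# vgs cinder-volumes   # the new volume group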

Add a filter in /etc/lvm/lvm.conf that accepts only the /dev/sdb device and rejects everything else. Each element in the filter array begins with a for accept or r for reject, followed by a regular expression matching device names. The array must end with r/.*/ to reject all remaining devices. You can test the filter with the command vgs -vvvv.

[root@linux-node2 ~]#  vim /etc/lvm/lvm.conf
devices {    # important: the filter must go inside the devices section
	filter = [ "a/sda/", "a/sdb/", "r/.*/"]
	}
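
As mentioned above, the filter can be tested with vgs in verbose mode; a hedged one-liner to surface the filter decisions:

[root@linux-node2 ~]# vgs -vvvv 2>&1 | grep -i filter | head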
3. Install and configure Cinder
[root@linux-node2 ~]# yum install openstack-cinder targetcli python-keystone

Once installed, we edit cinder's configuration file. For convenience, copy the config file from linux-node1 over to linux-node2:

[root@linux-node1 ~]# scp /etc/cinder/cinder.conf root@192.168.56.12:/etc/cinder/

We then only need to add a few settings on top of it:

[root@linux-node2 ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
enabled_backends = lvm
glance_api_servers = http://192.168.56.11:9292
iscsi_ip_address = 192.168.56.12   # simply use this node's own IP

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

With the configuration done, here is the resulting cinder.conf in full:

[root@linux-node2 ~]# egrep  "^([a-Z]|\[)" /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@192.168.56.11
glance_api_servers = http://192.168.56.11:9292
auth_strategy = keystone
enabled_backends = lvm
iscsi_ip_address = 192.168.56.12
[database]
connection = mysql+pymysql://cinder:cinder@192.168.56.11/cinder
[keystone_authtoken]
auth_uri = http://192.168.56.11:5000
auth_url = http://192.168.56.11:35357
memcached_servers = 192.168.56.11:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = cinder
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

4. Start the services
After double-checking the configuration, start the Cinder volume service:

[root@linux-node2 ~]# systemctl enable openstack-cinder-volume.service target.service
[root@linux-node2 ~]# systemctl start openstack-cinder-volume.service target.service

5. Verify that the storage service is working
Run this on linux-node1:

[root@linux-node1 ~]# source /root/admin_openrc
[root@linux-node1 ~]# openstack volume service list
+------------------+-----------------------------+------+---------+-------+----------------------------+
| Binary           | Host                        | Zone | Status  | State | Updated At                 |
+------------------+-----------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | linux-node1.example.com     | nova | enabled | up    | 2017-02-09T13:22:53.000000 |
| cinder-volume    | linux-node2.example.com@lvm | nova | enabled | up    | 2017-02-09T13:22:51.000000 |
+------------------+-----------------------------+------+---------+-------+----------------------------+

If the host is recognized and its state is up, the deployment succeeded.

Creating a Volume

With the steps above complete, the volumes page is now visible in the OpenStack Horizon dashboard. Log in as the demo user, as shown below.

[Screenshot: the Volumes page in Horizon]

Click Create Volume on the right; once created, the volume can be attached to a chosen instance. The flow looks like this:

[Screenshot: the Create Volume dialog]

Attach it to the desired instance:

[Screenshot: the Manage Attachments dialog]

Using the Volume

After attaching the volume to a VM, we can start using it. Log in to the VM we just attached the volume to, then run the commands below to format and partition it before use:

[root@host-192-168-56-101 ~]# fdisk -l
Disk /dev/vda: 3221 MB, 3221225472 bytes, 6291456 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00067c89

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1            2048     6291455     3144704   8e  Linux LVM

Disk /dev/vdb: 1073 MB, 1073741824 bytes, 2097152 sectors    # the 1 GB volume we just attached
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-root: 3217 MB, 3217031168 bytes, 6283264 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Format and mount it:

[root@host-192-168-56-101 ~]# mkfs.ext4 /dev/vdb
[root@host-192-168-56-101 ~]# mount /dev/vdb /mnt/
[root@host-192-168-56-101 ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs       3.0G 1004M  2.1G  33% /
devtmpfs                devtmpfs  235M     0  235M   0% /dev
tmpfs                   tmpfs     245M     0  245M   0% /dev/shm
tmpfs                   tmpfs     245M  4.3M  241M   2% /run
tmpfs                   tmpfs     245M     0  245M   0% /sys/fs/cgroup
tmpfs                   tmpfs      49M     0   49M   0% /run/user/0
/dev/vdb                ext4      976M  2.6M  907M   1% /mnt   # mounted and in use
[root@host-192-168-56-101 ~]#
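
If the mount should survive reboots, you could add an fstab entry; a minimal sketch, noting that the /dev/vdb name is not guaranteed to be stable, so the filesystem UUID reported by blkid is safer (the UUID below is a placeholder):

[root@host-192-168-56-101 ~]# blkid /dev/vdb   # note the UUID it prints
[root@host-192-168-56-101 ~]# echo 'UUID=<uuid-from-blkid> /mnt ext4 defaults,nofail 0 0' >> /etc/fstab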

A volume cannot be deleted while it is in use. What you can do is detach the volume first and then attach it to another instance, which carries the data on the volume across, for example with the CLI commands sketched below.
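
A hedged sketch of that detach/reattach flow with the openstack CLI (the instance and volume names are placeholders):

[root@linux-node1 ~]# openstack server remove volume <instance-1> <volume>
[root@linux-node1 ~]# openstack server add volume <instance-2> <volume>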

Back on linux-node2, check the LVM usage: you will find that the volume created in OpenStack corresponds exactly to an LVM logical volume of the same size on the cinder storage node, as shown below:

[root@linux-node2 ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/cinder-volumes/volume-c5bbd596-0dab-408f-885f-941fc83e51df
  LV Name                volume-c5bbd596-0dab-408f-885f-941fc83e51df
  VG Name                cinder-volumes
  LV UUID                w3sQDU-MGsW-nJbB-zUxX-HXyH-0wXC-y9z9aH
  LV Write Access        read/write
  LV Creation host, time linux-node2.example.com, 2017-02-09 21:31:57 +0800
  LV Status              available
  # open                 0
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

Using NFS as the Cinder Backend

For internal development or functional testing, NFS is a simple, viable storage backend for cinder. Development and functional testing place modest demands on disk I/O, and most office networks run on gigabit switches, which works out to roughly 1024 Mbit/s ÷ 8 = 128 MB/s of theoretical throughput, enough for most day-to-day office use. So in this section we cover using NFS as cinder's backend store. In production, Gluster and Ceph are the more common choices for the cinder backend; we start with NFS here, and later we will look at using Ceph as the Cinder backend.

We can follow the OpenStack wiki: https://wiki.openstack.org/wiki/How_to_deploy_cinder_with_NFS
We continue on linux-node2 and install NFS there.
1. Install and configure NFS

[root@linux-node2 ~]# yum -y install nfs-utils rpcbind
[root@linux-node2 ~]# cat /etc/exports
/data/nfs *(rw,no_root_squash)   # export /data/nfs

[root@linux-node2 ~]# mkdir -p /data/nfs   # create the mount point before mounting
[root@linux-node2 ~]# mkfs.ext4 /dev/sdc
[root@linux-node2 ~]# mount /dev/sdc /data/nfs/

[root@linux-node2 ~]# systemctl restart nfs
[root@linux-node2 ~]# systemctl enable nfs
[root@linux-node2 ~]# systemctl restart rpcbind
[root@linux-node2 ~]# systemctl enable rpcbind

[root@linux-node2 ~]# showmount -e localhost
Export list for localhost:
/data/nfs *

2. Configure cinder
First, understand that cinder selects its storage media by configuring a driver in the cinder.conf file. To use NFS as the storage medium, we need to configure the NFS driver. So how do we find the NFS driver's name? Follow the steps below:

[root@linux-node2 ~]# cd /usr/lib/python2.7/site-packages/cinder   # enter the cinder Python package
[root@linux-node2 cinder]# cd volume/drivers/   # the volume drivers live here
[root@linux-node2 drivers]# grep Nfs nfs.py   # grep for Nfs to find it
class NfsDriver(driver.ExtendVD, remotefs.RemoteFSDriver):   # this class is the NFS driver name

With the driver name in hand, we configure cinder.conf:

[root@linux-node2 drivers]# tail /etc/cinder/cinder.conf
[DEFAULT]
enabled_backends = nfs   # set the storage backend to NFS

[nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver   # the driver name we found above
nfs_shares_config = /etc/cinder/nfs_shares   # the NFS shares file we create next
nfs_mount_point_base = $state_path/mnt

Create the NFS shares file:

[root@linux-node2 drivers]# cat /etc/cinder/nfs_shares
192.168.56.12:/data/nfs     
[root@linux-node2 drivers]# chown root:cinder /etc/cinder/nfs_shares
[root@linux-node2 drivers]# chmod 640 /etc/cinder/nfs_shares

[root@linux-node2 drivers]# ll  /etc/cinder/nfs_shares    # confirm the ownership and permissions
-rw-r-----. 1 root cinder 24 Feb 10 21:57 /etc/cinder/nfs_shares

3. Restart the Cinder volume service

[root@linux-node2 drivers]# systemctl restart openstack-cinder-volume
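
Once restarted, cinder-volume should mount the share under its mount point base. A hedged check (the exact mount path includes a hash of the share):

[root@linux-node2 drivers]# mount | grep /var/lib/cinder/mnt   # the NFS share should show up here
[root@linux-node2 drivers]# tail /var/log/cinder/volume.log    # watch for mount errors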

4. On the controller node (linux-node1), check whether the cinder service registered successfully

[root@linux-node1 ~]# openstack volume service list
+------------------+-----------------------------+------+---------+-------+----------------------------+
| Binary           | Host                        | Zone | Status  | State | Updated At                 |
+------------------+-----------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | linux-node1.example.com     | nova | enabled | up    | 2017-02-10T14:01:38.000000 |
| cinder-volume    | linux-node2.example.com@lvm | nova | enabled | down  | 2017-02-10T14:00:32.000000 |   # this down state is expected, because we switched the backend from LVM to NFS
| cinder-volume    | linux-node2.example.com@nfs | nova | enabled | up    | 2017-02-10T14:00:51.000000 |
+------------------+-----------------------------+------+---------+-------+----------------------------+
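
If the stale @lvm entry bothers you, it can be dropped; a hedged sketch, since cinder-manage subcommand availability varies by release:

[root@linux-node1 ~]# cinder-manage service remove cinder-volume linux-node2.example.com@lvm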

5. On the controller node, create an NFS volume type, then bind it to linux-node2.example.com@nfs

[root@linux-node1 ~]# source admin_openrc
[root@linux-node1 ~]# cinder type-create NFS
+--------------------------------------+------+-------------+-----------+
| ID                                   | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| e7c50520-6d21-4314-a802-09ae8d799252 | NFS  | -           | True      |
+--------------------------------------+------+-------------+-----------+

[root@linux-node1 ~]# cinder type-create ISCSI       # if you run both LVM and NFS, create this type as well
+--------------------------------------+-------+-------------+-----------+
| ID                                   | Name  | Description | Is_Public |
+--------------------------------------+-------+-------------+-----------+
| 80980708-8247-45f5-b8a4-072efb6d5054 | ISCSI | -           | True      |
+--------------------------------------+-------+-------------+-----------+

Now bind the volume type to the backend.
First, give the NFS volume node's backend a name:

[root@linux-node2 ~]# vim /etc/cinder/cinder.conf

[nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_point_base = $state_path/mnt
volume_backend_name = NFS-Storage      # just add this line to give the backend a name

Bind the type to the backend:

[root@linux-node1 ~]# cinder type-key NFS set volume_backend_name=NFS-Storage  

Parameter notes:

  • type-key takes the volume type name we defined earlier (NFS).
  • volume_backend_name is the backend name we set via volume_backend_name in cinder.conf on the volume node.
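
To confirm the binding took effect, you can list the type's extra specs (the NFS type should now carry volume_backend_name=NFS-Storage):

[root@linux-node1 ~]# cinder extra-specs-list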
6. Create a volume, choosing NFS as the volume type.

[Screenshot: creating a volume with the NFS type]

7. Attach it to the chosen instance, and the volume is ready to use.

