The GlusterFS Learning Path (3): Mounting on Clients and Managing GlusterFS Volumes


  • I. Mounting on the Client

  The Gluster Native Client can be used on GNU/Linux clients to achieve high concurrency, good performance, and transparent failover. Gluster volumes can also be accessed using NFS v3. The NFS implementations on GNU/Linux clients and other operating systems have been tested extensively, including FreeBSD, Mac OS X, Windows 7 (Professional and up), and Windows Server 2003; other NFS client implementations can also work with the Gluster NFS server. When using Microsoft Windows or Samba clients, volumes can be accessed via CIFS; for this access method the Samba packages need to be present on the client.

  Summary: GlusterFS supports three client types: the Gluster Native Client, NFS, and CIFS. The Gluster Native Client is a FUSE-based client running in user space; it is the officially recommended client and exposes the full feature set of GlusterFS.
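For reference, a volume exported through Gluster's built-in NFS server can be mounted roughly as follows. This is only a sketch: it assumes the volume's nfs.disable option is off (the test-volume shown later in this article has it on) and that /mnt/nfs already exists:

# mount -t nfs -o vers=3,mountproto=tcp 192.168.56.11:/gv1 /mnt/nfs  # Gluster NFS speaks NFS v3 over TCP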

  • 1. Mounting with the Gluster Native Client

The Gluster Native Client is FUSE-based, so make sure FUSE is installed on the client. It is the officially recommended client and supports highly concurrent, efficient writes.

Before installing the Gluster Native Client, verify that the FUSE module is loaded on the client and that the required modules are accessible, as shown below:

[root@localhost ~]# modprobe fuse  # add the FUSE loadable kernel module (LKM) to the Linux kernel
[root@localhost ~]# dmesg | grep -i fuse  # verify that the FUSE module is loaded
[  569.630373] fuse init (API version 7.22)
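If modprobe was needed (on many modern kernels the fuse module is built in or loaded automatically), you can make the load persist across reboots. A minimal sketch for systemd-based systems; the file name is a hypothetical choice:

[root@localhost ~]# echo fuse > /etc/modules-load.d/fuse.conf  # systemd loads modules listed here at boot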

Install the Gluster Native Client:

[root@localhost ~]# yum -y install glusterfs-client  # install the glusterfs client package
[root@localhost ~]# mkdir /mnt/glusterfs  # create the mount point
[root@localhost ~]# mount.glusterfs 192.168.56.11:/gv1 /mnt/glusterfs/  # mount volume gv1
[root@localhost ~]# df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/sda2            20G  1.4G   19G   7% /
devtmpfs            231M     0  231M   0% /dev
tmpfs               241M     0  241M   0% /dev/shm
tmpfs               241M  4.6M  236M   2% /run
tmpfs               241M     0  241M   0% /sys/fs/cgroup
/dev/sda1           197M   97M  100M  50% /boot
tmpfs                49M     0   49M   0% /run/user/0
192.168.56.11:/gv1  4.0G  312M  3.7G   8% /mnt/glusterfs
[root@localhost ~]# ll /mnt/glusterfs/  # list the contents of the mount point
total 100000
-rw-r--r-- 1 root root 102400000 Aug  7 04:30 100M.file
[root@localhost ~]# mount  # inspect the mount information
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
......
192.168.56.11:/gv1 on /mnt/glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

Options for mounting a volume manually:

The following options can be passed to the mount -t glusterfs command. Note that multiple options must be separated by commas.

backupvolfile-server=server-name  # if this option is given when mounting the FUSE client, the named server is used as the volfile server when the first volfile server fails

volfile-max-fetch-attempts=number-of-attempts  # number of attempts to fetch the volume file while mounting the volume

log-level=loglevel  # log level

log-file=logfile    # log file

transport=transport-type  # transport protocol to use

direct-io-mode=[enable|disable]  # enable or disable direct I/O mode

use-readdirp=[yes|no]  # if set to yes, forces the readdirp mode in the FUSE kernel module

For example:
# mount -t glusterfs -o backupvolfile-server=volfile_server2,use-readdirp=no,volfile-max-fetch-attempts=2,log-level=WARNING,log-file=/var/log/gluster.log server1:/test-volume /mnt/glusterfs

Mounting a volume automatically:

Besides mounting manually with mount, a volume can also be mounted automatically via /etc/fstab.

Syntax: HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR glusterfs defaults,_netdev 0 0

For example:
192.168.56.11:/gv1 /mnt/glusterfs glusterfs defaults,_netdev 0 0
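The manual mount options listed above can be carried into /etc/fstab as well. A sketch combining several of them (192.168.56.12 is a hypothetical second volfile server):

192.168.56.11:/gv1 /mnt/glusterfs glusterfs defaults,_netdev,backupvolfile-server=192.168.56.12,log-level=WARNING 0 0
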
  • II. Managing GlusterFS Volumes

(1) Stopping a volume

[root@gluster-node1 ~]# gluster volume stop gv1  # stop volume gv1

(2) Deleting a volume

[root@gluster-node1 ~]# gluster volume delete gv1  # delete volume gv1 (the volume must be stopped first)
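Note that deleting a volume does not erase the data on its bricks, and GlusterFS metadata is left behind on the brick directories. To reuse a brick path in a new volume, that metadata has to be cleared first; a sketch, assuming the brick layout used in this article:

[root@gluster-node1 ~]# setfattr -x trusted.glusterfs.volume-id /storage/brick1  # drop the volume-id extended attribute
[root@gluster-node1 ~]# setfattr -x trusted.gfid /storage/brick1                 # drop the gfid extended attribute
[root@gluster-node1 ~]# rm -rf /storage/brick1/.glusterfs                        # remove the internal metadata directory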

 

(3) Expanding a volume

GlusterFS supports expanding volumes online.

If the node to be added is not yet part of the trusted pool, add it with the following command:

Syntax: # gluster peer probe <SERVERNAME>

Syntax for expanding a volume: # gluster volume add-brick <VOLNAME> <NEW-BRICK>

[root@gluster-node1 ~]# gluster peer probe gluster-node3  # add gluster-node3 to the trusted pool
peer probe: success.

[root@gluster-node1 ~]# gluster volume add-brick test-volume gluster-node3:/storage/brick1 force  # expand the test-volume volume
volume add-brick: success
[root@gluster-node1 ~]# gluster volume info
 
Volume Name: test-volume
Type: Distribute
Volume ID: 26a625bb-301c-4730-a382-0a838ee63935
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-node1:/storage/brick1
Brick2: gluster-node2:/storage/brick1
Brick3: gluster-node3:/storage/brick1      # the newly added brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
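
Note that test-volume is a distribute volume, so bricks can be added one at a time. For a replicated volume, bricks must be added in multiples of the replica count; a hypothetical example for a replica-2 volume named rep-volume:

# gluster volume add-brick rep-volume replica 2 gluster-node3:/storage/brick2 gluster-node4:/storage/brick2  # one new brick per replica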

[root@gluster-node1 ~]# gluster volume rebalance test-volume start  # after adding the brick, rebalance the volume so files are distributed onto it
volume rebalance: test-volume: success: Rebalance on test-volume has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: ca58bd21-11a5-4018-bb2a-8f9079982394
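
As the message suggests, the progress of the rebalance can be followed until every node reports completed:

[root@gluster-node1 ~]# gluster volume rebalance test-volume status  # per-node rebalanced file counts and current status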

(4) Shrinking a volume

Shrinking a volume is similar to expanding one: it is done brick by brick.

Syntax: # gluster volume remove-brick <VOLNAME> <BRICKNAME> start

[root@gluster-node1 ~]# gluster volume remove-brick test-volume gluster-node3:/storage/brick1 start  # start removing the brick
volume remove-brick start: success
ID: dd0004f0-b3e6-45d6-80ed-90506dc16159
[root@gluster-node1 ~]# gluster volume remove-brick test-volume gluster-node3:/storage/brick1 status  # check the status of the remove-brick operation
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                           gluster-node3               35        0Bytes            35             0             0            completed        0:00:00
[root@gluster-node1 ~]# gluster volume remove-brick test-volume gluster-node3:/storage/brick1 commit  # once the status shows completed, commit the remove-brick operation
volume remove-brick commit: success
[root@gluster-node1 ~]# gluster volume info
 
Volume Name: test-volume
Type: Distribute
Volume ID: 26a625bb-301c-4730-a382-0a838ee63935
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-node1:/storage/brick1
Brick2: gluster-node2:/storage/brick1
Options Reconfigured:
performance.client-io-threads: on
transport.address-family: inet
nfs.disable: on
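
If you change your mind after starting a shrink but before committing it, the pending remove-brick operation can be aborted:

# gluster volume remove-brick test-volume gluster-node3:/storage/brick1 stop  # abort the data migration and keep the brick in the volume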

(5) Migrating a volume

To replace a brick in a distribute volume, add a new brick and then remove the brick to be replaced. The removal triggers a rebalance, which migrates the data on the removed brick to the newly added one.

Note: the "replace-brick" command itself can only be used on replicated or distributed-replicated volumes.
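
For those volume types, the swap is a single command. A sketch, with rep-volume and its brick paths as hypothetical names:

# gluster volume replace-brick rep-volume gluster-node2:/storage/brick1 gluster-node3:/storage/brick1 commit force  # self-heal then copies the data onto the new brick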

(1) Initial configuration of test-volume:
[root@gluster-node1 gv1]# gluster volume info
 
Volume Name: test-volume
Type: Distribute
Volume ID: 26a625bb-301c-4730-a382-0a838ee63935
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-node1:/storage/brick1
Brick2: gluster-node2:/storage/brick1
Options Reconfigured:
performance.client-io-threads: on
transport.address-family: inet
nfs.disable: on

(2) Files in the mount directory of test-volume and where they are actually stored:
[root@gluster-node1 gv1]# ll
total 0
-rw-r--r-- 1 root root 0 Aug 13 22:22 file1
-rw-r--r-- 1 root root 0 Aug 13 22:22 file2
-rw-r--r-- 1 root root 0 Aug 13 22:22 file3
-rw-r--r-- 1 root root 0 Aug 13 22:22 file4
-rw-r--r-- 1 root root 0 Aug 13 22:22 file5
[root@gluster-node1 gv1]# ll /storage/brick1/
total 0
-rw-r--r-- 2 root root 0 Aug 13 22:22 file1
-rw-r--r-- 2 root root 0 Aug 13 22:22 file2
-rw-r--r-- 2 root root 0 Aug 13 22:22 file5
[root@gluster-node2 ~]# ll /storage/brick1/
total 0
-rw-r--r-- 2 root root 0 Aug 13 2018 file3
-rw-r--r-- 2 root root 0 Aug 13 2018 file4

(3) Add the new brick gluster-node3:/storage/brick1:
[root@gluster-node1 ~]# gluster volume add-brick test-volume gluster-node3:/storage/brick1/ force
volume add-brick: success

(4) Start the remove-brick operation:
[root@gluster-node1 ~]# gluster volume remove-brick test-volume gluster-node2:/storage/brick1 start
volume remove-brick start: success
ID: 2acdaebb-25a9-477c-807e-980a6086796e

(5) Check that the remove-brick status shows completed:
[root@gluster-node1 ~]# gluster volume remove-brick test-volume gluster-node2:/storage/brick1 status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                           gluster-node2                2        0Bytes             2             0             0            completed        0:00:00

(6) Commit the removal of the old brick:
[root@gluster-node1 ~]# gluster volume remove-brick test-volume gluster-node2:/storage/brick1 commit
volume remove-brick commit: success

(7) The updated configuration of test-volume:
[root@gluster-node1 ~]# gluster volume info
 
Volume Name: test-volume
Type: Distribute
Volume ID: 26a625bb-301c-4730-a382-0a838ee63935
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-node1:/storage/brick1
Brick2: gluster-node3:/storage/brick1
Options Reconfigured:
performance.client-io-threads: on
transport.address-family: inet
nfs.disable: on

(8) Check the files on the new brick; the files previously stored on gluster-node2 have moved to gluster-node3:
[root@gluster-node3 ~]# ll /storage/brick1/
total 0
-rw-r--r-- 2 root root 0 Aug 13 2018 file3
-rw-r--r-- 2 root root 0 Aug 13 2018 file4

(6) Quotas

[root@gluster-node1 ~]# gluster volume quota test-volume enable    # enable quota
volume quota : success

[root@gluster-node1 ~]# gluster volume quota test-volume disable    # disable quota
volume quota : success

[root@gluster-node1 ~]# mount -t glusterfs 127.0.0.1:/test-volume /gv1  # mount the test-volume volume
[root@gluster-node1 ~]# mkdir /gv1/quota  # create the directory to be limited
[root@gluster-node1 ~]# gluster volume quota test-volume limit-usage /quota 10MB    # limit the /quota directory (/gv1/quota on the client) to 10MB

[root@gluster-node1 ~]# gluster volume quota test-volume list  # list directory quota information
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/quota                                    10.0MB     80%(8.0MB)   0Bytes  10.0MB              No                   No

[root@gluster-node1 ~]# gluster volume set test-volume features.quota-timeout 5      # set the timeout (in seconds) for cached quota size information

[root@gluster-node1 quota]# cp /gv1/20M.file .  # copy a 20M file into /gv1/quota; this already exceeds the limit yet still succeeds, likely because enforcement is approximate when the limit is this small
[root@gluster-node1 quota]# cp /gv1/20M.file ./20Mb.file  # copying another 20M file is rejected because the directory quota is exceeded
cp: cannot create regular file ‘./20Mb.file’: Disk quota exceeded

[root@gluster-node1 gv1]# gluster volume quota test-volume remove /quota  # remove the quota setting for a directory
volume quota : success

Note:

The quota feature limits the space of a specific directory under the mount point, e.g. /mnt/glusterfs/data; it does not limit the space of the bricks that make up the volume.
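
A related option: by default, df run on the client reports the size of the whole volume even inside a quota-limited directory. If the installed GlusterFS version supports it, the following makes df reflect the quota limits instead:

# gluster volume set test-volume features.quota-deem-statfs on  # df inside /quota then reports the 10MB limit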

(7) Viewing I/O information

The profile command provides an interface for viewing the I/O information of every brick in a volume.

[root@gluster-node1 ~]# gluster volume profile test-volume start  # start profiling; I/O information can then be viewed
Starting volume profile on test-volume has been successful 
[root@gluster-node1 ~]# gluster volume profile test-volume info  # view the I/O information of each brick
Brick: gluster-node1:/storage/brick1
------------------------------------
Cumulative Stats:
   Block Size:              32768b+              131072b+ 
 No. of Reads:                    0                     0 
No. of Writes:                    2                   312 
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us            122      FORGET
      0.00       0.00 us       0.00 us       0.00 us            160     RELEASE
      0.00       0.00 us       0.00 us       0.00 us             68  RELEASEDIR
 
    Duration: 250518 seconds
   Data Read: 0 bytes
Data Written: 40960000 bytes
 
Interval 1 Stats:
 
    Duration: 27 seconds
   Data Read: 0 bytes
Data Written: 0 bytes
 
Brick: gluster-node3:/storage/brick1
------------------------------------
Cumulative Stats:
   Block Size:               1024b+                2048b+                4096b+ 
 No. of Reads:                    0                     0                     0 
No. of Writes:                    3                     1                    10 
 
   Block Size:               8192b+               16384b+               32768b+ 
 No. of Reads:                    0                     0                     1 
No. of Writes:                  291                   516                    68 
 
   Block Size:              65536b+              131072b+ 
 No. of Reads:                    0                   156 
No. of Writes:                    6                    20 
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us              3     RELEASE
      0.00       0.00 us       0.00 us       0.00 us             31  RELEASEDIR
 
    Duration: 76999 seconds
   Data Read: 20480000 bytes
Data Written: 20480000 bytes
 
Interval 1 Stats:
 
    Duration: 26 seconds
   Data Read: 0 bytes
Data Written: 0 bytes
[root@gluster-node1 ~]# gluster volume profile test-volume stop  # turn profiling off when done
Stopping volume profile on test-volume has been successful 

(8) Top monitoring

The top command lets you view brick performance metrics such as read and write throughput, file open calls, file read calls, file write calls, directory open calls, and directory read calls.

Every view accepts a list count (list-cnt); the default is 100.

# gluster volume top VOLNAME open [brick BRICK-NAME] [list-cnt]    // view open fds

[root@gluster-node1 ~]# gluster volume top test-volume open brick gluster-node1:/storage/brick1 list-cnt 3
Brick: gluster-node1:/storage/brick1
Current open fds: 0, Max open fds: 4, Max openfd time: 2018-08-13 11:53:24.099217
Count        filename
=======================
1        /98.txt
1        /95.txt
1        /87.txt


# gluster volume top VOLNAME read [brick BRICK-NAME] [list-cnt]    // view the files with the most read calls

[root@gluster-node1 ~]# gluster volume top test-volume read brick gluster-node3:/storage/brick1 
Brick: gluster-node3:/storage/brick1
Count        filename
=======================
157        /20M.file

# gluster volume top VOLNAME write [brick BRICK-NAME] [list-cnt]   // view the files with the most write calls

[root@gluster-node1 ~]# gluster volume top test-volume write brick gluster-node3:/storage/brick1 
Brick: gluster-node3:/storage/brick1
Count        filename
=======================
915        /20M.file

# gluster volume top VOLNAME opendir [brick BRICK-NAME] [list-cnt]    // view the directories with the most opendir calls
# gluster volume top VOLNAME readdir [brick BRICK-NAME] [list-cnt]    // view the directories with the most readdir calls

[root@gluster-node1 ~]# gluster volume top test-volume opendir brick gluster-node3:/storage/brick1 
Brick: gluster-node3:/storage/brick1
Count        filename
=======================
7        /quota

[root@gluster-node1 ~]# gluster volume top test-volume readdir brick gluster-node3:/storage/brick1 
Brick: gluster-node3:/storage/brick1
Count        filename
=======================
7        /quota


# gluster volume top VOLNAME read-perf [bs blk-size count count] [brick BRICK-NAME] [list-cnt]    // view the read performance of each brick

[root@gluster-node1 ~]# gluster volume top test-volume read-perf bs 256 count 1 brick gluster-node3:/storage/brick1 
Brick: gluster-node3:/storage/brick1
Throughput 42.67 MBps time 0.0000 secs
MBps Filename                                        Time                      
==== ========                                        ====                      
   0 /20M.file                                       2018-08-14 03:32:24.7443

# gluster volume top VOLNAME write-perf [bs blk-size count count] [brick BRICK-NAME] [list-cnt]    // view the write performance of each brick

[root@gluster-node1 ~]# gluster volume top test-volume write-perf bs 256 count 1 brick gluster-node1:/storage/brick1 
Brick: gluster-node1:/storage/brick1
Throughput 16.00 MBps time 0.0000 secs
MBps Filename                                        Time                      
==== ========                                        ====                      
   0 /quota/20Mb.file                                2018-08-14 11:34:21.957635
   0 /quota/20M.file                                 2018-08-14 11:31:02.767068

 

