CentOS 7: configuring user disk quotas on an XFS file system


 

XFS setup on CentOS 7

 

XFS is a highly scalable, high-performance file system, and the default file system on RHEL 7/CentOS 7.
XFS supports metadata journaling, which lets it recover from a crash more quickly.
It can also be defragmented and grown while mounted and active.
Through delayed allocation, XFS gains many opportunities to optimize write performance.
The xfsdump and xfsrestore tools back up and restore XFS file systems;
xfsdump supports dump levels for incremental backups, and can exclude files by size, subtree, or inode flags.
XFS also supports user, group, and project quotas.
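For example, the incremental backup workflow mentioned above might look like the following sketch. The paths are hypothetical, and the commands are echoed rather than executed, since xfsdump needs root and a mounted XFS file system:

```shell
FS=/xfsdata          # hypothetical mounted XFS file system
DEST=/backup         # hypothetical dump destination
FULL="xfsdump -l 0 -f $DEST/xfsdata.0 $FS"   # level 0: full dump
INCR="xfsdump -l 1 -f $DEST/xfsdata.1 $FS"   # level 1: changes since level 0
echo "$FULL"
echo "$INCR"
```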

The steps below show how to create an XFS file system, assign quotas on it, and grow it:
###############################################################################
Partition /dev/sdb (2 GB) and enable the LVM flag

[root@localhost zhongq]# parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mkpart primary 4 2048
(parted) set 1 lvm on
(parted) p
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 2147MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
 
Number  Start   End     Size    File system  Name     Flags
 1      4194kB  2048MB  2044MB               primary  lvm
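The interactive parted session above can also be scripted. A minimal sketch, echoing the commands instead of executing them since repartitioning is destructive (the device name is this example's):

```shell
DISK=/dev/sdb   # this example's disk; adjust before running for real
MKPART="parted -s $DISK mkpart primary 4 2048"   # same bounds as above, in MB
SETLVM="parted -s $DISK set 1 lvm on"
# Echoed rather than executed -- replace echo with the bare command to apply:
echo "$MKPART"
echo "$SETLVM"
```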

 

###############################################################################
Create the PV

[root@localhost zhongq]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created

[root@localhost zhongq]# pvdisplay
   --- Physical volume ---
   PV Name               /dev/sda2
   VG Name               centos
   PV Size               24.51 GiB / not usable 3.00 MiB
   Allocatable           yes (but full)
   PE Size               4.00 MiB
   Total PE              6274
   Free PE               0
   Allocated PE          6274
   PV UUID               9hp8U7-IJM6-bwbP-G9Vn-IVuJ-yvE8-AkFjcB
    
   "/dev/sdb1" is a new physical volume of "1.90 GiB"
   --- NEW Physical volume ---
   PV Name               /dev/sdb1
   VG Name              
   PV Size               1.90 GiB
   Allocatable           NO
   PE Size               0  
   Total PE              0
   Free PE               0
   Allocated PE          0
   PV UUID               bu7yIH-1440-BPy1-APG2-FpvX-ejLS-2MIlA8

###############################################################################
Add /dev/sdb1 to a VG named xfsgroup00

[root@localhost zhongq]# vgcreate xfsgroup00 /dev/sdb1
  Volume group "xfsgroup00" successfully created
[root@localhost zhongq]# vgdisplay
  --- Volume group ---
   VG Name               centos
   System ID            
   Format                lvm2
   Metadata Areas        1
   Metadata Sequence No  3
   VG Access             read/write
   VG Status             resizable
   MAX LV                0
   Cur LV                2
   Open LV               2
   Max PV                0
   Cur PV                1
   Act PV                1
   VG Size               24.51 GiB
   PE Size               4.00 MiB
   Total PE              6274
   Alloc PE / Size       6274 / 24.51 GiB
   Free  PE / Size       0 / 0  
   VG UUID               T3Ryyg-R0rn-2i5r-7L5o-AZKG-yFkh-CDzhKm
    
   --- Volume group ---
   VG Name               xfsgroup00
   System ID            
   Format                lvm2
   Metadata Areas        1
   Metadata Sequence No  1
   VG Access             read/write
   VG Status             resizable
   MAX LV                0
   Cur LV                0
   Open LV               0
   Max PV                0
   Cur PV                1
   Act PV                1
   VG Size               1.90 GiB
   PE Size               4.00 MiB
   Total PE              487
   Alloc PE / Size       0 / 0  
   Free  PE / Size       487 / 1.90 GiB
   VG UUID               ejuwcc-sVES-MWWB-3Mup-n1wB-Kd0g-u7jm0H

###############################################################################
Use lvcreate to create a 1 GB LV named xfsdata in the xfsgroup00 VG

[root@localhost zhongq]# lvcreate -L 1024M -n xfsdata xfsgroup00
WARNING: xfs signature detected on /dev/xfsgroup00/xfsdata at offset 0. Wipe it? [y/n] y
   Wiping xfs signature on /dev/xfsgroup00/xfsdata.
   Logical volume "xfsdata" created
[root@localhost zhongq]# lvdisplay
   --- Logical volume ---
   LV Path                /dev/centos/swap
   LV Name                swap
   VG Name                centos
   LV UUID                EnW3at-KlFG-XGaQ-DOoH-cGPP-8pSf-teSVbh
   LV Write Access        read/write
   LV Creation host, time localhost, 2014-08-18 20:15:25 +0800
   LV Status              available
   # open                 2
   LV Size                2.03 GiB
   Current LE             520
   Segments               1
   Allocation             inherit
   Read ahead sectors     auto
   - currently set to     8192
   Block device           253:0
    
   --- Logical volume ---
   LV Path                /dev/centos/root
   LV Name                root
   VG Name                centos
   LV UUID                zmZGkv-Ln4W-B8AY-oDnD-BEk2-6VWL-L0cZOv
   LV Write Access        read/write
   LV Creation host, time localhost, 2014-08-18 20:15:26 +0800
   LV Status              available
   # open                 1
   LV Size                22.48 GiB
   Current LE             5754
   Segments               1
   Allocation             inherit
   Read ahead sectors     auto
   - currently set to     8192
   Block device           253:1
    
   --- Logical volume ---
   LV Path                /dev/xfsgroup00/xfsdata
   LV Name                xfsdata
   VG Name                xfsgroup00
   LV UUID                O4yvoY-XGcD-0zPm-eilR-3JJP-updU-rRCSlJ
   LV Write Access        read/write
   LV Creation host, time localhost.localdomain, 2014-09-23 15:50:19 +0800
   LV Status              available
   # open                 0
   LV Size                1.00 GiB
   Current LE             256
   Segments               1
   Allocation             inherit
   Read ahead sectors     auto
   - currently set to     8192
   Block device           253:3
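The extent arithmetic in these listings is easy to check: the VG reports a 4 MiB PE size and 487 total PEs, and the 1 GiB LV occupies 256 LEs:

```shell
pe_mib=4         # PE Size from vgdisplay
total_pe=487     # Total PE in xfsgroup00
vg_mib=$(( total_pe * pe_mib ))
echo "VG size: $vg_mib MiB"            # 1948 MiB, i.e. the 1.90 GiB reported
le_count=$(( 1024 / pe_mib ))
echo "LEs in a 1 GiB LV: $le_count"    # 256, matching Current LE
```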

###############################################################################
Format the partition as an XFS file system.
Note: once created, an XFS file system cannot be shrunk, but it can be grown with xfs_growfs.

[root@localhost zhongq]# mkfs.xfs /dev/xfsgroup00/xfsdata
meta-data=/dev/xfsgroup00/xfsdata isize=256    agcount=4, agsize=65536 blks
          =                       sectsz=512   attr=2, projid32bit=1
          =                       crc=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
          =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
          =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
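The geometry mkfs.xfs prints follows directly from the 1 GiB LV size and the 4096-byte block size; a quick check:

```shell
lv_bytes=$(( 1024 * 1024 * 1024 ))   # LV Size: 1.00 GiB
bsize=4096                           # data block size reported by mkfs
blocks=$(( lv_bytes / bsize ))
echo "blocks=$blocks"                # 262144, as in the data line above
agsize=$(( blocks / 4 ))
echo "agsize=$agsize"                # 65536 blks per AG with agcount=4
```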

###############################################################################
Mount the XFS partition on a directory, and enable file system quotas with the uquota and gquota mount options.

[root@localhost zhongq]# mkdir /xfsdata
[root@localhost zhongq]# mount -o uquota,gquota /dev/xfsgroup00/xfsdata /xfsdata
[root@localhost zhongq]# chmod 777 /xfsdata
[root@localhost zhongq]# mount | grep xfsdata
/dev/mapper/xfsgroup00-xfsdata on /xfsdata type xfs (rw,relatime,attr2,inode64,usrquota,grpquota)
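To keep quotas enabled across reboots, the same options can go into /etc/fstab (a sketch using this example's device and mount point):

```
/dev/mapper/xfsgroup00-xfsdata  /xfsdata  xfs  defaults,uquota,gquota  0 0
```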

###############################################################################
Use the xfs_quota command to view quota information, assign quotas to users and directories, and verify that the limits take effect.

[root@localhost zhongq]# xfs_quota -x -c 'report' /xfsdata
User quota on /xfsdata (/dev/mapper/xfsgroup00-xfsdata)
                                Blocks                    
User ID          Used       Soft       Hard    Warn/Grace    
---------- --------------------------------------------------
root                0          0          0     00 [--------]
 
Group quota on /xfsdata (/dev/mapper/xfsgroup00-xfsdata)
                                Blocks                    
Group ID         Used       Soft       Hard    Warn/Grace    
---------- --------------------------------------------------
root                0          0          0     00 [--------]
 
[root@localhost zhongq]# xfs_quota -x -c 'limit bsoft=100M bhard=120M zhongq' /xfsdata
[root@localhost zhongq]# xfs_quota -x -c 'report' /xfsdata
User quota on /xfsdata (/dev/mapper/xfsgroup00-xfsdata)
                                Blocks                    
User ID          Used       Soft       Hard    Warn/Grace    
---------- --------------------------------------------------
root                0          0          0     00 [--------]
zhongq              0     102400     122880     00 [--------]
 
Group quota on /xfsdata (/dev/mapper/xfsgroup00-xfsdata)
                                Blocks                    
Group ID         Used       Soft       Hard    Warn/Grace    
---------- --------------------------------------------------
root                0          0          0     00 [--------]
 
[root@localhost zhongq]# su zhongq
[zhongq@localhost ~]$ dd if=/dev/zero of=/xfsdata/zq00 bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 28.9833 s, 3.6 MB/s
[zhongq@localhost ~]$ dd if=/dev/zero of=/xfsdata/zq01 bs=1M count=100
dd: error writing ‘/xfsdata/zq01’: Disk quota exceeded
21+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 4.18921 s, 5.0 MB/s
 
[zhongq@localhost ~]$ exit

[root@localhost zhongq]# xfs_quota
xfs_quota> help
df [-bir] [-hn] [-f file] -- show free and used counts for blocks and inodes
help [command] -- help for one or all commands
print -- list known mount points and projects
quit -- exit the program
quota [-bir] [-gpu] [-hnNv] [-f file] [id|name]... -- show usage and limits

Use 'help commandname' for extended help.
xfs_quota> print
Filesystem          Pathname
/                   /dev/mapper/centos-root
/boot               /dev/sda1
/var/lib/docker     /dev/mapper/centos-root
/xfsdata            /dev/mapper/xfsgroup00-xfsdata (uquota, gquota)
xfs_quota> quota -u zhongq
Disk quotas for User zhongq (1000)
Filesystem                        Blocks      Quota      Limit  Warn/Time      Mounted on
/dev/mapper/xfsgroup00-xfsdata    122880     102400     122880   00  [6 days]   /xfsdata
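xfs_quota reports block limits in 1 KiB units, which explains both the numbers in the report and where the second dd stopped (a quick check):

```shell
soft=$(( 100 * 1024 ))    # bsoft=100M in 1 KiB blocks
hard=$(( 120 * 1024 ))    # bhard=120M
echo "soft=$soft hard=$hard"      # 102400 and 122880, as reported
# After zq00 consumed 100 MiB, the room left under the hard limit:
left_mib=$(( (hard - 100 * 1024) / 1024 ))
echo "left: $left_mib MiB"        # ~20 MiB, where zq01 hit 'quota exceeded'
```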

###############################################################################
First use lvextend to grow the LV to 1.5 GB (its initial size was 1 GB), then use xfs_growfs to grow the XFS file system (sized here in blocks)

[root@localhost zhongq]# lvextend -L 1.5G /dev/xfsgroup00/xfsdata
   Extending logical volume xfsdata to 1.50 GiB
   Logical volume xfsdata successfully resized
   
[root@localhost zhongq]# xfs_growfs /dev/xfsgroup00/xfsdata -D 393216
meta-data=/dev/mapper/xfsgroup00-xfsdata isize=256    agcount=4, agsize=65536 blks
          =                       sectsz=512   attr=2, projid32bit=1
          =                       crc=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
          =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
          =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 262144 to 393216
   
[root@localhost zhongq]# df -h | grep xfsdata
/dev/mapper/xfsgroup00-xfsdata  1.5G  153M  1.4G  10% /xfsdata
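The -D argument to xfs_growfs is a file system block count, not bytes; for 1.5 GiB at the 4096-byte block size reported by mkfs:

```shell
bsize=4096                                        # block size from mkfs output
target_bytes=$(( 3 * 1024 * 1024 * 1024 / 2 ))    # 1.5 GiB
target_blocks=$(( target_bytes / bsize ))
echo "xfs_growfs -D $target_blocks"               # 393216, as used above
```

Without -D, xfs_growfs grows the file system to fill all available space on the device, which is usually what is wanted after an lvextend.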

