Linux LVM Partitioning: Extending a VG, Extending an LV, Shrinking an LV, and LVM Snapshots
I. Overview
LVM is short for Logical Volume Manager, a mechanism for managing disk partitions on Linux. It was written by Heinz Mauelshagen, first released to the Linux community in 1998, and later shipped with the Linux 2.4 kernel. It lets you manage a complete logical-volume environment with simple command-line tools.
II. Versions
LVM1: the original LVM, released in 1998 and available only with the Linux 2.4 kernel; it provides basic logical volume management.
LVM2: the successor to LVM1, available from the Linux 2.6 kernel; it offers additional features on top of the standard LVM1 functionality.
Check what is installed (test machine: CentOS 6.6 x86_64):
[root@ZhongH100 ~]# rpm -qa | grep lvm
mesa-private-llvm-3.4-3.el6.x86_64
lvm2-libs-2.02.111-2.el6_6.2.x86_64
lvm2-2.02.111-2.el6_6.2.x86_64
[root@ZhongH100 ~]# cat /etc/centos-release
CentOS release 6.6 (Final)
[root@ZhongH100 ~]# uname -a
Linux ZhongH100.wxjr.com.cn 2.6.32-504.16.2.el6.centos.plus.x86_64 #1 SMP Wed Apr 22 00:59:31 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@ZhongH100 ~]# getconf LONG_BIT
64
III. LVM building blocks
Physical Volume (PV), Volume Group (VG), Logical Volume (LV), and Physical Extent (PE). The simple diagram below shows how physical volumes, volume groups, and logical volumes relate to one another (the diagram reflects my own understanding and is for reference only).
(figure: LVM overview diagram)
In short: to build logical volumes you first turn one or more disks (or partitions) into physical volumes, pool those physical volumes into one logical container (the volume group), and then carve partitions of whatever sizes you need out of that container; each carved-out piece is a logical volume. Got it? ^_^ A minimal command sketch of this workflow follows.
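To make the layering concrete, here is a minimal end-to-end sketch of the PV → VG → LV workflow. The device names (/dev/sdX1, /dev/sdX2), the VG/LV names, and the mount point are illustrative placeholders, not the ones used in the walkthrough below:

pvcreate /dev/sdX1 /dev/sdX2                        # turn two spare partitions into physical volumes
vgcreate myvg /dev/sdX1 /dev/sdX2                   # pool them into one volume group
lvcreate -L 5G -n mylv myvg                         # carve a 5 GB logical volume out of the pool
mke2fs -t ext4 /dev/myvg/mylv                       # put a filesystem on the logical volume
mkdir /mnt/mylv && mount /dev/myvg/mylv /mnt/mylv   # mount it like any other partition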
IV. Hands-on walkthrough
1. Partition the disk (this lab uses a brand-new disk, /dev/sdb)
[root@ZhongH100 ~]# fdisk -l /dev/sd[a-z]

Disk /dev/sda: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006c656

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        6591    52428800   8e  Linux LVM

Disk /dev/sdb: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

[root@ZhongH100 ~]#
[root@ZhongH100 ~]# fdisk /dev/sdb    # use fdisk to manage the partitions on this disk
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xfb1f25cf.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): p    # enter p to print the current partition table

Disk /dev/sdb: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xfb1f25cf

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n    # enter n to create a new partition
Command action
   e   extended
   p   primary partition (1-4)
p    # enter p to make it a primary partition
Partition number (1-4): 1    # enter 1 to make it the first primary partition
First cylinder (1-7832, default 1):    # press Enter to start the partition at cylinder 1
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-7832, default 7832): +10G    # enter +10G for a 10 GB partition

Command (m for help): n    # enter n to create another partition on this disk
Command action
   e   extended
   p   primary partition (1-4)
p    # enter p to make it a primary partition
Partition number (1-4): 2    # enter 2 to make it the second primary partition
First cylinder (1307-7832, default 1307):    # press Enter to start the partition at cylinder 1307
Using default value 1307
Last cylinder, +cylinders or +size{K,M,G} (1307-7832, default 7832): +10G    # enter +10G for a 10 GB partition

Command (m for help): n    # enter n to create another partition on this disk
Command action
   e   extended
   p   primary partition (1-4)
p    # enter p to make it a primary partition
Partition number (1-4): 3    # enter 3 to make it the third primary partition
First cylinder (2613-7832, default 2613):    # press Enter to start the partition at cylinder 2613
Using default value 2613
Last cylinder, +cylinders or +size{K,M,G} (2613-7832, default 7832): +10G    # enter +10G for a 10 GB partition

Command (m for help): p    # enter p to print the current partition table

Disk /dev/sdb: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xfb1f25cf

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1306    10490413+  83  Linux
/dev/sdb2            1307        2612    10490445   83  Linux
/dev/sdb3            2613        3918    10490445   83  Linux

Command (m for help): t    # enter t to change a partition's type
Partition number (1-4): 1    # enter 1 to change the type of partition 1
Hex code (type L to list codes): 8e    # enter 8e to set the type to Linux LVM
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): t    # enter t to change a partition's type
Partition number (1-4): 2    # enter 2 to change the type of partition 2
Hex code (type L to list codes): 8e    # enter 8e to set the type to Linux LVM
Changed system type of partition 2 to 8e (Linux LVM)

Command (m for help): t    # enter t to change a partition's type
Partition number (1-4): 3    # enter 3 to change the type of partition 3
Hex code (type L to list codes): 8e    # enter 8e to set the type to Linux LVM
Changed system type of partition 3 to 8e (Linux LVM)

Command (m for help): p    # enter p to print the current partition table

Disk /dev/sdb: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xfb1f25cf

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1306    10490413+  8e  Linux LVM
/dev/sdb2            1307        2612    10490445   8e  Linux LVM
/dev/sdb3            2613        3918    10490445   8e  Linux LVM

Command (m for help): w    # enter w to write the table to disk and exit
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@ZhongH100 ~]#
After partitioning we need the kernel to re-read the new partition table. If one run of the command does not register every partition, run it a few more times until they are all recognized. Our sdb has three partitions, and the output below shows the kernel already knows about all of them (the "Device or resource busy" messages mean those partitions are already registered).
[root@ZhongH100 ~]# partx -a /dev/sdb
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
BLKPG: Device or resource busy
error adding partition 3
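If partx -a still cannot register every partition, two other quick ways to check that the kernel sees them (a sketch, not part of the original transcript):

partprobe /dev/sdb            # ask the kernel to re-read the partition table
grep sdb /proc/partitions     # the kernel's own view; sdb1, sdb2 and sdb3 should all be listed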
2. Turn the partitions (or whole disks) into physical volumes (pvcreate)
[root@ZhongH100 ~]# pvcreate /dev/sdb1 /dev/sdb2 /dev/sdb3    # the plain way to write it; the brace-expansion form below is equivalent ^C
[root@ZhongH100 ~]# pvcreate /dev/sdb{1,2,3}
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdb2" successfully created
  Physical volume "/dev/sdb3" successfully created
[root@ZhongH100 ~]# pvs    # use pvs to list all PVs currently on the system
  PV         VG     Fmt  Attr PSize  PFree
  /dev/sdb1         lvm2 ---  10.00g 10.00g
  /dev/sdb2         lvm2 ---  10.00g 10.00g
  /dev/sdb3         lvm2 ---  10.00g 10.00g
[root@ZhongH100 ~]#
3. Combine the physical volumes into a volume group named VGtest (vgcreate)
[root@ZhongH100 ~]# vgcreate VGtest /dev/sdb{1,2,3}
  Volume group "VGtest" successfully created
[root@ZhongH100 ~]# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  VGtest   3   0   0 wz--n- 30.00g 30.00g
[root@ZhongH100 ~]# vgdisplay
  --- Volume group ---
  VG Name               VGtest    # the volume group is named VGtest
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               30.00 GiB    # the new VG is 30 GB, built from three 10 GB partitions
  PE Size               4.00 MiB     # physical extent, the basic allocation unit; 4 MB by default
  Total PE              7680
  Alloc PE / Size       0 / 0
  Free  PE / Size       7680 / 30.00 GiB
  VG UUID               W8fYiw-Zh46-53lr-qWuf-hqLR-Rqla-x1mFQH
[root@ZhongH100 ~]#
4. Create a logical volume inside the volume group, then format and mount it
[root@ZhongH100 ~]# lvcreate -L 2G -n LVtest1 VGtest    # create a 2 GB LV named LVtest1 in the VG named VGtest
  Logical volume "LVtest1" created
[root@ZhongH100 ~]# lvs    # list the LVs on the system
  LV      VG     Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  LVtest1 VGtest -wi-a----- 2.00g
[root@ZhongH100 ~]# mke2fs -t ext4 /dev/VGtest/LVtest1    # format the new LVtest1 logical volume as ext4
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 524288 blocks
26214 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 25 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@ZhongH100 ~]# mkdir /LVtest1    # create a mount point, /LVtest1
[root@ZhongH100 ~]# mount /dev/VGtest/LVtest1 /LVtest1    # mount the LV /dev/VGtest/LVtest1 on /LVtest1
[root@ZhongH100 ~]# mount    # check the mounts
/dev/mapper/vgzhongH-root on / type ext4 (rw,acl)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
/dev/mapper/vgzhongH-data on /data type ext4 (rw,acl)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/mapper/VGtest-LVtest1 on /LVtest1 type ext4 (rw)    # mounted successfully, ext4, read-write
[root@ZhongH100 ~]# df -hP    # check the filesystems on the system
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vgzhongH-root    30G  3.3G   25G  12% /
tmpfs                       932M     0  932M   0% /dev/shm
/dev/sda1                   477M   34M  418M   8% /boot
/dev/mapper/vgzhongH-data   4.8G   10M  4.6G   1% /data
/dev/mapper/VGtest-LVtest1  2.0G  3.0M  1.9G   1% /LVtest1    # the LVtest1 logical volume is working
[root@ZhongH100 ~]#
5. The volume group is running out of space, so we extend it
A new 20 GB disk, /dev/sdc, has been added to the system.
[root@ZhongH100 ~]# fdisk -l /dev/sd[a-z]

Disk /dev/sda: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006c656

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        6591    52428800   8e  Linux LVM

Disk /dev/sdb: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xfb1f25cf

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1306    10490413+  8e  Linux LVM
/dev/sdb2            1307        2612    10490445   8e  Linux LVM
/dev/sdb3            2613        3918    10490445   8e  Linux LVM

Disk /dev/sdc: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
[root@ZhongH100 ~]# pvcreate /dev/sdc    # initialize the new disk /dev/sdc as a physical volume
  Physical volume "/dev/sdc" successfully created
[root@ZhongH100 ~]# pvs    # list the physical volumes
  PV         VG     Fmt  Attr PSize  PFree
  /dev/sdb1  VGtest lvm2 a--  10.00g  8.00g
  /dev/sdb2  VGtest lvm2 a--  10.00g 10.00g
  /dev/sdb3  VGtest lvm2 a--  10.00g 10.00g
  /dev/sdc          lvm2 ---  20.00g 20.00g
[root@ZhongH100 ~]# vgextend VGtest /dev/sdc    # extend the volume group
  Volume group "VGtest" successfully extended
[root@ZhongH100 ~]# vgs    # check the volume group
  VG     #PV #LV #SN Attr   VSize  VFree
  VGtest   4   1   0 wz--n- 50.00g 48.00g    # the new size shows the extension succeeded
6. Extend a logical volume (can be done online)
Extend /dev/VGtest/LVtest1 to 4 GB online, with the data staying accessible throughout.
[root@ZhongH100 ~]# cd /LVtest1/
[root@ZhongH100 LVtest1]# echo "this is a test for LVM" > lvtest    # create a file called lvtest and write something into it
[root@ZhongH100 LVtest1]# cat lvtest
this is a test for LVM
[root@ZhongH100 LVtest1]# lvs
  LV      VG     Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  LVtest1 VGtest -wi-ao---- 2.00g
[root@ZhongH100 LVtest1]# lvextend -L +2G /dev/VGtest/LVtest1
  Size of logical volume VGtest/LVtest1 changed from 2.00 GiB (512 extents) to 4.00 GiB (1024 extents).
  Logical volume LVtest1 successfully resized
[root@ZhongH100 LVtest1]# lvs
  LV      VG     Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  LVtest1 VGtest -wi-ao---- 4.00g    # the logical volume itself has grown
[root@ZhongH100 LVtest1]# e2fsck -f /dev/VGtest/LVtest1
[root@ZhongH100 LVtest1]# resize2fs -p /dev/VGtest/LVtest1    # use resize2fs to actually grow the filesystem into the new space
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/VGtest/LVtest1 is mounted on /LVtest1; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/VGtest/LVtest1 to 1048576 (4k) blocks.
The filesystem on /dev/VGtest/LVtest1 is now 1048576 blocks long.
[root@ZhongH100 LVtest1]# ls
lost+found  lvtest
[root@ZhongH100 LVtest1]# cat lvtest    # the file is intact
this is a test for LVM
[root@ZhongH100 LVtest1]# df -hP
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vgzhongH-root    30G  3.3G   25G  12% /
tmpfs                       932M     0  932M   0% /dev/shm
/dev/sda1                   477M   34M  418M   8% /boot
/dev/mapper/vgzhongH-data   4.8G   10M  4.6G   1% /data
/dev/mapper/VGtest-LVtest1  3.9G  4.0M  3.7G   1% /LVtest1    # the mounted filesystem has grown too
[root@ZhongH100 LVtest1]#
Here is another real-world run on a different host: an XFS-backed LV is given all of the VG's remaining free space with lvresize -r, which resizes the filesystem in the same step.

[root@DS-VM-Node69 ~]# pvcreate /dev/xvde
  Physical volume "/dev/xvde" successfully created.
[root@DS-VM-Node69 ~]# vgextend DTVG /dev/xvde
  Volume group "DTVG" successfully extended
[root@DS-VM-Node69 ~]# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  DTVG   4   3   0 wz--n- 624.48g 10.00g
[root@DS-VM-Node69 ~]# lvresize -l +100%FREE -r /dev/DTVG/data1
  Size of logical volume DTVG/data1 changed from 599.99 GiB (153598 extents) to 609.99 GiB (156157 extents).
  Logical volume DTVG/data1 successfully resized.
meta-data=/dev/mapper/DTVG-data1 isize=512    agcount=5, agsize=32767744 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=157284352, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=63999, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 157284352 to 159904768
[root@DS-VM-Node69 ~]# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  DTVG   4   3   0 wz--n- 624.48g    0
[root@DS-VM-Node69 ~]# lvs
  LV    VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data1 DTVG -wi-ao---- 609.99g
  root  DTVG -wi-ao----  12.50g
  swap  DTVG -wi-ao----   2.00g
[root@DS-VM-Node69 ~]# xfs_growfs /dev/DTVG/data1
meta-data=/dev/mapper/DTVG-data1 isize=512    agcount=5, agsize=32767744 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=159904768, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=63999, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@DS-VM-Node69 ~]# df -hP
Filesystem              Size  Used Avail Use% Mounted on
devtmpfs                3.9G     0  3.9G   0% /dev
tmpfs                   3.9G     0  3.9G   0% /dev/shm
tmpfs                   3.9G  401M  3.6G  11% /run
tmpfs                   3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/DTVG-root    13G  4.4G  8.2G  35% /
/dev/xvda1              497M  197M  301M  40% /boot
/dev/mapper/DTVG-data1  610G  230G  381G  38% /data
tmpfs                   799M     0  799M   0% /run/user/0
[root@DS-VM-Node69 ~]#
Note: the e2fsck/resize2fs steps only apply to ext2/3/4. If the filesystem is XFS (as in the DTVG example above), that method does not work; the filesystem must be grown with xfs_growfs instead, as the next transcript demonstrates. References: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/xfsgrow.html http://oss.sgi.com/archives/xfs/2001-05/msg03189.html
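In condensed form, and assuming the LV is already mounted, the difference between the two filesystem types looks roughly like this (a sketch rather than a copy of the transcripts; device names are reused from the surrounding examples):

# ext2/3/4: grow the LV, then grow the filesystem with resize2fs
lvextend -L +5G /dev/VGtest/LVtest1
resize2fs /dev/VGtest/LVtest1

# XFS: grow the LV, then grow the filesystem with xfs_growfs (run against the mounted filesystem)
lvextend -L +5G /dev/LBVG/data
xfs_growfs /data

# either type: let lvextend resize the filesystem in the same step
lvextend -L +5G -r /dev/LBVG/data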
[root@www ~]# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/xvda2 LBVG lvm2 a--   14.51g     0
  /dev/xvda3 LBVG lvm2 a--  135.00g 85.01g
[root@www ~]# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  LBVG   2   2   0 wz--n- 149.51g 85.01g
[root@www ~]# lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root LBVG -wi-ao---- 62.00g
  swap LBVG -wi-ao----  2.50g
[root@www ~]# lvcreate -L 10G -n data LBVG
  Logical volume "data" created.
[root@www ~]# lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data LBVG -wi-a----- 10.00g
  root LBVG -wi-ao---- 62.00g
  swap LBVG -wi-ao----  2.50g
[root@www ~]# mkfs.xfs /dev/LBVG/data
meta-data=/dev/LBVG/data         isize=512    agcount=4, agsize=655360 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@www ~]# lvextend -L +5G /dev/LBVG/data
  Size of logical volume LBVG/data changed from 10.00 GiB (2560 extents) to 15.00 GiB (3840 extents).
  Logical volume data successfully resized.
[root@www ~]# lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data LBVG -wi-a----- 15.00g
  root LBVG -wi-ao---- 62.00g
  swap LBVG -wi-ao----  2.50g
[root@www ~]# e2fsck -f /dev/LBVG/data    # e2fsck/resize2fs only understand ext2/3/4, so this fails on XFS
e2fsck 1.42.13 (17-May-2015)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/LBVG/data

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

[root@www ~]# mkdir /data
[root@www ~]# mount /dev/LBVG/data /data
[root@www ~]# df -hP | grep /data
/dev/mapper/LBVG-data   10G   33M   10G   1% /data
[root@www ~]# xfs_growfs /dev/LBVG/data    # XFS is grown with xfs_growfs while the filesystem is mounted
meta-data=/dev/mapper/LBVG-data  isize=512    agcount=4, agsize=655360 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1 spinodes=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 2621440 to 3932160
[root@www ~]# df -hP | grep /data
/dev/mapper/LBVG-data   15G   33M   15G   1% /data
[root@www ~]#
7. Shrink a logical volume
First check how much space the logical volume is actually using.
Shrinking cannot be done online; the filesystem must be unmounted first. Remember this.
Make sure the reduced size can still hold all of the existing data.
Run a forced filesystem check before shrinking so the filesystem is in a consistent state (a condensed sketch of the whole sequence follows this list).
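The order matters: the filesystem is shrunk before the LV, never the other way around. As a sketch of the sequence carried out in the transcript below:

umount /LVtest1                        # 1. unmount; ext4 cannot be shrunk online
e2fsck -f /dev/VGtest/LVtest1          # 2. force a consistency check
resize2fs /dev/VGtest/LVtest1 1G       # 3. shrink the filesystem first
lvreduce -L 1G /dev/VGtest/LVtest1     # 4. then shrink the LV to the same size
mount /dev/VGtest/LVtest1 /LVtest1     # 5. remount and verify the data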
[root@ZhongH100 ~]# umount /dev/VGtest/LVtest1    # unmount /dev/VGtest/LVtest1
[root@ZhongH100 ~]# e2fsck -f /dev/VGtest/LVtest1    # force a filesystem check
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/VGtest/LVtest1: 12/262144 files (0.0% non-contiguous), 33871/1048576 blocks
[root@ZhongH100 ~]# resize2fs /dev/VGtest/LVtest1 1G    # shrink the filesystem to 1 GB first
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/VGtest/LVtest1 to 262144 (4k) blocks.
The filesystem on /dev/VGtest/LVtest1 is now 262144 blocks long.
[root@ZhongH100 ~]# lvreduce -L 1G /dev/VGtest/LVtest1
  WARNING: Reducing active logical volume to 1.00 GiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce LVtest1? [y/n]: y    # enter y to confirm the reduction
  Size of logical volume VGtest/LVtest1 changed from 4.00 GiB (1024 extents) to 1.00 GiB (256 extents).
  Logical volume LVtest1 successfully resized
[root@ZhongH100 ~]# lvs    # check the logical volumes
  LV      VG     Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  LVtest1 VGtest -wi-a----- 1.00g
[root@ZhongH100 ~]# mount /dev/VGtest/LVtest1 /LVtest1/    # remount the LV /dev/VGtest/LVtest1
[root@ZhongH100 ~]# df -hP    # check the filesystems
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vgzhongH-root    30G  3.3G   25G  12% /
tmpfs                       932M     0  932M   0% /dev/shm
/dev/sda1                   477M   34M  418M   8% /boot
/dev/mapper/vgzhongH-data   4.8G   10M  4.6G   1% /data
/dev/mapper/VGtest-LVtest1  944M  2.6M  891M   1% /LVtest1    # the shrink succeeded
[root@ZhongH100 ~]# cat /LVtest1/lvtest    # verify the file survived the shrink
this is a test for LVM
[root@ZhongH100 ~]#
8. Shrink the volume group (remove a physical volume)
The physical disks turn out to be under-used, so we pull one disk/partition out of the volume group:
pvmove /dev/sdb1          # migrate the data stored on /dev/sdb1 onto the other physical volumes
vgreduce VGtest /dev/sdb1 # remove /dev/sdb1 from the VGtest volume group
pvremove /dev/sdb1        # wipe the PV label from /dev/sdb1
(a free-space check worth doing beforehand is sketched right after this list)
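Before pvmove it is worth confirming that the remaining physical volumes have enough free extents to absorb everything being moved off /dev/sdb1; a quick check (a sketch, using the same VG name as above):

pvs -o pv_name,vg_name,pv_size,pv_free     # free space on each PV
vgs -o vg_name,vg_size,vg_free VGtest      # total free space left in the VG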
[root@ZhongH100 ~]# pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/sdb1  VGtest lvm2 a--  10.00g  9.00g
  /dev/sdb2  VGtest lvm2 a--  10.00g 10.00g
  /dev/sdb3  VGtest lvm2 a--  10.00g 10.00g
  /dev/sdc   VGtest lvm2 a--  20.00g 20.00g
[root@ZhongH100 ~]# pvmove /dev/sdb1
  /dev/sdb1: Moved: 2.3%
  /dev/sdb1: Moved: 86.3%
  /dev/sdb1: Moved: 100.0%
[root@ZhongH100 ~]# vgreduce VGtest /dev/sdb1
  Removed "/dev/sdb1" from volume group "VGtest"
[root@ZhongH100 ~]# pvremove /dev/sdb1
  Labels on physical volume "/dev/sdb1" successfully wiped
[root@ZhongH100 ~]# pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/sdb2  VGtest lvm2 a--  10.00g  9.00g
  /dev/sdb3  VGtest lvm2 a--  10.00g 10.00g
  /dev/sdc   VGtest lvm2 a--  20.00g 20.00g
[root@ZhongH100 ~]#
9. Snapshots: back up and restore
We take a snapshot of the logical volume behind /LVtest1, wipe the contents of /LVtest1, and then restore them. The restore in the transcript uses the archive /tmp/sandy.tar.bz2; a plausible way that archive was produced from the snapshot is sketched next.
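The transcript below does not show /tmp/sandy.tar.bz2 being created; presumably it was packed from the read-only snapshot mount beforehand, roughly like this (a hypothetical sketch, not part of the original session):

cd /tmp/backup                    # the read-only snapshot mount point used below
tar cjf /tmp/sandy.tar.bz2 .      # archive the snapshot's contents for the restore step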
[root@ZhongH100 ~]# cat /LVtest1/lvtest
this is a test for LVM
[root@ZhongH100 ~]# lvcreate -L 30M -n backup -s -p r /dev/VGtest/LVtest1    # create a 30 MB read-only snapshot named backup
  Rounding up size to full physical extent 32.00 MiB
  Logical volume "backup" created
[root@ZhongH100 ~]# mkdir /tmp/backup/
[root@ZhongH100 ~]# mount /dev/VGtest/backup /tmp/backup/
mount: block device /dev/mapper/VGtest-backup is write-protected, mounting read-only
[root@ZhongH100 ~]# cat /tmp/backup/lvtest    # the snapshot holds the original data
this is a test for LVM
[root@ZhongH100 ~]# rm -rf /LVtest1/*
You are going to execute "/bin/rm -rf /LVtest1/lost+found /LVtest1/lvtest", please confirm (yes or no): yes
[root@ZhongH100 ~]# cd /LVtest1/
[root@ZhongH100 LVtest1]# ls -l
total 0
[root@ZhongH100 LVtest1]# tar xf /tmp/sandy.tar.bz2    # restore from the backup archive
[root@ZhongH100 LVtest1]# ls -l
total 8
drwx------ 2 root root 4096 May 21 23:33 lost+found
-rw-r--r-- 1 root root   23 May 21 23:53 lvtest
[root@ZhongH100 LVtest1]# cat lvtest
this is a test for LVM
[root@ZhongH100 LVtest1]# df -hP
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vgzhongH-root    30G  3.3G   25G  12% /
tmpfs                       932M     0  932M   0% /dev/shm
/dev/sda1                   477M   34M  418M   8% /boot
/dev/mapper/vgzhongH-data   4.8G   10M  4.6G   1% /data
/dev/mapper/VGtest-LVtest1  944M  2.5M  891M   1% /LVtest1
/dev/mapper/VGtest-backup   944M  2.6M  891M   1% /tmp/backup
[root@ZhongH100 LVtest1]#