LVM... the way I understand it, it's a thing that rolls a bunch of partitions and disks into one, like making a big pancake... and then slicing it up.
I built a new VM, attached a 5G disk, split it into 5 partitions, and hooked them up to the VM. Those steps are simple:
fdisk, mkdir, mount... I won't go into detail. 鳥哥 doesn't elaborate, so neither will I. On to the main point.
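(If you want the gist of the prep anyway, here is a rough sketch; the interactive keystrokes and the partprobe step are my own reconstruction, not from the book:)

fdisk /dev/sdb        # n to create the five ~1G partitions, t to tag them 8e (Linux LVM), w to write
partprobe /dev/sdb    # re-read the partition table without rebooting
lsblk /dev/sdb        # confirm sdb1 through sdb5 exist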
This is 鳥哥's official explanation (screenshot). See, isn't it exactly what I said: spread out a pancake, then cut it up? Anyone who has ever bought a pancake should get it...
LVM Concepts
OK, concepts done. 鳥哥 then explains how dynamic allocation is implemented; screenshot again.
Did you work out how all these pieces relate? If not, no problem, I'll make the pancake for you:
First, put the disks into a format LVM can recognize: that is the PV (Physical Volume).
Then, use a VG (Volume Group) to join those PVs into one big pancake.
Finally, slice the pancake: that is the LV (Logical Volume). And what is the LV's most basic building block? The PE (Physical Extent). The PE is the smallest unit a slice can be cut into.
Having seen my pancake version, look back at the figure above; it should make more sense now.
In other words, to expand an LV you can only draw on the part of the VG pancake that hasn't been sliced off yet. Anyone with a little partitioning background should follow this. So when you run out of space: create a PV first, add it into the VG, then cut more pancake, roughly like the sketch below.
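To make that concrete, a minimal sketch of the grow workflow (the new device /dev/sdc1 is hypothetical; the VG/LV names match the ones we create later in this post):

pvcreate /dev/sdc1                   # turn the new partition into a PV
vgextend lsqvg /dev/sdc1             # fold the PV into the existing VG
lvresize -L +1G /dev/lsqvg/lsqlv     # carve another slice for the LV
xfs_growfs /srv/lvm                  # let the XFS filesystem grow into it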
For writing to LVM disks, 鳥哥 says there are two modes:
Linear mode: fill up one disk, then move on to the next.
Striped (interleaved) mode: a file is split into two parts and written to two disks alternately. Note: I didn't understand at first why this mode was designed; as far as I can tell it is for performance, like RAID 0, parallel I/O across both disks, at the price that losing either disk loses everything. See the sketch after this note.
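For reference, striping is requested when the LV is created. A sketch, assuming the VG has at least two PVs (the name stripelv is made up):

lvcreate -i 2 -I 64 -L 1G -n stripelv lsqvg   # -i: number of stripes (PVs), -I: stripe size in KiB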
[root@localhost ~]# pvcreate /dev/sdb{1,2,3,4,5}
  Device /dev/sdb1 excluded by a filter.
  Physical volume "/dev/sdb2" successfully created.
  Physical volume "/dev/sdb3" successfully created.
  Physical volume "/dev/sdb4" successfully created.
  Physical volume "/dev/sdb5" successfully created.
[root@localhost ~]# pvdisplay /dev/sdb3
  WARNING: Device for PV rGAe2U-E01D-o80Z-GG2n-Q3Gt-JnxH-QpdECV not found or rejected by a filter.
  "/dev/sdb3" is a new physical volume of "954.00 MiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb3
  VG Name
  PV Size               954.00 MiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               tiph15-4Jg5-4Yf0-Rdtn-Jj0o-Vmgj-LBpfhu
鳥哥's explanation is very thorough; screenshot below.
[root@localhost ~]# vgcreate -s 16M lsqvg /dev/sdb{1,2,3}
  WARNING: Device for PV rGAe2U-E01D-o80Z-GG2n-Q3Gt-JnxH-QpdECV not found or rejected by a filter.
  WARNING: Device for PV rGAe2U-E01D-o80Z-GG2n-Q3Gt-JnxH-QpdECV not found or rejected by a filter.
  Volume group "lsqvg" successfully created
[root@localhost ~]# vgscan
  Reading volume groups from cache.
  Found volume group "lsqvg" using metadata type lvm2
  Found volume group "centos" using metadata type lvm2
[root@localhost ~]# pvscan
  WARNING: Device for PV rGAe2U-E01D-o80Z-GG2n-Q3Gt-JnxH-QpdECV not found or rejected by a filter.
  PV /dev/sdb1   VG lsqvg    lvm2 [944.00 MiB / 944.00 MiB free]
  PV /dev/sdb2   VG lsqvg    lvm2 [944.00 MiB / 944.00 MiB free]
  PV /dev/sdb3   VG lsqvg    lvm2 [944.00 MiB / 944.00 MiB free]
  PV /dev/sda2   VG centos   lvm2 [19.70 GiB / 8.00 MiB free]
  PV /dev/sdb4               lvm2 [954.00 MiB]
  Total: 5 [23.40 GiB] / in use: 4 [<22.47 GiB] / in no VG: 1 [954.00 MiB]
[root@localhost ~]# vgdisplay lsqvg
  --- Volume group ---
  VG Name               lsqvg
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               <2.77 GiB
  PE Size               16.00 MiB
  Total PE              177
  Alloc PE / Size       0 / 0
  Free  PE / Size       177 / <2.77 GiB
  VG UUID               yLBdez-VeIK-2xjQ-bMeT-8IBH-ENt3-Mu9NwE
[root@localhost ~]# vgextend lsqvg /dev/sdb4
  WARNING: Device for PV rGAe2U-E01D-o80Z-GG2n-Q3Gt-JnxH-QpdECV not found or rejected by a filter.
  Volume group "lsqvg" successfully extended
[root@localhost ~]# vgdisplay lsqvg
  --- Volume group ---
  VG Name               lsqvg
  System ID
  Format                lvm2
  Metadata Areas        4
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                4
  Act PV                4
  VG Size               <3.69 GiB
  PE Size               16.00 MiB
  Total PE              236
  Alloc PE / Size       0 / 0
  Free  PE / Size       236 / <3.69 GiB
  VG UUID               yLBdez-VeIK-2xjQ-bMeT-8IBH-ENt3-Mu9NwE
From the vgextend part of the transcript above you can see how to grow a VG. Also note the VG name is ours to choose (lsqvg here), and the -s 16M on the first command line is where we set the PE size. That's basically all there is to a VG. Not hard.
We have PEs, and we have the VG pancake. All that's left is slicing the VG up, and as the steps above show, that is done with an LV.
Let's follow 鳥哥 and try it:
[root@localhost ~]# lvcreate -L 2G -n lsqlv lsqvg
WARNING: LVM2_member signature detected on /dev/lsqvg/lsqlv at offset 536. Wipe it? [y/n]: y
  Wiping LVM2_member signature on /dev/lsqvg/lsqlv.
  Logical volume "lsqlv" created.
[root@localhost ~]# lvscan
  ACTIVE            '/dev/lsqvg/lsqlv' [2.00 GiB] inherit
  ACTIVE            '/dev/centos/var' [1.86 GiB] inherit
  ACTIVE            '/dev/centos/swap' [192.00 MiB] inherit
  ACTIVE            '/dev/centos/root' [9.31 GiB] inherit
  ACTIVE            '/dev/centos/home' [8.33 GiB] inherit
[root@localhost ~]# lvdisplay /dev/lsqlv
  Volume group "lsqlv" not found
  Cannot process volume group lsqlv
[root@localhost ~]# lvdisplay /dev/lsqvg/lsqlv
  --- Logical volume ---
  LV Path                /dev/lsqvg/lsqlv
  LV Name                lsqlv
  VG Name                lsqvg
  LV UUID                I36ZBB-abG3-YZVt-h61R-qAya-ohn0-5GmH2n
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2019-08-20 11:00:50 +0800
  LV Status              available
  # open                 0
  LV Size                2.00 GiB
  Current LE             128
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:4
Following 鳥哥, we created a 2G LV, named after myself, naturally.
Then take a look: our LV shows up, along with the other LVs, the ones the system installer set up.
Then display just our own LV to see its size, name, and other details. (Note the first lvdisplay failed: the path has to include the VG, /dev/lsqvg/lsqlv, not /dev/lsqlv.)
OK. With the LV created, what's left is formatting, mounting, and so on. Let's keep following 鳥哥.
[root@localhost ~]# mkdir /srv/lvm
[root@localhost ~]# mount /dev/lsqvg/lsqlv /srv/lvm
mount: /dev/mapper/lsqvg-lsqlv is write-protected, mounting read-only
mount: unknown filesystem type '(null)'
[root@localhost ~]# ^C
[root@localhost ~]# unmount /srv/lvm
bash: unmount: command not found...
[root@localhost ~]# umount /srv/lvm
umount: /srv/lvm: not mounted
[root@localhost ~]# mkfs.xfs /dev/lsqvg/lsqlv
meta-data=/dev/lsqvg/lsqlv       isize=512    agcount=4, agsize=131072 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]# mount /dev/lsqvg/lsqlv /dev/lvm
mount: mount point /dev/lvm does not exist
[root@localhost ~]# mount /dev/lsqvg/lsqlv /srv/lvm
[root@localhost ~]# df -Th /srv/lvm
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/lsqvg-lsqlv xfs   2.0G   33M  2.0G   2% /srv/lvm
First create a mount point, /srv/lvm.
Then format the LV with mkfs.xfs /dev/lsqvg/lsqlv; the output above shows the details of the new filesystem. (My first mount attempt failed precisely because the LV had no filesystem on it yet; mkfs has to come before mount.)
Then mount it with mount and check with df -Th. The listing looks right, so the mount succeeded.
[root@localhost ~]# vgdisplay lsqvg
  --- Volume group ---
  VG Name               lsqvg
  System ID
  Format                lvm2
  Metadata Areas        4
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                4
  Act PV                4
  VG Size               <3.69 GiB
  PE Size               16.00 MiB
  Total PE              236
  Alloc PE / Size       128 / 2.00 GiB
  Free  PE / Size       108 / <1.69 GiB    # here you can see there is plenty of free capacity left to grow the LV
  VG UUID               yLBdez-VeIK-2xjQ-bMeT-8IBH-ENt3-Mu9NwE
# add another 500M to the LV
[root@localhost ~]# lvresize -L +500M /dev/lsqvg/lsqlv
  Rounding size to boundary between physical extents: 512.00 MiB.
  Size of logical volume lsqvg/lsqlv changed from 2.00 GiB (128 extents) to 2.50 GiB (160 extents).
  Logical volume lsqvg/lsqlv successfully resized.
[root@localhost ~]# lvscan
  ACTIVE            '/dev/lsqvg/lsqlv' [2.50 GiB] inherit    # here you can see it really did grow to 2.5G
  ACTIVE            '/dev/centos/var' [1.86 GiB] inherit
  ACTIVE            '/dev/centos/swap' [192.00 MiB] inherit
  ACTIVE            '/dev/centos/root' [9.31 GiB] inherit
  ACTIVE            '/dev/centos/home' [8.33 GiB] inherit
[root@localhost ~]# df -Th /srv/lvm
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/lsqvg-lsqlv xfs   2.0G   87M  2.0G   5% /srv/lvm
# but at the mount point it is still 2.0G; there's the difference: the LV grew, the filesystem hasn't yet
[root@localhost ~]# ls -l /srv/lvm
total 16
drwxr-xr-x. 161 root root 8192 Aug 20 08:17 etc
drwxr-xr-x.  23 root root 4096 Aug 20 09:25 log
[root@localhost ~]# xfs_info /srv/lvm
meta-data=/dev/mapper/lsqvg-lsqlv isize=512   agcount=4, agsize=131072 blks   # agcount=4
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=524288, imaxpct=25       # blocks=524288
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]# xfs_growfs /srv/lvm    # grow the filesystem into the 500M we just added to the LV
meta-data=/dev/mapper/lsqvg-lsqlv isize=512   agcount=4, agsize=131072 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 524288 to 655360
[root@localhost ~]# xfs_info /srv/lvm
meta-data=/dev/mapper/lsqvg-lsqlv isize=512   agcount=5, agsize=131072 blks   # agcount is now 5
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=655360, imaxpct=25       # blocks changed too
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]# df -Th /srv/lvm
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/lsqvg-lsqlv xfs   2.5G   87M  2.5G   4% /srv/lvm    # size is now 2.5G
[root@localhost ~]# ls -l /srv/lvm    # and the files are still there
total 16
drwxr-xr-x. 161 root root 8192 Aug 20 08:17 etc
drwxr-xr-x.  23 root root 4096 Aug 20 09:25 log
Note: an XFS filesystem can only be grown, never shrunk. EXT4, on the other hand, can be both grown and shrunk, as the sketch below shows.
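For contrast, a minimal sketch of shrinking ext4 (the LV name extlv and the 1G target are hypothetical; ext4 can only be shrunk offline, and the filesystem must be shrunk before the LV, or data is lost):

umount /srv/ext                      # shrinking requires the filesystem to be unmounted
e2fsck -f /dev/lsqvg/extlv           # resize2fs insists on a forced check first
resize2fs /dev/lsqvg/extlv 1G        # shrink the filesystem to 1G first
lvresize -L 1G /dev/lsqvg/extlv      # only then shrink the LV down to match
mount /dev/lsqvg/extlv /srv/ext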
LVM dynamic allocation (thin provisioning)
Plainly put: you first set up a capacity pool, then allocate on actual use, drawing from the pool as data is written, until the pool runs dry.
[root@localhost ~]# lvcreate -L 1G -T lsqvg/lsqpool
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Logical volume "lsqpool" created.
[root@localhost ~]# lvdisplay /dev/lsqvg/lsqpool
  --- Logical volume ---
  LV Name                lsqpool
  VG Name                lsqvg
  LV UUID                QbEb2i-Eumf-cVfK-yol4-1Qlm-vlDO-leMa6e
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2019-08-21 17:01:01 +0800
  LV Pool metadata       lsqpool_tmeta
  LV Pool data           lsqpool_tdata
  LV Status              available
  # open                 0
  LV Size                1.00 GiB
  Allocated pool data    0.00%
  Allocated metadata     10.23%
  Current LE             64
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:7
[root@localhost ~]# lvs lsqvg
  LV      VG    Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lsqlv   lsqvg -wi-ao---- 2.50g
  lsqpool lsqvg twi-a-tz-- 1.00g             0.00   10.23
[root@localhost ~]# lvcreate -V 10G -T lsqvg/lsqpool -n lsqthin1
  WARNING: Sum of all thin volume sizes (10.00 GiB) exceeds the size of thin pool lsqvg/lsqpool and the size of whole volume group (<3.69 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "lsqthin1" created.
[root@localhost ~]# lvs lsqvg
  LV       VG    Attr       LSize  Pool    Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lsqlv    lsqvg -wi-ao----  2.50g
  lsqpool  lsqvg twi-aotz--  1.00g                0.00   10.25
  lsqthin1 lsqvg Vwi-a-tz-- 10.00g lsqpool        0.00
[root@localhost ~]# mkfs.xfs /dev/lsqvg/lsqthin1
meta-data=/dev/lsqvg/lsqthin1    isize=512    agcount=16, agsize=163840 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]# mkdir /srv/thin
[root@localhost ~]# moutn /dev/lsqvg/lsqthin1 /srv/thin
bash: moutn: command not found...
Similar command is: 'mount'
[root@localhost ~]# mount /dev/lsqvg/lsqthin1 /srv/thin
[root@localhost ~]# df -Th /srv/thin
Filesystem                 Type  Size  Used Avail Use% Mounted on
/dev/mapper/lsqvg-lsqthin1 xfs    10G   33M   10G   1% /srv/thin
[root@localhost ~]# dd if=/dev/zero of=/srv/thin/test.img bs=1M count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 0.843028 s, 622 MB/s
[root@localhost ~]# lvs lsqvg
  LV       VG    Attr       LSize  Pool    Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lsqlv    lsqvg -wi-ao----  2.50g
  lsqpool  lsqvg twi-aotz--  1.00g                49.92  11.82
  lsqthin1 lsqvg Vwi-aotz-- 10.00g lsqpool        4.99
Didn't follow that? Honestly, neither did I at first, so I went back and reread 鳥哥's introduction.
lsqthin1 has a virtual size of 10G, but whether all 10G is usable is decided by lsqpool. In our run lsqpool holds only 1G, which directly means lsqthin1 can really only store 1G; writing past that corrupts data. At creation time, though, you can declare whatever virtual size you like, 100G if you want, while only 1G is actually there to use. (See the autoextend sketch below for the safety net those warnings mention.)
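The warnings in the transcript above point at a protection I did not turn on: LVM can auto-grow a thin pool before it fills, as long as the VG still has room. A sketch of the setting in /etc/lvm/lvm.conf (the threshold and percent values below are just example numbers):

activation {
    thin_pool_autoextend_threshold = 70    # when the pool hits 70% full...
    thin_pool_autoextend_percent = 20      # ...grow it by 20% of its current size
}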
The LVM snapshot feature
Not sure whether the part above clicked for you. It's actually simple: among the LV's blocks, the snapshot only stores the blocks that have been changed; unchanged blocks stay in the shared area. And you can restore at any time: fetch the backed-up blocks from the snapshot and swap them back in, done. A genuinely clever design; one restore route is sketched below.
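On the restore part, one route is lvconvert --merge, which rolls the origin LV back to the snapshot's state. A sketch only (this is not what this post demonstrates, and note the merge consumes the snapshot):

umount /srv/lvm                        # the merge is deferred while the origin is in use, so unmount first
lvconvert --merge /dev/lsqvg/lsqnap1   # roll lsqlv back to the state captured in lsqnap1
mount /dev/lsqvg/lsqlv /srv/lvm        # remount; the snapshot LV no longer exists after the merge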
[root@localhost ~]# vgdisplay lsqvg
  --- Volume group ---
  VG Name               lsqvg
  System ID
  Format                lvm2
  Metadata Areas        4
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                4
  Act PV                4
  VG Size               <3.69 GiB
  PE Size               16.00 MiB
  Total PE              236
  Alloc PE / Size       226 / 3.53 GiB
  Free  PE / Size       10 / 160.00 MiB    # this is the key point: Free PE. A snapshot can only be allocated from the PEs still free in the VG, no more than that, or the command fails. So before creating a snapshot, check how much space the VG has left and size the snapshot accordingly
  VG UUID               yLBdez-VeIK-2xjQ-bMeT-8IBH-ENt3-Mu9NwE
[root@localhost ~]# lvcreate -s -l 10 -n lsqnap1 /dev/lsqvg/lsqlv    # pick the LV to snapshot: here we create a snapshot of lsqlv named lsqnap1
  WARNING: Sum of all thin volume sizes (10.00 GiB) exceeds the size of thin pools and the size of whole volume group (<3.69 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "lsqnap1" created.
[root@localhost ~]# lvdisplay /dev/lsqvg/lsqlv    # oops, I displayed the wrong thing here: this shows the LV, when the goal was to look at the snapshot's info
  --- Logical volume ---
  LV Path                /dev/lsqvg/lsqlv
  LV Name                lsqlv
  VG Name                lsqvg
  LV UUID                I36ZBB-abG3-YZVt-h61R-qAya-ohn0-5GmH2n
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2019-08-20 11:00:50 +0800
  LV snapshot status     source of lsqnap1 [active]
  LV Status              available
  # open                 1
  LV Size                2.50 GiB
  Current LE             160
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
[root@localhost ~]# lvdisplay /dev/lsqvg/lsqnap1    # now display the snapshot's info
  --- Logical volume ---
  LV Path                /dev/lsqvg/lsqnap1
  LV Name                lsqnap1
  VG Name                lsqvg
  LV UUID                Nr0dNA-cPS1-NSww-i3BJ-Jx9l-mKz6-4znKzd
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2019-08-22 14:45:54 +0800
  LV snapshot status     active destination for lsqlv
  LV Status              available
  # open                 0
  LV Size                2.50 GiB         # size of the original LV
  Current LE             160
  COW-table size         160.00 MiB       # size of the snapshot
  COW-table LE           10
  Allocated to snapshot  0.01%            # capacity used so far
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:12
OK, the snapshot now exists. Let's mount it and compare it against the LV. Per 鳥哥, since the LV hasn't been modified at all, the snapshot should be identical to it. Let's check.
[root@localhost ~]# mkdir /srv/snapshot1
[root@localhost ~]# mount -o /dev/lsqvg/lsqnap1 /srv/snapshot1    # here I mounted the way I always do and got an error; back to the book: -o needs nouuid after it (with a bare -o, mount swallows the device path as its option string, hence the odd fstab message)
mount: can't find /srv/snapshot1 in /etc/fstab
[root@localhost ~]# cd /srv
[root@localhost srv]# ls
lvm  snapshot1  thin
[root@localhost srv]# mount -o /dev/lsqvg/lsqnap1 /srv/snapshot1
mount: can't find /srv/snapshot1 in /etc/fstab
[root@localhost srv]# mount -o nouuid /dev/lsqvg/lsqnap1 /srv/snapshot1    # there it is: nouuid. 鳥哥 explains this too: XFS refuses to mount two filesystems with the same UUID, and a snapshot carries the same UUID as its LV, so we pass nouuid to skip the check
[root@localhost srv]# df -Th /srv/lvm /srv/snap0shot1
df: '/srv/snap0shot1': No such file or directory
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/lsqvg-lsqlv xfs   2.5G   87M  2.5G   4% /srv/lvm
[root@localhost srv]# df -Th /srv/lvm /srv/snapshot1    # df both: identical, the snapshot matches the LV. And that's where 鳥哥 stops, so let me test further: what happens if I modify the LV?
Filesystem                Type  Size  Used Avail Use% Mounted on
/dev/mapper/lsqvg-lsqlv   xfs   2.5G   87M  2.5G   4% /srv/lvm
/dev/mapper/lsqvg-lsqnap1 xfs   2.5G   87M  2.5G   4% /srv/snapshot1
Here I added a comment to netconfig under /etc. Now let's look at the result.
[root@localhost etc]# lvdisplay /dev/lsqvg/lsqnap1
  --- Logical volume ---
  LV Path                /dev/lsqvg/lsqnap1
  LV Name                lsqnap1
  VG Name                lsqvg
  LV UUID                Nr0dNA-cPS1-NSww-i3BJ-Jx9l-mKz6-4znKzd
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2019-08-22 14:45:54 +0800
  LV snapshot status     active destination for lsqlv
  LV Status              available
  # open                 1
  LV Size                2.50 GiB
  Current LE             160
  COW-table size         160.00 MiB
  COW-table LE           10
  Allocated to snapshot  1.32%    # see that? 1.32% is now used, up from the 0.01% in the earlier listing. Hard to notice? Let's change something bigger: this time create a directory
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:12
[root@localhost lvm]# lvdisplay /dev/lsqvg/lsqnap1
  --- Logical volume ---
  LV Path                /dev/lsqvg/lsqnap1
  LV Name                lsqnap1
  VG Name                lsqvg
  LV UUID                Nr0dNA-cPS1-NSww-i3BJ-Jx9l-mKz6-4znKzd
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2019-08-22 14:45:54 +0800
  LV snapshot status     active destination for lsqlv
  LV Status              available
  # open                 1
  LV Size                2.50 GiB
  Current LE             160
  COW-table size         160.00 MiB
  COW-table LE           10
  Allocated to snapshot  1.48%    # see? it's 1.48% now. One caveat: the number doesn't move the instant you make a change; it only updates once the write-back finishes, so I suspect a sudden power cut at that moment would cause trouble. On my VM it took roughly ten-odd seconds
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:12
OK, that wraps up the chapter... now for the hands-on exercise.
Exercise
Build a RAID 5 array and create an LV on top of it.
First, fdisk 4 partitions. I attached a 20G disk and made 4 partitions on it. Simple stuff, the usual fdisk routine, skipping it.
Then build the RAID 5 array:
[root@localhost ~]# mdadm --create /dev/md0 --auto=yes --level=5 --raid-devices=3 /dev/sdb{1,2,3}
[root@localhost ~]# mdadm --detail /dev/md0 | grep -i uuid    # look up the array's UUID
Here's a snag: I can get the UUID fine, but when I went looking for /etc/mdadm.conf the file wasn't there, and find turned up nothing either. Pretty sad. I searched the web and didn't find a clear answer either... yet it doesn't seem to affect anything; see the note below.
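As far as I can tell (my own assumption, worth verifying, not from 鳥哥), CentOS simply doesn't ship that file; the array metadata lives on the member disks themselves, which would explain why everything still works. If you want the file, you generate it yourself:

mdadm --detail --scan >> /etc/mdadm.conf    # record the array definition for assembly at boot
cat /etc/mdadm.conf                         # should now show ARRAY /dev/md0 ... UUID=...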
Then it's the usual routine of adding the pv, vg, and lv:
[root@localhost ~]# pvcreate /dev/md0
[root@localhost ~]# vgcreate raidvg /dev/md0
[root@localhost ~]# lvcreate -L 1.5G -n raidlv raidvg    # funny moment here: I ran lvscan before the LV even existed... amusing
[root@localhost ~]# lvscan
ACTIVE '/dev/raidvg/raidlv' [1.50 GiB] inherit    # this is the LV we just created
ACTIVE '/dev/centos/var' [1.86 GiB] inherit
ACTIVE '/dev/centos/swap' [192.00 MiB] inherit
ACTIVE '/dev/centos/root' [9.31 GiB] inherit
ACTIVE '/dev/centos/home' [8.33 GiB] inherit
Then format it and set up fstab for mounting at boot: look up the LV's UUID with blkid, edit fstab, create the mount directory, run mount -a, and df to check the result.
[root@localhost ~]# mkfs.xfs /dev/raidvg/raidlv
[root@localhost ~]# blkid /dev/raidvg/raidlv
/dev/raidvg/raidlv: UUID="4008bfa1-6b21-458e-ad12-49b77b6739f7" TYPE="xfs"
[root@localhost ~]# vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon Aug 19 10:59:08 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=ba80a371-e434-431e-9508-df4b2827efad /boot xfs defaults 0 0
/dev/mapper/centos-home /home xfs defaults 0 0
/dev/mapper/centos-var /var xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
UUID="4008bfa1-6b21-458e-ad12-49b77b6739f7" /srv/raidlvm xfs defaults 0 0 這個UUID就是我們前面通過blkid查找出來的uuid。這里一定要注意一下
[root@localhost ~]# mkdir /srv/raidlvm
[root@localhost ~]# mount -a
[root@localhost ~]# df -Th /srv/raidlvm
Filesystem                Type  Size  Used Avail Use% Mounted on
/dev/mapper/raidvg-raidlv xfs 1.5G 33M 1.5G 3% /srv/raidlvm
OK, all done...
I still haven't confirmed whether the missing conf file matters to the RAID 5 array or not (the --detail --scan note in the exercise above is my best lead)... keeping this open to dig into later.
Finally, a piece by 徐秉義 that 鳥哥 recommends. It's long, hosted on Baidu Wenku; worth reading when there's time:
https://wenku.baidu.com/view/3ba28e21dd36a32d7375811b.html. End of this chapter!