RAID 5 (software RAID): creating an array, simulating disk failure, removing and adding a new disk


 

1. Check the disk partitions. RAID 5 requires at least 3 disks (here, three partitions on /dev/sdb stand in for separate disks)

[root@bogon ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x71dbe158

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048      206847      102400   83  Linux
/dev/sdb2          206848      411647      102400   83  Linux
/dev/sdb3          411648      616447      102400   83  Linux
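RAID 5 survives the loss of any single member because each stripe stores a parity chunk that is the XOR of the data chunks; a lost chunk is rebuilt by XOR-ing the survivors with the parity. A minimal sketch of that arithmetic, using arbitrary byte values as stand-ins for whole chunks:

```shell
#!/bin/sh
# RAID5 parity sketch: parity = XOR of the data chunks in one stripe.
# The byte values here are made up for illustration.
d1=$(( 0x5A ))
d2=$(( 0x3C ))
d3=$(( 0x0F ))
p=$(( d1 ^ d2 ^ d3 ))       # parity chunk written alongside the data
lost=$(( d1 ^ d3 ^ p ))     # "disk 2" fails: rebuild d2 from survivors + parity
printf 'parity=0x%02X recovered=0x%02X\n' "$p" "$lost"
# -> parity=0x69 recovered=0x3C  (matches the lost d2)
```

This is why a 3-member RAID 5 yields the capacity of 2 members, and why the array below keeps working in degraded mode after one member is failed.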

2. Create the RAID 5 array with the mdadm command

[root@bogon ~]# mdadm -C /dev/md5 -n3 -l5 /dev/sdb{1,2,3}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.

3. Check the RAID 5 array details; we can see that the array has been created

[root@bogon ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Wed Jun 12 22:38:30 2019
        Raid Level : raid5
        Array Size : 200704 (196.00 MiB 205.52 MB)
     Used Dev Size : 100352 (98.00 MiB 102.76 MB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Wed Jun 12 22:38:32 2019
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : bogon:5  (local to host bogon)
              UUID : 4b0810bc:460a99c0:9d06b842:8ebfcad9
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2
       3       8       19        2      active sync   /dev/sdb3

4. Create a filesystem on the array (I chose XFS here) and mount it

[root@bogon ~]# mkfs.xfs /dev/md5
meta-data=/dev/md5               isize=512    agcount=8, agsize=6272 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=50176, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=624, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Mount the array (create the mount point first if it does not exist):

[root@bogon ~]# mkdir -p /mnt/disk1
[root@bogon ~]# mount /dev/md5 /mnt/disk1/

Verify that the mount succeeded

[root@bogon ~]# df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        17G  4.5G   13G  27% /
devtmpfs                devtmpfs  470M     0  470M   0% /dev
tmpfs                   tmpfs     487M     0  487M   0% /dev/shm
tmpfs                   tmpfs     487M  8.6M  478M   2% /run
tmpfs                   tmpfs     487M     0  487M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  166M  849M  17% /boot
tmpfs                   tmpfs      98M  8.0K   98M   1% /run/user/42
tmpfs                   tmpfs      98M   28K   98M   1% /run/user/0
/dev/sr0                iso9660   4.3G  4.3G     0 100% /run/media/root/CentOS 7 x86_64
/dev/md5                xfs       194M   11M  184M   6% /mnt/disk1
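Note that neither the array name nor the mount above is guaranteed to survive a reboot on its own. A common follow-up (a sketch, assuming the device and mount point used in this experiment) is to record the array in /etc/mdadm.conf and the mount in /etc/fstab:

```shell
# Record the array so it is assembled with a stable name at boot (requires root):
mdadm --detail --scan >> /etc/mdadm.conf
# Persist the mount across reboots:
echo '/dev/md5  /mnt/disk1  xfs  defaults  0 0' >> /etc/fstab
# Sanity-check the new fstab entry without rebooting:
mount -a
```

Without the mdadm.conf entry, the kernel may auto-assemble the array under a different name such as /dev/md127, which would break the fstab line.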

5. Simulate a disk failure

[root@bogon ~]# mdadm -f /dev/md5 /dev/sdb3
mdadm: set /dev/sdb3 faulty in /dev/md5

Check the RAID 5 details again

[root@bogon ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Wed Jun 12 22:38:30 2019
        Raid Level : raid5
        Array Size : 200704 (196.00 MiB 205.52 MB)
     Used Dev Size : 100352 (98.00 MiB 102.76 MB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Wed Jun 12 22:51:07 2019
             State : clean, degraded
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : bogon:5  (local to host bogon)
              UUID : 4b0810bc:460a99c0:9d06b842:8ebfcad9
            Events : 20

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2
       -       0        0        2      removed

       3       8       19        -      faulty   /dev/sdb3

Remove the failed disk; we can see that sdb3 has been removed from the RAID 5 array

[root@bogon ~]# mdadm -r /dev/md5 /dev/sdb3
mdadm: hot removed /dev/sdb3 from /dev/md5

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2
       -       0        0        2      removed

6. Add a new disk and verify

[root@bogon ~]# mdadm -a /dev/md5 /dev/sdb3
mdadm: added /dev/sdb3

The sdb3 we added now appears in the RAID 5 array

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2
       3       8       19        2      active sync   /dev/sdb3
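After a device is re-added, mdadm rebuilds it in the background before it shows "active sync". On an array this small the rebuild finishes almost instantly, but on real disks it can take hours; a couple of ways to watch it (assuming the array name used above):

```shell
# Kernel-level md status, including a rebuild progress bar while recovery runs:
cat /proc/mdstat
# Or poll the array details for the rebuild percentage:
mdadm -D /dev/md5 | grep -i -E 'state|rebuild'
```

Until the rebuild completes, the array is still degraded and cannot tolerate another member failure.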

