准備一台虛擬機,系統安裝完成后,關機,添加4塊40G虛擬硬盤
Creating the array calls for several mdadm parameters. The -C parameter creates a new RAID array; -v prints the creation process verbosely, and the device name /dev/md0 appended after it becomes the name of the resulting RAID array; -a yes automatically creates the device file; -n 4 builds the array from 4 disks; and -l 10 selects the RAID 10 level. Finish by appending the names of the 4 disk devices:

[root@localhost ~]# mdadm -Cv /dev/md0 -a yes -n 4 -l 10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
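mdadm returns before the initial mirror synchronization finishes and keeps syncing in the background. A quick way to watch the progress (a standard check, not part of the transcript above) is to read /proc/mdstat:

[root@localhost ~]# cat /proc/mdstat    # a progress bar and percentage appear while the array resyncs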
Format the newly created RAID array with the ext4 filesystem:
[root@localhost ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
5242880 inodes, 20954624 blocks
1047731 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2168455168
640 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
	2654208, 4096000, 7962624, 11239424, 20480000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@localhost ~]#
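Before mounting, blkid can confirm that the filesystem was actually written and report its UUID; this check is not in the original transcript, but the UUID is useful later for an fstab entry that does not depend on the device name:

[root@localhost ~]# blkid /dev/md0    # prints TYPE="ext4" and the filesystem UUID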
Next, create a mount point and mount the device. After the mount succeeds, roughly 80 GB of space is available:
[root@localhost ~]# mkdir /RAID
[root@localhost ~]# mount /dev/md0 /RAID
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    487M     0  487M   0% /dev/shm
tmpfs                    487M  6.6M  480M   2% /run
tmpfs                    487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos-root   37G  1.4G   36G   4% /
/dev/sda1               1014M  138M  877M  14% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/md0                  79G   57M   75G   1% /RAID
[root@localhost ~]#
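As a quick sanity check that the array accepts writes, create and read back a small file under the mount point; the file name here is arbitrary:

[root@localhost ~]# echo "raid10 ok" > /RAID/test.txt
[root@localhost ~]# cat /RAID/test.txt
raid10 ok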
Finally, check the detailed information of the /dev/md0 array, and write the mount information into the configuration file so that it takes effect permanently:
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Mon Sep 27 10:21:05 2021
        Raid Level : raid10
        Array Size : 83818496 (79.94 GiB 85.83 GB)
     Used Dev Size : 41909248 (39.97 GiB 42.92 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Mon Sep 27 10:32:12 2021
             State : active
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 5b88152d:76a1d34d:aa27e583:469c1698
            Events : 20

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync set-A   /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
[root@localhost ~]#
[root@localhost ~]# echo "/dev/md0 /RAID ext4 defaults 0 0" >> /etc/fstab
[root@localhost ~]#
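Rather than waiting for a reboot to discover a typo in the fstab entry, you can unmount the array and let mount -a replay every fstab entry immediately; if /dev/md0 shows up mounted again, the entry is correct. This verification step is an addition, not part of the original transcript:

[root@localhost ~]# umount /RAID
[root@localhost ~]# mount -a
[root@localhost ~]# df -h | grep /RAID
/dev/md0                  79G   57M   75G   1% /RAID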
Once a physical disk is confirmed to be damaged and no longer usable, use the mdadm command with the -f parameter to mark it as faulty, then check the status of the RAID array; the state has changed:
[root@localhost ~]# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Mon Sep 27 10:21:05 2021
        Raid Level : raid10
        Array Size : 83818496 (79.94 GiB 85.83 GB)
     Used Dev Size : 41909248 (39.97 GiB 42.92 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Mon Sep 27 10:43:01 2021
             State : clean, degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 5b88152d:76a1d34d:aa27e583:469c1698
            Events : 21

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde

       0       8       16        -      faulty   /dev/sdb
[root@localhost ~]#
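Marking a disk faulty with -f does not take it out of the array's device list; on real hardware you would also detach it with -r before physically swapping the disk. A sketch of that step (the virtual-machine workflow below uses a reboot instead):

[root@localhost ~]# mdadm /dev/md0 -r /dev/sdb    # hot-remove the faulty member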
In a RAID 10 array, one failed disk inside a RAID 1 mirror pair does not affect the use of the RAID 10 array as a whole. After purchasing a new disk, simply use the mdadm command to swap it in; in the meantime, files can still be created and deleted in the /RAID directory as usual. Because the disks here are simulated inside a virtual machine, reboot the system first, and then add the new disk to the RAID array:
[root@localhost ~]# umount /RAID/
[root@localhost ~]# mdadm /dev/md0 -a /dev/sdb
mdadm: added /dev/sdb
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Mon Sep 27 10:21:05 2021
        Raid Level : raid10
        Array Size : 83818496 (79.94 GiB 85.83 GB)
     Used Dev Size : 41909248 (39.97 GiB 42.92 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Mon Sep 27 10:46:43 2021
             State : clean, degraded, recovering
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 4% complete

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 5b88152d:76a1d34d:aa27e583:469c1698
            Events : 29

    Number   Major   Minor   RaidDevice State
       4       8       16        0      spare rebuilding   /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
[root@localhost ~]#
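The rebuild continues in the background (Rebuild Status above shows 4% complete). You can follow it via /proc/mdstat and, once the array returns to a clean state, remount it through the fstab entry created earlier:

[root@localhost ~]# watch -n 1 cat /proc/mdstat    # refresh the rebuild progress every second
[root@localhost ~]# mount -a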