Repairing a Damaged RAID10 Disk Array


A RAID10 array is built from RAID1 mirrors striped together with RAID0. In theory, data is lost only if both disks of the same RAID1 mirror fail at the same time; in other words, the array can tolerate the loss of at most one disk per RAID1 pair.

The essence of the repair is to replace the failed disk in the array with a new one. While the array is degraded, it can still be used normally.

1. View the details of the test RAID10 array

[root@PC1linuxprobe dev]# mdadm -D /dev/md0    ## view the RAID10 array details; all four disks are active
/dev/md0:
        Version : 1.2
  Creation Time : Sun Nov 8 11:17:16 2020
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Sun Nov 8 11:20:45 2020
          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
         Layout : near=2
     Chunk Size : 512K
           Name : PC1linuxprobe:0  (local to host PC1linuxprobe)
           UUID : 36866db8:2839f737:6831b810:d838b066
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
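As a quicker complement to mdadm -D, the kernel's own status file can be read directly. A minimal check, using the device names of this test setup (exact output formatting varies by kernel version):

## Quick health check: /proc/mdstat lists every md array and shows "[UUUU]"
## when all four RAID10 members are up.
cat /proc/mdstat

## The same information filtered from the detailed report, one line per member:
mdadm --detail /dev/md0 | grep -E 'State :|active sync|faulty|removed'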

 

2. Simulate the failure of one disk (/dev/sdc)

[root@PC1linuxprobe dev]# mdadm /dev/md0 -f /dev/sdc
mdadm: set /dev/sdc faulty in /dev/md0
[root@PC1linuxprobe dev]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Nov 8 11:17:16 2020
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Sun Nov 8 11:26:22 2020
          State : active, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0
         Layout : near=2
     Chunk Size : 512K
           Name : PC1linuxprobe:0  (local to host PC1linuxprobe)
           UUID : 36866db8:2839f737:6831b810:d838b066
         Events : 20

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       0        0        1      removed
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde

       1       8       32        -      faulty   /dev/sdc
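In this virtual-machine test the replacement later reuses the same device name, but on real hardware the faulty member is normally removed from the array before the physical drive is swapped. A sketch of that extra step, which is not part of the original walkthrough:

mdadm /dev/md0 -r /dev/sdc      ## --remove: detach the faulty member so the physical disk can be swapped
mdadm -D /dev/md0               ## /dev/sdc should no longer appear in the member list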

Losing one disk does not stop the RAID10 array from working; files can still be created and deleted in the /RAID10 directory during this period.
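A simple way to confirm that the degraded array is still usable is to write and read back a file under the mount point; the file name below is only an example:

## Write, read back and remove a test file on the degraded array.
echo "raid10 degraded write test" > /RAID10/degraded_test.txt
cat /RAID10/degraded_test.txt
rm -f /RAID10/degraded_test.txt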

 

3. Reboot the system (virtual machine)

 

4. View the RAID10 array details after the reboot

[root@PC1linuxprobe dev]# mdadm -D /dev/md0    ## first check the current state of the RAID10 array
/dev/md0:
        Version : 1.2
  Creation Time : Sun Nov 8 11:17:16 2020
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent
    Update Time : Sun Nov 8 19:30:18 2020
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
         Layout : near=2
     Chunk Size : 512K
           Name : PC1linuxprobe:0  (local to host PC1linuxprobe)
           UUID : 36866db8:2839f737:6831b810:d838b066
         Events : 24

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       0        0        1      removed
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
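After a reboot it is also worth confirming that the array was auto-assembled under the expected name. A minimal check; the --detail --scan line is the same text that can later be appended to /etc/mdadm.conf:

## Show every md array the kernel currently knows about.
cat /proc/mdstat

## Print an ARRAY line (device name, metadata version, UUID) for each assembled array.
mdadm --detail --scan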

 

5. Unmount the array

[root@PC1linuxprobe dev]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   18G  2.9G   15G  17% /
devtmpfs               985M     0  985M   0% /dev
tmpfs                  994M  140K  994M   1% /dev/shm
tmpfs                  994M  8.8M  986M   1% /run
tmpfs                  994M     0  994M   0% /sys/fs/cgroup
/dev/md0                40G   49M   38G   1% /RAID10
/dev/sda1              497M  119M  379M  24% /boot
/dev/sr0               3.5G  3.5G     0 100% /run/media/root/RHEL-7.0 Server.x86_64
[root@PC1linuxprobe dev]# umount /RAID10
[root@PC1linuxprobe dev]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   18G  2.9G   15G  17% /
devtmpfs               985M     0  985M   0% /dev
tmpfs                  994M  140K  994M   1% /dev/shm
tmpfs                  994M  8.8M  986M   1% /run
tmpfs                  994M     0  994M   0% /sys/fs/cgroup
/dev/sda1              497M  119M  379M  24% /boot
/dev/sr0               3.5G  3.5G     0 100% /run/media/root/RHEL-7.0 Server.x86_64
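If umount reports that the target is busy, the processes holding files open on the mount point can be listed first; fuser is provided by the psmisc package on RHEL 7. This is only an optional troubleshooting aid, not a step from the original session:

## List processes that keep /RAID10 busy (PID, user, access type).
fuser -vm /RAID10
## After stopping those processes, retry the unmount.
umount /RAID10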

 

6. Add a new disk to replace the failed one

[root@PC1linuxprobe dev]# mdadm /dev/md0 -a /dev/sdc    ## add the new disk
mdadm: added /dev/sdc
[root@PC1linuxprobe dev]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Nov 8 11:17:16 2020
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Sun Nov 8 11:37:41 2020
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1
         Layout : near=2
     Chunk Size : 512K
 Rebuild Status : 16% complete
           Name : PC1linuxprobe:0  (local to host PC1linuxprobe)
           UUID : 36866db8:2839f737:6831b810:d838b066
         Events : 32

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       4       8       32        1      spare rebuilding   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
[root@PC1linuxprobe dev]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Nov 8 11:17:16 2020
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Sun Nov 8 11:38:44 2020
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1
         Layout : near=2
     Chunk Size : 512K
 Rebuild Status : 74% complete
           Name : PC1linuxprobe:0  (local to host PC1linuxprobe)
           UUID : 36866db8:2839f737:6831b810:d838b066
         Events : 59

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       4       8       32        1      spare rebuilding   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
[root@PC1linuxprobe dev]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Nov 8 11:17:16 2020
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Sun Nov 8 11:39:11 2020
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
         Layout : near=2
     Chunk Size : 512K
           Name : PC1linuxprobe:0  (local to host PC1linuxprobe)
           UUID : 36866db8:2839f737:6831b810:d838b066
         Events : 69

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       4       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
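Instead of re-running mdadm -D by hand as above, the rebuild can be watched continuously or simply waited on. Both commands below are standard mdadm/procps tools, shown here as an optional convenience:

## Refresh the kernel's rebuild progress every 5 seconds (Ctrl+C to stop).
watch -n 5 cat /proc/mdstat

## Or block until the recovery has finished, then show the final state.
mdadm --wait /dev/md0
mdadm -D /dev/md0 | grep 'State :'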

 

7. Remount the array

[root@PC1linuxprobe dev]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   18G  2.9G   15G  17% /
devtmpfs               985M     0  985M   0% /dev
tmpfs                  994M  140K  994M   1% /dev/shm
tmpfs                  994M  8.8M  986M   1% /run
tmpfs                  994M     0  994M   0% /sys/fs/cgroup
/dev/sda1              497M  119M  379M  24% /boot
/dev/sr0               3.5G  3.5G     0 100% /run/media/root/RHEL-7.0 Server.x86_64
[root@PC1linuxprobe dev]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Nov 5 15:23:01 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root   /                       xfs     defaults        1 1
UUID=0ba20ae9-dd51-459f-ac48-7f7e81385eb8 /boot                   xfs     defaults        1 2
/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
/dev/md0                /RAID10                 ext4    defaults        0 0
[root@PC1linuxprobe dev]# mount -a
[root@PC1linuxprobe dev]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   18G  2.9G   15G  17% /
devtmpfs               985M     0  985M   0% /dev
tmpfs                  994M  140K  994M   1% /dev/shm
tmpfs                  994M  8.8M  986M   1% /run
tmpfs                  994M     0  994M   0% /sys/fs/cgroup
/dev/sda1              497M  119M  379M  24% /boot
/dev/sr0               3.5G  3.5G     0 100% /run/media/root/RHEL-7.0 Server.x86_64
/dev/md0                40G   49M   38G   1% /RAID10
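To make sure the array keeps assembling as /dev/md0 on future boots, so that the /etc/fstab entry continues to match, the array definition can be recorded in mdadm's configuration file. This step is not part of the original walkthrough; the path /etc/mdadm.conf applies to RHEL, while some other distributions use /etc/mdadm/mdadm.conf:

## Append the current array definition (device name, metadata version, UUID) to mdadm.conf.
mdadm --detail --scan >> /etc/mdadm.conf

## Verify the fstab entry and the mount in one go.
mount -a
df -h /RAID10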

 

8. Summary: repairing a damaged RAID10 array

  • First reboot the system and unmount the array
  • Add the new disk: mdadm /dev/md0 -a /dev/newdisk
  • Wait for the rebuild to finish; check progress with mdadm -D /dev/md0
  • Remount with mount -a, provided the auto-mount entry has already been written to /etc/fstab (the complete command sequence is sketched below)
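The whole repair, collected into one sketch. Device names follow this test setup; on real hardware the faulty disk would also be physically swapped after it is removed from the array:

mdadm -D /dev/md0                ## 1. confirm which member is faulty/removed
umount /RAID10                   ## 2. unmount the filesystem on the array
mdadm /dev/md0 -r /dev/sdc       ## 3. remove the faulty member (if it is still listed)
mdadm /dev/md0 -a /dev/sdc       ## 4. add the replacement disk
mdadm --wait /dev/md0            ## 5. wait for the rebuild to finish
mdadm -D /dev/md0                ##    state should be "clean" with 4 active devices
mount -a                         ## 6. remount via the /etc/fstab entry
df -h /RAID10                    ## 7. confirm the mount is back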

