Following the two links below, I redid the experiment from scratch, to reinforce the material and build hands-on skill.
Reference: http://www.opsers.org/base/learning-linux-the-day-that-the-system-configuration-in-the-rhel6-disk-array-raid.html
Reference: http://www.cnblogs.com/mchina/p/linux-centos-disk-array-software_raid.html
Linux: Configuring a Disk Array (RAID) on a CentOS System
Experiment environment
Virtual machine: Oracle VM VirtualBox 5.0.10 r104061
OS platform: CentOS Linux release 7.2.1511 (Core)
mdadm version: mdadm - v3.3.2 - 21st August 2014
RAID stands for Redundant Arrays of Inexpensive Disks: a redundant array of inexpensive disks. RAID is a technique, implemented in software or in hardware, that combines several smaller disks into one larger disk device. The larger device not only expands the storage space but can also protect the data on it. Depending on the level, the combined device takes on different capabilities. The common levels are the following.
RAID levels
RAID 0: striping
This mode is normally built from disks of the same model and capacity. The RAID first divides each disk into equal-sized chunks; when a file is written to the RAID device, it is cut into chunk-sized pieces and placed on the disks in turn. Because the disks hold the data in an interleaved fashion, data written to the RAID is spread evenly across all of them.
The characteristics of RAID 0 are:
1. The more disks, the larger the capacity of the RAID device.
2. The total capacity is the sum of the capacities of all the disks.
3. The more disks, the higher the write performance.
4. With disks of unequal size, once the smaller disk is full, new data is written only to the larger disk.
5. At least two disks are required, and disk utilization is 100%.
Its fatal flaw: if any one disk fails, all of the data is compromised, because the data is split across the disks.
RAID 1: mirroring
In this mode the same data is kept, in full, on different disks. Because every piece of data is written separately to each disk, write performance becomes very poor under heavy writes to a RAID 1 device. With hardware RAID (a RAID controller card), the card makes the copy itself without going through the system I/O bus, so the impact on performance is small; with software RAID, performance drops noticeably.
The characteristics of RAID 1 are:
1. Data safety is guaranteed.
2. The capacity of a RAID 1 device is half the total capacity of its disks.
3. When several disks form a RAID 1 device, the total capacity is bounded by the smallest disk.
4. Read performance improves: because the data exists on several disks, when multiple processes read the same data the RAID balances the reads across the disks.
5. The number of disks must be a multiple of two, and disk utilization is 50%.
The drawback: write performance is lower.
RAID 5: balancing performance and redundancy
RAID 5 needs at least three disks. Writes resemble RAID 0, except that in every write cycle a parity block is also written to one of the disks; the parity records recovery information for the data on the other disks and is used for rescue when a disk fails.
Characteristics:
1. When any single disk fails, the data it held can be rebuilt from the parity on the other disks, so safety improves noticeably.
2. Because of the parity, the total capacity of RAID 5 is the number of disks minus one.
3. If two or more disks fail, the RAID 5 data is lost; by default RAID 5 tolerates only a single disk failure.
4. Read/write performance is roughly comparable to RAID 0.
5. At least three disks are required, and the usable capacity is N-1 disks (for example, three 5 GB disks yield (3-1) x 5 GB = 10 GB).
Drawback: write performance does not necessarily improve, because every write to RAID 5 must also have its parity computed, so write performance depends heavily on the hardware. With software RAID in particular, the parity is computed by the CPU rather than by a dedicated RAID card, so disk performance drops noticeably during parity computation and recovery.
The data-storage flow of the three levels RAID 0, RAID 1 and RAID 5 is illustrated in the figure below.

Image source: http://www.opsers.org/base/learning-linux-the-day-that-the-system-configuration-in-the-rhel6-disk-array-raid.html
RAID 0+1 and RAID 1+0
These RAID levels address the strengths and weaknesses above by combining RAID 0 and RAID 1.
RAID 0+1 means:
1. first build RAID 0 sets,
2. then mirror them as RAID 1; that is RAID 0+1.
RAID 1+0 means:
1. first build RAID 1 mirrors,
2. then stripe them as RAID 0; that is RAID 1+0.
Characteristics and drawbacks: with the advantages of RAID 0 comes better performance, and with the advantages of RAID 1 comes redundancy. But RAID 1's drawback also applies: half of the total capacity is spent on the mirror copy.
Image source: http://www.opsers.org/base/learning-linux-the-day-that-the-system-configuration-in-the-rhel6-disk-array-raid.html
Because RAID 5 tolerates only a single disk failure, a further level was developed: RAID 6. RAID 6 uses two disks' worth of capacity for parity storage, so the total capacity is reduced by two disks, but up to two disks may fail at the same time, and even with two disks failed simultaneously the data can still be recovered. This level needs at least four disks, and the usable capacity is N-2.
Spare disk: the hot spare
Its role: when a disk in the array fails, the hot spare immediately takes the failed disk's place and the array starts rebuilding on its own, restoring all of the data automatically. The hot spare (there may be one or several) is not counted as part of the original array level; it only comes into play when some disk in the array fails.
That is all for the theory. Many more combinations can of course be derived, but once you understand the material above, the other levels are straightforward; they are just combinations. From the discussion above, the advantages of a disk array should also be clear: 1. data safety improves noticeably, 2. read/write performance improves noticeably, 3. disk capacity is effectively expanded. Do not forget the drawback, though: higher cost. Compared with the value of the data, that cost is hardly worth mentioning.
Setting up the disks
We simulate adding physical disks in Oracle VM VirtualBox. In this article we will build RAID 0, RAID 1 and RAID 5: RAID 0 needs two disks, RAID 1 needs two disks, and RAID 5 needs four disks, so eight physical disks are added here, each 5.00 GB.

mdadm is short for "multiple devices admin"; it is the standard software RAID management tool on Linux.
Installation
First install mdadm: yum install mdadm
[root@raid]# rpm -qa | grep mdadm
mdadm-3.3.2-7.el7.x86_64
Look at the newly added physical disks
[root@raid]# fdisk -l

Disk /dev/sdb: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdc: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdd: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sde: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdf: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdh: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdi: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdg: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
I. RAID 0 experiment: using two disks, /dev/sdb and /dev/sdc
1. Partition the disks
[root@raid ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xd7c6c9b7.

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   g   create a new empty GPT partition table
   G   create an IRIX (SGI) partition table
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-10485759, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759):
Using default value 10485759
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): L
[table of partition type codes omitted; "fd" is Linux raid autodetect]
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): p

Disk /dev/sdb: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xd7c6c9b7

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    10485759     5241856   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Note: partition /dev/sdc in the same way.
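If you prefer not to repeat the interactive session, the same partition can also be created non-interactively. A minimal sketch, assuming the sfdisk shipped with util-linux on CentOS 7 and an empty target disk (test it on a scratch disk first):

# One primary partition spanning the whole disk, type fd (Linux raid autodetect)
echo ',,fd' | sfdisk /dev/sdc
partprobe /dev/sdc    # ask the kernel to re-read the partition table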
Make the kernel re-read the partition table
[root@raid ~]# partprobe
Check the result
[root@raid ~]# fdisk -l /dev/sdb /dev/sdc

Disk /dev/sdb: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xd7c6c9b7

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    10485759     5241856   fd  Linux raid autodetect

Disk /dev/sdc: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x7fd6e126

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048    10485759     5241856   fd  Linux raid autodetect

Create the RAID 0 array
[root@raid ~]# mdadm -C /dev/md0 -ayes -l0 -n2 /dev/sd[b,c]1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Notes:
-C, --create: create an array;
-a, --auto: agree to create the device node; without this option you must first create a RAID device node with the mknod command, so creating it in one step with -a yes is recommended;
-l, --level: the array level; the supported modes are linear, raid0, raid1, raid4, raid5, raid6, raid10, multipath, faulty, and container;
-n, --raid-devices: the number of active disks in the array; this number plus the number of spare disks should equal the total number of disks in the array;
/dev/md0: the device name of the array;
/dev/sd{b,c}1: the disks used to create the array (the long-option form of the whole command is sketched below).
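For reference, this is how the same command reads with the long options spelled out; a minimal sketch, identical in effect to the short form above:

mdadm --create /dev/md0 --auto=yes --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1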
Check the RAID status
[root@raid ~]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdc1[1] sdb1[0]
      10475520 blocks super 1.2 512k chunks

unused devices: <none>

[root@raid ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Dec 28 14:48:12 2015
     Raid Level : raid0
     Array Size : 10475520 (9.99 GiB 10.73 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Dec 28 14:48:12 2015
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : raid:0  (local to host raid)
           UUID : 1100e7ee:d40cbdc2:21c359b3:b6b966b6
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
Notes:
Raid Level: the array level;
Array Size: the array's capacity;
Raid Devices: the number of RAID members;
Total Devices: the total number of devices under the RAID, including spare disks or partitions kept on standby, ready to be brought in at any time to keep the RAID running;
State: the array state; there are three values: clean means healthy, degraded means something is wrong, recovering means the array is rebuilding or being constructed;
Active Devices: the number of activated RAID members;
Working Devices: the number of RAID members working normally;
Failed Devices: the number of failed RAID members;
Spare Devices: the number of spare RAID members; when a RAID member fails and another disk or partition steps in to replace it, the RAID rebuilds, and until the rebuild completes that member is also counted as a spare device;
UUID: the RAID's UUID, unique within the system;
Create the RAID configuration file /etc/mdadm.conf. It does not exist by default and has to be created by hand. Its main purpose is to let the system assemble the software RAID automatically at boot, and it also makes day-to-day management easier. It is not strictly required, but configuring it is recommended. We do need it here: in testing we found that without this file, the md0 we created comes back as md127 after a reboot.
The /etc/mdadm.conf file contains:
a DEVICE line naming all the devices used by the software RAID, and an ARRAY line giving the array's device name, RAID level, number of active devices, and UUID. An illustrative example follows.
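Purely as an illustration of the format (the UUID below is a placeholder, and MAILADDR is an optional mdadm.conf keyword that directs failure-alert mail), a complete file might look like this:

DEVICE /dev/sdb1 /dev/sdc1
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
MAILADDR root@localhost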
Create /etc/mdadm.conf
echo DEVICE /dev/sd{b,c}1 >> /etc/mdadm.conf
mdadm -Ds >> /etc/mdadm.conf
The file generated this way does not yet match the required format and therefore does not take effect; edit its contents by hand into the following form:
[root@raid ~]# cat /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdc1
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=1100e7ee:d40cbdc2:21c359b3:b6b966b6
Format the array
[root@raid ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
655360 inodes, 2618880 blocks
130944 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Create a mount point and mount
[root@raid ~]# mkdir -p /raid0
[root@raid ~]# mount /dev/md0 /raid0
Check the mounted filesystems
[root@raid ~]# df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        46G  4.1G   42G   9% /
devtmpfs                devtmpfs  2.0G     0  2.0G   0% /dev
tmpfs                   tmpfs     2.0G  144K  2.0G   1% /dev/shm
tmpfs                   tmpfs     2.0G  8.8M  2.0G   1% /run
tmpfs                   tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/sda1               xfs       497M  140M  358M  29% /boot
tmpfs                   tmpfs     396M   16K  396M   1% /run/user/0
/dev/md0                ext4      9.8G   37M  9.2G   1% /raid0
Add it to /etc/fstab
[root@raid ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Dec 28 11:06:31 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=5ea4bc6c-3846-41c6-9716-8a273e36a0f0 /boot xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
/dev/md0                /raid0                  ext4    defaults        0 0
Then reboot to test whether it mounts automatically at boot. RAID 0 setup is complete. (A no-reboot alternative is sketched below.)
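If you would rather verify auto-assembly without rebooting, a minimal sketch, assuming nothing is currently using the array:

umount /raid0
mdadm -S /dev/md0    # stop the array
mdadm -A -s          # re-assemble every array listed in /etc/mdadm.conf
mount -a             # mount everything from /etc/fstab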
Disk I/O test. The test file should be larger than RAM, to keep the write cache from skewing the results. We use dd here; it only gives a rough figure, and it measures sequential I/O rather than random I/O.
Write test

[root@raid ~]# time dd if=/dev/zero of=/raid0/iotest bs=8k count=655360 conv=fdatasync
655360+0 records in
655360+0 records out
5368709120 bytes (5.4 GB) copied, 26.4606 s, 203 MB/s

real    0m26.466s
user    0m0.425s
sys     0m23.814s

[root@raid ~]# time dd if=/dev/zero of=/iotest bs=8k count=655360 conv=fdatasync
655360+0 records in
655360+0 records out
5368709120 bytes (5.4 GB) copied, 30.9296 s, 174 MB/s

real    0m30.932s
user    0m0.080s
sys     0m3.623s

One file is written on the RAID 0 array and one under the root filesystem: 203 MB/s versus 174 MB/s, taking 0m26.466s versus 0m30.932s. RAID 0 wins the write test.

Read test

[root@raid]# time dd if=/raid0/iotest of=/dev/null bs=8k count=655360
655360+0 records in
655360+0 records out
5368709120 bytes (5.4 GB) copied, 3.98003 s, 1.3 GB/s

real    0m3.983s
user    0m0.065s
sys     0m3.581s

[root@raid raid0]# time dd if=/iotest of=/dev/null bs=8k count=655360
655360+0 records in
655360+0 records out
5368709120 bytes (5.4 GB) copied, 6.81647 s, 788 MB/s

real    0m6.819s
user    0m0.020s
sys     0m4.975s

One run reads /raid0/iotest, the other /iotest: 1.3 GB/s versus 788 MB/s, taking 0m3.983s versus 0m6.819s. RAID 0 reads almost twice as fast as the plain partition.

Read/write test

[root@raid ~]# time dd if=/raid0/iotest of=/raid0/iotest1 bs=8k count=327680 conv=fdatasync
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 7.04209 s, 381 MB/s

real    0m7.045s
user    0m0.073s
sys     0m3.984s

[root@raid ~]# time dd if=/iotest of=/iotest1 bs=8k count=327680 conv=fdatasync
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 21.2412 s, 126 MB/s

real    0m21.244s
user    0m0.051s
sys     0m2.954s

One run copies /raid0/iotest to /raid0/iotest1, the other /iotest to /iotest1: 381 MB/s versus 126 MB/s, taking 0m7.045s versus 0m21.244s. RAID 0 copies at well over twice the speed of the plain partition.
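The read figures above include page-cache hits, so a less cache-sensitive variant is worth knowing. A sketch using GNU dd's direct I/O flags (the file names follow the test above; iotest_direct is a hypothetical scratch file):

# Read with the page cache bypassed (O_DIRECT); closer to raw disk speed
dd if=/raid0/iotest of=/dev/null bs=1M iflag=direct
# Write with the cache bypassed as well
dd if=/dev/zero of=/raid0/iotest_direct bs=1M count=2048 oflag=direct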
II. RAID 1 experiment: using two disks, /dev/sdd and /dev/sde
Partition the disks and change the partition type
[root@raid ~]# fdisk /dev/sdd
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x686f5801.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-10485759, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759):
Using default value 10485759
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): L
[table of partition type codes omitted; "fd" is Linux raid autodetect]
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): p

Disk /dev/sdd: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x686f5801

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048    10485759     5241856   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@raid ~]# fdisk /dev/sde
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xe0cce225.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-10485759, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759):
Using default value 10485759
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): L
[table of partition type codes omitted; "fd" is Linux raid autodetect]
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): p

Disk /dev/sde: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xe0cce225

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1            2048    10485759     5241856   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Create the RAID 1 array
[root@raid ~]# mdadm -C /dev/md1 -ayes -l1 -n2 /dev/sd[d,e]1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
Check the RAID 1 status
[root@raid ~]# cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 sde1[1] sdd1[0]
      5237760 blocks super 1.2 [2/2] [UU]
      [================>....]  resync = 84.0% (4401920/5237760) finish=0.0min speed=209615K/sec

md0 : active raid0 sdb1[0] sdc1[1]
      10475520 blocks super 1.2 512k chunks

unused devices: <none>

[root@raid ~]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Mon Dec 28 18:11:06 2015
     Raid Level : raid1
     Array Size : 5237760 (5.00 GiB 5.36 GB)
  Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Dec 28 18:11:33 2015
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : raid:1  (local to host raid)
           UUID : 5ac9846b:2e04aea8:4399404c:5c2b96cb
         Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1
Note: Used Dev Size is the capacity each member disk or partition contributes to the array. As you can see, RAID 1 is still resyncing; once it finishes, the status looks like this:
[root@raid ~]# cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 sde1[1] sdd1[0]
      5237760 blocks super 1.2 [2/2] [UU]

md0 : active raid0 sdb1[0] sdc1[1]
      10475520 blocks super 1.2 512k chunks

unused devices: <none>

[root@raid ~]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Mon Dec 28 18:11:06 2015
     Raid Level : raid1
     Array Size : 5237760 (5.00 GiB 5.36 GB)
  Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Dec 28 18:11:33 2015
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : raid:1  (local to host raid)
           UUID : 5ac9846b:2e04aea8:4399404c:5c2b96cb
         Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1
Add RAID 1 to the RAID configuration file /etc/mdadm.conf and edit it
[root@raid ~]# echo DEVICE /dev/sd{d,e}1 >> /etc/mdadm.conf
[root@raid ~]# mdadm -Ds >> /etc/mdadm.conf
Edit /etc/mdadm.conf into the following form:
[root@raid ~]# cat /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdc1
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=1100e7ee:d40cbdc2:21c359b3:b6b966b6
DEVICE /dev/sdd1 /dev/sde1
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=5ac9846b:2e04aea8:4399404c:5c2b96cb
Format the array
[root@raid ~]# mkfs.ext4 /dev/md1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
327680 inodes, 1309440 blocks
65472 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Create a mount point and mount
[root@raid ~]# mkdir -p /raid1
[root@raid ~]# mount /dev/md1 /raid1
Check the filesystem sizes
[root@raid ~]# df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        46G  4.1G   42G   9% /
devtmpfs                devtmpfs  2.0G     0  2.0G   0% /dev
tmpfs                   tmpfs     2.0G   88K  2.0G   1% /dev/shm
tmpfs                   tmpfs     2.0G  8.9M  2.0G   1% /run
tmpfs                   tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/md0                ext4      9.8G   37M  9.2G   1% /raid0
/dev/sda1               xfs       497M  140M  358M  29% /boot
tmpfs                   tmpfs     396M   12K  396M   1% /run/user/0
/dev/md1                ext4      4.8G   20M  4.6G   1% /raid1
Add it to /etc/fstab
[root@raid ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Dec 28 11:06:31 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=5ea4bc6c-3846-41c6-9716-8a273e36a0f0 /boot xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
/dev/md0                /raid0                  ext4    defaults        0 0
/dev/md1                /raid1                  ext4    defaults        0 0
Then reboot to test automatic mounting at boot; RAID 1 setup is complete.
After the reboot, it was mounted automatically:
[root@raid ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   46G  4.1G   42G   9% /
devtmpfs                 2.0G     0  2.0G   0% /dev
tmpfs                    2.0G   88K  2.0G   1% /dev/shm
tmpfs                    2.0G  8.9M  2.0G   1% /run
tmpfs                    2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/md0                 9.8G   37M  9.2G   1% /raid0
/dev/md1                 4.8G   20M  4.6G   1% /raid1
/dev/sda1                497M  140M  358M  29% /boot
tmpfs                    396M   12K  396M   1% /run/user/0
Disk I/O test; the test file should be larger than RAM, to avoid the write cache.
Write test

[root@raid ~]# time dd if=/dev/zero of=/raid1/iotest bs=8k count=327680 conv=fdatasync
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 6.65744 s, 403 MB/s

real    0m6.667s
user    0m0.086s
sys     0m4.236s

[root@raid ~]# time dd if=/dev/zero of=/iotest bs=8k count=327680 conv=fdatasync
655360+0 records in
655360+0 records out
5368709120 bytes (5.4 GB) copied, 30.9296 s, 174 MB/s

real    0m30.932s
user    0m0.080s
sys     0m3.623s

One file is written on RAID 1, one under the root filesystem: 403 MB/s versus 174 MB/s, taking 0m6.667s versus 0m30.932s. RAID 1 appears to write more than twice as fast as the plain partition (in theory RAID 1 writes should be slower, so this is odd; see the note after these tests).

Read test

[root@raid ~]# time dd if=/raid1/iotest of=/dev/null bs=8k count=327680
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 0.445192 s, 6.0 GB/s

real    0m0.446s
user    0m0.026s
sys     0m0.420s

[root@raid ~]# time dd if=/iotest of=/dev/null bs=8k count=327680
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 1.52405 s, 1.8 GB/s

real    0m1.534s
user    0m0.036s
sys     0m1.194s

One run reads /raid1/iotest, the other /iotest: 6.0 GB/s versus 1.8 GB/s, taking 0m0.446s versus 0m1.534s. RAID 1 appears to read more than three times as fast (in theory a RAID 1 read should be about the same as a plain partition, which again is odd).

Read/write test

[root@raid ~]# time dd if=/raid1/iotest of=/raid1/iotest1 bs=8k count=163840 conv=fdatasync
163840+0 records in
163840+0 records out
1342177280 bytes (1.3 GB) copied, 3.47 s, 387 MB/s

real    0m3.472s
user    0m0.036s
sys     0m2.340s

[root@raid ~]# time dd if=/iotest of=/iotest1 bs=8k count=327680 conv=fdatasync
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 21.2412 s, 126 MB/s

real    0m21.244s
user    0m0.051s
sys     0m2.954s

One run copies /raid1/iotest to /raid1/iotest1, the other /iotest to /iotest1: 387 MB/s versus 126 MB/s, taking 0m3.472s versus 0m21.244s. RAID 1 again appears more than twice as fast (in theory RAID 1 writes should be slower, so this too is odd).
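The oddities above are most likely measurement artifacts rather than RAID behavior: the 2.7 GB test file fits in the page cache, so the 6.0 GB/s "read" is largely served from RAM, and the /iotest baseline figures here are simply reused from the earlier 5.4 GB RAID 0 run, so the comparison is not apples-to-apples. A minimal sketch of a fairer re-run, assuming root privileges (drop_caches is a standard kernel interface):

sync                                # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes
time dd if=/raid1/iotest of=/dev/null bs=8k count=327680   # reads now actually hit the disks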
III. RAID 5 experiment: using four disks, /dev/sdf, /dev/sdg, /dev/sdh and /dev/sdi; three as active disks and one as a hot spare
1. Create the partitions and change the partition type
fdisk /dev/sdf
fdisk /dev/sdg
fdisk /dev/sdh
fdisk /dev/sdi
The detailed steps are the same as above and are omitted here.
The resulting partitions:
[root@raid ~]# fdisk -l /dev/sd[f,g,h,i]

Disk /dev/sdf: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x8a6a4f75

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1            2048    10485759     5241856   fd  Linux raid autodetect

Disk /dev/sdg: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xcd98bef8

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1            2048    10485759     5241856   fd  Linux raid autodetect

Disk /dev/sdh: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xf4d754a4

   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1            2048    10485759     5241856   fd  Linux raid autodetect

Disk /dev/sdi: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x62fb90d1

   Device Boot      Start         End      Blocks   Id  System
/dev/sdi1            2048    10485759     5241856   fd  Linux raid autodetect
Create the RAID 5 array
[root@raid ~]# mdadm -C /dev/md5 -ayes -l5 -n3 -x1 /dev/sd[f,g,h,i]1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
Note: "-x1" (or "--spare-devices=1") means the array has a single hot spare; with several hot spares, set "--spare-devices" to the corresponding number. The long-option form is sketched below.
Check the RAID 5 status
[root@raid ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md5 : active raid5 sdh1[4] sdi1[3](S) sdg1[1] sdf1[0]
      10475520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

md1 : active raid1 sde1[1] sdd1[0]
      5237760 blocks super 1.2 [2/2] [UU]

md0 : active raid0 sdc1[1] sdb1[0]
      10475520 blocks super 1.2 512k chunks

unused devices: <none>

[root@raid ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Mon Dec 28 21:08:44 2015
     Raid Level : raid5
     Array Size : 10475520 (9.99 GiB 10.73 GB)
  Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Dec 28 21:09:11 2015
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : raid:5  (local to host raid)
           UUID : 1bafff7f:f8993ec9:553cd4f7:31ae4f91
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       81        0      active sync   /dev/sdf1
       1       8       97        1      active sync   /dev/sdg1
       4       8      113        2      active sync   /dev/sdh1

       3       8      129        -      spare   /dev/sdi1
Note: Rebuild Status shows the array's construction progress. While the array is building, the device list looks something like this:

    4       8      113        2      spare rebuilding   /dev/sdh1
        (not yet activated; the member is being built, and data is being copied onto it)
    3       8      129        -      spare   /dev/sdi1
        (the hot spare)

To follow the progress live, see the sketch below.
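One convenient way to watch a build or rebuild as it happens:

watch -n 1 cat /proc/mdstat   # redraw the array status every second; Ctrl-C to quit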
Add RAID 5 to the RAID configuration file /etc/mdadm.conf and edit it
[root@raid ~]# echo DEVICE /dev/sd{f,g,h,i}1 >> /etc/mdadm.conf
[root@raid ~]# mdadm -Ds >> /etc/mdadm.conf
[root@raid ~]# cat /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdc1
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=1100e7ee:d40cbdc2:21c359b3:b6b966b6
DEVICE /dev/sdd1 /dev/sde1
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=5ac9846b:2e04aea8:4399404c:5c2b96cb
DEVICE /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1
ARRAY /dev/md5 level=raid5 num-devices=3 UUID=1bafff7f:f8993ec9:553cd4f7:31ae4f91
Format the array
[root@raid ~]# mkfs.ext4 /dev/md5
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
655360 inodes, 2618880 blocks
130944 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Create a mount point and mount
[root@raid ~]# mkdir -p /raid5
[root@raid ~]# mount /dev/md5 /raid5
[root@raid ~]# df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        46G  4.1G   42G   9% /
devtmpfs                devtmpfs  2.0G     0  2.0G   0% /dev
tmpfs                   tmpfs     2.0G   88K  2.0G   1% /dev/shm
tmpfs                   tmpfs     2.0G  8.9M  2.0G   1% /run
tmpfs                   tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/md0                ext4      9.8G   37M  9.2G   1% /raid0
/dev/md1                ext4      4.8G  2.6G  2.1G  56% /raid1
/dev/sda1               xfs       497M  140M  358M  29% /boot
tmpfs                   tmpfs     396M   16K  396M   1% /run/user/0
/dev/md5                ext4      9.8G   37M  9.2G   1% /raid5
Note: the usable capacity of the RAID 5 array is (3-1) x 5 GB, about 10 GB raw; df reports 9.8G, with 9.2G available after filesystem overhead.
Add it to /etc/fstab
[root@raid ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Dec 28 11:06:31 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=5ea4bc6c-3846-41c6-9716-8a273e36a0f0 /boot xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
/dev/md0                /raid0                  ext4    defaults        0 0
/dev/md1                /raid1                  ext4    defaults        0 0
/dev/md5                /raid5                  ext4    defaults        0 0
Then reboot to test whether everything mounts automatically at boot; RAID 5 setup is complete. After the reboot:
[root@raid ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   46G  4.1G   42G   9% /
devtmpfs                 2.0G     0  2.0G   0% /dev
tmpfs                    2.0G   88K  2.0G   1% /dev/shm
tmpfs                    2.0G  8.9M  2.0G   1% /run
tmpfs                    2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/md0                 9.8G   37M  9.2G   1% /raid0
/dev/md1                 4.8G  2.6G  2.1G  56% /raid1
/dev/md5                 9.8G   37M  9.2G   1% /raid5
/dev/sda1                497M  140M  358M  29% /boot
tmpfs                    396M   12K  396M   1% /run/user/0
Disk I/O test
Write test

[root@raid ~]# time dd if=/dev/zero of=/raid5/iotest bs=8k count=327680 conv=fdatasync
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 10.2333 s, 262 MB/s

real    0m10.236s
user    0m0.049s
sys     0m2.603s

[root@raid ~]# time dd if=/dev/zero of=/iotest bs=8k count=327680 conv=fdatasync
655360+0 records in
655360+0 records out
5368709120 bytes (5.4 GB) copied, 30.9296 s, 174 MB/s

real    0m30.932s
user    0m0.080s
sys     0m3.623s

One file is written on RAID 5, one under the root filesystem: 262 MB/s versus 174 MB/s, taking 0m10.236s versus 0m30.932s. RAID 5 writes at roughly the speed of the plain partition, slightly faster.

Read test

[root@raid ~]# time dd if=/raid5/iotest of=/dev/null bs=8k count=327680
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 0.443526 s, 6.1 GB/s

real    0m0.451s
user    0m0.029s
sys     0m0.416s

[root@raid ~]# time dd if=/iotest of=/dev/null bs=8k count=327680
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 1.52405 s, 1.8 GB/s

real    0m1.534s
user    0m0.036s
sys     0m1.194s

One run reads /raid5/iotest, the other /iotest: 6.1 GB/s versus 1.8 GB/s, taking 0m0.451s versus 0m1.534s. RAID 5 appears to read more than three times as fast as the plain partition (as with RAID 1, the small file fits in the page cache, so treat the read figures with caution).

Read/write test

[root@raid ~]# time dd if=/raid5/iotest of=/raid5/iotest1 bs=8k count=163840 conv=fdatasync
163840+0 records in
163840+0 records out
1342177280 bytes (1.3 GB) copied, 5.55382 s, 242 MB/s

real    0m5.561s
user    0m0.041s
sys     0m1.288s

[root@raid ~]# time dd if=/iotest of=/iotest1 bs=8k count=327680 conv=fdatasync
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 21.2412 s, 126 MB/s

real    0m21.244s
user    0m0.051s
sys     0m2.954s

One run copies /raid5/iotest to /raid5/iotest1, the other /iotest to /iotest1: 242 MB/s versus 126 MB/s, taking 0m5.561s versus 0m21.244s. RAID 5 copies somewhat faster than the plain partition.
IV. RAID maintenance
1. Simulate a disk failure
In practice, when software RAID detects a faulty disk, it automatically marks the disk as failed and stops reading from and writing to it. Here we make /dev/sdh1 play the part of the failed disk, with the following command:
[root@raid ~]# mdadm /dev/md5 -f /dev/sdh1
mdadm: set /dev/sdh1 faulty in /dev/md5
Check the rebuild status
When we created the RAID 5 array above, we configured a hot spare, so as soon as a disk is marked failed, the hot spare automatically takes its place and the array can rebuild within a short time. The current state of the array can be seen in "/proc/mdstat":
[root@raid ~]# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
md1 : active raid1 sdd1[0] sde1[1]
      5237760 blocks super 1.2 [2/2] [UU]

md5 : active raid5 sdh1[4](F) sdg1[1] sdf1[0] sdi1[3]
      10475520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [==>...................]  recovery = 11.7% (612748/10475520) finish=2min speed=1854k/sec

md0 : active raid0 sdc1[1] sdb1[0]
      10475520 blocks super 1.2 512k chunks

unused devices: <none>

[root@raid ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Mon Dec 28 21:08:44 2015
     Raid Level : raid5
     Array Size : 10475520 (9.99 GiB 10.73 GB)
  Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Dec 28 22:14:03 2015
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : raid:5  (local to host raid)
           UUID : 1bafff7f:f8993ec9:553cd4f7:31ae4f91
         Events : 37

    Number   Major   Minor   RaidDevice State
       0       8       81        0      active sync   /dev/sdf1
       1       8       97        1      active sync   /dev/sdg1
       3       8      129        2      spare rebuilding   /dev/sdi1

       4       8      113        -      faulty   /dev/sdh1
The output above shows the array rebuilding. When a device fails or is marked failed, "(F)" appears after its bracketed slot, as in "sdh1[4](F)". In "[3/2]", the first number is the number of devices the array contains and the second is the number of active devices; since one device has failed, the second number is 2. The array is now running in degraded mode: still usable, but without data redundancy. "[UU_]" means the devices currently usable are /dev/sdf1 and /dev/sdg1; had /dev/sdf1 failed instead, it would read "[_UU]".
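In production you normally want to be notified of a failure rather than discover it in /proc/mdstat. A hedged sketch using mdadm's monitor mode (these are standard mdadm flags; the mail address is a placeholder):

# Watch every array from /etc/mdadm.conf as a daemon and mail a report
# whenever an array degrades or a disk fails; poll every 60 seconds.
mdadm --monitor --scan --daemonise --delay=60 --mail=root@localhost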
Check whether the test data written earlier is still there
[root@raid raid5]# cat /raid5/1.txt
ldjaflajfdlajf
The data is intact; nothing was lost.
Check the array status once the rebuild has finished
[root@raid ~]# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
md1 : active raid1 sdd1[0] sde1[1]
      5237760 blocks super 1.2 [2/2] [UU]

md5 : active raid5 sdh1[4](F) sdg1[1] sdf1[0] sdi1[3]
      10475520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

md0 : active raid0 sdc1[1] sdb1[0]
      10475520 blocks super 1.2 512k chunks

unused devices: <none>
The RAID device is back to normal.
Remove the failed disk
Remove /dev/sdh1, the disk we just marked as failed, as follows:
[root@raid ~]# mdadm /dev/md5 -r /dev/sdh1
mdadm: hot removed /dev/sdh1 from /dev/md5
Check the status of md5 again
[root@raid ~]# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
md1 : active raid1 sdd1[0] sde1[1]
      5237760 blocks super 1.2 [2/2] [UU]

md5 : active raid5 sdg1[1] sdf1[0] sdi1[3]
      10475520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

md0 : active raid0 sdc1[1] sdb1[0]
      10475520 blocks super 1.2 512k chunks

unused devices: <none>

[root@raid ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Mon Dec 28 21:08:44 2015
     Raid Level : raid5
     Array Size : 10475520 (9.99 GiB 10.73 GB)
  Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Mon Dec 28 22:26:24 2015
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : raid:5  (local to host raid)
           UUID : 1bafff7f:f8993ec9:553cd4f7:31ae4f91
         Events : 38

    Number   Major   Minor   RaidDevice State
       0       8       81        0      active sync   /dev/sdf1
       1       8       97        1      active sync   /dev/sdg1
       3       8      129        2      active sync   /dev/sdi1
/dev/sdh1 has been removed.
Add a new hot spare
If you were adding a genuinely new disk in production, you would partition it first in the same way; here, for convenience, we simply add the disk we just "failed" back into the RAID 5 array.
[root@raid ~]# mdadm /dev/md5 -a /dev/sdh1
mdadm: added /dev/sdh1
Check the RAID 5 array status
[root@raid ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Mon Dec 28 21:08:44 2015
     Raid Level : raid5
     Array Size : 10475520 (9.99 GiB 10.73 GB)
  Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Dec 28 22:34:44 2015
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : raid:5  (local to host raid)
           UUID : 1bafff7f:f8993ec9:553cd4f7:31ae4f91
         Events : 39

    Number   Major   Minor   RaidDevice State
       0       8       81        0      active sync   /dev/sdf1
       1       8       97        1      active sync   /dev/sdg1
       3       8      129        2      active sync   /dev/sdi1

       4       8      113        -      spare   /dev/sdh1
/dev/sdh1 has become a hot spare again.
Check the test data
[root@raid ~]# cat /raid5/1.txt
ldjaflajfdlajf
The data is intact and nothing was lost. The failover test is complete.
V. Adding disks to a RAID array
If the RAID we have built runs out of space, we can add new disks to it to enlarge the RAID.
Add a physical disk in the virtual machine: eight disks were already added above, so to simulate a new disk, first shut the virtual machine down and add another 5 GB disk in the storage settings. The partitioning and related steps are the same as before and are not repeated here.
Add the new disk to the RAID
[root@raid ~]# mdadm /dev/md5 -a /dev/sdj1
mdadm: added /dev/sdj1
Check the RAID status now
[root@raid ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Mon Dec 28 21:08:44 2015
     Raid Level : raid5
     Array Size : 10475520 (9.99 GiB 10.73 GB)
  Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
   Raid Devices : 3
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Mon Dec 28 22:47:33 2015
          State : clean
 Active Devices : 3
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 512K

           Name : raid:5  (local to host raid)
           UUID : 1bafff7f:f8993ec9:553cd4f7:31ae4f91
         Events : 43

    Number   Major   Minor   RaidDevice State
       0       8       81        0      active sync   /dev/sdf1
       1       8       97        1      active sync   /dev/sdg1
       3       8      129        2      active sync   /dev/sdi1

       4       8      113        -      spare   /dev/sdh1
       5       8      145        -      spare   /dev/sdj1
By default, a disk added to the RAID is treated as a hot spare; we need to promote a spare into the set of active disks.
Convert a hot spare into an active disk
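A minimal sketch of the usual procedure, using standard mdadm and resize2fs commands and assuming the md5 layout built above (check your own device names and wait for the reshape before resizing):

mdadm -G /dev/md5 -n4   # grow to 4 active devices; one spare is promoted and a reshape begins
cat /proc/mdstat        # watch until the reshape finishes
resize2fs /dev/md5      # then grow the ext4 filesystem into the new capacity

Afterwards, update the num-devices value for /dev/md5 in /etc/mdadm.conf to match.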

