Requirements: the company recently received a batch of servers for a big-data rollout. Each data-node server has 14 physical disks: 2 of 900 GB and 12 of 4 TB. During OS installation, in the server's BIOS/RAID setup: 1) the two 900 GB disks were built into a RAID 1 array for the system disk (as a reminder: on this controller RAID 0 needs at least 1 disk, RAID 1 at least 2, RAID 10 at least 4, and RAID 5 at least 3); 2) two of the 4 TB disks were built into RAID 1, mounted at /data1 and /data2 for big-data log storage; 3) the other ten 4 TB disks were given no RAID array and no partitions at install time. The plan was to log in after installation and format all ten from the command line, mounting them at /data3, /data4, /data5, /data6, /data7, /data8, /data9, /data10, /data11, and /data12 as data disks for the big-data workload: ext4, mounted by UUID, with the mount options noatime,nobarrier.
After installation, logging in and running "fdisk -l" showed only the 4 disks that were in RAID arrays; the other 10 physical disks did not appear. (The disks default to RAID mode, so a disk that is not part of a RAID array is not recognized by the OS; the disk mode can be switched to IDE or AHCI in the BIOS.)
[root@data-node01 ~]# fdisk -l
Disk /dev/sdb: 2001.1 GB, 2001111154688 bytes
255 heads, 63 sectors/track, 243287 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004a319
Device Boot Start End Blocks Id System
/dev/sdb1 1 243288 1954208768 83 Linux
Disk /dev/sdd: 4000.2 GB, 4000225165312 bytes
255 heads, 63 sectors/track, 486333 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdf: 4000.2 GB, 4000225165312 bytes
255 heads, 63 sectors/track, 486333 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sde: 4000.2 GB, 4000225165312 bytes
255 heads, 63 sectors/track, 486333 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdh: 4000.2 GB, 4000225165312 bytes
255 heads, 63 sectors/track, 486333 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sda: 899.5 GB, 899527213056 bytes
255 heads, 63 sectors/track, 109361 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00026ac4
Device Boot Start End Blocks Id System
/dev/sda1 * 1 52 409600 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 52 4229 33554432 82 Linux swap / Solaris
/dev/sda3 4229 109362 844479488 83 Linux
Disk /dev/sdc: 1999.1 GB, 1999114010624 bytes
255 heads, 63 sectors/track, 243045 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006d390
Device Boot Start End Blocks Id System
/dev/sdc1 1 243046 1952258048 83 Linux
This calls for MegaCli, a tool for managing and maintaining hardware RAID. It exposes everything about the current RAID card: the card model, the array types, the state of each disk in the arrays, and so on. The steps taken are recorded below:
1) Download and install the MegaCLI tool
Download URL: https://pan.baidu.com/s/1TAGHjTA19ZR8MGODaqy7Mg
Extraction code: msbq
Download the archive to /usr/local/src:
[root@data-node01 ~]# cd /usr/local/src/
[root@data-node01 src]# ls
ibm_utl_sraidmr_megacli-8.00.48_linux_32-64.zip
[root@data-node01 src]# unzip ibm_utl_sraidmr_megacli-8.00.48_linux_32-64.zip
[root@data-node01 src]# cd linux/
[root@data-node01 linux]# ls
Lib_Utils-1.00-09.noarch.rpm MegaCli-8.00.48-1.i386.rpm
[root@data-node01 linux]# rpm -ivh Lib_Utils-1.00-09.noarch.rpm MegaCli-8.00.48-1.i386.rpm
Note: after installation the binary is at /opt/MegaRAID/MegaCli/MegaCli64. That directory is not on PATH, so the tool runs only from inside it (as ./MegaCli64); from any other directory the command is not found.
For convenience, either append /opt/MegaRAID/MegaCli to the system PATH variable, or (recommended) create symlinks like this:
[root@data-node01 linux]# ln -s /opt/MegaRAID/MegaCli/MegaCli64 /bin/MegaCli64
[root@data-node01 linux]# ln -s /opt/MegaRAID/MegaCli/MegaCli64 /sbin/MegaCli64
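The symlinks work because /bin and /sbin are on PATH, so the bare name MegaCli64 then resolves from any directory. A minimal sketch of that mechanic, using a scratch directory and a stub script in place of the real /opt/MegaRAID/MegaCli/MegaCli64 (all paths and the stub are illustrative only):

```shell
# Stand-in demo: a binary becomes runnable by bare name once a symlink
# to it lives in a directory that is on PATH.
tmp=$(mktemp -d)
mkdir -p "$tmp/opt" "$tmp/bin"
printf '#!/bin/sh\necho MegaCli stub\n' > "$tmp/opt/MegaCli64"   # stub for the real binary
chmod +x "$tmp/opt/MegaCli64"
ln -s "$tmp/opt/MegaCli64" "$tmp/bin/MegaCli64"   # mirrors: ln -s /opt/MegaRAID/MegaCli/MegaCli64 /bin/MegaCli64
export PATH="$tmp/bin:$PATH"
MegaCli64   # found via PATH, prints "MegaCli stub"
```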
2) Use MegaCli64 to inspect and configure the controller
First check the disk count. The output below shows 14 physical disks in total:
[root@data-node01 linux]# MegaCli64 -PDList -aALL | egrep 'Enclosure Device ID|Slot Number'
Enclosure Device ID: 32
Slot Number: 0
Enclosure Device ID: 32
Slot Number: 1
Enclosure Device ID: 32
Slot Number: 2
Enclosure Device ID: 32
Slot Number: 3
Enclosure Device ID: 32
Slot Number: 4
Enclosure Device ID: 32
Slot Number: 5
Enclosure Device ID: 32
Slot Number: 6
Enclosure Device ID: 32
Slot Number: 7
Enclosure Device ID: 32
Slot Number: 8
Enclosure Device ID: 32
Slot Number: 9
Enclosure Device ID: 32
Slot Number: 10
Enclosure Device ID: 32
Slot Number: 11
Enclosure Device ID: 32
Slot Number: 12
Enclosure Device ID: 32
Slot Number: 13
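The slot listing above can be reduced to a single disk count by counting the "Slot Number" lines; on the live host that is `MegaCli64 -PDList -aALL | grep -c 'Slot Number'`. A sketch of just the counting step, run against a small captured sample rather than a live controller (the sample file is illustrative):

```shell
# Count physical disks by counting "Slot Number" lines.
# Live system:  MegaCli64 -PDList -aALL | grep -c 'Slot Number'
cat > pdlist.sample <<'EOF'
Enclosure Device ID: 32
Slot Number: 0
Enclosure Device ID: 32
Slot Number: 1
Enclosure Device ID: 32
Slot Number: 2
EOF
grep -c 'Slot Number' pdlist.sample   # prints 3 for this sample; 14 on the server
```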
Next, view the detailed disk information:
[root@data-node01 linux]# MegaCli64 -PDList -aALL
Adapter #0
Enclosure Device ID: 32
Slot Number: 0
Enclosure position: 0
Device Id: 0
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SATA
Raw Size: 3.638 TB [0x1d1c0beb0 Sectors]
Non Coerced Size: 3.637 TB [0x1d1b0beb0 Sectors]
Coerced Size: 3.637 TB [0x1d1b00000 Sectors]
Firmware state: Online, Spun Up
SAS Address(0): 0x500056b3983fbac0
Connected Port Number: 0(path0)
Inquiry Data: 4837K2DVF7DETOSHIBA MG04ACA400NY FK3D
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :30C (86.00 F)
...........
...........
Enclosure Device ID: 32
Slot Number: 3
Enclosure position: 0
Device Id: 3
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SATA
Raw Size: 3.638 TB [0x1d1c0beb0 Sectors]
Non Coerced Size: 3.637 TB [0x1d1b0beb0 Sectors]
Coerced Size: 3.637 TB [0x1d1b00000 Sectors]
Firmware state: unconfigured(good), Spun Up
SAS Address(0): 0x500056b3983fbac3
Connected Port Number: 0(path0)
Inquiry Data: 4838K1VCF7DETOSHIBA MG04ACA400NY FK3D
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :30C (86.00 F)
...........
...........
In the output above, watch the "Firmware state" field:
Online means the disk is already part of a RAID array;
Unconfigured(good) means the disk is not in any RAID array but is otherwise healthy.
So the first two 4 TB disks are in RAID, the remaining ten 4 TB disks are not, and the two 900 GB disks are in RAID.
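Instead of scanning the full listing by eye, the Firmware state values can be tallied with sort and uniq; on the live host, pipe `MegaCli64 -PDList -aALL` through the same filter. A sketch against a captured three-disk sample (the sample file is illustrative):

```shell
# Tally Firmware state values: Online = already in an array,
# Unconfigured(good) = healthy but not configured into any array.
cat > states.sample <<'EOF'
Firmware state: Online, Spun Up
Firmware state: Unconfigured(good), Spun Up
Firmware state: Unconfigured(good), Spun Up
EOF
grep '^Firmware state' states.sample | sort | uniq -c
```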
--------------------------------------------------------------------------------------
The approach now:
use MegaCli to turn each of the ten 4 TB disks into a single-disk RAID 0 virtual drive, then format them as ext4 and mount them by UUID.
Build a RAID 0 array for disks 3 through 12 in turn, where:
-r0 creates a RAID 0 array; in [32:2], 32 is the Enclosure Device ID and 2 is the Slot Number;
WB Direct sets write-back caching with direct I/O.
[root@data-node01 linux]# MegaCli64 -CfgLdAdd -r0[32:2] WB Direct -a0
[root@data-node01 linux]# MegaCli64 -CfgLdAdd -r0[32:3] WB Direct -a0
[root@data-node01 linux]# MegaCli64 -CfgLdAdd -r0[32:4] WB Direct -a0
[root@data-node01 linux]# MegaCli64 -CfgLdAdd -r0[32:5] WB Direct -a0
[root@data-node01 linux]# MegaCli64 -CfgLdAdd -r0[32:6] WB Direct -a0
[root@data-node01 linux]# MegaCli64 -CfgLdAdd -r0[32:7] WB Direct -a0
[root@data-node01 linux]# MegaCli64 -CfgLdAdd -r0[32:8] WB Direct -a0
[root@data-node01 linux]# MegaCli64 -CfgLdAdd -r0[32:9] WB Direct -a0
[root@data-node01 linux]# MegaCli64 -CfgLdAdd -r0[32:10] WB Direct -a0
[root@data-node01 linux]# MegaCli64 -CfgLdAdd -r0[32:11] WB Direct -a0
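The ten -CfgLdAdd commands differ only in the slot number, so they can be generated with a loop. A sketch that echoes each command for review; removing the echo would run them for real (enclosure ID 32 and slots 2-11 match this server):

```shell
# Generate the single-disk RAID 0 commands for slots 2..11.
# echo previews them; drop it to execute against the controller.
for slot in $(seq 2 11); do
    echo "MegaCli64 -CfgLdAdd -r0[32:${slot}] WB Direct -a0"
done
```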
Checking the disks again afterwards, every "Firmware state" now reads Online, i.e. all disks belong to RAID arrays.
[root@data-node01 linux]# MegaCli64 -PDList -aALL
And "fdisk -l" now shows all the physical disks:
[root@data-node01 linux]# fdisk -l
Disk /dev/sdb: 1993.4 GB, 1993414541312 bytes
255 heads, 63 sectors/track, 242352 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000cd8df
Device Boot Start End Blocks Id System
/dev/sdb1 1 242353 1946692608 83 Linux
Disk /dev/sda: 899.5 GB, 899527213056 bytes
255 heads, 63 sectors/track, 109361 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000b231b
Device Boot Start End Blocks Id System
/dev/sda1 * 1 52 409600 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 52 4229 33554432 82 Linux swap / Solaris
/dev/sda3 4229 109362 844479488 83 Linux
Disk /dev/sdc: 2006.8 GB, 2006810624000 bytes
255 heads, 63 sectors/track, 243980 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000010e5
Device Boot Start End Blocks Id System
/dev/sdc1 1 243981 1959774208 83 Linux
Disk /dev/sdd: 4000.2 GB, 4000225165312 bytes
255 heads, 63 sectors/track, 486333 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sde: 4000.2 GB, 4000225165312 bytes
255 heads, 63 sectors/track, 486333 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdf: 4000.2 GB, 4000225165312 bytes
255 heads, 63 sectors/track, 486333 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdg: 4000.2 GB, 4000225165312 bytes
255 heads, 63 sectors/track, 486333 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdh: 4000.2 GB, 4000225165312 bytes
255 heads, 63 sectors/track, 486333 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdi: 4000.2 GB, 4000225165312 bytes
255 heads, 63 sectors/track, 486333 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdj: 4000.2 GB, 4000225165312 bytes
255 heads, 63 sectors/track, 486333 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdk: 4000.2 GB, 4000225165312 bytes
255 heads, 63 sectors/track, 486333 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdl: 4000.2 GB, 4000225165312 bytes
255 heads, 63 sectors/track, 486333 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdm: 4000.2 GB, 4000225165312 bytes
255 heads, 63 sectors/track, 486333 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
---------------------------------------------------------------------------
Format the ten disks with the script below (note that mkfs runs on the whole-disk devices; no partitions are created):
[root@data-node01 ~]# cat disk.list
/dev/sdd
/dev/sde
/dev/sdf
/dev/sdg
/dev/sdh
/dev/sdi
/dev/sdj
/dev/sdk
/dev/sdl
/dev/sdm
[root@data-node01 ~]# cat mkfs.disk.sh
#!/bin/bash
# Format every disk in /root/disk.list as ext4, in parallel.
# "echo y" answers the mkfs prompt that appears when formatting
# a whole-disk device rather than a partition.
for i in `cat /root/disk.list`
do
echo 'y' | /sbin/mkfs.ext4 $i &
done
wait  # don't exit until all background mkfs jobs have finished
[root@data-node01 ~]# sh -x mkfs.disk.sh
Next, set up the mounts. Create the mount points:
[root@data-node01 ~]# /bin/mkdir /data{3,4,5,6,7,8,9,10,11,12}
Look up the UUIDs of the ten disks (either command works):
[root@data-node01 ~]# blkid
[root@data-node01 ~]# ls -l /dev/disk/by-uuid/
[root@data-node01 ~]# blkid
/dev/sda3: UUID="964bec23-58b4-4a6b-a96f-f2e3222fc096" TYPE="ext4"
/dev/sdc1: UUID="9acdef9d-fbe1-4d9f-82ff-9e9920df868e" TYPE="ext4"
/dev/sdb1: UUID="696f5971-4c7c-4312-a1c3-a20fc3772299" TYPE="ext4"
/dev/sda1: UUID="ee26ded4-8334-4a0f-84bc-cc97d103714e" TYPE="ext4"
/dev/sda2: UUID="316cb693-05fe-473d-a2ff-3c3c0e0e6c3d" TYPE="swap"
/dev/sdd: UUID="f92e73be-526d-4d84-8f5b-95273ebbd352" TYPE="ext4"
/dev/sde: UUID="0a6404ea-60dc-4e3e-b542-48a313e149dd" TYPE="ext4"
/dev/sdf: UUID="05891dd0-256a-4f7f-a2de-f1f858eb2a95" TYPE="ext4"
/dev/sdg: UUID="77df1f77-0168-430e-96a3-f2eb44e15242" TYPE="ext4"
/dev/sdh: UUID="e1f11339-ad68-44a1-a600-066094439ed2" TYPE="ext4"
/dev/sdi: UUID="628f1658-d8f9-4573-a124-0712b0c29e90" TYPE="ext4"
/dev/sdj: UUID="9ee336b0-3960-4cfd-9cb6-c92535f45ebd" TYPE="ext4"
/dev/sdk: UUID="bb6c1e2d-41b8-407d-b6df-df2e3ffc9c52" TYPE="ext4"
/dev/sdl: UUID="9ca6aecf-e0f1-4338-a7eb-e8a1d2f3b017" TYPE="ext4"
/dev/sdm: UUID="a5bf2880-4981-462a-8042-c6e913627c3d" TYPE="ext4"
Pick out just these ten disks' UUIDs:
[root@data-node01 ~]# blkid|tail -10|awk '{print $2}'
UUID="f92e73be-526d-4d84-8f5b-95273ebbd352"
UUID="0a6404ea-60dc-4e3e-b542-48a313e149dd"
UUID="05891dd0-256a-4f7f-a2de-f1f858eb2a95"
UUID="77df1f77-0168-430e-96a3-f2eb44e15242"
UUID="e1f11339-ad68-44a1-a600-066094439ed2"
UUID="628f1658-d8f9-4573-a124-0712b0c29e90"
UUID="9ee336b0-3960-4cfd-9cb6-c92535f45ebd"
UUID="bb6c1e2d-41b8-407d-b6df-df2e3ffc9c52"
UUID="9ca6aecf-e0f1-4338-a7eb-e8a1d2f3b017"
UUID="a5bf2880-4981-462a-8042-c6e913627c3d"
[root@data-node01 ~]# blkid|tail -10|awk '{print $2}'|sed 's/"//g'
UUID=f92e73be-526d-4d84-8f5b-95273ebbd352
UUID=0a6404ea-60dc-4e3e-b542-48a313e149dd
UUID=05891dd0-256a-4f7f-a2de-f1f858eb2a95
UUID=77df1f77-0168-430e-96a3-f2eb44e15242
UUID=e1f11339-ad68-44a1-a600-066094439ed2
UUID=628f1658-d8f9-4573-a124-0712b0c29e90
UUID=9ee336b0-3960-4cfd-9cb6-c92535f45ebd
UUID=bb6c1e2d-41b8-407d-b6df-df2e3ffc9c52
UUID=9ca6aecf-e0f1-4338-a7eb-e8a1d2f3b017
UUID=a5bf2880-4981-462a-8042-c6e913627c3d
Put the ten UUIDs into the boot-time mount table /etc/fstab. First prepare a template holding the mount points and options:
[root@data-node01 ~]# cat /root/a.txt
/data3 ext4 noatime,nobarrier 0 0
/data4 ext4 noatime,nobarrier 0 0
/data5 ext4 noatime,nobarrier 0 0
/data6 ext4 noatime,nobarrier 0 0
/data7 ext4 noatime,nobarrier 0 0
/data8 ext4 noatime,nobarrier 0 0
/data9 ext4 noatime,nobarrier 0 0
/data10 ext4 noatime,nobarrier 0 0
/data11 ext4 noatime,nobarrier 0 0
/data12 ext4 noatime,nobarrier 0 0
[root@data-node01 ~]# blkid|tail -10|awk '{print $2}'|sed 's/"//g'|paste - /root/a.txt >> /etc/fstab    # "paste -" merges stdin with the file line by line
The lines appended to /etc/fstab are:
UUID=f92e73be-526d-4d84-8f5b-95273ebbd352 /data3 ext4 noatime,nobarrier 0 0
UUID=0a6404ea-60dc-4e3e-b542-48a313e149dd /data4 ext4 noatime,nobarrier 0 0
UUID=05891dd0-256a-4f7f-a2de-f1f858eb2a95 /data5 ext4 noatime,nobarrier 0 0
UUID=77df1f77-0168-430e-96a3-f2eb44e15242 /data6 ext4 noatime,nobarrier 0 0
UUID=e1f11339-ad68-44a1-a600-066094439ed2 /data7 ext4 noatime,nobarrier 0 0
UUID=628f1658-d8f9-4573-a124-0712b0c29e90 /data8 ext4 noatime,nobarrier 0 0
UUID=9ee336b0-3960-4cfd-9cb6-c92535f45ebd /data9 ext4 noatime,nobarrier 0 0
UUID=bb6c1e2d-41b8-407d-b6df-df2e3ffc9c52 /data10 ext4 noatime,nobarrier 0 0
UUID=9ca6aecf-e0f1-4338-a7eb-e8a1d2f3b017 /data11 ext4 noatime,nobarrier 0 0
UUID=a5bf2880-4981-462a-8042-c6e913627c3d /data12 ext4 noatime,nobarrier 0 0
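One caveat: `blkid | tail -10` assumes the ten new filesystems are always the last ten lines of blkid output, which is ordering-dependent. A more explicit sketch pairs each device with its mount point and queries the UUID per device with `blkid -s UUID -o value`; here a stub `get_uuid` stands in for blkid so the sketch runs without real disks, and only three of the ten devices are shown:

```shell
# Emit fstab lines by pairing each device with its mount point.
# get_uuid is a stub; on the server use: blkid -s UUID -o value "$1"
get_uuid() { echo "uuid-for-${1##*/}"; }
n=3
for dev in /dev/sdd /dev/sde /dev/sdf; do   # full list: /dev/sdd .. /dev/sdm
    printf 'UUID=%s /data%d ext4 noatime,nobarrier 0 0\n' "$(get_uuid "$dev")" "$n"
    n=$((n+1))
done   # on the server, redirect: ... done >> /etc/fstab
```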
Finally, reboot the server with "reboot"; after it comes back up, check the disks and mounts to confirm the ten data disks are mounted. (Running `mount -a` before rebooting is a safer way to validate the new fstab entries first, since a bad line can stall the boot.)
[root@data-node01 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 793G 3.1G 750G 1% /
tmpfs 63G 0 63G 0% /dev/shm
/dev/sda1 380M 78M 282M 22% /boot
/dev/sdb1 1.8T 68M 1.7T 1% /data1
/dev/sdc1 1.8T 68M 1.8T 1% /data2
/dev/sdd 3.6T 68M 3.4T 1% /data3
/dev/sde 3.6T 68M 3.4T 1% /data4
/dev/sdf 3.6T 68M 3.4T 1% /data5
/dev/sdg 3.6T 68M 3.4T 1% /data6
/dev/sdh 3.6T 68M 3.4T 1% /data7
/dev/sdi 3.6T 68M 3.4T 1% /data8
/dev/sdj 3.6T 68M 3.4T 1% /data9
/dev/sdk 3.6T 68M 3.4T 1% /data10
/dev/sdl 3.6T 68M 3.4T 1% /data11
/dev/sdm 3.6T 68M 3.4T 1% /data12
[root@data-node01 ~]# mount
/dev/sda3 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
/dev/sdb1 on /data1 type ext4 (rw)
/dev/sdc1 on /data2 type ext4 (rw)
/dev/sdd on /data3 type ext4 (rw,noatime,nobarrier)
/dev/sde on /data4 type ext4 (rw,noatime,nobarrier)
/dev/sdf on /data5 type ext4 (rw,noatime,nobarrier)
/dev/sdg on /data6 type ext4 (rw,noatime,nobarrier)
/dev/sdh on /data7 type ext4 (rw,noatime,nobarrier)
/dev/sdi on /data8 type ext4 (rw,noatime,nobarrier)
/dev/sdj on /data9 type ext4 (rw,noatime,nobarrier)
/dev/sdk on /data10 type ext4 (rw,noatime,nobarrier)
/dev/sdl on /data11 type ext4 (rw,noatime,nobarrier)
/dev/sdm on /data12 type ext4 (rw,noatime,nobarrier)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
------------MegaCli command reference------------
1) Add a new disk (i.e. build a RAID array on it), here a RAID 0 array
[root@date-test ~]# MegaCli64 -CfgLdAdd -r0[32:5] WB Direct -a0
Notes:
r0: RAID 0, i.e. make this disk into a RAID 0 array
[32:5]: 32 is the Enclosure Device ID, 5 is the Slot Number; together they identify the disk
WB Direct: write-back cache, direct I/O
2) Add a disk that carries leftover configuration
[root@date-test ~]# MegaCli64 -cfgforeign -scan -a0
There are 1 foreign configuration(s) on controller 0.
Exit Code: 0x00
Note: the newly inserted disk already held data, so the controller reports a foreign configuration.
[root@date-test ~]# MegaCli64 -cfgforeign -clear -a0
Foreign configuration 0 is cleared on controller 0.
Exit Code: 0x00
[root@date-test ~]# MegaCli64 -cfgforeign -scan -a0
There is no foreign configuration on controller 0.
Exit Code: 0x00
Note: clear the foreign configuration, then scan again to confirm it is gone.
[root@date-test ~]# MegaCli64 -CfgLdAdd -r0[32:5] WB Direct -a0
Adapter 0: Created VD 1
Adapter 0: Configured the Adapter!!
Exit Code: 0x00
3) Check the Firmware state
[root@date-test ~]# MegaCli64 -PDList -aALL -Nolog|grep '^Firm'
Firmware state: Online, Spun Up
Firmware state: Online, Spun Up
Firmware state: Online, Spun Up
Firmware state: Online, Spun Up
Note: Online means the disk is part of a RAID array; unconfigured(good) means the disk is not in any RAID array but is healthy.
4) Disable JBOD mode
[root@date-test ~]# MegaCli64 -AdpSetProp -EnableJBOD -0 -aALL
Adapter 0: Set JBOD to Disable success.
Exit Code: 0x00
5) Check the RAID level and configuration of the arrays
[root@date-test ~]# MegaCli64 -LDInfo -LALL -aAll | grep RAID
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0
6) Check the RAID card information
[root@date-test ~]# MegaCli64 -AdpAllInfo -aALL
7) Check disk information
[root@date-test ~]# MegaCli64 -PDList -aALL
8) Check battery (BBU) information
[root@date-test ~]# MegaCli64 -AdpBbuCmd -aAll
9) View the RAID card log
[root@date-test ~]# MegaCli64 -FwTermLog -Dsply -aALL
10) Show the number of adapters
[root@date-test ~]# MegaCli64 -adpCount
11) Show the adapter time
[root@date-test ~]# MegaCli64 -AdpGetTime -aALL
12) Show all adapter configuration information
[root@date-test ~]# MegaCli64 -AdpAllInfo -aAll
13) Show all logical drive information
[root@date-test ~]# MegaCli64 -LDInfo -LALL -aAll
14) Show all physical drive information
[root@date-test ~]# MegaCli64 -PDList -aAll
15) Check the battery charging status
[root@date-test ~]# MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL |grep 'Charger Status'
16) Check the disk cache policy (any of the following)
[root@date-test ~]# MegaCli64 -LDGetProp -Cache -L0 -a0
[root@date-test ~]# MegaCli64 -LDGetProp -Cache -L1 -a0
[root@date-test ~]# MegaCli64 -LDGetProp -Cache -LALL -a0
[root@date-test ~]# MegaCli64 -LDGetProp -Cache -LALL -aALL
[root@date-test ~]# MegaCli64 -LDGetProp -DskCache -LALL -aALL
17) Set the disk cache policy
Policy names:
WT (write through)
WB (write back)
NORA (no read ahead)
RA (read ahead)
ADRA (adaptive read ahead)
Cached
Direct
Examples:
[root@date-test ~]# MegaCli64 -LDSetProp WT|WB|NORA|RA|ADRA -L0 -a0
or
[root@date-test ~]# MegaCli64 -LDSetProp -Cached|-Direct -L0 -a0
or
[root@date-test ~]# MegaCli64 -LDSetProp -EnDskCache|-DisDskCache -L0 -a0
18) Create a RAID 5 array from physical disks 2, 3 and 4, with physical disk 5 as its hot spare
[root@date-test ~]# MegaCli64 -CfgLdAdd -r5 [1:2,1:3,1:4] WB Direct -Hsp[1:5] -a0
19) Create a RAID 5 array from physical disks 2, 3 and 4, with no hot spare
[root@date-test ~]# MegaCli64 -CfgLdAdd -r5 [1:2,1:3,1:4] WB Direct -a0
20) Delete an array
[root@date-test ~]# MegaCli64 -CfgLdDel -L1 -a0
21) Add a disk to an array online
[root@date-test ~]# MegaCli64 -LDRecon -Start -r5 -Add -PhysDrv[1:4] -L1 -a0
22) After an array is created it runs an initialization pass; check its progress
[root@date-test ~]# MegaCli64 -LDInit -ShowProg -LALL -aALL
or with a dynamic text-mode display:
[root@date-test ~]# MegaCli64 -LDInit -ProgDsply -LALL -aALL
23) Check the background initialization progress of an array
[root@date-test ~]# MegaCli64 -LDBI -ShowProg -LALL -aALL
or with a dynamic text-mode display:
[root@date-test ~]# MegaCli64 -LDBI -ProgDsply -LALL -aALL
24) Make the 5th disk a global hot spare
[root@date-test ~]# MegaCli64 -PDHSP -Set [-EnclAffinity] [-nonRevertible] -PhysDrv[1:5] -a0
25) Make it a dedicated hot spare for a specific array
[root@date-test ~]# MegaCli64 -PDHSP -Set [-Dedicated [-Array1]] [-EnclAffinity] [-nonRevertible] -PhysDrv[1:5] -a0
26) Remove a global hot spare
[root@date-test ~]# MegaCli64 -PDHSP -Rmv -PhysDrv[1:5] -a0
27) Take a physical disk offline / bring it back online
[root@date-test ~]# MegaCli64 -PDOffline -PhysDrv [1:4] -a0
[root@date-test ~]# MegaCli64 -PDOnline -PhysDrv [1:4] -a0
28) Check the rebuild progress of a physical disk
[root@date-test ~]# MegaCli64 -PDRbld -ShowProg -PhysDrv [1:5] -a0
or with a dynamic text-mode display:
[root@date-test ~]# MegaCli64 -PDRbld -ProgDsply -PhysDrv [1:5] -a0
