I. Basic multipath configuration on Red Hat 6.2:
1. Run lsmod | grep dm_multipath to check whether multipath is present and working. If there is no output, the module is not loaded (or the packages are not installed); in that case install them with yum:
yum -y install device-mapper device-mapper-multipath
Then run multipath -ll to check the multipath status and confirm that the kernel module is loaded:
[root@liujing ~]# multipath -ll    # check the multipath status
Mar 10 19:18:28 | /etc/multipath.conf does not exist, blacklisting all devices.
Mar 10 19:18:28 | A sample multipath.conf file is located at
Mar 10 19:18:28 | /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf
Mar 10 19:18:28 | You can run /sbin/mpathconf to create or modify /etc/multipath.conf
Mar 10 19:18:28 | DM multipath kernel driver not loaded ----the DM module is not loaded
If the module is not loaded, use the following commands to initialize and start DM for the first time (or reboot the system):
# modprobe dm-multipath
# modprobe dm-round-robin
# service multipathd start
# multipath -v2
After initialization, run multipath -ll again to check whether the module loaded successfully:
[root@liujing ~]# multipath -ll
Mar 10 19:21:14 | /etc/multipath.conf does not exist, blacklisting all devices.
Mar 10 19:21:14 | A sample multipath.conf file is located at
Mar 10 19:21:14 | /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf
Mar 10 19:21:14 | You can run /sbin/mpathconf to create or modify /etc/multipath.conf
(The "DM multipath kernel driver not loaded" line no longer appears, which means the DM module has been loaded successfully.)
From the messages above we can see that the DM module is loaded, but there is no multipath.conf file under /etc/. The next step describes how to configure multipath.conf.
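As the log message above suggests, you can also let the mpathconf helper generate a starting /etc/multipath.conf from the sample instead of writing it by hand. A possible invocation on RHEL 6 (a sketch; check mpathconf --help for the options available in your version):
# mpathconf --enable --user_friendly_names y    # creates /etc/multipath.conf and turns on user_friendly_names
# service multipathd restart                    # let the daemon pick up the new file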
2. Configuring multipath:
Use vi to create the multipath configuration file at /etc/multipath.conf and add the minimal configuration multipath needs to work properly:
vi /etc/multipath.conf
blacklist {
devnode "^sda"
}
defaults {
user_friendly_names yes
path_grouping_policy multibus
failback immediate
no_path_retry fail
}
After editing, save the configuration and start the multipath service:
# /etc/init.d/multipathd start    # start the multipathd service
If the service fails to start, i.e. no OK is printed, it looks like this:
[root@liujing mapper]# service multipathd start
Starting multipathd daemon:    ----no OK is printed
Stopping and starting the service again resolves this:
[root@liujing mapper]# /etc/init.d/multipathd stop
Stopping multipathd daemon: [ OK ]
[root@localhost mapper]# /etc/init.d/multipathd start
Starting multipathd daemon: [ OK ] -----OK is printed; the service started normally
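Once the daemon starts cleanly, two follow-up steps are usually worth doing: enable the service at boot and dump the configuration the running daemon actually uses. A sketch, assuming the standard SysV tools on RHEL 6:
# chkconfig multipathd on        # start multipathd automatically at boot
# chkconfig --list multipathd    # verify the runlevels
# multipathd -k"show config"     # print the merged configuration (built-in defaults plus /etc/multipath.conf)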
Check the result with multipath -ll:
[root@liujing mapper]# multipath -ll
mpatha (360a9800064665072443469563477396c) dm-0 NETAPP,LUN ----a multipath device has been created for the LUN
size=3.5G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=4 status=active
|- 1:0:0:0 sdb 8:16 active ready running ----the two path devices under the multipath map, sdb and sde
`- 2:0:0:0 sde 8:64 active ready running
The /dev/mapper/ directory now contains two new entries, mpatha and mpathap1 (mpathap1 is the partition created on the device, see step 4 below):
[root@liujing mapper]# cd /dev/mapper/
[root@liujing mapper]# ls
control mpatha mpathap1
fdisk -l also lists the additional device entries.
Before multipath was configured:
[root@liujing~]# fdisk -l
Disk /dev/sda: 146.8 GB, 146815733760 bytes
255 heads, 63 sectors/track, 17849 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000a6cdd
Device Boot Start End Blocks Id System
/dev/sda1 * 1 26 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 26 287 2097152 82 Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3 287 17850 141071360 83 Linux
Disk /dev/sdb: 3774 MB, 3774873600 bytes
117 heads, 62 sectors/track, 1016 cylinders
Units = cylinders of 7254 * 512 = 3714048 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0xac956c3a
Device Boot Start End Blocks Id System
/dev/sdb1 1 1016 3685001 83 Linux
Partition 1 does not start on physical sector boundary.
Disk /dev/sde: 3774 MB, 3774873600 bytes
117 heads, 62 sectors/track, 1016 cylinders
Units = cylinders of 7254 * 512 = 3714048 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0xac956c3a
Device Boot Start End Blocks Id System
/dev/sde1 1 1016 3685001 83 Linux
Partition 1 does not start on physical sector boundary.
The two CNA (converged network adapter) ports, i.e. the two paths to the storage, each see the same LUN, presented as /dev/sdb and /dev/sde.
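You can confirm that sdb and sde really are two paths to the same LUN by comparing their WWIDs; one way, assuming the udev scsi_id helper at its usual RHEL 6 location, is:
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
# /lib/udev/scsi_id --whitelisted --device=/dev/sde
Both commands should print the same identifier, matching the WWID shown by multipath -ll (360a9800064665072443469563477396c in this output).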
After multipath is configured, /dev/mapper/mpatha and /dev/mapper/mpathap1 appear in addition:
[root@localhost mapper]# fdisk -l
Disk /dev/sda: 146.8 GB, 146815733760 bytes
255 heads, 63 sectors/track, 17849 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000a6cdd
Device Boot Start End Blocks Id System
/dev/sda1 * 1 26 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 26 287 2097152 82 Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3 287 17850 141071360 83 Linux
Disk /dev/sdb: 3774 MB, 3774873600 bytes
117 heads, 62 sectors/track, 1016 cylinders
Units = cylinders of 7254 * 512 = 3714048 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0xac956c3a
Device Boot Start End Blocks Id System
/dev/sdb1 1 1016 3685001 83 Linux
Partition 1 does not start on physical sector boundary.
Disk /dev/sde: 3774 MB, 3774873600 bytes
117 heads, 62 sectors/track, 1016 cylinders
Units = cylinders of 7254 * 512 = 3714048 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0xac956c3a
Device Boot Start End Blocks Id System
/dev/sde1 1 1016 3685001 83 Linux
Partition 1 does not start on physical sector boundary.
Disk /dev/mapper/mpatha: 3774 MB, 3774873600 bytes
117 heads, 62 sectors/track, 1016 cylinders
Units = cylinders of 7254 * 512 = 3714048 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0xac956c3a
Device Boot Start End Blocks Id System
/dev/mapper/mpathap1 1 1016 3685001 83 Linux
Partition 1 does not start on physical sector boundary.
Disk /dev/mapper/mpathap1: 3773 MB, 3773441024 bytes
255 heads, 63 sectors/track, 458 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Alignment offset: 1024 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/mpathap1 doesn't contain a valid partition table
# multipath -F     # flush (remove) the existing multipath maps; the two mapper devices disappear
# multipath -v2    # scan and recreate the maps; the devices appear again
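If you only want to preview which maps multipath would create without actually creating them, a dry run can be used (assuming the -d option of this multipath version):
# multipath -v2 -d    # list the maps that would be created, without touching device-mapper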
3. Basic operations on multipath disks
To work with the disks created by the multipath software, operate directly on the devices under /dev/mapper/.
Before partitioning a disk created by the multipath software, it is best to run pvcreate once:
# pvcreate /dev/mapper/mpatha
# fdisk /dev/mapper/mpatha    # when partitioning, use the /dev/mapper/mpatha path
fdisk may report an error when you save (write) the partition table on a multipath device; this error can be ignored.
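If the partition device (mpathap1) does not appear under /dev/mapper/ after writing the partition table, re-reading the partition mappings usually brings it up. A sketch, assuming the kpartx tool that ships with device-mapper-multipath:
# kpartx -a /dev/mapper/mpatha    # add partition mappings for the multipath device
# ls -l /dev/mapper/              # mpathap1 should now be listed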
[root@liujing mnt]# ls -l /dev/mapper/
total 0
crw-rw----. 1 root root 10, 58 Mar 10 19:10 control
lrwxrwxrwx. 1 root root 7 Mar 10 20:28 mpatha -> ../dm-0
lrwxrwxrwx. 1 root root 7 Mar 10 20:33 mpathap1 -> ../dm-1
The mpathap1 entry is the partition we created on the multipath disk.
# mkfs.ext4 /dev/mapper/mpathap1    # format the mpathap1 partition as ext4
# mount /dev/mapper/mpathap1 /mnt/    # mount the mpathap1 partition
Use /dev/mapper/mpathap1 when formatting and mounting.
4. Partitioning the disk:
As mentioned above, use /dev/mapper/mpatha when partitioning:
[root@liujing~]# fdisk /dev/mapper/mpatha
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xac956c3a.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n    ------------------------ create a new partition
Command action
e extended
p primary partition (1-4)
p    ----------------------------- primary partition
Partition number (1-4): 1
First cylinder (1-1016, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1016, default 1016):
Using default value 1016
Command (m for help): w    --------------------- write the partition table (save)
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Note: if two nodes are attached to the same LUN, on the second node you only need to run fdisk on the device again and write (w) the partition table so it is re-read; there is no need to create (n) the partition again.
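An alternative, non-interactive way to make the second node pick up the new partition (a sketch, assuming the same device-mapper-multipath tools are installed there) is:
# multipath -r                    # reload the multipath maps
# kpartx -a /dev/mapper/mpatha    # create the partition mapping so mpathap1 appears
# ls -l /dev/mapper/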
5. Formatting:
[root@liujing ~]# mkfs.ext4 /dev/mapper/mpathap1
mke2fs 1.41.12 (17-May-2010)
/dev/sdd1 alignment is offset by 1024 bytes.
This may result in very poor performance, (re)-partitioning suggested.
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1 blocks, Stripe width=16 blocks
230608 inodes, 921250 blocks
46062 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=943718400
29 block groups
32768 blocks per group, 32768 fragments per group
7952 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 33 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
6. Mounting /dev/mapper/mpathap1 on /mnt:
[root@liujing ~]# mount /dev/mapper/mpathap1 /mnt
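To make this mount persistent across reboots you can add an /etc/fstab entry; the _netdev option is a common choice for SAN-backed devices so the mount is deferred until the storage stack is up (adjust the mount point and options to your environment):
/dev/mapper/mpathap1    /mnt    ext4    defaults,_netdev    0 0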
II. Advanced multipath configuration
All of the configuration so far has relied on multipath's defaults, for example the names of the mapped devices and the load-balancing method. Can multipath instead be configured the way we define it ourselves? The answer is yes.
1. Configuring the multipath.conf file
The next step is to edit the /etc/multipath.conf configuration file.
multipath.conf consists mainly of three parts: blacklist, multipaths, and devices.
The blacklist section:
blacklist {
devnode "^sda"
}
The multipaths section:
multipaths {
multipath {
wwid ****************    # this value can be obtained with multipath -v3
alias iscsi-dm0    # alias of the mapped device; any name can be used
path_grouping_policy multibus    # path grouping policy
path_checker tur    # method used to determine the path state
path_selector "round-robin 0"    # algorithm that selects the path for the next I/O operation
}
}
The devices section:
devices {
device {
vendor "iSCSI-Enterprise" #廠商名稱
product "Virtual disk" #產品型號
path_grouping_policy multibus #默認的路徑組策略
getuid_callout "/sbin/scsi_id -g -u -s /block/%n" #獲得唯一設備號使用的默認程序
prio_callout "/sbin/acs_prio_alua %d" #獲取有限級數值使用的默認程序
path_checker readsector0 #決定路徑狀態的方法
path_selector "round-robin 0" #選擇那條路徑進行下一個IO操作的方法
failback immediate #故障恢復的模式
no_path_retry queue #在disable queue之前系統嘗試使用失效路徑的次數的數值
rr_min_io 100 #在當前的用戶組中,在切換到另外一條路徑之前的IO請求的數目
}
}
The standard documentation describes these attributes as follows:
wwid
    Specifies the WWID of the multipath device to which the multipath attributes apply. This parameter is mandatory for this section of the multipath.conf file.
alias
    Specifies the symbolic name for the multipath device to which the multipath attributes apply. If you are using user_friendly_names, do not set this value to mpathn; this may conflict with an automatically assigned user friendly name and give you incorrect device node names.
path_grouping_policy
    Specifies the default path grouping policy: failover, multibus, group_by_serial, group_by_prio, or group_by_node_name.
path_selector
    Specifies the default algorithm used to determine which path to use for the next I/O operation, for example "round-robin 0".
failback
    Specifies how path group failback is handled: immediate, manual, or a number of seconds to wait before failing back.
prio
    Specifies the default function used to obtain a path priority value.
no_path_retry
    Specifies the number of times the system should attempt to use a failed path before disabling queueing; it can also be set to fail (fail immediately) or queue (keep queueing indefinitely).
rr_min_io
    Specifies the number of I/O requests to route to a path before switching to the next path in the current path group. This setting is only for systems running kernels older than 2.6.31; newer systems should use rr_min_io_rq. The default value is 1000.
rr_min_io_rq
    Specifies the number of I/O requests to route to a path before switching to the next path in the current path group, using request-based device-mapper-multipath. This setting should be used on systems running current kernels. On systems running kernels older than 2.6.31, use rr_min_io. The default value is 1.
rr_weight
    If set to priorities, then instead of sending rr_min_io requests to a path before calling path_selector to choose the next path, the number of requests to send is determined by rr_min_io times the path's priority, as determined by the prio function. If set to uniform, all path weights are equal.
flush_on_last_del
    If set to yes, then multipath will disable queueing when the last path to a device has been deleted.
A complete advanced configuration from my own environment looks like this:
[root@liujing ~]# vi /etc/multipath.conf
blacklist {
devnode "^sda"
}
multipaths {
multipath {
wwid 360a98000646650724434697454546156
alias mpathb_fcoe
path_grouping_policy multibus
#path_checker "directio"
prio "random"
path_selector "round-robin 0"
}
}
devices {
device {
vendor "NETAPP"
product "LUN"
getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
#path_checker "directio"
#path_selector "round-robin 0"
failback immediate
no_path_retry fail
}
}
The wwid, vendor, product, and getuid_callout values can all be obtained with the multipath -v3 command. If an alias is defined for a wwid in /etc/multipath.conf, that alias overrides the automatically assigned name.
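After editing /etc/multipath.conf, the existing maps have to be rebuilt for the new alias to take effect. A possible sequence (a sketch; unmount the filesystem first if the device is in use):
# /etc/init.d/multipathd restart    # make the daemon reread /etc/multipath.conf
# multipath -F                      # flush the existing maps
# multipath -v2                     # recreate them with the new settings
# multipath -ll                     # the device should now appear under its alias (mpathb_fcoe)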
III. Load-balancing test:
You can use dd to read and write the device while watching the I/O with iostat in another terminal, to see which path the traffic actually goes through.
dd command: dd if=/dev/zero of=/mnt/1Gfile bs=8k count=131072. Since the disk was mounted at /mnt above, writing into /mnt exercises the multipath device directly.
To write to the disk repeatedly, a loop like the following can be used:
[root@liujing ~]# for ((i=1;i<=5;i++));do dd if=/dev/zero of=/mnt/1Gfile bs=8k count=131072 2>&1|grep MB;done;    --- repeats the 1 GB write 5 times; adjust the count to your test needs
In another console, run iostat 2 10 to watch the I/O statistics:
In the iostat output, sdc and sdd (the two path devices in this run) both carry traffic, spread evenly across the two paths, so load balancing is working.
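A read-only variant of the same test, which bypasses the page cache and leaves the filesystem alone, can be run directly against the multipath partition; a sketch assuming GNU dd (iflag=direct) and the sysstat iostat, with the path device names from the multipath -ll output above:
# dd if=/dev/mapper/mpathap1 of=/dev/null bs=1M count=1024 iflag=direct    # 1 GB direct read from the multipath partition
# iostat -xd sdb sde 2 10                                                  # per-path throughput of the two path devices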
IV. Path redundancy (failover) test
Bring down the port of one of the paths; all traffic switches over to the other path.
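One way to simulate a path failure from the host side, without touching the switch or cabling, is to offline one of the SCSI path devices through sysfs (a sketch; the state attribute is standard sysfs, but only do this on a test system):
# echo offline > /sys/block/sdb/device/state    # fail one path
# multipath -ll                                 # the sdb path is marked faulty; I/O continues on sde
# echo running > /sys/block/sdb/device/state    # restore the path
# multipath -ll                                 # both paths are active again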
Reposted from:
Linux下多路徑multipath配置 - 李棟94 - 博客園: http://www.cnblogs.com/lidong94/p/6073304.html
LINUX下多路徑(multi-path)介紹及使用 - 王者之根 - 51CTO技術博客: http://rootking.blog.51cto.com/2619611/476212