1. Introduction
SAN (Storage Area Network) is a high-speed network that carries data between computers and storage systems. The common classification is into FC-SAN and IP-SAN: FC-SAN transports the SCSI protocol over Fibre Channel, while IP-SAN transports SCSI over TCP/IP, i.e. over an ordinary IP network. The storage device side is one or more disk units holding computer data, typically a disk array; major vendors include EMC and Hitachi.
iSCSI (Internet SCSI), developed by IBM, is a SCSI command set for hardware devices that runs on top of the IP protocol. It allows the SCSI protocol to run over IP networks, so that storage traffic can be routed over networks such as high-speed Gigabit Ethernet. iSCSI is a storage technology that combines the existing SCSI interface with Ethernet, letting servers exchange data with storage devices over an IP network.
iSCSI is a TCP/IP-based protocol used to establish and manage connections between IP storage devices, hosts, and clients, and to build storage area networks (SANs). A SAN makes it possible to apply the SCSI protocol to high-speed data networks, transferring data at the block level between multiple storage networks. SCSI follows a client/server model; its traditional environment is devices that sit close to each other, connected by a SCSI bus.
The main job of iSCSI is to encapsulate and reliably transport bulk data between a host system (the initiator) and a storage device (the target) over a TCP/IP network.
The topology of a complete iSCSI system is shown below:
Put simply, iSCSI wraps SCSI commands in TCP/IP and transmits them over Ethernet. It lets the SCSI protocol be carried and executed over IP networks, enabling data access on networks such as high-speed Gigabit Ethernet and making data transfer and management possible across the network. Compared with a Fibre Channel FC-SAN, a SAN built on iSCSI offers a very good price/performance ratio.
iSCSI is an end-to-end session-layer protocol that defines a mapping of SCSI onto TCP/IP (see the figure below): the initiator encapsulates SCSI commands and data into iSCSI protocol data units and hands them down to the TCP layer, where they are finally packed into IP packets and sent across the IP network; on arrival at the target they are unwrapped back into SCSI commands and data, which the storage controller dispatches to the designated drive. SCSI commands and data thus travel transparently over IP, seamlessly merging the existing SCSI storage protocol with the TCP/IP network protocol. In this article, the initiator is called the client and the target is called the server, for ease of understanding.
2. Configuration Example
Requirements:
The company had previously bought six machines on Alibaba Cloud, with data disks of varying sizes. After our own IDC was built, the business was migrated from Alibaba Cloud onto IDC machines. To avoid wasting the cloud machines, the plan is to turn five of them into IP-SAN shared storage, have the remaining machine import those five SAN shares, combine them with its own disk into an LVM logical volume, and use the result as a single backup disk.
1) Server information:
ip address      data disk   hostname       OS version
192.168.10.17   200G        ipsan-node01   CentOS 7.3
192.168.10.18   500G        ipsan-node02   CentOS 7.3
192.168.10.5    500G        ipsan-node03   CentOS 7.3
192.168.10.6    200G        ipsan-node04   CentOS 7.3
192.168.10.20   100G        ipsan-node05   CentOS 7.3
192.168.10.10   100G        ipsan-node06   CentOS 7.3

The first five nodes act as the IP-SAN storage servers. The sixth node is the client: it imports the storage shared by the first five, then builds an LVM logical volume from those five imported IP-SAN devices plus its own 100G disk, forming one large storage pool.

First format the data disk on each of the six nodes (all six mount their data disk on /data, so unmount /data first, then format the disk). Then stop the iptables/firewalld service on every node (if the firewall must stay on, open port 3260). SELinux must be disabled as well!
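The per-node preparation just described (stop the firewall, disable SELinux, unmount /data, reformat the data disk) can be sketched as a small script. This is a dry run that only prints each command, under the assumption from the table above that every node's data disk is /dev/vdb1 mounted on /data; pipe the output to sh to actually execute it.

```shell
# Dry-run sketch of the preparation steps for one node.
prep_commands() {
    printf '%s\n' \
        "systemctl stop firewalld.service" \
        "systemctl disable firewalld.service" \
        "setenforce 0" \
        "umount /data" \
        "mkfs.ext4 /dev/vdb1"
}

prep_commands       # review, then: prep_commands | sh
```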
2) Server-side steps (ipsan-node01, ipsan-node02, ipsan-node03, ipsan-node04, ipsan-node05)
Stop the iptables/firewalld service:
[root@ipsan-node01 ~]# systemctl stop firewalld.service
[root@ipsan-node01 ~]# systemctl disable firewalld.service

Disable SELinux:
[root@ipsan-node01 ~]# setenforce 0
setenforce: SELinux is disabled
[root@ipsan-node01 ~]# getenforce
Disabled
[root@ipsan-node01 ~]# cat /etc/sysconfig/selinux
.......
SELINUX=disabled

Unmount the data disk previously mounted on /data and reformat it:
[root@ipsan-node01 ~]# fdisk -l

Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0008d207

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    83886079    41942016   83  Linux

Disk /dev/vdb: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xda936a6f

   Device Boot      Start         End      Blocks   Id  System
/dev/vdb1            2048   419430399   209714176   83  Linux

[root@ipsan-node01 ~]# umount /data
[root@ipsan-node01 ~]# mkfs.ext4 /dev/vdb1

Install and configure the iSCSI target service:
[root@ipsan-node01 ~]# yum install -y scsi-target-utils

Enable and start the tgtd service; "ss -tnl" shows port 3260 listening:
[root@ipsan-node01 ~]# systemctl enable tgtd
[root@ipsan-node01 ~]# systemctl start tgtd
[root@ipsan-node01 ~]# systemctl status tgtd
[root@ipsan-node01 ~]# ss -tnl
State      Recv-Q Send-Q     Local Address:Port     Peer Address:Port
LISTEN     0      128        *:22                   *:*
LISTEN     0      128        *:3260                 *:*
LISTEN     0      128        :::3260                :::*

Using the server-side management tool tgtadm:

Create a target with id 1 and name iqn.2018-02.com.node01.san:1:
[root@ipsan-node01 ~]# tgtadm -L iscsi -o new -m target -t 1 -T iqn.2018-02.com.node01.san:1

Show all targets:
[root@ipsan-node01 ~]# tgtadm -L iscsi -o show -m target

Add a new LUN, number 1, to the target with id 1, and expose it to initiators.
/dev/vdb1 is the path of a block device; the block device could also be a RAID or LVM device. LUN 0 is reserved by the system.
[root@ipsan-node01 ~]# tgtadm -L iscsi -o new -m logicalunit -t 1 -l 1 -b /dev/vdb1
[root@ipsan-node01 ~]# tgtadm -L iscsi -o show -m target

Define the target's host-based access control list; 192.168.10.0/24 is the range of initiator clients allowed to access this target:
[root@ipsan-node01 ~]# tgtadm -L iscsi -o bind -m target -t 1 -I 192.168.10.0/24

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Tip:
If this node has another block device, /dev/vdb2, to add to the SAN, add another LUN, number 2, to the same target id 1:
# tgtadm -L iscsi -o new -m logicalunit -t 1 -l 2 -b /dev/vdb2
# tgtadm -L iscsi -o show -m target

Remove the target's host-based access control list entry:
# tgtadm -L iscsi -o unbind -m target -t 1 -I 192.168.10.0/24
# tgtadm -L iscsi -o show -m target

Delete the target and its LUNs:
# tgtadm -L iscsi -o delete -m target -t 1
# tgtadm -L iscsi -o show -m target
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The other four nodes are set up exactly the same way:
they likewise need iptables/firewalld stopped and SELinux disabled,
and their data disks unmounted from /data and reformatted.

The tgtadm steps on the remaining four nodes are as follows:
[root@ipsan-node02 ~]# yum -y install scsi-target-utils
[root@ipsan-node02 ~]# systemctl enable tgtd
[root@ipsan-node02 ~]# systemctl start tgtd
[root@ipsan-node02 ~]# systemctl status tgtd
[root@ipsan-node02 ~]# ss -tnl
[root@ipsan-node02 ~]# tgtadm -L iscsi -o new -m target -t 1 -T iqn.2018-02.com.node02.san:1
[root@ipsan-node02 ~]# tgtadm -L iscsi -o show -m target
[root@ipsan-node02 ~]# tgtadm -L iscsi -o new -m logicalunit -t 1 -l 1 -b /dev/vdb1
[root@ipsan-node02 ~]# tgtadm -L iscsi -o show -m target
[root@ipsan-node02 ~]# tgtadm -L iscsi -o bind -m target -t 1 -I 192.168.10.0/24
[root@ipsan-node02 ~]# tgtadm -L iscsi -o show -m target

[root@ipsan-node03 ~]# yum -y install scsi-target-utils
[root@ipsan-node03 ~]# systemctl enable tgtd
[root@ipsan-node03 ~]# systemctl start tgtd
[root@ipsan-node03 ~]# systemctl status tgtd
[root@ipsan-node03 ~]# ss -tnl
[root@ipsan-node03 ~]# tgtadm -L iscsi -o new -m target -t 1 -T iqn.2018-02.com.node03.san:1
[root@ipsan-node03 ~]# tgtadm -L iscsi -o show -m target
[root@ipsan-node03 ~]# tgtadm -L iscsi -o new -m logicalunit -t 1 -l 1 -b /dev/vdb1
[root@ipsan-node03 ~]# tgtadm -L iscsi -o show -m target
[root@ipsan-node03 ~]# tgtadm -L iscsi -o bind -m target -t 1 -I 192.168.10.0/24
[root@ipsan-node03 ~]# tgtadm -L iscsi -o show -m target

[root@ipsan-node04 ~]# yum -y install scsi-target-utils
[root@ipsan-node04 ~]# systemctl enable tgtd
[root@ipsan-node04 ~]# systemctl start tgtd
[root@ipsan-node04 ~]# systemctl status tgtd
[root@ipsan-node04 ~]# ss -tnl
[root@ipsan-node04 ~]# tgtadm -L iscsi -o new -m target -t 1 -T iqn.2018-02.com.node04.san:1
[root@ipsan-node04 ~]# tgtadm -L iscsi -o show -m target
[root@ipsan-node04 ~]# tgtadm -L iscsi -o new -m logicalunit -t 1 -l 1 -b /dev/vdb1
[root@ipsan-node04 ~]# tgtadm -L iscsi -o show -m target
[root@ipsan-node04 ~]# tgtadm -L iscsi -o bind -m target -t 1 -I 192.168.10.0/24
[root@ipsan-node04 ~]# tgtadm -L iscsi -o show -m target

[root@ipsan-node05 ~]# yum -y install scsi-target-utils
[root@ipsan-node05 ~]# systemctl enable tgtd
[root@ipsan-node05 ~]# systemctl start tgtd
[root@ipsan-node05 ~]# systemctl status tgtd
[root@ipsan-node05 ~]# ss -tnl
[root@ipsan-node05 ~]# tgtadm -L iscsi -o new -m target -t 1 -T iqn.2018-02.com.node05.san:1
[root@ipsan-node05 ~]# tgtadm -L iscsi -o show -m target
[root@ipsan-node05 ~]# tgtadm -L iscsi -o new -m logicalunit -t 1 -l 1 -b /dev/vdb1
[root@ipsan-node05 ~]# tgtadm -L iscsi -o show -m target
[root@ipsan-node05 ~]# tgtadm -L iscsi -o bind -m target -t 1 -I 192.168.10.0/24
[root@ipsan-node05 ~]# tgtadm -L iscsi -o show -m target

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Tip:
The five target servers above were configured on the command line, but a target can also be defined by editing a config file.
Method:
[root@ipsan-node01 ~]# cd /etc/tgt/
[root@ipsan-node01 tgt]# ls targets.conf
targets.conf
[root@ipsan-node01 tgt]# vim targets.conf      //add the following definition:
.......
<target iqn.2018-02.com.node01.san:1>
    backing-store /dev/vdb1
    initiator-address 192.168.10.0/24
</target>

Here backing-store is the shared device/partition and initiator-address is the allowed client address range (a single specific IP also works; for several specific IPs, write one initiator-address line per IP).

If the node has another block device, /dev/vdb2, to add to the SAN, append another definition:

<target iqn.2018-02.com.node01.san:2>
    backing-store /dev/vdb2
    initiator-address 192.168.10.0/24
</target>

Restart tgtd:
[root@ipsan-node01 tgt]# systemctl restart tgtd
[root@ipsan-node01 tgt]# tgtadm -L iscsi -o show -m target

The other four nodes are configured the same way.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
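Since all five nodes use the same stanza layout, the file-based configuration can be generated with a tiny helper. This is only a sketch: the function name make_target_stanza is made up here, and the IQN/device/network arguments follow the naming scheme used above.

```shell
# Hypothetical helper: print a targets.conf stanza for one shared device.
make_target_stanza() {
    iqn=$1; dev=$2; net=$3
    printf '<target %s>\n    backing-store %s\n    initiator-address %s\n</target>\n' \
        "$iqn" "$dev" "$net"
}

# e.g. print node01's definition (redirect into /etc/tgt/targets.conf to apply):
make_target_stanza iqn.2018-02.com.node01.san:1 /dev/vdb1 192.168.10.0/24
```

After appending the output to /etc/tgt/targets.conf, restart tgtd as shown above for it to take effect.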
3) Client-side steps (ipsan-node06)
Stop the iptables/firewalld service:
[root@ipsan-node06 ~]# systemctl stop firewalld.service
[root@ipsan-node06 ~]# systemctl disable firewalld.service

Disable SELinux:
[root@ipsan-node06 ~]# setenforce 0
setenforce: SELinux is disabled
[root@ipsan-node06 ~]# getenforce
Disabled
[root@ipsan-node06 ~]# cat /etc/sysconfig/selinux
.......
SELINUX=disabled

Unmount the data disk previously mounted on /data and reformat it:
[root@ipsan-node06 ~]# fdisk -l

Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0008e3b4

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    83886079    41942016   83  Linux

Disk /dev/vdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xf450445d

   Device Boot      Start         End      Blocks   Id  System
/dev/vdb1            2048   209715199   104856576   83  Linux

[root@ipsan-node06 ~]# umount /data
[root@ipsan-node06 ~]# mkfs.ext4 /dev/vdb1

Install the iscsi-initiator-utils package:
[root@ipsan-node06 ~]# yum install -y iscsi-initiator-utils
[root@ipsan-node06 ~]# cat /etc/iscsi/initiatorname.iscsi
[root@ipsan-node06 ~]# echo "InitiatorName=`iscsi-iname -p iqn.2018-02.com.node01`" > /etc/iscsi/initiatorname.iscsi
[root@ipsan-node06 ~]# echo "InitiatorAlias=initiator1" >> /etc/iscsi/initiatorname.iscsi
[root@ipsan-node06 ~]# cat /etc/iscsi/initiatorname.iscsi
[root@ipsan-node06 ~]# systemctl enable iscsi
[root@ipsan-node06 ~]# systemctl start iscsi
[root@ipsan-node06 ~]# systemctl status iscsi
[root@ipsan-node06 ~]# ss -tnl
State      Recv-Q Send-Q     Local Address:Port     Peer Address:Port
LISTEN     0      128        *:22                   *:*
LISTEN     0      128        192.168.10.10:3128     *:*
LISTEN     0      1          127.0.0.1:32000        *:*

iscsiadm is a mode-based tool; its mode is selected with -m or --mode, and the common modes are discovery, node, fw, session, host, and iface.
If no other options are given, discovery and node display all their related records; session shows all active sessions and connections, fw shows all boot firmware values, host shows all iSCSI hosts, and iface shows all iface definitions in the /var/lib/iscsi/ifaces directory.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Tip:
iscsiadm usage:
iscsiadm -m discovery [ -d debug_level ] [ -P printlevel ] [ -I iface -t type -p ip:port [ -l ] ]
iscsiadm -m node [ -d debug_level ] [ -P printlevel ] [ -L all,manual,automatic ] [ -U all,manual,automatic ] [ [ -T targetname -p ip:port -I iface ] [ -l | -u | -R | -s ] ] [ -o operation ]

-d, --debug=debug_level      show debug information, levels 0-8
-l, --login
-t, --type=type              valid types are sendtargets (st for short), slp, fw and isns; discovery mode only, and currently only st, fw and isns are supported. st means each iSCSI target may send the initiator a list of available targets
-p, --portal=ip[:port]       IP and port of the target service
-m, --mode op                valid modes are discovery, node, fw, host, iface and session
-T, --targetname=targetname  name of the target
-u, --logout
-o, --op=OPERATION           operation against the discovery database; one of new, delete, update, show and nonpersistent
-I, --interface=[iface]      iSCSI interface to operate on, as defined in /var/lib/iscsi/ifaces
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Discover the SAN devices on each server in turn (discovering the devices is all that is needed here):
[root@ipsan-node06 ~]# iscsiadm -m discovery -t st -p 192.168.10.17
Starting iscsid:
192.168.10.17:3260,1 iqn.2018-02.com.node01.san:1
[root@ipsan-node06 ~]# iscsiadm -m discovery -t st -p 192.168.10.18
Starting iscsid:
192.168.10.18:3260,1 iqn.2018-02.com.node02.san:1
[root@ipsan-node06 ~]# iscsiadm -m discovery -t st -p 192.168.10.5
Starting iscsid:
192.168.10.5:3260,1 iqn.2018-02.com.node03.san:1
[root@ipsan-node06 ~]# iscsiadm -m discovery -t st -p 192.168.10.6
Starting iscsid:
192.168.10.6:3260,1 iqn.2018-02.com.node04.san:1
[root@ipsan-node06 ~]# iscsiadm -m discovery -t st -p 192.168.10.20
Starting iscsid:
192.168.10.20:3260,1 iqn.2018-02.com.node05.san:1
[root@ipsan-node06 ~]# ls /var/lib/iscsi/send_targets/
192.168.10.10,3260 192.168.10.17,3260 192.168.10.18,3260 192.168.10.20,3260
192.168.10.5,3260 192.168.10.6,3260

Log in to each discovered SAN device in turn:
[root@ipsan-node06 ~]# iscsiadm -m node -T iqn.2018-02.com.node01.san:1 -p 192.168.10.17 -l
[root@ipsan-node06 ~]# iscsiadm -m node -T iqn.2018-02.com.node02.san:1 -p 192.168.10.18 -l
[root@ipsan-node06 ~]# iscsiadm -m node -T iqn.2018-02.com.node03.san:1 -p 192.168.10.5 -l
[root@ipsan-node06 ~]# iscsiadm -m node -T iqn.2018-02.com.node04.san:1 -p 192.168.10.6 -l
[root@ipsan-node06 ~]# iscsiadm -m node -T iqn.2018-02.com.node05.san:1 -p 192.168.10.20 -l

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Tip:
Disconnecting from an iSCSI server
To stop iSCSI from automatically reconnecting to a target on reboot or when the iscsi service restarts, remove that target's records from the client entirely.
Log out of the target session, i.e. detach the iSCSI device (e.g. for the ipsan-node01 target):
# iscsiadm -m node -T iqn.2018-02.com.node01.san:1 -p 192.168.10.17 -u
Delete the target's record (e.g. for the ipsan-node01 target):
# iscsiadm -m node -T iqn.2018-02.com.node01.san:1 -p 192.168.10.17 -o delete
Note: once the previously discovered target records are deleted on the client, no automatic reconnection happens after a reboot or service restart.
# ls /var/lib/iscsi/send_targets/
# ls /var/lib/iscsi/
# rm -rf /var/lib/iscsi/*
# ls /var/lib/iscsi/
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Verify that the server-side devices are visible:
[root@ipsan-node06 ~]# fdisk -l /dev/sd[a-z]      //or simply "fdisk -l"

Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0008e3b4

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    83886079    41942016   83  Linux

Disk /dev/vdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xf450445d

   Device Boot      Start         End      Blocks   Id  System
/dev/vdb1            2048   209715199   104856576   83  Linux

Disk /dev/sda: 214.7 GB, 214747316224 bytes, 419428352 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x149fdfec

Disk /dev/sdb: 536.9 GB, 536869863424 bytes, 1048573952 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xc74ea52c

Disk /dev/sdc: 536.9 GB, 536869863424 bytes, 1048573952 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x6663eaa6

Disk /dev/sdd: 214.7 GB, 214747316224 bytes, 419428352 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x46738964

Disk /dev/sde: 107.4 GB, 107373133824 bytes, 209713152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xe78614cc

As the output shows, the client node ipsan-node06 has now discovered the storage devices of the other five server nodes.
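The five discovery-and-login pairs above all follow one pattern, so they can be generated in a loop. A sketch, not the author's script: the IP-to-node mapping is taken from the server table earlier, and the commands are echoed rather than executed so the loop can be reviewed first.

```shell
# Print the iscsiadm discovery and login commands for all five portals.
# The last-octet:node pairs follow the server table above.
san_commands() {
    for p in 17:node01 18:node02 5:node03 6:node04 20:node05; do
        ip="192.168.10.${p%%:*}"       # e.g. 192.168.10.17
        node="${p##*:}"                # e.g. node01
        echo "iscsiadm -m discovery -t st -p $ip"
        echo "iscsiadm -m node -T iqn.2018-02.com.$node.san:1 -p $ip -l"
    done
}

san_commands        # review the output, then pipe it to sh to run for real
```

Running `san_commands | sh` on the client would perform the same discovery and login sequence recorded above.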
Next, partition each discovered storage device in turn:
[root@ipsan-node06 ~]# fdisk /dev/sda       //enter p -> n -> p -> Enter -> Enter -> Enter -> w
[root@ipsan-node06 ~]# partprobe
[root@ipsan-node06 ~]# fdisk /dev/sdb       //enter p -> n -> p -> Enter -> Enter -> Enter -> w
[root@ipsan-node06 ~]# partprobe
[root@ipsan-node06 ~]# fdisk /dev/sdc       //enter p -> n -> p -> Enter -> Enter -> Enter -> w
[root@ipsan-node06 ~]# partprobe
[root@ipsan-node06 ~]# fdisk /dev/sdd       //enter p -> n -> p -> Enter -> Enter -> Enter -> w
[root@ipsan-node06 ~]# partprobe
[root@ipsan-node06 ~]# fdisk /dev/sde       //enter p -> n -> p -> Enter -> Enter -> Enter -> w
[root@ipsan-node06 ~]# partprobe

Check the device partitions again:
[root@ipsan-node06 backup]# fdisk -l /dev/sd[a-z]

Disk /dev/sda: 214.7 GB, 214747316224 bytes, 419428352 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x149fdfec

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   419428351   209713152   83  Linux

Disk /dev/sdb: 536.9 GB, 536869863424 bytes, 1048573952 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xc74ea52c

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048  1048573951   524285952   83  Linux

Disk /dev/sdc: 536.9 GB, 536869863424 bytes, 1048573952 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x6663eaa6

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048  1048573951   524285952   83  Linux

Disk /dev/sdd: 214.7 GB, 214747316224 bytes, 419428352 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x46738964

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048   419428351   209713152   83  Linux

Disk /dev/sde: 107.4 GB, 107373133824 bytes, 209713152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xe78614cc

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1            2048   209713151   104855552   83  Linux

On the client node ipsan-node06, create an LVM logical volume from its own data disk /dev/vdb1 plus the five SAN devices imported from the server nodes above.

Create the PVs (if pvcreate and friends are missing, install them with "yum install -y lvm2"):
[root@ipsan-node06 ~]# pvcreate /dev/{vdb1,sda1,sdb1,sdc1,sdd1,sde1}
[root@ipsan-node06 ~]# pvs       //or "pvdisplay"
  PV         VG   Fmt  Attr PSize    PFree
  /dev/sda1  vg0  lvm2 a--  <200.00g      0
  /dev/sdb1  vg0  lvm2 a--  <500.00g      0
  /dev/sdc1  vg0  lvm2 a--  <500.00g      0
  /dev/sdd1  vg0  lvm2 a--  <200.00g      0
  /dev/sde1  vg0  lvm2 a--  <100.00g <2.54g
  /dev/vdb1  vg0  lvm2 a--  <100.00g      0

Create the VG:
[root@ipsan-node06 ~]# vgcreate vg0 /dev/{vdb1,sda1,sdb1,sdc1,sdd1,sde1}
[root@ipsan-node06 ~]# vgs       //or "vgdisplay"
  VG  #PV #LV #SN Attr   VSize VFree
  vg0   6   1   0 wz--n- 1.56t <2.54g

Create the LV:
[root@ipsan-node06 ~]# lvcreate -L +1.56t -n lv01 vg0
[root@ipsan-node06 ~]# lvs       //or "lvdisplay"
  LV   VG  Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv01 vg0 -wi-ao---- 1.56t

Format the LVM logical volume:
[root@ipsan-node06 ~]# mkfs.ext4 /dev/vg0/lv01

Mount the LVM logical volume:
[root@ipsan-node06 ~]# mkdir /backup
[root@ipsan-node06 ~]# mount /dev/vg0/lv01 /backup

Check that the logical volume is mounted:
[root@ipsan-node06 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda1              40G  3.4G   34G  10% /
devtmpfs              7.8G   78M  7.7G   1% /dev
tmpfs                 7.8G   12K  7.8G   1% /dev/shm
tmpfs                 7.8G  440K  7.8G   1% /run
tmpfs                 7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/vg0-lv01  1.6T   60k  1.6T   0% /backup

Inspect the device layout with lsblk:
[root@ipsan-node06 ~]# lsblk
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda            8:0    0  200G  0 disk
└─sda1         8:1    0  200G  0 part
  └─vg0-lv01 252:0    0  1.6T  0 lvm  /backup
sdb            8:16   0  500G  0 disk
└─sdb1         8:17   0  500G  0 part
  └─vg0-lv01 252:0    0  1.6T  0 lvm  /backup
sdc            8:32   0  500G  0 disk
└─sdc1         8:33   0  500G  0 part
  └─vg0-lv01 252:0    0  1.6T  0 lvm  /backup
sdd            8:48   0  200G  0 disk
└─sdd1         8:49   0  200G  0 part
  └─vg0-lv01 252:0    0  1.6T  0 lvm  /backup
sde            8:64   0  100G  0 disk
└─sde1         8:65   0  100G  0 part
  └─vg0-lv01 252:0    0  1.6T  0 lvm  /backup
sr0           11:0    1 1024M  0 rom
vda          253:0    0   40G  0 disk
└─vda1       253:1    0   40G  0 part /
vdb          253:16   0  100G  0 disk
└─vdb1       253:17   0  100G  0 part
  └─vg0-lv01 252:0    0  1.6T  0 lvm  /backup
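One follow-up worth noting: the mount above does not survive a reboot on its own. Because /dev/vg0/lv01 sits on iSCSI disks, an /etc/fstab entry for it should carry the _netdev option so the mount is deferred until the network (and the iSCSI login) is up. The sketch below only prints the candidate fstab line rather than editing the file; verify it before applying.

```shell
# Candidate fstab line for the iSCSI-backed LVM volume created above.
# _netdev tells the system to mount this only after the network is available.
fstab_line() {
    echo "/dev/vg0/lv01  /backup  ext4  defaults,_netdev  0 0"
}

fstab_line
# once verified:  fstab_line >> /etc/fstab && mount -a
```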