Linux Cluster Storage Configuration


RH436_EX Cluster

This article covers cluster architecture on Linux: installing and configuring open-source Linux cluster software and combining the individual components to build high-availability and load-balancing clusters, as well as mixed multi-cluster architectures that combine load balancing, high availability, and cluster storage.

Several packages are used in this lab: pcs provides the basic cluster framework; fence provides fencing (the cluster's alarm/isolation mechanism); targetcli and iscsi provide remote disk sharing; dlm and lvm2-cluster provide cluster-aware logical volumes on the shared backend storage; and gfs2 from gfs2-utils, together with clvm, provides the scalable shared cluster file system.
The lab environment uses several machines: the physical host foundation0, the backend storage node noded, and three cluster nodes nodea, nodeb, and nodec.

First, turn off the firewall and SELinux on every machine involved. In /etc/selinux/config (or /etc/sysconfig/selinux), change line 7 to SELINUX=disabled, then switch the running mode immediately:
[root@nodea ~]# setenforce 0
[root@nodea ~]# getenforce
Permissive
The setenforce/getenforce commands above change and confirm the current SELinux mode without a reboot.

Install the cluster framework package pcs on every node, enable the pcsd service, and set the password of the hacluster user to redhat.
[root@nodea, root@nodeb, root@nodec]:
yum -y install pcs
systemctl restart pcsd ; systemctl enable pcsd
systemctl stop firewalld.service ; systemctl disable firewalld.service
echo redhat | passwd --stdin hacluster
[root@nodea]:
[root@nodea ~]# pcs cluster auth nodea.cluster0.example.com nodeb.cluster0.example.com nodec.cluster0.example.com
Username: hacluster
Password:
nodea.cluster0.example.com: Authorized
nodeb.cluster0.example.com: Authorized
nodec.cluster0.example.com: Authorized
[root@nodea, root@nodeb, root@nodec]:
systemctl restart pcsd
On the physical host, open a browser to reach the cluster management web UI:
[root@foundation0 ]:
firefox https://nodea.cluster0.example.com:2224

 

[Screenshot: 1.png]

1. Username: hacluster
2. Password: redhat

 

[Screenshot: 2.png]

1. Cluster name: cluster-1
2-4. Nodes to add: nodea.cluster0.example.com, nodeb.cluster0.example.com, nodec.cluster0.example.com
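The cluster can also be created from the command line instead of the web UI. A minimal sketch, assuming the nodes were already authenticated with pcs cluster auth as above (run on nodea only):

pcs cluster setup --name cluster-1 \
    nodea.cluster0.example.com nodeb.cluster0.example.com nodec.cluster0.example.com
pcs cluster start --all      # start corosync/pacemaker on all three nodes
pcs cluster enable --all     # start the cluster stack automatically at boot
pcs status                   # verify that all three nodes show as online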

[root@noded ]:
[root@noded ~]# mkdir /virtio
[root@noded ~]# cd /virtio/
[root@noded virtio]# ls
[root@noded virtio]# dd if=/dev/zero of=/virtio/vdisk1 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 14.5206 s, 73.9 MB/s
[root@noded virtio]# cd
[root@noded ~]# yum -y install targetcli     # install the targetcli package
[root@noded virtio]# targetcli               # enter the targetcli shell

 

[Screenshot: 3.png]

1. Under /backstores/fileio, create iscsi_store /virtio/vdisk1 — creates a file-backed backstore named iscsi_store on top of /virtio/vdisk1.
2. Under /iscsi, create iqn.2016-09.com.example.storage — creates the target IQN (the name clients discover).
3. Under /iscsi/iqn.2016-09.com.example.storage/tpg1/acls, create iqn.2016-09.com.example:node — creates an access control entry (the initiator name clients must present to connect; node is the suffix).
4. Under /iscsi/iqn.2016-09.com.example.storage/tpg1/luns, create /backstores/fileio/iscsi_store — exports the backstore as a LUN (IDs start at 0).
5. A listening portal (IP address) is generated automatically.
(A non-interactive version of these steps is sketched after the service restart below.)
[root@noded virtio]# systemctl restart target ; systemctl enable target
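For reference, the same configuration can be scripted with targetcli instead of typing it in the interactive shell. A sketch using the names from this lab:

targetcli /backstores/fileio create iscsi_store /virtio/vdisk1        # step 1: file-backed backstore
targetcli /iscsi create iqn.2016-09.com.example.storage               # step 2: target IQN (tpg1 and a 0.0.0.0:3260 portal are added automatically)
targetcli /iscsi/iqn.2016-09.com.example.storage/tpg1/acls create iqn.2016-09.com.example:node
targetcli /iscsi/iqn.2016-09.com.example.storage/tpg1/luns create /backstores/fileio/iscsi_store
targetcli saveconfig                                                  # persist the configuration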

[root@nodea, root@nodeb, root@nodec]:
yum -y install iscsi*
[root@nodea ~]# vim /etc/iscsi/initiatorname.iscsi     # edit the initiator name to match the ACL created on noded
[root@nodea ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2016-09.com.example:node
[root@nodea ~]# scp /etc/iscsi/initiatorname.iscsi root@nodeb.cluster0.example.com:/etc/iscsi/initiatorname.iscsi     # copy the edited file to nodeb (and nodec below)
[root@nodea ~]# scp /etc/iscsi/initiatorname.iscsi root@nodec.cluster0.example.com:/etc/iscsi/initiatorname.iscsi
[root@nodea ~]# systemctl restart iscsid;systemctl enable iscsid.service
ln -s '/usr/lib/systemd/system/iscsid.service' '/etc/systemd/system/multi-user.target.wants/iscsid.service'

[root@nodea ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.13
192.168.1.13:3260,1 iqn.2016-09.com.example.storage
[root@nodea ~]# iscsiadm -m discovery -t sendtargets -p 192.168.2.13
192.168.2.13:3260,1 iqn.2016-09.com.example.storage
If the discovery commands above cannot reach the disk on noded, configure targetcli and iscsi first and set up pcs afterwards.
[root@nodea ~]# iscsiadm -m node -T iqn.2016-09.com.example.storage -p 192.168.1.13 -l
Logging in to [iface: default, target: iqn.2016-09.com.example.storage, portal: 192.168.1.13,3260] (multiple)
Login to [iface: default, target: iqn.2016-09.com.example.storage, portal: 192.168.1.13,3260] successful.
[root@nodea ~]#
[root@nodea ~]# iscsiadm -m node -T iqn.2016-09.com.example.storage -p 192.168.2.13 -l
Logging in to [iface: default, target: iqn.2016-09.com.example.storage, portal: 192.168.2.13,3260] (multiple)
Login to [iface: default, target: iqn.2016-09.com.example.storage, portal: 192.168.2.13,3260] successful.
[root@nodea ~]# fdisk -l

Disk /dev/vda: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000272f4

Device Boot Start End Blocks Id System
/dev/vda1 * 2048 20970332 10484142+ 83 Linux

Disk /dev/sda: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes

Disk /dev/sdb: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes
/dev/sda and /dev/sdb are the same virtual disk on the remote node noded; it shows up twice because the target is reached over two interfaces, giving one redundant path per NIC.

[root@nodea ~]# yum -y install device-mapper-multipath     # multipathing with path-failure monitoring
[root@nodea ~]# modprobe dm_multipath                      # load the device-mapper multipath module
[root@nodea ~]# multipath -l
Sep 21 21:31:19 | /etc/multipath.conf does not exist, blacklisting all devices.
Sep 21 21:31:19 | A default multipath.conf file is located at
Sep 21 21:31:19 | /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf
Sep 21 21:31:19 | You can run /sbin/mpathconf to create or modify /etc/multipath.conf
The message above simply means that /etc/multipath.conf is missing.
[root@nodea ~]# systemctl restart multipathd.service ;systemctl enable multipathd
[root@nodea ~]# cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc/multipath.conf
[root@nodea ~]# multipath -l
Sep 21 21:35:25 | vda: No fc_host device for 'host-1'
Sep 21 21:35:25 | vda: No fc_host device for 'host-1'
Sep 21 21:35:25 | vda: No fc_remote_port device for 'rport--1:-1-0'
mpatha (3600140597279744f247488683521b4c2) dm-0 LIO-ORG ,iscsi-test
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| `- 2:0:0:0 sda 8:0  active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  `- 3:0:0:0 sdb 8:16 active undef running
Output like the above means multipathing is working: the former sda and sdb are now accessed through the single multipath device mpatha. The mpatha name can also be customized; below it is renamed to mymu.
[root@nodea ~]# vim .bash_profile
[root@nodea ~]# cat .bashp
cat: .bashp: No such file or directory
[root@nodea ~]# cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin:/usr/lib/udev

export PATH
[root@nodea ~]# source .bash_profile
[root@nodea ~]# scp .bash_profile root@nodeb.cluster0.example.com:/root/.bash_profile
root@nodeb.cluster0.example.com's password:
.bash_profile 100% 190 0.2KB/s 00:00
[root@nodea ~]# scp .bash_profile root@nodec.cluster0.example.com:/root/.bash_profile
root@nodec.cluster0.example.com's password:
.bash_profile 100% 190 0.2KB/s 00:00
[root@nodea ~]# scsi_id -u -g /dev/sda
3600140597279744f247488683521b4c2
[root@nodea ~]# scsi_id -u -g /dev/sdb
3600140597279744f247488683521b4c2
The .bash_profile change above adds /usr/lib/udev to PATH so that the scsi_id command can be used to look up the disk's WWID.

[root@nodea ~]# vim /etc/multipath.conf
Set user_friendly_names no (around line 23) so the device is named by its WWID or alias instead of mpathN, then add a multipaths entry:
1. wwid 3600140597279744f247488683521b4c2 — the WWID reported by scsi_id (around line 73)
2. alias mymu — the alias to use for the device (around line 74)
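The relevant part of /etc/multipath.conf then looks roughly like this (a sketch based on the two edits above):

defaults {
        user_friendly_names no
}

multipaths {
        multipath {
                wwid  3600140597279744f247488683521b4c2
                alias mymu
        }
}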
[root@nodea ~]# systemctl restart multipathd.service
[root@nodea ~]# multipath -l
Sep 21 21:57:01 | vda: No fc_host device for 'host-1'
Sep 21 21:57:01 | vda: No fc_host device for 'host-1'
Sep 21 21:57:01 | vda: No fc_remote_port device for 'rport--1:-1-0'
mymu (3600140597279744f247488683521b4c2) dm-0 LIO-ORG ,iscsi-test
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| `- 2:0:0:0 sda 8:0  active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  `- 3:0:0:0 sdb 8:16 active undef running

[root@nodea ~]# more /proc/partitions     # brief overview of the block devices
major minor #blocks name

253 0 10485760 vda
253 1 10484142 vda1
8 0 1048576 sda
8 16 1048576 sdb
252 0 1048576 dm-0

[root@nodea ~]# yum -y install dlm lvm2-cluster.x86_64     # replace standalone LVM with cluster-aware LVM to handle logical volumes on the shared storage
[root@nodea ~]# vim /etc/lvm/lvm.conf     # set locking_type = 3 around line 567 (1 = standalone locking, 3 = clustered locking), or run lvmconf --enable-cluster to change it automatically
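A quick way to make and verify the change (a sketch; on RHEL 7, lvmconf --enable-cluster also disables lvmetad, which is not cluster-aware):

lvmconf --enable-cluster                                             # sets locking_type = 3 (and use_lvmetad = 0)
grep -E '^[[:space:]]*(locking_type|use_lvmetad)' /etc/lvm/lvm.conf  # confirm the new values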

[root@foundation0]:
[root@foundation0 ~]# mkdir /etc/cluster
[root@foundation0 ~]# cd /etc/cluster/
[root@foundation0 cluster]# ls
[root@foundation0 cluster]# dd if=/dev/zero of=/etc/cluster/fence_xvm.key bs=4k count=1
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.000129868 s, 31.5 MB/s
[root@foundation0 cluster]# yum -y install fence-virt*     # install fence-virt; fencing is what backs the dlm and clvm cluster resources
[root@foundation0 cluster]# fence_virtd -c
Accept the defaults (press Enter) for the prompts up to this point.
Interface [virbr0]: br0
Press Enter for the remaining prompts.
Replace /etc/fence_virt.conf with the above [y/N]? y
[root@foundation0 cluster]# ls
fence_xvm.key
[root@foundation0 cluster]# scp fence_xvm.key root@nodea.cluster0.example.com:/etc/cluster
Warning: Permanently added 'nodea.cluster0.example.com,172.25.0.10' (ECDSA) to the list of known hosts.
fence_xvm.key 100% 4096 4.0KB/s 00:00
[root@foundation0 cluster]# scp fence_xvm.key root@nodeb.cluster0.example.com:/etc/cluster
Warning: Permanently added 'nodeb.cluster0.example.com,172.25.0.11' (ECDSA) to the list of known hosts.
fence_xvm.key 100% 4096 4.0KB/s 00:00
[root@foundation0 cluster]# scp fence_xvm.key root@nodec.cluster0.example.com:/etc/cluster
Warning: Permanently added 'nodec.cluster0.example.com,172.25.0.12' (ECDSA) to the list of known hosts.
fence_xvm.key 100% 4096 4.0KB/s 00:00

firefox https://nodea.cluster0.example.com:2224
In the web UI, add a fence device:

 

[Screenshot: 5.png]

1. Fence type: fence_xvm
2. Name: fence-1
3. Hosts it maps to: nodea, nodeb, nodec
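A CLI equivalent of this fence device would look roughly like the following (a sketch; the host map assumes the libvirt domain names are simply nodea, nodeb, and nodec, as shown in the screenshot):

pcs stonith create fence-1 fence_xvm \
    pcmk_host_map="nodea.cluster0.example.com:nodea;nodeb.cluster0.example.com:nodeb;nodec.cluster0.example.com:nodec" \
    key_file=/etc/cluster/fence_xvm.key
pcs stonith show --full      # verify the fence device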

 

[Screenshot: 6.png]

Add a resource:

 

[Screenshot: 7.png]

1. Class/provider: ocf:heartbeat
2. Type: IPaddr2
3. Name: vip
4. IP address: 172.25.0.110
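CLI equivalent (a sketch; the netmask is left at the agent's default because the screenshot only specifies the address):

pcs resource create vip ocf:heartbeat:IPaddr2 ip=172.25.0.110 \
    op monitor interval=30s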

Add another resource:

 

[Screenshot: 8.png]

1. Class/provider: ocf:pacemaker
2. Type: controld
3. Clone ✓ (clone mode)
4. Name: cluster-dlm
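CLI equivalent (a sketch; the interleave/ordered clone options are commonly used with controld but are not visible in the web UI above):

pcs resource create cluster-dlm ocf:pacemaker:controld \
    op monitor interval=30s on-fail=fence \
    clone interleave=true ordered=true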

 

Add another resource:

 

[Screenshot: 9.png]

1. Class/provider: ocf:heartbeat
2. Type: clvm
3. Clone ✓ (clone mode)
4. Name: cluster-clvm
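CLI equivalent, plus the ordering/colocation constraints that are usually added so clvmd only runs where and after dlm is running (a sketch; the -clone suffixes assume the default clone naming):

pcs resource create cluster-clvm ocf:heartbeat:clvm \
    op monitor interval=30s on-fail=fence \
    clone interleave=true ordered=true
pcs constraint order start cluster-dlm-clone then cluster-clvm-clone
pcs constraint colocation add cluster-clvm-clone with cluster-dlm-clone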

 

[Screenshot: 10.png]

[root@nodea]:
[root@nodea cluster]# fdisk -l
Disk /dev/mapper/mymu: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes
[root@nodea cluster]# cd /dev/mapper/
[root@nodea mapper]# ls -l
total 0
crw-------. 1 root root 10, 236 Sep 21 20:49 control
lrwxrwxrwx. 1 root root 7 Sep 21 21:56 mymu -> ../dm-0
[root@nodea mapper]# fdisk /dev/mapper/mymu
(a single primary partition was created inside the interactive fdisk session; the steps themselves are omitted, the resulting table is:)
Device Boot Start End Blocks Id System
/dev/mapper/mymu1 8192 1851391 921600 83 Linux
[root@nodea mapper]# partprobe
[root@nodea mapper]# fdisk -l
Disk /dev/mapper/mymu: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes
Disk label type: dos
Disk identifier: 0xd8583874

       Device Boot      Start         End      Blocks   Id  System

/dev/mapper/mymu1 8192 1851391 921600 83 Linux

Disk /dev/mapper/mymu1: 943 MB, 943718400 bytes, 1843200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes

[root@nodea mapper]# pvcreate /dev/mapper/mymu1
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
Physical volume "/dev/mapper/mymu1" successfully created
[root@nodea mapper]# vgcreate vg1 /dev/mapper/mymu1
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
Clustered volume group "vg1" successfully created
[root@nodea mapper]# lvcreate -L 800M -n lv1 vg1
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
Logical volume "lv1" created.

[root@nodeb ~]# lvs
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv1 vg1 -wi-a----- 800.00m

[root@nodec ~]# lvs
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv1 vg1 -wi-a----- 800.00m

[root@nodea mapper]# yum -y install gfs2-utils.x86_64
[root@nodea mapper]# mkfs.gfs2 -p lock_dlm -t cluster-1:mygfs2 -j 4 /dev/vg1/lv1
-p: locking protocol
-t: cluster name:file-system name (the cluster name must match the cluster, the second part is arbitrary)
-j: number of journals, n+1 where n is the number of nodes (one spare)
/dev/vg1/lv1 is a symbolic link to /dev/dm-2
This will destroy any data on /dev/dm-2
Are you sure you want to proceed? [y/n]y
Device: /dev/vg1/lv1
Block size: 4096
Device size: 0.78 GB (204800 blocks)
Filesystem size: 0.78 GB (204797 blocks)
Journals: 4
Resource groups: 6
Locking protocol: "lock_dlm"
Lock table: "cluster-1:mygfs2"
UUID: e89181f7-1251-8c4b-496e-305aeff9b780

[root@nodea ~]# mkdir /zzz
[root@nodea ~]# mount /dev/vg1/lv1 /zzz/
[root@nodea ~]# cd /zzz/
[root@nodea zzz]# ls
[root@nodea zzz]# vim zlm
[root@nodea zzz]# cat zlm
haha.................

[root@nodeb ~]# mkdir /zzz
[root@nodeb ~]# mount /dev/vg1/lv1 /zzz/
[root@nodeb zzz]# cat /zzz/zlm
haha.................

[root@nodec ~]# mkdir /zzz
[root@nodec ~]# mount /dev/vg1/lv1 /zzz/
[root@nodec zzz]# cat /zzz/zlm
haha.................
[root@nodec zzz]# vim zlm
[root@nodec zzz]# cat zlm
haha

[root@nodea zzz]# cat zlm
haha

The reads above show that the shared file system stays in sync across the nodes. If per-file permission control is needed, ACLs or disk quotas can be added:
To enable ACLs:   mount -o remount,acl /zzz
To enable quotas: mount -o remount,quota=on /zzz
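A minimal ACL example after remounting with the acl option (the user name student is hypothetical):

mount -o remount,acl /zzz
setfacl -m u:student:rw /zzz/zlm     # grant one user read/write on the shared file
getfacl /zzz/zlm                     # verify the ACL entries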

[root@nodea ~]# umount /zzz/
[root@nodeb ~]# umount /zzz/
[root@nodec ~]# umount /zzz/

[root@nodea ~]# yum -y install httpd

 

[Screenshot: 11.png]

1. Location priorities were added for nodea, nodeb, and nodec.
2. The virtual IP is currently running on nodea.
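Location priorities like these can also be set from the command line (a sketch; the scores 200/100/50 are illustrative, the screenshot does not show the exact values):

pcs constraint location vip prefers nodea.cluster0.example.com=200
pcs constraint location vip prefers nodeb.cluster0.example.com=100
pcs constraint location vip prefers nodec.cluster0.example.com=50
pcs constraint show      # list all constraints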

 

 

[Screenshot: 12.png]

1. A resource group named WEB was created, with vip added to it first.
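CLI equivalent (a sketch):

pcs resource group add WEB vip      # create the WEB group with vip as its first member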

 

[Screenshot: 13.png]

1. Class/provider: ocf:heartbeat
2. Type: Filesystem
3. Added to the WEB group
4. Resource name: web-file
5. device: /dev/vg1/lv1
6. directory (mount point): /var/www/html
7. fstype: gfs2
8. options: acl
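CLI equivalent for the Filesystem resource (a sketch matching the fields above):

pcs resource create web-file ocf:heartbeat:Filesystem \
    device=/dev/vg1/lv1 directory=/var/www/html fstype=gfs2 options=acl \
    op monitor interval=30s --group WEB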

 

[Screenshot: 14.png]

1. Class: systemd
2. Type: httpd
3. Added to the WEB group
4. Resource name: web-httpd
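CLI equivalent (a sketch). Note that the httpd package has to be installed on every node that may run this resource, not just nodea:

pcs resource create web-httpd systemd:httpd \
    op monitor interval=30s --group WEB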

 

[Screenshot: 15.png]

[root@nodea html]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 10G 1.2G 8.9G 12% /
devtmpfs 904M 0 904M 0% /dev
tmpfs 921M 61M 861M 7% /dev/shm
tmpfs 921M 28M 893M 4% /run
tmpfs 921M 0 921M 0% /sys/fs/cgroup
/dev/mapper/vg1-lv1 800M 518M 283M 65% /var/www/html
This shows that /dev/mapper/vg1-lv1 was mounted automatically on /var/www/html.

[root@nodea ~]# cd /var/www/html/
[root@nodea html]# ls
zlm
[root@nodea html]# cat zlm
haha
The test file created earlier is still there, confirming that the data was written through to the backend disk on noded.
[root@nodea html]# vim index.html
[root@nodea html]# cat index.html
nodea

[root@nodea html]# pcs cluster standby
Note: after nodea was put into standby, the httpd service turned red (failed) and the shared disk did not move over to nodeb; shortly afterwards cluster-clvm turned red as well. The cause of this final failure is still unknown — if you know why, please leave a comment to discuss.
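A few commands that may help narrow the failure down (a sketch, not a confirmed diagnosis of this setup). One thing worth checking: the transcript above only shows dlm, lvm2-cluster, locking_type = 3, and httpd being set up on nodea; all of these must also be present on nodeb and nodec for the WEB group to start there after a failover.

pcs status --full                                    # which resources failed, and on which node
pcs constraint show                                  # ordering/colocation between cluster-dlm, cluster-clvm and web-file
journalctl -u pacemaker -u corosync                  # resource-agent error messages on the surviving nodes
pcs resource cleanup                                 # clear failcounts after fixing the cause
pcs cluster unstandby nodea.cluster0.example.com     # bring nodea back into the cluster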

 

[Screenshots: 17.png, 16.png]

