HA High Availability Solution: RHCS Deployment


RHCS

Environment

  • one host running luci, multiple hosts running ricci

  • iptables disabled

  • selinux disabled

  • This walkthrough uses two RHEL 6.5 hosts, server111 and server222; pay attention to which host each command is run on.

Yum repository setup

[HA]
name=Instructor HA Repository
baseurl=http://localhost/pub/6.5/HighAvailability
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[LoadBalancer]
name=Instructor LoadBalancer Repository
baseurl=http://localhost/pub/6.5/LoadBalancer
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[ResilientStorage]
name=Instructor ResilientStorage Repository
baseurl=http://localhost/pub/6.5/ResilientStorage
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[ScalableFileSystem]
name=Instructor ScalableFileSystem Repository
baseurl=http://localhost/pub/6.5/ScalableFileSystem
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
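
These definitions can be saved as a file under /etc/yum.repos.d/ on each node. A quick sanity check that yum can see the repositories (a minimal sketch; the file name is illustrative):

cp rhel65-ha.repo /etc/yum.repos.d/   # file name is illustrative
yum clean all
yum repolist   # HA, LoadBalancer, ResilientStorage and ScalableFileSystem should all be listed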

Installing ricci

yum install ricci -y
passwd ricci               # set a password for the ricci user
/etc/init.d/ricci start    # start the ricci service
chkconfig ricci on         # enable ricci at boot
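
ricci listens on TCP port 11111 by default, so a quick verification on each node might look like this (a sketch, assuming net-tools is installed):

netstat -tlnp | grep ricci   # ricci should be listening on port 11111
chkconfig --list ricci       # should show ricci enabled for runlevels 2-5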

Notes

  1. The Red Hat High Availability Add-On supports at most 16 cluster nodes.
  2. Use the luci GUI for configuration.
  3. The add-on does not support NetworkManager on cluster nodes. If NetworkManager is installed on a cluster node, remove it or stop the service (see the example after this list).
  4. Cluster nodes communicate with each other over multicast. Every network switch and associated networking device used by the cluster must therefore be configured to enable multicast addresses and support IGMP (Internet Group Management Protocol); confirm this for every switch and device on the cluster network.
  5. Red Hat Enterprise Linux 6 replaces ccsd with ricci, so ricci must be running on every cluster node.
  6. Starting with Red Hat Enterprise Linux 6.1, propagating an updated cluster configuration from any node via ricci requires a password. After installing ricci, create a password for the ricci user with passwd ricci.
  7. Log in to the luci web interface with the root user and password of the luci host.
  8. When creating a failover domain, a node with a smaller priority value is preferred (it sorts earlier).
  9. Any service you add must be installed on every node that may run it; once created, the service is started automatically.
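
For note 3, a minimal sketch of disabling NetworkManager on a cluster node:

/etc/init.d/NetworkManager stop   # stop the running service
chkconfig NetworkManager off      # prevent it from starting at boot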

Service installation (note which host each command runs on)

[root@server222 Desktop]# yum install ricci -y
[root@server111 Desktop]# yum install luci -y
[root@server111 Desktop]# yum install ricci -y
[root@server111 Desktop]# passwd ricci   # set a password for the ricci user
[root@server111 Desktop]# /etc/init.d/ricci start   # start ricci and enable it at boot
[root@server222 Desktop]# passwd ricci   # likewise, set a password for ricci
[root@server222 Desktop]# /etc/init.d/ricci start   # start ricci and enable it at boot
[root@server111 Desktop]# /etc/init.d/luci start   # start luci and enable it at boot
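
The comments above also mention enabling the services at boot; the corresponding commands (matching the earlier ricci block) would be roughly:

[root@server111 Desktop]# chkconfig ricci on
[root@server111 Desktop]# chkconfig luci on
[root@server222 Desktop]# chkconfig ricci on
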
  • After luci starts, follow the printed link to the management interface; make sure hostname resolution (via /etc/hosts or DNS) is in place.

  • The main interface looks like the figure below.

  • Go to the cluster page and click Create to reach the following screen; the relevant settings are shown in the figure.

  • Create the cluster.

  • Wait for creation to complete; both hosts reboot automatically, and the result is shown in the figure below.


  • After the cluster has been created successfully:
[root@server222 ~]# cd /etc/cluster/
[root@server222 cluster]# ls
cluster.conf  cman-notify.d
[root@server222 cluster]# cman_tool status
Version: 6.2.0
Config Version: 1
Cluster Name: forsaken
Cluster Id: 7919
Cluster Member: Yes
Cluster Generation: 8
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1  
Active subsystems: 9
Flags: 2node 
Ports Bound: 0 11 177  
Node name: 192.168.157.222
Node ID: 2
Multicast addresses: 239.192.30.14 
Node addresses: 192.168.157.222 

[root@server111 ~]# cman_tool status
Version: 6.2.0
Config Version: 1
Cluster Name: forsaken
Cluster Id: 7919
Cluster Member: Yes
Cluster Generation: 8
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1  
Active subsystems: 7
Flags: 2node 
Ports Bound: 0  
Node name: 192.168.157.111
Node ID: 1
Multicast addresses: 239.192.30.14 
Node addresses: 192.168.157.111 

[root@server111 ~]# clustat 
Cluster Status for forsaken @ Tue May 19 22:01:06 2015
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 192.168.157.111                             1 Online, Local
 192.168.157.222                             2 Online

[root@server222 cluster]# clustat 
Cluster Status for forsaken @ Tue May 19 22:01:23 2015
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 192.168.157.111                             1 Online
 192.168.157.222                             2 Online, Local
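
The "Flags: 2node" line above means the cluster was created in two-node mode, where cman is configured with expected_votes="1" and two_node="1" so that the surviving node keeps quorum when its peer fails. A quick way to confirm this (a sketch; the exact line depends on your configuration version):

[root@server111 ~]# grep cman /etc/cluster/cluster.conf
  <cman expected_votes="1" two_node="1"/>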

Adding a fence mechanism for the nodes

  • Note: tramisu is my physical host (the KVM hypervisor); this step is performed on the physical host.

  • Install the required packages.

[root@tramisu ~]# yum install fence-virtd.x86_64 fence-virtd-libvirt.x86_64 fence-virtd-multicast.x86_64 fence-virtd-serial.x86_64 -y
[root@tramisu ~]# fence_virtd -c   # interactive configuration of fence_virtd
Module search path [/usr/lib64/fence-virt]: 

Available backends:
    libvirt 0.1

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]: 
No listener module named multicast found!
Use this value anyway [y/N]? y

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]: 

Using ipv4 as family.

Multicast IP Port [1229]: 

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

Interface [virbr0]: br0   # my physical host uses the br0 bridge to talk to the VMs; set this according to your own environment

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]: 

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]: 

Configuration complete.

=== Begin Configuration ===
backends {
    libvirt {
        uri = "qemu:///system";
    }

}

listeners {
    multicast {
        port = "1229";
        family = "ipv4";
        interface = "br0";
        address = "225.0.0.12";  # multicast address
        key_file = "/etc/cluster/fence_xvm.key";   # path of the shared key
    }

}

fence_virtd {
    module_path = "/usr/lib64/fence-virt";
    backend = "libvirt";
    listener = "multicast";
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
[root@tramisu Desktop]# mkdir /etc/cluster
[root@tramisu Desktop]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1  # use dd to generate a 128-byte random key
1+0 records in
1+0 records out
128 bytes (128 B) copied, 0.000455837 s, 781 kB/s
[root@tramisu ~]# ll /etc/cluster/fence_xvm.key   # the generated key
-rw-r--r-- 1 root root 128 May 19 22:13 /etc/cluster/fence_xvm.key
[root@tramisu ~]# scp /etc/cluster/fence_xvm.key 192.168.157.111:/etc/cluster/  # copy the key to both nodes; note the destination directory
The authenticity of host '192.168.157.111 (192.168.157.111)' can't be established.
RSA key fingerprint is 80:50:bb:dd:40:27:26:66:4c:6e:20:5f:82:3f:7c:ab.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.157.111' (RSA) to the list of known hosts.
root@192.168.157.111's password: 
fence_xvm.key                                 100%  128     0.1KB/s   00:00    
[root@tramisu ~]# scp /etc/cluster/fence_xvm.key 192.168.157.222:/etc/cluster/
The authenticity of host '192.168.157.222 (192.168.157.222)' can't be established.
RSA key fingerprint is 28:be:4f:5a:37:4a:a8:80:37:6e:18:c5:93:84:1d:67.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.157.222' (RSA) to the list of known hosts.
root@192.168.157.222's password: 
fence_xvm.key                                 100%  128     0.1KB/s   00:00 
[root@tramisu ~]# systemctl restart fence_virtd.service   # restart the service
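
Before going back to the web interface, you can verify from a cluster node that fence_virtd answers requests (a sketch; this assumes the fence-virt package is installed on the node and the key was copied as above):

[root@server111 ~]# fence_xvm -o list   # should list the libvirt domains known to fence_virtd
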
  • Go back to the luci web interface and configure the fence device as shown in the figure below.

  • After the fence device is configured it looks like the figure below.

  • Go back to each node and configure its fence method, as shown below.

  • The details of step 2 in the figure above are shown in the next figure.


  • After the configuration is finished, as shown below.

Check the changes to the configuration file

[root@server111 ~]# cat /etc/cluster/cluster.conf  # inspect how the file has changed
[root@server222 ~]# cat /etc/cluster/cluster.conf  # the configuration should be identical on both nodes
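
For reference, after the fence device has been added the file might look roughly like the following (a sketch only; the device name vmfence, the method names and the libvirt domain names are illustrative and depend on what was entered in luci):

[root@server111 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="3" name="forsaken">
    <clusternodes>
        <clusternode name="192.168.157.111" nodeid="1">
            <fence>
                <method name="fence-111">
                    <device domain="server111" name="vmfence"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="192.168.157.222" nodeid="2">
            <fence>
                <method name="fence-222">
                    <device domain="server222" name="vmfence"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices>
        <fencedevice agent="fence_xvm" name="vmfence"/>
    </fencedevices>
</cluster>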

  • Node status
[root@server222 ~]# clustat  # check node status; both nodes should report the same membership
Cluster Status for forsaken @ Tue May 19 22:31:31 2015
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 192.168.157.111                             1 Online
 192.168.157.222                             2 Online, Local

Some simple tests

[root@server222 ~]# fence_node 192.168.157.111  # fence node 111 from the command line
fence 192.168.157.111 success
[root@server222 ~]# clustat 
Cluster Status for forsaken @ Tue May 19 22:32:23 2015
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 192.168.157.111                             1 Offline   # node 111 has been fenced; check it and you should see it rebooting, which shows the fencing mechanism is working
 192.168.157.222                             2 Online, Local

[root@server222 ~]# clustat   # once node 111 has rebooted it is automatically added back; 222 now acts as the active node and 111 as the standby
Cluster Status for forsaken @ Tue May 19 22:35:28 2015
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 192.168.157.111                             1 Online
 192.168.157.222                             2 Online, Local

[root@server222 ~]# echo c > /proc/sysrq-trigger  # you can also test by crashing the kernel (as here) or by taking down a NIC; the damaged node reboots automatically and rejoins as the standby. Try it yourself; no further detail is given here.
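
While running these tests you can watch the failover from the surviving node (a sketch):

[root@server222 ~]# tail -f /var/log/messages | grep -i fence   # fenced logs the fencing of the failed node
[root@server222 ~]# watch -n1 clustat                           # watch the member status change in real time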

