Fence Devices


RHCS requires a fence device. When an unexpected failure occurs, the fence device is responsible for cutting the node that holds the floating resources off from the cluster.

Red Hat fence devices come in two kinds.

Internal fence devices:

IBM RSA II cards, HP iLO cards, Dell DRAC, and IPMI-capable hardware;

External fence devices:

UPS, SAN switch, network switch, etc.

With an external fence device, you can test failover by pulling the power plug: the standby node still receives the signal returned by the fence device, so it takes over the service normally.

With an internal fence device, you cannot test by pulling the power plug: once the primary host loses power, the standby node gets no signal back from the onboard chip acting as the fence device, so it cannot take over the service. clustat will show the resource owner as "unknown", and the logs will keep reporting "fence failed" messages.

 

Soft Fence Configuration

When building RHCS on CentOS or Red Hat with KVM virtual machines, the fence function can be provided by a "soft fence" (fence_virtd on the host, fence_xvm on the guests).

The configuration is as follows:

1. Install and run the fence service on the physical host:
yum list | grep --color fence
yum -y install fence-virtd fence-virtd-libvirt fence-virtd-multicast

2. Create the key file and scp it to the web1 and web2 cluster nodes.

On the host machine:

[root@localhost ~]# mkdir /etc/cluster
[root@localhost ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4K count=1
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.000848692 s, 4.8 MB/s
[root@localhost ~]# ll /etc/cluster/
total 4
-rw-r--r-- 1 root root 4096 Mar 24 16:58 fence_xvm.key
[root@localhost ~]#
[root@localhost ~]# scp /etc/cluster/fence_xvm.key root@10.37.129.5:/etc/cluster/

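The dd invocation above always produces a 4 KiB key. A quick sanity check after generating and distributing it is to confirm the size, and to fingerprint the file so the copies on web1/web2 can be compared. The sketch below uses a /tmp path only so it is self-contained; on a real cluster the key lives in /etc/cluster/:

```shell
# Generate a 4 KiB random key, exactly as in step 2 (illustrative /tmp path).
dd if=/dev/urandom of=/tmp/fence_xvm.key bs=4K count=1 2>/dev/null

# The key must be exactly 4096 bytes.
stat -c '%s' /tmp/fence_xvm.key

# Fingerprint to compare against the copies on web1/web2 after scp.
md5sum /tmp/fence_xvm.key
```

If the md5sum output differs between the host and a guest, fencing requests will fail authentication, so it is worth checking before going further.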
3. Configure fence_virtd on the host by running fence_virtd -c.

In the transcript below, the values followed by ##### were typed manually (shown in red in the original post); for everything else, just press Enter to accept the default.

[root@localhost ~]# fence_virtd -c######
Module search path [/usr/lib64/fence-virt]:

Available backends:
libvirt 0.1

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:
No listener module named multicast found!
Use this value anyway [y/N]? y#####

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:

Using ipv4 as family.

Multicast IP Port [1229]:

Setting a preferred interface causes fence_virtd to listen only
on that interface. Normally, it listens on the default network
interface. In environments where the virtual machines are
using the host machine as a gateway, this *must* be set
(typically to virbr0).
Set to 'none' for no interface.

Interface [none]: private#####

The key file is the shared key information which is used to
authenticate fencing requests. The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]:

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [checkpoint]: libvirt#####

The libvirt backend module is designed for single desktops or
servers. Do not use in environments where virtual machines
may be migrated between hosts.

Libvirt URI [qemu:///system]:

Configuration complete.

=== Begin Configuration ===
backends {
libvirt {
uri = "qemu:///system";
}

}

listeners {
multicast {
interface = "private";
port = "1229";
family = "ipv4";
address = "225.0.0.12";
key_file = "/etc/cluster/fence_xvm.key";
}

}

fence_virtd {
module_path = "/usr/lib64/fence-virt";
backend = "libvirt";
listener = "multicast";
}
=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
[root@localhost ~]#

4. Start the fence service and enable it at boot:

/etc/init.d/fence_virtd start
chkconfig fence_virtd on
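If you prefer editing the cluster configuration by hand rather than through luci, the fence_xvm entries in /etc/cluster/cluster.conf typically look like the fragment below. This is a sketch, not taken from the original post: the node names web1/web2 come from step 2, the device name xvmfence is arbitrary, and the domain attribute must match each KVM guest's libvirt domain name, so verify all of these against your own setup:

```xml
<clusternodes>
  <clusternode name="web1" nodeid="1">
    <fence>
      <method name="1">
        <device name="xvmfence" domain="web1"/>
      </method>
    </fence>
  </clusternode>
  <clusternode name="web2" nodeid="2">
    <fence>
      <method name="1">
        <device name="xvmfence" domain="web2"/>
      </method>
    </fence>
  </clusternode>
</clusternodes>
<fencedevices>
  <fencedevice agent="fence_xvm" name="xvmfence"/>
</fencedevices>
```

Once the configuration is in place, running fence_xvm -o list on a cluster node should print the KVM domains known to fence_virtd, and fence_node web2 can be used to exercise fencing manually.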

5. Add the soft fence device in luci.

(The original post illustrated this step with luci screenshots, which are not reproduced here: in the luci web UI, create a fence_xvm fence device and attach it to each cluster node.)

