Oracle 11gR2 RAC Network Configuration: Changing the Public IP, VIP, and SCAN IP
Reposted from https://blog.csdn.net/Martin201609/article/details/52557037
Oracle RAC network configuration: modifying the public IP and the VIP addresses
Oracle Clusterware network management
Public IP and private IP
An Oracle Clusterware configuration requires at least two interfaces:
A public network interface, on which users and application servers connect to access data on the database server.
–The public interface is the NIC the VIP is bound to; it is what clients connect to, and it carries the externally facing service.
A private network interface for internode communication.
–The private network interface is used for information synchronization between the RAC nodes.
–Each node in an Oracle RAC system therefore has at least two interfaces: the public interface serves external traffic and client connections, and the private interface carries the interconnect.
The SCAN IP is a virtual IP exposed to clients. Oracle recommends connecting through the SCAN: requests arriving at the SCAN can be load-balanced across the nodes of the cluster, as the example below shows.
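For example, a client can reach the cluster through a single alias pointing at the SCAN. A minimal tnsnames.ora sketch, assuming the SCAN name rac.wtest.com from the environment below, the default listener port 1521, and a service name of rac (the port and service name are assumptions):
RAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac.wtest.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = rac)
    )
  )
The SCAN listener that receives the connection redirects it to a local listener on one of the nodes, which is how the load balancing happens.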
Configuration requirements:
The OS interface names backing the public and private IPs must be the same on every node of the cluster (for example, eth0 and eth1 everywhere); if they differ, the RAC software cannot be installed successfully (see the pre-installation check below).
The OS hosts file must resolve correctly and contain entries for the public IPs, the public VIPs, the private IPs, and the SCAN IP.
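Before installation, interface-name consistency across nodes can be verified with the Cluster Verification Utility shipped on the Grid Infrastructure media; a sketch, assuming the nodes rac1 and rac2:
# run as the grid software owner, from the installation media
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose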
The cases below cover the different kinds of network-configuration changes.
Test environment: (Linux x86-64 + RAC 11.2.0.4)
–public ip
192.168.56.101 rac1.wtest.com rac1
192.168.56.103 rac2.wtest.com rac2
192.168.56.102 rac1-vip.wtest.com rac1-vip
192.168.56.104 rac2-vip.wtest.com rac2-vip
192.168.56.105 rac.wtest.com rac
–priv
192.168.57.11 rac1-priv
192.168.57.13 rac2-priv
–interface information
node1:
eth0 inet addr:192.168.56.101 Bcast:192.168.56.255 Mask:255.255.255.0
eth1 inet addr:192.168.57.11 Bcast:192.168.57.255 Mask:255.255.255.0
node2:
eth0 inet addr:192.168.56.103 Bcast:192.168.56.255 Mask:255.255.255.0
eth1 inet addr:192.168.57.13 Bcast:192.168.57.255 Mask:255.255.255.0
Case 1: Changing the hostname
The public hostname is configured in the OCR automatically during software installation and must not be changed casually.
The only way to change a hostname is to remove the node from the cluster and then add it back under the new name, as outlined below.
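A high-level outline only, not the full procedure (the complete add/delete-node steps are in the Oracle Clusterware documentation); the names rac2 and rac2new below are placeholders:
# on a surviving node, as root: remove the node to be renamed
crsctl delete node -n rac2
# after cleaning up / reinstalling the Grid home on the renamed host,
# re-add it from an existing node as the grid user:
$GRID_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={rac2new}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2new-vip}"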
Case 2: Changing the public IP
If the interface name, netmask, and so on stay the same and the new address remains within the original subnet, the public IP can be changed directly.
The change is handled entirely at the OS level; nothing further needs to be done at the Oracle Clusterware layer.
node1:
eth0 192.168.56.101 –> 192.168.56.111
node2:
eth0 192.168.56.103 –> 192.168.56.113
The interface names stay unchanged.
1. Shut down the Oracle Clusterware stack
node1:
./crsctl stop crs
node2:
./crsctl stop crs
2. Modify the IP address at the network layer, and update DNS and the /etc/hosts file to reflect the change
Edit the /etc/hosts file.
Edit /etc/sysconfig/network-scripts/ifcfg-eth0 (a sample sketch follows the restart command below).
Restart the network service:
service network restart
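For reference, the changed lines of /etc/sysconfig/network-scripts/ifcfg-eth0 on node1 would look roughly like this (a RHEL-style sketch; BOOTPROTO and ONBOOT shown for context, the gateway line is omitted):
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.56.111   # was 192.168.56.101
NETMASK=255.255.255.0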
3. Restart the Oracle Clusterware stack
Log in to the host over the new IP address:
cd /app/grid/11.2.0/bin
./crsctl start crs
Check the status:
crsctl stat res -t
[grid@rac2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATADG.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.DGSYS.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.EXTDG.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.SYSDG.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.registry.acfs
ONLINE ONLINE rac1
ONLINE ONLINE rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1
ora.cvu
1 ONLINE ONLINE rac1
ora.mar.db
1 OFFLINE OFFLINE Instance Shutdown
2 OFFLINE OFFLINE Instance Shutdown
ora.oc4j
1 ONLINE ONLINE rac1
ora.rac.db
1 OFFLINE OFFLINE Instance Shutdown
2 OFFLINE OFFLINE Instance Shutdown
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac1
crs_stat -t -v
[grid@rac2 ~]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora.DATADG.dg ora....up.type 0/5 0/ ONLINE ONLINE rac1
ora.DGSYS.dg ora....up.type 0/5 0/ ONLINE ONLINE rac1
ora.EXTDG.dg ora....up.type 0/5 0/ ONLINE ONLINE rac1
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE rac1
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE rac1
ora.SYSDG.dg ora....up.type 0/5 0/ ONLINE ONLINE rac1
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE rac1
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE rac1
ora.gsd ora.gsd.type 0/5 0/ OFFLINE OFFLINE
ora.mar.db ora....se.type 0/2 0/1 OFFLINE OFFLINE
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE rac1
ora.oc4j ora.oc4j.type 0/1 0/2 ONLINE ONLINE rac1
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE rac1
ora.rac.db ora....se.type 0/2 0/1 OFFLINE OFFLINE
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE rac1
ora....C1.lsnr application 0/5 0/0 ONLINE ONLINE rac1
ora.rac1.gsd application 0/5 0/0 OFFLINE OFFLINE
ora.rac1.ons application 0/3 0/0 ONLINE ONLINE rac1
ora.rac1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE rac1
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE rac2
ora....C2.lsnr application 0/5 0/0 ONLINE ONLINE rac2
ora.rac2.gsd application 0/5 0/0 OFFLINE OFFLINE
ora.rac2.ons application 0/3 0/0 ONLINE ONLINE rac2
ora.rac2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE rac2
ora....ry.acfs ora....fs.type 0/5 0/ ONLINE ONLINE rac1
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE rac1
---------------------
Case 3: Changing the public network interface, subnet, or netmask
Changing the interface name, netmask, or subnet has to go through the oifcfg command.
If the change involves a different subnet (netmask) or interface, the existing interface information must be deleted from the OCR and added back with the correct values.
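In general form (the concrete commands for this environment follow in step 4):
# as root, from $GRID_HOME/bin, with the Clusterware stack running
./oifcfg delif -global <old_interface>/<subnet>
./oifcfg setif -global <new_interface>/<subnet>:public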
Previously eth0 was used, with the addresses 192.168.56.111 and 192.168.56.113.
The interface is renamed to eth2, using the addresses 192.168.56.121 and 192.168.56.123.
node1:
eth2 192.168.56.121
node2:
eth2 192.168.56.123
---------------------
srvctl stop database -d RAC -o immediate
srvctl stop asm -n rac1
srvctl stop asm -n rac2
srvctl stop nodeapps -n rac1
srvctl stop nodeapps -n rac2
1. Stop the database: srvctl stop database -d RAC -o immediate
2. Stop the nodeapps
From 11gR2 onwards:
srvctl config nodeapps -a
./crsctl stop crs (run on both nodes)
---------------------
[grid@rac1 ~]$ srvctl config nodeapps -a
Network exists: 1/192.168.56.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/192.168.56.102/192.168.56.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.56.104/192.168.56.0/255.255.255.0/eth0, hosting node rac2
[grid@rac2 ~]$ srvctl stop asm -n rac2
PRCR-1014 : Failed to stop resource ora.asm
PRCR-1065 : Failed to stop resource ora.asm
CRS-2529: Unable to act on 'ora.asm' because that would require stopping or relocating 'ora.DATADG.dg', but the force option was not specified
oerr crs 2529
[grid@rac2 ~]$ oerr crs 2529
2529, 1, "Unable to act on '%s' because that would require stopping or relocating '%s', but the force option was not specified"
// *Cause: Acting on the resource requires stopping or relocating other resources,
// which requires that force option be specified, and it is not.
// *Action: Re-evaluate the request and if it makes sense, set the force option and
// re-submit
Force the stop (add the -f option):
srvctl stop asm -n rac2 -f
srvctl stop asm -n rac1 -f
Check with oifcfg getif:
[root@rac1 bin]# ./oifcfg getif
PRIF-10: failed to initialize the cluster registry
With the stack (and thus ASM, which holds the OCR in 11.2) shut down, oifcfg cannot read the cluster registry, so this error is expected.
---------------------
3. Therefore start the Clusterware stack again so that oifcfg can reach the OCR: crsctl start crs
Then: cd /app/grid/11.2.0/bin
./oifcfg getif
[root@rac1 bin]# ./oifcfg getif
eth0 192.168.56.0 global public
eth1 192.168.57.0 global cluster_interconnect
[root@rac1 bin]# ./oifcfg iflist
eth0 192.168.56.0
eth1 192.168.57.0
eth1 169.254.0.0
eth2 192.168.56.0
---------------------
Check the hosts file:
#public ip
192.168.56.111 rac1.wtest.com rac1
192.168.56.113 rac2.wtest.com rac2
192.168.56.102 rac1-vip.wtest.com rac1-vip
192.168.56.104 rac2-vip.wtest.com rac2-vip
192.168.56.105 rac.wtest.com rac
#priv
192.168.57.11 rac1-priv
192.168.57.13 rac2-priv
#new public IPs to be used
192.168.56.121 rac1.wtest.com rac1
192.168.56.123 rac2.wtest.com rac2
---------------------
4. Make the change
Use the oifcfg command to switch the interface registered for the public network:
[root@rac1 bin]# ./oifcfg delif -global eth0/192.168.56.0
[root@rac1 bin]#
[root@rac1 bin]# ./oifcfg getif
eth1 192.168.57.0 global cluster_interconnect
./oifcfg setif -global eth2/192.168.56.0:public
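A quick sanity check that the OCR now records the new public interface (expected output shown as comments):
./oifcfg getif
# eth2  192.168.56.0  global  public
# eth1  192.168.57.0  global  cluster_interconnect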
Disable eth0 and bring up eth2, simulating replacement of the old NIC with one under the new name (eth2 needs its own ifcfg file; see the sketch below):
ifdown eth0    # take the old interface down
ifup eth2      # bring the new interface up
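Note that ifup eth2 presupposes a configuration file for the new interface; a minimal /etc/sysconfig/network-scripts/ifcfg-eth2 sketch (values shown for node1; use 192.168.56.123 on node2):
DEVICE=eth2
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.56.121
NETMASK=255.255.255.0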
5. Restart the Clusterware stack
./crsctl stop crs
./crsctl start crs
6. Check the status of each resource
crsctl stat res -t
Notice that the VIPs on both nodes have not started.
Try to start the VIPs on the two nodes:
[grid@rac2 ~]$ srctl start vip -n rac2
-bash: srctl: command not found
[grid@rac2 ~]$ srvctl start vip -n rac2
PRCR-1079 : Failed to start resource ora.rac2.vip
CRS-2674: Start of 'ora.net1.network' on 'rac2' failed
CRS-2632: There are no more servers to try to place resource 'ora.rac2.vip' on that would satisfy its placement policy
[grid@rac2 ~]$ oerr crs 2632
2632, 1, "There are no more servers to try to place resource '%s' on that would satisfy its placement policy"
// *Cause: After one or more attempts, the system ran out of servers
// that can be used to place the resource and satisfy its placement
// policy.
// *Action: None.
[grid@rac2 ~]$ oerr crs 2674
2674, 1, "Start of '%s' on '%s' failed"
// *Cause: This is a status message.
// *Action: None.
7. Check the VIP configuration; the VIPs are still bound to eth0
[grid@rac2 ~]$ srvctl config nodeapps -a
Network exists: 1/192.168.56.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/192.168.56.102/192.168.56.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.56.104/192.168.56.0/255.255.255.0/eth0, hosting node rac2
Conclusion: after renaming the interface that carries the public IP, the VIP configuration must be updated as well, because every VIP is bound to the public interface.
Change the interface the VIPs are bound to
Using the traditional method:
As the root user:
cd /app/grid/11.2.0/bin
srvctl modify nodeapps -n rac1 -A 192.168.56.102/255.255.255.0/eth2
srvctl modify nodeapps -n rac2 -A 192.168.56.104/255.255.255.0/eth2
Check the configuration after the change:
[grid@rac2 ~]$ srvctl config nodeapps -a
Network exists: 1/192.168.56.0/255.255.255.0/eth2, type static
VIP exists: /rac1-vip/192.168.56.102/192.168.56.0/255.255.255.0/eth2, hosting node rac1
VIP exists: /rac2-vip/192.168.56.104/192.168.56.0/255.255.255.0/eth2, hosting node rac2
From 11.2.0.2 onwards, the same change can also be made by modifying the network resource directly.
Update the configuration (-S takes the subnet, not a VIP address, so one command covers both VIPs on network 1):
srvctl modify network -k 1 -S 192.168.56.0/255.255.255.0/eth2
Note:
How to Modify Public Network Information including VIP in Oracle Clusterware (Doc ID 276434.1)
Note 1: Starting with 11.2, the VIPs depend on the network resource (ora.net1.network); the OCR only records the VIP hostname or the IP address associated with the VIP resource. The network attributes (subnet/netmask/interface) are recorded with the network resource. When the nodeapps resource is modified, the network resource (ora.net1.network) attributes are also modified implicitly.
From 11.2.0.2 onwards, if only a subnet/netmask/interface change is required, the network resource can be modified directly via the srvctl modify network command.
As the root user:
# srvctl modify network [-k <network_number>] [-S <subnet>/<netmask>[/if1[|if2...]]]
e.g.:
# srvctl modify network -k 1 -S 110.11.70.0/255.255.255.0/eth2
8. Check the resource status
crsctl stat res -t
crs_stat -t
9. If a downtime window is available and you want full assurance, restart the stack on both nodes together to confirm everything comes back cleanly.
crsctl stop crs
crsctl start crs
crsctl stat res -t
srvctl start database -d RAC -o open
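Optionally, confirm that the database is reachable over the new network before closing the window; a sketch, assuming the SCAN name from the test environment and a service called rac:
srvctl status database -d RAC
ping -c 3 rac1-vip
ping -c 3 rac2-vip
sqlplus system@//rac.wtest.com:1521/rac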
Case 4: Changing VIPs associated with a public network change
If the change involves the VIP addresses themselves in addition to the public interface name, netmask, and other attributes, follow the same steps as in Case 3 above; the test log is not repeated here.
---------------------
Author: Martin201609
Source: CSDN
Original: https://blog.csdn.net/Martin201609/article/details/52557037
Copyright notice: this is the author's original post; please include a link to the original when reposting.
