1. Inspecting the RAC Database
1.1 List the databases
[grid@node1 ~]$ srvctl config database
racdb
[grid@node1 ~]$
1.2 List the database instances
[grid@node1 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node node1
Instance racdb2 is running on node node2
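The instance check above is easy to automate. Below is a minimal sketch that counts running instances and flags stopped ones; the here-doc stands in for live output, and on a real cluster you would pipe in `srvctl status database -d racdb` instead.

```shell
#!/bin/sh
# Sketch: summarize "srvctl status database" output and flag stopped instances.
check_instances() {
  awk '
    / is not running/             { down++; print "WARN: " $0; next }
    /^Instance .* is running on / { up++ }
    END { printf "instances up=%d down=%d\n", up + 0, down + 0 }
  '
}

# Simulated output; on a live node use:
#   srvctl status database -d racdb | check_instances
check_instances <<'EOF'
Instance racdb1 is running on node node1
Instance racdb2 is running on node node2
EOF
```

With the healthy output above this prints `instances up=2 down=0`; a stopped instance produces a WARN line instead.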
1.3 Database configuration
[grid@node1 ~]$ srvctl config database -d racdb -a
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/racdb/spfileracdb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: DATA
Services:
Database is enabled
Database is administrator managed
[grid@node1 ~]$
2. Inspecting the Grid Infrastructure
2.1 Cluster name
[grid@node1 ~]$ cemutlo -n
scan-cluster
[grid@node1 ~]$
2.2 Check the cluster stack status
[grid@node1 ~]$ crsctl check cluster -all
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[grid@node1 ~]$
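This stack check can also be scripted. The sketch below, which assumes the message format shown (a `node:` header followed by CRS-45xx "is online" lines), counts the per-node online messages and reports any node missing one of the three daemons; the here-doc simulates `crsctl check cluster -all`.

```shell
#!/bin/sh
# Sketch: verify each node reports CRS, CSS and EVM online in
# "crsctl check cluster -all" output.
check_stack() {
  awk '
    /^[^ *].*:$/ { node = substr($0, 1, length($0) - 1) }  # "node1:" header
    /is online$/ { online[node]++ }                        # daemon online line
    END {
      for (n in online)
        if (online[n] == 3) print n ": stack OK"
        else                print n ": only " online[n] " of 3 daemons online"
    }
  '
}

check_stack <<'EOF'
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
EOF
```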
2.3 Cluster resources
[grid@node1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.asm
               ONLINE  ONLINE       node1                    Started
               ONLINE  ONLINE       node2                    Started
ora.eons
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.gsd
               OFFLINE OFFLINE      node1
               OFFLINE OFFLINE      node2
ora.net1.network
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.ons
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node1
ora.node1.vip
      1        ONLINE  ONLINE       node1
ora.node2.vip
      1        ONLINE  ONLINE       node2
ora.oc4j
      1        OFFLINE OFFLINE
ora.racdb.db
      1        ONLINE  ONLINE       node1                    Open
      2        ONLINE  OFFLINE
ora.scan1.vip
      1        ONLINE  ONLINE       node2
ora.scan2.vip
      1        ONLINE  ONLINE       node1
ora.scan3.vip
      1        ONLINE  ONLINE       node1
[grid@node1 ~]$
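In this tabular output the resource name and its state lines sit on separate rows, so a plain grep loses the mapping. A small sketch that tracks the last seen resource name and reports any with an OFFLINE state (the here-doc simulates a fragment of `crsctl status res -t` output):

```shell
#!/bin/sh
# Sketch: list resources with any OFFLINE state in "crsctl status res -t"
# output, where name lines and state lines alternate.
find_offline() {
  awk '
    /^ora\./  { res = $1 }      # a resource-name line
    /OFFLINE/ { print res }     # a state line for that resource
  ' | sort -u
}

find_offline <<'EOF'
ora.gsd
               OFFLINE OFFLINE      node1
               OFFLINE OFFLINE      node2
ora.ons
               ONLINE  ONLINE       node1
ora.oc4j
      1        OFFLINE OFFLINE
EOF
```

On the listing above this would flag ora.gsd, ora.oc4j and the stopped racdb instance 2; ora.gsd and ora.oc4j are commonly left offline on 11.2 systems (GSD exists only for 9i compatibility), so the value of the script is in spotting the unexpected entries.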
More detailed resources on host node1:
[grid@node1 ~]$ crsctl status res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       node1                    Started
ora.crsd
      1        ONLINE  ONLINE       node1
ora.cssd
      1        ONLINE  ONLINE       node1
ora.cssdmonitor
      1        ONLINE  ONLINE       node1
ora.ctssd
      1        ONLINE  ONLINE       node1                    ACTIVE:0
ora.diskmon
      1        ONLINE  ONLINE       node1
ora.evmd
      1        ONLINE  ONLINE       node1
ora.gipcd
      1        ONLINE  ONLINE       node1
ora.gpnpd
      1        ONLINE  ONLINE       node1
ora.mdnsd
      1        ONLINE  ONLINE       node1
[grid@node1 ~]$
More detailed resources on host node2:
[grid@node2 ~]$ crsctl status res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       node2                    Started
ora.crsd
      1        ONLINE  ONLINE       node2
ora.cssd
      1        ONLINE  ONLINE       node2
ora.cssdmonitor
      1        ONLINE  ONLINE       node2
ora.ctssd
      1        ONLINE  ONLINE       node2                    ACTIVE:-11700
ora.diskmon
      1        ONLINE  ONLINE       node2
ora.evmd
      1        ONLINE  ONLINE       node2
ora.gipcd
      1        ONLINE  ONLINE       node2
ora.gpnpd
      1        ONLINE  ONLINE       node2
ora.mdnsd
      1        ONLINE  ONLINE       node2
[grid@node2 ~]$
2.4 Check the node applications
[grid@node1 ~]$ srvctl status nodeapps
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
VIP node2-vip is enabled
VIP node2-vip is running on node: node2
Network is enabled
Network is running on node: node1
Network is running on node: node2
GSD is disabled
GSD is not running on node: node1
GSD is not running on node: node2
ONS is enabled
ONS daemon is running on node: node1
ONS daemon is running on node: node2
eONS is enabled
eONS daemon is running on node: node1
eONS daemon is running on node: node2
[grid@node1 ~]$
2.5 Check the SCAN
Check the SCAN VIP configuration:
[grid@node1 ~]$ srvctl config scan
SCAN name: scan-cluster.com, Network: 1/192.168.0.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /scan-cluster/192.168.0.24
SCAN VIP name: scan2, IP: /scan-cluster/192.168.0.25
SCAN VIP name: scan3, IP: /scan-cluster/192.168.0.26
[grid@node1 ~]$
Check how the SCAN VIPs are actually distributed, and their status:
[grid@node1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node node1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node node1
[grid@node1 ~]$
Check the SCAN listener configuration:
[grid@node1 ~]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521
[grid@node1 ~]$
Check the SCAN listener status:
[grid@node1 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node2
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node node1
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node node1
[grid@node1 ~]$
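A quick way to see how the SCAN VIPs are spread across nodes is to tally the `srvctl status scan` output; the sketch below assumes the "is running on node ..." format shown above, with a here-doc standing in for the live command. In a two-node cluster it is normal for one node to host two of the three SCAN VIPs, as here.

```shell
#!/bin/sh
# Sketch: count SCAN VIPs per node from "srvctl status scan" output.
scan_spread() {
  awk '
    /is running on node/ { count[$NF]++ }
    END { for (n in count) print n ": " count[n] " SCAN VIP(s)" }
  '
}

scan_spread <<'EOF'
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node node1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node node1
EOF
```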
2.6 Check the VIPs and listeners
Check the VIP configuration:
[grid@node1 ~]$ srvctl config vip -n node1
VIP exists.:node1
VIP exists.: /node1-vip/192.168.0.21/255.255.255.0/eth0
[grid@node1 ~]$ srvctl config vip -n node2
VIP exists.:node2
VIP exists.: /node2-vip/192.168.0.31/255.255.255.0/eth0
[grid@node1 ~]$
Check the VIP status, either with:
[grid@node1 ~]$ srvctl status nodeapps
or with:
[grid@node1 ~]$ srvctl status vip -n node1
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
[grid@node1 ~]$ srvctl status vip -n node2
VIP node2-vip is enabled
VIP node2-vip is running on node: node2
[grid@node1 ~]$
Check the local listener configuration:
[grid@node1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
  /u01/app/11.2.0/grid on node(s) node2,node1
End points: TCP:1521
Check the local listener status:
[grid@node1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): node1,node2
[grid@node1 ~]$
2.7 Check ASM
Check the ASM status:
[grid@node1 ~]$ srvctl status asm -a
ASM is running on node1,node2
ASM is enabled.
Check the ASM configuration:
[grid@node1 ~]$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
[grid@node1 ~]$
Check the disk group:
[grid@node1 ~]$ srvctl status diskgroup -g DATA
Disk Group DATA is running on node1,node2
[grid@node1 ~]$
List the ASM disks:
[root@node1 bin]# oracleasm listdisks
VOL1
VOL2
[root@node1 bin]#
Map each ASM disk to its physical device:
[root@node1 bin]# oracleasm querydisk -v -p VOL1
Disk "VOL1" is a valid ASM disk
/dev/sdb1: LABEL="VOL1" TYPE="oracleasm"
[root@node1 bin]#
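When inventorying many volumes, the device path can be pulled out of the `oracleasm querydisk -p` output programmatically. The sketch below parses the `/dev/...: LABEL=...` line shown above (the here-doc simulates the command, which itself must run as root on a real node):

```shell
#!/bin/sh
# Sketch: extract the block device from "oracleasm querydisk -p VOLx" output.
device_of() {
  awk -F: '/LABEL=/ { print $1; exit }'
}

# Simulated output; on a live node, for each volume from "oracleasm listdisks":
#   oracleasm querydisk -p "$vol" | device_of
device_of <<'EOF'
Disk "VOL1" is a valid ASM disk
/dev/sdb1: LABEL="VOL1" TYPE="oracleasm"
EOF
```

This prints `/dev/sdb1` for the sample output.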
2.8 Check clock synchronization between cluster nodes
Check time synchronization from node node1:
[grid@node1 ~]$ cluvfy comp clocksync -verbose
.......
Verification of Clock Synchronization across the cluster nodes was successful.
[grid@node1 ~]$
Check time synchronization from node node2:
[grid@node2 ~]$ cluvfy comp clocksync -verbose
..............
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  node2         -89900.0                  failed
Result:
PRVF-9661 : Time offset is NOT within the specified limits on the following nodes: "[node2]"
PRVF-9652 : Cluster Time Synchronization Services check failed
Verification of Clock Synchronization across the cluster nodes was unsuccessful on all the specified nodes.
[grid@node2 ~]$
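Failed offsets can be picked out of the cluvfy table automatically. The sketch below assumes the three-column "Node Name / Time Offset / Status" rows shown above, with a here-doc standing in for the relevant lines of `cluvfy comp clocksync -verbose`:

```shell
#!/bin/sh
# Sketch: report nodes whose cluvfy "Reference Time Offset" row is "failed".
failed_offsets() {
  awk '$3 == "failed" { print $1 " offset " $2 " msecs" }'
}

failed_offsets <<'EOF'
node1         250.0                     passed
node2         -89900.0                  failed
EOF
```

For the sample rows this prints `node2 offset -89900.0 msecs`, matching the failure seen above.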
Note: the system clock on node2 is out of sync and needs to be corrected.
At this point, the inspection of the Grid Infrastructure is essentially complete.