一.1 Blog Document Structure Diagram
一.2 Preface
一.2.1 Reading Guide and Notes
Dear readers, after working through this post you will have picked up the following skills, and perhaps a few other things you did not know before ~O(∩_∩)O~:
① What OCR is in RAC and what it is used for
② How to back up and restore OCR in RAC
③ Amnesia and split brain in RAC
④ How to recover when the contents of the grid user's log directory $ORACLE_HOME/log have been deleted and the cluster can no longer start (the main topic)
⑤ How to fix file permissions broken by mistake on 11.2 Grid Infrastructure [How to check and fix file permissions on Grid Infrastructure environment (Doc ID 1931142.1)]
⑥ How to fix a mismatch between the ASM instance name and the node name [How to Change 11.2 ASM Configuration to Match ASM Instance Name to the Node Where It Runs? (example, +ASM2 on Node2, etc) (Doc ID 1419424.1)]
⑦ How to use the permission.pl script
Tips:
① This post is published simultaneously on ITPub (http://blog.itpub.net/26736162) and cnblogs (http://www.cnblogs.com/lhrbest)
② All code, software and reference material used in this post can be downloaded from my cloud drive (http://blog.itpub.net/26736162/viewspace-1624453/)
③ If the code formatting looks broken, try the Sogou, 360 or QQ browser, or download the PDF version of this document from: http://blog.itpub.net/26736162/viewspace-1624453/
④ In the command output below, the parts that deserve special attention are shown with a grey background and pink font; for example, in the sample below the key point is that the highest archived log sequence is 33 for thread 1 and 43 for thread 2. Commands themselves are usually highlighted with a yellow background and red font, and comments on code or output are shown in blue.
List of Archived Logs in backup set 11
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- ------------------- ---------- ---------
1 32 1621589 2015-05-29 11:09:52 1625242 2015-05-29 11:15:48
1 33 1625242 2015-05-29 11:15:48 1625293 2015-05-29 11:15:58
2 42 1613951 2015-05-29 10:41:18 1625245 2015-05-29 11:15:49
2 43 1625245 2015-05-29 11:15:49 1625253 2015-05-29 11:15:53
[ZHLHRDB1:root]:/>lsvg -o
T_XDESK_APP1_vg
rootvg
[ZHLHRDB1:root]:/>
00:27:22 SQL> alter tablespace idxtbs read write;
====》2097152*512/1024/1024/1024=1G
If you find any mistakes or omissions in this post, please point them out, either in the ITPub comments or via QQ; your feedback is the biggest motivation for my writing.
一.2.2 Related Reference Links
A good article on physical and logical backups of OCR: https://gjilevski.com/2010/12/20/backup-and-restore-of-ocr-in-grid-infrastructure-11g-r2-11-2-2/
一.2.3 About This Post
Here is how this post came about. A RAC cluster was having startup problems, and because there were far too many files under $GRID_HOME/log I simply emptied that directory. The result was that the cluster became even less able to start, and there were no longer even basic log files; rebooting the OS did not help either. After thinking it over, I compared the directory structure of $GRID_HOME/log on a healthy cluster and noticed that some directories carried a T or t (sticky bit) in their permissions, which suggested the problem was permission related. A search on MOS indeed turned up several notes: on 11.2.0.3.6 and later the fix is fairly easy, but on earlier versions it is not. The workaround I ended up with was to re-run the root.sh script, after which many of the resources registered in the cluster were gone, which in turn brought in OCR backup and restore. In addition, if root.sh is run in the wrong order, or for other reasons, the ASM instance name may end up not matching the node, for example the instance running on rac1 being named +ASM2. This has no real impact, but it is annoying to look at, so fixing the instance-to-node mapping is also worth covering. All of this together leads to the questions listed at the beginning of this post.
一.3 Background Knowledge (collected from the Internet plus my own notes)
While it is running, Clusterware needs two files: OCR and the Voting Disk. Both must be stored on shared storage. OCR is used to solve the amnesia problem, and the Voting Disk is used to solve the split-brain problem.
一.3.1 OCR Disk
Oracle Clusterware keeps the configuration of the whole cluster on shared storage; that storage is the OCR Disk. In the whole cluster only one node may read and write the OCR Disk: this node is called the Master Node. Every node keeps a copy of OCR in memory, and an OCR process on each node reads from that in-memory copy. When the OCR content changes, the OCR process on the Master Node is responsible for propagating the change to the OCR processes on the other nodes.
The amnesia problem arises because each node holds its own copy of the configuration, and changes made on one node may not be synchronized to the others. Oracle's solution is to keep the configuration on shared storage, and that file is the OCR Disk. OCR stores the configuration of the whole cluster as key-value pairs. Before Oracle 10g this file was called the Server Manageability Repository (SRVM); in Oracle 10g this area was redesigned and renamed to OCR. During the Oracle Clusterware installation the installer asks for the OCR location, and the location you specify is recorded in /etc/oracle/ocr.loc (Linux, AIX) or /var/opt/oracle/ocr.loc (Solaris). The equivalent file in Oracle 9i RAC was srvConfig.loc. When Oracle Clusterware starts, it reads this file to find out where to load the OCR content from.
[zfzhlhrdb3:root]:/>cd /etc/oracle
[zfzhlhrdb3:root]:/etc/oracle>ls -l
total 3160
drwxrwx--- 2 root dba 256 Dec 29 14:16 lastgasp
-rw-r--r-- 1 root dba 37 Dec 29 14:10 ocr.loc
-rw-r--r-- 1 root system 0 Dec 29 14:10 ocr.loc.orig
-rw-r--r-- 1 root dba 92 Dec 29 14:10 olr.loc
-rw-r--r-- 1 root system 0 Dec 29 14:10 olr.loc.orig
drwxrwxr-x 5 root dba 256 Dec 29 14:09 oprocd
drwxr-xr-x 3 root dba 256 Dec 29 14:09 scls_scr
-rws--x--- 1 root dba 1606037 Dec 29 14:09 setasmgid
[zfzhlhrdb3:root]:/etc/oracle>
[zfzhlhrdb3:root]:/etc/oracle>more /etc/oracle/ocr.loc
ocrconfig_loc=+DATA
local_only=FALSE
[zfzhlhrdb3:root]:/etc/oracle>
OCR keys
The OCR content is a tree with three main branches: SYSTEM, DATABASE and CRS. Each branch has many sub-branches, and these entries can only be modified by the root user.
一.3.1.1 What OCR Contains
OCR typically contains the following:
· Node membership information
· Mappings between database instances, nodes and other components
· ASM
· Resource profiles (VIPs, services, and so on)
· Service characteristics
· Information about the processes that make up the Oracle cluster
· Information about third-party applications controlled by CRS
[zfzhlhrdb1:root]:/>ocrdump -local -stdout -xml|more|grep -i \<name\>|sed -e 's/\<NAME\>//g' -e 's/\<\/NAME\>//g'|awk -F . '{print $1,$2,$3}'|uniq
SYSTEM
SYSTEM crs
SYSTEM crs usersecurity
SYSTEM crs deny
SYSTEM crs user_default_dir
SYSTEM ORA_CRS_HOME
SYSTEM WALLET
SYSTEM GNS
SYSTEM version
SYSTEM version localhost
SYSTEM version activeversion
SYSTEM GPnP
SYSTEM GPnP profiles
SYSTEM css
SYSTEM css nodenum_hint
SYSTEM network
SYSTEM network haip
SYSTEM OHASD
SYSTEM OHASD DM
SYSTEM OHASD SERVERPOOLS
SYSTEM OHASD SERVERS
SYSTEM OHASD TYPES
SYSTEM OHASD RESOURCES
SYSTEM CRS
SYSTEM CRS JOIN_SIGNATURE
SYSTEM OLR
SYSTEM OLR MANUALBACKUP
SYSTEM OCR
SYSTEM OCR BACKUP
DATABASE
DATABASE NODEAPPS
DATABASE VIP_RANGE
DATABASE LOG
DATABASE ASM
DATABASE DATABASES
CRS
[zfzhlhrdb1:root]:/>ocrdump -stdout -xml|more|grep -i \<name\>|sed -e 's/\<NAME\>//g' -e 's/\<\/NAME\>//g'|awk -F . '{print $1,$2,$3}'|uniq
SYSTEM
SYSTEM version
SYSTEM version activeversion
SYSTEM version hostnames
SYSTEM versionstring
SYSTEM WALLET
SYSTEM WALLET APPQOSADMIN
SYSTEM GNS
SYSTEM css
SYSTEM css interfaces
SYSTEM crs
SYSTEM crs versions
SYSTEM crs usersecurity
SYSTEM crs deny
SYSTEM crs user_default_dir
SYSTEM crs e2eport
SYSTEM crs uiport
SYSTEM crs 11
SYSTEM ACFS
SYSTEM ORA_CRS_HOME
SYSTEM evm
SYSTEM evm debug
SYSTEM evm cevmkey
SYSTEM evm rmport
SYSTEM evm cevmport
SYSTEM DIAG
SYSTEM DIAG status
SYSTEM local_only
SYSTEM WLM
SYSTEM GPnP
SYSTEM GPnP profiles
SYSTEM JAZNFILE
<name>jazn com</name>
<name>qosadmin</name>
<name>oc4jadmin</name>
<name>JtaAdmin</name>
<name>ascontrol_appadmin</name>
<name>oc4j-administrators</name>
<name>qosadmin</name>
<name>oc4jadmin</name>
<name>JtaAdmin
<name>qos_admin</name>
<name>qosadmin</name>
<name>oc4j-app-administrators</name>
<name>users</name>
<name>ascontrol_monitor</name>
<name>
<name>qosadmin</name>
<name>oc4jadmin</name>
<name>qos_admin</name>
<name>jazn com/oc4j-administrators</name>
<name>login</name>
<name>subject propagation</name>
<name>oracle security jazn
<name>jazn com/*</name>
<name>administration</name>
<name>jazn com</name>
<name>jazn com/ascontrol_admin</name>
<name>login</name>
<name>subject propagation</name>
<name>oracle security jazn
<name>oracl
<name>oracle security jazn
<name>jazn com/*</name>
<name>administration</name>
<name>jazn com</name>
<name>jazn com/oc4j-app-administrators</name>
<name>login</name>
<name>jazn com/users</name>
<name>login</name>
<name>oracle security jazn
<name>coreid password header</name>
<name>coreid resource operation</name>
<name>addAllRoles</name>
<name>coreid password attribute</name>
<name>coreid resource type</name>
<name>coreid name attribute</name>
<name>coreid resource name</name>
<name>core
<name>oracle security jazn
<name>addAllRoles</name>
<name>oracle security wss
<name>addAllRoles</name>
<name>oracle security jazn
<name>addAllRoles</name>
<name>oracle security jazn
<name>addAllRoles</name>
<name>oracle security jazn
<name>addAllRoles</name>
<name>oracle security wss
<name>addAllRoles</name>
<name>issuer name 1</name>
SYSTEM JAZNFILE STATE
SYSTEM CRSADMIN
SYSTEM CRSUSER
SYSTEM CRSD
SYSTEM CRSD DM
SYSTEM CRSD SERVERPOOLS
SYSTEM CRSD SERVERS
SYSTEM CRSD TYPES
SYSTEM CRSD RESOURCES
SYSTEM OCR
SYSTEM OCR BACKUP
DATABASE
DATABASE NODEAPPS
DATABASE NODEAPPS zfzhlhrdb1
DATABASE NODEAPPS zfzhlhrdb2
DATABASE VIP_RANGE
DATABASE LOG
DATABASE ASM
DATABASE ASM zfzhlhrdb1
DATABASE ASM zfzhlhrdb2
DATABASE DATABASES
CRS
CRS CUR
CRS HIS
CRS SEC
CRS STAGE
CRS STAGE node1
CRS STAGE node2
[zfzhlhrdb1:root]:/>
一.3.1.2 How OCR Content Is Organized
1. As with the Windows registry, OCR stores its content as key-value pairs.
2. The whole OCR is a tree with three main branches: SYSTEM, DATABASE and CRS.
3. Each branch has many sub-branches, and these entries can only be modified by the root user.
4. The ocrdump command can export the whole content or just a single branch.
一.3.2 Voting Disk
The Voting Disk records node membership. When a split brain occurs, it decides which partition keeps control of the cluster; the other partitions are evicted. Its location is also specified during the Clusterware installation, and afterwards you can find it with the following command.
$crsctl query css votedisk
[zfzhlhrdb3:root]:/dev>crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 83cb4909d3254f4ebf1181b024aaf539 (/dev/rhdisk2) [DATA]
Located 1 voting disk(s).
[zfzhlhrdb3:root]:/dev>ls -l /dev/rhdisk*
crw------- 2 root system 19, 0 Dec 29 10:02 /dev/rhdisk0
crw------- 1 root system 19, 4 Dec 29 11:15 /dev/rhdisk1
crw-rw---- 1 grid asmadmin 19, 6 Jan 08 15:17 /dev/rhdisk2
crw-rw---- 1 root system 19, 3 Dec 29 11:15 /dev/rhdisk3
crw-rw---- 1 root system 19, 1 Dec 29 11:15 /dev/rhdisk4
crw------- 1 root system 19, 7 Dec 29 11:15 /dev/rhdisk5
crw------- 1 root system 19, 8 Dec 29 11:15 /dev/rhdisk6
crw------- 1 root system 19, 2 Dec 29 11:15 /dev/rhdisk7
crw------- 1 root system 19, 5 Dec 29 11:15 /dev/rhdisk8
[zfzhlhrdb3:root]:/dev>
一.3.3 Amnesia
The cluster configuration is not kept in a single central place: every node has a local copy. While the cluster is running normally, a configuration change made on any node is automatically propagated to the other nodes. There is one special case: node A is shut down cleanly, the configuration is changed on node B, node B is then shut down and node A is started. In that situation the configuration change is lost; this is what is called amnesia. OCR is used to solve the amnesia problem.
Amnesia happens when one node updates the OCR content while some other nodes are shut down, in maintenance or rebooting, so the OCR master process has no chance to push the update into those nodes' caches, leaving them inconsistent. For example, node A issues a command to add an OCR mirror while node B is rebooting; after the reboot A has finished the update, but B does not know that a new OCR mirror disk has been added, and amnesia is born.
For example, after node bo2dbp adds a new OCR device, the configuration file changes and the ocr.loc on node bo2dbs should be updated as well; if bo2dbs happens to be shut down or rebooting at that moment, it never receives the update, which is a classic case of amnesia.
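A minimal, hypothetical sketch of that situation (the device paths are made up for illustration): after bo2dbp adds the mirror while bo2dbs is down, the two copies of ocr.loc might diverge like this:
# /etc/oracle/ocr.loc on bo2dbp -- updated with the new mirror
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/dev/raw/raw5
local_only=FALSE
# /etc/oracle/ocr.loc on bo2dbs -- stale, it never saw the mirror being added
ocrconfig_loc=/dev/raw/raw1
local_only=FALSE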
一.3.4 Split Brain
In a cluster, the nodes learn about each other's health through some mechanism (the heartbeat) so that they can coordinate their work. Suppose that only the heartbeat fails while every node keeps running normally. Each node then believes that all the other nodes are down, that it is the "sole survivor" of the cluster, and that it should take control of the whole cluster. Because the storage in a cluster is shared, this means a data disaster; this situation is called split brain.
The usual way to solve it is a voting algorithm (Quorum Algorithm), which works as follows. The nodes report their health to each other through the heartbeat, and each report received counts as one vote. In a three-node cluster running normally, each node holds 3 votes. If node A's heartbeat fails while A itself keeps running, the cluster splits into two partitions: A on one side and the remaining two nodes on the other, and one partition must be evicted for the cluster to stay healthy. After A's heartbeat fails, B and C form one partition with 2 votes while A has only 1 vote; by the voting algorithm the partition formed by B and C wins control and A is evicted.
With only two nodes the voting algorithm breaks down, because each node has exactly 1 vote. A third device is therefore introduced: the Quorum Device, usually a shared disk, also called the Quorum Disk, which itself counts as one vote. When the heartbeat between the two nodes fails, both nodes race to claim the Quorum Disk, and the earliest request is served first. The node that gets the Quorum Disk therefore holds 2 votes, and the other node is evicted.
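As a back-of-the-envelope illustration of the vote counting for a two-node cluster plus a quorum disk (a rough shell sketch of the arithmetic, not an Oracle tool; the numbers are only for illustration):
# two nodes, one vote each, plus one vote for the quorum (voting) disk
node_votes=2
quorum_disk_votes=1
total=$((node_votes + quorum_disk_votes))   # 3 votes in total
majority=$((total / 2 + 1))                 # 2 votes are needed to survive
echo "total=$total majority=$majority"
# The node that reaches the quorum disk first holds 1 + 1 = 2 votes, meets the majority and survives;
# the other node holds only 1 vote and is evicted from the cluster.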
一.3.5 OCR Command Family
一.3.5.1 ocrdump
This command prints the contents of OCR in ASCII form. It cannot be used for OCR backup and restore: the file it produces is only for reading, not for recovery.
Syntax: ocrdump [-stdout] [filename] [-keyname name] [-xml]
Parameters:
-stdout: print the content to the screen
filename: write the content to a file
-keyname: print only the given key and its sub-keys
-xml: print the output in XML format
Example: print the contents of the SYSTEM.css key to the screen in XML format
[root@raw1 bin]# ./ocrdump -stdout -keyname system.css -xml|more
<OCRDUMP>
<TIMESTAMP>03/08/2010 04:28:41</TIMESTAMP>
<DEVICE>/dev/raw/raw1</DEVICE>
<COMMAND>./ocrdump.bin -stdout -keyname system.css -xml </COMMAND>
......
While it runs, this command writes a log file named ocrdump_<pid>.log under $CRS_HOME/log/<node_name>/client; if the command fails, check that log for the cause.
一.3.5.2 ocrcheck
The ocrcheck command verifies the consistency of the OCR content, and it writes an ocrcheck_<pid>.log file under $CRS_HOME/log/<node_name>/client while it runs. It takes no parameters.
[zfzhlhrdb1:root]:/>ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3176
Available space (kbytes) : 258944
ID : 362503260
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
一.3.5.3 ocrconfig
This command maintains the OCR devices. During the Clusterware installation, if you choose External Redundancy you can only specify one OCR location, but Oracle allows two OCR devices mirroring each other to protect against a single point of failure of the OCR disk. Unlike voting disks, there can be at most two OCR devices: one Primary OCR and one Mirror OCR.
[root@raw1 bin]# ./ocrconfig --help
Name:
ocrconfig - Configuration tool for Oracle Cluster Registry.
Synopsis:
ocrconfig [option]
option:
-export <filename> [-s online]
- Export cluster register contents to a file
-import <filename> - Import cluster registry contents from a file
-upgrade [<user> [<group>]]
- Upgrade cluster registry from previous version
-downgrade [-version <version string>]
- Downgrade cluster registry to the specified version
-backuploc <dirname> - Configure periodic backup location
-showbackup - Show backup information
-restore <filename> - Restore from physical backup
-replace ocr|ocrmirror [<filename>] - Add/replace/remove a OCR device/file
-overwrite - Overwrite OCR configuration on disk
-repair ocr|ocrmirror <filename> - Repair local OCR configuration
-help - Print out this help information
Note:
A log file will be created in
$ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
you have file creation privileges in the above directory before
running this tool.
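Based on the options shown in the help above, a hedged example (run as root in releases that use this syntax; the raw device path is hypothetical) of adding a Mirror OCR and removing it again:
ocrconfig -replace ocrmirror /dev/raw/raw2    # add or replace the Mirror OCR device
ocrcheck                                      # check that both devices are now reported
ocrconfig -replace ocrmirror                  # remove the Mirror OCR again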
一.3.6 Backing Up and Restoring OCR in Oracle RAC
Oracle Clusterware keeps the configuration of the whole cluster on shared storage, including the list of cluster nodes, the mapping of database instances to nodes, and the CRS application resource definitions; in other words, it all lives on the OCR disk (or an OCFS file). The importance of this configuration data speaks for itself, so it is recommended to take an OCR backup immediately before or after any operation that changes the OCR configuration.
Because the OCR content is so important, Oracle backs it up every 4 hours and keeps the last 3 backups plus the last backup of the previous day and of the previous week. The backup is taken by the CRSD process on the Master Node, and the default backup location is $CRS_HOME/cdata/<cluster_name>. After each backup the file names are rotated to reflect the backup order; the most recent backup is called backup00.ocr. Besides keeping these files locally, the DBA should also keep a copy on some other storage device to protect against storage failures.
As with Oracle database backup and recovery, OCR has the concepts of physical and logical backups, so there are two ways to back up and two ways to restore.
Commonly used commands:
crsctl query css votedisk
lquerypv -h /dev/rhdisk2
crsctl stop has -f
crsctl start has
crsctl stat res -t
一.3.6.1 Backup and Restore with dd
Back up the voting disk:
dd if=/dev/raw/raw3 of=/tmp/votedisk_lhr.bak bs=1024k count=4
Restore the voting disk:
dd if=/tmp/votedisk_lhr.bak of=/dev/raw/raw3 bs=1024k count=4
Note: in 11g, using dd for backup and restore is not recommended; the disk header is normally the first 4 KB.
一.3.6.2 Repairing the Disk Header with kfed
dd if=/dev/rhdisk2 of=/asm_rhdisk2_dd.bak bs=1024 count=4
dd if=/dev/zero of=/dev/rhdisk2 bs=1024 count=4
kfed repair /dev/rhdisk2
For more about kfed, kfod and amdu, see: http://blog.itpub.net/26736162/viewspace-1694198/
http://blog.itpub.net/26736162/viewspace-1694199/
一.3.6.3 Restoring the Disk Group Metadata with md_backup and md_restore
asmcmd md_backup /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/asm_md_backup.bak
asmcmd md_restore /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/asm_md_backup.bak
dd if=/dev/rhdisk2 of=/asm_rhdisk2_dd.bak bs=1024k count=4
dd if=/dev/zero of=/dev/rhdisk2 bs=1024k count=4
crsctl stop has -f
crsctl start has
ASMCMD [+] > startup force nomount;
ASMCMD [+] > md_restore /asm_rhdisk2_dd.bak
ASMCMD [+] > md_backup /rman/asm_md.bak
dd if=/dev/zero of=/dev/rhdisk2 bs=1024 count=4
crsctl stop has -f
crsctl start has
ASMCMD [+] > startup force nomount;
ASMCMD [+] > md_restore /rman/asm_md.bak
For more about md_backup and md_restore, see: http://blog.itpub.net/26736162/viewspace-2121309/
一.3.6.4 Physical Backup and Restore (Automatic Backup)
By default, Oracle backs up the OCR every 4 hours and keeps the last 3 copies, plus the last copy of the previous day and of the previous week. The backup frequency and the number of copies cannot be customized.
The OCR backup is taken by the CRSD process on the Master Node, so the default backup location is $CRS_HOME/cdata/<cluster_name>.
The backup files are renamed automatically to reflect the backup order; the most recent backup is called backup00.ocr.
Because the backup runs on the Master Node, the backup files exist only on that node.
If the Master Node crashes, one of the remaining nodes takes over.
The backup directory can be changed with ocrconfig -backuploc <directory_name> (see the example after this list).
There can be at most two OCR devices, one Primary OCR and one Mirror OCR, which mirror each other to avoid a single point of failure.
When OCR is stored as a file, a physical backup must not be taken with a simple OS-level copy command: doing so can leave the OCR unusable.
A physical backup can only be restored with the restore method; the import method is not supported for it.
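For example, to move the automatic backups to a dedicated directory (run as root; the directory name is only an example):
ocrconfig -backuploc /backup/ocr    # change the automatic backup location
ocrconfig -showbackup               # existing backups are listed; new ones will go to the new location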
1. Check the existing backups
[zfzhlhrdb2:grid]:/home/grid>ocrconfig -showbackup
zfzhlhrdb1 2016/06/30 15:13:46 /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs/backup00.ocr
zfzhlhrdb1 2016/06/30 11:13:45 /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs/backup01.ocr
zfzhlhrdb1 2016/06/30 07:13:45 /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs/backup02.ocr
zfzhlhrdb1 2016/06/29 03:13:41 /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs/day.ocr
zfzhlhrdb1 2016/06/20 03:13:08 /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs/week.ocr
PROT-25: Manual backups for the Oracle Cluster Registry are not available
[zfzhlhrdb2:grid]:/home/grid>oerr prot 25
00025, 0, "Manual backups for the Oracle Cluster Registry are not available"
// *Cause: Manual backups for the Oracle Cluster Registry were not yet created.
// *Action: Manual backups can be created using 'ocrconfig -manualbackup'
// command.
[zfzhlhrdb2:grid]:/home/grid>ocrconfig -manualbackup
PROT-20: Insufficient permission to proceed. Require privileged user
[zfzhlhrdb2:grid]:/home/grid>exit
[zfzhlhrdb2:root]:/>
[zfzhlhrdb2:root]:/>
[zfzhlhrdb2:root]:/>ocrconfig -manualbackup
zfzhlhrdb1 2016/06/30 16:21:34 /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs/backup_20160630_162134.ocr
[zfzhlhrdb2:root]:/>ocrconfig -showbackup
zfzhlhrdb1 2016/06/30 15:13:46 /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs/backup00.ocr
zfzhlhrdb1 2016/06/30 11:13:45 /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs/backup01.ocr
zfzhlhrdb1 2016/06/30 07:13:45 /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs/backup02.ocr
zfzhlhrdb1 2016/06/29 03:13:41 /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs/day.ocr
zfzhlhrdb1 2016/06/20 03:13:08 /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs/week.ocr
zfzhlhrdb1 2016/06/30 16:21:34 /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs/backup_20160630_162134.ocr
Running it on node 1 shows that both nodes see the same information:
[zfzhlhrdb1:root]:/>ocrconfig -showbackup
zfzhlhrdb1 2016/06/30 15:13:46 /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs/backup00.ocr
zfzhlhrdb1 2016/06/30 11:13:45 /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs/backup01.ocr
zfzhlhrdb1 2016/06/30 07:13:45 /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs/backup02.ocr
zfzhlhrdb1 2016/06/29 03:13:41 /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs/day.ocr
zfzhlhrdb1 2016/06/20 03:13:08 /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs/week.ocr
zfzhlhrdb1 2016/06/30 16:21:34 /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs/backup_20160630_162134.ocr
[zfzhlhrdb1:root]:/>
2. Restore
ocrconfig -restore /app/crs/product/11.0.6/crs/cdata/racluster/backup01.ocr
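A hedged sketch of the full restore procedure around that command on 11.2 (run everything as root; the backup file name is taken from the -showbackup output above):
# 1. stop the clusterware stack on every node
crsctl stop crs -f
# 2. on one node, restore the OCR from the chosen physical backup
ocrconfig -restore /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs/backup00.ocr
# 3. restart the stack on all nodes and verify
crsctl start crs
ocrcheck
cluvfy comp ocr -n all -verbose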
3. Check the configuration
[grid@rac1 ~]$ more /etc/oracle/ocr.loc
ocrconfig_loc=+DATA
local_only=FALSE
[grid@rac1 ~]$
4. Check the processes
[grid@rac1 ~]$ ps -ef | grep d.bin
root 4694 1 0 10:00 ? 00:00:13 /u01/grid/bin/ohasd.bin reboot
grid 4821 1 0 10:00 ? 00:00:28 /u01/grid/bin/oraagent.bin
root 4823 1 0 10:00 ? 00:00:04 /u01/grid/bin/orarootagent.bin
grid 4846 1 0 10:00 ? 00:00:00 /u01/grid/bin/gipcd.bin
grid 4859 1 0 10:00 ? 00:00:00 /u01/grid/bin/mdnsd.bin
grid 4874 1 0 10:00 ? 00:00:01 /u01/grid/bin/gpnpd.bin
root 8645 1 0 10:47 ? 00:00:04 /u01/grid/bin/cssdmonitor
root 8662 1 0 10:48 ? 00:00:05 /u01/grid/bin/cssdagent
grid 8664 1 0 10:48 ? 00:00:01 /u01/grid/bin/diskmon.bin -d -f
grid 8688 1 0 10:48 ? 00:00:40 /u01/grid/bin/ocssd.bin
root 8754 1 0 10:50 ? 00:00:01 /u01/grid/bin/octssd.bin
grid 8770 1 0 10:50 ? 00:00:02 /u01/grid/bin/evmd.bin
grid 8888 1 0 10:51 ? 00:00:00 /u01/grid/bin/oclskd.bin
root 8920 1 0 10:51 ? 00:00:07 /u01/grid/bin/crsd.bin reboot
root 8966 1 0 10:51 ? 00:00:00 /u01/grid/bin/oclskd.bin
grid 9013 8770 0 10:51 ? 00:00:00 /u01/grid/bin/evmlogger.bin -o /u01/grid/evm/log/evmlogger.info -l /u01/grid/evm/log/evmlogger.log
grid 9055 1 0 10:51 ? 00:00:06 /u01/grid/bin/oraagent.bin
root 9059 1 0 10:51 ? 00:00:42 /u01/grid/bin/orarootagent.bin
grid 9283 1 0 10:52 ? 00:00:00 /u01/grid/bin/tnslsnr LISTENER -inherit
oracle 9549 1 0 10:58 ? 00:00:26 /u01/grid/bin/oraagent.bin
oracle 9773 1 0 11:00 ? 00:00:00 /u01/grid/bin/oclskd.bin
grid 18618 1 0 13:46 ? 00:00:00 /u01/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
grid 22527 21370 0 14:58 pts/2 00:00:00 grep d.bin
[grid@rac1 ~]$
Notes:
ocssd: manages and coordinates the relationships between the cluster nodes and handles node-to-node communication. This process is critical; if it dies unexpectedly, the system reboots itself automatically. In extreme cases, if ocssd cannot start properly, the operating system can end up rebooting in a loop.
crsd: monitors the resources on each node and automatically restarts or relocates a resource when it misbehaves.
evmd: a background event-detection daemon.
oclskd: a daemon introduced in Oracle 11g (11.1.0.6) that monitors the RAC database instances; when an instance hangs, it reboots that node.
一.3.6.5 Logical Backup and Restore (Manual Backup)
OCR can also be exported and imported manually, as follows:
ocrconfig -export /tmp/ocr_bak
ocrconfig -import /tmp/ocr_bak
A backup produced with ocrconfig -export is referred to as a logical backup.
Before and after any major OCR configuration change, such as adding or removing a node, modifying cluster resources or creating a database, taking a logical backup is recommended.
If the OCR is damaged by a configuration mistake, it can be recovered with ocrconfig -import.
This logical method can also be used to restore a lost or damaged OCR disk (or file).
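A hedged sketch of how the export and import are typically used together (run as root; /tmp/ocr_bak is the file name from the commands above):
ocrconfig -export /tmp/ocr_bak    # take a logical export before a major configuration change
# ...if the change later corrupts the OCR, stop the stack and import the export back:
crsctl stop crs -f
ocrconfig -import /tmp/ocr_bak
crsctl start crs
ocrcheck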
一.3.7 How to Fix Permission Mistakes on an 11.2 Grid Home
About file and directory permissions under the Oracle GRID_HOME:
① You ran chown -R by mistake and changed the ownership of the whole /u01/app tree; how do you recover?
② You deleted everything under $GRID_HOME/log and the cluster will not start; how do you recover?
After a mistaken chown -R changed the ownership and permissions of the whole /u01/app tree, Grid Infrastructure could no longer start. A search on MOS turned up the note Tips for checking file permissions on GRID environment (ID 1931142.1).
That note explains that a few files under $GRID_HOME/crs/utl record the correct permissions of every file and directory in the GRID_HOME:
Check the permissions from the following 2 files which are created during Grid Infrastructure installation.
In $GRID_HOME/crs/utl (for 11.2 and 12.1.0.1) and <GRID_HOME>/crs/utl/<hostname> (for 12.1.0.2) directory:
crsconfig_dirs: which has all directories listed in <GRID_HOME> and their permissions
crsconfig_fileperms: which has list of files and their permissions and locations in <GRID_HOME>.
Let's verify this; cd $ORACLE_HOME/crs/utl:
[root@rac2 bin]# cd /home/grid/app/11.2/grid/crs/utl
[root@rac2 utl]# ls -ltr
total 324
-rw-r--r-- 1 root root 1128 Aug 11 09:48 usrvip
-rw-r--r-- 1 root root 8437 Aug 11 09:48 srvctl
......
-rw-r--r-- 1 root root 12102 Aug 11 09:48 crsconfig_files
-rw-r--r-- 1 root root 13468 Aug 11 09:48 crsconfig_fileperms
-rw-r--r-- 1 root root 8666 Aug 11 09:48 crsconfig_dirs
-rw-r--r-- 1 root root 699 Aug 11 09:48 crfsetenv
-rw-r--r-- 1 root root 1280 Aug 11 09:48 cmdllroot.sh
-rw-r--r-- 1 root root 3680 Aug 11 09:48 cluutil
-rw-r--r-- 1 root root 1648 Aug 11 09:48 clsrwrap
-rw-r--r-- 1 root root 540 Aug 11 09:48 appvipcfg
[zfzhlhrdb1:grid]:/oracle/app/11.2.0/grid/crs/utl>more crsconfig_dirs
# Copyright (c) 2009, 2013, Oracle and/or its affiliates. All rights reserved.
# The values in each line use the following format:
#
# OSLIST DIRNAME OWNER GROUP CLOSED-PERMS OPEN-PERMS
#
# Note:
# 1) OSLIST is a comma-separated list of platforms on which the directory
# needs to be created. 'all' indicates that the directory needs to be
# created on every platform. OSLIST MUST NOT contain whitespace.
# 2) Permissions need to be specified AS OCTAL NUMBERS. If permissions are
# not specified, default (umask) values will be used.
#
# TBD: OPEN-PERMS need to be added for each dir
all /oracle/app/11.2.0/grid/cdata grid dba 0775
all /oracle/app/11.2.0/grid/cdata/zfzhlhrdb-crs grid dba 0775
all /oracle/app/11.2.0/grid/cfgtoollogs grid dba 0775
all /oracle/app/11.2.0/grid/cfgtoollogs/crsconfig grid dba 0775
all /oracle/app/11.2.0/grid/log grid dba 0775
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1 root dba 01755
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/crsd root dba 0750
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/ctssd root dba 0750
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/evmd grid dba 0750
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/cssd grid dba 0750
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/mdnsd grid dba 0750
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/gpnpd grid dba 0750
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/gnsd root dba 0750
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/srvm grid dba 0750
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/gipcd grid dba 0750
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/diskmon grid dba 0750
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/cvu grid dba 0750
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/cvu/cvulog grid dba 0750
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/cvu/cvutrc grid dba 0750
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/acfssec root dba 0755
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/acfsrepl grid dba 0750
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/acfslog grid dba 0750
all /oracle/app/11.2.0/grid/cdata/localhost grid dba 0755
all /oracle/app/11.2.0/grid/cdata/zfzhlhrdb1 grid dba 0755
all /oracle/app/11.2.0/grid/cv grid dba 0775
all /oracle/app/11.2.0/grid/cv/log grid dba 0775
all /oracle/app/11.2.0/grid/cv/init grid dba 0775
all /oracle/app/11.2.0/grid/cv/report grid dba 0775
all /oracle/app/11.2.0/grid/cv/report/html grid dba 0775
all /oracle/app/11.2.0/grid/cv/report/text grid dba 0775
all /oracle/app/11.2.0/grid/cv/report/xml grid dba 0775
# These dirs must be owned by crsuser in SIHA, and $SUPERUSER in cluster env.
# 'HAS_USER' is set appropriately in roothas.pl and rootcrs.pl for this
# purpose
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/ohasd root dba 0750
all /oracle/app/11.2.0/grid/lib root dba 0755
all /oracle/app/11.2.0/grid/bin root dba 0755
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/agent root dba 01775
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/agent/crsd root dba 01777
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/agent/ohasd root dba 01775
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/client grid dba 01777
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/racg grid dba 01775
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/racg/racgmain grid dba 01777
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/racg/racgeut grid dba 01777
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/racg/racgevtf grid dba 01777
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/admin grid dba 0750
all /oracle/app/11.2.0/grid/log/diag/clients grid asmadmin 01770
all /oracle/app/11.2.0/grid/evm grid dba 0750
all /oracle/app/11.2.0/grid/evm/init grid dba 0750
all /oracle/app/11.2.0/grid/auth/evm/zfzhlhrdb1 root dba 01777
all /oracle/app/11.2.0/grid/evm/log grid dba 01770
all /oracle/app/11.2.0/grid/eons/init grid dba 0750
all /oracle/app/11.2.0/grid/auth/ohasd/zfzhlhrdb1 root dba 01777
all /oracle/app/11.2.0/grid/mdns grid dba 0750
all /oracle/app/11.2.0/grid/mdns/init grid dba 0750
all /oracle/app/11.2.0/grid/gipc grid dba 0750
all /oracle/app/11.2.0/grid/gipc/init grid dba 0750
all /oracle/app/11.2.0/grid/gnsd root dba 0750
all /oracle/app/11.2.0/grid/gnsd/init root dba 0750
all /oracle/app/11.2.0/grid/gpnp grid dba 0750
all /oracle/app/11.2.0/grid/gpnp/init grid dba 0750
all /oracle/app/11.2.0/grid/ohasd grid dba 0750
all /oracle/app/11.2.0/grid/ohasd/init grid dba 0750
all /oracle/app/11.2.0/grid/gpnp grid dba 0750
all /oracle/app/11.2.0/grid/gpnp/profiles grid dba 0750
all /oracle/app/11.2.0/grid/gpnp/profiles/peer grid dba 0750
all /oracle/app/11.2.0/grid/gpnp/wallets grid dba 01750
all /oracle/app/11.2.0/grid/gpnp/wallets/root grid dba 01700
all /oracle/app/11.2.0/grid/gpnp/wallets/prdr grid dba 01750
all /oracle/app/11.2.0/grid/gpnp/wallets/peer grid dba 01700
all /oracle/app/11.2.0/grid/gpnp/wallets/pa grid dba 01700
all /oracle/app/11.2.0/grid/mdns grid dba 0750
all /oracle/app/11.2.0/grid/gpnp grid dba 0750
all /oracle/app/11.2.0/grid/gpnp/zfzhlhrdb1/profiles grid dba 0750
all /oracle/app/11.2.0/grid/gpnp/zfzhlhrdb1/profiles/peer grid dba 0750
all /oracle/app/11.2.0/grid/gpnp/zfzhlhrdb1/wallets grid dba 01750
all /oracle/app/11.2.0/grid/gpnp/zfzhlhrdb1/wallets/root grid dba 01700
all /oracle/app/11.2.0/grid/gpnp/zfzhlhrdb1/wallets/prdr grid dba 01750
all /oracle/app/11.2.0/grid/gpnp/zfzhlhrdb1/wallets/peer grid dba 01700
all /oracle/app/11.2.0/grid/gpnp/zfzhlhrdb1/wallets/pa grid dba 01700
all /oracle/app/11.2.0/grid/css grid dba 0711
all /oracle/app/11.2.0/grid/css/init grid dba 0711
all /oracle/app/11.2.0/grid/css/log grid dba 0711
all /oracle/app/11.2.0/grid/auth/css/zfzhlhrdb1 root dba 01777
all /oracle/app/11.2.0/grid/crs root dba 0755
all /oracle/app/11.2.0/grid/crs/init root dba 0755
all /oracle/app/11.2.0/grid/crs/profile root dba 0755
all /oracle/app/11.2.0/grid/crs/script root dba 0755
all /oracle/app/11.2.0/grid/crs/template root dba 0755
all /oracle/app/11.2.0/grid/auth/crs/zfzhlhrdb1 root dba 01777
all /oracle/app/11.2.0/grid/crs/log grid dba 01750
all /oracle/app/11.2.0/grid/crs/trace grid dba 01750
all /oracle/app/11.2.0/grid/crs/public grid dba 01777
all /oracle/app/11.2.0/grid/ctss root dba 0755
all /oracle/app/11.2.0/grid/ctss/init root dba 0755
all /oracle/app/11.2.0/grid/racg/usrco grid dba
all /oracle/app/11.2.0/grid/racg/dump grid dba 0775
all /oracle/app/11.2.0/grid/srvm/admin grid dba 0775
all /oracle/app/11.2.0/grid/srvm/log grid dba 0775
all /oracle/app/11.2.0/grid/evm/admin/conf grid dba 0750
all /oracle/app/11.2.0/grid/evm/admin/logger grid dba 0750
all /oracle/app/11.2.0/grid/crf root dba 0750
all /oracle/app/11.2.0/grid/crf/admin root dba 0750
all /oracle/app/11.2.0/grid/crf/admin/run grid dba 0750
all /oracle/app/11.2.0/grid/crf/admin/run/crfmond root dba 0700
all /oracle/app/11.2.0/grid/crf/admin/run/crflogd root dba 0700
all /oracle/app/11.2.0/grid/crf/db root dba 0750
all /oracle/app/11.2.0/grid/crf/db/zfzhlhrdb1 root dba 0750
all /oracle/app/11.2.0/grid/osysmond root dba 0755
all /oracle/app/11.2.0/grid/osysmond/init root dba 0755
all /oracle/app/11.2.0/grid/ologgerd root dba 0755
all /oracle/app/11.2.0/grid/ologgerd/init root dba 0755
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/crfmond root dba 0750
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/crflogd root dba 0750
unix /etc/oracle/oprocd root dba 0775
unix /etc/oracle/oprocd/check root dba 0770
unix /etc/oracle/oprocd/stop root dba 0770
unix /etc/oracle/oprocd/fatal root dba 0770
unix /etc/oracle/scls_scr root dba 0755
unix /etc/oracle/scls_scr/zfzhlhrdb1 root dba 0755
unix /var/tmp/.oracle root dba 01777
unix /tmp/.oracle root dba 01777
unix /oracle/app/11.2.0/grid/log/zfzhlhrdb1/acfsreplroot root dba 0750
# create $ID, if it doesn't exist (applicable only in dev env)
unix /etc root root 0755
unix /oracle/app/11.2.0/grid root dba 0755
# Last Gasp files directory - change "unix" to "all"
# once Windows makes a directory decision.
unix /etc/oracle/lastgasp root dba 0770
unix /etc/rc.d/rc2.d root root 0755
[zfzhlhrdb1:grid]:/oracle/app/11.2.0/grid/crs/utl> more crsconfig_fileperms
# Copyright (c) 2009, 2013, Oracle and/or its affiliates. All rights reserved.
# The values in each line use the following format:
#
# OSLIST FILENAME OWNER GROUP PERMS
#
# Note:
# 1) OSLIST is a comma-separated list of platforms on which the file
# permissions need to be set. 'all' indicates that the directory needs
# to be created on every platform. OSLIST MUST NOT contain whitespace.
# 2) Permissions need to be specified AS OCTAL NUMBERS. If permissions
# are not specified, default (umask) values will be used.
# 3) The fields within each line of this file must be delimited by a single space
#
unix /oracle/app/11.2.0/grid/log/zfzhlhrdb1/alertzfzhlhrdb1.log grid dba 0664
unix /oracle/app/11.2.0/grid/bin/usrvip root dba 0755
unix /oracle/app/11.2.0/grid/bin/appvipcfg root dba 0755
unix /oracle/app/11.2.0/grid/crs/install/preupdate.sh grid dba 0755
unix /oracle/app/11.2.0/grid/crs/install/s_crsconfig_defs grid dba 0755
unix /oracle/app/11.2.0/grid/bin/cluutil grid dba 0755
unix /oracle/app/11.2.0/grid/bin/ocrcheck root dba 0755
unix /oracle/app/11.2.0/grid/bin/ocrcheck.bin root dba 0755
unix /oracle/app/11.2.0/grid/bin/ocrconfig root dba 0755
unix /oracle/app/11.2.0/grid/bin/ocrconfig.bin root dba 0755
unix /oracle/app/11.2.0/grid/bin/ocrdump root dba 0755
unix /oracle/app/11.2.0/grid/bin/ocrdump.bin root dba 0755
unix /oracle/app/11.2.0/grid/bin/ocrpatch root dba 0755
unix /oracle/app/11.2.0/grid/bin/appagent grid dba 0755
unix /oracle/app/11.2.0/grid/bin/clssproxy grid dba 0755
unix /oracle/app/11.2.0/grid/bin/cssvfupgd root dba 0755
unix /oracle/app/11.2.0/grid/bin/cssvfupgd.bin root dba 0755
unix /oracle/app/11.2.0/grid/bin/racgwrap grid dba 0755
unix /oracle/app/11.2.0/grid/bin/cemutls grid dba 0755
unix /oracle/app/11.2.0/grid/bin/cemutlo grid dba 0755
unix /oracle/app/11.2.0/grid/bin/crs_getperm grid dba 0755
unix /oracle/app/11.2.0/grid/bin/crs_profile grid dba 0755
unix /oracle/app/11.2.0/grid/bin/crs_register grid dba 0755
unix /oracle/app/11.2.0/grid/bin/crs_relocate grid dba 0755
unix /oracle/app/11.2.0/grid/bin/crs_setperm grid dba 0755
unix /oracle/app/11.2.0/grid/bin/crs_start grid dba 0755
unix /oracle/app/11.2.0/grid/bin/crs_stat grid dba 0755
unix /oracle/app/11.2.0/grid/bin/crs_stop grid dba 0755
unix /oracle/app/11.2.0/grid/bin/crs_unregister grid dba 0755
unix /oracle/app/11.2.0/grid/bin/gipcd grid dba 0755
unix /oracle/app/11.2.0/grid/bin/mdnsd grid dba 0755
unix /oracle/app/11.2.0/grid/bin/gpnpd grid dba 0755
unix /oracle/app/11.2.0/grid/bin/gpnptool grid dba 0755
unix /oracle/app/11.2.0/grid/bin/oranetmonitor grid dba 0755
unix /oracle/app/11.2.0/grid/bin/rdtool grid dba 0755
unix /oracle/app/11.2.0/grid/bin/octssd root dba 0741
unix /oracle/app/11.2.0/grid/bin/octssd.bin root dba 0741
unix /oracle/app/11.2.0/grid/bin/ohasd root dba 0741
unix /oracle/app/11.2.0/grid/bin/ohasd.bin root dba 0741
unix /oracle/app/11.2.0/grid/bin/crsd root dba 0741
unix /oracle/app/11.2.0/grid/bin/crsd.bin root dba 0741
unix /oracle/app/11.2.0/grid/bin/evmd grid dba 0755
unix /oracle/app/11.2.0/grid/bin/evminfo grid dba 0755
unix /oracle/app/11.2.0/grid/bin/evmlogger grid dba 0755
unix /oracle/app/11.2.0/grid/bin/evmmkbin grid dba 0755
unix /oracle/app/11.2.0/grid/bin/evmmklib grid dba 0755
unix /oracle/app/11.2.0/grid/bin/evmpost grid dba 0755
unix /oracle/app/11.2.0/grid/bin/evmshow grid dba 0755
unix /oracle/app/11.2.0/grid/bin/evmsort grid dba 0755
unix /oracle/app/11.2.0/grid/bin/evmwatch grid dba 0755
unix /oracle/app/11.2.0/grid/bin/lsnodes grid dba 0755
unix /oracle/app/11.2.0/grid/bin/oifcfg grid dba 0755
unix /oracle/app/11.2.0/grid/bin/olsnodes grid dba 0755
unix /oracle/app/11.2.0/grid/bin/oraagent grid dba 0755
unix /oracle/app/11.2.0/grid/bin/orarootagent root dba 0741
unix /oracle/app/11.2.0/grid/bin/orarootagent.bin root dba 0741
unix /oracle/app/11.2.0/grid/bin/scriptagent grid dba 0755
unix /oracle/app/11.2.0/grid/bin/lsdb grid dba 0755
unix /oracle/app/11.2.0/grid/bin/emcrsp grid dba 0755
unix /oracle/app/11.2.0/grid/bin/onsctl grid dba 0755
unix /oracle/app/11.2.0/grid/crs/install/onsconfig grid dba 0554
unix /oracle/app/11.2.0/grid/bin/gnsd root dba 0741
unix /oracle/app/11.2.0/grid/bin/gnsd.bin root dba 0741
unix /oracle/app/11.2.0/grid/bin/gsd.sh grid dba 0755
unix /oracle/app/11.2.0/grid/bin/gsdctl grid dba 0755
unix /oracle/app/11.2.0/grid/bin/scrctl grid dba 0750
unix /oracle/app/11.2.0/grid/bin/vipca grid dba 0755
unix /oracle/app/11.2.0/grid/bin/oc4jctl grid dba 0755
unix /oracle/app/11.2.0/grid/bin/cvures grid dba 0755
unix /oracle/app/11.2.0/grid/bin/odnsd grid dba 0755
unix /oracle/app/11.2.0/grid/bin/qosctl grid dba 0755
unix /oracle/app/11.2.0/grid/crs/install/cmdllroot.sh grid dba 0755
unix /oracle/app/11.2.0/grid/crs/utl/rootdelete.sh root root 0755
unix /oracle/app/11.2.0/grid/crs/utl/rootdeletenode.sh root root 0755
unix /oracle/app/11.2.0/grid/crs/utl/rootdeinstall.sh root root 0755
unix /oracle/app/11.2.0/grid/crs/utl/rootaddnode.sh root root 0755
unix /oracle/app/11.2.0/grid/lib/libskgxpcompat.so grid dba 0644
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/client/olsnodes.log grid dba 0666
all /oracle/app/11.2.0/grid/log/zfzhlhrdb1/client/oifcfg.log grid dba 0666
unix /oracle/app/11.2.0/grid/bin/srvctl root dba 0755
unix /oracle/app/11.2.0/grid/bin/cluvfy root dba 0755
unix /oracle/app/11.2.0/grid/bin/clsecho root dba 0755
unix /oracle/app/11.2.0/grid/bin/clsecho.bin root dba 0755
unix /oracle/app/11.2.0/grid/bin/clscfg root dba 0755
unix /oracle/app/11.2.0/grid/bin/clscfg.bin root dba 0755
unix /oracle/app/11.2.0/grid/bin/clsfmt root dba 0755
unix /oracle/app/11.2.0/grid/bin/clsfmt.bin root dba 0755
unix /oracle/app/11.2.0/grid/bin/clsid grid dba 0755
unix /oracle/app/11.2.0/grid/bin/crsctl root dba 0755
unix /oracle/app/11.2.0/grid/bin/crsctl.bin root dba 0755
unix /oracle/app/11.2.0/grid/bin/ndfnceca grid dba 0750
unix /oracle/app/11.2.0/grid/bin/oclskd root dba 0755
unix /oracle/app/11.2.0/grid/bin/oclskd.bin root dba 0751
unix /oracle/app/11.2.0/grid/bin/oclsomon grid dba 0755
unix /oracle/app/11.2.0/grid/bin/oclsvmon grid dba 0755
unix /oracle/app/11.2.0/grid/bin/ocssd grid dba 0755
unix /oracle/app/11.2.0/grid/bin/cssdagent root dba 0741
unix /oracle/app/11.2.0/grid/bin/cssdagent.bin root dba 0741
unix /oracle/app/11.2.0/grid/bin/cssdmonitor root dba 0741
unix /oracle/app/11.2.0/grid/bin/cssdmonitor.bin root dba 0741
unix /oracle/app/11.2.0/grid/bin/diskmon root dba 0741
unix /oracle/app/11.2.0/grid/bin/diskmon.bin root dba 0741
unix /oracle/app/11.2.0/grid/bin/diagcollection.sh root dba 0755
unix /oracle/app/11.2.0/grid/bin/oradnssd grid dba 0755
unix /oracle/app/11.2.0/grid/bin/oradnssd.bin grid dba 0755
unix /oracle/app/11.2.0/grid/bin/setasmgidwrap grid dba 0755
unix /oracle/app/11.2.0/grid/bin/oclumon root dba 0750
unix /oracle/app/11.2.0/grid/bin/oclumon.bin root dba 0750
unix /oracle/app/11.2.0/grid/bin/oclumon.pl grid dba 0750
unix /oracle/app/11.2.0/grid/bin/crswrapexece.pl root dba 0744
unix /oracle/app/11.2.0/grid/bin/crfsetenv root dba 0750
unix /oracle/app/11.2.0/grid/bin/osysmond root dba 0750
unix /oracle/app/11.2.0/grid/bin/osysmond.bin root dba 0750
unix /oracle/app/11.2.0/grid/bin/ologgerd root dba 0750
unix /oracle/app/11.2.0/grid/bin/ologdbg grid dba 0750
unix /oracle/app/11.2.0/grid/bin/ologdbg.pl grid dba 0750
unix /etc/oracle/setasmgid root dba 4710
# Jars and shared libraries used by the executables invoked by the root script
unix /oracle/app/11.2.0/grid/jlib/srvm.jar root dba 0644
unix /oracle/app/11.2.0/grid/jlib/srvmasm.jar root dba 0644
unix /oracle/app/11.2.0/grid/jlib/srvctl.jar root dba 0644
unix /oracle/app/11.2.0/grid/jlib/srvmhas.jar root dba 0644
unix /oracle/app/11.2.0/grid/jlib/gns.jar root dba 0644
unix /oracle/app/11.2.0/grid/jlib/ons.jar root dba 0644
unix /oracle/app/11.2.0/grid/jlib/netcfg.jar root dba 0644
unix /oracle/app/11.2.0/grid/jlib/i18n.jar root dba 0644
unix /oracle/app/11.2.0/grid/jlib/supercluster.jar root dba 0644
unix /oracle/app/11.2.0/grid/jlib/supercluster-common.jar root dba 0644
unix /oracle/app/11.2.0/grid/jlib/antlr-complete.jar root dba 0644
unix /oracle/app/11.2.0/grid/jlib/antlr-3.3-complete.jar root dba 0644
unix /oracle/app/11.2.0/grid/lib/libhasgen11.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libocr11.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libocrb11.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libocrutl11.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libclntsh.so.11.1 root dba 0644
unix /oracle/app/11.2.0/grid/lib/libclntshcore.so.11.1 root dba 0644
unix /oracle/app/11.2.0/grid/lib/libskgxn2.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libskgxp11.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libasmclntsh11.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libcell11.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libnnz11.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libclsra11.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libgns11.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libeons.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libonsx.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libeonsserver.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libsrvm11.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libsrvmhas11.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libsrvmocr11.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libuini11.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libgnsjni11.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/librdjni11.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libgnsjni11.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libclsce11.so root dba 0644
unix /oracle/app/11.2.0/grid/lib/libcrf11.so root dba 0644
unix /oracle/app/11.2.0/grid/bin/diagcollection.pl root dba 0755
# crs configuration scripts invoked from rootcrs.pl
unix /oracle/app/11.2.0/grid/crs/install/crsconfig_lib.pm root dba 0755
unix /oracle/app/11.2.0/grid/crs/install/s_crsconfig_lib.pm root dba 0755
unix /oracle/app/11.2.0/grid/crs/install/crsdelete.pm root dba 0755
unix /oracle/app/11.2.0/grid/crs/install/crspatch.pm root dba 0755
unix /oracle/app/11.2.0/grid/crs/install/oracss.pm root dba 0755
unix /oracle/app/11.2.0/grid/crs/install/oraacfs.pm root dba 0755
unix /oracle/app/11.2.0/grid/crs/install/hasdconfig.pl root dba 0755
unix /oracle/app/11.2.0/grid/crs/install/rootcrs.pl root dba 0755
unix /oracle/app/11.2.0/grid/crs/install/roothas.pl root dba 0755
unix /oracle/app/11.2.0/grid/crs/install/preupdate.sh root dba 0755
unix /oracle/app/11.2.0/grid/crs/install/rootofs.sh root dba 0755
# XXX: required only for dev env, where inittab ($IT) is not present already
unix /etc/inittab root root 0644
# USM FILES
# Only files which will be installed with executable permissions need
# to be listed.
unix /oracle/app/11.2.0/grid/bin/acfsdriverstate root dba 0755
unix /oracle/app/11.2.0/grid/bin/acfsload root dba 0755
unix /oracle/app/11.2.0/grid/bin/acfsregistrymount root dba 0755
unix /oracle/app/11.2.0/grid/bin/acfsroot root dba 0755
unix /oracle/app/11.2.0/grid/bin/acfssinglefsmount root dba 0755
unix /oracle/app/11.2.0/grid/bin/acfsrepl_apply root dba 0755
unix /oracle/app/11.2.0/grid/bin/acfsrepl_apply.bin root dba 0755
unix /oracle/app/11.2.0/grid/bin/acfsreplcrs grid dba 0755
unix /oracle/app/11.2.0/grid/bin/acfsreplcrs.pl grid dba 0755
unix /oracle/app/11.2.0/grid/bin/acfsrepl_initializer root dba 0755
unix /oracle/app/11.2.0/grid/bin/acfsrepl_monitor grid dba 0755
unix /oracle/app/11.2.0/grid/bin/acfsrepl_preapply grid dba 0755
unix /oracle/app/11.2.0/grid/bin/acfsrepl_transport grid dba 0755
unix /oracle/app/11.2.0/grid/lib/acfsdriverstate.pl root dba 0644
unix /oracle/app/11.2.0/grid/lib/acfsload.pl root dba 0644
unix /oracle/app/11.2.0/grid/lib/acfsregistrymount.pl root dba 0644
unix /oracle/app/11.2.0/grid/lib/acfsroot.pl root dba 0644
unix /oracle/app/11.2.0/grid/lib/acfssinglefsmount.pl root dba 0644
unix /oracle/app/11.2.0/grid/lib/acfstoolsdriver.sh root dba 0755
unix /oracle/app/11.2.0/grid/lib/libusmacfs11.so grid dba 0644
#EVM config files
unix /oracle/app/11.2.0/grid/evm/admin/conf/evm.auth root dba 0644
unix /oracle/app/11.2.0/grid/evm/admin/conf/evmdaemon.conf root dba 0644
unix /oracle/app/11.2.0/grid/evm/admin/conf/evmlogger.conf root dba 0644
# TFA files
unix /oracle/app/11.2.0/grid/crs/install/tfa_setup.sh root dba 0755
unix /oracle/app/11.2.0/grid/cdata/zfzhlhrdb1.olr root dba 0600
unix /etc/oracle/olr.loc root dba 0644
unix /etc/oracle/ocr.loc root dba 0644
[zfzhlhrdb1:grid]:/oracle/app/11.2.0/grid/crs/utl>
As you can see, that is indeed the case: crsconfig_dirs records the permissions of every directory under $GRID_HOME, and crsconfig_fileperms records the permissions of the files.
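Because both files use the simple format "OSLIST PATH OWNER GROUP PERMS", the recorded ownership and permissions can in principle be reapplied from them. Below is a minimal sketch of such a loop for crsconfig_dirs (this is my own illustration, not the MOS permission.pl script; it ignores the OSLIST column and the open-perms field, so treat it as a starting point only and run it as root):
# reapply the owner, group and (closed) permissions recorded in crsconfig_dirs
while read os dir owner group perms rest; do
  case "$os" in "#"*|"") continue ;; esac   # skip comment and blank lines
  [ -d "$dir" ] || continue                 # skip directories that do not exist on this node
  chown "$owner":"$group" "$dir"
  [ -n "$perms" ] && chmod "$perms" "$dir"
done < /oracle/app/11.2.0/grid/crs/utl/crsconfig_dirs
# An analogous loop over crsconfig_fileperms would handle the individual files.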
一.3.7.1 Verifying the Permissions
We can verify the permissions with the cluvfy tool: validate the <GRID_HOME> by using cluvfy.
$ cluvfy comp software -n all -verbose
[zfzhlhrdb1:grid]:/home/grid>cluvfy comp software -n all -verbose
Verifying software
Check: Software
Component: crs
Node Name: zfzhlhrdb2
/oracle/app/11.2.0/grid/bin/racgeut..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/racgeut" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/racgmain..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/racgmain" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/asmproxy..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/asmproxy" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/lib/oc4jctl_common.pm..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/lib/oc4jctl_common.pm" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/lib/oc4jctl_lib.pm..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/lib/oc4jctl_lib.pm" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/appagent.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/appagent.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/clssproxy.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/clssproxy.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crs_getperm.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crs_getperm.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crs_profile.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crs_profile.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crs_register.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crs_register.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crs_relocate.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crs_relocate.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crs_setperm.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crs_setperm.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crs_start.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crs_start.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crs_stat.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crs_stat.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crs_stop.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crs_stop.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crs_unregister.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crs_unregister.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/gipcd.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/gipcd.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/mdnsd.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/mdnsd.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/gpnpd.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/gpnpd.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/gpnptool.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/gpnptool.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/oranetmonitor.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/oranetmonitor.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evmd.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evmd.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evminfo.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evminfo.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evmlogger.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evmlogger.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evmmkbin.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evmmkbin.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evmmklib.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evmmklib.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evmpost.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evmpost.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evmshow.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evmshow.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evmsort.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evmsort.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evmwatch.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evmwatch.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/oraagent.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/oraagent.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/racgevtf..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/racgevtf" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/racgvip..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/racgvip" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/sclsspawn..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/sclsspawn" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/scriptagent.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/scriptagent.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/oprocd..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/oprocd" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/emcrsp.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/emcrsp.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/clsid.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/clsid.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/ocssd.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/ocssd.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crstmpl.scr..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crstmpl.scr" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evt.sh..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evt.sh" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/cemutls.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/cemutls.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/cemutlo.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/cemutlo.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/lsdb.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/lsdb.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/oifcfg.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/oifcfg.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/olsnodes.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/olsnodes.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/gsd..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/gsd" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/oc4jctl.pl..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/oc4jctl.pl" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/lib/s_oc4jctl_lib.pm..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/lib/s_oc4jctl_lib.pm" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/appvipcfg.pl..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/appvipcfg.pl" did not match the expected value. [Expected = "0750" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/lxinst..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/lxinst" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/clone/rootpre/ORCLcluster/lib/libskgxnr.a...No such file or directory
Node Name: zfzhlhrdb1
/oracle/app/11.2.0/grid/bin/racgeut..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/racgeut" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/racgmain..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/racgmain" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/asmproxy..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/asmproxy" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/lib/oc4jctl_common.pm..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/lib/oc4jctl_common.pm" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/lib/oc4jctl_lib.pm..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/lib/oc4jctl_lib.pm" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/appagent.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/appagent.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/clssproxy.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/clssproxy.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crs_getperm.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crs_getperm.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crs_profile.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crs_profile.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crs_register.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crs_register.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crs_relocate.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crs_relocate.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crs_setperm.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crs_setperm.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crs_start.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crs_start.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crs_stat.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crs_stat.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crs_stop.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crs_stop.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crs_unregister.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crs_unregister.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/gipcd.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/gipcd.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/mdnsd.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/mdnsd.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/gpnpd.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/gpnpd.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/gpnptool.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/gpnptool.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/oranetmonitor.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/oranetmonitor.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evmd.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evmd.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evminfo.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evminfo.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evmlogger.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evmlogger.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evmmkbin.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evmmkbin.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evmmklib.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evmmklib.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evmpost.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evmpost.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evmshow.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evmshow.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evmsort.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evmsort.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evmwatch.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evmwatch.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/oraagent.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/oraagent.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/racgevtf..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/racgevtf" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/racgvip..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/racgvip" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/sclsspawn..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/sclsspawn" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/scriptagent.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/scriptagent.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/oprocd..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/oprocd" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/emcrsp.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/emcrsp.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/clsid.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/clsid.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/ocssd.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/ocssd.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/crstmpl.scr..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/crstmpl.scr" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/evt.sh..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/evt.sh" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/cemutls.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/cemutls.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/cemutlo.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/cemutlo.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/lsdb.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/lsdb.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/oifcfg.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/oifcfg.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/olsnodes.bin..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/olsnodes.bin" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/gsd..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/gsd" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/oc4jctl.pl..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/oc4jctl.pl" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/lib/s_oc4jctl_lib.pm..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/lib/s_oc4jctl_lib.pm" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/appvipcfg.pl..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/appvipcfg.pl" did not match the expected value. [Expected = "0750" ; Found = "0775"]
/oracle/app/11.2.0/grid/bin/lxinst..."Permissions" did not match reference
Permissions of file "/oracle/app/11.2.0/grid/bin/lxinst" did not match the expected value. [Expected = "0755" ; Found = "0775"]
/oracle/app/11.2.0/grid/clone/rootpre/ORCLcluster/lib/libskgxnr.a...No such file or directory
1227 files verified
Software check failed
Verification of software was unsuccessful on all the specified nodes.
一.3.7.2 Solutions
Fixing this problem is actually not hard; broadly speaking, any of the following approaches will work:
1. Edit the permissions back by hand, using the permission-configuration scripts shown earlier as a reference. This is not difficult and can be done quickly in any text editor (e.g. UltraEdit).
2. Follow the MOS note and run $GRID_HOME/crs/install/rootcrs.pl -init (or roothas.pl -init for a standalone installation). rootcrs.pl -init is available when the PSU level is 11.2.0.3.6 or later; on earlier PSU levels the same effect can be achieved with the following two commands:
<GRID_HOME>/crs/install/rootcrs.pl -unlock
<GRID_HOME>/crs/install/rootcrs.pl -patch
For 11.2:
For clustered Grid Infrastructure, as root user
# cd <GRID_HOME>/crs/install/
# ./rootcrs.pl -init
For Standalone Grid Infrastructure, as root user
# cd <GRID_HOME>/crs/install/
# ./roothas.pl -init
For 12c:
For clustered Grid Infrastructure, as root user
# cd <GRID_HOME>/crs/install/
# ./rootcrs.sh -init
For Standalone Grid Infrastructure, as root user
# cd <GRID_HOME>/crs/install/
# ./roothas.sh -init
3. Use the scripts from MOS note 1515018.1: generate the permission/restore scripts on a healthy installation, then run the generated script on the damaged installation to repair the permissions (best used together with method 2).
4. On 11gR2 you can simply deconfigure CRS and re-run root.sh. Re-running root.sh does not affect the databases, so there is nothing to worry about (this is the approach I personally recommend; a sketch follows this list).
$ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose
$ORACLE_HOME/root.sh
5. If the damage is limited to one node of the RAC, you can also delete the node and then add it back. This is far more work, but it is probably the safest option, and it is also what Oracle recommends, because once file permissions have been changed by hand it is hard to guarantee that nothing else will go wrong later.
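As a minimal sketch of method 4 on a two-node cluster (run as root; as in the listings in this post, root's $ORACLE_HOME is assumed to point at the Grid home, and node 2 is treated as the last node):
# on every node except the last one
$ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose
# on the last node only
$ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose -lastnode
# then re-run root.sh as root, node 1 first and node 2 afterwards
$ORACLE_HOME/root.sh
The detailed procedure, including the checkpoint and gpnp files that should be cleaned up before root.sh is re-run, is covered in section 一.3.9 below.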
Additional note:
On Linux the permissions/ACLs can also be captured and restored with getfacl and setfacl, for example:
1) getfacl -R /home/grid/app/11.2/grid > dir_privs.txt
2) setfacl --restore=dir_privs.txt
(By default getfacl records paths without the leading "/", so run the setfacl --restore command from the root directory, or capture with getfacl -R -p to keep absolute path names.)
Summary:
In an environment where GI is installed, file permissions and ownership are strictly defined, and any incorrect change to them can easily trigger a series of problems that are often bizarre and hard to diagnose by conventional reasoning. If permissions or ownership do get damaged, they can be repaired with rootcrs.pl -init and with permission.pl. rootcrs.pl -init only repairs the core GI directories, so it runs quickly; when GI cannot start, it should be the first choice because it gets GI back up fast. Its weakness is that the repair is not exhaustive: GI may come back, but there is no guarantee that further problems will not appear later, and that is where permission.pl comes in. The way permission.pl works requires the source installation (the one with correct permissions) and the target installation (the one with wrong permissions) to be on software versions that match as closely as possible, so choose the source carefully, otherwise things can get even worse; and if the source and target Grid homes are installed in different directories, the permission* scripts must be adjusted before they are run.
So my personal recommendation is still to re-run root.sh, which is the more reliable approach.
一.3.7.3 Using the permission.pl script from MOS 1515018.1
chmod 755 permission.pl
As the oracle user, capture the RDBMS home: ./permission.pl $ORACLE_HOME
As root (or the grid user), capture the Grid home: ./permission.pl $GRID_HOME
Script generates two files
a. permission-<time stamp> - This contains file permission in octal value, owner and group information of the files captured
b. restore-perm-<time stamp>.cmd - This contains command to change the permission, owner, and group of the captured files
After copying the two generated files to the target host, run:
chmod 755 restore-perm-<timestamp>.cmd
./restore-perm-<timestamp>.cmd
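Putting the whole workflow together, a minimal end-to-end sketch (host name, paths and the <timestamp> placeholder are illustrative; the script is assumed to live in /tmp, and the healthy and damaged nodes are assumed to use the same Grid home path):
# on the healthy node, as the grid user
cd /tmp
chmod 755 permission.pl
./permission.pl /oracle/app/11.2.0/grid
# copy the generated restore script to the damaged node
scp restore-perm-<timestamp>.cmd damaged_node:/tmp/
# on the damaged node, as root
chmod 755 /tmp/restore-perm-<timestamp>.cmd
/tmp/restore-perm-<timestamp>.cmd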
一.3.8 How to fix an ASM instance name that does not match its node

For 10g, refer to Dave's blog post "RAC修改ASM實例名的步驟": http://blog.csdn.net/tianlesoftware/article/details/6275827
For 11g, the only way to fix this is to re-run the root.sh script.
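Before re-running root.sh it is worth confirming the mismatch first. A quick check with standard clusterware commands (run as the grid user; the exact output format may differ slightly between versions):
olsnodes -n            # node names with their node numbers
srvctl status asm      # shows which ASM instance is running on which node
ps -ef | grep asm_pmon # run on each node to see the local ASM instance name
If, for example, +ASM2 is running on node 1, the mapping is reversed, and on 11.2 re-running root.sh in the correct node order is the documented fix (MOS 1419424.1).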
一.3.9 How to completely clear the CRS configuration
[ZFTPCCDB1:root]:/>$ORACLE_HOME/crs/install/rootcrs.pl -h
Unknown option: h
Usage:
rootcrs.pl [-verbose] [-upgrade [-force] | -patch]
[-paramfile <parameter-file>]
[-deconfig [-deinstall] [-keepdg] | -downgrade] [-force] [-lastnode]
[-downgrade] [-oldcrshome <old crshome path>] [-version <old crs version>]
[-unlock [-crshome <path to crs home>] [-nocrsstop]]
Options:
-verbose Run this script in verbose mode
-upgrade Oracle HA is being upgraded from previous version
-patch Oracle HA is being upgraded to a patch version
-paramfile Complete path of file specifying HA parameter values
-lastnode Force the node this is executing on to be considered the
last node of the install and perform actions associated
with configuring the last node
-downgrade Downgrade the clusterware
-version For use with downgrade; special handling is required if
downgrading to 9i. This is the old crs version in the format
A.B.C.D.E (e.g 11.1.0.6.0).
-deconfig Remove Oracle Clusterware to allow it to be uninstalled or reinstalled.
-force Force the execution of steps in delete that cannot be verified
to be safe
-deinstall Reset the permissions on CRS home during de-configuration
-keepdg Keep existing diskgroups during de-configuration
-unlock Unlock CRS home
-crshome Complete path of crs home. Use with unlock option.
-oldcrshome For use with downgrade. Complete path of the old crs home.
-nocrsstop used with unlock option to reset permissions on an inactive grid home
If neither -upgrade nor -patch is supplied, a new install is performed
To see the full manpage for this program, execute:
perldoc rootcrs.pl
[ZFTPCCDB1:root]:/>
According to MOS note "How to Proceed from Failed 11gR2 Grid Infrastructure (CRS) Installation" (Doc ID 942166.1), to re-run the root.sh script we can proceed as follows:
On every node except the last one, run: $ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose
On the last node, run: $ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose -lastnode
After rebooting the OS, run: $ORACLE_HOME/root.sh
One point to note: after $ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose has completed, the following files need to be deleted:
ls -l $ORACLE_BASE/Clusterware/ckptGridHA*
find $ORACLE_HOME/gpnp/* -type f
find $ORACLE_HOME/gpnp/* -type f -exec rm -rf {} \;
Only after the files listed by (find $ORACLE_HOME/gpnp/* -type f) have been deleted will re-running root.sh print the following wallet-creation messages:
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
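Putting the clean-up together, a minimal sketch for a two-node cluster (run as root; as in the listings above, root's $ORACLE_HOME is assumed to point at the Grid home and node 2 is the last node):
# node 1 (every node except the last)
$ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose
# node 2 (the last node)
$ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose -lastnode
# on every node, remove the installer checkpoint files and the local gpnp profiles
rm -f $ORACLE_BASE/Clusterware/ckptGridHA*
find $ORACLE_HOME/gpnp/* -type f -exec rm -rf {} \;
# reboot, then re-run root.sh on node 1 first and on node 2 afterwards
$ORACLE_HOME/root.sh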
---------------------------------------------------------------------------------------------------------------------
第二章 Experiments
二.1 Lab environment
Item | primary db
db type | two-node RAC
db version | 11.2.0.2.0
db storage | ASM
二.2 Lab objectives
This post walks through six scenarios:
1. Backing up and restoring the OCR with dd
2. Physical backup and recovery of the OCR
3. Logical backup and recovery of the OCR
4. Deleting the directories under the grid user's $ORACLE_HOME/log and recovering by re-running root.sh
5. Using the permission.pl script
6. Clearing the CRS configuration and fixing the ASM instance name mapping are not demonstrated separately, since both boil down to re-running the root.sh script
二.3 Lab procedure
二.3.1 Experiment 1: backing up and restoring the OCR with dd
First, check the voting disk location and back up the OCR disk header with dd; the cluster is then shut down before the corruption is simulated:
[zfzhlhrdb1:root]:/>crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 7e59ff6d88ba4fc0bfe5b6ccdd27ba55 (/dev/rhdisk2) [DATA]
Located 1 voting disk(s).
[zfzhlhrdb1:root]:/>dd if=/dev/rhdisk2 of=/tmp/votedisk_lhr.bak bs=1024k count=4
4+0 records in.
4+0 records out.
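Before corrupting the disk it may be worth sanity-checking that the dump really contains the ASM disk header. A quick check (kfed can read a plain file as well as a raw device; run it as the grid user):
su - grid -c "kfed read /tmp/votedisk_lhr.bak | head -20"
# expect kfbh.type to be KFBTYP_DISKHEAD and kfdhdb.grpname to show DATA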
After shutting down the cluster, use dd to simulate corruption of the OCR disk:
[zfzhlhrdb1:root]:/>dd if=/dev/zero of=/dev/rhdisk2 bs=1024k count=4
4+0 records in.
4+0 records out.
[zfzhlhrdb2:root]:/>sh disk*
------------------------------------------------------------------------------------------------------------------------------
| disk | PVID | no_reserve | size(G) | disktype | disk_storage |
------------------------------------------------------------------------------------------------------------------------------
| crw------- root system /dev/rhdisk0 | 00f60f2b47d4b56f | no_reserve | 128 | rootvg | EMC,vscsi,3 |
| crw------- root system /dev/rhdisk1 | 00f60f2bd2147554 | single_path | 128 | T_XDESK_APP2_vg | EMC,fscsi,32 |
| crw-rw---- grid asmadmin /dev/rhdisk2 | 0000000000000000 | no_reserve | 128 | Not_Used | EMC,fscsi,32 |
| crw-rw---- grid asmadmin /dev/rhdisk3 | 0000000000000000 | no_reserve | 128 | ASM:+FRA | EMC,fscsi,32 |
| crw-rw---- root system /dev/rhdisk4 | 0000000000000000 | no_reserve | 128 | gpfs1nsd | EMC,fscsi,32 |
| crw------- root system /dev/rhdisk5 | 00f60f2b6046c20d | single_path | 0 | HeadDisk | EMC,fscsi,32 |
| crw------- root system /dev/rhdisk6 | 00f60f2b6046ca7d | single_path | 0 | HeadDisk | EMC,fscsi,32 |
| crw------- root system /dev/rhdisk7 | 00f60f2b6046d2f8 | single_path | 0 | HeadDisk | EMC,fscsi,32 |
| crw------- root system /dev/rhdisk8 | 00f60f2b6046dae6 | single_path | 0 | HeadDisk | EMC,fscsi,32 |
------------------------------------------------------------------------------------------------------------------------------
| ASMDISK_TOTAL:1 TOTAL_SIZE(GB):128 |
------------------------------------------------------------------------------------------------------------------------------
[zfzhlhrdb2:root]:/>
[zfzhlhrdb2:root]:/>cat disk*
if [ 1 = 1 ] ;then
sum=0;asmnum=0
awk 'BEGIN {printf "------------------------------------------------------------------------------------------------------------------------------\n"; printf "%-43s %-18s %-14s %-8s %-15s %-14s\n","| disk ","| PVID ","| no_reserve ","| size(G)","| disktype ","| disk_storage |"; printf "------------------------------------------------------------------------------------------------------------------------------\n";}'
for diskname in `lspv | grep disk | awk '{print $1}'`;do
mydiskname=`ls -l /dev/r$diskname |grep -w /dev/r$diskname| cut -c 1-12,17-38,59-76`
mydiskpvid=`lquerypv -H /dev/$diskname | cut -c 1-16`
if [ "${mydiskpvid}" = "" ];then mydiskpvid="0000000000000000" ; fi 2>/dev/null
mydiskreserve=`lsattr -El $diskname | grep -i reserve_policy | cut -c 17-30`
mydisksize=`bootinfo -s $diskname 2>/dev/null` ; let "mydisksize1=$mydisksize/1024" 2>/dev/null
mydiskvg=`lspv | grep -w $diskname | awk '{print $3}'`
mydiskasmgroup=`lquerypv -h /dev/r$diskname|head -n 7|tail -n 1|awk -F ' ' '{print $NF}'|sed -e 's/\.//g' -e 's/\|//g' | awk '{ if ($1 != "") printf "+"$1 ; else print "NULL"}'`
mydiskflag=`lquerypv -h /dev/r$diskname 2>/dev/null|grep -i orcldisk|wc -l`
if [ ${mydisksize} -lt 1000 ];then mydisktype="HeadDisk" ; elif [ ${mydisksize} -gt 1000 -a ${mydiskflag} -gt 0 ];then mydisktype="ASM:"$mydiskasmgroup; elif [ ${mydisksize} -gt 1000 -a ${mydiskflag} -eq 0 -a $mydiskvg != "None" ];then mydisktype=$mydiskvg ; else mydisktype="Not_Used"; fi 2>/dev/null
mydiskpath=`lspath -l $diskname 2>/dev/null|head -1|awk '{print $NF}'|sed "s/.$//"`
mydiskstring=`odmget -q attribute="unique_id" CuAt|egrep "name|value"|paste - -|tr '\t' ' '|grep -w ${diskname}|sed 's/\"//g'`
mydiskstorage=`echo ${mydiskstring} 2> /dev/null|awk '{ if($NF ~ /EMC/) {print "EMC"} else if ($NF ~ /NETAPP/) {print "NETAPP"} else if($NF ~ /HITACHI/) {print "HDS"}}'`
mydiskdepth=`lsattr -El ${diskname}|grep queue_depth|awk '{print $2}'`
mydiskstorage1=$mydiskstorage","$mydiskpath","$mydiskdepth
[ $mydisksize1 -gt 1 -a ${mydiskflag} -gt 0 ] && { (( sum=sum+$mydisksize1 )) ; (( asmnum=$asmnum+1 )) ;}
echo "$mydiskname" "$mydiskpvid" "$mydiskreserve" "${mydisksize1%.*}" "$mydisktype" "$mydiskstorage1" | awk '{printf "| %-10s %-6s %-8s %-14s | %-17s | %-12s | %-8s| %-15s | %-14s |\n",$1,$2,$3,$4,$5,$6,$7,$8,$9}'
done
awk 'BEGIN {printf "------------------------------------------------------------------------------------------------------------------------------\n";}'
echo "ASMDISK_TOTAL:$asmnum" "TOTAL_SIZE(GB):$sum" |awk '{printf "| %-20s %-101s |\n", $1,$2}'
awk 'BEGIN {printf "------------------------------------------------------------------------------------------------------------------------------\n";}'
fi
[zfzhlhrdb2:root]:/>
[zfzhlhrdb1:root]:/>crsctl start has
CRS-4123: Oracle High Availability Services has been started.
[zfzhlhrdb1:root]:/>crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.
[zfzhlhrdb1:root]:/>
[zfzhlhrdb1:root]:/>crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
1 ONLINE OFFLINE Instance Shutdown
ora.cluster_interconnect.haip
1 ONLINE OFFLINE
ora.crf
1 ONLINE ONLINE zfzhlhrdb1
ora.crsd
1 ONLINE OFFLINE
ora.cssd
1 ONLINE OFFLINE STARTING
ora.cssdmonitor
1 ONLINE ONLINE zfzhlhrdb1
ora.ctssd
1 ONLINE OFFLINE
ora.diskmon
1 OFFLINE OFFLINE
ora.drivers.acfs
1 ONLINE OFFLINE
ora.evmd
1 ONLINE OFFLINE
ora.gipcd
1 ONLINE ONLINE zfzhlhrdb1
ora.gpnpd
1 ONLINE ONLINE zfzhlhrdb1
ora.mdnsd
1 ONLINE ONLINE zfzhlhrdb1
[zfzhlhrdb1:root]:/>
[zfzhlhrdb1:grid]:/home/grid>cluvfy comp ocr -n all
Verifying OCR integrity
Unable to retrieve nodelist from Oracle Clusterware
Verification cannot proceed
[zfzhlhrdb2:root]:/>ocrcheck
KGFCHECK kgfnStmtExecute01c: ret == OCI_SUCCESS: FAILED at kgfn.c:1563
KGFCHECK kgfpOpen01c: ok: FAILED at kgfp.c:519
-- trace dump on error exit --
Error [kgfoOpen01] in [kgfokge] at kgfo.c:1697
ORA-17503: ksfdopn:2 Failed to open file +DATA.255.4294967295
ORA-15001: diskgroup "DATA" does not exist or is not mounted
ORA-06512: at line 4
Category: 8
DepInfo: 15056
ADR is not properly configured
-- trace dump end --
-- trace dump on error exit --
Error [kgfoOpen01] in [kgfokge] at kgfo.c:1546
ORA-17503: ksfdopn:2 Failed to open file +DATA.255.4294967295
ORA-15001: diskgroup "DATA" does not exist or is not mounted
ORA-06512: at line 4
Category: 8
DepInfo: 15056
ADR is not properly configured
-- trace dump end --
KGFCHECK kgfnStmtSingle3: ret == OCI_SUCCESS: FAILED at kgfn.c:1770
-- trace dump on error exit --
Error [kgfo] in [kgfoCkMt03] at kgfo.c:2080
diskgroup DATA not mounted ()
Category: 6
DepInfo: 0
ADR is not properly configured
-- trace dump end --
KGFCHECK kgfnStmtSingle3: ret == OCI_SUCCESS: FAILED at kgfn.c:1770
-- trace dump on error exit --
Error [kgfo] in [kgfoCkMt03] at kgfo.c:2080
diskgroup DATA not mounted ()
Category: 6
DepInfo: 0
ADR is not properly configured
-- trace dump end --
PROT-602: Failed to retrieve data from the cluster registry
PROC-26: Error while accessing the physical storage
This confirms that the cluster can no longer start and that the disk header has been wiped. Next, we recover the cluster:
[zfzhlhrdb1:root]:/>crsctl stop has -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'zfzhlhrdb1'
...... (remaining shutdown output omitted)
[zfzhlhrdb1:root]:/>dd if=/tmp/votedisk_lhr.bak of=/dev/rhdisk2 bs=1024k count=4
4+0 records in.
4+0 records out.
[zfzhlhrdb1:root]:/>sh disk*
------------------------------------------------------------------------------------------------------------------------------
| disk | PVID | no_reserve | size(G) | disktype | disk_storage |
------------------------------------------------------------------------------------------------------------------------------
| crw------- root system /dev/rhdisk0 | 00f63a6147ced87a | no_reserve | 128 | rootvg | EMC,vscsi,3 |
| crw------- root system /dev/rhdisk1 | 00f63a61d2143e86 | single_path | 128 | T_XDESK_APP1_vg | EMC,fscsi,32 |
| crw-rw---- grid asmadmin /dev/rhdisk2 | 0000000000000000 | no_reserve | 128 | ASM:+DATA | EMC,fscsi,32 |
| crw-rw---- grid asmadmin /dev/rhdisk3 | 0000000000000000 | no_reserve | 128 | ASM:+FRA | EMC,fscsi,32 |
| crw-rw---- root system /dev/rhdisk4 | 0000000000000000 | no_reserve | 128 | gpfs1nsd | EMC,fscsi,32 |
| crw------- root system /dev/rhdisk5 | 00f63a61c89dbd11 | single_path | 0 | HeadDisk | EMC,fscsi,32 |
| crw------- root system /dev/rhdisk6 | 00f63a6160469425 | single_path | 0 | HeadDisk | EMC,fscsi,32 |
| crw------- root system /dev/rhdisk7 | 00f63a6160469c21 | single_path | 0 | HeadDisk | EMC,fscsi,32 |
| crw------- root system /dev/rhdisk8 | 0000000000000000 | single_path | 0 | HeadDisk | EMC,fscsi,32 |
------------------------------------------------------------------------------------------------------------------------------
| ASMDISK_TOTAL:2 TOTAL_SIZE(GB):256 |
------------------------------------------------------------------------------------------------------------------------------
[zfzhlhrdb1:root]:/>
[zfzhlhrdb1:root]:/>crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
1 ONLINE ONLINE zfzhlhrdb1 Started
ora.cluster_interconnect.haip
1 ONLINE ONLINE zfzhlhrdb1
ora.crf
1 ONLINE ONLINE zfzhlhrdb1
ora.crsd
1 ONLINE ONLINE zfzhlhrdb1 STOPPING
ora.cssd
1 ONLINE ONLINE zfzhlhrdb1
ora.cssdmonitor
1 ONLINE ONLINE zfzhlhrdb1
ora.ctssd
1 ONLINE ONLINE zfzhlhrdb1 OBSERVER
ora.diskmon
1 OFFLINE OFFLINE
ora.drivers.acfs
1 ONLINE ONLINE zfzhlhrdb1
ora.evmd
1 ONLINE ONLINE zfzhlhrdb1
ora.gipcd
1 ONLINE ONLINE zfzhlhrdb1
ora.gpnpd
1 ONLINE ONLINE zfzhlhrdb1
ora.mdnsd
1 ONLINE ONLINE zfzhlhrdb1
[zfzhlhrdb1:root]:/>
[zfzhlhrdb1:root]:/>crsctl stop has -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.crsd' on 'zfzhlhrdb1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'zfzhlhrdb1'
CRS-2675: Stop of 'ora.oc4j' on 'zfzhlhrdb1' failed
CRS-2679: Attempting to clean 'ora.oc4j' on 'zfzhlhrdb1'
CRS-2681: Clean of 'ora.oc4j' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.ons' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.net1.network' on 'zfzhlhrdb1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'zfzhlhrdb1' has completed
CRS-2677: Stop of 'ora.crsd' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.crf' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.evmd' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.asm' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.mdnsd' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.crf' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.asm' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.cssd' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.gipcd' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.gpnpd' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'zfzhlhrdb1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'zfzhlhrdb1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[zfzhlhrdb1:root]:/>ps -ef|grep d.bin
root 4718784 7667788 0 19:47:26 pts/2 0:00 grep d.bin
[zfzhlhrdb1:root]:/>crsctl start has
CRS-4123: Oracle High Availability Services has been started.
[zfzhlhrdb1:root]:/>
[zfzhlhrdb2:root]:/>crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.FRA.dg
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.LISTENER.lsnr
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.LISTENER_LHRDG.lsnr
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.asm
ONLINE ONLINE zfzhlhrdb1 Started
ONLINE ONLINE zfzhlhrdb2 Started
ora.gsd
OFFLINE OFFLINE zfzhlhrdb1
OFFLINE OFFLINE zfzhlhrdb2
ora.net1.network
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.ons
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.registry.acfs
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE zfzhlhrdb1
ora.cvu
1 ONLINE ONLINE zfzhlhrdb1
ora.lhrdg.db
1 ONLINE OFFLINE Instance Shutdown
2 ONLINE OFFLINE Instance Shutdown
ora.oc4j
1 ONLINE ONLINE zfzhlhrdb1
ora.oraesdb.db
1 ONLINE OFFLINE Corrupted Controlfi
le
2 ONLINE OFFLINE Corrupted Controlfi
le
ora.oraeskdb.db
1 ONLINE OFFLINE Instance Shutdown
2 ONLINE OFFLINE Instance Shutdown
ora.raclhr.db
1 ONLINE OFFLINE Instance Shutdown
2 ONLINE OFFLINE Instance Shutdown
ora.scan1.vip
1 ONLINE ONLINE zfzhlhrdb1
ora.zfzhlhrdb1.vip
1 ONLINE ONLINE zfzhlhrdb1
ora.zfzhlhrdb2.vip
1 ONLINE ONLINE zfzhlhrdb2
[zfzhlhrdb2:root]:/>
The cluster now starts normally again; all that remains is to fix the few databases that still show problems.
二.3.2 Experiment 2: repairing the ASM disk header with kfed
Back up the ASM disk header:
# dd if=/dev/rhdisk2 of=/tmp/asm_dd.bak bs=1024 count=4
[zfzhlhrdb1:root]:/>dd if=/dev/rhdisk2 of=/tmp/asm_dd.bak bs=1024 count=4
4+0 records in.
4+0 records out.
Corrupt the ASM disk header:
# dd if=/dev/zero of=/dev/rhdisk2 bs=1024 count=4
[zfzhlhrdb1:root]:/>dd if=/dev/zero of=/dev/rhdisk2 bs=1024 count=4
4+0 records in.
4+0 records out.
Inspect the ASM disk header (now all zeros):
# lquerypv -h /dev/rhdisk2
[zfzhlhrdb1:root]:/>lquerypv -h /dev/rhdisk2
00000000 00000000 00000000 00000000 00000000 |................|
00000010 00000000 00000000 00000000 00000000 |................|
00000020 00000000 00000000 00000000 00000000 |................|
00000030 00000000 00000000 00000000 00000000 |................|
00000040 00000000 00000000 00000000 00000000 |................|
00000050 00000000 00000000 00000000 00000000 |................|
00000060 00000000 00000000 00000000 00000000 |................|
00000070 00000000 00000000 00000000 00000000 |................|
00000080 00000000 00000000 00000000 00000000 |................|
00000090 00000000 00000000 00000000 00000000 |................|
000000A0 00000000 00000000 00000000 00000000 |................|
000000B0 00000000 00000000 00000000 00000000 |................|
000000C0 00000000 00000000 00000000 00000000 |................|
000000D0 00000000 00000000 00000000 00000000 |................|
000000E0 00000000 00000000 00000000 00000000 |................|
000000F0 00000000 00000000 00000000 00000000 |................|
Stop the HAS stack:
# crsctl stop has -f
[zfzhlhrdb1:root]:/>crsctl stop has -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.crsd' on 'zfzhlhrdb1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.LISTENER_LHRDG.lsnr' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.cvu' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.oralhrq.db' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.cvu' on 'zfzhlhrdb1' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'zfzhlhrdb2'
CRS-2677: Stop of 'ora.FRA.dg' on 'zfzhlhrdb1' succeeded
CRS-2676: Start of 'ora.cvu' on 'zfzhlhrdb2' succeeded
CRS-2677: Stop of 'ora.oralhrq.db' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.LISTENER_LHRDG.lsnr' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.zfzhlhrdb1.vip' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.scan1.vip' on 'zfzhlhrdb1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'zfzhlhrdb2'
CRS-2677: Stop of 'ora.zfzhlhrdb1.vip' on 'zfzhlhrdb1' succeeded
CRS-2672: Attempting to start 'ora.zfzhlhrdb1.vip' on 'zfzhlhrdb2'
CRS-2676: Start of 'ora.scan1.vip' on 'zfzhlhrdb2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'zfzhlhrdb2'
CRS-2676: Start of 'ora.zfzhlhrdb1.vip' on 'zfzhlhrdb2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'zfzhlhrdb2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.asm' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'zfzhlhrdb1' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'zfzhlhrdb2'
CRS-2676: Start of 'ora.oc4j' on 'zfzhlhrdb2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.ons' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.net1.network' on 'zfzhlhrdb1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'zfzhlhrdb1' has completed
CRS-2677: Stop of 'ora.crsd' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.crf' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.evmd' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.asm' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.mdnsd' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.crf' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.asm' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.cssd' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.gipcd' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.gpnpd' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'zfzhlhrdb1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'zfzhlhrdb1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[zfzhlhrdb1:root]:/>crsctl start has
CRS-4123: Oracle High Availability Services has been started.
[zfzhlhrdb1:root]:/>crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
1 ONLINE OFFLINE Instance Shutdown
ora.cluster_interconnect.haip
1 ONLINE OFFLINE
ora.crf
1 ONLINE ONLINE zfzhlhrdb1
ora.crsd
1 ONLINE OFFLINE
ora.cssd
1 ONLINE OFFLINE STARTING
ora.cssdmonitor
1 ONLINE ONLINE zfzhlhrdb1
ora.ctssd
1 ONLINE OFFLINE
ora.diskmon
1 OFFLINE OFFLINE
ora.drivers.acfs
1 ONLINE OFFLINE
ora.evmd
1 ONLINE OFFLINE
ora.gipcd
1 ONLINE ONLINE zfzhlhrdb1
ora.gpnpd
1 ONLINE ONLINE zfzhlhrdb1
ora.mdnsd
1 ONLINE ONLINE zfzhlhrdb1
[zfzhlhrdb1:root]:/>
Repair the ASM disk header with the kfed command:
# kfed repair /dev/rhdisk2
[zfzhlhrdb1:root]:/>kfed repair /dev/rhdisk2
[zfzhlhrdb1:root]:/>
Inspect the repaired ASM disk header:
[zfzhlhrdb1:root]:/>lquerypv -h /dev/rhdisk2
00000000 00820101 00000000 80000000 9D6A73D5 |.............js.|
00000010 00000000 00000000 00000000 00000000 |................|
00000020 4F52434C 4449534B 00000000 00000000 |ORCLDISK........|
00000030 00000000 00000000 00000000 00000000 |................|
00000040 0B200000 00000103 44415441 5F303030 |. ......DATA_000|
00000050 30000000 00000000 00000000 00000000 |0...............|
00000060 00000000 00000000 44415441 00000000 |........DATA....|
00000070 00000000 00000000 00000000 00000000 |................|
00000080 00000000 00000000 44415441 5F303030 |........DATA_000|
00000090 30000000 00000000 00000000 00000000 |0...............|
000000A0 00000000 00000000 00000000 00000000 |................|
000000B0 00000000 00000000 00000000 00000000 |................|
000000C0 00000000 00000000 01F81BD3 40FE1000 |............@...|
000000D0 01F81BD4 50600800 02001000 00100000 |....P`..........|
000000E0 0001BC80 0002001C 00000003 00000001 |................|
000000F0 00000002 00000002 00000000 00000000 |................|
[zfzhlhrdb1:root]:/>
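kfed repair works here because ASM 11.1.0.7 and later keep a backup copy of the disk header on the disk itself. Besides lquerypv, the repaired header can also be checked with kfed (a quick verification; field names as printed by 11.2 kfed):
kfed read /dev/rhdisk2 | egrep 'kfbh.type|kfdhdb.dskname|kfdhdb.grpname|kfdhdb.hdrsts'
# expect kfbh.type = KFBTYP_DISKHEAD, kfdhdb.dskname = DATA_0000, kfdhdb.grpname = DATA, kfdhdb.hdrsts = KFDHDR_MEMBER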
Start the HAS stack:
# crsctl start has
Check the resource status:
# crsctl stat res -t
[zfzhlhrdb1:root]:/>crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.FRA.dg
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.LISTENER.lsnr
ONLINE OFFLINE zfzhlhrdb1 STARTING
ONLINE ONLINE zfzhlhrdb2
ora.LISTENER_LHRDG.lsnr
ONLINE OFFLINE zfzhlhrdb1 STARTING
ONLINE ONLINE zfzhlhrdb2
ora.asm
ONLINE ONLINE zfzhlhrdb1 Started
ONLINE ONLINE zfzhlhrdb2 Started
ora.gsd
OFFLINE OFFLINE zfzhlhrdb1
OFFLINE OFFLINE zfzhlhrdb2
ora.net1.network
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.ons
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.registry.acfs
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE zfzhlhrdb2
ora.cvu
1 ONLINE ONLINE zfzhlhrdb2
ora.lhrdg.db
1 ONLINE OFFLINE Instance Shutdown
2 ONLINE OFFLINE
ora.oc4j
1 ONLINE ONLINE zfzhlhrdb2
ora.oraesdb.db
1 ONLINE OFFLINE Corrupted Controlfi
le
2 ONLINE OFFLINE
ora.oraeskdb.db
1 ONLINE OFFLINE Instance Shutdown
2 ONLINE OFFLINE
ora.oralhrq.db
1 ONLINE ONLINE zfzhlhrdb2 Open
2 ONLINE OFFLINE Instance Shutdown
ora.raclhr.db
1 ONLINE OFFLINE Instance Shutdown
2 ONLINE OFFLINE
ora.scan1.vip
1 ONLINE ONLINE zfzhlhrdb2
ora.zfzhlhrdb1.vip
1 ONLINE ONLINE zfzhlhrdb1
ora.zfzhlhrdb2.vip
1 ONLINE ONLINE zfzhlhrdb2
[zfzhlhrdb1:root]:/>
二.3.3 Experiment 3: physical backup and recovery of the OCR (automatic backups)
ocrconfig -manualbackup
ocrconfig -showbackup
crsctl query css votedisk
crsctl stop crs -f
crsctl start crs -excl
crsctl stop resource ora.crsd -init
ocrconfig -restore /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/backup_20160701_152358.ocr
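The summary above uses a manual physical backup (ocrconfig -manualbackup). For reference, CRSD also takes automatic physical OCR backups on the OCR master node (every 4 hours, plus daily and weekly copies) under $GRID_HOME/cdata/<cluster_name>/; they can be listed and relocated with standard ocrconfig options:
ocrconfig -showbackup auto             # list only the automatic backups
ocrconfig -backuploc /new/backup/dir   # change the directory used for OCR backups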
[ZFTPCCDB1:root]:/>su - grid
[ZFTPCCDB1:grid]:/home/grid>cluvfy comp ocr -n all -verbose
Verifying OCR integrity
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations
ASM Running check passed. ASM is running on all specified nodes
Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful
Disk group for ocr location "+DATA1" available on all the nodes
NOTE:
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
OCR integrity check passed
Verification of OCR integrity was successful.
[ZFTPCCDB1:grid]:/home/grid>
[ZFTPCCDB1:grid]:/home/grid>
[ZFTPCCDB1:grid]:/home/grid>exit
[ZFTPCCDB1:root]:/>ocrconfig -showbackup
PROT-24: Auto backups for the Oracle Cluster Registry are not available
zftpccdb1 2016/07/01 15:23:58 /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/backup_20160701_152358.ocr
[ZFTPCCDB1:root]:/>ocrconfig -manualbackup
zftpccdb2 2016/07/01 16:08:22 /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/backup_20160701_160822.ocr
zftpccdb1 2016/07/01 15:23:58 /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/backup_20160701_152358.ocr
[ZFTPCCDB1:root]:/>ocrconfig -showbackup
PROT-24: Auto backups for the Oracle Cluster Registry are not available
zftpccdb2 2016/07/01 16:08:22 /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/backup_20160701_160822.ocr
zftpccdb1 2016/07/01 15:23:58 /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/backup_20160701_152358.ocr
[ZFTPCCDB1:root]:/>
Stop CRS on both nodes:
[ZFTPCCDB1:root]:/>crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 2da6f80ec3e64f45bfca9dabe0dd65eb (/dev/rhdisk1) [DATA1]
Located 1 voting disk(s).
[ZFTPCCDB1:root]:/>ocrconfig -restore /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/backup_20160701_152358.ocr
PROT-19: Cannot proceed while the Cluster Ready Service is running
[ZFTPCCDB1:root]:/>crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.crsd' on 'zftpccdb1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.zftpccdb2.vip' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.cvu' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.oralhr.db' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'zftpccdb1'
CRS-2677: Stop of 'ora.zftpccdb2.vip' on 'zftpccdb1' succeeded
CRS-2672: Attempting to start 'ora.zftpccdb2.vip' on 'zftpccdb2'
CRS-2677: Stop of 'ora.cvu' on 'zftpccdb1' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'zftpccdb2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'zftpccdb1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'zftpccdb1' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'zftpccdb1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'zftpccdb2'
CRS-2676: Start of 'ora.cvu' on 'zftpccdb2' succeeded
CRS-2677: Stop of 'ora.oralhr.db' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'zftpccdb1'
CRS-2676: Start of 'ora.zftpccdb2.vip' on 'zftpccdb2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'zftpccdb2'
CRS-2676: Start of 'ora.scan1.vip' on 'zftpccdb2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'zftpccdb2'
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'zftpccdb2' succeeded
CRS-2673: Attempting to stop 'ora.zftpccdb1.vip' on 'zftpccdb1'
CRS-2677: Stop of 'ora.zftpccdb1.vip' on 'zftpccdb1' succeeded
CRS-2672: Attempting to start 'ora.zftpccdb1.vip' on 'zftpccdb2'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'zftpccdb2' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'zftpccdb1' succeeded
CRS-2676: Start of 'ora.zftpccdb1.vip' on 'zftpccdb2' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'zftpccdb1' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'zftpccdb2'
CRS-2676: Start of 'ora.oc4j' on 'zftpccdb2' succeeded
CRS-2677: Stop of 'ora.DATA1.dg' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'zftpccdb1'
CRS-2677: Stop of 'ora.asm' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'zftpccdb1'
CRS-2677: Stop of 'ora.ons' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'zftpccdb1'
CRS-2677: Stop of 'ora.net1.network' on 'zftpccdb1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'zftpccdb1' has completed
CRS-2677: Stop of 'ora.crsd' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.crf' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.evmd' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.asm' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'zftpccdb1'
CRS-2677: Stop of 'ora.mdnsd' on 'zftpccdb1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'zftpccdb1' succeeded
CRS-2677: Stop of 'ora.crf' on 'zftpccdb1' succeeded
CRS-2677: Stop of 'ora.asm' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'zftpccdb1'
CRS-2677: Stop of 'ora.ctssd' on 'zftpccdb1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'zftpccdb1'
CRS-2677: Stop of 'ora.cssd' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'zftpccdb1'
CRS-2677: Stop of 'ora.gipcd' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'zftpccdb1'
CRS-2677: Stop of 'ora.gpnpd' on 'zftpccdb1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'zftpccdb1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'zftpccdb1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[ZFTPCCDB1:root]:/>ps -ef|grep d.bin
root 4391306 6094924 0 16:31:03 pts/0 0:00 grep d.bin
[ZFTPCCDB1:root]:/>
[ZFTPCCDB1:root]:/>ocrconfig -restore /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/backup_20160701_152358.ocr
-- trace dump on error exit --
Error [kgfoAl06] in [kgfokge] at kgfo.c:1529
ORA-29701: unable to connect to Cluster Synchronization Service
Category: 7
DepInfo: 29701
ADR is not properly configured
-- trace dump end --
-- trace dump on error exit --
Error [kgfoAl06] in [kgfokge] at kgfo.c:1255
ORA-29701: unable to connect to Cluster Synchronization Service
Category: 7
DepInfo: 29701
ADR is not properly configured
-- trace dump end --
-- trace dump on error exit --
Error [kgfoAl06] in [kgfokge] at kgfo.c:2063
ORA-29701: unable to connect to Cluster Synchronization Service
Category: 7
DepInfo: 29701
ADR is not properly configured
-- trace dump end --
PROT-35: The configured Oracle Cluster Registry locations are not accessible
[ZFTPCCDB1:root]:/>crsctl start crs -excl
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.mdnsd' on 'zftpccdb1'
CRS-2676: Start of 'ora.mdnsd' on 'zftpccdb1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'zftpccdb1'
CRS-2676: Start of 'ora.gpnpd' on 'zftpccdb1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'zftpccdb1'
CRS-2672: Attempting to start 'ora.gipcd' on 'zftpccdb1'
CRS-2676: Start of 'ora.cssdmonitor' on 'zftpccdb1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'zftpccdb1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'zftpccdb1'
CRS-2672: Attempting to start 'ora.diskmon' on 'zftpccdb1'
CRS-2676: Start of 'ora.diskmon' on 'zftpccdb1' succeeded
CRS-2676: Start of 'ora.cssd' on 'zftpccdb1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'zftpccdb1'
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'zftpccdb1'
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'zftpccdb1'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'zftpccdb1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'zftpccdb1'
CRS-2676: Start of 'ora.ctssd' on 'zftpccdb1' succeeded
CRS-2676: Start of 'ora.drivers.acfs' on 'zftpccdb1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'zftpccdb1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'zftpccdb1'
CRS-2674: Start of 'ora.asm' on 'zftpccdb1' failed
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'zftpccdb1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'zftpccdb1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'zftpccdb1'
CRS-2677: Stop of 'ora.ctssd' on 'zftpccdb1' succeeded
CRS-4000: Command Start failed, or completed with errors.
[ZFTPCCDB1:root]:/>
[ZFTPCCDB1:root]:/>crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
1 ONLINE ONLINE zftpccdb1 Started
ora.cluster_interconnect.haip
1 ONLINE ONLINE zftpccdb1
ora.crf
1 OFFLINE OFFLINE
ora.crsd
1 ONLINE INTERMEDIATE zftpccdb1 EXCLUSIVE
ora.cssd
1 ONLINE ONLINE zftpccdb1
ora.cssdmonitor
1 ONLINE ONLINE zftpccdb1
ora.ctssd
1 ONLINE ONLINE zftpccdb1 OBSERVER
ora.diskmon
1 OFFLINE OFFLINE
ora.drivers.acfs
1 ONLINE ONLINE zftpccdb1
ora.evmd
1 OFFLINE OFFLINE
ora.gipcd
1 ONLINE ONLINE zftpccdb1
ora.gpnpd
1 ONLINE ONLINE zftpccdb1
ora.mdnsd
1 ONLINE ONLINE zftpccdb1
[ZFTPCCDB1:root]:/>ocrconfig -restore /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/backup_20160701_152358.ocr
PROT-19: Cannot proceed while the Cluster Ready Service is running
[ZFTPCCDB1:root]:/>crsctl stop resource ora.crsd -init
CRS-2673: Attempting to stop 'ora.crsd' on 'zftpccdb1'
CRS-2677: Stop of 'ora.crsd' on 'zftpccdb1' succeeded
[ZFTPCCDB1:root]:/>ocrconfig -restore /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/backup_20160701_152358.ocr
[ZFTPCCDB1:root]:/>
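The restore returned without errors. Before restarting the full stack it may be worth verifying the restored OCR, using the same tools shown earlier in this post (ocrcheck as root, cluvfy as the grid user):
ocrcheck
su - grid -c "cluvfy comp ocr -n all -verbose"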
[ZFTPCCDB1:root]:/>crsctl stop has -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'zftpccdb1'
CRS-2677: Stop of 'ora.mdnsd' on 'zftpccdb1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'zftpccdb1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'zftpccdb1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'zftpccdb1'
CRS-2677: Stop of 'ora.cssd' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'zftpccdb1'
CRS-2677: Stop of 'ora.gipcd' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'zftpccdb1'
CRS-2677: Stop of 'ora.gpnpd' on 'zftpccdb1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'zftpccdb1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[ZFTPCCDB1:root]:/>crsctl start has
CRS-4123: Oracle High Availability Services has been started.
[ZFTPCCDB1:root]:/>crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA1.dg
ONLINE ONLINE zftpccdb1
ONLINE ONLINE zftpccdb2
ora.LISTENER.lsnr
ONLINE ONLINE zftpccdb1
ONLINE ONLINE zftpccdb2
ora.asm
ONLINE ONLINE zftpccdb1 Started
ONLINE ONLINE zftpccdb2 Started
ora.gsd
OFFLINE OFFLINE zftpccdb1
OFFLINE OFFLINE zftpccdb2
ora.net1.network
ONLINE ONLINE zftpccdb1
ONLINE ONLINE zftpccdb2
ora.ons
ONLINE ONLINE zftpccdb1
ONLINE ONLINE zftpccdb2
ora.registry.acfs
ONLINE ONLINE zftpccdb1
ONLINE ONLINE zftpccdb2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE zftpccdb2
ora.cvu
1 ONLINE ONLINE zftpccdb2
ora.oc4j
1 ONLINE ONLINE zftpccdb2
ora.oralhr.db
1 ONLINE ONLINE zftpccdb1 Open
2 ONLINE ONLINE zftpccdb2 Open
ora.scan1.vip
1 ONLINE ONLINE zftpccdb2
ora.zftpccdb1.vip
1 ONLINE ONLINE zftpccdb1
ora.zftpccdb2.vip
1 ONLINE ONLINE zftpccdb2
[ZFTPCCDB1:root]:/>
The restore was successful.
二.3.4 Experiment 4: logical backup and recovery of the OCR (manual export/import)
ocrconfig -export /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/export_asm_lhr.bak
ocrconfig -import /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/export_asm_lhr.bak
crsctl stop crs
crsctl start crs -excl -nocrs
ocrconfig -import /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/export_asm_lhr.bak
crsctl start crs
[ZFTPCCDB1:root]:/>ocrconfig -export /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/export_asm_lhr.bak
[ZFTPCCDB1:root]:/>ocrconfig -import /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/export_asm_lhr.bak
PROT-19: Cannot proceed while the Cluster Ready Service is running
[ZFTPCCDB1:root]:/>crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.crsd' on 'zftpccdb1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.oralhr.db' on 'zftpccdb1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.zftpccdb1.vip' on 'zftpccdb1'
CRS-2677: Stop of 'ora.zftpccdb1.vip' on 'zftpccdb1' succeeded
CRS-2672: Attempting to start 'ora.zftpccdb1.vip' on 'zftpccdb2'
CRS-2677: Stop of 'ora.oralhr.db' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'zftpccdb1'
CRS-2676: Start of 'ora.zftpccdb1.vip' on 'zftpccdb2' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'zftpccdb1' succeeded
CRS-2677: Stop of 'ora.DATA1.dg' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'zftpccdb1'
CRS-2677: Stop of 'ora.asm' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'zftpccdb1'
CRS-2677: Stop of 'ora.ons' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'zftpccdb1'
CRS-2677: Stop of 'ora.net1.network' on 'zftpccdb1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'zftpccdb1' has completed
CRS-2677: Stop of 'ora.crsd' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.crf' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.evmd' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.asm' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'zftpccdb1'
CRS-2677: Stop of 'ora.mdnsd' on 'zftpccdb1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'zftpccdb1' succeeded
CRS-2677: Stop of 'ora.crf' on 'zftpccdb1' succeeded
CRS-2677: Stop of 'ora.asm' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'zftpccdb1'
CRS-2677: Stop of 'ora.ctssd' on 'zftpccdb1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'zftpccdb1'
CRS-2677: Stop of 'ora.cssd' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'zftpccdb1'
CRS-2677: Stop of 'ora.gipcd' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'zftpccdb1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'zftpccdb1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'zftpccdb1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'zftpccdb1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[ZFTPCCDB1:root]:/>
[ZFTPCCDB1:root]:/>ocrconfig -import /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/export_asm_lhr.bak
-- trace dump on error exit --
Error [kgfoAl06] in [kgfokge] at kgfo.c:1529
ORA-15077: could not locate ASM instance serving a required diskgroup
Category: 7
DepInfo: 15077
ADR is not properly configured
-- trace dump end --
-- trace dump on error exit --
Error [kgfoAl06] in [kgfokge] at kgfo.c:1255
ORA-15077: could not locate ASM instance serving a required diskgroup
Category: 7
DepInfo: 15077
ADR is not properly configured
-- trace dump end --
-- trace dump on error exit --
Error [kgfoAl06] in [kgfokge] at kgfo.c:2063
ORA-15077: could not locate ASM instance serving a required diskgroup
Category: 7
DepInfo: 15077
ADR is not properly configured
-- trace dump end --
PROT-1: Failed to initialize ocrconfig
PROC-26: Error while accessing the physical storage
ORA-15077: could not locate ASM instance serving a required diskgroup
[ZFTPCCDB1:root]:/>
[ZFTPCCDB1:root]:/>crsctl start crs -excl -nocrs
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.mdnsd' on 'zftpccdb1'
CRS-2676: Start of 'ora.mdnsd' on 'zftpccdb1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'zftpccdb1'
CRS-2676: Start of 'ora.gpnpd' on 'zftpccdb1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'zftpccdb1'
CRS-2672: Attempting to start 'ora.gipcd' on 'zftpccdb1'
CRS-2676: Start of 'ora.cssdmonitor' on 'zftpccdb1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'zftpccdb1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'zftpccdb1'
CRS-2672: Attempting to start 'ora.diskmon' on 'zftpccdb1'
CRS-2676: Start of 'ora.diskmon' on 'zftpccdb1' succeeded
CRS-2676: Start of 'ora.cssd' on 'zftpccdb1' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'zftpccdb1'
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'zftpccdb1'
CRS-2672: Attempting to start 'ora.ctssd' on 'zftpccdb1'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'zftpccdb1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'zftpccdb1'
CRS-2676: Start of 'ora.ctssd' on 'zftpccdb1' succeeded
CRS-2676: Start of 'ora.drivers.acfs' on 'zftpccdb1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'zftpccdb1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'zftpccdb1'
CRS-2676: Start of 'ora.asm' on 'zftpccdb1' succeeded
[ZFTPCCDB1:root]:/>ocrconfig -import /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/export_asm_lhr.bak
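With ASM running in exclusive mode the import now succeeds. The imported registry can be sanity-checked in place before the stack is restarted (ocrcheck and ocrdump are standard tools; the dump file path here is illustrative):
ocrcheck
ocrdump /tmp/ocr_after_import.dmp   # writes the OCR key tree to a text file for inspection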
[ZFTPCCDB1:root]:/>crsctl stop has -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.asm' on 'zftpccdb1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'zftpccdb1'
CRS-2677: Stop of 'ora.mdnsd' on 'zftpccdb1' succeeded
CRS-2677: Stop of 'ora.asm' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'zftpccdb1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'zftpccdb1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'zftpccdb1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'zftpccdb1' succeeded
CRS-2677: Stop of 'ora.cssd' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'zftpccdb1'
CRS-2677: Stop of 'ora.gipcd' on 'zftpccdb1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'zftpccdb1'
CRS-2677: Stop of 'ora.gpnpd' on 'zftpccdb1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'zftpccdb1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[ZFTPCCDB1:root]:/>crsctl start has
CRS-4123: Oracle High Availability Services has been started.
[ZFTPCCDB1:root]:/>
[ZFTPCCDB1:root]:/>crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA1.dg
ONLINE ONLINE zftpccdb1
ONLINE ONLINE zftpccdb2
ora.LISTENER.lsnr
ONLINE ONLINE zftpccdb1
ONLINE ONLINE zftpccdb2
ora.asm
ONLINE ONLINE zftpccdb1 Started
ONLINE ONLINE zftpccdb2 Started
ora.gsd
OFFLINE OFFLINE zftpccdb1
OFFLINE OFFLINE zftpccdb2
ora.net1.network
ONLINE ONLINE zftpccdb1
ONLINE ONLINE zftpccdb2
ora.ons
ONLINE ONLINE zftpccdb1
ONLINE ONLINE zftpccdb2
ora.registry.acfs
ONLINE ONLINE zftpccdb1
ONLINE ONLINE zftpccdb2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE zftpccdb2
ora.cvu
1 ONLINE ONLINE zftpccdb1
ora.oc4j
1 ONLINE ONLINE zftpccdb1
ora.oralhr.db
1 ONLINE ONLINE zftpccdb1 Open
2 ONLINE ONLINE zftpccdb2 Open
ora.scan1.vip
1 ONLINE ONLINE zftpccdb2
ora.zftpccdb1.vip
1 ONLINE ONLINE zftpccdb1
ora.zftpccdb2.vip
1 ONLINE ONLINE zftpccdb2
[ZFTPCCDB1:root]:/>
The OCR was restored successfully.
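Before moving on, it is worth verifying the restored OCR. For example, ocrcheck (run as root) reports the OCR's integrity and location, and the cluvfy check below (normally run as the grid user) also appears in the command summary in Chapter 3:
ocrcheck
cluvfy comp ocr -n all -verbose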
二.3.5 Experiment 5: Recovery after the directories under $ORACLE_HOME/log are deleted
In this experiment we remove the contents of the grid user's log directory, $ORACLE_HOME/log (in the transcript below the directory is simply renamed to log_bk, which has the same effect); once its contents are gone, the cluster can no longer start.
[zfzhlhrdb1:root]:/>crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.asm
ONLINE ONLINE zfzhlhrdb1 Started
ONLINE ONLINE zfzhlhrdb2 Started
ora.gsd
OFFLINE OFFLINE zfzhlhrdb1
OFFLINE OFFLINE zfzhlhrdb2
ora.net1.network
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.ons
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.registry.acfs
ONLINE OFFLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE zfzhlhrdb1
ora.cvu
1 ONLINE ONLINE zfzhlhrdb1
ora.oc4j
1 ONLINE ONLINE zfzhlhrdb1
ora.oraesdb.db
1 ONLINE ONLINE zfzhlhrdb1 Open
2 ONLINE ONLINE zfzhlhrdb2 Open
ora.oraeskdb.db
1 ONLINE ONLINE zfzhlhrdb1 Open
2 ONLINE ONLINE zfzhlhrdb2 Open
ora.scan1.vip
1 ONLINE ONLINE zfzhlhrdb1
ora.zfzhlhrdb1.vip
1 ONLINE ONLINE zfzhlhrdb1
ora.zfzhlhrdb2.vip
1 ONLINE ONLINE zfzhlhrdb2
[zfzhlhrdb1:root]:/>cd $ORACLE_HOME/log
[zfzhlhrdb1:root]:/oracle/app/11.2.0/grid/log>l
total 32
drwxr-xr-x 2 grid dba 256 Nov 05 2014 crs
drwxrwx--T 6 grid asmadmin 256 Nov 06 2014 diag
drwxr-xr-t 25 root dba 4096 Nov 19 2014 yjyltest2
drwxr-xr-t 25 root dba 4096 Jul 15 2015 zfmcisudb5
drwxr-xr-t 25 root dba 4096 Jul 01 10:51 zfzhlhrdb1
drwxr-xr-t 25 root dba 4096 Nov 05 2014 zt1nuwdb1
[zfzhlhrdb1:root]:/oracle/app/11.2.0/grid/log>cd ..
[zfzhlhrdb1:root]:/oracle/app/11.2.0/grid>mv log log_bk
[zfzhlhrdb1:root]:/oracle/app/11.2.0/grid>crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.crsd' on 'zfzhlhrdb1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.zfzhlhrdb1.vip' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.cvu' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.oraesdb.db' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.oraeskdb.db' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.cvu' on 'zfzhlhrdb1' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'zfzhlhrdb2'
CRS-2676: Start of 'ora.cvu' on 'zfzhlhrdb2' succeeded
CRS-2677: Stop of 'ora.zfzhlhrdb1.vip' on 'zfzhlhrdb1' succeeded
CRS-2672: Attempting to start 'ora.zfzhlhrdb1.vip' on 'zfzhlhrdb2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.scan1.vip' on 'zfzhlhrdb1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'zfzhlhrdb2'
CRS-2677: Stop of 'ora.oraeskdb.db' on 'zfzhlhrdb1' succeeded
CRS-2676: Start of 'ora.zfzhlhrdb1.vip' on 'zfzhlhrdb2' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'zfzhlhrdb2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'zfzhlhrdb2'
CRS-2677: Stop of 'ora.oraesdb.db' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'zfzhlhrdb1'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'zfzhlhrdb2' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'zfzhlhrdb1' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'zfzhlhrdb2'
CRS-2677: Stop of 'ora.DATA.dg' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.asm' on 'zfzhlhrdb1' succeeded
CRS-2676: Start of 'ora.oc4j' on 'zfzhlhrdb2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.ons' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.net1.network' on 'zfzhlhrdb1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'zfzhlhrdb1' has completed
CRS-2677: Stop of 'ora.crsd' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.crf' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.evmd' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.asm' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.mdnsd' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.crf' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.asm' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.cssd' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.gpnpd' on 'zfzhlhrdb1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'zfzhlhrdb1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[zfzhlhrdb1:root]:/oracle/app/11.2.0/grid>crsctl start has
《《《《........ There was no output at all here; pressing Enter several times did nothing, and the only way out was Ctrl+C. ........》》》》
[zfzhlhrdb1:root]:/oracle/app/11.2.0/grid>l log
total 0
drwxr-xr-x 4 root system 256 Jul 01 17:35 zfzhlhrdb1
[zfzhlhrdb1:root]:/oracle/app/11.2.0/grid>
The log directory is recreated automatically, but the cluster still cannot start, and the other subdirectories under log are not recreated either. Below we apply the theory covered earlier to recover from this permission-related startup failure. Because this environment is 11.2.0.4, the fix is simple: run the script $GRID_HOME/crs/install/rootcrs.pl -init.
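A simple way to judge whether the permissions are back to normal is to compare them with the healthy listing shown earlier: the host-named directory should be owned root:dba with the sticky bit set (drwxr-xr-t) and diag should be grid:asmadmin (drwxrwx--T). For example:
ls -ld $ORACLE_HOME/log
ls -ld $ORACLE_HOME/log/diag $ORACLE_HOME/log/`hostname`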
--- This runs very quickly.
[zfzhlhrdb1:root]:/oracle/app/11.2.0/grid>$ORACLE_HOME/crs/install/rootcrs.pl -init
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
[zfzhlhrdb1:root]:/oracle/app/11.2.0/grid>
[zfzhlhrdb1:root]:/oracle/app/11.2.0/grid>l log
total 8
drwxrwx--T 3 grid asmadmin 256 Jul 01 17:40 diag
drwxr-xr-t 24 root dba 4096 Jul 01 17:40 zfzhlhrdb1
[zfzhlhrdb1:root]:/oracle/app/11.2.0/grid>
[zfzhlhrdb1:root]:/oracle/app/11.2.0/grid>crsctl start has
CRS-4123: Oracle High Availability Services has been started.
[zfzhlhrdb1:root]:/oracle/app/11.2.0/grid>
CRS can now start, and after waiting a short while all resources come online:
[zfzhlhrdb1:root]:/oracle/app/11.2.0/grid>crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.asm
ONLINE ONLINE zfzhlhrdb1 Started
ONLINE ONLINE zfzhlhrdb2 Started
ora.gsd
OFFLINE OFFLINE zfzhlhrdb1
OFFLINE OFFLINE zfzhlhrdb2
ora.net1.network
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.ons
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.registry.acfs
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE zfzhlhrdb2
ora.cvu
1 ONLINE ONLINE zfzhlhrdb2
ora.oc4j
1 ONLINE ONLINE zfzhlhrdb2
ora.oraesdb.db
1 ONLINE ONLINE zfzhlhrdb1 Open
2 ONLINE ONLINE zfzhlhrdb2 Open
ora.oraeskdb.db
1 ONLINE ONLINE zfzhlhrdb1 Open
2 ONLINE ONLINE zfzhlhrdb2 Open
ora.scan1.vip
1 ONLINE ONLINE zfzhlhrdb2
ora.zfzhlhrdb1.vip
1 ONLINE ONLINE zfzhlhrdb1
ora.zfzhlhrdb2.vip
1 ONLINE ONLINE zfzhlhrdb2
[zfzhlhrdb1:root]:/oracle/app/11.2.0/grid>
The cluster is essentially recovered; see Experiment 6 for the remaining cleanup.
二.3.6 Experiment 6: Using the permission.pl script
As Experiment 5 showed, running $GRID_HOME/crs/install/rootcrs.pl -init only repairs a subset of the files under $GRID_HOME. To avoid leaving problems behind, we should also repair the permissions and ownership of the directories and files that rootcrs.pl leaves untouched (including the database ORACLE_HOME). How? MOS note 1515018.1 provides a ready-made Perl script, permission.pl, and its usage is simple: on a server whose permissions are correct, capture the owner and permissions of every file and directory under GRID_HOME and ORACLE_HOME, generate a shell script from them, and then run that script on the host whose permissions are broken. A quick demonstration:
<1> First download permission.pl, copy it to a server whose permissions are correct, and make it executable; that host must have a working Perl environment. Here we generate the scripts directly on the other node (zfzhlhrdb2), capturing the owner and permissions of every directory and file under GRID_HOME. This must be run as root:
[zfzhlhrdb2:root]:/>l /tmp/permission*
-rw-r----- 1 root system 2326 Jul 01 00:05 /tmp/permission.pl
[zfzhlhrdb2:root]:/>chmod 755 /tmp/permission.pl
[zfzhlhrdb2:root]:/>
[zfzhlhrdb2:root]:/>/tmp/permission.pl $ORACLE_HOME
Following log files are generated
logfile : permission-Fri-Jul-01-18-16-20-2016
Command file : restore-perm-Fri-Jul-01-18-16-20-2016.cmd
Linecount : 18126
[zfzhlhrdb2:root]:/>l *18-16-20-2016*
-rw-r--r-- 1 root system 1457428 Jul 01 18:16 permission-Fri-Jul-01-18-16-20-2016
-rw-r--r-- 1 root system 2928380 Jul 01 18:16 restore-perm-Fri-Jul-01-18-16-20-2016.cmd
[zfzhlhrdb2:root]:/>
<2> Two files are generated. The one beginning with permission* lists the /oracle/app/11.2.0/grid directory and all of its subdirectories and files together with their modes and owners, for example:
[zfzhlhrdb2:root]:/>more permission-Fri-Jul-01-18-16-20-2016
755 root dba /oracle/app/11.2.0/grid
755 grid dba /oracle/app/11.2.0/grid/JRE
640 grid dba /oracle/app/11.2.0/grid/oraInst.loc
750 grid dba /oracle/app/11.2.0/grid/root.sh
755 grid dba /oracle/app/11.2.0/grid/rootupgrade.sh
750 grid dba /oracle/app/11.2.0/grid/.patch_storage
644 grid dba /oracle/app/11.2.0/grid/.patch_storage/LatestOPatchSession.properties
644 grid dba /oracle/app/11.2.0/grid/.patch_storage/interim_inventory.txt
644 grid dba /oracle/app/11.2.0/grid/.patch_storage/patch_free
644 grid dba /oracle/app/11.2.0/grid/.patch_storage/record_inventory.txt
755 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50
710 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/rollback.sh
755 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files
755 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/bin
755 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/bin/lxinst
755 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib
755 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib/libclient11.a
664 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib/libclient11.a/knoggcap.o
755 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib/libgeneric11.a
664 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib/libgeneric11.a/qcd.o
664 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib/libgeneric11.a/qcs.o
755 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib/libpls11.a
664 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib/libpls11.a/pevmexe.o
755 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib/libpls11_pic.a
664 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib/libpls11_pic.a/pevmexe_pic.o
755 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib/libserver11.a
664 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib/libserver11.a/kcfis.o
664 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib/libserver11.a/kf.o
664 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib/libserver11.a/kfd.o
664 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib/libserver11.a/kfds.o
664 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib/libserver11.a/kff.o
664 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib/libserver11.a/kjb.o
664 grid dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib/libserver11.a/kjbl.o
《《《《........ output truncated for brevity ........》》》》
The file beginning with restore* contains the chown/chmod commands needed to carry out the permission repair, for example:
[zfzhlhrdb2:root]:/>more restore-perm-Fri-Jul-01-18-16-20-2016.cmd
chown root:dba /oracle/app/11.2.0/grid
chmod 755 /oracle/app/11.2.0/grid
chown grid:dba /oracle/app/11.2.0/grid/JRE
chmod 755 /oracle/app/11.2.0/grid/JRE
chown grid:dba /oracle/app/11.2.0/grid/oraInst.loc
chmod 640 /oracle/app/11.2.0/grid/oraInst.loc
chown grid:dba /oracle/app/11.2.0/grid/root.sh
chmod 750 /oracle/app/11.2.0/grid/root.sh
chown grid:dba /oracle/app/11.2.0/grid/rootupgrade.sh
chmod 755 /oracle/app/11.2.0/grid/rootupgrade.sh
chown grid:dba /oracle/app/11.2.0/grid/.patch_storage
chmod 750 /oracle/app/11.2.0/grid/.patch_storage
chown grid:dba /oracle/app/11.2.0/grid/.patch_storage/LatestOPatchSession.properties
chmod 644 /oracle/app/11.2.0/grid/.patch_storage/LatestOPatchSession.properties
chown grid:dba /oracle/app/11.2.0/grid/.patch_storage/interim_inventory.txt
chmod 644 /oracle/app/11.2.0/grid/.patch_storage/interim_inventory.txt
chown grid:dba /oracle/app/11.2.0/grid/.patch_storage/patch_free
chmod 644 /oracle/app/11.2.0/grid/.patch_storage/patch_free
chown grid:dba /oracle/app/11.2.0/grid/.patch_storage/record_inventory.txt
chmod 644 /oracle/app/11.2.0/grid/.patch_storage/record_inventory.txt
chown grid:dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50
chmod 755 /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50
chown grid:dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/rollback.sh
chmod 710 /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/rollback.sh
chown grid:dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files
chmod 755 /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files
chown grid:dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/bin
chmod 755 /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/bin
chown grid:dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/bin/lxinst
chmod 755 /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/bin/lxinst
chown grid:dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib
chmod 755 /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib
chown grid:dba /oracle/app/11.2.0/grid/.patch_storage/17478514_Dec_30_2013_03_38_50/files/lib/libclient11.a
《《《《........ output truncated for brevity ........》》》》
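Conceptually, the two generated files simply record the octal mode, owner, and group of every entry under the given directory and turn them into chown/chmod commands. The Perl below is only a minimal sketch of that idea written for this article; it is NOT the actual permission.pl from MOS note 1515018.1, whose options and output differ.
#!/usr/bin/perl
# Minimal illustrative sketch (not the MOS permission.pl): walk a directory tree
# and emit chown/chmod commands that reproduce the current ownership and modes.
use strict;
use warnings;
use File::Find;
use POSIX qw(strftime);

my $root  = shift or die "usage: $0 <directory>\n";
my $stamp = strftime("%a-%b-%d-%H-%M-%S-%Y", localtime);
open my $cmd, '>', "restore-perm-$stamp.cmd" or die "cannot create command file: $!\n";

find({ no_chdir => 1, wanted => sub {
    my @st = lstat($File::Find::name) or return;   # skip anything we cannot stat
    return if -l _;                                 # ignore symbolic links
    my $mode  = sprintf("%o", $st[2] & 07777);      # permission bits as octal
    my $owner = getpwuid($st[4]) || $st[4];         # uid -> user name
    my $group = getgrgid($st[5]) || $st[5];         # gid -> group name
    print $cmd "chown $owner:$group $File::Find::name\n";
    print $cmd "chmod $mode $File::Find::name\n";
} }, $root);

close $cmd;
print "Command file : restore-perm-$stamp.cmd\n";
Run as root against a directory such as $GRID_HOME, this produces a restore-perm-*.cmd file of chown/chmod lines similar to the ones shown above.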
<3> Capture the owner and permissions of every directory and file under $ORACLE_HOME; this can be done as the oracle user or as root:
[zfzhlhrdb2:root]:/>su - oracle
[zfzhlhrdb2:oracle]:/oracle>echo $ORACLE_HOME
/oracle/app/oracle/product/11.2.0/db
[zfzhlhrdb2:oracle]:/oracle>exit
[zfzhlhrdb2:root]:/>/tmp/permission.pl /oracle/app/oracle/product/11.2.0/db
Following log files are generated
logfile : permission-Fri-Jul-01-18-21-50-2016
Command file : restore-perm-Fri-Jul-01-18-21-50-2016.cmd
Linecount : 41627
[zfzhlhrdb2:root]:/>
<4> Copy the four generated files to the target host, then execute the two scripts beginning with restore* there as root.
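The copy itself can be done with scp, for example (this assumes root ssh between the nodes is permitted; otherwise copy as a non-root user and fix ownership afterwards). The /permissions directory is simply where the files are placed on zfzhlhrdb1 in the listing below:
scp /permission-Fri-Jul-01-* /restore-perm-Fri-Jul-01-*.cmd zfzhlhrdb1:/permissions/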
[zfzhlhrdb1:root]:/permissions>l
total 32600
-rw-r----- 1 root system 1457428 Jul 01 18:16 permission-Fri-Jul-01-18-16-20-2016
-rw-r----- 1 root system 4114174 Jul 01 18:22 permission-Fri-Jul-01-18-21-50-2016
-rw-r----- 1 root system 2928380 Jul 01 18:16 restore-perm-Fri-Jul-01-18-16-20-2016.cmd
-rw-r----- 1 root system 8184311 Jul 01 18:22 restore-perm-Fri-Jul-01-18-21-50-2016.cmd
[zfzhlhrdb1:root]:/permissions>chmod 755 *.cmd
[zfzhlhrdb1:root]:/permissions>l
total 32600
-rw-r----- 1 root system 1457428 Jul 01 18:16 permission-Fri-Jul-01-18-16-20-2016
-rw-r----- 1 root system 4114174 Jul 01 18:22 permission-Fri-Jul-01-18-21-50-2016
-rwxr-xr-x 1 root system 2928380 Jul 01 18:16 restore-perm-Fri-Jul-01-18-16-20-2016.cmd
-rwxr-xr-x 1 root system 8184311 Jul 01 18:22 restore-perm-Fri-Jul-01-18-21-50-2016.cmd
[zfzhlhrdb1:root]:/permissions>./restore-perm-Fri-Jul-01-18-16-20-2016.cmd
chown: /oracle/app/11.2.0/grid/auth/crs/zfzhlhrdb2: A file or directory in the path name does not exist.
chmod: /oracle/app/11.2.0/grid/auth/crs/zfzhlhrdb2: A file or directory in the path name does not exist.
chown: /oracle/app/11.2.0/grid/auth/css/zfzhlhrdb2: A file or directory in the path name does not exist.
chmod: /oracle/app/11.2.0/grid/auth/css/zfzhlhrdb2: A file or directory in the path name does not exist.
chown: /oracle/app/11.2.0/grid/auth/evm/zfzhlhrdb2: A file or directory in the path name does not exist.
。。。。。。。。。。。。。
[zfzhlhrdb1:root]:/permissions>./restore-perm-Fri-Jul-01-18-21-50-2016.cmd
chown: /oracle/app/oracle/product/11.2.0/db/ccr/state/ORADEDB-RAC.ccr: A file or directory in the path name does not exist.
chmod: /oracle/app/oracle/product/11.2.0/db/ccr/state/ORADEDB-RAC.ccr: A file or directory in the path name does not exist.
chown: /oracle/app/oracle/product/11.2.0/db/ccr/state/ORADEDB-RAC.ll: A file or directory in the path name does not exist.
chmod: /oracle/app/oracle/product/11.2.0/db/ccr/state/ORADEDB-RAC.ll: A file or directory in the path name does not exist.
chown: /oracle/app/oracle/product/11.2.0/db/ccr/state/ORADEDB-RAC.ll-stat: A file or directory in the path name does not exist.
chmod: /oracle/app/oracle/product/11.2.0/db/ccr/state/ORADEDB-RAC.ll-stat: A file or directory in the path name does not exist.
chown: /oracle/app/oracle/product/11.2.0/db/cfgtoollogs/opatch/opatch2016-07-01_12-50-49PM_1.log: A file or directory in the path name does not exist.
chmod: /oracle/app/oracle/product/11.2.0/db/cfgtoollogs/opatch/opatch2016-07-01_12-50-49PM_1.log: A file or directory in the path name does not exist.
chown: /oracle/app/oracle/product/11.2.0/db/dbs/hc_oraESDB2.dat: A file or directory in the path name does not exist.
。。。。。。。。。。。。。。。。。
The "does not exist" messages above simply mean the healthy node has files that the damaged node does not (for example node-specific logs), so they can safely be ignored. Now stop and start the cluster to verify:
[zfzhlhrdb1:root]:/>crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.crsd' on 'zfzhlhrdb1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.LISTENER_LHRDG.lsnr' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.oraesdb.db' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.oraeskdb.db' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.LISTENER_LHRDG.lsnr' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.zfzhlhrdb1.vip' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.oraesdb.db' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.zfzhlhrdb1.vip' on 'zfzhlhrdb1' succeeded
CRS-2672: Attempting to start 'ora.zfzhlhrdb1.vip' on 'zfzhlhrdb2'
CRS-2677: Stop of 'ora.registry.acfs' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.oraeskdb.db' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'zfzhlhrdb1'
CRS-2676: Start of 'ora.zfzhlhrdb1.vip' on 'zfzhlhrdb2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.asm' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.ons' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.net1.network' on 'zfzhlhrdb1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'zfzhlhrdb1' has completed
CRS-2677: Stop of 'ora.crsd' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.crf' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.evmd' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.asm' on 'zfzhlhrdb1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.mdnsd' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.crf' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'zfzhlhrdb1' succeeded
crsctl start has
CRS-2677: Stop of 'ora.asm' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.cssd' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.gipcd' on 'zfzhlhrdb1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'zfzhlhrdb1'
CRS-2677: Stop of 'ora.gpnpd' on 'zfzhlhrdb1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'zfzhlhrdb1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'zfzhlhrdb1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[zfzhlhrdb1:root]:/>crsctl start has
CRS-4123: Oracle High Availability Services has been started.
[zfzhlhrdb1:root]:/>crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.LISTENER_LHRDG.lsnr
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.asm
ONLINE ONLINE zfzhlhrdb1 Started
ONLINE ONLINE zfzhlhrdb2 Started
ora.gsd
OFFLINE OFFLINE zfzhlhrdb1
OFFLINE OFFLINE zfzhlhrdb2
ora.net1.network
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.ons
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
ora.registry.acfs
ONLINE ONLINE zfzhlhrdb1
ONLINE ONLINE zfzhlhrdb2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE zfzhlhrdb1
ora.cvu
1 ONLINE ONLINE zfzhlhrdb1
ora.oc4j
1 ONLINE ONLINE zfzhlhrdb1
ora.oraesdb.db
1 ONLINE ONLINE zfzhlhrdb1 Open
2 ONLINE ONLINE zfzhlhrdb2 Open
ora.oraeskdb.db
1 ONLINE ONLINE zfzhlhrdb1 Open
2 ONLINE ONLINE zfzhlhrdb2 Open
ora.scan1.vip
1 ONLINE ONLINE zfzhlhrdb1
ora.zfzhlhrdb1.vip
1 ONLINE ONLINE zfzhlhrdb1
ora.zfzhlhrdb2.vip
1 ONLINE ONLINE zfzhlhrdb2
[zfzhlhrdb1:root]:/>
OK, all done and wrapped up; time to go home.
二.4 Experiment summary
This post covers a lot of ground, mainly two classes of problems: OCR backup and recovery, and repairing the permissions under the GRID_HOME directory. Remember that the dd commands used in these experiments must only be run against a test database, and always keep more than one kind of backup.
第三章 Summary of the commands used in the experiments
ocrconfig -export /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/export_asm.bak
dd if=/dev/rhdisk1 of=/oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/asm_rhdisk1_dd.bak bs=1024k count=4
asmcmd md_backup /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/asm_md_backup.bak
ocrconfig -manualbackup
ocrconfig -showbackup
ocrconfig -export /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/export_asm.bak
ocrconfig -import /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/export_asm.bak
dd if=/dev/rhdisk1 of=/oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/asm_rhdisk1_dd.bak bs=1024k count=4
dd if=/oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/asm_rhdisk1_dd.bak of=/dev/rhdisk1 bs=1024k count=4
kfed repair /dev/rhdisk1
asmcmd md_backup /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/asm_md_backup.bak
asmcmd md_restore /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/asm_md_backup.bak
ocrconfig -manualbackup
ocrconfig -showbackup
crsctl query css votedisk
crsctl stop crs -f
crsctl start crs -excl
crsctl stop resource ora.crsd -init
ocrconfig -restore /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/backup_20160701_152358.ocr
cluvfy comp ocr -n all -verbose
ocrconfig -export /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/export_asm_lhr.bak
ocrconfig -import /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/export_asm_lhr.bak
crsctl stop crs
crsctl start crs -excl -nocrs
ocrconfig -import /oracle/app/11.2.0/grid/cdata/ZFTPCCDB-crs/export_asm_lhr.bak
crsctl start crs
$ORACLE_HOME/crs/install/rootcrs.pl -init
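For convenience, the backup-related commands above can be wrapped into one small script and run regularly. The following is only a minimal sketch: it reuses the grid home, backup directory, disk device, and file names already used in this article, and every one of them should be adjusted to your own environment.
#!/bin/ksh
# Illustrative OCR/ASM backup sketch; paths are taken from this article, adjust before use.
GRID_HOME=/oracle/app/11.2.0/grid
BAKDIR=$GRID_HOME/cdata/ZFTPCCDB-crs
DT=`date +%Y%m%d_%H%M%S`

# 1. Logical OCR export (run as root)
$GRID_HOME/bin/ocrconfig -export $BAKDIR/export_asm_$DT.bak

# 2. Ask clusterware to take a manual physical OCR backup (root)
$GRID_HOME/bin/ocrconfig -manualbackup

# 3. ASM disk group metadata backup (run as the grid user, which owns the ASM instance)
su - grid -c "asmcmd md_backup $BAKDIR/asm_md_backup_$DT.bak"

# 4. Raw copy of the first 4 MB of the ASM disk holding the OCR (root; test systems only)
dd if=/dev/rhdisk1 of=$BAKDIR/asm_rhdisk1_dd_$DT.bak bs=1024k count=4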
---------------------------------------------------------------------------------------------------------------------
..........................................................................................................................................................................................................
Author: xiaomaimiao (小麥苗), focused solely on database technology, with an emphasis on putting it into practice
This article is published in sync on ITpub (http://blog.itpub.net/26736162) and cnblogs (http://www.cnblogs.com/lhrbest)
Article URL: http://blog.itpub.net/26736162/viewspace-2121470/
PDF version of this article: http://yunpan.cn/cdEQedhCs2kFz (extraction code: ed9b)
Other material shared by xiaomaimiao: http://blog.itpub.net/26736162/viewspace-1624453/
To contact me, add QQ 642808185 and mention why you are adding me
Completed at 中行 between 2016-06-24 10:00 and 2016-07-04 19:00
[All rights reserved. Reposting is allowed, but the source URL must be cited via a link; otherwise legal liability will be pursued.]
..........................................................................................................................................................................................................