Oracle 11g RAC: resolving "ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443"



Source: CSDN - David Dai

1. Problem Description

While installing 11.2.0.1 RAC on Oracle Linux 6.1, running the root.sh script during the Grid installation fails with the following error:

[root@rac1 bin]# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2012-06-27 10:31:18: Parsing the host name
2012-06-27 10:31:18: Checking for super user privileges
2012-06-27 10:31:18: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding daemon to inittab
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
ohasd failed to start: Inappropriate ioctl for device
ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443.

Reportedly this error appears only on Linux 6.1 with Oracle 11.2.0.1; 11.2.0.3 does not have the problem. The workaround is to run the following command as root as soon as the file /var/tmp/.oracle/npohasd has been created (a polling loop that automates this is sketched below):

/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
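
To avoid watching for the file by hand, a small polling loop can be run as root in a second shell; it waits for /var/tmp/.oracle/npohasd to appear and then issues the dd command once. This is only a convenience sketch based on the command above; the one-second polling interval is an arbitrary choice.

#!/bin/bash
# Sketch: wait for the npohasd pipe created by root.sh, then read from it once
# so that ohasd can start. Run as root in a separate shell while root.sh runs.
PIPE=/var/tmp/.oracle/npohasd
while [ ! -e "$PIPE" ]; do
    sleep 1    # arbitrary 1-second polling interval
done
/bin/dd if="$PIPE" of=/dev/null bs=1024 count=1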

2. Cleaning Up the Failed Installation

There are two ways to do this: (1) remove the Grid installation entirely, or (2) deconfigure what root.sh has already done.

2.1 Removing Grid
Before continuing, remove the Grid installation. For the detailed procedure, see:
RAC Deinstallation Notes
http://blog.csdn.net/tianlesoftware/article/details/5892225

Run the following on all nodes (a consolidated script sketch follows the list):

rm -rf /etc/oracle/*

rm -rf /etc/init.d/init.cssd

rm -rf /etc/init.d/init.crs

rm -rf /etc/init.d/init.crsd

rm -rf /etc/init.d/init.evmd

rm -rf /etc/rc2.d/K96init.crs

rm -rf /etc/rc2.d/S96init.crs

rm -rf /etc/rc3.d/K96init.crs

rm -rf /etc/rc3.d/S96init.crs

rm -rf /etc/rc5.d/K96init.crs

rm -rf /etc/rc5.d/S96init.crs

rm -rf /etc/oracle/scls_scr

rm -rf /etc/inittab.crs

 

rm -rf /var/tmp/.oracle/*

or

rm -rf /tmp/.oracle/*
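
For convenience, the same cleanup can be wrapped in one script and run as root on every node. This is only a sketch of the commands listed above; the runlevel links and file names are copied from that list and may differ on other OS releases.

#!/bin/bash
# Sketch: remove Clusterware init scripts and state left behind by the failed install.
# Run as root on every node; mirrors the manual command list above.
rm -rf /etc/oracle/*
rm -f /etc/init.d/init.cssd /etc/init.d/init.crs /etc/init.d/init.crsd /etc/init.d/init.evmd
for rl in 2 3 5; do
    rm -f /etc/rc${rl}.d/K96init.crs /etc/rc${rl}.d/S96init.crs
done
rm -rf /etc/oracle/scls_scr /etc/inittab.crs
rm -rf /var/tmp/.oracle/* /tmp/.oracle/*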

 

Remove the ocr.loc file, normally located in the /etc/oracle directory:

[root@rac1 ~]# cd /etc/oracle
You have new mail in /var/spool/mail/root
[root@rac1 oracle]# ls
lastgasp  ocr.loc  ocr.loc.orig  olr.loc  olr.loc.orig  oprocd
[root@rac1 oracle]# rm -rf ocr.*

 

Wipe the ASM raw devices by overwriting their headers with zeros (a loop version follows the individual commands):

[root@rac1 utl]# ll /dev/asm*
brw-rw---- 1 oracle dba 8, 17 Jun 27 09:38 /dev/asm-disk1
brw-rw---- 1 oracle dba 8, 33 Jun 27 09:38 /dev/asm-disk2
brw-rw---- 1 oracle dba 8, 49 Jun 27 09:38 /dev/asm-disk3
brw-rw---- 1 oracle dba 8, 65 Jun 27 09:38 /dev/asm-disk4

 

dd if=/dev/zero of=/dev/asm-disk1 bs=1M count=256
dd if=/dev/zero of=/dev/asm-disk2 bs=1M count=256
dd if=/dev/zero of=/dev/asm-disk3 bs=1M count=256
dd if=/dev/zero of=/dev/asm-disk4 bs=1M count=256
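
If there are more candidate disks, the same wipe can be written as a loop. This is only a sketch: it assumes the udev names /dev/asm-disk* shown above, and it is destructive, so verify that the glob matches only the intended devices before running it as root.

#!/bin/bash
# Sketch: zero the first 256 MB of every ASM candidate device (destructive!).
for dev in /dev/asm-disk*; do
    echo "Wiping $dev ..."
    dd if=/dev/zero of="$dev" bs=1M count=256
done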

 

Remove the /tmp/CVU* directories:

[root@rac1 ~]# rm -rf /tmp/CVU*

 

Delete the Oracle information under /var/opt and the ORACLE_BASE directory:

 

# rm -rf /data/oracle

# rm -rf /var/opt/oracle

 

Delete the scripts copied to /usr/local/bin:

# rm -rf /usr/local/bin/dbhome

# rm -rf /usr/local/bin/oraenv

# rm -rf /usr/local/bin/coraenv

 

Remove the Grid installation directory and recreate it:

[root@rac1 oracle]# rm -rf /u01/app

 

[root@rac2 u01]# mkdir -p /u01/app/11.2.0/grid
[root@rac2 u01]# mkdir -p /u01/app/oracle/product/11.2.0/db_1
[root@rac2 u01]# chown -R oracle:oinstall /u01
[root@rac2 u01]# chmod -R 775 /u01/

 

2.2 Deconfiguring root.sh
Use the rootcrs.pl script to deconfigure what the earlier root.sh run set up (a check for leftover processes follows the output below):

[root@rac1 oracle]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -verbose -force
2012-06-27 14:30:17: Parsing the host name
2012-06-27 14:30:17: Checking for super user privileges
2012-06-27 14:30:17: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Failure to execute: Inappropriate ioctl for device for command /u01/app/11.2.0/grid/bin/crsctl check cluster -n rac1
Failure to execute: Inappropriate ioctl for device for command /u01/app/11.2.0/grid/bin/crsctl check cluster -n rac1
Usage: srvctl <command> <object> [<options>]
    commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config
    objects: database|service|asm|diskgroup|listener|home|ons|eons
For detailed help on each command and object and its options use:
  srvctl <command> -h or
  srvctl <command> <object> -h
PRKO-2012 : nodeapps object is not supported in Oracle Restart
sh: /u01/app/11.2.0/grid/bin/clsecho: No such file or directory
Can't exec "/u01/app/11.2.0/grid/bin/clsecho": No such file or directory at /u01/app/11.2.0/grid/lib/acfslib.pm line 937.
Failure to execute: Inappropriate ioctl for device for command /u01/app/11.2.0/grid/bin/crsctl check cluster -n rac1
You must kill crs processes or reboot the system to properly
cleanup the processes started by Oracle clusterware
2560+0 records in
2560+0 records out
10485760 bytes (10 MB) copied, 0.0373402 s, 281 MB/s
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
You have new mail in /var/spool/mail/root
[root@rac1 oracle]#
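
As the output above warns, stray clusterware daemons can survive the deconfigure. Before reinstalling, it is worth checking for leftover processes; the grep pattern below is only a suggestion, and rebooting the node is the simpler alternative the message itself recommends.

# Check for leftover clusterware daemons (run as root on each node)
ps -ef | egrep 'ohasd|crsd|cssd|evmd|gpnpd|gipcd|mdnsd' | grep -v grep
# Kill any that remain by PID if a reboot is not convenient, e.g. kill -9 <pid>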

3. Re-running root.sh and Working Around the Problem

When running /u01/app/11.2.0/grid/root.sh, open two root shell windows: one to run the script, the other to watch for the /var/tmp/.oracle/npohasd file. As soon as the file appears, immediately run the following as root (the polling loop sketched in section 1 can be used instead of retrying by hand):
/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1

[root@rac1 oracle]# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2012-06-27 14:32:21: Parsing the host name
2012-06-27 14:32:21: Checking for super user privileges
2012-06-27 14:32:21: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert

-------- Note -------------
When root.sh reaches this point, start issuing the dd command repeatedly in the other window (the polling loop from section 1 also works). I did it by hand, as follows:
[root@rac1 ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory
[root@rac1 ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory
[root@rac1 ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory
[root@rac1 ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory
[root@rac1 ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory
You have new mail in /var/spool/mail/root
[root@rac1 ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
-- As soon as the dd command succeeds, root.sh proceeds to completion.

-------- End --------------

Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on oraclelinux-release-6Server-1.0.2.x86_64

CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded

ASM created and started successfully.

DiskGroup DATA created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 372c42f3b2bc4f66bf8b52d2526104e3.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                    File Name        Disk group
--  -----    -----------------                    ---------        ---------
 1. ONLINE   372c42f3b2bc4f66bf8b52d2526104e3 (/dev/asm-disk1)     [DATA]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac1'
CRS-2676: Start of 'ora.DATA.dg' on 'rac1' succeeded

rac1    2012/06/27 14:39:25    /u01/app/11.2.0/grid/cdata/rac1/backup_20120627_143925.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 969 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[root@rac1 oracle]#

root.sh now completes successfully, so the workaround works.

Note:
The dd command is needed when running root.sh on every node.

