RedHat Linux, 64-bit operating system
RAC installations are considered difficult largely because inadequate preparation leads to repeated errors and rework, which consumes a great deal of time.
1. Preparation
- Prepare three disks of the same size, 1 GB each. They serve both as OCR disks (storing the RAC configuration) and as voting disks; OCR and voting disks share the same disks. Three disks provide normal redundancy; you can also use five for high redundancy.
- Verify the system configuration (a quick check is sketched after this list):
a) Memory > 1.5G, free memory > 50M
b) SWAP > 3G
c) /tmp > 1G
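A quick way to confirm these values from the shell (a sketch using standard tools; thresholds per the list above):
# grep MemTotal /proc/meminfo    # total memory, expect > 1.5G
# grep SwapTotal /proc/meminfo   # swap, expect > 3G
# free -m                        # free memory, expect > 50M
# df -h /tmp                     # expect > 1G available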
- Check the kernel version with uname -r and make sure an asmlib build matching that kernel exists at http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html; otherwise the kernel must be upgraded. My kernel was originally 2.6.18-229, for which no asmlib build existed, so the module could never be loaded; upgrading the kernel to 2.6.18-238 solved the problem.
See Appendix 1 for the kernel upgrade procedure.
- Configure yum, because third-party packages such as unixODBC-devel need to be installed, and doing so without yum wastes a great deal of time. Note that yum does not work on an unregistered RedHat system.
See Appendix 2 for the yum configuration procedure.
With the above preparation complete, the RAC configuration can begin.
2. Configuration (perform on both nodes)
2.1. Create the grid and oracle users and groups.
- grid is used to install the RAC infrastructure software, including Clusterware and ASM.
- oracle is used to install the database software.
groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 503 oper
groupadd -g 504 asmadmin
groupadd -g 505 asmoper
groupadd -g 506 asmdba
mkdir -p /oracle/app/{grid,oracle,oraInventory}
useradd -u 501 -g oinstall -G dba,asmdba,oper -d /oracle/app/oracle oracle
useradd -u 502 -g oinstall -G asmadmin,asmdba,asmoper -m -d /oracle/app/grid grid
chmod -R 775 /oracle/app/
chown -R grid:oinstall /oracle/app/
chown -R grid:oinstall /oracle/app/oraInventory
chown -R grid:oinstall /oracle/app/grid
chown -R oracle:oinstall /oracle/app/oracle
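A quick sanity check that the accounts came out as intended (expected output follows from the useradd/groupadd calls above; group ordering may vary):
# id grid
uid=502(grid) gid=501(oinstall) groups=501(oinstall),504(asmadmin),506(asmdba),505(asmoper)
# id oracle
uid=501(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),506(asmdba),503(oper)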
2.2. Configure system parameters
1. Disable SELinux
vi /etc/selinux/config
SELINUX=disabled
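The change takes effect after a reboot; to switch SELinux to permissive in the running session right away as well:
# setenforce 0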
2. vi /etc/security/limits.conf
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
3. vi /etc/pam.d/login
session required pam_limits.so
4. vi /etc/sysctl.conf
kernel.shmmax = 536870912
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.aio-max-nr=1048576
fs.file-max = 6815744
net.ipv4.ip_local_port_range=9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
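After editing /etc/sysctl.conf, load the new values into the running kernel:
# sysctl -p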
5. Clock synchronization
Grid Infrastructure has its own clock synchronization (CTSS), so the existing clock synchronization must be disabled.
Settings required for grid time synchronization (a new check item in 11gR2):
#Network Time Protocol Setting
/sbin/service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.org
2.3. Configure the environment files for the grid and oracle accounts.
a. .bash_profile for the grid account
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_BASE=/oracle/app/grid; export ORACLE_BASE
ORACLE_HOME=/oracle/app/grid/product/11.2.0; export ORACLE_HOME
NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"; export NLS_DATE_FORMAT
THREADS_FLAG=native; export THREADS_FLAG
PATH=$ORACLE_HOME/bin:$PATH; export PATH
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
b. .bash_profile for the oracle account
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_SID=racdb1; export ORACLE_SID
ORACLE_BASE=/oracle/app/oracle; export ORACLE_BASE
ORACLE_HOME=/oracle/app/oracle/product/11.2.0; export ORACLE_HOME
ORACLE_TERM=xterm; export ORACLE_TERM
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"; export NLS_DATE_FORMAT
NLS_LANG=AMERICAN_AMERICA.ZHS16GBK; export NLS_LANG
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
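Remember that on the second node the SIDs must differ: +ASM2 for grid and racdb2 for oracle (see the note in the CRS management section). A quick check that the profiles are picked up (a sketch; values per the files above, shown for node 1):
# su - grid -c 'echo $ORACLE_SID $ORACLE_HOME'
+ASM1 /oracle/app/grid/product/11.2.0
# su - oracle -c 'echo $ORACLE_SID $ORACLE_HOME'
racdb1 /oracle/app/oracle/product/11.2.0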
2.4. Configure the node names and /etc/hosts
This step is critical; many errors trace back to mistakes made here.
1. Pay attention to the HOSTNAME setting.
2. Remember to append the domain suffix. (A reachability check is sketched after the hosts file below.)
vi /etc/sysconfig/network
HOSTNAME=rac2.domain.com
vi /etc/hosts
192.168.24.204 rac1.domain.com rac1
192.168.24.203 rac2.domain.com rac2
192.168.19.204 rac1priv.domain.com rac1priv
192.168.19.203 rac2priv.domain.com rac2priv
192.168.24.206 rac1vip.domain.com rac1vip
192.168.24.205 rac2vip.domain.com rac2vip
192.168.24.207 racscan.domain.com racscan
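A simple reachability test from each node (names per the hosts file above; the VIPs and the SCAN are not pingable yet — they are brought online during the grid installation):
# for h in rac1 rac2 rac1priv rac2priv; do ping -c 1 $h; done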
2.5. Configure SSH user equivalence for the grid and oracle accounts between the two nodes
This lets the installer copy the grid and oracle directories to the other node during installation.
1) On the primary node RAC1, generate the user's public and private keys as the grid and oracle users:
# su - oracle
$ mkdir ~/.ssh
$ ssh-keygen -t rsa
$ ssh-keygen -t dsa
2) Perform the same operations on the secondary node RAC2 to ensure unobstructed communication:
# su - oracle
$ mkdir ~/.ssh
$ ssh-keygen -t rsa
$ ssh-keygen -t dsa
3) As the oracle user on the primary node RAC1:
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
4) Verify from the primary node RAC1:
$ ssh rac1 date
$ ssh rac2 date
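Repeat step 3) for the grid user as well — the installer needs passwordless SSH for both accounts. A loop that confirms no password or host-key prompt remains (run it as grid and as oracle on each node):
$ for h in rac1 rac2; do ssh -o BatchMode=yes $h date; done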
2.6. Install and configure ASMLib and create the ASM disks
- Download and install the following packages (on both nodes):
http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html
- oracleasm-support-2.1.7-1.el5.x86_64.rpm
- oracleasm-2.6.18-238.el5xen-2.0.5-1.el5.x86_64.rpm
- oracleasmlib-2.0.4-1.el5.x86_64.rpm
- Configure ASMLib (on both nodes):
[root@ora1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
The current values will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
Log location: /var/log/oracleasm
- Initialize the disks (on one node only)
fdisk /dev/sdd
Disk /dev/sdd: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 261 2096451 83 Linux
… …
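The interactive fdisk dialog is elided above; the partition simply spans the whole disk. A commonly used non-interactive equivalent (a sketch — n: new, p: primary, 1: partition number, two defaults for the cylinder range, w: write), to be repeated for the other two disks (/dev/sde and /dev/sdh per the createdisk step below):
# echo -e "n\np\n1\n\n\nw" | fdisk /dev/sdd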
- Create the ASM disks (on one node only)
If a disk previously belonged to a disk group, run deletedisk on it first and then recreate it.
[root@ora2 asm]# /etc/init.d/oracleasm createdisk CRS1 /dev/sdd1
Marking disk "CRS1" as an ASM disk: [ OK ]
[root@ora2 asm]# /etc/init.d/oracleasm createdisk CRS2 /dev/sde1
Marking disk "CRS2" as an ASM disk: [ OK ]
[root@ora2 asm]# /etc/init.d/oracleasm createdisk CRS3 /dev/sdh1
Marking disk "CRS3" as an ASM disk: [ OK ]
- Scan for the ASM disks (on both nodes)
[root@ora2 asm]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@ora2 asm]# /etc/init.d/oracleasm listdisks
CRS1
CRS2
CRS3
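To double-check an individual label afterwards, the same init script offers querydisk (a sketch; it reports whether the named disk is a valid ASM disk):
[root@ora2 asm]# /etc/init.d/oracleasm querydisk CRS1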
2.7. Pre-installation checks for RAC
Install the cvuqdisk package:
cd grid/rpm
rpm -ivh cvuqdisk-1.0.7-1.rpm
Run the pre-installation checks as the grid user:
export CVUQDISK_GRP=oinstall
export LANG=C
Verify the cluster installation requirements:
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
Verify the hardware and operating system setup:
./runcluvfy.sh stage -post hwos -n rac1,rac2 -verbose
At this step some packages such as unixODBC may still be missing; installing them with yum is enough.
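A typical fix-up looks like this (a sketch — the exact list depends on what cluvfy reports; these package names are from the standard RHEL 5 / CentOS 5 repositories):
# yum install -y unixODBC unixODBC-devel libaio-devel sysstat elfutils-libelf-devel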
3. Installation (on one node only)
Choose "Install and Configure Grid Infrastructure for a Cluster".
Choose "Advanced Installation".
3.1. Create the SCAN IP
Use the SCAN name configured in /etc/hosts.
3.2. Specify the node information
3.3. Create the ASM disk group
Normal redundancy: requires three candidate disks;
High redundancy: requires five candidate disks;
External redundancy: if a storage array provides the redundancy, only one disk needs to be selected.
Installation and file copying take about 15 minutes.
The progress bar pauses at 65%; at that point the software is being copied to the other nodes.
3.4. Run the root scripts
orainstRoot.sh: run on the nodes in the order listed in section 3.2 (rac1, then rac2). Be careful never to run them in parallel.
root.sh: run on the nodes in the order listed in section 3.2 (rac1, then rac2). Do not run them in parallel.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
FATAL: Module oracleoks not found.
FATAL: Module oracleadvm not found.
FATAL: Module oracleacfs not found.
acfsroot: ACFS-9121: Failed to detect /dev/asm/.asm_ctl_spec.
acfsroot: ACFS-9310: ADVM/ACFS installation failed.
acfsroot: ACFS-9311: not all components were detected after the installation.
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
ASM created and started successfully.
Disk group OCR created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 4fb3851c39cc4f3ebf4d12c1d2050474.
Successful addition of voting disk 9bf84fbf94894f88bf05fbd37bc45f04.
Successful addition of voting disk 505604630a784fe6bfafa7d81c2eadb5.
Successfully replaced voting disk group with +OCR.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 4fb3851c39cc4f3ebf4d12c1d2050474 (ORCL:CRS1) [OCR]
2. ONLINE 9bf84fbf94894f88bf05fbd37bc45f04 (ORCL:CRS2) [OCR]
3. ONLINE 505604630a784fe6bfafa7d81c2eadb5 (ORCL:CRS3) [OCR]
Located 3 voting disk(s).
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2676: Start of 'ora.OCR.dg' on 'rac1' succeeded
rac1 2012/08/04 17:22:22 /oracle/app/grid/product/11.2.0/cdata/rac1/backup_20120804_172222.olr
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
3.5. Installing Oracle
As the oracle user, run runInstaller.
The Oracle installation is essentially the same as a single-instance installation, so it is not repeated here.
Advanced installation
Enterprise Edition
The progress bar stalls at 85% for quite a while.
3.6. Uninstalling Oracle & RAC
Stop all services first.
- Uninstall Oracle
Run as the oracle user:
[oracle@london1 deinstall]$ oracle/product/11.2.0/deinstall/deinstall
Enter the two nodes, RAC, and ASM when prompted.
Do you want to continue (y - yes, n - no)? [n]: y
- Uninstall grid
Run as the oracle user:
[oracle@london1 deinstall]$ grid/product/11.2.0/deinstall/deinstall
- Then, as root on both nodes, run:
/tmp/deinstall2012-08-04_02-28-58-下午/perl/bin/perl -I/tmp/deinstall2012-08-04_02-28-58-下午/perl/lib -I/tmp/deinstall2012-08-04_02-28-58-下午/crs/install /tmp/deinstall2012-08-04_02-28-58-下午/crs/install/rootcrs.pl -force -delete -paramfile /tmp/deinstall2012-08-04_02-28-58-下午/response/deinstall_Ora11g_gridinfrahome1.rsp
CRS Management
Starting and stopping the cluster:
[root@london1]# crsctl start cluster -all
[root@london1]# crsctl stop cluster -all
Alternatively, you could use the -n switch to start Grid Infrastructure on a specific (not local) node.
To check the current status of all nodes in the cluster, execute the following command:
[root@london1]# crsctl check cluster -all
Starting services:
[node1:grid]$srvctl enable oc4j
[node1:grid]$srvctl start oc4j
[node1:grid]$srvctl enable nodeapps
[node1:grid]$srvctl start nodeapps
Status queries and management:
srvctl enable servname
srvctl start servname
crs_stat -t
crsctl status resource -t
olsnodes -l
Check the voting disks: crsctl query css votedisk
Disable CRS autostart: crsctl disable crs
Start CRS manually: crsctl start crs
Check the OLR: ocrcheck -local
Check the OCR: ocrcheck
Note that ORACLE_SID differs between rac1 and rac2; the instance names are distinct: +ASM1/+ASM2 and racdb1/racdb2.
sqlplus / as sysdba
sqlplus / as sysasm
Querying v$datafile as sysasm under the grid account fails with a nomount error; that information must be queried as sysdba under the oracle account.
Modify a system parameter (all instances):
alter system set log_archive_dest_2='location=/data/oradata/jssdbn1';
Modify a system parameter for a specific instance:
alter system set log_archive_dest_2='location=/data/oradata/jssdbn1' sid='jssdbn1';
1. List all databases:
[grid@rac02 ~]$ srvctl config database
2. View a database's configuration:
[grid@rac02 ~]$ srvctl config database -d racdb -a
3. View all Oracle instances (database status):
[grid@rac02 ~]$ srvctl status database -d racdb
4. Check a single instance's status:
[grid@rac02 ~]$ srvctl status instance -d racdb -i racdb1
5. TNS listener status and configuration:
[grid@rac02 ~]$ srvctl status listener
6. SCAN status and configuration:
[grid@rac02 ~]$ srvctl status scan
7. Start/stop all instances with SRVCTL:
[oracle@rac01 ~]$ srvctl stop database -d racdb
[oracle@rac01 ~]$ srvctl start database -d racdb
8. All running instances in the cluster (SQL, as sysasm):
SELECT inst_id, instance_number inst_no, instance_name inst_name, parallel, status,
       database_status db_status, active_state state, host_name host
FROM   gv$instance
ORDER  BY inst_id;
9. All database files and the ASM disk groups they reside in (SQL, as sysdba):
v$datafile, v$logfile, v$tempfile, v$controlfile
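A sketch that pulls all four file lists in one pass (standard dictionary views; the leading '+NAME' component of each returned path is the ASM disk group):
SQL> SELECT name FROM v$datafile
     UNION ALL SELECT member FROM v$logfile
     UNION ALL SELECT name FROM v$tempfile
     UNION ALL SELECT name FROM v$controlfile;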
10. Check the cluster status:
[grid@rac02 ~]$ crsctl check cluster
11. Node application status:
[grid@rac02 ~]$ srvctl status nodeapps
12. VIP status and configuration for each node:
[grid@rac02 ~]$ srvctl status vip -n rac01
13. Node application configuration (VIP, GSD, ONS, listener):
[grid@rac02 ~]$ srvctl config nodeapps -a -g -s -l
14. Verify clock synchronization across all cluster nodes:
[grid@rac02 ~]$ cluvfy comp clocksync -verbose
15. Stop the Oracle Clusterware stack on the local server:
[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster
To force the stop, add -f.
To stop clusterware on all servers:
[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all
16. ASM status and configuration:
[grid@rac02 ~]$ srvctl status asm
ASM is running on rac01,rac02
[grid@rac02 ~]$ srvctl config asm -a
SQL> create tablespace datacfg datafile size 2g extent management local segment space management auto;
By default the datafile is created in the location specified by DB_CREATE_FILE_DEST.
SQL> create tablespace users datafile '+DATA' size 1g extent management local segment space management auto;
SQL> alter database datafile '+DATA/PROD/DATAFILE/users.259.679156903' resize 10G;
Drop a tablespace:
SQL> drop tablespace dataflow including contents and datafiles cascade constraints;
ASM Management
Create disk groups: asmca
Manage disks: asmcmd
Related URL resources:
1. Cluster management: http://candon123.blog.51cto.com/704299/336023
2. ASM management (asmcmd offers shell-like operations): http://space.itpub.net/25574072/viewspace-712245
alter diskgroup dg2 drop disk disk13;
http://blog.csdn.net/wyzxg/article/details/4902439
3. Fragmentation: http://blog.csdn.net/hijk139/article/details/7224768
4. device-mapper multipathing software: http://www.cyberciti.biz/tips/rhel-linux4-setup-device-mapper-multipathing-devicemapper.html
FAQ:
Problem: root.sh fails with errors.
Running #/oracle/app/grid/product/11.2.0/crs/install/rootcrs.pl -deconfig does not clean things up.
Solution:
Force the cleanup:
#/oracle/app/grid/product/11.2.0/crs/install/rootcrs.pl -delete -force -verbose
Reference URL: http://www.aixchina.net/home/space.php?uid=15081&do=blog&id=25724
Problem: perl-DBD installation fails.
Unable to locate an oracle.mk, proc.mk or other suitable *.mk
Solution:
# perl Makefile.PL -l
# make
# make install
Problem: PL/SQL Developer, TOAD, and other clients cannot connect through the SCAN IP.
Solution: Set the SCAN to a Fully Qualified Domain Name (FQDN) or an IP address; a short name cannot be used.
rac1:
SQL> show parameter local_listener
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
local_listener string (DESCRIPTION=(ADDRESS_LIST=(AD
DRESS=(PROTOCOL=TCP)(HOST=rac1
-vip)(PORT=1521))))
SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.2.111)(PORT=1521))))' scope=both sid='orcl1';
SQL> alter system register;
rac2:
SQL> show parameter local_listener
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
local_listener string (DESCRIPTION=(ADDRESS_LIST=(AD
DRESS=(PROTOCOL=TCP)(HOST=rac2
-vip)(PORT=1521))))
SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.2.112)(PORT=1521))))' scope=both sid='orcl2';
SQL> alter system register;
Appendix 1: RedHat Linux kernel upgrade
Kernel source download locations:
Other Linux distributions: http://www.kernel.org/pub/linux/kernel/v2.6
redhat: ftp://ftp.redhat.com/pub/redhat/linux/enterprise/5Server/en/os/SRPMS/kernel-2.6.18-238.el5.src.rpm
# rpm -ivh kernel-2.6.9-22.EL.src.rpm
(The example below uses kernel-2.6.9-22.EL; substitute the version you actually downloaded, e.g. kernel-2.6.18-238.el5.)
The source is unpacked into /usr/src/redhat/SOURCES, and a kernel-2.6.spec file is created in /usr/src/redhat/SPECS.
# cd /usr/src/redhat/SPECS/
# vi kernel-2.6.spec
%define buildup 1
%define buildsmp 1
%define buildsource 1
%define buildhugemem 1
Change the value of buildsource from 0 to 1.
Compile the kernel:
# rpmbuild -ba --target=x86_64 ./kernel-2.6.spec
Double-check the target parameter of the rpmbuild command: is the target machine's architecture i686, i386, or 64-bit? Verify it with the uname command.
3. Final directory layout
After a successful build, the artifacts are distributed as follows:
- All kernel configuration files are generated in /usr/src/redhat/BUILD/kernel-2.6.9/linux-2.6.9/configs:
kernel-2.6.9-x86_64.config
kernel-2.6.9-x86_64-smp.config
- The kernel tree is generated in /usr/src/redhat/BUILD/kernel-2.6.9/linux-2.6.9.
- The kernel RPM packages are generated in /usr/src/redhat/RPMS/{architecture}:
kernel-2.6.9-22.EL.x86_64.rpm
kernel-debuginfo-2.6.9-22.EL.x86_64.rpm
kernel-devel-2.6.9-22.EL.x86_64.rpm
kernel-smp-2.6.9-22.EL.x86_64.rpm
kernel-smp-devel-2.6.9-22.EL.x86_64.rpm
kernel-sourcecode-2.6.9-22.EL.x86_64.rpm
4. Install the kernel: rpm -ivh kernel-2.6.9-22.EL.x86_64.rpm
The kernel is installed into /boot, and grub.conf is updated automatically.
Q: rpmbuild reports the error "Not enough random bytes available. Please do some other work to give".
A: You can see the entropy value using the following command:
#cat /proc/sys/kernel/random/entropy_avail
Now start the 'rngd' daemon using the following command and monitor the entropy on the system:
#rngd -r /dev/urandom -o /dev/random -f -t 1
#watch -n 1 cat /proc/sys/kernel/random/entropy_avail
The 'rngd' daemon is installed by the 'kernel-utils' package on RHEL 4 and the 'rng-utils' package on RHEL 5.
In practice this just means finding the rng-utils package on the RedHat installation ISO and installing it.
Appendix 2: Installing YUM
RedHat Linux is usually unregistered, which leaves the yum program unusable; it must be replaced with the CentOS yum program.
1. Download the yum packages. Because of architecture differences and package updates, the directory and the version numbers in the file names may need slight adjustment:
#wget http://centos.ustc.edu.cn/centos/5/os/{i386|x86_64}/CentOS/yum-3.2.22-39.el5.centos.noarch.rpm
#wget http://centos.ustc.edu.cn/centos/5/os/{i386|x86_64}/CentOS/yum-fastestmirror-1.1.16-21.el5.centos.1.noarch.rpm
#wget http://centos.ustc.edu.cn/centos/5/os/{i386|x86_64}/CentOS/yum-metadata-parser-1.1.2-3.el5.centos.i386.rpm
2. Find the currently installed yum packages and uninstall them:
#rpm -qa|grep yum
# rpm -e yum-3.2.22-20.el5 --nodeps
# rpm -e yum-updatesd-0.9-2.el5 --nodeps
# rpm -e yum-security-1.1.16-13.el5 --nodeps
# rpm -e yum-metadata-parser-1.1.2-3.el5 --nodeps
# rpm -e yum-rhn-plugin-0.5.4-13.el5 --nodeps
3. Download and import the KEY:
# cd /etc/pki/rpm-gpg/
# wget http://mirrors.sohu.com/centos/RPM-GPG-KEY-CentOS-5
# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*
4. Install the yum packages:
rpm -ivh yum-3.2.22-39.el5.centos.noarch.rpm \
yum-fastestmirror-1.1.16-21.el5.centos.1.noarch.rpm \
yum-metadata-parser-1.1.2-3.el5.centos.i386.rpm
5. Edit the configuration file:
vi /etc/yum.repos.d/rhel-debuginfo.repo
[base]
name=Red Hat Enterprise Linux $releasever -Base
baseurl=http://mirrors.sohu.com/centos/5.5/os/$basearch/
gpgcheck=1
[update]
name=Red Hat Enterprise Linux $releasever -Updates
baseurl=http://mirrors.sohu.com/centos/5.5/updates/$basearch/
gpgcheck=1
[extras]
name=Red Hat Enterprise Linux $releasever -Extras
baseurl=http://mirrors.sohu.com/centos/5.5/extras/$basearch/
gpgcheck=1
[addons]
name=Red Hat Enterprise Linux $releasever -Addons
baseurl=http://mirrors.sohu.com/centos/5.5/addons/$basearch/
gpgcheck=1
At this point yum is installed, and yum install can be used to install packages.
Appendix 3: Handling insufficient /dev/shm shared memory
Solution:
For example, to increase /dev/shm to 1 GB, modify this line in /etc/fstab. The default:
none /dev/shm tmpfs defaults 0 0
Change it to:
none /dev/shm tmpfs defaults,size=1024m 0 0
The size parameter can also use G as the unit: size=1G.
Remount /dev/shm for the change to take effect:
# mount -o remount /dev/shm
Or:
# umount /dev/shm
# mount -a
The change can be verified immediately with "df -h".
Reference: http://docs.oracle.com/cd/E14072_01/rac.112/e10717/intro.htm
OHASD is spawned from /etc/inittab:
h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
/etc/init.d/ohasd -> $GRID_HOME/bin/ohasd.bin, which logs to $GRID_HOME/log/<hostname>/ohasd/ohasd.log
The Grid Infrastructure daemons:
- Oracle High Availability Services (OHAS)
- The Grid Plug and Play (GPnP) daemon
- The Grid Interprocess Communication (GIPC) daemon
- The multicast DNS (mDNS) service
- The Grid Naming Service (GNS)
- Cluster Ready Services (CRS)
- The Cluster Synchronization Services (CSS) service
- The Cluster Synchronization Services Agent (cssdagent)
- The Cluster Synchronization Services Monitor (cssdmonitor) process
- The Disk Monitor (diskmon) daemon
- The Oracle Clusterware Kill (oclskd) daemon
- The Cluster Time Synchronization Service (CTSS)
- The Event Manager (EVM) service
- The Event Manager Logger (EVMLOGGER) daemon
- The Oracle Notification Service (ONS, eONS)
Resources whose names begin with ora cannot have their state changed with crsctl; use srvctl instead.
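For example (a sketch; ora.racdb.db is the hypothetical resource name for the racdb database, following the ora.<dbname>.db convention):
# crsctl stop resource ora.racdb.db     <-- not supported for ora.* resources
$ srvctl stop database -d racdb         <-- the supported way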
