1. Environment Preparation
OS: CentOS 7.6
Software: Oracle 11.2.0.4
Database: p13390677_112040_Linux-x86-64_1of7.zip, p13390677_112040_Linux-x86-64_2of7.zip
Grid (clusterware): p13390677_112040_Linux-x86-64_3of7.zip
Shared disks: iSCSI already configured by the systems team.
2. Preliminary Preparation
2.1 IP Planning
Each node needs two NICs. Starting with 11.2, at least four kinds of IP addresses are required, planned as follows:
rac01: 134.80.101.2 (public IP)
rac01-vip: 134.80.101.4 (virtual IP)
rac01-pip: 192.100.100.2 (private IP)
rac02: 134.80.101.3 (public IP)
rac02-vip: 134.80.101.5 (virtual IP)
rac02-pip: 192.100.100.3 (private IP)
scan-cluster: 134.80.101.6 (SCAN IP, the cluster entry point)
Note: the three public-side IPs (the two node public IPs plus the SCAN IP) and the two VIPs must all be in the same subnet!
In an Oracle RAC environment, every node carries several IP addresses: a public IP, a private IP, and a virtual IP (VIP):
Private IP
The private IP address is used only for internal cluster processing (Cache Fusion), such as heartbeat detection and data synchronization between the servers.
Virtual IP (VIP)
When a cluster node fails, database applications (including database clients) fail over to a surviving node through the virtual IP address; VIPs also help balance load.
Public IP
The public IP address is the normal (real) IP address, typically used by the DBA and SA to manage storage, the system, and the database.
SCAN IP
Starting with Oracle 11g R2, Oracle RAC networking adds a listener address, the SCAN IP, so from 11g R2 onward at least four kinds of IP addresses are needed (the three above plus SCAN). Before 11g R2, a RAC client's tnsnames had to be configured with the connection information of every node to obtain features such as load balancing and failover, so whenever nodes were added to or removed from the cluster, every client machine's tns entries had to be updated promptly.
In Oracle 11g R2, the SCAN (Single Client Access Name) feature was introduced to simplify this configuration. It adds a virtual service layer between the database and its clients, namely the SCAN IP and the SCAN IP listener: the client configures only the SCAN IP in its tns entry and connects to the backend cluster database through the SCAN listener. Adding or removing cluster nodes therefore no longer affects clients.
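The client-side configuration described above can be sketched as a tnsnames.ora fragment. This is a minimal example, assuming the SCAN name scan-cluster from section 2.1 and the service name orcl used later in this guide:

```text
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = scan-cluster)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )
```

Only this single entry is needed no matter how many nodes the cluster has; the SCAN listener hands each connection off to a node listener.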
2.2 Edit /etc/hosts
Configure the same file on both rac01 and rac02 (vim /etc/hosts):
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
#rac01
134.80.101.2 rac01
134.80.101.4 rac01-vip
192.100.100.2 rac01-pip
#rac02
134.80.101.3 rac02
134.80.101.5 rac02-vip
192.100.100.3 rac02-pip
#scan cluster
134.80.101.6 scan-cluster
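As a quick sanity check of the subnet rule from section 2.1, a small script can confirm that all the public-side addresses above share one /24 network (this sketch assumes a 255.255.255.0 netmask):

```shell
# Check that the public IPs, VIPs and SCAN IP fall in the same /24 network.
ips="134.80.101.2 134.80.101.3 134.80.101.4 134.80.101.5 134.80.101.6"
net=""
ok=yes
for ip in $ips; do
  prefix=${ip%.*}              # first three octets (the /24 assumption)
  if [ -z "$net" ]; then net=$prefix; fi
  if [ "$prefix" != "$net" ]; then ok=no; fi
done
echo "same /24 subnet: $ok"
```

If this prints no, expect the subnet error described in section 3.5.1.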
2.3 Create users and groups
As root on both rac01 and rac02, create the groups and users and set the passwords:
groupadd -g 1000 oinstall
groupadd -g 1200 dba
groupadd -g 1201 oper
groupadd -g 1300 asmadmin
groupadd -g 1301 asmdba
groupadd -g 1302 asmoper
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
useradd -u 1101 -g oinstall -G dba,oper,asmdba oracle
passwd grid
passwd oracle
2.4 Create directories and set ownership
Create and authorize the directories on both rac01 and rac02.
Run as root:
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/grid
mkdir /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
2.5 Configure user environment variables
2.5.1 grid user
rac01:
[grid@rac01 ~]$ vim .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022
[grid@rac01 ~]$ source .bash_profile
rac02:
[grid@rac02 ~]$ vim .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM2
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022
[grid@rac02 ~]$ source .bash_profile
2.5.2 oracle user
rac01:
[oracle@rac01 ~]$ vim .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=orcl1
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
[oracle@rac01 ~]$ source .bash_profile
rac02:
[oracle@rac02 ~]$ vim .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=orcl2
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
[oracle@rac02 ~]$ source .bash_profile
2.6 Configure SSH equivalence for the grid and oracle users (repeat the steps below as each user)
rac01:
ssh-keygen -t rsa    # press Enter at every prompt
ssh-keygen -t dsa    # press Enter at every prompt
rac02:
ssh-keygen -t rsa    # press Enter at every prompt
ssh-keygen -t dsa    # press Enter at every prompt
After running the two commands above first on rac01 and then on rac02, go back to rac01 and continue with:
cat ~/.ssh/id_rsa.pub >>~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >>~/.ssh/authorized_keys
ssh rac02 cat ~/.ssh/id_rsa.pub >>~/.ssh/authorized_keys
ssh rac02 cat ~/.ssh/id_dsa.pub >>~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac02:~/.ssh/authorized_keys
chmod 600 .ssh/authorized_keys
From each node, ssh to every address once so the host keys are accepted:
ssh rac01 date
ssh rac02 date
ssh rac01-pip date
ssh rac02-pip date
2.7 Configure system parameters
2.7.1 Disable SELinux
On both rac01 and rac02, make sure SELINUX=disabled is set:
[root@rac01 ~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
2.7.2 Modify sysctl.conf
On both rac01 and rac02:
[root@rac01 ~]# vim /etc/sysctl.conf, append the following:
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736    # half of physical RAM, in bytes; adjust to your server
kernel.shmall = 2097152        # physical RAM / page size (getconf PAGE_SIZE); adjust to your server
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.tcp_wmem = 262144 262144 262144
net.ipv4.tcp_rmem = 4194304 4194304 4194304
[root@rac01 ~]# sysctl -p    # apply immediately
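The sizing comments for kernel.shmmax and kernel.shmall above can be computed rather than guessed. A sketch for a hypothetical server with 16 GiB of RAM and 4 KiB pages (substitute your real RAM size; getconf PAGE_SIZE reports the actual page size):

```shell
# Derive kernel.shmmax (bytes) and kernel.shmall (pages) from RAM size.
ram_bytes=$((16 * 1024 * 1024 * 1024))    # assumed: 16 GiB of physical RAM
page_size=4096                            # assumed: getconf PAGE_SIZE -> 4096
shmmax=$((ram_bytes / 2))                 # half of physical RAM
shmall=$((ram_bytes / 2 / page_size))     # the same half of RAM, in pages
echo "kernel.shmmax = $shmmax"
echo "kernel.shmall = $shmall"
```

With 16 GiB of RAM this yields shmall = 2097152, the value used in the sysctl.conf fragment above.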
2.7.3 Modify limits.conf
Run on both rac01 and rac02:
[root@rac01 ~]# vim /etc/security/limits.conf, append the following:
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
2.7.4 Modify /etc/pam.d/login
Run on both rac01 and rac02:
[root@rac01 ~]# vim /etc/pam.d/login, insert below the line "session required pam_namespace.so":
session required pam_limits.so
2.7.5 Modify /etc/profile
Run on both rac01 and rac02:
[root@rac01 ~]# cp /etc/profile /etc/profile.bak
[root@rac01 ~]# vim /etc/profile, append the following:
if [ "$USER" = "oracle" ] || [ "$USER" = "grid" ]; then
if [ "$SHELL" = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
2.7.6 Stop and remove the ntp service
Run on both rac01 and rac02:
[root@rac01~]# service ntpd status
[root@rac01~]# chkconfig ntpd off
[root@rac01~]# cp /etc/ntp.conf /etc/ntp.conf.bak
[root@rac01 ~]# rm -rf /etc/ntp.conf
2.8 Partition the shared disks
On rac01, run fdisk /dev/sdb
and enter n, p, w, etc. at the prompts.
# Repeat the same steps to partition sdc, sdd, and sde.
After partitioning, run on rac02:
[root@rac02 ~]# partprobe    # make the new partition tables visible immediately
2.9 Install the ASMLib packages
The disks are configured here with ASMLib, which requires the oracleasm rpm packages:
[root@ ~]# yum install kmod-oracleasm
[root@ ~]# rpm -ivh oracleasmlib-2.0.12-1.el6.x86_64.rpm
[root@ ~]# rpm -ivh oracleasm-support-2.1.8-1.el6.x86_64.rpm
kmod-oracleasm can be installed directly with yum; the other two packages must be downloaded from Oracle:
Download: https://www.oracle.com/linux/downloads/linux-asmlib-rhel7-downloads.html
2.10 Create ASM Disk Volumes
2.10.1 Configure and load the ASM kernel module
Run on both rac01 and rac02:
[root@rac01 ~]# oracleasm configure -i, answering the prompts as follows:
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@rac01 ~]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
2.10.2 Create the ASM disks
Run on rac01:
oracleasm createdisk CRSVOL1 /dev/sdc1
oracleasm createdisk FRAVOL1 /dev/sdc2
oracleasm createdisk ARCVOL1 /dev/sdc3
oracleasm createdisk DATAVOL1 /dev/sdd1
oracleasm createdisk DATAVOL2 /dev/sde1
oracleasm createdisk DATAVOL3 /dev/sdf1
oracleasm createdisk DATAVOL4 /dev/sdg1
oracleasm createdisk DATAVOL5 /dev/sdh1
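Before running the commands above for real, the label-to-partition mapping can be reviewed with a dry run that only prints each command (the device names are the ones used in this guide):

```shell
# Dry run: print each oracleasm createdisk command from a label:device list.
cmds=$(for spec in CRSVOL1:/dev/sdc1 FRAVOL1:/dev/sdc2 ARCVOL1:/dev/sdc3 \
                   DATAVOL1:/dev/sdd1 DATAVOL2:/dev/sde1 DATAVOL3:/dev/sdf1 \
                   DATAVOL4:/dev/sdg1 DATAVOL5:/dev/sdh1; do
  label=${spec%%:*}    # text before the first colon is the ASM label
  dev=${spec#*:}       # text after the colon is the partition device
  echo "oracleasm createdisk $label $dev"
done)
echo "$cmds"
```

Once the printed commands look right, paste them into a root shell on rac01.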
[root@rac01 ~]# oracleasm listdisks
ARCVOL1
CRSVOL1
DATAVOL1
DATAVOL2
DATAVOL3
DATAVOL4
DATAVOL5
FRAVOL1
Run oracleasm-discover to confirm that the newly created ASM disks can be found:
[root@rac01 ~]# oracleasm-discover
Using ASMLib from /opt/oracle/extapi/64/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
Discovered disk: ORCL:CRSVOL1 [2096753 blocks (1073537536 bytes), maxio 512]
Discovered disk: ORCL:DATAVOL1 [41940960 blocks (21473771520 bytes), maxio 512]
Discovered disk: ORCL:DATAVOL2 [41940960 blocks (21473771520 bytes), maxio 512]
Discovered disk: ORCL:FRAVOL1 [62912480 blocks (32211189760 bytes), maxio 512]
On rac02, run oracleasm scandisks to pick up the ASM disks:
[root@rac02 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@rac02 ~]# oracleasm listdisks
ARCVOL1
CRSVOL1
DATAVOL1
DATAVOL2
DATAVOL3
DATAVOL4
DATAVOL5
FRAVOL1
3. Install the RAC Clusterware Software
3.1 Install the packages Oracle depends on
3.1.1 Configure the yum repository
cd /etc/yum.repos.d/
Download the repo file:
wget http://mirrors.aliyun.com/repo/Centos-7.repo
mv CentOS-Base.repo CentOS-Base.repo.bak
mv Centos-7.repo CentOS-Base.repo
Refresh the yum cache:
yum clean all
yum makecache
yum update
3.1.2 Install the Oracle dependencies
[root@rac01 ~]# yum -y install binutils compat-libstdc++ elfutils-libelf elfutils-libelf-devel gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers kernel-headers ksh libaio libaio-devel libgcc libgomp libstdc++ libstdc++-devel numactl-devel sysstat unixODBC unixODBC-devel compat-libstdc++* libXp
[root@rac01 ~]# rpm -ivh pdksh-5.2.14-30.x86_64.rpm    (this package must be downloaded separately)
[root@rac02 ~]# (repeat the same commands on rac02)
3.2 Pre-installation checks
As the grid user, run the following script on both rac01 and rac02:
[grid@rac01 ~]$ cd /opt/grid/    # directory the software was uploaded to
[grid@rac01 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac01,rac02 -fixup -verbose
Sample output:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "rac01"
Destination Node Reachable?
------------------------------------ ------------------------
rac01 yes
rac02 yes
Result: Node reachability check passed from node "rac01"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Status
------------------------------------ ------------------------
rac02 passed
rac01 passed
Result: User equivalence check passed for user "grid"
(middle portion omitted)
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking DNS response time for an unreachable node
Node Name Status
------------------------------------ ------------------------
rac02 passed
rac01 passed
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes
Check: Time zone consistency
Result: Time zone consistency check passed
Pre-check for cluster services setup was successful.
3.3 Install the Grid software
Make sure both nodes rac01 and rac02 are up, then log in as the grid user and start the Oracle Grid Infrastructure installer (from a graphical session).
[grid@rac01 ~]$ cd /opt/grid/    # directory the software was uploaded to
To get a graphical display you can install Xmanager, then:
[grid@rac01 grid]$ export DISPLAY=<local ip>:0.0
[grid@rac01 grid]$ ./runInstaller
Select "Skip software updates".
Accept the default: "Install and Configure Oracle Grid Infrastructure for a Cluster".
Choose Advanced Installation.
Choose English.
Uncheck Configure GNS; set the Cluster Name, SCAN Name, and SCAN Port: 1521.
Note: the SCAN Name must match /etc/hosts; here it should be scan-cluster. The Cluster Name can be anything you like.
Click Add to add the second node:
Public Hostname: rac02, Virtual Hostname: rac02-vip.
The final result looks like this:
Verify SSH equivalence:
1) If SSH equivalence was not set up beforehand: select SSH Connectivity, enter the OS password of the grid user, click Setup and wait; once it succeeds, click Test and continue.
2) If SSH equivalence is already in place: click Test, or simply continue.
After clicking Setup,
click OK,
then click Next.
The following screens can be left at their defaults.
Create the ASM disk group. If no disks are listed, click Change Discovery Path and enter the path where the disks live.
Set the passwords; if a password is relatively simple a warning pops up, just continue.
The remaining screens can be left at their defaults.
The pre-installation check runs; some warnings can be ignored. For other errors, analyze and fix them according to the specific message, then click Next.
Accept the defaults and click Install to start the installation.
During the installation
you are prompted to run scripts.
They must be run as root, and never on both nodes at the same time:
first run /u01/app/oraInventory/orainstRoot.sh on rac01,
then /u01/app/oraInventory/orainstRoot.sh on rac02;
next run /u01/app/11.2.0/grid/root.sh on rac01,
then /u01/app/11.2.0/grid/root.sh on rac02.
When the installation finishes, click Close. The clusterware software is now installed; next, verify it.
Some of the screenshots in this section came from the web and may not match exactly; the steps are the same.
3.4 Verify the Oracle Grid Infrastructure installation
The checks should produce results similar to the following.
(rac01 is used as the example; rac02's results are not pasted here. Note that the captured output below uses the hostnames racnode1/racnode2.)
[root@racnode1 ~]# su - grid
[grid@racnode1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
Check the Clusterware resources: [grid@racnode1 ~]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora.CRS.dg ora....up.type 0/5 0/ ONLINE ONLINE racnode1
ora.DATA.dg ora....up.type 0/5 0/ ONLINE ONLINE racnode1
ora.FRA.dg ora....up.type 0/5 0/ ONLINE ONLINE racnode1
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE racnode1
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racnode1
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE racnode1
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE racnode2
ora.gsd ora.gsd.type 0/5 0/ OFFLINE OFFLINE
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE racnode1
ora.oc4j ora.oc4j.type 0/1 0/2 ONLINE ONLINE racnode2
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE racnode1
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE racnode1
ora....C1.lsnr application 0/5 0/0 ONLINE ONLINE racnode1
ora.racnode1.gsd application 0/5 0/0 OFFLINE OFFLINE
ora.racnode1.ons application 0/3 0/0 ONLINE ONLINE racnode1
ora.racnode1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racnode1
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE racnode2
ora....C2.lsnr application 0/5 0/0 ONLINE ONLINE racnode2
ora.racnode2.gsd application 0/5 0/0 OFFLINE OFFLINE
ora.racnode2.ons application 0/3 0/0 ONLINE ONLINE racnode2
ora.racnode2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racnode2
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racnode1
Check the cluster nodes:
[grid@racnode1 ~]$ olsnodes -n
racnode1 1
racnode2 2
Check the Oracle TNS listener processes on both nodes:
[grid@racnode1 ~]$ ps -ef|grep lsnr|grep -v 'grep'
grid 94448 1 0 15:04 ? 00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit
grid 94485 1 0 15:04 ? 00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
[grid@racnode1 ~]$ ps -ef|grep lsnr|grep -v 'grep'|awk '{print $9}'
LISTENER
LISTENER_SCAN1
Confirm that Oracle ASM is serving the Oracle Clusterware files:
[grid@racnode1 ~]$ srvctl status asm -a
ASM is running on racnode1,racnode2
ASM is enabled.
Check the Oracle Cluster Registry (OCR):
[grid@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2624
Available space (kbytes) : 259496
ID : 1555425658
Device/File Name : +CRS
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Check the voting disks:
[grid@racnode1 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 0d90b0c368684ff5bff8f2094823b901 (ORCL:CRSVOL1) [CRS]
Located 1 voting disk(s).
3.5 Errors encountered while installing Grid
3.5.1 SCAN IP reported as not in the same subnet
Cause: the SCAN IP, rac01-vip and rac02-vip were not in the same subnet as rac01 and rac02.
Fix: asked the systems team to re-allocate addresses within one VLAN.
3.5.2 Insufficient space in /tmp
/tmp ran out of space, so the volume had to be extended.
Fix: check the volume group with vgs, then:
lvextend -L +100G /dev/mapper/centos-root
xfs_growfs /dev/mapper/centos-root
3.5.3 The check reported the cvuqdisk-1.0.9-1 rpm as not installed
Fix (the package ships in the rpm directory of the grid installation media):
1. Run cd grid/rpm
2. Run rpm -ivh cvuqdisk-1.0.9-1.rpm; before installing it, make sure smartmontools is installed (yum install smartmontools) on all nodes
3. Click Check Again; the problem is solved.
3.5.4 ohasd failed to start
Running root.sh at the end failed with "ohasd failed to start".
Cause: RHEL/CentOS 7 uses systemd rather than initd to run and restart processes, while root.sh starts the ohasd process the traditional initd way.
[root@rac01 grid]# ./root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u1/db/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u1/db/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2019-10-31 11:19:36.572
[client(33002)]CRS-2101:The OLR was formatted using version 3.
Fix:
On RHEL/CentOS 7, ohasd must be set up as a systemd service before root.sh is run.
Steps:
1. As root, create the service file:
# touch /usr/lib/systemd/system/ohas.service
# chmod 777 /usr/lib/systemd/system/ohas.service
2. Put the following into the newly created ohas.service file:
[root@rac1 init.d]# cat /usr/lib/systemd/system/ohas.service
[Unit]
Description=Oracle High Availability Services
After=syslog.target
[Service]
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1
Type=simple
Restart=always
[Install]
WantedBy=multi-user.target
3. As root, run the following commands:
systemctl daemon-reload
systemctl enable ohas.service
systemctl start ohas.service
4. Check the service status:
[root@rac1 init.d]# systemctl status ohas.service
ohas.service - Oracle High Availability Services
Loaded: loaded (/usr/lib/systemd/system/ohas.service; enabled)
Active: failed (Result: start-limit) since Fri 2015-09-11 16:07:32 CST; 1s ago
Process: 5734 ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1 Type=simple (code=exited, status=203/EXEC)
Main PID: 5734 (code=exited, status=203/EXEC)
Sep 11 16:07:32 rac1 systemd[1]: Starting Oracle High Availability Services...
Sep 11 16:07:32 rac1 systemd[1]: Started Oracle High Availability Services.
Sep 11 16:07:32 rac1 systemd[1]: ohas.service: main process exited, code=exited, status=203/EXEC
Sep 11 16:07:32 rac1 systemd[1]: Unit ohas.service entered failed state.
Sep 11 16:07:32 rac1 systemd[1]: ohas.service holdoff time over, scheduling restart.
Sep 11 16:07:32 rac1 systemd[1]: Stopping Oracle High Availability Services...
Sep 11 16:07:32 rac1 systemd[1]: Starting Oracle High Availability Services...
Sep 11 16:07:32 rac1 systemd[1]: ohas.service start request repeated too quickly, refusing to start.
Sep 11 16:07:32 rac1 systemd[1]: Failed to start Oracle High Availability Services.
Sep 11 16:07:32 rac1 systemd[1]: Unit ohas.service entered failed state.
At this point the status is failed, because /etc/init.d/init.ohasd does not exist yet.
On rac01, root.sh had already been run once, so the service shows as active; running root.sh again then succeeds.
On rac02, the first run of root.sh still fails; after it completes, run systemctl restart ohas.service to bring the service up, and the second run of root.sh succeeds.
Part of the successful output:
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u1/db/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u1/db/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node crmtest1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
3.5.5 The final step reports INS-20802: cluster verification failed
Fix:
According to reports online, this is caused by the SCAN IP failing to resolve because no DNS is configured. The error can be ignored
as long as rac01 and rac02 can ping the SCAN IP.
4. Install the Database Software
4.1 Install the software
Make sure both nodes rac01 and rac02 are up, then log in as the oracle user; install from a graphical session.
Change into the directory containing the installation media:
[oracle@racnode1 database]$ ./runInstaller
Step 1: leave "I wish to receive security updates..." unchecked, then click Next.
Choose Skip software updates and click Next.
Choose Install database software only and click Next.
Select all nodes and test SSH equivalence:
click SSH Connectivity..., enter the oracle password, and click Test; on success a confirmation is shown.
Click OK, then Next.
Choose Enterprise Edition.
The remaining screens can be left at their defaults.
During the installation you are prompted to run a script:
run root.sh first on rac01 and then on rac02.
The database software is now installed.
4.2 Problems encountered during installation
4.2.1 The checker reported that rac01-vip and rac02-vip are not on the same NIC
ifconfig showed rac01's private IP on eth4 while rac02's was on eth5; the interface names must match on both nodes (both eth4 or both eth5).
We had to involve the systems team again. The grid installation had not flagged this; only the database software check did, so in the end grid had to be uninstalled and reinstalled.
4.2.2 The check failed with Error in invoking target 'agent nmhs' of makefile
Fix:
add the flag that links the libnnz11 library to the makefile:
in $ORACLE_HOME/sysman/lib/ins_emagent.mk, change
$(MK_EMAGENT_NMECTL) to $(MK_EMAGENT_NMECTL) -lnnz11
Back up the original file before editing:
[oracle@rac01 ~]$ cd $ORACLE_HOME/sysman/lib
[oracle@rac01 lib]$ cp ins_emagent.mk ins_emagent.mk.bak
[oracle@rac01 lib]$ vim ins_emagent.mk
In vi, type /NMECTL in command mode to jump straight to the line,
and append the parameter -lnnz11 (the first character is the letter l, the last two are the digit 1).
Save, then click Retry;
the pre-installation check now passes.
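The same edit can be scripted with sed. The sketch below works on a stand-in copy of the file so it can be shown safely; on a real system, point mk at $ORACLE_HOME/sysman/lib/ins_emagent.mk instead. The pattern only matches a line that still ends in $(MK_EMAGENT_NMECTL), so running it twice does not append the flag twice:

```shell
# Append -lnnz11 to the $(MK_EMAGENT_NMECTL) recipe line, keeping a backup.
mk=./ins_emagent.mk                              # stand-in for the real file
printf '\t%s\n' '$(MK_EMAGENT_NMECTL)' > "$mk"   # simulate the original line
cp "$mk" "$mk.bak"
sed -i 's/\$(MK_EMAGENT_NMECTL)[[:space:]]*$/$(MK_EMAGENT_NMECTL) -lnnz11/' "$mk"
cat "$mk"
```

The backup (.bak) keeps the original line so the change can be reverted if the Retry still fails.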
5. Configure the ASM Disk Groups
(The ASM disk groups could also have been created before installing the Oracle software.) Create the disk groups now:
su - grid
Run asmca to start the graphical configuration, then click Create to create a disk group.
Disk Group Name: DATA;
Redundancy: External (None);
Disks: check DATAVOL1, DATAVOL2, DATAVOL3, DATAVOL4, DATAVOL5;
click OK; a success message appears.
Create the ARC and FRA disk groups the same way, then click Mount All to mount all disk groups and confirm with Yes.
6. Create the Database Instance
Switch to the oracle user and run dbca; the welcome screen appears.
Select Oracle Real Application Clusters (RAC) database and click Next.
Select Create a Database and click Next.
Select Custom Database (the screenshot came from the web; treat it as a reference only).
Configuration Type: Admin-Managed. Global Database Name: orcl. SID Prefix: orcl.
Click Select All; all nodes must be selected here.
Configure Enterprise Manager and the Automatic Maintenance Tasks.
Set the passwords ("Use the Same Administrative Password for All Accounts").
Under Database Area, click Browse and select +DATA.
You are asked to set the ASMSNMP password.
Step 7: configure the FRA and archiving; the fast recovery area (FRA) is usually sized at about 90% of the whole volume.
Click Browse and select FRA.
Configure memory (SGA and PGA), the character set, and the connection mode:
choose Typical and start with the default 40% for SGA and PGA; it can be adjusted later.
Adjust processes according to the expected concurrency.
For the character set, UTF-8 is the usual choice, depending on the project.
Start creating the database: select Create Database.
During creation,
a dialog appears with "Database creation complete." and related information.
Click Exit; the database creation is finished.
The RAC installation is now complete.