1. IP planning
Note: the public IPs, virtual IPs, and SCAN IP must all be on the same subnet.
Each virtual machine in this setup has two NICs: one NAT (192.168.88.x) and one host-only (192.168.94.x).
node1.localdomain         node1         public IP    192.168.88.191
node1-vip.localdomain     node1-vip     virtual IP   192.168.88.193
node1-priv.localdomain    node1-priv    private IP   192.168.94.11
node2.localdomain         node2         public IP    192.168.88.192
node2-vip.localdomain     node2-vip     virtual IP   192.168.88.194
node2-priv.localdomain    node2-priv    private IP   192.168.94.12
scan-cluster.localdomain  scan-cluster  SCAN IP      192.168.88.203
dg.localdomain                                       192.168.88.212
DNS server IP: 192.168.88.11
2. Install Oracle Linux 6.10
The installation itself is not covered here. During the install, configure the public IP and private IP on node1 and node2, and a single IP on dg.
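For reference, a static configuration for node1's public NIC might look like the sketch below; the interface name ens33 and the gateway value are assumptions for this VMware NAT network, so adjust them to your environment:
# /etc/sysconfig/network-scripts/ifcfg-ens33 -- node1 public interface (interface name assumed)
DEVICE=ens33
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.88.191
NETMASK=255.255.255.0
GATEWAY=192.168.88.2     # assumed VMware NAT gateway
DNS1=192.168.88.11       # the DNS server built in step 4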
After the installation, verify that node1 can ping node2 and dg, and that node2 can ping node1 and dg.
On node1:
ping 192.168.88.192
ping 192.168.94.12
ping 192.168.88.212
On node2:
ping 192.168.88.191
ping 192.168.94.11
ping 192.168.88.212
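To avoid running the pings one by one, a small loop like the following can be used (a sketch; the address list comes from the plan above and should be swapped when run on node2):
# Run on node1; replace the list with 192.168.88.191 192.168.94.11 192.168.88.212 on node2.
for ip in 192.168.88.192 192.168.94.12 192.168.88.212; do
    ping -c 2 -W 2 "$ip" >/dev/null && echo "$ip reachable" || echo "$ip UNREACHABLE"
done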
3. Configure hostnames and /etc/hosts
Configure the same /etc/hosts entries on node1 and node2:
127.0.0.1      localhost
::1            localhost localhost.localdomain localhost6 localhost6.localdomain6
#node1
192.168.88.191 node1.localdomain node1
192.168.88.193 node1-vip.localdomain node1-vip
192.168.94.11  node1-priv.localdomain node1-priv
#node2
192.168.88.192 node2.localdomain node2
192.168.88.194 node2-vip.localdomain node2-vip
192.168.94.12  node2-priv.localdomain node2-priv
#scan-ip
192.168.88.203 scan-cluster.localdomain scan-cluster
Test:
On node1:
ping node2
ping node2-priv
On node2:
ping node1
ping node1-priv
4. Install and configure the DNS server (192.168.88.11)
Install the DNS packages:
[root@feiht Packages]# rpm -ivh bind-9.8.2-0.68.rc1.el6.x86_64.rpm
warning: bind-9.8.2-0.68.rc1.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                ########################################### [100%]
   1:bind                   ########################################### [100%]
[root@feiht Packages]# rpm -ivh bind-chroot-9.8.2-0.68.rc1.el6.x86_64.rpm
warning: bind-chroot-9.8.2-0.68.rc1.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                ########################################### [100%]
   1:bind-chroot            ########################################### [100%]
Configure /etc/named.conf:
Notes:
Replace every occurrence of 127.0.0.1 and localhost in this file with any, keeping a space on each side.
Back up the original file before editing, and use the -p option so that ownership and permissions are preserved; otherwise, restoring the backup later will leave the wrong permissions and the DNS service will fail to start.
[root@feiht /]# cd /etc
[root@feiht etc]# cp -p named.conf named.conf.bak
After editing, the file looks like this:
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
options {
        listen-on port 53 { any; };
        listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query     { any; };
        recursion yes;

        dnssec-enable yes;
        dnssec-validation yes;

        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.iscdlv.key";

        managed-keys-directory "/var/named/dynamic";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
        type hint;
        file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
Configure /etc/named.rfc1912.zones:
[root@feiht /]# cd /etc
[root@feiht etc]# cp -p named.rfc1912.zones named.rfc1912.zones.bak
Append the following to the end of /etc/named.rfc1912.zones:
zone "localdomain" IN {
        type master;
        file "localdomain.zone";
        allow-update { none; };
};
zone "88.168.192.in-addr.arpa" IN {
        type master;
        file "88.168.192.in-addr.arpa";
        allow-update { none; };
};
The file after editing:
zone "localhost.localdomain" IN {
        type master;
        file "named.localhost";
        allow-update { none; };
};
zone "localdomain" IN {
        type master;
        file "localdomain.zone";
        allow-update { none; };
};
zone "localhost" IN {
        type master;
        file "named.localhost";
        allow-update { none; };
};
zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN {
        type master;
        file "named.loopback";
        allow-update { none; };
};
zone "1.0.0.127.in-addr.arpa" IN {
        type master;
        file "named.loopback";
        allow-update { none; };
};
zone "0.in-addr.arpa" IN {
        type master;
        file "named.empty";
        allow-update { none; };
};
zone "88.168.192.in-addr.arpa" IN {
        type master;
        file "88.168.192.in-addr.arpa";
        allow-update { none; };
};
Configure the forward and reverse zone files:
[root@feiht ~]# cd /var/named/
Create the forward and reverse zone files:
[root@feiht named]# cp -p named.localhost localdomain.zone
[root@feiht named]# cp -p named.localhost 88.168.192.in-addr.arpa
Append the following to the end of the forward zone file localdomain.zone:
scan-cluster IN A 192.168.88.203
After editing:
$TTL 1D
@       IN SOA  @ rname.invalid. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        NS      @
        A       127.0.0.1
        AAAA    ::1
scan-cluster    A       192.168.88.203
Append the following to the end of the reverse zone file 88.168.192.in-addr.arpa:
203 IN PTR scan-cluster.localdomain.
After editing:
$TTL 1D
@       IN SOA  @ rname.invalid. (
                                        1997022700      ; serial
                                        28800           ; refresh
                                        1400            ; retry
                                        3600000         ; expire
                                        86400 )         ; minimum
        NS      @
        A       127.0.0.1
        AAAA    ::1
203     IN PTR  scan-cluster.localdomain.
If you run into permission problems (I forgot to capture the exact error), change the ownership of the zone files to the named user:
[root@feiht named]# chown -R named:named localdomain.zone
[root@feiht named]# chown -R named:named 88.168.192.in-addr.arpa
Edit /etc/resolv.conf on the DNS server and lock it so that it cannot be rewritten automatically:
[root@feiht etc]# cat /etc/resolv.conf
# Generated by NetworkManager
search localdomain
nameserver 192.168.88.11
[root@feiht named]# chattr +i /etc/resolv.conf
Disable the firewall on the DNS server:
[root@feiht etc]# service iptables stop
[root@oradb ~]# chkconfig iptables off
Start the DNS service (the same init script also handles status, stop, and restart):
[root@feiht named]# /etc/rc.d/init.d/named status
[root@feiht named]# /etc/rc.d/init.d/named start
[root@feiht named]# /etc/rc.d/init.d/named stop
[root@feiht named]# /etc/rc.d/init.d/named restart
Then add the following to /etc/resolv.conf on both RAC nodes, node1 and node2:
search localdomain
nameserver 192.168.88.11
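If you want to apply and lock this on both RAC nodes in one go, something like the following works (a sketch, assuming root access and that no other nameserver entries are needed; the chattr step mirrors what was done on the DNS server itself):
# Run as root on node1 and on node2.
cat > /etc/resolv.conf <<EOF
search localdomain
nameserver 192.168.88.11
EOF
chattr +i /etc/resolv.conf    # optional: keeps NetworkManager from rewriting the file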
Verify on node1 that the SCAN name resolves:
[root@node1 etc]# nslookup 192.168.88.203
Server:         192.168.88.11
Address:        192.168.88.11#53
203.88.168.192.in-addr.arpa     name = scan-cluster.localdomain.
[root@node1 etc]# nslookup scan-cluster.localdomain
Server:         192.168.88.11
Address:        192.168.88.11#53
Name:   scan-cluster.localdomain
Address: 192.168.88.203
[root@node1 etc]# nslookup scan-cluster
Server:         192.168.88.11
Address:        192.168.88.11#53
Name:   scan-cluster.localdomain
Address: 192.168.88.203
Test node2 in the same way.
5. Pre-installation preparation
5.1. Create the users, set their passwords, and edit their profiles on node1 and node2
User and group plan:
GroupName  GroupID  GroupInfo        OracleUser(1100)  GridUser(1101)
oinstall   1000     Inventory Group  Y                 Y
dba        1300     OSDBA Group      Y
oper       1301     OSOPER Group     Y
asmadmin   1200     OSASM                              Y
asmdba     1201     OSDBA for ASM    Y                 Y
asmoper    1202     OSOPER for ASM                     Y
Shell script (node1):
Note: when running this script on node2, change the grid user's ORACLE_SID to +ASM2, the oracle user's ORACLE_SID to devdb2, and ORACLE_HOSTNAME to node2.localdomain.
echo "Now create 6 groups named 'oinstall','dba','asmadmin','asmdba','asmoper','oper'"
echo "Plus 2 users named 'oracle','grid',Also setting the Environment"
groupadd -g 1000 oinstall
groupadd -g 1200 asmadmin
groupadd -g 1201 asmdba
groupadd -g 1202 asmoper
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "grid Infrastructure Owner" grid
echo "grid" | passwd --stdin grid
echo 'export PS1="`/bin/hostname -s`-> "'>> /home/grid/.bash_profile
echo "export TMP=/tmp">> /home/grid/.bash_profile
echo 'export TMPDIR=$TMP'>>/home/grid/.bash_profile
echo "export ORACLE_SID=+ASM1">> /home/grid/.bash_profile
echo "export ORACLE_BASE=/u01/app/grid">> /home/grid/.bash_profile
echo "export ORACLE_HOME=/u01/app/11.2.0/grid">> /home/grid/.bash_profile
echo "export ORACLE_TERM=xterm">> /home/grid/.bash_profile
echo "export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'" >> /home/grid/.bash_profile
echo 'export TNS_ADMIN=$ORACLE_HOME/network/admin' >> /home/grid/.bash_profile
echo 'export PATH=/usr/sbin:$PATH'>> /home/grid/.bash_profile
echo 'export PATH=$ORACLE_HOME/bin:$PATH'>> /home/grid/.bash_profile
echo 'export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib'>> /home/grid/.bash_profile
echo 'export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib'>> /home/grid/.bash_profile
echo "export EDITOR=vi" >> /home/grid/.bash_profile
echo "export LANG=en_US" >> /home/grid/.bash_profile
echo "export NLS_LANG=AMERICAN_AMERICA.AL32UTF8" >> /home/grid/.bash_profile
echo "umask 022">> /home/grid/.bash_profile
groupadd -g 1300 dba
groupadd -g 1301 oper
useradd -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle
echo "oracle" | passwd --stdin oracle
echo 'export PS1="`/bin/hostname -s`-> "'>> /home/oracle/.bash_profile
echo "export TMP=/tmp">> /home/oracle/.bash_profile
echo 'export TMPDIR=$TMP'>>/home/oracle/.bash_profile
echo "export ORACLE_HOSTNAME=node1.localdomain">> /home/oracle/.bash_profile
echo "export ORACLE_SID=devdb1">> /home/oracle/.bash_profile
echo "export ORACLE_BASE=/u01/app/oracle">> /home/oracle/.bash_profile
echo 'export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1'>> /home/oracle/.bash_profile
echo "export ORACLE_UNQNAME=devdb">> /home/oracle/.bash_profile
echo 'export TNS_ADMIN=$ORACLE_HOME/network/admin' >> /home/oracle/.bash_profile
echo "export ORACLE_TERM=xterm">> /home/oracle/.bash_profile
echo 'export PATH=/usr/sbin:$PATH'>> /home/oracle/.bash_profile
echo 'export PATH=$ORACLE_HOME/bin:$PATH'>> /home/oracle/.bash_profile
echo 'export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib'>> /home/oracle/.bash_profile
echo 'export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib'>> /home/oracle/.bash_profile
echo "export EDITOR=vi" >> /home/oracle/.bash_profile
echo "export LANG=en_US" >> /home/oracle/.bash_profile
echo "export NLS_LANG=AMERICAN_AMERICA.AL32UTF8" >> /home/oracle/.bash_profile
echo "export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'" >> /home/oracle/.bash_profile
echo "umask 022">> /home/oracle/.bash_profile
echo "The Groups and users has been created"
echo "The Environment for grid,oracle also has been set successfully"
Check that the users and home directories were created:
[root@node1 shell]# id oracle
uid=1101(oracle) gid=1000(oinstall) 組=1000(oinstall),1201(asmdba),1300(dba),1301(oper)
[root@node1 shell]# id grid
uid=1100(grid) gid=1000(oinstall) 組=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)
[root@node1 shell]# groups oracle
oracle : oinstall asmdba dba oper
[root@node1 shell]# groups grid
grid : oinstall asmadmin asmdba asmoper
[root@node1 home]# ll /home
總用量 8
drwx------. 3 grid   oinstall 4096 2月  5 17:18 grid
drwx------. 3 oracle oinstall 4096 2月  5 17:18 oracle
5.2. Create the directories and set their ownership and permissions
Directory and permission plan:
Environment Variable   Grid User              Oracle User
ORACLE_BASE            /u01/app/grid          /u01/app/oracle
ORACLE_HOME            /u01/app/11.2.0/grid   /u01/app/oracle/product/11.2.0/db_1
ORACLE_SID [node1]     +ASM1                  devdb1
ORACLE_SID [node2]     +ASM2                  devdb2
Shell script:
echo "Now create the necessary directory for oracle,grid users and change the authention to oracle,grid users..."
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01
chown -R grid:oinstall /u01/app/grid
chown -R grid:oinstall /u01/app/11.2.0
chmod -R 775 /u01
echo "The necessary directory for oracle,grid users and change the authention to oracle,grid users has been finished"
5.3. Edit /etc/security/limits.conf to set the shell limits for the oracle and grid users
Shell script:
echo "Now modify the /etc/security/limits.conf,but backup it named /etc/security/limits.conf.bak before"
cp /etc/security/limits.conf /etc/security/limits.conf.bak
echo "oracle soft nproc 2047" >>/etc/security/limits.conf
echo "oracle hard nproc 16384" >>/etc/security/limits.conf
echo "oracle soft nofile 1024" >>/etc/security/limits.conf
echo "oracle hard nofile 65536" >>/etc/security/limits.conf
echo "grid soft nproc 2047" >>/etc/security/limits.conf
echo "grid hard nproc 16384" >>/etc/security/limits.conf
echo "grid soft nofile 1024" >>/etc/security/limits.conf
echo "grid hard nofile 65536" >>/etc/security/limits.conf
echo "Modifing the /etc/security/limits.conf has been succeed."
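To confirm the limits are picked up, open a fresh session for each user and check the hard limits (a sketch; it assumes pam_limits is active for su sessions via /etc/pam.d/system-auth, while the explicit /etc/pam.d/login entry is added in the next step):
# Run as root; a new login session is required for the PAM limits to take effect.
su - oracle -c 'ulimit -Hn; ulimit -Hu'   # expect 65536 and 16384
su - grid   -c 'ulimit -Hn; ulimit -Hu'   # expect 65536 and 16384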
5.4. Edit /etc/pam.d/login
Shell script:
echo "Now modify the /etc/pam.d/login,but with a backup named /etc/pam.d/login.bak"
cp /etc/pam.d/login /etc/pam.d/login.bak
echo "session required /lib/security/pam_limits.so" >>/etc/pam.d/login
echo "session required pam_limits.so" >>/etc/pam.d/login
echo "Modifing the /etc/pam.d/login has been succeed."
5.5. Edit /etc/profile
Shell script:
echo "Now modify the /etc/profile,but with a backup named /etc/profile.bak"
cp /etc/profile /etc/profile.bak
echo 'if [ $USER = "oracle" ]||[ $USER = "grid" ]; then' >> /etc/profile
echo 'if [ $SHELL = "/bin/ksh" ]; then' >> /etc/profile
echo 'ulimit -p 16384' >> /etc/profile
echo 'ulimit -n 65536' >> /etc/profile
echo 'else' >> /etc/profile
echo 'ulimit -u 16384 -n 65536' >> /etc/profile
echo 'fi' >> /etc/profile
echo 'fi' >> /etc/profile
echo "Modifing the /etc/profile has been succeed."
5.6. Edit the kernel parameters in /etc/sysctl.conf
Shell script:
echo "Now modify the /etc/sysctl.conf,but with a backup named /etc/sysctl.bak"
cp /etc/sysctl.conf /etc/sysctl.conf.bak
echo "fs.aio-max-nr = 1048576" >> /etc/sysctl.conf
echo "fs.file-max = 6815744" >> /etc/sysctl.conf
echo "kernel.shmall = 2097152" >> /etc/sysctl.conf
echo "kernel.shmmax = 1054472192" >> /etc/sysctl.conf
echo "kernel.shmmni = 4096" >> /etc/sysctl.conf
echo "kernel.sem = 250 32000 100 128" >> /etc/sysctl.conf
echo "net.ipv4.ip_local_port_range = 9000 65500" >> /etc/sysctl.conf
echo "net.core.rmem_default = 262144" >> /etc/sysctl.conf
echo "net.core.rmem_max = 4194304" >> /etc/sysctl.conf
echo "net.core.wmem_default = 262144" >> /etc/sysctl.conf
echo "net.core.wmem_max = 1048586" >> /etc/sysctl.conf
echo "net.ipv4.tcp_wmem = 262144 262144 262144" >> /etc/sysctl.conf
echo "net.ipv4.tcp_rmem = 4194304 4194304 4194304" >> /etc/sysctl.conf
echo "Modifing the /etc/sysctl.conf has been succeed."
echo "Now make the changes take effect....."
sysctl -p
5.7. Stop the ntp service
[root@node1 /]# service ntpd status
ntpd 已停
[root@node1 /]# chkconfig ntpd off
[root@node1 etc]# ls |grep ntp
ntp
ntp.conf
[root@node1 etc]# cp -p /etc/ntp.conf /etc/ntp.conf.bak
[root@node1 etc]# ls |grep ntp
ntp
ntp.conf
ntp.conf.bak
[root@node1 etc]# rm -rf /etc/ntp.conf
[root@node1 etc]# ls |grep ntp
ntp
ntp.conf.bak
[root@node1 etc]#
6. Repeat step 5 on node2 to configure the second node
Note: when running the script on node2, change the grid user's ORACLE_SID to +ASM2, the oracle user's ORACLE_SID to devdb2, and ORACLE_HOSTNAME to node2.localdomain.
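If you prefer to script those three changes rather than editing the profiles by hand, a sketch like this can be run on node2 (it assumes the profiles were generated by the step-5 script exactly as shown above):
# Run as root on node2 after the step-5 script has been executed there.
sed -i 's/ORACLE_SID=+ASM1/ORACLE_SID=+ASM2/' /home/grid/.bash_profile
sed -i 's/ORACLE_SID=devdb1/ORACLE_SID=devdb2/' /home/oracle/.bash_profile
sed -i 's/ORACLE_HOSTNAME=node1.localdomain/ORACLE_HOSTNAME=node2.localdomain/' /home/oracle/.bash_profile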
7. Configure SSH user equivalence for the oracle and grid users
node1:
[root@node1 etc]# su - oracle
node1-> env | grep ORA
ORACLE_UNQNAME=devdb
ORACLE_SID=devdb1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOSTNAME=node1.localdomain
ORACLE_TERM=xterm
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
node1-> pwd
/home/oracle
node1-> mkdir ~/.ssh
node1-> chmod 700 ~/.ssh
node1-> ls -al
total 36
drwx------. 4 oracle oinstall 4096 Feb  5 18:53 .
drwxr-xr-x. 4 root   root     4096 Feb  5 17:10 ..
-rw-------. 1 oracle oinstall  167 Feb  5 18:16 .bash_history
-rw-r--r--. 1 oracle oinstall   18 Mar 22  2017 .bash_logout
-rw-r--r--. 1 oracle oinstall  823 Feb  5 17:18 .bash_profile
-rw-r--r--. 1 oracle oinstall  124 Mar 22  2017 .bashrc
drwxr-xr-x. 2 oracle oinstall 4096 Nov 20  2010 .gnome2
drwx------. 2 oracle oinstall 4096 Feb  5 18:53 .ssh
-rw-------. 1 oracle oinstall  651 Feb  5 18:16 .viminfo
node1-> ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
b9:97:e7:1b:1c:e4:1d:d9:31:47:e1:d1:90:7f:27:e7 oracle@node1.localdomain
The key's randomart image is:
node1-> ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
b7:70:24:43:ab:90:74:b0:49:dc:a9:bf:e7:19:17:ef oracle@node1.localdomain
The key's randomart image is:
Run the same commands on node2.
Back on node1:
node1-> id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
node1-> pwd
/home/oracle
node1-> cd ~/.ssh
node1-> ll
total 16
-rw-------. 1 oracle oinstall  668 Feb  5 18:55 id_dsa
-rw-r--r--. 1 oracle oinstall  614 Feb  5 18:55 id_dsa.pub
-rw-------. 1 oracle oinstall 1675 Feb  5 18:54 id_rsa
-rw-r--r--. 1 oracle oinstall  406 Feb  5 18:54 id_rsa.pub
node1-> cat ~/.ssh/id_rsa.pub >>~/.ssh/authorized_keys
node1-> cat ~/.ssh/id_dsa.pub >>~/.ssh/authorized_keys
node1-> ll
total 20
-rw-r--r--. 1 oracle oinstall 1020 Feb  5 19:05 authorized_keys
-rw-------. 1 oracle oinstall  668 Feb  5 18:55 id_dsa
-rw-r--r--. 1 oracle oinstall  614 Feb  5 18:55 id_dsa.pub
-rw-------. 1 oracle oinstall 1675 Feb  5 18:54 id_rsa
-rw-r--r--. 1 oracle oinstall  406 Feb  5 18:54 id_rsa.pub
node1-> ssh node2 cat ~/.ssh/id_rsa.pub >>~/.ssh/authorized_keys
The authenticity of host 'node2 (192.168.88.192)' can't be established.
RSA key fingerprint is cd:fd:bd:72:7d:2f:54:b3:d7:32:30:de:67:bb:6f:8b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,192.168.88.192' (RSA) to the list of known hosts.
oracle@node2's password:
node1-> ssh node2 cat ~/.ssh/id_dsa.pub >>~/.ssh/authorized_keys
oracle@node2's password:
node1-> scp ~/.ssh/authorized_keys node2:~/.ssh/authorized_keys
oracle@node2's password:
authorized_keys
node1->
Verify on node1 that SSH user equivalence works:
node1-> ssh node1 date
The authenticity of host 'node1 (192.168.88.191)' can't be established.
RSA key fingerprint is b2:a4:19:c0:85:b5:df:f2:8d:16:d8:b2:83:5b:21:19.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,192.168.88.191' (RSA) to the list of known hosts.
Fri Feb  5 19:15:02 CST 2021
node1-> ssh node2 date
Fri Feb  5 19:15:57 CST 2021
node1-> ssh node1-priv date
...output omitted...
node1-> ssh node2-priv date
...output omitted...
node1-> ssh node1.localdomain date
...output omitted...
node1-> ssh node2.localdomain date
...output omitted...
node1-> ssh node1-priv.localdomain date
...output omitted...
node1-> ssh node2-priv.localdomain date
...output omitted...
Run the same commands on node2 to verify SSH user equivalence there.
Finally, run the commands once more on both node1 and node2; if no password prompt appears, user equivalence between node1 and node2 is configured correctly:
node1:
node1-> ssh node1 date
Fri Feb  5 19:36:22 CST 2021
node1-> ssh node2 date
Fri Feb  5 19:36:26 CST 2021
node1-> ssh node1-priv date
Fri Feb  5 19:36:34 CST 2021
node1-> ssh node2-priv date
Fri Feb  5 19:36:38 CST 2021
node1-> ssh node1.localdomain date
Fri Feb  5 19:37:51 CST 2021
node1-> ssh node2.localdomain date
Fri Feb  5 19:37:54 CST 2021
node1-> ssh node2-priv.localdomain date
Fri Feb  5 19:38:01 CST 2021
node1-> ssh node1-priv.localdomain date
Fri Feb  5 19:38:08 CST 2021
node2:
node2-> ssh node1 date
Fri Feb  5 19:49:20 CST 2021
node2-> ssh node2 date
Fri Feb  5 19:49:23 CST 2021
node2-> ssh node1-priv date
Fri Feb  5 19:49:29 CST 2021
node2-> ssh node2-priv date
Fri Feb  5 19:49:32 CST 2021
node2-> ssh node1.localdomain date
Fri Feb  5 19:49:40 CST 2021
node2-> ssh node2.localdomain date
Fri Feb  5 19:49:43 CST 2021
node2-> ssh node2-priv.localdomain date
Fri Feb  5 19:49:50 CST 2021
node2-> ssh node1-priv.localdomain date
Fri Feb  5 19:49:55 CST 2021
SSH user equivalence for the oracle user is now configured!
8. Repeat step 7 as the grid user (su - grid) to configure SSH user equivalence for grid as well.
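A compact way to re-verify equivalence for either user is to loop over every host alias (a sketch; run it as oracle and again as grid, on both nodes — it should print eight dates without ever asking for a password):
for h in node1 node2 node1-priv node2-priv \
         node1.localdomain node2.localdomain node1-priv.localdomain node2-priv.localdomain; do
    ssh -o BatchMode=yes "$h" date || echo "equivalence NOT working for $h"
done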
9. Configure the shared disks
Create the shared disks on one node first, then add them to the other node as existing disks. Here the disks are created on node2 and then added to node1.
Four disks are created in total: two 500 MB disks for the GRIDDG disk group, dedicated to the OCR and the voting disk (an odd number of voting disks is the usual recommendation); one 3 GB disk for the DATA disk group, which will hold the database; and one 3 GB disk for the FLASH disk group, used as the flash recovery area.
Steps to create the first shared disk on node2:
① Shut down node 2 (RAC2), then right-click RAC2 and choose Settings:
② In the virtual machine settings dialog click Add, choose Hard Disk, then Next:
③ Create a new virtual disk.
④ Specify the size of the shared disk.
⑤ Choose where to store the shared disk file.
⑥ Once the disk is created, select it, click Advanced, and pay special attention to the virtual device node in the dialog: it must be 1:0.
⑦ Repeat steps ①-⑥ to create the second disk, 0.5 GB, with virtual device node 1:1.
⑧ Repeat steps ①-⑥ to create the third disk, 3 GB, with virtual device node 2:0.
⑨ Repeat steps ①-⑥ to create the fourth disk, 3 GB, with virtual device node 2:1.
Shut down node1, then add the disks to node1:
The steps are exactly the same as for node2 above, except that when choosing the disk you must select "Use an existing virtual disk", as shown below:
Edit the virtual machine (.vmx) files of node1 and node2:
Shut down node2 first; hovering the mouse over RAC2 shows which file backs the virtual machine:
The modified file is shown below; the added lines start at scsi1.present:
...lines omitted...
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi2.present = "TRUE"
scsi2.virtualDev = "lsilogic"
scsi2.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:1.present = "TRUE"
scsi2:0.present = "TRUE"
scsi2:1.present = "TRUE"
scsi1:0.fileName = "H:\sharedisk\OCR.vmdk"
scsi1:0.mode = "independent-persistent"
scsi1:0.deviceType = "disk"
scsi1:1.fileName = "H:\sharedisk\VOTING.vmdk"
scsi1:1.mode = "independent-persistent"
scsi1:1.deviceType = "disk"
scsi2:0.fileName = "H:\sharedisk\DATA.vmdk"
scsi2:0.mode = "independent-persistent"
scsi2:0.deviceType = "disk"
scsi2:1.fileName = "H:\sharedisk\FLASH.vmdk"
scsi2:1.mode = "independent-persistent"
scsi2:1.deviceType = "disk"
floppy0.present = "FALSE"
scsi1:1.redo = ""
scsi1:0.redo = ""
scsi2:0.redo = ""
scsi2:1.redo = ""
scsi1.pciSlotNumber = "38"
scsi2.pciSlotNumber = "39"
disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.DataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
disk.EnableUUID = "TRUE"
usb:0.present = "TRUE"
usb:0.deviceType = "hid"
usb:0.port = "0"
usb:0.parent = "-1"
Shut down node1 and modify its .vmx file in the same way.
10. Configure the ASM disks
The shared disks were attached to both RAC nodes in step 9. They now need to be partitioned and then turned into ASM disks with asmlib, so that they can later hold the OCR, the voting disk, and the database.
Note: the partitioning only needs to be done on one node; node1 is used here. The ASM disks are created with asmlib rather than raw devices — from 11gR2 onwards the OUI GUI no longer supports raw devices.
① Inspect the shared disks
As root, run fdisk -l on both node1 and node2 to check the existing partition layout:
Both nodes show the same layout: /dev/sda holds the operating system, while /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde have no partitions yet.
② Partition the shared disks
As root on node1, partition /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde:
[root@node1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-500, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-500, default 500):
Using default value 500

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@node1 ~]#
Notes:
fdisk /dev/sdb starts interactive partitioning of /dev/sdb; the commands entered mean:
n — create a new partition;
p — make it a primary partition;
1 — give it partition number 1;
accept the default first and last cylinders, i.e. 1 and 500;
w — write the new partition table to disk.
③ Repeat step ② as root on node1 for the remaining three disks (a scripted alternative is sketched below).
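To script those three runs instead of answering the prompts interactively, the keystrokes can be fed to fdisk through a here-document (a sketch assuming the same single-primary-partition layout; double-check the device names before running it):
# Run as root on node1 only: one primary partition spanning each whole disk.
for dev in /dev/sdc /dev/sdd /dev/sde; do
fdisk "$dev" <<EOF
n
p
1


w
EOF
done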
④ After partitioning, node1 and node2 both show the following:
node1:

[root@node1 ~]# fdisk -l
...earlier lines omitted...
Disk /dev/sda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000229dc

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        3407    27360256   83  Linux
/dev/sda2            3407        3917     4096000   82  Linux swap / Solaris

Disk /dev/sdb: 536 MB, 536870912 bytes
64 heads, 32 sectors/track, 512 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xab9b0998

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         500      511984   83  Linux

Disk /dev/sdc: 536 MB, 536870912 bytes
64 heads, 32 sectors/track, 512 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xcca413ef

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         512      524272   83  Linux

Disk /dev/sdd: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x886442a7

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1         391     3140676   83  Linux

Disk /dev/sde: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xa9674b78

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1         391     3140676   83  Linux
[root@node1 ~]#
node2:

[root@node2 ~]# fdisk -l
...earlier lines omitted...
Disk /dev/sda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c2574

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        3407    27360256   83  Linux
/dev/sda2            3407        3917     4096000   82  Linux swap / Solaris

Disk /dev/sdb: 536 MB, 536870912 bytes
64 heads, 32 sectors/track, 512 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xab9b0998

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         500      511984   83  Linux

Disk /dev/sdc: 536 MB, 536870912 bytes
64 heads, 32 sectors/track, 512 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xcca413ef

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         512      524272   83  Linux

Disk /dev/sdd: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x886442a7

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1         391     3140676   83  Linux

Disk /dev/sde: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xa9674b78

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1         391     3140676   83  Linux
[root@node2 ~]#
⑤ Install the ASM RPM packages on node1 and node2
node1:
--Check whether the ASM RPM packages are already installed
[root@node2 Packages]# rpm -qa|grep asm
[root@node2 Packages]#
--Install the ASM RPM packages
[root@node2 Packages]# rpm -ivh oracleasm-support-2.1.11-2.el6.x86_64.rpm
warning: oracleasm-support-2.1.11-2.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [100%]
[root@node2 Packages]# rpm -ivh kmod-oracleasm-2.0.8-15.el6_9.x86_64.rpm
warning: kmod-oracleasm-2.0.8-15.el6_9.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 192a7d7d: NOKEY
Preparing...                ########################################### [100%]
   1:kmod-oracleasm         ########################################### [100%]
gzip: /boot/initramfs-2.6.32-754.el6.x86_64.img: not in gzip format
gzip: /boot/initramfs-2.6.32-754.el6.x86_64.tmp: not in gzip format
[root@node1 kmod-oracleasm-package]# rpm -ivh oracleasmlib-2.0.12-1.el7.x86_64.rpm
warning: oracleasmlib-2.0.12-1.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                ########################################### [100%]
   1:oracleasmlib           ########################################### [100%]
[root@node1 kmod-oracleasm-package]# rpm -qa|grep asm
kmod-oracleasm-2.0.8-15.el6_9.x86_64
oracleasmlib-2.0.12-1.el7.x86_64
oracleasm-support-2.1.11-2.el6.x86_64
Install the same versions of the packages on node2 as well.
Note: install the three ASM RPM packages in order — oracleasm-support-2.1.11-2.el6.x86_64.rpm first, then kmod-oracleasm-2.0.8-15.el6_9.x86_64.rpm, and finally oracleasmlib-2.0.12-1.el7.x86_64.rpm.
The kmod-oracleasm package must match the operating system kernel; check the kernel release with uname -r (and the architecture with uname -i).
After installation, run rpm -qa|grep asm to confirm the packages are installed.
⑥ Configure the ASM driver service
The configuration can be done either with /usr/sbin/oracleasm or with /etc/init.d/oracleasm; the latter is the Oracle 10g style command, kept only for backward compatibility, so Oracle recommends the former.
Check the status of the ASM service:
[root@node1 kmod-oracleasm-package]# /usr/sbin/oracleasm status
Checking if ASM is loaded: no
Checking if /dev/oracleasm is mounted: no
View the help for the ASM configuration command:
[root@node1 kmod-oracleasm-package]# /usr/sbin/oracleasm -h
Usage: oracleasm [--exec-path=<exec_path>] <command> [ <args> ]
       oracleasm --exec-path
       oracleasm -h
       oracleasm -V

The basic oracleasm commands are:
    configure        Configure the Oracle Linux ASMLib driver
    init             Load and initialize the ASMLib driver
    exit             Stop the ASMLib driver
    scandisks        Scan the system for Oracle ASMLib disks
    status           Display the status of the Oracle ASMLib driver
    listdisks        List known Oracle ASMLib disks
    listiids         List the iid files
    deleteiids       Delete the unused iid files
    querydisk        Determine if a disk belongs to Oracle ASMlib
    createdisk       Allocate a device for Oracle ASMLib use
    deletedisk       Return a device to the operating system
    renamedisk       Change the label of an Oracle ASMlib disk
    update-driver    Download the latest ASMLib driver
Configure the ASM service:
[root@node1 kmod-oracleasm-package]# /usr/sbin/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@node1 kmod-oracleasm-package]# /usr/sbin/oracleasm status
Checking if ASM is loaded: no
Checking if /dev/oracleasm is mounted: no
[root@node1 kmod-oracleasm-package]# /usr/sbin/oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm
[root@node1 kmod-oracleasm-package]# /usr/sbin/oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=asmadmin
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_SCAN_DIRECTORIES=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"
Note: /usr/sbin/oracleasm configure -i sets the driver interface owner to grid and the group to asmadmin, starts the ASM library driver, and configures it to start automatically with the operating system.
After configuring, remember to run /usr/sbin/oracleasm init to load the oracleasm kernel module.
Repeat step ⑥ on node2 to complete the ASM service configuration there.
⑦ Create the ASM disks
The point of installing the ASM RPM packages and configuring the ASM driver service is to create ASM disks, which will provide storage for the grid software and the Oracle database.
Note: the ASM disks only need to be created on one node. Once they exist, running /usr/sbin/oracleasm scandisks on the other nodes makes them visible there.
1. First create the ASM disks on node2 with /usr/sbin/oracleasm createdisk:
[root@node2 kmod-oracleasm-package]# /usr/sbin/oracleasm listdisks
[root@node2 kmod-oracleasm-package]# /usr/sbin/oracleasm createdisk
Usage: oracleasm-createdisk [-l <manager>] [-v] <label> <device>
[root@node2 kmod-oracleasm-package]# /usr/sbin/oracleasm createdisk VOL1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@node2 kmod-oracleasm-package]# /usr/sbin/oracleasm createdisk VOL1 /dev/sdc1
Disk "VOL1" already exists
[root@node2 kmod-oracleasm-package]# /usr/sbin/oracleasm createdisk VOL2 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@node2 kmod-oracleasm-package]# /usr/sbin/oracleasm createdisk VOL3 /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@node2 kmod-oracleasm-package]# /usr/sbin/oracleasm createdisk VOL4 /dev/sde1
Writing disk header: done
Instantiating disk: done
[root@node2 kmod-oracleasm-package]# /usr/sbin/oracleasm listdisks
VOL1
VOL2
VOL3
VOL4
2. The newly created ASM disks are not yet visible on node1. After running /usr/sbin/oracleasm scandisks there, the disks created on node2 appear:
[root@node1 kmod-oracleasm-package]# /usr/sbin/oracleasm listdisks
[root@node1 kmod-oracleasm-package]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "VOL1"
Instantiating disk "VOL2"
Instantiating disk "VOL3"
Instantiating disk "VOL4"
[root@node1 kmod-oracleasm-package]# /usr/sbin/oracleasm listdisks
VOL1
VOL2
VOL3
VOL4
3. How do you determine which physical disk each ASM disk corresponds to?
node1:
[root@node1 kmod-oracleasm-package]# /usr/sbin/oracleasm querydisk /dev/sd*
Device "/dev/sda" is not marked as an ASM disk
Device "/dev/sda1" is not marked as an ASM disk
Device "/dev/sda2" is not marked as an ASM disk
Device "/dev/sdb" is not marked as an ASM disk
Device "/dev/sdb1" is marked an ASM disk with the label "VOL1"
Device "/dev/sdc" is not marked as an ASM disk
Device "/dev/sdc1" is marked an ASM disk with the label "VOL2"
Device "/dev/sdd" is not marked as an ASM disk
Device "/dev/sdd1" is marked an ASM disk with the label "VOL3"
Device "/dev/sde" is not marked as an ASM disk
Device "/dev/sde1" is marked an ASM disk with the label "VOL4"
node2:
[root@node2 kmod-oracleasm-package]# /usr/sbin/oracleasm querydisk /dev/sd*
Device "/dev/sda" is not marked as an ASM disk
Device "/dev/sda1" is not marked as an ASM disk
Device "/dev/sda2" is not marked as an ASM disk
Device "/dev/sdb" is not marked as an ASM disk
Device "/dev/sdb1" is marked an ASM disk with the label "VOL1"
Device "/dev/sdc" is not marked as an ASM disk
Device "/dev/sdc1" is marked an ASM disk with the label "VOL2"
Device "/dev/sdd" is not marked as an ASM disk
Device "/dev/sdd1" is marked an ASM disk with the label "VOL3"
Device "/dev/sde" is not marked as an ASM disk
Device "/dev/sde1" is marked an ASM disk with the label "VOL4"
11. Unpack the installation files
--Installation files to download:
p13390677_112040_Linux-x86-64_1of7_database.zip
p13390677_112040_Linux-x86-64_2of7_database.zip
p13390677_112040_Linux-x86-64_3of7_grid.zip
--Unzip the files:
[root@node1 ~]# unzip p13390677_112040_Linux-x86-64_1of7_database.zip
[root@node1 ~]# unzip p13390677_112040_Linux-x86-64_2of7_database.zip
[root@node1 ~]# unzip p13390677_112040_Linux-x86-64_3of7_grid.zip
[root@node1 ~]# ll
drwxr-xr-x 8 root root 4096 8月  21 2009 database
drwxr-xr-x 8 root root 4096 8月  21 2009 grid
--Check the size of the unpacked directories:
[root@node1 ~]# du -sh database
2.4G    database
[root@node1 ~]# du -sh grid/
1.1G    grid/
--To make the installation easier later, move them to the oracle and grid users' home directories:
[root@node1 ~]# mv database/ /home/oracle/
[root@node1 ~]# mv grid/ /home/grid/
12. Pre-installation checks
① Use the CVU (Cluster Verification Utility) to check the environment before installing CRS
[root@node1 home]# su - grid
node1-> pwd
/home/grid
node1-> ls
grid
node1-> cd grid
node1-> ll
total 40
drwxr-xr-x  9 root root 4096 Aug 17  2009 doc
drwxr-xr-x  4 root root 4096 Aug 15  2009 install
drwxrwxr-x  2 root root 4096 Aug 15  2009 response
drwxrwxr-x  2 root root 4096 Aug 15  2009 rpm
-rwxrwxr-x  1 root root 3795 Jan 29  2009 runcluvfy.sh
-rwxr-xr-x  1 root root 3227 Aug 15  2009 runInstaller
drwxrwxr-x  2 root root 4096 Aug 15  2009 sshsetup
drwxr-xr-x 14 root root 4096 Aug 15  2009 stage
-rw-r--r--  1 root root 4228 Aug 18  2009 welcome.html
node1-> ./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose
......
② Fix the environment according to the check results
Check: TCP connectivity of subnet "192.168.88.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  node1:192.168.88.191            node2:192.168.88.192            failed
ERROR:
PRVF-7617 : Node connectivity between "node1 : 192.168.88.191" and "node2 : 192.168.88.192" failed
Result: TCP connectivity check failed for subnet "192.168.88.0"
Fix: stop the firewall on node1 and node2.

Check: Swap space
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node2         2GB (2097148.0KB)         2.496GB (2617194.0KB)     failed
  node1         2GB (2097148.0KB)         2.496GB (2617194.0KB)     failed
Result: Swap space check failed
Fix: add swap space (my Linux knowledge is limited and the explanations I found online never fully made sense to me, but the steps below work).
Step 1: create an empty 1 GB file with dd
[root@node2 shell]# dd if=/dev/zero of=/tmp/swap bs=1MB count=1024
記錄了1024+0 的讀入
記錄了1024+0 的寫出
1024000000字節(1.0 GB)已復制,24.026 秒,42.6 MB/秒
[root@node2 shell]# du -sh /tmp/swap
977M    /tmp/swap
Step 2: format the file as a swap area
[root@node2 shell]# mkswap -L swap /tmp/swap
正在設置交換空間版本 1,大小 = 999996 KiB
LABEL=swap, UUID=4ba4d45c-76d0-4c57-acaf-0ba35967f39a
Step 3: enable the swap file
[root@node2 shell]# swapon /tmp/swap
swapon: /tmp/swap:不安全的權限 0644,建議使用 0600。
[root@node2 shell]# free -h
              total        used        free      shared  buff/cache   available
Mem:           1.7G        370M         70M         17M        1.2G        1.1G
Swap:          3.0G          0B        3.0G
Step 4: edit /etc/fstab so the swap file is activated at boot; the swap space is now in place.
Add the following line at the end of the file:
/tmp/swap swap swap defaults 0 0
If the swap file is no longer needed, it can be removed with:
[root@node2 shell]# swapoff /tmp/swap
[root@node2 shell]# free -h
              total        used        free      shared  buff/cache   available
Mem:           1.7G        370M         70M         17M        1.2G        1.1G
Swap:          2.0G          0B        2.0G

Check: Membership of user "grid" in group "dba"
  Node Name         User Exists   Group Exists  User in Group  Status
  ----------------  ------------  ------------  -------------  ----------------
  node2             yes           yes           no             failed
  node1             yes           yes           no             failed
Result: Membership check for user "grid" in group "dba" failed
Fix: run the generated fixup script as root on node1 and node2:
[root@node1 ~]# sh /tmp/CVU_11.2.0.4.0_grid/runfixup.sh
[root@node2 ~]# sh /tmp/CVU_11.2.0.4.0_grid/runfixup.sh

Check: Package existence for "elfutils-libelf-devel"
  Node Name     Available                 Required                    Status
  ------------  ------------------------  --------------------------  ----------
  node2         missing                   elfutils-libelf-devel-0.97  failed
  node1         missing                   elfutils-libelf-devel-0.97  failed
Result: Package existence check failed for "elfutils-libelf-devel"
Fix: install the package.

Check: Package existence for "libaio-devel(x86_64)"
  Node Name     Available                 Required                      Status
  ------------  ------------------------  ----------------------------  ----------
  node2         missing                   libaio-devel(x86_64)-0.3.105  failed
  node1         missing                   libaio-devel(x86_64)-0.3.105  failed
Result: Package existence check failed for "libaio-devel(x86_64)"
Fix: install the package.

Check: Package existence for "pdksh"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node2         missing                   pdksh-5.2.14              failed
  node1         missing                   pdksh-5.2.14              failed
Result: Package existence check failed for "pdksh"
Fix: install the package on both nodes:
[root@node1 oracle_soft]# rpm -ivh pdksh-5.2.14-37.el5.x86_64.rpm
[root@node2 oracle_soft]# rpm -ivh pdksh-5.2.14-37.el5.x86_64.rpm

Checking to make sure user "grid" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  node2         passed                    does not exist
  node1         passed                    does not exist
Result: User "grid" is not part of "root" group. Check passed
Fix: run the generated fixup script as root on node1 and node2:
[root@node1 ~]# sh /tmp/CVU_11.2.0.4.0_grid/runfixup.sh
[root@node2 ~]# sh /tmp/CVU_11.2.0.4.0_grid/runfixup.sh

Checking to make sure user "grid" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  node2         passed                    does not exist
  node1         passed                    does not exist
Result: User "grid" is not part of "root" group. Check passed
Fix: this check passed, so it can be ignored.

Check: TCP connectivity of subnet "192.168.122.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  node1:192.168.122.1             node2:192.168.122.1             failed
ERROR:
PRVF-7617 : Node connectivity between "node1 : 192.168.122.1" and "node2 : 192.168.122.1" failed
Result: TCP connectivity check failed for subnet "192.168.122.0"
Fix: remove the virbr0 bridge:
[root@node1 ~]# ifconfig virbr0 down
[root@node1 ~]# brctl delbr virbr0
18. Install the Grid Infrastructure
18.1. Log in to the graphical desktop as the grid user and run /home/grid/grid/runInstaller to open the OUI installation GUI.
If the following problem appears:
node1-> ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB. Actual 17549 MB Passed
Checking swap space: must be greater than 150 MB. Actual 3024 MB Passed
Checking monitor: must be configured to display at least 256 colors
>>> Could not execute auto check for display colors using command /usr/bin/xdpyinfo. Check if the DISPLAY variable is set. Failed <<<<
Some requirement checks failed. You must fulfill these requirements before
continuing with the installation,
Continue? (y/n) [n] n
User Selected: No
Workaround:
node1-> xhost +
No protocol specified
xhost:  unable to open display "192.168.88.191:0.0"
node1-> exit
logout
[root@node1 ~]# xhost +
access control disabled, clients can connect from any host
[root@node1 ~]# su - grid
Last login: Wed Feb 24 17:42:56 CST 2021 on pts/1
node1-> export DISPLAY=192.168.88.191:0.0
node1-> cd grid/
node1-> ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 17132 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 3024 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
node1->
18.2. In the OUI, choose the third option, Skip software updates, Next:
18.3. Choose Install and Configure Oracle Grid Infrastructure for a Cluster, Next:
18.4. Choose Advanced Installation, Next:
18.5. Keep the default language, English, Next:
18.6. Untick Configure GNS, enter Cluster Name: scan-cluster and SCAN Name: scan-cluster.localdomain, Next:
18.7. Click Add to add the second node, Next:
18.8. Confirm the network interfaces, Next:
18.9. Choose ASM as the storage, Next:
18.10. Enter the ASM disk group name, here GRIDDG, choose External redundancy, keep the default 1M AU size, and select disks VOL1 and VOL2, Next:
At this point the ASM disks may not be selectable, as in the screenshot below:
That is usually because the discovery path is wrong. Click "Change Discovery Path"; the dialog shows the default path /dev/raw/*.
Checking the ASM driver status with the command below suggested the disks live under /dev/oracleasm, so I changed the discovery path to /dev/oracleasm/disks/*. The ASM disks configured earlier then became selectable, although they were listed with their full paths; I was not sure whether that mattered and simply continued.
[root@node2 ~]# /usr/sbin/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
18.11. Choose to use the same password for the ASM SYS and ASMSNMP users and enter it, Next:
18.12. Choose not to use IPMI, Next:
18.13. Specify the OS groups for ASM, Next:
18.14. Choose the installation paths for the GRID software; ORACLE_BASE and ORACLE_HOME are the ones configured earlier. Note that the grid ORACLE_HOME must not be a subdirectory of its ORACLE_BASE.
18.15. Accept the default Inventory location, Next:
18.16. The prerequisite check raises a warning: the cvuqdisk-1.0.9-1 package is missing on all nodes.
You can either ignore it and continue, or install the RPM from the rpm directory of the grid installation media.
After installing cvuqdisk-1.0.9-1 on node1 and node2 and re-running the check, the warning is gone.
In my case, even after installing cvuqdisk and re-running the check, a "Device Checks for ASM" error remained; I ignored it.
18.17. Review the pre-installation summary and click Install to start the installation:
18.18. When prompted, run the scripts as root on both nodes:
Run /u01/app/oraInventory/orainstRoot.sh:
Run /u01/app/11.2.0/grid/root.sh:
Output of root.sh on node1:

[root@node1 ~]# sh /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2021-04-07 15:34:45.885:
[client(32854)]CRS-2101:The OLR was formatted using version 3.
CRS-2672: Attempting to start 'ora.mdnsd' on 'node1'
CRS-2676: Start of 'ora.mdnsd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'node1'
CRS-2676: Start of 'ora.gpnpd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node1'
CRS-2672: Attempting to start 'ora.gipcd' on 'node1'
CRS-2676: Start of 'ora.cssdmonitor' on 'node1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node1'
CRS-2672: Attempting to start 'ora.diskmon' on 'node1'
CRS-2676: Start of 'ora.diskmon' on 'node1' succeeded
CRS-2676: Start of 'ora.cssd' on 'node1' succeeded

ASM created and started successfully.

Disk Group GRIDDG created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 6f99fb7033364f48bf7bf40de939dbc4.
Successfully replaced voting disk group with +GRIDDG.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name                   Disk group
--  -----    -----------------                ---------                   ---------
 1. ONLINE   6f99fb7033364f48bf7bf40de939dbc4 (/dev/oracleasm/disks/VOL1) [GRIDDG]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'node1'
CRS-2676: Start of 'ora.asm' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.GRIDDG.dg' on 'node1'
CRS-2676: Start of 'ora.GRIDDG.dg' on 'node1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Output of root.sh on node2:

[root@node2 ~]# sh /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node node1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Workarounds for problems that may come up while running root.sh:
Problem 1:
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2021-04-07 15:34:45.885:
[client(32854)]CRS-2101:The OLR was formatted using version 3.

Cause:
This is an Oracle bug, fixed in 11.2.0.3. While root.sh runs, it creates a file named npohasd under /tmp/.oracle; only root has permissions on it, so the ohasd process cannot be started.

Workaround:
Option 1:
When the script reports the error and hangs there, open another session and, as root, run
/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
(note: the directory containing npohasd differs between OS versions; adjust the path accordingly)
Option 2:
[root@rac1 install]# cd /var/tmp/.oracle/
[root@rac1 .oracle]# ls
npohasd
[root@rac1 .oracle]# rm npohasd
rm: remove fifo 'npohasd'? y
[root@rac1 .oracle]# ls
[root@rac1 .oracle]# touch npohasd
[root@rac1 .oracle]# chmod 755 npohasd
Problem 2:
User ignored Prerequisites during installation
Installing Trace File Analyzer
Failed to create keys in the OLR, rc = 127, Message:
  /u01/app/11.2.0/grid/bin/clscfg.bin: error while loading shared libraries: libcap.so.1: cannot open shared object file: No such file or directory

Failed to create keys in the OLR at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 7660.
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed

Workaround: create a libcap.so.1 symlink pointing at the installed libcap library:
[root@node2 ~]# cd /lib64/
[root@node2 lib64]# ls -lrt libcap*
-rwxr-xr-x. 1 root root 23968 11月 20 2015 libcap-ng.so.0.0.0
-rwxr-xr-x. 1 root root 20032 8月   3 2017 libcap.so.2.22
lrwxrwxrwx. 1 root root    14 4月   8 15:38 libcap.so.2 -> libcap.so.2.22
lrwxrwxrwx. 1 root root    18 4月   8 15:38 libcap-ng.so.0 -> libcap-ng.so.0.0.0
[root@node2 lib64]# ln -s libcap.so.2.22 libcap.so.1
[root@node2 lib64]# ls -lrt libcap*
-rwxr-xr-x. 1 root root 23968 11月 20 2015 libcap-ng.so.0.0.0
-rwxr-xr-x. 1 root root 20032 8月   3 2017 libcap.so.2.22
lrwxrwxrwx. 1 root root    14 4月   8 15:38 libcap.so.2 -> libcap.so.2.22
lrwxrwxrwx. 1 root root    18 4月   8 15:38 libcap-ng.so.0 -> libcap-ng.so.0.0.0
lrwxrwxrwx  1 root root    14 4月  12 16:07 libcap.so.1 -> libcap.so.2.22
18.19. At this point the clusterware services are up, and the ASM instances are started on both nodes.
[root@node1 ~]# su - grid
Last login: Wed Apr 7 15:50:00 CST 2021 on pts/2
node1-> crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.GRIDDG.dg  ora....up.type ONLINE    ONLINE    node1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    node1
ora.asm        ora.asm.type   ONLINE    ONLINE    node1
ora.cvu        ora.cvu.type   ONLINE    ONLINE    node1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    node1
ora....SM1.asm application    ONLINE    ONLINE    node1
ora.node1.gsd  application    OFFLINE   OFFLINE
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  ora....t1.type ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora.node2.gsd  application    OFFLINE   OFFLINE
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  ora....t1.type ONLINE    ONLINE    node2
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    node1
ora.ons        ora.ons.type   ONLINE    ONLINE    node1
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    node1
18.20. After the scripts have been run, click OK, then Next to continue.
18.21. Finally, click Close to complete the installation of the GRID software on both nodes.
The GRID clusterware is now installed!
18.22. After a reboot, the nameserver in /etc/resolv.conf gets overwritten
In this installation the DNS server's IP is 192.168.88.11. Every time I set nameserver 192.168.88.11 in /etc/resolv.conf on node1 and node2 and rebooted the virtual machines, the nameserver changed back to 8.8.8.8.
Searching online turned up two solutions:
Method 1: set the DNS to 192.168.88.11 in every ifcfg-ens* file
vi /etc/sysconfig/network-scripts/ifcfg-ens33
vi /etc/sysconfig/network-scripts/ifcfg-ens34
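For reference, the relevant lines in each ifcfg file would look roughly like this (a sketch; PEERDNS=no additionally stops the interface from overwriting resolv.conf with its own DNS servers — verify the option names against your own ifcfg files):
# In /etc/sysconfig/network-scripts/ifcfg-ens33 (and likewise ifcfg-ens34)
DNS1=192.168.88.11
PEERDNS=no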
Method 2: simply run the following command (I have not tried it):
chattr +i /etc/resolv.conf
18.23. After a reboot, crs_stat -t fails with: CRS-0184: Cannot communicate with the CRS daemon
This problem took a long time to track down and was maddening. It finally turned out that the ohasd service was probably not being started. The fix:
1. Create the ohas.service unit file and set its permissions
[root@node2 ~]# touch /usr/lib/systemd/system/ohas.service
[root@node2 ~]# chmod 777 /usr/lib/systemd/system/ohas.service
2. Add the ohasd startup information to the ohas.service unit file
vi /usr/lib/systemd/system/ohas.service
Add the following content:
[Unit]
Description=Oracle High Availability Services
After=syslog.target

[Service]
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1 Type=simple
Restart=always

[Install]
WantedBy=multi-user.target
3. Reload systemd and start the service
Reload the systemd daemon:        systemctl daemon-reload
Enable the service at boot:       systemctl enable ohas.service
Start the ohas service manually:  systemctl start ohas.service
Check the ohas service status:    systemctl status ohas.service
The actual run looked like this:
[root@node2 ~]# touch /usr/lib/systemd/system/ohas.service
[root@node2 ~]# chmod 777 /usr/lib/systemd/system/ohas.service
[root@node2 ~]# vi /usr/lib/systemd/system/ohas.service
[root@node2 ~]# systemctl daemon-reload
[root@node2 ~]# systemctl enable ohas.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ohas.service to /usr/lib/systemd/system/ohas.service.
[root@node2 ~]# systemctl start ohas.service
[root@node2 ~]# systemctl status ohas.service
● ohas.service - Oracle High Availability Services
   Loaded: loaded (/usr/lib/systemd/system/ohas.service; enabled; vendor preset: disabled)
   Active: active (running) since 二 2021-04-13 11:15:29 CST; 10s ago
 Main PID: 2724 (init.ohasd)
    Tasks: 1
   CGroup: /system.slice/ohas.service
           └─2724 /bin/sh /etc/init.d/init.ohasd run >/dev/null 2>&1 Type=simple

4月 13 11:15:29 node2.localdomain systemd[1]: Started Oracle High Availability Services.
4月 13 11:15:29 node2.localdomain systemd[1]: Starting Oracle High Availability Services...
19. Install the Oracle database software
19.1. Log in to the graphical desktop as the oracle user and run /home/oracle/database/runInstaller to open the OUI installation GUI:
[root@node1 ~]# su - oracle
Last login: Wed Apr 7 11:55:13 CST 2021 from node1.localdomain on pts/1
node1-> pwd
/home/oracle
node1-> cd database/
node1-> export DISPLAY=192.168.88.191:0.0
node1-> ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 7448 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 3005 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2021-04-07_04-24-10PM. Please wait ...node1->
Next; keep the default language, English, Next:
19.2. Choose the installation paths for the Oracle software; ORACLE_BASE and ORACLE_HOME are the ones configured earlier.
19.3. Choose the OS groups for the oracle user, Next:
19.4. Run the pre-installation checks, Next:
I ignored this warning.
19.5. Problems encountered during the installation
Problem 1:
Workaround:
Edit $ORACLE_HOME/sysman/lib/ins_emagent.mk and change $(MK_EMAGENT_NMECTL) to $(MK_EMAGENT_NMECTL) -lnnz11.
Back up the original file before editing:
[root@node1 ~]# su - oracle
Last login: Wed Apr 7 16:52:33 CST 2021 on pts/1
node1-> cd $ORACLE_HOME/sysman/lib
node1-> cp ins_emagent.mk ins_emagent.mk.bak
node1-> vi ins_emagent.mk
In vi, type /NMECTL in command mode to jump straight to the line that needs changing,
then append the parameter -lnnz11 (the first character is the letter l, the last two are the digit 1).
Save the file, return to the installer, and click Retry.
19.6. Run root.sh on node1 and node2
node1:
[root@node1 ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
node2:
[root@node2 ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
At this point the Oracle software is installed on both RAC nodes!
20. Create the ASM disk groups with ASMCA
Create the ASM disk groups as the grid user; they provide the storage for the database created in the next step.
① Log in to the graphical desktop as the grid user and run asmca to create the disk groups:
[root@node1 ~]# xhost +
access control disabled, clients can connect from any host
[root@node1 ~]# su - grid
Last login: Wed Apr 7 17:21:54 CST 2021 on pts/2
node1-> id grid
uid=1100(grid) gid=2000(oinstall) groups=2000(oinstall),2200(asmadmin),2201(asmdba),2202(asmoper),2300(dba)
node1-> env|grep ORA
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
ORACLE_TERM=xterm
ORACLE_HOME=/u01/app/11.2.0/grid
node1-> asmca
DISPLAY not set.    Set DISPLAY environment variable, then re-run.
node1-> export DISPLAY=192.168.88.191:0.0
node1-> asmca
② In the ASMCA screen, click Create to create a new disk group:
③ Enter the disk group name DATA, choose External redundancy, select disk ORCL:VOL3, and click OK:
④ The DATA disk group is created; click OK:
⑤ Create another disk group named FLASH, again with External redundancy, using disk VOL4:
⑥ After the DATA and FLASH disk groups are created the screen looks like this; click Exit to leave ASMCA:
The screenshot below shows the DATA disk group mounted by only one node; clicking "Mount All" mounts it on both nodes.
At this point the DATA and FLASH disk groups have been created with ASMCA and, together with the GRIDDG group created earlier, all three disk groups are mounted by both RAC nodes.
21. Create the RAC database with DBCA
① Log in to the graphical desktop as the oracle user and run dbca; on the first DBCA screen choose the first option to create a RAC database:
[root@node1 ~]# xhost +
access control disabled, clients can connect from any host
[root@node1 ~]# su - oracle
Last login: Wed Apr 7 16:53:12 CST 2021 on pts/1
node1-> id oracle
uid=1101(oracle) gid=2000(oinstall) groups=2000(oinstall),2200(asmadmin),2201(asmdba),2300(dba),2301(oper)
node1-> env|grep ORA
ORACLE_UNQNAME=devdb
ORACLE_SID=devdb1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOSTNAME=node1.localdomain
ORACLE_TERM=xterm
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
node1-> export DISPLAY=192.168.88.191:0.0
node1-> dbca
② Choose Create a Database, Next:
③ Choose the General Purpose template, Next:
④ Choose the Admin-Managed configuration type, enter the database name devdb, select both nodes, Next:
⑤ Keep the defaults: configure OEM and enable the automatic maintenance tasks, Next:
⑥ Use the same password for all database accounts, Next:
⑦ Use ASM for database storage with OMF, choosing the DATA disk group created earlier for the data area, Next:
⑧ Specify the flash recovery area, using the FLASH disk group created earlier, Next:
⑨ Choose to create the Sample Schemas, Next:
⑩ Choose the database character set AL32UTF8, Next:
⑪ Keep the default database storage settings, Next:
⑫ Click Finish to start creating the database, Next:
⑬ The database creation completes.
At this point the RAC database has been created!