(1) Environment
ESXi 6.0, VMware vSphere Client 6.0, Linux: CentOS 7.6 (minimal installation), Oracle 11g 11.2.0.4. Download links are at the end of the post.
(2) Building the Virtualization Environment
Resource download:
Link: https://pan.baidu.com/s/1AjXAPi2OBCm3pA88EtDi0Q
Extraction code: 1111
Some machines need third-party drivers that are not available from the official image; in that case ESXi-Customizer can be used to slipstream the third-party drivers into the installer.
The tool can be downloaded here:
Link: https://pan.baidu.com/s/1Ta5ZZK6KtSe05Q1LyPtU6A
Extraction code: 1111
(3) Configuring Shared Storage
Enable SSH on the ESXi host. I configured the SSH service to start together with the host; if it is not running, simply start it.
Open an SSH client (I use Xshell here). I forgot to take screenshots while creating the shared disks, so the screenshot below shows the state after they were created.
Create the shared disks with the following commands (four disks in total):
vmkfstools -c 10240m -a lsilogic -d eagerzeroedthick mysharedisk_01.vmdk
vmkfstools -c 10240m -a lsilogic -d eagerzeroedthick mysharedisk_02.vmdk
vmkfstools -c 10240m -a lsilogic -d eagerzeroedthick mysharedisk_03.vmdk
vmkfstools -c 10240m -a lsilogic -d eagerzeroedthick mysharedisk_04.vmdk
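While still connected to the ESXi host over SSH, you can confirm that the disk files were created with a quick listing. The datastore path below is an assumption; substitute the datastore in which you created the disks:
cd /vmfs/volumes/datastore1    # "datastore1" is a placeholder for your datastore name
ls -lh mysharedisk_0*.vmdk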
After the disks are created, open VMware vSphere Client and browse the datastore; it looks like the screenshot below.
The two Linux nodes use the same virtual machine configuration; the configuration of rac1 is listed here (rac2 is identical).
On both virtual machines, set SCSI controller 1 to Virtual SCSI bus sharing.
Tick "Synchronize guest time with host" and reboot both virtual machines.
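For reference, these GUI settings roughly correspond to .vmx entries like the ones below. This is only a sketch and assumes the shared disks hang off SCSI controller 1; the exact device numbers depend on how the disks were attached, so rely on the vSphere Client settings described above:
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.deviceType = "scsi-hardDisk"
scsi1:0.fileName = "mysharedisk_01.vmdk"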
(4) Configuring the yum Repository
Install Linux with the minimal installation option and configure two NICs per virtual machine. Mount the installation ISO and point yum at the local media as its repository; the reference commands are:
mkdir /cdrom
mount /dev/cdrom /cdrom
cd /etc/yum.repos.d
mkdir bak
mv *.* bak
Edit the yum configuration file CentOS-Base.repo:
vi CentOS-Base.repo and add the following:
[base]
name=CentOS
baseurl=file:///cdrom
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
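After saving the file, verify that the local repository works (this assumes the ISO is still mounted on /cdrom):
yum clean all
yum repolist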
If the servers have Internet access, the official Oracle repository can be used instead; download it with:
wget http://public-yum.oracle.com/public-yum-ol7.repo
Install net-tools (only needed for the ifconfig command):
yum install net-tools* -y
Install unzip, which is needed to extract the Oracle installation packages:
yum install unzip -y
(5) Pre-installation Configuration
1. Pre-installation Downloads
Download the Oracle and Grid installation packages and prepare a Red Hat or Oracle Linux ISO (the compat-libstdc++ package needed by Oracle and Grid is missing from CentOS 7 but is present on the Red Hat media).
The Oracle 11.2.0.4 installation packages and patches used in this article:
Baidu Cloud link:
Link: https://pan.baidu.com/s/1WvdpiTs9m3es5vyTBzHb4w
Extraction code: 1111
Operating system download links:
CentOS
http://mirrors.sohu.com/centos/7.6.1810/isos/x86_64/
Red Hat
Link: https://pan.baidu.com/s/16-TgKVk_nAxLeakRsOPKkg
Extraction code: 1111
Oracle Linux
Download link:
https://yum.oracle.com/oracle-linux-isos.html
The configuration and planning of the environment before creating ASM is as follows.
2. Network Configuration
rac1
NIC 1, name ens192, configured as follows:
public ip: 192.168.2.101
NIC 2, name ens224, configured as follows:
private ip: 10.10.10.11
The NIC configuration files are shown below (note that these sample files use the interface names ens160/ens192; substitute the actual NIC names on your system, e.g. ens192/ens224):
ens160
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens160
UUID=fe2552fd-fd89-4cd0-9b97-55d52c1287f0
DEVICE=ens160
ONBOOT=yes
IPADDR=192.168.2.101
NETMASK=255.255.255.0
ens192
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens192
UUID=a2834e07-69f5-49cf-b480-c58cefe28e6e
DEVICE=ens192
ONBOOT=yes
IPADDR=10.10.10.11
NETMASK=255.255.255.0
rac2
NIC 1, name ens192, configured as follows:
public ip: 192.168.2.102
NIC 2, name ens224, configured as follows:
private ip: 10.10.10.12
Configuration files:
ens160
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens160
UUID=fe2552fd-fd89-4cd0-9b97-55d52c1287f0
DEVICE=ens160
ONBOOT=yes
IPADDR=192.168.2.102
NETMASK=255.255.255.0
ens192
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens192
UUID=a2834e07-69f5-49cf-b480-c58cefe28e6e
DEVICE=ens192
ONBOOT=yes
IPADDR=10.10.10.12
NETMASK=255.255.255.0
Both nodes must be configured.
Edit the hosts file and add the node IPs as well as the virtual IPs that Oracle creates after installation.
192.168.2.103, 192.168.2.104 and 192.168.2.105 are addresses configured by Oracle after installation (the node VIPs and the SCAN address):
192.168.2.101 rac1
192.168.2.102 rac2
192.168.2.103 rac1-vip
192.168.2.104 rac2-vip
10.10.10.11 rac1-priv
10.10.10.12 rac2-priv
192.168.2.105 rac-san
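A quick sanity check of the public and private networks from rac1, using the names defined in the hosts file above:
ping -c 2 rac2
ping -c 2 rac2-priv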
3. Disable the Firewall and SELinux
Run the following commands on both nodes:
systemctl stop firewalld
systemctl disable firewalld
vi /etc/selinux/config
and change the SELinux setting to
SELINUX=disabled
The edit is shown in the screenshot below.
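If you prefer not to edit the file interactively, the same change can be made with sed; this is a small sketch and assumes the default SELINUX= line is present:
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0     # switch to permissive immediately; "disabled" takes full effect after a reboot
getenforce       # verify the current mode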
4. Packages Required by Grid
The packages required by Grid are listed in the official documentation:
https://docs.oracle.com/cd/E11882_01/install.112/e41961/prelinux.htm#CWLIN225
Table 2-9 Linux x86-64 Oracle Grid Infrastructure and Oracle RAC Package Requirements
Oracle Linux 7 and Red Hat Enterprise Linux 7: the following packages (or later versions) must be installed:
binutils-2.23.52.0.1-12.el7.x86_64
compat-libcap1-1.10-3.el7.x86_64
gcc-4.8.2-3.el7.x86_64
gcc-c++-4.8.2-3.el7.x86_64
glibc-2.17-36.el7.i686
glibc-2.17-36.el7.x86_64
glibc-devel-2.17-36.el7.i686
glibc-devel-2.17-36.el7.x86_64
ksh
libaio-0.3.109-9.el7.i686
libaio-0.3.109-9.el7.x86_64
libaio-devel-0.3.109-9.el7.i686
libaio-devel-0.3.109-9.el7.x86_64
libgcc-4.8.2-3.el7.i686
libgcc-4.8.2-3.el7.x86_64
libstdc++-4.8.2-3.el7.i686
libstdc++-4.8.2-3.el7.x86_64
libstdc++-devel-4.8.2-3.el7.i686
libstdc++-devel-4.8.2-3.el7.x86_64
libXi-1.7.2-1.el7.i686
libXi-1.7.2-1.el7.x86_64
libXtst-1.2.2-1.el7.i686
libXtst-1.2.2-1.el7.x86_64
make-3.82-19.el7.x86_64
sysstat-10.1.5-1.el7.x86_64
Install them with the following commands:
yum install -y binutils*
yum install -y compat-libcap1*
yum install -y gcc*
yum install -y gcc-c++*
yum install -y glibc*
yum install -y glibc-devel*
yum install -y ksh*
yum install -y libaio*
yum install -y libgcc*
yum install -y libstdc*
yum install -y libstdc++-devel*
yum install -y libXi*
yum install -y libXtst*
yum install -y make*
yum install -y sysstat*
yum install -y elfutils-libelf-devel*
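Afterwards you can spot-check that the required packages are present with rpm (a partial list; extend as needed):
rpm -q binutils compat-libcap1 gcc gcc-c++ glibc glibc-devel ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel libXi libXtst make sysstat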
CentOS 7 is missing the package compat-libstdc++-33-3.2.3-69.el6.x86_64.rpm; it can be taken from the Red Hat media and installed with rpm.
Download link for compat-libstdc++-33-3.2.3-69.el6.x86_64.rpm (copied from the Red Hat media):
https://files.cnblogs.com/files/wenxiao1-2-3-4/compat-libstdc.rpm.zip
[root@rac2 ~]# rpm -ivh compat-libstdc++-33-3.2.3-69.el6.x86_64.rpm
warning: compat-libstdc++-33-3.2.3-69.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:compat-libstdc++-33-3.2.3-69.el6 ################################# [100%]
(The environment check will also report a missing pdksh package; this can be ignored.)
5. Users and Groups
5.1 Create the installation directories and users
/usr/sbin/groupadd -g 1000 oinstall
/usr/sbin/groupadd -g 1020 asmadmin
/usr/sbin/groupadd -g 1021 asmdba
/usr/sbin/groupadd -g 1022 asmoper
/usr/sbin/groupadd -g 1031 dba
/usr/sbin/groupadd -g 1032 oper
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
useradd -u 1101 -g oinstall -G dba,asmdba,oper oracle
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/grid
mkdir /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
The groups and users follow the official documentation:
https://docs.oracle.com/cd/E11882_01/install.112/e41961/prelinux.htm#CWLIN178
Finally, as root, set the passwords of the grid and oracle users:
passwd oracle
passwd grid
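Verify the users and their group memberships on both nodes:
id grid
id oracle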
5.2 Environment Variables
5.2.1 grid user
On node 1, edit .bash_profile:
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022
On node 2, edit .bash_profile:
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM2
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022
5.2.2 oracle user
Node 1 .bash_profile:
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=rac1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022
Node 2 .bash_profile:
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=rac2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022
After saving, run source on the file so the variables take effect.
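For example, as the grid user on node 1:
source ~/.bash_profile
echo $ORACLE_SID $ORACLE_HOME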
6. Configuring Raw Devices on Linux
Partition the shared disks on one node only (run fdisk to create a single partition on each disk; a sketch follows), then edit /usr/lib/udev/rules.d/60-raw.rules on both nodes.
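A sketch of the partitioning step, assuming the four shared disks show up as /dev/sdb through /dev/sde (check the actual device names with fdisk -l or lsblk first). Each disk gets a single primary partition spanning the whole disk:
# repeat for /dev/sdc, /dev/sdd and /dev/sde; the blank lines accept the default first/last sectors
fdisk /dev/sdb <<EOF
n
p
1


w
EOF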
[root@rac1 ~]# ls /usr/lib/udev/rules.d/60-raw.rules
/usr/lib/udev/rules.d/60-raw.rules
[root@rac1 ~]# cat /usr/lib/udev/rules.d/60-raw.rules
#
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/usr/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/usr/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
KERNEL=="raw[1]", MODE="0660", OWNER="grid", GROUP="asmadmin"
KERNEL=="raw[2]", MODE="0660", OWNER="grid", GROUP="asmadmin"
KERNEL=="raw[3]", MODE="0660", OWNER="grid", GROUP="asmadmin"
KERNEL=="raw[4]", MODE="0660", OWNER="grid", GROUP="asmadmin"
After a reboot, the newly configured raw devices appear under /dev/raw/:
[root@rac1 ~]# ls -l /dev/raw/
total 0
crw-rw---- 1 grid asmadmin 162, 1 Jul 15 21:23 raw1
crw-rw---- 1 grid asmadmin 162, 2 Jul 14 20:45 raw2
crw-rw---- 1 grid asmadmin 162, 3 Jul 14 20:45 raw3
crw-rw---- 1 grid asmadmin 162, 4 Jul 14 20:45 raw4
crw-rw---- 1 root disk     162, 0 Jul 14 20:45 rawctl
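If you would rather not reboot, the new rules can usually be applied in place on both nodes:
udevadm control --reload-rules
udevadm trigger --action=add
raw -qa            # list the current raw device bindings
ls -l /dev/raw/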
7. Kernel Parameters and Resource Limits
Edit the two files /etc/sysctl.conf and /etc/security/limits.conf (both nodes).
Add the following to /etc/sysctl.conf:
kernel.msgmnb = 65536
kernel.msgmax = 65536
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.tcp_wmem = 262144 262144 262144
net.ipv4.tcp_rmem = 4194304 4194304 4194304
After editing, run sysctl -p to apply the settings.
Add the following to /etc/security/limits.conf:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
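After logging in again as grid (or oracle), the limits can be checked with ulimit:
ulimit -n    # open files (nofile)
ulimit -u    # max user processes (nproc)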
8. SSH User Equivalence
User equivalence can be configured in two ways: either run the commands manually and make sure the key files on both nodes are identical, or use the sshUserSetup.sh script shipped with the Oracle installation media.
Method 1
Configure equivalence between rac1 and rac2 for both the oracle and grid users.
The procedure is essentially the same for the oracle and grid users.
The steps are as follows.
On rac1:
ssh-keygen -t rsa
(press Enter to accept the defaults)
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
On rac2:
ssh-keygen -t rsa
(press Enter to accept the defaults)
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
On rac1, copy the file to rac2's home directory:
scp .ssh/authorized_keys rac2:
On rac2, merge rac1's key into the local file and send the combined file back:
cat authorized_keys >> .ssh/authorized_keys
scp .ssh/authorized_keys rac1:
On rac1, replace the local file with the combined one:
cat authorized_keys > .ssh/authorized_keys
After this, the files and permissions on rac1 look like this:
grid@rac1 ~]$ ll .ssh/ -d drwx------ 2 grid oinstall 80 Jul 13 05:02 .ssh/ [grid@rac1 ~]$ ll .ssh/ total 16 -rw-r--r-- 1 grid oinstall 782 Jul 13 05:04 authorized_keys -rw------- 1 grid oinstall 1675 Jul 13 05:01 id_rsa -rw-r--r-- 1 grid oinstall 391 Jul 13 05:01 id_rsa.pub -rw-r--r-- 1 grid oinstall 360 Jul 13 05:02 known_hosts [grid@rac1 ~]$ cat .ssh/authorized_keys ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDVuiEr9Dug/tLJ+rZbdUr0ZDIlviaRPAoKcPoPf2EBCCRuYa3O6sqcOqKlwMNbitRl6eTaGxktVFOru7r2AG+DlajNaZo1B5jizx+atKCZzxoils8XGZNEPbEbNZN/NwgYEq2DYv1RkjcRXMvvPhEKskkBV3GT6BOPow0YTsRwajBmaLSamhg7fHnBTkzV6cjhDlLnLCyPevvlFMcKm1Y338ApSQMNBaPO9DhprYhUaEbUK8SpqZVOOWKHpAsFqc2iAPdnXJX36W/pc+dZf/FzuaJU1bPz6nzizEBB1zsnY3MWopKrac2j6TzjHpO4HLWCLDdJLYIMsFDCL0kJEIgl/ grid@rac1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDOTMVA9hhXpMZQX9YK282J0QWxCawAwZCP/AH3zIuXYXA/q5GHYhnFoG2xAM7wt9Vf2+LXYKV0QrVBBpZAlS9MzCna2upqqfB32QtBBMeAIxyBE++9ufPUz+QgvfkpET5YA9vxgm1plX0jB5OftO0YXXQzE5m0+2TI+WRsDCSpqZJTZzfZr0ncLfT6olguTwok70ls4dv+VLDW5RyG4yef8T5+PEdQa9igPD8iqsrAcunmhzsf+Hroz7Mkkl7IiFrFZc3OOPEkOa1GK3doOZeuu75fEHDd89IEMfDPJdW5Hfv+xOTRBd3mOv0jn/oiL+9wxP9TqfamJ9XRZpUpDFeR grid@rac2 [grid@rac1 ~]$
On rac2, the files and permissions after equivalence is configured look like this:
[grid@rac2 ~]$ ll .ssh/ -d drwx------. 2 grid oinstall 80 Jul 12 23:23 .ssh/ [grid@rac2 ~]$ ll .ssh/ total 16 -rw-r--r--. 1 grid oinstall 782 Jul 13 05:04 authorized_keys -rw-------. 1 grid oinstall 1679 Jul 12 23:22 id_rsa -rw-r--r--. 1 grid oinstall 391 Jul 12 23:22 id_rsa.pub -rw-r--r--. 1 grid oinstall 360 Jul 13 05:02 known_hosts [grid@rac2 ~]$ cat .ssh/authorized_keys ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDVuiEr9Dug/tLJ+rZbdUr0ZDIlviaRPAoKcPoPf2EBCCRuYa3O6sqcOqKlwMNbitRl6eTaGxktVFOru7r2AG+DlajNaZo1B5jizx+atKCZzxoils8XGZNEPbEbNZN/NwgYEq2DYv1RkjcRXMvvPhEKskkBV3GT6BOPow0YTsRwajBmaLSamhg7fHnBTkzV6cjhDlLnLCyPevvlFMcKm1Y338ApSQMNBaPO9DhprYhUaEbUK8SpqZVOOWKHpAsFqc2iAPdnXJX36W/pc+dZf/FzuaJU1bPz6nzizEBB1zsnY3MWopKrac2j6TzjHpO4HLWCLDdJLYIMsFDCL0kJEIgl/ grid@rac1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDOTMVA9hhXpMZQX9YK282J0QWxCawAwZCP/AH3zIuXYXA/q5GHYhnFoG2xAM7wt9Vf2+LXYKV0QrVBBpZAlS9MzCna2upqqfB32QtBBMeAIxyBE++9ufPUz+QgvfkpET5YA9vxgm1plX0jB5OftO0YXXQzE5m0+2TI+WRsDCSpqZJTZzfZr0ncLfT6olguTwok70ls4dv+VLDW5RyG4yef8T5+PEdQa9igPD8iqsrAcunmhzsf+Hroz7Mkkl7IiFrFZc3OOPEkOa1GK3doOZeuu75fEHDd89IEMfDPJdW5Hfv+xOTRBd3mOv0jn/oiL+9wxP9TqfamJ9XRZpUpDFeR grid@rac2 [grid@rac2 ~]$
This method is for reference; the goal is simply that both machines' public keys end up in authorized_keys and that the file is identical on both machines.
Test the equivalence:
rac1
[grid@rac1 ~]$ ssh rac1
Last login: Wed Jul 15 03:30:57 2020
[grid@rac1 ~]$
[grid@rac1 ~]$ ssh rac2
Last login: Wed Jul 15 03:04:23 2020 from rac1
[grid@rac2 ~]$
rac2
[grid@rac2 ~]$ ssh rac2
Last login: Wed Jul 15 03:44:49 2020 from rac1
[grid@rac2 ~]$ ssh rac1
Last login: Wed Jul 15 03:44:31 2020 from rac1
[grid@rac1 ~]$
If the output looks like the above, equivalence is working correctly. This completes method 1.
Method 2
Use the sshUserSetup.sh script shipped with the Oracle installation media:
[root@rac1 database]# ls install readme.html response rpm runInstaller sshsetup stage welcome.html [root@rac1 database]# ./sshsetup/sshUserSetup.sh help=y Please specify a valid and existing cluster configuration file. Either user name or host information is missing Usage ./sshsetup/sshUserSetup.sh -user <user name> [ -hosts "<space separated hostlist>" | -hostfile <absolute path of cluster configuration file> ] [ -advanced ] [ -verify] [ -exverify ] [ -logfile <desired absolute path of logfile> ] [-confirm] [-shared] [-help] [-usePassphrase] [-noPromptPassphrase] [root@rac1 database]#
Set up equivalence between the two hosts rac1 and rac2 for the oracle user:
./sshUserSetup.sh -hosts "rac1 rac2" -user oracle -advanced
The script output is as follows:
[root@rac1 sshsetup]# ./sshUserSetup.sh -hosts "rac1 rac2" -user oracle -advanced
The output of this script is also logged into /tmp/sshUserSetup_2021-09-16-06-04-32.log
Hosts are rac1 rac2
user is oracle
Platform:- Linux
Checking if the remote hosts are reachable
PING rac1 (192.168.10.20) 56(84) bytes of data.
64 bytes from rac1 (192.168.10.20): icmp_seq=1 ttl=64 time=0.018 ms
64 bytes from rac1 (192.168.10.20): icmp_seq=2 ttl=64 time=0.044 ms
64 bytes from rac1 (192.168.10.20): icmp_seq=3 ttl=64 time=0.044 ms
64 bytes from rac1 (192.168.10.20): icmp_seq=4 ttl=64 time=0.044 ms
64 bytes from rac1 (192.168.10.20): icmp_seq=5 ttl=64 time=0.048 ms
--- rac1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4002ms
rtt min/avg/max/mdev = 0.018/0.039/0.048/0.012 ms
PING rac2 (192.168.10.21) 56(84) bytes of data.
64 bytes from rac2 (192.168.10.21): icmp_seq=1 ttl=64 time=0.600 ms
64 bytes from rac2 (192.168.10.21): icmp_seq=2 ttl=64 time=0.714 ms
64 bytes from rac2 (192.168.10.21): icmp_seq=3 ttl=64 time=0.701 ms
64 bytes from rac2 (192.168.10.21): icmp_seq=4 ttl=64 time=0.737 ms
64 bytes from rac2 (192.168.10.21): icmp_seq=5 ttl=64 time=0.643 ms
--- rac2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 0.600/0.679/0.737/0.050 ms
Remote host reachability check succeeded.
The following hosts are reachable: rac1 rac2.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost rac1
numhosts 2
The script will setup SSH connectivity from the host rac1 to all
the remote hosts. After the script is executed, the user can use SSH to run
commands on the remote hosts or copy files between this host rac1
and the remote hosts without being prompted for passwords or confirmations.
NOTE 1:
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. Since the script does not
store passwords, you may be prompted for the passwords during the execution of
the script whenever ssh or scp is invoked.
NOTE 2:
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE
directories.
Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes
The user chose yes
Please specify if you want to specify a passphrase for the private key this script will create for the local host. Passphrase is used to encrypt the private key and makes SSH much more secure. Type 'yes' or 'no' and then press enter. In case you press 'yes', you would need to enter the passphrase whenever the script executes ssh or scp.
The estimated number of times the user would be prompted for a passphrase is 4. In addition, if the private-public files are also newly created, the user would have to specify the passphrase on one additional occasion.
Enter 'yes' or 'no'.
yes
The user chose yes
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /root/.ssh/config, it would be backed up to /root/.ssh/config.backup.
Removing old private/public keys on local host
Running SSH keygen on local host
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:gKwItLJdlR8rhK15zllEw2+Em1TyPYfylmHZDO4i88M root@rac1
The key's randomart image is:
+---[RSA 1024]----+
| . o.++o. . |
|. ...o+ *+.o * |
|o. ++.+ Bo O + |
|o+ oo o.* o= = |
|o o + +S.. = |
| + = o |
| E |
| . |
| |
+----[SHA256]-----+
Creating .ssh directory and setting permissions on remote host rac1
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create ~oracle/.ssh/config file on remote host rac1. If a config file exists already at ~oracle/.ssh/config, it would be backed up to ~oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host rac1.
Warning: Permanently added 'rac1,192.168.10.20' (ECDSA) to the list of known hosts.
oracle@rac1's password:
Done with creating .ssh directory and setting permissions on remote host rac1.
Creating .ssh directory and setting permissions on remote host rac2
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create ~oracle/.ssh/config file on remote host rac2. If a config file exists already at ~oracle/.ssh/config, it would be backed up to ~oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host rac2.
Warning: Permanently added 'rac2,192.168.10.21' (ECDSA) to the list of known hosts.
oracle@rac2's password:
Done with creating .ssh directory and setting permissions on remote host rac2.
Copying local host public key to the remote host rac1
The user may be prompted for a password or passphrase here since the script would be using SCP for host rac1.
oracle@rac1's password:
Done copying local host public key to the remote host rac1
Copying local host public key to the remote host rac2
The user may be prompted for a password or passphrase here since the script would be using SCP for host rac2.
oracle@rac2's password:
Done copying local host public key to the remote host rac2
Creating keys on remote host rac1 if they do not exist already. This is required to setup SSH on host rac1.
Generating public/private rsa key pair.
Your identification has been saved in .ssh/id_rsa.
Your public key has been saved in .ssh/id_rsa.pub.
The key fingerprint is:
SHA256:v5ssNTbEi0QV7DrGvPvRRlHtCFgDx1jdVwQvAP6bWBQ oracle@rac1
The key's randomart image is:
+---[RSA 1024]----+
| o=XE.o=+|
| ..+.o=..+|
| . o. o..oo|
| . +o ....|
| +S+ .+ |
| B.*= o |
| . =+o= |
| o. = |
| .+*. |
+----[SHA256]-----+
Creating keys on remote host rac2 if they do not exist already. This is required to setup SSH on host rac2.
Generating public/private rsa key pair.
Your identification has been saved in .ssh/id_rsa.
Your public key has been saved in .ssh/id_rsa.pub.
The key fingerprint is:
SHA256:DOM3p67iPw0SutOj8yXX6imidsQxrCEcPqJqS00nwj4 oracle@rac2
The key's randomart image is:
+---[RSA 1024]----+
| |
| . |
|o o o |
|+= +.. + |
|+o=+oo. S . |
|o.=o+ .o + |
|.E.+..ooo |
|oo*.=+.+. |
|+o+Bo**o. |
+----[SHA256]-----+
Updating authorized_keys file on remote host rac1
Updating known_hosts file on remote host rac1
The script will run SSH on the remote machine rac1. The user may be prompted for a passphrase here in case the private key has been encrypted with a passphrase.
Updating authorized_keys file on remote host rac2
Updating known_hosts file on remote host rac2
The script will run SSH on the remote machine rac2. The user may be prompted for a passphrase here in case the private key has been encrypted with a passphrase.
SSH setup is complete.
------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh
for user oracle.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. ~oracle or ~oracle/.ssh on the remote host may not be owned by oracle.
5. User may not have passed -shared option for shared remote users or
may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked,
it may be a security alert shown as part of company policy. Append the
additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--rac1:--
Running /usr/bin/ssh -x -l oracle rac1 date to verify SSH connectivity has been setup from local host to rac1.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
The script will run SSH on the remote machine rac1. The user may be prompted for a passphrase here in case the private key has been encrypted with a passphrase.
Thu Sep 16 06:05:04 EDT 2021
------------------------------------------------------------------------
--rac2:--
Running /usr/bin/ssh -x -l oracle rac2 date to verify SSH connectivity has been setup from local host to rac2.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
The script will run SSH on the remote machine rac2. The user may be prompted for a passphrase here in case the private key has been encrypted with a passphrase.
Thu Sep 16 06:05:20 EDT 2021
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from rac1 to rac1
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
bash: -c: line 0: unexpected EOF while looking for matching `"'
bash: -c: line 1: syntax error: unexpected end of file
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from rac1 to rac2
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
bash: -c: line 0: unexpected EOF while looking for matching `"'
bash: -c: line 1: syntax error: unexpected end of file
------------------------------------------------------------------------
-Verification from complete-
SSH verification complete.
When the script has finished, verify the setup:
[oracle@rac1 ~]$ date;ssh rac1 date;ssh rac1 date;ssh rac2 date;ssh rac2 date;ssh rac1 date
Thu Sep 16 06:17:57 EDT 2021
Thu Sep 16 06:17:58 EDT 2021
Thu Sep 16 06:17:58 EDT 2021
Thu Sep 16 06:18:13 EDT 2021
Thu Sep 16 06:18:13 EDT 2021
Thu Sep 16 06:17:59 EDT 2021
[oracle@rac1 ~]$
(6) Grid Pre-installation Check
When the configuration is complete, run the Grid environment check script:
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
The check reports that the pdksh package is not installed; since ksh is installed, this can be ignored. Oracle explains this in the release notes:
https://docs.oracle.com/cd/E11882_01/relnotes.112/e23558/toc.htm#CJAFEIBJ
The explanation reads:
7.2.6 Missing Package Error During Oracle Database Installation on Oracle Linux 7
During Oracle Database 11.2.0.4 installation on Oracle Linux 7, the Oracle Universal Installer reports a missing pdksh-5.2.14 package.
Workaround:
Ignore the missing pdksh-5.2.14 package error and proceed with the installation.
This issue is tracked with Oracle bug 19947777.
(7) Installing the Grid and Oracle Software and Creating the Database
Before installing Grid, install the cvuqdisk package shipped on the Grid media:
[root@rac2 ~]# ls -lh /u01/grid/ total 56K drwxr-xr-x. 4 grid oinstall 4.0K Aug 26 2013 install -rw-r--r--. 1 grid oinstall 30K Aug 27 2013 readme.html drwxr-xr-x. 2 grid oinstall 58 Jul 23 02:48 response drwxr-xr-x. 2 grid oinstall 34 Aug 26 2013 rpm -rwxr-xr-x. 1 grid oinstall 4.8K Aug 26 2013 runcluvfy.sh -rwxr-xr-x. 1 grid oinstall 3.2K Aug 26 2013 runInstaller drwxr-xr-x. 2 grid oinstall 29 Aug 26 2013 sshsetup drwxr-xr-x. 14 grid oinstall 4.0K Aug 26 2013 stage -rw-r--r--. 1 grid oinstall 500 Aug 27 2013 welcome.html [root@rac2 ~]# ls -lh /u01/grid/rpm/ total 12K -rw-r--r--. 1 grid oinstall 8.1K Aug 26 2013 cvuqdisk-1.0.9-1.rpm [root@rac2 ~]# rpm -ivh /u01/grid/rpm/cvuqdisk-1.0.9-1.rpm Preparing... ################################# [100%] ls: cannot access /usr/sbin/smartctl: No such file or directory /usr/sbin/smartctl not found. error: %pre(cvuqdisk-1.0.9-1.x86_64) scriptlet failed, exit status 1 error: cvuqdisk-1.0.9-1.x86_64: install failed [root@rac2 ~]# yum install -y smartmontools Loaded plugins: ulninfo Resolving Dependencies --> Running transaction check ---> Package smartmontools.x86_64 1:6.5-1.el7 will be installed --> Processing Dependency: mailx for package: 1:smartmontools-6.5-1.el7.x86_64 --> Running transaction check ---> Package mailx.x86_64 0:12.5-19.el7 will be installed --> Finished Dependency Resolution Dependencies Resolved =============================================================================================================================================================================================== Package Arch Version Repository Size =============================================================================================================================================================================================== Installing: smartmontools x86_64 1:6.5-1.el7 base 460 k Installing for dependencies: mailx x86_64 12.5-19.el7 base 244 k Transaction Summary =============================================================================================================================================================================================== Install 1 Package (+1 Dependent package) Total download size: 704 k Installed size: 2.2 M Downloading packages: ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Total 3.6 MB/s | 704 kB 00:00:00 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mailx-12.5-19.el7.x86_64 1/2 Installing : 1:smartmontools-6.5-1.el7.x86_64 2/2 Verifying : mailx-12.5-19.el7.x86_64 1/2 Verifying : 1:smartmontools-6.5-1.el7.x86_64 2/2 Installed: smartmontools.x86_64 1:6.5-1.el7 Dependency Installed: mailx.x86_64 0:12.5-19.el7 Complete! [root@rac2 ~]# rpm -ivh /u01/grid/rpm/cvuqdisk-1.0.9-1.rpm Preparing... ################################# [100%] Using default group oinstall to install package Updating / installing... 1:cvuqdisk-1.0.9-1 ################################# [100%] [root@rac2 ~]#
Extract the Grid installation package. For a silent installation of the Grid software with ASM configuration, edit grid_install.rsp; the following content is for reference only.
Note the line oracle.install.crs.config.networkInterfaceList=ens192:192.168.2.0:1,ens224:10.10.10.0:2:
it simply lists the public and private interface names together with their subnets.
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v11_2_0
ORACLE_HOSTNAME=rac1
INVENTORY_LOCATION=/u01/app/oraInventory
SELECTED_LANGUAGES=en
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/11.2.0/grid
oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=asmoper
oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.gpnp.scanName=rac-san
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.clusterName=cluster-san
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.clusterNodes=rac1:rac1-vip,rac2:rac2-vip
oracle.install.crs.config.networkInterfaceList=ens192:192.168.2.0:1,ens224:10.10.10.0:2
oracle.install.crs.config.storageOption=ASM_STORAGE
oracle.install.crs.config.sharedFileSystemStorage.diskDriveMapping=
oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations=
oracle.install.crs.config.sharedFileSystemStorage.votingDiskRedundancy=NORMAL
oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=
oracle.install.crs.config.sharedFileSystemStorage.ocrRedundancy=NORMAL
oracle.install.crs.config.useIPMI=false
oracle.install.crs.config.ipmi.bmcUsername=
oracle.install.crs.config.ipmi.bmcPassword=
oracle.install.asm.SYSASMPassword=oracle
oracle.install.asm.diskGroup.name=DATA
oracle.install.asm.diskGroup.redundancy=NORMAL
oracle.install.asm.diskGroup.AUSize=1
oracle.install.asm.diskGroup.disks=/dev/raw/raw1,/dev/raw/raw2,/dev/raw/raw3,/dev/raw/raw4
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/raw/*
oracle.install.asm.monitorPassword=oracle
oracle.install.crs.upgrade.clusterNodes=
oracle.install.asm.upgradeASM=false
oracle.installer.autoupdates.option=SKIP_UPDATES
oracle.installer.autoupdates.downloadUpdatesLoc=
AUTOUPDATES_MYORACLESUPPORT_USERNAME=
AUTOUPDATES_MYORACLESUPPORT_PASSWORD=
PROXY_HOST=
PROXY_PORT=
PROXY_USER=
PROXY_PWD=
PROXY_REALM=
Installation command:
./runInstaller -silent -ignorePrereq -showProgress -responseFile /u01/ora_app/grid/response/grid_install01.rsp
-ignorePrereq makes the installer ignore prerequisite warnings.
The installation proceeds as follows:
[grid@rac1 grid]$ ./runInstaller -silent -ignorePrereq -showProgress -responseFile /u01/ora_app/grid/response/grid_install01.rsp Starting Oracle Universal Installer... Checking Temp space: must be greater than 120 MB. Actual 9901 MB Passed Checking swap space: must be greater than 150 MB. Actual 4091 MB Passed Preparing to launch Oracle Universal Installer from /tmp/OraInstall2020-07-24_06-21-45PM. Please wait ... [grid@rac1 grid]$ [WARNING] [INS-30011] The SYS password entered does not conform to the Oracle recommended standards. CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. ACTION: Provide a password that conforms to the Oracle recommended standards. [WARNING] [INS-30011] The ASMSNMP password entered does not conform to the Oracle recommended standards. CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. ACTION: Provide a password that conforms to the Oracle recommended standards. You can find the log of this install session at: /u01/app/oraInventory/logs/installActions2020-07-24_06-21-45PM.log Prepare in progress. .................................................. 9% Done. Prepare successful. Copy files in progress. .................................................. 15% Done. .................................................. 20% Done. .................................................. 25% Done. .................................................. 30% Done. .................................................. 35% Done. .................................................. 40% Done. .................................................. 45% Done. ........................................ Copy files successful. Link binaries in progress. Link binaries successful. .................................................. 62% Done. Setup files in progress. Setup files successful. .................................................. 76% Done. Perform remote operations in progress. .................................................. 89% Done. Perform remote operations successful. The installation of Oracle Grid Infrastructure 11g was successful. Please check '/u01/app/oraInventory/logs/silentInstall2020-07-24_06-21-45PM.log' for more details. .................................................. 94% Done. Execute Root Scripts in progress. As a root user, execute the following script(s): 1. /u01/app/oraInventory/orainstRoot.sh 2. /u01/app/11.2.0/grid/root.sh Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes: [rac1, rac2] Execute /u01/app/11.2.0/grid/root.sh on the following nodes: [rac1, rac2] .................................................. 100% Done. Execute Root Scripts successful. As install user, execute the following script to complete the configuration. 1. /u01/app/11.2.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=<response_file> Note: 1. This script must be run on the same host from where installer was run. 2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation). Successfully Setup Software.
Run the root scripts on both nodes. Before running them, apply Oracle patch 18370031 (see the steps at the end of this article).
Once the patch is applied, run the root scripts:
[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac1 ~]# /u01/app/11.2.0/grid/root.sh
Check /u01/app/11.2.0/grid/install/root_rac1_2020-07-24_18-40-41.log for the output of root script
Open another terminal to follow the log:
[root@rac1 ~]# tail -f /u01/app/11.2.0/grid/install/root_rac1_2020-08-24_06-11-09.log | nl 1 Copying oraenv to /usr/local/bin ... 2 Copying coraenv to /usr/local/bin ... 3 Entries will be added to the /etc/oratab file as needed by 4 Database Configuration Assistant when a database is created 5 Finished running generic part of root script. 6 Now product-specific root actions will be performed. 7 Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params 8 User ignored Prerequisites during installation 9 Installing Trace File Analyzer 10 OLR initialization - successful 11 root wallet 12 root wallet cert 13 root cert export 14 peer wallet 15 profile reader wallet 16 pa wallet 17 peer wallet keys 18 pa wallet keys 19 peer cert request 20 pa cert request 21 peer cert 22 pa cert 23 peer root cert TP 24 profile reader root cert TP 25 pa root cert TP 26 peer pa cert TP 27 pa peer cert TP 28 profile reader pa cert TP 29 profile reader peer cert TP 30 peer user cert 31 pa user cert 32 Adding Clusterware entries to oracle-ohasd.service 33 CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1' 34 CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded 35 CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1' 36 CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded 37 CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1' 38 CRS-2672: Attempting to start 'ora.gipcd' on 'rac1' 39 CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded 40 CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded 41 CRS-2672: Attempting to start 'ora.cssd' on 'rac1' 42 CRS-2672: Attempting to start 'ora.diskmon' on 'rac1' 43 CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded 44 CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded 45 ASM created and started successfully. 46 Disk Group DATA created successfully. 47 clscfg: -install mode specified 48 Successfully accumulated necessary OCR keys. 49 Creating OCR keys for user 'root', privgrp 'root'.. 50 Operation successful. 51 CRS-4256: Updating the profile 52 Successful addition of voting disk 5daea1cfaac24f1ebf870b7f9bb964c3. 53 Successful addition of voting disk 28267865584c4fc3bf672e650d0e28ba. 54 Successful addition of voting disk 1e8c192b947d4f43bf96dba5e650f11f. 55 Successfully replaced voting disk group with +DATA. 56 CRS-4256: Updating the profile 57 CRS-4266: Voting file(s) successfully replaced 58 ## STATE File Universal Id File Name Disk group 59 -- ----- ----------------- --------- --------- 60 1. ONLINE 5daea1cfaac24f1ebf870b7f9bb964c3 (/dev/raw/raw1) [DATA] 61 2. ONLINE 28267865584c4fc3bf672e650d0e28ba (/dev/raw/raw2) [DATA] 62 3. ONLINE 1e8c192b947d4f43bf96dba5e650f11f (/dev/raw/raw3) [DATA] 63 Located 3 voting disk(s). 64 CRS-2672: Attempting to start 'ora.asm' on 'rac1' 65 CRS-2676: Start of 'ora.asm' on 'rac1' succeeded 66 CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac1' 67 CRS-2676: Start of 'ora.DATA.dg' on 'rac1' succeeded 68 Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Node 2:
[root@rac2 ~]# tail -f /u01/app/11.2.0/grid/install/root_rac2_2020-07-24_18-48-51.log | nl 1 Creating /etc/oratab file... 2 Entries will be added to the /etc/oratab file as needed by 3 Database Configuration Assistant when a database is created 4 Finished running generic part of root script. 5 Now product-specific root actions will be performed. 6 Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params 7 Creating trace directory 8 User ignored Prerequisites during installation 9 Installing Trace File Analyzer 10 OLR initialization - successful 11 Adding Clusterware entries to oracle-ohasd.service 12 CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating 13 An active cluster was found during exclusive startup, restarting to join the cluster 14 Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Both root scripts completed successfully.
7.1 Installing the Oracle Software
The packages required by Oracle are listed in the official documentation:
https://docs.oracle.com/cd/E11882_01/install.112/e24326/toc.htm#BHCGJCEA
The db_install.rsp file is configured as follows:
oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v11_2_0
oracle.install.option=INSTALL_DB_AND_CONFIG
ORACLE_HOSTNAME=rac1
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u01/app/oraInventory
SELECTED_LANGUAGES=en
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
ORACLE_BASE=/u01/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.EEOptionsSelection=false
oracle.install.db.optionalComponents=oracle.rdbms.partitioning:11.2.0.4.0,oracle.oraolap:11.2.0.4.0,oracle.rdbms.dm:11.2.0.4.0,oracle.rdbms.dv:11.2.0.4.0,oracle.rdbms.lbac:11.2.0.4.0,oracle.rdbms.rat:11.2.0.4.0
oracle.install.db.DBA_GROUP=dba
oracle.install.db.OPER_GROUP=oper
oracle.install.db.CLUSTER_NODES=rac1,rac2
oracle.install.db.isRACOneInstall=
oracle.install.db.racOneServiceName=
oracle.install.db.config.starterdb.type=GENERAL_PURPOSE
oracle.install.db.config.starterdb.globalDBName=rac
oracle.install.db.config.starterdb.SID=rac
oracle.install.db.config.starterdb.characterSet=ZHS16GBK
oracle.install.db.config.starterdb.memoryOption=true
oracle.install.db.config.starterdb.memoryLimit=512
oracle.install.db.config.starterdb.installExampleSchemas=false
oracle.install.db.config.starterdb.enableSecuritySettings=true
oracle.install.db.config.starterdb.password.ALL=oracle
oracle.install.db.config.starterdb.password.SYS=
oracle.install.db.config.starterdb.password.SYSTEM=
oracle.install.db.config.starterdb.password.SYSMAN=
oracle.install.db.config.starterdb.password.DBSNMP=
oracle.install.db.config.starterdb.control=DB_CONTROL
oracle.install.db.config.starterdb.gridcontrol.gridControlServiceURL=
oracle.install.db.config.starterdb.automatedBackup.enable=false
oracle.install.db.config.starterdb.automatedBackup.osuid=
oracle.install.db.config.starterdb.automatedBackup.ospwd=
oracle.install.db.config.starterdb.storageType=ASM_STORAGE
oracle.install.db.config.starterdb.fileSystemStorage.dataLocation=
oracle.install.db.config.starterdb.fileSystemStorage.recoveryLocation=
oracle.install.db.config.asm.diskGroup=DATA
oracle.install.db.config.asm.ASMSNMPPassword=oracle
MYORACLESUPPORT_USERNAME=
MYORACLESUPPORT_PASSWORD=
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false
DECLINE_SECURITY_UPDATES=true
PROXY_HOST=
PROXY_PORT=
PROXY_USER=
PROXY_PWD=
PROXY_REALM=
COLLECTOR_SUPPORTHUB_URL=
oracle.installer.autoupdates.option=SKIP_UPDATES
oracle.installer.autoupdates.downloadUpdatesLoc=
AUTOUPDATES_MYORACLESUPPORT_USERNAME=
AUTOUPDATES_MYORACLESUPPORT_PASSWORD=
Run the installation:
./runInstaller -ignoreSysPrereqs -silent -showProgress -responseFile /u01/ora_app/database/response/db_install.rsp
An error occurs at this point:
[oracle@rac1 database]$ ./runInstaller -ignoreSysPrereqs -silent -showProgress -responseFile /u01/ora_app/database/response/db_install.rsp
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB.   Actual 8932 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4049 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2020-07-24_07-00-02PM. Please wait ...[oracle@rac1 database]$
[FATAL] [INS-35354] The system on which you are attempting to install Oracle RAC is not part of a valid cluster.
CAUSE: Before you can install Oracle RAC, you must install Oracle Grid Infrastructure on all servers (Oracle Clusterware and Oracle ASM) to create a cluster.
ACTION: Oracle Grid Infrastructure is not installed. Install it either from the separate installation media included in your media pack, or install it by downloading it from Electronic Product Delivery (EPD) or the Oracle Technology Network (OTN). Oracle Grid Infrastructure normally is installed by a different operating system user than the one used for Oracle Database. It may need to be installed by your system administrator. See the installation guide for more details.
The fix is to edit the Grid inventory file inventory.xml; the exact steps used here are described at the end of this article.
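For reference, the usual cause of INS-35354 during a silent 11.2 installation is that the Grid home is not flagged as the clusterware home in the central inventory. The common fix is to add CRS="true" to the Grid home's <HOME> entry in /u01/app/oraInventory/ContentsXML/inventory.xml on both nodes; the entry then looks roughly like this (the NAME and IDX values are examples and will differ on your system):
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true"/>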
After the fix, re-run the installation:
[oracle@rac1 database]$ ./runInstaller -ignoreSysPrereqs -silent -showProgress -responseFile /u01/ora_app/database/response/db_install.rsp
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB.   Actual 13597 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4095 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2020-08-24_09-54-03PM. Please wait ...[oracle@rac1 database]$
[FATAL] [INS-30502] No ASM disk group found.
CAUSE: There were no disk groups managed by the ASM instance +ASM1.
ACTION: Use Automatic Storage Management Configuration Assistant to add disk groups.
The installer claims that no ASM disk group can be found, but the disk group is in fact healthy:
[grid@rac1 ~]$ crs_stat -t Name Type Target State Host ------------------------------------------------------------ ora.DATA.dg ora....up.type ONLINE ONLINE rac1 ora....N1.lsnr ora....er.type ONLINE ONLINE rac1 ora.asm ora.asm.type ONLINE ONLINE rac1 ora.cvu ora.cvu.type ONLINE ONLINE rac1 ora.gsd ora.gsd.type OFFLINE OFFLINE ora....network ora....rk.type ONLINE ONLINE rac1 ora.oc4j ora.oc4j.type ONLINE ONLINE rac2 ora.ons ora.ons.type ONLINE ONLINE rac1 ora....SM1.asm application ONLINE ONLINE rac1 ora.rac1.gsd application OFFLINE OFFLINE ora.rac1.ons application ONLINE ONLINE rac1 ora.rac1.vip ora....t1.type ONLINE ONLINE rac1 ora....SM2.asm application ONLINE ONLINE rac2 ora.rac2.gsd application OFFLINE OFFLINE ora.rac2.ons application ONLINE ONLINE rac2 ora.rac2.vip ora....t1.type ONLINE ONLINE rac2 ora.scan1.vip ora....ip.type ONLINE ONLINE rac1
[grid@rac1 ~]$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.4.0 Production on Mon Aug 24 22:25:56 2020
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> column path format 20
SP2-0246: Illegal FORMAT string "20"
SQL> column path format a20
SQL> set lines 200
SQL> select group_number ,disk_number,mode_status ,name ,path from v$asm_disk;
GROUP_NUMBER DISK_NUMBER MODE_ST NAME PATH
------------ ----------- ------- ------------------------------ --------------------
1 3 ONLINE DATA_0003 /dev/raw/raw4
1 2 ONLINE DATA_0002 /dev/raw/raw3
1 1 ONLINE DATA_0001 /dev/raw/raw2
1 0 ONLINE DATA_0000 /dev/raw/raw1
SQL>
The fix is as follows:
[root@rac1 ~]# ls /u01/app/11.2.0/grid/bin/oracle -l
-rwxr-x--x 1 grid oinstall 209836184 Aug 24 06:01 /u01/app/11.2.0/grid/bin/oracle
[root@rac1 ~]# chmod +s /u01/app/11.2.0/grid/bin/oracle
[root@rac1 ~]# ls /u01/app/11.2.0/grid/bin/oracle -l
-rwsr-s--x 1 grid oinstall 209836184 Aug 24 06:01 /u01/app/11.2.0/grid/bin/oracle
[root@rac1 ~]#
After the change, re-run the installation:
[oracle@rac1 database]$ ./runInstaller -ignoreSysPrereqs -silent -showProgress -responseFile /u01/ora_app/database/response/db_install.rsp
Starting Oracle Universal Installer... Checking Temp space: must be greater than 120 MB. Actual 13597 MB Passed Checking swap space: must be greater than 150 MB. Actual 4084 MB Passed Preparing to launch Oracle Universal Installer from /tmp/OraInstall2020-08-24_11-03-43PM. Please wait ...[oracle@rac1 database]$ [WARNING] [INS-30011] The ADMIN password entered does not conform to the Oracle recommended standards. CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. ACTION: Provide a password that conforms to the Oracle recommended standards. [WARNING] [INS-13014] Target environment do not meet some optional requirements. CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/installActions2020-08-24_11-03-43PM.log ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/installActions2020-08-24_11-03-43PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually. You can find the log of this install session at: /u01/app/oraInventory/logs/installActions2020-08-24_11-03-43PM.log Prepare in progress. .................................................. 8% Done. Prepare successful. Copy files in progress. .................................................. 13% Done. .................................................. 18% Done. .................................................. 23% Done. .................................................. 28% Done. .................................................. 33% Done. .................................................. 38% Done. .................................................. 43% Done. .................... Copy files successful. Link binaries in progress. .......... Link binaries successful. .................................................. 53% Done. Setup files in progress. Setup files successful. .................................................. 65% Done. Perform remote operations in progress. .................................................. 76% Done. Perform remote operations successful. SEVERE:Remote 'AttachHome' failed on nodes: 'rac2'. Refer to '/u01/app/oraInventory/logs/installActions2020-08-24_11-03-43PM.log' for details. It is recommended that the following command needs to be manually run on the failed nodes: /u01/app/oracle/product/11.2.0/db_1/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 ORACLE_HOME_NAME=OraDb11g_home1 CLUSTER_NODES=rt1,rt2 "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=<node on which command is to be run>. Please refer 'AttachHome' logs under central inventory of remote nodes where failure occurred for more details. The installation of Oracle Database 11g was successful on the local node but failed on remote nodes. Please check '/u01/app/oraInventory/logs/silentInstall2020-08-24_11-03-43PM.log' for more details. Oracle Database Configuration Assistant in progress. .................................................. 95% Done. Oracle Database Configuration Assistant failed. [WARNING] [INS-32091] Software installation was successful. But some configuration assistants failed, were cancelled or skipped. ACTION: Refer to the logs or contact Oracle Support Services.
The silent installation of the Oracle software now succeeds, but the listener still has to be configured and the database created manually.
The installation log shows:
[oracle@rac1 ~]$ cat /u01/app/oraInventory/logs/silentInstall2020-08-24_11-03-43PM.log silentInstall2020-08-24_11-03-43PM.log sNativeVolName:/u01/app/oracle/product/11.2.0/db_1/ m_asNodeArray:rac1,rac2 m_sLocalNode:rac1 sNativeVolName:/tmp/ m_asNodeArray:rac1,rac2 m_sLocalNode:rac1 sNativeVolName:/u01/app/oraInventory/ m_asNodeArray:rac1,rac2 m_sLocalNode:rac1 Error in invoking target 'agent nmhs' of makefile '/u01/app/oracle/product/11.2.0/db_1/sysman/lib/ins_emagent.mk'. See '/u01/app/oraInventory/logs/installActions2020-08-24_11-03-43PM.log' for details. sNativeVolName:/u01/app/oracle/ m_asNodeArray:rac1,rac2 m_sLocalNode:rac1 SEVERE:Remote 'AttachHome' failed on nodes: 'rt2'. Refer to '/u01/app/oraInventory/logs/installActions2020-08-24_11-03-43PM.log' for details. It is recommended that the following command needs to be manually run on the failed nodes: /u01/app/oracle/product/11.2.0/db_1/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 ORACLE_HOME_NAME=OraDb11g_home1 CLUSTER_NODES=rt1,rt2 "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=<node on which command is to be run>. Please refer 'AttachHome' logs under central inventory of remote nodes where failure occurred for more details. The installation of Oracle Database 11g was successful on the local node but failed on remote nodes. [oracle@rac1 ~]$
Apply patch 19692824 as follows:
[oracle@rac1 ~]$ cd /u01/oracle_app/19692824/ [oracle@rac1 19692824]$ find /u01/app/oracle/ -iname "opatch" /u01/app/oracle/product/11.2.0/db_1/inventory/Templates/OPatch /u01/app/oracle/product/11.2.0/db_1/OPatch /u01/app/oracle/product/11.2.0/db_1/OPatch/opatchprereqs/opatch /u01/app/oracle/product/11.2.0/db_1/OPatch/opatch /u01/app/oracle/product/11.2.0/db_1/oc4j/cfgtoollogs/opatch /u01/app/oracle/product/11.2.0/db_1/cfgtoollogs/opatch [oracle@rac1 19692824]$ /u01/app/oracle/product/11.2.0/db_1/OPatch/opatch apply
7.2 Creating the Database
First configure the listener.
Listener response file:
[GENERAL]
RESPONSEFILE_VERSION="11.2"
CREATE_TYPE="CUSTOM"
[oracle.net.ca]
INSTALLED_COMPONENTS={"server","net8","javavm"}
INSTALL_TYPE=""typical""
LISTENER_NUMBER=1
LISTENER_NAMES={"LISTENER"}
LISTENER_PROTOCOLS={"TCP;1521"}
LISTENER_START=""LISTENER""
NAMING_METHODS={"TNSNAMES","ONAMES","HOSTNAME"}
NSN_NUMBER=1
NSN_NAMES={"EXTPROC_CONNECTION_DATA"}
NSN_SERVICE={"PLSExtProc"}
NSN_PROTOCOLS={"TCP;HOSTNAME;1521"}
The listener must be created as the grid user:
[grid@rac1 ~]$ netca -silent -responsefile /u01/netca.rsp
Parsing command line arguments:
Parameter "silent" = true
Parameter "responsefile" = /u01/netca.rsp
Done parsing command line arguments.
Oracle Net Services Configuration:
Profile configuration complete.
Profile configuration complete.
rac1...
rac2...
Oracle Net Listener Startup:
Listener started successfully.
Listener configuration complete.
Oracle Net Services configuration successful. The exit code is 0
[grid@rac1 ~]$ lsnrctl status
LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 25-AUG-2020 02:13:49
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date 25-AUG-2020 01:21:42
Uptime 0 days 0 hr. 52 min. 6 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/11.2.0/grid/network/admin/listener.ora
Listener Log File /u01/app/grid/diag/tnslsnr/rac1/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.101)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.103)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "rac" has 1 instance(s).
Instance "rac1", status READY, has 1 handler(s) for this service...
Service "racXDB" has 1 instance(s).
Instance "rac1", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@rac1 ~]$
The dbca.rsp file is configured as follows:
[GENERAL]
RESPONSEFILE_VERSION = "11.2.0"
OPERATION_TYPE = "createDatabase"
[CREATEDATABASE]
GDBNAME = "rac"
SID = "rac"
NODELIST=rac1,rac2
TEMPLATENAME = "General_Purpose.dbc"
SYSPASSWORD = "oracle"
SYSTEMPASSWORD = "oracle"
SYSMANPASSWORD = "oracle"
DBSNMPPASSWORD = "oracle"
STORAGETYPE=ASM
DISKGROUPNAME=DATA
ASMSNMP_PASSWORD="oracle"
#RECOVERYGROUPNAME=ARCH
CHARACTERSET = "ZHS16GBK"
NATIONALCHARACTERSET= "UTF8"
The parameter TEMPLATENAME = "General_Purpose.dbc" refers to a template that can be found under $ORACLE_HOME/assistants/dbca/templates/:
[oracle@rac2 response]$ ls -lh /u01/app/oracle/product/11.2.0/db_1/assistants/dbca/templates/ total 295M -rw-r--r-- 1 oracle oinstall 5.0K Aug 24 2013 Data_Warehouse.dbc -rwxr-xr-x 1 oracle oinstall 21M Aug 27 2013 example01.dfb -rwxr-xr-x 1 oracle oinstall 1.5M Aug 27 2013 example.dmp -rw-r--r-- 1 oracle oinstall 4.9K Aug 24 2013 General_Purpose.dbc -rw-r--r-- 1 oracle oinstall 12K May 1 2013 New_Database.dbt -rwxr-xr-x 1 oracle oinstall 9.3M Aug 27 2013 Seed_Database.ctl -rwxr-xr-x 1 oracle oinstall 263M Aug 27 2013 Seed_Database.dfb [oracle@rac2 response]$ [oracle@rac1 ~]$ cat /u01/dbca.rsp [GENERAL] RESPONSEFILE_VERSION = "11.2.0" OPERATION_TYPE = "createDatabase" [CREATEDATABASE] GDBNAME = "rac" SID = "rac" NODELIST=rac1,rac2 TEMPLATENAME = "General_Purpose.dbc" SYSPASSWORD = "oracle" SYSTEMPASSWORD = "oracle" SYSMANPASSWORD = "oracle" DBSNMPPASSWORD = "oracle" STORAGETYPE=ASM [oracle@rac1 ~]$ dbca -silent -responsefile /u01/dbca.rsp Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/rac1.log" for further details. [oracle@rac1 ~]$ cat /u01/app/oracle/cfgtoollogs/dbca/rac1.log Could not connect to ASM due to following error: ORA-12547: TNS:lost contact [oracle@rac1 ~]$ ls -l /u01/app/11.2.0/grid/bin/oracle -rwxr-x--x 1 grid oinstall 209836184 Aug 24 06:01 /u01/app/11.2.0/grid/bin/oracle [oracle@rac1 ~]$ chmod +s /u01/app/11.2.0/grid/bin/oracle chmod: changing permissions of ‘/u01/app/11.2.0/grid/bin/oracle’: Operation not permitted [oracle@rac1 ~]$ su - grid Password: Last login: Tue Aug 25 01:21:15 EDT 2020 on pts/1 [grid@rac1 ~]$ chmod +s /u01/app/11.2.0/grid/bin/oracle [grid@rac1 ~]$ ls -l /u01/app/11.2.0/grid/bin/oracle -rwsr-s--x 1 grid oinstall 209836184 Aug 24 06:01 /u01/app/11.2.0/grid/bin/oracle [grid@rac1 ~]$ [oracle@rac1 ~]$ dbca -silent -responsefile /u01/dbca.rsp Copying database files 1% complete 3% complete 9% complete 15% complete 21% complete 27% complete 30% complete Creating and starting Oracle instance 32% complete 36% complete 40% complete 44% complete 45% complete 48% complete 50% complete Creating cluster database views 52% complete 70% complete Completing Database Creation 73% complete 76% complete 85% complete 94% complete 100% complete Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/rac/rac.log" for further details.
If the error "Could not connect to ASM due to following error: ORA-12547: TNS:lost contact" appears,
check on both nodes whether /u01/app/11.2.0/grid/bin/oracle has the setuid/setgid (s) bits set for its owner and group:
[oracle@rac2 oracle]$ ls -l /u01/app/11.2.0/grid/bin/oracle && ssh rac1 ls -l /u01/app/11.2.0/grid/bin/oracle
-rwsr-s--x 1 grid oinstall 209836184 Aug 24 06:02 /u01/app/11.2.0/grid/bin/oracle
-rwsr-s--x 1 grid oinstall 209836184 Aug 24 06:01 /u01/app/11.2.0/grid/bin/oracle
[oracle@rac2 oracle]$
After the installation, log in and check the instance status:
[oracle@rac1 ~]$ sqlplus system/oracle@rac-san:1521/rac
SQL*Plus: Release 11.2.0.4.0 Production on Tue Aug 25 02:19:20 2020
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> desc gv$instance
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 INST_ID                                            NUMBER
 INSTANCE_NUMBER                                    NUMBER
 INSTANCE_NAME                                      VARCHAR2(16)
 HOST_NAME                                          VARCHAR2(64)
 VERSION                                            VARCHAR2(17)
 STARTUP_TIME                                       DATE
 STATUS                                             VARCHAR2(12)
 PARALLEL                                           VARCHAR2(3)
 THREAD#                                            NUMBER
 ARCHIVER                                           VARCHAR2(7)
 LOG_SWITCH_WAIT                                    VARCHAR2(15)
 LOGINS                                             VARCHAR2(10)
 SHUTDOWN_PENDING                                   VARCHAR2(3)
 DATABASE_STATUS                                    VARCHAR2(17)
 INSTANCE_ROLE                                      VARCHAR2(18)
 ACTIVE_STATE                                       VARCHAR2(9)
 BLOCKED                                            VARCHAR2(3)

SQL> select host_name,status,instance_name from gv$instance
  2  /

HOST_NAME                                                        STATUS
---------------------------------------------------------------- ------------
INSTANCE_NAME
----------------
rac1                                                             OPEN
rac1

rac2                                                             OPEN
rac2

SQL> column host_name format a30
SQL> /

HOST_NAME                      STATUS       INSTANCE_NAME
------------------------------ ------------ ----------------
rac1                           OPEN         rac1
rac2                           OPEN         rac2

SQL>
(八) Post-installation tests
Examples of srvctl commands:
[grid@rac2 ~]$ srvctl status instance -d rac -i rac1,rac2
Instance rac1 is running on node rac1
Instance rac2 is running on node rac2
[grid@rac2 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac2,rac1
[grid@rac2 ~]$ srvctl status asm -a
ASM is running on rac2,rac1
ASM is enabled.
[grid@rac2 ~]$ srvctl status database -d rac
Instance rac1 is running on node rac1
Instance rac2 is running on node rac2
[grid@rac2 ~]$ srvctl status diskgroup -g DATA
Disk Group DATA is running on rac2,rac1
[grid@rac2 ~]$ srvctl status scan_listener -i 1 -v
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node rac2
[grid@rac2 ~]$
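Beyond srvctl, the overall state of all cluster resources can be checked with crsctl (the crsctl command reference is linked in the last section). A hedged example, run as the grid user with the Grid home used in this article:

# Status of every cluster resource in tabular form.
/u01/app/11.2.0/grid/bin/crsctl stat res -t
# Health check of the local clusterware stack (CRS, CSS, EVM).
/u01/app/11.2.0/grid/bin/crsctl check crs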
Verify Oracle from the operating system as follows:
[oracle@rac1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Wed Jul 15 22:37:05 2020
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

Check the node and instance information:

SQL> desc gv$instance
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 INST_ID                                            NUMBER
 INSTANCE_NUMBER                                    NUMBER
 INSTANCE_NAME                                      VARCHAR2(16)
 HOST_NAME                                          VARCHAR2(64)
 VERSION                                            VARCHAR2(17)
 STARTUP_TIME                                       DATE
 STATUS                                             VARCHAR2(12)
 PARALLEL                                           VARCHAR2(3)
 THREAD#                                            NUMBER
 ARCHIVER                                           VARCHAR2(7)
 LOG_SWITCH_WAIT                                    VARCHAR2(15)
 LOGINS                                             VARCHAR2(10)
 SHUTDOWN_PENDING                                   VARCHAR2(3)
 DATABASE_STATUS                                    VARCHAR2(17)
 INSTANCE_ROLE                                      VARCHAR2(18)
 ACTIVE_STATE                                       VARCHAR2(9)
 BLOCKED                                            VARCHAR2(3)

SQL> column host_name format a10
SQL> select host_name ,instance_name,status,database_status from gv$instance
  2  /

HOST_NAME  INSTANCE_NAME    STATUS       DATABASE_STATUS
---------- ---------------- ------------ -----------------
rac1       rac1             OPEN         ACTIVE
rac2       rac2             OPEN         ACTIVE

Connect using IP address, port, and service name:

[oracle@rac1 ~]$ sqlplus system/oracle@192.168.2.105:1521/rac

SQL*Plus: Release 11.2.0.4.0 Production on Wed Jul 15 22:38:34 2020
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL>
Check which node the SCAN listener is running on; here it is node rac2:
[grid@rac2 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node rac2
Failover test: disable both network interfaces on rac2, then test the Oracle connection again.
Run the following commands on rac2: ifdown ens192, ifdown ens224
The remote session to rac2 is now lost.
Log in to rac1 and test connecting to Oracle:
[oracle@rac1 ~]$ sqlplus system/oracle@192.168.2.105:1521/rac
SQL*Plus: Release 11.2.0.4.0 Production on Wed Jul 15 23:21:56 2020
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> desc gv$instance
Name Null? Type
----------------------------------------- -------- ----------------------------
INST_ID NUMBER
INSTANCE_NUMBER NUMBER
INSTANCE_NAME VARCHAR2(16)
HOST_NAME VARCHAR2(64)
VERSION VARCHAR2(17)
STARTUP_TIME DATE
STATUS VARCHAR2(12)
PARALLEL VARCHAR2(3)
THREAD# NUMBER
ARCHIVER VARCHAR2(7)
LOG_SWITCH_WAIT VARCHAR2(15)
LOGINS VARCHAR2(10)
SHUTDOWN_PENDING VARCHAR2(3)
DATABASE_STATUS VARCHAR2(17)
INSTANCE_ROLE VARCHAR2(18)
ACTIVE_STATE VARCHAR2(9)
BLOCKED VARCHAR2(3)
SQL> column host_name format a10
SQL> select host_name,instance_name.status.database_status from gv$instance
2 /
select host_name,instance_name.status.database_status from gv$instance
*
ERROR at line 1:
ORA-00904: "INSTANCE_NAME"."STATUS"."DATABASE_STATUS": invalid identifier
SQL> c/./,
1* select host_name,instance_name,status.database_status from gv$instance
SQL> c/./,
1* select host_name,instance_name,status,database_status from gv$instance
SQL> /
HOST_NAME INSTANCE_NAME STATUS DATABASE_STATUS
---------- ---------------- ------------ -----------------
rac1 rac1 OPEN ACTIVE
SQL>
Checking the IP addresses on rac1 shows that 192.168.2.105 (rac-san) has been taken over by node rac1.
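A hedged way to confirm the takeover from rac1 (addresses as used in this article; the interface carrying the public network may differ in your setup):

# On rac1: after the failover, the SCAN address 192.168.2.105 and the rac2 VIP
# should appear among the addresses bound on rac1.
ip addr show | grep 192.168.2
# Where the SCAN listener and the node VIPs currently run (run as grid).
srvctl status scan_listener
srvctl status vip -n rac1
srvctl status vip -n rac2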
Testing complete.
(九) Installation problems, solutions, and summary
9.1: Error when running the root.sh script
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
Solution: install Oracle patch p18370031_112040_Linux-x86-64.zip.
Oracle's official note on 11.2.0.4 Grid:
https://docs.oracle.com/cd/E11882_01/relnotes.112/e23558/toc.htm#CJAJEBGG
7.2.3 Oracle Grid Infrastructure Installation Issue
During the Oracle Grid Infrastructure installation, you must apply patch 18370031 before configuring the software that is installed. The timing of applying the patch is important and is described in detail in the Note 1951613.1 on My Oracle Support. This patch ensures that the clusterware stack is configured to use systemd for clusterware processes, as Oracle Linux 7 uses systemd for all services.
This issue is tracked with Oracle bug 18370031, which was logged for release 12.1.0.2, but the patch is for release 11.2.0.4.
To resolve the error, install patch 18370031. The steps are:
- 1. As root, run yum install perl*
- 2. Roll back the root.sh configuration: /u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose
- 3. Unzip the patch archive, change into the 18370031 directory, and run /u01/app/11.2.0/grid/OPatch/opatch apply. When the patch completes without errors, rerun the root.sh script.
In detail: download the patch, unzip it, change into the patch directory, and run the following:
[grid@rac1 ~]$ cd 18370031/
[grid@rac1 18370031]$
[grid@rac1 18370031]$ ls
custom etc files
[grid@rac1 18370031]$ /u01/app/11.2.0/grid/OPatch/opatch apply
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/11.2.0/grid
Central Inventory : /u01/app/oraInventory
from : /u01/app/11.2.0/grid/oraInst.loc
OPatch version : 11.2.0.3.4
OUI version : 11.2.0.4.0
Log file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/18370031_Jul_24_2020_18_36_32/apply2020-07-24_18-36-32PM_1.log
Applying interim patch '18370031' to OH '/u01/app/11.2.0/grid'
Verifying environment and performing prerequisite checks...
All checks passed.
This node is part of an Oracle Real Application Cluster.
Remote nodes: 'rac2'
Local node: 'rac1'
Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/11.2.0/grid')
Is the local system ready for patching? [y|n]
Could not recognize input. Please re-enter.
y
User Responded with: Y
Backing up files...
Patching component oracle.crs, 11.2.0.4.0...
Verifying the update...
The local system has been patched. You can restart Oracle instances on it.
Patching in rolling mode.
The node 'rac2' will be patched next.
Please shutdown Oracle instances running out of this ORACLE_HOME on 'rac2'.
(Oracle Home = '/u01/app/11.2.0/grid')
Is the node ready for patching? [y|n]
Y
User Responded with: Y
Updating nodes 'rac2'
Apply-related files are:
FP = "/u01/app/11.2.0/grid/.patch_storage/18370031_Aug_15_2014_16_14_40/rac/copy_files.txt"
DP = "/u01/app/11.2.0/grid/.patch_storage/18370031_Aug_15_2014_16_14_40/rac/copy_dirs.txt"
MP = "/u01/app/11.2.0/grid/.patch_storage/18370031_Aug_15_2014_16_14_40/rac/make_cmds.txt"
RC = "/u01/app/11.2.0/grid/.patch_storage/18370031_Aug_15_2014_16_14_40/rac/remote_cmds.txt"
Instantiating the file "/u01/app/11.2.0/grid/.patch_storage/18370031_Aug_15_2014_16_14_40/rac/copy_files.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/11.2.0/grid/.patch_storage/18370031_Aug_15_2014_16_14_40/rac/copy_files.txt" with actual path.
Propagating files to remote nodes...
Instantiating the file "/u01/app/11.2.0/grid/.patch_storage/18370031_Aug_15_2014_16_14_40/rac/copy_dirs.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/11.2.0/grid/.patch_storage/18370031_Aug_15_2014_16_14_40/rac/copy_dirs.txt" with actual path.
Propagating directories to remote nodes...
Instantiating the file "/u01/app/11.2.0/grid/.patch_storage/18370031_Aug_15_2014_16_14_40/rac/make_cmds.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/11.2.0/grid/.patch_storage/18370031_Aug_15_2014_16_14_40/rac/make_cmds.txt" with actual path.
Running command on remote node 'rac2':
cd /u01/app/11.2.0/grid/srvm/lib; /usr/bin/make -f ins_srvm.mk install_srvm ORACLE_HOME=/u01/app/11.2.0/grid || echo REMOTE_MAKE_FAILED::>&2
Running command on remote node 'rac2':
cd /u01/app/11.2.0/grid/racg/lib; /usr/bin/make -f ins_has.mk install ORACLE_HOME=/u01/app/11.2.0/grid || echo REMOTE_MAKE_FAILED::>&2
The node 'rac2' has been patched. You can restart Oracle instances on it.
There were relinks on remote nodes. Remember to check the binary size and timestamp on the nodes 'rac2' .
The following make commands were invoked on remote nodes:
'cd /u01/app/11.2.0/grid/srvm/lib; /usr/bin/make -f ins_srvm.mk install_srvm ORACLE_HOME=/u01/app/11.2.0/grid
cd /u01/app/11.2.0/grid/racg/lib; /usr/bin/make -f ins_has.mk install ORACLE_HOME=/u01/app/11.2.0/grid
'
Patch 18370031 successfully applied
Log file location: /u01/app/11.2.0/grid/cfgtoollogs/opatch/18370031_Jul_24_2020_18_36_32/apply2020-07-24_18-36-32PM_1.log
OPatch succeeded.
[grid@rac1 18370031]$
Patching complete.
9.2: Error INS-35354 when installing the Oracle software
The installer shows the following message:
The system on which you are attempting to install Oracle RAC is not part of a valid cluster.
Solution: edit /u01/app/oraInventory/ContentsXML/inventory.xml, find the element <HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1"> and add the attribute CRS="true". Make the change on both nodes, then rerun the installation (a scripted version of this edit is sketched after the file listing below).
The corrected file looks like this:
[oracle@rac2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
<SAVED_WITH>11.2.0.4.0</SAVED_WITH>
<MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
<NODE_LIST>
<NODE NAME="rac1"/>
<NODE NAME="rac2"/>
</NODE_LIST>
</HOME>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/db_1" TYPE="O" IDX="2">
<NODE_LIST>
<NODE NAME="rac1"/>
<NODE NAME="rac2"/>
</NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@rac2 ~]$
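The same edit can be scripted; a hedged sketch (run on each node as the inventory owner; back up the file first, and adjust the HOME NAME and paths if yours differ):

# Append CRS="true" to the Grid home entry in the central inventory.
INV=/u01/app/oraInventory/ContentsXML/inventory.xml
cp "$INV" "$INV".bak
sed -i 's|\(<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1"\)>|\1 CRS="true">|' "$INV"
# Verify the change.
grep Ora11g_gridinfrahome1 "$INV"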
9.3 Additional link: silent Oracle installation
See my Sina blog post below. Note: because Sina blog posts cannot be edited, a few parameters in that post are wrong and cannot be corrected there; it is for reference only.
http://blog.sina.com.cn/s/blog_1722831d10102xm4x.html
Additional reference:
Notes on the Oracle 11.2.0.4 installation media:
To install only the Oracle database software, download p13390677_112040_Linux-x86-64_1of7.zip and p13390677_112040_Linux-x86-64_2of7.zip; installing Grid additionally requires p13390677_112040_Linux-x86-64_3of7.zip. See readme.html inside the Grid installation package for details.
(十) Adding disks to ASM
SSH to the ESXi host to create the additional shared disks; a hedged sketch of the commands follows the configuration notes below.
- Configuration in the virtual machines
Both nodes must be configured.
Configuration complete.
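A hedged sketch of the ESXi-side disk creation. The file names mysharedisk_05.vmdk and mysharedisk_06.vmdk, the datastore path placeholder, and the 20 GB size (the new ASM disks later report TOTAL_MB 20479) are assumptions; adjust them to your environment.

# On the ESXi host, in the same datastore directory as the existing shared disks:
cd /vmfs/volumes/<your-datastore>/<shared-disk-folder>
vmkfstools -c 20480m -a lsilogic -d eagerzeroedthick mysharedisk_05.vmdk
vmkfstools -c 20480m -a lsilogic -d eagerzeroedthick mysharedisk_06.vmdk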
- Linux operations on the RAC nodes
Log in to Linux (both nodes must be configured) and proceed as follows.
Run fdisk -l; the two newly added disks are now visible.
Node 1
Node 2
Create partitions on the new disks /dev/sdi and /dev/sdj. This needs to be done on only one node (a hedged sketch follows).
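A hedged sketch of the partitioning step (interactive fdisk works just as well); run it on one node only, since the partition table lives on the shared disk:

# Create one primary partition spanning each new disk.
for disk in /dev/sdi /dev/sdj; do
    echo -e "n\np\n1\n\n\nw" | fdisk "$disk"
done
# On the other node, re-read the partition tables afterwards (for example with partprobe).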
Edit the file /usr/lib/udev/rules.d/60-raw.rules and add entries for the two new disks:
vi /usr/lib/udev/rules.d/60-raw.rules
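A hedged sketch of the entries to append, following the pattern presumably used for the existing raw1-raw7 bindings earlier in this article. The device names sdi1/sdj1, the raw numbers raw8/raw9, and the grid:asmadmin ownership are assumptions based on the fdisk output and the v$asm_disk listing later in this section; match them to your existing rules.

# Append as root on rac1 (the file is then copied to rac2 with scp, as described below).
cat >> /usr/lib/udev/rules.d/60-raw.rules <<'EOF'
ACTION=="add", KERNEL=="sdi1", RUN+="/bin/raw /dev/raw/raw8 %N"
ACTION=="add", KERNEL=="sdj1", RUN+="/bin/raw /dev/raw/raw9 %N"
ACTION=="add", KERNEL=="raw8", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw9", OWNER="grid", GROUP="asmadmin", MODE="660"
EOF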
After editing, copy the file to node rac2 with scp:
scp /usr/lib/udev/rules.d/60-raw.rules rac2:/usr/lib/udev/rules.d/60-raw.rules
With the changes in place, reboot rac1 (node 1) first; after node 1 is back up, reboot node 2. The Oracle service is not interrupted during this rolling reboot.
After the reboot, run ls -l /dev/raw/ to confirm the new raw devices are present.
- SQL command examples as the grid user
Steps: at the OS level switch to the grid user, log in with sqlplus / as sysasm, and run the following.
The session transcript:
[root@rac1 ~]# su - grid
Last login: Fri Jul 17 01:03:00 EDT 2020 on pts/1
[grid@rac1 ~]$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.4.0 Production on Fri Jul 17 01:40:36 2020
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL>
SQL>
SQL>
SQL> set lines 200
SQL> column path format a50
SQL> select group_number ,disk_number,mode_status ,name ,path from v$asm_disk
2 /
GROUP_NUMBER DISK_NUMBER MODE_ST NAME PATH
------------ ----------- ------- ------------------------------ --------------------------------------------------
0 1 ONLINE /dev/raw/raw9
0 2 ONLINE /dev/raw/raw8
1 0 ONLINE DATA_0000 /dev/raw/raw1
1 3 ONLINE DATA_0004 /dev/raw/raw4
1 6 ONLINE DATA_0007 /dev/raw/raw7
1 5 ONLINE DATA_0006 /dev/raw/raw6
1 4 ONLINE DATA_0005 /dev/raw/raw5
1 2 ONLINE DATA_0003 /dev/raw/raw3
1 1 ONLINE DATA_0001 /dev/raw/raw2
9 rows selected.
SQL> alter diskgroup DATA add disk '/dev/raw/raw8' name DATA_0008
2 /
Diskgroup altered.
SQL> c/8/9
1* alter diskgroup DATA add disk '/dev/raw/raw9' name DATA_0008
SQL> c/8/9
1* alter diskgroup DATA add disk '/dev/raw/raw9' name DATA_0009
SQL> /
Diskgroup altered.
SQL> select group_number ,disk_number,mode_status ,name ,path from v$asm_disk
2 /
GROUP_NUMBER DISK_NUMBER MODE_ST NAME PATH
------------ ----------- ------- ------------------------------ --------------------------------------------------
1 0 ONLINE DATA_0000 /dev/raw/raw1
1 8 ONLINE DATA_0009 /dev/raw/raw9
1 7 ONLINE DATA_0008 /dev/raw/raw8
1 3 ONLINE DATA_0004 /dev/raw/raw4
1 6 ONLINE DATA_0007 /dev/raw/raw7
1 5 ONLINE DATA_0006 /dev/raw/raw6
1 4 ONLINE DATA_0005 /dev/raw/raw5
1 2 ONLINE DATA_0003 /dev/raw/raw3
1 1 ONLINE DATA_0001 /dev/raw/raw2
9 rows selected.
SQL> c/name/name ,total_mb
1* select group_number ,disk_number,mode_status ,name ,total_mb ,path from v$asm_disk
SQL> /
GROUP_NUMBER DISK_NUMBER MODE_ST NAME TOTAL_MB PATH
------------ ----------- ------- ------------------------------ ---------- --------------------------------------------------
1 0 ONLINE DATA_0000 10239 /dev/raw/raw1
1 8 ONLINE DATA_0009 20479 /dev/raw/raw9
1 7 ONLINE DATA_0008 20479 /dev/raw/raw8
1 3 ONLINE DATA_0004 10239 /dev/raw/raw4
1 6 ONLINE DATA_0007 1023 /dev/raw/raw7
1 5 ONLINE DATA_0006 1023 /dev/raw/raw6
1 4 ONLINE DATA_0005 1023 /dev/raw/raw5
1 2 ONLINE DATA_0003 10239 /dev/raw/raw3
1 1 ONLINE DATA_0001 10239 /dev/raw/raw2
9 rows selected.
SQL>
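After new disks are added, ASM rebalances the disk group in the background. A hedged way to watch the progress as the grid user (the query returns no rows once the rebalance has finished):

# Run as grid with the ASM environment set (for example ORACLE_SID=+ASM1 on node 1).
sqlplus -s / as sysasm <<'EOF'
set lines 200
column operation format a10
select group_number, operation, state, power, sofar, est_work, est_minutes
  from v$asm_operation;
EOF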
For creating and altering ASM disk groups, see the official documentation:
https://docs.oracle.com/cd/E11882_01/server.112/e18951/asmdiskgrps.htm#OSTMG10040
Done.
(十一) Reference links
- Official documentation:
Installation references
Grid Infrastructure Installation Guide for Linux
https://docs.oracle.com/cd/E11882_01/install.112/e41961/toc.htm
Database 2 Day + Real Application Clusters Guide
https://docs.oracle.com/cd/E11882_01/rac.112/e17264/install_rac.htm#TDPRC169
Command references
SQL*Plus commands
https://docs.oracle.com/cd/E11882_01/server.112/e16604/toc.htm
crsctl command reference
https://docs.oracle.com/cd/E11882_01/rac.112/e41959/crsref.htm#CWADD91145
srvctl command reference
https://docs.oracle.com/cd/E11882_01/rac.112/e41960/srvctladmin.htm#RACAD005
Official reference for the packages required by Grid:
https://docs.oracle.com/cd/E11882_01/install.112/e41961/prelinux.htm#CWLIN225
Official guide to creating the required groups and users:
https://docs.oracle.com/cd/E11882_01/install.112/e41961/prelinux.htm#CWLIN178
The Oracle 11.2.0.4 installation packages and patches used in this article:
Baidu Cloud link: https://pan.baidu.com/s/1WvdpiTs9m3es5vyTBzHb4w
Extraction code: 1111
Sample scripts for the basic installation steps in this lab environment:
https://files.cnblogs.com/files/wenxiao1-2-3-4/OracleRAC%E8%84%9A%E6%9C%AC.zip
Operating system download links:
CentOS image download
http://mirrors.sohu.com/centos/7.6.1810/isos/x86_64/
Red Hat
Link: https://pan.baidu.com/s/16-TgKVk_nAxLeakRsOPKkg
Extraction code: 1111
Technical exchange:
Email: wenxiaomst@outlook.com
WeChat: w8686512