(1) Environment Preparation
| Host OS | Windows 10 |
| Virtualization platform | VMware Workstation 12 |
| Guest OS | RHEL 5.5 x86 (32-bit): Linux.5.5.for.x86.rhel-server-5.5-i386-dvd.iso |
| Grid version | linux_11gR2_grid.zip (32-bit) |
| Oracle version | linux_11gR2_database_1of2 and linux_11gR2_database_2of2 (32-bit) |
| Shared storage | ASM |
(2) Operating System Installation
(2.1) Installing the Operating System
Installing the OS is relatively straightforward and is not described in detail. The system configuration is roughly as shown below; devices will be added and removed later as needed.
(2.2) Installing VMware Tools
To enable drag-and-drop, copy/paste, and shared folders between the host and the virtual machines, we need VMware Tools; its installer is bundled with VMware Workstation. Install it step by step as follows.
step 1: Virtual Machine -> Install VMware Tools
step 2: run mount to check whether the VMware Tools virtual CD-ROM image is attached; an iso9660 entry pointing under /media means the VMware Tools package is mounted there
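On RHEL 5 the attached image typically shows up in the mount output like this (the device and mount-point names vary, so treat this line as an illustration only):
/dev/hdc on /media/VMware Tools type iso9660 (ro,nosuid,nodev)
If nothing is mounted, it can be attached manually:
[root@Redhat ~]# mount /dev/cdrom /media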
step 3: change to the working directory /tmp and extract the VMware Tools package
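For example — assuming the image is mounted at /media/VMware Tools and the tarball carries the usual VMwareTools-&lt;version&gt;.tar.gz name:
[root@Redhat ~]# cp "/media/VMware Tools"/VMwareTools-*.tar.gz /tmp
[root@Redhat ~]# cd /tmp
[root@Redhat tmp]# tar -zxvf VMwareTools-*.tar.gz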
step 4: start the VMware Tools installation
[root@Redhat tmp]# cd vmware-tools-distrib
[root@Redhat vmware-tools-distrib]# ./vmware-install.pl
At every prompt, simply press Enter to accept the default.
step 5: when the installation finishes, reboot the virtual machine
[root@rac1 ~]# reboot
step 6: test whether VMware Tools installed successfully
Drag a document from the host into the virtual machine; if the drag succeeds, VMware Tools is working.
(3) Operating System Configuration
(3.1) Network Configuration
(3.1.1) Hostname Configuration
① Node 1:
[root@rac1 ~]# vim /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=rac1
② Node 2:
[root@rac2 ~]# vim /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=rac2
(3.1.2) IP Configuration
To configure the database on static IPs, I added a second NIC to each virtual machine and set it to host-only mode. The NIC is added as follows:
step 1: add a network
In VMware, click Edit -> Virtual Network Editor -> Change Settings -> Add Network, choose the options shown in the screenshot, and save.
step 2: add the NIC to both virtual machines
Select the VM, then Settings -> Add -> Network Adapter, and choose Custom; the custom network is the one defined in the previous step. The result looks like this:
step 3: based on the network setup, we plan the IP addresses as follows:
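(The plan below matches the /etc/hosts file configured in section (3.1.3).)
| Node | Public IP (eth1) | VIP | Private IP (eth2) |
| rac1 | 192.168.19.10 | 192.168.19.12 | 192.168.15.10 |
| rac2 | 192.168.19.11 | 192.168.19.13 | 192.168.15.11 |
| SCAN (rac-scan) | 192.168.19.14 | - | - |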
Next comes the IP configuration itself. For node 1 (hostname rac1):
① configure eth1
-- comment out or delete BOOTPROTO
-- do not change the hardware address
-- set the NIC to start on boot
-- add the IP address and netmask
[root@rac1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
# make the changes shown below
# Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
DEVICE=eth1
# BOOTPROTO=dhcp
HWADDR=00:0C:29:9C:DF:6A
ONBOOT=yes
IPADDR=192.168.19.10
NETMASK=255.255.255.0
② configure eth2
[root@rac1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth2
# make the changes shown below
# Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
DEVICE=eth2
# BOOTPROTO=dhcp
ONBOOT=yes
HWADDR=00:0C:29:6G:8C:5F
IPADDR=192.168.15.10
NETMASK=255.255.255.0
(Note: the HWADDR here is mistyped and does not match the VM's real MAC address — this is the cause of the "different MAC address than expected" error handled below.)
For node 2 (hostname rac2), follow the same pattern as node 1:
① configure eth1
[root@rac2 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
# Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
DEVICE=eth1
# BOOTPROTO=dhcp
ONBOOT=yes
HWADDR=00:0c:29:b0:4e:b6
IPADDR=192.168.19.11
NETMASK=255.255.255.0
② configure eth2
[root@rac2 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth2
# Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
DEVICE=eth2
# BOOTPROTO=dhcp
ONBOOT=yes
HWADDR=00:0c:29:b0:4e:c0
IPADDR=192.168.15.11
NETMASK=255.255.255.0
(3.1.3) hosts File Configuration
Configure the hosts file on both nodes; node 1 shown here:
[root@rac1 ~]# vim /etc/hosts
# append at the end of the file
#eth1 public
192.168.19.10 rac1
192.168.19.11 rac2
#virtual
192.168.19.12 rac1-vip
192.168.19.13 rac2-vip
#eth2 private
192.168.15.10 rac1-priv
192.168.15.11 rac2-priv
#scan
192.168.19.14 rac-scan
After configuring, restart the network service:
[root@rac1 ~]# service network restart
Restarting the network produced a small error:
Device eth2 has different MAC address than expected, ignoring.
[FAILED]
Other write-ups pointed to the cause: the MAC address in the NIC config file differs from the VM's actual MAC address. The fix:
step 1: check the machine's actual MAC address (the HWaddr field in the output below)
[root@rac1 ~]# ifconfig eth2
eth2      Link encap:Ethernet  HWaddr 00:0C:29:9C:DF:7E
          inet addr:192.168.15.10  Bcast:192.168.15.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe9c:df7e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:30677 errors:0 dropped:0 overruns:0 frame:0
          TX packets:26377 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:15839769 (15.1 MiB)  TX bytes:10819637 (10.3 MiB)
          Interrupt:83 Base address:0x2824
step 2: check the MAC address we configured (HWADDR in the config file)
[root@rac1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth2
# Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
DEVICE=eth2
# BOOTPROTO=dhcp
ONBOOT=yes
HWADDR=00:0C:29:6G:8C:5F
IPADDR=192.168.15.10
NETMASK=255.255.255.0
The configured HWADDR clearly differs from the actual MAC address. Change HWADDR in the eth2 config file to the real value from step 1 and restart the network service.
(3.1.4) Testing the Network
Test that the host can reach the virtual machines and that the virtual machines can reach each other.
① on the host, run:
ping 192.168.15.10
ping 192.168.15.11
ping 192.168.19.10
ping 192.168.19.11
② on node 1, run the following (192.168.15.1 and 192.168.19.1 are the host's addresses):
ping 192.168.15.1
ping 192.168.15.11
ping 192.168.19.1
ping 192.168.19.11
③ on node 2, run:
ping 192.168.15.1
ping 192.168.15.10
ping 192.168.19.1
ping 192.168.19.10
If every ping succeeds, the network is configured correctly.
(3.2) Shared Storage Configuration
(3.2.1) Storage Plan
| Disk role | Count | Size each (GB) |
| OCR | 3 | 1 |
| DATA | 3 | 10 |
| ARCHIVE | 1 | 10 |
(3.2.2) Creating the Shared Disks
step 1: open a command prompt as administrator, change to the VMware Workstation installation directory, and run the following to create the shared disks:
vmware-vdiskmanager.exe -c -s 1000MB -a lsilogic -t 2 "F:\vmwareVirtueMachine\RAC\ShareDisk\votingdisk1.vmdk"
vmware-vdiskmanager.exe -c -s 1000MB -a lsilogic -t 2 "F:\vmwareVirtueMachine\RAC\ShareDisk\votingdisk2.vmdk"
vmware-vdiskmanager.exe -c -s 1000MB -a lsilogic -t 2 "F:\vmwareVirtueMachine\RAC\ShareDisk\votingdisk3.vmdk"
vmware-vdiskmanager.exe -c -s 10000MB -a lsilogic -t 2 "F:\vmwareVirtueMachine\RAC\ShareDisk\datadisk1.vmdk"
vmware-vdiskmanager.exe -c -s 10000MB -a lsilogic -t 2 "F:\vmwareVirtueMachine\RAC\ShareDisk\datadisk2.vmdk"
vmware-vdiskmanager.exe -c -s 10000MB -a lsilogic -t 2 "F:\vmwareVirtueMachine\RAC\ShareDisk\datadisk3.vmdk"
vmware-vdiskmanager.exe -c -s 10000MB -a lsilogic -t 2 "F:\vmwareVirtueMachine\RAC\ShareDisk\archdisk1.vmdk"
step 2: edit each virtual machine's .vmx configuration file (in the VM's directory) and append the following at the end:
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi1:1.present = "TRUE"
scsi1:1.mode = "independent-persistent"
scsi1:1.filename = "F:\vmwareVirtueMachine\RAC\ShareDisk\votingdisk1.vmdk"
scsi1:1.deviceType = "plainDisk"
scsi1:2.present = "TRUE"
scsi1:2.mode = "independent-persistent"
scsi1:2.filename = "F:\vmwareVirtueMachine\RAC\ShareDisk\votingdisk2.vmdk"
scsi1:2.deviceType = "plainDisk"
scsi1:3.present = "TRUE"
scsi1:3.mode = "independent-persistent"
scsi1:3.filename = "F:\vmwareVirtueMachine\RAC\ShareDisk\votingdisk3.vmdk"
scsi1:3.deviceType = "plainDisk"
scsi1:4.present = "TRUE"
scsi1:4.mode = "independent-persistent"
scsi1:4.filename = "F:\vmwareVirtueMachine\RAC\ShareDisk\datadisk1.vmdk"
scsi1:4.deviceType = "plainDisk"
scsi1:5.present = "TRUE"
scsi1:5.mode = "independent-persistent"
scsi1:5.filename = "F:\vmwareVirtueMachine\RAC\ShareDisk\datadisk2.vmdk"
scsi1:5.deviceType = "plainDisk"
scsi1:6.present = "TRUE"
scsi1:6.mode = "independent-persistent"
scsi1:6.filename = "F:\vmwareVirtueMachine\RAC\ShareDisk\datadisk3.vmdk"
scsi1:6.deviceType = "plainDisk"
scsi1:7.present = "TRUE"
scsi1:7.mode = "independent-persistent"
scsi1:7.filename = "F:\vmwareVirtueMachine\RAC\ShareDisk\archdisk1.vmdk"
scsi1:7.deviceType = "plainDisk"
disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.DataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
Reboot both virtual machines:
[root@rac1 ~]# reboot
The final virtual machine hardware configuration looks roughly like this:
(4) Oracle & Grid Pre-configuration
(4.1) Creating Users
Create the users and installation directories on both nodes:
/usr/sbin/groupadd -g 1000 oinstall
/usr/sbin/groupadd -g 1020 asmadmin
/usr/sbin/groupadd -g 1021 asmdba
/usr/sbin/groupadd -g 1022 asmoper
/usr/sbin/groupadd -g 1031 dba
/usr/sbin/groupadd -g 1032 oper
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
useradd -u 1101 -g oinstall -G dba,asmdba,oper oracle
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/grid
mkdir /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
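Verify the accounts and give both users passwords — the passwords are needed for the SSH equivalence setup in section (4.3):
[root@rac1 ~]# id grid
[root@rac1 ~]# id oracle
[root@rac1 ~]# passwd grid
[root@rac1 ~]# passwd oracle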
The resulting directory tree:
[root@rac1 u01]# tree
.
`-- app
|-- 11.2.0
| `-- grid
|-- grid
`-- oracle
(4.2) Configuring Environment Variables
(4.2.1) Configure the oracle user's environment variables; set ORACLE_SID according to the host:
[oracle@rac1 ~]$ cd
[oracle@rac1 ~]$ vi .bash_profile
# add the following
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=rac1        # on node 2: export ORACLE_SID=rac2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022
(4.2.2) Configure the grid user's environment variables; set ORACLE_SID according to the host:
[grid@rac1 ~]$ cd
[grid@rac1 ~]$ vi .bash_profile
# add the following
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+asm1       # on node 2: export ORACLE_SID=+asm2
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022
(As it turns out, the ASM instance names are actually uppercase — see Problem 1 at the end of this article.)
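Reload the profile so the variables take effect in the current session (do the same for the oracle user):
[grid@rac1 ~]$ source ~/.bash_profile
[grid@rac1 ~]$ echo $ORACLE_SID $ORACLE_HOME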
(4.3) Configuring SSH Equivalence Between Nodes
(4.3.1) Configure SSH equivalence for the oracle user
step 1: run on both nodes
su - oracle
mkdir ~/.ssh
chmod 755 ~/.ssh
/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa
step 2: gather all the keys into a single authorized_keys file; run on node 1
su - oracle
ssh rac1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh rac1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
step 3: node 1 now holds the complete key file; copy it to node 2
[oracle@rac1 ~]$ cd ~/.ssh/
[oracle@rac1 .ssh]$ scp authorized_keys rac2:~/.ssh/
[oracle@rac2 .ssh]$ chmod 600 authorized_keys
step 4: run the following commands on both nodes (answering yes at the first-connection prompts caches the host keys):
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date
step 5: verify — if node 1 can connect to node 2 over ssh without a password prompt, equivalence is configured
[oracle@rac1 ~]$ ssh rac2
[oracle@rac2 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1021(asmdba),1031(dba),1032(oper)
[oracle@rac2 ~]$ hostname
rac2
(4.3.2) Configure SSH equivalence for the grid user
Identical to the oracle user's procedure; just run the same steps as the grid user.
(4.4) Configuring Kernel Parameters
(4.4.1) Kernel parameter settings
[root@rac1 ~]# vi /etc/sysctl.conf
# append at the end
kernel.msgmnb = 65536
kernel.msgmax = 65536
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 2002012160
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
net.ipv4.tcp_wmem = 262144 262144 262144
net.ipv4.tcp_rmem = 4194304 4194304 4194304
Parameter notes:
kernel.shmmni: maximum number of shared memory segments system-wide
fs.file-max: maximum number of file handles the system allows
net.core.rmem_default: default socket receive buffer size
net.core.rmem_max: maximum socket receive buffer size
net.core.wmem_default: default socket send buffer size
net.core.wmem_max: maximum socket send buffer size
net.ipv4.ip_local_port_range: range of IPv4 ports available to applications
Make the new kernel parameters take effect:
[root@rac1 ~]# sysctl -p
(4.4.2) Configure shell limits for the oracle and grid users
[root@rac1 ~]# vi /etc/security/limits.conf
# append at the end
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
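Oracle's prerequisite documentation for this release also has you enable pam_limits so these limits are applied to login sessions (worth double-checking against your exact OS release):
[root@rac1 ~]# vi /etc/pam.d/login
# append:
session    required     pam_limits.so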
(4.5) Configuring the Shared Storage
step 1: list the disks
[root@rac1 ~]# fdisk -l
The output shows the disks added earlier, /dev/sdb through /dev/sdh.
step 2: partition the disks. Because they are shared, this only needs to be done on one node.
# partition on node 1; /dev/sdb shown as the example:
[root@rac1 ~]# fdisk /dev/sdb
The number of cylinders for this disk is set to 3824.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-3824, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-3824, default 3824):

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
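The remaining disks (sdc through sdh) get the identical single full-size primary partition. A sketch that feeds fdisk the same answers non-interactively — it assumes all six disks are blank, so review it before running:
for d in /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh; do
    # answers: n (new), p (primary), 1, default first cylinder, default last cylinder, w (write)
    printf 'n\np\n1\n\n\nw\n' | fdisk "$d"
done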
step 3: bind the raw devices; run on both nodes
[root@rac2 ~]# vi /etc/udev/rules.d/60-raw.rules
# append:
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdf1", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="sdg1", RUN+="/bin/raw /dev/raw/raw6 %N"
ACTION=="add", KERNEL=="sdh1", RUN+="/bin/raw /dev/raw/raw7 %N"
KERNEL=="raw[1]", MODE="0660", OWNER="grid", GROUP="asmadmin"
KERNEL=="raw[2]", MODE="0660", OWNER="grid", GROUP="asmadmin"
KERNEL=="raw[3]", MODE="0660", OWNER="grid", GROUP="asmadmin"
KERNEL=="raw[4]", MODE="0660", OWNER="grid", GROUP="asmadmin"
KERNEL=="raw[5]", MODE="0660", OWNER="grid", GROUP="asmadmin"
KERNEL=="raw[6]", MODE="0660", OWNER="grid", GROUP="asmadmin"
KERNEL=="raw[7]", MODE="0660", OWNER="grid", GROUP="asmadmin"
Start udev (run on both nodes):
[root@rac2 ~]# start_udev
Check the raw devices (on both nodes):
[root@rac1 ~]# raw -qa
/dev/raw/raw1: bound to major 8, minor 17
/dev/raw/raw2: bound to major 8, minor 33
/dev/raw/raw3: bound to major 8, minor 49
/dev/raw/raw4: bound to major 8, minor 65
/dev/raw/raw5: bound to major 8, minor 81
/dev/raw/raw6: bound to major 8, minor 97
/dev/raw/raw7: bound to major 8, minor 113
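Also confirm that the ownership from the udev rules took effect — each device should show grid:asmadmin with mode 0660:
[root@rac1 ~]# ls -l /dev/raw/raw*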
(4.6) Installing Dependency Packages
(4.6.1) Configure a shared folder (on both nodes)
step 1: shut down the virtual machine
step 2: Settings -> Options -> Shared Folders -> Always enabled -> Add, then add the OS installation media; for convenience, add the grid and oracle installation packages as well.
step 3: check the result
[root@rac1 ~]# cd /mnt/hgfs/soft/
[root@rac1 soft]# ls
database  linux_11gR2_database_1of2.zip  Linux.5.5.for.x86.rhel-server-5.5-i386-dvd.iso
grid      linux_11gR2_database_2of2.zip
(4.6.2) Configure a yum repository
step 1: create a mount point for the ISO and mount it
[root@rac1 /]# mkdir /yum
[root@rac1 /]# cd /mnt/hgfs/soft/
[root@rac1 soft]# mount -o loop Linux.5.5.for.x86.rhel-server-5.5-i386-dvd.iso /yum
step 2: locate the rpm packages
[root@rac1 Server]# cd /yum/Server/
[root@rac1 Server]# ll
...
-r--r--r-- 433 root root  102853 Jan 19  2007 zlib-devel-1.2.3-3.i386.rpm
-r--r--r-- 368 root root 1787914 May  6  2009 zsh-4.2.6-3.el5.i386.rpm
-r--r--r-- 368 root root  369147 May  6  2009 zsh-html-4.2.6-3.el5.i386.rpm
[root@rac1 Server]# ll | wc -l
2351
step 3: configure the Linux OS media as a yum repository
[root@rac1 ~]# cd /etc/yum.repos.d/
[root@rac1 yum.repos.d]# ls
rhel-source.repo
[root@rac1 yum.repos.d]# rm -rf rhel-source.repo
[root@rac1 yum.repos.d]# vi lijiaman.repo
# contents of the new repo file:
[dvdinfo]
name=my_yum
baseurl=file:///yum/Server
enabled=1
gpgcheck=0
step 4: build the yum metadata cache
[root@rac1 yum.repos.d]# yum makecache
step 5: so that yum still works after a reboot, add the mount to rc.local:
[root@rac1 yum.repos.d]# vim /etc/rc.local
# append (use the full path so the mount works from rc.local):
mount -o loop /mnt/hgfs/soft/Linux.5.5.for.x86.rhel-server-5.5-i386-dvd.iso /yum
(4.6.3) Install the dependency packages
Different operating systems need different packages; see the official guide:
http://docs.oracle.com/cd/E11882_01/install.112/e41961/prelinux.htm#CWLIN
The packages I installed:
yum install -y binutils-*
yum install -y compat-libstdc++-*
yum install -y elfutils-libelf-*
yum install -y elfutils-libelf-devel-static-0.125
yum install -y gcc-4.1.2
yum install -y gcc-c++-4.1.2
yum install -y glibc-2.5-24
yum install -y glibc-common-2.5
yum install -y glibc-devel-2.5
yum install -y glibc-headers-2.5
yum install -y kernel-headers-2.6.18
yum install -y ksh-20060214
yum install -y libaio-0.3.106
yum install -y libaio-devel-0.3.106
yum install -y libgcc-4.1.2
yum install -y libgomp-4.1.2
yum install -y libstdc++-4.1.2
yum install -y libstdc++-devel-4.1.2
yum install -y make-3.81
yum install -y sysstat-7.0.2
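A quick sanity check that the key packages landed (exact package names can vary slightly between releases):
rpm -q binutils gcc gcc-c++ glibc glibc-devel ksh libaio libaio-devel libstdc++ make sysstat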
(5) Installing Grid
(5.1) Pre-installation Checks
step 1: unzip the grid installation package
[root@rac1 ~]# cd /mnt/hgfs/soft/
[root@rac1 soft]# unzip linux_11gR2_grid.zip
step 2: install the cvuqdisk package, which the RAC environment checks rely on
[root@rac1 soft]# cd grid/
[root@rac1 grid]# ls
doc  install  response  rpm  runcluvfy.sh  runInstaller  sshsetup  stage  welcome.html
[root@rac1 grid]# cd rpm/
[root@rac1 rpm]# ls
cvuqdisk-1.0.7-1.rpm
[root@rac1 rpm]# rpm -ivh cvuqdisk-1.0.7-1.rpm
Preparing...                ########################################### [100%]
Using default group oinstall to install package
   1:cvuqdisk               ########################################### [100%]
step 3: run the environment check; go through the results item by item and make sure nothing fails
[root@rac1 rpm]# su - grid
[grid@rac1 ~]$ cd /mnt/hgfs/soft/grid/
[grid@rac1 grid]$ ls
doc  install  response  rpm  runcluvfy.sh  runInstaller  sshsetup  stage  welcome.html
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose

Performing pre-checks for cluster services setup

Checking node reachability...
Three items failed in my check:
① user equivalence:
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Comment
------------------------------------ ------------------------
rac2 failed
rac1 failed
Result: PRVF-4007 : User equivalence check failed for user "grid"
Fix:
Reconfigure SSH equivalence for the grid user, following the earlier procedure.
② missing packages
Check: Package existence for "unixODBC-2.2.11"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 missing unixODBC-2.2.11 failed
Result: Package existence check failed for "unixODBC-2.2.11"
Check: Package existence for "unixODBC-devel-2.2.11"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 missing unixODBC-devel-2.2.11 failed
Result: Package existence check failed for "unixODBC-devel-2.2.11"
Fix:
yum install -y unixODBC*
③ NTP clock synchronization
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed
Checking daemon liveness...
Check: Liveness for "ntpd"
Node Name Running?
------------------------------------ ------------------------
rac2 no
rac1 no
Result: Liveness check failed for "ntpd"
PRVF-5415 : Check to see if NTP daemon is running failed
Result: Clock synchronization check using Network Time Protocol(NTP) failed
Fix:
When installing the GI software, 11gR2 RAC can synchronize time between nodes through either of two mechanisms:
1. OS-level NTP
2. Oracle Cluster Time Synchronization Service (CTSS)
In 11gR2, Oracle added its own cluster time synchronization service; if the OS has no NTP configured, Oracle's CTSS takes over.
So this error can be ignored.
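If you would rather have the check pass outright, deactivate ntpd completely on both nodes; CTSS only switches to active mode when no NTP configuration is found, and the check above failed precisely because /etc/ntp.conf exists while ntpd is stopped:
[root@rac1 ~]# service ntpd stop
[root@rac1 ~]# chkconfig ntpd off
[root@rac1 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak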
(5.2) Installing grid
The installation only needs to be run on one node and is propagated to the other automatically; we run it on rac1.
step 1: launch the installer
[root@rac1 ~]# xhost +
xhost: unable to open display ""
[root@rac1 ~]# su - grid
[grid@rac1 ~]$ cd /mnt/hgfs/soft/grid/
[grid@rac1 grid]$ ls
doc  install  response  rpm  runcluvfy.sh  runInstaller  sshsetup  stage  welcome.html
[grid@rac1 grid]$ ./runInstaller
step 2: the GUI installation
Pay attention to the SCAN Name on this screen: enter exactly the value configured in /etc/hosts.
Add the other node.
After adding the node, click "SSH Connectivity", enter the grid user's password, and click "Setup".
Configure the networks; at least two are required — choose them according to the IP plan above.
Select the three 1 GB disks to form the OCR disk group.
Enter the passwords. A warning appeared here because the password is not complex enough for Oracle's rules; I ignored it.
I forgot to capture the next few screens, so here is one found online: in the Prerequisite Checks step the NTP check fails; ignore it.
The installation begins…
During the installation, a window like the following pops up:
Follow its instructions and run the scripts one node at a time; never run them on both nodes at once. In my case:
step 1: open a terminal on rac1 and log in as root
Run the scripts:
[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh
# wait for the script above to finish, then run:
[root@rac1 ~]# /u01/app/11.2.0/grid/root.sh
step 2: a long wait
step 3: once node 1's scripts have finished, open a terminal on rac2 and log in as root
Run the scripts:
[root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh
# wait for the script above to finish, then run:
[root@rac2 ~]# /u01/app/11.2.0/grid/root.sh
step 4: another long wait
step 5: confirm that node 2's scripts have finished, then click "OK".
===================================================
At the very end of the installation, an error appears:
Cause:
SCAN is resolved through /etc/hosts instead of DNS, which triggers this error; it can be safely ignored.
Workaround:
ping the SCAN IP; if it responds, ignore the error and skip past it.
[grid@rac1 grid]$ ping rac-scan
PING rac-scan (192.168.19.14) 56(84) bytes of data.
64 bytes from rac-scan (192.168.19.14): icmp_seq=1 ttl=64 time=0.021 ms
64 bytes from rac-scan (192.168.19.14): icmp_seq=2 ttl=64 time=0.026 ms
Grid is now installed.
(5.3) Configuring the Disk Groups
Run on node 1:
[grid@rac1 ~]$ asmca
Click "Create".
Create the DATA disk group.
Create the ARCH disk group.
(6) Installing Oracle
(6.1) Installing the Oracle Software
Unzip the packages and launch the GUI:
[oracle@rac1 ~]$ cd /mnt/hgfs/soft/
[oracle@rac1 soft]$ unzip linux_11gR2_database_1of2.zip
[oracle@rac1 soft]$ unzip linux_11gR2_database_2of2.zip
[oracle@rac1 soft]$ ls
database  linux_11gR2_database_1of2.zip  Linux.5.5.for.x86.rhel-server-5.5-i386-dvd.iso
grid      linux_11gR2_database_2of2.zip
[oracle@rac1 soft]$ cd database/
[oracle@rac1 database]$ ls
doc  install  response  rpm  runInstaller  sshsetup  stage  welcome.html
[oracle@rac1 database]$ ./runInstaller
Choose the second option: install the database software only.
SSH equivalence kept failing at this point; reconfiguring it by hand fixed it, though I never fully worked out why. -_-
Clock synchronization issue: ignore.
A long wait.
Done.
(6.2) Creating the Database
Run as the oracle user:
[oracle@rac1 ~]$ dbca
Selected screenshots:
As with the grid installation, run the scripts one node at a time: finish node 1 first, then run node 2.
Installation finished.
(6.3) Checking the Installation
The ARCH and DATA disk groups turn out to have a problem: they were not started automatically. (This is odd — across three installs it happened twice on my PC and never on a server; I suspect the PC's I/O is too slow and the disk group mounts timed out during database startup.)
[grid@rac1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               OFFLINE OFFLINE      rac1
               ONLINE  ONLINE       rac2
ora.DATA.dg
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.OCR.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.eons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.registry.acfs
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        OFFLINE OFFLINE
ora.rac.db
      1        OFFLINE OFFLINE
      2        OFFLINE OFFLINE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
Start the ARCH and DATA disk groups manually:
[grid@rac1 ~]$ srvctl start diskgroup -g arch
[grid@rac1 ~]$ srvctl start diskgroup -g data
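srvctl can confirm the per-node state of each disk group:
[grid@rac1 ~]$ srvctl status diskgroup -g data
[grid@rac1 ~]$ srvctl status diskgroup -g arch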
Check the resource state again; the disk groups are now online:
[grid@rac1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
Next, start the database:
[grid@rac1 ~]$ srvctl start database -d rac
Check the resource state; the database is up:
ora.rac.db
      1        ONLINE  ONLINE       rac1                     Open
      2        ONLINE  ONLINE       rac2                     Open
Connect to the database:
[grid@rac1 ~]$ su - oracle
Password:
[oracle@rac1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sun Oct 15 19:27:40 2017
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Check the instance states:
SQL> col host_name format a10;
SQL> select host_name,inst_id,instance_name,version from gv$instance;

HOST_NAME     INST_ID INSTANCE_NAME    VERSION
---------- ---------- ---------------- -----------------
rac1                1 rac1             11.2.0.1.0
rac2                2 rac2             11.2.0.1.0
Everything looks fine. Done.
Other issues:
Problem 1: cannot connect to the ASM instance
As the grid user, attempting to connect to the ASM instance reports an idle instance:
[grid@rac1 ~]$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.1.0 Production on Tue Sep 11 18:43:36 2018
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.
[grid@rac1 ~]$ asmcmd
Connected to an idle instance.
ASMCMD>
According to crsctl, the ASM instance is clearly running, so why does it show as idle? Online resources point to the grid user's environment variables. Checking the listener to confirm the ASM instance name shows that it is uppercase; I had not noticed this while installing grid, and that mismatch causes the problem.
[grid@rac2 ~]$ lsnrctl
LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 11-SEP-2018 19:20:42
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Welcome to LSNRCTL, type "help" for information.
LSNRCTL> status
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.1.0 - Production
Start Date 11-SEP-2018 18:18:12
Uptime 0 days 1 hr. 2 min. 31 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/11.2.0/grid/network/admin/listener.ora
Listener Log File /u01/app/grid/diag/tnslsnr/rac2/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.141.71.102)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.141.71.104)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "rac" has 1 instance(s).
Instance "rac2", status READY, has 1 handler(s) for this service...
Service "racXDB" has 1 instance(s).
Instance "rac2", status READY, has 1 handler(s) for this service...
The command completed successfully
So change ORACLE_SID in the grid user's .bash_profile to uppercase:
Before:
[grid@rac1 ~]$ more .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+asm1       # on node 2: export ORACLE_SID=+asm2
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022
After:
[grid@rac1 ~]$ more .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1       # on node 2: export ORACLE_SID=+ASM2
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022
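Re-read the profile and verify; asmcmd should now attach to the running instance instead of reporting an idle one:
[grid@rac1 ~]$ . ~/.bash_profile
[grid@rac1 ~]$ echo $ORACLE_SID
+ASM1
[grid@rac1 ~]$ asmcmd ls    # should list the OCR, DATA and ARCH disk groups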
Problem 2: cannot connect to the database through the SCAN IP (added 2018.09.12)
After configuring the tns file on my PC, connecting to the database failed with:
ORA-12545: Connect failed because target host or object does not exist
Connections through the other addresses (public IP and VIP) all work. Similar reports online show this is a bug in 11.2.0.1; the fix is to change the HOST in the local_listener initialization parameter to the VIP.
The change:
--------- node 1
SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=10.141.71.103)(PORT=1521))))' scope=both sid='racdb1';
System altered.
SQL> alter system register;
System altered.
--------- node 2
SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=10.141.71.104)(PORT=1521))))' scope=both sid='racdb2';
System altered.
SQL> alter system register;
System altered.
Before:
---------- both nodes were configured as follows
SQL> show parameter listener

NAME                     TYPE        VALUE
------------------------ ----------- ------------------------------
listener_networks        string
local_listener           string      (DESCRIPTION=(ADDRESS_LIST=(AD
                                     DRESS=(PROTOCOL=TCP)(HOST=rac1-vip)(PORT=1521))))
remote_listener          string      rac-scan:1521
After:
------------ node 1
SQL> show parameter listener

NAME                     TYPE        VALUE
------------------------ ----------- ------------------------------
listener_networks        string
local_listener           string      (DESCRIPTION=(ADDRESS_LIST=(AD
                                     DRESS=(PROTOCOL=TCP)(HOST=10.141.71.103)(PORT=1521))))
remote_listener          string      rac-scan:1521
------------- node 2
SQL> show parameter listener

NAME                     TYPE        VALUE
------------------------ ----------- ------------------------------
listener_networks        string
local_listener           string      (DESCRIPTION=(ADDRESS_LIST=(AD
                                     DRESS=(PROTOCOL=TCP)(HOST=10.141.71.104)(PORT=1521))))
remote_listener          string      rac-scan:1521
Note: this error came from a different installation, so the database and server details differ from the rest of this article.
[End]