Oracle 18c RAC Installation on Linux, Part 1: Preparation
2018-08-04 22:20 by AlfredZhao
1. Implementation Preparation
2. Installation Preparation
- 2.1 Synchronize the system time on each node
- 2.2 Disable the firewall and SELinux on each node
- 2.3 Check required dependency packages on each node
- 2.4 Configure /etc/hosts on each node
- 2.5 Create the required users and groups on each node
- 2.6 Create the installation directories on each node
- 2.7 Modify system configuration files on each node
- 2.8 Set user environment variables on each node
Oracle 18c RAC Installation on Linux series:
Part 1: Preparation
Part 2: GI Configuration
Part 3: DB Configuration
Installation environment for this series: OEL 7.5 + Oracle 18.3 GI & RAC
1. Implementation Preparation
1.1 Install the operating system on the servers
Prepare two identically configured servers and install the same version of Linux on both. Keep the installation DVD or ISO image on hand.
Here both servers run OEL 7.5 with identical filesystem layouts. The OEL 7.5 ISO image is also copied onto the servers so it can back a local yum repository later.
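As a sketch of that local yum setup (the mount point, ISO path, and repo id below are my own choices, not from the original post), the ISO can be loop-mounted and exposed through a .repo file:

```
# mount -o loop /opt/OEL75.iso /mnt/iso     # hypothetical ISO location

# /etc/yum.repos.d/local.repo
[local-oel75]
name=Local OEL 7.5 ISO
baseurl=file:///mnt/iso
gpgcheck=0
enabled=1
```

With this in place, `yum install` resolves packages from the mounted ISO without any network access.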
1.2 Oracle installation media
Oracle 18.3 ships as two zip files (9 GB+ in total, so watch your free space):

```
LINUX.X64_180000_grid_home.zip  MD5: CD42D137FD2A2EEB4E911E8029CC82A9
LINUX.X64_180000_db_home.zip    MD5: 99A7C4A088A8A502C261E741A8339AE8
```

Download them from the Oracle website; they only need to be uploaded to node 1.
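It is worth confirming the uploads are intact before installing. A minimal sketch of the check, using a stand-in file since the real zips are 9 GB (in practice the file would be LINUX.X64_180000_grid_home.zip and the expected sum the MD5 value listed above):

```shell
# Sketch: verify a downloaded file against a published MD5 sum.
# demo.bin stands in for the real installation zip here.
echo "demo payload" > demo.bin
md5sum demo.bin | awk '{print $1"  demo.bin"}' > demo.md5
md5sum -c demo.md5    # prints "demo.bin: OK" when the sums match
```

A mismatch here usually means a truncated upload; re-transfer the file rather than troubleshooting the installer later.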
1.3 Shared storage planning
Carve shared LUNs out of the storage array so that both hosts see the same devices: three 1 GB LUNs for OCR and the Voting Disk, one 40 GB LUN for the GIMR, and the rest for data and the FRA.
Bind the devices with either multipath or udev as the situation requires; multipath binding is used here.

```
multipath -ll    # list the current multipath topology
multipath -F     # flush the existing multipath maps
multipath -v2    # rescan and rebuild the maps
multipath -ll    # confirm the resulting topology
```
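If friendlier device names are wanted than the default mpatha/mpathb, multipath supports per-WWID aliases. A sketch of the relevant fragment (the WWID below is a placeholder, not from this environment):

```
# /etc/multipath.conf (fragment)
multipaths {
    multipath {
        wwid  360000000000000000e00000000010001   # placeholder WWID; get the real one from "multipath -ll"
        alias ocr1
    }
}
```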
In this lab environment, the storage LUNs are simulated by an iSCSI server. The main server-side (targetcli) configuration is:
```
o- / ................................................................. [...]
  o- backstores ...................................................... [...]
  | o- block .......................................... [Storage Objects: 8]
  | | o- disk1  [/dev/mapper/vg_storage-lv_lun1 (1.0GiB) write-thru activated]
  | | | o- alua ........................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............... [ALUA state: Active/optimized]
  | | o- disk2  [/dev/mapper/vg_storage-lv_lun2 (1.0GiB) write-thru activated]
  | | | o- alua ........................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............... [ALUA state: Active/optimized]
  | | o- disk3  [/dev/mapper/vg_storage-lv_lun3 (1.0GiB) write-thru activated]
  | | | o- alua ........................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............... [ALUA state: Active/optimized]
  | | o- disk4  [/dev/mapper/vg_storage-lv_lun4 (40.0GiB) write-thru activated]
  | | | o- alua ........................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............... [ALUA state: Active/optimized]
  | | o- disk5  [/dev/mapper/vg_storage-lv_lun5 (10.0GiB) write-thru activated]
  | | | o- alua ........................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............... [ALUA state: Active/optimized]
  | | o- disk6  [/dev/mapper/vg_storage-lv_lun6 (10.0GiB) write-thru activated]
  | | | o- alua ........................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............... [ALUA state: Active/optimized]
  | | o- disk7  [/dev/mapper/vg_storage-lv_lun7 (10.0GiB) write-thru activated]
  | | | o- alua ........................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............... [ALUA state: Active/optimized]
  | | o- disk8  [/dev/mapper/vg_storage-lv_lun8 (16.0GiB) write-thru activated]
  | |   o- alua ........................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ............... [ALUA state: Active/optimized]
  | o- fileio ......................................... [Storage Objects: 0]
  | o- pscsi .......................................... [Storage Objects: 0]
  | o- ramdisk ........................................ [Storage Objects: 0]
  o- iscsi .................................................... [Targets: 1]
  | o- iqn.2003-01.org.linux-iscsi.storage-c.x8664:sn.bc3a6511567c  [TPGs: 1]
  |   o- tpg1 ....................................... [no-gen-acls, no-auth]
  |     o- acls .................................................. [ACLs: 1]
  |     | o- iqn.2003-01.org.linux-iscsi.storage-c.x8664:sn.bc3a6511567c:client  [Mapped LUNs: 8]
  |     |   o- mapped_lun0 ......................... [lun0 block/disk1 (rw)]
  |     |   o- mapped_lun1 ......................... [lun1 block/disk2 (rw)]
  |     |   o- mapped_lun2 ......................... [lun2 block/disk3 (rw)]
  |     |   o- mapped_lun3 ......................... [lun3 block/disk4 (rw)]
  |     |   o- mapped_lun4 ......................... [lun4 block/disk5 (rw)]
  |     |   o- mapped_lun5 ......................... [lun5 block/disk6 (rw)]
  |     |   o- mapped_lun6 ......................... [lun6 block/disk7 (rw)]
  |     |   o- mapped_lun7 ......................... [lun7 block/disk8 (rw)]
  |     o- luns .................................................. [LUNs: 8]
  |     | o- lun0  [block/disk1 (/dev/mapper/vg_storage-lv_lun1) (default_tg_pt_gp)]
  |     | o- lun1  [block/disk2 (/dev/mapper/vg_storage-lv_lun2) (default_tg_pt_gp)]
  |     | o- lun2  [block/disk3 (/dev/mapper/vg_storage-lv_lun3) (default_tg_pt_gp)]
  |     | o- lun3  [block/disk4 (/dev/mapper/vg_storage-lv_lun4) (default_tg_pt_gp)]
  |     | o- lun4  [block/disk5 (/dev/mapper/vg_storage-lv_lun5) (default_tg_pt_gp)]
  |     | o- lun5  [block/disk6 (/dev/mapper/vg_storage-lv_lun6) (default_tg_pt_gp)]
  |     | o- lun6  [block/disk7 (/dev/mapper/vg_storage-lv_lun7) (default_tg_pt_gp)]
  |     | o- lun7  [block/disk8 (/dev/mapper/vg_storage-lv_lun8) (default_tg_pt_gp)]
  |     o- portals ............................................ [Portals: 1]
  |       o- 0.0.0.0:3260 ............................................. [OK]
  o- loopback ................................................. [Targets: 0]
/>
```
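On the database hosts, the client side of this would normally be handled by the iscsi-initiator-utils tools. A hedged sketch of the usual sequence (the portal IP is a placeholder; this is not from the original post):

```
# Discover and log in to the targets (run on each RAC node)
iscsiadm -m discovery -t sendtargets -p 10.10.2.30   # placeholder portal IP
iscsiadm -m node -l                                  # log in to all discovered targets
lsblk                                                # the new LUNs should now be visible
```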
Background on these topics is covered in my earlier articles.
A minimal udev + multipath permissions configuration (this can be done later, after the users have been created):

```
# vi /etc/udev/rules.d/12-dm-permissions.rules
ENV{DM_UUID}=="mpath-?*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"

# udevadm control --reload
# udevadm trigger
```
1.4 Network planning
Both a public network and a private network are required.
In this lab, enp0s3 carries the public IP and enp0s8 the private IP, while enp0s9 and enp0s10 are two links used to simulate the IP SAN. Adjust the plan to the actual hardware in production.
2. Installation Preparation
2.1 Synchronize the system time on each node
Check and align the system time on each node:

```
# Verify the time and the time zone are correct
date
# Stop and disable chrony and move its config aside (CTSS will be used instead)
systemctl list-unit-files | grep chronyd
systemctl status chronyd
systemctl disable chronyd
systemctl stop chronyd
mv /etc/chrony.conf /etc/chrony.conf_bak
```

In this lab neither NTP nor chrony is used, so Oracle will automatically fall back to its own CTSS (Cluster Time Synchronization Service).
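Once GI is installed, the time-sync decision can be sanity-checked with the Cluster Verification Utility. A sketch (cluvfy ships with the grid software; run it as the grid user):

```
# Confirm clock synchronization is consistent across all cluster nodes
cluvfy comp clocksync -n all -verbose
```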
2.2 Disable the firewall and SELinux on each node
Disable the firewall on each node:

```
systemctl list-unit-files | grep firewalld
systemctl status firewalld
systemctl disable firewalld
systemctl stop firewalld
```

Disable SELinux on each node:

```
getenforce
cat /etc/selinux/config
# Set SELINUX=disabled in /etc/selinux/config by hand, or use:
sed -i '/^SELINUX=.*/ s//SELINUX=disabled/' /etc/selinux/config
setenforce 0
```

Finally, verify that SELinux is indeed disabled on every node.
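The sed one-liner above uses an empty s// pattern to reuse the address regex, rewriting the whole SELINUX= line in place while leaving SELINUXTYPE= untouched. A self-contained sketch of the same substitution applied to a sample copy rather than the real config:

```shell
# Demonstrate the substitution on a sample file instead of /etc/selinux/config
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > selinux.sample
sed -i '/^SELINUX=.*/ s//SELINUX=disabled/' selinux.sample
grep '^SELINUX=' selinux.sample    # prints: SELINUX=disabled
```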
2.3 Check required dependency packages on each node

```
yum install -y oracle-database-server-12cR2-preinstall.x86_64
```

OEL 7.5 still ships this package under the 12cR2-preinstall name; there is no 18c-specific one, but in practice the dependency set is essentially the same.
On other distributions such as RHEL, the dependency packages listed in the official documentation must be installed with yum instead.
2.4 Configure /etc/hosts on each node
Edit the /etc/hosts file:

```
#public ip
192.168.1.40 db40
192.168.1.42 db42
#virtual ip
192.168.1.41 db40-vip
192.168.1.43 db42-vip
#scan ip
192.168.1.44 db18c-scan
#private ip
10.10.1.40 db40-priv
10.10.1.42 db42-priv
```
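Before copying such a block to all nodes, it is worth checking it for duplicate IPs or host names, a common source of installer failures. A sketch using the addresses above (the scratch file name is my own):

```shell
# Write the planned hosts block to a scratch file and look for duplicates
cat > hosts.sample <<'EOF'
192.168.1.40 db40
192.168.1.42 db42
192.168.1.41 db40-vip
192.168.1.43 db42-vip
192.168.1.44 db18c-scan
10.10.1.40 db40-priv
10.10.1.42 db42-priv
EOF
awk '{print $1}' hosts.sample | sort | uniq -d   # duplicate IPs: prints nothing
awk '{print $2}' hosts.sample | sort | uniq -d   # duplicate names: prints nothing
```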
2.5 Create the required users and groups on each node
Create the groups and users, then set passwords for oracle and grid:

```
groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper
groupadd -g 54324 backupdba
groupadd -g 54325 dgdba
groupadd -g 54326 kmdba
groupadd -g 54327 asmdba
groupadd -g 54328 asmoper
groupadd -g 54329 asmadmin
groupadd -g 54330 racdba
useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,racdba,oper oracle
useradd -u 54322 -g oinstall -G asmadmin,asmdba,asmoper,dba grid
echo oracle | passwd --stdin oracle
echo oracle | passwd --stdin grid
```

Both passwords are set to oracle in this test environment; in production, use passwords that meet your complexity policy.
2.6 Create the installation directories on each node
Create the installation directories on each node (as root):

```
mkdir -p /u01/app/18.3.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
```
2.7 Modify system configuration files on each node
Kernel parameters: vi /etc/sysctl.conf
The OEL preinstall package actually sets most of these already; the list below is mainly for cross-checking, or as a reference when installing on RHEL:

```
# vi /etc/sysctl.conf, appending:
vm.swappiness = 1
vm.dirty_background_ratio = 3
vm.dirty_ratio = 80
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.panic_on_oops = 1
net.ipv4.conf.enp0s8.rp_filter = 2
net.ipv4.conf.enp0s9.rp_filter = 2
net.ipv4.conf.enp0s10.rp_filter = 2
```

Apply the changes:

```
# sysctl -p /etc/sysctl.conf
```

Note: enp0s9 and enp0s10 are the dedicated IP SAN interfaces; like the private interconnect, they are set to loose mode (rp_filter = 2).

```
# sysctl -p /etc/sysctl.d/98-oracle.conf
net.ipv4.conf.enp0s8.rp_filter = 2
net.ipv4.conf.enp0s9.rp_filter = 2
net.ipv4.conf.enp0s10.rp_filter = 2
```
Shell limits for the users: vi /etc/security/limits.d/99-grid-oracle-limits.conf

```
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
grid soft nproc 16384
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
```

Note that the /etc/security/limits.d/oracle-database-server-12cR2-preinstall.conf file that OEL generates automatically does not cover the grid user; add the grid entries by hand.
vi /etc/profile.d/oracle-grid.sh

```
# Setting the appropriate ulimits for oracle and grid user
if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -u 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
if [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -u 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
```

OEL does not create this file automatically either; configure it by hand.
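A quick sketch of verifying what a fresh shell actually inherits once the profile script is in place (the exact values will vary by environment, so none are asserted here):

```shell
# Show the process, file-descriptor, and stack limits a new bash shell picks up
bash -c 'ulimit -u; ulimit -n; ulimit -s'
```

Run this as the oracle and grid users after logging in again; the first two numbers should match the nproc and nofile values configured above.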
2.8 Set user environment variables on each node
grid user on node 1:

```
export ORACLE_SID=+ASM1;
export ORACLE_BASE=/u01/app/grid;
export ORACLE_HOME=/u01/app/18.3.0/grid;
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
```

grid user on node 2:

```
export ORACLE_SID=+ASM2;
export ORACLE_BASE=/u01/app/grid;
export ORACLE_HOME=/u01/app/18.3.0/grid;
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
```

oracle user on node 1:

```
export ORACLE_SID=cdb1;
export ORACLE_BASE=/u01/app/oracle;
export ORACLE_HOME=/u01/app/oracle/product/18.3.0/db_1;
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
```

oracle user on node 2:

```
export ORACLE_SID=cdb2;
export ORACLE_BASE=/u01/app/oracle;
export ORACLE_HOME=/u01/app/oracle/product/18.3.0/db_1;
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
```
- 3.1 Unzip the GI installation package
- 3.2 Install and configure the Xmanager software
- 3.3 Grant permissions on the shared storage LUNs
- 3.4 Configure GI through the Xmanager GUI
- 3.5 Verify the crsctl status
- 3.6 Test the cluster failover capability
3. GI (Grid Infrastructure) Installation
3.1 Unzip the GI installation package

```
su - grid
```

Unzip the GRID package into the grid user's $ORACLE_HOME:

```
[grid@db40 grid]$ pwd
/u01/app/18.3.0/grid
[grid@db40 grid]$ unzip /tmp/LINUX.X64_180000_grid_home.zip
```
3.2 Install and configure the Xmanager software
After installing Xmanager Enterprise on your Windows machine, run Xstart.exe
and configure it as follows:
Session: db40
Host: 192.168.1.40
Protocol: SSH
User Name: grid
Execution Command: /usr/bin/xterm -ls -display $DISPLAY
Click RUN and enter the grid user's password; if an xterm window pops up, the configuration works.
Alternatively, start Xmanager - Passive and simply set the DISPLAY variable inside a SecureCRT session to call the GUI directly:

```
export DISPLAY=192.168.1.31:0.0
```
3.3 Grant permissions on the shared storage LUNs

vi /etc/udev/rules.d/12-dm-permissions.rules

```
# MULTIPATH DEVICES
#
# Set permissions for all multipath devices
ENV{DM_UUID}=="mpath-?*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
```

Reload the configuration to make it take effect:

```
# udevadm control --reload
# udevadm trigger
```
3.4 Configure GI through the Xmanager GUI
Log in as grid through Xmanager, change to the $ORACLE_HOME directory, and run gridSetup to configure GI:

```
$ cd $ORACLE_HOME
$ ./gridSetup.sh
```

Starting with 12cR2 the GI configuration flow changed somewhat, and 18c follows suit. The screenshots below walk through the graphical installation:
Note: the Public interface here is enp0s3 and ASM & Private is enp0s8; enp0s9 and enp0s10 are the interfaces used to simulate the IP SAN, so they are left unused here.
Note: little has changed on this screen; as before, I select the three 1 GB disks with Normal redundancy for OCR and the voting disks.
Note: there is a new screen for GIMR storage; I chose the single 40 GB disk with External redundancy. The GIMR is a concept introduced in 12c, and 18c keeps it.
Note: review every issue flagged by the prerequisite checks carefully and click "Ignore All" only after confirming each one really can be ignored; if missing RPM packages are reported, install them with yum first.
Note: when running the root scripts, make sure the script finishes completely on node 1 before running it on the other nodes.
Root scripts executed on the first node:

```
[root@db40 tmp]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@db40 tmp]# /u01/app/18.3.0/grid/root.sh
Performing root user operation.
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/18.3.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/18.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/db40/crsconfig/rootcrs_db40_2018-08-04_10-25-09AM.log
2018/08/04 10:25:34 CLSRSC-594: Executing installation step 1 of 20: 'SetupTFA'.
2018/08/04 10:25:35 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2018/08/04 10:26:29 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2018/08/04 10:26:29 CLSRSC-594: Executing installation step 2 of 20: 'ValidateEnv'.
2018/08/04 10:26:29 CLSRSC-363: User ignored prerequisites during installation
2018/08/04 10:26:29 CLSRSC-594: Executing installation step 3 of 20: 'CheckFirstNode'.
2018/08/04 10:26:35 CLSRSC-594: Executing installation step 4 of 20: 'GenSiteGUIDs'.
2018/08/04 10:26:37 CLSRSC-594: Executing installation step 5 of 20: 'SaveParamFile'.
2018/08/04 10:26:53 CLSRSC-594: Executing installation step 6 of 20: 'SetupOSD'.
2018/08/04 10:26:53 CLSRSC-594: Executing installation step 7 of 20: 'CheckCRSConfig'.
2018/08/04 10:26:53 CLSRSC-594: Executing installation step 8 of 20: 'SetupLocalGPNP'.
2018/08/04 10:27:47 CLSRSC-594: Executing installation step 9 of 20: 'CreateRootCert'.
2018/08/04 10:27:54 CLSRSC-594: Executing installation step 10 of 20: 'ConfigOLR'.
2018/08/04 10:28:09 CLSRSC-594: Executing installation step 11 of 20: 'ConfigCHMOS'.
2018/08/04 10:28:10 CLSRSC-594: Executing installation step 12 of 20: 'CreateOHASD'.
2018/08/04 10:28:21 CLSRSC-594: Executing installation step 13 of 20: 'ConfigOHASD'.
2018/08/04 10:28:21 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2018/08/04 10:30:29 CLSRSC-594: Executing installation step 14 of 20: 'InstallAFD'.
2018/08/04 10:30:43 CLSRSC-594: Executing installation step 15 of 20: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db40'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db40' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/08/04 10:32:34 CLSRSC-594: Executing installation step 16 of 20: 'InstallKA'.
2018/08/04 10:32:47 CLSRSC-594: Executing installation step 17 of 20: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db40'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db40' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'db40'
CRS-2672: Attempting to start 'ora.mdnsd' on 'db40'
CRS-2676: Start of 'ora.mdnsd' on 'db40' succeeded
CRS-2676: Start of 'ora.evmd' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'db40'
CRS-2676: Start of 'ora.gpnpd' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'db40'
CRS-2672: Attempting to start 'ora.gipcd' on 'db40'
CRS-2676: Start of 'ora.cssdmonitor' on 'db40' succeeded
CRS-2676: Start of 'ora.gipcd' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'db40'
CRS-2672: Attempting to start 'ora.diskmon' on 'db40'
CRS-2676: Start of 'ora.diskmon' on 'db40' succeeded
CRS-2676: Start of 'ora.cssd' on 'db40' succeeded
[INFO] [DBT-30001] Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-180804AM103332.log for details.
2018/08/04 10:37:38 CLSRSC-482: Running command: '/u01/app/18.3.0/grid/bin/ocrconfig -upgrade grid oinstall'
CRS-2672: Attempting to start 'ora.crf' on 'db40'
CRS-2672: Attempting to start 'ora.storage' on 'db40'
CRS-2676: Start of 'ora.storage' on 'db40' succeeded
CRS-2676: Start of 'ora.crf' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'db40'
CRS-2676: Start of 'ora.crsd' on 'db40' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk e234d69db62c4f41bff377eec5bed671.
Successful addition of voting disk eb9d2950a5aa4f4cbfa46432f7c4f709.
Successful addition of voting disk 84c44e2025be4fe3bf7d5a7a4049d4fd.
Successfully replaced voting disk group with +OCRVT.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                  File Name            Disk group
--  -----    -----------------                  ---------            ---------
 1. ONLINE   e234d69db62c4f41bff377eec5bed671   (/dev/mapper/mpatha) [OCRVT]
 2. ONLINE   eb9d2950a5aa4f4cbfa46432f7c4f709   (/dev/mapper/mpathb) [OCRVT]
 3. ONLINE   84c44e2025be4fe3bf7d5a7a4049d4fd   (/dev/mapper/mpathc) [OCRVT]
Located 3 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db40'
CRS-2673: Attempting to stop 'ora.crsd' on 'db40'
CRS-2677: Stop of 'ora.crsd' on 'db40' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'db40'
CRS-2673: Attempting to stop 'ora.crf' on 'db40'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'db40'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'db40'
CRS-2677: Stop of 'ora.crf' on 'db40' succeeded
CRS-2677: Stop of 'ora.storage' on 'db40' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'db40'
CRS-2677: Stop of 'ora.drivers.acfs' on 'db40' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'db40' succeeded
CRS-2677: Stop of 'ora.asm' on 'db40' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'db40'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'db40' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'db40'
CRS-2673: Attempting to stop 'ora.evmd' on 'db40'
CRS-2677: Stop of 'ora.evmd' on 'db40' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'db40' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'db40'
CRS-2677: Stop of 'ora.cssd' on 'db40' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'db40'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'db40'
CRS-2677: Stop of 'ora.gipcd' on 'db40' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'db40' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db40' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2018/08/04 10:42:19 CLSRSC-594: Executing installation step 18 of 20: 'StartCluster'.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.evmd' on 'db40'
CRS-2672: Attempting to start 'ora.mdnsd' on 'db40'
CRS-2676: Start of 'ora.mdnsd' on 'db40' succeeded
CRS-2676: Start of 'ora.evmd' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'db40'
CRS-2676: Start of 'ora.gpnpd' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'db40'
CRS-2676: Start of 'ora.gipcd' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'db40'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'db40'
CRS-2676: Start of 'ora.cssdmonitor' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'db40'
CRS-2672: Attempting to start 'ora.diskmon' on 'db40'
CRS-2676: Start of 'ora.diskmon' on 'db40' succeeded
CRS-2676: Start of 'ora.crf' on 'db40' succeeded
CRS-2676: Start of 'ora.cssd' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'db40'
CRS-2672: Attempting to start 'ora.ctssd' on 'db40'
CRS-2676: Start of 'ora.ctssd' on 'db40' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'db40'
CRS-2676: Start of 'ora.asm' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'db40'
CRS-2676: Start of 'ora.storage' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'db40'
CRS-2676: Start of 'ora.crsd' on 'db40' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: db40
CRS-6016: Resource auto-start has completed for server db40
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2018/08/04 10:45:28 CLSRSC-343: Successfully started Oracle Clusterware stack
2018/08/04 10:45:28 CLSRSC-594: Executing installation step 19 of 20: 'ConfigNode'.
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'db40'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'db40'
CRS-2676: Start of 'ora.asm' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.OCRVT.dg' on 'db40'
CRS-2676: Start of 'ora.OCRVT.dg' on 'db40' succeeded
2018/08/04 10:49:35 CLSRSC-594: Executing installation step 20 of 20: 'PostConfig'.
[INFO] [DBT-30001] Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-180804AM104944.log for details.
2018/08/04 10:55:24 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
```
After they succeed, run the root scripts on the second node:
```
[root@db42 app]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@db42 app]# /u01/app/18.3.0/grid/root.sh
Performing root user operation.
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/18.3.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/18.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/db42/crsconfig/rootcrs_db42_2018-08-04_11-09-32AM.log
2018/08/04 11:09:50 CLSRSC-594: Executing installation step 1 of 20: 'SetupTFA'.
2018/08/04 11:09:50 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2018/08/04 11:10:38 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2018/08/04 11:10:38 CLSRSC-594: Executing installation step 2 of 20: 'ValidateEnv'.
2018/08/04 11:10:38 CLSRSC-363: User ignored prerequisites during installation
2018/08/04 11:10:39 CLSRSC-594: Executing installation step 3 of 20: 'CheckFirstNode'.
2018/08/04 11:10:41 CLSRSC-594: Executing installation step 4 of 20: 'GenSiteGUIDs'.
2018/08/04 11:10:42 CLSRSC-594: Executing installation step 5 of 20: 'SaveParamFile'.
2018/08/04 11:10:48 CLSRSC-594: Executing installation step 6 of 20: 'SetupOSD'.
2018/08/04 11:10:48 CLSRSC-594: Executing installation step 7 of 20: 'CheckCRSConfig'.
2018/08/04 11:10:49 CLSRSC-594: Executing installation step 8 of 20: 'SetupLocalGPNP'.
2018/08/04 11:10:52 CLSRSC-594: Executing installation step 9 of 20: 'CreateRootCert'.
2018/08/04 11:10:52 CLSRSC-594: Executing installation step 10 of 20: 'ConfigOLR'.
2018/08/04 11:10:58 CLSRSC-594: Executing installation step 11 of 20: 'ConfigCHMOS'.
2018/08/04 11:10:58 CLSRSC-594: Executing installation step 12 of 20: 'CreateOHASD'.
2018/08/04 11:11:01 CLSRSC-594: Executing installation step 13 of 20: 'ConfigOHASD'.
2018/08/04 11:11:02 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2018/08/04 11:13:06 CLSRSC-594: Executing installation step 14 of 20: 'InstallAFD'.
2018/08/04 11:13:10 CLSRSC-594: Executing installation step 15 of 20: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db42'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db42' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/08/04 11:14:48 CLSRSC-594: Executing installation step 16 of 20: 'InstallKA'.
2018/08/04 11:14:50 CLSRSC-594: Executing installation step 17 of 20: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db42'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db42' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db42'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'db42'
CRS-2677: Stop of 'ora.drivers.acfs' on 'db42' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db42' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2018/08/04 11:15:18 CLSRSC-594: Executing installation step 18 of 20: 'StartCluster'.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.evmd' on 'db42'
CRS-2672: Attempting to start 'ora.mdnsd' on 'db42'
CRS-2676: Start of 'ora.evmd' on 'db42' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'db42' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'db42'
CRS-2676: Start of 'ora.gpnpd' on 'db42' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'db42'
CRS-2676: Start of 'ora.gipcd' on 'db42' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'db42'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'db42'
CRS-2676: Start of 'ora.crf' on 'db42' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'db42' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'db42'
CRS-2672: Attempting to start 'ora.diskmon' on 'db42'
CRS-2676: Start of 'ora.diskmon' on 'db42' succeeded
CRS-2676: Start of 'ora.cssd' on 'db42' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'db42'
CRS-2672: Attempting to start 'ora.ctssd' on 'db42'
CRS-2676: Start of 'ora.ctssd' on 'db42' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'db42'
CRS-2676: Start of 'ora.crsd' on 'db42' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'db42' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'db42'
CRS-2676: Start of 'ora.asm' on 'db42' succeeded
CRS-6017: Processing resource auto-start for servers: db42
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'db42'
CRS-2672: Attempting to start 'ora.ons' on 'db42'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'db42' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'db42'
CRS-2676: Start of 'ora.ons' on 'db42' succeeded
CRS-2676: Start of 'ora.asm' on 'db42' succeeded
CRS-2672: Attempting to start 'ora.proxy_advm' on 'db40'
CRS-2672: Attempting to start 'ora.proxy_advm' on 'db42'
CRS-2676: Start of 'ora.proxy_advm' on 'db40' succeeded
CRS-2676: Start of 'ora.proxy_advm' on 'db42' succeeded
CRS-6016: Resource auto-start has completed for server db42
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2018/08/04 11:21:10 CLSRSC-343: Successfully started Oracle Clusterware stack
2018/08/04 11:21:11 CLSRSC-594: Executing installation step 19 of 20: 'ConfigNode'.
2018/08/04 11:21:52 CLSRSC-594: Executing installation step 20 of 20: 'PostConfig'.
2018/08/04 11:22:06 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
```
Once the root scripts have completed successfully, the installation continues.
This phase also takes a very long time; the Oracle clusterware stack keeps getting bigger. Do the veteran DBAs find themselves missing the 11g or even 10g days at moments like this?
The installation log can be followed while it runs:

```
tail -20f /tmp/GridSetupActions2018-08-03_11-27-06PM/gridSetupActions2018-08-03_11-27-06PM.log
```

Note: the log shows one phase, starting read loop, taking over an hour before moving on, most likely prolonged by some anomaly; but even excluding that window, close to two more hours were still needed.

```
INFO:  [Aug 4, 2018 2:04:42 PM] ... GenericInternalPlugIn: starting read loop.
INFO:  [Aug 4, 2018 3:25:37 PM] Completed Plugin named: mgmtca
```

Note: the error prompt at the very end turned out, according to the log, to be only a notice about using a single SCAN IP, and can be ignored.
At this point the GI configuration is complete.
3.5 Verify the crsctl status
crsctl stat res -t shows the cluster resource status. 18c introduces some new resources here, so DBAs once again have new things to learn:
```
[grid@db40 grid]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.MGMT.GHCHKPT.advm
               OFFLINE OFFLINE      db40                     STABLE
               OFFLINE OFFLINE      db42                     STABLE
ora.MGMT.dg
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.OCRVT.dg
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.chad
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.helper
               OFFLINE OFFLINE      db40                     STABLE
               OFFLINE OFFLINE      db42                     IDLE,STABLE
ora.mgmt.ghchkpt.acfs
               OFFLINE OFFLINE      db40                     volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
               OFFLINE OFFLINE      db42                     STABLE
ora.net1.network
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.ons
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.proxy_advm
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       db40                     STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       db40                     169.254.7.255 10.0.0
                                                             .40,STABLE
ora.asm
      1        ONLINE  ONLINE       db40                     Started,STABLE
      2        ONLINE  ONLINE       db42                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       db40                     STABLE
ora.db40.vip
      1        ONLINE  ONLINE       db40                     STABLE
ora.db42.vip
      1        ONLINE  ONLINE       db42                     STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       db40                     Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       db40                     STABLE
ora.rhpserver
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       db40                     STABLE
--------------------------------------------------------------------------------
```
crsctl stat res -t -init
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       db40                     Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       db40                     STABLE
ora.crf
      1        ONLINE  ONLINE       db40                     STABLE
ora.crsd
      1        ONLINE  ONLINE       db40                     STABLE
ora.cssd
      1        ONLINE  ONLINE       db40                     STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       db40                     STABLE
ora.ctssd
      1        ONLINE  ONLINE       db40                     ACTIVE:0,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       db40                     STABLE
ora.evmd
      1        ONLINE  ONLINE       db40                     STABLE
ora.gipcd
      1        ONLINE  ONLINE       db40                     STABLE
ora.gpnpd
      1        ONLINE  ONLINE       db40                     STABLE
ora.mdnsd
      1        ONLINE  ONLINE       db40                     STABLE
ora.storage
      1        ONLINE  ONLINE       db40                     STABLE
--------------------------------------------------------------------------------
3.6 Test cluster failover
Reboot node 2, then check the status from node 1:
[grid@db40 trace]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       db40                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       db40                     STABLE
ora.MGMT.GHCHKPT.advm
               OFFLINE OFFLINE      db40                     STABLE
ora.MGMT.dg
               ONLINE  ONLINE       db40                     STABLE
ora.OCRVT.dg
               ONLINE  ONLINE       db40                     STABLE
ora.chad
               ONLINE  ONLINE       db40                     STABLE
ora.helper
               OFFLINE OFFLINE      db40                     STABLE
ora.mgmt.ghchkpt.acfs
               OFFLINE OFFLINE      db40                     volume /opt/oracle/r
                                                             hp_images/chkbase is
                                                             unmounted,STABLE
ora.net1.network
               ONLINE  ONLINE       db40                     STABLE
ora.ons
               ONLINE  ONLINE       db40                     STABLE
ora.proxy_advm
               ONLINE  ONLINE       db40                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       db40                     STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       db40                     169.254.7.255 10.0.0
                                                             .40,STABLE
ora.asm
      1        ONLINE  ONLINE       db40                     Started,STABLE
      2        ONLINE  OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       db40                     STABLE
ora.db40.vip
      1        ONLINE  ONLINE       db40                     STABLE
ora.db42.vip
      1        ONLINE  INTERMEDIATE db40                     FAILED OVER,STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       db40                     Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       db40                     STABLE
ora.rhpserver
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       db40                     STABLE
--------------------------------------------------------------------------------
Reboot node 1, then check the status from node 2:
[grid@db42 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       db42                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       db42                     STABLE
ora.MGMT.GHCHKPT.advm
               OFFLINE OFFLINE      db42                     STABLE
ora.MGMT.dg
               ONLINE  INTERMEDIATE db42                     STABLE
ora.OCRVT.dg
               ONLINE  INTERMEDIATE db42                     STABLE
ora.chad
               ONLINE  ONLINE       db42                     STABLE
ora.helper
               OFFLINE OFFLINE      db42                     STABLE
ora.mgmt.ghchkpt.acfs
               OFFLINE OFFLINE      db42                     STABLE
ora.net1.network
               ONLINE  ONLINE       db42                     STABLE
ora.ons
               ONLINE  ONLINE       db42                     STABLE
ora.proxy_advm
               ONLINE  ONLINE       db42                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       db42                     STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       db42                     169.254.7.154 10.0.0
                                                             .42,STABLE
ora.asm
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       db42                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       db42                     STABLE
ora.db40.vip
      1        ONLINE  INTERMEDIATE db42                     FAILED OVER,STABLE
ora.db42.vip
      1        ONLINE  ONLINE       db42                     STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       db42                     Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       db42                     STABLE
ora.rhpserver
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       db42                     STABLE
--------------------------------------------------------------------------------
Appendix: cluster log location:
[grid@db40 log]$ cd $ORACLE_BASE/diag/crs/db40/crs/trace
[grid@db40 trace]$ pwd
/u01/app/grid/diag/crs/db40/crs/trace
[grid@db40 trace]$ tail -5f ocssd.trc
2018-08-04 17:35:16.507 : CSSD:758277888: clssgmMbrDataUpdt: Sending member data change to GMP for group HB+ASM, memberID 17:2:1
2018-08-04 17:35:16.509 : CSSD:770635520: clssgmpcMemberDataUpdt: grockName HB+ASM memberID 17:2:1, datatype 1 datasize 4
2018-08-04 17:35:16.514 : CSSD:755123968: clssgmcpDataUpdtCmpl: Status 0 mbr data updt memberID 17:2:1 from clientID 1:39:2
2018-08-04 17:35:17.319 : CSSD:3337582336: clssnmSendingThread: sending status msg to all nodes
2018-08-04 17:35:17.321 : CSSD:3337582336: clssnmSendingThread: sent 4 status msgs to all nodes
2018-08-04 17:35:17.793 : CSSD:762750720: clssgmpcGMCReqWorkerThread: processing msg (0x7f2914038720) type 2, msg size 76, payload (0x7f291403874c) size 32, sequence 27970, for clientID 1:39:2
2018-08-04 17:35:18.424 : CSSD:758277888: clssgmMbrDataUpdt: Processing member data change type 1, size 4 for group HB+ASM, memberID 17:2:1
2018-08-04 17:35:18.424 : CSSD:758277888: clssgmMbrDataUpdt: Sending member data change to GMP for group HB+ASM, memberID 17:2:1
2018-08-04 17:35:18.425 : CSSD:770635520: clssgmpcMemberDataUpdt: grockName HB+ASM memberID 17:2:1, datatype 1 datasize 4
2018-08-04 17:35:18.427 : CSSD:755123968: clssgmcpDataUpdtCmpl: Status 0 mbr data updt memberID 17:2:1 from clientID 1:39:2
2018-08-04 17:35:19.083 : CSSD:755123968: clssgmSendEventsToMbrs: Group GR+DB_+ASM, member count 1, event master 0, event type 6, event incarn 346, event member count 1, pids 31422-21167708,
2018-08-04 17:35:19.446 : CSSD:762750720: clssgmpcGMCReqWorkerThread: processing msg (0x7f2914038720) type 2, msg size 76, payload (0x7f291403874c) size 32, sequence 27972, for clientID 1:37:2
[grid@db40 trace]$ tail -20f alert.log
Build version: 18.0.0.0.0
Build full version: 18.3.0.0.0
Build hash: 9256567290
Bug numbers: NoTransactionInformation
Commands:
Build version: 18.0.0.0.0
Build full version: 18.3.0.0.0
Build hash: 9256567290
Bug numbers: NoTransactionInformation
2018-08-04 18:02:57.013 [CLSECHO(3376)]ACFS-9327: Verifying ADVM/ACFS devices.
2018-08-04 18:02:57.058 [CLSECHO(3384)]ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
2018-08-04 18:02:57.115 [CLSECHO(3395)]ACFS-9156: Detecting control device '/dev/ofsctl'.
2018-08-04 18:02:58.991 [CLSECHO(3482)]ACFS-9294: updating file /etc/sysconfig/oracledrivers.conf
2018-08-04 18:02:59.032 [CLSECHO(3490)]ACFS-9322: completed
2018-08-04 18:03:00.398 [OSYSMOND(3571)]CRS-8500: Oracle Clusterware OSYSMOND process is starting with operating system process ID 3571
2018-08-04 18:03:00.324 [CSSDMONITOR(3567)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 3567
2018-08-04 18:03:00.796 [CSSDAGENT(3598)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 3598
2018-08-04 18:03:01.461 [OCSSD(3621)]CRS-8500: Oracle Clusterware OCSSD process is starting with operating system process ID 3621
2018-08-04 18:03:03.374 [OCSSD(3621)]CRS-1713: CSSD daemon is started in hub mode
2018-08-04 18:03:23.052 [OCSSD(3621)]CRS-1707: Lease acquisition for node db40 number 1 completed
2018-08-04 18:03:29.122 [OCSSD(3621)]CRS-1605: CSSD voting file is online: /dev/mapper/mpathc; details in /u01/app/grid/diag/crs/db40/crs/trace/ocssd.trc.
2018-08-04 18:03:29.150 [OCSSD(3621)]CRS-1605: CSSD voting file is online: /dev/mapper/mpathb; details in /u01/app/grid/diag/crs/db40/crs/trace/ocssd.trc.
2018-08-04 18:03:31.087 [OCSSD(3621)]CRS-1605: CSSD voting file is online: /dev/mapper/mpatha; details in /u01/app/grid/diag/crs/db40/crs/trace/ocssd.trc.
2018-08-04 18:03:33.767 [OCSSD(3621)]CRS-1601: CSSD Reconfiguration complete. Active nodes are db40 db42 .
2018-08-04 18:03:35.311 [OLOGGERD(3862)]CRS-8500: Oracle Clusterware OLOGGERD process is starting with operating system process ID 3862
2018-08-04 18:03:35.833 [OCTSSD(3869)]CRS-8500: Oracle Clusterware OCTSSD process is starting with operating system process ID 3869
2018-08-04 18:03:36.055 [OCSSD(3621)]CRS-1720: Cluster Synchronization Services daemon (CSSD) is ready for operation.
2018-08-04 18:03:39.797 [OCTSSD(3869)]CRS-2407: The new Cluster Time Synchronization Service reference node is host db42.
2018-08-04 18:03:39.810 [OCTSSD(3869)]CRS-2401: The Cluster Time Synchronization Service started on host db40.
2018-08-04 18:03:41.572 [CRSD(3956)]CRS-8500: Oracle Clusterware CRSD process is starting with operating system process ID 3956
2018-08-04 18:03:57.242 [CRSD(3956)]CRS-1012: The OCR service started on node db40.
2018-08-04 18:03:59.550 [CRSD(3956)]CRS-1201: CRSD started on node db40.
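Besides tailing the trace files directly, the same diagnostic logs can be browsed with the ADR command-line tool. A minimal sketch; the ADR home string below comes from this environment's paths, so substitute your own output of "show homes":

```shell
# List all ADR homes known to this ORACLE_BASE, then tail the Clusterware alert log.
# "diag/crs/db40/crs" matches the path shown above; adjust to your node name.
adrci exec="show homes"
adrci exec="set home diag/crs/db40/crs; show alert -tail 20"
```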
At this point, the 18c GI configuration is fully complete.
4. DB (Database) Installation
4.1 Unzip the DB installation package
Log in as the oracle user and unzip the db package under $ORACLE_HOME (in 18c the database software is also image-based: simply unzip it into $ORACLE_HOME, with no separate install step):
Starting with Oracle Database 18c, installation and configuration of Oracle Database software is simplified with image-based installation.
[oracle@db40 ~]$ mkdir -p /u01/app/oracle/product/18.3.0/db_1
[oracle@db40 ~]$ cd $ORACLE_HOME/
[oracle@db40 db_1]$ pwd
/u01/app/oracle/product/18.3.0/db_1
[oracle@db40 db_1]$ unzip /tmp/LINUX.X64_180000_db_home.zip
4.2 Configure the DB software
Open Xmanager, log in as the oracle user, and configure the database software.
[oracle@db40 db_1]$ pwd
/u01/app/oracle/product/18.3.0/db_1
[oracle@db40 db_1]$ export DISPLAY=192.168.1.31:0.0
[oracle@db40 db_1]$ ./runInstaller
The DB software configuration process is captured below:
Note: select software-only installation here; the database itself will be created with dbca after the ASM disk groups have been set up.
Note: configure SSH equivalence between the nodes.
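The installer can set up SSH equivalence for you, but if you prefer to do it by hand, a minimal sketch as the oracle user (the hostnames are this environment's db40/db42):

```shell
# Run as oracle on db40, then repeat the mirrored steps on db42.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa   # key pair without a passphrase
ssh-copy-id oracle@db40                    # authorize the key on every node, including self
ssh-copy-id oracle@db42
ssh db42 date                              # verify: should print the date with no password prompt
```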
Note: for prerequisite failures that are fixable, run the fixup script as prompted.
I also have a swap warning here; since this is a resource-limited test environment it can be ignored, but in a production environment I strongly recommend adjusting swap to meet the requirement.
If any other check fails, do not ignore it in either test or production environments; fix it until it passes.
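If you want to clear the swap warning rather than ignore it, swap can be extended with a swap file. A sketch as root; the 8 GiB size is an assumption, so match it to the requirement the installer reports for your RAM:

```shell
# Run as root on each node. The size (8192 MiB) is an assumed example value.
dd if=/dev/zero of=/swapfile bs=1M count=8192          # create an 8 GiB file
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile swap swap defaults 0 0' >> /etc/fstab  # persist across reboots
free -m                                                # verify the new swap total
```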
Note: at the end, run the one root script the installer prompts for, on each node in turn.
At this point, the DB software configuration is complete.
4.3 Create disk groups with ASMCA
Open Xmanager, log in as the grid user, and run asmca to create the ASM disk groups.
[grid@db40 ~]$ export DISPLAY=192.168.1.31:0.0
[grid@db40 ~]$ asmca
The asmca GUI took a few minutes to come up; the first thing that appears is the vivid 18c splash screen:
Then the main asmca window opens:
Here I create a DATA disk group and an FRA disk group, both with external redundancy (in production, choose external redundancy only if the underlying storage is already RAID-protected).
Here you can see that the new DATA and FRA disk groups have been created and mounted successfully.
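For reference, the same disk groups can also be created without the GUI, directly in SQL*Plus on the ASM instance. A sketch as the grid user; the device paths are placeholders, so replace them with your actual multipath devices for the data and FRA LUNs:

```shell
# Hypothetical device paths: substitute your multipath data/FRA devices.
sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/mapper/mpathe';
CREATE DISKGROUP FRA  EXTERNAL REDUNDANCY DISK '/dev/mapper/mpathf';
EOF
# A disk group created in SQL*Plus is mounted only on the local node;
# bring it up cluster-wide through Clusterware:
srvctl start diskgroup -diskgroup DATA
srvctl start diskgroup -diskgroup FRA
```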
4.4 Create the database with DBCA
Open Xmanager, log in as the oracle user, and run the dbca GUI to create the database; I had planned to use the ZHS16GBK character set here.
The DBCA screenshots are below:
Note: here you choose whether to enable the CDB architecture and name the CDB and PDBs. I enable the CDB and auto-create 4 PDBs with the prefix PDB.
Note: I choose to use OMF here.
Note: I originally planned to enable the FRA with +FRA as its destination, but space is insufficient, so I leave it unchecked for now and will adjust after expanding storage.
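Once space is added, the FRA can be enabled online later without rerunning dbca. A sketch as the oracle user; the 20G size is an assumed example, so size it to your archivelog and backup volume:

```shell
sqlplus / as sysdba <<'EOF'
-- Set the size first, then the destination; 20G is an assumption.
ALTER SYSTEM SET db_recovery_file_dest_size = 20G SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH SID='*';
EOF
```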
Note: here you set the specific memory allocation and the database character set. I did not change the character set, so it stays at the default AL32UTF8; adjust it to your actual requirements.
Note: here you can choose whether to configure EM Express; I configure it for learning purposes, but skip it if you do not need it. CVU is generally not configured either; again, I configure it here to learn.
Note: set the passwords here. In my lab I simply use oracle, which does not meet the complexity rules; use complex passwords in production.
Note: here you can choose to save the database creation scripts; this is optional, as you prefer.
Note: any other failed checks at this point must not be ignored; my only warning is due to using a single SCAN, which can be ignored.
Note: this is the summary of the installation settings. Review it carefully; you can still go back and change anything. Once confirmed, start creating the database.
Note: 18c database creation also takes maddeningly long; DBAs may want to queue up a few movies to watch while waiting.
At this point, the Oracle 18.3 RAC database has been created successfully.
4.5 Verify cluster status with crsctl
Log in as the grid user and run crsctl stat res -t to view the cluster resource status; the DB resources on each node are now Open.
[grid@db40 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.DATA.dg
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.FRA.dg
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.MGMT.GHCHKPT.advm
               OFFLINE OFFLINE      db40                     STABLE
               OFFLINE OFFLINE      db42                     STABLE
ora.MGMT.dg
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.OCRVT.dg
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.chad
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.helper
               OFFLINE OFFLINE      db40                     IDLE,STABLE
               OFFLINE OFFLINE      db42                     STABLE
ora.mgmt.ghchkpt.acfs
               OFFLINE OFFLINE      db40                     STABLE
               OFFLINE OFFLINE      db42                     STABLE
ora.net1.network
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.ons
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.proxy_advm
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       db42                     STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       db42                     169.254.7.154 10.0.0
                                                             .42,STABLE
ora.asm
      1        ONLINE  ONLINE       db40                     Started,STABLE
      2        ONLINE  ONLINE       db42                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cdb.db
      1        ONLINE  ONLINE       db40                     Open,HOME=/u01/app/o
                                                             racle/product/18.3.0
                                                             /db_1,STABLE
      2        ONLINE  ONLINE       db42                     Open,HOME=/u01/app/o
                                                             racle/product/18.3.0
                                                             /db_1,STABLE
ora.cvu
      1        ONLINE  ONLINE       db42                     STABLE
ora.db40.vip
      1        ONLINE  ONLINE       db40                     STABLE
ora.db42.vip
      1        ONLINE  ONLINE       db42                     STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       db42                     Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       db42                     STABLE
ora.rhpserver
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       db42                     STABLE
--------------------------------------------------------------------------------
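The same check can also be done with srvctl, which reports per-instance status for the database resource. A sketch; the database name cdb is taken from the ora.cdb.db resource shown above:

```shell
# Per-instance status including open mode, then the registered configuration.
srvctl status database -db cdb -verbose
srvctl config database -db cdb
```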
Log in as the oracle user and connect with sqlplus / as sysdba:
[oracle@db40 ~]$ sqlplus / as sysdba

SQL*Plus: Release 18.0.0.0.0 - Production on Sun Aug 5 16:04:42 2018
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0

SQL> select inst_id, name, open_mode from gv$database;

   INST_ID NAME      OPEN_MODE
---------- --------- --------------------
         1 CDB       READ WRITE
         2 CDB       READ WRITE

SQL> show con_id

CON_ID
------------------------------
1
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           READ WRITE NO
         5 PDB3                           READ WRITE NO
         6 PDB4                           READ WRITE NO
SQL> alter session set container = pdb4;

Session altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         6 PDB4                           READ WRITE NO
SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DATA/CDB/72AB854658DD18D8E0532801A8C0CA21/DATAFILE/system.292.983371593
+DATA/CDB/72AB854658DD18D8E0532801A8C0CA21/DATAFILE/sysaux.293.983371593
+DATA/CDB/72AB854658DD18D8E0532801A8C0CA21/DATAFILE/undotbs1.291.983371593
+DATA/CDB/72AB854658DD18D8E0532801A8C0CA21/DATAFILE/undo_2.295.983372151
+DATA/CDB/72AB854658DD18D8E0532801A8C0CA21/DATAFILE/users.296.983372191

SQL>
All resources are normal. With that, the installation of Oracle 18.3 GI & RAC on OEL 7.5 is fully complete.