Oracle 19.3 RAC on Red Hat 7.6: Installation Best Practices


This article is a step-by-step guide to installing Oracle 19.3 RAC on Red Hat Linux 7.6.

It draws on the technical blogs and public posts of senior engineers 趙慶輝, 趙靖宇, and others.

1. Pre-implementation preparation

2. Pre-installation preparation

3. GI (Grid Infrastructure) installation

4. Creating the remaining ASM disk groups

5. DB (Database) configuration

Environment for this installation: Red Hat 7.6 + Oracle 19.3 GI & RAC

-----------------------------------------------------------------------------------------------------------------------------------

--- Environment overview

Host
  OS:               Red Hat Enterprise Linux Server release 7.6 (Maipo)
  Kernel:           Linux version 3.10.0-957.el7.x86_64
  Hardware:         Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz, 2 CPUs, 3 GB RAM
  RAC public NIC:   Ethernet, 1000Mb
  CRS private NIC:  Ethernet, 1000Mb
  ASM network NIC:  Ethernet, 1000Mb
  IP address plan:
    # Public IP
    192.168.56.56    mm1903
    192.168.56.57    mm1904
    # Virtual IP
    192.168.56.58    mm1903-vip
    192.168.56.59    mm1904-vip
    # SCAN IP
    192.168.56.60    mm1903-scan
    192.168.56.61    mm1903-scan
    192.168.56.62    mm1903-scan
    # ASM & Private IP
    20.20.20.56      mm1903-priv
    20.20.20.57      mm1904-priv

Database
  Oracle version:   Oracle 19.3, 64-bit
  Mode:             RAC
  ASM disk groups:  SYS 3 GB, FRA 24 GB, DATA1 12 GB
  Database name:    C193 (AL32UTF8 character set)
  Instance names:   C1931 / C1932
  OS users:         grid, oracle
  Install path:     /u01, 50 GB

--- Database system plan

Item \ node                  mm1903                  mm1904
Public IP (pub-ip)           192.168.56.56           192.168.56.57
Virtual IP (vip)             192.168.56.58           192.168.56.59
Private IP (priv-ip)         20.20.20.56             20.20.20.57
ASM network IP               20.20.20.56             20.20.20.57
SCAN IPs                     192.168.56.60 / 192.168.56.61 / 192.168.56.62
SCAN name                    mm1903-scan
Cluster name                 mm1903-cluster
Cluster database name        C193
Instance names               C1931                   C1932
OCR/Vote disk group          /dev/asm-diskb, /dev/asm-diskc, /dev/asm-diskd
Archive/FRA disk group       /dev/asm-diske, /dev/asm-diskf
Data disk group              /dev/asm-diskg
GI software version          19.3
Database version             19.3
GI BASE directory            /u01/app/grid
GI HOME directory            /u01/app/19.3.0/grid
DB BASE directory            /u01/app/oracle
DB HOME directory            /u01/app/oracle/product/19.3.0/db_1
Listener port                1521
Database character set       AL32UTF8
National character set       AL16UTF16
Database block size          8K

-----------------------------------------------------------------------------------------------------------------------------------


1. Pre-implementation preparation

1.1 Installing the operating system

Provision two identically configured servers in Oracle VM VirtualBox and install the same version of Linux on both. Keep the installation DVD or ISO image on hand.

Here that is Red Hat Linux 7.6, with identical system directory sizes on both nodes. Keep the Red Hat 7.6 ISO on the servers; it will be used later to set up a local yum repository.

1.2 Oracle installation media

Oracle 19.3 ships as two zip files (6 GB+ in total, so watch your free space):

LINUX.X64_193000_grid_home.zip

LINUX.X64_193000_db_home.zip

Download them from the Oracle website; they only need to be uploaded to node 1.
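Before unzipping, it is worth confirming the archives are intact and that there is enough free space; a quick sketch (paths are the ones used in this lab):

unzip -t /usr/local/src/resource/LINUX.X64_193000_grid_home.zip > /dev/null && echo "grid zip OK"
unzip -t /usr/local/src/resource/LINUX.X64_193000_db_home.zip > /dev/null && echo "db zip OK"
df -h /u01    # the unzipped homes are much larger than the zips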

1.3 Shared storage planning

ASMFD was introduced in Oracle 12c R1. Unlike asmlib, ASMFD filters I/O, rejecting invalid writes and so protecting ASM disks from being accidentally overwritten.

So where the software stack supports it, ASMFD is the recommended choice. However, per Note 2034681.1, using ASMFD here would require a kernel upgrade plus the GI patch for Bug 27494830.

This installation therefore uses udev instead of ASMFD.

Carve shared LUNs out of the storage so that both hosts see them: three 1 GB disks for the OCR and voting disks, and three 12 GB disks planned for data and the FRA.

Note: when installing 19c GI you can choose whether to configure a GIMR, and the default is not to. I skip it here, so no extra space needs to be set aside for the GIMR.

--Using udev on Red Hat 7 requires partitions on the disks. With fdisk, create one primary partition per disk, using partition number 2 (purely to make the partitions easy to tell apart):
sdb  sdc  sdd  sde  sdf  sdg
sdb2 sdc2 sdd2 sde2 sdf2 sdg2
1G   1G   1G   12G  12G  12G
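The partitioning can also be scripted. A minimal sketch that feeds fdisk the same keystrokes you would type interactively (n, p, partition number 2, default first/last sector, w); run it on one node only, and verify the device letters really are your shared LUNs before running:

for d in b c d e f g; do
fdisk /dev/sd$d <<EOF
n
p
2


w
EOF
done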

--On Red Hat 7, udev must bind the disk partitions. This helper script prints one rules line per disk, embedding each disk's SCSI ID:
for i in b c d e f g; do
  echo "KERNEL==\"sd?2\", SUBSYSTEM==\"block\", PROGRAM==\"/usr/lib/udev/scsi_id -g -u -d /dev/\$parent\", RESULT==\"`/usr/lib/udev/scsi_id -g -u -d /dev/sd$i`\", SYMLINK+=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
done

--Put the generated lines into the udev rules file: vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd?2", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBb639722b-ca23600a", SYMLINK+="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?2", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB9decb0f2-36c6eaf9", SYMLINK+="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?2", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBc164e474-ac1a4998", SYMLINK+="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?2", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB630a2ffb-321550c1", SYMLINK+="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?2", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBf782becb-7f6544c2", SYMLINK+="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?2", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB8ebd6644-450da0cd", SYMLINK+="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"
--Reload the udev configuration:
[root@mm1903 rules.d]# udevadm control --reload
[root@mm1903 rules.d]# udevadm trigger
or
[root@mm1903 rules.d]# /sbin/udevadm trigger --type=devices --action=change

--Confirm the binding worked and the symlinked devices exist:
[root@mm1903 ~]# ls -ltr /dev/asm-disk*
lrwxrwxrwx. 1 root root 4 Mar 18 12:55 /dev/asm-diskb -> sdb2
lrwxrwxrwx. 1 root root 4 Mar 18 12:55 /dev/asm-diskc -> sdc2
lrwxrwxrwx. 1 root root 4 Mar 18 12:55 /dev/asm-diskf -> sdf2
lrwxrwxrwx. 1 root root 4 Mar 18 12:55 /dev/asm-diske -> sde2
lrwxrwxrwx. 1 root root 4 Mar 18 12:55 /dev/asm-diskg -> sdg2
lrwxrwxrwx. 1 root root 4 Mar 18 12:55 /dev/asm-diskd -> sdd2

--Copy /etc/udev/rules.d/99-oracle-asmdevices.rules to the other node and activate it there too.
--On the second node, mm1904, the udevadm commands alone did not work at first; running partprobe first and then triggering udevadm succeeded.
--partprobe notifies the kernel of partition table changes, asking the OS to re-read the partition tables:
[root@mm1904 ~]# partprobe /dev/sdb
[root@mm1904 ~]# partprobe /dev/sdc
[root@mm1904 ~]# partprobe /dev/sdd
[root@mm1904 ~]# partprobe /dev/sde
[root@mm1904 ~]# partprobe /dev/sdf
[root@mm1904 ~]# partprobe /dev/sdg
--Reload the udev configuration:
[root@mm1904 ~]# udevadm control --reload
[root@mm1904 ~]# udevadm trigger
--Confirm the binding succeeded:
[root@mm1904 ~]# ll /dev/asm*
lrwxrwxrwx. 1 root root 4 Mar 18 12:58 /dev/asm-diskb -> sdb2
lrwxrwxrwx. 1 root root 4 Mar 18 12:58 /dev/asm-diskc -> sdc2
lrwxrwxrwx. 1 root root 4 Mar 18 12:58 /dev/asm-diskd -> sdd2
lrwxrwxrwx. 1 root root 4 Mar 18 12:58 /dev/asm-diske -> sde2
lrwxrwxrwx. 1 root root 4 Mar 18 12:58 /dev/asm-diskf -> sdf2
lrwxrwxrwx. 1 root root 4 Mar 18 12:58 /dev/asm-diskg -> sdg2
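udev matches disks by SCSI WWID, so before copying the rules file it is worth confirming both nodes report the same ID for each LUN. A small check, assuming the /dev/sdX lettering is the same on both nodes (it can differ, in which case compare by ID rather than by letter):

--Run on each node and compare the output line by line:
for i in b c d e f g; do
  echo -n "sd$i: "
  /usr/lib/udev/scsi_id -g -u -d /dev/sd$i
done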

1.4 Network planning

There are two networks: public and private.

In this lab, enp0s3 carries the public IP and enp0s8 carries the ASM & private IP.

Adjust the plan to your production reality; typically the public network is bonded at the OS level and the private network relies on HAIP.

2. Pre-installation preparation

2.0 Server checks

--Check CPU information
cat /proc/cpuinfo | grep "model name"

--Check physical memory; per the Oracle installation guide, at least 8 GB is required.
cat /proc/meminfo|grep "MemTotal"

--Check swap space
/usr/sbin/swapon
free -m

Per the Oracle installation guide, swap should be sized as follows:
4 GB < RAM < 16 GB: swap >= RAM
RAM >= 16 GB: swap = 16 GB
Note: if HugePages are in use, first subtract the HugePages allocation from physical memory, then apply the formula.
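The sizing rule is easy to script. A minimal sketch that prints this host's RAM, current swap, and the swap required by the table above (HugePages not taken into account):

mem_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal/ {print $2}' /proc/meminfo)
mem_gb=$(( mem_kb / 1024 / 1024 ))
if [ $mem_gb -ge 16 ]; then req_gb=16; else req_gb=$mem_gb; fi   # 4-16 GB RAM: swap >= RAM
echo "RAM ${mem_gb} GB, swap $(( swap_kb / 1024 / 1024 )) GB, required >= ${req_gb} GB"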
--Check file system space
df -k

Per the Oracle installation guide, the file system requirements are:
/tmp needs at least 1 GB; in production we recommend 10 GB or more for the temporary directory. This host has no separate mount point for /tmp; it sits under the root file system, which at 50 GB meets the requirement. GRID_HOME needs at least 8 GB and ORACLE_HOME at least 6.4 GB. Those are only installation minimums, i.e. what the software itself occupies; to avoid patch failures later and file systems filling with logs at runtime, Oracle recommends at least 100 GB for the GI and DB installation directories.
--Check the network interfaces
ifconfig
Starting with Oracle 11.2.0.2 the private network must support multicast, so also check that the NIC flags include MULTICAST; Linux enables it by default.
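A quick way to check the flag on the private interface (enp0s8 in this lab):

ip link show enp0s8 | grep -o MULTICAST    # prints MULTICAST if the flag is set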
--Check the OS release
cat /etc/redhat-release 

--Check the Linux kernel version
uname -r

--Check the run level
runlevel

The system must run at level 3 or 5. To switch to level 3 and save resources, run the command below; a reboot is needed for it to take effect.
systemctl set-default multi-user.target

2.1 Synchronize the system time on all nodes

Check the system time on every node:

--Verify the time and time zone are correct
date 

--Configure clock synchronization; stop the chrony service and move its configuration file aside (CTSS can take over if no NTP is configured)
Oracle Clusterware requires all cluster nodes to keep synchronized clocks. The common options are:
1. Cluster Time Synchronization daemon (ctssd)
2. NTP, or chrony on Red Hat 7.x
We still recommend configuring an NTP service so that every host in the environment keeps an identical clock. We advise against chronyd: it does not set the clock directly but only nudges the system clock to run faster or slower, which in some situations is not effective enough.

--Stop and disable the chrony service
systemctl list-unit-files | grep chronyd
systemctl status chronyd
systemctl disable chronyd
systemctl stop chronyd

--Move its configuration file aside
mv /etc/chrony.conf /etc/chrony.conf_bak

--Configure NTP in slew mode
Edit /etc/sysconfig/ntpd and add the -x and -p options after -g:

# Command line options for ntpd
OPTIONS="-g -x -p /var/run/ntpd.pid"

Notes:
1. ntp is not installed by default; install it separately to get the configuration file above.
2. The -x option enables slew mode, which keeps the clock from stepping backward or jumping; -p specifies the pid file. Both are required.

Reference documents:
Oracle Linux: NTP Does Not Start Automatically After Server Reboot on OL7 (Doc ID 2422378.1)
Tips on Troubleshooting NTP / chrony Issues (Doc ID 2068875.1)
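For completeness, a minimal sequence to install ntpd, apply the -x/-p options above, and enable the service on both nodes (package and service names are the stock RHEL 7 ones; the time-source address is a placeholder to replace with your own):

yum -y install ntp
sed -i 's|^OPTIONS=.*|OPTIONS="-g -x -p /var/run/ntpd.pid"|' /etc/sysconfig/ntpd
# point /etc/ntp.conf at your time source, e.g.: server ntp.example.com iburst
systemctl enable ntpd
systemctl start ntpd
ntpq -p    # verify the peers are reachable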

2.2 Disable the firewall and SELinux on all nodes

Disable the firewall on every node:

systemctl list-unit-files|grep firewalld
systemctl status firewalld

systemctl disable firewalld
systemctl stop firewalld

Disable SELinux on every node:

getenforce
cat /etc/selinux/config

Either edit /etc/selinux/config by hand to set SELINUX=disabled, or use the commands below:
sed -i '/^SELINUX=.*/ s//SELINUX=disabled/' /etc/selinux/config
setenforce 0

Finally, verify SELinux is disabled on every node.

2.3 Check the required packages on all nodes

Once yum is configured, install them directly:

yum -y install bc binutils compat-libcap1 compat-libstdc++-33 compat-libstdc++-33.i686 dtrace-modules dtrace-modules-headers dtrace-modules-provider-headers dtrace-utils elfutils-libelf.i686 elfutils-libelf elfutils-libelf-devel.i686 elfutils-libelf-devel glibc glibc.i686 glibc-devel glibc-devel.i686 ksh libaio libaio.i686 libaio-devel libaio-devel.i686 libdtrace-ctf-devel libXrender libXrender-devel libX11.i686 libX11 libXau.i686 libXau libXi.i686 libXi libXtst libXtst.i686 libgcc libgcc.i686 librdmacm-devel.i686 librdmacm-devel libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 libxcb.i686 libxcb make nfs-utils net-tools smartmontools python python-configshell python-rtslib python-six targetcli gcc gcc-c++ sysstat
yum -y install compat-libcap1 libstdc++-devel* libaio-devel* compat-libstdc++-33*

2.4 Configure /etc/hosts on all nodes

Edit /etc/hosts:

# Public IP
192.168.56.56    mm1903
192.168.56.57    mm1904

# Virtual IP
192.168.56.58    mm1903-vip
192.168.56.59    mm1904-vip

# Scan ip
192.168.56.60    mm1903-scan
192.168.56.61    mm1903-scan
192.168.56.62    mm1903-scan

# ASM & Private IP
20.20.20.56      mm1903-priv
20.20.20.57      mm1904-priv

Set the hostnames (ideally done by the SA):

--For example, set the hostname to mm1903:
hostnamectl status
hostnamectl set-hostname mm1903
hostnamectl status

2.5 Create the required users and groups on all nodes

Create the groups and users, then set passwords for oracle and grid:

groupadd -g 54321 oinstall  
groupadd -g 54322 dba  
groupadd -g 54323 oper  
groupadd -g 54324 backupdba  
groupadd -g 54325 dgdba  
groupadd -g 54326 kmdba  
groupadd -g 54327 asmdba  
groupadd -g 54328 asmoper  
groupadd -g 54329 asmadmin  
groupadd -g 54330 racdba  
  
useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,racdba,oper oracle  
useradd -u 54322 -g oinstall -G asmadmin,asmdba,asmoper,dba grid  

echo oracle | passwd --stdin oracle
echo oracle | passwd --stdin grid

In this test environment both passwords are simply oracle; in production use properly complex passwords.

2.6 Create the installation directories on all nodes

Create the installation directories on every node (as root):

mkdir -p /u01/app/19.3.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/

2.7 Adjust system configuration files on all nodes

Kernel parameters: vi /etc/sysctl.conf

fs.file-max = 6815744
fs.aio-max-nr = 1048576
kernel.shmall = 3145728
kernel.shmmax = 8589934592
kernel.shmmni = 4096
kernel.sem = 250 32000  100 128
net.ipv4.ip_local_port_range  = 9000 65500
net.core.rmem_default =  262144
net.core.rmem_max =  4194304
net.core.wmem_default =  262144
net.core.wmem_max =  1048576
vm.min_free_kbytes = 1048576
net.ipv4.conf.enp0s8.rp_filter  = 2

--Apply the changes
sysctl -p /etc/sysctl.conf

Notes:

1. rp_filter controls validation of a packet's source address: 0 disables the check, 1 is strict, 2 is loose. With multiple private interconnect interfaces the parameter must be 0 or 2. For details see: rp_filter for multiple private interconnects OL7 (Doc ID 2216652.1)
2. min_free_kbytes is the amount of memory the system keeps in reserve, in KB; we recommend reserving 1 GB.
3. shmmax is the maximum size of a single shared memory segment; for an Oracle database this effectively bounds the SGA. Set it generously; even a value above physical memory does no harm.
4. shmall is the upper limit on the total of all shared memory segments; on a host with a single database instance it can be set to shmmax/pagesize. The page size comes from: [root@mm1904 ~]# getconf PAGESIZE → 4096.
5. fs.aio-max-nr caps the number of open asynchronous I/O handles; 1048576 is only the minimum for an Oracle installation. Each process opens a different number of AIO handles, the largest observed being 4096, so we recommend setting this parameter to (the total number of Oracle processes of all instances on the host) * 4096. See: What value should kernel parameter AIO-MAX-NR be set to? (Doc ID 2229798.1)

Set user shell limits

On both nodes, as root, edit /etc/security/limits.conf and append:

grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
oracle hard memlock 6291456
oracle soft memlock 6291456

memlock sets how much memory the user may lock at runtime, in KB. It can also be left unset for now and configured later when HugePages are enabled.
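The memlock figure above (6291456 KB = 6 GB) must cover the SGA once HugePages are enabled. A sketch of the usual sizing arithmetic, with a 4 GB SGA assumed purely as an example (Oracle's hugepages_settings.sh from Doc ID 401749.1 computes this from the running instances):

sga_mb=4096                                               # example SGA size, adjust to yours
hp_kb=$(awk '/^Hugepagesize/ {print $2}' /proc/meminfo)   # 2048 on x86_64
echo "vm.nr_hugepages >= $(( sga_mb * 1024 / hp_kb ))"    # pages needed to hold the SGA
echo "memlock (KB)    >= $(( sga_mb * 1024 ))"            # must be at least the SGA size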

2.8 Set user environment variables on all nodes

grid user on node 1:

export ORACLE_SID=+ASM1;
export ORACLE_BASE=/u01/app/grid;
export ORACLE_HOME=/u01/app/19.3.0/grid;
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;

grid user on node 2:

export ORACLE_SID=+ASM2;
export ORACLE_BASE=/u01/app/grid;
export ORACLE_HOME=/u01/app/19.3.0/grid;
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;

oracle user on node 1:

export ORACLE_SID=C1931;
export ORACLE_BASE=/u01/app/oracle;
export ORACLE_HOME=/u01/app/oracle/product/19.3.0/db_1;
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;

oracle user on node 2:

export ORACLE_SID=C1932;
export ORACLE_BASE=/u01/app/oracle;
export ORACLE_HOME=/u01/app/oracle/product/19.3.0/db_1;
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
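These exports only last for the current shell. To make them permanent, append them to each user's ~/.bash_profile; for example, for grid on node 1 (repeat with the matching values for the other user and node):

cat >> /home/grid/.bash_profile <<'EOF'
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19.3.0/grid
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
EOF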

3. GI (Grid Infrastructure) installation

3.1 Unzip the GI package

[grid@mm1903 grid]$ pwd
/u01/app/19.3.0/grid
[grid@mm1903 grid]$ unzip /usr/local/src/resource/LINUX.X64_193000_grid_home.zip

3.2 Install and configure Xmanager

The installer needs a graphical display. Here Xmanager is used: start Xmanager - Passive before running the commands below, then set the DISPLAY variable in the SecureCRT session so the GUI can be called directly. The address 192.168.56.1 below is the machine running the X display.

export DISPLAY=192.168.56.1:0.0

3.3 Permissions on the shared LUNs

Binding and permissions were already completed in section 1.3 (Shared storage planning), so nothing more needs to be done here.

[root@mm1903 ~]# ll /dev/sd?2*
brw-rw---- 1 root disk     8,  2 Mar 19 13:06 /dev/sda2
brw-rw---- 1 grid asmadmin 8, 18 Mar 19 13:25 /dev/sdb2
brw-rw---- 1 grid asmadmin 8, 34 Mar 19 13:25 /dev/sdc2
brw-rw---- 1 grid asmadmin 8, 50 Mar 19 13:25 /dev/sdd2
brw-rw---- 1 grid asmadmin 8, 66 Mar 19 13:25 /dev/sde2
brw-rw---- 1 grid asmadmin 8, 82 Mar 19 13:20 /dev/sdf2
brw-rw---- 1 grid asmadmin 8, 98 Mar 19 13:25 /dev/sdg2
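Optionally, before launching the GUI, run Oracle's Cluster Verification Utility from the unzipped Grid home; it performs essentially the same prerequisite checks as installer step 16 and catches problems early:

[grid@mm1903 grid]$ ./runcluvfy.sh stage -pre crsinst -n mm1903,mm1904 -verbose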

3.4 Configure GI through the Xmanager GUI

Then run the following commands in sequence:

[grid@mm1903 ~]$ export DISPLAY=192.168.56.1:0.0
[grid@mm1903 ~]$ xhost +
access control disabled, clients can connect from any host
xhost:  must be on local machine to enable or disable access control.
[grid@mm1903 ~]$ cd $ORACLE_HOME
[grid@mm1903 grid]$ ./gridSetup.sh

GI configuration changed somewhat starting with 12c R2, and 19c follows the same flow. The screenshots below walk through the graphical setup; the installer window appears after a few seconds:

Step 1 of 9: choose the first option, configure a new cluster, then click Next:

Step 2 of 9: choose the first option, Standalone Cluster (the traditional RAC model), then click Next:

Step 3 of 17: enter the cluster name, SCAN name, and related details; the cluster name must not exceed 15 characters. GNS is not used here, so make sure it stays unchecked. Next:

Step 4 of 17: only the local node is listed at first; click Add to add the other node, as shown.

It is also worth clicking SSH connectivity here to test connectivity between the nodes; if user equivalence was misconfigured earlier, it can be reconfigured on this screen. Once confirmed, move on:

 

Step 5 of 17: per the plan, set the 192.168.56.0 subnet to Public and the 20.20.20.0 subnet to ASM & Private; mark unused subnets Do Not Use. Next:

 

Step 6 of 17: choose Flex ASM, then continue:

 

Step 7 of 17: starting with Oracle 19c the GIMR is optional, and in practice few users need it. Choose No here, skipping GIMR creation, then continue to the ASM disk group configuration:

 

Step 8 of 17: no disks are listed by default; click Change Discovery Path and set the discovery path to /dev/asm-disk* so the candidate disks appear. Create the SYS disk group for the OCR and voting disks with Normal redundancy; no failure groups need be chosen. Note: do not tick Configure Oracle ASM Filter Driver. The stock kernel does not support ASMFD, and the next step will error out if it is selected; to insist on ASMFD you must first upgrade the kernel and apply GI patch 27494830:

 

Step 9 of 17: enter the passwords; using the same password for all accounts is fine here.

 

Step 10 of 17: no IPMI; Next:

 

Step 11 of 17: skip Cloud Control for now.

 

Step 12 of 17: map the operating system groups:

 

Step 13 of 17: specify the Oracle Base:

 

Step 14 of 17: keep the default inventory directory. Next:

 

Step 15 of 17: choose whether the root.sh scripts should be run automatically later; automatic execution requires entering the root password. Here we choose not to run them automatically, then click Next:

 

Step 16 of 17:
Every issue flagged on this screen must be examined carefully; click Ignore All only once you have confirmed the findings really can be ignored. If missing RPM packages are reported, install them with yum first.
This is my own low-spec test environment, so the memory and DNS checks fail; that should not happen in production.
You can click Fix & Check Again to generate a script and run it as root to repair fixable issues.
Once the remaining findings are confirmed ignorable, click Ignore All and continue:

 

Step 17 of 17: this is a summary; review it one last time and, if all is well, click Install to start the installation:

 

During the installation:

 

The installation continues until the following window pops up:

 

This is the most critical point of a Grid installation: as prompted, run the two scripts as root on both nodes, in order. The first script behaves identically on the two nodes:

[root@mm1903 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@mm1904 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

The second script's output differs between the nodes, and it must be run in order. As root, run root.sh on node 1:

[root@mm1903 ~]# /u01/app/19.3.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/19.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option

Using configuration parameter file: /u01/app/19.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/mm1903/crsconfig/rootcrs_mm1903_2020-03-18_03-55-26PM.log
2020/03/18 15:55:52 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2020/03/18 15:55:52 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2020/03/18 15:55:52 CLSRSC-363: User ignored prerequisites during installation
2020/03/18 15:55:52 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2020/03/18 15:55:55 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2020/03/18 15:55:56 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2020/03/18 15:55:56 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2020/03/18 15:55:59 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2020/03/18 15:56:35 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2020/03/18 15:56:43 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2020/03/18 15:56:54 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2020/03/18 15:57:03 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2020/03/18 15:57:03 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2020/03/18 15:57:09 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2020/03/18 15:57:09 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2020/03/18 15:58:02 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2020/03/18 15:58:08 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2020/03/18 15:59:13 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2020/03/18 15:59:19 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.

ASM has been created and started successfully.

[DBT-30001] Successfully created disk groups. For details, see /u01/app/grid/cfgtoollogs/asmca/asmca-200318下午040017.log
2020/03/18 16:06:21 CLSRSC-482: Running command: '/u01/app/19.3.0/grid/bin/ocrconfig -upgrade grid oinstall'
CRS-4256: Updating the profile
Successful addition of voting disk be8e31c82d2e4f99bfd133b83e46ec15.
Successful addition of voting disk 8dac91231c6f4f6ebfebe323355bf358.
Successful addition of voting disk 0a42efb529af4f92bf4a4882811dd259.
Successfully replaced voting disk group with +SYS.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   be8e31c82d2e4f99bfd133b83e46ec15 (/dev/sdb2) [SYS]
 2. ONLINE   8dac91231c6f4f6ebfebe323355bf358 (/dev/sdc2) [SYS]
 3. ONLINE   0a42efb529af4f92bf4a4882811dd259 (/dev/sdd2) [SYS]
Located 3 voting disk(s).
2020/03/18 16:14:04 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2020/03/18 16:16:44 CLSRSC-343: Successfully started Oracle Clusterware stack
2020/03/18 16:16:44 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2020/03/18 16:20:26 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2020/03/18 16:24:07 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Run it as root on node 2:

[root@mm1904 ~]# /u01/app/19.3.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/19.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/mm1904/crsconfig/rootcrs_mm1904_2020-03-18_04-29-49PM.log
2020/03/18 16:31:55 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2020/03/18 16:31:55 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2020/03/18 16:31:55 CLSRSC-363: User ignored prerequisites during installation
2020/03/18 16:31:55 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2020/03/18 16:32:47 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2020/03/18 16:32:47 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2020/03/18 16:32:47 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2020/03/18 16:32:50 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2020/03/18 16:33:52 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2020/03/18 16:33:52 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2020/03/18 16:34:28 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2020/03/18 16:35:27 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2020/03/18 16:35:27 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2020/03/18 16:36:14 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2020/03/18 16:36:14 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2020/03/18 16:38:37 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2020/03/18 16:40:00 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2020/03/18 16:43:54 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2020/03/18 16:44:46 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
2020/03/18 16:46:09 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2020/03/18 16:49:44 CLSRSC-343: Successfully started Oracle Clusterware stack
2020/03/18 16:49:44 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2020/03/18 16:52:09 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2020/03/18 16:55:14 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Once the scripts complete, return to the GUI, click OK, and continue with the remaining steps:

 

At the end, the installer reports that the cluster verification did not pass; click Details to inspect. Here it fails only because the SCAN IPs are not registered in DNS, which can be ignored. Click OK to continue:

 

GI installation is now complete; click Close to exit.

 

3.5 Verify the cluster state with crsctl

As the grid user, run crsctl stat res -t to check the state of the cluster resources.

[root@mm1903 ~]# su - grid
Last login: Wed Mar 18 17:05:44 CST 2020
'abrt-cli status' timed out
[grid@mm1903 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       mm1903                   STABLE
               ONLINE  ONLINE       mm1904                   STABLE
ora.chad
               ONLINE  ONLINE       mm1903                   STABLE
               ONLINE  ONLINE       mm1904                   STABLE
ora.net1.network
               ONLINE  ONLINE       mm1903                   STABLE
               ONLINE  ONLINE       mm1904                   STABLE
ora.ons
               ONLINE  ONLINE       mm1903                   STABLE
               ONLINE  ONLINE       mm1904                   STABLE
ora.proxy_advm
               OFFLINE OFFLINE      mm1903                   STABLE
               OFFLINE OFFLINE      mm1904                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       mm1903                   STABLE
      2        ONLINE  ONLINE       mm1904                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       mm1904                   STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.SYS.dg(ora.asmgroup)
      1        ONLINE  ONLINE       mm1903                   STABLE
      2        ONLINE  ONLINE       mm1904                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       mm1903                   Started,STABLE
      2        ONLINE  ONLINE       mm1904                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       mm1903                   STABLE
      2        ONLINE  ONLINE       mm1904                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.mm1903.vip
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.mm1904.vip
      1        ONLINE  ONLINE       mm1904                   STABLE
ora.qosmserver
      1        ONLINE  INTERMEDIATE mm1903                   CHECK TIMED OUT,STAB
                                                             LE
ora.scan1.vip
      1        ONLINE  ONLINE       mm1904                   STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       mm1903                   STABLE
--------------------------------------------------------------------------------
[grid@mm1903 ~]$
[grid@mm1903 ~]$ crsctl stat res -t -init
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       mm1903                   Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.crf
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.crsd
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.cssd
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.ctssd
      1        ONLINE  ONLINE       mm1903                   ACTIVE:0,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.evmd
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.gipcd
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.gpnpd
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.mdnsd
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.storage
      1        ONLINE  ONLINE       mm1903                   STABLE
--------------------------------------------------------------------------------
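A few more quick post-install sanity checks (all standard GI commands):

[grid@mm1903 ~]$ crsctl check cluster -all    # CRS/CSS/EVM state on every node
[grid@mm1903 ~]$ olsnodes -n -s               # node names, numbers, and state
[grid@mm1903 ~]$ oifcfg getif                 # public and cluster_interconnect interfaces
[grid@mm1903 ~]$ srvctl config scan           # SCAN name and VIPs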

3.6 Testing cluster failover

With node 2 rebooting, check the state from node 1:

[grid@mm1903 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       mm1903                   STABLE
               ONLINE  OFFLINE      mm1904                   STABLE
ora.chad
               ONLINE  ONLINE       mm1903                   STABLE
               ONLINE  OFFLINE      mm1904                   STABLE
ora.net1.network
               ONLINE  ONLINE       mm1903                   STABLE
               ONLINE  ONLINE       mm1904                   STABLE
ora.ons
               ONLINE  ONLINE       mm1903                   STABLE
               ONLINE  ONLINE       mm1904                   STABLE
ora.proxy_advm
               OFFLINE OFFLINE      mm1903                   STABLE
               OFFLINE OFFLINE      mm1904                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       mm1903                   STABLE
      2        ONLINE  ONLINE       mm1904                   STOPPING
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  OFFLINE                               STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.SYS.dg(ora.asmgroup)
      1        ONLINE  ONLINE       mm1903                   STABLE
      2        ONLINE  OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       mm1903                   Started,STABLE
      2        ONLINE  OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       mm1903                   STABLE
      2        ONLINE  ONLINE       mm1904                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.mm1903.vip
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.mm1904.vip
      1        ONLINE  ONLINE       mm1904                   STOPPING
ora.qosmserver
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.scan1.vip
      1        ONLINE  OFFLINE                               STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       mm1903                   STABLE
--------------------------------------------------------------------------------

With node 1 rebooting, check the state from node 2:

[grid@mm1904 trace]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  OFFLINE      mm1904                   STARTING
ora.chad
               ONLINE  OFFLINE      mm1904                   STABLE
ora.net1.network
               ONLINE  ONLINE       mm1904                   STABLE
ora.ons
               ONLINE  OFFLINE      mm1904                   STARTING
ora.proxy_advm
               OFFLINE OFFLINE      mm1904                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  OFFLINE      mm1904                   STARTING
      3        ONLINE  OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  OFFLINE      mm1904                   STARTING
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  OFFLINE      mm1904                   STARTING
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  OFFLINE      mm1904                   STARTING
ora.SYS.dg(ora.asmgroup)
      1        OFFLINE OFFLINE                               STABLE
      2        ONLINE  ONLINE       mm1904                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       mm1904                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       mm1904                   STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       mm1904                   STABLE
ora.mm1903.vip
      1        ONLINE  INTERMEDIATE mm1904                   FAILED OVER,STABLE
ora.mm1904.vip
      1        ONLINE  ONLINE       mm1904                   STABLE
ora.qosmserver
      1        ONLINE  OFFLINE      mm1904                   STARTING
ora.scan1.vip
      1        ONLINE  ONLINE       mm1904                   STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       mm1904                   STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       mm1904                   STABLE
--------------------------------------------------------------------------------

Appendix: location of the cluster logs:

[grid@mm1903 ~]$ cd /u01/app/grid/diag/crs/mm1903/crs/trace/
[grid@mm1903 trace]$ ls -l alert.log 
-rw-rw---- 1 grid oinstall 30903 Mar 18 17:50 alert.log
[grid@mm1903 trace]$ tail -40f alert.log
ACFS-9549:     Kernel and command versions.
Kernel:
    Build version: 19.0.0.0.0
    Build full version: 19.3.0.0.0
    Build hash:    9256567290
    Bug numbers:   NoTransactionInformation
Commands:
    Build version: 19.0.0.0.0
    Build full version: 19.3.0.0.0
    Build hash:    9256567290
    Bug numbers:   NoTransactionInformation
2020-03-18 17:46:54.141 [CLSECHO(11621)]ACFS-9327: Verifying ADVM/ACFS devices.
2020-03-18 17:46:54.168 [CLSECHO(11633)]ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
2020-03-18 17:46:54.438 [CLSECHO(11660)]ACFS-9156: Detecting control device '/dev/ofsctl'.
2020-03-18 17:46:56.196 [CLSECHO(11815)]ACFS-9294: updating file /etc/sysconfig/oracledrivers.conf
2020-03-18 17:46:56.222 [CLSECHO(11827)]ACFS-9322: completed
2020-03-18 17:46:58.465 [CSSDMONITOR(11950)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 11950
2020-03-18 17:46:58.889 [OSYSMOND(11954)]CRS-8500: Oracle Clusterware OSYSMOND process is starting with operating system process ID 11954
2020-03-18 17:46:59.550 [CSSDAGENT(11995)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 11995
2020-03-18 17:47:00.561 [OCSSD(12079)]CRS-8500: Oracle Clusterware OCSSD process is starting with operating system process ID 12079
2020-03-18 17:47:01.977 [OCSSD(12079)]CRS-1713: CSSD daemon is started in hub mode
2020-03-18 17:47:11.078 [OCSSD(12079)]CRS-1707: Lease acquisition for node mm1903 number 1 completed
2020-03-18 17:47:12.364 [OCSSD(12079)]CRS-1621: The IPMI configuration data for this node stored in the Oracle registry is incomplete; details at (:CSSNK00002:) in /u01/app/grid/diag/crs/mm1903/crs/trace/ocssd.trc
2020-03-18 17:47:12.364 [OCSSD(12079)]CRS-1617: The information required to do node kill for node mm1903 is incomplete; details at (:CSSNM00004:) in /u01/app/grid/diag/crs/mm1903/crs/trace/ocssd.trc
2020-03-18 17:47:12.564 [OCSSD(12079)]CRS-1605: CSSD voting file is online: /dev/sdc2; details in /u01/app/grid/diag/crs/mm1903/crs/trace/ocssd.trc.
2020-03-18 17:47:12.696 [OCSSD(12079)]CRS-1605: CSSD voting file is online: /dev/sdd2; details in /u01/app/grid/diag/crs/mm1903/crs/trace/ocssd.trc.
2020-03-18 17:47:12.827 [OCSSD(12079)]CRS-1605: CSSD voting file is online: /dev/sdb2; details in /u01/app/grid/diag/crs/mm1903/crs/trace/ocssd.trc.
2020-03-18 17:47:14.236 [OCSSD(12079)]CRS-1601: CSSD Reconfiguration complete. Active nodes are mm1903 mm1904 .
2020-03-18 17:47:16.337 [OCSSD(12079)]CRS-1720: Cluster Synchronization Services daemon (CSSD) is ready for operation.
2020-03-18 17:47:16.871 [OCTSSD(15102)]CRS-8500: Oracle Clusterware OCTSSD process is starting with operating system process ID 15102
2020-03-18 17:47:19.080 [OCTSSD(15102)]CRS-2407: The new Cluster Time Synchronization Service reference node is host mm1904.
2020-03-18 17:47:19.081 [OCTSSD(15102)]CRS-2401: The Cluster Time Synchronization Service started on host mm1903.
2020-03-18 17:47:25.583 [OLOGGERD(16461)]CRS-8500: Oracle Clusterware OLOGGERD process is starting with operating system process ID 16461
2020-03-18 17:47:29.387 [ORAROOTAGENT(9074)]CRS-5019: All OCR locations are on ASM disk groups [SYS], and none of these disk groups are mounted. Details are at "(:CLSN00140:)" in "/u01/app/grid/diag/crs/mm1903/crs/trace/ohasd_orarootagent_root.trc".
2020-03-18 17:49:47.918 [CRSD(17390)]CRS-8500: Oracle Clusterware CRSD process is starting with operating system process ID 17390
2020-03-18 17:50:21.351 [CRSD(17390)]CRS-1012: The OCR service started on node mm1903.
2020-03-18 17:50:21.848 [CRSD(17390)]CRS-1201: CRSD started on node mm1903.
2020-03-18 17:50:23.272 [ORAAGENT(17586)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 17586
2020-03-18 17:50:23.647 [ORAROOTAGENT(17597)]CRS-8500: Oracle Clusterware ORAROOTAGENT process is starting with operating system process ID 17597
2020-03-18 17:50:42.458 [ORAAGENT(17707)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 17707
^C
[grid@mm1903 trace]$ 
[grid@mm1903 trace]$ pwd
/u01/app/grid/diag/crs/mm1903/crs/trace

4. Creating the remaining ASM disk groups

4.1 Creating disk groups with ASMCA

The GI installation created only one disk group, SYS, which stores the OCR and voting disks. We now create the remaining disk groups for database files.

Below is the process for creating the extra data and archive disk groups.

Before creating them, you can optionally scan the disks with kfod to make sure they are all discoverable.

Run on both nodes:

[grid@mm1903 ~]$ kfod disks=all
--------------------------------------------------------------------------------
 Disk          Size Path                                     User     Group   
================================================================================
   1:       1023 MB /dev/sdb2                                grid     asmadmin
   2:       1023 MB /dev/sdc2                                grid     asmadmin
   3:       1023 MB /dev/sdd2                                grid     asmadmin
   4:      12287 MB /dev/sde2                                grid     asmadmin
   5:      12287 MB /dev/sdf2                                grid     asmadmin
   6:      12287 MB /dev/sdg2                                grid     asmadmin
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME                                                          
================================================================================

Set the DISPLAY variable and run asmca to open the GUI:

[grid@mm1903 ~]$ export DISPLAY=192.168.56.1:0.0
[grid@mm1903 ~]$ xhost +
access control disabled, clients can connect from any host
xhost:  must be on local machine to enable or disable access control.
[grid@mm1903 ~]$ asmca

This takes a while, several minutes or so; the GUI then appears:

First comes the colorful 19c splash screen:

then the asmca main screen:

Click Create to build the FRA disk group (select /dev/asm-diske and /dev/asm-diskf),

choosing External redundancy (in production, External is only acceptable if the underlying storage is already RAID-protected):

 

As shown (one disk was missed on the first attempt and added separately afterwards):

 

Create the DATA1 disk group the same way (select /dev/asm-diskg):

Once done, the disk group list looks like this:

All the initially required disk groups now exist: SYS stores the OCR and voting disks, FRA the fast recovery area and archive logs, and DATA the data.
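The same disk groups can also be created without the GUI, in SQL*Plus as SYSASM; a sketch using the disks planned above (External redundancy, as chosen in the GUI):

# as grid, on a node running an ASM instance
sqlplus -S / as sysasm <<'EOF'
CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK '/dev/asm-diske', '/dev/asm-diskf';
CREATE DISKGROUP DATA1 EXTERNAL REDUNDANCY DISK '/dev/asm-diskg';
EOF
# a disk group created this way is mounted only locally; mount it on the other node too
srvctl start diskgroup -diskgroup FRA -node mm1904
srvctl start diskgroup -diskgroup DATA1 -node mm1904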

5. DB (Database) configuration

5.1 Unzip the DB package

Unzip the DB software as the oracle user:

[root@mm1903 ~]# su -  oracle
[oracle@mm1903 ~]$ cd  /u01/app/oracle/product/19.3.0/db_1
[oracle@mm1903 db_1]$ unzip /usr/local/src/resource/LINUX.X64_193000_db_home.zip

Since Oracle 18.3, the database software is likewise installed by unzipping straight into the Oracle Home. As with GI, it only needs to be unzipped on one node.

5.2 DB software installation

Set the DISPLAY variable, then start runInstaller to install the DB software.

[oracle@mm1903 ~]$  cd /u01/app/oracle/product/19.3.0/db_1
[oracle@mm1903 db_1]$  export DISPLAY=192.168.1.102:0.0
[oracle@mm1903 db_1]$  xhost +
access control disabled,  clients can connect from any host
[oracle@mm1903 db_1]$  ./runInstaller
Launching Oracle Database  Setup Wizard...

After a long wait, the following screen appears:

Note: for a RAC installation, be sure to choose Set Up Software Only and create the database afterwards with DBCA. Unlike earlier releases, the database can no longer be created during the software installation.

Choose the RAC installation and continue:

Select the nodes on which to install the RAC software. If oracle user equivalence has not been configured yet, the SSH connectivity button on this screen will do it. Then continue:

As shown:

Choose Enterprise Edition and continue:

Set the installation directories; if the oracle user's environment variables are correct, Oracle Base and Oracle Home are filled in automatically and the defaults can be kept.

Map the privilege groups; defaults again.

Choose whether to run root.sh automatically; here we do not. Click Next to run the pre-install checks:

Any precheck findings need careful review; for configuration problems you can click Fix & Check Again to generate a script for root to run and fix them automatically.

Finally, once the remaining findings are confirmed ignorable, click Ignore All:

Click Next to start the installation:

until the following screen appears:

Run the root script, then return to the installer and let it finish:

The script output:

[root@mm1903 ~]# /u01/app/oracle/product/19.3.0/db_1/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/19.3.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@mm1903 ~]# 




[root@mm1904 ~]# /u01/app/oracle/product/19.3.0/db_1/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/19.3.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@mm1904 ~]#

Click Close:

The DB software installation is now complete.

5.3 Creating the CDB database

Log in as oracle, set DISPLAY, and run dbca to start the GUI:

[oracle@mm1903 db_1]$ export DISPLAY=192.168.56.1:0.0
 [oracle@mm1903 db_1]$ xhost +
access control disabled, clients can connect from any host
xhost:  must be on local machine to enable or disable access control.
[oracle@mm1903 db_1]$ dbca

The initial screen:

Choose Create a database and continue:

Keep the default advanced mode and continue:

Choose the management style and the General Purpose or Transaction Processing template, then click Next:

Select the nodes on which to create the database, then continue:

Enter the database name and choose to create a Container database.

Select "Use Local Undo tablespace for PDBs" so each PDB gets its own undo tablespace.

A PDB can be created at the same time, or the Container Database can start out empty; here no PDB is created yet.

Note: use OMF to manage the data files here. Creating the database without OMF fails midway with the error below:

Error while restoring PDB backup piece

This happens because the installer cannot create the pdbseed directory; the workaround is to create the directory structure +DATA1/C193/pdbseed in ASM yourself first.
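The directory can be created ahead of time with asmcmd as the grid user; a short sketch matching the path above (create the parent first):

[grid@mm1903 ~]$ asmcmd mkdir +DATA1/C193
[grid@mm1903 ~]$ asmcmd mkdir +DATA1/C193/pdbseed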

Then continue:

On this screen, leave the flash recovery area and archiving disabled for now, then click Next:

Leave Vault and Label Security off, then continue:

Specify the SGA and PGA sizes; as a rule of thumb, give SGA plus PGA about half of physical memory.

The database block size (8k) cannot be changed. The processes parameter can be; note that with a small SGA, processes must not be set too high or the instance will fail to start. It can also be raised manually after the database is created.

Choose the AL32UTF8 character set. ZHS16GBK is not recommended: production databases on ZHS16GBK sooner or later run into rare-character problems.

Choose dedicated server mode:

Skip the CVU check and Cloud Control registration for now; click Next

Enter the sys and system passwords; the same password is used for both here. Next:

On this screen, click the Customize Storage Locations button to customize the control files and redo logs:

Control files:

Configure at least three redo log groups per thread and size them generously; growing the logs after go-live is more troublesome.
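If the logs do need to grow after go-live, the usual route is to add larger groups and drop the old ones once they are no longer current. A sketch for thread 1; the group numbers and the 1 GB size are illustrative only:

sqlplus -S / as sysdba <<'EOF'
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 SIZE 1G;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 6 SIZE 1G;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 7 SIZE 1G;
-- switch out of an old group (ALTER SYSTEM SWITCH LOGFILE), then:
-- ALTER DATABASE DROP LOGFILE GROUP 1;
EOF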

After confirming everything, the prechecks start:

As with the software installation, configuration problems can be fixed by clicking Fix & Check Again to generate a repair script for root to run.

Findings that can safely be ignored are dismissed with Ignore All; then click Next to start:

The CDB creation summary; if all is well, click Finish:

Database creation begins:

and runs to completion:

Once the database is created, the password management screen appears; click Close.

5.4 Creating the PDB

Log in as oracle, set DISPLAY, and start dbca again:

Choose Manage Pluggable databases and continue:

Choose Create a Pluggable database, then continue:

On this page, pick the CDB in which to create the PDB and enter the CDB credentials; with OS authentication, the username and password can be left out. Continue:

Here the PDB can be created from the seed or from an unplugged PDB. Since there is no unplugged PDB to plug in, create it from the seed, then continue.

Enter the PDB name, its administrator, and the administrator's password, then continue:

The USERS tablespace can be skipped; click Next to start creating the PDB:

until it finishes:

Click Close:

Finally, confirm the PDB has been created and has registered with the listener:

[oracle@mm1903 ~]$ export NLS_LANG="AMERICAN_AMERICA.ZHS16GBK" 
[oracle@mm1903 ~]$ 
[oracle@mm1903 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 19 13:20:35 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> set linesize 100
SQL> col name for a20
SQL> select  name,open_mode from v$pdbs;

NAME                 OPEN_MODE
-------------------- --------------------
PDB$SEED             READ ONLY
P1931                READ WRITE

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
[oracle@mm1903 ~]$




[grid@mm1903 ~]$ lsnrctl

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 19-MAR-2020 13:21:50

Copyright (c) 1991, 2019, Oracle.  All rights reserved.

Welcome to LSNRCTL, type "help" for information.

LSNRCTL> status
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                19-MAR-2020 11:07:34
Uptime                    0 days 2 hr. 14 min. 17 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/19.3.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/mm1903/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.56)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.58)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_DATA1" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_FRA" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_SYS" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "C193" has 1 instance(s).
  Instance "C1931", status READY, has 1 handler(s) for this service...
Service "C193XDB" has 1 instance(s).
  Instance "C1931", status READY, has 1 handler(s) for this service...
Service "a12f2bfe86b26588e0550a00272f5be2" has 1 instance(s).
  Instance "C1931", status READY, has 1 handler(s) for this service...
Service "p1931" has 1 instance(s).
  Instance "C1931", status READY, has 1 handler(s) for this service...
The command completed successfully

At this point, the Oracle 19.3 RAC database has been created successfully.

If your organization is adopting the 12c family now, go straight to 19c (the terminal release of the 12c line, internally 12.2.0.3); 19c is more stable than 18c and Oracle supports it for longer.

5.5 Verify the cluster state with crsctl

Log in as grid and run crsctl stat res -t to view the cluster resources; the DB resource is now open.

[grid@mm1903 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       mm1903                   STABLE
ora.chad
               ONLINE  ONLINE       mm1903                   STABLE
ora.net1.network
               ONLINE  ONLINE       mm1903                   STABLE
ora.ons
               ONLINE  ONLINE       mm1903                   STABLE
ora.proxy_advm
               OFFLINE OFFLINE      mm1903                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       mm1903                   STABLE
      2        ONLINE  OFFLINE                               STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.DATA1.dg(ora.asmgroup)
      1        ONLINE  ONLINE       mm1903                   STABLE
      2        ONLINE  OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.FRA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       mm1903                   STABLE
      2        ONLINE  OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.SYS.dg(ora.asmgroup)
      1        ONLINE  ONLINE       mm1903                   STABLE
      2        ONLINE  OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       mm1903                   Started,STABLE
      2        ONLINE  OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       mm1903                   STABLE
      2        ONLINE  OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.c193.db
      1        ONLINE  ONLINE       mm1903                   Open,HOME=/u01/app/o
                                                             racle/product/19.3.0
                                                             /db_1,STABLE
ora.cvu
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.mm1903.vip
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.mm1904.vip
      1        ONLINE  INTERMEDIATE mm1903                   FAILED OVER,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       mm1903                   STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       mm1903                   STABLE
--------------------------------------------------------------------------------

Log in as oracle and connect with sqlplus / as sysdba:

[oracle@mm1903 ~]$ export NLS_LANG="AMERICAN_AMERICA.ZHS16GBK" 
[oracle@mm1903 ~]$ 
[oracle@mm1903 ~]$ 
[oracle@mm1903 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 19 13:22:55 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select inst_id, name, open_mode from gv$database;

   INST_ID NAME               OPEN_MODE
---------- ------------------ ----------------------------------------
         1 C193               READ WRITE

SQL> show con_id

CON_ID
------------------------------
1
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 P1931                          READ WRITE NO
SQL> alter session set container = p1931;

Session altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 P1931                          READ WRITE NO
SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DATA1/C193/A12F2BFE86B26588E0550A00272F5BE2/DATAFILE/system.280.1035464981
+DATA1/C193/A12F2BFE86B26588E0550A00272F5BE2/DATAFILE/sysaux.281.1035464983
+DATA1/C193/A12F2BFE86B26588E0550A00272F5BE2/DATAFILE/undotbs1.279.1035464981

5.6 Adding the second instance

Laptop resources were too tight to keep both VMs running while creating the PDB, so the RAC database was first created with a single node's instance; the second node's instance is added to the cluster afterwards.

Screenshots of adding the node's instance follow:

Choose Oracle RAC database instance management and click Next:

On this page choose Add an instance; Next:

This step selects the active cluster database; the default is fine:

The instance name is C1932; pick the correct node name and click Next:

Review the summary and click Finish.

The node's instance has now been added successfully.
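The same step can also be done without the GUI, using dbca in silent mode (documented dbca syntax; it will prompt for the SYS password):

[oracle@mm1903 ~]$ dbca -silent -addInstance \
    -nodeName mm1904 \
    -gdbName C193 \
    -instanceName C1932 \
    -sysDBAUserName sys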

 --------------------------------------------------------------------------------------------------------------------

All resources can now be seen to be healthy.

This completes the installation of Oracle 19.3 GI & RAC on Red Hat 7.6.

 

