Silent Installation of Oracle 12cR2 RAC HUB CDB on CentOS 7.6
1 Planning
- System information

| OS version | public IP | priv-ip | VIP | GI version | RDBMS version |
| CentOS 7.6 | 192.168.1.13 | 172.26.9.30 | 192.168.1.4 | 12.2.0.2 | 12.2.0.2 |
| CentOS 7.6 | 192.168.1.14 | 172.26.9.31 | 192.168.1.5 | 12.2.0.2 | 12.2.0.2 |

- ASM storage plan

| Purpose | Requirement | Actual raw-device size | Devices carved | Devices used | Redundancy |
| OCR+VOTE | >=5G | 10G | 5 | 3 | normal |
| +SYSTEM | >=200G | 500G | 1 | 1 | external |
| +ARCH | >=500G | 500G | 1 | 1 | external |
| +DATA | >=2T | 500G | 5 | 5 | external |
| +MGMT | >=100G | 500G | 1 | 1 | external |

The remaining raw devices are kept as spares and can be added later when needed.
- Note
  - +SYSTEM stores the database's own tablespaces, redo, undo, temporary tablespaces, and so on.
  - +ARCH stores archived logs; size it to cover the daily archive volume. For example, month-end and month-start billing and posting runs generate far more archive logs than usual.
  - +DATA stores business data.
  - This installation does not use a CDB. Once installed, it is used exactly like an 11g RAC.
2 Download the Software
OTN: the Oracle Database download page. After downloading, upload the files to the server:

[root@bossdb2 ~]# ls ~/
anaconda-ks.cfg  linuxx64_12201_database.zip  linuxx64_12201_grid_home.zip
p29963428_12201190716ACFSJUL2019RU_Linux-x86-64.zip  p6880880_180000_Linux-x86-64.zip

The database zip is the DBMS installer, grid is the cluster (GI) installer, p6880880 is the latest OPatch, and p29963428 is the patch for Oracle 12.2 on CentOS 7.6. Without it, GI installation may fail with errors such as:

- AFD-620: AFD is not supported on this operating system version: 'centos-release-7-6.1810.2.el7.centos.x86_64'
- AFD-9201: Not Supported
3 Installation Preparation
Note: unless stated otherwise, perform all of the following configuration on both nodes.
3.1 Install Package Dependencies
yum install -y binutils compat-libcap1 compat-libstdc++-33 compat-libstdc++-33.i686 \
    gcc gcc-c++ glibc glibc.i686 glibc-devel glibc-devel.i686 ksh \
    libgcc libgcc.i686 libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 \
    libaio libaio.i686 libaio-devel libaio-devel.i686 \
    libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686 libXau libXau.i686 \
    libxcb libxcb.i686 libXi libXi.i686 make sysstat unixODBC unixODBC-devel \
    readline libtermcap-devel pdksh
3.2 Edit the hosts File
# Add the following entries to /etc/hosts on both hosts:
vim /etc/hosts
# public ip
192.168.1.13 bossdb1
192.168.1.14 bossdb2
# oracle vip
192.168.1.6 bossdb1-vip
192.168.1.7 bossdb2-vip
# oracle priv-ip
172.26.9.30 bossdb1-priv
172.26.9.31 bossdb2-priv
# oracle scan-ip (the name must match the scanName used in the response file later)
192.168.1.4 racscan
192.168.1.5 racscan
3.3 Disable SELinux and the Firewall
setenforce 0
firewall-cmd --set-default-zone=trusted
systemctl stop firewalld
systemctl disable firewalld
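Note that setenforce 0 only disables SELinux enforcement until the next reboot. A minimal sketch of making it permanent, assuming the stock /etc/selinux/config layout:

```shell
# Persist the SELinux change across reboots; 'setenforce 0' alone is lost on restart.
# 'permissive' keeps logging; 'disabled' turns SELinux off entirely.
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config    # confirm the edit took effect
```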
3.4 Create Groups and Users
# Create the users and groups on both nodes:
groupadd -g 5001 oinstall
groupadd -g 5002 dba
groupadd -g 5003 oper
groupadd -g 5004 backupdba
groupadd -g 5005 dgdba
groupadd -g 5006 kmdba
groupadd -g 5007 asmdba
groupadd -g 5008 asmoper
groupadd -g 5009 asmadmin
useradd -u 601 -g oinstall -G asmadmin,asmdba,dba,asmoper grid
useradd -u 602 -g oinstall -G dba,backupdba,dgdba,kmdba,asmadmin,oper,asmdba oracle
echo "grid" | passwd --stdin grid
echo "oracle" | passwd --stdin oracle
Set the users' environment variables:
bossdb1:

su - grid
cat >> .bash_profile <<EOF
ORACLE_BASE=/g01/app/grid
ORACLE_HOME=/g01/app/12.2.0
ORACLE_SID=+ASM1
JAVA_HOME=\$ORACLE_HOME/jdk
PATH=\$JAVA_HOME/bin:\$PATH:\$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_SID PATH JAVA_HOME
umask 022
EOF

su - oracle
cat >> .bash_profile <<EOF
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=\$ORACLE_BASE/product/12.2.0/dbhome1
ORACLE_SID=boss1
JAVA_HOME=\$ORACLE_HOME/jdk
PATH=\$JAVA_HOME/bin:\$PATH:\$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_SID PATH JAVA_HOME
umask 022
EOF

bossdb2:

su - grid
cat >> .bash_profile <<EOF
ORACLE_BASE=/g01/app/grid
ORACLE_HOME=/g01/app/12.2.0
ORACLE_SID=+ASM2
JAVA_HOME=\$ORACLE_HOME/jdk
PATH=\$JAVA_HOME/bin:\$PATH:\$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_SID PATH JAVA_HOME
umask 022
EOF

su - oracle
cat >> .bash_profile <<EOF
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=\$ORACLE_BASE/product/12.2.0/dbhome1
ORACLE_SID=boss2
JAVA_HOME=\$ORACLE_HOME/jdk
PATH=\$JAVA_HOME/bin:\$PATH:\$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_SID PATH JAVA_HOME
umask 022
EOF
3.5 Create Directories
mkdir -p /g01/app/grid
mkdir -p /g01/app/12.2.0
mkdir -p /g01/app/oraInventory
chown -R grid:oinstall /g01
mkdir -p /u01/app/oracle/product/12.2.0/dbhome1
mkdir -p /u01/app/oraInventory
chown -R oracle:oinstall /u01
chmod -R 775 /u01
3.6 Adjust OS Parameters
vim /etc/security/limits.d/99-grid-oracle-limits.conf
#ORACLE SETTING
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
MEMTOTAL=$(free -b | sed -n '2p' | awk '{print $2}')
SHMMAX=$(expr $MEMTOTAL \* 4 / 5)
SHMMNI=4096
SHMALL=$(expr $MEMTOTAL / 4096)
cp /etc/sysctl.conf /etc/sysctl.conf.bak
cat >> /etc/sysctl.conf << EOF
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmax = $SHMMAX
kernel.shmall = $SHMALL
kernel.shmmni = $SHMMNI
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
kernel.panic_on_oops = 1
EOF
# kernel.shmmax: larger than the shared memory area you need, smaller than physical memory
# kernel.shmall: physical memory / 4K page size, i.e. expressed in pages
Apply the settings:
sysctl -p
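As a quick sanity check on the arithmetic above (a standalone sketch, not from the original procedure): kernel.shmall is counted in pages, so shmall multiplied by the page size must be at least shmmax:

```shell
# Recompute the values and verify that shmall (in pages) can hold shmmax (in bytes).
PAGE_SIZE=$(getconf PAGE_SIZE)                        # 4096 on x86_64
MEMTOTAL=$(free -b | sed -n '2p' | awk '{print $2}')  # total RAM in bytes
SHMMAX=$(expr $MEMTOTAL \* 4 / 5)                     # 80% of RAM, bytes
SHMALL=$(expr $MEMTOTAL / $PAGE_SIZE)                 # all of RAM, pages
test $(expr $SHMALL \* $PAGE_SIZE) -ge $SHMMAX && echo "shm sizing OK"
```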
# When installing grid on CentOS 7.2 and later, this parameter must be changed or the installation fails
vim /etc/systemd/logind.conf
RemoveIPC=no
systemctl daemon-reload
systemctl restart systemd-logind
3.7 Configure Passwordless SSH (both nodes)
- grid user

Run on both nodes:

su - grid
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub grid@bossdb1
ssh-copy-id -i ~/.ssh/id_rsa.pub grid@bossdb2
ssh bossdb1 date
ssh bossdb2 date
ssh bossdb2-priv date
ssh bossdb1-priv date
- oracle user

Run on both nodes:

su - oracle
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub oracle@bossdb1
ssh-copy-id -i ~/.ssh/id_rsa.pub oracle@bossdb2
ssh bossdb1 date
ssh bossdb2 date
ssh bossdb2-priv date
ssh bossdb1-priv date
3.8 Configure the Central Inventory
Configure this on the installing node only.
echo -e "inventory_loc=/g01/app/oraInventory\ninst_group=oinstall" > /etc/oraInst.loc
I misconfigured inventory_loc here and wasted a great deal of time chasing all kinds of bizarre errors.
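Since a mismatched inventory location is painful to debug, a quick cross-node comparison is worth the few seconds it takes (a hypothetical helper, using the hostnames configured above):

```shell
# The central inventory path must be identical on every node, or the
# installer later fails with confusing errors.
for h in bossdb1 bossdb2; do
    echo "== $h =="
    ssh "$h" cat /etc/oraInst.loc
done
```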
4 Multipath Configuration
After the storage LUNs are carved and mapped to the servers, the OS does not recognize the disks automatically; we have to rescan the SCSI bus manually and configure multipathing.
4.1 Scan for Disks
Run the following as root on both nodes:
for scsi_host in $(ls /sys/class/scsi_host/); do
    echo "- - -" > /sys/class/scsi_host/$scsi_host/scan
done
Afterwards, fdisk -l lists all attached disks.
4.2 Install and Enable Multipath
Run the following as root on both nodes:
# Install the multipath package
yum install -y device-mapper-multipath
# Load the multipath kernel modules
modprobe dm-multipath
modprobe dm-round-robin
# Start multipathd at boot
systemctl enable multipathd
- Verify the installation

[root@bossdb1 ~]# rpm -qa | grep device-mapper
device-mapper-persistent-data-0.7.3-3.el7.x86_64
device-mapper-event-libs-1.02.149-8.el7.x86_64
device-mapper-event-1.02.149-8.el7.x86_64
device-mapper-multipath-0.4.9-123.el7.x86_64
device-mapper-1.02.149-8.el7.x86_64
device-mapper-libs-1.02.149-8.el7.x86_64
device-mapper-multipath-libs-0.4.9-123.el7.x86_64
- Verify the modules are loaded

[root@bossdb1 ~]# lsmod | grep multipath
dm_multipath           27792  1 dm_round_robin
dm_mod                124407  25 dm_multipath,dm_log,dm_mirror
4.3 Configure Multipath
Multipath cannot be started right after the previous step, because its configuration file does not exist yet.
4.3.1 Generate the multipath Configuration File
As root on both nodes, generate the configuration file with /sbin/mpathconf --enable:
[root@bossdb2 ~]# ls /etc/multipath.conf
ls: cannot access /etc/multipath.conf: No such file or directory
[root@bossdb2 ~]# /sbin/mpathconf --enable
[root@bossdb2 ~]# ls /etc/multipath.conf
/etc/multipath.conf
4.3.2 Edit the Configuration File
Do this on one node only, as root.
Back up the sample file:
cp /etc/multipath.conf /etc/multipath-sample.conf
The edited configuration file looks like this:
blacklist_exceptions {
device {
vendor "NETAPP"
product "LUN C-Mode"
}
}
defaults {
user_friendly_names yes
find_multipaths yes
}
blacklist {
wwid 3600508b1001c205b7b67af3b895fda77
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
}
devices {
device {
vendor "NETAPP"
product "LUN C-Mode"
path_grouping_policy multibus
getuid_callout "/usr/lib/udev/scsi_id -g -u /dev/%n"
path_checker readsector0
path_selector "round-robin 0"
hardware_handler "0"
failback 15
rr_weight priorities
no_path_retry queue
}
}
Once this is in place and multipath is started, /dev/mapper is populated with symlinks pointing at the dm block devices under /dev/.
But since those devices are all named dm-N, which is inconvenient for Oracle, we will edit multipath.conf again and add aliases in the next section.
For how to find the vendor and product strings, see: Linux daily operations, 1.7 Viewing basic disk information.
4.3.3 Assign an Alias to Each Disk
Same node as the previous section, as root.
When multipath names devices automatically, it names them by WWID, as in this output:

# multipath -ll
..... snip .....
3600a098038304448642b504c49475830 dm-40 NETAPP ,LUN C-Mode
size=500G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=30 status=active
  |- 1:0:6:19 sdbq 68:64  active ready running
  |- 4:0:6:19 sdga 131:96 active ready running
  |- 1:0:7:19 sdct 70:16  active ready running
  `- 4:0:7:19 sdhd 133:48 active ready running
..... snip .....
- Check the disk size behind each WWID

# fdisk -l | grep "dev/mapper/3600a" | awk '{print $2" "$3}' | sort -k2n | awk -F '/' '{print $4}' | sed 's/://g'
..... snip .....
3600a098038304448642b504c4947576f 10.7
..... snip .....
3600a098038304448642b504c49475770 10.7
..... snip .....
3600a098038304448642b504c49475774 537.0
3600a098038304448642b504c4947577a 537.0
..... snip .....
On shared storage, the first N characters of every WWID are identical. Here 3600a is the common WWID prefix of the shared storage, which distinguishes those LUNs from local disks.
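The alias-generation script in the next step reads WWID/size pairs from /tmp/uuids, so the pipeline above needs to be captured into that file first. A sketch, assuming the same 3600a prefix and fdisk output layout:

```shell
# Save "wwid size" pairs to /tmp/uuids for the alias-generation script below.
# The grep prefix and awk field positions depend on this particular storage.
fdisk -l | grep "dev/mapper/3600a" | awk '{print $2" "$3}' \
    | sort -k2n | awk -F '/' '{print $4}' | sed 's/://g' > /tmp/uuids
head -3 /tmp/uuids    # spot-check: one "wwid size" pair per line
```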
- Assign aliases based on disk size

Now that we know the size behind each WWID, we can assign the aliases: the 10G LUNs go to OCR+VOTEDISK, and the larger ones to data storage. The following script gives each disk an alias:

echo "multipaths {" >> /etc/multipath.conf
ocr_count=1
data_count=1
while read a b
do
    if [[ $(echo "$b > 11" | bc) -eq 1 ]]; then
        echo "    multipath {
        wwid $a
        alias asm-data$data_count
        path_grouping_policy multibus
        path_selector \"round-robin 0\"
        failback auto
        rr_weight priorities
        no_path_retry 5
    }" >> /etc/multipath.conf
        let data_count++
    else
        echo "    multipath {
        wwid $a
        alias asm-ocr$ocr_count
        path_grouping_policy multibus
        path_selector \"round-robin 0\"
        failback auto
        rr_weight priorities
        no_path_retry 5
    }" >> /etc/multipath.conf
        let ocr_count++
    fi
done < /tmp/uuids
echo "}" >> /etc/multipath.conf

Copy this configuration file to the other node in the cluster:
scp /etc/multipath.conf bossdb2:/etc/multipath.conf
4.3.4 Apply the Configuration
Run systemctl restart multipathd as root on both nodes, or:
multipath -F   # flush the current multipath maps
multipath -v2  # regenerate the multipath maps
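After the reload, a quick count confirms that every alias from multipath.conf was picked up (a sketch; the expected number is the count of aliased LUNs):

```shell
# Every alias defined in the multipaths{} section should now be listed.
multipath -ll | grep -c '^asm-'
ls /dev/mapper/asm-* | wc -l
```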
5 Configure Disk Permissions with udev
Run as root.
5.1 Check That the udev Service Is Running
systemctl | grep udev
# or
systemctl list-units --type=service | grep udevd
The output looks like:
[root@bossdb2 dev]# systemctl start systemd-udevd
[root@bossdb2 dev]# systemctl list-units --type=service | grep udevd
systemd-udevd.service loaded active running udev Kernel Device Manager
[root@bossdb2 dev]# systemctl | grep udev
systemd-udev-trigger.service  loaded active exited  udev Coldplug all Devices
systemd-udevd.service         loaded active running udev Kernel Device Manager
systemd-udevd-control.socket  loaded active running udev Control Socket
systemd-udevd-kernel.socket   loaded active running udev Kernel Socket
5.2 Add the Permission Rule
cat >> /etc/udev/rules.d/90-oracleasm.rules<<EOF
ENV{DM_NAME}=="asm*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
EOF
5.3 Apply the Configuration
/sbin/udevadm trigger --type=devices --action=change
/sbin/udevadm control --reload
Make the configuration survive reboots:
cat >> /etc/rc.d/rc.local <<EOF
/sbin/udevadm trigger --type=devices --action=change
/sbin/udevadm control --reload
EOF
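One CentOS 7 gotcha worth guarding against here: /etc/rc.d/rc.local ships without the execute bit, so anything appended to it is silently skipped at boot unless the file is made executable:

```shell
# CentOS 7 does not run rc.local at boot unless the file itself is executable.
chmod +x /etc/rc.d/rc.local
ls -l /etc/rc.d/rc.local    # should now show execute permissions
```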
5.4 Verify the Permissions
# ls -la /dev/mapper/asm*
lrwxrwxrwx 1 root root 8 May 14 18:48 /dev/mapper/asm-data1 -> ../dm-39
lrwxrwxrwx 1 root root 8 May 14 18:48 /dev/mapper/asm-data10 -> ../dm-48
lrwxrwxrwx 1 root root 8 May 14 18:48 /dev/mapper/asm-data11 -> ../dm-49
lrwxrwxrwx 1 root root 8 May 14 18:48 /dev/mapper/asm-data12 -> ../dm-50
lrwxrwxrwx 1 root root 8 May 14 18:48 /dev/mapper/asm-data13 -> ../dm-25
lrwxrwxrwx 1 root root 8 May 14 18:48 /dev/mapper/asm-data14 -> ../dm-15
lrwxrwxrwx 1 root root 8 May 14 18:48 /dev/mapper/asm-data15 -> ../dm-16
....... snip .......
ls -la /dev/dm*
...... snip ......
brw-rw---- 1 grid asmadmin 253, 38 May 15 12:50 /dev/dm-38
brw-rw---- 1 grid asmadmin 253, 39 May 15 12:50 /dev/dm-39
...... snip ......
brw-rw---- 1 grid asmadmin 253, 48 May 15 12:50 /dev/dm-48
brw-rw---- 1 grid asmadmin 253, 49 May 15 12:50 /dev/dm-49
brw-rw---- 1 grid asmadmin 253, 50 May 15 12:50 /dev/dm-50
...... snip ......
As shown above, the configuration is in effect.
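With dozens of LUNs, a scripted check is less error-prone than eyeballing ls output. A hypothetical helper that flags any asm-* device not owned by grid:asmadmin:

```shell
# Resolve each asm-* symlink and report any backing device with wrong ownership.
bad=0
for link in /dev/mapper/asm-*; do
    dev=$(readlink -f "$link")
    owner=$(stat -c '%U:%G' "$dev")
    if [ "$owner" != "grid:asmadmin" ]; then
        echo "WRONG OWNER: $link -> $dev ($owner)"
        bad=1
    fi
done
[ "$bad" -eq 0 ] && echo "all asm devices owned by grid:asmadmin"
```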
6 Configure DNS
DNS is optional here; everything works fine without it. An example configuration follows, although I did not set it up this time. The biggest advantage of DNS is round-robin resolution of the SCAN IPs for high availability: when one SCAN IP is unreachable, clients are automatically directed to another.
On the other hand, DNS resolution requires the other servers to be pointed at the DNS server, and the applications must change their connection configuration. Weighing those knock-on effects, DNS resolution is not used in this installation.
# Install with yum
yum -y install unbound
yum install -y bind-utils

# Edit /etc/unbound/unbound.conf
vi /etc/unbound/unbound.conf
#   Around line 38, duplicate the line and uncomment the copy to listen on all interfaces:
#     38 # interface: 0.0.0.0
#     39 interface: 0.0.0.0
#   Around line 177 (commented out, and set to refuse by default), duplicate the line,
#   uncomment the copy, and change refuse to allow; then save and restart the service:
#     177 # access-control: 0.0.0.0/0 refuse
#     178 access-control: 192.168.10.0/24 allow
#     179 # access-control: 127.0.0.0/8 allow
#   Around line 155, duplicate the line, uncomment the copy, and change yes to no to
#   stop listening on IPv6:
#     155 # do-ip6: yes
#     156 do-ip6: no

# Create the zone file
cat > /etc/unbound/local.d/example.conf << EOF
local-zone: "example.com." static
local-data: "example.com. 86400 IN SOA ns.example.com. root 1 1D 1H 1W 1H"
local-data: "ns.example.com. IN A 192.168.10.166"
local-data: "orc1.example.com. IN A 192.168.10.166"
local-data: "orc12c-scan.example.com. IN A 192.168.10.170"
local-data: "orc12c-scan.example.com. IN A 192.168.10.171"
local-data: "orc12c-scan.example.com. IN A 192.168.10.172"
local-data-ptr: "192.168.10.170 orc12c-scan.example.com."
local-data-ptr: "192.168.10.171 orc12c-scan.example.com."
local-data-ptr: "192.168.10.172 orc12c-scan.example.com."
EOF

# Start and check the service
systemctl start unbound
systemctl restart unbound
systemctl status unbound
netstat -tunlp | grep unbound
7 Silent Installation Setup
7.1 Unpack
Note: for 12c, the grid archive must be unpacked into the $ORACLE_HOME directory itself.
# Run as the grid user
su - grid
mv linuxx64_12201_grid_home.zip $ORACLE_HOME/
cd $ORACLE_HOME
unzip linuxx64_12201_grid_home.zip
The cvuqdisk package also needs to be installed; it ships inside the grid archive.
[grid@bossdb1 12.2.0]$ find ./ -name cvuqdisk*
./cv/rpm/cvuqdisk-1.0.10-1.rpm
./cv/remenv/cvuqdisk-1.0.10-1.rpm
[grid@bossdb1 12.2.0]$ pwd
/g01/app/12.2.0
[grid@bossdb1 12.2.0]$ exit   # note: installing an rpm requires root privileges
logout
[root@bossdb1 log]# rpm -ivh /g01/app/12.2.0/cv/rpm/cvuqdisk-1.0.10-1.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:cvuqdisk-1.0.10-1                ################################# [100%]
Copy the package to the other node:
[root@bossdb1 ~]# scp /g01/app/12.2.0/cv/rpm/cvuqdisk-1.0.10-1.rpm bossdb2:~/
Log in to the other node and install it:
rpm -ivh cvuqdisk-1.0.10-1.rpm
7.2 Create the Silent-Install Response File
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v12.2.0
INVENTORY_LOCATION=/g01/app/oraInventory/
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/g01/app/grid
oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=asmoper
oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.gpnp.scanName=racscan    # the name the SCAN IPs resolve to; must be resolvable via /etc/hosts
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.ClusterConfiguration=STANDALONE    # not to be confused with a standalone node; this means the configuration is driven from a single node
oracle.install.crs.config.configureAsExtendedCluster=false   # whether to configure an Extended Cluster; false for a fresh RAC
oracle.install.crs.config.memberClusterManifestFile=
oracle.install.crs.config.clusterName=bossCluster            # cluster name
oracle.install.crs.config.gpnp.configureGNS=false            # do not configure GNS
oracle.install.crs.config.autoConfigureClusterNodeVIP=false  # do not auto-assign VIPs
oracle.install.crs.config.gpnp.gnsOption=
oracle.install.crs.config.gpnp.gnsClientDataFile=
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=
oracle.install.crs.config.sites=
oracle.install.crs.config.clusterNodes=bossdb1:bossdb1-vip:HUB,bossdb2:bossdb2-vip:HUB  # format: hostname:vip-name:HUB,hostname:vip-name:HUB, using names from /etc/hosts; for HUB, look up Flex ASM hub nodes vs leaf nodes
oracle.install.crs.config.networkInterfaceList=eno1:192.168.1.0:1,eno3:172.26.9.0:5    # format: nic:subnet:1,nic:subnet:5 where 1 = public IP, 5 = ASM & private IP
oracle.install.asm.configureGIMRDataDG=true                  # configure MGMT
oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE     # configure Flex ASM
oracle.install.crs.config.useIPMI=false                      # do not enable IPMI
oracle.install.crs.config.ipmi.bmcUsername=
oracle.install.crs.config.ipmi.bmcPassword=
oracle.install.asm.storageOption=ASM                         # ASM is the usual choice
oracle.install.asmOnNAS.ocrLocation=
oracle.install.asmOnNAS.configureGIMRDataDG=
oracle.install.asmOnNAS.gimrLocation=
oracle.install.asm.SYSASMPassword=Sys123ora                  # SYSASM password
oracle.install.asm.diskGroup.name=OCR                        # disk group for the cluster registry, usually named OCR
oracle.install.asm.diskGroup.redundancy=NORMAL               # disk group redundancy
oracle.install.asm.diskGroup.AUSize=4                        # ASM AU size, default 4M
oracle.install.asm.diskGroup.FailureGroups=                  # no failure groups configured here, so left empty
oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/mapper/asm-ocr1,ocr1,/dev/mapper/asm-ocr9,ocr2,/dev/mapper/asm-ocr8,ocr3
oracle.install.asm.diskGroup.disks=/dev/mapper/asm-ocr1,/dev/mapper/asm-ocr2,/dev/mapper/asm-ocr3
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/mapper/asm*
oracle.install.asm.monitorPassword=Sys123ora                 # password of the ASMSNMP account
oracle.install.asm.gimrDG.name=MGMT                          # ASM disk group for the GIMR (management repository)
oracle.install.asm.gimrDG.redundancy=EXTERNAL                # note: uppercase
oracle.install.asm.gimrDG.AUSize=4
oracle.install.asm.gimrDG.FailureGroups=
oracle.install.asm.gimrDG.disksWithFailureGroupNames=        # with EXTERNAL redundancy, failure group names must not be set
oracle.install.asm.gimrDG.disks=/dev/mapper/asm-data1
oracle.install.asm.gimrDG.quorumFailureGroupNames=
oracle.install.asm.configureAFD=false                        # AFD (a 12c new feature) is still not advisable in 12.2; consider it from 18c on. AFD is very strict about OS versions and still has plenty of bugs.
oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.config.omsHost=
oracle.install.config.omsPort=
oracle.install.config.emAdminUser=
oracle.install.config.emAdminPassword=
oracle.install.crs.rootconfig.executeRootScript=false
oracle.install.crs.rootconfig.configMethod=
oracle.install.crs.rootconfig.sudoPath=
oracle.install.crs.rootconfig.sudoUserName=
oracle.install.crs.config.batchinfo=
oracle.install.crs.app.applicationAddress=
8 Pre-installation Environment Check
$ORACLE_HOME/runcluvfy.sh stage -pre crsinst -n bossdb1,bossdb2 -verbose
The output log is omitted here; it is too long.
9 Silent GRID Installation
9.1 Install the Software
This step only installs the software; it does not configure the cluster. The real cluster configuration happens in the root.sh script the installer asks you to run afterwards. Run this on the installing node only.
${ORACLE_HOME}/gridSetup.sh -ignorePrereq -waitforcompletion -silent -responseFile ${ORACLE_HOME}/install/response/gridsetup.rsp
Alternatively, the same settings can be passed directly on the command line:
${ORACLE_HOME}/gridSetup.sh -skipPrereqs -waitforcompletion -ignoreInternalDriverError -silent \
-responseFile ${ORACLE_HOME}/install/response/gridsetup.rsp \
INVENTORY_LOCATION=/g01/app/oraInventory/ \
oracle.install.option=CRS_CONFIG \
ORACLE_BASE=/g01/app/grid \
oracle.install.asm.OSDBA=asmdba \
oracle.install.asm.OSOPER=asmoper \
oracle.install.asm.OSASM=asmadmin \
oracle.install.crs.config.gpnp.scanName=racscan \
oracle.install.crs.config.gpnp.scanPort=1521 \
oracle.install.crs.config.ClusterConfiguration=STANDALONE \
oracle.install.crs.config.configureAsExtendedCluster=false \
oracle.install.crs.config.clusterName=bossCluster \
oracle.install.crs.config.gpnp.configureGNS=false \
oracle.install.crs.config.autoConfigureClusterNodeVIP=false \
oracle.install.crs.config.clusterNodes=bossdb1:bossdb1-vip:HUB,bossdb2:bossdb2-vip:HUB \
oracle.install.crs.config.networkInterfaceList=eno1:192.168.1.0:1,eno3:172.26.9.0:5 \
oracle.install.asm.configureGIMRDataDG=true \
oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE \
oracle.install.crs.config.useIPMI=false \
oracle.install.asm.storageOption=ASM \
oracle.install.asm.SYSASMPassword=Sys123ora \
oracle.install.asm.diskGroup.name=crs \
oracle.install.asm.diskGroup.redundancy=NORMAL \
oracle.install.asm.diskGroup.AUSize=4 \
oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/mapper/asm-ocr1,ocr1,/dev/mapper/asm-ocr9,ocr2,/dev/mapper/asm-ocr8,ocr3 \
oracle.install.asm.diskGroup.disks=/dev/mapper/asm-ocr1,/dev/mapper/asm-ocr2,/dev/mapper/asm-ocr3 \
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/mapper/asm* \
oracle.install.asm.monitorPassword=Sys123ora \
oracle.install.asm.gimrDG.name=gimr \
oracle.install.asm.gimrDG.redundancy=EXTERNAL \
oracle.install.asm.gimrDG.AUSize=4 \
oracle.install.asm.gimrDG.disksWithFailureGroupNames=/dev/mapper/asm-data1, \
oracle.install.asm.gimrDG.disks=/dev/mapper/asm-data1 \
oracle.install.asm.configureAFD=false \
oracle.install.crs.configureRHPS=false \
oracle.install.crs.config.ignoreDownNodes=false \
oracle.install.config.managementOption=NONE \
oracle.install.crs.rootconfig.executeRootScript=false
The installer prints the log file location:
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the log of this install session at:
 /g01/app/oraInventory/logs/GridSetupActions2020-05-15_11-21-02PM/gridSetupActions2020-05-15_11-21-02PM.log
You can follow this log file during the installation to track its progress.
When the installation completes, it prints:
As a root user, execute the following script(s):
1. /g01/app/oraInventory/orainstRoot.sh
2. /g01/app/12.2.0/root.sh
Execute /g01/app/oraInventory/orainstRoot.sh on the following nodes:
[bossdb2]
Execute /g01/app/12.2.0/root.sh on the following nodes:
[bossdb1, bossdb2]
Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes.
9.2 Configure the Cluster
9.2.1 orainstRoot.sh
Run as root on the non-installing node (bossdb2).
sh /g01/app/oraInventory/orainstRoot.sh
The result looks like this:
[root@bossdb2 ~]# sh /g01/app/oraInventory/orainstRoot.sh
Changing permissions of /g01/app/oraInventory/.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /g01/app/oraInventory/ to oinstall.
The execution of the script is complete.
9.2.2 root.sh
This must complete successfully on the installing node first; only then run it on the other node(s).
- Installing node (bossdb1):
  sh /g01/app/12.2.0/root.sh
- Other node (bossdb2):
  sh /g01/app/12.2.0/root.sh
The script runs through 19 steps:
- install 'SetupTFA'
- install 'ValidateEnv'
- install 'CheckFirstNode'
- install 'GenSiteGUIDs'
- install 'SaveParamFile'
- install 'SetupOSD'
- install 'CheckCRSConfig'
- install 'SetupLocalGPNP'
- install 'ConfigOLR'
- install 'ConfigCHMOS'
- install 'CreateOHASD'
- install 'ConfigOHASD', adding a cluster entry to 'oracle-ohasd.service'
- install 'InstallAFD'
- install 'InstallACFS', then restart ohasd
- install 'InstallKA'
- install 'InitConfig': restart OHASD, start the various services, configure the voting disks, then stop CRS
- install 'StartCluster'
- install 'ConfigNode', mainly configuring the listeners
- install 'PostConfig'
I ran into trouble at this step: when Oracle configured the ASM network, it set up the same route as another NIC, so node 2's ASM instance could not reach the cluster and would not start.
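The duplicate-route condition described above can be spotted from the routing table: if two interfaces carry a route for the same destination subnet, the destination prefix appears twice. A sketch of the check (run on each node):

```shell
# List each IPv4 route's destination; a prefix printed by 'uniq -d' is
# routed via more than one entry and deserves a closer look.
ip -4 route show | awk '{print $1}' | sort | uniq -d
# For the full picture, inspect the table itself:
ip -4 route show
```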
9.2.3 Configure MGMT
Finally, one more place where silent GRID installation differs from the GUI: after root.sh has been run on both nodes, you still have to run the command below, as the GI installer instructed, to complete the mgmtdb configuration. Nothing after the root.sh scripts prompts for this step, so take care not to miss it.
Run the following as the grid user on the installing node:
$ORACLE_HOME/gridSetup.sh -executeConfigTools -silent -responseFile ${ORACLE_HOME}/install/response/gridsetup.rsp
The run reported errors. First check the status of the mgmt resource:
[grid@bossdb1 addnode]$ crsctl stat res ora.MGMT.dg -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.MGMT.dg
ONLINE ONLINE bossdb1 STABLE
ONLINE ONLINE bossdb2 STABLE
--------------------------------------------------------------------------------
Since ora.MGMT.dg is already ONLINE with state STABLE, the errors can be ignored; they turn out to be harmless.
10 Create the ASM Disk Groups
With a silent installation there are two ways to create ASM disk groups: SQL and ASMCA. ASMCA is the recommended, more robust option. Create the disk groups according to the plan:

| Purpose | Requirement | Actual raw-device size | Devices carved | Devices used | Redundancy |
| +SYSTEM | >=200G | 500G | 1 | 1 | external |
| +ARCH | >=500G | 500G | 1 | 1 | external |
| +DATA | >=2T | 500G | 5 | 5 | external |
| +MGMT | >=100G | 500G | 1 | 1 | external |
10.1 Using SQL
Syntax:
create diskgroup <groupname> {external|normal|high} redundancy disk '<device-path>'[, '<device-path>' ...];
su - grid
sqlplus / as sysasm

set lines 32767 pages 5000
col path for a40
create diskgroup arch external redundancy disk '/dev/mapper/asm-data3';
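The same pattern covers the remaining disk groups from the plan in section 1; +DATA, for example, spans five devices. The device paths below assume the asm-dataN alias scheme set up earlier, so adjust them to the actual mapping:

```shell
# Create the DATA disk group across five LUNs (run as grid on one node).
# Device paths are illustrative; match them to your alias assignments.
su - grid -c "sqlplus -S / as sysasm" <<'EOF'
create diskgroup data external redundancy
  disk '/dev/mapper/asm-data5', '/dev/mapper/asm-data6',
       '/dev/mapper/asm-data7', '/dev/mapper/asm-data8',
       '/dev/mapper/asm-data9';
exit
EOF
```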
10.2 Using ASMCA (Silent)
[grid@bossdb1 addnode]$ asmca -silent -createDiskGroup -diskGroupName SYSTEM -DISKLIST '/dev/mapper/asm-data4' -redundancy external -au_size 4 -compatible.asm '12.2' -sysAsmPassword Sys123ora

Disk groups created successfully. Check /g01/app/grid/cfgtoollogs/asmca/asmca-200516PM030212.log for details.
After creation, check the current disk groups:
col COMPATIBILITY for a13
col database_compatibility for a20
col name for a10
select group_number,name,state,COMPATIBILITY,DATABASE_COMPATIBILITY from v$asm_diskgroup;
GROUP_NUMBER NAME STATE COMPATIBILITY DATABASE_COMPATIBILITY
------------ ------------------------------ ----------- ------------------------------------------------------------ ------------------------------------------------------------
1 MGMT MOUNTED 12.2.0.1.0 10.1.0.0.0
2 OCR MOUNTED 12.2.0.1.0 10.1.0.0.0
3 DATA MOUNTED 12.2.0.0.0 10.1.0.0.0
4 ARCH MOUNTED 12.2.0.0.0 10.1.0.0.0
5 SYSTEM MOUNTED 12.2.0.0.0 10.1.0.0.0
10.3 View Disk and Disk Group Information
select group_number,name,state,total_mb,free_mb,type,offline_disks from v$asm_diskgroup;
GROUP_NUMBER NAME STATE TOTAL_MB FREE_MB TYPE OFFLINE_DISKS
------------ ---------- ----------- ---------- ---------- ------ -------------
1 ARCH MOUNTED 1024156 813981 EXTERN 0
2 DATA MOUNTED 3277268 649260 EXTERN 0
3 MGMT MOUNTED 512076 477828 EXTERN 0
4 OCR MOUNTED 30720 29852 NORMAL 0
5 SYSTEM MOUNTED 512076 489328 EXTERN 0
11 Silent DB Installation
11.1 Unpack the Archive
Note: unlike grid, the Oracle RDBMS archive does not need to be unpacked into $ORACLE_HOME; in fact, it must not be.
# Run as root
chown oracle:oinstall linuxx64_12201_database.zip
cp linuxx64_12201_database.zip /home/oracle/
# Run as the oracle user
su - oracle
unzip linuxx64_12201_database.zip
11.2 Configure the Response File
This response file installs the software and configures the database in one pass, so there is no separate install-then-dbca step. Pay attention to INVENTORY_LOCATION: GI and RDBMS must use the same inventory.
oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v12.2.0
oracle.install.option=INSTALL_DB_AND_CONFIG
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/g01/app/oraInventory
ORACLE_HOME=/u01/app/oracle/product/12.2/dbhome_1
ORACLE_BASE=/u01/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.OSDBA_GROUP=dba
oracle.install.db.OSOPER_GROUP=oper
oracle.install.db.OSBACKUPDBA_GROUP=backupdba
oracle.install.db.OSDGDBA_GROUP=dgdba
oracle.install.db.OSKMDBA_GROUP=kmdba
oracle.install.db.OSRACDBA_GROUP=racdba
oracle.install.db.rac.configurationType=
oracle.install.db.CLUSTER_NODES=bossdb1,bossdb2
oracle.install.db.isRACOneInstall=false
oracle.install.db.racOneServiceName=
oracle.install.db.rac.serverpoolName=
oracle.install.db.rac.serverpoolCardinality=
oracle.install.db.config.starterdb.type=GENERAL_PURPOSE
oracle.install.db.config.starterdb.globalDBName=boss
oracle.install.db.config.starterdb.SID=boss
oracle.install.db.ConfigureAsContainerDB=false
oracle.install.db.config.PDBName=    # empty: with the line above set to false, this creates an ordinary RAC database, not a CDB. For CDB+PDB, set the line above to true and provide the PDB name here.
oracle.install.db.config.starterdb.characterSet=ZHS16GBK
oracle.install.db.config.starterdb.memoryOption=false
oracle.install.db.config.starterdb.memoryLimit=51300
oracle.install.db.config.starterdb.installExampleSchemas=false
oracle.install.db.config.starterdb.password.ALL=Sys123ora
oracle.install.db.config.starterdb.password.SYS=
oracle.install.db.config.starterdb.password.SYSTEM=
oracle.install.db.config.starterdb.password.DBSNMP=
oracle.install.db.config.starterdb.password.PDBADMIN=
oracle.install.db.config.starterdb.managementOption=DEFAULT
oracle.install.db.config.starterdb.omsHost=
oracle.install.db.config.starterdb.omsPort=
oracle.install.db.config.starterdb.emAdminUser=
oracle.install.db.config.starterdb.emAdminPassword=
oracle.install.db.config.starterdb.enableRecovery=true
oracle.install.db.config.starterdb.storageType=ASM_STORAGE
oracle.install.db.config.starterdb.fileSystemStorage.dataLocation=
oracle.install.db.config.starterdb.fileSystemStorage.recoveryLocation=
oracle.install.db.config.asm.diskGroup=SYSTEM
oracle.install.db.config.asm.ASMSNMPPassword=Sys123ora
MYORACLESUPPORT_USERNAME=
MYORACLESUPPORT_PASSWORD=
SECURITY_UPDATES_VIA_MYORACLESUPPORT=
DECLINE_SECURITY_UPDATES=
PROXY_HOST=
PROXY_PORT=
PROXY_USER=
PROXY_PWD=
COLLECTOR_SUPPORTHUB_URL=
11.3 Run the Installation
$HOME/database/runInstaller -silent -skipPrereqs -responseFile $HOME/database/response/db_install.rsp
Example run:
[oracle@bossdb1 ~]$ $HOME/database/runInstaller -silent -skipPrereqs -responseFile $HOME/database/response/db_install.rsp
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 500 MB. Actual 23593 MB Passed
Checking swap space: must be greater than 150 MB. Actual 32191 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2020-05-16_06-23-30PM. Please wait ...[oracle@bossdb1 ~]$ You can find the log of this install session at:
/g01/app/oraInventory/logs/installActions2020-05-16_06-23-30PM.log
The installation of Oracle Database 12c was successful.
Please check '/g01/app/oraInventory/logs/silentInstall2020-05-16_06-23-30PM.log' for more details.
The Cluster Node Addition of /u01/app/oracle/product/12.2/dbhome_1 was successful.
Please check '/g01/app/oraInventory/logs/silentInstall2020-05-16_06-23-30PM.log' for more details.
As a root user, execute the following script(s):
1. /u01/app/oracle/product/12.2/dbhome_1/root.sh
Execute /u01/app/oracle/product/12.2/dbhome_1/root.sh on the following nodes:
[bossdb1, bossdb2]
Successfully Setup Software.
As install user, execute the following command to complete the configuration.
/home/oracle/database/runInstaller -executeConfigTools -responseFile /home/oracle/database/response/db_install.rsp [-silent]
Afterwards, the installer prompts you to run root.sh as root on both nodes, and then executeConfigTools on the installing node as the RDBMS owner (usually oracle). As instructed:
- root.sh

Node 1 (bossdb1):
[root@bossdb1 ~]# sh /u01/app/oracle/product/12.2/dbhome_1/root.sh
Check /u01/app/oracle/product/12.2/dbhome_1/install/root_bossdb1_2020-05-16_18-45-31-958812474.log for the output of root script
Node 2 (bossdb2):
[root@bossdb2 ~]# sh /u01/app/oracle/product/12.2/dbhome_1/root.sh
Check /u01/app/oracle/product/12.2/dbhome_1/install/root_bossdb2_2020-05-16_18-45-57-937516434.log for the output of root script
- executeConfigTools

[oracle@bossdb1 ~]$ /home/oracle/database/runInstaller -executeConfigTools -responseFile /home/oracle/database/response/db_install.rsp -silent
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 16204 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 32191 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2020-05-16_06-46-44PM. Please wait ...
You can find the logs of this session at:
/g01/app/oraInventory/logs

Successfully Configured Software.
11.4 Check the Cluster Status
The cluster now shows the db resource with state ONLINE, which confirms that everything so far has worked.
[grid@bossdb1 ~]$ crsctl stat res ora.boss.db -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.boss.db
1 ONLINE ONLINE bossdb1 Open,HOME=/u01/app/o
racle/product/12.2/d
bhome_1,STABLE
2 ONLINE ONLINE bossdb2 Open,HOME=/u01/app/o
racle/product/12.2/d
bhome_1,STABLE
--------------------------------------------------------------------------------
12 Install Patches
As of this writing, the latest patches are 30501932 (for grid) and 30593149 (for db); check MOS (Doc ID 2558817.1) for the latest 2020 Oracle 12c RU/RUR. Applying patches no longer requires stopping CRS or the database.
- Unpack the patches (as root)

# Update OPatch; run on both nodes
# for grid
cp /opt/p6880880_180000_Linux-x86-64.zip /g01/app/12.2.0/
cd /g01/app/12.2.0
rm -rf ./OPatch
unzip p6880880_180000_Linux-x86-64.zip
chown -R grid:oinstall ./OPatch
# for oracle rdbms
cp /opt/p6880880_180000_Linux-x86-64.zip /u01/app/oracle/product/12.2/dbhome_1/
cd /u01/app/oracle/product/12.2/dbhome_1/
rm -rf ./OPatch
unzip p6880880_180000_Linux-x86-64.zip
chown -R oracle:oinstall ./OPatch
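With OPatch refreshed, the RU itself is normally applied with opatchauto as root, one node at a time. This is only a sketch: the patch number comes from above, the unzip location /opt/30501932 is an assumption, and the authoritative steps are in the README shipped with each patch.

```shell
# Apply the GI RU (30501932) to the grid home as root; opatchauto patches
# the stack in a rolling fashion without a full cluster outage.
export PATH=$PATH:/g01/app/12.2.0/OPatch
opatchauto apply /opt/30501932 -oh /g01/app/12.2.0
```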
- Confirm the installed components
$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME
Oracle Interim Patch Installer version 12.2.0.1.21
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

Oracle Home       : /g01/app/12.2.0
Central Inventory : /g01/app/oraInventory
   from           : /g01/app/12.2.0/oraInst.loc
OPatch version    : 12.2.0.1.21
OUI version       : 12.2.0.1.4
Log file location : /g01/app/12.2.0/cfgtoollogs/opatch/opatch2020-05-17_16-06-38PM_1.log

Lsinventory Output file location : /g01/app/12.2.0/cfgtoollogs/opatch/lsinv/lsinventory2020-05-17_16-06-38PM.txt

--------------------------------------------------------------------------------
Local Machine Information::
Hostname: bossdb1
ARU platform id: 226
ARU platform description:: Linux x86-64

Installed Top-level Products (1):

Oracle Grid Infrastructure 12c  12.2.0.1.0
There are 1 products installed in this Oracle Home.

Installed Products (99):

Assistant Common Files  12.2.0.1.0
Automatic Storage Management Assistant  12.2.0.1.0
BLASLAPACK Component  12.2.0.1.0
Buildtools Common Files  12.2.0.1.0
Cluster Ready Services Files  12.2.0.1.0
Cluster Verification Utility Common Files  12.2.0.1.0
Cluster Verification Utility Files  12.2.0.1.0
Database Configuration and Upgrade Assistants  12.2.0.1.0
Database Migration Assistant for Unicode  12.2.0.1.0
Database SQL Scripts  12.2.0.1.0
Database Workspace Manager  12.2.0.1.0
DB TOOLS Listener  12.2.0.1.0
Deinstallation Tool  12.2.0.1.0
Expat libraries  2.0.1.0.3
Hadoopcore Component  12.2.0.1.0
HAS Common Files  12.2.0.1.0
HAS Files for DB  12.2.0.1.0
Installation Common Files  12.2.0.1.0
Installation Plugin Files  12.2.0.1.0
Installer SDK Component  12.2.0.1.4
Java Development Kit  1.8.0.91.0
LDAP Required Support Files  12.2.0.1.0
OLAP SQL Scripts  12.2.0.1.0
Oracle Advanced Security  12.2.0.1.0
Oracle Bali Share  11.1.1.6.0
Oracle Clusterware RDBMS Files  12.2.0.1.0
Oracle Configuration Manager Deconfiguration  10.3.1.0.0
Oracle Core Required Support Files  12.2.0.1.0
Oracle Core Required Support Files for Core DB  12.2.0.1.0
Oracle Database 12c  12.2.0.1.0
Oracle Database 12c Multimedia Files  12.2.0.1.0
Oracle Database Deconfiguration  12.2.0.1.0
Oracle Database Utilities  12.2.0.1.0
Oracle DBCA Deconfiguration  12.2.0.1.0
Oracle Extended Windowing Toolkit  11.1.1.6.0
Oracle Globalization Support  12.2.0.1.0
Oracle Globalization Support  12.2.0.1.0
Oracle Globalization Support For Core  12.2.0.1.0
Oracle Grid Infrastructure 12c  12.2.0.1.0
Oracle Grid Infrastructure Bundled Agents  12.2.0.1.0
Oracle Grid Management Database  12.2.0.1.0
Oracle Help for Java  11.1.1.7.0
Oracle Help Share Library  11.1.1.7.0
Oracle Ice Browser  11.1.1.7.0
Oracle Internet Directory Client  12.2.0.1.0
Oracle Java Client  12.2.0.1.0
Oracle JDBC/OCI Instant Client  12.2.0.1.0
Oracle JDBC/THIN Interfaces  12.2.0.1.0
Oracle JFC Extended Windowing Toolkit  11.1.1.6.0
Oracle JVM  12.2.0.1.0
Oracle JVM For Core  12.2.0.1.0
Oracle LDAP administration  12.2.0.1.0
Oracle Locale Builder  12.2.0.1.0
Oracle Multimedia  12.2.0.1.0
Oracle Multimedia Client Option  12.2.0.1.0
Oracle Multimedia Java Advanced Imaging  12.2.0.1.0
Oracle Multimedia Locator  12.2.0.1.0
Oracle Multimedia Locator Java Required Support Files  12.2.0.1.0
Oracle Multimedia Locator RDBMS Files  12.2.0.1.0
Oracle Net  12.2.0.1.0
Oracle Net Listener  12.2.0.1.0
Oracle Net Required Support Files  12.2.0.1.0
Oracle Netca Client  12.2.0.1.0
Oracle Notification Service  12.2.0.1.0
Oracle Notification Service for Instant Client  12.2.0.1.0
Oracle One-Off Patch Installer  12.2.0.1.6
Oracle Quality of Service Management (Server)  12.2.0.1.0
Oracle RAC Deconfiguration  12.2.0.1.0
Oracle RAC Required Support Files-HAS  12.2.0.1.0
Oracle Recovery Manager  12.2.0.1.0
Oracle Security Developer Tools  12.2.0.1.0
Oracle Text Required Support Files  12.2.0.1.0
Oracle Universal Connection Pool  12.2.0.1.0
Oracle Universal Installer  12.2.0.1.4
Oracle USM Deconfiguration  12.2.0.1.0
Oracle Wallet Manager  12.2.0.1.0
oracle.swd.commonlogging  13.3.0.0.0
oracle.swd.opatchautodb  12.2.0.1.5
oracle.swd.oui.core.min  12.2.0.1.4
Parser Generator Required Support Files  12.2.0.1.0
Perl Interpreter  5.22.0.0.0
Perl Modules  5.22.0.0.0
PL/SQL  12.2.0.1.0
PL/SQL Embedded Gateway  12.2.0.1.0
Platform Required Support Files  12.2.0.1.0
Precompiler Required Support Files  12.2.0.1.0
RDBMS Required Support Files  12.2.0.1.0
RDBMS Required Support Files for Instant Client  12.2.0.1.0
Required Support Files  12.2.0.1.0
Secure Socket Layer  12.2.0.1.0
SQL*Plus  12.2.0.1.0
SQL*Plus Files for Instant Client  12.2.0.1.0
SQL*Plus Required Support Files  12.2.0.1.0
SSL Required Support Files for InstantClient  12.2.0.1.0
Tomcat Container  12.2.0.1.0
Tracle File Analyzer  12.2.0.1.0
Universal Storage Manager Files  12.2.0.1.0
XDK Required Support Files  12.2.0.1.0
XML Parser for Java  12.2.0.1.0
There are 99 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

--------------------------------------------------------------------------------
OPatch succeeded.
-
Check OPatch conflicts
Since this is a freshly installed cluster, this step is not performed here. For reference, the commands are:
% $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30501932/30593149
% $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30501932/30585969
% $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30501932/30586063
% $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30501932/26839277
% $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/30501932/30591794
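Because the identical prereq check is repeated once per sub-patch, it can also be scripted. A minimal sketch (the sub-patch IDs are the ones from bundle 30501932 above; `PATCH_BASE` is an assumed unzip location, not something fixed by the installer):

```shell
# Emit one conflict-check command per sub-patch of bundle 30501932.
# PATCH_BASE is an assumed unzip location -- override it for your system.
PATCH_BASE="${PATCH_BASE:-/g01/app/patches/30501932}"
for p in 30593149 30585969 30586063 26839277 30591794; do
  # $ORACLE_HOME is kept literal here; the generated lines are meant to be
  # reviewed and then run in the grid owner's environment.
  printf '$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir %s/%s\n' "$PATCH_BASE" "$p"
done
```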
-
Space check
cat >> /tmp/patch_list_gihome.txt <<EOF
/g01/app/patches/30501932/30593149
/g01/app/patches/30501932/30585969
/g01/app/patches/30501932/30586063
/g01/app/patches/30501932/26839277
/g01/app/patches/30501932/30591794
EOF
$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
A sample run looks like this:
Oracle Interim Patch Installer version 12.2.0.1.21
Copyright (c) 2020, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /g01/app/12.2.0
Central Inventory : /g01/app/oraInventory
   from           : /g01/app/12.2.0/oraInst.loc
OPatch version    : 12.2.0.1.21
OUI version       : 12.2.0.1.4
Log file location : /g01/app/12.2.0/cfgtoollogs/opatch/opatch2020-05-17_16-20-21PM_1.log

Invoking prereq "checksystemspace"
Prereq "checkSystemSpace" passed.

OPatch succeeded.
-
Check for one-off patch conflicts (run as root):
/g01/app/12.2.0/OPatch/opatchauto apply /g01/app/patches/30501932 -analyze -oh /g01/app/12.2.0
-
Install the patches (run as root).
export PATH=$PATH:/g01/app/12.2.0/OPatch

# install on all instances
opatchauto apply /g01/app/patches/30501932

# local install only (this home, non-rolling)
opatchauto apply /g01/app/patches/30501932 -oh /g01/app/12.2.0 -nonrolling
13 Tuning
13.1 Parameter tuning
alter system set memory_max_target=50000M sid='*' scope=spfile;
alter system set memory_target=50000M sid='*' scope=spfile;
alter system set audit_trail='none' sid='*' scope=spfile;
alter system set db_files=2000 sid='*' scope=spfile;
alter system set deferred_segment_creation=false SCOPE=BOTH SID='*';  -- disable deferred segment creation
ALTER SYSTEM SET CONTROL_FILE_RECORD_KEEP_TIME=31 SCOPE=BOTH SID='*';  -- keep control file records for 31 days
ALTER SYSTEM SET MAX_DUMP_FILE_SIZE='2048M' SCOPE=BOTH SID='*';  -- cap trace/dump file size
ALTER SYSTEM SET PROCESSES=2048 SCOPE=SPFILE SID='*';  -- raise the maximum process count
ALTER SYSTEM SET "_UNDO_AUTOTUNE"=FALSE SCOPE=BOTH SID='*';  -- disable undo auto-tuning
ALTER SYSTEM SET "_USE_ADAPTIVE_LOG_FILE_SYNC"=FALSE SCOPE=BOTH SID='*';  -- disable adaptive log file sync
ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON=FALSE SCOPE=SPFILE SID='*';  -- disable case-sensitive password checks; this parameter is deprecated in 12.2 (needs a matching sqlnet.ora setting)
alter system set cursor_sharing=force scope=both sid='*';  -- force cursor sharing (bind-variable mode) to reduce hard parsing
alter system set result_cache_max_size=0 scope=both sid='*';  -- the result cache has fairly serious bugs that can hang the whole cluster
-- alter database enable block change tracking using file '+data';  -- block change tracking adds some overhead; decide based on past workload whether to enable it
-- alter system set "_resource_manager_always_on"=FALSE SCOPE=SPFILE SID='*';  -- used together with the parameter below in 11g; not needed in 12c, where the plan is managed via the scheduler package (see the feature section below)
-- alter system set "_resource_manager_always_off"=true scope=spfile sid='*';
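After the restart that the spfile-only settings require, the values can be spot-checked from SQL*Plus. A quick sketch (fragment; needs a running instance):

```sql
-- spot-check a few of the changed parameters in the running instance
show parameter processes
show parameter cursor_sharing
-- values already staged in the spfile before the restart
select name, value from v$spparameter where name in ('processes', 'memory_target');
```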
13.2 sqlnet.ora
After the database upgrade, older JDBC/ODBC clients may still need to connect, so compatibility must be configured to keep those logons working.
# Before 12c
cat $ORACLE_HOME/network/admin/sqlnet.ora
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8

# 12c and later
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8
SQLNET.ALLOWED_LOGON_VERSION_CLIENT=8
13.3 Enabling and disabling features
13.3.1 Enable archive logging
shutdown immediate
startup mount;
alter database archivelog;
alter database open;
alter system set
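A quick confirmation that archive logging is actually on (fragment; run in SQL*Plus on the open database):

```sql
-- expect LOG_MODE = ARCHIVELOG
select log_mode from v$database;
-- also shows the current archive destination and sequence numbers
archive log list
```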
13.3.2 Disable the resource manager plans
execute dbms_scheduler.set_attribute('WEEKNIGHT_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('WEEKEND_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('SATURDAY_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('SUNDAY_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('MONDAY_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('TUESDAY_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('WEDNESDAY_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('THURSDAY_WINDOW','RESOURCE_PLAN','');
execute dbms_scheduler.set_attribute('FRIDAY_WINDOW','RESOURCE_PLAN','');
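The calls above can be verified by checking that no plan remains attached to the maintenance windows (fragment; run in SQL*Plus):

```sql
-- all RESOURCE_PLAN values should now be empty
select window_name, resource_plan from dba_scheduler_windows;
```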
13.3.3 Enable and configure a profile
alter system set resource_limit=true scope=both sid='*';  -- enable resource limits, mainly to cut off applications that hold idle connections forever
alter profile default limit idle_time=180;  -- cap idle sessions at 180 minutes; adjust to the actual workload
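To confirm the limit took effect (fragment; run in SQL*Plus):

```sql
-- expect LIMIT = 180 for IDLE_TIME in the DEFAULT profile
select profile, resource_name, limit
  from dba_profiles
 where profile = 'DEFAULT' and resource_name = 'IDLE_TIME';
```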
Created: 2020-05-22 Fri 14:18
