3. Project Environment Preparation
3.1 Virtual Machine Configuration
- Version selection
A Linux operating system is required. For this project I chose Oracle Enterprise Linux 5.4.
- Memory
My machine has 8 GB of physical RAM. Since this lab needs three virtual machines, and the two cluster nodes need more memory, each cluster node gets 2 GB and the single-instance standby gets 1.5 GB.
- Network adapter type
Bridged networking easily causes IP conflicts, so I avoided it and used NAT and Host-Only adapters instead.
The two NICs are assigned as follows:
NAT: Public IP
Host Only: Private IP
Note that the firewalls on the host and on the virtual machines must be turned off so the machines can communicate freely with each other.
- Disk space allocation
Do not allocate all the disk space immediately; this avoids taking up the full size on the physical disk and leaves the space usable elsewhere. Here I give 50 GB to the root disk and store the virtual disk as a single file, which performs better.
- Add shared disks
Add three shared disks: asm1, asm2, and asm3. (Note: shared disks must not be created on the local SCSI bus; use SCSI 1:n rather than 0:n.) Select the independent-persistent mode, allocate the disk space immediately, and store each disk as a single file. Each disk is 3 GB and will hold database-related files.
In summary, the RAC build needs one local disk, two NICs, three shared disks, and an OEL 5.4 installation.
3.2 Installing the OEL 5.4 Operating System
- During partitioning, do not select all of the disks: format only sda, and leave sdb, sdc, and sdd untouched so they can be used as raw devices. This prevents the three shared disks from being claimed by the operating system.
- Hostname: rac1.example.com
eth0:192.168.23.100/255.255.255.0(public)
eth1:192.168.21.10/255.255.255.0(private)
gateway:192.168.23.1
- Disable the firewall and SELinux
Oracle does not recommend running iptables on the private interconnect.
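A minimal sketch of the commands typically used on OEL 5.x for this (assuming the default iptables service and SELinux configuration file locations); repeat on every node:
[root@rac1 ~]# service iptables stop          # stop the firewall now
[root@rac1 ~]# chkconfig iptables off         # keep it disabled after reboot
[root@rac1 ~]# setenforce 0                   # put SELinux in permissive mode for this session
[root@rac1 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # disable permanently (effective after reboot)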
- Install VMware Tools
This lets the mouse move freely between the virtual machine and the host and makes copying files between them easier.
3.3 Configuring the Operating System
- Set up a yum repository and install the required packages.
[root@rac1 ~]# vi /etc/yum.repos.d/server.repo
[server]
name=oel5.4
baseurl=file:///mnt/Server
gpgcheck=0
enabled=1
[root@rac1 ~]# yum install -y oracle-validate*
[root@rac1 ~]# yum install -y *asm*
- Configure the /etc/hosts file
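The document does not list the file contents. The sketch below shows a typical layout for this setup: the public and private addresses are the ones given above, while the VIP addresses (192.168.23.101/.201) and the SCAN address (192.168.23.110) are assumptions picked from the public subnet, resolved through /etc/hosts since no DNS server is used. The same entries go on both nodes:
[root@rac1 ~]# cat >> /etc/hosts <<'EOF'
# public
192.168.23.100  rac1.example.com       rac1
192.168.23.200  rac2.example.com       rac2
# private interconnect
192.168.21.10   rac1-priv.example.com  rac1-priv
192.168.21.20   rac2-priv.example.com  rac2-priv
# virtual IPs (assumed addresses)
192.168.23.101  rac1-vip.example.com   rac1-vip
192.168.23.201  rac2-vip.example.com   rac2-vip
# SCAN (assumed address)
192.168.23.110  rac-scan.example.com   rac-scan
EOF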
- Add users and groups
[root@rac1 ~]# userdel -r grid
userdel: user grid does not exist
[root@rac1 ~]# userdel -r oracle
[root@rac1 ~]# groupdel oinstall
[root@rac1 ~]# groupdel dba
[root@rac1 ~]# groupdel asmadmin
groupdel: group asmadmin does not exist
[root@rac1 ~]# groupdel oper
groupdel: group oper does not exist
[root@rac1 ~]# groupdel asmdba
groupdel: group asmdba does not exist
[root@rac1 ~]# groupdel asmoper
groupdel: group asmoper does not exist
[root@rac1 ~]# groupadd -g 1000 oinstall
[root@rac1 ~]# groupadd -g 1100 asmadmin
[root@rac1 ~]# groupadd -g 1200 dba
[root@rac1 ~]# groupadd -g 1201 oper
[root@rac1 ~]# groupadd -g 1300 asmdba
[root@rac1 ~]# groupadd -g 1301 asmoper
[root@rac1 ~]# useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper grid
[root@rac1 ~]# useradd -u 1101 -g oinstall -G dba,oper,asmdba oracle
[root@rac1 ~]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@rac1 ~]# passwd grid
- Configure the users' environment variables
grid user:
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"
export PATH=$ORACLE_HOME/bin:$PATH
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
oracle user:
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=racdb1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"
export PATH=$ORACLE_HOME/bin:$PATH
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
- Modify system parameters (user limits)
[root@rac1 ~]# vi /etc/security/limits.conf
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
- Create the required directories
[root@rac1 ~]# mkdir -p /u01/app/grid
[root@rac1 ~]# mkdir -p /u01/app/11.2.0/grid
[root@rac1 ~]# chown -R grid:oinstall /u01/
[root@rac1 ~]# mkdir -p /u01/app/oracle
[root@rac1 ~]# chown -R oracle:oinstall /u01/app/oracle
3.4 Creating the Second Node
- Modify the virtual machine configuration file to attach the shared disks
Go to the E:\RAC\rac1 directory, open the rac1.vmx file in Notepad, and edit it.
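The exact lines are not reproduced in this document; the following is only a sketch of the kind of entries commonly added on VMware Workstation to share pre-created disks on SCSI bus 1, with the disk file paths being assumptions:
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "E:\RAC\sharedisk\asm1.vmdk"
scsi1:0.mode = "independent-persistent"
The asm2 and asm3 disks get matching scsi1:1.* and scsi1:2.* entries. disk.locking = "FALSE" is what allows both virtual machines to open the same vmdk files at the same time.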
- Create the second machine
Shut down rac1 and copy all of the rac1 virtual machine files into a folder named rac2.
Open rac2 in VMware and choose "I copied it" when prompted.
1) Delete rac2's two network adapters and add them again, or regenerate their MAC addresses, then assign the correct IP addresses:
eth0:192.168.23.200/255.255.255.0(public)
eth1:192.168.21.20/255.255.255.0(private)
gateway:192.168.23.1
Note: both machines must have a default gateway.
2) Change rac2's hostname to rac2.example.com in /etc/sysconfig/network.
3) On rac2, change the oracle user's ORACLE_SID environment variable to racdb2 and the grid user's ORACLE_SID to +ASM2.
4) Then run hostname rac2.example.com so the new name takes effect in the current session.
Note: reboot rac2 after these changes to make sure the hostname is applied.
3.5 Installing and Configuring the Grid Software
- Partition the shared disks
[root@rac1 ~]# fdisk /dev/sdb
[root@rac1 ~]# fdisk /dev/sdc
[root@rac1 ~]# fdisk /dev/sdd
In each fdisk session enter n, p, 1, accept the default start and end cylinders, then w, so that each disk ends up with a single primary partition.
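The same keystrokes can also be fed to fdisk non-interactively; a hedged sketch, run on rac1 only since the disks are shared:
[root@rac1 ~]# for d in /dev/sdb /dev/sdc /dev/sdd; do echo -e "n\np\n1\n\n\nw" | fdisk $d; done
[root@rac1 ~]# partprobe    # re-read the partition tables
On rac2, run partprobe (or reboot) so the new partitions become visible there as well.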
- Configure oracleasm (run on both nodes, rac1 and rac2)
/etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
- Create the ASM disks (run on rac1)
[root@rac1 ~]# oracleasm createdisk VOL1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@rac1 ~]# oracleasm createdisk VOL2 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@rac1 ~]# oracleasm createdisk VOL3 /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@rac1 ~]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
- Scan for the ASM disks created on rac1 (run on rac2)
[root@rac2 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@rac2 ~]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
- Stop the ntp service on both nodes (removing /etc/ntp.conf lets Oracle's Cluster Time Synchronization Service take over time synchronization):
/etc/init.d/ntpd stop
/etc/init.d/ntpd status
chkconfig ntpd off
rm -rf /etc/ntp.conf
- Install the Grid Infrastructure
Copy the Grid software archive to rac1, set ownership on it, unzip it, and run the installer (on rac1):
[grid@rac1 grid]$ ./runInstaller
When the installer prompts for the root scripts, note the node order (here rac2, then rac1); all of the scripts must be executed as the root user.
Successful output from the first script (orainstRoot.sh):
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
You have mail in /var/spool/mail/root
[root@rac1 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
[... root.sh progress output truncated ...]
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 6f12fa9ffe274fc3bfa64e25d3e270de (/dev/oracleasm/disks/VOL1) [DATA]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac1'
CRS-2676: Start of 'ora.DATA.dg' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'rac1'
CRS-2676: Start of 'ora.registry.acfs' on 'rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
The warning reported at this step (typically the SCAN name-resolution check, since no DNS server is configured) can be ignored.
- Verify the installation
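The document does not show the checks themselves; a minimal sketch of commands commonly used to confirm the stack is up (run as the grid user):
[grid@rac1 ~]$ crsctl check cluster -all      # CRS, CSS and EVM status on every node
[grid@rac1 ~]$ crsctl stat res -t             # all cluster resources and where they run
[grid@rac1 ~]$ olsnodes -n                    # cluster membership
[grid@rac1 ~]$ srvctl status asm              # ASM instances on both nodes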
3.6 Creating the ASM Disk Groups
[grid@rac1 ~]$ asmca
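asmca creates the disk groups through a GUI. The same result can be reached from SQL*Plus as the grid user; a hedged sketch, assuming VOL2 and VOL3 are the disks intended for the DBDATA and RECOVER groups used later:
[grid@rac1 ~]$ sqlplus / as sysasm
SQL> create diskgroup DBDATA external redundancy disk 'ORCL:VOL2';
SQL> create diskgroup RECOVER external redundancy disk 'ORCL:VOL3';
On the second node the new groups then need to be mounted, for example with srvctl start diskgroup -g DBDATA -n rac2 (and the same for RECOVER).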
3.7 Installing the Database
3.7.1 Installing the Database Software
[oracle@rac1 database]$ ./runInstaller
Note the order in which the root script is run on the nodes; it is executed as root on each node.
Successful output from /u01/app/oracle/product/11.2.0/db_1/root.sh:
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
You have new mail in /var/spool/mail/root
3.7.2 Configuring the Listener
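In 11gR2 the cluster listeners are created by the Grid installation and owned by the grid user (managed through netca/srvctl), so at this point it is usually enough to verify them; a minimal sketch:
[grid@rac1 ~]$ srvctl config listener         # listener definition stored in the cluster registry
[grid@rac1 ~]$ srvctl status listener         # node listeners on rac1 and rac2
[grid@rac1 ~]$ srvctl status scan_listener    # SCAN listener
[grid@rac1 ~]$ lsnrctl status                 # services registered with the local listener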
3.7.3 Creating the Database
[oracle@rac1 admin]$ dbca
After dbca completes, the database registration in the cluster can be checked with srvctl:
[grid@rac1 grid]$ srvctl config database -d racdb
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DBDATA/racdb/spfileracdb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: DBDATA,DATA,RECOVER
Mount point paths:
Services:
Type: RAC
Database is administrator managed
[grid@rac1 grid]$
4. Project Implementation
Configuring RAC + Data Guard
4.1 Creating the Third Machine (rac3)
- Hostname: rac3.example.com
IP:192.168.23.155/255.255.255.0
Gateway:192.168.23.1
- Disable the firewall and SELinux
- Install VMware Tools
This lets the mouse move freely between the virtual machine and the host and makes copying files between them easier.
- Set up a yum repository and install the required packages.
[root@rac3 ~]# vi /etc/yum.repos.d/server.repo
[server]
name=oel5.4
baseurl=file:///mnt/Server
gpgcheck=0
enabled=1
[root@rac3 ~]# yum install -y oracle-validate*
- Edit the /etc/hosts file
- Configure the users and their environment variables
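A sketch of the oracle user's profile on rac3, mirroring the settings from section 3.3 but with the SID of the standby instance (racdg) used later in this chapter; the directory layout is assumed to match the RAC nodes:
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=racdg
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"
export PATH=$ORACLE_HOME/bin:$PATH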
- Install the database software
4.2 Checking the Environment
- Archive log mode
SYS@racdb1>select log_mode from v$database;
LOG_MODE
------------
ARCHIVELOG
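The primary is already in ARCHIVELOG mode here. If it were not, one common way to enable it on an 11gR2 RAC database is to stop the database, mount a single instance, and switch the mode — a hedged sketch, with racdb being the database unique name from section 3.7.3:
[oracle@rac1 ~]$ srvctl stop database -d racdb
[oracle@rac1 ~]$ sqlplus / as sysdba
SQL> startup mount
SQL> alter database archivelog;
SQL> shutdown immediate
[oracle@rac1 ~]$ srvctl start database -d racdb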
- Parameter settings
SYS@racdb1>alter database force logging;
4.3 Configuring the Listeners
- tnsnames.ora configuration (needed on all three machines)
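The document does not reproduce the file; the sketch below shows the aliases these steps rely on (racdg for LOG_ARCHIVE_DEST_2 on the primary, racdb1/racdb2 for FAL_SERVER on the standby, and racdb1 for the RMAN connection in 4.4). It is a hedged example, not the project's actual file; the same entries are copied to tnsnames.ora on rac1, rac2, and rac3:
RACDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = racdb)(INSTANCE_NAME = racdb1))
  )
RACDB2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = racdb)(INSTANCE_NAME = racdb2))
  )
RACDG =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac3.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = racdg))
  )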
- listener.ora (on the standby host rac3, for racdg):
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = racdg)
(ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)
(SID_NAME = racdg)
)
)
LISTENER =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac3.example.com)(PORT = 1521))
)
ADR_BASE_LISTENER = /u01/app/oracle
4.4 File Preparation
- Password file
[oracle@rac1 dbs]$ scp orapwracdb1 rac3:$ORACLE_HOME/dbs/orapwracdg
- Parameter files
Add the following parameters on the primary:
#primary
DB_UNIQUE_NAME=racdb
LOG_ARCHIVE_CONFIG='DG_CONFIG=(racdb,racdg)'
LOG_ARCHIVE_DEST_1=
'LOCATION=+RECOVER
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=racdb'
LOG_ARCHIVE_DEST_2=
'SERVICE=racdg ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=racdg'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=DEFER   # change to ENABLE once the standby database has been created
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
#standby
FAL_SERVER=racdg
racdb1.fal_client=racdb1
racdb2.fal_client=racdb2
DB_FILE_NAME_CONVERT='/disk1/racdg/datafile','+DBDATA/racdb/datafile'
LOG_FILE_NAME_CONVERT=
'/disk2/racdg/logfile','+DBDATA/racdb/onlinelog'
STANDBY_FILE_MANAGEMENT=AUTO
Standby parameter file:
racdg.__db_cache_size=138412032
racdg.__java_pool_size=4194304
racdg.__large_pool_size=4194304
racdg.__pga_aggregate_target=209715200
racdg.__sga_target=314572800
racdg.__shared_io_pool_size=0
racdg.__shared_pool_size=159383552
racdg.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/racdg/adump'
*.audit_trail='db'
*.compatible='11.2.0.0.0'
*.control_files='/disk3/racdg/controlfile/control01.ctl'
*.db_block_size=8192
*.db_create_file_dest='/disk3/racdg/fast'
*.db_domain=''
*.db_name='racdb'
*.diagnostic_dest='/u01/app/oracle'
*.log_archive_format='%t_%s_%r.dbf'
*.memory_target=524288000
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='exclusive'
racdg.undo_tablespace='UNDOTBS1'
#primary
DB_UNIQUE_NAME=racdg
LOG_ARCHIVE_CONFIG='DG_CONFIG=(racdg,racdb)'
LOG_ARCHIVE_DEST_1=
'LOCATION=/disk4/racdg/archivelog
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=racdg'
LOG_ARCHIVE_DEST_2=
'SERVICE=racdb1 ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=racdb'
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
#standby
FAL_SERVER=racdb1,racdb2
FAL_CLIENT=racdg
DB_FILE_NAME_CONVERT='+DBDATA/racdb/datafile','/disk1/racdg/datafile'
LOG_FILE_NAME_CONVERT=
'+DBDATA/racdb/onlinelog', '/disk2/racdg/logfile'
STANDBY_FILE_MANAGEMENT=AUTO
- Create standby redo logs (on the primary):
SYS@racdb1>alter database add standby logfile thread 1 group 5 size 50m;
SYS@racdb1>alter database add standby logfile thread 1 group 6 size 50m;
SYS@racdb1>alter database add standby logfile thread 1 group 7 size 50m;
SYS@racdb1>alter database add standby logfile thread 2 group 8 size 50m;
SYS@racdb1>alter database add standby logfile thread 2 group 9 size 50m;
SYS@racdb1>alter database add standby logfile thread 2 group 10 size 50m;
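A quick way to confirm the standby redo logs were created is the standard v$standby_log view:
SYS@racdb1>select group#,thread#,bytes/1024/1024 mb,status from v$standby_log;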
- Data files and control file (RMAN backup and duplicate)
Back up on the primary using RMAN (rman target /):
RMAN> backup format '/home/oracle/backup/bk_%u' current controlfile for standby;
RMAN>backup format '/home/oracle/backup/bkk_%u' database plus archivelog;
[oracle@rac1 backup]$ scp * rac3:/home/oracle/backup
Run the duplicate from the standby side:
[oracle@rac3 backup]$ rman target sys/oracle@racdb1 auxiliary /
RMAN> duplicate target database for standby nofilenamecheck;
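Note that the duplicate expects the auxiliary (standby) instance on rac3 to be started NOMOUNT beforehand, using the parameter file prepared in 4.4; a minimal sketch, assuming the pfile was saved as $ORACLE_HOME/dbs/initracdg.ora and the /disk*/racdg and audit directories already exist:
[oracle@rac3 ~]$ export ORACLE_SID=racdg
[oracle@rac3 ~]$ sqlplus / as sysdba
SQL> startup nomount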
4.5 Testing and Application
- Check the redo transport and apply status on the two databases
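A few standard queries for this (if LOG_ARCHIVE_DEST_STATE_2 was left at DEFER on the primary, first switch it with alter system set log_archive_dest_state_2=enable sid='*';):
SYS@racdb1>select dest_id,status,error from v$archive_dest_status where dest_id<=2;
SYS@racdg>select process,status,thread#,sequence# from v$managed_standby;
SYS@racdg>select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;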
- Test tablespace and table data
SYS@racdb1>create tablespace test datafile '/disk5/racdb1/test.dbf' size 20m;
SYS@racdb1>create table testdat(id number,name varchar2(20)) tablespace test;
SYS@racdb1>alter system switch logfile;
SYS@racdg>alter database recover managed standby database disconnect from session;
SYS@racdg>desc testdat;
Name Null? Type
----------------------------------------- --------
ID NUMBER
NAME VARCHAR2(20)
SYS@racdb1>insert into testdat values(41,'Leader.Zhang');
SYS@racdb1>commit;
SYS@racdb1>alter system switch logfile;
SYS@racdg>select * from testdat;
ID NAME
---------- --------------------
41 Leader.Zhang
4.6 Switchover
- Switch the primary to a standby (run on the primary):
SYS@racdb1>alter database commit to switchover to physical standby;
SYS@racdb1>shutdown immediate
SYS@racdb1>startup mount
SYS@racdb1>alter database recover managed standby database disconnect from session;
SYS@racdb1>select database_role,switchover_status from v$database;
DATABASE_ROLE SWITCHOVER_STATUS
---------------- --------------------
PHYSICAL STANDBY NOT ALLOWED
- Switch the standby to primary (run on the standby):
SYS@racdg>alter database commit to switchover to primary;
SYS@racdg>shutdown immediate
SYS@racdg>startup
SYS@racdg>select database_role,switchover_status from v$database;
DATABASE_ROLE SWITCHOVER_STATUS
---------------- --------------------
PRIMARY RESOLVABLE GAP
SYS@racdg>alter system switch logfile;
SYS@racdg>select database_role,switchover_status from v$database;
DATABASE_ROLE SWITCHOVER_STATUS
---------------- --------------------
PRIMARY TO STANDBY
Note the shutdown/startup order: the primary is shut down first and started last, while the standby is shut down last and started first.
