The requirement is this: every night at 23:00, back up the production database to a backup machine so that the next day the database on the backup machine can be used directly, holding yesterday's production data. (The production data volume is still small, no more than 30 million rows in total.)
I have not touched any advanced Oracle features and only know the simple built-in tools expdp and impdp, so I plan to get this done with those built-in commands plus shell scripts.
Assume the production database IP is 192.168.1.20 (referred to as 20 below) and the backup database IP is 192.168.1.140 (referred to as 140).
The idea is: machine 20 runs an automatic export at 11 pm, the backup file is then copied over to machine 140, and machine 140 performs the import.
Implementation:
1. Install NFS on 140 and share the designated directory to 20, as sketched below.
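A minimal sketch of that NFS setup, assuming the shared directory is 140's Data Pump directory /oracle/admin/ORCL/dpdump (the path the later scripts read from and write to) and that 20 mounts it at the same path; the export options, including no_root_squash so that root on 20 can write to the share, are assumptions:
# On 140 (NFS server): export the Data Pump directory to 20
echo "/oracle/admin/ORCL/dpdump 192.168.1.20(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra
service nfs start    # or: systemctl start nfs-server on newer systems
# On 20 (NFS client): mount the share at the same path
mkdir -p /oracle/admin/ORCL/dpdump
mount -t nfs 192.168.1.140:/oracle/admin/ORCL/dpdump /oracle/admin/ORCL/dpdump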
2. Add a backup script on 20 and use crontab to run the expdp export on schedule (a sample crontab entry follows the script).
#!/bin/sh
# Oracle environment for the cron context (cron does not load the oracle user's profile)
ORACLE_BASE=/oracle
export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/
export ORACLE_HOME
ORACLE_SID=ORCL
export ORACLE_SID
export PATH=$PATH:$ORACLE_HOME/bin
export DATA_DIR=/oracle/admin/orcl/dpdump
export LOGS_DIR=/oracle/admin/orcl/dpdump
export BAKUPTIME=`date +%Y%m%d%H`
export NLS_LANG=american_america.AL32UTF8
echo "Starting backup..."
echo "Backup file path /oracle/admin/orcl/dpdump/HJXD_$BAKUPTIME.dmp"
# Export the HJXD schema through the DATA_PUMP_DIR directory object (assumed to map to the dpdump path above)
expdp HJXD/hjxd directory=DATA_PUMP_DIR dumpfile=HJXD_$BAKUPTIME.dmp schemas=HJXD
echo "Backup completed."
echo "Start cleaning up old dump files."
find /oracle/admin/orcl/dpdump/ -mtime +30 -type f -name "*.dmp" -exec rm -f {} \;
echo "Old dump file cleanup finished."
3. Add a cron task on 20 to copy the backup file into the NFS-shared directory (a sample entry follows the script).
#!/bin/sh
# Target directory: assumed to be where the NFS share from 140 is mounted on 20
myfilepath=/oracle/admin/ORCL/dpdump/
# Pick up the previous night's 23:00 dump produced by the export script above
filename=HJXD_`date -d "1 day ago" +%Y%m%d`23.dmp
cp /oracle/admin/orcl/dpdump/$filename $myfilepath
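Since the copy is done by root (see the note at the end), a matching entry would go into root's crontab on 20; the run time (after midnight, so that "1 day ago" resolves to the previous night's dump) and the script path are assumptions:
# crontab -e as root on 20: push the previous night's dump onto the NFS share at 01:00
0 1 * * * /root/copy_dump.sh >> /root/copy_dump.log 2>&1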
4. Add a cron task on 140 to import the data file from the NFS-shared directory into the 140 database (a sample entry follows the script).
#!/bin/sh
# Oracle environment for root's cron context (see the note at the end)
PATH=$PATH:$HOME/bin
export PATH
ORACLE_BASE=/oracle
export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/
export ORACLE_HOME
ORACLE_SID=ORCL
export ORACLE_SID
export PATH=$PATH:$ORACLE_HOME/bin
# Drop and recreate the target user so the import starts from a clean schema
sqlplus sys/123456 as sysdba <<EOF
@/oracle/admin/ORCL/dpdump/impdp.sql
EOF
export BAKUPTIME=`date -d "1 day ago" +%Y%m%d23`
# The dump was copied over NFS as root, so hand it back to oracle:oinstall
chown oracle:oinstall /oracle/admin/ORCL/dpdump/HJXD_$BAKUPTIME.dmp
echo "Starting impdp..."
echo "impdp file path /oracle/admin/ORCL/dpdump/HJXD_$BAKUPTIME.dmp"
# Import the HJXD schema into hjxdsas, replacing any tables that already exist
impdp hjxdsas/123456 directory=DATA_PUMP_DIR dumpfile=HJXD_$BAKUPTIME.dmp logfile=fullexp.log remap_schema=HJXD:hjxdsas table_exists_action=replace
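The corresponding entry on 140 would go into root's crontab as well (the import runs as root, as explained in the note at the end); the 02:00 run time, chosen so it starts after the copy has finished, and the script path are assumptions:
# crontab -e as root on 140: import the previous night's dump into the backup database
0 2 * * * /root/impdp_restore.sh >> /root/impdp_restore.log 2>&1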
The impdp.sql file referenced by the sqlplus call above:
drop user hjxdsas cascade;
create user hjxdsas identified by 123456
default tablespace hjxd
temporary tablespace temp;
grant dba, create any trigger, drop any table, SELECT ANY table, SELECT ANY sequence, create user to hjxdsas identified by 123456;
grant connect, resource to hjxdsas;
grant exp_full_database,imp_full_database to hjxdsas;
The Oracle environment variables are set at the top of the step-4 script because the file is copied over by the root user (NFS expects the operating users on both sides to have the same uid; the oracle users on 20 and 140 do not necessarily share a uid, whereas root's uid is identical on both machines). Running the cron job as the oracle user also produced some errors, so the Oracle environment variables are simply set for root and the import is run directly as root.
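A quick way to see the uid mismatch that motivates the root approach; the numbers shown are only an illustration:
# On 20:
id oracle    # e.g. uid=500(oracle) gid=501(oinstall) groups=...
# On 140:
id oracle    # e.g. uid=501(oracle) gid=501(oinstall) groups=...
# Different uids mean files written over NFS as oracle end up mapped to the wrong owner;
# root is uid 0 on both machines, so the copy and the import cron jobs run as root.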