The requirement: every night at 23:00, back up the production database to a backup machine, so that the next day the backup machine's database is directly usable with yesterday's production data. (There is not much production data at the moment, under 30 million rows in total.)
I have no experience with advanced Oracle features and only know the bundled impdp and expdp tools, so the plan is to get this done with those commands plus shell scripts.
Assume the production database is 192.168.1.20 ("20") and the backup database is 192.168.1.140 ("140").
The idea is this: 20 runs an automatic export at 23:00 every night, the dump file is then copied over to 140 somehow, and 140 performs the import.
Implementation:
1. Set up NFS on 140 and share a chosen directory with 20.
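As a sketch of that NFS setup (paths and options are assumptions based on the directories used later; `no_root_squash` matters because, as noted at the end, the copy is done as root):

```shell
# On 140 (backup server): export the Data Pump directory over NFS.
# Add to /etc/exports (hypothetical path, adjust to your layout):
#   /oracle/admin/ORCL/dpdump 192.168.1.20(rw,sync,no_root_squash)
exportfs -a                 # re-read /etc/exports
systemctl start nfs-server  # or "service nfs start", depending on the distro

# On 20 (production server): mount the share.
mkdir -p /mnt/bak140
mount -t nfs 192.168.1.140:/oracle/admin/ORCL/dpdump /mnt/bak140
```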
2. On 20, add a backup script and run the expdp statement on schedule with crontab:
#!/bin/sh
# Nightly export of the HJXD schema, run by cron on 20.
ORACLE_BASE=/oracle
export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/
export ORACLE_HOME
ORACLE_SID=ORCL
export ORACLE_SID
export PATH=$PATH:$ORACLE_HOME/bin
export DATA_DIR=/oracle/admin/orcl/dpdump
export LOGS_DIR=/oracle/admin/orcl/dpdump
export BAKUPTIME=`date +%Y%m%d%H`
export NLS_LANG=american_america.AL32UTF8
echo "Starting backup..."
echo "Backup file path /oracle/admin/orcl/dpdump/HJXD_$BAKUPTIME.dmp"
expdp HJXD/hjxd directory=DATA_PUMP_DIR dumpfile=HJXD_$BAKUPTIME.dmp schemas=HJXD
echo "Backup completed."
echo "Deleting dump files older than 30 days..."
find /oracle/admin/orcl/dpdump/ -mtime +30 -type f -name 'HJXD_*.dmp' -exec rm -f {} \;
echo "Old dump files deleted."
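The scheduling itself is a single crontab entry on 20 (the script path is hypothetical):

```shell
# crontab -e (as the oracle user on 192.168.1.20)
# Run the export at 23:00 every day and keep a log of its output.
0 23 * * * /home/oracle/scripts/expdp_backup.sh >> /home/oracle/scripts/expdp_backup.log 2>&1
```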
3. On 20, add a cron job to copy the dump file into the NFS-shared directory:
#!/bin/sh
# Copy yesterday's 23:00 dump into the NFS-mounted directory shared by 140.
myfilepath=/oracle/admin/ORCL/dpdump/
filename=HJXD_`date -d "1 day ago" +%Y%m%d`23.dmp
cp /oracle/admin/orcl/dpdump/$filename $myfilepath
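A slightly more defensive version of that copy might verify the transfer over NFS before the import runs on the other side (a sketch; the mount point is an assumption):

```shell
#!/bin/sh
# Sketch of step 3: build yesterday's dump file name the same way the
# backup script does, copy it to the NFS mount, and sanity-check the copy.
src_dir=/oracle/admin/orcl/dpdump
nfs_dir=/mnt/bak140   # hypothetical NFS mount point on 20

filename="HJXD_$(date -d "1 day ago" +%Y%m%d)23.dmp"
echo "copying $filename"

if [ -f "$src_dir/$filename" ]; then
    cp "$src_dir/$filename" "$nfs_dir/"
    # Cheap integrity check: compare the sizes of source and copy.
    [ "$(wc -c < "$src_dir/$filename")" -eq "$(wc -c < "$nfs_dir/$filename")" ] \
        && echo "copy verified" || echo "size mismatch" >&2
fi
```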
4. On 140, add a cron job to import the data file from the NFS-shared directory into 140's database:
#!/bin/sh
# Drop and recreate the target user, then import yesterday's dump.
PATH=$PATH:$HOME/bin
export PATH
ORACLE_BASE=/oracle
export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/
export ORACLE_HOME
ORACLE_SID=ORCL
export ORACLE_SID
export PATH=$PATH:$ORACLE_HOME/bin
sqlplus sys/123456 as sysdba <<EOF
@/oracle/admin/ORCL/dpdump/impdp.sql
EOF
export BAKUPTIME=`date -d "1 day ago" +%Y%m%d23`
chown oracle:oinstall /oracle/admin/ORCL/dpdump/HJXD_$BAKUPTIME.dmp
echo "Starting impdp..."
echo "impdp file path /oracle/admin/ORCL/dpdump/HJXD_$BAKUPTIME.dmp"
impdp hjxdsas/123456 directory=DATA_PUMP_DIR dumpfile=HJXD_$BAKUPTIME.dmp logfile=fullexp.log remap_schema=HJXD:hjxdsas table_exists_action=replace
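One weakness of the script above is that it drops the user and runs impdp even when the copy from 20 never arrived. A small guard at the top of the import script avoids that (a sketch using the same names as above):

```shell
#!/bin/sh
# Guard for step 4: only run the drop/recreate and impdp when yesterday's
# dump file actually exists and is non-empty.
check_dump() {
    # Succeeds (exit 0) only for a non-empty regular file.
    [ -s "$1" ]
}

dump_dir=/oracle/admin/ORCL/dpdump
BAKUPTIME=$(date -d "1 day ago" +%Y%m%d23)
dumpfile="$dump_dir/HJXD_$BAKUPTIME.dmp"

if check_dump "$dumpfile"; then
    chown oracle:oinstall "$dumpfile"
    # ... run sqlplus @impdp.sql and impdp here, as in the script above
else
    echo "dump file $dumpfile missing or empty, skipping import" >&2
fi
```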
The impdp.sql file:
drop user hjxdsas cascade;
create user hjxdsas identified by 123456
default tablespace hjxd
temporary tablespace temp;
grant dba, create any trigger, drop any table, select any table, select any sequence, create user to hjxdsas;
grant connect, resource to hjxdsas;
grant exp_full_database, imp_full_database to hjxdsas;
The Oracle environment variables are set at the start of step 4 because the file is copied over by the root user (NFS requires the users operating on both sides to share the same uid; the oracle user's uid on 20 and 140 is not guaranteed to match, whereas root's is consistent). Running the cron job as the oracle user also produced some errors, so the Oracle environment variables were set for root as well, and the import is run directly as root.
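Rather than falling back to root, the uid mismatch can also be checked directly, since NFS maps users by numeric uid. A minimal diagnostic, to run on each machine and compare:

```shell
#!/bin/sh
# NFS maps users by numeric uid, so the oracle user must have the same
# uid on 20 and 140 for non-root copies to keep the right ownership.
# Run this on both hosts and compare the numbers; if they differ, the
# uid on one side can be changed with usermod and a recursive chown.
uid=$(id -u oracle 2>/dev/null) || uid="(no oracle user on this host)"
echo "oracle uid here: $uid"
```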