Traditional Export and Import
Logical Backups:
=================
Logical backups can be considered an alternative backup solution, but not the mainstream backup solution
Useful in the case of database upgrades and database migrations
Logical backups are platform independent
Useful in the case of data reorganisation
Useful in case of moving objects from one schema to another schema
Logical backups generate dump files and log files; dump files must not be edited, otherwise they will be corrupted
Dump files contain metadata as well as business data
Incremental backups can be taken using logical backups; dump files written by exp cannot be imported using the impdp utility
Export can be done in 2 ways: conventional path and direct path
In conventional path, SQL statements are used to bring the data from disk into the buffer cache; the rows are then evaluated and transferred to the export client, which writes the data to the dump file
In direct path, data is read directly from disk into the export session's PGA and then transferred to the export session's private buffer
The dump file is approximately 70% of the database size
The user performing a full export should have the exp_full_database role
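For example, the role could be granted like this (a minimal sketch; granting to scott is only an illustration):
sql> grant exp_full_database to scott;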
Commands:
========
exp file=full_dp.dmp log=full_dp.log full=y
exporting metadata only
exp file=full_dp.dmp log=full_dp.log rows=n full=y
To improve export performance
exp file=full_dp.dmp log=full_dp.log full=y direct=y
To suppress DDL errors during import
imp file=exp_dp.dmp log=full_dp.log full=y ignore=y
importing into another schema
imp file=exp_dp.dmp log=full_dp.log fromuser=test touser=scott ignore=y
object level export
exp file=full_dp.dmp log=full_dp.log tables=emp
row level export
exp file=full_dp.dmp log=full_dp.log tables=emp query=\"where deptno=10\"
Importing to another schema:
===========================
exp file=test.dmp log=testexp.log owner=scott direct=y
imp file=test.dmp log=testimp.log fromuser=scott touser=u1 ignore=y
Row level exports:
=================
exp file=testrow.dmp log=testrow.log tables=emp query=\"where deptno=20\"
Performing transportable tablespace exports:
==============================
sql> select name from v$database;
HRMS
sql> create tablespace test datafile 'c:/app/oracle/oradata/hrms/test.dbf' size 10m;
sql> alter tablespace test read only;
sql> exit
exp file=testtbs.dmp log=testtbs.log transport_tablespace=y tablespaces=test direct=y tts_full_check=y
Now we have to copy the test tablespace datafile from hrms to the sales database
set ORACLE_SID=sales
select name from v$database;
copy c:/app/oracle/oradata/hrms/test.dbf c:/app/oracle/oradata/sales/test.dbf
imp file=testtbs.dmp log=imptesttbs.log transport_tablespace=y datafiles='c:/app/oracle/oradata/sales/test.dbf' ignore=y
set oracle_sid=sales
alter tablespace test read write;
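To confirm the transported tablespace is usable in the target database, a quick check (a minimal sketch, reusing the tablespace name from the example above):
sql> select tablespace_name, status from dba_tablespaces where tablespace_name='TEST';
sql> select file_name, status from dba_data_files where tablespace_name='TEST';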
Incremental Backups:
====================
Incremental backups export only the objects that have been modified since the previous backup
Types of incremental backups:
============================
complete : it is the base backup for the subsequent incrementals. It takes a backup of the entire database
cumulative : it takes a backup of all objects that were modified since the last complete or cumulative backup
incremental : it takes a backup of all objects that were modified since the last complete, cumulative, or incremental backup
Difference between full=y and inctype=complete: the latter records the export timestamp, which the subsequent incremental exports are based on
Example:
========
exp file=sunday.dmp log=sunday.log inctype=complete statistics=none direct=y
conn scott/tiger
sql> alter table emp disable primary key;
sql> insert into emp select * from emp;
sql> insert into salgrade select * from salgrade;
exp file=monday.dmp log=monday.log inctype=incremental statistics=none direct=y
SQL> alter table dept disable primary key;
sql> insert into dept select * from dept;
sql> insert into salgrade select * from salgrade;
exp file=tuesday.dmp log=tuesday.log inctype=incremental statistics=none direct=y
SQL> insert into dept select * from dept;
sql> insert into salgrade select * from salgrade;
exp file=wednesday.dmp log=wednesday.log inctype=cumulative statistics=none direct=y
SQL> insert into dept select * from dept;
sql> insert into salgrade select * from salgrade;
exp file=thursday.dmp log=thursday.log inctype=incremental statistics=none direct=y
Let's drop the user scott, assuming the scott schema has been lost
drop user scott cascade;
imp file=thursday.dmp log=thursday_imp.log full=y ignore=y buffers=20000 inctype=restore
Tables:
======
select * from dba_exp_files;
select * from dba_exp_objects;
select * from dba_exp_version;
select * from dba_datapump_jobs;
Datapump:
========
Datapump is a new feature introduced in the 10g database
The Datapump utilities are expdp and impdp
Datapump is a server-side utility, which means it generates the dump file on the server side
Datapump is faster than traditional export because it uses the dbms_metadata and dbms_datapump APIs
The biggest advantage of Datapump is that we can attach to and detach from a job
Once a Datapump job is initiated, it starts a master control process (MCP)
The MCP keeps track of the status of the job and updates the info in a separate master table, which is created in the user's schema with the same name as the job
We cannot take incremental backups with Datapump
We can estimate the space that is required for the dump file before the start of the job
A user who wants to perform an export job should have read and write privileges on the directory object (see the sketch after these notes)
Datapump by default runs in a sequential manner; we can also make it run in parallel, and based on the parallel degree it will create that many worker processes
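For example, the directory object and privileges could be set up as below, and job progress can then be watched through dba_datapump_jobs (a minimal sketch; the directory name and path are only illustrations):
sql> create directory datapump as 'c:/app/oracle/dpdump';
sql> grant read, write on directory datapump to scott;
sql> select owner_name, job_name, state from dba_datapump_jobs;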
examples:
=========
expdp directory=datapump dumpfile=test.dmp logfile=test.log full=y
expdp directory=datapump dumpfile=test.dmp logfile=test.log full=y job_name=testbkpjob
press ctrl+c while the above job is running to switch to interactive mode
export> stop_job=immediate
to reattach to the stopped job later
expdp attach=testbkpjob
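Once reattached, the job can be resumed from the interactive prompt (a minimal sketch using the standard interactive-mode commands):
export> start_job
export> continue_client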
impdp directory=datapump dumpfile=test.dmp logfile=testimp.log
Export schema:
=============
expdp directory=datapump dumpfile=scott.dmp logfile=scott.log job_name=scottbkpjob
impdp directory=datapump dumpfile=scott.dmp logfile=scotttou1.log remap_schema=scott:u1
Parallel option:
================
expdp directory=datapump dumpfile=testu.dmp logfile=testu.log parallel=3 full=y job_name=parallelbkp
parameters:
==========
exclude=triggers/packages/constraints/procedures/indexes
content=metadata_only/all/data_only
estimate=blocks/statistics // estimates how much disk space is required
estimate_only=y estimates the space required for the dump file without performing the export; the default value is n
filesize= specifies the maximum size of each dump file (see the example below)
schemas= schema name
nologfile=y/n
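A few of these parameters in use (a minimal sketch; the file and log names are only illustrations):
expdp directory=datapump dumpfile=scott_meta.dmp logfile=scott_meta.log schemas=scott content=metadata_only
expdp directory=datapump dumpfile=scott_noidx.dmp logfile=scott_noidx.log schemas=scott exclude=index
expdp directory=datapump schemas=scott estimate_only=y estimate=blocks logfile=scott_estimate.log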
Specifying file size:
====================
expdp directory=datapump dumpfile=testu.dmp logfile=testu.log parallel=3 full=y job_name=parallelbkp filesize=5m
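Since Datapump is built on the dbms_datapump API mentioned earlier, the same kind of export can also be started from PL/SQL. A minimal sketch, assuming the datapump directory object from the examples above exists and the scott schema is being exported:
declare
  h  number;
  js varchar2(30);
begin
  -- create a schema-mode export job
  h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  -- dump file and log file go to the directory object
  dbms_datapump.add_file(handle => h, filename => 'scott_api.dmp', directory => 'DATAPUMP');
  dbms_datapump.add_file(handle => h, filename => 'scott_api.log', directory => 'DATAPUMP',
                         filetype => dbms_datapump.ku$_file_type_log_file);
  -- restrict the job to the scott schema
  dbms_datapump.metadata_filter(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''SCOTT'')');
  dbms_datapump.start_job(h);
  -- wait for the job to finish and report its final state
  dbms_datapump.wait_for_job(h, js);
  dbms_output.put_line('job state: ' || js);
end;
/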