Export/Import – Data Pump interview questions and answers – 1 in oracle

1) What is use of CONSISTENT option in exp?

Ensures cross-table consistency by implementing SET TRANSACTION READ ONLY for the duration of the export. Default value is N.

2) What is use of DIRECT=Y option in exp?

Setting DIRECT=Y extracts data by reading it directly from the datafiles, bypassing the SGA and the SQL command-processing layer (the evaluating buffer), so it is usually faster than a conventional-path export. Default value is N.

3) What is use of COMPRESS option in exp?

Consolidates the table data into one extent on import. COMPRESS specifies how export manages the initial extent for the table data, and it is helpful during database re-organization.
Export the objects (especially tables and indexes) with COMPRESS=Y.
If a table was spanning 20 extents of 1M each (which is not desirable from a performance point of view), exporting it with COMPRESS=Y generates DDL with an INITIAL extent of 20M, so the extents are coalesced when the table is imported.
It is sometimes preferable to export with COMPRESS=N, for example when you do not have that much contiguous space in the target tablespace and do not want the import to fail. A sample command is shown below.
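As an illustration (the schema, table and file names here are hypothetical), a conventional export that keeps the original extent definitions instead of consolidating them could look like this:

$ exp scott/tiger file=scott_tables.dmp log=scott_tables.log tables=emp,dept compress=n statistics=none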

4) How to improve exp performance?

ANS:

a). Set the BUFFER parameter to a high value. Default is 256KB.
b). Stop unnecessary applications to free the resources.
c). If you are running multiple sessions, make sure they write to different disks.
d). Do not export to NFS (Network File Share). Exporting to disk is faster.
e). Set the RECORDLENGTH parameter to a high value.
f). Use DIRECT=yes (direct mode export).
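For example, a full direct-path export tuned along these lines might look like the following (paths and sizes are illustrative only; BUFFER matters for conventional-path exports, while RECORDLENGTH is the relevant buffer for DIRECT=Y):

$ exp system/manager full=y direct=y recordlength=65535 file=/u02/exp/full.dmp log=/u02/exp/full_exp.log statistics=none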

5) How to improve imp performance?

ANS:

a). Place the file to be imported in separate disk from datafiles.
b). Increase the DB_CACHE_SIZE.
c). Set LOG_BUFFER to big size.
d). Stop redolog archiving, if possible.
e). Use COMMIT=n, if possible.
f). Set the BUFFER parameter to a high value. Default is 256KB.
g). It is advisable to drop indexes before importing to speed up the import process, or to set INDEXES=N and build the indexes later, after the import.
Indexes can easily be recreated after the data has been successfully imported.
h). Use STATISTICS=NONE
i). Disable the INSERT triggers, as they fire during import.
j). Set Parameter COMMIT_WRITE=NOWAIT(in Oracle 10g) or COMMIT_WAIT=NOWAIT (in Oracle 11g) during import.
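A conventional import tuned with several of these options could look like the following (user, dump-file and log-file names are just for illustration):

$ imp system/manager file=full.dmp log=full_imp.log full=y buffer=10485760 commit=n indexes=n statistics=none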

6) What is use of INDEXFILE option in imp?

ANS:

Will write the DDL of the objects in the dump file into the specified file, without actually importing the objects.
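For instance (schema and file names assumed), the DDL from a dump can be spooled like this; nothing is actually imported:

$ imp system/manager file=scott.dmp log=scott_ddl.log indexfile=scott_ddl.sql full=y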

7) What is use of IGNORE option in imp?

ANS:

Will ignore object-creation errors (for example "table already exists") during the import and will continue importing.

8) What are the differences between expdp and exp (Data Pump or normal exp/imp)?

ANS:

Data Pump is server centric (files will be at server).
Data Pump has APIs, from procedures we can run Data Pump jobs.
In Data Pump, we can stop and restart the jobs.
Data Pump will do parallel execution.
Tapes & pipes are not supported in Data Pump.
Data Pump consumes more undo tablespace.
Data Pump import will create the user, if user doesn’t exist.

9) Why expdp is faster than exp (or) why Data Pump is faster than conventional export/import?

Data Pump is block mode, exp is byte mode.
Data Pump will do parallel execution.
Data Pump uses direct path API.

10)  How to improve expdp performance?

ANS:

Use the PARALLEL option, which increases the number of worker processes. It should be set based on the number of CPUs, and DUMPFILE should include the %U wildcard (or multiple files) so each worker can write to its own dump file.

11) How to improve impdp performance?

ANS:

Use the PARALLEL option, which increases the number of worker processes. It should be set based on the number of CPUs; see the combined expdp/impdp example below.
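A sketch of both commands (directory, dump-file names and the degree of parallelism are assumptions):

$ expdp system/manager directory=expdp_dir1 dumpfile=full_%U.dmp logfile=full_exp.log full=y parallel=4
$ impdp system/manager directory=expdp_dir1 dumpfile=full_%U.dmp logfile=full_imp.log full=y parallel=4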

12) In Data Pump, where the jobs info will be stored (or) if you restart a job in Data Pump, how it will know from where to resume?

Whenever a Data Pump export or import is running, Oracle creates a master table named after the JOB_NAME in the schema of the user running the job, and drops it once the job completes successfully. From this table Oracle finds out how much of the job has completed and where to resume from.

Default export job names are SYS_EXPORT_XXXX_01, where XXXX can be FULL, SCHEMA or TABLE.
Default import job names are SYS_IMPORT_XXXX_01, where XXXX can be FULL, SCHEMA or TABLE.
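Running and stopped jobs can be listed from the dictionary, and a job can be re-attached to and resumed interactively, for example (the job name below is just the default naming pattern):

SQL> select owner_name, job_name, state from dba_datapump_jobs;
$ expdp system/manager attach=SYS_EXPORT_FULL_01
Export> continue_client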

13) What is the order of importing objects in impdp?

 Tablespaces
 Users
 Roles
 Database links
 Sequences
 Directories
 Synonyms
 Types
 Tables/Partitions
 Views
 Comments
 Packages/Procedures/Functions
 Materialized views

14) How to import only metadata?

ANS:

CONTENT= METADATA_ONLY
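For example (directory and dump-file names assumed):

$ impdp system/manager directory=expdp_dir1 dumpfile=ctsfull.dmp logfile=meta_only.log full=y content=metadata_only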

15) How to import into different user/tablespace/datafile/table?

ANS:

REMAP_SCHEMA
REMAP_TABLESPACE
REMAP_DATAFILE
REMAP_TABLE
REMAP_DATA
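As an illustration, importing a schema into a different user and tablespace might look like this (the source and target names are assumptions):

$ impdp system/manager directory=expdp_dir1 dumpfile=ctsuser.dmp logfile=remap_imp.log remap_schema=raja:ravi remap_tablespace=users:users_new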

16) Using Data Pump, how to export in higher version (11g) and import into lower version (10g), can we import to 9i?

ANS:

Import Data Pump (impdp) can always read dump file sets created by older database versions, so a normal expdp on 10g followed by impdp on 11g works without any extra option.
The VERSION parameter is for the other way around: if you want to import data taken from 11g into a 10g database, you need to specify VERSION while taking the export.
Data Pump does not exist in 9i, so a Data Pump dump file cannot be imported into 9i; for that you have to use the conventional exp/imp utilities.
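A hedged example of exporting from 11g in a format readable by 10g (schema, directory and file names are assumptions):

$ expdp system/manager directory=expdp_dir1 dumpfile=ctsuser_10g.dmp logfile=ctsuser_10g.log schemas=raja version=10.2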

17) How to do transport tablespaces (and across platforms) using exp/imp or expdp/impdp?

ANS: [http://satya-dba.blogspot.in/2010/01/oracle-transportable-tablespaces-tts.html ]

We can use the transportable tablespaces feature to copy/move subset of data (set of user tablespaces), from an Oracle database and plug it in to another Oracle database. The tablespaces being transported can be either dictionary managed or locally managed.

With Oracle 8i, Oracle introduced transportable tablespace (TTS) technology that moves tablespaces between databases. Oracle 8i supports tablespace transportation between databases that run on same OS platforms and use the same database block size.

With Oracle 9i, TTS (Transportable Tablespaces) technology was enhanced to support tablespace transportation between databases on platforms of the same type, but using different block sizes.

With Oracle 10g, TTS (Transportable Tablespaces) technology was further enhanced to support transportation of tablespaces between databases running on different OS platforms (e.g. Windows to Linux, Solaris to HP-UX) which have the same endian format. Oracle Database 10g Release 1 introduced cross platform transportable tablespaces (XTTS), which allows data files to be moved between platforms of different endian format. XTTS is an enhancement to transportable tablespaces (TTS). If the endian formats are different, the datafiles have to be converted with RMAN (e.g. Windows to Solaris, Tru64 to AIX).

 select * from v$transportable_platform order by platform_id;
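At a high level, the Data Pump variant of the procedure looks like this (tablespace, directory, file and datafile names are assumptions; the copy/convert step depends on the platforms involved):

SQL> alter tablespace thu read only;
$ expdp system/manager directory=expdp_dir1 dumpfile=thu_tts.dmp logfile=thu_tts.log transport_tablespaces=THU
(copy, or RMAN-convert, the THU datafiles to the target server, then on the target:)
$ impdp system/manager directory=expdp_dir1 dumpfile=thu_tts.dmp logfile=thu_tts_imp.log transport_datafiles='/u02/oradata/THU.dbf'
SQL> alter tablespace thu read write;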

18) How to determine the schemas inside an Oracle Data Pump export file?

 strings dumpfile.dmp | grep SCHEMA_LIST
(or)
$ strings myfile.dmp|more

DB Refresh in oracle

DB Refresh steps:
 =================

 1. Take an export (expdp) of the source database (ctsp).
 expdp system/manager directory=expdp_dir1 dumpfile=ctsfull.dmp logfile=ctsfull.log full=y

 2. Move the dump file to the target database server (ctsd) using scp. If both databases are running on the same server, this step is not required.
scp ctsfull.dmp oracle@:/u02/app/oracle

 3. Drop the application users, but do not drop database default users such as SYS, SYSTEM, DBSNMP and XDB.
The query below shows each username and its creation date; the default users are usually all created on the same day (the database creation date).

 select username,created from dba_users;
 drop user raja cascade;

 4. Before doing the import (impdp), check the used space in the source database and the free space in the target database. The tablespace names should be the same in the source and target databases, and each target tablespace should be at least as large as the corresponding source tablespace.

 Source:
select tablespace_name,sum(bytes/1024/1024) from dba_segments group by tablespace_name;

 Target:
 select tablespace_name,sum(bytes/1024/1024) from dba_free_space group by tablespace_name;

 5. Start the import (impdp) in the target database.
 impdp system/manager directory=expdp_dir1 dumpfile=ctsfull.dmp logfile=ctsfull_impdp.log full=y

 6. Once completed, compare the object counts between the source and target databases.
 select object_type,count(*) from dba_objects where status='VALID' group by object_type;

 =============================== end ===========================

Schema Refresh in oracle

 Schema Refresh steps:
 =====================

 1. Take an export (expdp) of the schema in the source database (ctsp).
 expdp system/manager directory=expdp_dir1 dumpfile=ctsuser.dmp logfile=ctsuser.log schemas=raja

 Note: Do not put a semicolon (;) at the end of the above command.

 2. Move the dump file to the target database server (ctsd) using scp. If both databases are running on the same server, this step is not required.

 scp ctsuser.dmp oracle@:/u02/app/oracle

 3. Create the new user in the target database (if it already exists, drop and recreate it).

 select username from dba_users;
 drop user raja cascade;
 create user raja identified by raja;

 4. Before doing the import (impdp), check the used space of the schema (user) in the source database and the free space in the target database; the target tablespaces should be at least as large as the corresponding source tablespaces.

 Source:
 select tablespace_name,sum(bytes/1024/1024) from dba_segments where owner='RAJA' group by tablespace_name;

 Target:
 select tablespace_name,sum(bytes/1024/1024) from dba_free_space group by tablespace_name;

5. Start the import (impdp) in the target database.

impdp system/manager directory=expdp_dir1 dumpfile=ctsuser.dmp logfile=ctsuser_impdp.log remap_schema=raja:raja

6. Once completed, compare the object counts between the source and target databases.

 select object_type,count(*) from dba_objects where owner='RAJA' and status='VALID' group by object_type;

Prerequisites before Export and Import Process datapump

1. Make sure you have set the ORACLE_HOME variable in your environment (yours may differ from mine):
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1

2. Create a directory on your system where the Oracle Data Pump (dump) files will be placed. Here the dump files will go into /home/oracle/dpump_dir:
mkdir /home/oracle/dpump_dir

3. Next we need to create the directory object from SQL*Plus. You must be logged in as SYSDBA; dbora is the SID on my system:
$ORACLE_HOME/bin/sqlplus sys@dbora as sysdba

4. After logging in to SQL*Plus, execute these commands:
create directory dpump_dir as '/home/oracle/dpump_dir';
grant read,write on directory dpump_dir to dbuser;

dbuser is the username that we will use for the export and import process.

5. We also need to give dbuser the privileges to create any directory and to export/import the full database:
grant create any directory to dbuser;
grant EXP_FULL_DATABASE to dbuser;
grant IMP_FULL_DATABASE to dbuser;
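With these prerequisites in place, a full export and a later import could be run as in the following sketch (the dump-file and log-file names are assumptions):

$ expdp dbuser/password directory=dpump_dir dumpfile=fulldb.dmp logfile=fulldb_exp.log full=y
$ impdp dbuser/password directory=dpump_dir dumpfile=fulldb.dmp logfile=fulldb_imp.log full=y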

IMPDP UTILITY :- TABLE_EXISTS_ACTION OPTIONS IN ORACLE

SKIP: The default value for this parameter. The existing table is left untouched and its rows are not imported from the dump. (For comparison, in the conventional import utility, IGNORE=Y causes rows to be imported into existing tables without any errors or messages being given.)

APPEND: This option appends the data from the dump. The rows in the dump are added to the table and the existing data remains unchanged.

TRUNCATE: This option truncates the existing rows in the table and inserts the rows from the dump.

REPLACE: This option drops the current table and recreates it as it is in the dump file.
Note that the SKIP and REPLACE options are not valid if you set CONTENT=DATA_ONLY for impdp.
     

Method to import only rows that do not exist in the target table
See some examples here.
This is my sample table employee

SQL> select * from employee;
EMP_NAME                             DEPT        SAL
------------------------------ ---------- ----------
Rupal                                  10       5000
Hero                                   10       5500
Jain                                   10       4000
John                                   20       6000

I took the data pump dump for employee table.
$ expdp directory=exp_dir tables=scott.employee dumpfile=emp.dmp logfile=emp.log

Export: Release 11.2.0.2.0 – Production on Tue May 1 23:31:04 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting “SYS”.”SYS_EXPORT_TABLE_01″:  /******** AS SYSDBA directory=exp_dir tables=scott.employee dumpfile=emp.dmp logfile=emp.log
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported “SCOTT”.”EMPLOYEE”                        5.953 KB       4 rows
Master table “SYS”.”SYS_EXPORT_TABLE_01″ successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /home/oracle/shony/emp.dmp
Job “SYS”.”SYS_EXPORT_TABLE_01″ successfully completed at 23:31:20



Now let's try each option.

1.TABLE_EXISTS_ACTION=SKIP

$ impdp directory=exp_dir dumpfile=emp.dmp logfile=imp1.log table_exists_action=skip

Import: Release 11.2.0.2.0 – Production on Tue May 1 23:36:07 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table “SYS”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
Starting “SYS”.”SYS_IMPORT_FULL_01″:  /******** AS SYSDBA directory=exp_dir dumpfile=emp.dmp logfile=imp1.log table_exists_action=skip
Processing object type TABLE_EXPORT/TABLE/TABLE
Table “SCOTT”.”EMPLOYEE” exists. All dependent metadata and data will be skipped due to table_exists_action of skip
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Job “SYS”.”SYS_IMPORT_FULL_01″ successfully completed at 23:36:13



2.TABLE_EXISTS_ACTION=APPEND
I have deleted the original rows and inserted 4 new rows into the employee table, so the rows in the dump and in the table are now different,
and I am going to import the dump with the APPEND option.

SQL> delete from employee;
4 rows deleted.

SQL> insert into employee (select * from emp where dept>20);
4 rows created.

SQL> commit;

SQL> select * from employee;
EMP_NAME                             DEPT        SAL
------------------------------ ---------- ----------
Kiran                                  30       5500
Peter                                  30       6800
King                                   30       7600
Roshan                                 30       5500

$  impdp directory=exp_dir dumpfile=emp.dmp logfile=imp1.log table_exists_action=append

Import: Release 11.2.0.2.0 – Production on Wed May 2 00:50:18 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table “SYS”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
Starting “SYS”.”SYS_IMPORT_FULL_01″:  /******** AS SYSDBA directory=exp_dir dumpfile=emp.dmp logfile=imp1.log
table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
Table “SCOTT”.”EMPLOYEE” exists. Data will be appended to existing table but all dependent metadata will be 
skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “SCOTT”.”EMPLOYEE”                        5.953 KB       4 rows
Job “SYS”.”SYS_IMPORT_FULL_01″ successfully completed at 00:50:25

Now 4 more rows have been appended to the table.
  1* select * from employee
SQL> /
EMP_NAME                             DEPT        SAL
------------------------------ ---------- ----------
Kiran                                  30       5500
Peter                                  30       6800
King                                   30       7600
Roshan                                 30       5500
Rupal                                  10       5000
Hero                                   10       5500
Jain                                   10       4000
John                                   20       6000
8 rows selected.

3.TABLE_EXISTS_ACTION=TRUNCATE
Now let's try the table_exists_action=truncate option. With TRUNCATE, the content of the existing table is truncated and the rows from the dump are inserted.
Currently my employee table has the 8 rows resulting from the last step.

$  impdp directory=exp_dir dumpfile=emp.dmp logfile=imp1.log table_exists_action=truncate

Import: Release 11.2.0.2.0 – Production on Wed May 2 00:55:03 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table “SYS”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
Starting “SYS”.”SYS_IMPORT_FULL_01″:  /******** AS SYSDBA directory=exp_dir dumpfile=emp.dmp logfile=imp1.log
table_exists_action=truncate
Processing object type TABLE_EXPORT/TABLE/TABLE
Table “SCOTT”.”EMPLOYEE” exists and has been truncated. Data will be loaded but all dependent metadata will be skipped 
due to table_exists_action of truncate
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “SCOTT”.”EMPLOYEE”                        5.953 KB       4 rows
Job “SYS”.”SYS_IMPORT_FULL_01″ successfully completed at 00:55:09

  1* select * from employee
SQL> /
EMP_NAME                             DEPT        SAL
------------------------------ ---------- ----------
Rupal                                  10       5000
Hero                                   10       5500
Jain                                   10       4000
John                                   20       6000

4.TABLE_EXISTS_ACTION=REPLACE
This option drops the current table in the database, and the import recreates the table as it is in the dump file.

impdp directory=exp_dir dumpfile=emp.dmp logfile=imp1.log table_exists_action=replace

Import: Release 11.2.0.2.0 – Production on Wed May 2 00:57:35 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Username: / as sysdba

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table “SYS”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
Starting “SYS”.”SYS_IMPORT_FULL_01″:  /******** AS SYSDBA directory=exp_dir dumpfile=emp.dmp
logfile=imp1.log table_exists_action=replace
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “SCOTT”.”EMPLOYEE”                        5.953 KB       4 rows
Job “SYS”.”SYS_IMPORT_FULL_01″ successfully completed at 00:57:40




Now if you check the last_ddl_time for the table it would be the same as the import time.

$ date
Wed May  2 00:58:21 EDT 2012

select OBJECT_NAME, to_char(LAST_DDL_TIME,'dd-mm-yyyy hh:mi:ss') created from dba_objects
where OBJECT_NAME='EMPLOYEE'
SQL> /
OBJECT_NAME          CREATED
-------------------- -------------------
EMPLOYEE             02-05-2012 12:57:40

exp inctype (incremental export) explanation in oracle

$ exp file=scott.dmp log=scott.log inctype=complete
==> exporting full database (after some incremental/cumulative backups).

$ exp file=scott.dmp log=scott.log inctype=cumulative
==> exporting cumulatively (taking backup from last complete or cumulative backup).

$ exp file=scott.dmp log=scott.log inctype=incremental
==> exporting incrementally (taking backup from last complete or cumulative or incremental backup).

datapump practical examples

Oracle 11g has several new features. Here we are going to explore a few of them related to Data Pump, the extended and enhanced replacement for EXP/IMP that was first introduced in Oracle 10g.
Here are main features:
·                     Compression
·                     Encryption
·                     Transportable
·                     Partition Option
·                     Data Options
·                     Reuse Dumpfile(s)
·                     Remap_table
·                     Remap Data
One of the main and essential features is the partition option: when a table is large (many GB) and partitioned, how do we transport individual partitions using EXPDP?
  
You can now export one or more partitions of a table without having to move the entire table.  On import, you can choose to load partitions as is, merge them into a single table, or promote each into a separate table. 
To understand the partition feature of expdp/impdp (Data Pump), let's create a scenario as below.
We are creating 2 schemas:
1. user1 - the schema that owns the partitioned table SALES which needs to be exported.
2. user2 - the schema into which we are going to import the SALES table under the new name SALES_T.
1. Create users:
[oracle@NVMBD01PSR183 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Fri Dec 19 13:15:44 2014
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> create user user1 identified by user1;
User created.
SQL> create user user2 identified by user2;
User created.
SQL> grant connect, resource to user1, user2;
Grant succeeded.
2. Create the Data Pump directory.
SQL> CREATE OR REPLACE DIRECTORY EXPDP_DIR AS '/oracle/data1/expdp';
SQL> GRANT READ, WRITE ON DIRECTORY EXPDP_DIR to user1,user2;
3. Create the partitioned table SALES.
SQL> conn user1/user1
Connected.
SQL> CREATE TABLE sales
( name       varchar2(5),
  time_id    DATE)
PARTITION BY RANGE (time_id)
( PARTITION sales_AUG VALUES LESS THAN (TO_DATE('01-AUG-2014','dd-MON-yyyy')) TABLESPACE eodods,
  PARTITION sales_SEP VALUES LESS THAN (TO_DATE('01-SEP-2014','dd-MON-yyyy')) TABLESPACE eodods,
  PARTITION sales_OCT VALUES LESS THAN (TO_DATE('01-OCT-2014','dd-MON-yyyy')) TABLESPACE eodods,
  PARTITION sales_NOV VALUES LESS THAN (TO_DATE('01-NOV-2014','dd-MON-yyyy')) TABLESPACE eodods,
  PARTITION sales_DEV VALUES LESS THAN (TO_DATE('01-DEC-2014','dd-MON-yyyy')) TABLESPACE eodods );
Table created.
  
SQL> insert into sales values('prod1','01-AUG-2014');
1 row created.
SQL> insert into sales values('prod2','01-SEP-2014');
1 row created.
SQL> commit;
Commit complete.
SQL> insert into sales values('prod3','01-OCT-2014');
1 row created.
SQL> commit;
  
SQL> SELECT partitioned FROM dba_tables WHERE table_name = 'SALES';
partitioned
YES
Let's check:
SQL> conn user1/user1
Connected.
SQL> SELECT partition_name FROM user_tab_partitions WHERE table_name = 'SALES';
PARTITION_NAME
——————————
SALES_AUG
SALES_DEV
SALES_NOV
SALES_OCT
SALES_SEP
EXPDP:
1. Export entire table including all partitions
[oracle@NVMBD01PSR183 ~]$ expdp user1/user1 directory=EXPDP_DIR dumpfile=sales_table.dmp tables=sales logfile=sales_table.log
Export: Release 11.2.0.3.0 – Production on Fri Dec 19 11:13:37 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting “USER1”.”SYS_EXPORT_TABLE_01″:  user1/******** directory=EXPDP_DIR dumpfile=sales_table.dmp tables=sales logfile=sales_table.log
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 24 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported “USER1″.”SALES”:”SALES_NOV”                5.421 KB       1 rows
. . exported “USER1″.”SALES”:”SALES_OCT”                5.421 KB       1 rows
. . exported “USER1″.”SALES”:”SALES_SEP”                5.421 KB       1 rows
. . exported “USER1″.”SALES”:”SALES_AUG”                    0 KB       0 rows
. . exported “USER1″.”SALES”:”SALES_DEV”                    0 KB       0 rows
Master table “USER1”.”SYS_EXPORT_TABLE_01″ successfully loaded/unloaded
******************************************************************************
Dump file set for USER1.SYS_EXPORT_TABLE_01 is:
  /oracle/data1/expdp/sales_table.dmp
Job “USER1”.”SYS_EXPORT_TABLE_01″ successfully completed at 11:13:46
2. Export a specific partition of the table:
[oracle@NVMBD01PSR183 ~]$ expdp user1/user1 directory=EXPDP_DIR dumpfile=sales_table_partition.dmp tables=sales:SALES_NOV logfile=sales_table_partition.log
Export: Release 11.2.0.3.0 – Production on Fri Dec 19 12:31:06 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting “USER1”.”SYS_EXPORT_TABLE_01″:  user1/******** directory=EXPDP_DIR dumpfile=sales_table_partition.dmp tables=sales:SALES_NOV logfile=sales_table_partition.log
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 8 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported “USER1″.”SALES”:”SALES_NOV”                5.421 KB       1 rows
Master table “USER1”.”SYS_EXPORT_TABLE_01″ successfully loaded/unloaded
******************************************************************************
Dump file set for USER1.SYS_EXPORT_TABLE_01 is:
  /oracle/data1/expdp/sales_table_partition.dmp
Job “USER1”.”SYS_EXPORT_TABLE_01″ successfully completed at 12:31:13
IMPDP
Move the dmp file to the target host (ftp, scp, etc.), or load the data into another schema using remap_schema.
1. Import the entire partitioned table into a new schema with a new name.
[oracle@NVMBD01PSR183 ~]$ impdp user2/user2 directory=EXPDP_DIR dumpfile=sales_table.dmp remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append
Import: Release 11.2.0.3.0 – Production on Fri Dec 19 11:19:25 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table “USER2”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
Starting “USER2”.”SYS_IMPORT_FULL_01″:  user2/******** directory=EXPDP_DIR dumpfile=sales_table.dmp remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
Table “USER2″.”SALES_T” exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “USER2″.”SALES_T”:”SALES_SEP”               5.421 KB       1 rows
. . imported “USER2″.”SALES_T”:”SALES_AUG”                   0 KB       0 rows
. . imported “USER2″.”SALES_T”:”SALES_DEV”                   0 KB       0 rows
. . imported “USER2″.”SALES_T”:”SALES_OCT”               5.421 KB       1 rows
. . imported “USER2″.”SALES_T”:”SALES_NOV”               5.421 KB       1 rows
Job “USER2”.”SYS_IMPORT_FULL_01″ successfully completed at 11:19:30
Let's check:
SQL> conn user2/user2
Connected.
SQL> select * from sales_t;
NAME  TIME_ID
----- ---------
prod1 01-AUG-14
prod2 01-SEP-14
prod3 01-OCT-14
We have successfully imported the entire table under the name SALES_T.
2. Now import a single partition only.
[oracle@NVMBD01PSR183 ~]$ impdp user2/user2 directory=EXPDP_DIR dumpfile=sales_table_partition.dmp remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append
Import: Release 11.2.0.3.0 – Production on Fri Dec 19 13:05:26 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table “USER2”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
Starting “USER2”.”SYS_IMPORT_FULL_01″:  user2/******** directory=EXPDP_DIR dumpfile=sales_table_partition.dmp remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
Table “USER2″.”SALES_T” exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “USER2″.”SALES_T”:”SALES_NOV”               5.421 KB       1 rows
Job “USER2”.”SYS_IMPORT_FULL_01″ successfully completed at 13:05:28
Here we have successfully imported a single partition.
If a partition name is specified, it must be the name of a partition or subpartition in the associated table. 
Only the specified set of tables, partitions, and their dependent objects are unloaded.
  
When you use the partition option (PARTITION_OPTIONS) of Data Pump, you can choose one of the options below:

NONE - Tables will be imported such that they will look like those on the system on which the export was created.

DEPARTITION - Partitions will be created as individual tables rather than partitions of a partitioned table.

MERGE - Combines all partitions into a single table.
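For instance, importing the earlier dump so that all partitions land in a single non-partitioned table might look like the following sketch (the target table name is hypothetical; the other names follow the example above):

$ impdp user2/user2 directory=EXPDP_DIR dumpfile=sales_table.dmp remap_schema=user1:user2 remap_table=sales:sales_merged partition_options=merge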


Export and Import utilities in Oracle

Export and Import are the Oracle utilities that allow us to export and import data objects, and to transfer data between databases that reside on different hardware platforms and different Oracle versions.

Export (exp) and import (imp) utilities are used to perform logical database backup and recovery. When exporting, database objects are dumped to a binary file which can then be imported into another Oracle database.

catexp.sql (in $ORACLE_HOME/rdbms/admin) will create EXP_FULL_DATABASE & IMP_FULL_DATABASE roles (no need to run this, if you ran catalog.sql at the time of database creation).
Before using these commands, you should set ORACLE_HOME, ORACLE_SID and PATH environment variables.

exp utility

Objects owned by SYS cannot be exported.
If you want to export objects of another schema, you need EXP_FULL_DATABASE role.


$ exp help=y
Keyword                Description (Default)
USERID                 username/password
FULL                   export entire file (N). To do full database export, that user must have EXP_FULL_DATABASE role
BUFFER                 size of data buffer. OS dependent
OWNER                  list of owner usernames
FILE                   output files (EXPDAT.DMP)
TABLES                 list of table names
COMPRESS               import into one extent (Y)
RECORDLENGTH           length of IO record
GRANTS                 export grants (Y)
INCTYPE                incremental export type. valid values are COMPLETE, INCREMENTAL, CUMULATIVE
INDEXES                export indexes (Y)
RECORD                 track incremental export (Y)
DIRECT                 direct path (N)
TRIGGERS               export triggers (Y)
LOG                    log file of screen output
STATISTICS             analyze objects (ESTIMATE)
ROWS                   export data rows (Y)
PARFILE                parameter filename
CONSISTENT             cross-table consistency (N). Implements SET TRANSACTION READ ONLY
CONSTRAINTS            export constraints (Y)
OBJECT_CONSISTENT      transaction set to read only during object export (N)
FEEDBACK               display progress (a dot) for every N rows (0)
FILESIZE               maximum size of each dump file
FLASHBACK_SCN          SCN used to set session snapshot back to
FLASHBACK_TIME         time used to get the SCN closest to the specified time
QUERY                  select clause used to export a subset of a table
RESUMABLE              suspend when a space related error is encountered (N)
RESUMABLE_NAME         text string used to identify resumable statement
RESUMABLE_TIMEOUT      wait time for RESUMABLE
TTS_FULL_CHECK         perform full or partial dependency check for TTS
VOLSIZE                number of bytes to write to each tape volume (not available from Oracle 11g Release 2)
TABLESPACES            list of tablespaces to export
TRANSPORT_TABLESPACE   export transportable tablespace metadata (N)
TEMPLATE               template name which invokes iAS mode export

Examples:
$ exp system/manager file=emp.dmp log=emp_exp.log full=y
==> exporting full database.

$ exp system/manager file=owner.dmp log=owner.log owner=owner direct=y STATISTICS=none
==> exporting all the objects of a schema.

$ exp file=schemas.dmp log=schemas.log owner=master,owner,user direct=y STATISTICS=none
==> exporting all the objects of multiple schemas.

$ exp file=testdb_emp.dmp log=testdb_emp.log tables=scott.emp direct=y STATISTICS=none
==> exporting all the rows in table (emp table records in scott schema).

$ exp file=itinitem.dmp log=itinitem.log tables=tom.ITIN,tom.ITEM query=\"where CODE in \(\'OT65FR7H\',\'ATQ56F7H\'\)\" statistics=none
==> exporting the records of some tables which satisfy a particular criterion.

$ exp transport_tablespace=y tablespaces=THU statistics=none file=THU.dmp log=thu_exp.log
==> exporting at tablespace level.

$ exp FILE=file1.dmp,file2.dmp,file3.dmp FILESIZE=10M LOG=multiple.log
==> exporting to multiple files.

$ exp file=scott.dmp log=scott.log inctype=complete
==> exporting full database (after some incremental/cumulative backups).

$ exp file=scott.dmp log=scott.log inctype=cumulative
==> exporting cumulatively (taking backup from last complete or cumulative backup).

$ exp file=scott.dmp log=scott.log inctype=incremental
==> exporting incrementally (taking backup from last complete or cumulative or incremental backup).

imp utility

imp provides backward compatibility, i.e. it allows you to import objects that were exported from lower Oracle versions as well.

imp doesn't recreate already existing objects. It either aborts the import process (default) or ignores the errors (if you specify IGNORE=Y).


USERID must be the first parameter on the command line.

$ imp help=y

Keyword                 Description (Default)
USERID                  username/password
FULL                    import entire file (N). To do the full database import, that user must have IMP_FULL_DATABASE role
BUFFER                  size of data buffer. OS dependent
FROMUSER                list of owner usernames
FILE                    input files (EXPDAT.DMP)
TOUSER                  list of usernames
SHOW                    just list file contents (N), will be used to check the validity of the dump file
TABLES                  list of table names
IGNORE                  ignore create errors (N)
RECORDLENGTH            length of IO record
GRANTS                  import grants (Y)
INCTYPE                 incremental import type. valid keywords are SYSTEM (for definitions), RESTORE (for data)
INDEXES                 import indexes (Y)
COMMIT                  commit array insert (N)
ROWS                    import data rows (Y)
PARFILE                 parameter filename
LOG                     log file of screen output
CONSTRAINTS             import constraints (Y)
DESTROY                 overwrite tablespace datafile (N)
INDEXFILE               will write DDLs of the objects in the dumpfile into the specified file
SKIP_UNUSABLE_INDEXES   skip maintenance of unusable indexes (N)
FEEDBACK                display progress every x rows (0)
TOID_NOVALIDATE         skip validation of specified type ids
FILESIZE                maximum size of each dump file
STATISTICS              import precomputed statistics (ALWAYS)
RESUMABLE               suspend when a space related error is encountered (N)
RESUMABLE_NAME          text string used to identify resumable statement
RESUMABLE_TIMEOUT       wait time for RESUMABLE
COMPILE                 compile procedures, packages, and functions (Y)
STREAMS_CONFIGURATION   import streams general metadata (Y)
STREAMS_INSTANTIATION   import streams instantiation metadata (N)
VOLSIZE                 number of bytes in file on each volume of a file on tape (not available from Oracle 11g Release 2)
DATA_ONLY               import only data (N) (from Oracle 11g Release 2)

The following keywords only apply to transportable tablespaces:
TRANSPORT_TABLESPACE    import transportable tablespace metadata (N)
TABLESPACES             tablespaces to be transported into database
DATAFILES               datafiles to be transported into database
TTS_OWNERS              users that own data in the transportable tablespace set

Examples:
$ imp system/manager file=emp.dmp log=emp_imp.log full=y
==> importing all the exported data.

$ imp system/manager file=testdb_emp.dmp log=testdb_emp_imp.log tables=tester.employee
==> importing all the records of table (employee table records in tester schema).

$ imp FILE=two.dmp LOG=two.log IGNORE=Y GRANTS=N INDEXES=N COMMIT=Y TABLES=(brand, book)
==> importing all the records of couple of tables.

$ imp system/manager file=intenary.dmp log=intenary.log FROMUSER=tom TOUSER=jerry ignore=y
==> importing data of one schema into another schema

$ imp “/as sysdba” file=TUE.dmp TTS_OWNERS=OWNER tablespaces=TUE transport_tablespace=y datafiles=TUE.dbf

$ imp file=transporter3.dmp log=transporter3.log inctype=system
==> importing definitions from backup.

$ imp file=transporter3.dmp log=transporter3.log inctype=restore
==> importing data from backup.

$ imp file=spider.dmp log=spider.log show=y
==> checks the validity of the dumpfile.

$ imp file=scott.dmp log=scott.log indexfile=scott_schema.sql
==> will write DDLs of the objects in exported dumpfile (scott schema) into specified file. This command won’t import the objects.


Related Views
DBA_EXP_VERSION
DBA_EXP_FILES
DBA_EXP_OBJECTS