Export/Import – Data Pump Interview Questions and Answers – 1 in Oracle

1) What is use of CONSISTENT option in exp?

Ensures cross-table consistency by implementing SET TRANSACTION READ ONLY for the duration of the export. Default value is N.
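For example (the schema and file names here are just illustrative):
exp scott/tiger owner=scott file=scott_consistent.dmp log=scott_consistent.log consistent=y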

2) What is use of DIRECT=Y option in exp?

Setting DIRECT=Y extracts data by reading it directly, bypassing the SQL command-processing layer (the evaluating buffer) and the buffer cache, so it is usually faster than a conventional path export. Default value is N.
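A direct path export of a single table might look like this (table and file names are illustrative):
exp scott/tiger tables=emp direct=y recordlength=65535 file=emp_direct.dmp log=emp_direct.log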

3) What is use of COMPRESS option in exp?

Imports into one extent. Specifies how export manages the initial extent for the table data.
This parameter is helpful during database re-organization.
Export the objects (especially tables and indexes) with COMPRESS=Y.
If a table was spanning 20 extents of 1M each (which is not desirable from a performance point of view), and you export the table with COMPRESS=Y, the generated DDL will have an INITIAL extent of 20M, so the extents are coalesced into one when importing.
Sometimes it is desirable to export with COMPRESS=N, for example when you do not have contiguous space in the target tablespace and do not want the import to fail.

4) How to improve exp performance?

ANS:

a). Set the BUFFER parameter to a high value. Default is 256KB.
b). Stop unnecessary applications to free the resources.
c). If you are running multiple sessions, make sure they write to different disks.
d). Do not export to NFS (Network File System); exporting to local disk is faster.
e). Set the RECORDLENGTH parameter to a high value.
f). Use DIRECT=yes (direct mode export).
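Putting several of these together, a tuned export might look like the sketch below (paths and values are illustrative; note that BUFFER applies only to conventional path exports, while RECORDLENGTH sizes the I/O buffer for direct path):
exp system/manager full=y direct=y recordlength=65535 file=/u02/exp/full.dmp log=/u02/exp/full_exp.log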

5) How to improve imp performance?

ANS:

a). Place the file to be imported on a separate disk from the datafiles.
b). Increase DB_CACHE_SIZE.
c). Set LOG_BUFFER to a large size.
d). Stop redo log archiving, if possible.
e). Use COMMIT=n, if possible.
f). Set the BUFFER parameter to a high value. Default is 256KB.
g). It is advisable to drop indexes before importing (or set INDEXES=N) and build them after the import, to speed up the import process.
Indexes can easily be recreated after the data has been successfully imported.
h). Use STATISTICS=NONE.
i). Disable INSERT triggers, as they fire during import.
j). Set COMMIT_WRITE=NOWAIT (in Oracle 10g) or COMMIT_WAIT=NOWAIT (in Oracle 11g) during the import.
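A tuned import along these lines might look like the sketch below (paths and values are illustrative):
imp system/manager full=y file=/u02/exp/full.dmp log=/u02/imp/full_imp.log buffer=10485760 commit=n indexes=n statistics=none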

6) What is use of INDEXFILE option in imp?

ANS:

Writes the DDL of the objects in the dump file to the specified file, without importing any data.
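For example, to extract the index DDL from a dump file without loading any data (file names are illustrative):
imp system/manager file=scott.dmp full=y indexfile=scott_indexes.sql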

7) What is use of IGNORE option in imp?

ANS:

Ignores object-creation errors during import (for example, when a table already exists) and continues with the import.

8) What are the differences between expdp and exp (Data Pump or normal exp/imp)?

ANS:

Data Pump is server-centric (the dump files are created on the server).
Data Pump has an API (DBMS_DATAPUMP), so jobs can be run from PL/SQL procedures.
In Data Pump, we can stop and restart jobs.
Data Pump can do parallel execution.
Tapes and pipes are not supported in Data Pump.
Data Pump consumes more undo tablespace.
Data Pump import will create the user if the user doesn't exist.
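A minimal PL/SQL sketch of running an export job through the DBMS_DATAPUMP API (the job name, directory object and dump file names are assumptions for illustration):
DECLARE
  h         NUMBER;
  job_state VARCHAR2(30);
BEGIN
  -- Open a schema-mode export job (job name is illustrative)
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'SCOTT_EXP_API');
  -- Attach a dump file and a log file (directory object DPUMP_DIR is assumed to exist)
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'scott_api.dmp', directory => 'DPUMP_DIR');
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'scott_api.log', directory => 'DPUMP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- Restrict the job to the SCOTT schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''SCOTT'')');
  -- Start the job and wait for it to finish
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
  DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || job_state);
END;
/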

9) Why expdp is faster than exp (or) why Data Pump is faster than conventional export/import?

Data Pump works in block mode; exp works in byte mode.
Data Pump can do parallel execution.
Data Pump uses the direct path API.

10)  How to improve expdp performance?

ANS:

Use the PARALLEL option, which increases the number of worker processes. It should be set based on the number of CPUs.
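For example (file and job names are illustrative; the %U wildcard lets each worker write to, and later read from, its own file):
expdp system/password schemas=scott directory=dumplocation dumpfile=scott_par_%U.dmp logfile=scott_par_exp.log parallel=4
impdp system/password directory=dumplocation dumpfile=scott_par_%U.dmp logfile=scott_par_imp.log parallel=4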

11) How to improve impdp performance?

ANS:

Use the PARALLEL option, which increases the number of worker processes; set it based on the number of CPUs (see the expdp/impdp example under question 10).

12) In Data Pump, where the jobs info will be stored (or) if you restart a job in Data Pump, how it will know from where to resume?

Whenever a Data Pump export or import is running, Oracle creates a master table named after the JOB_NAME, and drops it once the job is done. From this table, Oracle finds out how much of the job has completed and from where to continue.

Default export job name will be SYS_EXPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE.
Default import job name will be SYS_IMPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE.

13) What is the order of importing objects in impdp?

 Tablespaces
 Users
 Roles
 Database links
 Sequences
 Directories
 Synonyms
 Types
 Tables/Partitions
 Views
 Comments
 Packages/Procedures/Functions
 Materialized views

14) How to import only metadata?

ANS:

CONTENT=METADATA_ONLY
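For example (directory and file names are illustrative):
impdp system/password directory=dumplocation dumpfile=schemaexpdb.dmp logfile=meta_only_imp.log content=metadata_only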

15) How to import into different user/tablespace/datafile/table?

ANS:

REMAP_SCHEMA
REMAP_TABLESPACE
REMAP_DATAFILE
REMAP_TABLE
REMAP_DATA
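For example, to load the scott objects into the training schema and move segments from the USERS tablespace to the EXAMPLE tablespace (schema, tablespace and file names are illustrative):
impdp system/password directory=dumplocation dumpfile=schemaexpdb.dmp logfile=remap_imp.log remap_schema=scott:training remap_tablespace=users:example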

16) Using Data Pump, how to export in higher version (11g) and import into lower version (10g), can we import to 9i?

ANS:

Data Pump import can always read dump file sets created by older database versions, so a dump taken with expdp on 10g can be imported with impdp on 11g without any extra option.
The VERSION parameter is for the other way around: to import data taken from an 11g database into a 10g database, you must specify VERSION (set to the target release) while taking the export.
Data Pump was introduced in 10g, so a Data Pump dump file cannot be imported into 9i; the original exp/imp has to be used for that.
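For example, an 11g export intended for a 10.2 target might look like this (names are illustrative):
expdp system/password schemas=scott directory=dumplocation dumpfile=scott_10g.dmp logfile=scott_10g_exp.log version=10.2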

17) How to do transport tablespaces (and across platforms) using exp/imp or expdp/impdp?

ANS: [http://satya-dba.blogspot.in/2010/01/oracle-transportable-tablespaces-tts.html ]

We can use the transportable tablespaces feature to copy/move subset of data (set of user tablespaces), from an Oracle database and plug it in to another Oracle database. The tablespaces being transported can be either dictionary managed or locally managed.

With Oracle 8i, Oracle introduced transportable tablespace (TTS) technology that moves tablespaces between databases. Oracle 8i supports tablespace transportation between databases that run on same OS platforms and use the same database block size.

With Oracle 9i, TTS (Transportable Tablespaces) technology was enhanced to support tablespace transportation between databases on platforms of the same type, but using different block sizes.

With Oracle 10g, TTS (Transportable Tablespaces) technology was further enhanced to support transportation of tablespaces between databases running on different OS platforms (e.g. Windows to Linux, Solaris to HP-UX) that have the same endian format. Oracle Database 10g Release 1 introduced cross-platform transportable tablespaces (XTTS), which allows data files to be moved between platforms of different endian formats. XTTS is an enhancement to transportable tablespaces (TTS). If the endian formats are different, we have to use RMAN to convert the datafiles (e.g. Windows to Solaris, Tru64 to AIX).

 select * from v$transportable_platform order by platform_id;

18) How to determine the schemas inside an Oracle Data Pump export file?

 strings dumpfile.dmp | grep SCHEMA_LIST
(or)
$ strings myfile.dmp|more

DB Refresh in oracle

DB Refresh steps:
 =================

 1. Take an export (expdp) of the source database (ctsp).
 expdp system/manager directory=expdp_dir1 dumpfile=ctsfull.dmp logfile=ctsfull.log full=y

 2. Move the dump file to the target database server (ctsd) using scp. If both databases run on the same server, this step is not required.
scp ctsfull.dmp oracle@<target_host>:/u02/app/oracle

 3. Drop the application users; do not drop database default users like SYS, SYSTEM, DBSNMP, XDB.
If you run the command below it shows the username and created date. Usually the database default users are created on the same day.

 select username,created from dba_users;
 drop user raja cascade;

 4. Before doing the import (impdp), check the used space in the source database and the free space in the target database. The tablespace names should be the same between source and target, and each target tablespace should be larger than or equal to the corresponding source tablespace.

 Source:
select tablespace_name,sum(bytes/1024/1024) from dba_segments group by tablespace_name;

 Target:
 select tablespace_name,sum(bytes/1024/1024) from dba_free_space group by tablespace_name;

 5. Start the import (impdp) in the target database.
 impdp system/manager directory=expdp_dir1 dumpfile=ctsfull.dmp logfile=ctsfull_impdp.log full=y

 6. Once completed, compare the object counts between the source and target databases.
 select object_type,count(*) from dba_objects where status='VALID' group by object_type;

 =============================== end ===========================

Schema Refresh in oracle

 Schema Refresh steps:
 =====================

 1. Take an export (expdp) of the schema in the source database (ctsp).
 expdp system/manager directory=expdp_dir1 dumpfile=ctsuser.dmp logfile=ctsuser.log schemas=raja

 Note: Do not put a semicolon (;) at the end of the above command.

 2. Move the dump file to the target database server (ctsd) using scp. If both databases run on the same server, this step is not required.

 scp ctsuser.dmp oracle@<target_host>:/u02/app/oracle

 3. Create the new user in the target database (if it already exists, drop and recreate it).

 select username from dba_users;
 drop user raja cascade;
 create user raja identified by raja;

 4. Before doing the import (impdp), check the used space of the schema (user) in the source database and the free space in the target database; the target tablespaces should be larger than or equal to the corresponding source tablespaces.

 Source:
 select tablespace_name,sum(bytes/1024/1024) from dba_segments where owner='RAJA' group by tablespace_name;

 Target:
 select tablespace_name,sum(bytes/1024/1024) from dba_free_space group by tablespace_name;

5. Start the import (impdp) in the target database.

impdp system/manager directory=expdp_dir1 dumpfile=ctsuser.dmp logfile=ctsuser_impdp.log remap_schema=raja:raja

6. Once completed, compare the object counts between the source and target databases.

 select object_type,count(*) from dba_objects where owner='RAJA' and status='VALID' group by object_type;

Prerequisites before Export and Import Process datapump

Prerequisites before Export and Import Process
1. Make sure you have set the ORACLE_HOME variable in your environment. Yours may vary from mine. export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1

2. Create a directory on your system where we are going to put the Oracle Data Pump (dump) file. In my case I will save the dump file in /home/oracle/dpump_dir: mkdir /home/oracle/dpump_dir

3. Next we need to create the directory object from SQL*Plus. You must be logged in as SYSDBA. dbora is the SID on my system. $ORACLE_HOME/bin/sqlplus sys@dbora as sysdba

4. After logging in to SQL*Plus, execute: create directory dpump_dir as '/home/oracle/dpump_dir';
grant read,write on directory dpump_dir to dbuser;

dbuser is the username that we will use it for export and import process.

5. We also need to give dbuser the privileges to create any directory and to export/import the full database. grant create any directory to dbuser;
grant EXP_FULL_DATABASE to dbuser;
grant IMP_FULL_DATABASE to dbuser;

Oracle Expdp/Impdp Utility: Theory, Examples, Practicals, Errors, Solutions, Real-Time Scenarios, as well as Imp/Exp Comparison.

Data Pump is a new feature in Oracle 10g that provides fast parallel data load. With direct path and parallel execution, Data Pump is several times faster than the traditional exp/imp. Traditional exp/imp runs on the client side, but impdp/expdp runs on the server side, so we have much more control over expdp/impdp compared to traditional exp/imp. Compared to exp/imp, Data Pump startup time is longer, because it has to set up the jobs, queues, and master table. Also, at the end of the export operation the master table data is written to the dump file set, and at the beginning of the import job the master table is located and loaded into the schema of the user.
The following processes are involved in a Data Pump operation:
Client Process: This process is initiated by the client utility. It makes calls to the Data Pump API. Once the Data Pump job is initiated, this process is not necessary for the progress of the job.
Shadow Process: When the client logs in to the database, a foreground process is created.
It services the client's Data Pump API requests. This process creates the master table and creates the
Advanced Queuing queues used for communication. Once the client process ends, the shadow process also goes away.
Master Control Process: The MCP controls the execution of the Data Pump job. There is one MCP per job.
The MCP divides the Data Pump job into various metadata and data load or
unload jobs and hands them over to the worker processes.
Worker Process: The MCP creates worker processes based on the value of the PARALLEL parameter.
The worker processes perform the tasks requested by the MCP.
Advantages of Data Pump
1. We can perform exports in parallel. It can also write to multiple files on different disks.
(Specify PARALLEL=2 and the two directory names in the file specification, e.g. DUMPFILE=ddir1:file1.dmp,ddir2:file2.dmp)
2. Has the ability to attach to and detach from a job, and to monitor the job progress remotely.
3. Has more options to filter metadata objects, e.g. EXCLUDE, INCLUDE.
4. The ESTIMATE_ONLY option can be used to estimate disk space requirements before performing the job.
5. Data can be exported from a remote database by using a database link.
6. An explicit DB version can be specified, so only supported object types are exported.
7. During impdp, we can change the target file names, schema, and tablespace,
e.g. REMAP_SCHEMA, REMAP_DATAFILE, REMAP_TABLESPACE.
8. Has the option to filter data rows during impdp. In traditional exp/imp,
we have this filter option only in exp; here we have the filter option in both impdp and expdp.
9. Data can be imported from one DB to another without writing to a dump file, using the NETWORK_LINK parameter.
10. Data access methods are decided automatically. In traditional exp/imp, we specify the value of the DIRECT parameter;
here it decides automatically, and where direct path cannot be used, the conventional path is used.
11. Job status can be queried directly from the data dictionary (for example, DBA_DATAPUMP_JOBS, DBA_DATAPUMP_SESSIONS etc.)
Exp & Expdp common parameters: The below parameters exist in both the traditional exp and the expdp utility.
FILESIZE
FLASHBACK_SCN
FLASHBACK_TIME
FULL
HELP
PARFILE
QUERY
TABLES
TABLESPACES
TRANSPORT_TABLESPACES(exp value is Y/N, expdp value is name of the tablespace)
Comparing exp & expdp parameters: The below parameters are equivalent between exp and expdp.
Exp and corresponding Expdp parameters…
FEEDBACK => STATUS
FILE => DUMPFILE
LOG => LOGFILE
OWNER => SCHEMAS
TTS_FULL_CHECK => TRANSPORT_FULL_CHECK
New parameters in expdp Utility
ATTACH Attach the client session to existing data pump jobs
CONTENT Specify what to export(ALL, DATA_ONLY, METADATA_ONLY)
DIRECTORY Location to write the dump file and log file.
ESTIMATE Show how much disk space each table in the export job consumes.
ESTIMATE_ONLY Estimates the space, but does not perform the export
EXCLUDE List of objects to be excluded
INCLUDE List of objects to be included
JOB_NAME Name of the export job
KEEP_MASTER Specify Y not to drop the master table after export
NETWORK_LINK Specify dblink to export from remote database
NOLOGFILE Specify Y if you do not want to create a log file
PARALLEL Specify the maximum number of threads for the export job
VERSION DB objects that are incompatible with the specified version will not be exported.
ENCRYPTION_PASSWORD If a table column is encrypted and no password is specified, it is written as clear
text in the dump file set. We can define any string as the password for this parameter.
COMPRESSION Specifies whether to compress metadata before writing to the dump file set.
The default is METADATA_ONLY. There are two values (METADATA_ONLY, NONE). We can use NONE
if we want to disable compression during the expdp.
SAMPLE – Allows you to specify a percentage of data to be sampled and unloaded from the source database.
The sample_percent indicates the probability that a block of rows will be selected as part of the sample.
Imp & Impdp common parameters: The below parameters exist in both the traditional imp and the impdp utility.
FULL
HELP
PARFILE
QUERY
SKIP_UNUSABLE_INDEXES
TABLES
TABLESPACES
Comparing imp & impdp parameters: The below parameters are equivalent between imp and impdp.
imp and corresponding impdp parameters…
DATAFILES => TRANSPORT_DATAFILES
DESTROY =>REUSE_DATAFILES
FEEDBACK =>STATUS
FILE =>DUMPFILE
FROMUSER => SCHEMAS, REMAP_SCHEMA
IGNORE =>TABLE_EXISTS_ACTION(SKIP,APPEND,TRUNCATE,REPLACE)
INDEXFILE, SHOW=>SQLFILE
LOG =>LOGFILE
TOUSER =>REMAP_SCHEMA
New parameters in impdp Utility
FLASHBACK_SCN Performs import operation that is consistent with the SCN specified from the source database.
Valid only when NETWORK_LINK parameter is used.
FLASHBACK_TIME Similar to FLASHBACK_SCN, but oracle finds the SCN close to the time specified.
NETWORK_LINK Performs import directly from a source database using database link name specified in the parameter.
The dump file will not be created on the server when we use this parameter.
To get a consistent export from the source database,
we can use the FLASHBACK_SCN or FLASHBACK_TIME parameters.
These two parameters are only valid when we use the NETWORK_LINK parameter.
REMAP_DATAFILE Changes name of the source DB data file to a different name in the target.
REMAP_SCHEMA Loads objects to a different target schema name.
REMAP_TABLESPACE Changes name of the source tablespace to a different name in the target.
TRANSFORM We can specify that the storage clause should not be generated in the DDL for import.
This is useful if the storage characteristics of the source and target database are different.
The valid values are SEGMENT_ATTRIBUTES, STORAGE. STORAGE removes the storage clause from the CREATE statement DDL,
whereas SEGMENT_ATTRIBUTES removes physical attributes, tablespace, logging, and storage attributes.
TRANSFORM = name:boolean_value[:object_type], where boolean_value is Y or N.
For instance, TRANSFORM=storage:N:table
ENCRYPTION_PASSWORD It is required on an import operation if an encryption password was specified on the export operation.
CONTENT, INCLUDE, EXCLUDE are same as expdp utilities.
Prerequisite for expdp/impdp:
Set up the dump location in the database.
system@orcl> create directory dumplocation as 'c:/dumplocation';
Directory created.
system@orcl> grant read,write on directory dumplocation to scott;
Grant succeeded.
Let us experiment with the expdp & impdp utilities in different scenarios. We have two databases, orcl and ordb.
All the below scenarios were tested in the Oracle 10g R2 version.
Scenario 1 :
Export the whole orcl database.
Export Parfile content:
userid=system/password@orcl
dumpfile=expfulldp.dmp
logfile=expfulldp.log
full=y
directory=dumplocation
Scenario 2
Export the scott schema from orcl and import into ordb database.
While importing, exclude some objects(sequence,view,package,cluster,table).
Load the objects which came from RES tablespace into USERS tablespace in target database.
Export Parfile content:
userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
schemas=scott
Import parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
remap_tablespace=res:users
exclude=sequence,view,package,cluster,table:"in('LOAD_EXT')"
Scenario 3
Export the emp table from scott schema at orcl instance and import into ordb instance.
Expdp parfile content:
userid=system/password@orcl
logfile=tableexpdb.log
directory=dumplocation
tables=scott.part_emp
dumpfile=tableexpdb.dmp
Impdp parfile content:
userid=system/password@ordb
dumpfile=tableexpdb.dmp
logfile=tabimpdb.log
directory=dumplocation
table_exists_action=REPLACE
Scenario 4
Export only specific partition in emp table from scott schema at orcl and import into ordb database.
Expdp parfile content:
userid=system/password@orcl
dumpfile=partexpdb.dmp
logfile=partexpdb.log
directory=dumplocation
tables=scott.part_emp:part10,scott.part_emp:part20
Impdp parfile content:
If we want to overwrite the exported data in the target database, then we need to delete the rows with deptno in (10,20) from the part_emp table.
scott@ordb> delete part_emp where deptno=10;
786432 rows deleted.
scott@ordb> delete part_emp where deptno=20;
1310720 rows deleted.
scott@ordb> commit;
Commit complete.
userid=system/password@ordb
dumpfile=partexpdb.dmp
logfile=tabimpdb.log
directory=dumplocation
table_exists_action=append
Scenario 5
Export only tables in scott schema at orcl and import into ordb database.
Expdp parfile content:
userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
include=table
schemas=scott
Impdp parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
Scenario 6
Export only rows belonging to department 10 and 20 in emp and dept table from orcl database.
Import the dump file in ordb database. While importing, load only deptno 10 in target database.
Expdp parfile content:
userid=system/password@orcl
dumpfile=data_filter_expdb.dmp
logfile=data_filter_expdb.log
directory=dumplocation
content=data_only
schemas=scott
include=table:"in('EMP','DEPT')"
query="where deptno in(10,20)"
Impdp parfile content:
userid=system/password@ordb
dumpfile=data_filter_expdb.dmp
logfile=data_filter_impdb.log
directory=dumplocation
schemas=scott
query="where deptno = 10"
table_exists_action=APPEND
Scenario 7
Export the scott schema from the orcl database and split the dump file into 50M pieces.
Import the dump file into the ordb database.
Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
filesize=50M
schemas=scott
include=table
As per the above expdp parfile, initially the schemaexp_split_01.dmp file will be created. Once that file reaches 50MB,
the next file, schemaexp_split_02.dmp, will be created. If, say, the total dump size is 500MB,
then it creates 10 dump files, each 50MB in size.
Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
table_exists_action=replace
remap_tablespace=res:users
exclude=grant
Scenario 8
Export the scott schema from orcl database and split the dump file into four files. Import the dump file
into the ordb database.
Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
parallel=4
schemas=scott
include=table
As per the above parfile content, initially four files will be created – schemaexp_split_01.dmp,
schemaexp_split_02.dmp, schemaexp_split_03.dmp, schemaexp_split_04.dmp. Notice that every occurrence
of the substitution variable is incremented each time. Since there is no FILESIZE parameter,
no more files will be created.
Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
table_exists_action=replace
remap_tablespace=res:users
exclude=grant
Scenario 9
Export the scott schema from the orcl database and split the dump file into three files. The dump files
will be stored in three different locations. This method is especially useful if you do not have enough
space in one file system to perform the complete expdp job. After the export is successful, import the dump
file into the ordb database.
Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
filesize=50M
schemas=scott
include=table
As per the above expdp parfile content, it places the dump files into three different locations. Let us say
the entire expdp dump file size is 1500MB. Then it creates 30 dump files (each 50MB in size) and
places 10 files in each file system.
Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
table_exists_action=replace
Scenario 10
We are on the orcl database server. Now export the ordb data and place the dump file on the orcl database server.
After expdp is successful, import the dump file into the orcl database. When we use network_link, the expdp user
and the source database schema user should have identical privileges. If they do not have identical privileges,
we get the below error.
C:\impexpdp>expdp parfile=networkexp1.par
Export: Release 10.2.0.1.0 – Production on Sunday, 17 May, 2009 12:06:40
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Production
With the Partitioning, OLAP and Data Mining options
ORA-31631: privileges are required
ORA-39149: cannot link privileged user to non-privileged user
Expdp parfile content:
userid=scott/tiger@orcl
logfile=netwrokexp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
schemas=scott
include=table
network_link=ordb
As per the above parfile, the expdp utility exports the ordb database data and places the dump file on the orcl server, since we are running expdp on the orcl server. This is basically exporting data from a remote database.
Impdp parfile content:
userid=system/password@orcl
logfile=networkimp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
table_exists_action=replace
Scenario 11
Export the scott schema in orcl and import it into ordb, but do not write a dump file on the server; the expdp and impdp should complete without writing a dump file on the server. In fact we do not need to run an export at all; we can import the data without creating a dump file.
Here we run impdp on the ordb server and it contacts the orcl DB, extracts the data and imports it into the ordb database.
If we do not have much space in the file system to place the dump file, then we can use this option to load the data.
Impdp parfile content:
userid=scott/tiger@ordb
network_link=orcl
logfile=networkimp2.log
directory=dumplocation
table_exists_action=replace
Scenario 12
Expdp the scott schema from orcl and impdp the dump file into the training schema in the ordb database.
Expdp parfile content:
userid=scott/tiger@orcl
logfile=netwrokexp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
schemas=scott
include=table
Impdp parfile content:
userid=system/password@ordb
logfile=networkimp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
table_exists_action=replace
remap_schema=scott:training
Scenario 13
Expdp a table on the orcl database and impdp it into ordb. When we export the data, export only 20 percent of the table data. We use the SAMPLE parameter to accomplish this task.
The SAMPLE parameter allows you to export subsets of data by specifying the percentage of data to be sampled and exported.
The sample_percent indicates the probability that a block of rows will be selected as part of the sample.
It does not mean that the database will retrieve exactly that amount of rows from the table.
The value you supply for sample_percent can be anywhere from .000001 up to, but not including, 100.
If no table is specified, then the sample_percent value applies to the entire export job. The SAMPLE parameter
is not valid for network exports.
Expdp parfile content:
userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
tables=scott.part_emp
SAMPLE=20
As per the above expdp parfile, it exports only 20 percent of the data in the part_emp table.
Impdp parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
We are creating 2 schemas.
1. User1 – this schema has the partitioned table SALES which needs to be exported.
2. User2 – the schema into which we are going to import the SALES table with the new name SALES_T.
1.Create users:
[oracle@NVMBD01PSR183 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Fri Dec 19 13:15:44 2014
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> create user user1 identified by user1;
User created.
SQL> create user user2 identified by user2;
User created.
SQL> grant connect, resource to user1, user2;
Grant succeeded.
2.Create datapump directory.
SQL>CREATE OR REPLACE DIRECTORY EXPDP_DIR AS '/oracle/data1/expdp';
SQL> GRANT READ, WRITE ON DIRECTORY EXPDP_DIR TO user1,user2;
3.Create partition table SALES.
SQL> conn user1/user1
Connected.
SQL> CREATE TABLE sales
( name varchar2(5),
time_id DATE)
PARTITION BY RANGE (time_id)
(PARTITION sales_AUG VALUES LESS THAN (TO_DATE('01-AUG-2014','dd-MON-yyyy'))
TABLESPACE users,
PARTITION sales_SEP VALUES LESS THAN (TO_DATE('01-SEP-2014','dd-MON-yyyy'))
TABLESPACE users,
PARTITION sales_OCT VALUES LESS THAN (TO_DATE('01-OCT-2014','dd-MON-yyyy'))
TABLESPACE users,
PARTITION sales_NOV VALUES LESS THAN (TO_DATE('01-NOV-2014','dd-MON-yyyy'))
TABLESPACE users,
PARTITION sales_DEV VALUES LESS THAN (TO_DATE('01-DEC-2014','dd-MON-yyyy'))
TABLESPACE users );
SQL> SELECT partition_name FROM user_tab_partitions WHERE table_name = 'SALES';
PARTITION_NAME
——————————
SALES_AUG
SALES_DEV
SALES_NOV
SALES_OCT
SALES_SEP
EXPDP:
1. Export entire table including all partitions
[oracle@NVMBD01PSR183 ~]$ expdp user1/user1 directory=EXPDP_DIR dumpfile=sales_table.dmp tables=sales
logfile=sales_table.log
Export: Release 11.2.0.3.0 – Production on Fri Dec 19 11:13:37 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting “USER1”.”SYS_EXPORT_TABLE_01″:  user1/******** directory=EXPDP_DIR dumpfile=sales_table.dmp
tables=sales logfile=sales_table.log
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 24 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported “USER1″.”SALES”:”SALES_NOV”                5.421 KB       1 rows
. . exported “USER1″.”SALES”:”SALES_OCT”                5.421 KB       1 rows
. . exported “USER1″.”SALES”:”SALES_SEP”                5.421 KB       1 rows
. . exported “USER1″.”SALES”:”SALES_AUG”                    0 KB       0 rows
. . exported “USER1″.”SALES”:”SALES_DEV”                    0 KB       0 rows
Master table “USER1”.”SYS_EXPORT_TABLE_01″ successfully loaded/unloaded
******************************************************************************
Dump file set for USER1.SYS_EXPORT_TABLE_01 is:
 /oracle/data1/expdp/sales_table.dmp
Job “USER1”.”SYS_EXPORT_TABLE_01″ successfully completed at 11:13:46
2. Export specific partition of table:
[oracle@NVMBD01PSR183 ~]$ expdp user1/user1 directory=EXPDP_DIR dumpfile=sales_table_partition.dmp
tables=sales:SALES_NOV logfile=sales_table_partition.log
Export: Release 11.2.0.3.0 – Production on Fri Dec 19 12:31:06 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting “USER1”.”SYS_EXPORT_TABLE_01″:  user1/******** directory=EXPDP_DIR dumpfile=sales_table_partition.dmp
tables=sales:SALES_NOV logfile=sales_table_partition.log
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 8 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported “USER1″.”SALES”:”SALES_NOV”                5.421 KB       1 rows
Master table “USER1”.”SYS_EXPORT_TABLE_01″ successfully loaded/unloaded
******************************************************************************
Dump file set for USER1.SYS_EXPORT_TABLE_01 is:
 /oracle/data1/expdp/sales_table_partition.dmp
Job “USER1”.”SYS_EXPORT_TABLE_01″ successfully completed at 12:31:13
IMPDP
Move the dmp file to the target host (ftp, scp, etc.),
or load the data into another schema using remap_schema.
1. Import the entire partitioned table into a new schema with a new name.
[oracle@NVMBD01PSR183 ~]$ impdp user2/user2 directory=EXPDP_DIR dumpfile=sales_table.dmp remap_table=sales:sales_t
remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append
Import: Release 11.2.0.3.0 – Production on Fri Dec 19 11:19:25 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table “USER2”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
Starting “USER2”.”SYS_IMPORT_FULL_01″:  user2/******** directory=EXPDP_DIR dumpfile=sales_table.dmp
remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
Table “USER2″.”SALES_T” exists. Data will be appended to existing table but all dependent metadata will
be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “USER2″.”SALES_T”:”SALES_SEP”               5.421 KB       1 rows
. . imported “USER2″.”SALES_T”:”SALES_AUG”                   0 KB       0 rows
. . imported “USER2″.”SALES_T”:”SALES_DEV”                   0 KB       0 rows
. . imported “USER2″.”SALES_T”:”SALES_OCT”               5.421 KB       1 rows
. . imported “USER2″.”SALES_T”:”SALES_NOV”               5.421 KB       1 rows
Job “USER2”.”SYS_IMPORT_FULL_01″ successfully completed at 11:19:30
Let's check:
SQL> conn user2/user2
Connected.
SQL> select * from sales_t;
NAME  TIME_ID
—– ———
prod1 01-AUG-14
prod2 01-SEP-14
prod3 01-OCT-14
2. Now import single partition only.
[oracle@NVMBD01PSR183 ~]$ impdp user2/user2 directory=EXPDP_DIR dumpfile=sales_table_partition.dmp
remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append
Import: Release 11.2.0.3.0 – Production on Fri Dec 19 13:05:26 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table “USER2”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
Starting “USER2”.”SYS_IMPORT_FULL_01″:  user2/******** directory=EXPDP_DIR dumpfile=sales_table_partition.dmp
remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
Table “USER2″.”SALES_T” exists. Data will be appended to existing table but all dependent metadata
will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “USER2″.”SALES_T”:”SALES_NOV”               5.421 KB       1 rows
Job “USER2”.”SYS_IMPORT_FULL_01″ successfully completed at 13:05:28
Here we have successfully imported a single partition.
If a partition name is specified, it must be the name of a partition or subpartition in the associated table.
Only the specified set of tables, partitions, and their dependent objects are unloaded.
 When you use the partition option (PARTITION_OPTIONS) of Data Pump, you can choose one of the below options:
 None – Tables will be imported such that they look like those on the system on which the export was created.
 Departition – Partitions will be created as individual tables rather than partitions of a partitioned table.
 Merge – Combines all partitions into a single table.
Managing Data Pump jobs
The datapump clients expdp and impdp provide an interactive command interface.
Since each expdp and impdp operation has a job name, you can attach to that job from any computer and
monitor the job or make adjustments to the job.
Here are the data pump interactive commands.
ADD_FILE Adds another file or a file set to the DUMPFILE set.
CONTINUE_CLIENT Changes mode from interactive client to logging mode
EXIT_CLIENT Leaves the client session and discontinues logging but leaves the current job running.
KILL_JOB Detaches all currently attached client sessions and terminates the job
PARALLEL Increase or decrease the number of threads
START_JOB Starts (or resumes) a job that is not currently running. The SKIP_CURRENT option can skip
the recent failed DDL statement that caused the job to stop.
STOP_JOB Stops the current job; the job can be restarted later
STATUS Displays detailed status of the job, the refresh interval can be specified in seconds.
The detailed status is displayed to the output screen but not written to the log file.
Scenario 14: Let us start the job and, in between, stop the job in the middle and then resume it.
After some time, let us kill the job, checking the job status after every activity.
We can find what jobs are running currently in the database by using the below query.
SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
—————————— ——————————
EXECUTING SYS_IMPORT_FULL_01
SQL>
C:\impexpdp>impdp parfile=schemaimp1.par
Import: Release 10.2.0.1.0 – Production on Sunday, 17 May, 2009 14:06:51
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Production
With the Partitioning, OLAP and Data Mining options
Master table “SYSTEM”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
Starting “SYSTEM”.”SYS_IMPORT_FULL_01″: parfile=schemaimp1.par
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Import> stop_job
Are you sure you wish to stop this job ([yes]/no): yes
C:\impexpdp>
When we want to stop the job, we need to press Control-C to return to the Import> prompt. Once it has returned to the prompt (Import>),
we can stop the job as above by using the stop_job command.
After the job is stopped, here is the job status.
SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
—————————— ——————————
NOT RUNNING SYS_IMPORT_FULL_01
Now we attach the job again. Attaching the job does not restart the job.
C:\impexpdp>impdp system/password@ordb attach=SYS_IMPORT_FULL_01
Import: Release 10.2.0.1.0 – Production on Sunday, 17 May, 2009 14:17:11
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Production
With the Partitioning, OLAP and Data Mining options
Job: SYS_IMPORT_FULL_01
Owner: SYSTEM
Operation: IMPORT
Creator Privs: FALSE
GUID: 54AD9D6CF9B54FC4823B1AF09C2DC723
Start Time: Sunday, 17 May, 2009 14:17:12
Mode: FULL
Instance: ordb
Max Parallelism: 1
EXPORT Job Parameters:
CLIENT_COMMAND parfile=schemaexp1.par
IMPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND parfile=schemaimp1.par
TABLE_EXISTS_ACTION REPLACE
State: IDLING
Bytes Processed: 1,086,333,016
Percent Done: 44
Current Parallelism: 1
Job Error Count: 0
Dump File: c:/impexpdp\networkexp1.dmp
Worker 1 Status:
State: UNDEFINED
Import>
After attaching the job, here is the job status.
SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
—————————— ——————————
IDLING SYS_IMPORT_FULL_01
Attaching the job does not resume it. Now we resume the job again.
Import> continue_client
Job SYS_IMPORT_FULL_01 has been reopened at Sunday, 17 May, 2009 14:17
Restarting “SYSTEM”.”SYS_IMPORT_FULL_01″: parfile=schemaimp1.par
SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
—————————— ——————————
EXECUTING SYS_IMPORT_FULL_01
Now we kill the same job. Before we kill it, we need to press Control-C to return to the Import> prompt.
Import> kill_job
Are you sure you wish to stop this job ([yes]/no): yes
C:\impexpdp>
Now the job has disappeared from the database.
SQL> select state,job_name from dba_datapump_jobs;
no rows selected
Parallelism
Data Pump is much faster than the old exp and imp client commands. One feature that really helps make it faster is the "parallel" option. With this option, Data Pump pumps data in multiple threads (four in the examples below).
Data Pump performance can be significantly improved by using the PARALLEL parameter. This should be used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple dump files to be created or read.
$expdp scott/tiger directory=dumpdir dumpfile=scott2.dmp parallel=4 job_name=scott2
$Impdp dpuser/dpuser@TDB10G  schemas=dbuser  directory=dumpdir dumpfile=expdp_TDB10G_dbuser.200470215_07.%U.dmp parallel=4
DATA PUMP VIEWS – Data Pump maintains a number of user- and DBA-accessible views to monitor the progress of jobs:
DBA_DATAPUMP_JOBS: This shows a summary of all active Data Pump jobs on the system.
USER_DATAPUMP_JOBS: This shows a summary of the current user’s active Data Pump jobs.
DBA_DATAPUMP_SESSIONS: This shows all sessions currently attached to Data Pump jobs.
V$SESSION_LONGOPS: A row is maintained in the view showing progress on each active Data Pump job. The OPNAME column displays the Data Pump job name.
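A query along these lines (assuming the default SYS_EXPORT_/SYS_IMPORT_ job names) shows the progress of running jobs:
select sid, serial#, opname, sofar, totalwork,
       round(sofar/totalwork*100, 2) pct_done
from v$session_longops
where totalwork != 0
  and sofar != totalwork
  and opname like 'SYS_%PORT%';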
Data Pump Import:
> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim
In the Data Pump utility, the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option.
Example: export of an entire database to a dump file with all GRANTS, INDEXES, and data
> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y LOG=exportdba.log
> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL
Following are few EXPDP and IMPDP commands with additional attributes.
$ expdp dpuser/dpuser@TDB10G schemas=dpuser include=TABLE:\"IN (\'EMP\', \'DEPT\')\" directory=dpump_dir1 dumpfile=dpuser.dmp logfile=dpuser.log
$ expdp dpuser/dpuser@TDB10G schemas=dpuser exclude=TABLE:\"= \'EMP_DETAILS\'\" directory=dpump_dir1 dumpfile=dpuser2.dmp logfile=dpuser.log
$Impdp dpuser/dpuser@TDB10G  schemas=dbuser  directory=dumpdir dumpfile=expdp_TDB10G_dbuser.200470215_07.%U.dmp parallel=4
$impdp dpuser/dpuser@TDB10G  schemas=dbuser dumpfile=DATA_PUMP_DIR:expdp_TDB10G_dbuser.200507_165936.%U.dmp logfile= DATA_PUMP_DIR:expdp_TDB10G_dbuser.200507_165936.out
$expdp dpuser/dpuser@TDB10G schemas=dpuser   directory=dpump_dir1 parallel=4 dumpfile=dpuser_%U.dmp logfile=dpuser.log
CREATING ORACLE DATABASE USING IMPDP and EXPDP
Steps for creating the Oracle database:
Before creating the database, we have to take a backup/export of the entire existing schema.
Syntax: > expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL
Delete the database – drops all objects and public synonyms for objects owned by the schema.
Create the database again using the Database Configuration Assistant as a clone of the deleted database.
Connect to the Oracle database using the system user and a sysdba user and run scripts for tablespace creation, creation of users and roles, and granting of privileges.
Syntax:
create tablespace Tablespace_name
datafile '/&oradata_dir/oradata/&orasid/tablespace_name_datafile01.dbf' size 24M
autoextend on next 24M maxsize 15000M
extent management local
segment space management auto
/
Create the dump directory using CREATE OR REPLACE DIRECTORY (dpump_dir1) and grant read-write privileges on the dump directory object to the user.
The following SQL statements create a user and a directory object named dpump_dir1, and grant the permissions to the user.
Syntax:
SQLPLUS system/manager@TDB10G as sysdba
SQL> create user dpuser identified by dpuser;
SQL> grant connect, resource to dpuser;
SQL> CREATE DIRECTORY dpump_dir1 AS '/opt/app/oracle';
SQL> grant read, write on directory dpump_dir1 to dpuser;
Connect to the newly created schema to test the connection.
SQL> connect dpuser/dpuser@TDB10G
Run the IMPDP command from the command prompt
Syntax:
$impdp dpuser/dpuser@TDB10G schemas=dbuser dumpfile=DATA_PUMP_DIR:expdp_TDB10G_dbuser.200507_165936.%U.dmp
logfile= DATA_PUMP_DIR:expdp_TDB10G_dbuser.200507_165936.out
Verify the log file for errors or discrepancies.
We need to copy the .DMP files into the default database directory (DATA_PUMP_DIR) path.
I:\oracle\product\10.2.0\admin\TDB10G\dpdump\
We can also place these .DMP files in a database directory path that we have created.
Table Level Exports/Imports
The TABLES parameter is used to specify the tables that are to be exported. The following is an example of the table export and import syntax:
expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log
impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log
The TABLE_EXISTS_ACTION=APPEND parameter allows data to be imported into existing tables.
Schema Level Exports/Imports
The OWNER parameter of exp has been replaced by the SCHEMAS parameter which is used to specify the schemas to be exported. The following is an example of the schema export and import syntax:
expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=impdpSCOTT.log
Database Level Exports/Imports
The FULL parameter indicates that a complete database export is required. The following is an example of the full database export and import syntax:
expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log
impdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=impdpDB10G.log
——————————————————————

DATAPUMP UTILITY:- Transportable Tablespace: Moving a Tablespace from One DB to Another DB

This article discusses a simple example of performing TTS (Transportable Tablespace) across two databases running on the same OS and using the same database version.
SOURCE DB
1. Let's create a tablespace and a user TTS for this test
SQL>create tablespace tts_test01 datafile '/u01/oradata/orc01d/tts_test01_01.dbf' size 256m autoextend on next 256m maxsize 20g;
SQL>create user TTS identified by TTS_123 default tablespace tts_test01;
SQL>grant connect,resource to tts;
2. We wish to transport a table EMP_STORE with CLOB columns to our new database.
sys@orcl01d>grant select on scott.EMP_STORE to tts;
sys@orcl01d>conn tts/tts_123
tts@orcl01d>create table EMP_STORE_TTS as select * from scott.EMP_STORE;
Table created.
3. Let's verify the objects in the tablespace
sys@orcl01d>select owner,SEGMENT_NAME,SEGMENT_TYPE from dba_segments where tablespace_name='TTS_TEST01';
OWNER SEGMENT_NAME SEGMENT_TYPE
———- —————————— ——————
TTS EMP_STORE_TTS TABLE
TTS SYS_IL0000333770C00003$$ LOBINDEX
TTS SYS_LOB0000333770C00003$$ LOBSEGMENT
4. We need to check if this tablespace is self-contained and satisfies the prerequisites for TTS.
We execute DBMS_TTS.TRANSPORT_SET_CHECK with the tablespace name as the argument.
sys@orcl01d> exec DBMS_TTS.TRANSPORT_SET_CHECK('TTS_TEST01')
PL/SQL procedure successfully completed.

sys@orcl01d> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
no rows selected
Note: this view shows the possible errors that may arise if the tablespace is not self-contained or ready to be transported.
5. The above query does not return any rows, so we are safe to proceed.
     Now we need to make the tablespace read-only.
sys@orcl01d>alter tablespace TTS_TEST01 read only;
6. Create an Oracle directory to store the dump file.
Note: we are using exclude=xmlschema to avoid unpublished Bug 10185688 (Data Pump TTS Export Fails With ORA-39126 & ORA-904 [ID 1273450.1]),
which causes the following errors:
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.UNLOAD_METADATA [PLUGTS_BLK]
ORA-00904: “Q”.”TABOBJ_NUM”: invalid identifier
ORA-06512: at “SYS.DBMS_SYS_ERROR”, line 95
ORA-06512: at “SYS.KUPW$WORKER”, line 8358
7. We will use expdp to perform the export, using the TRANSPORT_TABLESPACES option.
[oracle@orcl01d]~% expdp system DUMPFILE=tts.dmp DIRECTORY=EMP_test TRANSPORT_TABLESPACES=TTS_TEST01
logfile=tts_exp.log exclude=xmlschema
Export: Release 11.2.0.2.0 – Production on Thu Jul 26 01:47:58 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Password:
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
Starting “SYSTEM”.”SYS_EXPORT_TRANSPORTABLE_01″: system/******** DUMPFILE=tts.dmp DIRECTORY=EMP_test
TRANSPORT_TABLESPACES=TTS_TEST01 logfile=tts_exp.log exclude=xmlschema
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table “SYSTEM”.”SYS_EXPORT_TRANSPORTABLE_01″ successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
/home/oracle/EMP/tts.dmp
******************************************************************************
Datafiles required for transportable tablespace TTS_TEST01:
/u01/oradata/orc01d/tts_test01_01.dbf
Job “SYSTEM”.”SYS_EXPORT_TRANSPORTABLE_01″ successfully completed at 01:48:42
8. Copy the datafile (using scp or cp, e.g. cp /u01/oradata/orc01d/tts_test01_01.dbf /u04/oradata/orc02d/tts_test01_01.dbf) to
the new system and into the database directory. Once done, place the tablespace back in read-write mode.
sys@orcl01d>alter tablespace TTS_TEST01 read write;
TARGET DB
1. Create the database user and assign appropriate privileges. Note that TTS only imports tables, indexes and
associated triggers; you need to have pre-created the schema.
create user TTS identified by TTS_123 default tablespace tts_test02;
grant connect,resource to tts;

2. Now import the datafile, giving its location using the impdp parameter TRANSPORT_DATAFILES.
[oracle@orcl02d]~% impdp system DUMPFILE=tts.dmp DIRECTORY=EMP_test
TRANSPORT_DATAFILES=/u04/oradata/orc02d/tts_test01_01.dbf logfile=tts_imp.log
Import: Release 11.2.0.2.0 – Production on Thu Jul 26 02:34:29 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Password:
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
Master table “SYSTEM”.”SYS_IMPORT_TRANSPORTABLE_01″ successfully loaded/unloaded
Starting “SYSTEM”.”SYS_IMPORT_TRANSPORTABLE_01″: system/******** DUMPFILE=tts.dmp DIRECTORY=EMP_test
TRANSPORT_DATAFILES=/u04/oradata/orc02d/tts_test01_01.dbf logfile=tts_imp.log
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job “SYSTEM”.”SYS_IMPORT_TRANSPORTABLE_01″ successfully completed at 02:34:42
3. Let's verify whether the object was imported
sys@orcl02d>select owner,SEGMENT_NAME,SEGMENT_TYPE from dba_segments where tablespace_name='TTS_TEST01';
OWNER SEGMENT_NAME SEGMENT_TYPE
———- —————————— ——————
TTS EMP_STORE_TTS TABLE
TTS SYS_IL0004878485C00003$$ LOBINDEX
TTS SYS_LOB0004878485C00003$$ LOBSEGMENT
Query to ensure records are present

sys@orcl02d> select count(*) from TTS.EMP_STORE_TTS;
COUNT(*)
———-
163856
4. The tablespace is imported in read-only mode, which can be verified by querying the status column in dba_tablespaces.
sys@orcl02d> select tablespace_name,status from dba_tablespaces where tablespace_name='TTS_TEST01';
TABLESPACE_NAME STATUS
—————————— ———
TTS_TEST01 READ ONLY
Let's make it read-write, as we want users to be able to update the table.
sys@orcl02d> alter tablespace TTS_TEST01 read write;
Tablespace altered.
You will notice that this is a really fast approach to copying objects across databases, as most of the time taken is
for copying the datafile to the target system.
You will also notice that we kept the tablespace in read-only mode while taking the export and copying the files.
If you wish to work around this (as it causes downtime for any DML operation),
you will have to make use of RMAN to perform TTS.

DATAPUMP UTILITY :- TRANSPORT_FULL_CHECK={y | n} IN ORACLE

TRANSPORT_FULL_CHECK={y | n}

If TRANSPORT_FULL_CHECK=y,
 then Export verifies that there are no dependencies between those objects inside the
transportable set and those outside the transportable set. The check addresses two-way dependencies. For example, if a table is inside the transportable set but its index is not, a failure is returned and the export operation is terminated. Similarly, a failure is also returned if an index is in the transportable set but the table is not.

If TRANSPORT_FULL_CHECK=n, 
then Export verifies only that there are no objects within the transportable set that
are dependent on objects outside the transportable set. This check addresses a one-way dependency. For example, a table is not dependent on an index, but an index is dependent on a table, because an index without a table has no meaning. Therefore, if the transportable set contains a table, but not its index, then this check succeeds. However, if the transportable set contains an index, but not the table, the export operation is terminated.
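For example, a transportable tablespace export that enforces the two-way check might look like this (directory, tablespace and file names are illustrative; the tablespace must already be read-only):
expdp system directory=dumplocation dumpfile=tts_check.dmp logfile=tts_check.log transport_tablespaces=tts_test01 transport_full_check=y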

IMPDP UTILITY :- TABLE_EXISTS_ACTION OPTIONS IN ORACLE

SKIP: The default value for this parameter is SKIP; the existing table is left as it is and the import skips it.
        (In the conventional import utility, the behaviour for existing tables was controlled by IGNORE: if a table already exists and IGNORE=y, rows are imported into the existing table without any errors or messages being given.)

APPEND: This option appends the data from the dump. The extra rows in the dump are appended
           to the table and the existing data remains unchanged.

TRUNCATE: This option truncates the existing rows in the table and inserts the rows from the dump.

REPLACE: This option drops the current table and re-creates it as it is in the dump file.
        Both the SKIP and REPLACE options are not valid if you set CONTENT=DATA_ONLY for impdp.
     

Method to import only rows that do not exist in the target table
See some examples here.
This is my sample table employee

SQL> select * from employee;
EMP_NAME                             DEPT        SAL
—————————— ———- ———-
Rupal                                  10       5000
Hero                                   10       5500
Jain                                   10       4000
John                                   20       6000

I took the Data Pump dump of the employee table.
$ expdp directory=exp_dir tables=scott.employee dumpfile=emp.dmp logfile=emp.log

Export: Release 11.2.0.2.0 – Production on Tue May 1 23:31:04 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting “SYS”.”SYS_EXPORT_TABLE_01″:  /******** AS SYSDBA directory=exp_dir tables=scott.employee dumpfile=emp.dmp logfile=emp.log
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported “SCOTT”.”EMPLOYEE”                        5.953 KB       4 rows
Master table “SYS”.”SYS_EXPORT_TABLE_01″ successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /home/oracle/shony/emp.dmp
Job “SYS”.”SYS_EXPORT_TABLE_01″ successfully completed at 23:31:20



Now let's try each option.

1.TABLE_EXISTS_ACTION=SKIP

$ impdp directory=exp_dir dumpfile=emp.dmp logfile=imp1.log table_exists_action=skip

Import: Release 11.2.0.2.0 – Production on Tue May 1 23:36:07 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table “SYS”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
Starting “SYS”.”SYS_IMPORT_FULL_01″:  /******** AS SYSDBA directory=exp_dir dumpfile=emp.dmp logfile=imp1.log table_exists_action=skip
Processing object type TABLE_EXPORT/TABLE/TABLE
Table “SCOTT”.”EMPLOYEE” exists. All dependent metadata and data will be skipped due to table_exists_action of skip
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Job “SYS”.”SYS_IMPORT_FULL_01″ successfully completed at 23:36:13



2.TABLE_EXISTS_ACTION=APPEND
I have deleted the rows and inserted 4 new rows into the employee table, so as of now the rows in the dump and in the table are different,
and I am going to import the dump with the APPEND option.

SQL> delete from employee;
4 rows deleted.

SQL> insert into employee (select * from emp where dept>20);
4 rows created.

SQL> commit;

SQL> select * from employee;
EMP_NAME                             DEPT        SAL
—————————— ———- ———-
Kiran                                  30       5500
Peter                                  30       6800
King                                   30       7600
Roshan                                 30       5500

$  impdp directory=exp_dir dumpfile=emp.dmp logfile=imp1.log table_exists_action=append

Import: Release 11.2.0.2.0 – Production on Wed May 2 00:50:18 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table “SYS”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
Starting “SYS”.”SYS_IMPORT_FULL_01″:  /******** AS SYSDBA directory=exp_dir dumpfile=emp.dmp logfile=imp1.log
table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
Table “SCOTT”.”EMPLOYEE” exists. Data will be appended to existing table but all dependent metadata will be 
skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “SCOTT”.”EMPLOYEE”                        5.953 KB       4 rows
Job “SYS”.”SYS_IMPORT_FULL_01″ successfully completed at 00:50:25

Now 4 more rows have been appended to the table.
  1* select * from employee
SQL> /
EMP_NAME                             DEPT        SAL
—————————— ———- ———-
Kiran                                  30       5500
Peter                                  30       6800
King                                   30       7600
Roshan                                 30       5500
Rupal                                  10       5000
Hero                                   10       5500
Jain                                   10       4000
John                                   20       6000
8 rows selected.

3.TABLE_EXISTS_ACTION=TRUNCATE
Now let's try the table_exists_action=truncate option. With the truncate option, impdp truncates the contents of the
existing table and inserts the rows from the dump. Currently my employee table has the 8 rows left over from the last insert.

$  impdp directory=exp_dir dumpfile=emp.dmp logfile=imp1.log table_exists_action=truncate

Import: Release 11.2.0.2.0 – Production on Wed May 2 00:55:03 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table “SYS”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
Starting “SYS”.”SYS_IMPORT_FULL_01″:  /******** AS SYSDBA directory=exp_dir dumpfile=emp.dmp logfile=imp1.log
table_exists_action=truncate
Processing object type TABLE_EXPORT/TABLE/TABLE
Table “SCOTT”.”EMPLOYEE” exists and has been truncated. Data will be loaded but all dependent metadata will be skipped 
due to table_exists_action of truncate
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “SCOTT”.”EMPLOYEE”                        5.953 KB       4 rows
Job “SYS”.”SYS_IMPORT_FULL_01″ successfully completed at 00:55:09

  1* select * from employee
SQL> /
EMP_NAME                             DEPT        SAL
—————————— ———- ———-
Rupal                                  10       5000
Hero                                   10       5500
Jain                                   10       4000
John                                   20       6000

4.TABLE_EXISTS_ACTION=REPLACE
This option drops the current table in the database and the import recreates the table as it is in the dump file.

impdp directory=exp_dir dumpfile=emp.dmp logfile=imp1.log table_exists_action=replace

Import: Release 11.2.0.2.0 – Production on Wed May 2 00:57:35 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Username: / as sysdba

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table “SYS”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
Starting “SYS”.”SYS_IMPORT_FULL_01″:  /******** AS SYSDBA directory=exp_dir dumpfile=emp.dmp
logfile=imp1.log table_exists_action=replace
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “SCOTT”.”EMPLOYEE”                        5.953 KB       4 rows
Job “SYS”.”SYS_IMPORT_FULL_01″ successfully completed at 00:57:40




Now if you check the last_ddl_time for the table it would be the same as the import time.

$ date
Wed May  2 00:58:21 EDT 2012

select OBJECT_NAME, to_char(LAST_DDL_TIME,'dd-mm-yyyy hh:mi:ss') created from dba_objects
where OBJECT_NAME='EMPLOYEE'
SQL> /
OBJECT_NAME          CREATED
——————– ——————-
EMPLOYEE             02-05-2012 12:57:40

Oracle Datapump expdp & impdp utility for different scenarios.

 We have two databases, orcl and ordb, both Oracle 10g R2.

Scenario 1 :
Export the whole orcl database.

Export Parfile content:
userid=system/password@orcl
dumpfile=expfulldp.dmp
logfile=expfulldp.log
full=y
directory=dumplocation
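
A parfile is just a text file passed to the client with the PARFILE parameter. A minimal sketch of running the scenario above, assuming the content is saved as expfulldp.par (the file name is my assumption, not part of the original scenario):

$ expdp parfile=expfulldp.par

The import parfiles in the following scenarios are run the same way, with impdp parfile=<filename>.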

Scenario 2
Export the scott schema from orcl and import into ordb database.
While importing, exclude some objects (sequence, view, package, cluster, table).
Load the objects which came from the RES tablespace into the USERS tablespace in the target database.

Export Parfile content:

userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
schemas=scott


Import parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
remap_tablespace=res:users
exclude=sequence,view,package,cluster,table:"in('LOAD_EXT')"

Scenario 3
Export the part_emp table from the scott schema at the orcl instance and import it into the ordb instance.

Expdp parfile content:
userid=system/password@orcl
logfile=tableexpdb.log
directory=dumplocation
tables=scott.part_emp
dumpfile=tableexpdb.dmp

Impdp parfile content:
userid=system/password@ordb
dumpfile=tableexpdb.dmp
logfile=tabimpdb.log
directory=dumplocation
table_exists_action=REPLACE


Scenario 4
Export only specific partitions of the part_emp table from the scott schema at orcl and import them into the ordb database.

Expdp parfile content:
userid=system/password@orcl
dumpfile=partexpdb.dmp
logfile=partexpdb.log
directory=dumplocation
tables=scott.part_emp:part10,scott.part_emp:part20

Impdp parfile content:
If we want to overwrite the exported data in the target database, then we first need to delete the rows with deptno in (10,20) from the part_emp table.

scott@ordb> delete part_emp where deptno=10;
786432 rows deleted.

scott@ordb> delete part_emp where deptno=20;
1310720 rows deleted.

scott@ordb> commit;
Commit complete.

userid=system/password@ordb
dumpfile=partexpdb.dmp
logfile=tabimpdb.log
directory=dumplocation
table_exists_action=append

Scenario 5
Export only tables in scott schema at orcl and import into ordb database.
Expdp parfile content:
userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
include=table
schemas=scott

Impdp parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace

Scenario 6
Export only the rows belonging to departments 10 and 20 in the emp and dept tables from the orcl database.
 Import the dump file into the ordb database. While importing, load only deptno 10 into the target database.
Expdp parfile content:
userid=system/password@orcl
dumpfile=data_filter_expdb.dmp
logfile=data_filter_expdb.log
directory=dumplocation
content=data_only
schemas=scott
include=table:"in('EMP','DEPT')"
query="where deptno in(10,20)"

Impdp parfile content:
userid=system/password@ordb
dumpfile=data_filter_expdb.dmp
logfile=data_filter_impdb.log
directory=dumplocation
schemas=scott
query="where deptno = 10"
table_exists_action=APPEND
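
Note that when QUERY is written without a table name, as in both parfiles above, the WHERE clause is applied to every table in the job. QUERY also accepts a table-qualified form when the filter should apply to one table only; a minimal sketch (the choice of emp here is illustrative):

query=scott.emp:"where deptno in(10,20)"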


Scenario 7
Export the scott schema from the orcl database and split the dump file into 50MB pieces.
Import the dump files into the ordb database.

Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
filesize=50M
schemas=scott
include=table
As per the above expdp parfile, the file schemaexp_split_01.dmp is created first. Once that file reaches 50MB,
the next file, schemaexp_split_02.dmp, is created, and so on. Say the total dump size is 500MB;
then it creates 10 dump files, each 50MB in size.

Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
table_exists_action=replace
remap_tablespace=res:users
exclude=grant

Scenario 8

Export the scott schema from the orcl database and split the dump file into four files. Import the dump files
into the ordb database.

Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
parallel=4
schemas=scott
include=table
As per the above parfile content, four files will be created initially – schemaexp_split_01.dmp,
schemaexp_split_02.dmp, schemaexp_split_03.dmp, schemaexp_split_04.dmp. Notice that every occurrence
of the substitution variable is incremented each time. Since there is no FILESIZE parameter,
no more files will be created.
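
While a parallel job such as this one is running, the job and its worker processes can be watched from another session using the standard dictionary views; a minimal sketch (not tied to this particular scenario):

SQL> select owner_name, job_name, state, degree from dba_datapump_jobs;
SQL> select owner_name, job_name, saddr from dba_datapump_sessions;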

Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
table_exists_action=replace
remap_tablespace=res:users
exclude=grant

Scenario 9
Export the scott schema from the orcl database and spread the dump files across three different locations.
This method is especially useful if you do not have enough space in one file system to hold the complete
expdp output. After the export is successful, import the dump files into the ordb database.

Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
filesize=50M
schemas=scott
include=table
          As per the above expdp parfile content, the dump files are placed in three different locations. Say the
entire expdp dump size is 1500MB; then it creates 30 dump files (each 50MB in size) and
places 10 files in each file system.
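
The parfile above assumes that three directory objects named dump1, dump2 and dump3 already exist and point to the three file systems. A sketch of creating them (the paths are placeholders, not from the original):

SQL> create or replace directory dump1 as '/u01/dumps';
SQL> create or replace directory dump2 as '/u02/dumps';
SQL> create or replace directory dump3 as '/u03/dumps';
SQL> grant read, write on directory dump1 to system;
SQL> grant read, write on directory dump2 to system;
SQL> grant read, write on directory dump3 to system;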

Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
table_exists_action=replace

Scenario 10
         We are on the orcl database server. Now export the ordb data and place the dump file on the orcl database server.
After expdp is successful, import the dump file into the orcl database. When we use network_link, the expdp user
and the source database schema user should have identical privileges. If the privileges are not identical,
then we get the below error.

C:\impexpdp>expdp parfile=networkexp1.par

Export: Release 10.2.0.1.0 – Production on Sunday, 17 May, 2009 12:06:40
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Produc
tion
With the Partitioning, OLAP and Data Mining options
ORA-31631: privileges are required
ORA-39149: cannot link privileged user to non-privileged user

Expdp parfile content:
userid=scott/tiger@orcl
logfile=netwrokexp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
schemas=scott
include=table
network_link=ordb
              As per the above parfile, the expdp utility exports the ordb database data and places the dump file on the orcl server, since we are running expdp on the orcl server. This is basically exporting the data from a remote database.
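
NETWORK_LINK=ordb in the parfile above assumes a database link named ordb that is visible to the exporting user in the orcl database. A sketch of creating such a link there (the connect details and TNS alias are placeholders):

SQL> create database link ordb connect to scott identified by tiger using 'ordb';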


Impdp parfile content:
userid=system/password@orcl
logfile=networkimp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
table_exists_action=replace

Scenario 11

Export the scott schema in orcl and import it into ordb, but do not write a dump file on the server. The expdp and impdp should complete without writing a dump file on the server. In fact we do not need to run an export at all; we can import the data without creating a dump file.
Here we run impdp on the ordb server and it contacts the orcl database, extracts the data and imports it into the ordb database.
If we do not have enough space in the file system to place a dump file, then we can use this option to load the data.
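
As in the previous scenario, NETWORK_LINK=orcl in the parfile below assumes a database link named orcl exists in the ordb database for the importing user; a one-line sketch (connect details are placeholders):

SQL> create database link orcl connect to scott identified by tiger using 'orcl';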

Impdp parfile content:
userid=scott/tiger@ordb
network_link=orcl
logfile=networkimp2.log
directory=dumplocation
table_exists_action=replace

Scenario 12
Expdp the scott schema in orcl and impdp the dump file into the training schema in the ordb database.

Expdp parfile content:
userid=scott/tiger@orcl
logfile=netwrokexp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
schemas=scott
include=table

Impdp parfile content:
userid=system/password@ordb
logfile=networkimp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
table_exists_action=replace
remap_schema=scott:training

Scenario 13
Expdp a table on the orcl database and impdp it into ordb. When we export the data, export only 20 percent of the table data. We use the SAMPLE parameter to accomplish this task.
SAMPLE parameter allows you to export subsets of data by specifying the percentage of data to be sampled and exported.
The sample_percent indicates the probability that a block of rows will be selected as part of the sample.
It does not mean that the database will retrieve exactly that amount of rows from the table.
The value you supply for sample_percent can be anywhere from .000001 up to, but not including, 100.
If no table is specified, then the sample_percent value applies to the entire export job. The SAMPLE parameter
is not valid for network exports.
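
SAMPLE can also be qualified with a schema and table name when the percentage should apply to only one table in the job; a minimal sketch using the part_emp table from this scenario:

SAMPLE=scott.part_emp:20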

Expdp parfile content:
userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
tables=scott.part_emp
SAMPLE=20
             As per the above expdp parfile, only 20 percent of the data in the part_emp table is exported.

Impdp parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace

----------------------------------------------------------------------

We are creating 2 schemas.

1.       User1 – this schema has the partitioned table SALES which needs to be exported.
2.       User2 – the schema into which we are going to import the SALES table with the new name SALES_T.

1.Create users:
[oracle@NVMBD01PSR183 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Fri Dec 19 13:15:44 2014
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create user user1 identified by user1;
User created.

SQL> create user user2 identified by user2;
User created.

SQL> grant connect, resource to user1, user2;
Grant succeeded.

2.Create datapump directory.

SQL> CREATE OR REPLACE DIRECTORY EXPDP_DIR AS '/oracle/data1/expdp';
SQL> GRANT READ, WRITE ON DIRECTORY EXPDP_DIR TO user1, user2;

3.Create partition table SALES.

SQL> conn user1/user1
Connected.

SQL> CREATE TABLE sales
( name varchar2(5),
time_id DATE)
PARTITION BY RANGE (time_id)
(PARTITION sales_AUG VALUES LESS THAN (TO_DATE('01-AUG-2014','dd-MON-yyyy'))
TABLESPACE users,
PARTITION sales_SEP VALUES LESS THAN (TO_DATE('01-SEP-2014','dd-MON-yyyy'))
TABLESPACE users,
PARTITION sales_OCT VALUES LESS THAN (TO_DATE('01-OCT-2014','dd-MON-yyyy'))
TABLESPACE users,
PARTITION sales_NOV VALUES LESS THAN (TO_DATE('01-NOV-2014','dd-MON-yyyy'))
TABLESPACE users,
PARTITION sales_DEV VALUES LESS THAN (TO_DATE('01-DEC-2014','dd-MON-yyyy'))
TABLESPACE users );

SQL> SELECT partition_name FROM user_tab_partitions WHERE table_name = 'SALES';
PARTITION_NAME
——————————
SALES_AUG
SALES_DEV
SALES_NOV
SALES_OCT
SALES_SEP
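
The export output and queries later in this example show one row in each of the SALES_SEP, SALES_OCT and SALES_NOV partitions. The original post does not show how they were loaded; a sketch of inserts that would produce that data (values inferred from the output shown later):

SQL> insert into sales values ('prod1', to_date('01-AUG-2014','dd-MON-yyyy'));
SQL> insert into sales values ('prod2', to_date('01-SEP-2014','dd-MON-yyyy'));
SQL> insert into sales values ('prod3', to_date('01-OCT-2014','dd-MON-yyyy'));
SQL> commit;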

EXPDP:
1. Export entire table including all partitions

[oracle@NVMBD01PSR183 ~]$ expdp user1/user1 directory=EXPDP_DIR dumpfile=sales_table.dmp tables=sales
logfile=sales_table.log

Export: Release 11.2.0.3.0 – Production on Fri Dec 19 11:13:37 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting “USER1”.”SYS_EXPORT_TABLE_01″:  user1/******** directory=EXPDP_DIR dumpfile=sales_table.dmp
tables=sales logfile=sales_table.log
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 24 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported “USER1″.”SALES”:”SALES_NOV”                5.421 KB       1 rows
. . exported “USER1″.”SALES”:”SALES_OCT”                5.421 KB       1 rows
. . exported “USER1″.”SALES”:”SALES_SEP”                5.421 KB       1 rows
. . exported “USER1″.”SALES”:”SALES_AUG”                    0 KB       0 rows
. . exported “USER1″.”SALES”:”SALES_DEV”                    0 KB       0 rows
Master table “USER1”.”SYS_EXPORT_TABLE_01″ successfully loaded/unloaded
******************************************************************************
Dump file set for USER1.SYS_EXPORT_TABLE_01 is:
  /oracle/data1/expdp/sales_table.dmp
Job “USER1”.”SYS_EXPORT_TABLE_01″ successfully completed at 11:13:46

2. Export specific partition of table:

[oracle@NVMBD01PSR183 ~]$ expdp user1/user1 directory=EXPDP_DIR dumpfile=sales_table_partition.dmp
tables=sales:SALES_NOV logfile=sales_table_partition.log

Export: Release 11.2.0.3.0 – Production on Fri Dec 19 12:31:06 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting “USER1”.”SYS_EXPORT_TABLE_01″:  user1/******** directory=EXPDP_DIR dumpfile=sales_table_partition.dmp
tables=sales:SALES_NOV logfile=sales_table_partition.log
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 8 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported “USER1″.”SALES”:”SALES_NOV”                5.421 KB       1 rows
Master table “USER1”.”SYS_EXPORT_TABLE_01″ successfully loaded/unloaded
******************************************************************************
Dump file set for USER1.SYS_EXPORT_TABLE_01 is:
  /oracle/data1/expdp/sales_table_partition.dmp
Job “USER1”.”SYS_EXPORT_TABLE_01″ successfully completed at 12:31:13

IMPDP
Move the dmp file to the target host (ftp, scp, etc.), as sketched below,
or load the data into another schema using remap_schema.
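
In this example the export and the import run on the same host, so no copy is needed. When the target is a different server, the dump file has to be copied into a directory covered by a directory object there first; a sketch (host name and path are placeholders):

$ scp /oracle/data1/expdp/sales_table.dmp oracle@targethost:/oracle/data1/expdp/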

1. Import the entire partitioned table into the new schema with a new name.

[oracle@NVMBD01PSR183 ~]$ impdp user2/user2 directory=EXPDP_DIR dumpfile=sales_table.dmp remap_table=sales:sales_t
remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append

Import: Release 11.2.0.3.0 – Production on Fri Dec 19 11:19:25 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table “USER2”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
Starting “USER2”.”SYS_IMPORT_FULL_01″:  user2/******** directory=EXPDP_DIR dumpfile=sales_table.dmp
remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
Table “USER2″.”SALES_T” exists. Data will be appended to existing table but all dependent metadata will
be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “USER2″.”SALES_T”:”SALES_SEP”               5.421 KB       1 rows
. . imported “USER2″.”SALES_T”:”SALES_AUG”                   0 KB       0 rows
. . imported “USER2″.”SALES_T”:”SALES_DEV”                   0 KB       0 rows
. . imported “USER2″.”SALES_T”:”SALES_OCT”               5.421 KB       1 rows
. . imported “USER2″.”SALES_T”:”SALES_NOV”               5.421 KB       1 rows
Job “USER2”.”SYS_IMPORT_FULL_01″ successfully completed at 11:19:30

Let's check:

SQL> conn user2/user2
Connected.

SQL> select * from sales_t;
NAME  TIME_ID
—– ———
prod1 01-AUG-14
prod2 01-SEP-14
prod3 01-OCT-14


2. Now import single partition only.
[oracle@NVMBD01PSR183 ~]$ impdp user2/user2 directory=EXPDP_DIR dumpfile=sales_table_partition.dmp
remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append

Import: Release 11.2.0.3.0 – Production on Fri Dec 19 13:05:26 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table “USER2”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
Starting “USER2”.”SYS_IMPORT_FULL_01″:  user2/******** directory=EXPDP_DIR dumpfile=sales_table_partition.dmp
remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
Table “USER2″.”SALES_T” exists. Data will be appended to existing table but all dependent metadata
will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “USER2″.”SALES_T”:”SALES_NOV”               5.421 KB       1 rows
Job “USER2”.”SYS_IMPORT_FULL_01″ successfully completed at 13:05:28

Here we have successfully imported a single partition.
If a partition name is specified, it must be the name of a partition or subpartition in the associated table.
Only the specified set of tables, partitions, and their dependent objects are unloaded.
  When you use the partition option (PARTITION_OPTIONS) of Data Pump import, you can choose one of the options below (see the sketch after this list):
   NONE – tables are imported such that they look like those on the system on which the export was created.
   DEPARTITION – partitions are created as individual tables rather than partitions of a partitioned table.
   MERGE – combines all partitions into a single table.
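
For example, a hedged sketch of re-importing the dump so that each partition becomes its own table in user2 (command-line form, analogous to the imports above; with DEPARTITION the resulting tables are named by combining the table and partition names, e.g. SALES_SALES_NOV):

$ impdp user2/user2 directory=EXPDP_DIR dumpfile=sales_table.dmp remap_schema=user1:user2 partition_options=departition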

Oracle provides the option REUSE_DUMPFILES=[Y/N] to avoid the error "ORA-27038: created file already exists".

This option belongs to the Data Pump expdp utility. Normally, when you perform an export using expdp and the dump file is already present in the export directory, it throws the error "ORA-27038: created file already exists". This happens when you want to perform repetitive exports using the same dump file name.
Oracle provides the option REUSE_DUMPFILES=[Y/N] to avoid this error. Set the parameter value to Y to overwrite the existing dump file. By default the option is N. See the examples below.


Normal scenario with file already present in the export directory
$  expdp scott/tiger directory=exp_dir dumpfile=tde.dmp tables=example

Export: Release 11.1.0.7.0 – 64bit Production on Tuesday, 19 July, 2011 1:36:50
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file “/home/oracle/scott/tde.dmp”
ORA-27038: created file already exists
Additional information: 1



Execute the expdp using REUSE_DUMPFILES
$ expdp scott/tiger directory=exp_dir dumpfile=tde.dmp tables=example reuse_dumpfiles=y

Export: Release 11.1.0.7.0 – 64bit Production on Tuesday, 19 July, 2011 1:46:05
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting “SCOTT”.”SYS_EXPORT_TABLE_01″:  scott/******** directory=exp_dir dumpfile=tde.dmp tables=example reuse_dumpfiles=y
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 1.312 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported “SCOTT”.”EXAMPLE”:”EXAMPLE_P1″             441.3 KB   49999 rows
. . exported “SCOTT”.”EXAMPLE”:”EXAMPLE_P2″             408.3 KB   45121 rows
Master table “SCOTT”.”SYS_EXPORT_TABLE_01″ successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
  /home/oracle/scott/tde.dmp
Job “SCOTT”.”SYS_EXPORT_TABLE_01″ successfully completed at 01:49:16