RMAN interview questions and answers – 4 in Oracle

1) What is RMAN and how to configure it?

  RMAN is an Oracle Database client
  It performs backup and recovery tasks on your databases and automates administration of your backup strategies
  It greatly simplifies the DBA's job by managing the backing up, restoring, and recovering of the production database's files
  This tool integrates with sessions running on an Oracle database to perform a range of backup and recovery activities, including maintaining an RMAN repository of historical data about backups
 There is no additional installation required for this tool
  It is installed by default with the Oracle database installation
  The RMAN environment consists of the utilities and databases that play a role in backing up your data
  We can access RMAN through the command line or through Oracle Enterprise Manager
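
For example, a minimal sketch of invoking RMAN from the command line (orcl is a hypothetical SID; OS authentication is assumed):

$ export ORACLE_SID=orcl
$ rman target /
RMAN> backup database;
RMAN> exit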

2) Why use RMAN?
     RMAN gives you access to several backup and recovery techniques and features not available with user-managed backup and recovery. The most noteworthy are the following:

Automatic specification of files to include in a backup
RMAN establishes the names and locations of all files to be backed up

Maintenance of a backup repository
   Backups are recorded in the control file, which is the main repository of RMAN metadata
  Additionally, you can store this metadata in a recovery catalog

Incremental backups
An incremental backup stores only blocks changed since a previous backup
Thus, incremental backups are more compact and speed up recovery by reducing the need to apply redo during datafile media recovery
Unused block compression
  With unused block compression, RMAN can skip data blocks that have never been used

Block media recovery
 We can repair a datafile with only a small number of corrupt data blocks without taking it offline or restoring it from backup

Binary compression
A binary compression mechanism integrated into Oracle Database reduces the size of backups

Encrypted backups
  RMAN uses backup encryption capabilities integrated into Oracle Database to store backup sets in an encrypted format

Corrupt block detection
RMAN checks for block corruption before backing up the blocks

3) How does RMAN work?
 Backup and recovery operations for a target database are managed by the RMAN client
 RMAN uses the target database control file to gather metadata about the target database and to store information about its own operations
  The RMAN client itself does not perform backup, restore, or recovery operations
  When you connect the RMAN client to a target database, RMAN allocates server sessions on the target instance and directs them to perform the operations
  The work of backup and recovery is performed by server sessions running on the  target database
  A channel establishes a connection from the RMAN client to a target or auxiliary database instance by starting a server session on the instance
  The channel reads data into memory, processes it, and writes it to the output device
  When you take a database backup using RMAN, you need to connect to the target database using the RMAN client
  The RMAN client can use Oracle Net to connect to a target database, so it can be located on any host that is connected to the target host through Oracle Net
  For a backup you need to allocate an explicit or implicit channel to the target database
An RMAN channel represents one stream of data to a device, and corresponds to one database server session.
 These sessions dynamically collect file information from the target database control file before taking the backup or while restoring
  For example, if you run BACKUP DATABASE from RMAN, it will first get all the datafile information from the controlfile
  Then it will divide the datafiles among the allocated channels (roughly equal amounts of work, based on datafile sizes)
  Then it takes the backup in 2 steps

Step 1:
The channel will read all the blocks of the entire datafile to find all the formatted blocks to back up

Note:
  RMAN does not back up unformatted blocks

Step 2:
  In the second step, it backs up the formatted blocks

Example:
This is a major advantage of using RMAN, as it backs up only the required blocks
  Let's say a datafile is 100 MB in size but contains only 10 MB of useful data and the remaining 90 MB is free; RMAN will then back up only those 10 MB

4) What O/S and Oracle user privileges are required to use RMAN?
  RMAN always connects to the target or auxiliary database using the SYSDBA privilege
  Its connections to a database are specified and authenticated in the same way as SQL*Plus connections to a database
  The O/S user should be part of the DBA group
  For remote connections it needs password file authentication
  Target database should have the initialization parameter REMOTE_LOGIN_PASSWORDFILE set to EXCLUSIVE or SHARED
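
For example, a hedged sketch of a remote connection that relies on the password file (PROD is a hypothetical Oracle Net alias), plus a check of the parameter on the target:

$ rman target sys/password@PROD
SQL> show parameter remote_login_passwordfile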

5) RMAN terminology:

A target database:
  An Oracle database to which RMAN is connected with the TARGET keyword
  A target database is a database on which RMAN is performing backup and recovery operations
  RMAN always maintains metadata about its operations on a database in the control file of the database

A recovery Catalog:
  A separate database schema used to record RMAN activity against one or more target databases
  A recovery catalog preserves RMAN repository metadata if the control file is lost, making it much easier to restore and recover following the loss of the control file
  The database may overwrite older records in the control file, but RMAN maintains records forever in the catalog unless deleted by the user

Backup sets:
RMAN can store backup data in a logical structure called a backup set, which is the smallest unit of an RMAN backup
  One backup set contains one or more datafiles, a section of a datafile, or archived logs

Backup Piece:
 A backup set contains one or more binary files in an RMAN-specific format
  This file is known as a backup piece
  Each backup piece is a single output file
  The size of a backup piece can be restricted; if the size is not restricted, the backup set will comprise one backup piece
  Backup piece size should be restricted to no larger than the maximum file size that your filesystem will support

Image copies:
An image copy is a copy of a single file (datafile, archivelog, or controlfile)
  It is very similar to an O/S copy of the file
  It is not a backupset or a backup piece
  No compression is performed
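
For example, a minimal sketch of taking image copies (the datafile number and paths are illustrative only):

RMAN> backup as copy datafile 4 format '/backup/users01_copy.dbf';
RMAN> backup as copy current controlfile format '/backup/control_copy.ctl';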

Snapshot Controlfile:
When RMAN needs to resynchronize from a read-consistent version of the control file, it creates a temporary snapshot control file
  The default name for the snapshot control file is port-specific

Database Incarnation:
Whenever you perform incomplete recovery or perform recovery using a backup control file, you must reset the online redo logs when you open the database
  The new version of the reset database is called a new incarnation
  The RESET DATABASE command directs RMAN to create a new database incarnation record in the recovery catalog
  This new incarnation record indicates the current incarnation

6) What is RMAN Configuration and how to configure it?

The RMAN backup and recovery environment is preconfigured for each target database
  The configuration is persistent and applies to all subsequent operations on this target database, even if you exit and restart RMAN
  RMAN configured settings can specify backup devices, configure a connection to a backup device, policies affecting backup strategy, the encryption algorithm, the snapshot controlfile location, and others
  By default, a few configuration settings are already in place when you log in to RMAN
  You can customize them as per your requirement
  At any time you can check the current settings by using the SHOW ALL command
  CONFIGURE command is used to create persistent settings in the RMAN environment, which apply to all subsequent operations, even if you exit and restart RMAN
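
For example, a few commonly used persistent settings (all values here are illustrative only):

RMAN> configure retention policy to recovery window of 7 days;
RMAN> configure controlfile autobackup on;
RMAN> configure device type disk parallelism 2;
RMAN> configure channel device type disk format '/backup/%U';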

7) How to check RMAN configuration?
RMAN> show all;

8) How to reset to default configuration?
To reset the configuration to its defaults, connect to the target database from SQL*Plus and run:
SQL> connect sys@target_database as sysdba
SQL> execute dbms_backup_restore.resetConfig;
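
Alternatively, individual persistent settings can be returned to their defaults from within RMAN itself using CONFIGURE ... CLEAR, for example:

RMAN> configure retention policy clear;
RMAN> configure controlfile autobackup clear;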

RMAN Catalog Database

9) What is the catalog database and how to configure it?
 This is a separate database which contains the catalog schema
  You can use the target database itself as the catalog database, but that is not at all recommended
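
A minimal sketch of setting one up (the user name, password, and TNS alias are hypothetical, and the rcat_ts tablespace is assumed to exist already):

SQL> create user rcat identified by rcat default tablespace rcat_ts quota unlimited on rcat_ts;
SQL> grant recovery_catalog_owner to rcat;

$ rman catalog rcat/rcat@catdb target /
RMAN> create catalog;
RMAN> register database;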

10) How many catalog databases can I have?

You can have multiple catalog databases for the same target database
  But you can connect to only one catalog database at a time via RMAN. It is not recommended to have multiple catalog databases

11) Is it mandatory to use a catalog database?
       No! It's optional

12) What is the advantage of a catalog database?
 The catalog database is a secondary store of backup metadata
  It's very useful in case you lose the current controlfile, as all the backup information is in the catalog schema
  Secondly, older backup information ages out of the controlfile depending on CONTROL_FILE_RECORD_KEEP_TIME
  The RMAN catalog database maintains the history of the backup data

13) What is the difference between the catalog database and the catalog schema?
The catalog database is like any other database; it is the database that contains the RMAN catalog user's schema

14) What happens if the catalog database is lost?

Since the catalog database is optional, there is no direct effect from losing it
  Create a new catalog database and register the target database with the newly created catalog; all the backup information from the target database's current controlfile will be updated to the catalog schema
  For any backup information that has already aged out of the target database controlfile, you need to manually catalog those backup pieces

RMAN backup:

15) What are the database files that RMAN can back up?
 RMAN can back up the controlfile, datafiles, archived logs, standby database controlfile, and spfile

16) What are the database files that RMAN cannot back up?
RMAN cannot back up the pfile, online redo logs, network configuration files, password files, external tables, or the contents of the Oracle home

17) Can I have archivelog and datafile backups in a single backupset?

    No. We cannot put datafiles and archived logs in the same backupset

18) Can I have datafile and controlfile backups in a single backup set?
 Yes
  If controlfile autobackup is not ON, then RMAN backs up the controlfile along with datafile 1 whenever you back up the database or the SYSTEM tablespace

19) Can I regulate the size of backup pieces and backup sets?
 Yes!
  You can set the maximum size of a backupset as well as of a backup piece
  By default one RMAN channel creates a single backupset with one backup piece in it
  You can use the MAXPIECESIZE channel parameter to set limits on the size of backup pieces
  You can also use the MAXSETSIZE parameter on the BACKUP and CONFIGURE commands to set a limit for the size of backup sets
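
For example, a hedged sketch of both limits (the sizes are illustrative only):

RMAN> configure channel device type disk maxpiecesize 2g;
RMAN> configure maxsetsize to 10g;
RMAN> backup database maxsetsize 10g;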

20) What is the difference between a backup set backup and an image copy backup?

 A backup set is an RMAN-specific proprietary format, whereas an image copy is a bit-for-bit copy of a file
  By default, RMAN creates backup sets

21) What are RMAN consistent and inconsistent backups?

 A consistent backup occurs when the database is in a consistent state
  That means a backup of the database taken after a shutdown immediate, shutdown normal, or shutdown transactional
  If the database is shut down with the abort option, it is not a consistent backup
A backup when the database is up and running is called an inconsistent backup
  When a database is restored from an inconsistent backup, Oracle must perform media recovery before the database can be opened, applying any pending changes from the redo logs
  You cannot take an inconsistent backup when the database is in NOARCHIVELOG mode

22) Can I take an RMAN backup when the database is down?

   No!
   You can take an RMAN backup only when the target database is open or mounted
   That's because RMAN keeps the backup metadata in the controlfile
   The controlfile is accessible only in open or mount mode

23) Do I need to place the database in begin backup mode while taking an RMAN inconsistent backup?

       RMAN does not require extra logging or backup mode because it knows the format of data blocks
       RMAN is guaranteed not to back up fractured blocks
       No extra redo is generated during RMAN backup

24) Can I compress RMAN backups?

RMAN supports binary compression of backup sets
The supported algorithms are BZIP2 (the default) and ZLIB
It's not recommended to compress RMAN backups using any other OS or third-party utility

Note:
RMAN compressed backup with BZIP2 provides great compression but is CPU intensive
Using ZLIB compression requires the Oracle Database 11g Advanced Compression Option and is only supported with an 11g database
The feature is not backward compatible with 10g databases
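
For example, a minimal sketch of taking a compressed backup, either per command or persistently:

RMAN> backup as compressed backupset database;
RMAN> configure device type disk backup type to compressed backupset;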

25) Can I encrypt RMAN backup?

RMAN supports backup encryption for backup sets
You can use wallet-based transparent encryption, password-based encryption, or both
You can use the CONFIGURE ENCRYPTION command to configure persistent transparent encryption
Use the SET ENCRYPTION command at the RMAN session level to specify password-based encryption
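
For example, a hedged sketch of password-based encryption (the password is illustrative only):

RMAN> set encryption on identified by backup_pwd only;
RMAN> backup database;

To restore on another system, supply the same password first:

RMAN> set decryption identified by backup_pwd;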

26) Can RMAN take backups to tape?

Yes!
We can use RMAN for tape backups
But RMAN cannot write directly to tape
You need to have third-party Media Management Software installed
Oracle has published an API specification to which Media Management Vendors who are members of Oracle's Backup Solutions Partner program have access
Media Management Vendors (MMVs) then write an interface library which the Oracle server uses to write to and read from tape
Starting with Oracle 10g R2, Oracle has its own media management software for database backup to tape, called Oracle Secure Backup (OSB)

27) How does RMAN interact with the media manager?

Before performing a backup or restore to a media manager, you must allocate one or more channels, or configure default channels, for use with the media manager to handle the communication with it
RMAN does not issue specific commands to load, label, or unload tapes
When backing up, RMAN gives the media manager a stream of bytes and associates a unique name with this stream
When RMAN needs to restore the backup, it asks the media manager to retrieve the byte stream
All details of how and where that stream is stored are handled entirely by the media manager
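
For example, a hedged sketch of allocating a tape channel (the PARMS environment string is entirely vendor-specific; the value shown is an illustrative NetBackup-style example):

RMAN> run {
  allocate channel t1 device type sbt
    parms 'ENV=(NB_ORA_SERV=mediaserver)';
  backup database;
}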

28) What is Proxy copy backup to tape?

Proxy copy is functionality, supported by a few media managers, in which the media manager handles the entire data movement between the datafiles and the backup devices
Such products may use technologies such as high-speed connections between storage and media subsystems to reduce the load on the primary database server
RMAN provides a list of files requiring backup or restore to the media manager, which in turn makes all decisions regarding how and when to move the data

29) What is Oracle Secure backup?

Oracle Secure Backup is a media manager provided by Oracle that provides reliable and secure data protection through file system backup to tape
All major tape drives and tape libraries in SAN, Gigabit Ethernet, and SCSI environments are supported

30) Can I restore or duplicate my previous version database using a later version of Oracle?
For example, is it possible to restore a 9i backup while using the 10g executables?

It is possible to use the 10.2 RMAN executable to restore a 9.2 database (likewise 11.2 to 11.1, or 11.1 to 10.2, etc.), even if the restored datafiles will be stored in ASM
RMAN is designed so that a higher release is able to restore a lower release, but it is strongly suggested you use only the same version

31) Can I restore or duplicate between two different patchset levels?

Just as you can restore between different Oracle versions, you can also restore between two different patchset levels; depending on the direction, open the database with
SQL> alter database open resetlogs upgrade;
OR
SQL> alter database open resetlogs downgrade;

32) Can I restore or duplicate between two different versions of the same operating system?
For example, can I restore my 9.2.0.1.0 RMAN backup taken against a host running Solaris 9 to a different machine where 9.2.0.1.0 is installed but where that host is running Solaris 10?

 If the same Oracle Server installation CDs (media pack) can be used to install 9.2.0.1.0 on Solaris 9 and Solaris 10, this type of restore is supportable

33) Is it possible to restore or duplicate when the bit level (32 bit or 64 bit) of Oracle does not match?
For example, is it possible to restore or duplicate my 9.2 64-bit database to a 9.2 32-bit installation?
 It is preferable to keep the same bit version when performing a restore/recovery
  However, excluding the use of the DUPLICATE command, using the same operating system platform should allow for a restore/recovery between bit levels (32-bit or 64-bit) of Oracle
  Note, this may be specific to the particular operating system, and any problems with this should be reported to Oracle Support
  If you will be running the 64-bit database against the 32-bit binary files or vice versa, after the recovery has ended the database bit version must be converted using utlirp.sql
If you do not run utlirp.sql you will see errors including but not limited to:
ORA-06553: PLS-801: INTERNAL ERROR [56319]

34) Can I restore or duplicate my RMAN backup between two different platforms such as Solaris to Linux?
In general, you cannot restore or duplicate between two different platforms

35) What are the corruption types?

Datafile Block Corruption – Physical/Logical
Table/Index Inconsistency
Extents Inconsistencies
Data Dictionary Inconsistencies

Scenarios:
Goal: How to identify all the corrupted segments in the database reported by RMAN?

Solution:

Step 1: Identify the corrupt blocks (Datafile Block Corruption – Intra block corruption)
RMAN> backup validate check logical database;

To make it faster, it can be configured to use PARALLELISM with multiple channels:

RMAN> run {
allocate channel d1 type disk;
allocate channel d2 type disk;
allocate channel d3 type disk;
allocate channel d4 type disk;
backup validate check logical database;
}

Step 2: Using the view v$database_block_corruption:
SQL> select * from v$database_block_corruption;

FILE#  BLOCK#  BLOCKS  CORRUPTION_CHANGE#  CORRUPTION_TYPE
-----  ------  ------  ------------------  ---------------
    6      10       1       8183236781662  LOGICAL
    6      42       1                   0  FRACTURED
    6      34       2                   0  CHECKSUM
    6      50       1       8183236781952  LOGICAL
    6      26       4                   0  FRACTURED

5 rows selected.

Datafile Block Corruption – Intra block corruption
This refers to intra-block corruptions that may cause different errors like ORA-1578, ORA-8103, ORA-1410, ORA-600, etc.
  Oracle classifies the corruptions as Physical and Logical
To identify both physical and logical block corruptions, use the CHECK LOGICAL option
It checks the complete database for both kinds of corruption without actually doing a backup

Solution 1:

$ rman target /
RMAN> backup validate check logical database;    (checks only; no backup is written)

$ rman target /
RMAN> backup check logical database;             (checks while taking an actual backup)

Solution 2:
Check the view V$DATABASE_BLOCK_CORRUPTION to identify the block corruptions detected by RMAN

Solution 3: DBVerify – Identify Datafile Block Corruptions
DBVERIFY identifies physical and logical intra-block corruptions by default
DBVERIFY cannot be run for the whole database in a single command
It does not need a database connection either

dbv file=users01.dbf blocksize=8192    (the file name and block size are examples)

RMAN Vs DBVerify – Datafile Intra Block Corruption

When the CHECK LOGICAL option is used, RMAN does exactly the same checks as DBV does for intra-block corruption.
RMAN can be run with PARALLELISM using multiple channels, making it faster than DBV, which cannot be run in parallel in a single command.
DBV checks empty blocks. In 10g, RMAN may not check blocks in free extents when locally managed tablespaces are used; in 11g, RMAN checks both free and used extents.
Both DBV and RMAN (11g) can check a range of blocks. RMAN: VALIDATE DATAFILE 1 BLOCK 10 TO 100; DBV: start=10 end=100.
RMAN keeps corruption information in the control file (v$database_block_corruption, v$backup_corruption); DBV does not.
RMAN may not report the corruption details (i.e. what exactly is corrupted in a block reported as a LOGICAL corrupt block); DBV reports the corruption details on screen or in a log file.
DBV can scan blocks with a higher SCN than a given SCN.
DBV does not need a connection to the database.

Identify TABLE / INDEX Inconsistency
A table/index inconsistency is when an entry in the table does not exist in the index, or vice versa. The common errors are ORA-8102, ORA-600 [kdsgrp1], and ORA-1499 from "analyze validate structure cascade".
The tool to identify table/index inconsistencies is the ANALYZE command:
analyze table <table_name> validate structure cascade;

When an inconsistency is identified, the above ANALYZE command will produce error ORA-1499 and a trace file.

35) What happens when a tablespace/database is kept in begin backup mode?

One danger in making online backups is the possibility of inconsistent data within a block
For example, assume that you are backing up block 100 in datafile users.dbf
Also assume that the copy utility reads the entire block while DBWR is in the middle of updating the block
In this case, the copy utility may read the old data in the top half of the block and the new data in the bottom half of the block
The result is called a fractured block, meaning that the data contained in this block is not consistent at a given SCN

Therefore Oracle internally manages the consistency as below:
The first time a block is changed in a datafile that is in hot backup mode, the entire block is written to the redo log files, not just the changed bytes
Normally only the changed bytes (a redo vector) is written
In hot backup mode, the entire block is logged the first time
This is because you can get into a situation where the process copying the datafile and DBWR are working on the same block simultaneously
Let's say they are, and the OS blocking read factor is 512 bytes (the OS reads 512 bytes from disk at a time). The backup program goes to read an 8k Oracle block. The OS gives it 4k. Meanwhile, DBWR has asked to rewrite this block, and the OS schedules the DBWR write to occur right now. The entire 8k block is rewritten. The backup program starts running again (multi-tasking OS here) and reads the last 4k of the block. The backup program has now gotten a fractured block: the head and tail are from two points in time.
We cannot deal with that during recovery. Hence, we log the entire block image so that during recovery, this block is totally rewritten from redo and is at least consistent with itself. We can recover it from there.

2.  The datafile headers which contain the SCN of the last completed checkpoint are not updated while a file is in hot backup mode. This lets the recovery process understand what archive redo log files might be needed to fully recover this file.

RMAN interview questions and answers – 3 in Oracle

2. Difference between using recovery catalog and control file?
When a new incarnation happens, the old backup information in the control file is lost; it is preserved in the recovery catalog.
In recovery catalog, we can store scripts.
Recovery catalog is central and can have information of many databases.

3. Can we use same target database as catalog?
No. The recovery catalog should not reside in the target database (the database to be backed up), because if the target database were lost, the catalog needed to recover it would be lost along with it.

4. How do you know how much of an RMAN task has been completed?
By querying v$rman_status or v$session_longops

5. From where do the LIST & REPORT commands get their input?
From the RMAN repository, i.e. the target database control file or, if one is in use, the recovery catalog.

6. Command to delete archive logs older than 7days?
RMAN> delete archivelog all completed before 'sysdate-7';

7. How many days of backups does RMAN store by default?
The default retention policy is REDUNDANCY 1 (one backup of each file), not a number of days; the backup records themselves are kept in the control file for at least CONTROL_FILE_RECORD_KEEP_TIME days (7 by default).

8. What is the use of crosscheck command in RMAN?
Crosscheck is useful to check whether the catalog information is consistent with what actually exists at the OS level.

9. What are the differences between the crosscheck and validate commands?
CROSSCHECK verifies that the backups recorded in the RMAN repository still exist on disk or tape and updates their status (AVAILABLE or EXPIRED); VALIDATE actually reads backups or datafiles and checks their contents for corruption.

10. Which one is better, a differential (incremental) backup or a cumulative (incremental) backup?
A differential backup backs up all blocks changed after the most recent incremental backup at level 1 or 0
A cumulative backup backs up all blocks changed after the most recent incremental backup at level 0
Differential backups are smaller and quicker to take; cumulative backups are larger but speed up recovery, because fewer incremental backups need to be applied

11. What is Level 0, Level 1 backup?
A level 0 incremental backup, which is the base for subsequent incremental backups, copies all blocks containing data, backing the datafile up into a backup set just as a full backup would. A level 1 incremental backup can be either of the following types:
A differential backup, which backs up all blocks changed after the most recent incremental backup at level 1 or 0
A cumulative backup, which backs up all blocks changed after the most recent incremental backup at level 0
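
For example, a minimal sketch of a typical strategy built from these levels (the schedule is illustrative only):

RMAN> backup incremental level 0 database;              (e.g. weekly base backup)
RMAN> backup incremental level 1 database;              (daily differential)
RMAN> backup incremental level 1 cumulative database;   (cumulative alternative)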

12. Can we perform a level 1 backup without a level 0 backup?
If no level 0 backup is available, then the behavior depends on the compatibility mode setting. If compatibility is 10.0.0 or higher, RMAN copies all blocks changed since the file was created and stores the result as a level 1 backup. In other words, the SCN at the time the incremental backup is taken is the file creation SCN.

13.  Will RMAN put the database/tablespace/datafile in backup mode?
Nope.

14. What is the snapshot control file?
A temporary, read-consistent copy of the control file that RMAN creates for operations such as resynchronizing with the recovery catalog.

15. What is the difference between backup set and backup piece?
Backup set is logical and backup piece is physical.

16. What is the RMAN command to take a backup for creating a standby database?
RMAN> duplicate target database for standby …

17. How to do cloning by using RMAN?
RMAN> duplicate target database …

18. You lose one datafile and the DB is running in ARCHIVELOG mode. You have a full database backup that is a week/day old, but no backup of this (newly created) datafile. How do you restore/recover the file?
Create the datafile and recover it:
SQL> alter database create datafile '…path..' size n;
RMAN> recover datafile file_id;

19. What is obsolete backup & expired backup?
A status of “expired” means that the backup piece or backup set is not found in the backup destination.
A status of “obsolete” means the backup piece is still available, but it is no longer needed. The backup piece is no longer needed since RMAN has been configured to no longer need this piece after so many days have elapsed, or so many backups have been performed.
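
For example, a minimal housekeeping sketch built on these two states:

RMAN> crosscheck backup;        (marks missing pieces as EXPIRED)
RMAN> delete expired backup;    (removes the records of missing pieces)
RMAN> report obsolete;          (lists backups no longer needed by the retention policy)
RMAN> delete obsolete;          (deletes them)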

20. What is the difference between hot backup & RMAN backup?
For a hot (user-managed) backup, we have to put the database in begin backup mode and then take the backup.
RMAN does not put the database in backup mode.

21. How to put manual/user-managed backup in RMAN (recovery catalog)?
By using the CATALOG command.
RMAN> catalog start with '/tmp/backup.ctl';

22. What are the new features in Oracle 11g RMAN?
Notable additions include active database duplication (DUPLICATE FROM ACTIVE DATABASE), multisection backups of large datafiles (SECTION SIZE), virtual private catalogs, improved block media recovery (RECOVER ... BLOCK), and ZLIB backup compression with the Advanced Compression Option.

23. What is the difference between an auxiliary channel and a maintenance channel?
An auxiliary channel is a connection to an auxiliary instance, used for operations such as DUPLICATE and tablespace point-in-time recovery. A maintenance channel is allocated with ALLOCATE CHANNEL FOR MAINTENANCE and is used for repository maintenance commands such as CROSSCHECK and DELETE.

RMAN interview questions and answers – 2 in Oracle

             
8) Assume I have a FULL backup of users01.dbf containing an employees table that occupies 1000 blocks of data.
If I truncate the employees table and then take an incremental level 1 backup of the users tablespace, will RMAN include the 1000 blocks that once contained data in the incremental backup?

No. The blocks themselves were not written to; the only changes made by the truncate were to the data dictionary (and file header), so RMAN will not see them as changed blocks, since they were not changed.
9) Where should the catalog be created?

The recovery catalog to be used by RMAN should be created in a separate database, not in the target database.
The reason is that the target database will be shut down while datafiles are restored.

8) How many times does Oracle ask before dropping a catalog?

The default is two times: once for the actual command, and once for confirmation.
9) What are the various reports available with RMAN?

RMAN> list backup;
RMAN> list archivelog all;
   
10) What is the use of the snapshot controlfile in terms of RMAN backup?

RMAN uses the snapshot controlfile as a way to get a read-consistent copy of the controlfile;
it uses this to do things like RESYNC the catalog (otherwise the controlfile is a 'moving target', constantly changing, and RMAN would get blocked and block the database)
11) Can RMAN write to disk and tape in parallel? Is it possible?

RMAN currently won't write to tape directly; you need a media manager for that.
As for disk and tape in parallel: not as far as I know; you would run two backups separately (not sure).
Maintaining duplexed backup copies may achieve the desired result.
                                                                                                                                       
12) What is the difference between the DELETE INPUT and DELETE ALL options in backup?

Generally speaking, LOG_ARCHIVE_DEST_n can point to two disk locations where we archive the files.
When a command is issued through RMAN to back up archivelogs, it uses one of the locations to back up the data. When we specify DELETE INPUT, only the location that was backed up gets deleted;
if we specify DELETE ALL, the copies in all log_archive_dest_n locations get deleted.
DELETE ALL applies only to archived logs.
RMAN> delete expired archivelog all;
13) Is it possible to restore a backupset (actually, backup pieces) from a location different from where RMAN has recorded them to be?

With 9.2 and earlier it is not possible. As a workaround you would have to create a link at the location where the backup was originally located;
then, when restoring, RMAN will think everything is the same as it was.
Starting in 10.1 it is possible to catalog the backup pieces in their new location into the
controlfile and recovery catalog. This means they are available for restoration by RMAN without creating the link.
14) What is the difference between REPORT OBSOLETE and REPORT OBSOLETE ORPHAN?

REPORT OBSOLETE reports backups that are unusable according to the user's retention policy, whereas REPORT OBSOLETE ORPHAN reports backups that are unusable because they belong to incarnations of the database that are not direct ancestors of the current incarnation.

15) How to increase the size of the redo logs

1. Add new log file groups with the new size
ALTER DATABASE ADD LOGFILE GROUP…
2. Switch with ALTER SYSTEM SWITCH LOGFILE until a new log file group is in state CURRENT
3. Now you can drop the old log file groups
ALTER DATABASE DROP LOGFILE GROUP…
                                                                 
16) What is the difference between ALTER DATABASE RECOVER and the SQL*Plus RECOVER command?

ALTER DATABASE RECOVER is useful when you as a user want to control the recovery, whereas the SQL*Plus RECOVER command is useful when we prefer automated recovery.

Difference between the views V$BACKUP_SET and RC_BACKUP_SET in respect of RMAN:
V$BACKUP_SET is used to check backup details when we are not maintaining an RMAN catalog, that is, the backup information is stored in the controlfile, whereas RC_BACKUP_SET is used when we are using a catalog as a central repository to list the backup information.
17) Can I cancel a script from inside the script? How do I cancel a select on a Windows client?
Use Ctrl-C
18) How to find the number of Oracle instances running on a Windows machine

C:\>net start | find "OracleService"
19) How to create an init.ora from the spfile when the database is down?

The same way as when the database is up:
SQL> connect sys/oracle as sysdba
SQL> shutdown;
SQL> create pfile from spfile;
SQL> create spfile from pfile;
                                                                       
20) When you shut down the database, how does Oracle maintain the user session, i.e. of sysdba?
You still have your dedicated server:
sys@ORA920> !ps -auxww | grep ora920
sys@ORA920> shutdown
sys@ORA920> !ps -auxww | grep ora920
You can see you still have your dedicated server. When you connect as sysdba, you fire up a dedicated server; that is where the session lives.

21) What is the ORA-00204 error? What will you do in that case?

A disk I/O failure was detected on reading the control file.
Basically you have to check whether the control file is available, the permissions on the control file are right,
and the spfile/init.ora points to the right location. If all checks are done and you still get the error, then overlay a multiplexed control file on the corrupted one.
Let us say you have three control files, control01.ctl, control02.ctl, and control03.ctl, and you are getting errors on control03.ctl; then just copy control01.ctl over control03.ctl and you should be all set.
In order to issue ALTER DATABASE BACKUP CONTROLFILE TO TRACE,
the database should be mounted; in our case it is not mounted, so the only other options available are to restore the control file from backup or to copy a multiplexed control file over the bad one.

22) Why do we need the SCOPE=BOTH clause?

BOTH indicates that the change is made in memory and in the server parameter file. The new setting takes effect immediately and persists after the database is shut down and started up again. If a server parameter file was used to start up the database, then BOTH is the default. If a parameter file was used to start up the database, then MEMORY is the default, as well as the only scope you can specify.                                                                                                                                                                                            

23) How to know the number of CPUs on Oracle

Login as SYSDBA
SQL> show parameter cpu_count

NAME       TYPE     VALUE
---------  -------  -----
cpu_count  integer  2
             
24) Could you please tell me the possible reasons for spfile corruption, and how to recover?

It should not become corrupt under normal circumstances; if it did, it would be a bug or a failure of some component in your system, such as a file system error.
You can easily recover, however, from:
a) Your alert log, which has the non-default parameters in it from your last restart
b) Your backups, which should include the spfile
c) strings spfile.ora > init$ORACLE_SID.ora, and then edit the resulting file to clean it up

25) How will you check whether flashback is enabled or not?
 
Select flashback_on from v$database;                
                                                           
26) Revoking the CREATE TABLE privilege from a user gives ORA-01952. What is the issue? What to do in that case?
SQL> revoke create table from Pay_payment_master;
ORA-01952: system privileges not granted to 'PAY_PAYMENT_MASTER'
This is because the privilege was not assigned to this user directly; rather, it was assigned through the role "CONNECT".
If you remove the CONNECT role from the user, then the user will not be able to create a session (connect) to the database.
So basically we have to revoke the CONNECT role and grant the user the privileges it needs other than CREATE TABLE.

27) What kind of information is stored in UNDO segments?

Only the before image of data is stored in the UNDO segments. If a transaction is rolled back, information from UNDO is applied to restore the original data. UNDO is never multiplexed.

28) How to remove an Oracle service in a Windows environment?

We can add or remove an Oracle service using oradim, which is available in ORACLE_HOME/bin
C:\> oradim -delete -sid <SID>
or
C:\> oradim -delete -srvc <service_name>
29) Why ORA-28000: the account is locked? What will you do in that case?

The Oracle 10g default is to lock an account after 10 bad password attempts, giving ORA-28000: the account is locked.
The immediate fix is to unlock the account; if required, you can also raise the default limit on login attempts:
SQL> alter user <username> account unlock;
SQL> alter profile default limit FAILED_LOGIN_ATTEMPTS unlimited;
30) How to reduce the physical reads shown in the statistics?
You may need to increase the buffer cache.
Consider this situation: the buffer cache of the database is 300 MB and one SQL statement shows 100 physical reads.
After increasing the cache to 400 MB, the same SQL shows 0 physical reads.
31) How many redo groups are required for an Oracle DB?
At least 2 redo log groups are required for an Oracle database to work normally.
32) My spfile is corrupt and now I cannot start my database running on my laptop. Is there a way to rebuild the spfile?
If you are on Unix, then:
$ cd $ORACLE_HOME/dbs
$ strings spfile$ORACLE_SID.ora > temp_pfile.ora
Edit temp_pfile.ora, clean it up if there is anything "wrong" with it, and then:
SQL> startup pfile=temp_pfile.ora
SQL> create spfile from pfile;
SQL> shutdown
SQL> startup
On Windows, just try editing the spfile and create a pfile from it, then save it (do not try this on the prod DB first; check it on a test DB, as it can be dangerous),
and do the same; or, if you have a problem, you can start the DB from the command line using SQL*Plus: create a pfile and do a manual startup (start the Oracle service, then use SQL*Plus to start the database).

33) What is a fractured block? What happens when you restore a file containing a fractured block?
A block in which the header and footer are not consistent at a given SCN.
In a user-managed backup, an operating system utility can back up a datafile at the same time that DBWR is updating the file.
It is possible for the operating system utility to read a block in a half-updated (split) state, so that the block copied to the backup media is updated in its first half,
while the second half contains older data; that is, the top of the block is written at one point in time while the bottom is written at another. In this case, the block is fractured.
For non-RMAN backups, the ALTER TABLESPACE … BEGIN BACKUP or ALTER DATABASE BEGIN BACKUP command is the solution to the fractured block problem.
When a tablespace is in backup mode and a change is made to a data block, the database logs a copy of the entire block image before the change, so that the database can reconstruct this block if media recovery finds that it was fractured.
If you restore a file containing a fractured block and Oracle reads the block, then the block is considered corrupt.
34) You recreated the control file using ALTER DATABASE BACKUP CONTROLFILE TO TRACE rather than ALTER DATABASE BACKUP CONTROLFILE TO 'location'; what have you lost in that case?
Using BACKUP CONTROLFILE TO TRACE you lose all of the RMAN backup information, whereas with ALTER DATABASE BACKUP CONTROLFILE TO 'D:\Backup\control01.ctl'
all backup information is retained, because that is a binary control file backup.
                                                     
35) If a backup is taken after a "shutdown abort" command, what kind of backup is that?

It is an inconsistent backup.
If you are in NOARCHIVELOG mode, ensure that the database is shut down cleanly before the backup: issue the shutdown immediate command, or, after a shutdown abort, issue
startup force
followed by
shutdown immediate;
36) What is split brain?

In a RAC environment, server nodes communicate with each other using the high-speed private interconnect
network. A split-brain situation happens when all the links of the private interconnect fail to respond to
each other, but the instances are still up and running. Each instance then thinks that the other nodes/instances are
dead and that it should take over ownership.
In a split-brain situation, instances independently access the data and modify the same blocks, and the database
ends up with changed data being overwritten, which could lead to data corruption. To avoid this, various
algorithms are implemented to handle the split-brain scenario.
In RAC, the IMR (Instance Membership Recovery) service is one of the efficient algorithms
used to detect and resolve the split-brain syndrome. When one instance fails to communicate with the other
instances, or when one instance becomes inactive for any reason and is unable to issue the control file
heartbeat, the split brain is detected and the detecting instance will evict the failed instance from the
database. This process is called node eviction.

37) What does the #!/bin/ksh at the beginning of a shell script do? Why should it be there?

Ans: On the first line of an interpreter script, the "#!" is followed by the name of a program which should be used to interpret the contents of the file.
For instance, if the first line contains "#!/bin/ksh", then the contents of the file are executed as a Korn shell script.

38) What command is used to find the status of Oracle 10g Clusterware (CRS) and the various components it manages

(ONS, VIP, listener, instances, etc.)?

Ans: $ crs_stat -t

39) How would you find the interconnect IP address from any node within an Oracle 10g RAC configuration?

Using the oifcfg command.
Use the oifcfg -help command to display online help for OIFCFG. The elements of OIFCFG commands, some of which are
optional depending on the command, are:

*nodename: name of the Oracle Clusterware node as listed in the output from the olsnodes command
*if_name: name by which the interface is configured in the system
*subnet: subnet address of the interface
*if_type: type of interface: public or cluster_interconnect

40) What is the purpose of the voting disk in Oracle 10g Clusterware?

The voting disk records node membership information.
Oracle Clusterware uses the voting disk to determine which instances are members of a cluster.
The voting disk must reside on a shared disk. For high availability, Oracle recommends that you have a minimum of three voting disks.
If you configure a single voting disk, then you should use external mirroring to provide redundancy.
You can have up to 32 voting disks in your cluster.

41) Data Guard Protection Modes:

In some situations, a business cannot afford to lose data at any cost.
In other situations, some applications require maximum database performance and can tolerate a potential loss of data.
 Data Guard provides three distinct modes of data protection to satisfy these varied requirements:

*Maximum Protection—> This mode offers the highest level of data protection.
Data is synchronously transmitted to the standby database from the primary database and transactions are not committed on the primary database unless the redo data is available on at least one standby database configured in this mode.
 If the last standby database configured in this mode becomes unavailable, processing stops on the primary database.
This mode ensures no-data-loss, even in the event of multiple failures.

*Maximum Availability—> This mode is similar to the maximum protection mode, including zero data loss.
However, if a standby database becomes unavailable (for example, because of network connectivity problems),
processing continues on the primary database.
When the fault is corrected, the standby database is automatically resynchronized with the primary database.
This mode achieves no-data-loss in the event of a single failure (e.g. network failure, primary site failure . . .)

*Maximum Performance—> This mode offers slightly less data protection on the primary database, but higher performance than maximum availability mode.
 In this mode, as the primary database processes transactions, redo data is asynchronously shipped to the standby database.
The commit operation of the primary database does not wait for the standby database to acknowledge receipt of redo data before completing write operations on the primary database.
If any standby destination becomes unavailable, processing continues on the primary database and there is little effect on primary database performance.

42) Connection hanging? What are the possibilities?

Possibilities for Oracle hanging include:
External issues: the network being down, Kerberos security issues, SSO, or a firewall issue can cause an Oracle connection to hang.
One way to test this is to set sqlnet.authentication_services=(none) in your sqlnet.ora file and retry connecting.
Listener not running: start by checking the listener (check lsnrctl stat). Also, see my notes on diagnosing Oracle network connectivity issues.
No RAM: over-allocation of server resources, usually RAM, whereby there is not enough RAM to spawn another connection to Oracle.
Contention: it is not uncommon for an end-user session to "hang" when it is trying to grab a shared data resource that is held by another end-user.
The end-user often calls the help desk trying to understand why they cannot complete their transaction, and the Oracle professional must quickly identify the source of the contention.

43) What is Partition Pruning ?

Partition Pruning: Oracle optimizes SQL statements to mark the partitions or subpartitions that need to be accessed and eliminates (prunes) unnecessary partitions or subpartitions from access by those SQL statements. In other words, partition pruning is the skipping of unnecessary index and data partitions or subpartitions in a query.

44) FAN in RAC.

With Oracle RAC in place, database client applications can leverage a number of high availability features including:
Fast Connection Failover (FCF): Allows a client application to be immediately notified of a planned or unplanned database service outage by subscribing to Fast Application Notification (FAN) events.
Run-time Connection Load-Balancing: Uses the Oracle RAC Load Balancing Advisory events to distribute work appropriately across the cluster nodes and to quickly react to changes in cluster configuration, overworked nodes, or hangs.
Connection Affinity (11g recommended/required): Routes connections to the same database instance based on previous connections to an instance to limit performance impacts of switching between instances.
RAC supports web session and transaction-based affinity for different client scenarios.

45) Why extra standby redo log group?

Determine the appropriate number of standby redo log file groups.
Minimally, the configuration should have one more standby redo log file group
than the number of online redo log file groups on the primary database….
(maximum number of logfiles for each thread + 1) * maximum number of threads
Using this equation reduces the likelihood that the primary instance’s log
writer (LGWR) process will be blocked because a standby redo log file cannot be
allocated on the standby database. For example, if the primary database has 2
log files for each thread and 2 threads, then 6 standby redo log file groups
are needed on the standby database.”

I think it says that if you have groups #1 and #2 on primary and #1, #2 on
standby, and if LGWR on primary just finished #1, switched to #2, and now it
needs to switch to #1 again because #2 just became full, the standby must catch
up, otherwise the primary LGWR cannot reuse #1 because the standby is still
archiving the standby’s #1. Now, if you have the extra #3 on standby, the
standby in this case can start to use #3 while its #1 is being archived. That
way, the primary can reuse the primary’s #1 without delay.
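On the standby, the extra groups are added with ALTER DATABASE ADD STANDBY LOGFILE; the group number, path and size below are illustrative (the size should match the online redo logs):

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
     ('/u01/oradata/stby/srl04.log') SIZE 200M;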

46) how to take voting disk backup ?

On 10gR2 RAC, the voting disk backup can be done online with dd, but in 11g you cannot use dd online (use the Oracle commands instead).

First, as the root user, stop Oracle Clusterware (with the crsctl stop crs command) on all nodes if you want to add/restore a voting disk.

Then, determine the current voting disk by issuing the following command:

crsctl query css votedisk

Finally, issue the dd or ocopy command to back up the voting disk, as appropriate.

Syntax for backing up voting disks:
On Linux or UNIX systems:
dd if=voting_disk_name of=backup_file_name
where,
voting_disk_name is the name of the active voting disk
backup_file_name is the name of the file to which we want to back up the voting disk contents
On Windows systems, use the ocopy command:
ocopy voting_disk_name backup_file_name

47) What is the Oracle Recommendation for backing up voting disk?

Oracle recommends using the dd command to back up the voting disk with a minimum block size of 4KB.
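A sketch using the recommended block size (device and file names are illustrative):

dd if=/dev/raw/raw1 of=/u02/backup/votedisk1.bak bs=4k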

48) How do you restore a voting disk?

To restore the backup of your voting disk, issue the dd command on Linux and UNIX systems or the ocopy command on Windows systems.
On Linux or UNIX systems:
dd if=backup_file_name of=voting_disk_name
On Windows systems, use the ocopy command:
ocopy backup_file_name voting_disk_name
where,
backup_file_name is the name of the voting disk backup file
voting_disk_name is the name of the active voting disk

49) How can we add and remove multiple voting disks?

If we have multiple voting disks, then we can remove the voting disks and add them back into our environment using the following commands,
where path is the complete path of the location where the voting disk resides:
crsctl delete css votedisk path
crsctl add css votedisk path

50) How do we stop Oracle Clusterware?When do we stop it?

Before making any modification to the voting disk, as root user,
stop Oracle Clusterware using the crsctl stop crs command on all nodes.

51) How do we add voting disk?

To add a voting disk, issue the following command as the root user,
replacing the path variable with the fully qualified path name for the voting disk we want to add:
crsctl add css votedisk path -force

52) How do we move voting disks?

To move a voting disk, issue the following commands as the root user,
 replacing the path variable with the fully qualified path name for the voting disk we want to move:
crsctl delete css votedisk path -force
crsctl add css votedisk path -force

53) How do we remove voting disks?

To remove a voting disk,
issue the following command as the root user, replacing the path variable with the fully qualified path name for the voting disk we want to remove:
crsctl delete css votedisk path -force

54) What should we do after modifying voting disks?

After modifying the voting disk,
restart Oracle Clusterware using the crsctl start crs command on all nodes, and verify the voting disk location using the following command:
crsctl query css votedisk

55) When can we use -force option?

If our cluster is down, then we can include the -force option to modify the voting disk configuration without interacting with active Oracle Clusterware daemons.
However, using the -force option while any cluster node is active may corrupt our configuration.

56) How to find Cluster Interconnect IP address from Oracle Database ?

The easiest way to find the cluster interconnect is to view the "hosts" file, which is located under:
UNIX: /etc
Windows: C:\WINDOWS\system32\drivers\etc
Following are the ways to find the cluster interconnect through Oracle database:
1) Query X$KSXPIA
The following query provides the interconnect IP address registered with Oracle database:
SQL> select IP_KSXPIA from x$ksxpia where PUB_KSXPIA = 'N';

IP_KSXPIA
----------------
192.168.10.11
This query should be run on all instances to find the private interconnect IP address used on their respective nodes.

2) Query GV$CLUSTER_INTERCONNECTS view
Querying GV$CLUSTER_INTERCONNECTS view lists the interconnect used by all the participating instances of the RAC database.
SQL> select INST_ID, IP_ADDRESS from GV$CLUSTER_INTERCONNECTS;

   INST_ID IP_ADDRESS
---------- ----------------
         1 192.168.10.11
         2 192.168.10.12

57)  How to Identify master node in RAC ?

Grep the crsd log file:
# /u1/app/../crsd>grep MASTER crsd.log | tail -1

(or)

cssd> grep -i "master node" ocssd.log | tail -1

You can also use the V$GES_RESOURCE view to identify the master node.

58) how to monitor block transfer interconnects nodes in rac ?

The V$CACHE_TRANSFER and V$FILE_CACHE_TRANSFER views are used to examine RAC statistics.
The types of blocks that use the cluster interconnect in a RAC environment are monitored with the V$CACHE_TRANSFER series of views:

v$cache_transfer: This view shows the types and classes of blocks that Oracle transfers over the cluster interconnect on a per-object basis.
 The forced_reads and forced_writes columns can be used to determine the types of objects the RAC instances are sharing.
Values in the forced_writes column show how often a certain block type is transferred out of a local buffer cache due to the current version being requested by another instance.
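A sketch of such a query (ordering by forced writes to surface the most contended objects):

SQL> SELECT name, forced_reads, forced_writes
     FROM v$cache_transfer
     ORDER BY forced_writes DESC;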

59) what is global cache service monitoring?

Global Cache Services (GCS) Monitoring

The use of the GCS relative to the number of buffer cache reads, or logical reads can be estimated
by dividing the sum of GCS requests (global cache gets   + global cache converts   + global cache cr blocks received   + global cache current blocks received )
by the number of logical reads (consistent gets   + db block gets ) for a given statistics collection interval.
A global cache service request is made in Oracle when a user attempts to access a buffer cache to read or modify a data block and the block is not in the local cache.
The inevitable result is a remote cache read, a disk read, or a change of access privileges.
These are logical read related. Logical reads form a superset of the global cache service operations.
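The estimate can be written as a query against V$SYSSTAT, using the statistic names given above (these are the older names; recent releases rename them to 'gc ...'):

SQL> SELECT SUM(CASE WHEN name IN ('global cache gets', 'global cache converts',
                                   'global cache cr blocks received',
                                   'global cache current blocks received')
                     THEN value END)
          / SUM(CASE WHEN name IN ('consistent gets', 'db block gets')
                     THEN value END) AS gcs_to_logical_reads
     FROM v$sysstat;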

rman interview questions and answers – 1 in oracle

1.What is RMAN?

Recovery Manager (RMAN) is a utility that can manage your entire Oracle backup and recovery activities.

2.What is the difference between using recovery catalog and control file?

 When new incarnation happens, the old backup information in control file will be lost. It will be preserved in recovery catalog.

In recovery catalog we can store scripts.

 Recovery catalog is central and can have information of many databases.
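For example, a stored script can be created in the catalog and executed later (the script name and contents are illustrative):

RMAN> CREATE SCRIPT full_backup { BACKUP DATABASE PLUS ARCHIVELOG; }
RMAN> RUN { EXECUTE SCRIPT full_backup; }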

3.Can we use same target database as catalog?

No. The recovery catalog should not reside in the target database (the database being backed up), because the catalog would then be lost along with
the very database it is supposed to help recover, and the database cannot be recovered in the mounted state.

4.How do you know that how much RMAN task has been completed?

   By querying v$rman_status or v$session_longops
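A commonly used sketch against v$session_longops to estimate progress:

SQL> SELECT sid, serial#, opname,
            ROUND(sofar/totalwork*100, 2) AS pct_done
     FROM v$session_longops
     WHERE opname LIKE 'RMAN%' AND totalwork > 0;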

5.From where list & report commands will get input?

Both commands query V$ views and recovery catalog views, for example V$BACKUP_FILES or recovery catalog views such as
RC_DATAFILE_COPY or RC_ARCHIVED_LOG.
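Typical examples of each:

RMAN> LIST BACKUP SUMMARY;
RMAN> REPORT OBSOLETE;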

6.Command to delete archive logs older than 7days?

RMAN> delete archivelog all completed before 'sysdate-7';

7.How many times does oracle ask before dropping a catalog?

The default is two times: one for the actual command, the other for confirmation.

8.How to view the current defaults for the database.

RMAN> show all;

9.What is the use of crosscheck command in RMAN?

Crosscheck is useful to check whether the catalog information is consistent with what actually exists at the OS level. This command only updates
repository records with the status of the backups.

e.g. If a user removes archived logs from disk with an operating system command, the repository still indicates that the logs are on
disk, when in fact they are not.
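A typical sequence, after which the expired records can be cleaned out:

RMAN> CROSSCHECK BACKUP;
RMAN> CROSSCHECK ARCHIVELOG ALL;
RMAN> DELETE EXPIRED BACKUP;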

10.What are the differences between crosscheck and validate commands?

Validate command is to examine a backup set and report whether it can be restored. RMAN scans all of the backup pieces in the specified
backup sets and looks at the checksum to verify that the contents are intact so that backup can be successfully restored if necessary.

Crosscheck command is to verify the status of backups and copies recorded in the RMAN repository against media such as disk or tape.
The crosscheck command only processes files created on the same device type as the channel running crosscheck.

11.Which one is good, differential (incremental) backup or cumulative (incremental) backup?

A differential backup, which backs up all blocks changed after the most recent incremental backup at level 1 or 0

RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;

A cumulative backup, which backs up all blocks changed after the most recent incremental backup at level 0

RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;

Cumulative backups are preferable to differential backups when recovery time is more important than disk space,
because during recovery each differential backup must be applied in succession. Use cumulative incremental backups
instead of differential ones if enough disk space is available to store them.

This is command for taking Level 0 backup.

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;

12.What is the difference between backup set and backup piece?

Backup set is logical and backup piece is physical.

13.RMAN command to create a standby database?

RMAN> duplicate target database for standby;

14.You lose one datafile and the DB is running in ARCHIVELOG mode. You have a full database backup that is 1 week/day old and don't have a
backup of this (newly created) datafile. How do you restore/recover the file?

Create the datafile and then recover it:

SQL> alter database create datafile '/u01/app/oracle/oradata/xyz.dbf' size 2G;

RMAN> recover datafile file_id;

15.What is obsolete backup & expired backup?

A status of “expired” means that the backup piece or backup set is not found in the backup destination.

A status of “obsolete” means the backup piece is still available, but it is no longer needed. The backup piece is no longer needed since
 RMAN has been configured to no longer need this piece after so many days have elapsed, or so many backups have been performed.

16.What is the difference between hot backup & RMAN backup?

For hot backup, we have to put database in begin backup mode, then take backup.
RMAN won’t put database in backup mode.

17.How to put manual/user-managed backup in RMAN (recovery catalog)?

By using catalog command.

RMAN> CATALOG START WITH ‘/tmp/backup.ctl’;

18. What are the Architectural components of RMAN?

RMAN executable
Server process
Channels
Target database
Recovery catalog database (optional)
Media management layer (optional)
Backups, backup sets and backup pieces

19.What are channels?

A channel is an RMAN server process started when there is a need to communicate with an I/O device, such as a disk or a tape.
A channel is what reads and writes RMAN backup files. It is through the allocation of channels that you govern I/O characteristics:

Type of I/O device being read or written to, either a disk or an sbt_tape
Number of processes simultaneously accessing an I/O device
Maximum size of files created on I/O devices
Maximum rate at which database files are read
Maximum number of files open at a time
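These characteristics map to channel options; a sketch with illustrative values:

RMAN> RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK MAXPIECESIZE 2G RATE 50M;
  BACKUP DATABASE;
}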

20.Why is the catalog optional?

Because RMAN manages backup and recovery operations, it requires a place to store necessary information about the database.
RMAN always stores this information in the target database control file. You can also store RMAN metadata in a recovery catalog schema
contained in a separate database. The recovery catalog schema must be stored in a database other than the target database.

21.What is a Backup set?

A logical grouping of backup files — the backup pieces — that are created when you issue an RMAN backup command. A backup set is RMAN’s
 name for a collection of files associated with a backup. A backup set is composed of one or more backup pieces.

22.What are the benefits of using RMAN?

Incremental backups that only copy data blocks that have changed since the last backup.
Tablespaces are not put in backup mode, thus there is no extra redo log generation during online backups.
Detection of corrupt blocks during backups.
Parallelization of I/O operations.
Automatic logging of all backup and recovery operations.
Built-in reporting and listing commands.

23.What are the various reports available with RMAN

RMAN>list backup;

RMAN> list archivelog all;

24.In catalog database, if some of the blocks are corrupted due to system crash, How will you recover?

Using the RMAN BLOCKRECOVER command.

25.How do you enable the autobackup for the controlfile using RMAN?

Issue command at RMAN prompt.

RMAN> configure controlfile autobackup on;

Also we can configure controlfile backup format.

RMAN> configure controlfile autobackup format for device type disk to

2> '$HOME/BACKUP/RMAN/%F.bkp';

26.How do you identify what are the all the target databases that are being backed-up with RMAN database?

There is no view that identifies whether a target database is backed up or not. The only option is to connect to the target database and
run LIST BACKUP; this gives the backup information with date and time.

27.How do you identify the block corruption in RMAN database? How do you fix it?

Using the V$DATABASE_BLOCK_CORRUPTION view you can find which blocks are corrupted, and recover them with:

RMAN> blockrecover datafile <file#> block <block#>;

First check whether any blocks are corrupted by using this command:

SQL> select file#, block# from v$database_block_corruption;

FILE#      BLOCK#
---------- ----------
2          507

Here block 507 of datafile 2 is corrupted. Connect to RMAN and recover it:

RMAN> blockrecover datafile 2 block 507;

The above command recovers block 507. You can also repair every block recorded in the corruption list:

RMAN> blockrecover corruption list;

28.How do you clone the database using RMAN? Give brief steps. When do you use the crosscheck command?

Crosscheck is used to check whether backup pieces, proxy copies or disk copies still exist.

Two commands are available in RMAN to clone a database:

1) Duplicate

2) Restore

29.List some of the RMAN catalog view names which contain the catalog information?

RC_DATABASE_INCARNATION
RC_BACKUP_COPY_DETAILS

RC_BACKUP_CORRUPTION

RC_BACKUP_DATAFILE_SUMMARY

30.How do you install the RMAN recovery catalog?

Steps to be followed:

1) Create a connection string (TNS alias) for the catalog database.

2) In the catalog database, create a new user (or use an existing user) and grant that user the RECOVERY_CATALOG_OWNER privilege.

3) Log in to RMAN with the connection string:

a) export ORACLE_SID=<target SID>

b) rman target / catalog <user>/<password>@<connection string>

4) rman> create catalog;

5) rman> register database;

31.What is the difference between physical and logical backups?

In Oracle, a logical backup is one taken using either the traditional Export/Import or the newer Data Pump, whereas a physical backup is
taken by backing up the physical OS files that make up the database.

32.What is RAID? What is RAID0? What is RAID1? What is RAID 10?

RAID: redundant array of independent disks

RAID0: concatenation and striping

RAID1: mirroring

RAID10: mirrored pairs that are then striped (a combination of RAID1 and RAID0)

33.How to enable Fast Incremental Backup to backup only those data blocks that have changed?

SQL> ALTER DATABASE enable BLOCK CHANGE TRACKING;
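You can also name the tracking file explicitly and check its status (the file path is illustrative):

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
     USING FILE '/u01/oradata/orcl/bct.chg';
SQL> SELECT status, filename FROM v$block_change_tracking;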

34.How do you set the flash recovery area?

SQL> ALTER SYSTEM SET db_recovery_file_dest = ‘/u10/oradata/school’;
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 100G;

35.What is auxiliary channel in RMAN? When do you need this?

An auxiliary channel is a link to auxiliary instance. If you do not have automatic channels configured, then
before issuing the DUPLICATE command, manually allocate at least one auxiliary channel within the same RUN command.
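A sketch of manual allocation inside a RUN block (the channel and database names are illustrative):

RMAN> RUN {
  ALLOCATE AUXILIARY CHANNEL aux1 DEVICE TYPE DISK;
  DUPLICATE TARGET DATABASE TO dupdb;
}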

36.How do you use the V$RECOVERY_FILE_DEST view to display information regarding the flash recovery area?

SQL> SELECT name, space_limit, space_used, space_reclaimable, number_of_files FROM v$recovery_file_dest;

37.How can you display warning messages?

SQL> SELECT object_type, message_type, message_level, reason, suggested_action FROM dba_outstanding_alerts;

38.How do you backup the entire database?

RMAN> BACKUP DATABASE;

39.How do you backup individual tablespaces?

RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;

RMAN> BACKUP TABLESPACE system;

40.How do you backup datafiles and control files?

RMAN> BACKUP DATAFILE 3;

RMAN> BACKUP CURRENT CONTROLFILE;
41. What is the difference between backing up the current control file and backing up a control file copy?

If you back up the "current control file" you back up the control file which is currently open by the instance, whereas if you back up a
"control file copy" you back up a copy of the control file created either with the SQL command "alter database backup controlfile to ..." or with the
RMAN command "copy current controlfile to ...".
In other words, the control file copy is not the current control file. BACKUP CURRENT CONTROLFILE creates a backup set containing the control file
and you don't have to give a FILENAME, whereas BACKUP CONTROLFILE COPY creates a backup set from a copy of the control file,
and you have to give the FILENAME.

42. How much overhead is there in running the BACKUP VALIDATE DATABASE and RESTORE VALIDATE DATABASE commands to check for block corruption using RMAN?
Can I run these commands anytime?

Both commands only read and verify; they change nothing. BACKUP VALIDATE reads and checks the datafiles without producing any backup pieces, and
RESTORE VALIDATE reads and checks the existing backups without restoring anything, so apart from the I/O they generate they can be run at any time.

43. Is there a way to force RMAN to use backups once they are marked obsolete?

Marking a backup obsolete is just reporting; the backups are still there, and still usable, until you delete them.
                                                                       
44. Can I use the same snapshot controlfile to back up multiple databases (one after another) running on the same server?

The snapshot controlfile is only used temporarily, like a scratch file. Only one RMAN session can access it at any time, so
sharing it would tend to serialize your backups.

45. Why does Oracle not keep RMAN info after recreating the controlfile?

When you create the controlfile from scratch, how do you expect CREATE CONTROLFILE to "make up" the missing data?
It is like dropping and recreating a table: recreated from scratch, its contents are naturally gone.
Use an RMAN recovery catalog to deal with this situation.
                                       
46. What is the advantage of using PIPE in RMAN backups? In what circumstances would one use PIPE to backup and restore?

It lets third parties (anyone, really) build an alternative interface to RMAN, as it permits anyone
who can connect to an Oracle instance to control RMAN programmatically.
                                                                                                                                                                                                                       
47. How to turn the debug feature on in RMAN?

run {
allocate channel c1 type disk;
debug on;
}
RMAN> list backup of database;

Now you will see debug output. You can always turn debug off by issuing:

RMAN> debug off;

Schema Refresh in oracle

 Schema Refresh steps:
 =====================

 1. Take an export (expdp) of the schema in the source database (ctsp):
 expdp system/manager directory=expdp_dir1 dumpfile=ctsuser.dmp logfile=ctsuser.log schemas=raja

 Note: Don't give a semicolon (;) at the end of the above command.

 2. Move the dump file to the target database (ctsd) using scp. If both databases run on the same server, this step is not required.

 scp ctsuser.dmp oracle@:/u02/app/oracle

 3. Create the new user in the target database (if it already exists, drop and recreate it):

 select username from dba_users;
 drop user raja cascade;
 create user raja identified by raja;

 4. Before doing the import (impdp), check the used space of the schema (user) in the source database and the free space in the target database;
 the target database tablespaces should have free space greater than or equal to the space used in the source database tablespaces.

 Source:
 select tablespace_name, sum(bytes/1024/1024) from dba_segments where owner='RAJA' group by tablespace_name;

 Target:
 select tablespace_name,sum(bytes/1024/1024) from dba_free_space group by tablespace_name;

5. Start the import (impdp) in the target database.

impdp system/manager directory=expdp_dir1 dumpfile=ctsuser.dmp logfile=ctsuser_impdp.log remap_schema=raja:raja

6. Once completed, compare the object counts between the source and target databases:

 select object_type, count(*) from dba_objects where owner='RAJA' and status='VALID' group by object_type;

Prerequisites before Export and Import Process (Data Pump)

1.Make sure you have set the ORACLE_HOME variable in your environment (yours may vary from mine):
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1

2.Create a directory on your system where we will put the Oracle Data Pump (dump) file. As for me, I want to save the dump file into /home/oracle/dpump_dir:
mkdir /home/oracle/dpump_dir

3.Next we need to create the "directory" object from SQL*Plus. You must be logged in as SYSDBA (dbora is the SID name on my system):
$ORACLE_HOME/bin/sqlplus sys@dbora as sysdba

4.After logging in to SQL*Plus, execute these commands:
create directory dpump_dir as '/home/oracle/dpump_dir';
grant read,write on directory dpump_dir to dbuser;

dbuser is the username that we will use it for export and import process.

5.We also need to give dbuser the privileges to create any directory and to export/import the full database:
grant create any directory to dbuser;
grant EXP_FULL_DATABASE to dbuser;
grant IMP_FULL_DATABASE to dbuser;

Oracle Expdp / Impdp Utility Theory,Examples, Practical ,Errors,Solutions,Real Time Scenarios As Well As Imp/Exp Comparison.

  Data pump is a new feature in Oracle 10g that provides fast parallel data load. With direct path and parallel execution, data pump is several times faster than the traditional exp/imp. Traditional exp/imp runs on the client side, while impdp/expdp runs on the server side, so we have much more control over expdp/impdp compared to traditional exp/imp. Compared to exp/imp, data pump startup time is longer because it has to set up the jobs, queues, and master table. Also, at the end of the export operation the master table data is written to the dump file set, and at the beginning of the import job the master table is located and loaded in the schema of the user.
Following are the processes involved in a data pump operation:
Client Process : This process is initiated by the client utility and makes calls to the data pump API. Once the data pump job is initiated, this process is not necessary for the progress of the job.
Shadow Process : When a client logs in to the database, a foreground process is created.
It services the client's data pump API requests. This process creates the master table and creates the
Advanced Queuing queues used for communication. Once the client process ends, the shadow process also goes away.
Master Control Process : The MCP controls the execution of the data pump job. There is one MCP per job.
The MCP divides the data pump job into various metadata and data load or
unload jobs and hands them over to the worker processes.
Worker Process : The MCP creates worker processes based on the value of the PARALLEL parameter.
The worker processes perform the tasks requested by the MCP.
Advantage of Data pump
1. We can perform export in parallel. It can also write to multiple files on different disks.
(Specify parameters PARALLEL=2 and the two directory names with file specification
DUMPFILE=ddir1:/file1.dmp, DDIR2:/file2.dmp)
2. Has ability to attach and detach from job, monitor the job progress remotely.
3. Has more option to filter metadata objects. Ex, EXCLUDE, INCLUDE
4. ESTIMATE_ONLY option can be used to estimate disk space requirements before performing the job
5. Data can be exported from remote database by using Database link
6. Explicit DB version can be specified, so only supported object types are exported.
7. During impdp, we can change the target file names, schema, and tablespace.
Ex, REMAP_SCHEMA, REMAP_DATAFILES, REMAP_TABLESPACE
8. Has the option to filter data rows during impdp. Traditional exp/imp,
we have this filter option only in exp. But here we have filter option on both impdp, expdp.
9. Data can be imported from one DB to another without writing to dump file, using NETWORK_LINK parameter.
10. Data access methods are decided automatically. In traditional exp/imp, we specify the value for the DIRECT parameter.
Here, where direct path cannot be used, conventional path is used automatically.
11. Job status can be queried directly from data dictionary(For example, dba_datapump_jobs, dba_datapump_sessions etc)
Exp & Expdp common parameters: These below parameters exists in both traditional exp and expdp utility.
FILESIZE
FLASHBACK_SCN
FLASHBACK_TIME
FULL
HELP
PARFILE
QUERY
TABLES
TABLESPACES
TRANSPORT_TABLESPACES(exp value is Y/N, expdp value is name of the tablespace)
Comparing exp & expdp parameters: These below parameters are equivalent parameters between exp & expdp.
Exp and corresponding Expdp parameters…
FEEDBACK => STATUS
FILE => DUMPFILE
LOG => LOGFILE
OWNER => SCHEMAS
TTS_FULL_CHECK => TRANSPORT_FULL_CHECK
New parameters in expdp Utility
ATTACH Attach the client session to existing data pump jobs
CONTENT Specify what to export(ALL, DATA_ONLY, METADATA_ONLY)
DIRECTORY Location to write the dump file and log file.
ESTIMATE Show how much disk space each table in the export job consumes.
ESTIMATE_ONLY It estimate the space, but does not perform export
EXCLUDE List of objects to be excluded
INCLUDE List of objects to be included
JOB_NAME Name of the export job
KEEP_MASTER Specify Y not to drop the master table after export
NETWORK_LINK Specify dblink to export from remote database
NOLOGFILE Specify Y if you do not want to create log file
PARALLEL Specify the maximum number of threads for the export job
VERSION DB objects that are incompatible with the specified version will not be exported.
ENCRYPTION_PASSWORD If a table column is encrypted and no password is specified, the column is written as clear
text in the dump file set. We can define any string as the password for this parameter.
COMPRESSION Specifies whether to compress metadata before writing to the dump file set.
The default is METADATA_ONLY. We have two values(METADATA_ONLY,NONE). We can use NONE
if we want to disable during the expdp.
SAMPLE – Allows you to specify a percentage of data to be sampled and unloaded from the source database.
The sample_percent indicates the probability that a block of rows will be selected as part of the sample.
Imp & Impdp common parameters: These below parameters exist in both traditional imp and impdp utility.
FULL
HELP
PARFILE
QUERY
SKIP_UNUSABLE_INDEXES
TABLES
TABLESPACES
Comparing imp & impdp parameters: These below parameters are equivalent parameters between imp & impdp.
imp and corresponding impdp parameters…
DATAFILES => TRANSPORT_DATAFILES
DESTROY =>REUSE_DATAFILES
FEEDBACK =>STATUS
FILE =>DUMPFILE
FROMUSER =>SCHEMAS, REMAP_SCHEMA
IGNORE =>TABLE_EXISTS_ACTION(SKIP,APPEND,TRUNCATE,REPLACE)
INDEXFILE, SHOW=>SQLFILE
LOG =>LOGFILE
TOUSER =>REMAP_SCHEMA
New parameters in impdp Utility
FLASHBACK_SCN Performs import operation that is consistent with the SCN specified from the source database.
Valid only when NETWORK_LINK parameter is used.
FLASHBACK_TIME Similar to FLASHBACK_SCN, but oracle finds the SCN close to the time specified.
NETWORK_LINK Performs import directly from a source database using database link name specified in the parameter.
The dump file will not be created on the server when we use this parameter.
To get a consistent export from the source database,
we can use the FLASHBACK_SCN or FLASHBACK_TIME parameters.
These two parameters are only valid when we use NETWORK_LINK parameter.
REMAP_DATAFILE Changes name of the source DB data file to a different name in the target.
REMAP_SCHEMA Loads objects to a different target schema name.
REMAP_TABLESPACE Changes name of the source tablespace to a different name in the target.
TRANSFORM We can specify that the storage clause should not be generated in the DDL for import.
This is useful if the storage characteristics of the source and target database are different.
The valid values are SEGMENT_ATTRIBUTES, STORAGE. STORAGE removes the storage clause from the CREATE statement DDL,
whereas SEGMENT_ATTRIBUTES removes physical attributes, tablespace, logging, and storage attributes.
TRANSFORM = name:boolean_value[:object_type], where boolean_value is Y or N.
For instance, TRANSFORM=storage:N:table
ENCRYPTION_PASSWORD It is required on an import operation if an encryption password was specified on the export operation.
CONTENT, INCLUDE, EXCLUDE are same as expdp utilities.
Prerequisite for expdp/impdp:
Set up the dump location in the database.
system@orcl> create directory dumplocation
2 as 'c:/dumplocation';
Directory created.
system@orcl> grant read,write on directory dumplocation to scott;
Grant succeeded.
Let us experiment with the expdp & impdp utilities in different scenarios. We have two databases, orcl and ordb.
All the scenarios below were tested in Oracle 10g R2.
Scenario 1 :
Export the whole orcl database.
Export Parfile content:
userid=system/password@orcl
dumpfile=expfulldp.dmp
logfile=expfulldp.log
full=y
directory=dumplocation
Scenario 2
Export the scott schema from orcl and import into ordb database.
While importing, exclude some objects(sequence,view,package,cluster,table).
Load the objects which came from RES tablespace into USERS tablespace in target database.
Export Parfile content:
userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
schemas=scott
Import parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
remap_tablespace=res:users
exclude=sequence,view,package,cluster,table:"in('LOAD_EXT')"
Scenario 3
Export the emp table from scott schema at orcl instance and import into ordb instance.
Expdp parfile content:
userid=system/password@orcl
logfile=tableexpdb.log
directory=dumplocation
tables=scott.part_emp
dumpfile=tableexpdb.dmp
Impdp parfile content:
userid=system/password@ordb
dumpfile=tableexpdb.dmp
logfile=tabimpdb.log
directory=dumplocation
table_exists_action=REPLACE
Scenario 4
Export only specific partition in emp table from scott schema at orcl and import into ordb database.
Expdp parfile content:
userid=system/password@orcl
dumpfile=partexpdb.dmp
logfile=partexpdb.log
directory=dumplocation
tables=scott.part_emp:part10,scott.part_emp:part20
Impdp parfile content:
If we want to overwrite the exported data in target database, then we need to delete emp table for deptno in(10,20).
scott@ordb> delete part_emp where deptno=10;
786432 rows deleted.
scott@ordb> delete part_emp where deptno=20;
1310720 rows deleted.
scott@ordb> commit;
Commit complete.
userid=system/password@ordb
dumpfile=partexpdb.dmp
logfile=tabimpdb.log
directory=dumplocation
table_exists_action=append
Scenario 5
Export only tables in scott schema at orcl and import into ordb database.
Expdp parfile content:
userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
include=table
schemas=scott
Impdp parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
Scenario 6
Export only rows belonging to department 10 and 20 in emp and dept table from orcl database.
Import the dump file in ordb database. While importing, load only deptno 10 in target database.
Expdp parfile content:
userid=system/password@orcl
dumpfile=data_filter_expdb.dmp
logfile=data_filter_expdb.log
directory=dumplocation
content=data_only
schemas=scott
include=table:"in('EMP','DEPT')"
query="where deptno in(10,20)"
Impdp parfile content:
userid=system/password@ordb
dumpfile=data_filter_expdb.dmp
logfile=data_filter_impdb.log
directory=dumplocation
schemas=scott
query="where deptno = 10"
table_exists_action=APPEND
Scenario 7
Export the scott schema from the orcl database and split the dump file into 50M pieces.
Import the dump file into the ordb database.
Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
filesize=50M
schemas=scott
include=table
As per the above expdp parfile, the schemaexp_split_01.dmp file is created first. Once that file reaches 50MB,
the next file, schemaexp_split_02.dmp, is created. If the total dump size is 500MB,
ten dump files are created, each 50MB in size.
Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
table_exists_action=replace
remap_tablespace=res:users
exclude=grant
Scenario 8
Export the scott schema from the orcl database and split the dump file into four files. Import the dump file
into the ordb database.
Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
parallel=4
schemas=scott
include=table
As per the above parfile content, four files are created initially – schemaexp_split_01.dmp,
schemaexp_split_02.dmp, schemaexp_split_03.dmp, schemaexp_split_04.dmp. Notice that every occurrence
of the substitution variable is incremented each time. Since there is no FILESIZE parameter,
no more files are created.
Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
table_exists_action=replace
remap_tablespace=res:users
exclude=grant
Scenario 9
Export the scott schema from the orcl database and split the dump file into three files. The dump files
will be stored in three different locations. This method is especially useful if you do not have enough
space in one file system to perform the complete expdp job. After the export succeeds, import the dump
file into the ordb database.
Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
filesize=50M
schemas=scott
include=table
As per the above expdp parfile content, the dump files are placed in three different locations. If the
entire expdp dump is 1500MB, it creates 30 dump files (each 50MB in size) and
places 10 files in each file system.
Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
table_exists_action=replace
Scenario 10
We are on the orcl database server. Now export the ordb data and place the dump file on the orcl database server.
After expdp succeeds, import the dump file into the orcl database. When we use network_link, the expdp user
and source database schema user should have identical privileges. If the privileges are not identical,
we get the error below.
C:\impexpdp>expdp parfile=networkexp1.par
Export: Release 10.2.0.1.0 – Production on Sunday, 17 May, 2009 12:06:40
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Produc
tion
With the Partitioning, OLAP and Data Mining options
ORA-31631: privileges are required
ORA-39149: cannot link privileged user to non-privileged user
Expdp parfile content:
userid=scott/tiger@orcl
logfile=netwrokexp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
schemas=scott
include=table
network_link=ordb
As per the above parfile, the expdp utility exports the ordb database data and places the dump file on the orcl server, since we are running expdp on the orcl server. This is basically exporting data from a remote database.
Impdp parfile content:
userid=system/password@orcl
logfile=networkimp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
table_exists_action=replace
Scenario 11
Export the scott schema in orcl and import into ordb, but do not write a dump file on the server; the expdp and impdp should complete without writing a dump file on the server. Here we do not need to export the data at all: we can import without creating a dump file.
We run impdp on the ordb server; it contacts the orcl DB, extracts the data and imports it into the ordb database.
If we do not have much space in the file system to place a dump file, we can use this option to load the data.
Impdp parfile content:
userid=scott/tiger@ordb
network_link=orcl
logfile=networkimp2.log
directory=dumplocation
table_exists_action=replace
Scenario 12
Expdp the scott schema in orcl and impdp the dump file into the training schema in the ordb database.
Expdp parfile content:
userid=scott/tiger@orcl
logfile=netwrokexp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
schemas=scott
include=table
Impdp parfile content:
userid=system/password@ordb
logfile=networkimp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
table_exists_action=replace
remap_schema=scott:training
Scenario 13
Expdp a table on the orcl database and impdp it in ordb. When we export the data, export only 20 percent of the table data. We use the SAMPLE parameter to accomplish this task.
SAMPLE parameter allows you to export subsets of data by specifying the percentage of data to be sampled and exported.
The sample_percent indicates the probability that a block of rows will be selected as part of the sample.
It does not mean that the database will retrieve exactly that amount of rows from the table.
The value you supply for sample_percent can be anywhere from .000001 up to, but not including, 100.
If no table is specified, then the sample_percent value applies to the entire export job. The SAMPLE parameter
is not valid for network exports.
Expdp parfile content:
userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
tables=scott.part_emp
SAMPLE=20
As per the above expdp parfile, it exports only 20 percent of the data in the part_emp table.
Impdp parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
We are creating 2 schemas:
1. User1 – This schema has the partitioned table SALES which needs to be exported.
2. User2 – The schema into which we are going to import the SALES table with the new name SALES_T.
1.Create users:
[oracle@NVMBD01PSR183 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Fri Dec 19 13:15:44 2014
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> create user user1 identified by user1;
User created.
SQL> create user user2 identified by user2;
User created.
SQL> grant connect, resource to user1, user2;
Grant succeeded.
2.Create datapump directory.
SQL> CREATE OR REPLACE DIRECTORY EXPDP_DIR AS '/oracle/data1/expdp';
SQL> GRANT READ, WRITE ON DIRECTORY EXPDP_DIR TO user1, user2;
3.Create partition table SALES.
SQL> conn user1/user1
Connected.
SQL> CREATE TABLE sales
( name varchar2(5),
time_id DATE)
PARTITION BY RANGE (time_id)
(PARTITION sales_AUG VALUES LESS THAN (TO_DATE('01-AUG-2014','dd-MON-yyyy'))
TABLESPACE users,
PARTITION sales_SEP VALUES LESS THAN (TO_DATE('01-SEP-2014','dd-MON-yyyy'))
TABLESPACE users,
PARTITION sales_OCT VALUES LESS THAN (TO_DATE('01-OCT-2014','dd-MON-yyyy'))
TABLESPACE users,
PARTITION sales_NOV VALUES LESS THAN (TO_DATE('01-NOV-2014','dd-MON-yyyy'))
TABLESPACE users,
PARTITION sales_DEV VALUES LESS THAN (TO_DATE('01-DEC-2014','dd-MON-yyyy'))
TABLESPACE users );
SQL> SELECT partition_name FROM user_tab_partitions WHERE table_name = 'SALES';
PARTITION_NAME
——————————
SALES_AUG
SALES_DEV
SALES_NOV
SALES_OCT
SALES_SEP
EXPDP:
1. Export entire table including all partitions
[oracle@NVMBD01PSR183 ~]$ expdp user1/user1 directory=EXPDP_DIR dumpfile=sales_table.dmp tables=sales
logfile=sales_table.log
Export: Release 11.2.0.3.0 – Production on Fri Dec 19 11:13:37 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "USER1"."SYS_EXPORT_TABLE_01":  user1/******** directory=EXPDP_DIR dumpfile=sales_table.dmp
tables=sales logfile=sales_table.log
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 24 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "USER1"."SALES":"SALES_NOV"                5.421 KB       1 rows
. . exported "USER1"."SALES":"SALES_OCT"                5.421 KB       1 rows
. . exported "USER1"."SALES":"SALES_SEP"                5.421 KB       1 rows
. . exported "USER1"."SALES":"SALES_AUG"                    0 KB       0 rows
. . exported "USER1"."SALES":"SALES_DEV"                    0 KB       0 rows
Master table "USER1"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for USER1.SYS_EXPORT_TABLE_01 is:
 /oracle/data1/expdp/sales_table.dmp
Job "USER1"."SYS_EXPORT_TABLE_01" successfully completed at 11:13:46
2. Export specific partition of table:
[oracle@NVMBD01PSR183 ~]$ expdp user1/user1 directory=EXPDP_DIR dumpfile=sales_table_partition.dmp
tables=sales:SALES_NOV logfile=sales_table_partition.log
Export: Release 11.2.0.3.0 – Production on Fri Dec 19 12:31:06 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "USER1"."SYS_EXPORT_TABLE_01":  user1/******** directory=EXPDP_DIR dumpfile=sales_table_partition.dmp
tables=sales:SALES_NOV logfile=sales_table_partition.log
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 8 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "USER1"."SALES":"SALES_NOV"                5.421 KB       1 rows
Master table "USER1"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for USER1.SYS_EXPORT_TABLE_01 is:
 /oracle/data1/expdp/sales_table_partition.dmp
Job "USER1"."SYS_EXPORT_TABLE_01" successfully completed at 12:31:13
IMPDP
Move dmp file to target host (ftp, scp etc)
Or load data to another schema using remap_schema
1. Import entire partition table into new schema with new name.
[oracle@NVMBD01PSR183 ~]$ impdp user2/user2 directory=EXPDP_DIR dumpfile=sales_table.dmp remap_table=sales:sales_t
remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append
Import: Release 11.2.0.3.0 – Production on Fri Dec 19 11:19:25 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "USER2"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "USER2"."SYS_IMPORT_FULL_01":  user2/******** directory=EXPDP_DIR dumpfile=sales_table.dmp
remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
Table "USER2"."SALES_T" exists. Data will be appended to existing table but all dependent metadata will
be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "USER2"."SALES_T":"SALES_SEP"               5.421 KB       1 rows
. . imported "USER2"."SALES_T":"SALES_AUG"                   0 KB       0 rows
. . imported "USER2"."SALES_T":"SALES_DEV"                   0 KB       0 rows
. . imported "USER2"."SALES_T":"SALES_OCT"               5.421 KB       1 rows
. . imported "USER2"."SALES_T":"SALES_NOV"               5.421 KB       1 rows
Job "USER2"."SYS_IMPORT_FULL_01" successfully completed at 11:19:30
Let's check:
SQL> conn user2/user2
Connected.
SQL> select * from sales_t;
NAME  TIME_ID
—– ———
prod1 01-AUG-14
prod2 01-SEP-14
prod3 01-OCT-14
2. Now import single partition only.
[oracle@NVMBD01PSR183 ~]$ impdp user2/user2 directory=EXPDP_DIR dumpfile=sales_table_partition.dmp
remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append
Import: Release 11.2.0.3.0 – Production on Fri Dec 19 13:05:26 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "USER2"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "USER2"."SYS_IMPORT_FULL_01":  user2/******** directory=EXPDP_DIR dumpfile=sales_table_partition.dmp
remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
Table "USER2"."SALES_T" exists. Data will be appended to existing table but all dependent metadata
will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "USER2"."SALES_T":"SALES_NOV"               5.421 KB       1 rows
Job "USER2"."SYS_IMPORT_FULL_01" successfully completed at 13:05:28
Here we have successfully imported single partition.
If a partition name is specified, it must be the name of a partition or subpartition in the associated table.
Only the specified set of tables, partitions, and their dependent objects are unloaded.
When you use the partition option (PARTITION_OPTIONS) of Data Pump, you can choose one of the options below:
None – Tables are imported so that they look like those on the system on which the export was created.
Departition – Partitions are created as individual tables rather than as partitions of a partitioned table.
Merge – Combines all partitions into a single table.
Managing Data Pump jobs
The data pump clients expdp and impdp provide an interactive command interface.
Since each expdp and impdp operation has a job name, you can attach to that job from any computer and
monitor the job or make adjustments to it.
Here are the data pump interactive commands.
ADD_FILE Adds another file or a file set to the DUMPFILE set.
CONTINUE_CLIENT Changes mode from interactive client to logging mode
EXIT_CLIENT Leaves the client session and discontinues logging but leaves the current job running.
KILL_JOB Detaches all currently attached client sessions and terminates the job
PARALLEL Increases or decreases the number of threads
START_JOB Starts (or resumes) a job that is not currently running. The SKIP_CURRENT option can skip
the recent failed DDL statement that caused the job to stop.
STOP_JOB Stops the current job; the job can be restarted later
STATUS Displays detailed status of the job, the refresh interval can be specified in seconds.
The detailed status is displayed to the output screen but not written to the log file.
Scenario 14: Let us start a job, stop it in the middle, then resume it.
After some time, let us kill the job, checking the job status after every activity.
We can find what jobs are running currently in the database by using the below query.
SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
—————————— ——————————
EXECUTING SYS_IMPORT_FULL_01
SQL>
C:\impexpdp>impdp parfile=schemaimp1.par
Import: Release 10.2.0.1.0 – Production on Sunday, 17 May, 2009 14:06:51
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Produc
tion
With the Partitioning, OLAP and Data Mining options
Master table “SYSTEM”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
Starting “SYSTEM”.”SYS_IMPORT_FULL_01″: parfile=schemaimp1.par
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Import> stop_job
Are you sure you wish to stop this job ([yes]/no): yes
C:\impexpdp>
When we want to stop the job, we need to press Control-C to return to the Import> prompt. Once it returns to the prompt (Import>),
we can stop the job as above by using the stop_job command.
After the job is stopped, here is the job status.
SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
—————————— ——————————
NOT RUNNING SYS_IMPORT_FULL_01
Now we are attaching the job again. Attaching the job does not restart it.
C:\impexpdp>impdp system/password@ordb attach=SYS_IMPORT_FULL_01
Import: Release 10.2.0.1.0 – Production on Sunday, 17 May, 2009 14:17:11
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Produc
tion
With the Partitioning, OLAP and Data Mining options
Job: SYS_IMPORT_FULL_01
Owner: SYSTEM
Operation: IMPORT
Creator Privs: FALSE
GUID: 54AD9D6CF9B54FC4823B1AF09C2DC723
Start Time: Sunday, 17 May, 2009 14:17:12
Mode: FULL
Instance: ordb
Max Parallelism: 1
EXPORT Job Parameters:
CLIENT_COMMAND parfile=schemaexp1.par
IMPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND parfile=schemaimp1.par
TABLE_EXISTS_ACTION REPLACE
State: IDLING
Bytes Processed: 1,086,333,016
Percent Done: 44
Current Parallelism: 1
Job Error Count: 0
Dump File: c:/impexpdp\networkexp1.dmp
Worker 1 Status:
State: UNDEFINED
Import>
After attaching the job, here is the job status.
SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
—————————— ——————————
IDLING SYS_IMPORT_FULL_01
Attaching the job does not resume the job. Now we are resuming job again…..
Import> continue_client
Job SYS_IMPORT_FULL_01 has been reopened at Sunday, 17 May, 2009 14:17
Restarting “SYSTEM”.”SYS_IMPORT_FULL_01″: parfile=schemaimp1.par
SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
—————————— ——————————
EXECUTING SYS_IMPORT_FULL_01
Now again we are killing the same job…. Before we kill, we need to press Control-C to return the Import> prompt.
Import> kill_job
Are you sure you wish to stop this job ([yes]/no): yes
C:\impexpdp>
Now the job has disappeared from the database.
SQL> select state,job_name from dba_datapump_jobs;
no rows selected
Parallelism
Data Pump is much faster than the old exp and imp client commands. One new feature that really helps make it faster is the PARALLEL option, which makes Data Pump move data in multiple worker threads (four in the examples below).
Data pump performance can be significantly improved by using the PARALLEL parameter. This should be used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple dump files to be created or read.
$expdp scott/tiger directory=dumpdir dumpfile=scott2.dmp parallel=4 job_name=scott2
$Impdp dpuser/dpuser@TDB10G  schemas=dbuser  directory=dumpdir dumpfile=expdp_TDB10G_dbuser.200470215_07.%U.dmp parallel=4
DATA PUMP VIEWS – Data Pump maintains a number of user- and DBA-accessible views to monitor the progress of jobs:
DBA_DATAPUMP_JOBS: This shows a summary of all active Data Pump jobs on the system.
USER_DATAPUMP_JOBS: This shows a summary of the current user’s active Data Pump jobs.
DBA_DATAPUMP_SESSIONS: This shows all sessions currently attached to Data Pump jobs.
V$SESSION_LONGOPS: A row is maintained in the view showing progress on each active Data Pump job. The OPNAME column displays the Data Pump job name.
Data Pump Import:
> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim
In Data pump utility  the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA Option.
Example, export of an entire database to a dump files with all GRANTS, INDEXES, and data
> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y LOG=exportdba.log
> expdp username/password FULL=y INCLUDE=GRANT INCLUDE= INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL
Following are few EXPDP and IMPDP commands with additional attributes.
$ expdp dpuser/dpuser@TDB10G schemas=dpuser include=TABLE:\"IN (\'EMP\', \'DEPT\')\" directory=dpump_dir1 dumpfile=dpuser.dmp logfile=dpuser.log
$expdp dpuser/dpuser@TDB10G schemas=dpuser exclude=TABLE:\”= \’EMP_DETAILS\’\”  directory=dpump_dir1 dumpfile=dpuser2.dmp logfile=dpuser.log
$Impdp dpuser/dpuser@TDB10G  schemas=dbuser  directory=dumpdir dumpfile=expdp_TDB10G_dbuser.200470215_07.%U.dmp parallel=4
$impdp dpuser/dpuser@TDB10G  schemas=dbuser dumpfile=DATA_PUMP_DIR:expdp_TDB10G_dbuser.200507_165936.%U.dmp logfile= DATA_PUMP_DIR:expdp_TDB10G_dbuser.200507_165936.out
$expdp dpuser/dpuser@TDB10G schemas=dpuser   directory=dpump_dir1 parallel=4 dumpfile=dpuser_%U.dmp logfile=dpuser.log
CREATING ORACLE DATABASE USING IMPDP and EXPDP
Steps for creating the Oracle database:
Before creating the database we have to take Backup/ Export the entire existing Schema.
Syntax: > expdp username/password FULL=y INCLUDE=GRANT INCLUDE= INDEX   DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL
Delete Database – Drops all objects and public synonyms for objects owned by the schema
Create the Database again using the Oracle Configuration Assistant and clone of deleted database.
Connect to the Oracle Database using system user and sysdba user and runs scripts for tablespace creation, creation of users – roles, granting of privileges.
Syntax:
create tablespace Tablespace_name
datafile ‘/&oradata_dir/oradata/&orasid/tablespace_name_datafile01.dbf’ size 24M
autoextend on next 24M maxsize 15000M
extent management local
segment space management auto
/
Create the dump directory using CREATE OR REPLACE DIRECTORY (dpump_dir1) and grant read-write privileges on the dump directory object to the user.
The following SQL statements creates a user, a directory object named dpump_dir1 and     grants the permissions to the user.
Syntax:
SQLPLUS system/manager@TDB10G as sysdba
SQL> create user dpuser identified by dpuser;
SQL> grant connect, resource to dpuser;
SQL> CREATE DIRECTORY dpump_dir1 AS ‘/opt/app/oracle’;
SQL> grant read, write on directory dpump_dir1 to dpuser;
Connect to the newly created schema to test the connection.
SQL> connect dpuser/dpuser@TDB10G
Run the IMPDP command from the command prompt
Syntax:
$ impdp dpuser/dpuser@TDB10G schemas=dbuser dumpfile=DATA_PUMP_DIR:expdp_TDB10G_dbuser.200507_165936.%U.dmp logfile=DATA_PUMP_DIR:expdp_TDB10G_dbuser.200507_165936.out
Verify the log file for errors or discrepancies.
We need to copy the .DMP files into the default database directory (DATA_PUMP_DIR) path, for example:
I:\oracle\product\10.2.0\admin\TDB10G\dpdump\
Alternatively, we can place these .DMP files in any path for which we have created a database directory object.
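After the import completes, a quick sanity check is to compare object counts between the source schema and the freshly imported one. The query below is a sketch, assuming the imported schema is DBUSER:
SQL> -- count imported objects by type in the target schema
SQL> select object_type, count(*)
     from dba_objects
     where owner = 'DBUSER'
     group by object_type;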
Table Level Exports/Imports
The TABLES parameter is used to specify the tables that are to be exported. The following is an example of the table export and import syntax:
expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log
impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log
The TABLE_EXISTS_ACTION=APPEND parameter allows data to be imported into existing tables.
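For example, to append rows to tables that already exist in the target database, the same import can be run as sketched below (TABLE_EXISTS_ACTION also accepts SKIP, the default, as well as TRUNCATE and REPLACE):
impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log table_exists_action=APPEND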
Schema Level Exports/Imports
The OWNER parameter of exp has been replaced by the SCHEMAS parameter, which is used to specify the schemas to be exported. The following is an example of the schema export and import syntax:
expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=impdpSCOTT.log
Database Level Exports/Imports
The FULL parameter indicates that a complete database export is required. The following is an example of the full database export and import syntax:
expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log
impdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=impdpDB10G.log
——————————————————————

Oracle provides the option REUSE_DUMPFILES=[Y/N] to avoid the error "ORA-27038: created file already exists".

This is an option of the Data Pump expdp utility. Normally, when you perform an export using expdp and the dump file is already present in the export directory, it throws the error "ORA-27038: created file already exists". This situation happens when you want to perform repetitive exports using the same dump file name.
Oracle provides the option REUSE_DUMPFILES=[Y/N] to avoid this error. Set the parameter value to Y to overwrite the existing dump file; by default the option is N. See the examples below.


Normal scenario with file already present in the export directory
$ expdp scott/tiger directory=exp_dir dumpfile=tde.dmp tables=example

Export: Release 11.1.0.7.0 - 64bit Production on Tuesday, 19 July, 2011 1:36:50
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/home/oracle/scott/tde.dmp"
ORA-27038: created file already exists
Additional information: 1



Execute the expdp using REUSE_DUMPFILES
$ expdp scott/tiger directory=exp_dir dumpfile=tde.dmp tables=example reuse_dumpfiles=y

Export: Release 11.1.0.7.0 - 64bit Production on Tuesday, 19 July, 2011 1:46:05
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/******** directory=exp_dir dumpfile=tde.dmp tables=example reuse_dumpfiles=y
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 1.312 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."EXAMPLE":"EXAMPLE_P1"             441.3 KB   49999 rows
. . exported "SCOTT"."EXAMPLE":"EXAMPLE_P2"             408.3 KB   45121 rows
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
  /home/oracle/scott/tde.dmp
Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 01:49:16

IMPDP EXPDP ENCRYPTION_PASSWORD Parameter ORA-39173: Encrypted data has been stored unencrypted in dump file set

Error Description
Data Pump expdp export failed with the following error:
. . exported "SCOTT"."TABLE_ENCRPT"                      5.867 KB       1 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded

******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
  /home/oracle/scott/tde.dmp
Job "SCOTT"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 00:28:16

Solution Description
This error occurs because the table TABLE_ENCRPT has columns encrypted using Transparent Data Encryption (TDE). When you export such a table, you should use the ENCRYPTION_PASSWORD option to avoid this error.
Transparent Data Encryption
Example for encryption_password Parameter for encrypted tables
The table is created with TDE-encrypted columns in the following way.
Add an entry to sqlnet.ora file
ENCRYPTION_WALLET_LOCATION =
  (SOURCE =
    (METHOD = file)
    (METHOD_DATA =
      (DIRECTORY = D:\Admin\Wallet)))
Create the wallet directory and check that the oracle user has read, write, and execute permissions on the directory.
mkdir D:\Admin\Wallet
Set the encryption key for the wallet.
SQL> alter system set encryption key authenticated by "encr_pwd";
System altered.
Verify that the wallet file has been created properly in the D:\Admin\Wallet directory.
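Note that a wallet opened this way is closed again whenever the instance is restarted, so it must be reopened before encrypted columns can be queried or exported. In 10g/11g this is done with:
SQL> -- reopen the wallet after an instance restart
SQL> alter system set encryption wallet open identified by "encr_pwd";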
Table encryption example (column encryption was introduced in Oracle 10g)
Create the table table_encrpt with an encrypted column:

SQL> create table table_encrpt(emp_no number(4),
   name varchar2(10),
   card_num varchar2(16) ENCRYPT);
Table created.
SQL> insert into table_encrpt values (11, 'John', '1234123412341234');
1 row created.
SQL> commit;
Commit complete.
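As a quick check, the data dictionary shows which columns are protected by TDE; the default column encryption algorithm is typically AES192:
SQL> -- list the encrypted columns owned by the current user
SQL> select table_name, column_name, encryption_alg
     from user_encrypted_columns;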

Run the export job without encryption_password option
$ expdp scott/scott directory=exp_dir dumpfile=tde.dmp tables=table_encrpt
Export: Release 11.1.0.7.0 - 64bit Production on Saturday, 16 July, 2011 0:25:52
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/******** directory=exp_dir dumpfile=tde.dmp tables=table_encrpt
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
. . exported "SCOTT"."TABLE_ENCRPT"                      5.867 KB       1 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
  /home/oracle/scott/tde.dmp
Job "SCOTT"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 00:28:16

Perform the expdp with the ENCRYPTION_PASSWORD option and the error no longer occurs.
$ expdp scott/scott encryption_password=Encr_pwd directory=exp_dir dumpfile=tde.dmp tables=table_encrpt
Export: Release 11.1.0.7.0 - 64bit Production on Saturday, 16 July, 2011 0:31:18
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/******** encryption_password=******** directory=exp_dir dumpfile=tde.dmp tables=table_encrpt
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
. . exported "SCOTT"."TABLE_ENCRPT"                      5.875 KB       1 rows
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
  /home/oracle/scott/tde.dmp
Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 00:33:42
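Because the dump file set was written with a password, the same ENCRYPTION_PASSWORD must be supplied when importing it. A sketch of the corresponding impdp is shown below; TABLE_EXISTS_ACTION=REPLACE is assumed here so that the existing table is overwritten:
$ impdp scott/scott encryption_password=Encr_pwd directory=exp_dir dumpfile=tde.dmp tables=table_encrpt table_exists_action=replace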

How to improve the performance of the exp/imp utilities in Oracle

How to improve exp performance?
Answer & Explanation

1. Set the BUFFER parameter to a high value. Default is 256KB.
2. Stop unnecessary applications to free the resources.
3. If you are running multiple sessions, make sure they write to different disks.
4. Do not export to an NFS (Network File System) mount. Exporting to local disk is faster.
5. Set the RECORDLENGTH parameter to a high value.
6. Use DIRECT=y (direct path export); a combined example follows this list.
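Putting several of these together, a tuned export might look like the sketch below. Note that BUFFER applies only to conventional path exports; with DIRECT=y the RECORDLENGTH parameter sizes the I/O buffer instead (65535 is the maximum):
$ exp scott/tiger owner=scott file=scott.dmp log=scott_exp.log direct=y recordlength=65535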

How to improve imp performance?

1. Place the file to be imported in separate disk from datafiles.
2. Increase the DB_CACHE_SIZE.
3. Set LOG_BUFFER to big size.
4. Stop redolog archiving, if possible.
5. Use COMMIT=n, if possible.
6. Set the BUFFER parameter to a high value. Default is 256KB.
7. It is advisable to drop indexes before importing to speed up the import process, or set INDEXES=N and build the indexes after the import. Indexes can easily be recreated after the data has been successfully imported.
8. Use STATISTICS=NONE
9. Disable the INSERT triggers, as they fire during import.
10. Set the parameter COMMIT_WRITE=NOWAIT (in Oracle 10g) or COMMIT_WAIT=NOWAIT (in Oracle 11g) during the import; a combined example follows this list.
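A tuned import using several of the settings above might look like this sketch, assuming the dump file from the previous export example:
$ imp scott/tiger file=scott.dmp log=scott_imp.log fromuser=scott touser=scott buffer=10485760 commit=n indexes=n statistics=none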