Expdp Impdp Tutorial Part 2

10. ENCRYPTION in export Import
ENCRYPTION
Encrypt part or all of a dump file.
Valid keyword values are: ALL, DATA_ONLY, ENCRYPTED_COLUMNS_ONLY, METADATA_ONLY and NONE.
ENCRYPTION_ALGORITHM
Specify how encryption should be done.
Valid keyword values are: [AES128], AES192 and AES256.
ENCRYPTION_MODE
Method of generating encryption key.
Valid keyword values are: DUAL, PASSWORD and [TRANSPARENT].
ENCRYPTION_PASSWORD
Password key for creating encrypted data within a dump file.
ENCRYPTION_PWD_PROMPT (introduced in Oracle 12c)
Specifies whether to prompt for the encryption password [NO].
Terminal echo will be suppressed while standard input is read.

ENCRYPTION_PASSWORD:

To encrypt a dump file, provide a password through the ENCRYPTION_PASSWORD parameter in the parfile.
ENCRYPTION_PWD_PROMPT
With this parameter there is no need to write the password in the parfile or on the expdp command line; expdp prompts for the password and we supply it interactively.
While importing this dump, we have to supply the same password.
If you try to import without the encryption password, it will throw an error: ORA-39174: Encryption password must be supplied.
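The two approaches above can be sketched in a parfile like this (the directory, dump file, and schema names are placeholders for illustration):

```
# export_enc.par -- hypothetical parfile; object names are assumptions
directory=DUMP_DIR
dumpfile=emp_enc.dmp
logfile=emp_enc_exp.log
schemas=DBATEST
encryption=all

# Option 1: hard-code the password in the parfile
encryption_password=StrongPwd#123

# Option 2 (12c onward): prompt for it instead, keeping it out of the parfile
# encryption_pwd_prompt=yes
```

On import, the same password must be supplied again, either through ENCRYPTION_PASSWORD or by answering the ENCRYPTION_PWD_PROMPT=YES prompt.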
11. CONTENT parameter in EXPDP/IMPDP:
CONTENT
Specifies data to unload.
Valid keyword values are: [ALL], DATA_ONLY and METADATA_ONLY.
CONTENT=ALL is the default value
CONTENT=METADATA_ONLY:
It exports only the skeleton/DDL of the object/schema. It won't export any of the data/rows.
CONTENT=DATA_ONLY:
It exports only the ROWS/DATA of the tables, excluding the DDL.
Before importing the dump into another database, make sure that the table already exists in that database: since the export was taken with DATA_ONLY,
the dump doesn't contain the DDL, and the import will fail with an error like the one below.
Here we tried to load this dump into a database where the table doesn't exist,
so it failed.
NOTE – NEVER USE TABLE_EXISTS_ACTION=REPLACE 
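The three CONTENT modes can be compared side by side (the schema, table, directory, and file names are placeholders):

```
# Metadata (DDL) only -- no rows:
expdp system directory=DUMP_DIR dumpfile=emp_meta.dmp content=metadata_only tables=dbatest.EMPTAB

# Rows only -- no DDL; the table must already exist at import time:
expdp system directory=DUMP_DIR dumpfile=emp_data.dmp content=data_only tables=dbatest.EMPTAB

# Everything (the default):
expdp system directory=DUMP_DIR dumpfile=emp_all.dmp content=all tables=dbatest.EMPTAB
```

When importing a DATA_ONLY dump into a table that already holds rows, TABLE_EXISTS_ACTION=APPEND or TRUNCATE are the usual choices; as noted above, REPLACE is best avoided.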

12. CLUSTER PARAMETER IN RAC:
CLUSTER
Utilize cluster resources and distribute workers across the Oracle RAC [YES].
In a RAC database, if you are taking export with parallel option and the datapump directory is not shared between the nodes, then export might fail with below error.
Because, the remote node is unable to write the dumps to that directory.
ORA-31617: unable to open dump file “/dumparea/TEST/asset_75.dmp” for write
ORA-19505: failed to identify file “/dumparea/TEST/asset_75.dmp”
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
To avoid this, use the CLUSTER=N parameter, so that the export runs worker processes only on the local node.
parfile:
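A minimal parfile for such a local-node-only export might look like this (the directory, dump file, and schema names are assumptions):

```
# rac_export.par -- hypothetical parfile for illustration
directory=DUMP_DIR
dumpfile=asset_%U.dmp
logfile=asset_exp.log
schemas=ASSET
parallel=4
cluster=N
```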
13.EXPDP to an ASM DISKGROUP:
We can use asm diskgroups also to store the expdp dumpfile.
Create a directory pointing to asm diskgroup( for dumpfiles)
Create a directory pointing to normal filesystem ( for logfiles)
export syntax:
expdp dumpfile=dumpfile.dmp logfile= LOGFILE_DIR:logfile.log directory=DUMP_DIR  tables=….
expdp dumpfile=test.dmp logfile=EXPLOG:test.log directory=SOURCE_DUMP tables=dbatest.EMPTAB exclude=statistics
Note: If you don't mention the logfile directory, the export will fail with an error, because the text logfile cannot be written into an ASM diskgroup.
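Creating the two directories might look like this (the +DATA diskgroup path and filesystem path are assumptions; the directory names match the export command above):

```
SQL> create directory SOURCE_DUMP as '+DATA/dumpfiles';
SQL> create directory EXPLOG as '/export/logs';
SQL> grant read, write on directory SOURCE_DUMP to dbatest;
SQL> grant read, write on directory EXPLOG to dbatest;
```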
14.EXPDP TO MULTIPLE DIRECTORIES:
If you wish to take an expdp backup of a big table/schema but don't have sufficient space in a single mount point to hold the dump, you can write the expdp dump to multiple directories.
Refer link –  EXPDP TO MULTIPLE DIRECTORIES:
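One way to sketch this is to prefix each dump file spec with its own directory object (the directory, table, and file names are placeholders, and DUMP_DIR1/DUMP_DIR2 must already exist):

```
# multi_dir.par -- hypothetical parfile for illustration
directory=DUMP_DIR1
dumpfile=DUMP_DIR1:bigtab_%U.dmp,DUMP_DIR2:bigtab_%U.dmp
logfile=bigtab_exp.log
filesize=20G
tables=dbatest.BIGTAB
parallel=2
```

Each file spec can carry its own DIRECTORY: prefix, and the %U substitution variable generates numbered files as each one reaches FILESIZE.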