Datapump Export (Everything you need to know)

This post describes all the clauses available with Oracle Data Pump export (expdp). It assumes that the reader has basic knowledge of database administration. If you have not done so yet, please go through the article Oracle datapump Architecture & Internals first so that you have a basic understanding of the Data Pump architecture.

Oracle 12c Data Pump has the following clauses, which we will discuss one by one:-

EXCLUDE – This is used to filter the metadata from the export operation. This is more like a where clause which you can use to exclude any object or schema from the export job.

For example, if you want a full database export excluding one schema (say TEST), then:-

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp FULL=YES EXCLUDE=SCHEMA:"='TEST'" LOGFILE=test.log

Similarly, suppose you want a schema export that excludes all indexes whose names start with TEST_, then:-

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST EXCLUDE=INDEX:"LIKE 'TEST_%'" LOGFILE=test.log

So using this clause you can exclude any specific content from the dump; try and explore this option further. To get a list of object types that can be excluded, query the OBJECT_PATH column of the DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS and TABLE_EXPORT_OBJECTS views, for example as shown below.
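For instance, to see which object paths can be used with EXCLUDE (or INCLUDE) in a schema-mode export, a simple lookup such as the following can be run (adjust the LIKE filter to whatever you are interested in):

SELECT object_path, comments FROM schema_export_objects WHERE object_path LIKE '%INDEX%';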

INCLUDE – Exactly the opposite of the EXCLUDE clause, this is used when you explicitly want to export only certain objects, say a few tables and indexes along with views and packages in a schema dump; only the object types explicitly mentioned in the INCLUDE clause are exported and nothing else.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST INCLUDE=TABLE:"IN ('EMP', 'DEPT')" INCLUDE=PROCEDURE INCLUDE=INDEX:"LIKE 'TEST%'" LOGFILE=test.log

So the above expdp command would export only the tables EMP & DEPT, all procedures, and all indexes whose names start with TEST; no other objects of the TEST schema would be exported.

QUERY – This clause allows you to filter the rows that get exported. Think of it as EXCLUDE & INCLUDE but with row-level control over filtering; it is effectively the WHERE clause of the expdp command.

When the QUERY parameter is specified for a table, Data Pump uses external tables to unload the target table.

It accepts values in the form:- QUERY = [schema.][table_name:] query_clause

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST QUERY=TEST.EMP:"WHERE DEPT_ID > 90 AND SAL > 90000" LOGFILE=test.log

The above command would export all tables of the TEST schema, but only the records of the EMP table that satisfy the QUERY condition.
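Note that on most operating systems the quotes in the QUERY value have to be escaped on the command line; placing the clause in a parameter file (see PARFILE below) avoids that. A minimal sketch of such a parameter file, here called query.par (the name is just an example):

DIRECTORY=DATA_PUMP_DIR
DUMPFILE=test.dmp
SCHEMAS=TEST
QUERY=TEST.EMP:"WHERE DEPT_ID > 90 AND SAL > 90000"
LOGFILE=test.log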

CLUSTER – This is used in a RAC environment to spread the worker processes across other Oracle instances so that RAC resources are utilized for Data Pump jobs.

It accepts values as:- CLUSTER=[YES | NO]

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test_%U.dmp SCHEMAS=TEST CLUSTER=YES NOLOGFILE=YES

SERVICE_NAME – This parameter is used with the CLUSTER clause in a RAC environment. As noted above, CLUSTER=YES allows the MCP to create worker processes on all instances, depending upon the degree of parallelism. But if you want to restrict the cluster resources used by Data Pump, you can specify the SERVICE_NAME clause together with CLUSTER=YES; the MCP will then fork worker processes only on the instances that belong to the specified service.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test_%U.dmp SCHEMAS=TEST CLUSTER=YES SERVICE_NAME=SERVICE_HA NOLOGFILE=YES

ESTIMATE – This is used to estimate how much space each table in the export job will consume (in bytes).

It accepts values as:- ESTIMATE=[BLOCKS | STATISTICS]

  • BLOCKS: The estimate is calculated by multiplying the number of database blocks used by the source objects by the block size.
  • STATISTICS: The estimate is calculated using statistics for each table.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST ESTIMATE=STATISTICS LOGFILE=test.log

ESTIMATE_ONLY – This is used to estimate the space that a job would consume, without actually performing the export operation.

It accepts values as:- ESTIMATE_ONLY=[YES | NO]

expdp user/password SCHEMAS=TEST ESTIMATE_ONLY=YES

FILESIZE – This is used to specify the maximum size of each dump file. When the maximum file size is reached, the MCP closes that file and creates a new one for further write operations.

It accepts values as:- FILESIZE=integer[B | KB | MB | GB | TB]

If you use this clause, remember to include the %U substitution variable in the DUMPFILE parameter, else you’ll receive ORA-39095: Dump file space has been exhausted: Unable to allocate xxxxxxx bytes.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test_%U.dmp SCHEMAS=TEST EXCLUDE=INDEX:"LIKE 'TEST_%'" FILESIZE=1GB LOGFILE=test.log

FLASHBACK_SCN – This clause can be used if you want the dump to be consistent as of a specific SCN. By default Data Pump dumps are consistent only on a per-table basis; if you want all the tables to be consistent with each other, then the FLASHBACK_SCN or FLASHBACK_TIME clause should be used, depending upon the requirement.

Therefore the export operation is performed with data that is consistent up to the specified SCN in the FLASHBACK_SCN clause.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST COMPRESSION=DATA_ONLY FLASHBACK_SCN=103713102 LOGFILE=test.log

To get the current SCN value you can use the below queries:-

SELECT current_scn FROM v$database;
SELECT DBMS_FLASHBACK.get_system_change_number FROM dual;

To convert a timestamp to an SCN you can use the TIMESTAMP_TO_SCN function:-

SELECT TIMESTAMP_TO_SCN(SYSTIMESTAMP) FROM dual;

FLASHBACK_TIME – This is similar to FLASHBACK_SCN, in this case the SCN that most closely matches the specified time is found, and then this SCN is used to get the consistent data.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST COMPRESSION=DATA_ONLY FLASHBACK_TIME="TO_TIMESTAMP('21-09-2016 12:36:00', 'DD-MM-YYYY HH24:MI:SS')" NOLOGFILE=YES REUSE_DUMPFILES=YES

FULL – This is used when you want to export the full database.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=full_%U.dmp full=Y PARALLEL=6 ENCRYPTION=data_only ENCRYPTION_PASSWORD=T3sT NOLOGFILE=YES REUSE_DUMPFILES=YES

JOB_NAME – This is used to give a specific name to the export job; by default the job name is in the format SYS_EXPORT_<mode>_NN, for example SYS_EXPORT_SCHEMA_01.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=full_%U.dmp full=Y PARALLEL=6 ENCRYPTION=data_only ENCRYPTION_PASSWORD=T3sT NOLOGFILE=YES JOB_NAME=Encrypt_full_dump REUSE_DUMPFILES=YES

KEEP_MASTER – As we saw in Oracle datapump Architecture & Internals, the master table is used to store information (export/import parameters, current status, object info etc.) about the Data Pump job and is automatically deleted once the job completes successfully or is killed.

But what if you want to retain the information stored in the master table to analyze the job? The KEEP_MASTER clause is used when we need to retain the master table even after a successful or killed job.

It accepts values as:- KEEP_MASTER=[YES | NO]

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=full_%U.dmp full=Y PARALLEL=6 ENCRYPTION=data_only ENCRYPTION_PASSWORD=T3sT NOLOGFILE=YES JOB_NAME=Encrypt_full_dump KEEP_MASTER=YES
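Once the job completes with KEEP_MASTER=YES, the retained master table (created in the schema of the user running the job and named after the job, here ENCRYPT_FULL_DUMP) can be queried directly to analyze the job. A minimal sketch of such a lookup; the exact column list varies by release, so describe the table first:

SELECT process_order, object_schema, object_name, object_type
FROM encrypt_full_dump
WHERE process_order > 0
ORDER BY process_order;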

LOGTIME – This clause is used to include timestamp in the output messages.

It accepts values as:- LOGTIME=[NONE | STATUS | LOGFILE | ALL]

  • NONE : No timestamps are included in the output.
  • STATUS : Timestamps are included in the standard output, but not in the log file.
  • LOGFILE : Timestamps are included in the log file output, but not in the standard output.
  • ALL : Timestamps are included in both the standard output and the log file.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST COMPRESSION=DATA_ONLY FLASHBACK_TIME="TO_TIMESTAMP('21-09-2016 12:36:00', 'DD-MM-YYYY HH24:MI:SS')" LOGTIME=ALL REUSE_DUMPFILES=YES

METRICS – This clause is used to include additional information about the export job in the log file.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST CLUSTER=NO METRICS=YES

NETWORK_LINK – Used in cases where a database is exported via a database link over the network. The data from the source database instance is written to a dump file set on the connected database instance.

This means that the system to which the expdp client is connected contacts the source database referenced by the DB link and writes the data to a dump file set back on the connected system.

expdp user/password FULL=YES DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp EXCLUDE=SCHEMA:"='TEST'" NETWORK_LINK=TEST_LINK LOGFILE=test.log
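The database link referenced by NETWORK_LINK must already exist in the database to which the expdp client is connected. A minimal sketch of creating such a link; the user, password and TNS alias below are only placeholders:

CREATE DATABASE LINK test_link CONNECT TO remote_user IDENTIFIED BY remote_password USING 'SOURCE_DB_TNS';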

NOLOGFILE – This clause can be used to suppress the creation of logfile.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=full_%U.dmp full=Y PARALLEL=6 ENCRYPTION=data_only ENCRYPTION_PASSWORD=T3sT NOLOGFILE=YES JOB_NAME=Encrypt_full_dump REUSE_DUMPFILES=YES

PARALLEL – This clause is used to authorize the MCP to fork worker processes, and in turn parallel query processes, up to the integer specified with the PARALLEL clause.

Since each active worker and slave process writes exclusively to its own dump file, to get the real benefit of parallelization the DUMPFILE needs to be specified with the %U substitution variable so that the MCP can open an appropriate number of files to accommodate the degree specified with the PARALLEL clause.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=full_%U.dmp full=Y PARALLEL=6 NOLOGFILE=YES JOB_NAME=Encrypt_full_dump LOGTIME=ALL

PARFILE – There can be times when several expdp clauses are used, or when operating-system special characters interfere with the expdp parameters. In such cases you can put all the required parameters in a parameter file and point to it with the PARFILE clause.

Here is a parameter file (say, full.par):-

DIRECTORY=DATA_PUMP_DIR
DUMPFILE=full_%U.dmp
full=Y
PARALLEL=6
NOLOGFILE=YES
JOB_NAME=Encrypt_full_dump
LOGTIME=ALL

And it can be called via PARFILE clause, as:-

expdp user/password PARFILE=full.par

REMAP_DATA – This clause is mostly used to mask data when moving a production dump to a test system. Using the REMAP_DATA clause, sensitive data can be replaced with masked data generated by a remapping function.

It accepts values in the form:- REMAP_DATA=[schema.]table_name.column_name:[schema.]pkg.function

  • schema — schema containing the table to be remapped. By default, this is the schema of the user doing the export.
  • table_name — table whose column will be remapped.
  • column_name — column whose data is to be remapped.
  • schema — schema containing the PL/SQL package that contains the remapping function.
  • pkg — name of the PL/SQL package containing the remapping function.
  • function — name of the function within the PL/SQL package that will be called to remap the column in each row of the specified table.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST REMAP_DATA=TEST.EMP.DEPT_ID:EMP.REMAP.MIN10 METRICS=YES LOGFILE=test.log
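The remapping function itself is just a stored PL/SQL function that takes the column value as input and returns a new value of the same data type. Below is a minimal sketch of a hypothetical package REMAP with a function MIN10, created in the EMP schema to match the example above (the names and the masking logic are only placeholders):

CREATE OR REPLACE PACKAGE remap AS
  FUNCTION min10 (p_dept_id IN NUMBER) RETURN NUMBER;
END remap;
/
CREATE OR REPLACE PACKAGE BODY remap AS
  -- Replace the real department id with a masked value of the same data type.
  FUNCTION min10 (p_dept_id IN NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN MOD(p_dept_id, 10) + 10;
  END min10;
END remap;
/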

REUSE_DUMPFILES – If a dump file with the same name as specified in the DUMPFILE clause already exists at the path pointed to by the directory object, Data Pump aborts with an error.

You can, however, instruct the MCP to overwrite the existing dump file using the REUSE_DUMPFILES clause.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST REUSE_DUMPFILES=YES  METRICS=YES LOGFILE=test.log

SAMPLE – This is used to specify a percentage of data rows to be exported in the dumpfile set.

It accepts values in the form:- SAMPLE=[[schema_name.]table_name:]sample_percent

where sample_percent is the percentage of data to be exported.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST SAMPLE=60 LOGFILE=test.log

SCHEMAS – This is used to specify the comma separated list of schemas that you want to be exported.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST1,TEST2 SAMPLE=60 LOGFILE=test.log

STATUS – As we read in the article Queries to Monitor Datapump Jobs, STATUS can be used from another expdp session to get the current picture of all the Data Pump processes. But if you want the STATUS output to be printed in the same session from which the export operation was initiated, you can use the command-line argument STATUS.

It accepts values as:- STATUS=[integer]

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST LOGFILE=test.log STATUS=100

With the above command, the status information would be printed to the standard output every 100 seconds.

TABLES – As the name suggests, this is used to specify one or more tables, partitions or sub-partitions to export.

It accepts values in the form:- TABLES=[schema_name.]table_name[:partition_name] [, …]

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp TABLES=TEST.EMP,TEST2.DEPT LOGFILE=test.log METRICS=Y

TABLESPACES – This clause is used to specify the list of tablespaces that you want to export in tablespace mode. In tablespace mode, only the tables contained in a specified set of tablespaces are unloaded. If a table is unloaded, then its dependent objects are also unloaded. Both object metadata and data are unloaded. If any part of a table resides in the specified set, then that table and all of its dependent objects are exported.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp TABLESPACES=TEST_TBS1,TEST_TBS2 LOGFILE=test.log METRICS=Y

TRANSPORT_FULL_CHECK – When this parameter is set to YES, the export job verifies that there are no dependencies between those objects inside the transportable set and those outside the transportable set.

For example, if a table is inside the transportable set but its index is not, then a failure is returned and the export operation is terminated. Similarly, a failure is also returned if an index is in the transportable set but the table is not.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp TABLESPACES=TEST_TBS1  TRANSPORT_FULL_CHECK=YES LOGFILE=test.log METRICS=Y

TRANSPORT_TABLESPACES – This parameter is used to specify a list of tablespace names for which object metadata will be exported from the source database, so that the tablespaces can later be plugged into a target database.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp TRANSPORT_TABLESPACES=TEST_TBS1  TRANSPORT_FULL_CHECK=YES LOGFILE=test.log METRICS=Y

TRANSPORTABLE – Specifies whether the transportable option should be used during a table mode export (specified with the TABLES parameter) to export metadata for specific tables, partitions, and subpartitions.

It accepts value as:- TRANSPORTABLE = [ALWAYS | NEVER]

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp TABLES=TEST.EMP,TEST2.DEPT TRANSPORTABLE=ALWAYS LOGFILE=test.log

VERSION – This clause is used to create a dumpfile set that is compatible with an older release of Oracle Database.

It accepts values as:- VERSION=[COMPATIBLE | LATEST | version_string]

  • COMPATIBLE: The version of the metadata corresponds to the database compatibility level.
  • LATEST: The version of the metadata corresponds to the database release.
  • version_string: Any specific database release can be specified.

So if you want to export a dump from an 11.2.0.3 database and make the dump file compatible with a 10.2.0.4 database:-

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST  VERSION=10.2 METRICS=YES LOGFILE=test.log

VIEWS_AS_TABLES – This clause is used if you want to export a view as a table. Data Pump exports a table with the same columns as the view and with row data fetched from the view. Data Pump also exports objects dependent on the view, such as grants and constraints.

It accepts values in the form:- VIEWS_AS_TABLES=[schema_name.]view_name[:table_name], …

  • schema_name: The name of the schema in which the view resides.
  • view_name: The name of the view to be exported as a table.
  • table_name: The name of a table to serve as the source of the metadata for the exported view.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp VIEWS_AS_TABLES=TEST.EMP_V,TEST2.DEPT LOGFILE=test.log

ABORT_STEP – Used to stop the job after it is initialized; it can be used if you want to test a job definition or to examine the master table populated by the job before any objects or data are exported.

It accepts values as:- ABORT_STEP=[n | -1]

n — If the value is zero or greater, then the export operation is started and the job is aborted at the object that is stored in the master table with the corresponding process order number.

-1 — If the value is negative one (-1) then abort the job after setting it up, but before exporting any objects or data.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST ABORT_STEP=-1 LOGFILE=test.log

CONTENT – Lets you filter what gets exported. This is typically used when you want only the metadata to be exported and then imported into a different database, to copy the structure.

It accepts values as:- CONTENT=[ALL | DATA_ONLY | METADATA_ONLY]

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST CONTENT=METADATA_ONLY LOGFILE=test.log

ACCESS_METHOD – Hopefully you have gone through Oracle datapump Architecture & Internals; to understand this clause it is essential to know the architecture and the data movement methods of the Data Pump APIs.

As you know, there are three methods available for data movement in Data Pump; using the ACCESS_METHOD clause you can instruct the MCP to use a particular method regardless of the other factors.

It accepts values as:- ACCESS_METHOD=[AUTOMATIC | DIRECT_PATH | EXTERNAL_TABLE]

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST ACCESS_METHOD=EXTERNAL_TABLE LOGFILE=test.log

ATTACH – This clause is used to attach a new Data Pump client session to an existing job. It is used if you want to check the STATUS of a running job or to KILL/RESUME/STOP it.

It accepts values as:- ATTACH= [job_name]

The job name can be fetched from the DBA_DATAPUMP_JOBS view.
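For example, a simple lookup against this view (the columns shown are standard columns of DBA_DATAPUMP_JOBS):

SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;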

expdp ATTACH=TEST.TEST_IMPORT_SCHEMA_02

COMPRESSION – This clause is used to enable compression of the dump file set.

It accepts values as:- COMPRESSION=[ALL | DATA_ONLY | METADATA_ONLY | NONE]

  • ALL: Enables compression for the entire export operation.
  • DATA_ONLY: Enables compression for all data being written to the dump file.
  • METADATA_ONLY: Enables compression for all metadata being written to the dump file.
  • NONE: Disables compression for the entire export operation.

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST COMPRESSION=DATA_ONLY LOGFILE=test.log

COMPRESSION_ALGORITHM – To specify the compression algorithm to be used.

It accepts values as:- COMPRESSION_ALGORITHM=[BASIC | LOW | MEDIUM | HIGH]

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST COMPRESSION=DATA_ONLY COMPRESSION_ALGORITHM=HIGH LOGFILE=test.log

DIRECTORY – Used to specify the Oracle directory object where the MCP will write the dump files; the user invoking expdp must therefore have read/write access on this Oracle directory.
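If the directory object does not exist yet, it can be created and granted as follows; this is only a sketch, the OS path and the grantee are placeholders:

CREATE OR REPLACE DIRECTORY data_pump_dir AS '/u01/app/oracle/dpdump';
GRANT READ, WRITE ON DIRECTORY data_pump_dir TO test_user;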

DUMPFILE – This is the name of the dump file that is created in the DIRECTORY location. The substitution variable %U can be used in the dump file name; it is required if you use the FILESIZE or PARALLEL parameters.

LOGFILE – This clause is used to specify the logfile name which is created in the directory specified by DIRECTORY clause.

If this clause is not specified, the default log file name of an export job is export.log.

ENCRYPTION – This is used to encrypt part or all of the dump file.

It accepts values as:- ENCRYPTION = [ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE]

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST ENCRYPTION=data_only ENCRYPTION_PASSWORD=T3sT LOGFILE=test.log

ENCRYPTION_ALGORITHM – Specifies which cryptographic algorithm should be used to perform the encryption.

It accepts values as:- [AES128 | AES192 | AES256]

expdp user/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=test.dmp SCHEMAS=TEST ENCRYPTION=data_only ENCRYPTION_PASSWORD=T3sT ENCRYPTION_ALGORITHM=AES128 LOGFILE=test.log

Hope this helps in understanding the expdp arguments. In case you have any issues applying these clauses, or any other query regarding Data Pump, please comment below and we’ll try to assist you.

8 thoughts on “Datapump Export (Everything you need to know)”

  1. Hello all,

    V 12.1

    I have been trying to improve the performance of our exports and have noticed some really strange behavior in the span of a month with them.
    I cannot seem to set a benchmark on how long it should take.
    One of them took 5 mins and the following one took over 40 mins with the same degree of parallelism.

    Starting "SYS"."SYS_EXPORT_SCHEMA_01": userid="sys/********@PRD01 AS SYSDBA" schemas=xxxx directory=DATAPUMP_DIR dumpfile=tes_ramep20180705%u.dmp logfile=tes_ramep20180705_exp.log parallel=4
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 10.05 GB

    Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded

    Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:
    E:\ORACLE\DATAPUMP_DIR\TES_RAMEP2018070501.DMP
    E:\ORACLE\DATAPUMP_DIR\TES_RAMEP2018070502.DMP
    E:\ORACLE\DATAPUMP_DIR\TES_RAMEP2018070503.DMP
    E:\ORACLE\DATAPUMP_DIR\TES_RAMEP2018070504.DMP
    Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at Thu Jul 5 10:43:39 2018 elapsed 0 00:05:59

    ============================================================================================================

    Starting "SYS"."SYS_EXPORT_SCHEMA_01": "/******** AS SYSDBA" dumpfile=tes_ra_20180905_0930_%U.dmp logfile=tes_ra_201809050930.log directory=DATAPUMP_DIR schemas=tes_ra parallel=4
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 11.14 GB

    Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded

    Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:
    E:\ORACLE\DATAPUMP_DIR\TES_RA_20180905_0930_01.DMP
    E:\ORACLE\DATAPUMP_DIR\TES_RA_20180905_0930_02.DMP
    E:\ORACLE\DATAPUMP_DIR\TES_RA_20180905_0930_03.DMP
    E:\ORACLE\DATAPUMP_DIR\TES_RA_20180905_0930_04.DMP
    Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at Wed Sep 5 09:09:06 2018 elapsed 0 00:41:58

    ===========================================================================================================

    The command is basically the same, and I understand there is a bit more data, but that much more time does not seem normal.
    Any ideas?

    Much appreciated

    • Hi John,

      Data Pump export performance also depends on the load on the database as well as on IO contention on the disks where the dump files are created.
      Can you please try two exports during off-business hours, with a similar load on the database, and then compare the performance?

      Regards
      Nitish

      • thx for replying! It is very strange as some seem to say it's related to the login parameters?!
        as seen above in the log

      • Once you submit the Data Pump job a master table ("SYS"."SYS_EXPORT_SCHEMA_01") is created; after that the login method is out of the picture, and in both cases your user is SYS, so it can't be due to the login.
        Somehow the worker processes are being slowed down, and since the environment & Data Pump parameters are the same, it can only be the database load or the IO contention.

        Regards
        Nitish
