Data Pump (expdp, impdp) Enhancements in Oracle Database 21c
This article provides an overview of the main Data Pump enhancements in Oracle Database 21c.
- Setup
- JSON Data Type Support
- CHECKSUM, CHECKSUM_ALGORITHM, VERIFY_ONLY and VERIFY_CHECKSUM Parameters
- INCLUDE and EXCLUDE in the Same Operation
- Index Compression
- Transportable Tablespace Enhancements
- Export from Oracle Autonomous Database
- Export to and Import From Cloud Object Stores
Related articles.
- Oracle Cloud : Autonomous Database (ADW or ATP) - Export Data to an Object Store (expdp)
- Data Pump Export (expdp) to and Import (impdp) From Cloud Object Stores in Oracle Database 21c
- Data Pump Checksums in Oracle Database 21c
- Data Pump (expdp, impdp) : All Articles
- Data Pump Quick Links : 10g, 11g, 12cR1, 12cR2, 18c, 19c, 21c, Transportable Tablespaces
Setup
Create a test user in your pluggable database.
conn sys/SysPassword1@//localhost:1521/pdb1 as sysdba

--drop user testuser1 cascade;

create user testuser1 identified by testuser1 quota unlimited on users;
grant connect, resource to testuser1;
grant select_catalog_role to testuser1;
Create a new directory object and grant access to the test user.
create or replace directory tmp_dir as '/tmp/';
grant read, write on directory tmp_dir to testuser1;
Create and populate the following table in your test schema.
conn testuser1/testuser1@//localhost:1521/pdb1

-- drop table t1 purge;

create table t1 (
  id         number generated always as identity,
  json_data  json,
  constraint ta_pk primary key (id)
);

insert into t1 (json_data) values (json('{"fruit":"apple","quantity":10}'));
insert into t1 (json_data) values (json('{"fruit":"orange","quantity":20}'));
commit;
JSON Data Type Support
The export and import utilities include support for the new JSON data type.
The following example exports the T1 table using the expdp utility. Remember, the T1 table contains a column defined using the new JSON data type.
$ expdp testuser1/testuser1@//localhost:1521/pdb1 \
    tables=t1 \
    directory=tmp_dir \
    dumpfile=t1.dmp \
    logfile=expdp_t1.log \
    exclude=statistics

Export: Release 21.0.0.0.0 - Production on Sun Sep 5 08:41:15 2021
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Starting "TESTUSER1"."SYS_EXPORT_TABLE_01":  testuser1/********@//localhost:1521/pdb1 tables=t1 directory=tmp_dir dumpfile=t1.dmp logfile=expdp_t1.log exclude=statistics
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/IDENTITY_COLUMN
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "TESTUSER1"."T1"                            6.070 KB       2 rows
Master table "TESTUSER1"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TESTUSER1.SYS_EXPORT_TABLE_01 is:
  /tmp/t1.dmp
Job "TESTUSER1"."SYS_EXPORT_TABLE_01" successfully completed at Sun Sep 5 08:41:45 2021 elapsed 0 00:00:28

$
We import the dump file, remapping the table name to T1_COPY.
$ impdp testuser1/testuser1@//localhost:1521/pdb1 \
    tables=t1 \
    directory=tmp_dir \
    dumpfile=t1.dmp \
    logfile=impdp_t1.log \
    remap_table=testuser1.t1:t1_copy

Import: Release 21.0.0.0.0 - Production on Sun Sep 5 08:46:32 2021
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Master table "TESTUSER1"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "TESTUSER1"."SYS_IMPORT_TABLE_01":  testuser1/********@//localhost:1521/pdb1 tables=t1 directory=tmp_dir dumpfile=t1.dmp logfile=impdp_t1.log remap_table=testuser1.t1:t1_copy
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TESTUSER1"."T1_COPY"                       6.070 KB       2 rows
Processing object type TABLE_EXPORT/TABLE/IDENTITY_COLUMN
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
ORA-31684: Object type CONSTRAINT:"TESTUSER1"."TA_PK" already exists
Job "TESTUSER1"."SYS_IMPORT_TABLE_01" completed with 1 error(s) at Sun Sep 5 08:46:39 2021 elapsed 0 00:00:05

$
CHECKSUM, CHECKSUM_ALGORITHM, VERIFY_ONLY and VERIFY_CHECKSUM Parameters
Calculating checksums takes time. The bigger the dump file, the more effort it takes to compute a checksum.
The CHECKSUM and CHECKSUM_ALGORITHM parameters have been added so we can detect tampering with the data in dump files while they are at rest on disk. If we set the CHECKSUM_ALGORITHM parameter, the CHECKSUM parameter defaults to YES. If neither is set, the CHECKSUM parameter defaults to NO. The CHECKSUM_ALGORITHM parameter can be set to CRC32, SHA256, SHA384 or SHA512, with SHA256 being the default.
In the following example we enable the CHECKSUM parameter, and explicitly set the CHECKSUM_ALGORITHM parameter to its default value for a schema export.
$ expdp testuser1/testuser1@//localhost:1521/pdb1 \
    schemas=testuser1 \
    directory=tmp_dir \
    dumpfile=testuser1.dmp \
    logfile=expdp_testuser1.log \
    exclude=statistics \
    checksum=yes \
    checksum_algorithm=SHA256

Export: Release 21.0.0.0.0 - Production on Sun Sep 5 08:58:55 2021
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Starting "TESTUSER1"."SYS_EXPORT_SCHEMA_01":  testuser1/********@//localhost:1521/pdb1 schemas=testuser1 directory=tmp_dir dumpfile=testuser1.dmp logfile=expdp_testuser1.log exclude=statistics checksum=yes checksum_algorithm=SHA256
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/IDENTITY_COLUMN
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "TESTUSER1"."T1"                            6.070 KB       2 rows
. . exported "TESTUSER1"."T1_COPY"                       6.078 KB       2 rows
Master table "TESTUSER1"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Generating checksums for dump file set
******************************************************************************
Dump file set for TESTUSER1.SYS_EXPORT_SCHEMA_01 is:
  /tmp/testuser1.dmp
Job "TESTUSER1"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 5 08:59:38 2021 elapsed 0 00:00:41

$
We can validate the checksums of a dump file using the VERIFY_ONLY parameter.
$ impdp testuser1/testuser1@//localhost:1521/pdb1 \
    directory=tmp_dir \
    dumpfile=testuser1.dmp \
    verify_only=yes

Import: Release 21.0.0.0.0 - Production on Sun Sep 5 09:10:55 2021
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Verifying dump file checksums
Master table "TESTUSER1"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
dump file set is complete
verified checksum for dump file "/tmp/testuser1.dmp"
dump file set is consistent
Job "TESTUSER1"."SYS_IMPORT_FULL_01" successfully completed at Sun Sep 5 09:10:57 2021 elapsed 0 00:00:01

$
We use the VERIFY_CHECKSUM parameter to verify the checksums during an import. If the verification fails, the import doesn't take place. If we don't use the VERIFY_CHECKSUM parameter, the import continues even if a checksum is incorrect.
$ impdp testuser1/testuser1@//localhost:1521/pdb1 \
    tables=t1 \
    directory=tmp_dir \
    dumpfile=testuser1.dmp \
    logfile=impdp_t1_copy_again.log \
    remap_table=testuser1.t1:t1_copy_again \
    verify_checksum=yes

Import: Release 21.0.0.0.0 - Production on Sun Sep 5 09:16:24 2021
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Verifying dump file checksums
Master table "TESTUSER1"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "TESTUSER1"."SYS_IMPORT_TABLE_01":  testuser1/********@//localhost:1521/pdb1 tables=t1 directory=tmp_dir dumpfile=testuser1.dmp logfile=impdp_t1_copy_again.log remap_table=testuser1.t1:t1_copy_again verify_checksum=yes
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TESTUSER1"."T1_COPY_AGAIN"                 6.070 KB       2 rows
Processing object type SCHEMA_EXPORT/TABLE/IDENTITY_COLUMN
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
ORA-31684: Object type CONSTRAINT:"TESTUSER1"."TA_PK" already exists
Job "TESTUSER1"."SYS_IMPORT_TABLE_01" completed with 1 error(s) at Sun Sep 5 09:16:30 2021 elapsed 0 00:00:04

$
INCLUDE and EXCLUDE in the Same Operation
In Oracle Database 21c the INCLUDE and EXCLUDE parameters can be used in the same operation. In previous releases they were mutually exclusive.
The following example combines the INCLUDE and EXCLUDE parameters in a single command. We have to escape some of the quotes for the command line.
$ expdp testuser1/testuser1@//localhost:1521/pdb1 \
    schemas=testuser1 \
    directory=tmp_dir \
    dumpfile=testuser1.dmp \
    logfile=expdp_testuser1.log \
    include="table:\"in ('T1')\"" \
    exclude="table:\"in ('T1_COPY','T1_COPY_AGAIN')\"" \
    exclude=statistics

Export: Release 21.0.0.0.0 - Production on Sun Sep 5 10:54:03 2021
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Starting "TESTUSER1"."SYS_EXPORT_SCHEMA_01":  testuser1/********@//localhost:1521/pdb1 schemas=testuser1 directory=tmp_dir dumpfile=testuser1.dmp logfile=expdp_testuser1.log include=table:"in ('T1')" exclude=table:"in ('T1_COPY','T1_COPY_AGAIN')" exclude=statistics
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/IDENTITY_COLUMN
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "TESTUSER1"."T1"                            6.070 KB       2 rows
Master table "TESTUSER1"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TESTUSER1.SYS_EXPORT_SCHEMA_01 is:
  /tmp/testuser1.dmp
Job "TESTUSER1"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Sep 5 10:54:31 2021 elapsed 0 00:00:27

$
Index Compression
In Oracle Database 21c we can optionally compress indexes on import using the TRANSFORM parameter and the INDEX_COMPRESSION_CLAUSE.
Create a test table with some indexes.
conn testuser1/testuser1@//localhost:1521/pdb1

-- drop table t2 purge;

create table t2 as
select level as id,
       'Description for ' || level as col1,
       case mod(level, 2)
         when 0 then 'one'
         else 'two'
       end as col2,
       trunc(dbms_random.value(0,10)) as col3,
       trunc(dbms_random.value(0,20)) as col4
from   dual
connect by level <= 10000;

alter table t2 add constraint t2_pk primary key (id);

create index t2_col1_idx on t2(col1);
create index t2_col2_idx on t2(col2);
create index t2_col3_idx on t2(col3);
create index t2_col4_idx on t2(col4);
Check the compression for the table and indexes.
select compression from user_tables where table_name = 'T2';

COMPRESS
--------
DISABLED

SQL>

column index_name format a12

select index_name, compression from user_indexes where table_name = 'T2' order by 1;

INDEX_NAME   COMPRESSION
------------ -------------
T2_COL1_IDX  DISABLED
T2_COL2_IDX  DISABLED
T2_COL3_IDX  DISABLED
T2_COL4_IDX  DISABLED
T2_PK        DISABLED

SQL>
Export the table.
$ expdp testuser1/testuser1@//localhost:1521/pdb1 \
    tables=t2 \
    directory=tmp_dir \
    dumpfile=t2.dmp \
    logfile=expdp_t2.log \
    exclude=statistics

Export: Release 21.0.0.0.0 - Production on Sun Sep 5 11:57:18 2021
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Starting "TESTUSER1"."SYS_EXPORT_TABLE_01":  testuser1/********@//localhost:1521/pdb1 tables=t2 directory=tmp_dir dumpfile=t2.dmp logfile=expdp_t2.log exclude=statistics
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "TESTUSER1"."T2"                            384.8 KB   10000 rows
Master table "TESTUSER1"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TESTUSER1.SYS_EXPORT_TABLE_01 is:
  /tmp/t2.dmp
Job "TESTUSER1"."SYS_EXPORT_TABLE_01" successfully completed at Sun Sep 5 11:57:35 2021 elapsed 0 00:00:14

$
Remove the table, so we can reimport it.
conn testuser1/testuser1@//localhost:1521/pdb1

drop table t2 purge;
Import the table from the dump file, using the TRANSFORM parameter to compress the table with the TABLE_COMPRESSION_CLAUSE and the indexes with the INDEX_COMPRESSION_CLAUSE.
$ impdp testuser1/testuser1@//localhost:1521/pdb1 \
    tables=t2 \
    directory=tmp_dir \
    dumpfile=t2.dmp \
    logfile=impdp_t2.log \
    transform=table_compression_clause:\"compress basic\" \
    transform=index_compression_clause:\"compress advanced low\"

Import: Release 21.0.0.0.0 - Production on Sun Sep 5 12:02:22 2021
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Master table "TESTUSER1"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "TESTUSER1"."SYS_IMPORT_TABLE_01":  testuser1/********@//localhost:1521/pdb1 tables=t2 directory=tmp_dir dumpfile=t2.dmp logfile=impdp_t2.log transform=table_compression_clause:"compress basic" transform=index_compression_clause:"compress advanced low"
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TESTUSER1"."T2"                            384.8 KB   10000 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Job "TESTUSER1"."SYS_IMPORT_TABLE_01" successfully completed at Sun Sep 5 12:02:29 2021 elapsed 0 00:00:05

$
Check the compression for the table and indexes.
conn testuser1/testuser1@//localhost:1521/pdb1

select compression from user_tables where table_name = 'T2';

COMPRESS
--------
ENABLED

SQL>

column index_name format a12

select index_name, compression from user_indexes where table_name = 'T2' order by 1;

INDEX_NAME   COMPRESSION
------------ -------------
T2_COL1_IDX  ADVANCED LOW
T2_COL2_IDX  ADVANCED LOW
T2_COL3_IDX  ADVANCED LOW
T2_COL4_IDX  ADVANCED LOW
T2_PK        DISABLED

SQL>
We can see both the table and indexes have now been compressed.
You can read the full description of the TRANSFORM parameter here. For information about index compression see the CREATE INDEX statement here.
Transportable Tablespace Enhancements
In Oracle 21c transportable tablespace exports (expdp) and imports (impdp) can now use the PARALLEL parameter to parallelize the operations.
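As a minimal sketch, a parallel transportable tablespace export might look like the following. The tablespace name (MY_TS) and the degree of parallelism are hypothetical values, not from a real system.

```shell
# Sketch only: the tablespace name (MY_TS) and parallel degree are hypothetical.
# The tablespace must be made read only before a transportable tablespace export.
# The %U substitution variable lets parallel workers write separate dump files.
expdp system/SysPassword1@//localhost:1521/pdb1 \
    transport_tablespaces=my_ts \
    parallel=4 \
    directory=tmp_dir \
    dumpfile=my_ts_%U.dmp \
    logfile=expdp_my_ts.log
```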
In Oracle 21c Data Pump can resume failed transportable tablespace jobs at, or near, the point of failure. In previous releases transportable tablespace jobs could not be resumed.
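If a transportable tablespace job does stop, we can attach to it and resume it from the interactive prompt, using the standard Data Pump ATTACH workflow. The job name below is a placeholder.

```shell
# Sketch only: the job name is a placeholder. ATTACH reconnects to a stopped job.
expdp system/SysPassword1@//localhost:1521/pdb1 \
    attach=SYS_EXPORT_TRANSPORTABLE_01

# At the interactive prompt, restart the job.
# Export> continue_client
```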
Export from Oracle Autonomous Database
We can use a local Oracle 21.3 installation to export data from an autonomous database to an object store using the expdp utility. You can read about this functionality in this article.
Export to and Import From Cloud Object Stores
Data Pump supports cloud object stores as a dump file location for on-prem installations from Oracle 21c onward.
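The linked article below covers the detail, but as a rough sketch, the dump file name becomes an object store URI and the CREDENTIAL parameter references a previously created database credential. The credential name and bucket URI here are placeholders.

```shell
# Sketch only: the credential (my_cred) and object store URI are placeholders.
# The credential must already exist in the database
# (for example, created with DBMS_CLOUD.CREATE_CREDENTIAL).
expdp testuser1/testuser1@//localhost:1521/pdb1 \
    tables=t1 \
    credential=my_cred \
    dumpfile=https://objectstorage.region.oraclecloud.com/n/namespace/b/bucket/o/t1.dmp \
    logfile=expdp_t1_cloud.log \
    directory=tmp_dir
```

Note the log file is still written to a local directory object, since only the dump files go to the object store.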
For more information see:
- Oracle Database Utilities 21c
- Oracle Cloud : Autonomous Database (ADW or ATP) - Export Data to an Object Store (expdp)
- Data Pump Export (expdp) to and Import (impdp) From Cloud Object Stores in Oracle Database 21c
- Data Pump Checksums in Oracle Database 21c
- Data Pump (expdp, impdp) : All Articles
- Data Pump Quick Links : 10g, 11g, 12cR1, 12cR2, 18c, 19c, 21c, Transportable Tablespaces
Hope this helps. Regards Tim...