impdp with tables, remap_table and transform does not generate archive logs

Data Pump import does not import statistics when remap_table is used.

In the following test case, the statistics of table T1 were not imported into table T2 when the remap_table option was used with impdp.

conn / as sysdba

drop user tc cascade;

create user tc identified by tc;

grant dba to tc;

conn tc/tc

create table t1 (c1 number);

insert into t1 values(1);

commit;

exec dbms_stats.gather_table_stats('TC', 'T1', cascade=>TRUE);

host expdp tc/tc tables=t1 reuse_dumpfiles=yes
host impdp tc/tc remap_table=t1:t2

The general syntax for remapping multiple tables into existing tables is:

impdp <SCHEMA> tables=t1,t2 dumpfile=<DUMPFILE> remap_table=t1:t2,t2:t3 table_exists_action=truncate

column table_name format a10

select table_name,num_rows,last_analyzed from dba_tables where owner='TC';

Test Results

SQL> select table_name,num_rows,last_analyzed from dba_tables where owner='TC';

TABLE_NAME   NUM_ROWS LAST_ANAL
---------- ---------- ---------
T1                  1 24-AUG-21
T2                              <<== NULL
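Since the optimizer statistics are not carried over to the remapped table, a simple remedy (not part of the original test case, shown here as a sketch) is to regather them on the target table after the import and re-run the check query:

exec dbms_stats.gather_table_stats('TC', 'T2', cascade=>TRUE);

select table_name,num_rows,last_analyzed from dba_tables where owner='TC';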


A second test case remaps two tables at once with remap_table=t1:t2,t2:t3:

create table t1 (colu varchar2(5));

create table t2 (colu varchar2(5));

insert into t1 values ('1');

insert into t2 values ('2');

SQL> host expdp system/oracle tables=t1,t2 reuse_dumpfiles=yes

Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:

/u01/app/oracle/product/19.0.0/db_1/rdbms/log/expdat.dmp

Job "SYSTEM"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at Fri Apr 12 04:01:23 2024 elapsed 0 00:00:08


drop table t1;

drop table t2;

create table t2 (colu varchar2(5));

create table t3 (colu varchar2(5));

host impdp system/oracle remap_table=t1:t2,t2:t3 table_exists_action=truncate

-- With the default directory, impdp fails with ORA-31640: unable to open dump file "/u01/app/oracle/product/19.0.0/db_1/rdbms/log/expdat.dmp" for read
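One way around the ORA-31640 is to point impdp at the dump file explicitly through a directory object. The sketch below assumes a hypothetical directory object DMP_DIR created over the path where expdat.dmp was written:

conn / as sysdba

create directory dmp_dir as '/u01/app/oracle/product/19.0.0/db_1/rdbms/log';

host impdp system/oracle directory=dmp_dir dumpfile=expdat.dmp remap_table=t1:t2,t2:t3 table_exists_action=truncate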

GOAL

To use multiple TRANSFORM options in one IMPDP process.

SOLUTION

Multiple TRANSFORM options can be specified in a single TRANSFORM parameter, separated by commas.

For example, if the user wants to use both "OID:N" and "DISABLE_ARCHIVE_LOGGING:Y", the IMPDP syntax is:

IMPDP <username>/<password> tables=<owner>.<table_name> directory=<directory_name> transform=OID:N,DISABLE_ARCHIVE_LOGGING:Y

SYMPTOMS

The Data Pump import parameter

transform=disable_archive_logging:y

does not have the desired effect when used together with the remap_table parameter.
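A sketch of the affected combination (placeholders are illustrative, not taken from the note):

impdp <username>/<password> directory=<directory_name> dumpfile=<dumpfile_name> remap_table=<schema>.t1:t2 transform=disable_archive_logging:y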

CHANGES

CAUSE

The issue was addressed in unpublished Bug 29425370 - DATA PUMP PARAMETER DISABLE_ARCHIVE_LOGGING:Y DOES NOT HAVE THE DESIRED EFFECT FOR REMAP_TABLE, where the development team determined that its cause is product defect Bug 29792959 - IMPDP WITH TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y GENERATES ARCHIVES, fixed in 23.1.

If DISABLE_ARCHIVE_LOGGING:Y is used, the worker process executes an ALTER TABLE statement to disable logging before loading the data, e.g.:

ALTER TABLE "<SCHEMA_NAME>"."<TABLE_NAME>" NOLOGGING;

But in the case of a remap, it runs the ALTER TABLE statement against the old (source) table name only, e.g.:

impdp .... remap_table=<TABLE_NAME>:<TARGET_TABLE_NAME>

ALTER TABLE "TEST1"."T1" NOLOGGING;

whereas the correct ALTER TABLE should be:

ALTER TABLE "<SCHEMA_NAME>"."<TARGET_TABLE_NAME>" NOLOGGING;

The fix for unpublished Bug 29792959 corrects this behavior.
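Until that fix (or a backport) is applied, one possible workaround, shown here only as a sketch that is not part of the note, is to pre-create the remap target and switch its logging attribute manually around the import; this only helps when the database is not in FORCE LOGGING mode:

alter table "<SCHEMA_NAME>"."<TARGET_TABLE_NAME>" nologging;

impdp .... remap_table=<TABLE_NAME>:<TARGET_TABLE_NAME> table_exists_action=append transform=disable_archive_logging:y

alter table "<SCHEMA_NAME>"."<TARGET_TABLE_NAME>" logging;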

During DataPump import the following error is encountered:

ORA-39083: Object type INDEX:"CAL"."IDX_MESSAGE" failed to create with error:

ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

Failing sql is:

ALTER INDEX "CAL"."IDX_MESSAGE" LOGGING

CHANGES

The parameter TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y is used.

CAUSE

Using DISABLE_ARCHIVE_LOGGING:Y in the impdp command disables logging (redo generation) for the tables and indexes being imported.

The sequence of errors shows the ORA-00054 (resource busy) error only for the ALTER INDEX ... LOGGING statements issued by impdp:

  1. Logging is disabled with DISABLE_ARCHIVE_LOGGING:Y (redo generation suppressed).

  2. The index is then altered to LOGGING in the same session (redo generation forced).

These two actions contradict each other and cause the error.

SOLUTION

  1. Do not use DISABLE_ARCHIVE_LOGGING:Y. In case of database failure or corruption you may need to recover data, and recovery is done by applying redo information. With DISABLE_ARCHIVE_LOGGING:Y no redo is generated for the imported objects, so they cannot be recovered and the full database would have to be recreated. When the parameter DISABLE_ARCHIVE_LOGGING:Y is not used, the ORA-00054 (resource busy) error is not observed.

  2. If DISABLE_ARCHIVE_LOGGING:Y must be used, restrict its impact to tables by using DISABLE_ARCHIVE_LOGGING:Y:TABLE. This disables redo generation for tables only, so the indexes are imported with LOGGING and the ORA-00054 (resource busy) error does not occur; see the example after this list.
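Following the documented TRANSFORM=name:value[:object_type] form, the restricted option looks like this (placeholders are illustrative):

impdp <username>/<password> directory=<directory_name> dumpfile=<dumpfile_name> transform=disable_archive_logging:y:table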

The database on a DBCS instance is in FORCE LOGGING mode by default. As per the documentation (https://docs.oracle.com/database/121/SUTIL/GUID-64FB67BD-EB67-4F50-A4D2-5D34518E6BDB.htm#SUTIL939), the DISABLE_ARCHIVE_LOGGING option does not disable logging when indexes and tables are created if FORCE LOGGING is enabled. To avoid archive logs being generated during the import, turn off FORCE LOGGING:

SQL> alter database no force logging;

Once the data is imported, re-enable FORCE LOGGING:

SQL> alter database force logging;
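The current mode can be verified before and after the import with a quick query (not part of the original note):

SQL> select force_logging from v$database;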
