impdp tables remap_table transform does not generate archive logs

Data Pump import does not import optimizer statistics when the remap_table parameter is used.

In the following test case, the statistics of table t1 are not imported into table t2 when the remap_table option is used with impdp.

conn / as sysdba

drop user tc cascade;

create user tc identified by tc;

grant dba to tc;

conn tc/tc

create table t1 (c1 number);

insert into t1 values(1);

commit;

exec dbms_stats.gather_table_stats('TC', 'T1', cascade=>TRUE);

host expdp tc/tc tables=t1 reuse_dumpfiles=yes
host impdp tc/tc remap_table=t1:t2

The general form for remapping multiple tables in a single run:

impdp <SCHEMA> tables=t1,t2 dumpfile=<DUMPFILE> remap_table=t1:t2,t2:t3 table_exists_action=truncate

column table_name format a10

select table_name,num_rows,last_analyzed from dba_tables where owner='TC';

Test Results

SQL> select table_name,num_rows,last_analyzed from dba_tables where owner='TC';

TABLE_NAME   NUM_ROWS LAST_ANAL
---------- ---------- ---------
T1                  1 24-AUG-21
T2                              <<== NUM_ROWS and LAST_ANALYZED are NULL
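Because the statistics are not carried over to the remapped table, a simple workaround (a sketch, assuming regathering is acceptable for the table size) is to gather statistics on the target table after the import:

exec dbms_stats.gather_table_stats('TC', 'T2', cascade=>TRUE);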


A second test case, remapping two tables in one run:

create table t1 (colu varchar2(5));

create table t2 (colu varchar2(5));

insert into t1 values ('1');

insert into t2 values ('2');

SQL> host expdp system/oracle tables=t1,t2 reuse_dumpfiles=yes

Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:

/u01/app/oracle/product/19.0.0/db_1/rdbms/log/expdat.dmp

Job "SYSTEM"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at Fri Apr 12 04:01:23 2024 elapsed 0 00:00:08

SQL> host expdp system/oracle tables=t1,t2 reuse_dumpfiles=yes

/u01/app/oracle/product/19.0.0/db_1/rdbms/log/expdat.dmp

Job "SYSTEM"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at Fri Apr 12 04:01:32 2024 elapsed 0 00:00:04

drop table t1;

drop table t2;

create table t2 (colu varchar2(5));

create table t3 (colu varchar2(5));

host impdp system/oracle remap_table=t1:t2,t2:t3 table_exists_action=truncate

-- With the defaults (no directory or dumpfile specified), the read fails: ORA-31640: unable to open dump file "/u01/app/oracle/product/19.0.0/db_1/rdbms/log/expdat.dmp" for read
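One way to avoid ORA-31640 (a sketch, assuming the DATA_PUMP_DIR directory object points to where the dump file was written, or the file is copied there) is to name the directory and dump file explicitly on both the export and the import:

host expdp system/oracle tables=t1,t2 directory=DATA_PUMP_DIR dumpfile=expdat.dmp reuse_dumpfiles=yes

host impdp system/oracle directory=DATA_PUMP_DIR dumpfile=expdat.dmp remap_table=t1:t2,t2:t3 table_exists_action=truncate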

GOAL

To use multiple TRANSFORM options in one IMPDP process.

SOLUTION

Multiple TRANSFORM options can be specified as a comma-separated list.

For example, if the user wants to use "OID:N" and "DISABLE_ARCHIVE_LOGGING:Y" together, the IMPDP syntax will be:

IMPDP <username>/<password> tables=<owner>.<table_name> directory=<directory_name> transform=OID:N,DISABLE_ARCHIVE_LOGGING:Y

SYMPTOMS

Data Pump import parameter:

transform=disable_archive_logging:y

does not have the desired effect when used together with the remap_table parameter.
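A sketch of the kind of command that exhibits the symptom (directory, dump file, and table names are placeholders):

impdp <username>/<password> directory=<directory_name> dumpfile=<dumpfile_name> remap_table=<table_name>:<target_table_name> transform=DISABLE_ARCHIVE_LOGGING:Y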

CHANGES

CAUSE

The issue was addressed in unpublished Bug 29425370 - DATA PUMP PARAMETER DISABLE_ARCHIVE_LOGGING:Y DOES NOT HAVE THE DESIRED EFFECT FOR REMAP_TABLE, and the Development Team determined that its cause is product defect Bug 29792959 - IMPDP WITH TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y GENERATES ARCHIVES, fixed in 23.1.

If DISABLE_ARCHIVE_LOGGING:Y is used, the worker process executes an ALTER TABLE statement to disable logging before loading the data, e.g.:

ALTER TABLE "<SCHEMA_NAME>"."<TABLE_NAME>" NOLOGGING;

But in the case of a remap, it runs the ALTER TABLE statement against the old (source) table name only, e.g.:

impdp .... remap_table=<TABLE_NAME>:<TARGET_TABLE_NAME>

ALTER TABLE "TEST1"."T1" NOLOGGING;

but the correct ALTER TABLE should be:

ALTER TABLE "<SCHEMA_NAME>"."<TARGET_TABLE_NAME>" NOLOGGING;

The fix for unpublished Bug 29792959 corrects this behavior.
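One rough way to confirm whether an import ran with redo generation suppressed (a sketch, assuming an otherwise quiet database so the delta can be attributed to the import) is to compare the system-wide 'redo size' statistic before and after the job:

select name, value from v$sysstat where name = 'redo size';

-- run the impdp job, then repeat the query and compare the two values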

SYMPTOMS

During Data Pump import, the following error is encountered:

ORA-39083: Object type INDEX:"CAL"."IDX_MESSAGE" failed to create with error:

ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

Failing sql is:

ALTER INDEX "CAL"."IDX_MESSAGE" LOGGING

CHANGES

The parameter TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y is used.

CAUSE

Using DISABLE_ARCHIVE_LOGGING:Y in the impdp command disables logging (redo generation) for the tables and indexes being imported.

The sequence of errors shows the ORA-00054 (resource busy) error only for the ALTER INDEX ... LOGGING statements issued by impdp.

  1. You are disabling logging with DISABLE_ARCHIVE_LOGGING:Y (disabling redo generation).

  2. You are altering the index to LOGGING in the same session (forcing redo generation).

These two actions contradict each other and cause the error.

SOLUTION

  1. Do not use DISABLE_ARCHIVE_LOGGING:Y, because in case of database failure or corruption you may want to recover data. Recovery is done by applying redo information. Using DISABLE_ARCHIVE_LOGGING:Y disables redo generation, and when recovery is needed you will not be able to recover and will have to recreate the full database. When the parameter DISABLE_ARCHIVE_LOGGING:Y is not used, the error ORA-00054: resource busy is not observed.

  2. If DISABLE_ARCHIVE_LOGGING:Y must be used, restrict its impact to tables by using DISABLE_ARCHIVE_LOGGING:Y:TABLE (see the example below). This disables redo generation for tables only, and the indexes are imported with LOGGING without the error ORA-00054: resource busy.
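For example, the object-type-scoped form of the transform looks like this (directory and dump file names are placeholders):

impdp <username>/<password> directory=<directory_name> dumpfile=<dumpfile_name> transform=DISABLE_ARCHIVE_LOGGING:Y:TABLE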

The database on a DBCS instance is in FORCE LOGGING mode by default, and as per the documentation (https://docs.oracle.com/database/121/SUTIL/GUID-64FB67BD-EB67-4F50-A4D2-5D34518E6BDB.htm#SUTIL939), the DISABLE_ARCHIVE_LOGGING option will not disable logging when indexes and tables are created if FORCE LOGGING is enabled. To avoid archive logs being created, turn off FORCE LOGGING:

SQL> alter database no force logging;

Once the data is imported, you can re-enable FORCE LOGGING:

SQL> alter database force logging;
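To verify the setting before and after toggling it, query the FORCE_LOGGING column of V$DATABASE:

SQL> select force_logging from v$database;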
