impdp tables, remap_table, transform: no archive logs generated

Data Pump import does not import table statistics when the remap_table parameter is used.

In this test case, the statistics of table t1 were not imported into table t2 when impdp was run with the remap_table option.

conn / as sysdba

drop user tc cascade;

create user tc identified by tc;

grant dba to tc;

conn tc/tc

create table t1 (c1 number);

insert into t1 values(1);

commit;

exec dbms_stats.gather_table_stats('TC', 'T1', cascade=>TRUE);

host expdp tc/tc tables=t1 reuse_dumpfiles=yes
host impdp tc/tc remap_table=t1:t2

A generic form with multiple remappings in one run:

impdp <SCHEMA> tables=t1,t2 dumpfile=<DUMPFILE> remap_table=t1:t2,t2:t3 table_exists_action=truncate

column table_name format a10

select table_name,num_rows,last_analyzed from dba_tables where owner='TC';

Test Results

SQL> select table_name,num_rows,last_analyzed from dba_tables where owner='TC';

TABLE_NAME   NUM_ROWS LAST_ANAL
---------- ---------- ---------
T1                  1 24-AUG-21
T2                              <<== NUM_ROWS and LAST_ANALYZED are NULL
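If statistics are needed on the remapped table, a simple workaround (not part of the original note; just a sketch using the same DBMS_STATS call as above) is to gather them on T2 after the import and re-check:

exec dbms_stats.gather_table_stats('TC', 'T2', cascade=>TRUE);

select table_name,num_rows,last_analyzed from dba_tables where owner='TC';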


create table t1 (colu varchar2(5));

create table t2 (colu varchar2(5));

insert into t1 values ('1');

insert into t2 values ('2');

SQL> host expdp system/oracle tables=t1,t2 reuse_dumpfiles=yes

Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:

/u01/app/oracle/product/19.0.0/db_1/rdbms/log/expdat.dmp

Job "SYSTEM"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at Fri Apr 12 04:01:23 2024 elapsed 0 00:00:08


drop table t1;

drop table t2;

create table t2 (colu varchar2(5));

create table t3 (colu varchar2(5));

host impdp system/oracle remap_table=t1:t2,t2:t3 table_exists_action=truncate

-- With the default directory settings, the read fails with ORA-31640: unable to open dump file "/u01/app/oracle/product/19.0.0/db_1/rdbms/log/expdat.dmp" for read
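A common way to avoid the ORA-31640 (a sketch, assuming a directory object such as DATA_PUMP_DIR exists and the dump file is written there) is to pass DIRECTORY and DUMPFILE explicitly on both the export and the import:

host expdp system/oracle tables=t1,t2 directory=DATA_PUMP_DIR dumpfile=t1_t2.dmp reuse_dumpfiles=yes

host impdp system/oracle directory=DATA_PUMP_DIR dumpfile=t1_t2.dmp remap_table=t1:t2,t2:t3 table_exists_action=truncate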

GOAL

To use multiple TRANSFORM options in one IMPDP process.

SOLUTION

Multiple TRANSFORM options can be specified as a single comma-separated list.

For example, to use both "OID:N" and "DISABLE_ARCHIVE_LOGGING:Y", the IMPDP syntax is:

IMPDP <username>/<password> tables=<owner>.<table_name> directory=<directory_name> transform=OID:N,DISABLE_ARCHIVE_LOGGING:Y
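A hypothetical concrete invocation (SCOTT, EMP, DATA_PUMP_DIR, and emp.dmp are placeholder names chosen for illustration):

impdp scott/<password> tables=scott.emp directory=DATA_PUMP_DIR dumpfile=emp.dmp transform=oid:n,disable_archive_logging:y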

SYMPTOMS

Data Pump import parameter:

transform=disable_archive_logging:y

does not have the desired effect when used together with the remap_table parameter.

CHANGES

CAUSE

The issue was investigated in unpublished Bug 29425370 - DATA PUMP PARAMETER DISABLE_ARCHIVE_LOGGING:Y DOES NOT HAVE THE DESIRED EFFECT FOR REMAP_TABLE, and the Development Team determined that the cause is product defect Bug 29792959 - IMPDP WITH TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y GENERATES ARCHIVES, which is fixed in 23.1.

If DISABLE_ARCHIVE_LOGGING:Y is used, the worker process executes an ALTER TABLE statement to disable logging before loading the data, e.g.:

ALTER TABLE "<SCHEMA_NAME>"."<TABLE_NAME>" NOLOGGING;

But in the case of a remap, it runs the ALTER TABLE statement against the old (source) table name only, e.g.:

impdp .... remap_table=<TABLE_NAME>:<TARGET_TABLE_NAME>

ALTER TABLE "TEST1"."T1" NOLOGGING;

whereas the correct ALTER TABLE should be:

ALTER TABLE "<SCHEMA_NAME>"."<TARGET_TABLE_NAME>" NOLOGGING;

The fix for unpublished Bug 29792959 corrects this behavior.
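On an affected version, you can confirm which table was actually switched to NOLOGGING by checking the LOGGING attribute in DBA_TABLES (substitute the real schema name):

select table_name, logging from dba_tables where owner = '<SCHEMA_NAME>';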

During Data Pump import the following error is encountered:

ORA-39083: Object type INDEX:"CAL"."IDX_MESSAGE" failed to create with error:

ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

Failing sql is:

ALTER INDEX "CAL"."IDX_MESSAGE" LOGGING

CHANGES

The parameter TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y is used.

CAUSE

Using DISABLE_ARCHIVE_LOGGING:Y in the impdp command disables logging (redo generation) for the tables and indexes being imported.

The sequence of errors shows ORA-00054 (resource busy) only for the ALTER INDEX ... LOGGING statements issued by impdp.

  1. You are disabling logging with DISABLE_ARCHIVE_LOGGING:Y. (Disable redo generation)

  2. You are altering the index to LOGGING in the same session. (Force redo generation)

These two actions contradict each other and cause the error.
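The resulting logging attributes of the imported indexes can be inspected with a standard dictionary query (the schema name CAL is taken from the error message above):

select index_name, logging from dba_indexes where owner = 'CAL';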

SOLUTION

  1. Do not use DISABLE_ARCHIVE_LOGGING:Y, because in case of database failure or corruption you may need to recover data. Recovery is done by applying redo information. Using DISABLE_ARCHIVE_LOGGING:Y disables redo generation, so when recovery is needed the affected objects cannot be recovered and the database would have to be recreated. When the parameter DISABLE_ARCHIVE_LOGGING:Y is not used, the ORA-00054: resource busy error is not observed.

  2. If DISABLE_ARCHIVE_LOGGING:Y must be used, restrict its impact to tables by using DISABLE_ARCHIVE_LOGGING:Y:TABLE, as shown below. This disables redo generation for tables only, and the indexes (with LOGGING) are imported without the ORA-00054: resource busy error.
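For example (a sketch with placeholder connection, directory, and dump file names):

impdp <username>/<password> directory=<directory_name> dumpfile=<dumpfile_name> transform=disable_archive_logging:y:table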

The database on a DBCS instance is in FORCE LOGGING mode by default, and as per the documentation (https://docs.oracle.com/database/121/SUTIL/GUID-64FB67BD-EB67-4F50-A4D2-5D34518E6BDB.htm#SUTIL939) the DISABLE_ARCHIVE_LOGGING option does not disable logging when indexes and tables are created if FORCE LOGGING is enabled. To avoid generating archive logs during the import, turn off FORCE LOGGING:

SQL> alter database no force logging;

Once the data is imported, you can re-enable FORCE LOGGING:

SQL> alter database force logging;
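The current mode can be verified before and after the import; FORCE_LOGGING is a standard column of V$DATABASE:

SQL> select force_logging from v$database;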
