LogMiner redo log


Simple steps for mining the redo logs, for instance, to troubleshoot excessive redo generation

SOLUTION

  1. Enable supplemental logging for the database.
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
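
To verify that minimal supplemental logging is now enabled, you can query v$database; the SUPPLEMENTAL_LOG_DATA_MIN column should show YES:

SQL> SELECT supplemental_log_data_min FROM v$database;
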
  2. As SYSDBA, install the LogMiner package (if it is not already installed by default) from the following path:
SQL> @$ORACLE_HOME/rdbms/admin/dbmslm.sql

NOTE:

You can check whether LogMiner is already available using:
SQL> desc dbms_logmnr

  3. Create a list of logs by specifying the NEW option when executing the DBMS_LOGMNR.ADD_LOGFILE procedure.

NOTE:

You can query the available archived redo log files from v$archived_log
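
For instance, a query along these lines lists the candidate files (NAME, SEQUENCE#, and FIRST_TIME are standard v$archived_log columns):

SQL> SELECT name, sequence#, first_time
     FROM v$archived_log
     ORDER BY sequence#;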

For example, enter the following:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
     LOGFILENAME => '+DATA1/CDB/ARCHIVELOG/2024_04_13/thread_1_seq_242.566.1166174789', -
     OPTIONS => DBMS_LOGMNR.NEW);
  4. If desired, add more logs by specifying the ADDFILE option.
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
     LOGFILENAME => '+FRA/v1120/archivelog/2012_11_09/thread_1_seq_564.261.798899763', -
     OPTIONS => DBMS_LOGMNR.ADDFILE);
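
To confirm which files are currently registered with the LogMiner session, you can query v$logmnr_logs, for example:

SQL> SELECT filename FROM v$logmnr_logs;
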
  5. Start LogMiner.

Start LogMiner and specify the dictionary to use.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
     OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

NOTE:

When using DICT_FROM_ONLINE_CATALOG, the database must be open, and only redo for the latest versions of the tables can be mined.
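
If you need to mine redo for older table definitions, a documented alternative is to extract the dictionary into the redo stream before mining and then start LogMiner against it; a minimal sketch, assuming the database is open and in archivelog mode:

SQL> EXECUTE DBMS_LOGMNR_D.BUILD( -
     OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
     OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS);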

For more information, please refer to

Oracle® Database Utilities

11g Release 2 (11.2)

Chapter 19 Using LogMiner to Analyze Redo Log Files

http://docs.oracle.com/cd/E11882_01/server.112/e22490/logminer.htm#SUTIL019

  6. Query the V$LOGMNR_CONTENTS view.
SQL> SELECT username AS USR, 
     (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, 
     operation, 
     SQL_REDO, 
     SQL_UNDO 
     FROM V$LOGMNR_CONTENTS 
     WHERE username IN ('<Username>');

NOTE:

For other possible columns to query, please issue:

SQL> desc v$logmnr_contents
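
For example, adding the TIMESTAMP and SEG_NAME columns shows when and against which segment each change was made (the schema filter below is a placeholder):

SQL> SELECT timestamp, seg_owner, seg_name, operation, sql_redo
     FROM V$LOGMNR_CONTENTS
     WHERE seg_owner = '<Schema>';
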
  7. End the LogMiner session.

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

This article provides guidelines DBAs can use to determine which OPERATION codes are generating lots of redo information.

This article is intended for DBAs. The article assumes the reader is familiar with LogMiner and has basic skills in mining redo logs.

SOLUTION

--- How to determine the cause of lots of redo generation using LogMiner ---

Using OPERATION Codes to Understand Redo Information

There are multiple operation codes that can generate redo information. Using the following guidelines, you can identify which operation codes are causing high redo generation and take appropriate action to reduce it.

NOTE:

Redo records are not all equally sized. So remember that just because certain statements show up a lot in the LogMiner output, this does not guarantee that you have found the area of functionality generating the excessive redo.

What are these OPERATION codes?

  • INSERT / UPDATE / DELETE -- Operations performed on SYS objects are also considered internal operations.
  • COMMIT -- This is also an internal operation; you will see the line "commit;" in the SQL_REDO column.
  • START -- This is also an internal operation; you will see the line "set transaction read write;" in SQL_REDO.
  • INTERNAL -- Dictionary updates.
  • SELECT_FOR_UPDATE -- This is also an internal operation; Oracle generates redo information for SELECT statements that include a FOR UPDATE clause.

In general, INTERNAL operations are not relevant, so to query only the relevant data, filter on SEG_OWNER in the WHERE clause.

Examples:

How to extract relevant information from the view v$logmnr_contents?

  1. This SQL lists operations performed by user SCOTT

SQL> select distinct operation,username,seg_owner from v$logmnr_contents where seg_owner='SCOTT';

OPERATION  USERNAME  SEG_OWNER
---------  --------  ---------
DDL        SCOTT     SCOTT
DELETE     SCOTT     SCOTT
INSERT     SCOTT     SCOTT
UPDATE     SCOTT     SCOTT

  2. This SQL lists the undo and redo associated with operations that user SCOTT performed

SQL> select seg_owner,operation,sql_redo,sql_undo from v$logmnr_contents where SEG_owner='SCOTT';

SEG_OWNER: SCOTT   OPERATION: DDL
SQL_REDO:  create table LM1 (c1 number, c2 varchar2(10));

SEG_OWNER: SCOTT   OPERATION: INSERT
SQL_REDO:  insert into "SCOTT"."LM1"("C1","C2") values ('101','AAAA');
SQL_UNDO:  delete from "SCOTT"."LM1" where "C1" = '101' and "C2" = 'AAAA'
           and ROWID = 'AAAHfBAABAAAMUqAAA';

SEG_OWNER: SCOTT   OPERATION: UPDATE
SQL_REDO:  update "SCOTT"."LM1" set "C2" = 'YYY' where "C2" = 'EEE'
           and ROWID = 'AAAHfBAABAAAMUqAAE';
SQL_UNDO:  update "SCOTT"."LM1" set "C2" = 'EEE' where "C2" = 'YYY'
           and ROWID = 'AAAHfBAABAAAMUqAAE';

Remember: INSERT / UPDATE / DELETE operations performed on SYS objects are also considered internal operations, as the next example shows.

  3. This SQL lists undo and redo generated for UPDATE statements issued by user SCOTT

SQL> select username, seg_owner,operation,sql_redo,sql_undo from v$logmnr_contents where operation ='UPDATE' and USERNAME='SCOTT';

UNAME  SEG_OW  OPERATION  SQL_REDO                                 SQL_UNDO
-----  ------  ---------  ---------------------------------------  --------------
SCOTT  SYS     UPDATE     update "SYS"."OBJ$" set "OBJ#" = '1'...  update ....
SCOTT  SYS     UPDATE     update "SYS"."TSQ$" set "GRANTO.....     update .......
SCOTT  SYS     UPDATE     update "SYS"."SEG$" set "TYPE#" = '5'..  update......

As the result above shows, user SCOTT has updated SYS objects, so filtering on USERNAME may return misleading results. It is better to query v$logmnr_contents on SEG_OWNER.

  4. Identifying Operation Counts

Run the following query to see the row count per OPERATION code in v$logmnr_contents and understand which OPERATION code has generated the most redo information.

SQL> select operation,count(*) from v$logmnr_contents group by operation;

OPERATION            COUNT(*)
-----------------  ----------
COMMIT                  22236
DDL                         2
DELETE                      1
INSERT                     11
INTERNAL                   11
SELECT_FOR_UPDATE       32487
START                   22236
UPDATE                    480

8 rows selected.
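
To drill into a dominant operation code, such as SELECT_FOR_UPDATE above, you can group by segment to find the objects responsible; a sketch using standard v$logmnr_contents columns:

SQL> select seg_owner, seg_name, count(*)
     from v$logmnr_contents
     where operation = 'SELECT_FOR_UPDATE'
     group by seg_owner, seg_name
     order by count(*) desc;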

  5. Identifying User Counts

Run the following query to check user activity and operation counts:

SQL> select seg_owner,operation,count(*) from v$logmnr_contents group by seg_owner,operation;

SEG_OWNER  OPERATION  COUNT(*)
---------  ---------  --------
SCOTT      COMMIT        22236
SCOTT      DDL               2
SCOTT      DELETE            1
...
BILLY      COMMIT        12899
BILLY      DDL               5
BILLY      DELETE            2
...

NOTE:

Be aware of the following known issue:

If your application does not often use "select for update" statements, yet you find a high operation count for the SELECT_FOR_UPDATE operation code, you might be hitting a known issue.

To confirm this, check whether SQL_REDO shows select and update statements on AQ_QUEUE_TABLE_AFFINITIES and AQ_QUEUE_TABLES.

If you see these selects and updates, then check the value of the init.ora parameter AQ_TM_PROCESSES. The default value is AQ_TM_PROCESSES = 0, meaning that the queue monitor is not created.

If you are not using Advanced Queuing, set AQ_TM_PROCESSES back to zero to avoid excessive redo generation on the objects AQ_QUEUE_TABLE_AFFINITIES and AQ_QUEUE_TABLES.
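
To check and, if appropriate, reset the parameter, something like the following should work (the ALTER SYSTEM line is illustrative; verify the scope appropriate for your configuration):

SQL> show parameter aq_tm_processes

SQL> ALTER SYSTEM SET aq_tm_processes = 0;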

Even though FORCE_LOGGING is set to "NO" on the database, archived logs are still being generated when running a Data Pump import job with "transform=disable_archive_logging:y".

Why?

SOLUTION

This parameter is intended to reduce redo generation, not disable archive logging.

The operations stored in the redo logs are internal operations done by SYS. The tables involved are internal tables such as "SYS"."ACCESS", "SYS"."DEPENDENCY", "SYS"."PROCEDURE", "SYS"."SETTINGS", "SYS"."SOURCE", "SYS"."SEG", etc.

The internal operations must be archived in order to be recovered if the import is failing. This is considered an expected behavior.
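
To confirm the database-level logging mode mentioned above, you can query v$database (FORCE_LOGGING is a standard column there):

SQL> SELECT force_logging FROM v$database;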

Below is a scenario showing this behavior:

  1. Enable minimum supplemental logging:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

  2. Install LogMiner (if not installed):

SQL> @$ORACLE_HOME/rdbms/admin/dbmslm.sql

Note: You can simply check whether LogMiner is already available using

SQL> desc dbms_logmnr;

  3. Run an import to reproduce the issue:

> impdp <LOGIN>/<PASSWORD> directory=<DATA_PUMP_DIR> dumpfile=<DUMP_NAME>.dmp logfile=<LOG_NAME>.log schemas=<SCHEMA_NAME> transform=disable_archive_logging:y

  4. Identify the archived logs created during the import execution:

SQL> select name from v$archived_log;
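
If other archiving activity makes that list noisy, filtering by time can isolate the files written during the import; a sketch assuming the import ran within the last hour:

SQL> select name, first_time
     from v$archived_log
     where first_time > sysdate - 1/24
     order by first_time;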

  5. Create the list of logs by specifying the NEW and ADDFILE options when executing the DBMS_LOGMNR.ADD_LOGFILE procedure.

EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '<PATH>/<ARCHIVED_LOG_FILE1>',OPTIONS => DBMS_LOGMNR.NEW);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '<PATH>/<ARCHIVED_LOG_FILE2>', OPTIONS => DBMS_LOGMNR.ADDFILE);

  6. Start LogMiner:

EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

  7. Query V$LOGMNR_CONTENTS. You can also use the following query:

SQL> select
       count(*) total,
       sum(case when seg_owner = 'SYS' and seg_name like '%$' then 1 else 0 end) SYS_count,
       sum(case when seg_owner = 'SYS' and seg_name like '%$' then 1 else 0 end)/count(*)*100 SYS_percent
     from v$logmnr_contents
     where rowid is not null;

This shows the total number of DML operations, the number of DMLs that affect SYS-owned tables ending in '$' (which should be data dictionary tables), and the percentage of DMLs against data dictionary tables relative to the total.

The higher the percentage of operations by SYS against data dictionary tables, the more this note applies. Here is an example of the query output:

SQL> select
  2    count(*) total,
  3    sum(case when seg_owner = 'SYS' and seg_name like '%$' then 1 else 0 end) SYS_count,
  4    sum(case when seg_owner = 'SYS' and seg_name like '%$' then 1 else 0 end)/count(*)*100 SYS_percent
  5  from v$logmnr_contents where rowid is not null;

     TOTAL  SYS_COUNT SYS_PERCENT
---------- ---------- -----------
    201741       9577  4.74717583

Note: logging of SYS operations is only one possible reason for excess redo generation even when transform=disable_archive_logging:y is set and the fix for Bug 25143018 is installed. There are other possible reasons for this behavior.
