An Introduction to LogMiner

Originally reposted from: http://blog.csdn.net/tianlesoftware/article/details/5604497

1. LogMiner Overview

 

Official Oracle documentation for LogMiner:

      Using LogMiner to Analyze Redo Log Files

       http://download.oracle.com/docs/cd/E11882_01/server.112/e16536/logminer.htm#SUTIL019

 

 

       LogMiner is a tool that Oracle has provided since 8i for analyzing redo log information. It consists of two packages, DBMS_LOGMNR and DBMS_LOGMNR_D (the trailing D stands for dictionary). It can analyze both online redo log files and archived log files. A data dictionary is required during analysis; normally a dictionary file is generated first and then used, and from 10g onward the online data dictionary can be used instead.

       LogMiner can also analyze redo log files from another database, but it must use the data dictionary of the database that produced those redo logs; otherwise the output is unreadable garbage. In addition, the operating system platform of the analyzed database should preferably be the same as the platform on which the LogMiner database runs, and the block size must match.

 

       LogMiner is a tool provided by Oracle Database for analyzing the transaction operations recorded in redo logs and archived logs. Typical uses include:

       (1) Determining when the database was logically corrupted. Suppose a user accidentally drops the important table sales with DROP TABLE. LogMiner can pinpoint the exact time and SCN of that operation, after which a time-based or SCN-based recovery can fully restore the table's data.

       (2) Determining fine-grained, transaction-level recovery actions. Suppose several users performed a series of DML operations on a table and committed, and one user's DML was wrong. LogMiner can retrieve any user's DML operations together with the corresponding UNDO statements; executing those UNDO statements reverses the erroneous changes (see the sketch after this list).

       (3) Performing after-the-fact auditing. LogMiner can track all DML, DDL, and DCL operations in an Oracle database, yielding the order in which they were executed, the users who executed them, and related information.
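
As an illustration of point (2): once a LogMiner session has been started (the full procedure is covered in section 2), the redo and undo SQL can be read from V$LOGMNR_CONTENTS. A minimal sketch; the SEG_OWNER/SEG_NAME filter values (SCOTT.DEPT) are placeholders, not part of the original example:

SQL> SELECT scn, operation, sql_redo, sql_undo FROM v$logmnr_contents WHERE seg_owner = 'SCOTT' AND seg_name = 'DEPT';

The SQL_UNDO column holds a statement that reverses the change described in SQL_REDO, which is what makes the fine-grained logical recovery described above possible.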

 

LogMiner is installed by running the following two scripts:

(1) Create DBMS_LOGMNR: $ORACLE_HOME/rdbms/admin/dbmslm.sql

SQL> @dbmslm.sql                -- The path must be correct; on my system it is: @D:\oracle\product\10.1.0\Db_1\RDBMS\ADMIN\dbmslm.sql

Package created.

Grant succeeded.

(2) Create DBMS_LOGMNR_D: $ORACLE_HOME/rdbms/admin/dbmslmd.sql

SQL> @dbmslmd.sql                                   -- On my system: @D:\oracle\product\10.1.0\Db_1\RDBMS\ADMIN\dbmslmd.sql

Procedure created.

No errors.

Grant succeeded.

PL/SQL procedure successfully completed.

Package created.

 

 

1.1 Datatypes and Table Storage Attributes Supported by LogMiner

LogMiner supports the following datatypes and table storage attributes:

(1).      CHAR

(2).      NCHAR

(3).      VARCHAR2 and VARCHAR

(4).      NVARCHAR2

(5).      NUMBER

(6).      DATE

(7).      TIMESTAMP

(8).      TIMESTAMP WITH TIME ZONE

(9).      TIMESTAMP WITH LOCAL TIME ZONE

(10).  INTERVAL YEAR TO MONTH

(11).  INTERVAL DAY TO SECOND

(12).  RAW

(13).  CLOB

(14).  NCLOB

(15).  BLOB

(16).  LONG

(17).  LONG RAW

(18).  BINARY_FLOAT

(19).  BINARY_DOUBLE

(20).  Index-organized tables (IOTs), including those with overflows or LOB columns

(21).  Function-based indexes

(22).  XMLTYPE data when it is stored in CLOB format

(23).  Tables using basic table compression and OLTP table compression

 

       Support for multibyte CLOBs is available only for redo logs generated by a database with compatibility set to a value of 10.1 or higher.

       Support for LOB and LONG datatypes is available only for redo logs generated by a database with compatibility set to a value of 9.2.0.0 or higher.

       Support for index-organized tables without overflow segment or with no LOB columns in them is available only for redo logs generated by a database with compatibility set to 10.0.0.0 or higher.

       Support for index-organized tables with overflow segment or with LOB columns is available only for redo logs generated by a database with compatibility set to 10.2.0.0 or higher.
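
Because each of these features depends on the COMPATIBLE setting of the database that generated the redo, it can be worth confirming that value before mining. A minimal check (standard SQL*Plus, nothing specific to LogMiner):

SQL> show parameter compatible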

 

1.2 Datatypes and Table Storage Attributes Not Supported by LogMiner

LogMiner does not support these datatypes and table storage attributes:

(1).      BFILE datatype

(2).      Simple and nested abstract datatypes (ADTs)

(3).      Collections (nested tables and VARRAYs)

(4).      Object refs

(5).      SecureFiles (unless database compatibility is set to 11.2 or higher)

 

1.3 Basic LogMiner Objects

     

       There are four basic objects in a LogMiner configuration that you should be familiar with: the source database, the mining database, the LogMiner dictionary, and the redo log files containing the data of interest:

       (1) The source database is the database that produces all the redo log files that you want LogMiner to analyze.

       (2) The mining database is the database that LogMiner uses when it performs the analysis.

       (3) The LogMiner dictionary allows LogMiner to provide table and column names, instead of internal object IDs, when it presents the redo log data that you request.

       LogMiner uses the dictionary to translate internal object identifiers and datatypes to object names and external data formats. Without a dictionary, LogMiner returns internal object IDs and presents data as binary data.

       -- The LogMiner dictionary translates internal object IDs and datatypes into object names and external data formats. When using LogMiner to analyze redo or archived logs, you should generate a LogMiner dictionary first; otherwise the analysis results will be unreadable.

 

For example, consider the following SQL statement:

       INSERT INTO HR.JOBS(JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY)  VALUES('IT_WT','Technical Writer', 4000, 11000);

 

Without the dictionary, LogMiner will display:

insert into "UNKNOWN"."OBJ# 45522"("COL 1","COL 2","COL 3","COL 4") values (HEXTORAW('45465f4748'),HEXTORAW('546563686e6963616c20577269746572'),HEXTORAW('c229'),HEXTORAW('c3020b'));

      

       (4) The redo log files contain the changes made to the database or database dictionary.

 

1.4 LogMiner Configuration Requirements

       The following are requirements for the source and mining database, the data dictionary, and the redo log files that LogMiner will mine:

 

Source and mining database

       (1) Both the source database and the mining database must be running on the same hardware platform.

       -- The source database and the mining database must run on the same hardware platform;

       (2) The mining database can be the same as, or completely separate from, the source database.

       -- The mining database can be a separate database or the source database itself;

       (3) The mining database must run the same release or a later release of the Oracle Database software as the source database.

       -- The release of the mining database must not be lower than that of the source database;

       (4) The mining database must use the same character set (or a superset of the character set) used by the source database.

       -- The mining database must use the same character set as (or a superset of) the source database.
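
A quick way to verify requirement (4) is to compare the character sets of the two databases; a standard dictionary query that can be run on each, shown here only as a convenience:

SQL> SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';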

 

LogMiner dictionary

       (1)The dictionary must be produced by the same source database that generates the redo log files that LogMiner will analyze.    

       -- The LogMiner dictionary must be generated in the source database.

 

All redo log files:

       (1) Must be produced by the same source database.

       -- When analyzing multiple redo and archived logs, they must all come from the same source database;

       (2) Must be associated with the same database RESETLOGS SCN.

       -- When analyzing multiple redo and archived logs, they must share the same RESETLOGS SCN;

       (3) Must be from a release 8.0 or later Oracle Database. However, several of the LogMiner features introduced as of release 9.0.1 work only with redo log files produced on an Oracle9i or later database.

       -- The redo and archived logs being analyzed must come from Oracle release 8.0 or later.

 

       LogMiner does not allow you to mix redo log files from different databases or to use a dictionary from a different database than the one that generated the redo log files to be analyzed.

 

 

1.5 Supplemental Logging

 

       You must enable supplemental logging before generating log files that will be analyzed by LogMiner.

       When you enable supplemental logging, additional information is recorded in the redo stream that is needed to make the information in the redo log files useful to you. Therefore, at the very least, you must enable minimal supplemental logging, as the following SQL statement shows:

      ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

 

       To determine whether supplemental logging is enabled, query the V$DATABASE view, as the following SQL statement shows:

      SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

 

       If the query returns a value of YES or IMPLICIT, then minimal supplemental logging is enabled.

 

 

       Redo logs are used for instance recovery and media recovery, and the data needed for those operations is recorded in the redo logs automatically. However, redo-based applications may need additional columns to be recorded as well; recording those additional columns is called supplemental logging.

 

       By default, Oracle Database does not enable any supplemental logging, which means that by default LogMiner cannot support the following features:

       (1) Index clusters, chained rows, and migrated rows;

       (2) Direct-path inserts;

       (3) Extracting the LogMiner dictionary into the redo logs;

       (4) Tracking DDL;

       (5) Generating SQL_REDO and SQL_UNDO information for key columns;

       (6) LONG and LOB datatypes.

 

       Therefore, to take full advantage of the features LogMiner provides, supplemental logging must be enabled. For example, to enable it at the database level:

       SQL> conn /as sysdba

       Connected.

       SQL> alter database add supplemental log data;

       Database altered.

 

       Note: enabling supplemental logging does not require a database restart; it can be done while the database is online.
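
Beyond minimal supplemental logging, identification-key logging can also be enabled so that the generated SQL_REDO/SQL_UNDO statements carry primary-key columns. This is optional and not required for the examples below; a sketch:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
SQL> SELECT supplemental_log_data_min, supplemental_log_data_pk FROM v$database;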

 

 

2. A Typical LogMiner Session

       This section describes the steps in a typical LogMiner session. Each step is described in its own subsection.

(1).      Enable Supplemental Logging

(2).      Extract a LogMiner Dictionary (unless you plan to use the online catalog)

(3).      Specify Redo Log Files for Analysis

(4).      Start LogMiner

(5).      Query V$LOGMNR_CONTENTS

(6).      End the LogMiner Session

 

       To run LogMiner, you use the DBMS_LOGMNR PL/SQL package. Additionally, you might also use the DBMS_LOGMNR_D package if you choose to extract a LogMiner dictionary rather than use the online catalog.

 

       The DBMS_LOGMNR package contains the procedures used to initialize and run LogMiner, including interfaces to specify the names of redo log files, filter criteria, and session characteristics.

       The DBMS_LOGMNR_D package queries the database dictionary tables of the current database to create a LogMiner dictionary file.

 

       The LogMiner PL/SQL packages are owned by the SYS schema. Therefore, if you are not connected as user SYS, then:

       (1) You must include SYS in your call. For example:

              EXECUTE SYS.DBMS_LOGMNR.END_LOGMNR;

       (2) You must have been granted the EXECUTE_CATALOG_ROLE role.
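
For example, a DBA could grant the role to a non-SYS account used for mining (the user name miner below is hypothetical):

SQL> GRANT EXECUTE_CATALOG_ROLE TO miner;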

 

2.1 Enable Supplemental Logging

       Enable the type of supplemental logging you want to use. At the very least, you must enable minimal supplemental logging, as follows:

       SQL>ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

 

2.2 Extract a LogMiner Dictionary

       To use LogMiner, you must supply it with a dictionary by doing one of the following:

(1).      Specify use of the online catalog by using the DICT_FROM_ONLINE_CATALOG option when you start LogMiner.

       See "Using the Online Catalog".

(2).      Extract database dictionary information to the redo log files.

       See "Extracting a LogMiner Dictionary to the Redo Log Files".

(3).      Extract database dictionary information to a flat file.

       See "Extracting the LogMiner Dictionary to a Flat File".

 

2.3 Specify Redo Log Files for Analysis

       Before you can start LogMiner, you must specify the redo log files that you want to analyze. To do so, execute the DBMS_LOGMNR.ADD_LOGFILE procedure, as demonstrated in the following steps. You can add and remove redo log files in any order.

 

Note:

       If you will be mining in the database instance that is generating the redo log files, then you only need to specify the CONTINUOUS_MINE option and one of the following when you start LogMiner:

(1).      The STARTSCN parameter

(2).      The STARTTIME parameter

For more information, see "Redo Log File Options".
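
As a sketch of the note above: when mining on the source instance itself, the explicit file list built in steps 2.3.2 through 2.3.4 can be skipped and LogMiner pointed at a starting SCN instead (the SCN value below is a placeholder; the database must be mounted with archiving enabled):

EXECUTE DBMS_LOGMNR.START_LOGMNR( STARTSCN => 1645900, OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);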

 

       2.3.1 Use SQL*Plus to start an Oracle instance, with the database either mounted or unmounted. For example, enter the STARTUP statement at the SQL prompt:

              SQL> STARTUP

 

       2.3.2 Create a list of redo log files. Specify the NEW option of the DBMS_LOGMNR.ADD_LOGFILE PL/SQL procedure to signal that this is the beginning of a new list. For example, enter the following to specify the /oracle/logs/log1.f
redo log file:

       execute dbms_logmnr.add_logfile( logfilename => '/oracle/logs/log1.f', options => dbms_logmnr.new);

 

       2.3.3  If desired, add more redo log files by specifying the ADDFILE option of the DBMS_LOGMNR.ADD_LOGFILE PL/SQL procedure. For example, enter the following to add the /oracle/logs/log2.f redo log file:

       execute dbms_logmnr.add_logfile( logfilename => '/oracle/logs/log2.f', options => dbms_logmnr.addfile);

 

       The OPTIONS parameter is optional when you are adding additional redo log files. For example, you could simply enter the following:

       execute dbms_logmnr.add_logfile( logfilename=>'/oracle/logs/log2.f');

 

       2.3.4 If desired, remove redo log files by using the DBMS_LOGMNR.REMOVE_LOGFILE PL/SQL procedure. For example, enter the following to remove the /oracle/logs/log2.f redo log file:

       execute dbms_logmnr.remove_logfile( logfilename => '/oracle/logs/log2.f');

 

2.4 Start LogMiner

       After you have created a LogMiner dictionary file and specified which redo log files to analyze, you must start LogMiner. Take the following steps:

 

       2.4.1 Execute the DBMS_LOGMNR.START_LOGMNR procedure to start LogMiner.

       Oracle recommends that you specify a LogMiner dictionary option. If you do not, then LogMiner cannot translate internal object identifiers and datatypes to object names and external data formats. Therefore, it would return internal object IDs and present data as binary data. Additionally, the MINE_VALUE and COLUMN_PRESENT functions cannot be used without a dictionary.

       (1) If you are specifying the name of a flat file LogMiner dictionary, then you must supply a fully qualified file name for the dictionary file. For example, to start LogMiner using /oracle/database/dictionary.ora, issue the following statement:

       execute dbms_logmnr.start_logmnr( dictfilename =>'/oracle/database/dictionary.ora');

      

       (2)If you are not specifying a flat file dictionary name, then use the OPTIONS parameter to specify either the DICT_FROM_REDO_LOGS or DICT_FROM_ONLINE_CATALOG option.

       If you specify DICT_FROM_REDO_LOGS, then LogMiner expects to find a dictionary in the redo log files that you specified with the DBMS_LOGMNR.ADD_LOGFILE procedure. To determine which redo log files contain a dictionary, look at the V$ARCHIVED_LOG view.

 

       See "Extracting a LogMiner Dictionary to the Redo Log Files" for an example.

 

Note:

       If you add additional redo log files after LogMiner has been started, you must restart LogMiner.

       LogMiner will not retain options that were included in the previous call to DBMS_LOGMNR.START_LOGMNR; you must respecify the options you want to use. However, LogMiner will retain the dictionary specification from the previous call if you do not specify a dictionary in the current call to DBMS_LOGMNR.START_LOGMNR.

 

       For more information about the DICT_FROM_ONLINE_CATALOG option, see "Using the Online Catalog".

 

2.4.2 Optionally, you can filter your query by time or by SCN.

       See "Filtering Data by Time" or "Filtering Data by SCN".
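
For example, a session restricted to a known time window might be started as follows (the timestamps are placeholders; the trailing hyphens are SQL*Plus line continuations):

EXECUTE DBMS_LOGMNR.START_LOGMNR( -
   STARTTIME => TO_DATE('2009-08-25 16:00:00', 'YYYY-MM-DD HH24:MI:SS'), -
   ENDTIME   => TO_DATE('2009-08-25 17:00:00', 'YYYY-MM-DD HH24:MI:SS'), -
   OPTIONS   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);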

 

2.4.3      You can also use the OPTIONS parameter to specify additional characteristics of your LogMiner session.

       For example, you might decide to use the online catalog as your LogMiner dictionary and to have only committed transactions shown in the V$LOGMNR_CONTENTS view, as follows:

       execute dbms_logmnr.start_logmnr(options=>

  dbms_logmnr.dict_from_online_catalog + dbms_logmnr.committed_data_only);

 

       For more information about DBMS_LOGMNR.START_LOGMNR options, see Oracle Database PL/SQL Packages and Types Reference.

 

       You can execute the DBMS_LOGMNR.START_LOGMNR procedure multiple times, specifying different options each time. This can be useful, for example, if you did not get the desired results from
a query of V$LOGMNR_CONTENTS, and want to restart LogMiner with different options. Unless you need to respecify the LogMiner dictionary, you do not need to add redo log files if they were already added with a previous call to DBMS_LOGMNR.START_LOGMNR.

 

2.5  Query V$LOGMNR_CONTENTS

       At this point, LogMiner is started and you can perform queries against the V$LOGMNR_CONTENTS view.

       See "Filtering and Formatting Data Returned to V$LOGMNR_CONTENTS" for examples of this.
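
A minimal query at this point might look like the following; the SEG_OWNER filter value (SCOTT) is a placeholder, and the COL command only controls display width:

SQL> COL sql_redo FORMAT A60
SQL> SELECT username, scn, timestamp, operation, sql_redo FROM v$logmnr_contents WHERE seg_owner = 'SCOTT';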

 

2.6 End the LogMiner Session

       To properly end a LogMiner session, use the DBMS_LOGMNR.END_LOGMNR PL/SQL procedure, as follows:

      EXECUTE DBMS_LOGMNR.END_LOGMNR;

 

       This procedure closes all the redo log files and allows all the database and system resources allocated by LogMiner to be released.

       If this procedure is not executed, then LogMiner retains all its allocated resources until the end of the Oracle session in which it was invoked. It is particularly important to use this procedure to end the LogMiner
session if either the DDL_DICT_TRACKING option or the DICT_FROM_REDO_LOGS option was used.

 

 

3. The LogMiner Dictionary and Redo Log Files

       Before you begin using LogMiner, it is important to understand how LogMiner works with the LogMiner dictionary file (or files) and redo log files. This will help you to get accurate results and to plan the use of your system resources.

 

3.1 LogMiner Dictionary Options

       LogMiner requires a dictionary to translate object IDs into object names when it returns redo data to you. LogMiner gives you three options for supplying the dictionary:

 

(1) Using the Online Catalog

       Oracle recommends that you use this option when you will have access to the source database from which the redo log files were created and when no changes to the column definitions
in the tables of interest are anticipated. This is the most efficient and easy-to-use option.

 

(2) Extracting a LogMiner Dictionary to the Redo Log Files

       Oracle recommends that you use this option when you do not expect to have access to the source database from which the redo log files were created, or if you anticipate that changes will be made to the column definitions in the tables of interest.

 

(3) Extracting the LogMiner Dictionary to a Flat File

       This option is maintained for backward compatibility with previous releases. This option does not guarantee transactional consistency. Oracle recommends that you use either the online catalog or extract the dictionary
from redo log files instead.

 

3.1.1 Using the Online Catalog (typically used to examine DML operations)

       To direct LogMiner to use the dictionary currently in use for the database, specify the online catalog as your dictionary source when you start LogMiner, as follows:

 

EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

 

       In addition to using the online catalog to analyze online redo log files, you can use it to analyze archived redo log files, if you are on the same system that generated the archived redo log files.

 

       The online catalog contains the latest information about the database and may be the fastest way to start your analysis. Because DDL operations that change important tables are somewhat rare, the online catalog generally contains the information you
need for your analysis.

 

       Remember, however, that the online catalog can only reconstruct SQL statements that are executed on the latest version of a table. As soon as a table is altered, the online catalog no longer reflects the previous version of the table. This means that
LogMiner will not be able to reconstruct any SQL statements that were executed on the previous version of the table.

       -- When analyzing redo or archived logs on the source database and the structure of the tables of interest has not changed, Oracle recommends using this option.

 

 Instead, LogMiner generates nonexecutable SQL (including hexadecimal-to-raw formatting of binary values) in the SQL_REDO column of the V$LOGMNR_CONTENTS view similar to the following example:

      insert into HR.EMPLOYEES(col#1, col#2) values  (hextoraw('4a6f686e20446f65'),hextoraw('c306'));

 

       The online catalog option requires that the database be open.

       The online catalog option is not valid with the DDL_DICT_TRACKING option of DBMS_LOGMNR.START_LOGMNR.

       -- dbms_logmnr.dict_from_online_catalog requires the database to be open, and this option can only be used to track DML operations, not DDL operations.

 

 

3.1.2 Extracting a LogMiner Dictionary to the Redo Log Files

       To extract a LogMiner dictionary to the redo log files, the database must be open and in ARCHIVELOG mode and archiving must be enabled. While the dictionary is being extracted to the redo log stream, no DDL statements can be executed. Therefore, the dictionary extracted to the redo log files is guaranteed to be consistent (whereas the dictionary extracted to a flat file is not).

 

       To extract dictionary information to the redo log files, execute the PL/SQL DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option. Do not specify a file name or location.

 
EXECUTE DBMS_LOGMNR_D.BUILD( 
   OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
 

       The process of extracting the dictionary to the redo log files does consume database resources, but if you limit the extraction to off-peak hours, then this should not be a problem, and it is faster than extracting to a flat file. Depending on the size of the dictionary, it may be contained in multiple redo log files. If the relevant redo log files have been archived, then you can find out which redo log files contain the start and end of an extracted dictionary. To do so, query the V$ARCHIVED_LOG view, as follows:

 

SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN='YES';
SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_END='YES';

 

       Specify the names of the start and end redo log files, and possibly other logs in between them, with the ADD_LOGFILE procedure when you are preparing to begin a LogMiner session.
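
Putting this together, a session that mines with a dictionary previously extracted to the redo stream might look like the following sketch (the archived log names are placeholders standing in for the files returned by the V$ARCHIVED_LOG queries above):

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oracle/arch/arch_1_210.arc', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oracle/arch/arch_1_211.arc', OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.START_LOGMNR( OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS);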

 

       Oracle recommends that you periodically back up the redo log files so that the information is saved and available at a later date. Ideally, this will not involve any extra steps because if your database is being properly managed, then there should
already be a process in place for backing up and restoring archived redo log files. Again, because of the time required, it is good practice to do this during off-peak hours.

     

      When a separate mining database is used to analyze the redo or archived logs, or when the structure of the analyzed tables has changed, Oracle recommends this option. To extract the LogMiner dictionary to the redo logs, the source database must be in ARCHIVELOG mode and must be open.

 

3.1.3 Extracting the LogMiner Dictionary to a Flat File

-- Used below to look up records of DDL operations.

       When the LogMiner dictionary is in a flat file, fewer system resources are used than when it is contained in the redo log files. Oracle recommends that you regularly back up the dictionary extract to ensure correct
analysis of older redo log files.

 

       To extract database dictionary information to a flat file, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_FLAT_FILE option.

Be sure that no DDL operations occur while the dictionary is being built.

 

       The following steps describe how to extract a dictionary to a flat file. Steps 1 and 2 are preparation steps. You only need to do them once, and then you can extract a dictionary to a flat file as many times as you want to.

 

(1) The DBMS_LOGMNR_D.BUILD procedure requires access to a directory where it can place the dictionary file. Because PL/SQL procedures do not normally access user directories, you must specify a directory for use by the DBMS_LOGMNR_D.BUILD procedure or the procedure will fail. To specify a directory, set the initialization parameter, UTL_FILE_DIR, in the initialization parameter file.

 

       For example, to set UTL_FILE_DIR to use /oracle/database as the directory where the dictionary file is placed, place the following in the initialization parameter file:

       UTL_FILE_DIR = /oracle/database

      

       Remember that for the changes to the initialization parameter file to take effect, you must stop and restart the database.

-- The database must be restarted for this parameter to take effect.

 

(2) If the database is closed, then use SQL*Plus to mount and open the database whose redo log files you want to analyze. For example, entering the SQL STARTUP command mounts and opens the database:

       SQL>STARTUP

 

(3)Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify a file name for the dictionary and a directory path name for the file.
This procedure creates the dictionary file.

 

For example, enter the following to create the file dictionary.ora in /oracle/database:

       EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', '/oracle/database/',

       DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);

 

       You could also specify a file name and location without specifying the STORE_IN_FLAT_FILE option. The result would be the same.

 

 

3.2  Redo Log File Options

       To mine data in the redo log files, LogMiner needs information about which redo log files to mine. Changes made to the database that are found in these redo log files are
delivered to you through the V$LOGMNR_CONTENTS view.

 

       You can direct LogMiner to automatically and dynamically create a list of redo log files to analyze, or you can explicitly specify a list of redo log files for LogMiner to analyze, as follows:

 

3.2.1 Automatically

       If LogMiner is being used on the source database, then you can direct LogMiner to find and create a list of redo log files for analysis automatically. Use the CONTINUOUS_MINE option when you start LogMiner with
the DBMS_LOGMNR.START_LOGMNR procedure, and specify a time or SCN range.

       Although this example specifies the dictionary from the online catalog, any LogMiner dictionary can be used.

 

Note:

       The CONTINUOUS_MINE option requires that the database be mounted and that archiving be enabled.

 

       LogMiner will use the database control file to find and add redo log files that satisfy your specified time or SCN range to the LogMiner redo log file list. For example:

 

EXECUTE DBMS_LOGMNR.START_LOGMNR(    

       STARTTIME => '01-Jan-2003 08:30:00',   

       ENDTIME => '01-Jan-2003 08:45:00',

       OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +         DBMS_LOGMNR.CONTINUOUS_MINE);

 

       You can also direct LogMiner to automatically build a list of redo log files to analyze by specifying just one redo log file using DBMS_LOGMNR.ADD_LOGFILE, and then specifying the CONTINUOUS_MINE option when you start LogMiner. The previously described
method is more typical, however.

 

3.2.2 Manually

       Use the DBMS_LOGMNR.ADD_LOGFILE procedure to manually create a list of redo log files before you start LogMiner. After the first redo log file has been added to the list, each subsequently added redo log file
must be from the same database and associated with the same database RESETLOGS SCN. When using this method, LogMiner need not be connected to the source database.
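
If in doubt, the RESETLOGS SCN of the mined logs can be compared with that of the source database; for example (standard dynamic performance views, shown here only as a convenience):

SQL> SELECT resetlogs_change# FROM v$database;
SQL> SELECT name, resetlogs_change# FROM v$archived_log WHERE name IS NOT NULL;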

 

       To start a new list of redo log files, specify the NEW option of the DBMS_LOGMNR.ADD_LOGFILE PL/SQL procedure to signal that this is the beginning of a new list. For example, enter the following to specify /oracle/logs/log1.f:

 

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oracle/logs/log1.f', OPTIONS => DBMS_LOGMNR.NEW);

 

       If desired, add more redo log files by specifying the ADDFILE option of the PL/SQL DBMS_LOGMNR.ADD_LOGFILE procedure. For example, enter the following to add /oracle/logs/log2.f:

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( 
   LOGFILENAME => '/oracle/logs/log2.f', 
   OPTIONS => DBMS_LOGMNR.ADDFILE);

 

       To determine which redo log files are being analyzed in the current LogMiner session, you can query the V$LOGMNR_LOGS view, which contains one row for each redo log file.
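
For example, a quick sanity check of the file list (FILENAME, LOW_SCN, and NEXT_SCN are standard columns of V$LOGMNR_LOGS):

SQL> SELECT filename, low_scn, next_scn FROM v$logmnr_logs;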

 

 

4. LogMiner Examples

 

Before running the examples, check supplemental logging:

SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

SUPPLEME

--------

YES

 

If the result is YES or IMPLICIT, supplemental logging is already enabled; otherwise enable it:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Database altered.

 

4.1 Analyzing DML Operations Using the Source Database Dictionary (Online Catalog)

 

1. First perform some DML and DDL operations so there is something to analyze below.

SQL> conn /as sysdba

Connected.

SQL> show parameter utl;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
create_stored_outlines               string
utl_file_dir                         string

SQL> insert into scott.dept  values('80','Dave','AnQing');

1 row created.

SQL> update scott.dept set loc='shang hai' where deptno=70;

1 row updated.

SQL> commit;

Commit complete.

SQL> delete from scott.dept where deptno=40;

1 row deleted.

SQL> commit;

Commit complete.

SQL> alter table scott.dept add(phone varchar2(32));

Table altered.

SQL> insert into scott.dept values(50,'David','Dai','13888888888');

1 row created.

SQL> commit;

Commit complete.

SQL> alter table scott.dept add(address varchar2(300));

Table altered.

 

2. Switch the online redo log so that it is archived; then the archived log can be analyzed.

SQL> alter system switch logfile;

System altered.

 

3. Build the list of log files to analyze:

---- Add the first log file to analyze

SQL> execute dbms_logmnr.add_logfile(logfilename=>'D:/oracle/arch/TEST/ARCHIVELOG/2009_08_25/O1_MF_1_32_597FQD7B_.ARC',options=>dbms_logmnr.new);

PL/SQL procedure successfully completed.

--- Keep adding files as needed; dbms_logmnr.remove_logfile can be used to remove one

SQL> execute dbms_logmnr.add_logfile(logfilename=>'D:/oracle/arch/TEST/ARCHIVELOG/2009_08_25/O1_MF_1_30_597B5P7B_.ARC',options=>dbms_logmnr.addfile);

PL/SQL procedure successfully completed.

 

4. Start LogMiner

SQL> execute dbms_logmnr.start_logmnr(options=>dbms_logmnr.dict_from_online_catalog);

PL/SQL procedure successfully completed.

 

5. View the analysis results:

SQL> col username format a8

SQL> col sql_redo format a50

SQL> alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';

Session altered.

SQL> select username,scn,timestamp,sql_redo from v$logmnr_contents where seg_name='DEPT';

USERNAME     SCN TIMESTAMP           SQL_REDO
-------- ------- ------------------- --------------------------------------------------
         1645927 2009-08-25 16:54:56 delete from "SCOTT"."DEPT" where "DEPTNO" = '40' and "DNAME" = 'OPERATIONS' and "LOC" = 'BOSTON' and "PHONE" IS NULL and "ADDRESS" IS NULL and ROWID = 'AAAMfNAAEAAAAAQAAD';

SYS      1645931 2009-08-25 16:54:57 alter table scott.dept add(phone varchar2(32)) ;

SYS      1645992 2009-08-25 16:56:33 alter table scott.dept add(address varchar2(300)) ;

 

6. End LogMiner

SQL> execute dbms_logmnr.end_logmnr;

PL/SQL procedure successfully completed.

 

 

4.2 Extracting the LogMiner Dictionary to a Flat File to Analyze DDL Operations

1. Perform some DDL operations so there is something to analyze

SQL> conn scott/admin

Connected.

SQL> drop table emp;

Table dropped.

SQL> drop table dept;

Table dropped.

SQL> conn /as sysdba

Connected.

SQL> alter system switch logfile;

System altered.

 

2. To use a dictionary file, check whether the database has utl_file_dir configured; this parameter is the directory in which the dictionary file will be placed. After setting it, the database must be restarted.

SQL> show user;

USER is "SYS"

SQL> show parameter utl;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------
create_stored_outlines               string
utl_file_dir                         string

SQL> alter system set utl_file_dir='D:/oracle/logminer' scope=spfile;

System altered.

SQL> shutdown immediate

Database closed.

Database dismounted.

ORACLE instance shut down.

SQL> startup

ORACLE instance started.

SQL> show parameter utl

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------
create_stored_outlines               string
utl_file_dir                         string      D:/oracle/logminer

 

3. Create the dictionary file:

SQL> execute dbms_logmnr_d.build ('dict.ora','D:/oracle/logminer',dbms_logmnr_d.store_in_flat_file);

PL/SQL procedure successfully completed.

 

4. Build the list of log files to analyze:

SQL> execute dbms_logmnr.add_logfile(logfilename=>'D:/oracle/arch/TEST/ARCHIVELOG/2009_08_25/O1_MF_1_32_597FQD7B_.ARC',options=>dbms_logmnr.new);

PL/SQL procedure successfully completed.

SQL> execute dbms_logmnr.add_logfile(logfilename=>'D:/oracle/arch/TEST/ARCHIVELOG/2009_08_25/O1_MF_1_30_597B5P7B_.ARC',options=>dbms_logmnr.addfile);

PL/SQL procedure successfully completed.

 

5. Start LogMiner

SQL> execute dbms_logmnr.start_logmnr(dictfilename=>'D:/oracle/logminer/dict.ora',options=>dbms_logmnr.ddl_dict_tracking);

PL/SQL procedure successfully completed.

 

6. Query the analysis results:

SQL> select username,scn,timestamp,sql_redo from v$logmnr_contents  where lower(sql_redo) like '%table%';

USERNAME        SCN TIMESTAMP      SQL_REDO
-------- ---------- -------------- -----------------------------------
            1647538 25-AUG-09      ALTER TABLE "SCOTT"."EMP" RENAME CONSTRAINT "PK_EMP" TO "BIN$f/mFjN+nTmaYjrb17YU80w==$0" ;

            1647550 25-AUG-09      ALTER TABLE "SCOTT"."EMP" RENAME TO "BIN$E5UujHaTR+uItpLtzN0Ddw==$0" ;

            1647553 25-AUG-09      drop table emp AS "BIN$E5UujHaTR+uItpLtzN0Ddw==$0" ;

            1647565 25-AUG-09      ALTER TABLE "SCOTT"."DEPT" RENAME CONSTRAINT "PK_DEPT" TO "BIN$3imFL+/1SqONFCB7LoPcCg==$0" ;

            1647571 25-AUG-09      ALTER TABLE "SCOTT"."DEPT" RENAME TO "BIN$kYKBLvltRb+vYaT6RkaRiA==$0";

            1647574 25-AUG-09      drop table dept AS "BIN$kYKBLvltRb+vYaT6RkaRiA==$0" ;

                                              

Or another query:

 

SQL> select username,scn,timestamp,sql_redo from v$logmnr_contents where username='SYS';

USERNAME        SCN TIMESTAMP      SQL_REDO
-------- ---------- -------------- -----------------------------------
SYS         1647487 25-AUG-09      set transaction read write;

SYS         1647488 25-AUG-09      alter user scott account unlock;

SYS         1647490 25-AUG-09      Unsupported

SYS         1647492 25-AUG-09      commit;

                         

7. End LogMiner

SQL> execute dbms_logmnr.end_logmnr;

PL/SQL procedure successfully completed.

 

       Note that v$logmnr_contents holds the analyzed log contents only for the current session. To keep the analysis results for later use, persist them in the current session with a statement such as create table tablename as select * from v$logmnr_contents.
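
A sketch of that persistence step; the table name logmnr_result is a placeholder:

SQL> CREATE TABLE logmnr_result AS SELECT * FROM v$logmnr_contents;

Because the new table is an ordinary heap table, it remains queryable after DBMS_LOGMNR.END_LOGMNR and from other sessions.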
