[20130104] Some questions about rebuilding indexes.txt

A while ago I discussed index questions with a colleague. By the current thinking, most indexes do not need to be rebuilt, apart from a few special situations: large-scale deletes, or deleting a range of data so that some index blocks will never be inserted into again. Here are a few examples to illustrate:

1. Set up the test environment:

SQL> select * from v$version where rownum<=1;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
create table t as select rownum id, lpad('x',100,'x') name from dual connect by level<=10000;
create index i_t_id on t(id);
exec dbms_stats.gather_table_stats(USER,'T',cascade => true);
2. Test:
SQL> column name format a100
SQL> set autot traceonly
SQL> select /*+ index(t i_t_id) */ * from t where id=5000;
Execution Plan
----------------------------------------------------------
Plan hash value: 4153437776
--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |     1 |   105 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T      |     1 |   105 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | I_T_ID |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("ID"=5000)
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          4  consistent gets
          0  physical reads
          0  redo size
        691  bytes sent via SQL*Net to client
        520  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

--Only 4 logical reads.
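
--As a quick side check against the statistics just gathered, the index shape explains the small
--number of gets: with 10000 short keys the index should be only two levels deep, so one probe
--reads the root, one leaf block and one table block. A minimal sketch (exact values will vary
--with block size and row width):

select blevel, leaf_blocks, clustering_factor
  from user_indexes
 where index_name = 'I_T_ID';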

SQL> set autot off
SQL> delete from t where id<5000 or id>=5100;
9900 rows deleted.
SQL> commit ;
Commit complete.

--Only the 100 rows in the middle of the key range are left. Now, what if we query the min and max of id?

SQL> set autot traceonly
SQL> select min(id) from t ;
Execution Plan
----------------------------------------------------------
Plan hash value: 3363318368
-------------------------------------------------------------------------------------
| Id  | Operation                  | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |        |     1 |     4 |     2   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE            |        |     1 |     4 |            |          |
|   2 |   INDEX FULL SCAN (MIN/MAX)| I_T_ID |     1 |     4 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         12  consistent gets
          0  physical reads
          0  redo size
        525  bytes sent via SQL*Net to client
        520  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
SQL> select max(id) from t ;
Execution Plan
----------------------------------------------------------
Plan hash value: 3363318368
-------------------------------------------------------------------------------------
| Id  | Operation                  | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |        |     1 |     4 |     2   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE            |        |     1 |     4 |            |          |
|   2 |   INDEX FULL SCAN (MIN/MAX)| I_T_ID |     1 |     4 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
         12  consistent gets
          0  physical reads
          0  redo size
        526  bytes sent via SQL*Net to client
        520  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

--Notice that the logical reads are no longer trivial: each query takes 12 consistent gets. This is because the index entries at both the left and right ends have been deleted, so the MIN/MAX scan has to walk through the emptied leaf blocks before it reaches a live key.
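
--One way to quantify how much of the index is now dead space is INDEX_STATS after an
--ANALYZE ... VALIDATE STRUCTURE. A minimal sketch (note that VALIDATE STRUCTURE blocks
--concurrent DML on the table while it runs):

analyze index i_t_id validate structure;

select lf_rows, del_lf_rows,
       round(del_lf_rows/lf_rows*100, 2) pct_deleted
  from index_stats;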
--Querying id=5000, on the other hand, shows no change at all.

SQL> select /*+ index(t i_t_id) */ * from t where id=5000;
.....
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          4  consistent gets
          0  physical reads
          0  redo size

3. Another test case:

create table t1 as select * from all_objects ;
create index i_t1_owner_obj_name on t1(owner, object_name);
delete from t1;
commit;
exec dbms_stats.gather_table_stats(USER,'T1',cascade => true);

4. Test:

SQL> set autot traceonly
SQL> select /*+ index(t1 i_t1_owner_obj_name) */ * from t1 where owner='SYS';
31659 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 1461409258
---------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                     | 29373 |  4532K| 19304   (1)| 00:03:52 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1                  | 29373 |  4532K| 19304   (1)| 00:03:52 |
|*  2 |   INDEX RANGE SCAN          | I_T1_OWNER_OBJ_NAME | 29373 |       |   165   (1)| 00:00:02 |
---------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("OWNER"='SYS')
Statistics
----------------------------------------------------------
          7  recursive calls
          0  db block gets
      23032  consistent gets
        155  physical reads
          0  redo size
    3222326  bytes sent via SQL*Net to client
       2258  bytes received via SQL*Net from client
        160  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      31659  rows processed

--Through the index the query takes only 23032 logical reads for 31659 rows, which is a lot of rows. What happens if we delete the owner='SYS' rows?
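
--Before running the delete it is worth recording the size of the index segment; the point of
--the next step is that the delete empties leaf blocks but gives none of this space back, so the
--same query after the delete should return the same figures. A small sketch:

select blocks, round(bytes/1024/1024, 1) mb
  from user_segments
 where segment_name = 'I_T1_OWNER_OBJ_NAME';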

SQL> delete from t1 where owner='SYS';
31659 rows deleted.
SQL> commit ;
Commit complete.
SQL> select /*+ index(t1 i_t1_owner_obj_name) */ * from t1 where owner='SYS';
no rows selected
Execution Plan
----------------------------------------------------------
Plan hash value: 1461409258
---------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                     | 30941 |  2930K| 21985   (1)| 00:04:24 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1                  | 30941 |  2930K| 21985   (1)| 00:04:24 |
|*  2 |   INDEX RANGE SCAN          | I_T1_OWNER_OBJ_NAME | 30941 |       |   180   (1)| 00:00:03 |
---------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("OWNER"='SYS')
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        176  consistent gets
          0  physical reads
          0  redo size
       1343  bytes sent via SQL*Net to client
        509  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

--The index probe now returns 0 rows, yet it still takes 176 logical reads. Why?

5. Investigate further:
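--Flush the buffer cache and re-run the query under a 10046 trace. @10046on 12 and @10046off
--below are local helper scripts; assuming they simply toggle the event, they amount to:

alter session set events '10046 trace name context forever, level 12';
-- ... run the statement to be traced ...
alter session set events '10046 trace name context off';
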
SQL> alter system flush BUFFER_CACHE;
System altered.

SQL> @10046on 12
Session altered.
SQL> select /*+ index(t1 i_t1_owner_obj_name) */ * from t1 where owner='SYS';
no rows selected

SQL> @10046off

--Below is the relevant portion of the 10046 trace file:

*** 2013-01-04 15:48:11.994
WAIT #10: nam='SQL*Net message from client' ela= 26120136 driver id=1650815232 #bytes=1 p3=0 obj#=0 tim=1357285691994416
CLOSE #10:c=0,e=25,dep=0,type=1,tim=1357285691994557
=====================
PARSING IN CURSOR #2 len=71 dep=0 uid=84 oct=3 lid=84 tim=1357285691996164 hv=567281008 ad='b9e7dfa8' sqlid='037kwzhhx01bh'
select /*+ index(t1 i_t1_owner_obj_name) */ * from t1 where owner='SYS'
END OF STMT
PARSE #2:c=2000,e=1547,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=1461409258,tim=1357285691996158
EXEC #2:c=0,e=55,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1461409258,tim=1357285691996323
WAIT #2: nam='SQL*Net message to client' ela= 4 driver id=1650815232 #bytes=1 p3=0 obj#=0 tim=1357285691996414
WAIT #2: nam='db file scattered read' ela= 102 file#=4 block#=1952 blocks=8 obj#=267947 tim=1357285691996659
WAIT #2: nam='db file scattered read' ela= 73 file#=4 block#=3440 blocks=8 obj#=267947 tim=1357285691996829
WAIT #2: nam='db file scattered read' ela= 87 file#=4 block#=3304 blocks=8 obj#=267947 tim=1357285691996997
WAIT #2: nam='db file scattered read' ela= 85 file#=4 block#=3312 blocks=8 obj#=267947 tim=1357285691997182
WAIT #2: nam='db file scattered read' ela= 85 file#=4 block#=3320 blocks=8 obj#=267947 tim=1357285691997357
WAIT #2: nam='db file scattered read' ela= 82 file#=4 block#=3328 blocks=8 obj#=267947 tim=1357285691997532
WAIT #2: nam='db file scattered read' ela= 73 file#=4 block#=3336 blocks=8 obj#=267947 tim=1357285691997686
WAIT #2: nam='db file scattered read' ela= 88 file#=4 block#=3344 blocks=8 obj#=267947 tim=1357285691997869
WAIT #2: nam='db file scattered read' ela= 90 file#=4 block#=3352 blocks=8 obj#=267947 tim=1357285691998049
WAIT #2: nam='db file scattered read' ela= 92 file#=4 block#=3360 blocks=8 obj#=267947 tim=1357285691998223
WAIT #2: nam='db file scattered read' ela= 98 file#=4 block#=3368 blocks=8 obj#=267947 tim=1357285691998406
WAIT #2: nam='db file scattered read' ela= 90 file#=4 block#=3376 blocks=8 obj#=267947 tim=1357285691998574
WAIT #2: nam='db file scattered read' ela= 93 file#=4 block#=3384 blocks=8 obj#=267947 tim=1357285691998751
WAIT #2: nam='db file scattered read' ela= 94 file#=4 block#=3392 blocks=8 obj#=267947 tim=1357285691998934
WAIT #2: nam='db file scattered read' ela= 99 file#=4 block#=3400 blocks=8 obj#=267947 tim=1357285691999120
WAIT #2: nam='db file scattered read' ela= 100 file#=4 block#=3408 blocks=8 obj#=267947 tim=1357285691999310
WAIT #2: nam='db file scattered read' ela= 96 file#=4 block#=3416 blocks=8 obj#=267947 tim=1357285691999495
WAIT #2: nam='db file scattered read' ela= 96 file#=4 block#=3424 blocks=8 obj#=267947 tim=1357285691999676
WAIT #2: nam='db file scattered read' ela= 97 file#=4 block#=3432 blocks=8 obj#=267947 tim=1357285691999867
WAIT #2: nam='db file scattered read' ela= 101 file#=4 block#=3448 blocks=8 obj#=267947 tim=1357285692000077
WAIT #2: nam='db file scattered read' ela= 100 file#=4 block#=3456 blocks=8 obj#=267947 tim=1357285692000260
WAIT #2: nam='db file scattered read' ela= 88 file#=4 block#=3464 blocks=8 obj#=267947 tim=1357285692000438
WAIT #2: nam='db file scattered read' ela= 103 file#=4 block#=3472 blocks=8 obj#=267947 tim=1357285692000627
WAIT #2: nam='db file scattered read' ela= 99 file#=4 block#=3480 blocks=8 obj#=267947 tim=1357285692000851
FETCH #2:c=3999,e=4452,p=192,cr=176,cu=0,mis=0,r=0,dep=0,og=1,plh=1461409258,tim=1357285692000912
STAT #2 id=1 cnt=0 pid=0 pos=1 obj=267946 op='TABLE ACCESS BY INDEX ROWID T1 (cr=176 pr=192 pw=0 time=0 us cost=21985 size=3001277 card=30941)'
STAT #2 id=2 cnt=0 pid=1 pos=1 obj=267947 op='INDEX RANGE SCAN I_T1_OWNER_OBJ_NAME (cr=176 pr=192 pw=0 time=0 us cost=180 size=0 card=30941)'

SQL> select object_name,object_id,data_object_id,object_type from dba_objects where  object_id=267947;
OBJECT_NAME          OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE
-------------------- --------- -------------- -------------------
I_T1_OWNER_OBJ_NAME     267947         267947 INDEX

--The scan is still reading the index region that originally held the OWNER='SYS' entries; the deletes emptied those leaf blocks, but the blocks remain linked into the index structure.
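
--Another way to confirm this is to map one of the blocks from the multiblock reads back to its
--segment (file 4, block 3304 is taken from one of the WAIT lines above). A sketch:

select owner, segment_name, segment_type
  from dba_extents
 where file_id = 4
   and 3304 between block_id and block_id + blocks - 1;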

6. What about rebuilding the index?

SQL> alter index i_t1_owner_obj_name rebuild ;
Index altered.
SQL> set autot traceonly
SQL> select /*+ index(t1 i_t1_owner_obj_name) */ * from t1 where owner='SYS';
no rows selected
Execution Plan
----------------------------------------------------------
Plan hash value: 1461409258
---------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                     | 30941 |  2930K| 12523   (1)| 00:02:31 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1                  | 30941 |  2930K| 12523   (1)| 00:02:31 |
|*  2 |   INDEX RANGE SCAN          | I_T1_OWNER_OBJ_NAME | 30941 |       |   106   (0)| 00:00:02 |
---------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("OWNER"='SYS')
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          2  consistent gets
          0  physical reads
          0  redo size
       1343  bytes sent via SQL*Net to client
        509  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

--After the rebuild, only 2 logical reads. Of course, these are extreme cases; in practice most indexes do not need to be rebuilt.
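
--For completeness: a full rebuild is not the only cure for this kind of one-sided delete pattern.
--A coalesce (or, from 10g on, a shrink) merges the emptied leaf blocks in place without copying
--the whole segment; the freed blocks stay available for reuse within the index. A sketch, not
--tested here:

alter index i_t1_owner_obj_name coalesce;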
