[20141217]记录长度与块大小.txt

--Yesterday I read http://savvinov.com/2014/12/15/4k-bug-is-not-a-bug/
--Quoting from it:
A couple of weeks back I received an update on my "4k bug" SR (slow multirow DML performance for small block sizes). As
it turns out, the observed behavior is not a bug. It's actually more interesting than that. It is an undocumented
feature that controls the insert mode depending on the estimated row size (the sum of all column sizes). If the
estimated row size exceeds the size of the block, then Oracle switches to row-by-row processing.

--When the record length exceeds the block size, Oracle switches inserts to row-by-row processing. Let me test this myself with an 8K block size.
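
--Before testing, confirm the block size in effect; a minimal check (the USERS tablespace name below is my assumption about where SCOTT's tables live):

select value from v$parameter where name = 'db_block_size';
select tablespace_name, block_size from dba_tablespaces where tablespace_name = 'USERS';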

SCOTT@test> @ver1
PORT_STRING                    VERSION        BANNER
------------------------------ -------------- --------------------------------------------------------------------------------
x86_64/Linux 2.4.xx            11.2.0.3.0     Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

create table t1(x number, x2 varchar2(4000),x3 varchar2(4000))  SEGMENT CREATION IMMEDIATE;
create table t2(x number, x2 varchar2(1000),x3 varchar2(1000))  SEGMENT CREATION IMMEDIATE;
--SEGMENT CREATION IMMEDIATE eliminates the effect of deferred segment creation.

set autotrace traceonly

insert into t1 select level, rpad(' ', 100, ' '),rpad('a',100,'a') from dual connect by level<=100000;

100000 rows created.

Execution Plan
----------------------------------------------------------
Plan hash value: 1236776825

------------------------------------------------------------------------------
| Id  | Operation                     | Name | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | INSERT STATEMENT              |      |     1 |     2   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL      | T1   |       |            |          |
|*  2 |   CONNECT BY WITHOUT FILTERING|      |       |            |          |
|   3 |    FAST DUAL                  |      |     1 |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(LEVEL<=100000)

Statistics
----------------------------------------------------------
        403  recursive calls
     128692  db block gets
       3553  consistent gets
       3877  physical reads
   46041644  redo size
        840  bytes sent via SQL*Net to client
        852  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
     100000  rows processed

insert into t2 select level, rpad(' ', 100, ' '),rpad('a',100,'a') from dual connect by level<=100000;

100000 rows created.
Execution Plan
----------------------------------------------------------
Plan hash value: 1236776825
------------------------------------------------------------------------------
| Id  | Operation                     | Name | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | INSERT STATEMENT              |      |     1 |     2   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL      | T2   |       |            |          |
|*  2 |   CONNECT BY WITHOUT FILTERING|      |       |            |          |
|   3 |    FAST DUAL                  |      |     1 |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(LEVEL<=100000)

Statistics
----------------------------------------------------------
        358  recursive calls
      30132  db block gets
       6227  consistent gets
       3094  physical reads
   23675572  redo size
        843  bytes sent via SQL*Net to client
        852  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
     100000  rows processed

--Note the redo size: 46041644 for the former versus 23675572 for the latter.
--46041644/23675572 = 1.94, i.e. almost double.
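
--Autotrace already reports redo size; as a cross-check, the session statistic can be read directly. A minimal sketch: run this before and after each insert and take the difference.

select n.name, s.value
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'redo size';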

exec sys.dbms_stats.gather_table_stats ( OwnName => 'SCOTT',TabName => 't1',Estimate_Percent => NULL,Method_Opt => 'FOR ALL COLUMNS SIZE 1 ',Cascade => True ,No_Invalidate => false)
exec sys.dbms_stats.gather_table_stats ( OwnName => 'SCOTT',TabName => 't2',Estimate_Percent => NULL,Method_Opt => 'FOR ALL COLUMNS SIZE 1 ',Cascade => True ,No_Invalidate => false)

SCOTT@test> set autotrace  off
SCOTT@test> select table_name,blocks,avg_row_len from dba_tables where owner=user and table_name in ('T1','T2');
TABLE_NAME     BLOCKS AVG_ROW_LEN
---------- ---------- -----------
T2               3016         207
T1               3016         207
--You can see both tables occupy the same number of blocks.
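
--A cross-check against dba_segments tells the same story (a sketch; it assumes the tables are non-partitioned heap segments):

select segment_name, blocks
  from dba_segments
 where owner = user and segment_name in ('T1','T2');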

--Now test with a smaller table definition.
create table t3(x number, x2 varchar2(100),x3 varchar2(100))  SEGMENT CREATION IMMEDIATE;
set autotrace traceonly
insert into t3 select level, rpad(' ', 100, ' '),rpad('a',100,'a') from dual connect by level<=100000;

100000 rows created.

Execution Plan
----------------------------------------------------------
Plan hash value: 1236776825
------------------------------------------------------------------------------
| Id  | Operation                     | Name | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | INSERT STATEMENT              |      |     1 |     2   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL      | T3   |       |            |          |
|*  2 |   CONNECT BY WITHOUT FILTERING|      |       |            |          |
|   3 |    FAST DUAL                  |      |     1 |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(LEVEL<=100000)

Statistics
----------------------------------------------------------
        421  recursive calls
      30203  db block gets
       6270  consistent gets
       3083  physical reads
   23683316  redo size
        843  bytes sent via SQL*Net to client
        852  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
     100000  rows processed
--Essentially the same as the insert into t2.
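
--The estimated row size (the sum of the column data_length values) explains the split; a quick sketch, assuming byte-length semantics (dba_tab_columns reports 22 for a NUMBER column):

select table_name, sum(data_length) est_row_length
  from dba_tab_columns
 where owner = user and table_name in ('T1','T2','T3')
 group by table_name
 order by 1;

--This should give roughly 8022 for T1 (22+4000+4000), 2022 for T2 and 222 for T3; only T1 comes close to the 8K block size.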

--Quoting the author again:

The query below can identify the tables that will have slow bulk DML because of the high maximum row length:

select c.owner,
       c.table_name,
       sum(data_length) est_row_length,
       ts.block_size
from dba_tab_columns c,
     (select owner, table_name, tablespace_name
      from dba_tables
      union select table_owner owner, table_name, tablespace_name
      from dba_tab_partitions
     ) t,
     dba_tablespaces ts
where c.owner = t.owner
and c.table_name = t.table_name
and ts.tablespace_name = t.tablespace_name
group by ts.block_size, c.owner, c.table_name
having sum(data_length)>=ts.block_size
order by 1, 2

--Note that his script filters on sum(data_length) >= ts.block_size, so it should fail to catch T1 (whose estimated row length is 8022, just under the 8192-byte block).
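
--If one wanted the script to also flag tables like T1 that sit just under the block size, a margin could be added to the filter; the 90% factor below is purely my own guess at the block overhead, nothing documented:

having sum(data_length) >= 0.9 * ts.block_size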

For the tables returned by this query, bulk DML will be internally performed row-by-row, decreasing performance and
increasing redo generation. If you want to change this, you'll need to either shrink column size, or migrate the table
to a tablespace with a larger block size.

--Take this as advice against the lazy table design that declares every varchar2 as 4000 regardless of need; choosing lengths appropriate to the business is the right approach. Of course, this only affects bulk operations.
--As for what exactly row-by-row processing is, I'll read the relevant documentation and write another blog post.
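
--One can at least simulate the cost difference by forcing single-row inserts from PL/SQL and comparing redo with the bulk insert above; a sketch for illustration only, not Oracle's internal code path:

truncate table t2;

begin
  -- insert the same 100000 rows, but one statement execution per row
  for i in 1 .. 100000 loop
    insert into t2 values (i, rpad(' ', 100, ' '), rpad('a', 100, 'a'));
  end loop;
  commit;
end;
/

--Measuring 'redo size' from v$mystat around this block (as shown earlier) should show noticeably more redo than the single bulk insert into t2.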
