PostgreSQL Real-Time Business Data Quality Monitoring in Practice

Tags

PostgreSQL, pg_stat, real-time quality monitoring


Background

As business systems grow larger, data exchange between business lines becomes more and more frequent, and this introduces a new problem:

data quality.

For example: has an upstream system silently dropped some fields? Is upstream data arriving on time? Or is the upstream data itself corrupted?

Business data quality monitoring is how you catch these problems.

PostgreSQL's built-in statistics already cover most real-time data quality monitoring scenarios.

If you need more business-specific, customized data quality monitoring, PostgreSQL also offers read-and-burn (consume-on-read) processing, streaming computation, and asynchronous messaging, all of which support real-time data quality monitoring.

Built-in features for real-time business data quality monitoring

PostgreSQL's built-in statistics include the following.

1. Near-real-time row count

postgres=# \d pg_class
                     Table "pg_catalog.pg_class"
       Column        |     Type     | Collation | Nullable | Default
---------------------+--------------+-----------+----------+---------
 relname             | name         |           | not null |   -- object name
 relnamespace        | oid          |           | not null |   -- schema the object belongs to (references pg_namespace.oid)
 relpages            | integer      |           | not null |   -- estimated number of pages (in units of block_size)
 reltuples           | real         |           | not null |   -- estimated number of rows
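
For example, the estimated row count and page count of the test table used later in this article can be read like this (the values are estimates maintained by analyze/autovacuum, not exact counts):

select relname, reltuples::bigint as est_rows, relpages
from pg_class
where relname = 'test';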

2. Near-real-time per-column statistics (null fraction, average width, number of distinct values, most common values and their frequencies, equi-depth histogram, physical correlation, most common elements with their frequencies, and element-count histogram)

The meaning of each column is as follows:

postgres=# \d pg_stats
                     View "pg_catalog.pg_stats"
         Column         |   Type   | Default
------------------------+----------+---------
 schemaname             | name     |   -- schema the object belongs to
 tablename              | name     |   -- table name
 attname                | name     |   -- column name
 inherited              | boolean  |   -- whether the statistics cover inheritance children (false: this table only; true: this table plus all of its children)
 null_frac              | real     |   -- fraction of NULL values in the column
 avg_width              | integer  |   -- average width (bytes) of the column's values
 n_distinct             | real     |   -- number of distinct values (-1 means all values are distinct; other negative values are minus the ratio of distinct values to row count; values >= 1 are the actual distinct count)
 most_common_vals       | anyarray |   -- most common values in the column
 most_common_freqs      | real[]   |   -- frequencies of the most common values
 histogram_bounds       | anyarray |   -- equi-depth histogram bounds (each bucket holds roughly the same number of rows)
 correlation            | real     |   -- correlation between physical row order and the column's logical order (-1 to 1); the smaller the absolute value, the more scattered the storage; < 0 inverse correlation, > 0 positive correlation
 most_common_elems      | anyarray |   -- for multi-valued types (e.g. arrays), the most common elements
 most_common_elem_freqs | real[]   |   -- frequencies of the most common elements
 elem_count_histogram   | real[]   |   -- histogram of the number of distinct non-null elements per row
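
As a sketch of how these per-column statistics can drive a quality rule (the 10% threshold and the public schema are illustrative assumptions), flag columns whose NULL fraction exceeds the agreed limit:

select schemaname, tablename, attname, null_frac, avg_width, n_distinct
from pg_stats
where schemaname = 'public'
  and null_frac > 0.1;   -- alert threshold; set it per business rule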

3. Near-real-time per-table statistics (number of sequential scans, rows read by sequential scans, number of index scans, rows fetched by index scans, rows inserted, rows updated, number of dead tuples, and so on).

postgres=# \d pg_stat_all_tables
                      View "pg_catalog.pg_stat_all_tables"
       Column        |           Type           | Default
---------------------+--------------------------+---------
 relid               | oid                      |
 schemaname          | name                     |
 relname             | name                     |
 seq_scan            | bigint                   | -- number of sequential scans
 seq_tup_read        | bigint                   | -- rows read by sequential scans
 idx_scan            | bigint                   | -- number of index scans
 idx_tup_fetch       | bigint                   | -- rows fetched by index scans
 n_tup_ins           | bigint                   | -- rows inserted
 n_tup_upd           | bigint                   | -- rows updated
 n_tup_del           | bigint                   | -- rows deleted
 n_tup_hot_upd       | bigint                   | -- rows HOT-updated
 n_live_tup          | bigint                   | -- estimated live (visible) rows
 n_dead_tup          | bigint                   | -- estimated dead rows
 n_mod_since_analyze | bigint                   |
 last_vacuum         | timestamp with time zone |
 last_autovacuum     | timestamp with time zone |
 last_analyze        | timestamp with time zone |
 last_autoanalyze    | timestamp with time zone |
 vacuum_count        | bigint                   |
 autovacuum_count    | bigint                   |
 analyze_count       | bigint                   |
 autoanalyze_count   | bigint                   |
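
As a minimal sketch (the table name test refers to the test table created below), these counters can be polled to watch write activity and dead-tuple buildup:

select relname,
       n_tup_ins, n_tup_upd, n_tup_del,   -- cumulative write counters
       n_live_tup, n_dead_tup,            -- current live / dead row estimates
       last_autoanalyze                   -- when statistics were last refreshed
from pg_stat_all_tables
where relname = 'test';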

4. Scheduling of statistics collection

PostgreSQL collects statistics automatically based on how much a table has changed. The scheduling is controlled by the following parameters:

#track_counts = on
#autovacuum = on                        # Enable autovacuum subprocess?  'on'
autovacuum_naptime = 15s                # time between autovacuum runs
#autovacuum_analyze_threshold = 50      # min number of row updates before
                                        # analyze  

By default, an automatic ANALYZE is triggered once roughly 10% of the table has changed (autovacuum_analyze_threshold + autovacuum_analyze_scale_factor * reltuples modified rows):

#autovacuum_analyze_scale_factor = 0.1  # fraction of table size before analyze
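
For a monitoring table you may want statistics refreshed more often than the 10% default. A hedged example of lowering the trigger for a single table (the values are illustrative, not a recommendation):

alter table test set (
  autovacuum_analyze_scale_factor = 0.01,  -- re-analyze after ~1% of rows change
  autovacuum_analyze_threshold    = 50
);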

The built-in statistics therefore give you:

1. a near-real-time row count;

2. per-column statistics (null fraction, average width, number of distinct values, most common values and their frequencies, equi-depth histogram, physical correlation, most common elements with their frequencies, and element-count histogram).

Data quality problems can be detected in near real time from this feedback, as sketched below.
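
Putting the pieces together, here is a sketch of two simple quality rules that return rows only when something looks wrong (the bounds and the column list are made-up examples; replace them with your own business rules):

-- rule 1: the estimated row count should stay within an expected range (hypothetical bounds)
select relname, reltuples::bigint as est_rows
from pg_class
where relname = 'test'
  and (reltuples < 1000000 or reltuples > 10000000);

-- rule 2: key columns must not suddenly become mostly NULL (hypothetical tolerance)
select tablename, attname, null_frac
from pg_stats
where tablename = 'test'
  and attname in ('c1', 'c2')
  and null_frac > 0.01;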

Example

1. Create a test table

create table test(id int primary key, c1 int, c2 int, info text, crt_time timestamp);
create index idx_test_1 on test (crt_time);

2. Create a stress-test script

vi test.sql  

\set id random(1,10000000)
insert into test values (:id, random()*100, random()*10000, random()::text, now()) on conflict (id) do update set crt_time=now();

3. Run the stress test

pgbench -M prepared -n -r -P 1 -f ./test.sql -c 32 -j 32 -T 1200

4. Create a cleanup job that retains only the most recent 30 seconds of data.

delete from test where ctid = any (array(
  select ctid from test where crt_time < now()-interval '30 second'
));

Run it once every 0.1 seconds (here simply via psql's \watch):

psql   

delete from test where ctid = any (array(
  select ctid from test where crt_time < now()-interval '30 second'
));  

\watch 0.1

The output looks like this:

DELETE 18470  

Fri 08 Dec 2017 04:31:54 PM CST (every 0.1s)  

DELETE 19572  

Fri 08 Dec 2017 04:31:55 PM CST (every 0.1s)  

DELETE 20159  

Fri 08 Dec 2017 04:31:55 PM CST (every 0.1s)  

DELETE 20143  

Fri 08 Dec 2017 04:31:55 PM CST (every 0.1s)  

DELETE 21401  

Fri 08 Dec 2017 04:31:55 PM CST (every 0.1s)  

DELETE 21956  

Fri 08 Dec 2017 04:31:56 PM CST (every 0.1s)  

DELETE 19978  

Fri 08 Dec 2017 04:31:56 PM CST (every 0.1s)  

DELETE 21916

5. Monitor the statistics in real time

Per-column statistics

postgres=# select attname,null_frac,avg_width,n_distinct,most_common_vals,most_common_freqs,histogram_bounds,correlation from pg_stats where tablename='test';  

attname           | id
null_frac         | 0
avg_width         | 4
n_distinct        | -1
most_common_vals  |
most_common_freqs |
histogram_bounds  | {25,99836,193910,289331,387900,492669,593584,695430,795413,890787,1001849,1100457,1203161,1301537,1400265,1497824,1595610,1702278,1809415,1912946,2006274,2108505,2213771,2314440,2409333,2513067,2616217,2709052,2813209,2916342,3016292,3110554,3210817,3305896,3406145,3512379,3616638,3705990,3804538,3902207,4007939,4119100,4214497,4314986,4405492,4513675,4613327,4704905,4806556,4914360,5020248,5105998,5194904,5292779,5394640,5497986,5600441,5705246,5806209,5905498,6006522,6115688,6212831,6308451,6408320,6516028,6622895,6720613,6817877,6921460,7021999,7118151,7220074,7315355,7413563,7499978,7603076,7695692,7805120,7906168,8000492,8099783,8200918,8292854,8389462,8491879,8589691,8696502,8798076,8892978,8992364,9089390,9192142,9294759,9399562,9497099,9601571,9696437,9800758,9905327,9999758}
correlation       | -0.00220302
.....  

attname           | c2
null_frac         | 0
avg_width         | 4
n_distinct        | 9989
most_common_vals  | {3056,6203,1352,1649,1777,3805,7029,420,430,705,1015,1143,2810,3036,3075,3431,3792,4459,4812,5013,5662,5725,5766,6445,6882,7034,7064,7185,7189,7347,8266,8686,8897,9042,9149,9326,9392,9648,9652,9802,63,164,235,453,595,626,672,813,847,1626,1636,1663,1749,1858,2026,2057,2080,2106,2283,2521,2596,2666,2797,2969,3131,3144,3416,3500,3870,3903,3956,3959,4252,4265,4505,4532,4912,5048,5363,5451,5644,5714,5734,5739,5928,5940,5987,6261,6352,6498,6646,6708,6886,6914,7144,7397,7589,7610,7640,7687}
most_common_freqs | {0.000366667,0.000366667,0.000333333,0.000333333,0.000333333,0.000333333,0.000333333,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667}
histogram_bounds  | {0,103,201,301,399,495,604,697,802,904,1009,1121,1224,1320,1419,1514,1623,1724,1820,1930,2045,2147,2240,2335,2433,2532,2638,2738,2846,2942,3038,3143,3246,3342,3443,3547,3644,3744,3852,3966,4064,4162,4262,4354,4460,4562,4655,4755,4851,4948,5046,5143,5237,5340,5428,5532,5625,5730,5830,5932,6048,6144,6248,6349,6456,6562,6657,6768,6859,6964,7060,7161,7264,7357,7454,7547,7638,7749,7852,7956,8046,8138,8240,8337,8445,8539,8626,8728,8825,8924,9016,9116,9214,9311,9420,9512,9603,9709,9811,9911,10000}
correlation       | -0.00246515  

...  

attname           | crt_time
null_frac         | 0
avg_width         | 8
n_distinct        | -0.931747
most_common_vals  | {"2017-12-08 16:32:53.836223","2017-12-08 16:33:02.700473","2017-12-08 16:33:03.226319","2017-12-08 16:33:03.613826","2017-12-08 16:33:08.171908","2017-12-08 16:33:14.727654","2017-12-08 16:33:20.857187","2017-12-08 16:33:22.519299","2017-12-08 16:33:23.388035","2017-12-08 16:33:23.519205"}
most_common_freqs | {6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05,6.66667e-05}
histogram_bounds  | {"2017-12-08 16:32:50.397367","2017-12-08 16:32:50.987576","2017-12-08 16:32:51.628523","2017-12-08 16:32:52.117421","2017-12-08 16:32:52.610271","2017-12-08 16:32:53.152021","2017-12-08 16:32:53.712685","2017-12-08 16:32:54.3036","2017-12-08 16:32:54.735576","2017-12-08 16:32:55.269238","2017-12-08 16:32:55.691081","2017-12-08 16:32:56.066085","2017-12-08 16:32:56.541396","2017-12-08 16:32:56.865717","2017-12-08 16:32:57.350169","2017-12-08 16:32:57.698694","2017-12-08 16:32:58.062828","2017-12-08 16:32:58.464265","2017-12-08 16:32:58.92354","2017-12-08 16:32:59.27284","2017-12-08 16:32:59.667347","2017-12-08 16:32:59.984229","2017-12-08 16:33:00.310772","2017-12-08 16:33:00.644104","2017-12-08 16:33:00.976184","2017-12-08 16:33:01.366153","2017-12-08 16:33:01.691384","2017-12-08 16:33:02.021643","2017-12-08 16:33:02.382856","2017-12-08 16:33:02.729636","2017-12-08 16:33:03.035666","2017-12-08 16:33:03.508461","2017-12-08 16:33:03.829351","2017-12-08 16:33:04.151727","2017-12-08 16:33:04.4596","2017-12-08 16:33:04.76933","2017-12-08 16:33:05.125295","2017-12-08 16:33:05.537555","2017-12-08 16:33:05.83828","2017-12-08 16:33:06.15387","2017-12-08 16:33:06.545922","2017-12-08 16:33:06.843679","2017-12-08 16:33:07.111281","2017-12-08 16:33:07.414602","2017-12-08 16:33:07.707961","2017-12-08 16:33:08.119891","2017-12-08 16:33:08.388883","2017-12-08 16:33:08.674867","2017-12-08 16:33:08.979336","2017-12-08 16:33:09.339377","2017-12-08 16:33:09.647791","2017-12-08 16:33:09.94157","2017-12-08 16:33:10.232294","2017-12-08 16:33:10.652072","2017-12-08 16:33:10.921087","2017-12-08 16:33:11.17986","2017-12-08 16:33:11.477399","2017-12-08 16:33:11.776529","2017-12-08 16:33:12.110676","2017-12-08 16:33:12.382742","2017-12-08 16:33:12.70362","2017-12-08 16:33:13.020485","2017-12-08 16:33:13.477398","2017-12-08 16:33:13.788134","2017-12-08 16:33:14.072125","2017-12-08 16:33:14.346058","2017-12-08 16:33:14.625692","2017-12-08 16:33:14.889661","2017-12-08 16:33:15.139977","2017-12-08 16:33:15.390732","2017-12-08 16:33:15.697878","2017-12-08 16:33:16.127449","2017-12-08 16:33:16.438117","2017-12-08 16:33:16.725608","2017-12-08 16:33:17.01954","2017-12-08 16:33:17.344609","2017-12-08 16:33:17.602447","2017-12-08 16:33:17.919983","2017-12-08 16:33:18.201386","2017-12-08 16:33:18.444387","2017-12-08 16:33:18.714402","2017-12-08 16:33:19.099394","2017-12-08 16:33:19.402888","2017-12-08 16:33:19.673556","2017-12-08 16:33:19.991907","2017-12-08 16:33:20.23329","2017-12-08 16:33:20.517752","2017-12-08 16:33:20.783084","2017-12-08 16:33:21.032402","2017-12-08 16:33:21.304109","2017-12-08 16:33:21.725122","2017-12-08 16:33:21.998994","2017-12-08 16:33:22.232959","2017-12-08 16:33:22.462384","2017-12-08 16:33:22.729792","2017-12-08 16:33:23.001244","2017-12-08 16:33:23.251215","2017-12-08 16:33:23.534155","2017-12-08 16:33:23.772144","2017-12-08 16:33:24.076088","2017-12-08 16:33:24.471151"}
correlation       | 0.760231

Row count

postgres=# select reltuples from pg_class where relname='test';
-[ RECORD 1 ]----------
reltuples | 3.74614e+06

DML activity statistics

postgres=# select * from pg_stat_all_tables where relname ='test';
-[ RECORD 1 ]-------+------------------------------
relid               | 591006
schemaname          | public
relname             | test
seq_scan            | 2
seq_tup_read        | 0
idx_scan            | 28300980
idx_tup_fetch       | 24713736
n_tup_ins           | 19730476
n_tup_upd           | 8567352
n_tup_del           | 16143587
n_tup_hot_upd       | 0
n_live_tup          | 3444573
n_dead_tup          | 24748887
n_mod_since_analyze | 547474
last_vacuum         |
last_autovacuum     | 2017-12-08 16:31:10.820459+08
last_analyze        |
last_autoanalyze    | 2017-12-08 16:35:16.75293+08
vacuum_count        | 0
autovacuum_count    | 1
analyze_count       | 0
autoanalyze_count   | 124
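
These counters are cumulative since the last reset, so throughput has to be derived from the delta between two samples. A minimal psql sketch (the 10-second window is arbitrary):

select n_tup_ins as before_ins from pg_stat_all_tables where relname = 'test' \gset
select pg_sleep(10);
select n_tup_ins - :before_ins as rows_inserted_last_10s
from pg_stat_all_tables
where relname = 'test';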

Data cleanup scheduling

Since this is only data quality monitoring, there is no need to keep all the data. The following approach removes expired data efficiently without blocking reads or writes.

《如何根据行号高效率的清除过期数据 - 非分区表,数据老化实践》

On a single instance, this approach deletes roughly 2.63 million rows per second.

How to reset the statistics counters

postgres=# select pg_stat_reset_single_table_counters('test'::regclass);

How to force a manual statistics collection

postgres=# analyze verbose test;
INFO:  analyzing "public.test"
INFO:  "test": scanned 30000 of 238163 pages, containing 560241 live rows and 4294214 dead rows; 30000 rows in sample, 4319958 estimated total rows
ANALYZE

Customized real-time business data quality monitoring

Use the read-and-burn (consume-on-read) approach to monitor data quality in real time.
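
A minimal read-and-burn sketch, assuming a hypothetical staging table msg_queue(id int, payload jsonb, crt_time timestamp) that upstream feeds write into: one statement consumes the rows and computes quality metrics in the same pass.

with consumed as (
  delete from msg_queue returning *            -- read and burn: rows are gone after this statement
)
select count(*)                                 as rows_consumed,
       count(*) filter (where payload is null)  as null_payloads,   -- quality metric: missing payloads
       min(crt_time)                            as oldest,
       max(crt_time)                            as newest
from consumed;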

Examples:

《HTAP数据库 PostgreSQL 场景与性能测试之 32 - (OLTP) 高吞吐数据进出(堆存、行扫、无需索引) - 阅后即焚(JSON + 函数流式计算)》

《HTAP数据库 PostgreSQL 场景与性能测试之 31 - (OLTP) 高吞吐数据进出(堆存、行扫、无需索引) - 阅后即焚(读写大吞吐并测)》

《HTAP数据库 PostgreSQL 场景与性能测试之 27 - (OLTP) 物联网 - FEED日志, 流式处理 与 阅后即焚 (CTE)》

《PostgreSQL 异步消息实践 - Feed系统实时监测与响应(如 电商主动服务) - 分钟级到毫秒级的实现》

References

《如何根据行号高效率的清除过期数据 - 非分区表,数据老化实践》

《PostgreSQL 统计信息pg_statistic格式及导入导出dump_stat - 兼容Oracle》

《PostgreSQL pg_stat_ pg_statio_ 统计信息(scan,read,fetch,hit)源码解读》
