btrfs vs ext4 fsync

In a virtual machine, on a single block device, btrfs fdatasync throughput is roughly one third of ext4's.

PostgreSQL issues fsync calls in many code paths: flushing the xlog (WAL), checkpoints, CREATE DATABASE, ALTER DATABASE ... SET TABLESPACE, table rewrites, pg_clog, and so on.

Reference:

http://blog.163.com/digoal@126/blog/static/1638770402015840480734/
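The sync primitive PostgreSQL uses for the WAL is configurable, and the method names benchmarked by pg_test_fsync below map directly onto the `wal_sync_method` setting. A sketch of checking it, assuming `$PGDATA` points at the data directory:

```shell
# wal_sync_method selects among the primitives pg_test_fsync benchmarks;
# on Linux the default is fdatasync.
grep wal_sync_method $PGDATA/postgresql.conf
# wal_sync_method = fdatasync   # alternatives: open_datasync, fsync, open_sync
```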

fsync performance therefore directly affects database performance.

The following comparison was run on CentOS 7 x64, with btrfs-progs v4.3.1 compiled from source.

http://blog.163.com/digoal@126/blog/static/16387704020151025102118544/
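Without pg_test_fsync at hand, a rough feel for a filesystem's sync rate can be had with dd alone: `oflag=dsync` opens the target with O_DSYNC, so every block is synced to disk, similar to the open_datasync case in the results below (the target path is arbitrary):

```shell
# Write 1000 8kB blocks, syncing data to disk after each one.
# The ops/sec implied by dd's timing roughly tracks pg_test_fsync's
# open_datasync numbers on the same filesystem.
dd if=/dev/zero of=/tmp/dsync_test bs=8k count=1000 oflag=dsync
rm -f /tmp/dsync_test
```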

ext4:

[root@digoal ~]# mkfs.ext4 /dev/sdb1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
2621440 inodes, 10485504 blocks
524275 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   

[root@digoal ~]# mount /dev/sdb1 /data01 -o defaults,noatime,nodiratime,discard,data=ordered
[root@digoal ~]# cd /data01/
[root@digoal data01]# /opt/pgsql9.5/bin/pg_test_fsync
5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                      5496.006 ops/sec     182 usecs/op
        fdatasync                          5357.773 ops/sec     187 usecs/op
        fsync                              2872.555 ops/sec     348 usecs/op
        fsync_writethrough                            n/a
        open_sync                          3059.961 ops/sec     327 usecs/op

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                      2997.891 ops/sec     334 usecs/op
        fdatasync                          4980.309 ops/sec     201 usecs/op
        fsync                              2934.537 ops/sec     341 usecs/op
        fsync_writethrough                            n/a
        open_sync                          1608.287 ops/sec     622 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB in different write
open_sync sizes.)
         1 * 16kB open_sync write          2909.899 ops/sec     344 usecs/op
         2 *  8kB open_sync writes         1565.073 ops/sec     639 usecs/op
         4 *  4kB open_sync writes          830.664 ops/sec    1204 usecs/op
         8 *  2kB open_sync writes          459.544 ops/sec    2176 usecs/op
        16 *  1kB open_sync writes          227.552 ops/sec    4395 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written on a different
descriptor.)
        write, fsync, close                3082.501 ops/sec     324 usecs/op
        write, close, fsync                2798.324 ops/sec     357 usecs/op

Non-sync'ed 8kB writes:
        write                            300198.383 ops/sec       3 usecs/op

btrfs with default options:

[root@digoal ~]# mkfs.btrfs /dev/sdb1 -f
btrfs-progs v4.3.1
See http://btrfs.wiki.kernel.org for more information.

Label:              (null)
UUID:               26f9fd42-0933-4382-8124-437091e1cddf
Node size:          16384
Sector size:        4096
Filesystem size:    40.00GiB
Block group profiles:
  Data:             single            8.00MiB
  Metadata:         DUP               1.01GiB
  System:           DUP              12.00MiB
SSD detected:       no
Incompat features:  extref, skinny-metadata
Number of devices:  1
Devices:
   ID        SIZE  PATH
    1    40.00GiB  /dev/sdb1

[root@digoal ~]# mount /dev/sdb1 /data01
[root@digoal ~]# cd /data01/
[root@digoal data01]# /opt/pgsql9.5/bin/pg_test_fsync
5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                       672.325 ops/sec    1487 usecs/op
        fdatasync                           460.352 ops/sec    2172 usecs/op
        fsync                               385.227 ops/sec    2596 usecs/op
        fsync_writethrough                            n/a
        open_sync                           392.941 ops/sec    2545 usecs/op

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                       179.161 ops/sec    5582 usecs/op
        fdatasync                           358.958 ops/sec    2786 usecs/op
        fsync                               518.578 ops/sec    1928 usecs/op
        fsync_writethrough                            n/a
        open_sync                           273.567 ops/sec    3655 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB in different write
open_sync sizes.)
         1 * 16kB open_sync write           566.545 ops/sec    1765 usecs/op
         2 *  8kB open_sync writes          268.357 ops/sec    3726 usecs/op
         4 *  4kB open_sync writes          144.014 ops/sec    6944 usecs/op
         8 *  2kB open_sync writes           79.028 ops/sec   12654 usecs/op
        16 *  1kB open_sync writes           31.814 ops/sec   31433 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written on a different
descriptor.)
        write, fsync, close                 570.831 ops/sec    1752 usecs/op
        write, close, fsync                 562.849 ops/sec    1777 usecs/op

Non-sync'ed 8kB writes:
        write                            225085.038 ops/sec       4 usecs/op

btrfs after tuning:
(Store only a single copy of metadata instead of DUP, use a 4K node size to reduce write-lock contention, disable compression, enable the space cache, and disable data copy-on-write.)

[root@digoal ~]# mkfs.btrfs /dev/sdb1 -m single -n 4096 -f
btrfs-progs v4.3.1
See http://btrfs.wiki.kernel.org for more information.

Label:              (null)
UUID:               1e859a5c-570b-4426-83ac-b73a473d1936
Node size:          4096
Sector size:        4096
Filesystem size:    40.00GiB
Block group profiles:
  Data:             single            8.00MiB
  Metadata:         single            8.00MiB
  System:           single            4.00MiB
SSD detected:       no
Incompat features:  extref, skinny-metadata
Number of devices:  1
Devices:
   ID        SIZE  PATH
    1    40.00GiB  /dev/sdb1

[root@digoal ~]# mount /dev/sdb1 /data01 -o ssd,discard,nodatacow,noatime,nodiratime,compress=no,space_cache
[root@digoal ~]# cd /data01/
[root@digoal data01]# /opt/pgsql9.5/bin/pg_test_fsync
5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                      1424.383 ops/sec     702 usecs/op
        fdatasync                          1870.474 ops/sec     535 usecs/op
        fsync                              1816.084 ops/sec     551 usecs/op
        fsync_writethrough                            n/a
        open_sync                          1458.938 ops/sec     685 usecs/op

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                       750.109 ops/sec    1333 usecs/op
        fdatasync                          1747.257 ops/sec     572 usecs/op
        fsync                              1729.970 ops/sec     578 usecs/op
        fsync_writethrough                            n/a
        open_sync                           723.056 ops/sec    1383 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB in different write
open_sync sizes.)
         1 * 16kB open_sync write          1413.624 ops/sec     707 usecs/op
         2 *  8kB open_sync writes          720.379 ops/sec    1388 usecs/op
         4 *  4kB open_sync writes          352.704 ops/sec    2835 usecs/op
         8 *  2kB open_sync writes          157.877 ops/sec    6334 usecs/op
        16 *  1kB open_sync writes           73.355 ops/sec   13632 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written on a different
descriptor.)
        write, fsync, close                1827.975 ops/sec     547 usecs/op
        write, close, fsync                1664.630 ops/sec     601 usecs/op

Non-sync'ed 8kB writes:
        write                            243183.732 ops/sec       4 usecs/op
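Instead of mounting the whole filesystem with `nodatacow`, copy-on-write can also be disabled per directory with `chattr +C`, which is convenient when only the database directory needs it. A sketch, with illustrative paths:

```shell
# chattr +C disables data CoW for files created in this directory
# afterwards; existing files are not converted. (btrfs only)
mkdir -p /data01/pg_root
chattr +C /data01/pg_root
lsattr -d /data01/pg_root    # the 'C' attribute should now be listed
```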

Striping btrfs across multiple devices can improve performance further.
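For example, a RAID0 (striped) profile spreads data and metadata writes across several devices; the device names here are hypothetical:

```shell
# Stripe both data and metadata across two block devices (no redundancy).
mkfs.btrfs -d raid0 -m raid0 -f /dev/sdb1 /dev/sdc1
mount /dev/sdb1 /data01
```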

btrfs mount options:

https://btrfs.wiki.kernel.org/index.php/Mount_options

