JBD Journaling block device

The discussion of the barrier option in the ext4 section of man mount mentions JBD.

Here is an excerpt from the Wikipedia explanation.

In the Linux kernel, JBD is responsible for journaling block devices. Some filesystems use JBD and its transaction feature to ensure that the filesystem can be recovered to a consistent state after an abnormal OS crash.


JBD, or journaling block device, is a generic block device journaling layer in the Linux kernel written by Stephen C. Tweedie from Red Hat.


Overview

The Journaling Block Device (JBD) provides a filesystem-independent interface for filesystem journaling. ext3, ext4, and OCFS2 are known to use JBD. OCFS2 (starting from Linux 2.6.28)[1] and ext4 use a fork of JBD called JBD2.[2]

JBD structures

Atomic handle

An atomic handle is basically a collection of all the low-level changes that occur during a single high-level atomic update to the file system. The atomic handle guarantees that the high-level update either happens completely or not at all, because the actual changes to the file system are flushed only after the atomic handle has been logged in the journal.
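The write-ahead idea behind an atomic handle can be sketched in a few lines of Python. This is an illustrative model only, not the kernel API; the class and method names here are invented for the example:

```python
class AtomicHandle:
    """Illustrative model of a JBD atomic handle: buffer all low-level
    changes belonging to one high-level update, and apply them to the
    'disk' only after they have been logged in the journal."""

    def __init__(self, journal, disk):
        self.journal = journal      # write-ahead log (a list of records)
        self.disk = disk            # the real block device (a dict)
        self.changes = []           # low-level block updates, buffered

    def write(self, block, data):
        # Changes are only collected here, never applied directly.
        self.changes.append((block, data))

    def commit(self):
        # 1. Log the whole handle first (write-ahead).
        self.journal.append(list(self.changes))
        # 2. Only then flush the changes to the file system.
        for block, data in self.changes:
            self.disk[block] = data

journal, disk = [], {}
h = AtomicHandle(journal, disk)
h.write("inode_7", b"new inode")
h.write("bitmap_0", b"new bitmap")
h.commit()
```

If a crash happens after step 1 but before step 2 completes, the journal still holds the full handle and recovery can replay it; if the crash happens before step 1, neither the journal nor the disk contains any partial change.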

Transaction

For the sake of efficiency and performance, JBD groups several atomic handles into a single transaction, which is written to the journal after a fixed amount of time elapses or when there is not enough free space left in the journal to fit it.
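The two commit triggers, a timer and journal space, can be sketched like this. The sketch is illustrative only; the 5-second default matches the usual ext3/ext4 commit interval, but the class and its members are invented, and committing is assumed to free journal space immediately (real JBD reclaims space via checkpointing):

```python
import time

class Journal:
    """Illustrative model: handles are batched into one open transaction,
    which is committed when a timer expires or the journal would run out
    of space for the next handle."""

    def __init__(self, capacity_blocks, commit_interval=5.0):
        self.capacity = capacity_blocks  # journal size in blocks
        self.interval = commit_interval  # commit timer, seconds
        self.open_tx = []                # handles in the running transaction
        self.committed = []              # committed transactions
        self.used = 0                    # blocks used by the open transaction
        self.opened_at = time.monotonic()

    def add_handle(self, handle, blocks_needed):
        # Commit first if the timer expired or the handle does not fit.
        timed_out = time.monotonic() - self.opened_at >= self.interval
        no_space = self.used + blocks_needed > self.capacity
        if self.open_tx and (timed_out or no_space):
            self.commit()
        self.open_tx.append(handle)
        self.used += blocks_needed

    def commit(self):
        self.committed.append(self.open_tx)
        self.open_tx = []
        self.used = 0                    # simplification: space freed at once
        self.opened_at = time.monotonic()
```

For example, with a 2-block journal, adding a third 1-block handle forces the first two to be committed as one transaction before the new handle starts the next one.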

The transaction has several states:

  • Running - the transaction is still live and can accept new handles
  • Locked - not accepting new handles, but the existing ones are still unfinished
  • Flush - the transaction is complete and is being written to the journal
  • Commit - the transaction has been written to the journal, and the changes are now being applied to the file system itself
  • Finished - the transaction has been fully written to both the journal and the block device, and can be deleted from the journal
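The state progression above can be modelled as a simple state machine. This is a hedged sketch: the state names follow the list above, while the class and method names are invented for illustration:

```python
# Transaction states in the order JBD moves through them.
STATES = ["running", "locked", "flush", "commit", "finished"]

class Transaction:
    def __init__(self):
        self.state = "running"
        self.handles = []

    def add_handle(self, handle):
        # Only a running transaction accepts new handles.
        if self.state != "running":
            raise RuntimeError("transaction no longer accepts handles")
        self.handles.append(handle)

    def advance(self):
        # Move to the next state; 'finished' is terminal.
        i = STATES.index(self.state)
        if i < len(STATES) - 1:
            self.state = STATES[i + 1]

t = Transaction()
t.add_handle("handle-1")
t.advance()            # running -> locked: no new handles from here on
```

Once a transaction leaves the running state, new handles must go into the next running transaction, which is what allows the journal to be written out in well-defined batches.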

Recovery

Based on the transaction states, JBD is able to determine which transactions need to be replayed (reapplied) to the file system.
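Recovery can then be sketched as replaying only those transactions that were fully logged. Again a simplified model with invented names; real JBD recovery scans the on-disk journal for complete transactions rather than reading a state tag:

```python
def recover(journal, disk):
    """Illustrative recovery: reapply every fully logged transaction.

    Each journal entry is (state, changes). Only transactions that made
    it completely into the journal ('commit' or 'finished') are replayed;
    partially logged ones are discarded, leaving the disk consistent."""
    for state, changes in journal:
        if state in ("commit", "finished"):
            for block, data in changes:
                disk[block] = data
    return disk

# A journal with one complete and one partially written transaction:
journal = [
    ("commit", [("blk_1", "A"), ("blk_2", "B")]),
    ("flush",  [("blk_3", "C")]),   # incomplete: discarded, not replayed
]
print(recover(journal, {}))         # -> {'blk_1': 'A', 'blk_2': 'B'}
```

Discarding the incomplete transaction is safe precisely because of the write-ahead rule: its changes were never applied to the file system, so dropping it simply rolls the update back.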

Sources

  1. http://kernelnewbies.org/Linux_2_6_28#head-b683bcf44853cccbff4b09bda272169272c22ae6
  2. Mingming Cao (9 August 2006). "Forking ext4 filesystem and JBD2". Linux kernel mailing list.

[References]
1. http://en.wikipedia.org/wiki/Journaling_block_device

时间: 2024-08-22 08:52:00
