13.9. zfs

# yum info zfs-fuse

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* addons: mirrors.163.com
* base: mirrors.163.com
* epel: mirror01.idc.hinet.net
* extras: mirrors.163.com
* updates: mirrors.163.com
Available Packages
Name       : zfs-fuse
Arch       : x86_64
Version    : 0.6.9_beta3
Release    : 0.el5
Size       : 1.5 M
Repo       : epel
Summary    : ZFS ported to Linux FUSE
URL        : http://zfs-fuse.net/
License    : CDDL
Description: ZFS is an advanced modern general-purpose filesystem from Sun
           : Microsystems, originally designed for Solaris/OpenSolaris.
           :
           : This project is a port of ZFS to the FUSE framework for the Linux
           : operating system.
           :
           : Project home page is at http://zfs-fuse.net/
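
The package described above can be installed and exercised roughly as follows. This is a minimal sketch, not from the original text: the pool name `tank` and the device `/dev/sdb` are placeholders, all commands require root, and the service name assumes the EPEL `zfs-fuse` init script.

```shell
# Install zfs-fuse from EPEL and start the userspace FUSE daemon
yum install -y zfs-fuse
service zfs-fuse start
chkconfig zfs-fuse on          # start the daemon at boot

# Create a pool on a spare disk (this DESTROYS any data on /dev/sdb!)
zpool create tank /dev/sdb

# Create a filesystem inside the pool and verify the FUSE mount
zfs create tank/data
zfs list
df -h /tank/data
```

Because zfs-fuse runs entirely in userspace, the daemon must be running before any `zpool` or `zfs` command will work; if it is stopped, the pools disappear from the mount table until it is restarted and the pools are re-imported.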
 

Original source: the Netkiller series handbook
Author: Chen Jingfeng (陈景峯)
When reprinting, please contact the author and be sure to credit the original source, the author, and this notice.

Date: 2024-10-09 00:25:31
