ceph GLOSSARY

The Ceph documentation is full of terminology, so to make it easier to follow it is best to get familiar with Ceph's terms first.

The following is excerpted from the Ceph docs; the official glossary is missing an entry for PG, so one is added here first.

PG (Placement Group)

     A PG is a logical group in which objects are stored. PGs are stored on OSDs. An OSD consists of a journal and data; a write is acknowledged as safe only after it has been committed to the journal.

     The journal is usually placed on an SSD because it needs a fast response time (similar to PostgreSQL's xlog).

     Ceph stores a client’s data as objects within storage pools. Using the CRUSH algorithm, Ceph calculates which placement group should contain the object, and further calculates which Ceph OSD Daemon should store the placement group. The CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and recover dynamically.
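
     As a minimal illustration (assuming a running cluster; the pool name "rbd" and object name "my-object" below are placeholders), the PG and the OSD acting set that CRUSH computes for an object can be inspected with the ceph CLI:

     # Show which PG an object maps to and which OSDs hold that PG
     ceph osd map rbd my-object

     # List every PG in brief form together with its acting OSDs
     ceph pg dump pgs_brief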

CEPH GLOSSARY

Ceph is growing rapidly. As firms deploy Ceph, the technical terms such as “RADOS”, “RBD,” “RGW” and so forth require corresponding marketing terms that explain what each component does. The terms in this glossary are intended to complement the existing technical terminology.

Sometimes more than one term applies to a definition. Generally, the first term reflects a term consistent with Ceph’s marketing, and secondary terms reflect either technical terms or legacy ways of referring to Ceph systems.

Ceph Project
The aggregate term for the people, software, mission and infrastructure of Ceph.
cephx
The Ceph authentication protocol. Cephx operates like Kerberos, but it has no single point of failure.
Ceph
Ceph Platform
All Ceph software, which includes any piece of code hosted at http://github.com/ceph.
Ceph System
Ceph Stack
A collection of two or more components of Ceph.
Ceph Node
Node
Host
Any single machine or server in a Ceph System.
Ceph Storage Cluster
Ceph Object Store
RADOS
RADOS Cluster
Reliable Autonomic Distributed Object Store
The core set of storage software which stores the user’s data (MON+OSD).
Ceph Cluster Map
cluster map
The set of maps comprising the monitor map, OSD map, PG map, MDS map and CRUSH map. See Cluster Map for details.
Ceph Object Storage
The object storage “product”, service or capabilities, which consists essentially of a Ceph Storage Cluster and a Ceph Object Gateway.
Ceph Object Gateway
RADOS Gateway
RGW
The S3/Swift gateway component of Ceph.
Ceph Block Device
RBD
The block storage component of Ceph.
Ceph Block Storage
The block storage “product,” service or capabilities when used in conjunction with librbd, a hypervisor such as QEMU or Xen, and a hypervisor abstraction layer such as libvirt.
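
As a hedged sketch of that stack (the pool and image names are placeholders, an RBD pool is assumed to exist, and QEMU must be built with rbd support), an image can be created with the rbd tool and then consumed by QEMU through librbd:

    # Create a 10 GiB RBD image (--size is given in MB)
    rbd create libvirt-pool/vm-disk1 --size 10240

    # QEMU reads the image directly through librbd via the rbd: protocol
    qemu-img info rbd:libvirt-pool/vm-disk1
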
Ceph Filesystem
CephFS
Ceph FS
The POSIX filesystem components of Ceph.
Cloud Platforms
Cloud Stacks
Third party cloud provisioning platforms such as OpenStack, CloudStack, OpenNebula, ProxMox, etc.
Object Storage Device
OSD
A physical or logical storage unit (e.g., LUN). Sometimes, Ceph users use the term “OSD” to refer to Ceph OSD Daemon, though the proper term is “Ceph OSD”.
Ceph OSD Daemon
Ceph OSD
The Ceph OSD software, which interacts with a logical disk (OSD). Sometimes, Ceph users use the term “OSD” to refer to “Ceph OSD Daemon”, though the proper term is “Ceph OSD”.
Ceph Monitor
MON
The Ceph monitor software.
Ceph Metadata Server
MDS
The Ceph metadata software.
Ceph Clients
Ceph Client
The collection of Ceph components which can access a Ceph Storage Cluster. These include the Ceph Object Gateway, the Ceph Block Device, the Ceph Filesystem, and their corresponding libraries, kernel modules, and FUSEs.
Ceph Kernel Modules
The collection of kernel modules which can be used to interact with the Ceph System (e.g., ceph.ko, rbd.ko).
Ceph Client Libraries
The collection of libraries that can be used to interact with components of the Ceph System.
Ceph Release
Any distinct numbered version of Ceph.
Ceph Point Release
Any ad-hoc release that includes only bug or security fixes.
Ceph Interim Release
Versions of Ceph that have not yet been put through quality assurance testing, but may contain new features.
Ceph Release Candidate
A major version of Ceph that has undergone initial quality assurance testing and is ready for beta testers.
Ceph Stable Release
A major version of Ceph where all features from the preceding interim releases have been put through quality assurance testing successfully.
Ceph Test Framework
Teuthology
The collection of software that performs scripted tests on Ceph.
CRUSH
Controlled Replication Under Scalable Hashing. It is the algorithm Ceph uses to compute object storage locations.
ruleset
A set of CRUSH data placement rules that applies to one or more particular pools.
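
To see how such rules translate into placements, crushtool can replay sample inputs through a compiled CRUSH map offline. This is only a sketch: the filename, rule number, and input range are placeholders, and older crushtool versions use --ruleset instead of --rule.

    # Export the cluster's compiled CRUSH map, then simulate placing
    # sample inputs 0..9 through rule 0 with 3 replicas
    ceph osd getcrushmap -o crushmap.bin
    crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --min-x 0 --max-x 9 --show-mappings
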
Pool
Pools
Pools are logical partitions for storing objects.
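
For example (the pool name, PG count, and object name below are arbitrary placeholders), a pool can be created and an object stored in it with the standard CLI:

    # Create a pool with 128 placement groups
    ceph osd pool create mypool 128

    # Store a local file as an object in the pool, then list the pool's objects
    rados -p mypool put my-object ./somefile.txt
    rados -p mypool ls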

CLUSTER MAP

Ceph depends upon Ceph Clients and Ceph OSD Daemons having knowledge of the cluster topology, which is inclusive of 5 maps collectively referred to as the “Cluster Map”:

  1. The Monitor Map: Contains the cluster fsid, and the position, name, address, and port of each monitor. It also indicates the current epoch, when the map was created, and the last time it changed. To view a monitor map, execute ceph mon dump.
  2. The OSD Map: Contains the cluster fsid, when the map was created and last modified, a list of pools, replica sizes, PG numbers, a list of OSDs and their status (e.g., up, in). To view an OSD map, execute ceph osd dump.
  3. The PG Map: Contains the PG version, its time stamp, the last OSD map epoch, the full ratios, and details on each placement group such as the PG ID, the Up Set, the Acting Set, the state of the PG (e.g., active + clean), and data usage statistics for each pool.
  4. The CRUSH Map: Contains a list of storage devices, the failure domain hierarchy (e.g., device, host, rack, row, room, etc.), and rules for traversing the hierarchy when storing data. To view a CRUSH map, execute ceph osd getcrushmap -o {filename}; then, decompile it by executing crushtool -d {comp-crushmap-filename} -o {decomp-crushmap-filename}. You can view the decompiled map in a text editor or with cat.
  5. The MDS Map: Contains the current MDS map epoch, when the map was created, and the last time it changed. It also contains the pool for storing metadata, a list of metadata servers, and which metadata servers are up and in. To view an MDS map, execute ceph mds dump.

Each map maintains an iterative history of its operating state changes. Ceph Monitors maintain a master copy of the cluster map including the cluster members, state, changes, and the overall health of the Ceph Storage Cluster.
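
Assuming admin access on a cluster node, a quick way to see the monitors' view of the cluster map and overall health is the following (output naturally varies by cluster):

    # Summarize cluster health and the current epochs of the maps
    ceph -s

    # Dump the monitor map, including its epoch and when it last changed
    ceph mon dump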

[References]
1. http://docs.ceph.com/docs/master/architecture/#cluster-map

2. http://ceph.com/
