As a distributed storage system, Ceph is made up of many components, such as OSDs, MONs, and MDSs. Each component carries out its own function, and each type can be deployed across multiple nodes to provide high availability. Monitoring the state of a Ceph cluster therefore includes monitoring the state of each of these components.
With the ceph command, cluster information can be retrieved from any mon, osd, or mds node.
For example:
If the configuration file and keyring are not in their default locations, or the cluster name is not 'ceph', remember to supply the configuration file and keyring paths:
ceph -c /path/to/conf -k /path/to/keyring
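A minimal sketch of running against a non-default cluster; the cluster name and both paths below are hypothetical examples:
# "mycluster" and both paths are examples only
ceph --cluster mycluster -c /etc/ceph/mycluster.conf -k /etc/ceph/mycluster.client.admin.keyring health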
Example of the ceph command in interactive mode:
ceph> health
HEALTH_OK
ceph> status
cluster f649b128-963c-4802-ae17-5a76f36c4c76
health HEALTH_OK
monmap e8: 5 mons at {mon1=172.17.0.2:6789/0,mon2=172.17.0.3:6789/0,mon3=172.17.0.4:6789/0,mon4=172.17.0.9:6789/0,mon5=172.17.0.10:6789/0}, election epoch 38, quorum 0,1,2,3,4 mon1,mon2,mon3,mon4,mon5
osdmap e29: 4 osds: 4 up, 4 in
pgmap v17442: 128 pgs, 1 pools, 0 bytes data, 0 objects
42197 MB used, 1595 GB / 1636 GB avail
128 active+clean
ceph> quorum_status
{"election_epoch":38,"quorum":[0,1,2,3,4],"quorum_names":["mon1","mon2","mon3","mon4","mon5"],"quorum_leader_name":"mon1","monmap":{"epoch":8,"fsid":"f649b128-963c-4802-ae17-5a76f36c4c76","modified":"2014-12-10 11:52:43.961159","created":"2014-12-09 15:24:33.339570","mons":[{"rank":0,"name":"mon1","addr":"172.17.0.2:6789\/0"},{"rank":1,"name":"mon2","addr":"172.17.0.3:6789\/0"},{"rank":2,"name":"mon3","addr":"172.17.0.4:6789\/0"},{"rank":3,"name":"mon4","addr":"172.17.0.9:6789\/0"},{"rank":4,"name":"mon5","addr":"172.17.0.10:6789\/0"}]}}
ceph> mon_status
{"name":"mon2","rank":1,"state":"peon","election_epoch":38,"quorum":[0,1,2,3,4],"outside_quorum":[],"extra_probe_peers":[],"sync_provider":[],"monmap":{"epoch":8,"fsid":"f649b128-963c-4802-ae17-5a76f36c4c76","modified":"2014-12-10 11:52:43.961159","created":"2014-12-09 15:24:33.339570","mons":[{"rank":0,"name":"mon1","addr":"172.17.0.2:6789\/0"},{"rank":1,"name":"mon2","addr":"172.17.0.3:6789\/0"},{"rank":2,"name":"mon3","addr":"172.17.0.4:6789\/0"},{"rank":3,"name":"mon4","addr":"172.17.0.9:6789\/0"},{"rank":4,"name":"mon5","addr":"172.17.0.10:6789\/0"}]}}
Watching cluster status:
ceph> quit
[root@mon1 ~]# ceph -w
cluster f649b128-963c-4802-ae17-5a76f36c4c76
health HEALTH_OK
monmap e8: 5 mons at {mon1=172.17.0.2:6789/0,mon2=172.17.0.3:6789/0,mon3=172.17.0.4:6789/0,mon4=172.17.0.9:6789/0,mon5=172.17.0.10:6789/0}, election epoch 38, quorum 0,1,2,3,4 mon1,mon2,mon3,mon4,mon5
osdmap e29: 4 osds: 4 up, 4 in
pgmap v17444: 128 pgs, 1 pools, 0 bytes data, 0 objects
42199 MB used, 1595 GB / 1636 GB avail
128 active+clean
2014-12-15 14:35:07.694309 mon.0 [INF] pgmap v17444: 128 pgs: 128 active+clean; 0 bytes data, 42199 MB used, 1595 GB / 1636 GB avail
2014-12-15 14:36:08.924492 mon.0 [INF] pgmap v17445: 128 pgs: 128 active+clean; 0 bytes data, 42201 MB used, 1595 GB / 1636 GB avail
2014-12-15 14:36:10.969529 mon.0 [INF] pgmap v17446: 128 pgs: 128 active+clean; 0 bytes data, 42203 MB used, 1595 GB / 1636 GB avail
The output includes:
Cluster ID
Cluster health status
The monitor map epoch and the status of the monitor quorum
The OSD map epoch and the status of OSDs
The placement group map version
The number of placement groups and pools
The notional amount of data stored and the number of objects stored
The total amount of data stored.
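For unattended monitoring, a minimal sketch is to poll ceph health and alert on anything other than HEALTH_OK; the echo below is a placeholder for a real alert hook:
state=$(ceph health)
if [ "$state" != "HEALTH_OK" ]; then
    echo "ceph cluster unhealthy: $state"    # replace with a real alerting action
fi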
Checking cluster capacity usage:
[root@mon1 ~]# ceph df
GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED
    1636G     1595G     42206M       2.52
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS
    rbd      0      0        0         797G          0
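Per the monitor command list at the end of this post (df {detail}), a more detailed breakdown is available, and the output format can be switched for scripting:
ceph df detail            # more detailed per-pool statistics
ceph df -f json-pretty    # machine-readable output for scripts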
Checking cluster status:
ceph status
Or:
ceph -s
In interactive mode, type status and press Enter.
ceph> status
[root@mon1 ~]# ceph status
cluster f649b128-963c-4802-ae17-5a76f36c4c76
health HEALTH_OK
monmap e8: 5 mons at {mon1=172.17.0.2:6789/0,mon2=172.17.0.3:6789/0,mon3=172.17.0.4:6789/0,mon4=172.17.0.9:6789/0,mon5=172.17.0.10:6789/0}, election epoch 38, quorum 0,1,2,3,4 mon1,mon2,mon3,mon4,mon5
osdmap e29: 4 osds: 4 up, 4 in
pgmap v17454: 128 pgs, 1 pools, 0 bytes data, 0 objects
42210 MB used, 1595 GB / 1636 GB avail
128 active+clean
[root@mon1 ~]# ceph -s
cluster f649b128-963c-4802-ae17-5a76f36c4c76
health HEALTH_OK
monmap e8: 5 mons at {mon1=172.17.0.2:6789/0,mon2=172.17.0.3:6789/0,mon3=172.17.0.4:6789/0,mon4=172.17.0.9:6789/0,mon5=172.17.0.10:6789/0}, election epoch 38, quorum 0,1,2,3,4 mon1,mon2,mon3,mon4,mon5
osdmap e29: 4 osds: 4 up, 4 in
pgmap v17454: 128 pgs, 1 pools, 0 bytes data, 0 objects
42210 MB used, 1595 GB / 1636 GB avail
128 active+clean
[root@mon1 ~]# ceph
ceph> status
cluster f649b128-963c-4802-ae17-5a76f36c4c76
health HEALTH_OK
monmap e8: 5 mons at {mon1=172.17.0.2:6789/0,mon2=172.17.0.3:6789/0,mon3=172.17.0.4:6789/0,mon4=172.17.0.9:6789/0,mon5=172.17.0.10:6789/0}, election epoch 38, quorum 0,1,2,3,4 mon1,mon2,mon3,mon4,mon5
osdmap e29: 4 osds: 4 up, 4 in
pgmap v17454: 128 pgs, 1 pools, 0 bytes data, 0 objects
42210 MB used, 1595 GB / 1636 GB avail
128 active+clean
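Unlike ceph -s, ceph -w keeps printing live cluster events after the status block. Per the --help output at the end of this post, the event stream has per-level watch flags, e.g. (a sketch):
ceph -w --watch-warn     # only watch warn events
ceph -w --watch-error    # only watch error events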
Checking OSD status:
ceph osd stat
Or:
ceph osd dump
You can also view OSDs according to their position in the CRUSH map:
ceph osd tree
Example output:
[root@mon1 ~]# ceph osd dump
epoch 29
fsid f649b128-963c-4802-ae17-5a76f36c4c76
created 2014-12-09 15:47:07.241109
modified 2014-12-09 16:08:25.501726
flags
pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 28 flags hashpspool stripe_width 0
max_osd 4
osd.0 up in weight 1 up_from 24 up_thru 28 down_at 0 last_clean_interval [0,0) 172.17.0.5:6800/435 172.18.0.5:6800/435 172.18.0.5:6801/435 172.17.0.5:6801/435 exists,up 854777b2-c188-4509-9df4-02f57bd17e12
osd.1 up in weight 1 up_from 22 up_thru 28 down_at 0 last_clean_interval [0,0) 172.17.0.6:6800/474 172.18.0.6:6800/474 172.18.0.6:6801/474 172.17.0.6:6801/474 exists,up d13959a4-a3fc-4589-91cd-7104fcd8dbe9
osd.2 up in weight 1 up_from 20 up_thru 28 down_at 0 last_clean_interval [0,0) 172.17.0.7:6800/447 172.18.0.7:6800/447 172.18.0.7:6801/447 172.17.0.7:6801/447 exists,up d28bdb68-e8ee-4ca8-be5d-7e86438e7663
osd.3 up in weight 1 up_from 18 up_thru 28 down_at 0 last_clean_interval [0,0) 172.17.0.8:6800/653 172.18.0.8:6800/653 172.18.0.8:6801/653 172.17.0.8:6801/653 exists,up c2e86146-fedc-4486-8d2f-ead6f473e841
[root@mon1 ~]# ceph osd tree
# id    weight  type name       up/down reweight
-1      4       root default
-2      1               host osd1
0       1                       osd.0   up      1
-3      1               host osd2
1       1                       osd.1   up      1
-4      1               host osd3
2       1                       osd.2   up      1
-5      1               host osd4
3       1                       osd.3   up      1
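To drill into a single OSD, the osd find and osd metadata monitor commands (both listed in the --help output below) can be used; osd.0 here is simply the first OSD of this example cluster:
ceph osd find 0        # locate osd.0 in the CRUSH map
ceph osd metadata 0    # fetch metadata (host, addresses, ...) for osd.0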
Monitoring MON status:
ceph mon stat
Or:
ceph mon dump
To check the quorum status for the monitor cluster, execute the following:
ceph quorum_status
Example output:
[root@mon1 ~]# ceph mon stat
e8: 5 mons at {mon1=172.17.0.2:6789/0,mon2=172.17.0.3:6789/0,mon3=172.17.0.4:6789/0,mon4=172.17.0.9:6789/0,mon5=172.17.0.10:6789/0}, election epoch 38, quorum 0,1,2,3,4 mon1,mon2,mon3,mon4,mon5
[root@mon1 ~]# ceph mon dump
dumped monmap epoch 8
epoch 8
fsid f649b128-963c-4802-ae17-5a76f36c4c76
last_changed 2014-12-10 11:52:43.961159
created 2014-12-09 15:24:33.339570
0: 172.17.0.2:6789/0 mon.mon1
1: 172.17.0.3:6789/0 mon.mon2
2: 172.17.0.4:6789/0 mon.mon3
3: 172.17.0.9:6789/0 mon.mon4
4: 172.17.0.10:6789/0 mon.mon5
[root@mon1 ~]# ceph quorum_status
{"election_epoch":38,"quorum":[0,1,2,3,4],"quorum_names":["mon1","mon2","mon3","mon4","mon5"],"quorum_leader_name":"mon1","monmap":{"epoch":8,"fsid":"f649b128-963c-4802-ae17-5a76f36c4c76","modified":"2014-12-10 11:52:43.961159","created":"2014-12-09 15:24:33.339570","mons":[{"rank":0,"name":"mon1","addr":"172.17.0.2:6789\/0"},{"rank":1,"name":"mon2","addr":"172.17.0.3:6789\/0"},{"rank":2,"name":"mon3","addr":"172.17.0.4:6789\/0"},{"rank":3,"name":"mon4","addr":"172.17.0.9:6789\/0"},{"rank":4,"name":"mon5","addr":"172.17.0.10:6789\/0"}]}}
Monitoring MDS status:
[root@mon1 ~]# ceph mds stat
e1: 0/0/0 up
[root@mon1 ~]# ceph mds dump
dumped mdsmap epoch 1
epoch 1
flags 0
created 0.000000
modified 2014-12-09 15:47:07.240898
tableserver 0
root 0
session_timeout 0
session_autoclose 0
max_file_size 0
last_failure 0
last_failure_osd_epoch 0
compat compat={},rocompat={},incompat={}
max_mds 0
in
up {}
failed
stopped
data_pools
metadata_pool 0
inline_data disabled
Checking placement group status:
[root@mon1 ~]# ceph pg dump
dumped all in format plain
version 17466
stamp 2014-12-15 14:46:11.079706
last_osdmap_epoch 29
last_pg_scan 26
full_ratio 0.95
nearfull_ratio 0.85
pg_stat objects mip degr misp unf bytes log disklog state state_stamp v reported up up_primary acting acting_primary last_scrub scrub_stamp last_deep_scrub deep_scrub_stamp
0.2d 0 0 0 0 0 0 0 0 active+clean 2014-12-14 16:09:51.515740 0'0 29:33 [1,3] 1 [1,3] 1 0'0 2014-12-14 16:09:51.515668 0'0 2014-12-09 16:05:52.202755
................
ceph pg map {pool-num}.{pg-id}
[root@mon1 ~]# ceph pg map 0.2d
osdmap e29 pg 0.2d (0.2d) -> up [1,3] acting [1,3]
ceph pg {poolnum}.{pg-id} query
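For example, against the pg mapped above:
ceph pg 0.2d query    # detailed peering, recovery and scrub state for pg 0.2d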
ceph pg dump -o {filename} --format=json
IDENTIFYING TROUBLED PGS
ceph pg dump_stuck [unclean|inactive|stale]
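For example (on a healthy cluster like the one above, these return an empty list):
ceph pg dump_stuck unclean
ceph pg dump_stuck inactive
ceph pg dump_stuck stale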
FINDING AN OBJECT LOCATION
ceph osd map {poolname} {object-name}
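For example, against the rbd pool of this cluster ("test-object" is a made-up object name):
ceph osd map rbd test-object    # reports the pg and acting OSD set for the object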
You can also connect to a Ceph daemon through its admin socket and query that particular daemon directly, for example:
ceph --admin-daemon {/path/to/admin/socket} config show
[root@mon1 ~]# ceph --admin-daemon /var/run/ceph/ceph-mon.mon1.asok config show
{ "name": "mon.mon1",
"cluster": "ceph",
"debug_none": "0\/5",
"debug_lockdep": "0\/1",
"debug_context": "0\/1",
"debug_crush": "1\/1",
"debug_mds": "1\/5",
"debug_mds_balancer": "1\/5",
"debug_mds_locker": "1\/5",
............
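The admin socket accepts other commands as well; passing help lists what the connected daemon supports (see --admin-daemon in the --help output below):
ceph --admin-daemon /var/run/ceph/ceph-mon.mon1.asok help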
[References]
1. http://ceph.com/docs/master/rados/operations/monitoring/
2. http://docs.ceph.com/docs/master/rados/operations/monitoring-osd-pg/
3. ceph -h
[root@mon1 ~]# ceph --help
General usage:
==============
usage: ceph [-h] [-c CEPHCONF] [-i INPUT_FILE] [-o OUTPUT_FILE]
[--id CLIENT_ID] [--name CLIENT_NAME] [--cluster CLUSTER]
[--admin-daemon ADMIN_SOCKET] [--admin-socket ADMIN_SOCKET_NOPE]
[-s] [-w] [--watch-debug] [--watch-info] [--watch-sec]
[--watch-warn] [--watch-error] [--version] [--verbose] [--concise]
[-f {json,json-pretty,xml,xml-pretty,plain}]
[--connect-timeout CLUSTER_TIMEOUT]
Ceph administration tool
optional arguments:
-h, --help request mon help
-c CEPHCONF, --conf CEPHCONF
ceph configuration file
-i INPUT_FILE, --in-file INPUT_FILE
input file
-o OUTPUT_FILE, --out-file OUTPUT_FILE
output file
--id CLIENT_ID, --user CLIENT_ID
client id for authentication
--name CLIENT_NAME, -n CLIENT_NAME
client name for authentication
--cluster CLUSTER cluster name
--admin-daemon ADMIN_SOCKET
submit admin-socket commands ("help" for help)
--admin-socket ADMIN_SOCKET_NOPE
you probably mean --admin-daemon
-s, --status show cluster status
-w, --watch watch live cluster changes
--watch-debug watch debug events
--watch-info watch info events
--watch-sec watch security events
--watch-warn watch warn events
--watch-error watch error events
--version, -v display version
--verbose make verbose
--concise make less verbose
-f {json,json-pretty,xml,xml-pretty,plain}, --format {json,json-pretty,xml,xml-pretty,plain}
--connect-timeout CLUSTER_TIMEOUT
set a timeout for connecting to the cluster
Monitor commands:
=================
[Contacting monitor, timeout after 5 seconds]
auth add <entity> {<caps> [<caps>...]} add auth info for <entity> from input
file, or random key if no input is
given, and/or any caps specified in
the command
auth caps <entity> <caps> [<caps>...] update caps for <name> from caps
specified in the command
auth del <entity> delete all caps for <name>
auth export {<entity>} write keyring for requested entity, or
master keyring if none given
auth get <entity> write keyring file with requested key
auth get-key <entity> display requested key
auth get-or-create <entity> {<caps> add auth info for <entity> from input
[<caps>...]} file, or random key if no input given,
and/or any caps specified in the
command
auth get-or-create-key <entity> {<caps> get, or add, key for <name> from
[<caps>...]} system/caps pairs specified in the
command. If key already exists, any
given caps must match the existing
caps for that key.
auth import auth import: read keyring file from -i
<file>
auth list list authentication state
auth print-key <entity> display requested key
auth print_key <entity> display requested key
compact cause compaction of monitor's leveldb
storage
config-key del <key> delete <key>
config-key exists <key> check for <key>'s existence
config-key get <key> get <key>
config-key list list keys
config-key put <key> {<val>} put <key>, value <val>
df {detail} show cluster free space stats
fs ls list filesystems
fs new <fs_name> <metadata> <data> make new filesystem using named pools
<metadata> and <data>
fs rm <fs_name> {--yes-i-really-mean-it} disable the named filesystem
fsid show cluster FSID/UUID
health {detail} show cluster health
heap dump|start_profiler|stop_profiler| show heap usage info (available only
release|stats if compiled with tcmalloc)
injectargs <injected_args> [<injected_ inject config arguments into monitor
args>...]
log <logtext> [<logtext>...] log supplied text to the monitor log
mds add_data_pool <pool> add data pool <pool>
mds cluster_down take MDS cluster down
mds cluster_up bring MDS cluster up
mds compat rm_compat <int[0-]> remove compatible feature
mds compat rm_incompat <int[0-]> remove incompatible feature
mds compat show show mds compatibility settings
mds deactivate <who> stop mds
mds dump {<int[0-]>} dump info, optionally from epoch
mds fail <who> force mds to status failed
mds getmap {<int[0-]>} get MDS map, optionally from epoch
mds newfs <int[0-]> <int[0-]> {--yes-i- make new filesystem using pools
really-mean-it} <metadata> and <data>
mds remove_data_pool <pool> remove data pool <pool>
mds rm <int[0-]> <name (type.id)> remove nonactive mds
mds rmfailed <int[0-]> remove failed mds
mds set max_mds|max_file_size|allow_new_ set mds parameter <var> to <val>
snaps|inline_data <val> {<confirm>}
mds set_max_mds <int[0-]> set max MDS index
mds set_state <int[0-]> <int[0-20]> set mds state of <gid> to <numeric-
state>
mds setmap <int[0-]> set mds map; must supply correct epoch
number
mds stat show MDS status
mds stop <who> stop mds
mds tell <who> <args> [<args>...] send command to particular mds
mon add <name> <IPaddr[:port]> add new monitor named <name> at <addr>
mon dump {<int[0-]>} dump formatted monmap (optionally from
epoch)
mon getmap {<int[0-]>} get monmap
mon remove <name> remove monitor named <name>
mon stat summarize monitor status
mon_status report status of monitors
osd blacklist add|rm <EntityAddr> add (optionally until <expire> seconds
{<float[0.0-]>} from now) or remove <addr> from
blacklist
osd blacklist ls show blacklisted clients
osd blocked-by print histogram of which OSDs are
blocking their peers
osd create {<uuid>} create new osd (with optional UUID)
osd crush add <osdname (id|osd.id)> add or update crushmap position and
<float[0.0-]> <args> [<args>...] weight for <name> with <weight> and
location <args>
osd crush add-bucket <name> <type> add no-parent (probably root) crush
bucket <name> of type <type>
osd crush create-or-move <osdname (id| create entry or move existing entry
osd.id)> <float[0.0-]> <args> [<args>.. for <name> <weight> at/to location
.] <args>
osd crush dump dump crush map
osd crush link <name> <args> [<args>...] link existing entry for <name> under
location <args>
osd crush move <name> <args> [<args>...] move existing entry for <name> to
location <args>
osd crush remove <name> {<ancestor>} remove <name> from crush map (
everywhere, or just at <ancestor>)
osd crush reweight <name> <float[0.0-]> change <name>'s weight to <weight> in
crush map
osd crush reweight-subtree <name> change all leaf items beneath <name>
<float[0.0-]> to <weight> in crush map
osd crush rm <name> {<ancestor>} remove <name> from crush map (
everywhere, or just at <ancestor>)
osd crush rule create-erasure <name> create crush rule <name> for erasure
{<profile>} coded pool created with <profile> (
default default)
osd crush rule create-simple <name> create crush rule <name> to start from
<root> <type> {firstn|indep} <root>, replicate across buckets of
type <type>, using a choose mode of
<firstn|indep> (default firstn; indep
best for erasure pools)
osd crush rule dump {<name>} dump crush rule <name> (default all)
osd crush rule list list crush rules
osd crush rule ls list crush rules
osd crush rule rm <name> remove crush rule <name>
osd crush set set crush map from input file
osd crush set <osdname (id|osd.id)> update crushmap position and weight
<float[0.0-]> <args> [<args>...] for <name> to <weight> with location
<args>
osd crush show-tunables show current crush tunables
osd crush tunables legacy|argonaut| set crush tunables values to <profile>
bobtail|firefly|optimal|default
osd crush unlink <name> {<ancestor>} unlink <name> from crush map (
everywhere, or just at <ancestor>)
osd deep-scrub <who> initiate deep scrub on osd <who>
osd down <ids> [<ids>...] set osd(s) <id> [<id>...] down
osd dump {<int[0-]>} print summary of OSD map
osd erasure-code-profile get <name> get erasure code profile <name>
osd erasure-code-profile ls list all erasure code profiles
osd erasure-code-profile rm <name> remove erasure code profile <name>
osd erasure-code-profile set <name> create erasure code profile <name>
{<profile> [<profile>...]} with [<key[=value]> ...] pairs. Add a
--force at the end to override an
existing profile (VERY DANGEROUS)
osd find <int[0-]> find osd <id> in the CRUSH map and
show its location
osd getcrushmap {<int[0-]>} get CRUSH map
osd getmap {<int[0-]>} get OSD map
osd getmaxosd show largest OSD id
osd in <ids> [<ids>...] set osd(s) <id> [<id>...] in
osd lost <int[0-]> {--yes-i-really-mean- mark osd as permanently lost. THIS
it} DESTROYS DATA IF NO MORE REPLICAS
EXIST, BE CAREFUL
osd ls {<int[0-]>} show all OSD ids
osd lspools {<int>} list pools
osd map <poolname> <objectname> find pg for <object> in <pool>
osd metadata <int[0-]> fetch metadata for osd <id>
osd out <ids> [<ids>...] set osd(s) <id> [<id>...] out
osd pause pause osd
osd perf print dump of OSD perf summary stats
osd pg-temp <pgid> {<id> [<id>...]} set pg_temp mapping pgid:[<id> [<id>...
]] (developers only)
osd pool create <poolname> <int[0-]> create pool
{<int[0-]>} {replicated|erasure}
{<erasure_code_profile>} {<ruleset>}
{<int>}
osd pool delete <poolname> {<poolname>} delete pool
{--yes-i-really-really-mean-it}
osd pool get <poolname> size|min_size| get pool parameter <var>
crash_replay_interval|pg_num|pgp_num|
crush_ruleset|hit_set_type|hit_set_
period|hit_set_count|hit_set_fpp|auid|
target_max_objects|target_max_bytes|
cache_target_dirty_ratio|cache_target_
full_ratio|cache_min_flush_age|cache_
min_evict_age|erasure_code_profile|min_
read_recency_for_promote
osd pool get-quota <poolname> obtain object or byte limits for pool
osd pool mksnap <poolname> <snap> make snapshot <snap> in <pool>
osd pool rename <poolname> <poolname> rename <srcpool> to <destpool>
osd pool rmsnap <poolname> <snap> remove snapshot <snap> from <pool>
osd pool set <poolname> size|min_size| set pool parameter <var> to <val>
crash_replay_interval|pg_num|pgp_num|
crush_ruleset|hashpspool|hit_set_type|
hit_set_period|hit_set_count|hit_set_
fpp|debug_fake_ec_pool|target_max_
bytes|target_max_objects|cache_target_
dirty_ratio|cache_target_full_ratio|
cache_min_flush_age|cache_min_evict_
age|auid|min_read_recency_for_promote
<val> {--yes-i-really-mean-it}
osd pool set-quota <poolname> max_ set object or byte limit on pool
objects|max_bytes <val>
osd pool stats {<name>} obtain stats from all pools, or from
specified pool
osd primary-affinity <osdname (id|osd. adjust osd primary-affinity from 0.0 <=
id)> <float[0.0-1.0]> <weight> <= 1.0
osd primary-temp <pgid> <id> set primary_temp mapping pgid:<id>|-1 (
developers only)
osd repair <who> initiate repair on osd <who>
osd reweight <int[0-]> <float[0.0-1.0]> reweight osd to 0.0 < <weight> < 1.0
osd reweight-by-pg <int[100-]> reweight OSDs by PG distribution
{<poolname> [<poolname>...]} [overload-percentage-for-
consideration, default 120]
osd reweight-by-utilization {<int[100- reweight OSDs by utilization [overload-
]>} percentage-for-consideration, default
120]
osd rm <ids> [<ids>...] remove osd(s) <id> [<id>...] in
osd scrub <who> initiate scrub on osd <who>
osd set pause|noup|nodown|noout|noin| set <key>
nobackfill|norecover|noscrub|nodeep-
scrub|notieragent
osd setcrushmap set crush map from input file
osd setmaxosd <int[0-]> set new maximum osd value
osd stat print summary of OSD map
osd thrash <int[0-]> thrash OSDs for <num_epochs>
osd tier add <poolname> <poolname> {-- add the tier <tierpool> (the second
force-nonempty} one) to base pool <pool> (the first
one)
osd tier add-cache <poolname> add a cache <tierpool> (the second one)
<poolname> <int[0-]> of size <size> to existing pool
<pool> (the first one)
osd tier cache-mode <poolname> none| specify the caching mode for cache
writeback|forward|readonly|readforward tier <pool>
osd tier remove <poolname> <poolname> remove the tier <tierpool> (the second
one) from base pool <pool> (the first
one)
osd tier remove-overlay <poolname> remove the overlay pool for base pool
<pool>
osd tier set-overlay <poolname> set the overlay pool for base pool
<poolname> <pool> to be <overlaypool>
osd tree {<int[0-]>} print OSD tree
osd unpause unpause osd
osd unset pause|noup|nodown|noout|noin| unset <key>
nobackfill|norecover|noscrub|nodeep-
scrub|notieragent
pg debug unfound_objects_exist|degraded_ show debug info about pgs
pgs_exist
pg deep-scrub <pgid> start deep-scrub on <pgid>
pg dump {all|summary|sum|delta|pools| show human-readable versions of pg map
osds|pgs|pgs_brief [all|summary|sum| (only 'all' valid with plain)
delta|pools|osds|pgs|pgs_brief...]}
pg dump_json {all|summary|sum|pools| show human-readable version of pg map
osds|pgs [all|summary|sum|pools|osds| in json only
pgs...]}
pg dump_pools_json show pg pools info in json only
pg dump_stuck {inactive|unclean|stale show information about stuck pgs
[inactive|unclean|stale...]} {<int>}
pg force_create_pg <pgid> force creation of pg <pgid>
pg getmap get binary pg map to -o/stdout
pg map <pgid> show mapping of pg to osds
pg repair <pgid> start repair on <pgid>
pg scrub <pgid> start scrub on <pgid>
pg send_pg_creates trigger pg creates to be issued
pg set_full_ratio <float[0.0-1.0]> set ratio at which pgs are considered
full
pg set_nearfull_ratio <float[0.0-1.0]> set ratio at which pgs are considered
nearly full
pg stat show placement group status.
quorum enter|exit enter or exit quorum
quorum_status report status of monitor quorum
report {<tags> [<tags>...]} report full status of cluster,
optional title tag strings
scrub scrub the monitor stores
status show cluster status
sync force {--yes-i-really-mean-it} {-- force sync of and clear monitor store
i-know-what-i-am-doing}
tell <name (type.id)> <args> [<args>...] send a command to a specific daemon