
ceph

Manage and monitor Ceph storage clusters

TLDR

Check cluster health status

$ ceph status

Check cluster usage stats
$ ceph df

Get the statistics for the placement groups in a cluster
$ ceph pg dump --format [plain]

Create a storage pool with a given number of placement groups
$ ceph osd pool create [pool_name] [pg_num]

Delete a storage pool (the name must be given twice as a safety check, and mon_allow_pool_delete must be enabled on the monitors)
$ ceph osd pool delete [pool_name] [pool_name] --yes-i-really-really-mean-it

Rename a storage pool
$ ceph osd pool rename [current_name] [new_name]

Instruct a placement group to repair itself
$ ceph pg repair [pg_id]

SYNOPSIS

ceph [-h|--help] [-v|--version] [-d|--debug module=level...] [-i id|--id id] [--name name.type] [-k keyring|--keyring keyring] [-c conffile|--conf conffile] [-m monaddr|--mon monaddr] [command [args...]]

PARAMETERS

-h, --help
    Show help message and exit

-v, --version
    Show version and exit

-d, --debug module=level
    Set debug level for modules (e.g., mon=20)

-q, --quiet
    Quiet mode, fewer messages

-f, --foreground
    Run in foreground

-i id, --id id
    Set client ID for authentication

--name name.type
    Set client name (default: client.admin)

-k keyring, --keyring keyring
    Path to keyring file

-c conffile, --conf conffile
    Path to configuration file

-m monaddr, --mon monaddr
    Connect to monitor at address

--setuser user
    Set process UID

--setgroup group
    Set process GID

-a, --auth
    Require authentication (default)

--no-ask-password
    Do not prompt for password
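
A minimal sketch of how these options combine on a single invocation; the monitor address and keyring path below are illustrative placeholders, not defaults:

$ ceph -m 192.168.1.10:6789 -k /etc/ceph/ceph.client.admin.keyring --id admin status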

DESCRIPTION

Ceph is a scalable, open-source distributed storage system delivering object, block, and file storage. The ceph command serves as the central CLI for administrators to monitor, configure, and manage Ceph clusters.

It provides subcommands to check cluster health (ceph status), view OSD hierarchy (ceph osd tree), inspect monitors (ceph mon stat), manage pools (ceph osd pool create), and perform maintenance like rebalancing or scrubbing. Global options control authentication, logging, and monitor connections.
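
For example, a short maintenance session might inspect the OSD hierarchy, create a pool, and trigger a scrub; the pool name, PG count, and OSD id here are illustrative:

$ ceph osd tree
$ ceph osd pool create mypool 128
$ ceph osd scrub 0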

Essential for deployment and operations, it connects to Ceph monitors over TCP and authenticates with cephx. Users specify client IDs and keyrings for access control. Debug modes aid troubleshooting, while the quiet and foreground flags suit scripting or daemon-like use.
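
As an illustration, the following raises monitor debug output for a single status call, using the --debug form documented above; the configuration path and debug level are assumptions, adjust as needed:

$ ceph -c /etc/ceph/ceph.conf --id admin --debug mon=20 status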

In production, ceph integrates with tools like ceph-mgr for dashboards and orchestration via ceph-ansible or cephadm. It supports massive scales, handling petabytes across thousands of nodes, with features like erasure coding and replication for data durability.

CAVEATS

Requires a running Ceph cluster and valid credentials; admin privileges are needed for many operations. Network access to the monitors is mandatory. Debug output can be verbose.

COMMON SUBCOMMANDS

status: Cluster summary
health: Health details
osd tree: OSD/CRUSH map
df: Usage stats
mon stat: Monitor status
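
Most of these also accept --format for machine-readable output, which helps when scripting; json-pretty is one of the supported formats:

$ ceph health --format json-pretty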

AUTHENTICATION NOTES

The default client is client.admin, with keyrings typically stored under /etc/ceph. Use ceph auth get-or-create to provision new clients.
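
For example, to provision a client with read access to the monitors and read/write access to a single pool; the client name, capabilities, and output path are placeholders:

$ ceph auth get-or-create client.backup mon 'allow r' osd 'allow rw pool=backups' -o /etc/ceph/ceph.client.backup.keyring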

HISTORY

Developed by Sage Weil starting in 2004 as a PhD project at UCSC; open-sourced in 2007. Inktank was founded in 2011 to commercialize it and was acquired by Red Hat in 2014. Ceph has since evolved into a mature platform, with Pacific (16.2, 2021) and later releases emphasizing cephadm orchestration.

SEE ALSO

rados(8), rbd(8), ceph-mgr(8), ceph-mon(8), ceph-osd(8), ceph-volume(8), cephadm(8)
