
ceph

Manage and monitor Ceph storage clusters

TLDR

Check cluster health status

$ ceph status

Check cluster usage stats
$ ceph df

Get the statistics for the placement groups in a cluster
$ ceph pg dump --format [plain]

Create a storage pool
$ ceph osd pool create [pool_name] [pg_num]

Delete a storage pool
$ ceph osd pool delete [pool_name] [pool_name] --yes-i-really-really-mean-it

Rename a storage pool
$ ceph osd pool rename [current_name] [new_name]

Instruct a placement group to repair itself
$ ceph pg repair [pg_id]

SYNOPSIS

ceph [options] [command] [command-options]

PARAMETERS

--cluster [cluster_name]
    Specifies the name of the Ceph cluster to connect to. Defaults to 'ceph'.

--id [user_id]
    Specifies the client ID (the part of the entity name after 'client.') to authenticate with. Defaults to 'admin'.

--name [client.name]
    Specifies the full entity name to authenticate with (e.g. 'client.admin'). Overrides the --id option.

--keyring [path]
    Specifies the path to the keyring file containing authentication keys.

--conf [path]
    Specifies the path to the Ceph configuration file.

--mon-host [host1,host2,...]
    Specifies a comma-separated list of monitor hosts to connect to.

--version
    Displays the Ceph version.

--help
    Displays the command's help message.

status
    Displays the overall cluster status.

osd stat
    Displays the status of OSDs.

osd pool ls
    Lists all the available pools.

df
    Displays disk usage information.

mgr module ls
    Lists manager (MGR) modules and shows which are enabled.
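
These global options can be combined with any subcommand. A brief illustration, assuming the conventional /etc/ceph paths and an 'admin' client (the bracketed values are placeholders to adjust for your deployment):

$ ceph --conf [/etc/ceph/ceph.conf] --id [admin] --keyring [/etc/ceph/ceph.client.admin.keyring] status

$ ceph --name [client.admin] --mon-host [mon1,mon2,mon3] df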

DESCRIPTION

The ceph command is the primary command-line interface (CLI) for managing and monitoring Ceph storage clusters.

It provides access to a wide range of functionalities, including cluster status monitoring, pool management (creation, deletion, modifications), object placement control, data manipulation, user management, and configuration adjustments.

Administrators use it to interact with the Ceph cluster, query its state, and perform actions to maintain its health and optimize its performance.

The command's versatility makes it an indispensable tool for Ceph administrators, enabling them to effectively operate and maintain their storage infrastructure. Effective usage requires understanding Ceph's architectural concepts, such as OSDs, monitors, and placement groups.
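A few representative tasks from these areas, sketched with placeholder names (the bracketed pool, client, and option values are illustrative, not defaults; the ceph config subcommand assumes a reasonably recent release):

List pools together with their settings
$ ceph osd pool ls detail

Create credentials for a new client, scoped to one pool (user management)
$ ceph auth get-or-create client.[username] mon 'allow r' osd 'allow rw pool=[pool_name]'

Change a configuration option at runtime
$ ceph config set osd [option_name] [value]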

CAVEATS

Many `ceph` commands require appropriate permissions within the Ceph cluster. Incorrect usage can lead to data loss or cluster instability. Always consult the Ceph documentation before executing commands, especially those that modify cluster configuration or data.
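As an example of such a safeguard, pool deletion is refused unless the monitors explicitly allow it. A sketch of checking your credentials and enabling the guard on a recent release before a deliberate delete (consider switching the setting back off afterwards):

Show the capabilities granted to the admin client
$ ceph auth get client.admin

Temporarily permit pool deletion
$ ceph config set mon mon_allow_pool_delete true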

TROUBLESHOOTING

If the ceph command fails, check the following:
1. Ceph cluster status.
2. Authentication credentials.
3. Ceph configuration file.
4. Network connectivity to the monitors.
Reviewing Ceph logs can provide additional insights into the cause of the problem.
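A minimal set of commands for these checks, assuming the default /etc/ceph paths (bracketed values are placeholders):

1. Cluster status, with a timeout so the command fails fast if the monitors are unreachable
$ ceph --connect-timeout [seconds] status

2. Keyring present and readable for your authentication credentials
$ ls -l /etc/ceph/*.keyring

3. Monitor addresses and keyring settings in the configuration file
$ cat /etc/ceph/ceph.conf

4. Network connectivity to a monitor host
$ ping [monitor_host]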

HISTORY

The ceph command has evolved alongside the Ceph storage system itself. Initial development focused on basic cluster monitoring and management. As Ceph matured, the ceph command gained more features, becoming the central tool for administering complex Ceph deployments. Active development continues with each Ceph release, introducing new features, improving existing ones, and enhancing overall usability. Its usage has expanded from early adopters to enterprise environments, reflecting Ceph's growing adoption as a robust storage solution.

SEE ALSO

rados(8), rbd(8)
