
pvesm

Manage Proxmox storage

TLDR

Get status for all datastores

$ pvesm [[st|status]]

List storage contents
$ pvesm [[l|list]] [storage_name]

Add a directory storage
$ pvesm add [[d|dir]] [storage_name] --path [path/to/directory]

Set a storage to contain specific content
$ pvesm set [storage_name] --content [iso,images,backup,vztmpl,...]

Delete a file from storage
$ pvesm free [local:iso/archlinux-2025.08.01-x86_64.iso]

Remove a storage
$ pvesm [[r|remove]] [storage_name]

SYNOPSIS

pvesm <command> [<id>] [OPTIONS]
or
pvesm add <type> <id> [OPTIONS]

PARAMETERS

add <type> <id>
    Adds a new storage backend of a specified type with a unique ID.

remove <id>
    Removes an existing storage backend configuration identified by its ID.

set <id>
    Modifies properties and options of an existing storage backend.

list <id>
    Lists the content (volumes) of the storage identified by its ID.

scan <type>
    Scans for available storage resources of a specific type (e.g., NFS exports, iSCSI targets, LVM volume groups, ZFS pools); see the example after this list.

status
    Shows the current status and usage information for all configured storages.

showconfig
    Outputs the raw, detailed configuration of all storage backends, often used for debugging or backup.
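
For instance, scan can discover NFS exports on a remote host or ZFS pools on the local node; a brief sketch, where the server address 192.0.2.10 is only an illustrative value:

$ pvesm scan nfs 192.0.2.10
$ pvesm scan zfs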

DESCRIPTION

pvesm is the command-line storage manager of Proxmox Virtual Environment (PVE). It lets administrators add, modify, remove, and query storage backends of many types, including local directories, NFS shares, iSCSI targets, LVM (Logical Volume Manager) volumes, ZFS pools, Ceph (RBD and CephFS), and GlusterFS. The tool defines where virtual machine disk images, container root file systems, backups, ISO images, and container templates are stored. Because it operates through the PVE storage subsystem, which uses pmxcfs for cluster-wide consistency, storage defined with pvesm is available consistently across all nodes of a Proxmox cluster.
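
As a minimal sketch of the allocation workflow this storage layer supports (assuming a storage named local and a VM with ID 100, both illustrative), a raw disk image can be created and later released directly with pvesm:

$ pvesm alloc local 100 vm-100-disk-0.raw 4G
$ pvesm free local:100/vm-100-disk-0.raw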

CAVEATS

Requires root privileges or sudo for execution.
Improper use can lead to data loss or render virtual machines/containers inaccessible.
All changes made via pvesm are immediately effective and, in a cluster setup, are synchronized across all nodes via pmxcfs.
Some operations (e.g., modifying storage backing a running VM) might require stopping or migrating affected virtual machines/containers.

COMMON STORAGE TYPES

pvesm supports a wide array of storage types, enabling flexible integration within Proxmox VE deployments. Common types include the following (an example of adding one appears after the list):
dir: Local directory storage.
nfs: Network File System shares.
lvm: Logical Volume Management volumes.
lvmthin: LVM thin-provisioned volumes.
zfspool: ZFS file system pools.
iscsi: iSCSI targets.
rbd: Ceph RADOS Block Device.
cephfs: Ceph File System.
glusterfs: GlusterFS distributed file system.
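
As a sketch of attaching one of these types, the command below adds an NFS share for disk images and backups; the storage ID nfs-store, the server address, and the export path are illustrative placeholders:

$ pvesm add nfs nfs-store --server 192.0.2.20 --export /srv/proxmox --content images,backup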

CONFIGURATION FILE

The configurations managed by pvesm are persistently stored in the cluster's main storage configuration file, typically located at /etc/pve/storage.cfg. This file is part of the pmxcfs and is automatically synchronized across all nodes in a Proxmox cluster.
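
Each entry in storage.cfg pairs a storage type and ID with its options. The snippet below shows how the default local directory storage typically appears; the exact content types may differ per installation:

dir: local
    path /var/lib/vz
    content iso,vztmpl,backup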

HISTORY

pvesm has been a foundational utility in Proxmox VE since its early iterations, evolving alongside the platform's expanding storage capabilities. It was developed to provide a unified command-line interface for managing diverse storage technologies, crucial for PVE's goal of offering a comprehensive virtualization solution. Its design ensures cluster-aware configuration management, integrating tightly with pmxcfs to maintain consistency across Proxmox nodes. The command continues to be actively maintained and enhanced to support new storage backends and features as the Proxmox ecosystem grows.

SEE ALSO

qm(1), pct(1), pmxcfs(5), pve-manager(7), storage.cfg(5)
