LinuxCommandLibrary

qm-migrate

Migrate virtual machines between Proxmox nodes

TLDR

Migrate a specific virtual machine

$ qm [[mi|migrate]] [vm_id] [target]

Override the current I/O bandwidth limit with 10 KiB/s
$ qm [[mi|migrate]] [vm_id] [target] --bwlimit 10

Allow migration of virtual machines using local devices (root only)
$ qm [[mi|migrate]] [vm_id] [target] --force true

Use online/live migration if a virtual machine is running
$ qm [[mi|migrate]] [vm_id] [target] --online true

Enable live storage migration for local disks
$ qm [[mi|migrate]] [vm_id] [target] --with-local-disks true

SYNOPSIS

qm migrate <vmid> <target> [OPTIONS]

PARAMETERS

<vmid>
    The unique ID of the virtual machine to migrate.

<target>
    The name of the target Proxmox VE node.

--bwlimit <integer> (KiB/s)
    Overrides the default I/O bandwidth limit for the migration, in kibibytes per second. Useful for controlling network load (see the example commands after this list).

--force <boolean>
    Allows migration of virtual machines that use local devices (for example USB or PCI passthrough). Only root may use this option.

--migration_network <string>
    CIDR of the (sub)network used for migration traffic. Useful for routing migrations over a dedicated migration network between cluster nodes.

--migration_type <string>
    Defines the security type of the migration. Options are 'secure' (default, tunnels traffic over SSH) or 'insecure' (plain connection); the latter can improve performance on completely private, trusted networks.

--online <boolean>
    If set to 'true', uses online (live) migration: the VM keeps running during the process with minimal downtime. Ignored if the VM is stopped.

--targetstorage <string>
    Provides a mapping from source storage to target storage for VM disks. Format: <source_storage>:<target_storage>[,...]. A single storage ID maps all source storages to that storage, and the special value '1' maps each source storage to itself. Essential when migrating VMs with local disks.

--with-local-disks <boolean>
    If set to 'true', enables live storage migration for local disks: the disk data is copied to the target node as part of the migration.
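
The following sketches combine some of the options above. The VM ID 100, the target node name pve2, and the network CIDR are placeholders, not values from this manual:

Limit migration traffic to roughly 100 MB/s (102400 KiB/s):
$ qm migrate 100 pve2 --bwlimit 102400

Route migration traffic over a dedicated network and skip SSH encryption on a trusted, private link:
$ qm migrate 100 pve2 --online --migration_network 10.10.10.0/24 --migration_type insecure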

DESCRIPTION

The qm migrate command moves a QEMU/KVM virtual machine (VM) from its current Proxmox VE host to a different node within the same cluster. The operation can be performed as a live migration, where the VM continues to run with minimal interruption, or as an offline migration, which requires the VM to be stopped. Live migration is generally preferred for critical services; it works directly when the VM's disks are on shared storage and falls back to live storage migration (--with-local-disks) when disks are local. The command handles the transfer of VM state, memory, and disk images, ensuring continuity or controlled downtime depending on the chosen method and storage configuration.
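
A minimal sketch of a typical live migration; the VM ID 100 and node name pve2 are placeholders:

Check that the VM is running on the source node:
$ qm status 100

Live-migrate it to the target node:
$ qm migrate 100 pve2 --online

Verify the VM after the migration completes (run on pve2):
$ qm status 100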

CAVEATS

Live migration (--online) works directly when VM disks are on shared storage. If local disks are present, the --with-local-disks option must be used so the disk data is copied via live storage migration, which increases migration time and network load.
Ensure sufficient network bandwidth and proper network configuration between source and target nodes, especially for large VMs or live migrations. Consider using a dedicated migration network via the --migration_network option.
CPU compatibility between source and target nodes is crucial for live migration. If the CPU models differ, configure a common CPU type in the VM configuration (see the sketch below) or perform an offline migration.
The --force option is restricted to root because migrating a VM that uses local devices can leave it without access to those devices on the target node; use it with care.
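
A sketch of checking and aligning the VM's CPU type before a live migration. The VM ID 100 is a placeholder, and 'kvm64' is only one broadly compatible choice, not a general recommendation:

Show the CPU type currently configured for the VM:
$ qm config 100 | grep -i ^cpu

Set a CPU type that both nodes can provide (takes effect after the VM is restarted):
$ qm set 100 --cpu kvm64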

MIGRATION TYPES EXPLAINED

Proxmox VE offers two primary migration types: Live Migration and Offline Migration.

Live Migration (--online): The VM remains running during the migration process, resulting in minimal or no perceived downtime for users. This is ideal for critical services. It typically requires the VM's disk images to be stored on shared storage accessible by both source and target nodes, or for Proxmox to perform a live storage migration (--with-local-disks) if local disks are involved. Memory and VM state are transferred incrementally.

Offline Migration: The VM is stopped on the source node before the migration begins and is only started on the target node once all data transfer is complete. This method is used when the VM is not running, or when the prerequisites for live migration cannot be met. It incurs noticeable downtime.
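
As an illustration (the VM ID 100 and node name pve2 are placeholders), the same command covers both cases, with --online selecting live migration for a running VM:

Live migration of a running VM:
$ qm migrate 100 pve2 --online

Offline migration of a stopped VM:
$ qm migrate 100 pve2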

STORAGE CONSIDERATIONS

The storage configuration of your VM's disks significantly impacts migration capabilities:

If VM disks reside on shared storage (e.g., NFS, iSCSI, Ceph), live migration is straightforward, as only the VM's memory and state need to be transferred; the disk data remains accessible to the new host and no additional storage options are required.

If VM disks are stored on local storage (e.g., ZFS, LVM on a node's local disks), a live migration is still possible, but it requires the disk data to be transferred alongside the memory and state. This is achieved via live storage migration with --with-local-disks; the disks are copied to the target node's local storage or to the storage specified with --targetstorage, as shown below.
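
A sketch of such a migration, assuming (as placeholders) that VM 100 keeps all of its disks on local storage and that a storage named 'local-zfs' exists on the target node pve2:

Copy all local disks to a single storage on the target while the VM keeps running:
$ qm migrate 100 pve2 --online --with-local-disks --targetstorage local-zfs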

HISTORY

The qm migrate command is a fundamental component of Proxmox VE, designed to enable high availability and flexible resource management within clusters. It has been a core feature since the early iterations of Proxmox VE, continuously evolving with advancements in QEMU/KVM and cluster technologies. Subsequent Proxmox VE versions have introduced enhancements, such as improved live migration capabilities, better handling of local storage migration, and additional options for fine-tuning performance and security.

SEE ALSO

qm(1), pct(1), pvecm(1)
