LinuxCommandLibrary

duplicacy

Backup and restore data incrementally

TLDR

Use the current directory as the repository, initialize an SFTP storage, and encrypt the storage with a password

$ duplicacy init [-e|-encrypt] [snapshot_id] [sftp://user@192.168.2.100/path/to/storage]

Save a snapshot of the repository to the default storage
$ duplicacy backup

List snapshots of current repository
$ duplicacy list

Restore the repository to a previously saved snapshot
$ duplicacy restore -r [revision]
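
Restore only files matching a pattern from a revision, overwriting existing local copies (the pattern shown is a placeholder)
$ duplicacy restore -r [revision] -overwrite [path/to/files]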

Check the integrity of snapshots
$ duplicacy check

Add another storage to be used for the existing repository
$ duplicacy add [storage_name] [snapshot_id] [storage_url]

Prune a specific revision of snapshot
$ duplicacy prune -r [revision]

Prune revisions, keeping one revision every n days for all revisions older than m days
$ duplicacy prune -keep [n:m]
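
Prune with several retention rules at once; rules with the larger age threshold must come first (the day counts are placeholders)
$ duplicacy prune -keep 0:360 -keep 7:30 -keep 1:7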

SYNOPSIS

duplicacy [global-options] <command> [command-options]

PARAMETERS

-log
    Enable log-style output (each line prefixed with a timestamp and a message ID)

-v, -verbose
    Show more detailed information

-d, -debug
    Show debugging output

-stats
    Print detailed statistics (backup, restore, check)

-dry-run
    Simulate the operation without writing to the storage (backup, prune)

-threads <n>
    Number of uploading/downloading threads (backup, restore, copy)

-limit-rate <kB/s>
    Limit the upload/download rate (backup, restore)

-r <revision>
    Select the snapshot revision to operate on (restore, check, prune, list)

-all, -a
    Operate on snapshots from all snapshot IDs in the storage (list, check, prune)

-keep <n:m>
    Retention rule for prune: keep one revision every n days for revisions older than m days

-storage <name>
    Use a non-default storage added with duplicacy add

Include/exclude rules are not passed as command-line flags; they are read from the .duplicacy/filters file in the repository.
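
Global options go before the subcommand and command options after it, for example (the thread count here is arbitrary):
$ duplicacy -log -v backup -stats -threads 4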

DESCRIPTION

Duplicacy is a fast, efficient backup tool built around lock-free deduplication and content-defined chunking for incremental backups. Multiple clients can back up concurrently to the same storage without locking, and uploads and downloads are multi-threaded for high performance. It supports a wide range of backends: local filesystems, SFTP, WebDAV, S3-compatible services (AWS, Wasabi), Backblaze B2, Dropbox, Google Drive, Azure Blob Storage, and more.

Other key features are client-side encryption (AES-256-GCM), compression (LZ4 by default, zstd in the 3.x releases), and bandwidth throttling. Backups are snapshot-based: each run creates a new revision, and revisions share chunks to minimize storage. Pruning reclaims space through a two-step fossil-collection scheme, so no exclusive garbage-collection pass is needed.

Typical workflow: initialize a repository with duplicacy init, then backup, check integrity, prune old snapshots, and restore when needed. It suits servers, NAS devices, and desktops; the tool is CLI-first with an optional web GUI. Cross-platform (Linux, macOS, Windows); the CLI source is published on GitHub and is free for personal use, while commercial use requires a license.
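
A minimal end-to-end session might look like the following sketch; the repository path, snapshot ID, storage URL, and revision number are placeholders:
$ cd /path/to/repository
$ duplicacy init -e [snapshot_id] [sftp://user@host/path/to/storage]
$ duplicacy backup -stats
$ duplicacy check
$ duplicacy prune -keep 7:30
$ duplicacy restore -r [revision]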

CAVEATS

Run duplicacy init in a directory before any other command; that directory becomes the repository.
Storage passwords and backend credentials are read interactively, from DUPLICACY_* environment variables, or from the system keychain/keyring; ~/.duplicacy-web holds configuration for the web GUI only.
Chunking large file sets can consume significant CPU and memory.
No built-in scheduler; use cron or systemd timers (see the sketch below).
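
A crontab entry along these lines runs a nightly backup at 02:00; the paths are placeholders:
0 2 * * * cd /path/to/repository && /usr/local/bin/duplicacy -log backup -stats >> /var/log/duplicacy.log 2>&1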

MAIN SUBCOMMANDS

init [-e] <snapshot-id> <storage-url>
backup [-stats] [-hash] [-threads <n>]
restore -r <revision> [pattern...]
check [-files] [-chunks]
prune [-all] -keep <n:m> [-keep <n:m> ...]
copy -from <storage-name> -to <storage-name>
list [-files] [-all]
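
For example, snapshots can be replicated to a second storage registered with duplicacy add; the storage name, snapshot ID, and URL below are placeholders, and "default" is the name given to the storage created by init:
$ duplicacy add offsite [snapshot_id] [sftp://user@host/path/to/offsite-storage]
$ duplicacy copy -from default -to offsite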

STORAGE EXAMPLES

s3://region@amazon.com/bucket/path/to/storage
wasabi://region@s3.wasabisys.com/bucket/path/to/storage
sftp://user@host/path/to/storage
webdav://user@host/path/to/storage
/path/to/local/storage
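
Credentials and passwords can also be supplied non-interactively through environment variables following the DUPLICACY_* naming scheme; the values below are placeholders:
$ export DUPLICACY_PASSWORD=[storage_password]
$ export DUPLICACY_SSH_PASSWORD=[sftp_password]
$ duplicacy backup -stats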

HISTORY

Developed by Gilbert Chen (Acrosync LLC) starting around 2014, with a lock-free deduplication design intended to avoid the lock-based pruning and garbage collection of other deduplicating backup tools such as restic and borg. An initial release followed in 2015, and the CLI source was published on GitHub in 2017. Later releases added further backends, RSA encryption, and, in the 3.x series, zstd compression. It is widely used for personal and cloud backups.

SEE ALSO

rsync(1), duplicity(1), restic(1), borg(1), rclone(1)
