LinuxCommandLibrary

duff

TLDR

Find duplicate files in a directory

$ duff [directory]
Find duplicates recursively
$ duff -r [directory]
Suppress warnings and error messages (quiet mode)
$ duff -q [directory]
Follow symlinks
$ duff -L [directory]
Compare files from multiple directories
$ duff [dir1] [dir2] [dir3]
Do not report empty files as duplicates of each other
$ duff -z [directory]

SYNOPSIS

duff [options] [file ...]

DESCRIPTION

duff (Duplicate File Finder) identifies duplicate files by comparing file sizes and contents. It groups files with identical content into clusters, making it useful for finding and removing redundant files to free disk space.
The tool first groups files by size, then compares candidates using message digests (SHA-1 by default), falling back to byte-by-byte comparison in thorough mode. Output shows clusters of duplicate files separated by blank lines.
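The size-then-digest strategy can be sketched in Python. This is a simplified illustration, not duff's actual implementation: it reads whole files into memory and omits the byte-by-byte fallback of thorough mode.

```python
import hashlib
import os
from collections import defaultdict

def find_duplicates(paths):
    """Group files with identical content, duff-style:
    bucket by size first, then confirm with a digest."""
    # Stage 1: group candidates by size (a cheap stat call);
    # files with a unique size cannot have a duplicate.
    by_size = defaultdict(list)
    for path in paths:
        by_size[os.path.getsize(path)].append(path)

    # Stage 2: within each same-size group, compare SHA-1 digests.
    clusters = []
    for group in by_size.values():
        if len(group) < 2:
            continue
        by_digest = defaultdict(list)
        for path in group:
            with open(path, "rb") as f:
                by_digest[hashlib.sha1(f.read()).hexdigest()].append(path)
        clusters.extend(c for c in by_digest.values() if len(c) > 1)
    return clusters
```

Because most files are eliminated in stage 1 without being opened, only a small fraction of the tree is ever hashed.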

PARAMETERS

-r, --recursive
Search the specified directories recursively.
-q, --quiet
Quiet mode; suppress warnings and error messages.
-L, --follow-links
Follow all symbolic links.
-z, --no-empty
Do not consider empty files to be duplicates of each other.
-t, --thorough
Thorough mode; verify duplicates byte by byte instead of trusting digests alone.
-e, --excess
Excess mode; list all but one file from each cluster, suitable for piping to a removal command.
-f format
Set the format of the cluster header lines.
-l limit
The minimum size of files to be sampled with digests; smaller files are compared directly.

OUTPUT FORMAT

file1.txt
file2.txt

another1.jpg
another2.jpg
another3.jpg
Each cluster contains files with identical content; in the default mode each cluster is also preceded by a header line, whose format can be changed with -f.
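Because clusters are separated by blank lines, the output is easy to post-process. A small Python sketch (assuming header lines have been suppressed, so each line is a file name):

```python
def parse_clusters(output: str):
    """Split duff-style output into lists of paths: clusters are
    blocks of file names separated by blank lines."""
    clusters = []
    for block in output.strip().split("\n\n"):
        files = [line for line in block.splitlines() if line]
        if files:
            clusters.append(files)
    return clusters
```

Feeding it the sample above yields two clusters, one with two text files and one with three images.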

CAVEATS

Large directory trees may take time to process. Thorough mode is slower but rules out digest collisions. duff never deletes files itself; its output can be piped to removal scripts. Hard links are detected and reported separately.
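A minimal removal helper along those lines, kept as a hedged sketch: it assumes excess mode (-e) prints one surplus path per line, and defaults to a dry run so nothing is deleted by accident.

```python
import os

def remove_listed(stream, dry_run=True):
    """Delete each path read from stream (one per line), as produced
    by duff's excess mode. With dry_run=True, only report what would
    be removed; file names containing newlines are not handled."""
    removed = []
    for line in stream:
        path = line.rstrip("\n")
        if not path:
            continue
        if not dry_run:
            os.remove(path)
        removed.append(path)
    return removed

# Typical use (hypothetical wiring): pipe `duff -e -r dir` into a
# script that passes sys.stdin to remove_listed(dry_run=False).
```

Inspect the dry-run list before committing to deletion; excess mode keeps one file per cluster, so the listed paths are safe to remove only if that surviving copy is the one you want.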

HISTORY

duff was written by Camilla Löwy as a simple, fast duplicate file finder for Unix systems. It focuses on efficiency by using a multi-stage comparison approach, avoiding unnecessary byte comparisons when checksums differ.

SEE ALSO

fdupes(1), rdfind(1), jdupes(1), find(1)
