LinuxCommandLibrary

dcfldd

Enhanced version of dd for forensic disk imaging, with on-the-fly hashing

TLDR

Create a disk image with progress output
$ dcfldd if=[/dev/sda] of=[disk.img]

Create an image with an MD5 hash log
$ dcfldd if=[/dev/sda] of=[disk.img] hash=md5 hashlog=[hash.txt]

Create an image with multiple hashes
$ dcfldd if=[/dev/sda] of=[disk.img] hash=md5,sha256

Write to multiple outputs simultaneously
$ dcfldd if=[/dev/sda] of=[disk1.img] of=[disk2.img]

Split the output into multiple files
$ dcfldd if=[/dev/sda] of=[disk.img] split=[1G] splitformat=aa

Wipe a disk with a fill pattern
$ dcfldd pattern=[00] of=[/dev/sda]

Verify an image against the source
$ dcfldd if=[/dev/sda] vf=[disk.img]

Show status every 256 blocks
$ dcfldd if=[/dev/sda] of=[disk.img] statusinterval=[256]

SYNOPSIS

dcfldd [options]

DESCRIPTION

dcfldd is an enhanced version of GNU dd developed at the United States Department of Defense Computer Forensics Laboratory (DCFL). It adds features critical for forensic imaging, including on-the-fly hashing, periodic status output, split output, and image verification.
The tool can compute multiple hash types (MD5, SHA-1, SHA-256, and others) while copying, so the integrity of an image is documented as it is made. It can also write to several outputs simultaneously, creating duplicate forensic images in a single pass.
dcfldd prints progress during the copy, addressing one of the most common complaints about plain dd. It is widely used in digital forensics, incident response, and data recovery.
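dcfldd's on-the-fly hashing folds into one command what otherwise takes a pipeline. The sketch below shows the equivalent workflow with plain coreutils (dd, tee, md5sum) on scratch files; the filenames are illustrative stand-ins for a real device and image.

```shell
# Hash-while-copying with plain dd + tee, the workflow dcfldd
# performs internally with hash=md5 hashlog=hash.txt.
# source.bin stands in for a device such as /dev/sda.
set -e
printf 'forensic test data' > source.bin
dd if=source.bin bs=512 2>/dev/null | tee image.bin | md5sum > hash.txt
# hash.txt now covers exactly the bytes that were written to image.bin:
md5sum image.bin
```

With dcfldd the tee/md5sum stages disappear: a single invocation copies, hashes, and logs at once.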

PARAMETERS

if= file
Input file or device.
of= file
Output file (may be given multiple times).
vf= file
Verify file against input.
hash= types
Hash algorithm(s) (md5, sha1, sha256, sha384, sha512).
hashlog= file
Write hash to file.
hashwindow= n
Hash every n bytes.
pattern= hex
Fill pattern for wiping.
split= size
Split output at size intervals.
splitformat= fmt
Split file suffix format.
statusinterval= n
Show status every n blocks.
bs= size
Block size for read/write.
count= n
Copy only n blocks.
skip= n
Skip n blocks at start of input.
seek= n
Skip n blocks at start of output.
conv= options
Conversion options (noerror, sync, etc.).
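The bs=, count=, skip=, and seek= parameters use the same block arithmetic as GNU dd. A minimal demonstration with dd itself on a scratch file (filenames are arbitrary; dcfldd accepts the same arguments):

```shell
# Block arithmetic shared by dd and dcfldd: bs sets the block size,
# skip discards leading input blocks, count limits how many are copied.
set -e
printf 'AAAABBBBCCCCDDDD' > blocks.bin      # four 4-byte "blocks"
# Skip the first block, then copy the next two:
dd if=blocks.bin of=part.bin bs=4 skip=1 count=2 2>/dev/null
cat part.bin    # BBBBCCCC
```

In forensic use, bs= mainly affects throughput, while skip=/count= let you image a specific region of a device.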

CAVEATS

dcfldd is slower than plain dd because of hashing overhead, and verifying with vf= requires a second read pass over the data. Forensic imaging should always access source media through a hardware write blocker. Some options behave differently from standard dd, so check the man page before relying on dd habits.
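Because verification rereads the data, a common post-imaging check is to re-hash the image independently and compare it with the recorded hash log. A sketch of that workflow using md5sum on scratch files (device.bin and recorded.md5 stand in for the source device and dcfldd's hashlog= output):

```shell
# Independent verification of an image against a recorded hash.
set -e
printf 'evidence' > device.bin
cp device.bin evidence.img                          # stand-in for the imaging step
md5sum device.bin | cut -d' ' -f1 > recorded.md5    # stand-in for hashlog= output
# Re-hash the image and compare against the recorded value:
if [ "$(md5sum evidence.img | cut -d' ' -f1)" = "$(cat recorded.md5)" ]; then
    echo "image verified"
fi
```

dcfldd's vf= does this comparison in one step, but an independent md5sum/sha256sum check is often still recorded for chain-of-custody documentation.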

HISTORY

dcfldd was developed by Nick Harbour at the Department of Defense Computer Forensics Laboratory (DCFL) in the early 2000s. It was created to address the needs of forensic investigators who required verifiable, documented disk imaging capabilities. The tool became a standard in digital forensics training and practice.

SEE ALSO

dd(1), ddrescue(1), dc3dd(1), md5sum(1)

> TERMINAL_GEAR

Curated for the Linux community
