LinuxCommandLibrary

hyperfine

command-line benchmarking tool that measures command execution time

TLDR

Benchmark a command

$ hyperfine '[sleep 0.3]'
Compare multiple commands
$ hyperfine '[command1]' '[command2]' '[command3]'
Run warmup iterations before timing
$ hyperfine --warmup [3] '[command]'
Set minimum number of runs
$ hyperfine --min-runs [20] '[command]'
Run setup command before each benchmark
$ hyperfine --prepare '[make clean]' '[make]'
Export results to JSON
$ hyperfine '[command]' --export-json [results.json]
Benchmark with parameter range
$ hyperfine -P threads 1 8 '[./program --threads {threads}]'
Ignore command failures
$ hyperfine --ignore-failure '[command]'

SYNOPSIS

hyperfine [options] command [command ...]

DESCRIPTION

hyperfine is a command-line benchmarking tool that measures command execution time with statistical analysis. It provides accurate measurements by running commands multiple times and calculating mean, standard deviation, min, max, and relative comparisons.
By default, hyperfine chooses the number of runs automatically (at least 10 runs and roughly three seconds of total benchmark time), ensuring statistically meaningful results. Warmup runs help account for disk and filesystem cache effects and for warm-up in languages with just-in-time compilation.
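The statistics hyperfine reports are ordinary sample statistics over the measured times. A minimal sketch of the mean/stddev computation, using invented measurements (not hyperfine's actual implementation):

```shell
# Five invented timing samples, in seconds
printf '%s\n' 0.301 0.305 0.310 0.320 0.330 |
awk '{ s += $1; ss += $1 * $1; n++ }
     END { m = s / n
           # sample standard deviation (n-1 in the denominator)
           printf "mean %.4f s, stddev %.4f s\n", m, sqrt((ss - n * m * m) / (n - 1)) }'
# → mean 0.3132 s, stddev 0.0118 s
```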
When comparing multiple commands, hyperfine shows relative speedup/slowdown ratios. Color-coded output highlights the fastest command. This makes A/B testing of optimizations straightforward.
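The relative comparison is simply the ratio of mean times, slower over faster. With two invented means of 0.312 s and 0.104 s:

```shell
# Ratio of mean runtimes: how many times faster the quicker command is
awk 'BEGIN { mean1 = 0.312; mean2 = 0.104
             printf "command2 ran %.2f times faster than command1\n", mean1 / mean2 }'
# → command2 ran 3.00 times faster than command1
```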
Parameter scanning enables benchmarking across value ranges without writing wrapper scripts; for example, thread counts from 1 to 16 can be swept in a single invocation. Results can be exported to JSON, CSV, or Markdown for further analysis or documentation.
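The JSON export can feed scripts directly. A sketch, using a hand-written results.json trimmed to a few of the fields hyperfine emits (all numbers invented) and python3 as a portable JSON reader:

```shell
# A results.json in the shape of hyperfine's JSON export, trimmed to a
# few fields; the commands and numbers are invented for the example.
cat > results.json <<'EOF'
{
  "results": [
    { "command": "grep -r TODO src", "mean": 0.312, "stddev": 0.008, "min": 0.301, "max": 0.330 },
    { "command": "rg TODO src",      "mean": 0.104, "stddev": 0.003, "min": 0.100, "max": 0.112 }
  ]
}
EOF

# Report the fastest command by mean time
python3 - <<'EOF'
import json

with open("results.json") as f:
    results = json.load(f)["results"]

fastest = min(results, key=lambda r: r["mean"])
print(f"fastest: {fastest['command']} ({fastest['mean']:.3f} s)")
EOF
```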
The --prepare option enables clean-state benchmarks (e.g., clearing caches or rebuilding before each run). The --shell option selects which shell executes the commands, while -N skips the intermediate shell entirely to minimize startup overhead for very fast commands.
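When a shell is used, hyperfine measures the shell's own spawn time and subtracts it from the results. As a toy illustration of that correction, with invented numbers:

```shell
# Invented figures: a measured wall time and a measured shell spawn time
awk 'BEGIN { measured = 0.0153; spawn = 0.0021
             printf "shell-corrected mean: %.4f s\n", measured - spawn }'
# → shell-corrected mean: 0.0132 s
```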

PARAMETERS

-w, --warmup n
Run n warmup iterations before timing.
-m, --min-runs n
Minimum number of runs (default: 10).
-M, --max-runs n
Maximum number of runs.
-r, --runs n
Exact number of runs.
-p, --prepare cmd
Command to run before each timing run.
-c, --cleanup cmd
Command to run after each timing run.
-s, --setup cmd
Command to run once before all benchmarks.
-P, --parameter-scan var start end
Run benchmark for parameter range.
-L, --parameter-list var vals
Run benchmark for comma-separated values.
-S, --shell shell
Shell to use for executing commands (default: sh; cmd.exe on Windows).
-N
Run commands directly without an intermediate shell (equivalent to --shell=none).
--ignore-failure
Continue on non-zero exit codes.
--export-json file
Export to JSON.
--export-csv file
Export to CSV.
--export-markdown file
Export to Markdown.
--show-output
Show command output.
--style type
Output style: auto, full, basic, nocolor, color, none.

CAVEATS

System load affects results; close other applications for accurate measurements. Warmup is important for JIT-compiled or cached operations. Very fast commands (< 5 ms) may have significant measurement overhead, and hyperfine warns when this is likely. Statistical outliers can skew the mean; check the min/max values.
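To see why outliers matter, consider four invented runs where one was delayed by background load: a single slow run pulls the mean well above the typical time, while the minimum stays representative.

```shell
# Three normal runs plus one outlier (invented numbers, in seconds)
printf '%s\n' 0.30 0.31 0.30 0.95 |
awk '{ s += $1; n++; if (min == "" || $1 < min) min = $1 }
     END { printf "mean %.3f s, min %.3f s\n", s / n, min }'
# → mean 0.465 s, min 0.300 s
```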

HISTORY

hyperfine was created by David Peter (sharkdp) and released around 2018. Written in Rust, it was designed as a more user-friendly and statistically rigorous alternative to shell-based benchmarking with time. It's part of a collection of modern CLI tools (including fd, bat, etc.) that improve on traditional Unix utilities.

SEE ALSO

time(1), perf(1), bench(1), pv(1)

> TERMINAL_GEAR

Curated for the Linux community
