hyperfine

TLDR

Run a basic benchmark, performing at least 10 runs

$ hyperfine '[make]'

Run a comparative benchmark
$ hyperfine '[make target1]' '[make target2]'

Change minimum number of benchmarking runs
$ hyperfine --min-runs [7] '[make]'

Perform benchmark with warmup
$ hyperfine --warmup [5] '[make]'

Run a command before each benchmark run (to clear caches, etc.)
$ hyperfine --prepare '[make clean]' '[make]'

Run a benchmark where a single parameter changes for each run
$ hyperfine --prepare '[make clean]' --parameter-scan [num_threads] [1] [10] '[make -j {num_threads}]'

SYNOPSIS

hyperfine [-ihV] [-w warmupruns] [-r runs] [-p cmd...] [-c cmd] [-s style] [cmd...]

DESCRIPTION

A command-line benchmarking tool that includes:

* Statistical analysis across multiple runs

* Support for arbitrary shell commands

* Constant feedback about the benchmark progress and current estimates

* Warmup runs that can be executed before the actual benchmark

* Cache-clearing commands that can be set up before each timing run

* Statistical outlier detection to identify interference from other programs and caching effects

* Export of results to various formats: CSV, JSON, Markdown, AsciiDoc

* Parameterized benchmarks (e.g. vary the number of threads)

OPTIONS

-w, --warmup warmupruns

Perform warmupruns warmup runs before the actual benchmark. This can be used to fill (disk) caches for I/O-heavy programs.
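
Example (access.log is a placeholder for any sizable file; the warmup runs fill the disk cache before measurement begins):

hyperfine --warmup 3 'wc -l access.log'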

-m, --min-runs minruns

Perform at least minruns runs for each command. Default: 10.

-M, --max-runs maxruns

Perform at most maxruns runs for each command. Default: no limit.
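
Example (bound the automatically chosen number of runs to between 20 and 50):

hyperfine --min-runs 20 --max-runs 50 'sleep 0.05'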

-r, --runs runs

Perform exactly runs (number) runs for each command. If this option is not specified, hyperfine automatically determines the number of runs.
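
Example (force exactly 100 runs for a reproducible comparison):

hyperfine --runs 100 'sleep 0.1'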

-p, --prepare cmd...

Execute cmd before each timing run. This is useful for clearing disk caches, for example. The --prepare option can be specified once for all commands or multiple times, once for each command. In the latter case, each preparation command will be run prior to the corresponding benchmark command.
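
Example (branch names are illustrative; the first checkout runs before each timed run of the first 'make', the second before each run of the second):

hyperfine --prepare 'git checkout main' 'make' --prepare 'git checkout feature' 'make'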

-c, --cleanup cmd

Execute cmd after the completion of all benchmarking runs for each individual command to be benchmarked. This is useful if the commands to be benchmarked produce artifacts that need to be cleaned up.
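
Example (assumes a Makefile with a 'clean' target; 'make clean' runs once, after all timed runs of 'make' have finished):

hyperfine --cleanup 'make clean' 'make'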

-P, --parameter-scan var min max

Perform benchmark runs for each value in the range min..max. Replaces the string '{var}' in each command by the current parameter value.

Example:

hyperfine -P threads 1 8 'make -j {threads}'

This performs benchmarks for 'make -j 1', 'make -j 2', ..., 'make -j 8'.

-D, --parameter-step-size delta

This argument requires --parameter-scan to be specified as well. Traverse the range min..max in steps of delta.

Example:

hyperfine -P delay 0.3 0.7 -D 0.2 'sleep {delay}'

This performs benchmarks for 'sleep 0.3', 'sleep 0.5' and 'sleep 0.7'.

-L, --parameter-list var values

Perform benchmark runs for each value in the comma-separated list of values. Replaces the string '{var}' in each command by the current parameter value.

Example:

hyperfine -L compiler gcc,clang '{compiler} -O2 main.cpp'

This performs benchmarks for 'gcc -O2 main.cpp' and 'clang -O2 main.cpp'.

-s, --style type

Set output style type (default: auto). Possible values:

basic: disable output coloring and interactive elements.
full: enable all effects even if no interactive terminal was detected.
nocolor: keep the interactive output without any colors.
color: keep the colors without any interactive output.
none: disable all output of the tool.
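
Example (plain, non-interactive output, e.g. for CI logs):

hyperfine --style basic 'sleep 0.1'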

-S, --shell shell

Set the shell to use for executing benchmarked commands.
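
Example (assumes bash is installed; useful when the benchmarked command relies on shell constructs):

hyperfine --shell bash 'for i in 1 2 3; do true; done'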

-i, --ignore-failure

Ignore non-zero exit codes of the benchmarked programs.
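
Example (grep exits with a non-zero status when no line matches; without this option, hyperfine would abort on the failure; README.md is illustrative):

hyperfine --ignore-failure 'grep TODO README.md'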

-u, --time-unit unit

Set the time unit to be used. Default: second. Possible values: millisecond, second.
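
Example (report timings in milliseconds):

hyperfine --time-unit millisecond 'sleep 0.2'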

--export-asciidoc file

Export the timing summary statistics as an AsciiDoc table to the given file.

--export-csv file

Export the timing summary statistics as CSV to the given file. If you need the timing results for each individual run, use the JSON export format.
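
Example (results.csv is an arbitrary output path):

hyperfine --export-csv results.csv 'sleep 0.1' 'sleep 0.2'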

--export-json file

Export the timing summary statistics and timings of individual runs as JSON to the given file.
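
Example (results.json is an arbitrary output path; the JSON includes the individual run timings in addition to the summary statistics):

hyperfine --export-json results.json 'sleep 0.1'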

--export-markdown file

Export the timing summary statistics as a Markdown table to the given file.

--show-output

Print the stdout and stderr of the benchmark instead of suppressing it. This will increase the time it takes for benchmarks to run, so it should only be used for debugging purposes or when trying to benchmark output speed.
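
Example (measuring output speed, where suppressing stdout would distort the result):

hyperfine --show-output 'seq 1 100000'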

-n, --command-name name

Identify a command with the given name. Commands and names are paired in the same order: the first command benchmarked gets the first name passed with this option, and so on.
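
Example (the names label the two commands in the summary; main.c is illustrative):

hyperfine -n gcc -n clang 'gcc -O2 main.c' 'clang -O2 main.c'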

-h, --help

Print help message.

-V, --version

Show version information.

EXAMPLES

Basic benchmark of 'find . -name todo.txt':

hyperfine 'find . -name todo.txt'

Perform benchmarks for 'sleep 0.2' and 'sleep 3.2' with a minimum 5 runs each:

hyperfine --min-runs 5 'sleep 0.2' 'sleep 3.2'

Perform a benchmark of 'grep' with a warm disk cache by executing 3 runs up front that are not part of the measurement:

hyperfine --warmup 3 'grep -R TODO *'

Export the results of a parameter scan benchmark to a markdown table:

hyperfine --export-markdown output.md --parameter-scan time 1 5 'sleep {time}'

AUTHOR

David Peter (sharkdp)

Source, bug tracker, and additional information can be found on GitHub at: https://github.com/sharkdp/hyperfine
