LinuxCommandLibrary

http_load

Load test web servers with simulated traffic

TLDR

Emulate 20 requests per second based on a given URL list file for 60 seconds
$ http_load -rate [20] -seconds [60] [path/to/urls.txt]

Emulate 5 concurrent requests based on a given URL list file for 60 seconds
$ http_load -parallel [5] -seconds [60] [path/to/urls.txt]

Emulate 1000 requests at 20 requests per second, based on a given URL list file
$ http_load -rate [20] -fetches [1000] [path/to/urls.txt]

Emulate 1000 requests at 5 concurrent requests at a time, based on a given URL list file
$ http_load -parallel [5] -fetches [1000] [path/to/urls.txt]

SYNOPSIS

http_load [-timeout T] [-throttle] [-verbose] [-version] [-help] ( -parallel N | -rate R ) ( -fetches N | -seconds S ) urlfile

PARAMETERS

-parallel N
    Start specifier: keep N fetches running in parallel

-rate R
    Start specifier: start R new fetches per second

-fetches N
    End specifier: stop after N fetches have completed

-seconds S
    End specifier: stop after S seconds have elapsed

-timeout T
    Seconds to wait on an idle connection before giving up on it (default: 60)

-throttle
    Throttle each connection to roughly 33.6 Kbps to simulate modem clients

-verbose
    Print a progress report every minute

-version
    Display version information

-help
    Show usage help

Exactly one start specifier (-parallel or -rate) and exactly one end specifier (-fetches or -seconds) must be given.
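As a quick illustration of how the options combine (the file path is a placeholder), the following invocation keeps 10 fetches in parallel, stops after 500 fetches, abandons connections that stay idle for more than 10 seconds, and prints periodic progress reports:

$ http_load -parallel 10 -fetches 500 -timeout 10 -verbose path/to/urls.txt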

DESCRIPTION

http_load is a lightweight utility for benchmarking web server performance under concurrent load. It reads a list of URLs from a file and issues parallel HTTP requests from a single process using non-blocking I/O, allowing precise control over parallelism, request rate, total fetches, runtime, and timeouts.

Designed for stress testing, capacity planning, and tuning, it opens a fresh connection for each fetch and selects URLs at random from the list. Upon completion, it reports statistics including: fetches completed, connection and first-response latency (mean/min/max), throughput (fetches per second and bytes per second), mean bytes per fetch, total data transferred, and a breakdown of HTTP response codes and errors.

The tool runs in a single process and multiplexes its connections with non-blocking I/O, so it places very little load on the client machine. Written in C, it's portable, dependency-free, and well suited to scripting automated tests. While effective for basic HTTP loads, it lacks support for features such as POST data, cookies, and HTTP keep-alive, and HTTPS fetches are only available when it is built with SSL support.
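Because the report is plain text on standard output, a small shell wrapper works well for automated tests. The sketch below (file names and parallelism levels are illustrative) sweeps several concurrency levels and saves each run's report for comparison:

for p in 1 5 10 20; do
    http_load -parallel "$p" -fetches 1000 path/to/urls.txt > "http_load_p${p}.log"
done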

CAVEATS

HTTPS is only available when http_load is compiled with SSL support; many packaged builds are HTTP-only. It is an older tool with no HTTP/2, authentication, or request-body support. Install it via your distribution's package (often named http-load) or build it from source. A URL file argument is required.

URL FILE FORMAT

One URL per line. URLs are fetched in random order from the list.
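For example, a minimal URL file (name and contents are illustrative) and a run against it might look like this:

$ cat urls.txt
http://example.com/
http://example.com/index.html
http://example.com/images/logo.png

$ http_load -parallel 5 -seconds 30 urls.txt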

EXAMPLE USAGE

http_load -rate 200 -seconds 30 path/to/urls.txt
Starts 200 fetches per second for 30 seconds, then prints throughput and latency statistics.

HISTORY

Developed by Jef Poskanzer at ACME Laboratories in the late 1990s. Released as open source early on, it remains popular for its simplicity and effectiveness in Unix environments despite its age.

SEE ALSO

ab(1), siege(1), wrk(1), httperf(1)
