LinuxCommandLibrary

wrk

Benchmark HTTP servers with multiple threads

TLDR

Run a benchmark for 30 seconds, using 12 threads, and keeping 400 HTTP connections open

$ wrk [[-t|--threads]] [12] [[-c|--connections]] [400] [[-d|--duration]] [30s] "[http://127.0.0.1:8080/index.html]"

Run a benchmark with a custom header
$ wrk [[-t|--threads]] [2] [[-c|--connections]] [5] [[-d|--duration]] [5s] [[-H|--header]] "[Host: example.com]" "[http://example.com/index.html]"

Run a benchmark with a request timeout of 2 seconds
$ wrk [[-t|--threads]] [2] [[-c|--connections]] [5] [[-d|--duration]] [5s] --timeout [2s] "[http://example.com/index.html]"
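
Run a benchmark with a Lua script (for example, to send POST requests) and print a detailed latency distribution
$ wrk [[-t|--threads]] [2] [[-c|--connections]] [10] [[-d|--duration]] [30s] --latency [[-s|--script]] [path/to/script.lua] "[http://example.com/]"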

SYNOPSIS

wrk [options] <url>

PARAMETERS

-t, --threads <N>
    Specifies the number of threads to use for the benchmark. Each thread will manage its own event loop and connections.

-c, --connections <N>
    Sets the total number of HTTP connections to keep open. These connections are distributed among the specified threads.

-d, --duration <N>
    Defines the duration of the benchmark test. N can be specified with units like 's' (seconds), 'm' (minutes), or 'h' (hours), e.g., '30s', '2m'.

-s, --script <file>
    Specifies a Lua script file to be used for customizing requests, processing responses, or generating custom reports.

-H, --header <header>
    Adds a custom HTTP header to all requests. This option can be specified multiple times for different headers.

-L, --latency
    Prints a detailed latency distribution (50%, 75%, 90%, and 99% percentiles) in addition to the summary statistics.

--timeout <N>
    Sets the request timeout. If no response is received within this duration, the request is recorded as a timeout error in the output's "Socket errors" line. N accepts the same time suffixes as --duration, e.g. '2s'.

Note: wrk has no command-line options for choosing the HTTP method or supplying a request body. Requests are sent over persistent (keep-alive) HTTP/1.1 connections by default, and the method, body, and other request details are customized through a Lua script loaded with -s, --script (see LUA SCRIPTING below).

DESCRIPTION

wrk is a modern, high-performance HTTP benchmarking tool designed for powerful multi-core CPUs. It can generate significant loads due to its unique architecture, which combines multi-threading with an event-driven (epoll/kqueue) request dispatching system. This allows wrk to efficiently utilize available CPU cores to send many requests concurrently and maintain a large number of open connections.

wrk speaks HTTP/1.1 and reuses persistent (keep-alive) connections to sustain load; for HTTP/2 benchmarking, see h2load(1). A key feature is its extensibility through LuaJIT scripts, which let users define custom request methods, headers, and bodies, process responses, and generate custom reports. wrk is well suited to quickly assessing the performance and scalability of web servers and APIs under load, reporting metrics such as requests per second, latency distribution, and network throughput.

CAVEATS

  • wrk is a client-side benchmarking tool; it does not simulate full browser behavior (e.g., parsing HTML, loading linked resources).
  • Results can be significantly influenced by the client machine's resources (CPU, network, memory). Ensure the client has sufficient capacity to avoid becoming the bottleneck.
  • It is primarily designed for single-URL testing; simulating complex multi-page user journeys might be better handled by other tools.
  • Generating extremely high loads can saturate the network interface or CPU of the client machine, leading to inaccurate or skewed results.

LUA SCRIPTING

wrk's most powerful feature is its support for LuaJIT scripts, which can customize nearly every aspect of request generation and response handling. A script can set the HTTP method, dynamically generate request bodies, add custom headers, or inspect response status codes, headers, and bodies. It can also define hook functions that run before the test starts (setup, init), around each request (request, response, delay), and once at the end of the run (done) to produce custom statistics and reports. This flexibility makes wrk well suited to testing API endpoints or application logic that requires dynamic requests.
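
As an illustration, the following is a minimal script sketch in the style of the examples shipped with wrk (scripts/post.lua and scripts/report.lua); the form-encoded payload and the chosen percentiles are placeholders:

  -- Send a form-encoded POST body with every request
  wrk.method = "POST"
  wrk.body   = "name=example&value=42"
  wrk.headers["Content-Type"] = "application/x-www-form-urlencoded"

  -- done() runs once at the end of the test; latency percentiles are in microseconds
  done = function(summary, latency, requests)
     for _, p in pairs({ 50, 90, 99 }) do
        io.write(string.format("%g%% latency: %.2f ms\n", p, latency:percentile(p) / 1000))
     end
  end

Such a script is passed on the command line, for example: wrk -t2 -c10 -d30s -s post.lua "http://example.com/".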

OUTPUT METRICS

At the end of a test run, wrk prints a concise summary of key performance metrics: the total number of requests completed, requests per second, the amount of data read and the transfer rate, and per-thread latency and request-rate statistics (average, standard deviation, and maximum). With --latency it also prints a percentile distribution (50%, 75%, 90%, 99%), and any socket errors or non-2xx/3xx responses are reported. This breakdown helps in understanding the server's behavior under load, identifying bottlenecks, and assessing overall responsiveness.
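
For example, a run like the first TLDR command prints a summary of the following shape (the figures here are purely illustrative):

  Running 30s test @ http://127.0.0.1:8080/index.html
    12 threads and 400 connections
    Thread Stats   Avg      Stdev     Max   +/- Stdev
      Latency     4.14ms    1.50ms   48.20ms   90.10%
      Req/Sec     8.06k     0.90k    10.20k    88.00%
    2900000 requests in 30.00s, 2.30GB read
  Requests/sec:  96666.67
  Transfer/sec:     78.51MB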

HISTORY

wrk was created by Will Glozer and first released around 2012-2013. Its development was motivated by the need for a more efficient and powerful HTTP benchmarking tool than existing options like ApacheBench (ab), especially for testing modern web applications and servers on multi-core machines. By leveraging multi-threading and event-driven I/O, wrk aimed to overcome the limitations of single-threaded tools and provide more accurate performance metrics under high load. Its extensibility through Lua scripting also distinguished it, allowing for highly customized test scenarios beyond simple GET requests.

SEE ALSO

ab(1), siege(1), curl(1), h2load(1)
