LinuxCommandLibrary

fio

Benchmark storage I/O performance

TLDR

Test random reads
$ fio --filename=[path/to/file] --direct=1 --rw=randread --bs=4k --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=4 --time_based --group_reporting --name=[job_name] --eta-newline=1 --readonly

Test sequential reads
$ fio --filename=[path/to/file] --direct=1 --rw=read --bs=4k --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=4 --time_based --group_reporting --name=[job_name] --eta-newline=1 --readonly

Test random read/write
$ fio --filename=[path/to/file] --direct=1 --rw=randrw --bs=4k --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=4 --time_based --group_reporting --name=[job_name] --eta-newline=1

Test with parameters from a job file
$ fio [path/to/job_file]

Convert a specific job file to command-line options
$ fio --showcmd [path/to/job_file]

SYNOPSIS

fio [options] [job file ...]

PARAMETERS

--name=jobname
    Name of the job (used in output).

--directory=path
    Directory to store files for I/O operations.

--filename=filename
    Filename to use for I/O operations. Can be a list of files.

--size=filesize
    Size of the file for I/O operations.

--rw=rwmix
    Type of I/O pattern (read, write, randread, randwrite, etc.).

--bs=blocksize
    Block size for I/O operations.

--ioengine=engine
    I/O engine to use (sync, libaio, posixaio, etc.).

--iodepth=depth
    I/O depth (number of outstanding I/O operations).

--numjobs=number
    Number of jobs to run concurrently.

--runtime=seconds
    Duration of the test in seconds.

--time_based
    Run for a fixed amount of time, not a fixed amount of I/O.

--group_reporting
    Report statistics for all jobs as a group.

--eta=when
    When to print estimated time of arrival information.

--output=filename
    Write all normal output to this file.

--output-format=format
    Set the reporting format (terse, json, json+).

--latency-log
    Generate a latency log.

--direct=bool
    Use direct I/O (bypass the page cache).

--thread
    Use threads instead of processes.

--verify=method
    Enable data verification.

--help
    Display help information.
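
As an illustration, several of these options can be combined in a single invocation. A hedged sketch (job name, directory, block size, and file size are placeholders chosen for the example) that writes a 256 MiB file and then verifies the written data with CRC32C checksums:
$ fio --name=[job_name] --directory=[path/to/directory] --size=256M --rw=write --bs=64k --ioengine=libaio --iodepth=16 --direct=1 --verify=crc32c --thread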

DESCRIPTION

fio is a powerful and versatile I/O workload generator and benchmark tool.

It is used to evaluate the performance of storage devices such as hard drives, solid-state drives (SSDs), and network storage. fio allows users to define a wide range of I/O patterns, including sequential and random reads/writes, different block sizes, and queue depths.

It supports various I/O engines, such as synchronous I/O (sync), Linux asynchronous I/O (libaio), POSIX asynchronous I/O (posixaio), and network I/O (net), among many others, to simulate diverse real-world workloads. fio provides detailed performance metrics, such as throughput, latency, and IOPS (Input/Output Operations Per Second).
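
For instance, a mixed random workload can be expressed entirely on the command line; in this sketch, --rwmixread=70 makes 70% of the I/O reads and 30% writes (file path, size, queue depth, and runtime are illustrative):
$ fio --name=[job_name] --filename=[path/to/file] --size=1G --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based --group_reporting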

Workloads can be defined either directly on the command line or in simple, human-readable job files. fio is also valuable for troubleshooting storage performance issues, tuning I/O configurations, and comparing the performance of different storage devices or setups, and it is widely used by system administrators, storage engineers, and developers for performance testing and benchmarking.

CAVEATS

fio can be resource-intensive, especially with a high iodepth or numjobs. Ensure the system has sufficient CPU, memory, and I/O bandwidth, otherwise the results will reflect those bottlenecks rather than the storage device. Direct I/O (direct=1) requires O_DIRECT support from the filesystem and device, and targeting a raw block device (e.g. filename=/dev/sda) generally requires root privileges and will destroy existing data if the workload includes writes.
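
When benchmarking a raw device, one conservative approach is to restrict the job to reads and add the --readonly safety flag (as in the TLDR examples above), which makes fio refuse to issue writes; the device name here is a placeholder:
$ sudo fio --filename=/dev/[device] --direct=1 --rw=randread --bs=4k --ioengine=libaio --iodepth=64 --runtime=60 --time_based --name=[job_name] --readonly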

JOB FILE CONFIGURATION

Job files are used to specify I/O workloads. They contain a set of options that define the I/O pattern, file size, I/O engine, and other parameters. Multiple job files can be specified, allowing for complex workloads to be defined. Example configuration:
[global]
ioengine=libaio
direct=1
filename=/dev/sda

[job1]
name=readtest
rw=read
bs=4k
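
Options in the [global] section apply to every job that follows, and each job section can override them. Below is a hedged sketch of a two-job file that targets regular files in a directory instead of a raw device (the directory, size, and runtime are illustrative, and the directory must already exist); stonewall makes the second job wait for the first to finish:

[global]
ioengine=libaio
direct=1
directory=/tmp/fio-test
size=256M
runtime=60
time_based

[seq-read]
rw=read
bs=128k
iodepth=16

[rand-write]
stonewall
rw=randwrite
bs=4k
iodepth=32

Run it with fio [path/to/job_file], or print the equivalent command-line options with fio --showcmd [path/to/job_file].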

INTERPRETING RESULTS

fio provides detailed output with various performance metrics. Key metrics include:
IOPS (Input/Output Operations Per Second): The number of I/O operations completed per second.
Throughput (bandwidth): The amount of data transferred per second (e.g., MB/s).
Latency: The time taken to complete an I/O operation, reported as submission latency (slat), completion latency (clat), and total latency (lat), typically in microseconds or milliseconds.

These metrics show how the storage device behaves under different workloads.
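
For programmatic analysis, the same metrics can be written in a structured format using the --output-format and --output options listed above; json+ additionally includes completion-latency buckets (the output path is a placeholder):
$ fio --output-format=json --output=[path/to/results.json] [path/to/job_file]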

HISTORY

fio was originally written by Jens Axboe. It has been actively developed and maintained over the years, with contributions from many developers, and is now a standard tool for I/O benchmarking and performance testing on Linux and other operating systems. Development continues to focus on new use cases and additional I/O engines.

SEE ALSO

dd(1), iostat(1), vmstat(8)
