LinuxCommandLibrary

pio-test

Run automated tests within PlatformIO projects

TLDR

Run all tests in all environments of the current PlatformIO project

$ pio test

Test only specific environments
$ pio test [[-e|--environment]] [environment1] [[-e|--environment]] [environment2]

Run only tests whose name matches a specific glob pattern
$ pio test [[-f|--filter]] "[pattern]"

Ignore tests whose name matches a specific glob pattern
$ pio test [[-i|--ignore]] "[pattern]"

Specify a port for firmware uploading
$ pio test --upload-port [upload_port]

Specify a custom configuration file for running the tests
$ pio test [[-c|--project-conf]] [path/to/platformio.ini]
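The flags above can be combined in a single invocation. A minimal sketch of a wrapper that runs the tests for several environments in turn (the environment names `uno` and `native` are hypothetical, and the `echo` stands in for the real call):

```shell
#!/bin/sh
# Sketch: run `pio test` once per named environment.
# Replace `echo` with a direct invocation when `pio` is on PATH.
run_env_tests() {
    for env in "$@"; do
        echo "pio test -e $env"
    done
}

run_env_tests uno native
```

Filters compose the same way, e.g. appending `-f "test_calc_*"` to each invocation to restrict which tests run per environment.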

SYNOPSIS

pio-test [options] [file_or_directory]

Typical usage with MPI:
mpirun -np 4 pio-test -s 10G -b 1M -t write -p seq /mnt/lustre/test_dir/testfile

PARAMETERS

-s SIZE, --size=SIZE
    Specifies the total I/O size for the test (e.g., 10G, 512M). This is the aggregated size across all participating processes.

-b SIZE, --block-size=SIZE
    Defines the size of each individual I/O operation (e.g., 4K, 1M). This sets the granularity of data transfer.

-t TYPE, --type=TYPE
    Sets the type of I/O operation: write (create and write data), read (read existing data), or rmw (read-modify-write).

-p PATTERN, --pattern=PATTERN
    Determines the access pattern for I/O operations: seq (sequential access) or rand (random access).

-D, --direct
    Enables direct I/O (O_DIRECT) to bypass the operating system's page cache, providing a more raw measurement of disk performance.

-l, --local
    Instructs pio-test to perform tests on a local file system rather than assuming a parallel file system environment.

-i ITERATIONS, --iterations=ITERATIONS
    Specifies the number of times to repeat the entire I/O test workload.

-v, --verbose
    Increases the verbosity of the output, providing more detailed information about the test progress and results.

-c, --collect-stats
    Enables the collection of detailed statistics, often used in conjunction with the piostats daemon for comprehensive performance monitoring.

-d, --dry-run
    Performs a dry run, parsing arguments and setting up the test without actually executing I/O operations.

-P NUM, --processes=NUM
    Sets the number of worker processes for local, non-MPI tests. For parallel tests this flag is informational only; the process count is controlled by the MPI launcher (e.g., mpirun -np).
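The parameters above compose into a single invocation. Because -s gives the aggregate size across all ranks, each rank's share is the total divided by the rank count. A sketch of that arithmetic, with an illustrative invocation (sizes, rank count, and the mount path are made up):

```shell
#!/bin/sh
# Aggregate 8 GiB written by 4 MPI ranks -> 2 GiB per rank.
TOTAL_MIB=8192
RANKS=4
PER_RANK_MIB=$((TOTAL_MIB / RANKS))
echo "per-rank share: ${PER_RANK_MIB} MiB"

# Illustrative invocation (echoed here, not executed):
echo "mpirun -np ${RANKS} pio-test -s 8G -b 1M -t write -p seq /mnt/pfs/testfile"
```

Choosing a block size (-b) much smaller than the per-rank share yields many individual operations, which is what exercises the file system's metadata and striping behavior.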

DESCRIPTION

The pio-test command is a utility designed for benchmarking and analyzing I/O performance, particularly on parallel file systems like Lustre, GPFS (now IBM Spectrum Scale), or BeeGFS, but it can also test local file systems. It allows users to simulate various I/O workloads by specifying parameters such as total I/O size, block size, I/O pattern (sequential or random), and the type of operation (read, write, read-modify-write).

By performing synthetic read and write operations, the tool helps characterize the I/O behavior of a storage system and pinpoint bottlenecks. It is commonly used by system administrators and performance engineers in HPC environments to optimize storage configurations, troubleshoot performance issues, and validate new hardware or file system deployments. The output typically includes aggregated throughput and, in some configurations, latency metrics.

CAVEATS

pio-test frequently relies on an MPI (Message Passing Interface) environment (e.g., OpenMPI, MPICH) for its parallel testing capabilities. Without MPI, it can only perform local, single-process I/O tests. The validity of benchmark results can be significantly affected by system-level caching; using the --direct flag can help mitigate this. As a synthetic benchmark, pio-test may not perfectly mimic the complex I/O patterns of real-world applications, so its results should be interpreted as an indicator of maximum potential throughput rather than actual application performance. Running large-scale tests can consume substantial disk space and I/O bandwidth, potentially impacting other users or applications on a shared system.
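One way to limit the impact described above on a shared system is to validate the argument set with a dry run (-d) before launching the full direct-I/O test. A hedged sketch, with illustrative sizes and paths (the commands are echoed rather than executed):

```shell
#!/bin/sh
# Validate arguments first (-d parses and sets up, performs no I/O),
# then run the real test with O_DIRECT (-D) to bypass the page cache.
ARGS="-s 4G -b 1M -t write -p seq -D /mnt/pfs/testfile"
echo "pio-test -d $ARGS"   # dry run
echo "pio-test $ARGS"      # real run
```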

INTEGRATION WITH MPI

For parallel I/O operations, pio-test is typically launched through an MPI job launcher like mpirun or srun. The number of parallel I/O streams or processes (MPI ranks) is determined by the -np argument passed to the MPI launcher, not usually directly by pio-test itself. Each MPI process contributes to the total I/O workload, allowing the tool to assess the aggregate performance of the parallel file system.
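On a Slurm-managed cluster the same launch pattern applies with srun: the rank count comes from the scheduler, not from pio-test itself. A sketch with illustrative job parameters (node count, rank count, and path are made up):

```shell
#!/bin/sh
# Sketch: 16 ranks across 4 nodes; each rank contributes its share
# of the 64 GiB aggregate workload to the parallel file system.
NODES=4
RANKS=16
echo "srun -N ${NODES} -n ${RANKS} pio-test -s 64G -b 4M -t read -p rand /mnt/pfs/testfile"
```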

INTERPRETING OUTPUT

The primary output of pio-test focuses on aggregated I/O bandwidth, typically reported in MB/s or GB/s. For write tests, this signifies the rate at which data is written to the file system, and for read tests, the rate at which data is retrieved. While pio-test primarily provides throughput metrics, more detailed statistical breakdowns, including minimum, maximum, and average performance, can be obtained when used in conjunction with the full piostats collection framework, offering deeper insights into performance variability.
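Aggregate bandwidth is simply the total data moved divided by wall-clock time. A minimal sketch of that arithmetic (the numbers are illustrative, not real pio-test output):

```shell
#!/bin/sh
# 10 GiB moved in 8 s of wall-clock time -> 1280 MiB/s aggregate.
TOTAL_MIB=10240
ELAPSED_S=8
echo "throughput: $((TOTAL_MIB / ELAPSED_S)) MiB/s"
```

Note that per-rank throughput can vary widely even when the aggregate looks healthy, which is why the min/max/average breakdown from the piostats framework is useful.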

HISTORY

pio-test is a core component of the piostats suite, a collection of tools for monitoring and benchmarking parallel I/O in high-performance computing (HPC) environments. Its development is closely tied to the evolution and widespread adoption of parallel and distributed file systems such as Lustre, GPFS (now IBM Spectrum Scale), and others. The tool emerged from the need of system administrators, storage architects, and researchers to accurately measure, characterize, and optimize the throughput and latency of these large-scale storage infrastructures against the demanding I/O requirements of scientific applications.

SEE ALSO

fio(1), iozone(1), dd(1), iostat(1), bonnie++(1), mpiio-test(1)
