lli
Execute LLVM bitcode directly
TLDR
Execute a bitcode or IR file
Execute with command-line arguments
Enable all optimizations
Load a shared library before executing the program
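The entries above correspond to invocations like the following; file and library names are placeholders:

```shell
# Execute a bitcode or IR file
lli path/to/file.bc

# Execute with command-line arguments (everything after the file goes to the program)
lli path/to/file.bc arg1 arg2

# Enable all optimizations during JIT code generation
lli -O3 path/to/file.bc

# Load a shared library into the lli process before execution
lli -load=path/to/libplugin.so path/to/file.bc
```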
SYNOPSIS
lli [options] <input bitcode file> [program arguments]
PARAMETERS
-help
Print all available options and their descriptions.
-version
Display the lli version information.
-stats
Print statistics about the compilation and execution process.
-time-passes
Time the execution of individual optimization passes during JIT compilation.
-jit-kind=<kind>
Select the JIT engine to use (e.g. mcjit or orc-lazy) instead of the default.
-force-interpreter
Force the use of the interpreter engine, bypassing JIT compilation.
-load=<plugin>
Dynamically load a plugin (shared library) into lli at startup.
-fake-argv0=<name>
Override the argv[0] value seen by the executed program (by default, the bitcode file name). Program arguments themselves need no flag; they are simply placed after the input file, as in lli <bitcode> <program_args>.
-entry-function=<name>
Specify the entry function to execute in the bitcode file (default is main).
-O<level>
Set the optimization level applied during JIT code generation, from -O0 (none) to -O3 (aggressive).
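As a brief sketch of the entry-point option, here my_entry is a hypothetical function with a main-compatible signature:

```shell
# Run my_entry instead of main; the function must be callable like main
lli -entry-function=my_entry path/to/file.bc
```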
DESCRIPTION
lli (the LLVM interpreter) is a command-line tool from the LLVM project designed to execute LLVM bitcode directly. It provides a crucial capability for testing and running programs compiled to LLVM Intermediate Representation (IR) without the need for a full ahead-of-time (AOT) compilation to native machine code. This makes lli invaluable during compiler development, for rapid prototyping, or for executing code on diverse architectures where native compilation toolchains might be incomplete or slow.
lli supports multiple execution engines. By default it uses a Just-In-Time (JIT) compiler to translate the bitcode into native machine code on the fly for better performance. Alternatively, it can be forced into interpreter mode, which is much slower but can be useful for debugging or when precise IR fidelity is required.
Users can feed lli an LLVM bitcode file (e.g., generated by clang -emit-llvm -c or by opt), and it will execute the main function (or another entry point selected with -entry-function) within that bitcode. Command-line arguments placed after the input file are passed directly to the executed program. External functions are resolved against the host process and any libraries loaded with -load, making lli a convenient tool for developing and experimenting with LLVM-based languages and compilers.
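A typical round trip, assuming a local hello.c, looks like this:

```shell
# Compile C to LLVM bitcode without producing native object code
clang -O1 -emit-llvm -c hello.c -o hello.bc

# Execute the bitcode; lli's exit status is the program's exit status
lli hello.bc
```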
CAVEATS
lli is typically slower than natively compiled executables, especially in interpreter mode. Because the executed code runs inside the lli process, external symbols must be resolvable there, via the host C runtime or libraries loaded with -load, so linking external libraries or making platform-specific system calls can introduce dependencies that are not fully transparent. Bitcode produced by a newer LLVM release may not load in an older lli. Debugging issues inside JIT-compiled code can also be harder than with native debuggers.
EXECUTION MODES
lli supports both a pure interpreter mode and a Just-In-Time (JIT) compilation mode. The JIT mode compiles bitcode into native machine code on the fly for better performance, while the interpreter mode executes the LLVM IR instruction by instruction, which is useful for debugging or when JIT code generation isn't feasible on the target. The choice is controlled with flags such as -force-interpreter and, in recent releases, -jit-kind.
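The mode can be chosen explicitly; -jit-kind is available in recent LLVM releases, and the set of kind names varies by version:

```shell
# Force the slow but IR-faithful interpreter
lli -force-interpreter path/to/file.bc

# Select a specific JIT engine (e.g. mcjit or orc-lazy)
lli -jit-kind=mcjit path/to/file.bc
```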
PASSING PROGRAM ARGUMENTS
Arguments for the executed program are placed after the bitcode file name: everything following the input file is passed to the program rather than interpreted as lli options. For example, lli myprogram.bc arg1 arg2 runs myprogram.bc with arg1 and arg2 in its argv. Consequently, lli's own options must appear before the input file.
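In practice, simply placing arguments after the input file is enough; argcount.bc here is a placeholder for bitcode whose main inspects argc/argv:

```shell
# Both arguments after the input file reach the program, not lli
lli argcount.bc --verbose input.txt
```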
HISTORY
lli has been a fundamental component of the LLVM project since its inception at the University of Illinois at Urbana-Champaign in 2000. It was designed to provide a rapid execution environment for LLVM Intermediate Representation, enabling quicker testing and development cycles for language frontends and compiler optimizations. Its evolution has mirrored the development of LLVM itself, incorporating advanced JIT capabilities alongside its foundational interpreter mode, making it a stable and essential tool for the LLVM ecosystem.