LinuxCommandLibrary

mods

AI assistant for the command line by Charm

TLDR

Ask a one-off question
$ mods "[explain what SIGPIPE does]"
Pipe file contents into a prompt
$ cat [file.py] | mods "[review this code for bugs]"
Select a specific model
$ mods -m [gpt-4o] "[summarise this]"
Continue the last conversation
$ mods -C "[and how would I test that?]"
Resume a named conversation
$ mods -c [refactor] "[next step]"
Format the response as Markdown
$ mods -f "[write release notes from these commits]"
List saved conversations
$ mods -l
Apply a custom role / system prompt
$ mods --role [shell] "[find large files in /var]"

SYNOPSIS

mods [options] [prompt]

DESCRIPTION

mods is a command-line AI companion that turns STDIN and arguments into a prompt, sends it to a configured LLM, and streams the response back to the terminal. It supports OpenAI, Anthropic, Cohere, Groq, Google Gemini, Azure, and local providers such as Ollama and LocalAI, with the model selected per request via --model or set in the YAML configuration. Conversations are cached to disk so they can be resumed by name or continued from the last turn, and Markdown rendering via Glamour displays replies with syntax highlighting and headings when --format is set.
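The provider and model selection described above is configured in mods' YAML settings file (opened with `mods --settings`). A minimal sketch, assuming an OpenAI API key in the `OPENAI_API_KEY` environment variable and a local Ollama endpoint; the key names shown (`default-model`, `apis`, `base-url`, `api-key-env`) follow the upstream configuration format but should be checked against the settings file your installed version generates:

```yaml
# Model used when -m/--model is not given.
default-model: gpt-4o

apis:
  openai:
    base-url: https://api.openai.com/v1
    api-key-env: OPENAI_API_KEY   # key is read from the environment, not stored here
    models:
      gpt-4o:
        aliases: ["4o"]
  ollama:
    base-url: http://localhost:11434/api
    models:
      "llama3:latest":
        aliases: ["llama3"]
```

With a layout like this, `mods -m llama3 "..."` would resolve the alias to the local Ollama model while `mods "..."` falls back to the default.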

PARAMETERS

PROMPT

Text passed to the model. Combined with any data read from STDIN.
-m, --model NAME
Use the named model (for example `gpt-4o`, `claude-3-5-sonnet`, a local Ollama model).
-M, --ask-model
Prompt interactively to choose a model.
-f, --format
Ask the model for formatted (Markdown) output and render it in the terminal.
--format-as FORMAT
Specify the output format (for example `markdown`, `json`).
-r, --raw
Print the raw, unformatted response.
-q, --quiet
Suppress non-error output.
-P, --prompt
Include the prompt from the command arguments and STDIN in the response.
-p, --prompt-args
Include the CLI prompt arguments in the response.
--max-tokens N
Limit the response length.
--no-limit
Do not restrict the response length.
--word-wrap WIDTH
Wrap output at the given column (default: 80).
-t, --title NAME
Title the current conversation for later retrieval.
-l, --list
List saved conversations.
-c, --continue NAME
Resume a saved conversation by name.
-C, --continue-last
Resume the most recent conversation.
-s, --show NAME
Print a saved conversation.
--no-cache
Do not persist the conversation to the cache.
--role NAME
Apply a custom role / system prompt defined in the config.
--temp FLOAT
Sampling temperature.
--topp FLOAT, --topk INT
Top-p / top-k sampling parameters.
--theme NAME
UI theme (`charm`, `catppuccin`, `dracula`, `base16`).
-x, --http-proxy URL
Route requests through an HTTP proxy.
--settings
Open the settings file in `$EDITOR`.
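Roles referenced by `--role` are defined in the same settings file as named lists of system-prompt lines. A hedged sketch, using the `shell` role name from the TLDR example above (verify the exact schema against your configuration):

```yaml
roles:
  shell:
    - you are a shell expert
    - you do not explain anything
    - you only output one-line shell commands
```

Running `mods --role shell "find large files in /var"` would then prepend those lines as the system prompt before your query.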

CAVEATS

Requires an API key (or local endpoint) configured for the chosen provider; commercial providers incur per-token costs. Prompts sent from the shell are stored in plain text in the cache directory unless --no-cache is used. As of March 2026 the upstream project is archived; Charm recommends Crush as its successor, though existing mods installations continue to work.

HISTORY

mods was created by Charm as part of their suite of terminal tools (alongside gum, glow, and charm). It was archived in March 2026 in favour of Charm's newer Crush CLI.

SEE ALSO

llm(1), ollama(1), glow(1), gum(1), charm(1)
