LinuxCommandLibrary

chatgpt

command-line interface for OpenAI language models

TLDR

Start an interactive chat session

$ chatgpt
Send a single prompt and get a response
$ chatgpt "[What is the capital of France?]"
Pipe input to ChatGPT
$ cat [file.txt] | chatgpt "[Summarize this text]"
Use a specific model
$ chatgpt --model [gpt-4] "[prompt]"
Continue a previous conversation
$ chatgpt --continue "[follow-up question]"
Set system prompt for context
$ chatgpt --system "[You are a helpful coding assistant]" "[prompt]"
Output response to a file
$ chatgpt "[prompt]" > [response.txt]
Set temperature for response creativity
$ chatgpt --temperature [0.7] "[prompt]"

SYNOPSIS

chatgpt [options] [prompt]

DESCRIPTION

chatgpt is a command-line interface for interacting with OpenAI's ChatGPT models. It provides terminal-based access to the GPT language models for text generation, coding assistance, analysis, and general conversation.
The tool supports both interactive mode for back-and-forth conversation and single-prompt mode for quick queries. Input can be piped from other commands or files, making it useful in shell pipelines for text processing tasks.
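For example, command output or the contents of a file can be fed in and the result redirected to another file (the commands and file names below are only illustrative placeholders):
$ git diff | chatgpt "[Write a concise commit message for this diff]"
$ chatgpt "[Explain the errors in this log]" < [build.log] > [notes.txt]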
Configuration can be set via command-line flags, environment variables, or a config file. The OPENAI_API_KEY environment variable is commonly used for authentication. Different models offer varying capabilities, speed, and pricing.
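As an illustration, the key and a configuration file can also be given explicitly on the command line (the config path shown is a hypothetical placeholder; the actual location depends on the implementation):
$ chatgpt --api-key "[your-api-key]" "[prompt]"
$ chatgpt --config [~/.config/chatgpt/config.yaml] --list-models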
System prompts allow customizing the assistant's behavior and area of expertise. Temperature controls response creativity: lower values produce more focused responses, while higher values increase variety. The streaming option displays the response token by token as it is generated.
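For example, a focused, non-streamed review and a more exploratory streamed brainstorm might look like this (the prompts are illustrative):
$ chatgpt --system "[You are a senior code reviewer]" --temperature [0.2] --no-stream "[Review this function for bugs]"
$ chatgpt --stream --temperature [1.5] "[Brainstorm names for a CLI tool]"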

PARAMETERS

--model, -m model

Specify the model to use (gpt-4, gpt-3.5-turbo, etc.).
--system, -s prompt
Set a system prompt to define assistant behavior.
--continue, -c
Continue the previous conversation.
--temperature, -t value
Control randomness (0.0-2.0, default: 1.0).
--max-tokens n
Maximum tokens in the response.
--top-p value
Nucleus sampling parameter.
--stream
Stream the response as it's generated.
--no-stream
Wait for complete response before displaying.
--api-key key
OpenAI API key (or set the OPENAI_API_KEY environment variable).
--config file
Path to configuration file.
--list-models
List available models.
--help
Display help information.
--version
Display version information.
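
These options can be combined in a single invocation; for example, a deterministic, length-limited follow-up to the previous conversation (the prompt is illustrative):
$ chatgpt --continue --model [gpt-4] --max-tokens [200] --temperature [0.0] "[Now show the same fix in Python]"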

CONFIGURATION

OPENAI_API_KEY

Environment variable for API authentication.
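Export it once in the current shell, or add it to a shell startup file, so the key does not need to be passed on every invocation (the key value is a placeholder):
$ export OPENAI_API_KEY="[your-api-key]"
$ chatgpt "[prompt]"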

CAVEATS

Requires an OpenAI API key with active billing. API usage incurs costs based on token consumption. Response quality and capabilities vary by model. Network connectivity required. Context length is limited by model constraints. Sensitive data should not be sent without considering privacy implications.

HISTORY

Command-line interfaces for ChatGPT emerged shortly after OpenAI released the ChatGPT API in March 2023. Multiple CLI implementations exist across languages (Python, Go, Rust, etc.) with varying feature sets. These tools brought GPT capabilities to terminal-centric workflows, enabling integration with shell scripts and development pipelines.

SEE ALSO

curl(1), jq(1), claude(1), ollama(1)
