sgpt
Get AI-powered responses from the command line
TLDR
Use it as a search engine, for example asking for the mass of the Sun
Generate and execute shell commands, for example applying chmod 444 to all files in the current directory
Generate code, for example solving the classic fizz buzz problem
Start a chat session with a unique session name
Start a REPL (read-eval-print loop) session
Display help (example invocations for these tasks are sketched below)
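Example invocations for the tasks above (a sketch; prompt wording and session names are placeholders, and flag spellings follow this page's synopsis):
sgpt "mass of the sun"
sgpt --shell "apply chmod 444 to all files in the current directory"
sgpt --code "solve the classic fizz buzz problem in Python"
sgpt --chat my_session "remember that my favorite number is 4"
sgpt --repl my_repl
sgpt --help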
SYNOPSIS
sgpt [OPTIONS] <PROMPT...>
sgpt --shell [OPTIONS] <PROMPT...>
sgpt --code [OPTIONS] <PROMPT...>
sgpt --chat [OPTIONS] <CHAT_ID> <PROMPT...>
sgpt --repl [OPTIONS] <REPL_ID>
sgpt --info
sgpt --help
PARAMETERS
-s, --shell
Instructs sgpt to generate and explain shell commands.
-c, --code
Directs sgpt to output only code; the desired programming language is specified within the prompt itself.
-p, --prompt
Sets sgpt to operate in general prompt mode, suitable for general questions and content generation (often the default).
-i, --chat
Enables a continuous conversational chat session identified by ID, maintaining context across interactions.
--repl
Starts an interactive REPL (read-eval-print loop) session identified by ID, in which prompts are entered and answered continuously.
-m, --model
Specifies the LLM model to be used (e.g., gpt-4, gpt-3.5-turbo).
-t, --temperature
Controls the randomness of the output, with values typically ranging from 0.0 (deterministic) to 2.0 (more creative).
--no-stream
Disables the streaming output feature, causing sgpt to wait for the complete response before displaying it.
--info
Displays information about the current configuration, active model, and API usage.
-v, --version
Prints the installed sgpt version number.
-h, --help
Shows the command's help message and available options.
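Options can be combined with any of the modes above. A sketch using the options as listed on this page (the model name, temperature value, and prompt are placeholders):
sgpt --model gpt-4 --temperature 0.2 --no-stream "explain the difference between a hard link and a symbolic link"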
DESCRIPTION
sgpt, also known as ShellGPT, is a versatile command-line interface (CLI) tool designed to integrate large language models (LLMs) directly into your shell environment. It empowers users by providing AI-driven assistance for a wide array of tasks, including generating executable shell commands, explaining complex commands, writing code snippets, answering general questions, and engaging in conversational chats.
By leveraging the power of models like OpenAI's GPT series, sgpt transforms your terminal into an intelligent co-pilot. Users interact with it by posing natural language queries, and sgpt responds with relevant, context-aware information or direct command suggestions. It supports various operational modes (shell, code, chat, prompt) to optimize its output for specific user intentions, enhancing productivity and learning within the command-line workflow.
Typically installed via Python's package manager pip, sgpt requires an API key for the chosen LLM service to function, and it often streams its responses for a more dynamic and interactive user experience.
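As an illustration, the same need can be phrased for different modes (a sketch; the prompts and the shape of the responses are illustrative):
sgpt "how do I extract a .tar.gz archive?"                          # prompt mode: a prose answer
sgpt --shell "extract archive.tar.gz into the current directory"    # shell mode: a runnable command
sgpt --code "a Python function that extracts a .tar.gz archive"     # code mode: code only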
CAVEATS
sgpt is a third-party tool, not a standard component of Linux distributions, and must be installed separately. It relies on external Large Language Model APIs (e.g., OpenAI, Anthropic), so an API key is required and usage may incur costs based on token consumption. The quality and accuracy of the generated content depend on the underlying LLM and on the clarity of the prompt; sgpt can occasionally produce incorrect, outdated, or potentially harmful information or commands, so review generated shell commands before executing them.
INSTALLATION
sgpt is typically installed via Python's package manager:
pip install shell-gpt
Ensure Python and pip are installed on your system.
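One way to verify the prerequisites and confirm the installation (a sketch; pipx or a virtual environment would work equally well):
python3 --version    # confirm Python is available
pip --version        # confirm pip is available
sgpt --version       # confirm sgpt is on PATH after installation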
CONFIGURATION
Before first use, an API key for your chosen LLM provider must be configured. This is commonly done by setting an environment variable, for example:
export OPENAI_API_KEY='your_api_key_here'
Alternatively, sgpt keeps persistent settings in a configuration file (by default ~/.config/shell_gpt/.sgptrc), where the API key, default model, and similar options can be set.
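To make the key persist across sessions, it can be appended to the shell's startup file (a sketch assuming bash; adapt the file for other shells):
echo "export OPENAI_API_KEY='your_api_key_here'" >> ~/.bashrc
source ~/.bashrc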
EXAMPLE USAGE
To get a shell command:
sgpt --shell "how to find files larger than 1GB?"
To ask a general question:
sgpt "What is the capital of France?"
To start a chat session:
sgpt --chat mychat "Tell me about Linux"
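Because the session is identified by its name, a follow-up prompt with the same ID continues the conversation with its context retained (prompt wording is illustrative):
sgpt --chat mychat "Summarize what you just told me in one sentence"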
HISTORY
The sgpt project emerged as a response to the growing interest and accessibility of advanced Large Language Models, aiming to bring their capabilities directly into the Unix-like command-line environment. Developed primarily in Python, its design focuses on simplicity, ease of use, and seamless integration with existing shell workflows, providing a modern AI-driven alternative or complement to traditional command-line assistance tools.