LinuxCommandLibrary

tgpt

Interact with a ChatGPT-like AI in the terminal

TLDR

Chat with the default provider (GPT-3.5-turbo)

$ tgpt "[prompt]"

Start multi-line interactive mode
$ tgpt [[-m|--multiline]]

Generate images and save them to the current directory
$ tgpt [[-img|--image]] "[prompt]"

Generate code with the default provider (GPT-3.5-turbo)
$ tgpt [[-c|--code]] "[prompt]"

Chat with a specific provider quietly (without animations)
$ tgpt --provider [openai|opengpts|koboldai|phind|llama2|blackboxai] [[-q|--quiet]] [[-w|--whole]] "[prompt]"

Generate and execute shell commands using a specific provider (with a confirmation prompt)
$ tgpt --provider [llama2] [[-s|--shell]] "[prompt]"

Prompt with an API key, model, maximum response length, temperature, and top_p (the API key is required when using the openai provider)
$ tgpt --provider openai --key "[api_key]" --model "[gpt-3.5-turbo]" --max-length [10] --temperature [0.7] --top_p [0.9] "[prompt]"

Feed a file as additional pre-prompt input
$ tgpt --provider [blackboxai] "[prompt]" < [path/to/file]

SYNOPSIS

tgpt [options] [prompt ...]
tgpt [--model model_name] [--chat chat_id] [--stream] [--temperature value] [--system "prompt"] ["Your query here."]

PARAMETERS

prompt ...
    The text query or instruction to send to the AI model. If the prompt contains spaces, it typically needs to be enclosed in quotes.

--model model_name
    Specifies the AI model to use for the request (e.g., gpt-4, gpt-3.5-turbo, gemini-pro, llama). The availability of models depends on the configured API keys and the specific tgpt implementation.

--chat chat_id
    Engages in a conversational session. This option allows tgpt to maintain and retrieve conversation history using a specified chat_id (e.g., a session name or identifier).
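Assuming the installed build supports the --chat option described above, a session could be reused across invocations like this (the session name my-session is illustrative):

```shell
# Start a named conversation; history is kept under this identifier
tgpt --chat my-session "What is a symbolic link?"

# Ask a follow-up in the same session; the model sees the prior exchange
tgpt --chat my-session "How do I remove one safely?"
```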

--stream
    Enables streaming output, displaying the AI response character by character or word by word as it is generated, rather than waiting for the entire response to complete.

--temperature value
    Controls the randomness and creativity of the AI's output. A value closer to 0 (e.g., 0.2) results in more deterministic and focused responses, while a higher value (e.g., 0.8) encourages more diverse and creative output.
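For example (flag availability depends on the installed version and provider):

```shell
# Low temperature: focused, largely repeatable answer
tgpt --temperature 0.2 "Summarize what the chmod command does in one sentence."

# High temperature: more varied, creative output
tgpt --temperature 0.9 "Suggest five playful names for a backup script."
```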

--system "prompt"
    Provides an initial system message or role-playing context to the AI. This guides the AI's behavior or persona for the current query or entire chat session.
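A hedged illustration of the option above, constraining the assistant's persona for a single query:

```shell
# The system prompt shapes tone and format; the query follows as usual
tgpt --system "You are a terse shell expert. Answer in one line." \
    "How do I find files larger than 100 MB?"
```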

--list-models
    Displays a list of AI models that the current tgpt configuration can access, based on the enabled APIs and keys.

--config
    Opens or displays the configuration file for tgpt, allowing users to inspect or modify API keys, default models, and other settings.

--help
    Displays a comprehensive help message, detailing the command's usage syntax and all available options.

--version
    Prints version information for the installed tgpt binary.

DESCRIPTION

tgpt is a versatile command-line utility that provides a seamless interface for interacting with large language models (LLMs) directly from your terminal. Unlike standard Linux commands, tgpt is a community-maintained, user-installed program (written in Go) that wraps multiple AI backends: commercial APIs such as OpenAI's GPT series, as well as the free providers shown in the examples above (e.g., phind, llama2, blackboxai).

It enables users to quickly query AI models, generate text, write code, answer questions, and perform other AI-driven tasks without needing to open a web browser. Key features often include support for different models, streaming responses for real-time output, managing conversation history, and customizable parameters like temperature and system prompts, making it an invaluable tool for developers, writers, and anyone looking to integrate AI into their command-line workflow.
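Because tgpt accepts standard input as additional pre-prompt material (as in the file-feeding example above), it composes naturally with other commands. A hypothetical pipeline, using the -q flag from the TLDR examples:

```shell
# Feed a diff to the model as pre-prompt input and ask for a summary
git diff | tgpt -q "Write a one-line summary of these changes."
```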

CAVEATS

tgpt is not a standard Linux utility and typically requires manual installation. Its operation is contingent on an active internet connection; some providers additionally require a valid API key (e.g., an OpenAI API key for the openai provider), which may involve usage costs, while several bundled providers work without one. Users should be aware of data privacy implications, as queries are sent to third-party AI services. The functionality and supported models of tgpt are subject to changes by the respective AI service providers and the project maintainers.

INSTALLATION

tgpt is distributed as prebuilt binaries and an install script, and it can also be built from source with the Go toolchain. Building from source looks roughly like this:
git clone https://github.com/aandrew-me/tgpt.git
cd tgpt
go build
sudo cp tgpt /usr/local/bin/
Follow the specific instructions from the project's README for the most accurate and up-to-date installation guidance.
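Once installed (by whichever method the README recommends), you can confirm the binary is reachable on your PATH:

```shell
# Prints the resolved path, then the installed version
command -v tgpt && tgpt --version
```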

CONFIGURATION

Some providers require authentication before use. An API key is usually supplied with the --key flag or through environment variables (e.g., OPENAI_API_KEY="your_key_here"); some builds also read a configuration file (often located under ~/.config/tgpt/ or similar). Consult the project's documentation for precise configuration steps and supported environment variables.
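A minimal environment-variable setup might look like the following (the variable name is an example; check the README for the exact names your version reads):

```shell
# In ~/.bashrc or ~/.profile
export OPENAI_API_KEY="your_key_here"

# The key can also be passed explicitly per invocation
tgpt --provider openai --key "$OPENAI_API_KEY" "Hello"
```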

HISTORY

The emergence of tgpt, and similar command-line interfaces for large language models, closely parallels the public accessibility and widespread adoption of powerful AI models like OpenAI's GPT series, beginning in the early 2020s. These tools were developed by the open-source community to provide a more direct and scriptable way for developers and power users to interact with AI from their terminal, leveraging existing API infrastructures. Unlike traditional Linux commands with decades of history, tgpt's development is rapid and continuously evolving to support new AI models, features, and API changes, driven by community contributions rather than being a part of a standard distribution.

SEE ALSO

curl(1), python(1), bash(1), gpt4all(1), ollama(1)
