LinuxCommandLibrary

ollama

runs large language models locally

TLDR

Run a model

$ ollama run [llama2]

List installed models

$ ollama list

Pull a model

$ ollama pull [mistral]

Remove a model

$ ollama rm [model_name]

Start server

$ ollama serve

Create custom model

$ ollama create [mymodel] -f [Modelfile]

SYNOPSIS

ollama [command] [options]

DESCRIPTION

ollama downloads, manages, and runs large language models on the local machine. It provides local LLM inference through an interactive prompt and an HTTP API server, and supports a variety of open models.
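The server started by `ollama serve` exposes an HTTP API on port 11434 by default. As a sketch (assuming the server is running and the llama2 model has already been pulled), a one-shot completion can be requested with curl:

```shell
# Request a single completion from the local Ollama server.
# Assumes `ollama serve` is running on the default port 11434
# and that the llama2 model has been pulled.
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

With `"stream": false` the server returns one JSON object containing the full response; omitting it streams partial responses line by line.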

PARAMETERS

run MODEL
Run a model interactively; pulls the model first if it is not already present locally.
pull MODEL
Download a model from the registry.
list
List locally installed models.
rm MODEL
Remove a locally installed model.
serve
Start the API server.
create NAME
Create a custom model from a Modelfile.
--help
Display help information.
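`create` builds a custom model from a Modelfile. As a minimal sketch (the base model `llama2`, the temperature value, and the system prompt are illustrative placeholders):

```
# Modelfile: derive a custom model from llama2
FROM llama2

# Sampling temperature (higher values give more varied output)
PARAMETER temperature 0.8

# System prompt applied to every conversation
SYSTEM "You are a concise assistant that answers in one paragraph."
```

Saved as `Modelfile`, this would be built and run with `ollama create mymodel -f Modelfile` followed by `ollama run mymodel`.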

CAVEATS

Requires sufficient RAM or VRAM for the chosen model; model sizes vary widely. GPU acceleration is supported on compatible hardware.

HISTORY

Ollama was created for easy local LLM deployment and management.
