# LLM
[GitHub](https://github.com/simonw/llm)
[PyPI](https://pypi.org/project/llm/)
[Changelog](https://llm.datasette.io/en/stable/changelog.html)
[Tests](https://github.com/simonw/llm/actions?query=workflow%3ATest)
[License](https://github.com/simonw/llm/blob/main/LICENSE)
[Discord](https://datasette.io/discord-llm)
[Homebrew](https://formulae.brew.sh/formula/llm)
A CLI utility and Python library for interacting with Large Language Models, both via remote APIs and models that can be installed and run on your own machine.
{ref}`Run prompts from the command-line <usage-executing-prompts>`, {ref}`store the results in SQLite <logging>`, {ref}`generate embeddings <embeddings>` and more.
Here's a [YouTube video demo](https://www.youtube.com/watch?v=QUXQNi6jQ30) and [accompanying detailed notes](https://simonwillison.net/2024/Jun/17/cli-language-models/).
Background on this project:
- [llm, ttok and strip-tags—CLI tools for working with ChatGPT and other LLMs](https://simonwillison.net/2023/May/18/cli-tools-for-llms/)
- [The LLM CLI tool now supports self-hosted language models via plugins](https://simonwillison.net/2023/Jul/12/llm/)
- [Accessing Llama 2 from the command-line with the llm-replicate plugin](https://simonwillison.net/2023/Jul/18/accessing-llama-2/)
- [Run Llama 2 on your own Mac using LLM and Homebrew](https://simonwillison.net/2023/Aug/1/llama-2-mac/)
- [Catching up on the weird world of LLMs](https://simonwillison.net/2023/Aug/3/weird-world-of-llms/)
- [LLM now provides tools for working with embeddings](https://simonwillison.net/2023/Sep/4/llm-embeddings/)
- [Build an image search engine with llm-clip, chat with models with llm chat](https://simonwillison.net/2023/Sep/12/llm-clip-and-chat/)
- [Many options for running Mistral models in your terminal using LLM](https://simonwillison.net/2023/Dec/18/mistral/)
For more, check out [the llm tag](https://simonwillison.net/tags/llm/) on my blog.
## Quick start
First, install LLM using `pip`, Homebrew, `pipx`, or `uv`:
```bash
pip install llm
```
Or with Homebrew (see {ref}`warning note <homebrew-warning>`):
```bash
brew install llm
```
Or with [pipx](https://pypa.github.io/pipx/):
```bash
pipx install llm
```
Or with [uv](https://docs.astral.sh/uv/guides/tools/):
```bash
uv tool install llm
```
If you have an [OpenAI API key](https://platform.openai.com/api-keys) you can run this:
```bash
# Paste your OpenAI API key into this
llm keys set openai

# Run a prompt (with the default gpt-4o-mini model)
llm "Ten fun names for a pet pelican"

# Extract text from an image
llm "extract text" -a scanned-document.jpg

# Use a system prompt against a file
cat myfile.py | llm -s "Explain this code"
```
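Since LLM is a Python library as well as a CLI tool, the same prompt can also be run from code. Here is a minimal sketch using the library's `llm.get_model()` API (covered in the Python API section of these docs); it assumes the OpenAI key configured above, and the function name is just illustrative:

```python
def ten_pelican_names(model_id: str = "gpt-4o-mini") -> str:
    """Run a prompt through a model and return the response text."""
    import llm  # the same package installed above

    model = llm.get_model(model_id)  # look up a model by its ID
    response = model.prompt("Ten fun names for a pet pelican")
    return response.text()  # blocks until the full response has arrived
```

Calling `ten_pelican_names()` uses the key you stored with `llm keys set openai`.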
Or you can {ref}`install a plugin <installing-plugins>` and use models that can run on your local device:
```bash
# Install the plugin
llm install llm-gpt4all

# Download and run a prompt against the Orca Mini 3B model
llm -m orca-mini-3b-gguf2-q4_0 'What is the capital of France?'
```
To start {ref}`an interactive chat <usage-chat>` with a model, use `llm chat`:
```bash
llm chat -m gpt-4o
```
```
Chatting with gpt-4o
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
> Tell me a joke about a pelican
Why don't pelicans like to tip waiters?
Because they always have a big bill!
>
```
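Every prompt and response run this way is recorded in a SQLite database (see the logging section); `llm logs path` prints the database's location. Here is a sketch of inspecting that log directly with Python's standard `sqlite3` module; the `responses` table and the column names used below are assumptions drawn from the logging docs, so check your own database's schema before relying on them:

```python
import sqlite3
from pathlib import Path


def recent_prompts(db_path: Path, limit: int = 3) -> list[tuple[str, str]]:
    """Return the most recent (prompt, response) pairs from the log database.

    Assumes a "responses" table with prompt/response/datetime_utc columns,
    per the logging documentation; adjust if your schema differs.
    """
    with sqlite3.connect(db_path) as conn:
        return conn.execute(
            "SELECT prompt, response FROM responses "
            "ORDER BY datetime_utc DESC LIMIT ?",
            (limit,),
        ).fetchall()
```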
## Contents
```{toctree}
---
maxdepth: 3
---
setup
usage
openai-models
other-models
embeddings/index
plugins/index
aliases
python-api
schemas
templates
logging
related-tools
help
contributing
changelog
```