LLM

A CLI utility and Python library for interacting with Large Language Models, both via remote APIs and models that can be installed and run on your own machine.

Run prompts from the command-line, store the results in SQLite, generate embeddings and more.

Full documentation: llm.datasette.io

Installation

Install this tool using pip:

pip install llm

Or using pipx:

pipx install llm

Detailed installation instructions.

Getting started

If you have an OpenAI API key you can get started using the OpenAI models right away.

As an alternative to OpenAI, you can install plugins to access models by other providers, including models that can be installed and run on your own device.

Save your OpenAI API key like this:

llm keys set openai

This will prompt you for your key like so:

Enter key: <paste here>
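Under the hood, keys saved this way land in a JSON file on disk (run `llm keys path` to see where yours lives). A minimal sketch of reading such a file, using a temporary stand-in since the exact filename and layout here are assumptions for illustration:

```python
import json
import pathlib
import tempfile

# Stand-in for the real keys file -- the real path comes from `llm keys path`:
keys_path = pathlib.Path(tempfile.mkdtemp()) / "keys.json"
keys_path.write_text(json.dumps({"openai": "sk-..."}))

# Keys are stored as a simple name-to-secret JSON mapping (assumed layout):
keys = json.loads(keys_path.read_text())
print(keys["openai"])
```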

Now that you've saved a key you can run a prompt like this:

llm "Five cute names for a pet penguin"
1. Waddles
2. Pebbles
3. Bubbles
4. Flappy
5. Chilly

Read the usage instructions for more.
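If you want to drive the CLI from a script, one approach is to assemble the argument list and shell out. This is a hedged sketch, not part of LLM itself: `build_llm_argv` is a hypothetical helper, and the subprocess call only fires if the `llm` binary is actually installed.

```python
import shutil
import subprocess

def build_llm_argv(prompt, model=None, system=None):
    """Assemble an `llm` command line; -m picks a model, -s sets a system prompt."""
    argv = ["llm"]
    if model:
        argv += ["-m", model]
    if system:
        argv += ["-s", system]
    argv.append(prompt)
    return argv

argv = build_llm_argv("Five cute names for a pet penguin")
print(argv)  # ['llm', 'Five cute names for a pet penguin']
if shutil.which("llm"):  # only run if the tool is on PATH
    subprocess.run(argv, check=True)
```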

Installing a model that runs on your own machine

LLM plugins can add support for alternative models, including models that run on your own machine.

To download and run Llama 2 13B locally, you can install the llm-mlc plugin:

llm install llm-mlc
llm mlc pip install --pre --force-reinstall \
  mlc-ai-nightly \
  mlc-chat-nightly \
  -f https://mlc.ai/wheels
llm mlc setup

Then download the 15GB Llama 2 13B model like this:

llm mlc download-model Llama-2-13b-chat --alias llama2

And run a prompt through it:

llm -m llama2 'difference between a llama and an alpaca'

You can also start a chat session with the model using the llm chat command:

llm chat -m llama2
Chatting with mlc-chat-Llama-2-13b-chat-hf-q4f16_1
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
> 

Using a system prompt

You can use the -s/--system option to set a system prompt, providing instructions for processing other input to the tool.

To describe how the code in a file works, try this:

cat mycode.py | llm -s "Explain this code"
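The same pipeline can be sketched from Python: the file contents go to stdin and -s carries the instruction. A minimal, hedged sketch (the helper name is made up, and `llm` is only invoked if it is installed):

```python
import pathlib
import shutil
import subprocess
import tempfile

def explain_file_command(path, instruction="Explain this code"):
    """Return (argv, stdin_text) mirroring `cat path | llm -s instruction`."""
    return ["llm", "-s", instruction], pathlib.Path(path).read_text()

# Demo with a throwaway file standing in for mycode.py:
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("print('hello')\n")

argv, stdin_text = explain_file_command(f.name)
print(argv)  # ['llm', '-s', 'Explain this code']
if shutil.which("llm"):
    subprocess.run(argv, input=stdin_text, text=True)
```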

Help

For help, run:

llm --help

You can also use:

python -m llm --help