Documentation now talks about installing models more, closes #100

Simon Willison 2023-07-12 07:01:01 -07:00
parent 7a7bb3aed6
commit adcf691678
5 changed files with 92 additions and 35 deletions


@@ -29,7 +29,7 @@ brew install simonw/llm/llm
If you have an [OpenAI API key](https://platform.openai.com/account/api-keys) you can get started using the OpenAI models right away.
(You can [install plugins](https://github.com/simonw/llm-plugins) to access models by other providers, including models that can be installed and run on your own device.)
As an alternative to OpenAI, you can [install plugins](https://llm.datasette.io/en/stable/plugins/installing-plugins.html) to access models by other providers, including models that can be installed and run on your own device.
Save your OpenAI API key like this:


@@ -3,7 +3,7 @@
(v0_5)=
## Unreleased
LLM now supports **additional language models**, thanks to a new {ref}`plugins` mechanism for registering additional models.
LLM now supports **additional language models**, thanks to a new {ref}`plugins mechanism <installing-plugins>` for installing additional models.
Plugins are available for 19 models in addition to the default OpenAI ones:


@@ -6,18 +6,20 @@
[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/llm/blob/main/LICENSE)
[![Discord](https://img.shields.io/discord/823971286308356157?label=discord)](https://datasette.io/discord-llm)
A command-line utility for interacting with Large Language Models, such as OpenAI's GPT series.
A command-line utility for interacting with Large Language Models, including OpenAI, PaLM and local models installed on your own machine.
## Quick start
You'll need an [OpenAI API key](https://platform.openai.com/account/api-keys) for this:
First, install LLM using `pip` or Homebrew:
```bash
# Install LLM
pip install llm
# Or use: brew install simonw/llm/llm
```
If you have an [OpenAI API key](https://platform.openai.com/account/api-keys) you can run this:
```bash
# Paste your OpenAI API key into this
llm keys set openai
# Run a prompt
@@ -26,9 +28,14 @@ llm "Ten fun names for a pet pelican"
# Run a system prompt against a file
cat myfile.py | llm -s "Explain this code"
```
You can also [install plugins](https://github.com/simonw/llm-plugins) to access models by other providers, including models that can be installed and run on your own device.
Or you can {ref}`install a plugin <installing-plugins>` and use models that can run on your local device:
```bash
# Install the plugin
llm install llm-gpt4all
# Download and run a prompt against the Vicuna model
llm -m ggml-vicuna-7b-1 'What is the capital of France?'
```
## Contents


@@ -1,3 +1,4 @@
(installing-plugins)=
# Installing plugins
Plugins must be installed in the same virtual environment as LLM itself.
@@ -6,14 +7,27 @@ You can find names of plugins to install in the [llm-plugins](https://github.com
Use the `llm install` command (a thin wrapper around `pip install`) to install plugins in the correct environment:
```bash
llm install llm-hello-world
llm install llm-gpt4all
```
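Since `llm install` is described as a thin wrapper around `pip install`, its effect is roughly the following (a sketch, assuming the wrapper simply forwards the package name to pip inside LLM's own environment):

```shell
# Rough equivalent of `llm install llm-gpt4all` (assumption: the wrapper
# only forwards package names to pip in the environment that runs LLM)
PKG="llm-gpt4all"
CMD="python -m pip install $PKG"
echo "$CMD"
```

Running pip directly like this works too, as long as it targets the same virtual environment that LLM itself is installed into.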
Plugins can be uninstalled with `llm uninstall`:
```bash
llm uninstall llm-hello-world -y
llm uninstall llm-gpt4all -y
```
The `-y` flag skips asking for confirmation.
You can see additional models that have been added by plugins by running:
```bash
llm models list
```
Or add `--options` to include details of the options available for each model:
```bash
llm models list --options
```
To run a prompt against a newly installed model, pass its name as the `-m/--model` option:
```bash
llm -m ggml-vicuna-7b-1 'What is the capital of France?'
```
## Listing installed plugins
Run `llm plugins` to list installed plugins:
@@ -24,15 +38,34 @@ llm plugins
```json
[
{
"name": "llm-hello-world",
"name": "llm-mpt30b",
"hooks": [
"register_commands"
"register_commands",
"register_models"
],
"version": "0.1"
},
{
"name": "llm-palm",
"hooks": [
"register_commands",
"register_models"
],
"version": "0.1"
},
{
"name": "llm.default_plugins.openai_models",
"hooks": [
"register_commands",
"register_models"
]
},
{
"name": "llm-gpt4all",
"hooks": [
"register_models"
],
"version": "0.1"
}
]
```


@@ -2,7 +2,7 @@
The default command for this is `llm prompt` - you can use `llm` instead if you prefer.
## Executing a prompt
## Executing a prompt against OpenAI
To run a prompt, streaming tokens as they come in:
```bash
@@ -30,49 +30,66 @@ Some models support options. You can pass these using `-o/--option name value` -
llm 'Ten names for cheesecakes' -o temperature 1.5
```
## Installing and using a local model
{ref}`LLM plugins <plugins>` can provide local models that run on your machine.
To install [llm-gpt4all](https://github.com/simonw/llm-gpt4all), providing 17 models from the [GPT4All](https://gpt4all.io/) project, run this:
```bash
llm install llm-gpt4all
```
Run `llm models list` to see the expanded list of available models.
To run a prompt through one of the models from GPT4All specify it using `-m/--model`:
```bash
llm -m ggml-vicuna-7b-1 'What is the capital of France?'
```
The model will be downloaded and cached the first time you use it.
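This download-on-first-use behaviour follows a common cache pattern, sketched below; the directory and file names are illustrative, not the actual layout used by llm-gpt4all, and the download itself is simulated:

```shell
# Download-on-first-use cache sketch (illustrative paths, simulated download)
CACHE_DIR="${TMPDIR:-/tmp}/llm-model-cache-demo"
MODEL_FILE="ggml-vicuna-7b-1.bin"
mkdir -p "$CACHE_DIR"
if [ ! -f "$CACHE_DIR/$MODEL_FILE" ]; then
    # First use only: fetch the weights (simulated by writing a file)
    echo "fake weights" > "$CACHE_DIR/$MODEL_FILE"
fi
echo "cached at $CACHE_DIR/$MODEL_FILE"
```

Subsequent prompts find the cached file and skip the download entirely.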
## Continuing a conversation
By default, the tool will start a new conversation each time you run it.
You can opt to continue the previous conversation by passing the `-c/--continue` option:
```bash
llm 'More names' --continue
```
This will re-send the prompts and responses for the previous conversation as part of the call to the language model. Note that this can add up quickly in terms of tokens, especially if you are using expensive models.
`--continue` will automatically use the same model as the conversation that you are continuing, even if you omit the `-m/--model` option.
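To see why the token count grows, here is a minimal sketch of what re-sending a conversation amounts to: the previous prompt and response travel along with every new prompt. The transcript format below is purely illustrative, not LLM's actual wire format:

```shell
# Each --continue call replays the prior exchange plus the new prompt,
# so the payload grows with every turn (illustrative transcript format)
HISTORY="user: Ten fun names for a pet pelican
assistant: Beaky, Scoop, Gulliver"
NEXT="More names"
PAYLOAD="$HISTORY
user: $NEXT"
echo "${#PAYLOAD} characters sent vs ${#NEXT} for the new prompt alone"
```

With an expensive model, every continued turn pays again for the whole accumulated transcript.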
To continue a conversation that is not the most recent one, use the `--cid/--conversation <id>` option:
llm 'More names' --cid 01h53zma5txeby33t1kbe3xk8q
```bash
llm 'More names' --cid 01h53zma5txeby33t1kbe3xk8q
```
You can find these conversation IDs using the `llm logs` command.
## Using with a shell
To generate a description of changes made to a Git repository since the last commit:
```bash
llm "Describe these changes: $(git diff)"
```
This pattern of using `$(command)` inside a double quoted string is a useful way to quickly assemble prompts.
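The same pattern works with any command. This runnable sketch uses `echo` as a stand-in for `git diff` so it executes anywhere:

```shell
# $(command) substitutes a command's stdout into the surrounding string
CHANGES=$(echo "diff --git a/demo.py b/demo.py")
PROMPT="Describe these changes: $CHANGES"
echo "$PROMPT"
```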
## System prompts
You can use `-s/--system '...'` to set a system prompt.
```bash
llm 'SQL to calculate total sales by month' \
--system 'You are an exaggerated sentient cheesecake that knows SQL and talks about cheesecake a lot'
```
This is useful for piping content to standard input, for example:
```bash
curl -s 'https://simonwillison.net/2023/May/15/per-interpreter-gils/' | \
llm -s 'Suggest topics for this post as a JSON array'
```
## Listing available models
The `llm models list` command lists every model that can be used with LLM, along with any aliases:
```bash
llm models list
```
Example output: