Mirror of https://github.com/Hopiu/llm.git, synced 2026-05-04 20:04:44 +00:00

Multi-page docs using Markdown and Sphinx, refs #21
Also documents new keys.json mechanism, closes #13

This commit is contained in:
parent a5f5801b96
commit b9865a5576

10 changed files with 458 additions and 118 deletions

README.md (141 changed lines)
@@ -15,113 +15,34 @@ Install this tool using `pip`:

    pip install llm

[Detailed installation instructions](https://llm.datasette.io/en/stable/installation.html).

## Getting started

First, create an OpenAI API key and save it to the tool like this:

```
llm keys set openai
```

This will prompt you for your key like so:

```
$ llm keys set openai
Enter key:
```

Now that you've saved a key you can run a prompt like this:

```
llm "Five cute names for a pet penguin"
```

```
1. Waddles
2. Pebbles
3. Bubbles
4. Flappy
5. Chilly
```

Read the [usage instructions](https://llm.datasette.io/en/stable/usage.html) for more.

## Help
@@ -132,19 +53,3 @@ For help, run:

You can also use:

    python -m llm --help
docs/.gitignore (vendored, new file, 1 line)

@@ -0,0 +1 @@

_build
docs/Makefile (new file, 23 lines)

@@ -0,0 +1,23 @@

# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
SPHINXPROJ    = sqlite-utils
SOURCEDIR     = .
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

livehtml:
	sphinx-autobuild -b html "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
docs/conf.py (new file, 173 lines)

@@ -0,0 +1,173 @@

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

from subprocess import PIPE, Popen

# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))


# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ["myst_parser"]

# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]

# The suffix(es) of source filenames.
# You can specify multiple suffixes as a list of strings:
#
# source_suffix = ['.rst', '.md']
source_suffix = ".rst"

# The master toctree document.
master_doc = "index"

# General information about the project.
project = "llm"
copyright = "2023, Simon Willison"
author = "Simon Willison"

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
pipe = Popen("git describe --tags --always", stdout=PIPE, shell=True)
git_version = pipe.stdout.read().decode("utf8")

if git_version:
    version = git_version.rsplit("-", 1)[0]
    release = git_version
else:
    version = ""
    release = ""

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = "en"

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# These patterns also affect html_static_path and html_extra_path.
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"

# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False


# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = "furo"

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.

html_theme_options = {}
html_title = "llm"

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]


# -- Options for HTMLHelp output ------------------------------------------

# Output file base name for HTML help builder.
htmlhelp_basename = "llm-doc"


# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',
    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',
    # Additional stuff for the LaTeX preamble.
    #
    # 'preamble': '',
    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    (
        master_doc,
        "llm.tex",
        "llm documentation",
        "Simon Willison",
        "manual",
    )
]


# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (
        master_doc,
        "llm",
        "llm documentation",
        [author],
        1,
    )
]


# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    (
        master_doc,
        "llm",
        "llm documentation",
        author,
        "llm",
        "Access large language models from the command-line",
        "Miscellaneous",
    )
]
docs/contributing.md (new file, 35 lines)

@@ -0,0 +1,35 @@

# Contributing

To contribute to this tool, first check out the code. Then create a new virtual environment:

    cd llm
    python -m venv venv
    source venv/bin/activate

Or if you are using `pipenv`:

    pipenv shell

Now install the dependencies and test dependencies:

    pip install -e '.[test]'

To run the tests:

    pytest

## Documentation

Documentation for this project uses [MyST](https://myst-parser.readthedocs.io/) - it is written in Markdown and rendered using Sphinx.

To build the documentation locally, run the following:

    cd docs
    pip install -r requirements.txt
    make livehtml

This will start a live preview server, using [sphinx-autobuild](https://pypi.org/project/sphinx-autobuild/).

The CLI `--help` examples in the documentation are managed using [Cog](https://github.com/nedbat/cog). Update those files like this:

    cog -r docs/*.md
docs/index.md (new file, 24 lines)

@@ -0,0 +1,24 @@

# llm

A command-line utility for interacting with Large Language Models, such as OpenAI's GPT series.

**Quick start**:

```
pip install llm
llm keys set openai
# Paste in your API key
llm "Ten fun names for a pet pelican"
```

**Contents**

```{toctree}
---
maxdepth: 3
---
setup
usage
logging
contributing
```
docs/logging.md (new file, 39 lines)

@@ -0,0 +1,39 @@

# Logging to SQLite

If a SQLite database file exists in `~/.llm/log.db` then the tool will log all prompts and responses to it.

You can create that file by running the `init-db` command:

    llm init-db

Now any prompts you run will be logged to that database.

To avoid logging a prompt, pass `--no-log` or `-n` to the command:

    llm 'Ten names for cheesecakes' -n

## Viewing the logs

You can view the logs using the `llm logs` command:

    llm logs

This will output the three most recent logged items as a JSON array of objects.

Add `-n 10` to see the ten most recent items:

    llm logs -n 10

Or `-n 0` to see everything that has ever been logged:

    llm logs -n 0

You can truncate the displayed prompts and responses using the `-t/--truncate` option:

    llm logs -n 5 -t

This is useful for finding a conversation that you would like to continue.

You can also use [Datasette](https://datasette.io/) to browse your logs like this:

    datasette ~/.llm/log.db
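The logging flow above can be sketched with Python's built-in `sqlite3` module. This is illustrative only, with a guessed, simplified schema - the actual tables created by `llm init-db` may differ:

```python
import json
import sqlite3

# Guessed, simplified schema - the real log.db layout may differ.
db = sqlite3.connect(":memory:")
db.execute(
    "create table log (id integer primary key, chat_id integer, "
    "prompt text, response text)"
)
db.execute(
    "insert into log (chat_id, prompt, response) values (?, ?, ?)",
    (1, "Ten names for cheesecakes", "1. The Velvet Slice ..."),
)

# `llm logs`-style output: the most recent items as a JSON array of objects
rows = db.execute(
    "select id, chat_id, prompt, response from log order by id desc limit 3"
)
items = [dict(zip(["id", "chat_id", "prompt", "response"], row)) for row in rows]
print(json.dumps(items, indent=2))
```

Browsing the same file with Datasette works because it is an ordinary SQLite database.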
docs/requirements.txt (new file, 4 lines)

@@ -0,0 +1,4 @@

furo==2022.06.21
sphinx-autobuild
myst-parser
cogapp
docs/setup.md (new file, 69 lines)

@@ -0,0 +1,69 @@

# Setup

## Installation

Install this tool using `pip`:

    pip install llm

Or using [pipx](https://pypa.github.io/pipx/):

    pipx install llm

## Authentication

Many LLM models require an API key. These API keys can be provided to this tool using several different mechanisms.

### Saving and using stored keys

Keys can be persisted in a file that is used by the tool. This file is called `keys.json` and is located at the path shown when you run the following command:

```
llm keys path
```

Rather than editing this file directly, you can instead add keys to it using the `llm keys set` command.

To set your OpenAI API key, run the following:

```
llm keys set openai
```

You will be prompted to enter the key like this:

```
% llm keys set openai
Enter key:
```

Enter the key and hit Enter - the key will be saved to your `keys.json` file and automatically used for future command runs:

```
llm "Five ludicrous names for a pet lobster"
```
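A rough sketch of what this mechanism stores, assuming `keys.json` is a flat JSON object mapping names to keys - the file's real layout may differ:

```python
import json
import pathlib
import tempfile

# Hypothetical keys.json layout: {"name": "key", ...}. Illustrative only -
# the actual file written by `llm keys set` may differ.
path = pathlib.Path(tempfile.mkdtemp()) / "keys.json"

def set_key(path, name, value):
    # Read existing keys (if any), add or replace one entry, write back.
    keys = json.loads(path.read_text()) if path.exists() else {}
    keys[name] = value
    path.write_text(json.dumps(keys, indent=2))

set_key(path, "openai", "sk-example")
set_key(path, "personal", "sk-personal-example")
print(json.loads(path.read_text())["personal"])  # prints sk-personal-example
```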
### Passing keys using the --key option

Keys can be passed directly using the `--key` option, like this:

```
llm "Five names for pet weasels" --key sk-my-key-goes-here
```

You can also pass the alias of a key stored in the `keys.json` file. For example, if you want to maintain a personal API key you could add that like this:

```
llm keys set personal
```

And then use it for prompts like so:

```
llm "Five friendly names for a pet skunk" --key personal
```

### Keys in environment variables

Keys can also be set using an environment variable. These are different for different models.

For OpenAI models the key will be read from the `OPENAI_API_KEY` environment variable.

The environment variable will be used only if no `--key` option is passed to the command.

If no environment variable is found, the tool will fall back to checking `keys.json`.

You can force the tool to use the key from `keys.json` even if an environment variable has also been set using `llm "prompt" --key openai`.
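The precedence described above - `--key` first, then the environment variable, then `keys.json` - can be sketched as a hypothetical helper (not the tool's actual code):

```python
import os

def resolve_key(explicit_key, stored_keys, env_var="OPENAI_API_KEY"):
    # A --key value wins; it may be a raw key or an alias stored in keys.json.
    if explicit_key:
        return stored_keys.get(explicit_key, explicit_key)
    # Next, the model's environment variable.
    env_key = os.environ.get(env_var)
    if env_key:
        return env_key
    # Finally, fall back to the stored keys.json entry.
    return stored_keys.get("openai")

keys = {"openai": "sk-stored-example", "personal": "sk-personal-example"}
print(resolve_key("personal", keys))  # alias lookup via keys.json
print(resolve_key(None, keys))        # env var if set, else keys.json
```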
docs/usage.md (new file, 67 lines)

@@ -0,0 +1,67 @@

# Usage

The default command for this is `llm openai` - you can use `llm` instead if you prefer.

## Executing a prompt

To run a prompt:

    llm 'Ten names for cheesecakes'

To stream the results a token at a time:

    llm 'Ten names for cheesecakes' -s

To switch from ChatGPT 3.5 (the default) to GPT-4 if you have access:

    llm 'Ten names for cheesecakes' -4

Pass `--model <model name>` to use a different model.

You can also send a prompt to standard input, for example:

    echo 'Ten names for cheesecakes' | llm

## Continuing a conversation

By default, the tool will start a new conversation each time you run it.

You can opt to continue the previous conversation by passing the `-c/--continue` option:

    llm 'More names' --continue

This will re-send the prompts and responses for the previous conversation. Note that this can add up quickly in terms of tokens, especially if you are using more expensive models.

To continue a conversation that is not the most recent one, use the `--chat <id>` option:

    llm 'More names' --chat 2

You can find these chat IDs using the `llm logs` command.

Note that this feature only works if you have been logging your previous conversations to a database, having run the `llm init-db` command described in the logging section.
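Conceptually, continuing a conversation means re-sending the earlier exchanges ahead of the new prompt - which is why token usage grows with each continuation. A hedged sketch (the names here are illustrative, not the tool's internals):

```python
# Previous conversation, as already logged: one prompt and one response.
history = [
    {"role": "user", "content": "Ten names for cheesecakes"},
    {"role": "assistant", "content": "1. New York Classic ..."},
]

def build_messages(history, new_prompt):
    # --continue re-sends the whole history plus the new prompt.
    return history + [{"role": "user", "content": new_prompt}]

messages = build_messages(history, "More names")
# All three messages are sent to the model, not just the new prompt:
print(len(messages))  # -> 3
```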
## Using with a shell

To generate a description of changes made to a Git repository since the last commit:

    llm "Describe these changes: $(git diff)"

This pattern of using `$(command)` inside a double quoted string is a useful way to quickly assemble prompts.
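The same prompt-assembly idea expressed in Python, using `subprocess` with `echo` as a stand-in command so the example is self-contained (substitute any command whose output you want in the prompt):

```python
import subprocess

# Equivalent of the shell pattern `llm "Describe these changes: $(git diff)"`:
# capture a command's output, then interpolate it into the prompt string.
command_output = subprocess.check_output(["echo", "example diff"], text=True).strip()
prompt = f"Describe these changes: {command_output}"
print(prompt)  # -> Describe these changes: example diff
```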
## System prompts

You can use `--system '...'` to set a system prompt.

    llm 'SQL to calculate total sales by month' -s \
      --system 'You are an exaggerated sentient cheesecake that knows SQL and talks about cheesecake a lot'

This is useful for piping content to standard input, for example:

    curl -s 'https://simonwillison.net/2023/May/15/per-interpreter-gils/' | \
      llm --system 'Suggest topics for this post as a JSON array' --stream

The `--code` option will set a system prompt for you that attempts to output just code without explanation, and will strip off any leading or trailing Markdown code block syntax. You can use this to generate code and write it straight to a file:

    llm 'Python CLI tool: reverse string passed to stdin' --code > fetch.py

Be _very careful_ executing code generated by an LLM - always read it first!
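The fence-stripping behaviour described for `--code` might look something like this hypothetical helper (not the tool's actual implementation):

```python
# Illustrative only: drop a leading and trailing ``` fence line, if present,
# so that redirected output contains just the code.
def strip_code_fences(text):
    lines = text.strip().splitlines()
    if lines and lines[0].startswith("```"):
        lines = lines[1:]
    if lines and lines[-1].startswith("```"):
        lines = lines[:-1]
    return "\n".join(lines)

raw = "```python\nprint('hello')\n```"
print(strip_code_fences(raw))  # -> print('hello')
```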