# CLI reference
This page lists the `--help` output for all of the `llm` commands.
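The output below is maintained with [cog](https://nedbatchelder.com/code/cog/): the Python snippet in the comment runs each command with `--help` and splices the result into this file. As a rough sketch, regenerating the page after a CLI change looks something like this (assuming the `cogapp` package is installed; the exact invocation used by this project may differ):

```
cog -r docs/help.md
```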
<!-- [[[cog
from click.testing import CliRunner
from llm.cli import cli
def all_help(cli):
    "Return all help for Click command and its subcommands"
    # First find all commands and subcommands
    # List will be [["command"], ["command", "subcommand"], ...]
    commands = []
    def find_commands(command, path=None):
        path = path or []
        commands.append(path + [command.name])
        if hasattr(command, 'commands'):
            for subcommand in command.commands.values():
                find_commands(subcommand, path + [command.name])
    find_commands(cli)
    # Remove first item of each list (it is 'cli')
    commands = [command[1:] for command in commands]
    # Now generate help for each one, with appropriate heading level
    output = []
    for command in commands:
        heading_level = len(command) + 2
        result = CliRunner().invoke(cli, command + ["--help"])
        hyphenated = "-".join(command)
        if hyphenated:
            hyphenated = "-" + hyphenated
        output.append(f"\n(help{hyphenated})=")
        output.append("#" * heading_level + " llm " + " ".join(command) + " --help")
        output.append("```")
        output.append(result.output.replace("Usage: cli", "Usage: llm").strip())
        output.append("```")
    return "\n".join(output)
cog.out(all_help(cli))
]]] -->
(help)=
## llm --help
```
Usage: llm [OPTIONS] COMMAND [ARGS]...
Access Large Language Models from the command-line
Documentation: https://llm.datasette.io/
LLM can run models from many different providers. Consult the plugin directory
for a list of available models:
https://llm.datasette.io/en/stable/plugins/directory.html
To get started with OpenAI, obtain an API key from them and:
$ llm keys set openai
Enter key: ...
Then execute a prompt like this:
llm 'Five outrageous names for a pet pelican'
Options:
--version Show the version and exit.
--help Show this message and exit.
Commands:
prompt* Execute a prompt
aliases Manage model aliases
chat Hold an ongoing chat with a model.
collections View and manage collections of embeddings
embed Embed text and store or return the result
embed-models Manage available embedding models
embed-multi Store embeddings for multiple strings at once
install Install packages from PyPI into the same environment as LLM
keys Manage stored API keys for different models
logs Tools for exploring logged prompts and responses
models Manage available models
openai Commands for working directly with the OpenAI API
plugins List installed plugins
schemas Manage stored schemas
similar Return top N similar IDs from a collection using cosine...
templates Manage stored prompt templates
uninstall Uninstall Python packages from the LLM environment
```
(help-prompt)=
### llm prompt --help
```
Usage: llm prompt [OPTIONS] [PROMPT]
Execute a prompt
Documentation: https://llm.datasette.io/en/stable/usage.html
Examples:
llm 'Capital of France?'
llm 'Capital of France?' -m gpt-4o
llm 'Capital of France?' -s 'answer in Spanish'
Multi-modal models can be called with attachments like this:
llm 'Extract text from this image' -a image.jpg
llm 'Describe' -a https://static.simonwillison.net/static/2024/pelicans.jpg
cat image | llm 'describe image' -a -
# With an explicit mimetype:
cat image | llm 'describe image' --at - image/jpeg
The -x/--extract option returns just the content of the first ``` fenced code
block, if one is present. If none are present it returns the full response.
llm 'JavaScript function for reversing a string' -x
Options:
-s, --system TEXT System prompt to use
-m, --model TEXT Model to use
-a, --attachment ATTACHMENT Attachment path or URL or -
--at, --attachment-type <TEXT TEXT>...
Attachment with explicit mimetype
-o, --option <TEXT TEXT>... key/value options for the model
--schema TEXT JSON schema, filepath or ID
-t, --template TEXT Template to use
-p, --param <TEXT TEXT>... Parameters for template
--no-stream Do not stream output
-n, --no-log Don't log to database
--log Log prompt and response to the database
-c, --continue Continue the most recent conversation.
--cid, --conversation TEXT Continue the conversation with the given ID.
--key TEXT API key to use
--save TEXT Save prompt with this template name
--async Run prompt asynchronously
-u, --usage Show token usage
-x, --extract Extract first fenced code block
--xl, --extract-last Extract last fenced code block
--help Show this message and exit.
```
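A rough sketch of how these options combine, using a system prompt, a model option and a follow-up that continues the same conversation (the model name and option value here are only illustrative):

```
llm 'Write a haiku about otters' -m gpt-4o-mini -s 'Answer tersely' -o temperature 0.2
llm 'Now translate it into French' -c
```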
(help-chat)=
### llm chat --help
```
Usage: llm chat [OPTIONS]
Hold an ongoing chat with a model.
Options:
-s, --system TEXT System prompt to use
-m, --model TEXT Model to use
-c, --continue Continue the most recent conversation.
--cid, --conversation TEXT Continue the conversation with the given ID.
-t, --template TEXT Template to use
-p, --param <TEXT TEXT>... Parameters for template
-o, --option <TEXT TEXT>... key/value options for the model
--no-stream Do not stream output
--key TEXT API key to use
--help Show this message and exit.
```
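For example, starting an interactive chat with a specific model and system prompt might look like this (the model name is illustrative):

```
llm chat -m gpt-4o-mini -s 'You are a concise pair programmer'
```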
(help-keys)=
### llm keys --help
```
Usage: llm keys [OPTIONS] COMMAND [ARGS]...
Manage stored API keys for different models
Options:
--help Show this message and exit.
Commands:
list* List names of all stored keys
get Return the value of a stored key
path Output the path to the keys.json file
set Save a key in the keys.json file
```
(help-keys-list)=
#### llm keys list --help
```
Usage: llm keys list [OPTIONS]
List names of all stored keys
Options:
--help Show this message and exit.
```
(help-keys-path)=
#### llm keys path --help
```
Usage: llm keys path [OPTIONS]
Output the path to the keys.json file
Options:
--help Show this message and exit.
```
(help-keys-get)=
#### llm keys get --help
```
Usage: llm keys get [OPTIONS] NAME
Return the value of a stored key
Example usage:
export OPENAI_API_KEY=$(llm keys get openai)
Options:
--help Show this message and exit.
```
(help-keys-set)=
#### llm keys set --help
```
Usage: llm keys set [OPTIONS] NAME
Save a key in the keys.json file
Example usage:
$ llm keys set openai
Enter key: ...
Options:
--value TEXT Value to set
--help Show this message and exit.
```
(help-logs)=
### llm logs --help
```
Usage: llm logs [OPTIONS] COMMAND [ARGS]...
Tools for exploring logged prompts and responses
Options:
--help Show this message and exit.
Commands:
list* Show recent logged prompts and their responses
off Turn off logging for all prompts
on Turn on logging for all prompts
path Output the path to the logs.db file
status Show current status of database logging
```
(help-logs-path)=
#### llm logs path --help
```
Usage: llm logs path [OPTIONS]
Output the path to the logs.db file
Options:
--help Show this message and exit.
```
(help-logs-status)=
#### llm logs status --help
```
Usage: llm logs status [OPTIONS]
Show current status of database logging
Options:
--help Show this message and exit.
```
(help-logs-on)=
#### llm logs on --help
```
Usage: llm logs on [OPTIONS]
Turn on logging for all prompts
Options:
--help Show this message and exit.
```
(help-logs-off)=
#### llm logs off --help
```
Usage: llm logs off [OPTIONS]
Turn off logging for all prompts
Options:
--help Show this message and exit.
```
(help-logs-list)=
#### llm logs list --help
```
Usage: llm logs list [OPTIONS]
Show recent logged prompts and their responses
Options:
-n, --count INTEGER Number of entries to show - defaults to 3, use 0
for all
-p, --path FILE Path to log database
-m, --model TEXT Filter by model or model alias
-q, --query TEXT Search for logs matching this string
--schema TEXT JSON schema, filepath or ID
--data Output newline-delimited JSON data for schema
--data-array Output JSON array of data for schema
--data-key TEXT Return JSON objects from array in this key
-t, --truncate Truncate long strings in output
-s, --short Shorter YAML output with truncated prompts
-u, --usage Include token usage
-r, --response Just output the last response
-x, --extract Extract first fenced code block
--xl, --extract-last Extract last fenced code block
-c, --current Show logs from the current conversation
--cid, --conversation TEXT Show logs for this conversation ID
--json Output logs as JSON
--help Show this message and exit.
```
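A couple of illustrative invocations: the first shows the ten most recent responses for one model along with token usage, the second prints just the last response from the current conversation (the model name is illustrative):

```
llm logs list -n 10 -m gpt-4o-mini -u
llm logs list -c -r
```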
(help-models)=
### llm models --help
```
Usage: llm models [OPTIONS] COMMAND [ARGS]...
Manage available models
Options:
--help Show this message and exit.
Commands:
list* List available models
default Show or set the default model
```
(help-models-list)=
#### llm models list --help
```
Usage: llm models list [OPTIONS]
List available models
Options:
--options Show options for each model, if available
--async List async models
-q, --query TEXT Search for models matching these strings
--help Show this message and exit.
```
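For example, to list models matching a query string together with their supported options (the query is illustrative):

```
llm models list -q gpt-4o --options
```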
(help-models-default)=
#### llm models default --help
```
Usage: llm models default [OPTIONS] [MODEL]
Show or set the default model
Options:
--help Show this message and exit.
```
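Run with no argument it prints the current default; passing a model ID sets it (the model ID here is illustrative):

```
llm models default
llm models default gpt-4o-mini
```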
(help-templates)=
### llm templates --help
```
Usage: llm templates [OPTIONS] COMMAND [ARGS]...
Manage stored prompt templates
Options:
--help Show this message and exit.
Commands:
list* List available prompt templates
edit Edit the specified prompt template using the default $EDITOR
path Output the path to the templates directory
show Show the specified prompt template
```
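Templates are usually created by saving a prompt with `--save` and then reused with `-t`; a sketch of that round trip, with illustrative names:

```
llm --system 'Summarize the following text' --save summarize
llm templates list
cat notes.txt | llm -t summarize
```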
(help-templates-list)=
#### llm templates list --help
```
Usage: llm templates list [OPTIONS]
List available prompt templates
Options:
--help Show this message and exit.
```
(help-templates-show)=
#### llm templates show --help
```
Usage: llm templates show [OPTIONS] NAME
Show the specified prompt template
Options:
--help Show this message and exit.
```
(help-templates-edit)=
#### llm templates edit --help
```
Usage: llm templates edit [OPTIONS] NAME
Edit the specified prompt template using the default $EDITOR
Options:
--help Show this message and exit.
```
(help-templates-path)=
#### llm templates path --help
```
Usage: llm templates path [OPTIONS]
Output the path to the templates directory
Options:
--help Show this message and exit.
```
(help-schemas)=
### llm schemas --help
```
Usage: llm schemas [OPTIONS] COMMAND [ARGS]...
Manage stored schemas
Options:
--help Show this message and exit.
Commands:
list* List stored schemas
```
(help-schemas-list)=
#### llm schemas list --help
```
Usage: llm schemas list [OPTIONS]
List stored schemas
Options:
-p, --path FILE Path to log database
-q, --query TEXT Search for schemas matching this string
--help Show this message and exit.
```
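Schemas get stored the first time a prompt is run with `--schema`; this sketch assumes llm's concise schema syntax (the prompt and field names are illustrative):

```
llm 'Invent a dog' --schema 'name, age int, one_sentence_bio'
llm schemas list
```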
(help-aliases)=
### llm aliases --help
```
Usage: llm aliases [OPTIONS] COMMAND [ARGS]...
Manage model aliases
Options:
--help Show this message and exit.
Commands:
list* List current aliases
path Output the path to the aliases.json file
remove Remove an alias
set Set an alias for a model
```
(help-aliases-list)=
#### llm aliases list --help
```
Usage: llm aliases list [OPTIONS]
List current aliases
Options:
--json Output as JSON
--help Show this message and exit.
```
(help-aliases-set)=
#### llm aliases set --help
```
Usage: llm aliases set [OPTIONS] ALIAS [MODEL_ID]
Set an alias for a model
Example usage:
llm aliases set mini gpt-4o-mini
Alternatively you can omit the model ID and specify one or more -q options.
The first model matching all of those query strings will be used.
llm aliases set mini -q 4o -q mini
Options:
-q, --query TEXT Set alias for model matching these strings
--help Show this message and exit.
```
(help-aliases-remove)=
#### llm aliases remove --help
```
Usage: llm aliases remove [OPTIONS] ALIAS
Remove an alias
Example usage:
$ llm aliases remove turbo
Options:
--help Show this message and exit.
```
(help-aliases-path)=
#### llm aliases path --help
```
Usage: llm aliases path [OPTIONS]
Output the path to the aliases.json file
Options:
--help Show this message and exit.
```
(help-plugins)=
### llm plugins --help
```
Usage: llm plugins [OPTIONS]
List installed plugins
Options:
--all Include built-in default plugins
--help Show this message and exit.
```
(help-install)=
### llm install --help
```
Usage: llm install [OPTIONS] [PACKAGES]...
Install packages from PyPI into the same environment as LLM
Options:
-U, --upgrade Upgrade packages to latest version
-e, --editable TEXT Install a project in editable mode from this path
--force-reinstall Reinstall all packages even if they are already up-to-
date
--no-cache-dir Disable the cache
--help Show this message and exit.
```
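For example, installing a plugin from PyPI, or a local checkout in editable mode (the plugin name and path are illustrative):

```
llm install llm-gpt4all
llm install -e ../my-llm-plugin
```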
(help-uninstall)=
### llm uninstall --help
```
Usage: llm uninstall [OPTIONS] PACKAGES...
Uninstall Python packages from the LLM environment
Options:
-y, --yes Don't ask for confirmation
--help Show this message and exit.
```
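To remove an installed plugin without being asked for confirmation (the plugin name is illustrative):

```
llm uninstall llm-gpt4all -y
```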
(help-embed)=
### llm embed --help
```
Usage: llm embed [OPTIONS] [COLLECTION] [ID]
Embed text and store or return the result
Options:
-i, --input PATH File to embed
-m, --model TEXT Embedding model to use
--store Store the text itself in the database
-d, --database FILE
-c, --content TEXT Content to embed
--binary Treat input as binary data
--metadata TEXT JSON object metadata to store
-f, --format [json|blob|base64|hex]
Output format
--help Show this message and exit.
```
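For example, embedding a single string into a collection and storing the original text alongside the vector (the collection name, ID and content are illustrative):

```
llm embed quotations karlton-1 -c 'There are only two hard things in Computer Science' --store
```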
(help-embed-multi)=
### llm embed-multi --help
```
Usage: llm embed-multi [OPTIONS] COLLECTION [INPUT_PATH]
Store embeddings for multiple strings at once
Input can be CSV, TSV or a JSON list of objects.
The first column is treated as an ID - all other columns are assumed to be
text that should be concatenated together in order to calculate the
embeddings.
Input data can come from one of three sources:
1. A CSV, JSON, TSV or JSON-nl file (including on standard input)
2. A SQL query against a SQLite database
3. A directory of files
Options:
--format [json|csv|tsv|nl] Format of input file - defaults to auto-detect
--files <DIRECTORY TEXT>... Embed files in this directory - specify directory
and glob pattern
--encoding TEXT Encoding to use when reading --files
--binary Treat --files as binary data
--sql TEXT Read input using this SQL query
--attach <TEXT FILE>... Additional databases to attach - specify alias
and file path
--batch-size INTEGER Batch size to use when running embeddings
--prefix TEXT Prefix to add to the IDs
-m, --model TEXT Embedding model to use
--prepend TEXT Prepend this string to all content before
embedding
--store Store the text itself in the database
-d, --database FILE
--help Show this message and exit.
```
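Two sketches of the file-based input sources: a CSV file where the first column is the ID, and a directory of files matched by a glob pattern (paths, collection names and patterns are illustrative):

```
llm embed-multi documentation docs.csv --store
llm embed-multi readmes --files ./projects '**/README.md' --store
```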
(help-similar)=
### llm similar --help
```
Usage: llm similar [OPTIONS] COLLECTION [ID]
Return top N similar IDs from a collection using cosine similarity.
Example usage:
llm similar my-collection -c "I like cats"
Or to find content similar to a specific stored ID:
llm similar my-collection 1234
Options:
-i, --input PATH File to embed for comparison
-c, --content TEXT Content to embed for comparison
--binary Treat input as binary data
-n, --number INTEGER Number of results to return
-d, --database FILE
--help Show this message and exit.
```
(help-embed-models)=
### llm embed-models --help
```
Usage: llm embed-models [OPTIONS] COMMAND [ARGS]...
Manage available embedding models
Options:
--help Show this message and exit.
Commands:
list* List available embedding models
default Show or set the default embedding model
```
(help-embed-models-list)=
#### llm embed-models list --help
```
Usage: llm embed-models list [OPTIONS]
List available embedding models
Options:
-q, --query TEXT Search for embedding models matching these strings
--help Show this message and exit.
```
(help-embed-models-default)=
#### llm embed-models default --help
```
Usage: llm embed-models default [OPTIONS] [MODEL]
Show or set the default embedding model
Options:
--remove-default Reset to specifying no default model
--help Show this message and exit.
```
(help-collections)=
### llm collections --help
```
Usage: llm collections [OPTIONS] COMMAND [ARGS]...
View and manage collections of embeddings
Options:
--help Show this message and exit.
Commands:
list* View a list of collections
delete Delete the specified collection
path Output the path to the embeddings database
```
(help-collections-path)=
#### llm collections path --help
```
Usage: llm collections path [OPTIONS]
Output the path to the embeddings database
Options:
--help Show this message and exit.
```
(help-collections-list)=
#### llm collections list --help
```
Usage: llm collections list [OPTIONS]
View a list of collections
Options:
-d, --database FILE Path to embeddings database
--json Output as JSON
--help Show this message and exit.
```
(help-collections-delete)=
#### llm collections delete --help
```
Usage: llm collections delete [OPTIONS] COLLECTION
Delete the specified collection
Example usage:
llm collections delete my-collection
Options:
-d, --database FILE Path to embeddings database
--help Show this message and exit.
```
(help-openai)=
### llm openai --help
```
Usage: llm openai [OPTIONS] COMMAND [ARGS]...
Commands for working directly with the OpenAI API
Options:
--help Show this message and exit.
Commands:
models List models available to you from the OpenAI API
```
(help-openai-models)=
#### llm openai models --help
```
Usage: llm openai models [OPTIONS]
List models available to you from the OpenAI API
Options:
--json Output as JSON
--key TEXT OpenAI API key
--help Show this message and exit.
```
<!-- [[[end]]] -->