llm models options commands for setting default model options

Closes #829
This commit is contained in:
Simon Willison 2025-03-22 18:28:45 -07:00 committed by GitHub
parent 1ad7bbd32a
commit 468b0551ee
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
4 changed files with 454 additions and 18 deletions


@@ -336,6 +336,7 @@ Options:
Commands:
list* List available models
default Show or set the default model
options Manage default options for models
```
(help-models-list)=
@@ -365,6 +366,85 @@ Options:
--help Show this message and exit.
```
(help-models-options)=
#### llm models options --help
```
Usage: llm models options [OPTIONS] COMMAND [ARGS]...
Manage default options for models
Options:
--help Show this message and exit.
Commands:
list* List default options for all models
clear Clear default option(s) for a model
set Set a default option for a model
show List default options set for a specific model
```
(help-models-options-list)=
##### llm models options list --help
```
Usage: llm models options list [OPTIONS]
List default options for all models
Example usage:
llm models options list
Options:
--help Show this message and exit.
```
(help-models-options-show)=
##### llm models options show --help
```
Usage: llm models options show [OPTIONS] MODEL
List default options set for a specific model
Example usage:
llm models options show gpt-4o
Options:
--help Show this message and exit.
```
(help-models-options-set)=
##### llm models options set --help
```
Usage: llm models options set [OPTIONS] MODEL KEY VALUE
Set a default option for a model
Example usage:
llm models options set gpt-4o temperature 0.5
Options:
--help Show this message and exit.
```
(help-models-options-clear)=
##### llm models options clear --help
```
Usage: llm models options clear [OPTIONS] MODEL [KEY]
Clear default option(s) for a model
Example usage:
llm models options clear gpt-4o
# Or for a single option
llm models options clear gpt-4o temperature
Options:
--help Show this message and exit.
```
(help-templates)=
### llm templates --help
```


@@ -45,30 +45,17 @@ Will run a prompt of:
```
For models that support them, {ref}`system prompts <usage-system-prompts>` are a better tool for this kind of prompting.
### Model options
Some models support options. You can pass these using `-o/--option name value` - for example, to set the temperature to 1.5 run this:
```bash
llm 'Ten names for cheesecakes' -o temperature 1.5
```
Use the `llm models --options` command to see which options are supported by each model.
You can also {ref}`configure default options <usage-executing-default-options>` for a model using the `llm models options` commands.
(usage-attachments)=
### Attachments
@@ -127,6 +114,26 @@ cat llm/utils.py | llm -t pytest
```
See {ref}`prompt templates <prompt-templates>` for more.
(usage-extract-fenced-code)=
### Extracting fenced code blocks
If you are using an LLM to generate code, it can be useful to retrieve just the code it produces without any of the surrounding explanatory text.
The `-x/--extract` option will scan the response for the first instance of a Markdown fenced code block - something that looks like this:
````
```python
def my_function():
    # ...
```
````
It will extract and return just the content of that block, excluding the fenced code delimiters. If there are no fenced code blocks it will return the full response.
Use `--xl/--extract-last` to return the last fenced code block instead of the first.
The entire response including explanatory text is still logged to the database, and can be viewed using `llm logs -c`.
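The extraction behaviour described above can be sketched roughly like this. This is a simplified illustration, not llm's actual implementation, and the helper name is hypothetical:

```python
import re

FENCE = "`" * 3  # the Markdown fence marker, "```"
FENCE_RE = re.compile(FENCE + r"[^\n]*\n(.*?)" + FENCE, re.DOTALL)


def extract_fenced_code(text: str, last: bool = False) -> str:
    """Return the first (or last) fenced block's contents, else the full text."""
    matches = FENCE_RE.findall(text)
    if not matches:
        return text
    return matches[-1] if last else matches[0]
```

Passing `last=True` corresponds to the `--xl/--extract-last` option.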
(usage-schemas)=
### Schemas
@@ -756,3 +763,33 @@ When running a prompt you can pass the full model name or any of the aliases to
llm -m 4o \
'As many names for cheesecakes as you can think of, with detailed descriptions'
```
(usage-executing-default-options)=
## Setting default options for models
To configure a default option for a specific model, use the `llm models options set` command:
```bash
llm models options set gpt-4o temperature 0.5
```
This option will then be applied automatically any time you run a prompt through the `gpt-4o` model.
Default options are stored in the `model_options.json` file in the LLM configuration directory.
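For illustration, after running the `set` command shown above the file would contain something like this (note that values entered on the command line are stored as strings):

```json
{
  "gpt-4o": {
    "temperature": "0.5"
  }
}
```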
You can list all default options across all models using the `llm models options list` command:
```bash
llm models options list
```
Or show them for an individual model with `llm models options show <model_id>`:
```bash
llm models options show gpt-4o
```
To clear a default option, use the `llm models options clear` command:
```bash
llm models options clear gpt-4o temperature
```
Or clear all default options for a model like this:
```bash
llm models options clear gpt-4o
```
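Options passed explicitly with `-o` at prompt time take precedence over stored defaults: the defaults only fill in keys you did not set yourself. The merge behaves roughly like this sketch (`merge_options` is a hypothetical helper, not part of llm's API):

```python
def merge_options(explicit: dict, defaults: dict) -> dict:
    """Combine per-prompt options with stored defaults; explicit values win."""
    merged = dict(explicit)
    for key, value in defaults.items():
        # Only apply a default when the user did not set the option explicitly
        if key not in merged:
            merged[key] = value
    return merged
```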


@@ -63,7 +63,7 @@ import sqlite_utils
from sqlite_utils.utils import rows_from_file, Format
import sys
import textwrap
from typing import cast, Optional, Iterable, Union, Tuple
from typing import cast, Optional, Iterable, Union, Tuple, Any
import warnings
import yaml
@@ -494,6 +494,12 @@ def prompt(
    except pydantic.ValidationError as ex:
        raise click.ClickException(render_errors(ex.errors()))

    # Add on any default model options
    default_options = get_model_options(model_id)
    for key_, value in default_options.items():
        if key_ not in validated_options:
            validated_options[key_] = value

    kwargs = {**validated_options}

    resolved_attachments = [*attachments, *attachment_types]
@@ -2371,6 +2377,143 @@ def collections_delete(collection, database):
collection_obj.delete()
@models.group(
    cls=DefaultGroup,
    default="list",
    default_if_no_args=True,
)
def options():
    "Manage default options for models"


@options.command(name="list")
def options_list():
    """
    List default options for all models

    Example usage:

    \b
        llm models options list
    """
    options = get_all_model_options()
    if not options:
        click.echo("No default options set for any models.", err=True)
        return
    for model_id, model_options in options.items():
        click.echo(f"{model_id}:")
        for key, value in model_options.items():
            click.echo(f" {key}: {value}")
@options.command(name="show")
@click.argument("model")
def options_show(model):
    """
    List default options set for a specific model

    Example usage:

    \b
        llm models options show gpt-4o
    """
    import llm

    try:
        # Resolve alias to model ID
        model_obj = llm.get_model(model)
        model_id = model_obj.model_id
    except llm.UnknownModelError:
        # Use as-is if not found
        model_id = model
    options = get_model_options(model_id)
    if not options:
        click.echo(f"No default options set for model '{model_id}'.", err=True)
        return
    for key, value in options.items():
        click.echo(f"{key}: {value}")
@options.command(name="set")
@click.argument("model")
@click.argument("key")
@click.argument("value")
def options_set(model, key, value):
    """
    Set a default option for a model

    Example usage:

    \b
        llm models options set gpt-4o temperature 0.5
    """
    import llm

    try:
        # Resolve alias to model ID
        model_obj = llm.get_model(model)
        model_id = model_obj.model_id
        # Validate option against model schema
        try:
            # Create a test Options object to validate
            test_options = {key: value}
            model_obj.Options(**test_options)
        except pydantic.ValidationError as ex:
            raise click.ClickException(render_errors(ex.errors()))
    except llm.UnknownModelError:
        # Use as-is if not found
        model_id = model
    set_model_option(model_id, key, value)
    click.echo(f"Set default option {key}={value} for model {model_id}", err=True)
@options.command(name="clear")
@click.argument("model")
@click.argument("key", required=False)
def options_clear(model, key):
    """
    Clear default option(s) for a model

    Example usage:

    \b
        llm models options clear gpt-4o
        # Or for a single option
        llm models options clear gpt-4o temperature
    """
    import llm

    try:
        # Resolve alias to model ID
        model_obj = llm.get_model(model)
        model_id = model_obj.model_id
    except llm.UnknownModelError:
        # Use as-is if not found
        model_id = model
    cleared_keys = []
    if not key:
        cleared_keys = list(get_model_options(model_id).keys())
        for key_ in cleared_keys:
            clear_model_option(model_id, key_)
    else:
        cleared_keys.append(key)
        clear_model_option(model_id, key)
    if cleared_keys:
        if len(cleared_keys) == 1:
            click.echo(f"Cleared option '{cleared_keys[0]}' for model {model_id}")
        else:
            click.echo(
                f"Cleared {', '.join(cleared_keys)} options for model {model_id}"
            )
def template_dir():
path = user_dir() / "templates"
path.mkdir(parents=True, exist_ok=True)
@@ -2461,3 +2604,98 @@ def _human_readable_size(size_bytes):
def logs_on():
return not (user_dir() / "logs-off").exists()
def get_all_model_options() -> dict:
    """
    Get all default options for all models
    """
    path = user_dir() / "model_options.json"
    if not path.exists():
        return {}
    try:
        options = json.loads(path.read_text())
    except json.JSONDecodeError:
        return {}
    return options


def get_model_options(model_id: str) -> dict:
    """
    Get default options for a specific model

    Args:
        model_id: Return options for model with this ID

    Returns:
        A dictionary of model options
    """
    return get_all_model_options().get(model_id, {})
def set_model_option(model_id: str, key: str, value: Any) -> None:
    """
    Set a default option for a model.

    Args:
        model_id: The model ID
        key: The option key
        value: The option value
    """
    path = user_dir() / "model_options.json"
    if path.exists():
        try:
            options = json.loads(path.read_text())
        except json.JSONDecodeError:
            options = {}
    else:
        options = {}
    # Ensure the model has an entry
    if model_id not in options:
        options[model_id] = {}
    # Set the option
    options[model_id][key] = value
    # Save the options
    path.write_text(json.dumps(options, indent=2))
def clear_model_option(model_id: str, key: str) -> None:
    """
    Clear a model option

    Args:
        model_id: The model ID
        key: Key to clear
    """
    path = user_dir() / "model_options.json"
    if not path.exists():
        return
    try:
        options = json.loads(path.read_text())
    except json.JSONDecodeError:
        return
    if model_id not in options:
        return
    if key in options[model_id]:
        del options[model_id][key]
        if not options[model_id]:
            del options[model_id]
        path.write_text(json.dumps(options, indent=2))

tests/test_cli_options.py Normal file

@@ -0,0 +1,81 @@
from click.testing import CliRunner
from llm.cli import cli
import pytest
import json


@pytest.mark.parametrize(
    "args,expected_options,expected_error",
    (
        (
            ["gpt-4o-mini", "temperature", "0.5"],
            {"gpt-4o-mini": {"temperature": "0.5"}},
            None,
        ),
        (
            ["gpt-4o-mini", "temperature", "invalid"],
            {},
            "Error: temperature\n Input should be a valid number",
        ),
        (
            ["gpt-4o-mini", "not-an-option", "invalid"],
            {},
            "Extra inputs are not permitted",
        ),
    ),
)
def test_set_model_default_options(user_path, args, expected_options, expected_error):
    path = user_path / "model_options.json"
    assert not path.exists()
    runner = CliRunner()
    result = runner.invoke(cli, ["models", "options", "set"] + args)
    if not expected_error:
        assert result.exit_code == 0
        assert path.exists()
        data = json.loads(path.read_text("utf-8"))
        assert data == expected_options
    else:
        assert result.exit_code == 1
        assert expected_error in result.output


def test_model_options_list_and_show(user_path):
    (user_path / "model_options.json").write_text(
        json.dumps(
            {"gpt-4o-mini": {"temperature": 0.5}, "gpt-4o": {"temperature": 0.7}}
        ),
        "utf-8",
    )
    runner = CliRunner()
    result = runner.invoke(cli, ["models", "options", "list"])
    assert result.exit_code == 0
    assert (
        result.output
        == "gpt-4o-mini:\n temperature: 0.5\ngpt-4o:\n temperature: 0.7\n"
    )
    result = runner.invoke(cli, ["models", "options", "show", "gpt-4o-mini"])
    assert result.exit_code == 0
    assert result.output == "temperature: 0.5\n"


def test_model_options_clear(user_path):
    path = user_path / "model_options.json"
    path.write_text(
        json.dumps(
            {
                "gpt-4o-mini": {"temperature": 0.5},
                "gpt-4o": {"temperature": 0.7, "top_p": 0.9},
            }
        ),
        "utf-8",
    )
    assert path.exists()
    runner = CliRunner()
    # Clear all for gpt-4o-mini
    result = runner.invoke(cli, ["models", "options", "clear", "gpt-4o-mini"])
    assert result.exit_code == 0
    # Clear just top_p for gpt-4o
    result2 = runner.invoke(cli, ["models", "options", "clear", "gpt-4o", "top_p"])
    assert result2.exit_code == 0
    data = json.loads(path.read_text("utf-8"))
    assert data == {"gpt-4o": {"temperature": 0.7}}