mirror of https://github.com/Hopiu/llm.git
synced 2026-03-17 05:00:25 +00:00

--schema t:template-name option, plus improved schema docs
Closes #799, refs #788

This commit is contained in:
parent 362bdc6dcc
commit 1bebf8b34a

5 changed files with 64 additions and 13 deletions
@@ -6,6 +6,8 @@ Large Language Models are very good at producing structured output as JSON or ot

This feature is supported by models from OpenAI, Anthropic, Google Gemini and can be implemented for others {ref}`via plugins <advanced-model-plugins-schemas>`.

This page describes schemas used via the `llm` command-line tool. Schemas can also be used from the {ref}`Python API <python-api-schemas>`.

(schemas-json-schemas)=

## Understanding JSON schemas
@@ -20,16 +22,21 @@ A [JSON schema](https://json-schema.org/) is a specification that describes the

Different models may support different subsets of the overall JSON schema language. You should experiment to figure out what works for the model you are using.

In most cases it's simpler to use the {ref}`condensed LLM schema syntax <schemas-dsl>` instead.

(schemas-using-with-llm)=

-## Using schemas with LLM
+## How to specify a schema

-LLM provides several ways to use schemas:
+LLM accepts schema definitions for both running prompts and exploring logged responses, using the `--schema` option.

-1. Directly via the command line with the `--schema` option
-2. Through stored schemas in the database
-3. Via templates that include schemas
-4. Through the {ref}`Python API <python-api-schemas>`
+This option can take multiple forms:

+- A string providing a JSON schema: `--schema '{"type": "object", ...}'`
+- A {ref}`condensed schema definition <schemas-dsl>`: `--schema 'name,age int'`
+- The name or path of a file on disk containing a JSON schema: `--schema dogs.schema.json`
+- The hexadecimal ID of a previously logged schema: `--schema 520f7aabb121afd14d0c6c237b39ba2d` - these IDs can be found using the `llm schemas` command.
+- A schema that has been {ref}`saved in a template <prompt-templates-save>`: `--schema t:name-of-template`
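The forms above are distinguished by inspecting the argument itself. A minimal Python sketch of that dispatch, modelled on the `resolve_schema_input` change in this commit (the `load_template_schema` callable and the fallback dict are illustrative stand-ins, not llm's real API):

```python
import json

def resolve_schema_arg(value, load_template_schema):
    # Sketch only: mirrors the order of checks, not llm's actual code.
    value = value.strip()
    if value.startswith("t:"):
        # "t:name-of-template" -> schema previously saved in a template
        return load_template_schema(value[2:])
    if value.startswith("{"):
        # An inline JSON schema string
        return json.loads(value)
    # Otherwise: condensed syntax, a file path, or a logged schema ID;
    # returned unresolved here for illustration
    return {"unresolved": value}

# Stand-in template store for the t: form
saved = {"dogs": {"type": "object", "properties": {"name": {"type": "string"}}}}
print(resolve_schema_arg("t:dogs", saved.get))
```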

(schemas-using-cli)=

@@ -78,7 +85,7 @@ llm models --schemas

(schemas-dsl)=

-## Alternative schema syntax
+## Concise LLM schema syntax

JSON schemas can be time-consuming to construct by hand. LLM also supports a concise alternative syntax for specifying a schema.
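For illustration, here is a toy parser for the simplest shape of that syntax, as seen in `name,age int` above. The type mapping is an assumption for the sketch; llm's real parser supports more than this:

```python
def parse_condensed(spec):
    # Toy parser for "name, age int" style field lists (sketch only).
    # Assumed mapping from short type names to JSON schema types.
    type_map = {"int": "integer", "float": "number", "bool": "boolean", "str": "string"}
    properties = {}
    for field in spec.replace("\n", ",").split(","):
        parts = field.strip().split()
        if not parts:
            continue
        name = parts[0]
        # A bare name defaults to string; a second token names the type
        json_type = type_map.get(parts[1], "string") if len(parts) > 1 else "string"
        properties[name] = {"type": json_type}
    return {"type": "object", "properties": properties, "required": list(properties)}

print(parse_condensed("name, age int"))
```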
@@ -3,7 +3,9 @@

Prompt templates can be created to reuse useful prompts with different input data.

-## Getting started
+(prompt-templates-save)=
+
+## Getting started with --save

The easiest way to create a template is using the `--save template_name` option.
@@ -38,6 +40,9 @@ If you add `--extract` the setting to {ref}`extract the first fenced code block

llm --system 'write a Python function' --extract --save python-function
llm -t python-function 'reverse a string'
```

(prompt-templates-using)=

## Using a template

You can execute a named template using the `-t/--template` option:
@@ -51,6 +56,9 @@ This can be combined with the `-m` option to specify a different model:

curl -s https://llm.datasette.io/en/latest/ | \
  llm -t summarize -m gpt-3.5-turbo-16k
```

(prompt-templates-list)=

## Listing available templates

This command lists all available templates:
@@ -63,6 +71,8 @@ cmd : system: reply with macos terminal commands only, no extra informati

glados : system: You are GlaDOS prompt: Summarize this: $input
```

(prompt-templates-yaml)=

## Templates as YAML files

Templates are stored as YAML files on disk.
@@ -115,7 +125,9 @@ curl -s 'https://til.simonwillison.net/macos/imovie-slides-and-audio' | \

Output:

> In a fantastical steampunk world, Simon Willison decided to merge an old MP3 recording with slides from the talk using iMovie. After exporting the slides as images and importing them into iMovie, he had to disable the default Ken Burns effect using the "Crop" tool. Then, Simon manually synchronized the audio by adjusting the duration of each image. Finally, he published the masterpiece to YouTube, with the whimsical magic of steampunk-infused illustrations leaving his viewers in awe.

-### System templates
+(prompt-templates-system)=
+
+### System prompts

When working with models that support system prompts (such as `gpt-3.5-turbo` and `gpt-4`) you can set a system prompt using a `system:` key like so:
@@ -130,6 +142,9 @@ You can combine system and regular prompts like so:

system: You speak like an excitable Victorian adventurer
prompt: 'Summarize this: $input'
```

(prompt-templates-schemas)=

### Schemas

Use the `schema_object:` key to embed a JSON schema (as YAML) in your template. The easiest way to create these is with the `llm --schema ... --save name-of-template` command - the result should look something like this:
@@ -150,6 +165,7 @@ schema_object:

  type: object
```

(prompt-templates-variables)=

### Additional template variables
@@ -192,6 +208,7 @@ I got this:

> My previous test subject seemed to have learned something new about iMovie. They exported keynote slides as individual images [...] Quite impressive for a human.

(prompt-default-parameters)=

### Specifying default parameters

You can also specify default values for parameters, using a `defaults:` key.
@@ -220,6 +237,8 @@ I got this:

> Text, summarize in Yoda's voice, I will: "Hmm, young padawan. Summary of this text, you seek. Hmmm. ...

(prompt-templates-extract)=

### Configuring code extraction

To configure the {ref}`extract first fenced code block <usage-extract-fenced-code>` setting for the template, add this:
@@ -228,6 +247,8 @@ To configure the {ref}`extract first fenced code block <usage-extract-fenced-cod

extract: true
```

(prompt-templates-default-model)=

### Setting a default model for a template

Templates executed using `llm -t template-name` will execute using the default model that the user has configured for the tool - or `gpt-3.5-turbo` if they have not configured their own default.
@@ -305,7 +305,7 @@ def prompt(

    if schema_multi:
        schema_input = schema_multi

-    schema = resolve_schema_input(db, schema_input)
+    schema = resolve_schema_input(db, schema_input, load_template)

    if schema_multi:
        # Convert that schema into multiple "items" of the same schema
@@ -1002,7 +1002,7 @@ def logs_list(

    if schema_multi:
        schema_input = schema_multi
-    schema = resolve_schema_input(db, schema_input)
+    schema = resolve_schema_input(db, schema_input, load_template)
    if schema_multi:
        schema = multi_schema(schema)
10 llm/utils.py
@@ -234,10 +234,16 @@ def output_rows_as_json(rows, nl=False):

    return "\n".join(lines)


-def resolve_schema_input(db, schema_input):
-    # schema_input might be JSON or a filepath or an ID
+def resolve_schema_input(db, schema_input, load_template):
+    # schema_input might be JSON or a filepath or an ID or t:name
    if not schema_input:
        return
+    if schema_input.strip().startswith("t:"):
+        name = schema_input.strip()[2:]
+        template = load_template(name)
+        if not template.schema_object:
+            raise click.ClickException("Template '{}' has no schema".format(name))
+        return template.schema_object
    if schema_input.strip().startswith("{"):
        try:
            return json.loads(schema_input)
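The new `t:` branch can be exercised in isolation. A self-contained re-creation with a stand-in exception class instead of `click.ClickException`, and a minimal hypothetical `Template` stub (real llm templates carry more fields):

```python
class SchemaError(Exception):
    # Stand-in for click.ClickException in the real code
    pass

class Template:
    # Minimal stub for illustration only
    def __init__(self, schema_object=None):
        self.schema_object = schema_object

def schema_from_template(name, load_template):
    # Mirrors the t: branch above: load the template, require a schema
    template = load_template(name)
    if not template.schema_object:
        raise SchemaError("Template '{}' has no schema".format(name))
    return template.schema_object

templates = {
    "dogs": Template({"type": "object"}),
    "plain": Template(),
}
print(schema_from_template("dogs", templates.__getitem__))
```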
@@ -127,6 +127,23 @@ def test_templates_prompt_save(templates_path, args, expected_prompt, expected_e

    assert expected_error in result.output


+def test_templates_error_on_missing_schema(templates_path):
+    runner = CliRunner()
+    runner.invoke(
+        cli, ["the-prompt", "--save", "prompt_no_schema"], catch_exceptions=False
+    )
+    # This should complain about no schema
+    result = runner.invoke(
+        cli, ["hi", "--schema", "t:prompt_no_schema"], catch_exceptions=False
+    )
+    assert result.output == "Error: Template 'prompt_no_schema' has no schema\n"
+    # And this is just an invalid template
+    result2 = runner.invoke(
+        cli, ["hi", "--schema", "t:bad_template"], catch_exceptions=False
+    )
+    assert result2.output == "Error: Invalid template: bad_template\n"


@mock.patch.dict(os.environ, {"OPENAI_API_KEY": "X"})
@pytest.mark.parametrize(
    "template,extra_args,expected_model,expected_input,expected_error",