Commit graph

373 commits

Author SHA1 Message Date
Simon Willison
99a1adcece Initial llm schemas list implementation, refs #781 2025-02-27 07:35:48 -08:00
Simon Willison
a0845874ec
Schema template --save --schema support
* Don't hang on stdin if llm -t template-with-schema
* Docs on using schemas with templates
* Schema in template YAML file example
* Test for --save with --schema

Refs #778
2025-02-27 07:19:15 -08:00
Simon Willison
f35ac31c21
llm logs --schema, --data, --data-array and --data-key options (#785)
* llm logs --schema option, refs #782
* --data and --data-array and --data-key options, refs #782
* Tests for llm logs --schema options, refs #785
* Also implemented --schema ID lookup, refs #780
* Using --data-key implies --data
* Docs for llm logs --schema and --data etc
2025-02-26 21:51:08 -08:00
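The `--data-key` behaviour listed in the commit above ("Using --data-key implies --data") can be sketched in plain Python. This is an illustrative sketch, not `llm`'s implementation; the sample logged objects and the `items` key are invented for the example:

```python
import json

# Hypothetical logged JSON responses, of the kind `llm logs --data` would emit.
logged = [
    {"items": [{"name": "Rex"}, {"name": "Fido"}]},
    {"items": [{"name": "Spot"}]},
]

def extract_data_key(responses, key):
    """Sketch of --data-key: pull the list stored under `key` in each
    logged response and emit each element as one newline-delimited JSON line."""
    lines = []
    for obj in responses:
        for item in obj.get(key, []):
            lines.append(json.dumps(item))
    return "\n".join(lines)

print(extract_data_key(logged, "items"))
```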
Kasper Primdal Lauritzen
6cb16a1d1a
Allow "reasoning" for extra-openai-models.yaml (#766)
* Allow "reasoning" for extra-openai-models.yaml

Currently you get an error when trying to use `-o reasoning_effort high` with a model that has been defined in `extra-openai-models.yaml`. 
This allows a `reasoning` field.

* Mention reasoning: true in other OpenAI models docs

---------

Co-authored-by: Simon Willison <swillison@gmail.com>
2025-02-26 21:50:14 -08:00
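The fix above lets a model defined in `extra-openai-models.yaml` opt in with a `reasoning` field, after which `-o reasoning_effort high` no longer errors. A sketch of such an entry, assuming the usual `extra-openai-models.yaml` field names; the model alias here is illustrative:

```yaml
- model_id: my-reasoning-model   # illustrative alias
  model_name: o3-mini            # name sent to the API
  reasoning: true                # field added by this commit; enables reasoning options
```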
Simon Willison
f5c2cfba96 Note about Pydantic v1 support in changelog for 0.23a0
Refs #520, #775
2025-02-26 17:07:41 -08:00
Simon Willison
42122c79ba Release 0.23a0
Refs #776, #777
2025-02-26 17:05:13 -08:00
Simon Willison
62c90dd472
llm prompt --schema X option and model.prompt(..., schema=) parameter (#777)
Refs #776

* Implemented new llm prompt --schema and model.prompt(schema=)
* Log schema to responses.schema_id and schemas table
* Include schema in llm logs Markdown output
* Test for schema=pydantic_model
* Initial --schema CLI documentation
* Python docs for schema=
* Advanced plugin docs on schemas
2025-02-26 16:58:28 -08:00
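The new `--schema` option and `schema=` parameter described above take a JSON schema for structured output. A minimal sketch of such a schema; the property names are invented for illustration, and the `llm` invocation in the comment simply mirrors the new CLI option:

```python
import json

# A plain JSON Schema describing the desired structured output.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

# Serialized, this could be passed on the command line, e.g.:
#   llm prompt --schema "$(cat schema.json)" 'invent a dog'
schema_json = json.dumps(schema)
print(schema_json)
```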
Tomoko Uchida
eda1f4f588
Add note about similarity function in "similar" command's doc (#774)
* note about similarity function in similar command doc
* Link to Wikipedia definition

---------

Co-authored-by: Simon Willison <swillison@gmail.com>
2025-02-26 10:07:10 -08:00
Simon Willison
e46cb7e761 Update docs to no longer mention PaLM
!stable-docs
2025-02-16 22:37:00 -08:00
Simon Willison
64f9f2ef52 Promote llm-mlx in changelog and plugin directory
!stable-docs
2025-02-16 22:29:32 -08:00
Simon Willison
0eab3f5ff3 Link to 0.22 annotated release notes
!stable-docs
2025-02-16 22:20:40 -08:00
Simon Willison
b8b030fc58 Release 0.22
Refs #737, #742, #744, #745, #748, #752
2025-02-16 20:34:48 -08:00
Simon Willison
c053e214ec include token usage information, not get - refs #756 2025-02-16 15:18:04 -08:00
Simon Willison
53d6ecdd59 Documentation for logs --short, refs #756 2025-02-16 15:16:51 -08:00
Simon Willison
6c6b100f3e
KeyModel and AsyncKeyModel classes for models that take keys (#753)
* New KeyModel and AsyncKeyModel classes for models that take keys - closes #744
* llm prompt --key now uses new mechanism, including for async
* use new key mechanism in llm chat command
* Python API tests for llm.KeyModel and llm.AsyncKeyModel
* Python API docs for prompt(... key="")
* Mention await model.prompt() takes other parameters, reorg sections
* Better title for the model tutorial
* Docs on writing model plugins that take a key
2025-02-16 14:38:51 -08:00
Simon Willison
8611d9203c Updated docs with new chatgpt-4o-latest model, refs #752 2025-02-15 17:46:07 -08:00
Simon Willison
747d92ea4f Docs for multiple -q option, closes #748 2025-02-13 16:01:02 -08:00
Simon Willison
31e900e9e1 llm aliases set -q option, refs #749 2025-02-13 15:49:47 -08:00
Simon Willison
20c18a716d Multiple -q option for llm models and llm embed-models
Refs #748
2025-02-13 15:35:18 -08:00
Simon Willison
9a1374b447
llm embed-multi --prepend option (#746)
* llm embed-multi --prepend option

Closes #745
2025-02-12 15:19:18 -08:00
Simon Willison
f67c21522b
Docs for response.json() and response.usage()
!stable-docs
2025-02-11 08:35:27 -08:00
Simon Willison
41d64a8f12
llm logs --prompts option (#737)
Closes #736
2025-02-02 12:03:01 -08:00
Simon Willison
21df241443 llm-claude-3 is now called llm-anthropic
Refs https://github.com/simonw/llm-claude-3/issues/31

!stable-docs
2025-02-01 22:08:19 -08:00
Simon Willison
f8dcc67455 Release 0.21
Refs #717, #728
2025-01-31 12:35:10 -08:00
Simon Willison
eb0e1e761b o3-mini and reasoning_effort option, refs #728 2025-01-31 12:14:02 -08:00
Simon Willison
656d8fa3c4
--xl/--extract-last flag for prompt and log list commands (#718)
Closes #717
2025-01-24 10:52:46 -08:00
Simon Willison
e449fd4f46
Typo fix
!stable-docs
2025-01-22 22:17:07 -08:00
Simon Willison
3e88628602 uv tool upgrade llm, refs #702
!stable-docs
2025-01-22 21:08:16 -08:00
Simon Willison
bf10f63d3d
Mention gpt-4o-mini-audio-preview too #677
!stable-docs
2025-01-22 21:06:12 -08:00
Simon Willison
eb996baeab Documentation for model.attachment_types, closes #705 2025-01-22 20:46:28 -08:00
Simon Willison
2b9a1bbc50 Fixed broken link 2025-01-22 20:39:01 -08:00
Simon Willison
dc127d2a87 Release 0.20
Refs #654, #676, #677, #681, #688, #690, #700, #702, #709
2025-01-22 20:36:10 -08:00
Simon Willison
57d3baac42 Update embedding model names in docs, refs #654
Also ran Black.
2025-01-22 20:35:17 -08:00
Ryan Patterson
59983740e6
Update directory.md (#666) 2025-01-18 14:52:51 -08:00
abrasumente
e1388b27fe
Add llm-deepseek plugin (#517) 2025-01-11 18:56:34 -08:00
Steven Weaver
2b6b00641c
Update tutorial-model-plugin.md (#685)
pydantic.org -> pydantic.dev
2025-01-11 12:05:05 -08:00
Amjith Ramanujam
e3c104b136
Show the default model when listing all available models. (#688) 2025-01-11 12:04:39 -08:00
Simon Willison
1d75792f9b More uv/uvx tips, closes #702
Refs #690
2025-01-11 10:06:32 -08:00
Ariel Marcus
d964d02e90
Add installation docs with uv (#690) 2025-01-11 09:57:10 -08:00
watany
1c61b5addd
doc(plugin): adding AmazonBedrock (#698) 2025-01-10 16:42:39 -08:00
Arjan Mossel
4f4f9bc07d
Add llm-venice to plugin directory (#699) 2025-01-10 16:41:21 -08:00
Simon Willison
6baf1f7d83 o1
Closes #676
2025-01-10 15:57:06 -08:00
Csaba Henk
88a8cfd9e4
llm logs -x/--extract option (#693)
* llm logs -x/--extract option
* Update docs/help.md for llm logs -x
* Added test for llm logs -x/--extract, refs #693
* llm logs -xr behaves same as llm logs -x
* -x/--extract in llm logging docs

---------

Co-authored-by: Simon Willison <swillison@gmail.com>
2025-01-10 15:53:04 -08:00
Simon Willison
b452effa09 llm models -q/--query option, closes #700 2025-01-09 11:37:33 -08:00
Simon Willison
000e984def --extract support for templates, closes #681 2024-12-19 07:16:48 -08:00
Simon Willison
67d4a99645 llm prompt -x/--extract option, closes #681 2024-12-19 06:40:05 -08:00
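The `-x/--extract` option above pulls the first fenced code block out of a response. A rough sketch of that behaviour, assuming the fallback of returning the full text when no fenced block is present (the sample response string is invented):

```python
import re

def extract_first_fenced_block(text):
    """Sketch of -x/--extract: return the contents of the first fenced
    code block in a response, or the whole text if none is found."""
    match = re.search(r"`{3}[\w-]*\n(.*?)`{3}", text, re.DOTALL)
    return match.group(1).rstrip("\n") if match else text

fence = "`" * 3
response = f"Here you go:\n{fence}python\nprint('hi')\n{fence}\nEnjoy!"
print(extract_first_fenced_block(response))
```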
Simon Willison
6305b86026 gpt-4o-mini-audio-preview, closes #677 2024-12-17 20:28:57 -08:00
Simon Willison
8898584ba6 New OpenAI audio models, closes #677 2024-12-17 11:14:42 -08:00
Simon Willison
b8e8052229 Release 0.19.1
Refs #667
2024-12-05 13:47:28 -08:00
Simon Willison
e78fea17df Fragment hash on 0.19 release
!stable-docs
2024-12-01 16:09:55 -08:00