Simon Willison
99a1adcece
Initial llm schemas list implementation, refs #781
2025-02-27 07:35:48 -08:00
Simon Willison
a0845874ec
Schema template --save --schema support
...
* Don't hang on stdin if llm -t template-with-schema
* Docs on using schemas with templates
* Schema in template YAML file example
* Test for --save with --schema
Refs #778
2025-02-27 07:19:15 -08:00
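Based on the --save --schema work above, a saved template's YAML file carries its schema inline. A minimal sketch, assuming the schema lives under a `schema_object` key and `$input` is the template's input placeholder (both key names may differ from the actual file format):

```yaml
# Sketch of a template YAML file with an embedded schema.
# Key names (prompt, schema_object) are assumptions and may not
# match the real llm templates format exactly.
prompt: 'Extract every person mentioned: $input'
schema_object:
  type: object
  properties:
    name:
      type: string
    age:
      type: integer
  required:
  - name
  - age
```

Running `llm -t` with such a template would then apply the schema without `--schema` on the command line.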
Simon Willison
f35ac31c21
llm logs --schema, --data, --data-array and --data-key options (#785)
...
* llm logs --schema option, refs #782
* --data and --data-array and --data-key options, refs #782
* Tests for llm logs --schema options, refs #785
* Also implemented --schema ID lookup, refs #780
* Using --data-key implies --data
* Docs for llm logs --schema and --data etc
2025-02-26 21:51:08 -08:00
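The --data-key behavior is easiest to see with a sketch: each logged response body is JSON, and --data-key pulls one named key (typically a list) out of each response and flattens the results. A stdlib-only illustration of that idea, not the actual implementation:

```python
import json

# Each logged response is a JSON object; --data-key "items" would pull
# the "items" list out of each one and flatten them (sketch of #782).
logged = [
    '{"items": [{"name": "a"}, {"name": "b"}]}',
    '{"items": [{"name": "c"}]}',
]
flattened = []
for raw in logged:
    flattened.extend(json.loads(raw)["items"])
print(len(flattened))
```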
Kasper Primdal Lauritzen
6cb16a1d1a
Allow "reasoning" for extra-openai-models.yaml ( #766 )
...
* Allow "reasoning" for extra-openai-models.yaml
Currently you get an error when trying to use `-o reasoning_effort high` with a model that has been defined in `extra-openai-models.yaml`.
This allows a `reasoning` field.
* Mention reasoning: true in other OpenAI models docs
---------
Co-authored-by: Simon Willison <swillison@gmail.com>
2025-02-26 21:50:14 -08:00
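Per the commit above, marking a model as a reasoning model in extra-openai-models.yaml is what lets `-o reasoning_effort high` work. A sketch of an entry, where the `model_id`/`model_name` fields follow the existing file format and the specific model name is illustrative:

```yaml
# extra-openai-models.yaml - sketch of a reasoning-capable entry
- model_id: o3-mini
  model_name: o3-mini
  reasoning: true
```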
Simon Willison
02999e398d
Refactor tests to new test_llm_logs.py module
...
Refs #785
2025-02-26 20:23:55 -08:00
Simon Willison
9922d5bb6a
model.prompt(prompt=) is now optional, closes #784
2025-02-26 19:31:38 -08:00
Simon Willison
c58e3a2246
Use -- none -- for no prompt
...
Refs https://github.com/simonw/llm/issues/783#issuecomment-2686760812
2025-02-26 19:28:19 -08:00
Simon Willison
3d4871f163
Improved log Markdown, closes #783
2025-02-26 19:25:25 -08:00
Simon Willison
f5c2cfba96
Note about Pydantic v1 support in changelog for 0.23a0
...
Refs #520, #775
2025-02-26 17:07:41 -08:00
Simon Willison
42122c79ba
Release 0.23a0
...
Refs #776, #777
2025-02-26 17:05:13 -08:00
Simon Willison
62c90dd472
llm prompt --schema X option and model.prompt(..., schema=) parameter (#777)
...
Refs #776
* Implemented new llm prompt --schema and model.prompt(schema=)
* Log schema to responses.schema_id and schemas table
* Include schema in llm logs Markdown output
* Test for schema=pydantic_model
* Initial --schema CLI documentation
* Python docs for schema=
* Advanced plugin docs on schemas
2025-02-26 16:58:28 -08:00
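The schemas passed to --schema and schema= are ordinary JSON Schema. A runnable sketch of one; the usage shown in comments is illustrative, based on the option added in #777, and is not run here since it needs llm and an API key:

```python
import json

# A minimal JSON Schema of the kind llm prompt --schema and
# model.prompt(schema=) accept - plain JSON Schema, nothing llm-specific.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

# Illustrative usage (not executed here):
#   llm prompt --schema schema.json 'invent a person'
#   response = model.prompt("invent a person", schema=schema)
print(json.dumps(schema, indent=2))
```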
Tomoko Uchida
eda1f4f588
Add note about similarity function in "similar" command's doc (#774)
...
* note about similarity function in similar command doc
* Link to Wikipedia definition
---------
Co-authored-by: Simon Willison <swillison@gmail.com>
2025-02-26 10:07:10 -08:00
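The similarity function the `llm similar` docs now describe is cosine similarity. A self-contained sketch of the standard formula:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    mag_a = math.sqrt(sum(x * x for x in a))
    mag_b = math.sqrt(sum(y * y for y in b))
    return dot / (mag_a * mag_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```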
Simon Willison
849c65fe9d
Upgrade to Pydantic v2 (#775)
...
* Upgrade to Pydantic v2
* Stop testing against Pydantic v1
Closes #520
2025-02-26 10:05:54 -08:00
Simon Willison
e46cb7e761
Update docs to no longer mention PaLM
...
!stable-docs
2025-02-16 22:37:00 -08:00
Simon Willison
64f9f2ef52
Promote llm-mlx in changelog and plugin directory
...
!stable-docs
2025-02-16 22:29:32 -08:00
Simon Willison
0eab3f5ff3
Link to 0.22 annotated release notes
...
!stable-docs
2025-02-16 22:20:40 -08:00
Simon Willison
b8b030fc58
Release 0.22
...
Refs #737, #742, #744, #745, #748, #752
2025-02-16 20:34:48 -08:00
Simon Willison
7bf1ea665e
Made load_conversation() async-aware, closes #742
2025-02-16 20:19:38 -08:00
Simon Willison
3445f9a112
Test for async model conversations Python API, refs #742
2025-02-16 19:48:25 -08:00
Simon Willison
24b250506b
Better __repr__ and __str__ for conversation and model
...
Inspired by work on #752
2025-02-16 15:45:08 -08:00
Simon Willison
c053e214ec
Include token usage information, refs #756
2025-02-16 15:18:04 -08:00
Simon Willison
53d6ecdd59
Documentation for logs --short, refs #756
2025-02-16 15:16:51 -08:00
Simon Willison
2066397aae
llm logs --prompts is now -s/--short - also supports --usage
...
Refs #736, closes #756
2025-02-16 15:12:28 -08:00
Simon Willison
6c6b100f3e
KeyModel and AsyncKeyModel classes for models that take keys (#753)
...
* New KeyModel and AsyncKeyModel classes for models that take keys - closes #744
* llm prompt --key now uses new mechanism, including for async
* use new key mechanism in llm chat command
* Python API tests for llm.KeyModel and llm.AsyncKeyModel
* Python API docs for prompt(..., key="")
* Mention that await model.prompt() takes other parameters, reorganize sections
* Better title for the model tutorial
* Docs on writing model plugins that take a key
2025-02-16 14:38:51 -08:00
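The KeyModel change means a plugin's execute() receives the resolved API key as a parameter instead of reading self.key. A stdlib-only sketch of that shape: `llm.KeyModel` is replaced with a stand-in so the example runs without llm installed, and `EchoModel` is hypothetical:

```python
class KeyModel:
    # Stand-in for llm.KeyModel so this sketch is self-contained
    needs_key = None
    key_env_var = None

class EchoModel(KeyModel):
    # Hypothetical plugin model; a real one would subclass llm.KeyModel
    model_id = "echo"
    needs_key = "echo"
    key_env_var = "ECHO_API_KEY"

    def execute(self, prompt, stream, response, conversation, key):
        # The resolved key arrives as an argument (the #744 change);
        # a real plugin would use it to call its API here
        yield f"[{key}] {prompt}"

model = EchoModel()
print("".join(model.execute("hi", False, None, None, key="sk-test")))
```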
Simon Willison
8611d9203c
Updated docs with new chatgpt-4o-latest model, refs #752
2025-02-15 17:46:07 -08:00
Simon Willison
b331b1e674
OpenAI model chatgpt-4o-latest, closes #752
2025-02-15 17:45:09 -08:00
Simon Willison
747d92ea4f
Docs for multiple -q option, closes #748
2025-02-13 16:01:02 -08:00
Simon Willison
31e900e9e1
llm aliases set -q option, refs #749
2025-02-13 15:49:47 -08:00
Simon Willison
20c18a716d
-q multiple option for llm models and llm embed-models
...
Refs #748
2025-02-13 15:35:18 -08:00
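Passing -q more than once narrows the listing to models matching every query. A sketch of that filtering logic; the real matching rules (for example, which fields are searched) may differ:

```python
def matches(name, queries):
    # Keep a model only if every query substring appears (case-insensitive)
    name = name.lower()
    return all(q.lower() in name for q in queries)

models = ["gpt-4o", "gpt-4o-mini", "o3-mini"]
print([m for m in models if matches(m, ["gpt", "mini"])])
```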
Simon Willison
9a1374b447
llm embed-multi --prepend option (#746)
...
* llm embed-multi --prepend option
Closes #745
2025-02-12 15:19:18 -08:00
Simon Willison
f67c21522b
Docs for response.json() and response.usage()
...
!stable-docs
2025-02-11 08:35:27 -08:00
Simon Willison
41d64a8f12
llm logs --prompts option (#737)
...
Closes #736
2025-02-02 12:03:01 -08:00
Simon Willison
21df241443
llm-claude-3 is now called llm-anthropic
...
Refs https://github.com/simonw/llm-claude-3/issues/31
!stable-docs
2025-02-01 22:08:19 -08:00
Simon Willison
deb8bc3b4f
Upgrade to black>=25.1.0
...
Refs https://github.com/simonw/llm/issues/728#issuecomment-2628348988
Refs https://github.com/psf/black/issues/4571#issuecomment-2628355450
2025-01-31 13:18:41 -08:00
Simon Willison
f8dcc67455
Release 0.21
...
Refs #717, #728
2025-01-31 12:35:10 -08:00
Simon Willison
4c153ce675
Pin Black to get tests to pass, refs #728
...
See https://github.com/psf/black/issues/4571
2025-01-31 12:32:08 -08:00
Simon Willison
965ad819f9
Fix for tests with pydantic<2, refs #728
2025-01-31 12:18:33 -08:00
Simon Willison
eb0e1e761b
o3-mini and reasoning_effort option, refs #728
2025-01-31 12:14:02 -08:00
Simon Willison
656d8fa3c4
--xl/--extract-last flag for prompt and log list commands (#718)
...
Closes #717
2025-01-24 10:52:46 -08:00
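--extract-last pulls the last fenced code block out of a response, complementing extraction of the first block. A regex sketch of the idea, not the actual implementation (the fence string is built at runtime to avoid literal backtick fences inside this example):

```python
import re

FENCE = "`" * 3  # triple-backtick fence, built to keep this block well-formed

def extract_last_fenced(text):
    # Find every fenced block and return the body of the last one
    pattern = FENCE + r"[^\n]*\n(.*?)" + FENCE
    blocks = re.findall(pattern, text, re.DOTALL)
    return blocks[-1] if blocks else None

sample = f"Intro\n{FENCE}python\nfirst\n{FENCE}\nmore\n{FENCE}\nsecond\n{FENCE}\n"
print(extract_last_fenced(sample))
```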
Simon Willison
e449fd4f46
Typo fix
...
!stable-docs
2025-01-22 22:17:07 -08:00
Simon Willison
3e88628602
uv tool upgrade llm, refs #702
...
!stable-docs
2025-01-22 21:08:16 -08:00
Simon Willison
bf10f63d3d
Mention gpt-4o-mini-audio-preview too #677
...
!stable-docs
2025-01-22 21:06:12 -08:00
Simon Willison
eb996baeab
Documentation for model.attachment_types, closes #705
2025-01-22 20:46:28 -08:00
Simon Willison
2b9a1bbc50
Fixed broken link
2025-01-22 20:39:01 -08:00
Simon Willison
dc127d2a87
Release 0.20
...
Refs #654 , #676 , #677 , #681 , #688 , #690 , #700 , #702 , #709
2025-01-22 20:36:10 -08:00
Simon Willison
57d3baac42
Update embedding model names in docs, refs #654
...
Also ran Black.
2025-01-22 20:35:17 -08:00
web-sst
6f7ea406bf
Register full embedding model names (#654)
...
Provide backward compatible aliases.
This makes available the same model names that ttok uses.
2025-01-22 20:14:03 -08:00
Ryan Patterson
59983740e6
Update directory.md (#666)
2025-01-18 14:52:51 -08:00
Simon Willison
02e59a201e
Don't show default model for llm models -q, closes #710
2025-01-18 14:24:18 -08:00
Simon Willison
f95dd55cda
Make it easier to debug CLI errors in pytest
...
Found this pattern while working on #709
2025-01-18 14:21:43 -08:00