Simon Willison
63e5dff774
Update test, refs #834
2025-03-19 20:47:51 -07:00
Simon Willison
bfbcc201b7
Don't require input if template does not use $input, closes #835
2025-03-15 19:17:36 -07:00
Simon Willison
bc692e1f19
Templates only require input if they use $input
2025-03-15 19:06:41 -07:00
Simon Willison
1d552aeacc
llm models -m option, closes #825
2025-03-10 14:18:50 -07:00
Simon Willison
3a60290c82
llm logs --id-gt and --id-gte options, closes #801
2025-02-28 00:15:59 -08:00
Simon Willison
48f67f4085
llm logs --data-ids flag, closes #800
2025-02-27 20:31:50 -08:00
Simon Willison
1bebf8b34a
--schema t:template-name option, plus improved schema docs
...
Closes #799, refs #788
2025-02-27 17:25:31 -08:00
Simon Willison
b1fe2e9857
sort_keys=False in --save, closes #798
2025-02-27 16:51:43 -08:00
Simon Willison
9a38021218
llm schemas dsl command, closes #793
...
Refs #790
2025-02-27 10:46:56 -08:00
Simon Willison
eb2b243fdf
schema_dsl(..., multi=True) parameter, refs #790
2025-02-27 10:28:42 -08:00
Simon Willison
8d32b71ef1
Renamed build_json_schema to schema_dsl
2025-02-27 10:22:29 -08:00
Simon Willison
7e819c2ffa
Implemented --schema-multi, closes #791
2025-02-27 10:12:21 -08:00
Simon Willison
321636e791
New schema DSL, closes #790
...
Plus made a start on schemas.md refs #788
2025-02-27 09:48:44 -08:00
Simon Willison
a0845874ec
Schema template --save --schema support
...
* Don't hang on stdin if llm -t template-with-schema
* Docs on using schemas with templates
* Schema in template YAML file example
* Test for --save with --schema
Refs #778
2025-02-27 07:19:15 -08:00
Simon Willison
f35ac31c21
llm logs --schema, --data, --data-array and --data-key options (#785)
...
* llm logs --schema option, refs #782
* --data and --data-array and --data-key options, refs #782
* Tests for llm logs --schema options, refs #785
* Also implemented --schema ID lookup, refs #780
* Using --data-key implies --data
* Docs for llm logs --schema and --data etc
2025-02-26 21:51:08 -08:00
Simon Willison
02999e398d
Refactor tests to new test_llm_logs.py module
...
Refs #785
2025-02-26 20:23:55 -08:00
Simon Willison
3d4871f163
Improved log Markdown, closes #783
2025-02-26 19:25:25 -08:00
Simon Willison
62c90dd472
llm prompt --schema X option and model.prompt(..., schema=) parameter (#777)
...
Refs #776
* Implemented new llm prompt --schema and model.prompt(schema=)
* Log schema to responses.schema_id and schemas table
* Include schema in llm logs Markdown output
* Test for schema=pydantic_model
* Initial --schema CLI documentation
* Python docs for schema=
* Advanced plugin docs on schemas
2025-02-26 16:58:28 -08:00
Simon Willison
7bf1ea665e
Made load_conversation() async aware, closes #742
2025-02-16 20:19:38 -08:00
Simon Willison
3445f9a112
Test for async model conversations Python API, refs #742
2025-02-16 19:48:25 -08:00
Simon Willison
24b250506b
Better __repr__ and __str__ for conversation and model
...
Inspired by work on #752
2025-02-16 15:45:08 -08:00
Simon Willison
2066397aae
llm logs --prompts is now -s/--short - also supports --usage
...
Refs #736, closes #756
2025-02-16 15:12:28 -08:00
Simon Willison
6c6b100f3e
KeyModel and AsyncKeyModel classes for models that take keys (#753)
...
* New KeyModel and AsyncKeyModel classes for models that take keys - closes #744
* llm prompt --key now uses new mechanism, including for async
* use new key mechanism in llm chat command
* Python API tests for llm.KeyModel and llm.AsyncKeyModel
* Python API docs for prompt(... key="")
* Mention await model.prompt() takes other parameters, reorg sections
* Better title for the model tutorial
* Docs on writing model plugins that take a key
2025-02-16 14:38:51 -08:00
Simon Willison
31e900e9e1
llm aliases set -q option, refs #749
2025-02-13 15:49:47 -08:00
Simon Willison
20c18a716d
-q multiple option for llm models and llm embed-models
...
Refs #748
2025-02-13 15:35:18 -08:00
Simon Willison
9a1374b447
llm embed-multi --prepend option (#746)
...
* llm embed-multi --prepend option
Closes #745
2025-02-12 15:19:18 -08:00
Simon Willison
41d64a8f12
llm logs --prompts option (#737)
...
Closes #736
2025-02-02 12:03:01 -08:00
Simon Willison
656d8fa3c4
--xl/--extract-last flag for prompt and log list commands (#718)
...
Closes #717
2025-01-24 10:52:46 -08:00
web-sst
6f7ea406bf
Register full embedding model names (#654)
...
Provide backward compatible aliases.
This makes available the same model names that ttok uses.
2025-01-22 20:14:03 -08:00
Csaba Henk
88a8cfd9e4
llm logs -x/--extract option (#693)
...
* llm logs -x/--extract option
* Update docs/help.md for llm logs -x
* Added test for llm logs -x/--extract, refs #693
* llm logs -xr behaves same as llm logs -x
* -x/--extract in llm logging docs
---------
Co-authored-by: Simon Willison <swillison@gmail.com>
2025-01-10 15:53:04 -08:00
Simon Willison
b452effa09
llm models -q/--query option, closes #700
2025-01-09 11:37:33 -08:00
Simon Willison
000e984def
--extract support for templates, closes #681
2024-12-19 07:16:48 -08:00
Simon Willison
67d4a99645
llm prompt -x/--extract option, closes #681
2024-12-19 06:40:05 -08:00
Simon Willison
571f4b2a4d
Fix for UTC warnings
...
Closes #672
2024-12-12 14:57:23 -08:00
Simon Willison
b6be09aa28
Fix get_models() and get_async_models() duplicates bug
...
Closes #667, refs #640
2024-12-05 13:44:07 -08:00
Simon Willison
f9af563df5
response.on_done() mechanism, closes #653
2024-12-01 15:47:23 -08:00
Simon Willison
c52cfee881
llm.get_models() and llm.get_async_models(), closes #640
2024-11-20 20:09:06 -08:00
Simon Willison
8a7b0c4f5d
response.usage() and await aresponse.usage(), closes #644
2024-11-19 21:25:37 -08:00
Simon Willison
cfb10f4afd
Log input tokens, output tokens and token details (#642)
...
* Store input_tokens, output_tokens, token_details on Response, closes #610
* llm prompt -u/--usage option
* llm logs -u/--usage option
* Docs on tracking token usage in plugins
* OpenAI default plugin logs usage
2024-11-19 20:21:59 -08:00
Simon Willison
4a059d722b
Log --async responses to DB, closes #641
...
Refs #507
2024-11-19 18:11:52 -08:00
Simon Willison
157b29ddeb
Test for basic async conversation, refs #632
2024-11-14 14:28:17 -08:00
Simon Willison
ba75c674cb
llm.get_async_model(), llm.AsyncModel base class and OpenAI async models (#613)
...
- https://github.com/simonw/llm/issues/507#issuecomment-2458639308
* register_model is now async aware
Refs https://github.com/simonw/llm/issues/507#issuecomment-2458658134
* Refactor Chat and AsyncChat to use _Shared base class
Refs https://github.com/simonw/llm/issues/507#issuecomment-2458692338
* fixed function name
* Fix for infinite loop
* Applied Black
* Ran cog
* Applied Black
* Add Response.from_row() classmethod back again
It does not matter that this is a blocking call, since it is a classmethod
* Made mypy happy with llm/models.py
* mypy fixes for openai_models.py
I am unhappy with this, had to duplicate some code.
* First test for AsyncModel
* Still have not quite got this working
* Fix for not loading plugins during tests, refs #626
* audio/wav not audio/wave, refs #603
* Black and mypy and ruff all happy
* Refactor to avoid generics
* Removed obsolete response() method
* Support text = await async_mock_model.prompt("hello")
* Initial docs for llm.get_async_model() and await model.prompt()
Refs #507
* Initial async model plugin creation docs
* duration_ms ANY to pass test
* llm models --async option
Refs https://github.com/simonw/llm/pull/613#issuecomment-2474724406
* Removed obsolete TypeVars
* Expanded register_models() docs for async
* await model.prompt() now returns AsyncResponse
Refs https://github.com/simonw/llm/pull/613#issuecomment-2475157822
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-11-13 17:51:00 -08:00
Simon Willison
7520671176
audio/wav not audio/wave, refs #603
2024-11-12 21:43:07 -08:00
Simon Willison
561784df6e
llm keys get command, refs #623
2024-11-11 09:47:13 -08:00
Simon Willison
5d1d723d4b
Special case treat audio/wave as audio/wav, closes #603
2024-11-07 17:13:54 -08:00
Simon Willison
12df1a3b2a
Show attachment types in llm models --options, closes #612
2024-11-05 22:49:26 -08:00
Simon Willison
1a60fa1667
Test to exercise gpt-4o-audio-preview, closes #608
2024-11-05 21:50:00 -08:00
Simon Willison
39d61d433a
Automated tests for attachments, refs #587
2024-10-28 19:21:11 -07:00
Simon Willison
758ff9ac17
Upgrade to pytest-httpx>=0.33.0
2024-10-28 15:41:34 -07:00
Simon Willison
db1d77f486
assert_all_responses_were_requested=False
2024-10-28 15:41:34 -07:00
Simon Willison
286cf9fcd9
attachments= keyword argument, tests pass again - refs #587
2024-10-28 15:41:34 -07:00
Andrew Wason
a466ddf3cd
Fix broken tests. Drop Python 3.8. (#580)
...
* Fix broken tests. Drop Python 3.8.
* Test on Python 3.13 too
---------
Co-authored-by: Simon Willison <swillison@gmail.com>
2024-10-27 11:26:47 -07:00
Simon Willison
6deed8f976
get_model() improvement, get_default_model() / set_default_model() now documented
...
Refs #553
2024-08-18 17:37:31 -07:00
Simon Willison
a83421607a
Switch default model to gpt-4o-mini (from gpt-3.5-turbo), refs #536
2024-07-18 11:57:19 -07:00
Simon Willison
964f4d9934
Fix for llm logs -q plus -m bug, closes #515
2024-06-16 14:35:38 -07:00
Simon Willison
fb63c92cd2
llm logs -r/--response option, closes #431
2024-03-04 13:29:07 -08:00
Simon Willison
8021e12aaa
Windows readline fix, plus run CI against macOS and Windows
...
* Run CI on Windows and macOS as well as Ubuntu, refs #407
* Use pyreadline3 on win32
* Back to fail-fast since we have a bigger matrix now
* Mark some tests as xfail on windows
2024-01-26 16:24:58 -08:00
Simon Willison
9119b03a07
Chmod 600 keys.json on creation, refs #351
2024-01-26 13:18:13 -08:00
Simon Willison
214fcaaf86
Upgrade to run against OpenAI >= 1.0
...
* strategy: fail-fast: false - to help see all errors
* Apply latest Black
Refs #325
2024-01-25 22:00:44 -08:00
Simon Willison
469644c2af
Test using test-llm-load-plugins.sh
...
* Run test-llm-load-plugins.sh, closes #378
2024-01-25 17:44:34 -08:00
Simon Willison
8b78ac6099
Fix for bug where embed did not use default model, closes #317
2023-10-31 21:19:59 -07:00
e. alvarez
839b4d7161
Fix issues: #274, #280 (#282)
...
* Fix issue with reading directories in `iterate_files()` (#280)
* Add directory checking logic in `iterate_files()` (#274)
* Added tests for #282, #274, #280
---------
Co-authored-by: Simon Willison <swillison@gmail.com>
2023-09-18 23:14:30 -07:00
Simon Willison
b4ec54ef19
NotImplementedError for system prompts with OpenAI completion models, refs #284
...
Signed-off-by: Simon Willison <swillison@gmail.com>
2023-09-18 22:51:22 -07:00
Simon Willison
f76b2120e4
Revert "Handle system prompts for completion models, refs #284"
...
This reverts commit 4eed871cf1.
Decision made in #288
2023-09-18 22:44:38 -07:00
Simon Willison
4eed871cf1
Handle system prompts for completion models, refs #284
2023-09-18 22:36:38 -07:00
Simon Willison
fcff36c6bc
completion: true to register completion models, refs #284
2023-09-18 22:17:26 -07:00
Simon Willison
4fea46113f
logprobs support for OpenAI completion models, refs #284
2023-09-18 22:04:28 -07:00
Simon Willison
2b504279d9
Test for OpenAI chat streaming, closes #287
2023-09-18 21:27:36 -07:00
Simon Willison
4d18da4e11
Bump default gpt-3.5-turbo-instruct max tokens to 256, refs #284
2023-09-18 20:29:39 -07:00
Simon Willison
4d46ebaa32
OpenAI completion models including gpt-3.5-turbo-instruct, refs #284
2023-09-18 18:34:32 -07:00
Simon Willison
356fcb72f6
NumPy decoding docs, plus extra tests for llm.encode/decode
...
!stable-docs
Refs https://discord.com/channels/823971286308356157/1128504153841336370/1151975583237034056
2023-09-14 14:01:47 -07:00
Simon Willison
33dee4762e
llm embed-multi --batch-size option, closes #273
2023-09-13 16:33:27 -07:00
Simon Willison
b9478e6a17
batch_size= argument to embed_multi(), refs #273
2023-09-13 16:24:04 -07:00
Simon Willison
6c43948325
llm.user_dir() creates directory if needed, closes #275
...
Would have fixed this bug too:
- https://github.com/simonw/llm-sentence-transformers/issues/9
2023-09-13 15:58:18 -07:00
Simon Willison
603da35e37
Fixed flaky order tests, refs #271
2023-09-12 11:37:36 -07:00
Simon Willison
9e529bb36a
Wrap !multi in single quotes, for consistency with exit/quit
2023-09-12 10:45:02 -07:00
Simon Willison
f54f2c659d
response.__str__ method, closes #268
2023-09-12 10:36:29 -07:00
Simon Willison
9c33d30843
llm chat !multi support, closes #267
2023-09-12 09:31:20 -07:00
Simon Willison
22a59f795e
llm collections defaults to llm collections list, closes #265
2023-09-11 23:08:11 -07:00
Simon Willison
591ad6f571
Revert "Reuse embeddings for hashed content, --store now works on second run - closes #224"
...
This reverts commit 267e2ea999.
It's broken, see:
https://github.com/simonw/llm/issues/224#issuecomment-1715014393
2023-09-11 22:56:52 -07:00
Simon Willison
267e2ea999
Reuse embeddings for hashed content, --store now works on second run - closes #224
2023-09-11 22:44:22 -07:00
Simon Willison
52cec1304b
Binary embeddings (#254)
...
* Binary embeddings support, refs #253
* Write binary content to content_blob, with tests - refs #253
* supports_text and supports_binary embedding validation, refs #253
2023-09-11 18:58:44 -07:00
Simon Willison
5ba34dbe36
llm embed-db is now llm collections, refs #229
2023-09-10 14:24:27 -07:00
Alexis Métaireau
df32d7685d
Updated error message for invalid or missing embedding model (#257)
...
* Updated error message for missing embedding model
---------
Co-authored-by: Simon Willison <swillison@gmail.com>
2023-09-10 11:56:29 -07:00
Simon Willison
2246e8f4fd
Make tests robust against extra plugins, closes #258
2023-09-10 11:21:04 -07:00
Simon Willison
ae7f4f6de7
llm chat -o/--option - refs #244
2023-09-10 11:14:28 -07:00
Simon Willison
8de6783dcc
llm chat test system prompt
2023-09-04 23:36:25 -07:00
Simon Willison
c14959571e
llm chat test continue conversation
2023-09-04 23:36:25 -07:00
Simon Willison
5495112d9f
Initial tests for llm chat, refs #231
2023-09-04 23:36:25 -07:00
Simon Willison
78a0e9bd44
llm --files --encoding option and latin-1 fallback, closes #225
2023-09-04 12:28:31 -07:00
Simon Willison
2e90a30c4f
Add embedding model aliases to llm aliases, refs #192
2023-09-03 17:55:56 -07:00
Simon Willison
408297f1f5
Fixed test I broke in #217
2023-09-03 17:55:28 -07:00
Simon Willison
3bf781fba2
Duplicate content is only embedded once, closes #217
2023-09-03 17:39:11 -07:00
Simon Willison
0eda99e91c
Default embedding model finishing touches, closes #222
2023-09-03 17:21:47 -07:00
Simon Willison
b9c19a5666
Tests for multiple --files pairs
2023-09-03 16:40:00 -07:00
Simon Willison
6f62b7d613
Tests for llm embed-multi --files, refs #215
2023-09-03 16:40:00 -07:00
Simon Willison
0da1ed7d98
--remove-default for llm embed-models default, refs #222
2023-09-03 16:40:00 -07:00
Simon Willison
c8c0f80441
--prefix for llm embed-multi, refs #215
2023-09-03 16:40:00 -07:00
Simon Willison
70a3d4bdc4
Test for llm embed-multi against SQLite, refs #215
2023-09-03 16:40:00 -07:00
Simon Willison
5e686fe8b3
Tests for CSV/TSV/JSON/NL, refs #215
2023-09-03 16:40:00 -07:00