Commit graph

625 commits

Simon Willison
656d8fa3c4
--xl/--extract-last flag for prompt and log list commands (#718)
Closes #717
2025-01-24 10:52:46 -08:00
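The --xl/--extract-last flag above complements the earlier -x/--extract option: instead of the first fenced code block in a response, it pulls the last one. A minimal sketch of that extraction logic, assuming a simple regex over Markdown-style fences (the real implementation in llm may handle edge cases differently):

```python
import re

# Matches fenced code blocks: optional language tag, then the body up to the closing fence.
FENCE_RE = re.compile(r"```(?:\w+)?\n(.*?)```", re.DOTALL)

def extract_last_fenced_block(text):
    """Return the contents of the last fenced code block in text, or None."""
    matches = FENCE_RE.findall(text)
    return matches[-1] if matches else None

response = "Intro\n```python\nfirst\n```\nmore\n```\nsecond\n```\n"
print(extract_last_fenced_block(response).strip())  # second
```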
Simon Willison
e449fd4f46
Typo fix
!stable-docs
2025-01-22 22:17:07 -08:00
Simon Willison
3e88628602 uv tool upgrade llm, refs #702
!stable-docs
2025-01-22 21:08:16 -08:00
Simon Willison
bf10f63d3d
Mention gpt-4o-mini-audio-preview too #677
!stable-docs
2025-01-22 21:06:12 -08:00
Simon Willison
eb996baeab Documentation for model.attachment_types, closes #705 2025-01-22 20:46:28 -08:00
Simon Willison
2b9a1bbc50 Fixed broken link 2025-01-22 20:39:01 -08:00
Simon Willison
dc127d2a87 Release 0.20
Refs #654, #676, #677, #681, #688, #690, #700, #702, #709
2025-01-22 20:36:10 -08:00
Simon Willison
57d3baac42 Update embedding model names in docs, refs #654
Also ran Black.
2025-01-22 20:35:17 -08:00
web-sst
6f7ea406bf
Register full embedding model names (#654)
Provide backward compatible aliases.
This makes available the same model names that ttok uses.
2025-01-22 20:14:03 -08:00
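The PR above registers full embedding model names while keeping the short names working as aliases. A sketch of the backward-compatible alias pattern; the model names here are examples, not llm's actual registry:

```python
# Illustrative alias registry: full names map to themselves,
# legacy short names map forward to the full names.
REGISTRY = {}

def register(full_name, aliases=()):
    REGISTRY[full_name] = full_name
    for alias in aliases:
        REGISTRY[alias] = full_name

register("text-embedding-3-small", aliases=["3-small"])
print(REGISTRY["3-small"])  # text-embedding-3-small
```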
Ryan Patterson
59983740e6
Update directory.md (#666) 2025-01-18 14:52:51 -08:00
Simon Willison
02e59a201e Don't show default model for llm models -q, closes #710 2025-01-18 14:24:18 -08:00
Simon Willison
f95dd55cda Make it easier to debug CLI errors in pytest
Found this pattern while working on #709
2025-01-18 14:21:43 -08:00
Simon Willison
64179fa9e0 Use openai>=1.55.3 for issue #709 2025-01-18 14:11:21 -08:00
abrasumente
e1388b27fe
Add llm-deepseek plugin (#517) 2025-01-11 18:56:34 -08:00
Steven Weaver
2b6b00641c
Update tutorial-model-plugin.md (#685)
pydantic.org -> pydantic.dev
2025-01-11 12:05:05 -08:00
Amjith Ramanujam
e3c104b136
Show the default model when listing all available models. (#688) 2025-01-11 12:04:39 -08:00
Simon Willison
1d75792f9b More uv/uvx tips, closes #702
Refs #690
2025-01-11 10:06:32 -08:00
Ariel Marcus
d964d02e90
Add installation docs with uv (#690) 2025-01-11 09:57:10 -08:00
watany
1c61b5addd
doc(plugin): adding AmazonBedrock (#698) 2025-01-10 16:42:39 -08:00
Arjan Mossel
4f4f9bc07d
Add llm-venice to plugin directory (#699) 2025-01-10 16:41:21 -08:00
Simon Willison
73043ec406 Fixed mypy complaint 2025-01-10 16:05:29 -08:00
Simon Willison
38a7366d8e o1 cannot stream
https://github.com/simonw/llm/issues/676#issuecomment-2584932453
2025-01-10 16:03:09 -08:00
Simon Willison
6baf1f7d83 o1
Closes #676
2025-01-10 15:57:06 -08:00
Csaba Henk
88a8cfd9e4
llm logs -x/--extract option (#693)
* llm logs -x/--extract option
* Update docs/help.md for llm logs -x
* Added test for llm logs -x/--extract, refs #693
* llm logs -xr behaves the same as llm logs -x
* -x/--extract in llm logging docs

---------

Co-authored-by: Simon Willison <swillison@gmail.com>
2025-01-10 15:53:04 -08:00
Simon Willison
b452effa09 llm models -q/--query option, closes #700 2025-01-09 11:37:33 -08:00
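The -q/--query option added here filters the model list by a search term. A toy version of the filtering, assuming a case-insensitive substring match (llm's actual matching rules may be richer):

```python
def filter_models(model_ids, query):
    """Keep model IDs containing the query, case-insensitively."""
    q = query.lower()
    return [m for m in model_ids if q in m.lower()]

models = ["gpt-4o", "gpt-4o-mini", "o1-preview", "o1-mini"]
print(filter_models(models, "MINI"))  # ['gpt-4o-mini', 'o1-mini']
```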
Simon Willison
000e984def --extract support for templates, closes #681 2024-12-19 07:16:48 -08:00
Simon Willison
67d4a99645 llm prompt -x/--extract option, closes #681 2024-12-19 06:40:05 -08:00
Simon Willison
6305b86026 gpt-4o-mini-audio-preview, closes #677 2024-12-17 20:28:57 -08:00
Simon Willison
8898584ba6 New OpenAI audio models, closes #677 2024-12-17 11:14:42 -08:00
Simon Willison
aa25ad1d54 o1-preview and o1-mini can stream now
Refs https://github.com/simonw/llm/issues/676#issuecomment-2549328154
2024-12-17 10:53:15 -08:00
Simon Willison
571f4b2a4d
Fix for UTC warnings
Closes #672
2024-12-12 14:57:23 -08:00
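The UTC-warning fix above is almost certainly the standard one: datetime.utcnow() is deprecated (it emits DeprecationWarning on Python 3.12+) and returns a naive datetime, so code switches to an aware datetime instead:

```python
from datetime import datetime, timezone

# Aware UTC timestamp; replaces the deprecated, naive datetime.utcnow().
now = datetime.now(timezone.utc)
print(now.tzinfo)  # UTC
```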
Simon Willison
b8e8052229 Release 0.19.1
Refs #667
2024-12-05 13:47:28 -08:00
Simon Willison
491dd9b437 Removed accidental comment 2024-12-05 13:45:50 -08:00
Simon Willison
b6be09aa28 Fix get_models() and get_async_models() duplicates bug
Closes #667, refs #640
2024-12-05 13:44:07 -08:00
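The duplicates bug fixed above likely came from the same models being registered more than once. The usual shape of such a fix is order-preserving de-duplication; a generic sketch (not llm's actual code):

```python
def dedupe_preserving_order(models):
    """Drop duplicates while keeping first-seen order."""
    seen = set()
    out = []
    for m in models:
        if m not in seen:
            seen.add(m)
            out.append(m)
    return out

print(dedupe_preserving_order(["gpt-4o", "o1", "gpt-4o"]))  # ['gpt-4o', 'o1']
```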
Simon Willison
e78fea17df Fragment hash on 0.19 release
!stable-docs
2024-12-01 16:09:55 -08:00
Simon Willison
c018104083 Release 0.19
Refs #495, #610, #640, #641, #644, #653
2024-12-01 15:58:27 -08:00
Sukhbinder Singh
ac3d0089d0
Fix Windows bug where llm doesn't run llm chat on Windows, issue #495 (#646)

* Applied Black

---------

Co-authored-by: Sukhbinder Singh <sukhbindersingh@gmail.com>
Co-authored-by: Simon Willison <swillison@gmail.com>
2024-12-01 15:57:24 -08:00
Simon Willison
f9af563df5 response.on_done() mechanism, closes #653 2024-12-01 15:47:23 -08:00
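The response.on_done() mechanism registers a callback that fires once the response has finished. A toy stand-in illustrating the pattern; the method name mirrors llm's API but the internals here are illustrative only:

```python
class Response:
    """Toy response object with an on_done() callback mechanism."""
    def __init__(self):
        self._done = False
        self._callbacks = []

    def on_done(self, callback):
        # If the response already finished, fire immediately; otherwise queue.
        if self._done:
            callback(self)
        else:
            self._callbacks.append(callback)

    def _finish(self):
        self._done = True
        for cb in self._callbacks:
            cb(self)

events = []
r = Response()
r.on_done(lambda resp: events.append("logged"))
r._finish()
print(events)  # ['logged']
```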
Simon Willison
335b3e635a Release 0.19a2
Refs #640
2024-11-20 20:12:43 -08:00
Simon Willison
c52cfee881 llm.get_models() and llm.get_async_models(), closes #640 2024-11-20 20:09:06 -08:00
Simon Willison
845322e970 Release 0.19a1
Refs #644
2024-11-19 21:28:01 -08:00
Simon Willison
8a7b0c4f5d response.usage() and await aresponse.usage(), closes #644 2024-11-19 21:25:37 -08:00
Simon Willison
02852fe1a5 Release 0.19a0
Refs #610, #641
2024-11-19 20:23:54 -08:00
Simon Willison
cfb10f4afd
Log input tokens, output tokens and token details (#642)
* Store input_tokens, output_tokens, token_details on Response, closes #610
* llm prompt -u/--usage option
* llm logs -u/--usage option
* Docs on tracking token usage in plugins
* OpenAI default plugin logs usage
2024-11-19 20:21:59 -08:00
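Per the PR above, each Response now carries input_tokens, output_tokens and token_details. A minimal container showing those three fields, assuming token_details is a provider-specific dict (field names from the commit message; the class itself is illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Usage:
    """Token-usage fields the PR stores on Response."""
    input_tokens: int = 0
    output_tokens: int = 0
    token_details: dict = field(default_factory=dict)

    def summary(self):
        return f"{self.input_tokens} input, {self.output_tokens} output"

u = Usage(input_tokens=12, output_tokens=34, token_details={"cached_tokens": 0})
print(u.summary())  # 12 input, 34 output
```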
Simon Willison
4a059d722b Log --async responses to DB, closes #641
Refs #507
2024-11-19 18:11:52 -08:00
Simon Willison
a6d62b7ec9 Release 0.18
Refs #507, #600, #603, #608, #611, #612, #614
2024-11-17 12:31:48 -08:00
Simon Willison
0fec9746f4 text_or_raise() on sync Response too
Refs #632
2024-11-17 12:20:20 -08:00
Simon Willison
73823012ca Release 0.18a1
Refs #632
2024-11-14 15:10:39 -08:00
Simon Willison
cf172cc70a response.text_or_raise() workaround
Closes https://github.com/simonw/llm/issues/632
2024-11-14 15:08:41 -08:00
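The text_or_raise() workaround above gives callers a way to get the response text synchronously, raising if the response isn't complete yet. A toy model of that contract; llm's actual exception type and internals may differ:

```python
class ResponseLike:
    """Toy response: text_or_raise() returns text only once complete."""
    def __init__(self, text=None, done=False):
        self._text = text
        self._done = done

    def text_or_raise(self):
        if not self._done:
            raise ValueError("Response not yet complete")
        return self._text

ok = ResponseLike(text="hello", done=True)
print(ok.text_or_raise())  # hello
```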
Simon Willison
3b6e73445c Better __repr__ for Response and AsyncResponse 2024-11-14 14:42:40 -08:00