Simon Willison
1d75792f9b
More uv/uvx tips, closes #702
Refs #690
2025-01-11 10:06:32 -08:00
Ariel Marcus
d964d02e90
Add installation docs with uv (#690)
2025-01-11 09:57:10 -08:00
watany
1c61b5addd
doc(plugin): adding AmazonBedrock (#698)
2025-01-10 16:42:39 -08:00
Arjan Mossel
4f4f9bc07d
Add llm-venice to plugin directory (#699)
2025-01-10 16:41:21 -08:00
Simon Willison
73043ec406
Fixed mypy complaint
2025-01-10 16:05:29 -08:00
Simon Willison
38a7366d8e
o1 cannot stream
https://github.com/simonw/llm/issues/676#issuecomment-2584932453
2025-01-10 16:03:09 -08:00
Simon Willison
6baf1f7d83
o1
Closes #676
2025-01-10 15:57:06 -08:00
Csaba Henk
88a8cfd9e4
llm logs -x/--extract option (#693)
* llm logs -x/--extract option
* Update docs/help.md for llm logs -x
* Added test for llm logs -x/--extract, refs #693
* llm logs -xr behaves the same as llm logs -x
* -x/--extract in llm logging docs
---------
Co-authored-by: Simon Willison <swillison@gmail.com>
2025-01-10 15:53:04 -08:00
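The `-x/--extract` option added in the commit above pulls the first fenced code block out of a logged response. A minimal sketch of that kind of extraction, as a hypothetical standalone helper rather than the project's actual implementation:

```python
import re

# Build the fence marker programmatically to avoid a literal triple backtick here
FENCE = "`" * 3

def extract_fenced_code(text):
    """Return the body of the first fenced code block in text, or None."""
    pattern = re.compile(FENCE + r"[\w+-]*\n(.*?)" + FENCE, re.DOTALL)
    match = pattern.search(text)
    return match.group(1) if match else None
```

Real responses can contain nested or unterminated fences, which a sketch like this does not handle.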
Simon Willison
b452effa09
llm models -q/--query option, closes #700
2025-01-09 11:37:33 -08:00
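`llm models -q/--query` from the commit above filters the model list by a search term. Roughly that behavior, sketched as a hypothetical helper under the assumption of case-insensitive substring matching on model IDs:

```python
def filter_models(model_ids, query):
    """Keep only the model IDs containing query, ignoring case."""
    q = query.lower()
    return [m for m in model_ids if q in m.lower()]
```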
Simon Willison
000e984def
--extract support for templates, closes #681
2024-12-19 07:16:48 -08:00
Simon Willison
67d4a99645
llm prompt -x/--extract option, closes #681
2024-12-19 06:40:05 -08:00
Simon Willison
6305b86026
gpt-4o-mini-audio-preview, closes #677
2024-12-17 20:28:57 -08:00
Simon Willison
8898584ba6
New OpenAI audio models, closes #677
2024-12-17 11:14:42 -08:00
Simon Willison
aa25ad1d54
o1-preview and o1-mini can stream now
Refs https://github.com/simonw/llm/issues/676#issuecomment-2549328154
2024-12-17 10:53:15 -08:00
Simon Willison
571f4b2a4d
Fix for UTC warnings
Closes #672
2024-12-12 14:57:23 -08:00
Simon Willison
b8e8052229
Release 0.19.1
Refs #667
2024-12-05 13:47:28 -08:00
Simon Willison
491dd9b437
Removed accidental comment
2024-12-05 13:45:50 -08:00
Simon Willison
b6be09aa28
Fix get_models() and get_async_models() duplicates bug
Closes #667, refs #640
2024-12-05 13:44:07 -08:00
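The bug fixed above concerned the same model showing up more than once in get_models() / get_async_models(). A common fix pattern for that class of bug is an order-preserving de-duplication pass; a generic sketch, not the actual patch:

```python
def dedupe_preserving_order(items, key=lambda item: item):
    """Drop later duplicates while keeping first-seen order."""
    seen = set()
    result = []
    for item in items:
        k = key(item)
        if k not in seen:
            seen.add(k)
            result.append(item)
    return result
```

The optional key function lets the caller de-duplicate by an identifier (e.g. a model ID) rather than by object identity.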
Simon Willison
e78fea17df
Fragment hash on 0.19 release
!stable-docs
2024-12-01 16:09:55 -08:00
Simon Willison
c018104083
Release 0.19
Refs #495, #610, #640, #641, #644, #653
2024-12-01 15:58:27 -08:00
Sukhbinder Singh
ac3d0089d0
Fix Windows bug where llm doesn't run <<llm chat>>, issue #495 (#646)
* Fix Windows bug where llm doesn't run <<llm chat>>, issue #495
* Applied Black
---------
Co-authored-by: Sukhbinder Singh <sukhbindersingh@gmail.com>
Co-authored-by: Simon Willison <swillison@gmail.com>
2024-12-01 15:57:24 -08:00
Simon Willison
f9af563df5
response.on_done() mechanism, closes #653
2024-12-01 15:47:23 -08:00
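response.on_done() above registers a callback to run once a response finishes. The general shape of such a mechanism, in a hypothetical minimal class (not the library's actual Response):

```python
class Response:
    """Toy response object illustrating a done-callback mechanism."""

    def __init__(self):
        self._done = False
        self._callbacks = []

    def on_done(self, callback):
        # Fire immediately if already finished, otherwise queue the callback
        if self._done:
            callback(self)
        else:
            self._callbacks.append(callback)

    def _finish(self):
        # Mark complete and flush any queued callbacks
        self._done = True
        for cb in self._callbacks:
            cb(self)
```

Firing late registrations immediately means callers never have to check whether the response already completed.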
Simon Willison
335b3e635a
Release 0.19a2
Refs #640
2024-11-20 20:12:43 -08:00
Simon Willison
c52cfee881
llm.get_models() and llm.get_async_models(), closes #640
2024-11-20 20:09:06 -08:00
Simon Willison
845322e970
Release 0.19a1
Refs #644
2024-11-19 21:28:01 -08:00
Simon Willison
8a7b0c4f5d
response.usage() and await aresponse.usage(), closes #644
2024-11-19 21:25:37 -08:00
Simon Willison
02852fe1a5
Release 0.19a0
Refs #610, #641
2024-11-19 20:23:54 -08:00
Simon Willison
cfb10f4afd
Log input tokens, output tokens and token details (#642)
* Store input_tokens, output_tokens, token_details on Response, closes #610
* llm prompt -u/--usage option
* llm logs -u/--usage option
* Docs on tracking token usage in plugins
* OpenAI default plugin logs usage
2024-11-19 20:21:59 -08:00
Simon Willison
4a059d722b
Log --async responses to DB, closes #641
Refs #507
2024-11-19 18:11:52 -08:00
Simon Willison
a6d62b7ec9
Release 0.18
Refs #507, #600, #603, #608, #611, #612, #614
2024-11-17 12:31:48 -08:00
Simon Willison
0fec9746f4
text_or_raise() on sync Response too
Refs #632
2024-11-17 12:20:20 -08:00
Simon Willison
73823012ca
Release 0.18a1
Refs #632
2024-11-14 15:10:39 -08:00
Simon Willison
cf172cc70a
response.text_or_raise() workaround
Closes https://github.com/simonw/llm/issues/632
2024-11-14 15:08:41 -08:00
Simon Willison
3b6e73445c
Better __repr__ for Response and AsyncResponse
2024-11-14 14:42:40 -08:00
Simon Willison
f90f29dec9
Removed accidental commit of Usage class
2024-11-14 14:29:05 -08:00
Simon Willison
157b29ddeb
Test for basic async conversation, refs #632
2024-11-14 14:28:17 -08:00
Simon Willison
041730d8b2
Release 0.18a0
Refs #507, #599, #600, #603, #608, #611, #612, #613, #614, #615, #616, #621, #622, #623, #626, #629
2024-11-13 17:55:28 -08:00
Simon Willison
ba75c674cb
llm.get_async_model(), llm.AsyncModel base class and OpenAI async models (#613)
- https://github.com/simonw/llm/issues/507#issuecomment-2458639308
* register_model is now async aware
Refs https://github.com/simonw/llm/issues/507#issuecomment-2458658134
* Refactor Chat and AsyncChat to use _Shared base class
Refs https://github.com/simonw/llm/issues/507#issuecomment-2458692338
* fixed function name
* Fix for infinite loop
* Applied Black
* Ran cog
* Applied Black
* Add Response.from_row() classmethod back again
It does not matter that this is a blocking call, since it is a classmethod
* Made mypy happy with llm/models.py
* mypy fixes for openai_models.py
I am unhappy with this, had to duplicate some code.
* First test for AsyncModel
* Still have not quite got this working
* Fix for not loading plugins during tests, refs #626
* audio/wav not audio/wave, refs #603
* Black and mypy and ruff all happy
* Refactor to avoid generics
* Removed obsolete response() method
* Support text = await async_mock_model.prompt("hello")
* Initial docs for llm.get_async_model() and await model.prompt()
Refs #507
* Initial async model plugin creation docs
* duration_ms ANY to pass test
* llm models --async option
Refs https://github.com/simonw/llm/pull/613#issuecomment-2474724406
* Removed obsolete TypeVars
* Expanded register_models() docs for async
* await model.prompt() now returns AsyncResponse
Refs https://github.com/simonw/llm/pull/613#issuecomment-2475157822
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-11-13 17:51:00 -08:00
Hiepler
5a984d0c87
docs: add llm-grok (#629)
Adds `llm-grok` for the xAI API (https://github.com/Hiepler/llm-grok) to the plugin directory.
!stable-docs
2024-11-13 17:21:04 -08:00
Simon Willison
bc96e1c739
Ruff fix for #626
2024-11-13 06:37:31 -08:00
Simon Willison
7520671176
audio/wav not audio/wave, refs #603
2024-11-12 21:43:07 -08:00
Simon Willison
330e171e86
Fix for not loading plugins during tests, refs #626
2024-11-12 21:42:49 -08:00
gabriel pita
d34eac57d3
Update README.md (#621)
2024-11-12 19:07:28 -08:00
Travis Northcutt
c0cb1697bc
Update default model information (#622)
The default model is now 4o-mini; this change updates the usage page of the docs to reflect that.
2024-11-12 19:06:16 -08:00
Simon Willison
dff53a9cae
Better --help for llm keys get, refs #623
2024-11-11 09:53:24 -08:00
Simon Willison
561784df6e
llm keys get command, refs #623
2024-11-11 09:47:13 -08:00
Simon Willison
5d1d723d4b
Special case treat audio/wave as audio/wav, closes #603
2024-11-07 17:13:54 -08:00
Simon Willison
febbc04fb6
Run cog -r in PRs, use that to update logging.md with new tables (#616)
* Create cog.yml
* Document attachments and prompt_attachments table schemas
Closes #615
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-11-06 06:56:19 -08:00
Simon Willison
98d2c19876
Promote alternative model providers in llm --help
2024-11-06 06:38:53 -08:00
Simon Willison
3352eb9f57
Serialize usage to JSON properly, closes #614
2024-11-06 03:27:25 -08:00