Simon Willison
656d8fa3c4
--xl/--extract-last flag for prompt and log list commands (#718)
...
Closes #717
2025-01-24 10:52:46 -08:00
Csaba Henk
88a8cfd9e4
llm logs -x/--extract option (#693)
...
* llm logs -x/--extract option
* Update docs/help.md for llm logs -x
* Added test for llm logs -x/--extract, refs #693
* llm logs -xr behaves same as llm logs -x
* -x/--extract in llm logging docs
---------
Co-authored-by: Simon Willison <swillison@gmail.com>
2025-01-10 15:53:04 -08:00
Simon Willison
b452effa09
llm models -q/--query option, closes #700
2025-01-09 11:37:33 -08:00
Simon Willison
67d4a99645
llm prompt -x/--extract option, closes #681
2024-12-19 06:40:05 -08:00
Simon Willison
571f4b2a4d
Fix for UTC warnings
...
Closes #672
2024-12-12 14:57:23 -08:00
Simon Willison
b6be09aa28
Fix get_models() and get_async_models() duplicates bug
...
Closes #667, refs #640
2024-12-05 13:44:07 -08:00
Simon Willison
f9af563df5
response.on_done() mechanism, closes #653
2024-12-01 15:47:23 -08:00
Simon Willison
c52cfee881
llm.get_models() and llm.get_async_models(), closes #640
2024-11-20 20:09:06 -08:00
Simon Willison
cfb10f4afd
Log input tokens, output tokens and token details (#642)
...
* Store input_tokens, output_tokens, token_details on Response, closes #610
* llm prompt -u/--usage option
* llm logs -u/--usage option
* Docs on tracking token usage in plugins
* OpenAI default plugin logs usage
2024-11-19 20:21:59 -08:00
Simon Willison
ba75c674cb
llm.get_async_model(), llm.AsyncModel base class and OpenAI async models (#613)
...
- https://github.com/simonw/llm/issues/507#issuecomment-2458639308
* register_model is now async aware
Refs https://github.com/simonw/llm/issues/507#issuecomment-2458658134
* Refactor Chat and AsyncChat to use _Shared base class
Refs https://github.com/simonw/llm/issues/507#issuecomment-2458692338
* fixed function name
* Fix for infinite loop
* Applied Black
* Ran cog
* Applied Black
* Add Response.from_row() classmethod back again
It does not matter that this is a blocking call, since it is a classmethod
* Made mypy happy with llm/models.py
* mypy fixes for openai_models.py
I am unhappy with this, had to duplicate some code.
* First test for AsyncModel
* Still have not quite got this working
* Fix for not loading plugins during tests, refs #626
* audio/wav not audio/wave, refs #603
* Black and mypy and ruff all happy
* Refactor to avoid generics
* Removed obsolete response() method
* Support text = await async_mock_model.prompt("hello")
* Initial docs for llm.get_async_model() and await model.prompt()
Refs #507
* Initial async model plugin creation docs
* duration_ms ANY to pass test
* llm models --async option
Refs https://github.com/simonw/llm/pull/613#issuecomment-2474724406
* Removed obsolete TypeVars
* Expanded register_models() docs for async
* await model.prompt() now returns AsyncResponse
Refs https://github.com/simonw/llm/pull/613#issuecomment-2475157822
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-11-13 17:51:00 -08:00
Simon Willison
12df1a3b2a
Show attachment types in llm models --options, closes #612
2024-11-05 22:49:26 -08:00
Simon Willison
6deed8f976
get_model() improvement, get_default_model() / set_default_model() now documented
...
Refs #553
2024-08-18 17:37:31 -07:00
Simon Willison
a83421607a
Switch default model to gpt-4o-mini (from gpt-3.5-turbo), refs #536
2024-07-18 11:57:19 -07:00
Simon Willison
964f4d9934
Fix for llm logs -q plus -m bug, closes #515
2024-06-16 14:35:38 -07:00
Simon Willison
fb63c92cd2
llm logs -r/--response option, closes #431
2024-03-04 13:29:07 -08:00
Simon Willison
8021e12aaa
Windows readline fix, plus run CI against macOS and Windows
...
* Run CI on Windows and macOS as well as Ubuntu, refs #407
* Use pyreadline3 on win32
* Back to fail-fast since we have a bigger matrix now
* Mark some tests as xfail on windows
2024-01-26 16:24:58 -08:00
Simon Willison
214fcaaf86
Upgrade to run against OpenAI >= 1.0
...
* strategy: fail-fast: false - to help see all errors
* Apply latest Black
Refs #325
2024-01-25 22:00:44 -08:00
Simon Willison
b4ec54ef19
NotImplementedError for system prompts with OpenAI completion models, refs #284
...
Signed-off-by: Simon Willison <swillison@gmail.com>
2023-09-18 22:51:22 -07:00
Simon Willison
f76b2120e4
Revert "Handle system prompts for completion models, refs #284"
...
This reverts commit 4eed871cf1.
Decision made in #288
2023-09-18 22:44:38 -07:00
Simon Willison
4eed871cf1
Handle system prompts for completion models, refs #284
2023-09-18 22:36:38 -07:00
Simon Willison
fcff36c6bc
completion: true to register completion models, refs #284
2023-09-18 22:17:26 -07:00
Simon Willison
4fea46113f
logprobs support for OpenAI completion models, refs #284
2023-09-18 22:04:28 -07:00
Simon Willison
2b504279d9
Test for OpenAI chat streaming, closes #287
2023-09-18 21:27:36 -07:00
Simon Willison
4d18da4e11
Bump default gpt-3.5-turbo-instruct max tokens to 256, refs #284
2023-09-18 20:29:39 -07:00
Simon Willison
4d46ebaa32
OpenAI completion models including gpt-3.5-turbo-instruct, refs #284
2023-09-18 18:34:32 -07:00
Simon Willison
6c43948325
llm.user_dir() creates directory if needed, closes #275
...
Would have fixed this bug too:
- https://github.com/simonw/llm-sentence-transformers/issues/9
2023-09-13 15:58:18 -07:00
Simon Willison
b30f6894f7
Fixed bug if LLM directory does not exist, closes #193
2023-08-31 20:26:05 -07:00
Simon Willison
c8e9565f47
Combine piped and argument prompts, closes #153
2023-08-20 22:57:29 -07:00
Simon Willison
36f8ffc2a1
Test for llm models --options, refs #169
2023-08-19 21:26:23 -07:00
Simon Willison
79d09936cb
Lose the indentation for log output, refs #160
2023-08-17 14:12:57 -07:00
Simon Willison
113df5dd87
llm logs now defaults to text output, use --json for JSON, use -c X for specific conversation
...
Refs #160
2023-08-17 13:57:18 -07:00
Simon Willison
cb41409e2b
conversation_name should not have newlines, closes #110
2023-07-15 21:28:35 -07:00
Simon Willison
178af27d95
llm logs list -q/--query option, closes #109
2023-07-15 16:20:28 -07:00
Simon Willison
ed8cd776a4
llm logs list -m model_id/alias option, closes #108
2023-07-15 15:55:13 -07:00
Simon Willison
e2072f7044
Ability to register additional OpenAI-compatible models
...
Closes #107, closes #106
2023-07-15 10:01:03 -07:00
Simon Willison
fa67b3fdaf
--log option, closes #68
2023-07-11 20:18:16 -07:00
Simon Willison
d2fcff3f3a
llm logs on / off commands, refs #98
2023-07-11 19:48:16 -07:00
Simon Willison
2d3ebe7fe1
llm logs now uses new DB schema, refs #91
2023-07-11 07:21:22 -07:00
Simon Willison
c08344f986
llm logs now decodes JSON for prompt_json etc
2023-07-05 18:31:38 -07:00
Simon Willison
de81cc9a9e
Test messages logged in new format
2023-07-03 08:12:04 -07:00
Simon Willison
345ad0d2dc
Implemented new logs database schema
2023-07-03 07:27:47 -07:00
Simon Willison
9a180e65a8
llm models default command, plus refactored env variables
...
Closes #76
Closes #31
2023-07-01 14:01:29 -07:00
Simon Willison
c4513068fb
Disabled DB logging test for the moment
2023-07-01 11:07:10 -07:00
Simon Willison
b27c2cdac9
Rename log.db to logs.db, closes #41
2023-06-17 09:29:46 +01:00
Simon Willison
8308fe5cbf
Store debug info, closes #34
2023-06-16 08:51:56 +01:00
Simon Willison
af8d596e63
Moved logs.db to user_data_dir, closes #27
2023-06-15 20:19:02 +01:00
Simon Willison
1bb04c416c
Move logs.db to user_data_dir, refs #27
2023-06-15 19:59:08 +01:00
Simon Willison
44bcc8f63c
New table schema, with new migrations system - closes #26
2023-06-15 19:33:31 +01:00
Simon Willison
6545ce9da6
Stream by default, added --no-stream option, closes #25
...
Also finished the work needed to remove --code, refs #24
2023-06-15 18:49:11 +01:00
Simon Willison
12d6cf1049
Initial test for 'llm prompt' command, closes #18
2023-06-14 18:32:48 +01:00