Mirror of https://github.com/Hopiu/llm.git, synced 2026-05-11 07:13:10 +00:00
parent ac3d0089d0
commit c018104083
3 changed files with 16 additions and 1 deletion
@@ -1,5 +1,16 @@
# Changelog
## 0.19 (2024-12-01)
- Tokens used by a response are now logged to new `input_tokens` and `output_tokens` integer columns and a `token_details` JSON string column, for the default OpenAI models and models from other plugins that {ref}`implement this feature <advanced-model-plugins-usage>`. [#610](https://github.com/simonw/llm/issues/610)
- `llm prompt` now takes a `-u/--usage` flag to display token usage at the end of the response.
- `llm logs -u/--usage` shows token usage information for logged responses.
- `llm prompt ... --async` responses are now logged to the database. [#641](https://github.com/simonw/llm/issues/641)
- `llm.get_models()` and `llm.get_async_models()` functions, {ref}`documented here <python-api-listing-models>`. [#640](https://github.com/simonw/llm/issues/640)
- `response.usage()` and async response `await response.usage()` methods, returning a `Usage(input=2, output=1, details=None)` dataclass. [#644](https://github.com/simonw/llm/issues/644)
- `response.on_done(callback)` and `await response.on_done(callback)` methods for specifying a callback to be executed when a response has completed, {ref}`documented here <python-api-response-on-done>`. [#653](https://github.com/simonw/llm/issues/653)
- Fix for bug running `llm chat` on Windows 11. Thanks, [Sukhbinder Singh](https://github.com/sukhbinder). [#495](https://github.com/simonw/llm/issues/495)
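The Python-facing entries above can be illustrated with a short sketch. This is not part of the diff: the model name and prompt are placeholders, and `llm.get_model()` is assumed to behave as in earlier releases.

```python
import llm

# llm.get_models() / llm.get_async_models() (#640) list the available model objects.
for m in llm.get_models():
    print(m.model_id)

model = llm.get_model("gpt-4o-mini")  # placeholder model name
response = model.prompt("Say hello in five words")
print(response.text())

# response.usage() (#644) returns a Usage dataclass with input/output token
# counts and an optional details field, e.g. Usage(input=2, output=1, details=None).
usage = response.usage()
print(usage.input, usage.output, usage.details)
```

On the CLI side, the same counts are what `llm prompt -u/--usage` and `llm logs -u/--usage` display, and what the new `input_tokens`, `output_tokens` and `token_details` columns record.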
(v0_19a2)=
## 0.19a2 (2024-11-20)
@@ -160,6 +160,8 @@ async for chunk in model.prompt(
    print(chunk, end="", flush=True)
```
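The hunk above is context from the async streaming section of the Python API docs. For reference, a minimal self-contained version of that pattern might look like this; it is not part of the diff, the model name is a placeholder, and `llm.get_async_model()` is assumed from the earlier async support.

```python
import asyncio
import llm


async def main():
    # Assumes an async-capable model; "gpt-4o-mini" is just a placeholder name.
    model = llm.get_async_model("gpt-4o-mini")
    # Iterate over the streamed chunks as they arrive and print them immediately.
    async for chunk in model.prompt("Describe a sunrise in one sentence"):
        print(chunk, end="", flush=True)
    print()


asyncio.run(main())
```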
(python-api-conversations)=
## Conversations
LLM supports *conversations*, where you ask follow-up questions of a model as part of an ongoing conversation.
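As a rough illustration of that conversation API (not part of the diff; the model name and prompts are placeholders):

```python
import llm

model = llm.get_model("gpt-4o-mini")  # placeholder model name
conversation = model.conversation()

# Each prompt() call sends the accumulated conversation history to the model.
print(conversation.prompt("Name three mammals that live in the ocean").text())
print(conversation.prompt("Which of those is the largest?").text())
```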
@@ -195,6 +197,8 @@ response = conversation.prompt(
Access `conversation.responses` for a list of all of the responses that have so far been returned during the conversation.
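Continuing the conversation sketch above, a hedged example of combining `conversation.responses` with the new `response.usage()` method from the 0.19 changelog:

```python
# Continues from the conversation sketch above (requires the `conversation` object).
total_input = total_output = 0
for response in conversation.responses:
    usage = response.usage()
    total_input += usage.input or 0
    total_output += usage.output or 0

print(f"{len(conversation.responses)} responses so far: "
      f"{total_input} input tokens, {total_output} output tokens")
```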
(python-api-response-on-done)=
## Running code when a response has completed
For some applications, such as tracking the tokens used by an application, it may be useful to execute code as soon as a response has completed.
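The mechanism added for this in 0.19 is `response.on_done(callback)` (see the changelog hunk above). A hedged sketch, assuming the callback receives the completed response as its single argument and that the model name is a placeholder:

```python
import llm

model = llm.get_model("gpt-4o-mini")  # placeholder model name
response = model.prompt("Write a haiku about autumn")

# Register a callback that fires once the response has fully completed.
# Assumption: the callback is passed the finished Response object.
response.on_done(lambda r: print("Token usage:", r.usage()))

# Consuming the response triggers completion, which in turn runs the callback.
print(response.text())
```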
setup.py (2 changed lines)
@@ -1,7 +1,7 @@
from setuptools import setup, find_packages
import os
VERSION = "0.19a2"
VERSION = "0.19"
def get_long_description():