* Sync models can now call async tools, refs #987
* Test for async tool functions in sync context, refs #987
* Test for asyncio tools, plus test that they run in parallel
* Docs for async tool usage
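A minimal sketch of the behavior described above, assuming the documented tools= parameter on model.chain(); the model ID and the lookup_population tool are invented for illustration:

```python
import asyncio
import llm

async def lookup_population(country: str) -> int:
    "Hypothetical async tool: any async def function can be passed as a tool"
    await asyncio.sleep(0.1)  # stand-in for an async HTTP call
    return 68_000_000

# A sync model can now call the async tool - llm runs it in an
# event loop behind the scenes (refs #987)
model = llm.get_model("gpt-4o-mini")
chain = model.chain(
    "What is the population of France?",
    tools=[lookup_population],
)
print(chain.text())
```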
* WIP fragments: schema plus reading but not yet writing, refs #617
* Unique index on fragments.alias, refs #617
* Fragments are now persisted, added basic CLI commands
* Fragment aliases work now, refs #617
* Improved help for -f/--fragment
* Support looking up fragments by hash as well
* Documentation for fragments
* Better non-JSON display of llm fragments list
* llm fragments -q search option
* _truncate_string is now truncate_string
* Use condense_json to avoid duplicate data in JSON in DB, refs #617
* Follow up to 3 redirects when fetching fragments
* Python API docs for fragments= and system_fragments= (sketched after this group)
* Fragment aliases cannot contain a : character - this ensures we can add custom fragment loaders later on, refs https://github.com/simonw/llm/pull/859#issuecomment-2761534692
* Use template fragments when running prompts
* llm fragments show command plus llm fragments group tests
* Tests for fragments family of commands
* Test for --save with fragments
* Add fragments tables to docs/logging.md
* Slightly better llm fragments --help
* Handle fragments in past conversations correctly
* Hint at llm prompt --help in llm --help, closes #868
* llm logs -f filter plus show fragments in llm logs --json
* Include prompt and system fragments in llm logs -s
* llm logs markdown fragment output and tests, refs #617
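A sketch of the fragments= / system_fragments= Python API referenced above, assuming both accept lists of strings as in the docs; the fragment contents here are invented. Per the design in #617, fragments are de-duplicated by hash in the logs database:

```python
import llm

model = llm.get_model("gpt-4o-mini")
response = model.prompt(
    "Suggest an improvement to this function",
    # Fragments are stored once and referenced by hash, so re-sending
    # the same long context across prompts does not duplicate it
    fragments=["def add(a, b):\n    return a + b\n"],
    system_fragments=["You are a terse code reviewer."],
)
print(response.text())
```

The CLI equivalent is -f/--fragment, which also accepts URLs (following up to 3 redirects), hashes, and the aliases managed by the llm fragments family of commands.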
Refs #776
* Implemented new llm prompt --schema and model.prompt(schema=)
* Log schema to responses.schema_id and schemas table
* Include schema in llm logs Markdown output
* Test for schema=pydantic_model
* Initial --schema CLI documentation
* Python docs for schema=
* Advanced plugin docs on schemas
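A sketch of the schema support covered in this group, following the documented pattern of passing a Pydantic model class to schema= (a JSON schema dict also works); the model ID is illustrative:

```python
import json
import llm
from pydantic import BaseModel

class Dog(BaseModel):
    name: str
    age: int

model = llm.get_model("gpt-4o-mini")
# schema= accepts a Pydantic model class or a JSON schema dict;
# the schema is logged to the new schemas table alongside the response
response = model.prompt("Invent a good dog", schema=Dog)
dog = json.loads(response.text())
print(dog["name"], dog["age"])
```

On the CLI the same thing is llm prompt --schema with a JSON schema.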
* New KeyModel and AsyncKeyModel classes for models that take keys - closes #744
* llm prompt --key now uses new mechanism, including for async
* use new key mechanism in llm chat command
* Python API tests for llm.KeyModel and llm.AsyncKeyModel
* Python API docs for prompt(..., key="...")
* Mention await model.prompt() takes other parameters, reorg sections
* Better title for the model tutorial
* Docs on writing model plugins that take a key
Refs https://github.com/simonw/llm/issues/507#issuecomment-2458639308
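A sketch of a plugin model using the new key mechanism, assuming the KeyModel pattern where execute() is passed the resolved key; every name here (EchoModel, echo-service, ECHO_SERVICE_KEY) is invented:

```python
import llm

class EchoModel(llm.KeyModel):
    model_id = "echo"
    needs_key = "echo-service"        # key name for `llm keys set echo-service`
    key_env_var = "ECHO_SERVICE_KEY"  # environment variable fallback

    # Unlike plain Model.execute(), KeyModel.execute() receives the key
    # llm resolved from --key, the stored keys file, or the env var
    def execute(self, prompt, stream, response, conversation=None, key=None):
        yield f"echo: {prompt.prompt} (key provided: {key is not None})"

@llm.hookimpl
def register_models(register):
    register(EchoModel())
```

With this in place both `llm -m echo --key xyz 'hi'` and `model.prompt("hi", key="xyz")` route the key through the new mechanism, and AsyncKeyModel does the same for async execution.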
* register_model is now async aware
Refs https://github.com/simonw/llm/issues/507#issuecomment-2458658134
* Refactor Chat and AsyncChat to use _Shared base class
Refs https://github.com/simonw/llm/issues/507#issuecomment-2458692338
* Fixed function name
* Fix for infinite loop
* Applied Black
* Ran cog
* Applied Black
* Add Response.from_row() classmethod back again
It does not matter that this is a blocking call, since it is a classmethod
* Made mypy happy with llm/models.py
* mypy fixes for openai_models.py
I am unhappy with this, had to duplicate some code.
* First test for AsyncModel
* Still have not quite got this working
* Fix for not loading plugins during tests, refs #626
* audio/wav not audio/wave, refs #603
* Black and mypy and ruff all happy
* Refactor to avoid generics
* Removed obsolete response() method
* Support text = await async_mock_model.prompt("hello")
* Initial docs for llm.get_async_model() and await model.prompt()
Refs #507
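The usage pattern these docs cover - this matches the documented API, with an illustrative model ID:

```python
import asyncio
import llm

async def main():
    model = llm.get_async_model("gpt-4o-mini")
    # await model.prompt() returns an AsyncResponse
    response = await model.prompt("Describe a walrus in one sentence")
    print(await response.text())

asyncio.run(main())
```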
* Initial async model plugin creation docs
* Use ANY for duration_ms in test expectations so the test passes
* llm models --async option
Refs https://github.com/simonw/llm/pull/613#issuecomment-2474724406
* Removed obsolete TypeVars
* Expanded register_models() docs for async (sketched below)
* await model.prompt() now returns AsyncResponse
Refs https://github.com/simonw/llm/pull/613#issuecomment-2475157822
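A sketch of what async-aware model registration looks like for a plugin, assuming register() accepts the async counterpart as a second argument alongside the sync model; Demo and AsyncDemo are invented:

```python
import llm

class Demo(llm.Model):
    model_id = "demo"

    def execute(self, prompt, stream, response, conversation=None):
        yield "hello from the sync model"

class AsyncDemo(llm.AsyncModel):
    model_id = "demo"

    # AsyncModel.execute is an async generator
    async def execute(self, prompt, stream, response, conversation=None):
        yield "hello from the async model"

@llm.hookimpl
def register_models(register):
    # Registering the pair makes the model show up under
    # `llm models --async` and resolvable via llm.get_async_model()
    register(Demo(), AsyncDemo())
```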
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>