llm/docs/plugins/plugin-hooks.md


# Plugin hooks
Plugins use **plugin hooks** to customize LLM's behavior. These hooks are powered by the [Pluggy plugin system](https://pluggy.readthedocs.io/).

Each plugin can implement one or more hooks using the `@hookimpl` decorator against one of the hook function names described on this page.

LLM imitates the Datasette plugin system. The [Datasette plugin documentation](https://docs.datasette.io/en/stable/writing_plugins.html) describes how plugins work.
## register_commands(cli)
This hook adds new commands to the `llm` CLI tool - for example `llm extra-command`.
This example plugin adds a new `hello-world` command that prints "Hello world!":
```python
from llm import hookimpl
import click


@hookimpl
def register_commands(cli):
    @cli.command(name="hello-world")
    def hello_world():
        "Print hello world"
        click.echo("Hello world!")
```
This new command will be added to `llm --help` and can be run using `llm hello-world`.
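Commands registered this way are regular [Click](https://click.palletsprojects.com/) commands, so they can accept arguments and options. As a rough sketch (the `count-words` command, its argument, and the bare `cli` group are hypothetical stand-ins, not part of LLM):

```python
import click
from click.testing import CliRunner


@click.group()
def cli():
    "Stand-in for the cli group that LLM passes to register_commands()"


# In a real plugin, this decorated function would live inside a
# register_commands(cli) hook implementation.
@cli.command(name="count-words")
@click.argument("text")
def count_words(text):
    "Print the number of words in TEXT"
    click.echo(len(text.split()))


result = CliRunner().invoke(cli, ["count-words", "hello plugin world"])
print(result.output.strip())
```

`CliRunner` is Click's built-in test helper, which makes it easy to exercise plugin commands without installing them.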
## register_models(register)
This hook can be used to register one or more additional models.
```python
import llm


@llm.hookimpl
def register_models(register):
    register(HelloWorld())


class HelloWorld(llm.Model):
    model_id = "helloworld"

    def execute(self, prompt, stream, response):
        return ["hello world"]
```
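The strings returned (or yielded) by `execute()` are the chunks of the response, which LLM concatenates into the final text. A minimal stand-in class (hypothetical, not subclassing the real `llm.Model`) illustrates that chunked contract:

```python
class UppercaseModel:
    # Hypothetical stand-in for llm.Model: only models the
    # execute() streaming contract, nothing else.
    model_id = "uppercase"

    def execute(self, prompt, stream, response):
        # Yield the response in chunks; the caller joins them.
        for word in prompt.upper().split():
            yield word + " "


model = UppercaseModel()
text = "".join(model.execute("hello world", stream=True, response=None)).strip()
print(text)
```

Returning a list, as in the `HelloWorld` example above, works the same way since a list is also iterable.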
If your model includes an async version, you can register that too:
```python
class AsyncHelloWorld(llm.AsyncModel):
    model_id = "helloworld"

    async def execute(self, prompt, stream, response):
        return ["hello world"]


@llm.hookimpl
def register_models(register):
    register(HelloWorld(), AsyncHelloWorld(), aliases=("hw",))
```
This demonstrates how to register a model with both sync and async versions, and how to specify an alias for that model.
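Async models follow the same chunked pattern, but their output is consumed with `async for`. A rough sketch of that consumption, using a hypothetical stand-in class rather than the real `llm.AsyncModel`:

```python
import asyncio


class AsyncUppercase:
    # Hypothetical stand-in for llm.AsyncModel: execute() here is
    # an async generator yielding response chunks.
    model_id = "uppercase"

    async def execute(self, prompt, stream, response):
        for word in prompt.upper().split():
            yield word + " "


async def main():
    chunks = []
    async for chunk in AsyncUppercase().execute("hello world", True, None):
        chunks.append(chunk)
    return "".join(chunks).strip()


print(asyncio.run(main()))
```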
The {ref}`model plugin tutorial <tutorial-model-plugin>` describes how to use this hook in detail. Asynchronous models {ref}`are described here <advanced-model-plugins-async>`.