diff --git a/README.md b/README.md
index 96eee44..9a7e18f 100644
--- a/README.md
+++ b/README.md
@@ -273,6 +273,7 @@ See also [the llm tag](https://simonwillison.net/tags/llm/) on my blog.
   * [Models that accept API keys](https://llm.datasette.io/en/stable/plugins/advanced-model-plugins.html#models-that-accept-api-keys)
   * [Async models](https://llm.datasette.io/en/stable/plugins/advanced-model-plugins.html#async-models)
   * [Supporting schemas](https://llm.datasette.io/en/stable/plugins/advanced-model-plugins.html#supporting-schemas)
+  * [Support tools](https://llm.datasette.io/en/stable/plugins/advanced-model-plugins.html#support-tools)
   * [Attachments for multi-modal models](https://llm.datasette.io/en/stable/plugins/advanced-model-plugins.html#attachments-for-multi-modal-models)
   * [Tracking token usage](https://llm.datasette.io/en/stable/plugins/advanced-model-plugins.html#tracking-token-usage)
 * [Utility functions for plugins](https://llm.datasette.io/en/stable/plugins/plugin-utilities.html)
diff --git a/docs/plugins/advanced-model-plugins.md b/docs/plugins/advanced-model-plugins.md
index 8a4f908..25bae95 100644
--- a/docs/plugins/advanced-model-plugins.md
+++ b/docs/plugins/advanced-model-plugins.md
@@ -8,6 +8,7 @@ Features to consider for your model plugin include:
 
 - {ref}`Accepting API keys <advanced-model-plugins-api-keys>` using the standard mechanism that incorporates `llm keys set`, environment variables and support for passing an explicit key to the model.
 - Including support for {ref}`Async models <advanced-model-plugins-async>` that can be used with Python's `asyncio` library.
 - Support for {ref}`structured output <advanced-model-plugins-schemas>` using JSON schemas.
+- Support for {ref}`tools <advanced-model-plugins-tools>`.
 - Handling {ref}`attachments <advanced-model-plugins-attachments>` (images, audio and more) for multi-modal models.
 - Tracking {ref}`token usage <advanced-model-plugins-usage>` for models that charge by the token.
@@ -122,6 +123,21 @@ And then adding code to your `.execute()` method that checks for `prompt.schema`
 
 Check the [llm-gemini](https://github.com/simonw/llm-gemini) and [llm-anthropic](https://github.com/simonw/llm-anthropic) plugins for example of this pattern in action.
 
+(advanced-model-plugins-tools)=
+
+## Support tools
+
+Adding {ref}`tools support <tools>` involves several steps:
+
+1. Add `supports_tools = True` to your model class.
+2. If `prompt.tools` is populated, convert that list of `llm.Tool` objects into the format your model expects.
+3. Watch for tool call requests in the responses from your model and call `response.add_tool_call(llm.ToolCall(...))` for each one. This should work for streaming and non-streaming responses, in both sync and async cases.
+4. If the prompt has a `prompt.tool_results` list, pass the information from those `llm.ToolResult` objects on to your model.
+5. Include `prompt.tools`, `prompt.tool_results` and the tool calls from `response.tool_calls_or_raise()` in the conversation history constructed by your plugin.
+6. Make sure your code can handle prompts that do not have `prompt.prompt` set to a value, since they may carry exclusively the results of a tool call.
+
+This [commit to llm-gemini](https://github.com/simonw/llm-gemini/commit/a7f1096cfbb733018eb41c29028a8cc6160be298) implementing tools demonstrates what this looks like for a real plugin.
+
 (advanced-model-plugins-attachments)=
 
 ## Attachments for multi-modal models
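Step 2 of the new section — turning `llm.Tool` objects into a model's own tool-definition format — could be sketched roughly like this. This is an illustrative sketch, not the llm package's actual API: the `Tool` stand-in dataclass only mirrors the fields a plugin typically reads (name, description, JSON schema for arguments), and `tools_to_provider_format()` and its "function declaration" wire format are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Any

# Stand-in for llm.Tool -- the real class lives in the llm package.
# Only the fields a plugin typically needs for conversion are mirrored here.
@dataclass
class Tool:
    name: str
    description: str
    input_schema: dict = field(default_factory=dict)

def tools_to_provider_format(tools: list[Tool]) -> list[dict[str, Any]]:
    """Convert Tool objects into a hypothetical provider's
    'function declaration' format, one dict per tool."""
    return [
        {
            "name": tool.name,
            "description": tool.description,
            # Fall back to an empty object schema for tools with no arguments
            "parameters": tool.input_schema or {"type": "object", "properties": {}},
        }
        for tool in tools
    ]

tools = [
    Tool(
        "lookup_weather",
        "Get the weather for a city",
        {"type": "object", "properties": {"city": {"type": "string"}}},
    )
]
print(tools_to_provider_format(tools)[0]["name"])  # lookup_weather
```

Each provider names these fields differently (for example, some nest the declarations under a wrapper key), so the real conversion in a plugin's `.execute()` method should follow that provider's API documentation.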