Made a start on tools.md docs, refs #997

Also documented register_tools() plugin hook, refs #991
This commit is contained in:
Simon Willison 2025-05-11 20:59:29 -07:00
parent 0f114be5f0
commit 4abd6e0faf
4 changed files with 50 additions and 1 deletions


@@ -93,6 +93,7 @@ setup
usage
openai-models
other-models
tools
schemas
templates
fragments


@@ -62,6 +62,31 @@ This demonstrates how to register a model with both sync and async versions, and
The {ref}`model plugin tutorial <tutorial-model-plugin>` describes how to use this hook in detail. Asynchronous models {ref}`are described here <advanced-model-plugins-async>`.
(plugin-hooks-register-tools)=
## register_tools(register)
This hook can register one or more tool functions for use with LLM. See {ref}`the tools documentation <tools>` for more details.
This example registers two tools: `upper` and `count_character_in_word`.
```python
import llm


def upper(text: str) -> str:
    """Convert text to uppercase."""
    return text.upper()


def count_char(text: str, character: str) -> int:
    """Count the number of occurrences of a character in a word."""
    return text.count(character)


@llm.hookimpl
def register_tools(register):
    register(upper)
    # Here the name= argument is used to specify a different name for the tool:
    register(count_char, name="count_character_in_word")
```
(plugin-hooks-register-template-loaders)=
## register_template_loaders(register)

docs/tools.md Normal file

@@ -0,0 +1,23 @@
(tools)=
# Tools
Many Large Language Models have been trained to execute tools as part of responding to a prompt. LLM supports tool usage with both the command-line interface and the Python API.
## How tools work
A tool is effectively a function that the model can request to be executed. Here's how that works:
1. The initial prompt to the model includes a list of available tools, containing their names, descriptions and parameters.
2. The model can choose to call one (or sometimes more than one) of those tools, returning a request for the tool to execute.
3. The code that calls the model - in this case LLM itself - then executes the specified tool with the provided arguments.
4. LLM prompts the model a second time, this time including the output of the tool execution.
5. The model can then use that output to generate its next response.
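The loop above can be sketched in plain Python. Everything here is illustrative: the `TOOLS` registry and the hard-coded `model_request` stand in for what an LLM provider's API would actually return, and are not part of LLM itself.

```python
import json


def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


# Step 1: the tools made available to the model, keyed by name.
TOOLS = {"multiply": multiply}

# Step 2: pretend the model replied with a request to call a tool.
# (In reality this structure comes back from the LLM provider's API.)
model_request = {"tool": "multiply", "arguments": {"a": 6, "b": 7}}

# Step 3: the calling code executes the requested tool with the given arguments.
tool = TOOLS[model_request["tool"]]
result = tool(**model_request["arguments"])

# Step 4: the output is serialized and sent back to the model in a follow-up
# prompt, so the model can use it in its next response (step 5).
follow_up = json.dumps({"tool_result": str(result)})
```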
## LLM's implementation of tools
In LLM every tool is defined as a Python function. The function can take any number of arguments and can return a string or an object that can be converted to a string.
Tool functions should include a docstring that describes what the function does. This docstring will become the description that is passed to the model.
The Python API can accept functions directly. The command-line interface has two ways for tools to be defined: via plugins that implement the {ref}`register_tools() plugin hook <plugin-hooks-register-tools>`, or directly on the command-line using the `--python-tools` argument to specify a block of Python code defining one or more functions.
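Since the docstring becomes the description passed to the model, it helps to see how that information can be read from a function. This sketch uses only the standard library's `inspect` module and is an illustration, not LLM's actual implementation:

```python
import inspect


def count_char(text: str, character: str) -> int:
    """Count the number of occurrences of a character in a word."""
    return text.count(character)


# The cleaned docstring is what would serve as the tool's description.
description = inspect.getdoc(count_char)

# The signature supplies the parameter names (and, via annotations, their types).
parameters = list(inspect.signature(count_char).parameters)
```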


@@ -276,7 +276,7 @@ def test_register_tools():
             cli.cli, ["tools", "--python-tools", "def reverse(s: str): return s[::-1]"]
         )
         assert result3.exit_code == 0
-        assert 'reverse(s: str)' in result3.output
+        assert "reverse(s: str)" in result3.output
     finally:
         plugins.pm.unregister(name="ToolsPlugin")
     assert llm.get_tools() == {}