New default llm_version tool, closes #1096

Refs https://github.com/simonw/llm/issues/1095#issuecomment-2910574597
Simon Willison 2025-05-26 13:30:47 -07:00
parent e23e13e6c7
commit 9bbb37fae0
5 changed files with 55 additions and 2 deletions


@@ -183,6 +183,7 @@ See also [the llm tag](https://simonwillison.net/tags/llm/) on my blog.
* [Extra HTTP headers](https://llm.datasette.io/en/stable/other-models.html#extra-http-headers)
* [Tools](https://llm.datasette.io/en/stable/tools.html)
* [How tools work](https://llm.datasette.io/en/stable/tools.html#how-tools-work)
* [Trying out tools](https://llm.datasette.io/en/stable/tools.html#trying-out-tools)
* [LLM's implementation of tools](https://llm.datasette.io/en/stable/tools.html#llm-s-implementation-of-tools)
* [Tips for implementing tools](https://llm.datasette.io/en/stable/tools.html#tips-for-implementing-tools)
* [Schemas](https://llm.datasette.io/en/stable/schemas.html)


@@ -14,6 +14,23 @@ A tool is effectively a function that the model can request to be executed. Here
4. LLM prompts the model a second time, this time including the output of the tool execution.
5. The model can then use that output to generate its next response.
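The five steps above can be sketched as a toy loop in pure Python. This is an illustration with a stand-in model and hypothetical names, not LLM's actual internals:

```python
# Toy sketch of the tool-call loop described above. The "model" here is a
# stand-in function, and all names are invented for illustration.

def llm_version() -> str:
    "Return the installed version of llm"
    return "0.26a0"  # stand-in value

TOOLS = {"llm_version": llm_version}

def fake_model(prompt, tool_result=None):
    # First pass: the model replies with a request to call a tool (step 3).
    if tool_result is None:
        return {"tool_call": "llm_version"}
    # Second pass: the model uses the tool output in its response (step 5).
    return {"text": f"The installed version of LLM is {tool_result}."}

def run(prompt):
    response = fake_model(prompt)                       # steps 1-3
    if "tool_call" in response:
        output = TOOLS[response["tool_call"]]()         # step 4: execute the tool
        response = fake_model(prompt, tool_result=output)  # re-prompt with output
    return response["text"]

print(run("What version of LLM is this?"))
```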
## Trying out tools
LLM comes with a default tool installed, called `llm_version`. You can try that out like this:
```bash
llm -T llm_version "What version of LLM is this?" --td
```
The output should look like this:
```
Tool call: llm_version({})
0.26a0
The installed version of the LLM is 0.26a0.
```
Further tools can be installed using plugins, or you can use the `llm --functions` option to pass tools implemented as Python functions directly, as {ref}`described here <usage-tools>`.
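As a sketch of what such a functions file might contain (the function names here are invented for illustration):

```python
# functions.py - plain Python functions of the kind the `llm --functions`
# option accepts as tools. These example functions are hypothetical.

def multiply(a: int, b: int) -> int:
    "Multiply two integers"
    return a * b

def reverse_string(text: str) -> str:
    "Reverse a string"
    return text[::-1]
```

You could then run something like `llm --functions functions.py "What is 34 times 78?" --td` to make those functions available to the model as tools.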
## LLM's implementation of tools
In LLM, every tool is defined as a Python function. The function can take any number of arguments and can return a string or an object that can be converted to a string.
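For example, a hypothetical tool that takes one argument and returns a string built from a dict - an object converted to a string before it is handed back to the model:

```python
import json

def word_stats(text: str) -> str:
    "Return simple statistics about a piece of text"
    # The dict is converted to a string (here via json.dumps) so the
    # result can be passed back to the model as tool output.
    return json.dumps({"words": len(text.split()), "characters": len(text)})
```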
@@ -30,4 +47,5 @@ Consult the {ref}`register_tools() plugin hook <plugin-hooks-register-tools>` do
If your plugin needs access to API secrets I recommend storing those using `llm keys set api-name` and then reading them using the {ref}`plugin-utilities-get-key` utility function. This avoids secrets being logged to the database as part of tool calls.
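The point is that the secret is looked up at call time rather than being baked into the tool-call arguments. A minimal sketch of that pattern - this is hypothetical code with an invented file layout, not the real `get_key()` implementation:

```python
import json
from pathlib import Path

def read_stored_key(alias: str, keys_path: Path) -> str:
    """Hypothetical lookup of a secret previously stored with
    `llm keys set <alias>`. Because the key is read inside the tool,
    it never appears in the logged tool-call arguments."""
    keys = json.loads(keys_path.read_text())
    return keys[alias]
```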
<!-- Uncomment when this is true: The [llm-tools-datasette](https://github.com/simonw/llm-tools-datasette) plugin is a good example of this pattern in action. -->


@@ -0,0 +1,12 @@
import llm
from importlib.metadata import version


def llm_version() -> str:
    "Return the installed version of llm"
    return version("llm")


@llm.hookimpl
def register_tools(register):
    register(llm_version)


@@ -5,7 +5,10 @@ import pluggy
import sys
from . import hookspecs
DEFAULT_PLUGINS = ("llm.default_plugins.openai_models",)
DEFAULT_PLUGINS = (
    "llm.default_plugins.openai_models",
    "llm.default_plugins.default_tools",
)
pm = pluggy.PluginManager("llm")
pm.add_hookspecs(hookspecs)


@@ -1,6 +1,9 @@
import asyncio
from click.testing import CliRunner
from importlib.metadata import version
import json
import llm
from llm import cli
from llm.migrations import migrate
import os
import pytest
@@ -217,3 +220,19 @@ def test_conversation_with_tools(vcr):
)
).text()
assert "841881498" in output2


def test_default_tool_llm_version():
    runner = CliRunner()
    result = runner.invoke(
        cli.cli,
        [
            "-m",
            "echo",
            "-T",
            "llm_version",
            json.dumps({"tool_calls": [{"name": "llm_version"}]}),
        ],
    )
    assert result.exit_code == 0
    assert '"output": "{}"'.format(version("llm")) in result.output