llm.get_key() is now a documented utility, closes #1094

Refs #1093, https://github.com/simonw/llm-tools-datasette/issues/2
This commit is contained in:
Simon Willison 2025-05-26 09:39:40 -07:00
parent 5a8d7178c3
commit 7eb8acb767
4 changed files with 62 additions and 5 deletions


@@ -275,6 +275,7 @@ See also [the llm tag](https://simonwillison.net/tags/llm/) on my blog.
* [Attachments for multi-modal models](https://llm.datasette.io/en/stable/plugins/advanced-model-plugins.html#attachments-for-multi-modal-models)
* [Tracking token usage](https://llm.datasette.io/en/stable/plugins/advanced-model-plugins.html#tracking-token-usage)
* [Utility functions for plugins](https://llm.datasette.io/en/stable/plugins/plugin-utilities.html)
* [llm.get_key()](https://llm.datasette.io/en/stable/plugins/plugin-utilities.html#llm-get-key)
* [llm.user_dir()](https://llm.datasette.io/en/stable/plugins/plugin-utilities.html#llm-user-dir)
* [llm.ModelError](https://llm.datasette.io/en/stable/plugins/plugin-utilities.html#llm-modelerror)
* [Response.fake()](https://llm.datasette.io/en/stable/plugins/plugin-utilities.html#response-fake)


@@ -3,6 +3,31 @@
LLM provides some utility functions that may be useful to plugins.
(plugin-utilities-get-key)=
## llm.get_key()
This method can be used to look up secrets that users have stored using the {ref}`llm keys set <help-keys-set>` command. If your plugin needs to access an API key or other secret, this can be a convenient way to provide that.
This returns either a string containing the key or `None` if the key could not be resolved.
Use the `alias="name"` option to retrieve the key set with that alias:
```python
github_key = llm.get_key(alias="github")
```
You can also add `env="ENV_VAR"` to fall back to looking in that environment variable if the key has not been configured:
```python
github_key = llm.get_key(alias="github", env="GITHUB_TOKEN")
```
In some cases you may allow users to provide a key as input, where they could provide either the key itself or the alias of a key to look up in `keys.json`. Use the `input=` parameter for that:
```python
github_key = llm.get_key(input=input_from_user, alias="github", env="GITHUB_TOKEN")
```
A previous version of this function used positional arguments in a confusing order. These are still supported, but the keyword arguments are now the recommended way to use `llm.get_key()` going forward.
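The resolution order described above can be sketched as follows. This is a hypothetical illustration, not the real implementation: `resolve_key` and its `stored_keys` dict are stand-ins for `llm.get_key()` and the contents of `keys.json`:

```python
import os


def resolve_key(stored_keys, input=None, alias=None, env=None):
    # Hypothetical sketch of the precedence llm.get_key() applies;
    # not the actual library code.
    if input is not None:
        # input may be either the key itself or an alias into keys.json
        return stored_keys.get(input, input)
    if alias is not None and alias in stored_keys:
        return stored_keys[alias]
    if env is not None:
        # Fall back to the environment variable, if configured
        return os.environ.get(env)
    return None
```

User-provided input takes priority, then the stored alias, with the environment variable as the final fallback.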
(plugin-utilities-user-dir)=
## llm.user_dir()
LLM stores various pieces of logging and configuration data in a directory on the user's machine.
@@ -21,6 +46,7 @@ plugin_dir.mkdir(exist_ok=True)
data_path = plugin_dir / "plugin-data.db"
```
(plugin-utilities-modelerror)=
## llm.ModelError
If your model encounters an error that should be reported to the user you can raise this exception. For example:
@@ -32,6 +58,7 @@ raise ModelError("MPT model not installed - try running 'llm mpt30b download'")
```
This will be caught by the CLI layer and displayed to the user as an error message.
(plugin-utilities-response-fake)=
## Response.fake()
When writing tests for a model it can be useful to generate fake response objects, for example in this test from [llm-mpt30b](https://github.com/simonw/llm-mpt30b):


@@ -333,15 +333,28 @@ def get_model(name: Optional[str] = None, _skip_async: bool = False) -> Model:
def get_key(
explicit_key: Optional[str], key_alias: str, env_var: Optional[str] = None
explicit_key: Optional[str] = None,
key_alias: Optional[str] = None,
env_var: Optional[str] = None,
*,
alias: Optional[str] = None,
env: Optional[str] = None,
input: Optional[str] = None,
) -> Optional[str]:
"""
Return an API key based on a hierarchy of potential sources.
Return an API key based on a hierarchy of potential sources. You should use the keyword arguments;
the positional arguments are here purely for backwards-compatibility with older code.
:param provided_key: A key provided by the user. This may be the key, or an alias of a key in keys.json.
:param key_alias: The alias used to retrieve the key from the keys.json file.
:param env_var: Name of the environment variable to check for the key.
:param input: Input provided by the user. This may be the key, or an alias of a key in keys.json.
:param alias: The alias used to retrieve the key from the keys.json file.
:param env: Name of the environment variable to check for the key as a final fallback.
"""
if alias:
key_alias = alias
if env:
env_var = env
if input:
explicit_key = input
stored_keys = load_keys()
# If user specified an alias, use the key stored for that alias
if explicit_key in stored_keys:


@@ -1,3 +1,4 @@
import json
import pytest
from llm.utils import (
extract_fenced_code_block,
@@ -7,6 +8,7 @@ from llm.utils import (
simplify_usage_dict,
truncate_string,
)
from llm import get_key
@pytest.mark.parametrize(
@@ -444,3 +446,17 @@ def test_instantiate_valid(spec, expected_cls, expected_attrs):
def test_instantiate_invalid(spec):
with pytest.raises(ValueError):
instantiate_from_spec({"Files": Files, "ValueFlag": ValueFlag}, spec)
def test_get_key(user_path, monkeypatch):
monkeypatch.setenv("ENV", "from-env")
(user_path / "keys.json").write_text(json.dumps({"testkey": "TEST"}), "utf-8")
assert get_key(alias="testkey") == "TEST"
assert get_key(input="testkey") == "TEST"
assert get_key(alias="missing", env="ENV") == "from-env"
assert get_key(alias="missing") is None
# found key should override env
assert get_key(input="testkey", env="ENV") == "TEST"
# explicit key should override alias
assert get_key(input="explicit", alias="testkey") == "explicit"
assert get_key(input="explicit", alias="testkey", env="ENV") == "explicit"