mirror of
https://github.com/Hopiu/llm.git
synced 2026-04-27 00:14:46 +00:00
Clarify lazy loading
https://bsky.app/profile/simonwillison.net/post/3lknwgbph522h !stable-docs
This commit is contained in:
Parent: bfbcc201b7
Commit: 0f47565530
1 changed file with 6 additions and 0 deletions
@@ -18,6 +18,12 @@ model.key = "sk-..."
 response = model.prompt("Five surprising names for a pet pelican")
 print(response.text())
 ```
+Note that the prompt will not be evaluated until you call that `response.text()` method - a form of lazy loading.
+
+If you inspect the response before it has been evaluated it will look like this:
+
+    <Response prompt='Your prompt' text='... not yet done ...'>
+
 The `llm.get_model()` function accepts model IDs or aliases. You can also omit it to use the currently configured default model, which is `gpt-4o-mini` if you have not changed the default.
 
 In this example the key is set by Python code. You can also provide the key using the `OPENAI_API_KEY` environment variable, or use the `llm keys set openai` command to store it in a `keys.json` file, see {ref}`api-keys`.
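The lazy-loading behaviour the diff documents can be sketched with a minimal stand-in class. This is a hypothetical illustration of the pattern, not the real `llm` internals: the prompt is only sent to the model when `.text()` is first called, and the repr shows a placeholder until then.

```python
class Response:
    """Sketch of a lazily evaluated response (illustrative, not llm's actual class)."""

    def __init__(self, prompt, execute):
        self.prompt = prompt
        self._execute = execute  # callable that would actually call the model
        self._text = None
        self._done = False

    def text(self):
        # Evaluation happens on first access only - this is the lazy loading.
        if not self._done:
            self._text = self._execute(self.prompt)
            self._done = True
        return self._text

    def __repr__(self):
        text = self._text if self._done else "... not yet done ..."
        return f"<Response prompt={self.prompt!r} text={text!r}>"


# A fake "model" so the sketch runs without any API key.
response = Response("Your prompt", lambda p: f"Answer to: {p}")
print(response)         # placeholder repr, prompt not evaluated yet
print(response.text())  # triggers the (fake) model call
print(response)         # repr now shows the evaluated text
```

Inspecting the object before calling `.text()` prints the placeholder repr, matching the behaviour described in the docs above.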