mirror of
https://github.com/Hopiu/llm.git
synced 2026-04-30 01:44:44 +00:00
Docs for llm logs on/off/status, closes #98
parent 833c4b4892
commit 05b4bcf57c
3 changed files with 63 additions and 27 deletions
@ -1,13 +1,9 @@
(logging)=
# Logging to SQLite

`llm` defaults to logging all prompts and responses to a SQLite database.
You can find the location of that database using the `llm logs path` command:

```bash
llm logs path
@ -18,12 +14,35 @@ On my Mac that outputs:
```
This will differ for other operating systems.
To avoid logging an individual prompt, pass `--no-log` or `-n` to the command:
```bash
llm 'Ten names for cheesecakes' -n
```
To turn off logging by default:

```bash
llm logs off
```
To turn it back on again:

```bash
llm logs on
```
To see the status of that database, run this:
```bash
llm logs status
```
Example output:
```
Logging is ON for all prompts
Found log database at /Users/simon/Library/Application Support/io.datasette.llm/logs.db
Number of conversations logged: 32
Number of responses logged: 47
Database file size: 19.96MB
```
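Since the logs live in an ordinary SQLite file, you can also query them directly with the `sqlite3` command line tool. A minimal sketch, assuming `sqlite3` is installed; the `responses` table name is an assumption inferred from the status output above, not a documented schema:

```bash
# Count logged responses by querying the database that `llm logs path` points at.
# NOTE: the `responses` table name is an assumed detail of the schema.
db="$(llm logs path)"
sqlite3 "$db" 'select count(*) from responses;'
```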
## Viewing the logs

You can view the logs using the `llm logs` command:
@ -98,7 +98,26 @@ If no environment variable is found, the tool will fall back to checking `keys.j
You can force the tool to use the key from `keys.json`, even if an environment variable has also been set, by running `llm "prompt" --key openai`.
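For example, this sketch assumes an `openai` key has already been saved in `keys.json` and that the relevant environment variable is `OPENAI_API_KEY` (an assumption - the variable name is not stated here):

```bash
# Uses the environment variable if it is set, otherwise falls back to keys.json:
llm 'Five names for cheesecakes'
# Always uses the "openai" key stored in keys.json, even if the
# environment variable is also set:
llm 'Five names for cheesecakes' --key openai
```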
## Configuration

You can configure LLM in a number of different ways.

### Setting a custom default model

The model used when calling `llm` without the `-m/--model` option defaults to `gpt-3.5-turbo` - the fastest and least expensive OpenAI model, and the same model family that powers ChatGPT.

You can use the `llm models default` command to set a different default model. For GPT-4 (slower and more expensive, but more capable) run this:

```bash
llm models default gpt-4
```
You can view the current default model by running this:
```bash
llm models default
```
Any of the supported aliases for a model can be passed to this command.
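For example, `chatgpt-16k` is used as an alias elsewhere in this document; assuming it is registered, this sketch would set it as the default and then restore `gpt-3.5-turbo`:

```bash
# Set the default model using an alias rather than a full model name
# ("chatgpt-16k" is assumed to be a registered alias):
llm models default chatgpt-16k
# Restore the documented default:
llm models default gpt-3.5-turbo
```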
### Setting a custom directory location

This tool stores various files - prompt templates, stored keys, preferences, a database of logs - in a directory on your computer.
@ -111,3 +130,16 @@ You can set a custom location for this directory by setting the `LLM_USER_PATH`
```bash
export LLM_USER_PATH=/path/to/my/custom/directory
```
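One way to confirm the variable has taken effect is to check where the logs database now lives - a sketch, assuming the logs database is one of the files stored under this directory:

```bash
# With LLM_USER_PATH set, state files are kept in the custom directory,
# so the reported logs path should now point inside it:
export LLM_USER_PATH=/path/to/my/custom/directory
llm logs path
```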
### Turning SQLite logging on and off

By default, LLM will log every prompt and response you make to a SQLite database - see {ref}`logging` for more details.

You can turn this default behavior off by running:
```bash
llm logs off
```
Or turn it back on again with:
```bash
llm logs on
```
Run `llm logs status` to see the current state of the setting.
@ -155,19 +155,4 @@ When running a prompt you can pass the full model name or any of the aliases to
```bash
llm -m chatgpt-16k 'As many names for cheesecakes as you can think of, with detailed descriptions'
```
Models that have been installed using plugins will be shown here as well.