(logging)=
# Logging to SQLite
`llm` defaults to logging all prompts and responses to a SQLite database.
You can find the location of that database using the `llm logs path` command:
```bash
llm logs path
```
On my Mac that outputs:
```
/Users/simon/Library/Application Support/io.datasette.llm/logs.db
```
This will differ for other operating systems.
To avoid logging an individual prompt, pass `--no-log` or `-n` to the command:
```bash
llm 'Ten names for cheesecakes' -n
```
To turn off logging by default:
```bash
llm logs off
```
If you've turned off logging you can still log an individual prompt and response by adding `--log`:
```bash
llm 'Five ambitious names for a pet pterodactyl' --log
```
To turn logging back on by default:
```bash
llm logs on
```
To see the status of the logs database, run this:
```bash
llm logs status
```
Example output:
```
Logging is ON for all prompts
Found log database at /Users/simon/Library/Application Support/io.datasette.llm/logs.db
Number of conversations logged: 33
Number of responses logged: 48
Database file size: 19.96MB
```
(viewing-logs)=
## Viewing the logs
You can view the logs using the `llm logs` command:
```bash
llm logs
```
This will output the three most recent logged items as a JSON array of objects.
Add `-n 10` to see the ten most recent items:
```bash
llm logs -n 10
```
Or `-n 0` to see everything that has ever been logged:
```bash
llm logs -n 0
```
You can search the `prompt` and `response` columns of your logs for a term using `-q`:
```bash
llm logs -q 'cheesecake'
```
You can filter to logs just for a specific model (or model alias) using `-m/--model`:
```bash
llm logs -m chatgpt
```
You can truncate the display of the prompts and responses using the `-t/--truncate` option:
```bash
llm logs -n 5 -t
```
This is useful for finding a conversation that you would like to continue.
You can also use [Datasette](https://datasette.io/) to browse your logs like this:
```bash
datasette "$(llm logs path)"
```
## SQL schema
Here's the SQL schema used by the `logs.db` database:
<!-- [[[cog
import cog
from llm.migrations import migrate
import sqlite_utils
import re

db = sqlite_utils.Database(memory=True)
migrate(db)


def cleanup_sql(sql):
    first_line = sql.split('(')[0]
    inner = re.search(r'\((.*)\)', sql, re.DOTALL).group(1)
    columns = [l.strip() for l in inner.split(',')]
    return first_line + '(\n  ' + ',\n  '.join(columns) + '\n);'


cog.out("```sql\n")
for table in ("conversations", "responses", "responses_fts"):
    schema = db[table].schema
    cog.out(format(cleanup_sql(schema)))
    cog.out("\n")
cog.out("```\n")
]]] -->
```sql
CREATE TABLE [conversations] (
[id] TEXT PRIMARY KEY,
[name] TEXT,
[model] TEXT
);
CREATE TABLE [responses] (
[id] TEXT PRIMARY KEY,
[model] TEXT,
[prompt] TEXT,
[system] TEXT,
[prompt_json] TEXT,
[options_json] TEXT,
[response] TEXT,
[response_json] TEXT,
[conversation_id] TEXT REFERENCES [conversations]([id]),
[duration_ms] INTEGER,
[datetime_utc] TEXT
);
CREATE VIRTUAL TABLE [responses_fts] USING FTS5 (
[prompt],
[response],
content=[responses]
);
```
<!-- [[[end]]] -->
`responses_fts` configures [SQLite full-text search](https://www.sqlite.org/fts5.html) against the `prompt` and `response` columns in the `responses` table.
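To see that full-text index in action outside of `llm logs -q`, here's a minimal sketch using Python's `sqlite3` module (it requires SQLite built with FTS5, which standard CPython builds include). The table definitions follow the schema above; the sample row is invented:

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Content table plus an external-content FTS5 index, as in the schema above
db.execute(
    "CREATE TABLE responses (id TEXT PRIMARY KEY, prompt TEXT, response TEXT)"
)
db.execute(
    "CREATE VIRTUAL TABLE responses_fts USING FTS5(prompt, response, content=responses)"
)
# Invented sample row, for illustration only
db.execute(
    "INSERT INTO responses VALUES "
    "('abc', 'Ten names for cheesecakes', '1. Velvet Slice ...')"
)
# External-content FTS tables are not updated automatically;
# rebuild the index from the content table
db.execute("INSERT INTO responses_fts(responses_fts) VALUES ('rebuild')")

# Match against either indexed column; 'cheesecake*' is a prefix query,
# since the default tokenizer does not stem words
hits = db.execute(
    "SELECT responses.id, responses.prompt FROM responses "
    "JOIN responses_fts ON responses.rowid = responses_fts.rowid "
    "WHERE responses_fts MATCH 'cheesecake*'"
).fetchall()
print(hits)
```

Note that the real `logs.db` keeps this index up to date for you; the manual `rebuild` step here is only needed because the demo inserts rows behind the index's back.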