# Setup
## Installation
Install this tool using `pip`:
```bash
pip install llm
```
Or using [pipx](https://pypa.github.io/pipx/):
```bash
pipx install llm
```
## Upgrading to the latest version
If you installed using `pip`:
```bash
pip install -U llm
```
For `pipx` :
```bash
pipx upgrade llm
```
For Homebrew:
```bash
brew upgrade llm
```
If the latest version is not yet available on Homebrew you can upgrade like this instead:
```bash
llm install -U llm
```
## Installing plugins
{ref}`plugins` can be used to add support for other language models, including models that can run on your own device.
For example, the [llm-gpt4all](https://github.com/simonw/llm-gpt4all) plugin adds support for 17 new models that can be installed on your own machine. You can install that like so:
```bash
llm install llm-gpt4all
```
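Once a plugin is installed you can check that LLM has picked it up by listing the installed plugins:

```shell
# Lists each installed plugin (output is JSON):
llm plugins
```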
(api-keys)=
## API key management
Many LLM models require an API key. These API keys can be provided to this tool using several different mechanisms.
2024-01-26 00:02:48 +00:00
You can obtain an API key for OpenAI's language models from [the API keys page](https://platform.openai.com/api-keys) on their site.
### Saving and using stored keys
The easiest way to store an API key is to use the `llm keys set` command:
```bash
llm keys set openai
```
You will be prompted to enter the key like this:
```
% llm keys set openai
Enter key:
```
Once stored, this key will be automatically used for subsequent calls to the API:
```bash
llm "Five ludicrous names for a pet lobster"
```
You can list the names of keys that have been set using this command:
```bash
llm keys
```
Keys that are stored in this way live in a file called `keys.json`. This file is located at the path shown when you run the following command:
```bash
llm keys path
```
On macOS this will be `~/Library/Application Support/io.datasette.llm/keys.json`. On Linux it may be something like `~/.config/io.datasette.llm/keys.json`.
### Passing keys using the --key option
Keys can be passed directly using the `--key` option, like this:
```bash
llm "Five names for pet weasels" --key sk-my-key-goes-here
```
You can also pass the alias of a key stored in the `keys.json` file. For example, if you want to maintain a personal API key you could add that like this:
```bash
llm keys set personal
```
And then use it for prompts like so:
```bash
llm "Five friendly names for a pet skunk" --key personal
```
### Keys in environment variables
Keys can also be set using an environment variable. These are different for different models.
For OpenAI models the key will be read from the `OPENAI_API_KEY` environment variable.
The environment variable will be used if no `--key` option is passed to the command and there is no key configured in `keys.json`.
To use an environment variable in place of the `keys.json` key, run the prompt like this:
```bash
llm 'my prompt' --key $OPENAI_API_KEY
```
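Alternatively, exporting the variable is enough on its own: with no `--key` option and no stored key, the prompt below picks the key up from the environment (the key value shown is a placeholder):

```shell
# Placeholder key - substitute your real one:
export OPENAI_API_KEY=sk-my-key-goes-here
# No --key option needed; the variable is read automatically:
llm 'my prompt'
```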
## Configuration
You can configure LLM in a number of different ways.
### Setting a custom default model
The model used when calling `llm` without the `-m/--model` option defaults to `gpt-3.5-turbo` - the fastest and least expensive OpenAI model, and the same model family that powers ChatGPT.
You can use the `llm models default` command to set a different default model. For GPT-4 (slower and more expensive, but more capable) run this:
```bash
llm models default gpt-4
```
You can view the current default model by running this:
```bash
llm models default
```
Any of the supported aliases for a model can be passed to this command.
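For example, assuming `chatgpt` is registered as an alias for `gpt-3.5-turbo` (the `llm models list` command shows every model along with its aliases), either name works here:

```shell
# "chatgpt" is assumed to be an alias for gpt-3.5-turbo -
# run llm models list to see the full list of models and aliases:
llm models default chatgpt
```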
### Setting a custom directory location
This tool stores various files - prompt templates, stored keys, preferences, a database of logs - in a directory on your computer.
On macOS this is `~/Library/Application Support/io.datasette.llm/`.
On Linux it may be something like `~/.config/io.datasette.llm/`.
You can set a custom location for this directory by setting the `LLM_USER_PATH` environment variable:
```bash
export LLM_USER_PATH=/path/to/my/custom/directory
```
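You can confirm the override took effect by asking the tool where it now keeps its files; the reported path should sit inside the custom directory:

```shell
export LLM_USER_PATH=/path/to/my/custom/directory
# The reported keys.json location should now be inside that directory:
llm keys path
```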
### Turning SQLite logging on and off
By default, LLM will log every prompt and response you make to a SQLite database - see {ref}`logging` for more details.
You can turn this behavior off by running:
```bash
llm logs off
```
Or turn it back on again with:
```bash
llm logs on
```
Run `llm logs status` to see the current state of the setting.