# llm
[PyPI](https://pypi.org/project/llm/)
[Changelog](https://github.com/simonw/llm/releases)
[Tests](https://github.com/simonw/llm/actions?query=workflow%3ATest)
[License](https://github.com/simonw/llm/blob/master/LICENSE)

Access large language models from the command-line
## Installation
Install this tool using `pip`:

    pip install llm

You need an OpenAI API key, which should either be set in the `OPENAI_API_KEY` environment variable, or saved in a plain text file called `~/.openai-api-key.txt` in your home directory.
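For example, in a bash-compatible shell, either of these provides the key (`sk-...` is a placeholder for your real key):

```shell
# Option 1: set the environment variable for this shell session
export OPENAI_API_KEY="sk-..."

# Option 2: save the key in the plain text file the tool checks
echo "sk-..." > ~/.openai-api-key.txt
```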
## Usage
The default command for this is `llm chatgpt` - you can use `llm` instead if you prefer.

To run a prompt:

    llm 'Ten names for cheesecakes'

To stream the results a token at a time:

    llm 'Ten names for cheesecakes' -s

To switch from ChatGPT 3.5 (the default) to GPT-4 if you have access:

    llm 'Ten names for cheesecakes' -4

Pass `--model <model name>` to use a different model.
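For instance (assuming `gpt-4` is a model name the tool accepts - check `llm --help` for the current list):

    llm 'Ten names for cheesecakes' --model gpt-4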
### Using with a shell
To generate a description of changes made to a Git repository since the last commit:

    llm "Describe these changes: $(git diff)"

This pattern of using `$(command)` inside a double-quoted string is a useful way to quickly assemble prompts.
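The shell performs the substitution before `llm` ever runs, so the quoted prompt string already contains the inner command's output - illustrated here with `echo` standing in for a real command:

```shell
# $(...) is expanded by the shell first; the string that reaches
# the program already contains the inner command's output.
prompt="Describe these changes: $(echo 'diff: one line changed')"
echo "$prompt"
```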
## System prompts
You can use `--system '...'` to set a system prompt.

    llm 'SQL to calculate total sales by month' -s \
      --system 'You are an exaggerated sentient cheesecake that knows SQL and talks about cheesecake a lot'
The `--code` option will set a system prompt for you that attempts to output just code without explanation, and will strip off any leading or trailing markdown code block syntax. You can use this to generate code and write it straight to a file:

    llm 'Python CLI tool: reverse string passed to stdin' --code > fetch.py

Be _very careful_ executing code generated by an LLM - always read it first!
## Logging to SQLite
If a SQLite database file exists in `~/.llm/log.db` then the tool will log all prompts and responses to it.

You can create that file by running the `init-db` command:

    llm init-db

Now any prompts you run will be logged to that database.

To avoid logging a prompt, pass `--no-log` or `-n` to the command:

    llm 'Ten names for cheesecakes' -n
### Viewing the logs
You can view the logs using the `llm logs` command:

    llm logs

This will output the three most recent logged items as a JSON array of objects.

Add `-n 10` to see the ten most recent items:

    llm logs -n 10

Or `-n 0` to see everything that has ever been logged:

    llm logs -n 0
You can also use [Datasette](https://datasette.io/) to browse your logs like this:

    datasette ~/.llm/log.db
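If you prefer the raw database, the standard `sqlite3` command-line shell works too. A quick way to see which tables the log database contains - shown here against a scratch database with a hypothetical `log` table so the example is self-contained; point it at `~/.llm/log.db` for your real logs:

```shell
# Create a scratch database standing in for ~/.llm/log.db
# (the table name here is hypothetical, not the tool's real schema)
db="$(mktemp -u).db"
sqlite3 "$db" "CREATE TABLE log (id INTEGER PRIMARY KEY);"

# List the tables it contains
sqlite3 "$db" "SELECT name FROM sqlite_master WHERE type='table';"
```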
## Help
For help, run:

    llm --help

You can also use:

    python -m llm --help
## Development
|
|
|
|
To contribute to this tool, first checkout the code. Then create a new virtual environment:
|
|
|
|
cd llm
|
|
python -m venv venv
|
|
source venv/bin/activate
|
|
|
|
Now install the dependencies and test dependencies:
|
|
|
|
pip install -e '.[test]'
|
|
|
|
To run the tests:
|
|
|
|
pytest
|