
# LLM


A CLI utility and Python library for interacting with Large Language Models, including OpenAI, PaLM and local models installed on your own machine.

Background on this project:

For more check out the llm tag on my blog.

## Quick start

First, install LLM using `pip` or Homebrew:

```bash
pip install llm
```

Or with Homebrew:

```bash
brew install llm
```

If you have an OpenAI API key you can run this:

```bash
# Paste your OpenAI API key into this
llm keys set openai

# Run a prompt
llm "Ten fun names for a pet pelican"

# Run a system prompt against a file
cat myfile.py | llm -s "Explain this code"
```

Or you can {ref}`install a plugin <installing-plugins>` and use models that can run on your local device:

```bash
# Install the plugin
llm install llm-gpt4all

# Download and run a prompt against the Vicuna model
llm -m ggml-vicuna-7b-1 'What is the capital of France?'
```
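LLM is also a Python library, so the same prompts can be run from code. A minimal sketch, assuming the `llm` package is installed and an OpenAI key has been configured with `llm keys set openai` (the model name here is illustrative; see the python-api page for the full reference):

```python
import llm

# Look up a model by name, then run a prompt against it.
# This makes a network call to the OpenAI API, so it needs a configured key.
model = llm.get_model("gpt-3.5-turbo")
response = model.prompt("Ten fun names for a pet pelican")
print(response.text())
```

Responses are returned lazily; calling `response.text()` waits for the full completion.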

## Contents

```{toctree}
---
maxdepth: 3
---
setup
usage
other-models
embeddings/index
plugins/index
aliases
python-api
templates
logging
related-tools
help
contributing
changelog
```