
(plugin-directory)=

# Plugin directory

The following plugins are available for LLM. Here's {ref}`how to install them <installing-plugins>`.

## Local models

These plugins all help you run LLMs directly on your own computer:

## Remote APIs

These plugins can be used to interact with remotely hosted models via their API:

If a model host provides an OpenAI-compatible API, you can also configure LLM to talk to it directly, without needing an extra plugin.
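As a sketch of what that configuration looks like, LLM reads extra OpenAI-compatible model definitions from an `extra-openai-models.yaml` file in its configuration directory. The keys below follow the pattern documented for LLM's OpenAI-compatible models; the model name, hostname, and key name here are illustrative, not a real provider:

```yaml
# extra-openai-models.yaml -- lives in LLM's config directory
# (exact path varies by platform)
- model_id: example-model          # name you will pass to `llm -m`
  model_name: example-model-v1     # name sent to the remote API
  api_base: "https://api.example.com/v1"
  api_key_name: example            # key stored with `llm keys set example`
```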

## Embedding models

{ref}`Embedding models <embeddings>` are models that can be used to generate and store embedding vectors for text.
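Stored embedding vectors are typically compared with cosine similarity: texts with similar meaning get vectors that point in similar directions. A minimal sketch in plain Python, using made-up three-dimensional vectors (real embedding models return vectors with hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" -- purely illustrative values.
cat = [0.9, 0.1, 0.2]
kitten = [0.85, 0.15, 0.25]
truck = [0.1, 0.9, 0.4]

# Similar concepts score closer to 1.0 than unrelated ones.
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, truck))  # → True
```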

## Extra commands

- `llm-cmd` accepts a prompt for a shell command, runs that prompt and populates the result in your shell so you can review it, edit it and then hit `<enter>` to execute or `ctrl+c` to cancel.
- `llm-cmd-comp` provides a key binding for your shell that will launch a chat to build the command. When ready, hit `<enter>` and it will go right back into your shell command line, so you can run it.
- `llm-python` adds a `llm python` command for running a Python interpreter in the same virtual environment as LLM. This is useful for debugging, and also provides a convenient way to interact with the LLM {ref}`Python API <python-api>` if you installed LLM using Homebrew or pipx.
- `llm-cluster` adds a `llm cluster` command for calculating clusters for a collection of embeddings. Calculated clusters can then be passed to a Large Language Model to generate a summary description.
- `llm-jq` lets you pipe in JSON data and a prompt describing a jq program, then executes the generated program against the JSON.

## Just for fun