(embeddings-writing-plugins)=
# Writing plugins to add new embedding models

Read the {ref}`plugin tutorial <tutorial-model-plugin>` for details on how to develop and package a plugin.
This page shows an example plugin that implements and registers a new embedding model.
There are two components to an embedding model plugin:

- An implementation of the `register_embedding_models()` hook, which takes a `register` callback function and calls it to register the new model with the LLM plugin system.
- A class that extends the `llm.EmbeddingModel` abstract base class. The only required method on this class is `embed(text)`, which takes a string and returns a list of floating point numbers.
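To illustrate that contract before looking at a real model, here is a toy stand-in in the spirit of the `EmbedDemo` model used in LLM's own tests. It is a hypothetical, self-contained sketch (it does not subclass the real `llm.EmbeddingModel`): it just shows that `embed(text)` must accept a string and return a fixed-size list of floats.

```python
import hashlib


class EmbedDemo:
    # Hypothetical demo model, not part of the llm package:
    # produces a deterministic fake embedding from a hash of the text.
    model_id = "embed-demo"
    embedding_size = 16

    def embed(self, text):
        digest = hashlib.sha256(text.encode("utf-8")).digest()
        # Take the first embedding_size bytes and scale each into [0.0, 1.0]
        return [byte / 255.0 for byte in digest[: self.embedding_size]]


model = EmbedDemo()
vector = model.embed("hello world")
print(len(vector))  # 16
```

The same input always produces the same vector, which is what makes a model like this useful in tests.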
The following example uses the `sentence-transformers` package to provide access to the MiniLM-L6 embedding model.
```python
import llm
from sentence_transformers import SentenceTransformer


@llm.hookimpl
def register_embedding_models(register):
    model_id = "sentence-transformers/all-MiniLM-L6-v2"
    register(SentenceTransformerModel(model_id, model_id, 384), aliases=("all-MiniLM-L6-v2",))


class SentenceTransformerModel(llm.EmbeddingModel):
    def __init__(self, model_id, model_name, embedding_size):
        self.model_id = model_id
        self.model_name = model_name
        self.embedding_size = embedding_size
        self._model = None

    def embed(self, text):
        if self._model is None:
            self._model = SentenceTransformer(self.model_name)
        return list(map(float, self._model.encode([text])[0]))
```
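The list of floats returned by `embed()` is typically compared to other embeddings using cosine similarity. As a minimal illustration of how such vectors are used (this helper is a sketch for explanation, not part of the `llm` plugin API):

```python
import math


def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors:
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

Two pieces of text with similar meaning should produce embeddings with a similarity score close to 1.0.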
Once installed, the model provided by this plugin can be used with the {ref}`llm embed <embeddings-llm-embed>` command like this:

```bash
cat file.txt | llm embed -m sentence-transformers/all-MiniLM-L6-v2
```
Or via its registered alias like this:

```bash
cat file.txt | llm embed -m all-MiniLM-L6-v2
```