mirror of https://github.com/Hopiu/llm.git
synced 2026-03-16 20:50:25 +00:00

More labels in docs

This commit is contained in:
parent f2cf81e29a
commit 2fd6c09db1

1 changed file with 32 additions and 0 deletions

@ -6,6 +6,8 @@ This tutorial will walk you through developing a new plugin for LLM that adds su
We will be developing a plugin that implements a simple [Markov chain](https://en.wikipedia.org/wiki/Markov_chain) to generate words based on an input string. Markov chains are not technically large language models, but they provide a useful exercise for demonstrating how the LLM tool can be extended through plugins.

(tutorial-model-plugin-initial)=

## The initial structure of the plugin

First create a new directory with the name of your plugin - it should be called something like `llm-markov`.
@ -52,6 +54,8 @@ If you are comfortable with Python virtual environments you can create one now f
If you aren't familiar with virtual environments, don't worry: you can develop plugins without them. You'll need to have LLM installed using Homebrew or `pipx` or one of the [other installation options](https://llm.datasette.io/en/latest/setup.html#installation).

(tutorial-model-plugin-installing)=

## Installing your plugin to try it out

Having created a directory with a `pyproject.toml` file and an `llm_markov.py` file, you can install your plugin into LLM by running this from inside your `llm-markov` directory:
@ -101,6 +105,8 @@ hello world
```
Next, we'll make it execute and return the results of a Markov chain.

(tutorial-model-plugin-building)=

## Building the Markov chain

Markov chains can be thought of as the simplest possible example of a generative language model. They work by building an index of words that have been seen following other words.
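That index can be sketched in a few lines of Python. This is a minimal version for illustration; the tutorial builds its own equivalent function, which may differ in details:

```python
def build_markov_table(text):
    """Map each word to the list of words seen immediately after it."""
    words = text.split()
    transitions = {}
    for i in range(len(words) - 1):
        transitions.setdefault(words[i], []).append(words[i + 1])
    return transitions
```

For the input "the cat sat on the mat", this records that "the" was followed by both "cat" and "mat", while every other word was only ever followed by one thing.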
@ -132,6 +138,9 @@ We can try that out by pasting it into the interactive Python interpreter and ru
>>> transitions
{'the': ['cat', 'mat'], 'cat': ['sat'], 'sat': ['on'], 'on': ['the']}
```

(tutorial-model-plugin-executing)=

## Executing the Markov chain

To execute the model, we start with a word. We look at the options for words that might come next and pick one of those at random. Then we repeat that process until we have produced the desired number of output words.
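A hedged sketch of that loop, assuming the `transitions` dictionary built earlier (the tutorial's own `generate()` may differ in details):

```python
import random

def generate(transitions, length, start_word=None):
    """Yield `length` words by repeatedly sampling a random next word."""
    all_words = list(transitions.keys())
    next_word = start_word or random.choice(all_words)
    for _ in range(length):
        yield next_word
        # If a word was only ever seen at the end of the input, fall back
        # to picking any known word so generation can continue.
        options = transitions.get(next_word) or all_words
        next_word = random.choice(options)
```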
@ -170,6 +179,9 @@ Or you can generate a full string sentence with it like this:
```python
sentence = " ".join(generate(transitions, 20))
```

(tutorial-model-plugin-register)=

## Adding that to the plugin

Our `execute()` method from earlier currently returns the list `["hello world"]`.
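Putting the pieces together, the updated method can be sketched like this. This is a self-contained stand-in: the real plugin subclasses `llm.Model`, and the tutorial splits the table-building and generation into separate helper functions:

```python
import random

class Markov:
    # Stand-in for the llm.Model subclass; execute() receives the same
    # (prompt, stream, response, conversation) arguments the tutorial describes.
    model_id = "markov"

    def execute(self, prompt, stream, response, conversation):
        words = prompt.prompt.split()
        transitions = {}
        for i in range(len(words) - 1):
            transitions.setdefault(words[i], []).append(words[i + 1])
        all_words = list(transitions) or words
        word = random.choice(all_words)
        for _ in range(20):  # fixed output length for this sketch
            yield word + " "
            word = random.choice(transitions.get(word) or all_words)
```

Because `execute()` is a generator that yields one word at a time, LLM can stream the output as it is produced.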
@ -221,6 +233,8 @@ llm -m markov "the cat sat on the mat"
the mat the cat sat on the cat sat on the mat cat sat on the mat cat sat on
```

(tutorial-model-plugin-execute)=

## Understanding execute()

The full signature of the `execute()` method is:
@ -235,6 +249,8 @@ The `prompt` argument is a `Prompt` object that contains the text that the user
`conversation` is the `Conversation` that the prompt is a part of - or `None` if no conversation was provided. Some models may use `conversation.responses` to access previous prompts and responses in the conversation and use them to construct a call to the LLM that includes previous context.
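For a model that does use previous context, that construction might look like this sketch. It is a plain function, and the `prev.prompt.prompt` / `prev.text()` attribute access is an assumption about the response objects based on the description above, not code from the tutorial:

```python
def build_messages(prompt, conversation):
    """Assemble chat-style messages from prior exchanges plus the new prompt."""
    messages = []
    if conversation is not None:
        for prev in conversation.responses:
            messages.append({"role": "user", "content": prev.prompt.prompt})
            messages.append({"role": "assistant", "content": prev.text()})
    messages.append({"role": "user", "content": prompt.prompt})
    return messages
```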

(tutorial-model-plugin-logging)=

## Prompts and responses are logged to the database

The prompt and the response will be logged to a SQLite database automatically by LLM. You can see the single most recent addition to the logs using:
@ -313,6 +329,8 @@ llm logs -n 1
```
In this particular case that isn't a great idea: the `transitions` data is duplicate information, since it can be reproduced from the input data, and it can get very large for longer prompts.

(tutorial-model-plugin-options)=

## Adding options

LLM models can take options. For large language models these can be things like `temperature` or `top_k`.
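Options are declared as typed, validated fields on the model. The sketch below illustrates the idea with a plain dataclass rather than the pydantic-based `Options` class that LLM itself uses; the option names are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MarkovOptions:
    # Hypothetical options for the Markov model; names are illustrative.
    length: Optional[int] = None   # number of words to generate
    delay: Optional[float] = None  # seconds to pause between output words

    def __post_init__(self):
        # Reject nonsensical values up front, the way a validated
        # options class would.
        if self.length is not None and self.length < 2:
            raise ValueError("length must be >= 2")
        if self.delay is not None and not 0 <= self.delay <= 10:
            raise ValueError("delay must be between 0 and 10")
```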
@ -455,6 +473,8 @@ llm logs -n 1
]
```

(tutorial-model-plugin-distributing)=

## Distributing your plugin

There are many different options for distributing your new plugin so other people can try it out.
@ -463,6 +483,8 @@ You can create a downloadable wheel or `.zip` or `.tar.gz` files, or share the p
You can also publish your plugin to PyPI, the Python Package Index.

(tutorial-model-plugin-wheels)=

### Wheels and sdist packages

The easiest way to produce a distributable package is to use the `build` command. First, install the `build` package by running this:
@ -489,6 +511,8 @@ You can run the following command at any time to uninstall your plugin, which is
llm uninstall llm-markov -y
```

(tutorial-model-plugin-gists)=

### GitHub Gists

A neat quick option for distributing a simple plugin is to host it in a GitHub Gist. These are available for free with a GitHub account, and can be public or private. Gists can contain multiple files but don't support directory structures - which is OK, because our plugin is just two files, `pyproject.toml` and `llm_markov.py`.
@ -506,10 +530,14 @@ The plugin can be installed using the `llm install` command like this:
llm install 'https://gist.github.com/simonw/6e56d48dc2599bffba963cef0db27b6d/archive/cc50c854414cb4deab3e3ab17e7e1e07d45cba0c.zip'
```

(tutorial-model-plugin-github)=

### GitHub repositories

The same trick works for regular GitHub repositories as well: the "Download ZIP" button can be found by clicking the green "Code" button at the top of the repository. The URL it provides can then be used to install the plugin that lives in that repository.

(tutorial-model-plugin-pypi)=

## Publishing plugins to PyPI

The [Python Package Index (PyPI)](https://pypi.org/) is the official repository for Python packages. You can upload your plugin to PyPI and reserve a name for it - once you have done that, anyone will be able to install your plugin using `llm install <name>`.
@ -521,6 +549,8 @@ python -m twine upload dist/*
```
You will need an account on PyPI. You can then enter your username and password, or create a token in the PyPI settings and use `__token__` as the username and the token as the password.

(tutorial-model-plugin-metadata)=

## Adding metadata

Before uploading a package to PyPI it's a good idea to add documentation and expand `pyproject.toml` with additional metadata.
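A hedged example of what that expanded `pyproject.toml` might look like. The author name and URLs are placeholders, not the tutorial's actual values:

```toml
[project]
name = "llm-markov"
version = "0.1"
description = "Plugin for LLM adding a Markov chain generating model"
readme = "README.md"
authors = [{name = "Your Name"}]
license = {text = "Apache-2.0"}
classifiers = ["License :: OSI Approved :: Apache Software License"]
dependencies = ["llm"]

[project.urls]
Homepage = "https://github.com/yourname/llm-markov"
Issues = "https://github.com/yourname/llm-markov/issues"

# The entry point that registers the plugin with LLM, as set up
# earlier in the tutorial.
[project.entry-points.llm]
markov = "llm_markov"
```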
@ -561,6 +591,8 @@ It adds some links to useful pages (you can drop the `project.urls` section if t
You should drop a `LICENSE` file into the GitHub repository for your package as well. I like to use the Apache 2 license [like this](https://github.com/simonw/llm/blob/main/LICENSE).

(tutorial-model-plugin-breaks)=

## What to do if it breaks

Sometimes you may make a change to your plugin that causes it to break, preventing `llm` from starting. For example, you may see an error like this one: