LLM prompts can optionally be composed out of **fragments** - reusable pieces of text that are logged just once to the database and can then be attached to multiple prompts.
These are particularly useful when you are working with long context models, which support feeding large amounts of text in as part of your prompt.
Fragments primarily exist to save space in the database, but they may also be used to support other features such as vendor prompt caching.
Fragments can be specified using several different mechanisms:
- URLs to text files online
- Paths to text files on disk
- Aliases that have been attached to a specific fragment
- Hash IDs of stored fragments, where the ID is the SHA256 hash of the fragment content
- Fragments that are provided by custom plugins - these look like `plugin-name:argument`
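The hash ID mechanism can be sketched in Python. This is an illustrative sketch, assuming the ID is the hex SHA-256 digest of the fragment's UTF-8 encoded content, as described above:

```python
import hashlib

def fragment_id(content: str) -> str:
    # Assumption: a fragment's hash ID is the hex SHA-256 digest
    # of its UTF-8 encoded content
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

print(fragment_id("abc"))
# → ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```

A hash derived this way can then be passed to `-f` to reference a fragment with that exact content, provided it has been stored previously.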
(fragments-usage)=
## Using fragments in a prompt
Use the `-f/--fragment` option to specify one or more fragments to be used as part of your prompt:
```bash
llm -f https://llm.datasette.io/robots.txt "Explain this robots.txt file in detail"
```
Here we are specifying a fragment using a URL. The contents of that URL will be included in the prompt sent to the model, prepended before your prompt text.
The `-f` option can be used multiple times to combine multiple fragments.
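For example, combining two fragments from hypothetical files on disk:

```bash
llm -f README.md -f setup.py 'Summarize how these two files relate to each other'
```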
Fragments can also be files on disk, for example:
```bash
llm -f setup.py 'extract the metadata'
```
Use `-` to specify a fragment that is read from standard input:
```bash
llm -f - 'extract the metadata' <setup.py
```
This will read the contents of `setup.py` from standard input and use it as a fragment.
Fragments can also be used as part of your system prompt. Use `--sf value` or `--system-fragment value` instead of `-f`.
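For example, loading the system prompt from a hypothetical file of style guidelines:

```bash
llm --sf style-guide.md 'Write a function that reverses a string'
```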
(fragments-browsing)=
## Browsing fragments
You can view a truncated version of the fragments you have previously stored in your database with the `llm fragments` command.
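A minimal invocation, listing stored fragments with truncated content:

```bash
llm fragments
```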
The `llm logs` command lists the fragments that were used for a prompt. By default these are listed as fragment hash IDs, but you can use the `--expand` option to show the full content of each fragment.
This command will show the expanded fragments for your most recent conversation:
```bash
llm logs -c --expand
```
You can filter for logs that used a specific fragment using the `-f/--fragment` option.
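For example, assuming fragment specifiers here work the same way as for prompts (URL, file path, alias or hash ID):

```bash
llm logs -f https://llm.datasette.io/robots.txt
```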
Fragments are returned by `llm logs --json` as well. By default these are truncated, but you can add the `-e/--expand` option to show the full content of each fragment.
```bash
llm logs -c --json --expand
```
(fragments-plugins)=
## Using fragments from plugins
LLM plugins can provide custom fragment loaders, which resolve a `plugin-name:argument` string into one or more fragments.
One example is the [llm-fragments-github plugin](https://github.com/simonw/llm-fragments-github). This can convert the files from a public GitHub repository into a list of fragments, allowing you to ask questions about the full repository.
```bash
llm -f github:simonw/s3-credentials 'Suggest new features for this tool'
```
This plugin turns a single call to `-f github:simonw/s3-credentials` into multiple fragments, one for every text file in the [simonw/s3-credentials](https://github.com/simonw/s3-credentials) GitHub repository.
Running `llm logs -c` will show that this prompt incorporated 26 fragments, one for each file.
Running `llm logs -c --usage --expand` (shortcut: `llm logs -cue`) includes token usage information and turns each fragment ID into a full copy of that file. [Here's the output of that command](https://gist.github.com/simonw/c9bbbc5f6560b01f4b7882ac0194fb25).
Fragment plugins can return {ref}`attachments <usage-attachments>` (such as images) as well.
See the {ref}`register_fragment_loaders() plugin hook <plugin-hooks-register-fragment-loaders>` documentation for details on writing your own custom fragment plugin.