Add ollama to plugin directory (#395)

* Add ollama to plugin directory

!stable-docs
Till Hohenberger 2024-01-25 22:57:40 +01:00 committed by GitHub
parent a1b97c06e6
commit 7ec958c080


@@ -11,6 +11,7 @@ These plugins all help you run LLMs directly on your own computer:
- **[llm-mlc](https://github.com/simonw/llm-mlc)** can run local models released by the [MLC project](https://mlc.ai/mlc-llm/), including models that can take advantage of the GPU on Apple Silicon M1/M2 devices.
- **[llm-gpt4all](https://github.com/simonw/llm-gpt4all)** adds support for various models released by the [GPT4All](https://gpt4all.io/) project that are optimized to run locally on your own machine. These models include versions of Vicuna, Orca, Falcon and MPT - here's [a full list of models](https://observablehq.com/@simonw/gpt4all-models).
- **[llm-mpt30b](https://github.com/simonw/llm-mpt30b)** adds support for the [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) local model.
- **[llm-ollama](https://github.com/taketwo/llm-ollama)** adds support for local models run using [Ollama](https://ollama.ai/).
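
As a quick illustration of the newly listed plugin, here is a minimal usage sketch. It assumes you already have [Ollama](https://ollama.ai/) installed and running locally, and that you have pulled a model; the `llama2` model name below is just an example of whatever your local Ollama instance serves.

```bash
# Install the plugin into the same environment as llm
llm install llm-ollama

# Pull a model with Ollama first (any locally served model works)
ollama pull llama2

# Ollama models now show up alongside other llm models
llm models

# Prompt the local model through llm
llm -m llama2 'Three good names for a pet pelican'
```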
## Remote APIs