Mirror of https://github.com/Hopiu/llm.git (synced 2026-03-23 07:50:24 +00:00)
Add ollama to plugin directory (#395)
* Add ollama to plugin directory !stable-docs
This commit is contained in:
parent a1b97c06e6
commit 7ec958c080
1 changed file with 1 addition and 0 deletions
@@ -11,6 +11,7 @@ These plugins all help you run LLMs directly on your own computer:
- **[llm-mlc](https://github.com/simonw/llm-mlc)** can run local models released by the [MLC project](https://mlc.ai/mlc-llm/), including models that can take advantage of the GPU on Apple Silicon M1/M2 devices.
- **[llm-gpt4all](https://github.com/simonw/llm-gpt4all)** adds support for various models released by the [GPT4All](https://gpt4all.io/) project that are optimized to run locally on your own machine. These models include versions of Vicuna, Orca, Falcon and MPT - here's [a full list of models](https://observablehq.com/@simonw/gpt4all-models).
- **[llm-mpt30b](https://github.com/simonw/llm-mpt30b)** adds support for the [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) local model.
- **[llm-ollama](https://github.com/taketwo/llm-ollama)** adds support for local models run using [Ollama](https://ollama.ai/).
## Remote APIs