Commit graph

521 commits

Author SHA1 Message Date
Simon Willison
f3e02b6de6 Fix for broken markdown in docs, closes #860 2025-04-05 17:24:31 -07:00
Simon Willison
f740a5cbbd
Fragments (#859)
* WIP fragments: schema plus reading but not yet writing, refs #617
* Unique index on fragments.alias, refs #617
* Fragments are now persisted, added basic CLI commands
* Fragment aliases work now, refs #617
* Improved help for -f/--fragment
* Support fragment hash as well
* Documentation for fragments
* Better non-JSON display of llm fragments list
* llm fragments -q search option
* _truncate_string is now truncate_string
* Use condense_json to avoid duplicate data in JSON in DB, refs #617
* Follow up to 3 redirects for fragments
* Python API docs for fragments= and system_fragments=
* Fragment aliases cannot contain a : - this is to ensure we can add custom fragment loaders later on, refs https://github.com/simonw/llm/pull/859#issuecomment-2761534692
* Use template fragments when running prompts
* llm fragments show command plus llm fragments group tests
* Tests for fragments family of commands
* Test for --save with fragments
* Add fragments tables to docs/logging.md
* Slightly better llm fragments --help
* Handle fragments in past conversations correctly
* Hint at llm prompt --help in llm --help, closes #868
* llm logs -f filter plus show fragments in llm logs --json
* Include prompt and system fragments in llm logs -s
* llm logs markdown fragment output and tests, refs #617
2025-04-05 17:22:37 -07:00
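The fragments work above stores each shared piece of prompt context once and references it from multiple prompts. As a toy standard-library sketch of that store-once idea (a hypothetical helper, not the project's actual `condense_json` mechanism or schema):

```python
import hashlib


def store_fragment(db: dict, text: str) -> str:
    """Store a fragment once, keyed by a hash of its content, and return the key.

    Toy illustration only: the real implementation persists fragments to
    SQLite tables and supports aliases, which this sketch omits.
    """
    key = hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]
    db.setdefault(key, text)
    return key


db = {}
boilerplate = "You are a helpful assistant. " * 20

# Two prompts sharing the same fragment store its text only once.
first = store_fragment(db, boilerplate)
second = store_fragment(db, boilerplate)
assert first == second and len(db) == 1
```

Content-addressing like this is also what makes the "fragment hash" lookups mentioned above possible: the same text always resolves to the same key.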
Simon Willison
70e0799821 Hint at llm prompt --help in llm --help, closes #868 2025-03-29 21:00:41 -07:00
Simon Willison
f641b89882 llm similar -p/--plain option, closes #853 2025-03-28 00:36:08 -07:00
Simon Willison
5b2c611c82 llm prompt -d/--database option, closes #858 2025-03-28 00:20:31 -07:00
Simon Willison
7e7ccdc19a Hide -p/--path in favor of standard -d/--database, closes #857
Spotted while working on #853
2025-03-28 00:11:01 -07:00
Simon Willison
9a24605996 Allow -t to take a URL to a template, closes #856 2025-03-27 20:36:58 -07:00
Simon Willison
3f6bccf87d Link to two more blog entries
!stable-docs
2025-03-25 19:30:48 -07:00
Simon Willison
22175414f0 Extra OpenAI docs including mention of PDFs, closes #834 2025-03-25 19:30:42 -07:00
Simon Willison
468b0551ee
llm models options commands for setting default model options
Closes #829
2025-03-22 18:28:45 -07:00
Simon Willison
1ad7bbd32a
Ability to store options in templates (#845)
* llm prompt --save option support, closes #830
* Fix for templates with just a system prompt, closes #844
* Tests for options from template, refs #830
* Test and bug fix for --save with options, refs #830
* Docs for template options support, refs #830
2025-03-22 17:24:02 -07:00
giuli007
51db7afddb
Support vision and audio for extra-openai-models.yaml (#843)
Add a vision option to enable OpenAI-compatible
models to receive image and audio attachments
2025-03-22 16:14:18 -07:00
Simon Willison
99cd2aa148
Improved OpenAI model docs
Refs #839, closes #840
2025-03-21 18:31:20 -07:00
adaitche
de87d37c28
Add supports_schema to extra-openai-models (#819)
Support for structured output was added recently, but custom
OpenAI-compatible models didn't support the `supports_schema` property
in the `extra-openai-models.yaml` config file.
2025-03-21 16:59:34 -07:00
Simon Willison
6c9a8efb50
register_template_loaders plugin hook, closes #809
* Moved templates CLI commands next to each other
* llm templates loaders command
* Template loader tests
* Documentation for template loaders
2025-03-21 16:46:44 -07:00
Simon Willison
3541415db4 llm prompt -q X -q Y option, closes #841 2025-03-21 15:17:16 -07:00
Simon Willison
090e971bf4 Model feature list for advanced plugins documentation
!stable-docs
2025-03-19 21:43:17 -07:00
Simon Willison
c3a0bb7bb6 Ran cog, refs #834 2025-03-18 16:29:08 -07:00
Simon Willison
fea9eb9866
new way of configuring key
Refs #744

!stable-docs
2025-03-18 08:41:54 -07:00
Simon Willison
0f47565530
Clarify lazy loading
https://bsky.app/profile/simonwillison.net/post/3lknwgbph522h

!stable-docs
2025-03-18 08:04:52 -07:00
Simon Willison
1d552aeacc llm models -m option, closes #825 2025-03-10 14:18:50 -07:00
Simon Willison
31d264d9a9 Improved llm embed-multi docs, closes #824 2025-03-09 18:56:20 -05:00
Simon Willison
0865c2d939 LLM_RAISE_ERRORS debug feature, closes #817 2025-03-04 20:14:32 -08:00
Simon Willison
a7a9bc8323 Release 0.23
Refs #520, #766, #774, #775, #776, #777, #778, #780, #781, #782, #783, #784, #785, #788, #790, #791, #793, #794, #795, #796, #797, #798, #799, #800, #801, #806

Closes #803
2025-02-28 08:55:59 -08:00
Simon Willison
e060347f58 Recommend top level object, not array for schemas 2025-02-28 08:15:02 -08:00
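The recommendation above reflects that some providers (OpenAI's structured output mode, for example) require the schema root to be an object, so a list result is better expressed as an array nested under a named key. An illustrative shape (the field names here are made up, not taken from the docs):

```json
{
  "type": "object",
  "properties": {
    "items": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {"name": {"type": "string"}},
        "required": ["name"]
      }
    }
  },
  "required": ["items"]
}
```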
Simon Willison
b829cd92e0 Show async in list of features, closes #806 2025-02-28 06:44:09 -08:00
Simon Willison
bf80b8a19b Schemas tutorial and cleaned up other schema docs, refs #788 2025-02-28 00:16:29 -08:00
Simon Willison
3a60290c82 llm logs --id-gt and --id-gte options, closes #801 2025-02-28 00:15:59 -08:00
Simon Willison
48f67f4085 llm logs --data-ids flag, closes #800 2025-02-27 20:31:50 -08:00
Simon Willison
1bebf8b34a --schema t:template-name option, plus improved schema docs
Closes #799, refs #788
2025-02-27 17:25:31 -08:00
Simon Willison
362bdc6dcc It's schema_object: not schema: 2025-02-27 17:12:06 -08:00
Simon Willison
98cccd294a llm models list --schemas option, closes #797
Also fixed bug where features showed even without --options, refs #796
2025-02-27 15:50:28 -08:00
Simon Willison
4a7a1f19ed Show features (including streaming) in llm models --options, closes #796 2025-02-27 15:44:38 -08:00
Simon Willison
6bec92fd78 Assign gpt-4.5 default alias, refs #795 2025-02-27 14:51:09 -08:00
Simon Willison
133d3bb173 Ran cog, refs #795 2025-02-27 14:50:02 -08:00
Simon Willison
74baf33a56 Moved some docs into schemas.md, refs #788 2025-02-27 11:20:50 -08:00
Simon Willison
a1ea85ecbd llm logs --schema-multi option 2025-02-27 11:20:23 -08:00
Simon Willison
6957e4ecbb Improvements to schemas.md refs #788 2025-02-27 11:08:39 -08:00
Simon Willison
259366a575 llm schemas dsl run with cog, refs #793 2025-02-27 10:48:15 -08:00
Simon Willison
9a38021218 llm schemas dsl command, closes #793
Refs #790
2025-02-27 10:46:56 -08:00
Simon Willison
eb2b243fdf schema_dsl(..., multi=True) parameter, refs #790 2025-02-27 10:28:42 -08:00
Simon Willison
8d32b71ef1 Renamed build_json_schema to schema_dsl 2025-02-27 10:22:29 -08:00
Simon Willison
7e819c2ffa Implemented --schema-multi, closes #791 2025-02-27 10:12:21 -08:00
Simon Willison
321636e791 New schema DSL, closes #790
Plus made a start on schemas.md refs #788
2025-02-27 09:48:44 -08:00
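The schema DSL introduced above offers a compact alternative to writing JSON schema by hand. As a rough sketch of how such a DSL might map to a schema (an illustrative toy parser, not the project's actual `schema_dsl()` implementation, which handles more syntax):

```python
def schema_dsl_sketch(dsl: str) -> dict:
    """Toy parser for a 'name, age int, bio: a short bio' style DSL.

    Each comma-separated field may have an optional type word and an
    optional ': description' suffix. Illustrative only: descriptions
    containing commas would confuse this naive split.
    """
    type_map = {"int": "integer", "float": "number", "str": "string", "bool": "boolean"}
    properties = {}
    for field in dsl.split(","):
        field, _, description = field.partition(":")
        parts = field.split()
        name = parts[0]
        prop = {"type": type_map.get(parts[1], "string") if len(parts) > 1 else "string"}
        if description.strip():
            prop["description"] = description.strip()
        properties[name] = prop
    return {"type": "object", "properties": properties, "required": list(properties)}


schema = schema_dsl_sketch("name, age int, bio: a short bio")
assert schema["properties"]["age"]["type"] == "integer"
```

The appeal of a DSL like this is that `--schema "name, age int"` is far easier to type at a shell prompt than the equivalent inline JSON.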
Simon Willison
523fc4f1a3 Fixed typo 2025-02-27 07:48:10 -08:00
Simon Willison
edc9e2dd7e Basic docs for llm schema list, closes #781 2025-02-27 07:46:34 -08:00
Simon Willison
c126b5d04c llm schemas list --full option
Refs https://github.com/simonw/llm/issues/781#issuecomment-2688342157
2025-02-27 07:43:41 -08:00
Simon Willison
4908cdfbd2 llm schemas show X command, refs #781 2025-02-27 07:39:36 -08:00
Simon Willison
99a1adcece Initial llm schemas list implementation, refs #781 2025-02-27 07:35:48 -08:00
Simon Willison
a0845874ec
Schema template --save --schema support
* Don't hang on stdin if llm -t template-with-schema
* Docs on using schemas with templates
* Schema in template YAML file example
* Test for --save with --schema

Refs #778
2025-02-27 07:19:15 -08:00
Simon Willison
f35ac31c21
llm logs --schema, --data, --data-array and --data-key options (#785)
* llm logs --schema option, refs #782
* --data and --data-array and --data-key options, refs #782
* Tests for llm logs --schema options, refs #785
* Also implemented --schema ID lookup, refs #780
* Using --data-key implies --data
* Docs for llm logs --schema and --data etc
2025-02-26 21:51:08 -08:00
Kasper Primdal Lauritzen
6cb16a1d1a
Allow "reasoning" for extra-openai-models.yaml (#766)
* Allow "reasoning" for extra-openai-models.yaml

Currently you get an error when trying to use `-o reasoning_effort high` with a model that has been defined in `extra-openai-models.yaml`. 
This allows a `reasoning` field.

* Mention reasoning: true in other OpenAI models docs

---------

Co-authored-by: Simon Willison <swillison@gmail.com>
2025-02-26 21:50:14 -08:00
Simon Willison
f5c2cfba96 Note about Pydantic v1 support in changelog for 0.23a0
Refs #520, #775
2025-02-26 17:07:41 -08:00
Simon Willison
42122c79ba Release 0.23a0
Refs #776, #777
2025-02-26 17:05:13 -08:00
Simon Willison
62c90dd472
llm prompt --schema X option and model.prompt(..., schema=) parameter (#777)
Refs #776

* Implemented new llm prompt --schema and model.prompt(schema=)
* Log schema to responses.schema_id and schemas table
* Include schema in llm logs Markdown output
* Test for schema=pydantic_model
* Initial --schema CLI documentation
* Python docs for schema=
* Advanced plugin docs on schemas
2025-02-26 16:58:28 -08:00
Tomoko Uchida
eda1f4f588
Add note about similarity function in "similar" command's doc (#774)
* note about similarity function in similar command doc
* Link to Wikipedia definition

---------

Co-authored-by: Simon Willison <swillison@gmail.com>
2025-02-26 10:07:10 -08:00
Simon Willison
e46cb7e761 Update docs to no longer mention PaLM
!stable-docs
2025-02-16 22:37:00 -08:00
Simon Willison
64f9f2ef52 Promote llm-mlx in changelog and plugin directory
!stable-docs
2025-02-16 22:29:32 -08:00
Simon Willison
0eab3f5ff3 Link to 0.22 annotated release notes
!stable-docs
2025-02-16 22:20:40 -08:00
Simon Willison
b8b030fc58 Release 0.22
Refs #737, #742, #744, #745, #748, #752
2025-02-16 20:34:48 -08:00
Simon Willison
c053e214ec include token usage information, not get - refs #756 2025-02-16 15:18:04 -08:00
Simon Willison
53d6ecdd59 Documentation for logs --short, refs #756 2025-02-16 15:16:51 -08:00
Simon Willison
6c6b100f3e
KeyModel and AsyncKeyModel classes for models that take keys (#753)
* New KeyModel and AsyncKeyModel classes for models that take keys - closes #744
* llm prompt --key now uses new mechanism, including for async
* use new key mechanism in llm chat command
* Python API tests for llm.KeyModel and llm.AsyncKeyModel
* Python API docs for prompt(... key="")
* Mention await model.prompt() takes other parameters, reorg sections
* Better title for the model tutorial
* Docs on writing model plugins that take a key
2025-02-16 14:38:51 -08:00
Simon Willison
8611d9203c Updated docs with new chatgpt-4o-latest model, refs #752 2025-02-15 17:46:07 -08:00
Simon Willison
747d92ea4f Docs for multiple -q option, closes #748 2025-02-13 16:01:02 -08:00
Simon Willison
31e900e9e1 llm aliases set -q option, refs #749 2025-02-13 15:49:47 -08:00
Simon Willison
20c18a716d -q multiple option for llm models and llm embed-models
Refs #748
2025-02-13 15:35:18 -08:00
Simon Willison
9a1374b447
llm embed-multi --prepend option (#746)
* llm embed-multi --prepend option

Closes #745
2025-02-12 15:19:18 -08:00
Simon Willison
f67c21522b
Docs for response.json() and response.usage()
!stable-docs
2025-02-11 08:35:27 -08:00
Simon Willison
41d64a8f12
llm logs --prompts option (#737)
Closes #736
2025-02-02 12:03:01 -08:00
Simon Willison
21df241443 llm-claude-3 is now called llm-anthropic
Refs https://github.com/simonw/llm-claude-3/issues/31

!stable-docs
2025-02-01 22:08:19 -08:00
Simon Willison
f8dcc67455 Release 0.21
Refs #717, #728
2025-01-31 12:35:10 -08:00
Simon Willison
eb0e1e761b o3-mini and reasoning_effort option, refs #728 2025-01-31 12:14:02 -08:00
Simon Willison
656d8fa3c4
--xl/--extract-last flag for prompt and log list commands (#718)
Closes #717
2025-01-24 10:52:46 -08:00
Simon Willison
e449fd4f46
Typo fix
!stable-docs
2025-01-22 22:17:07 -08:00
Simon Willison
3e88628602 uv tool upgrade llm, refs #702
!stable-docs
2025-01-22 21:08:16 -08:00
Simon Willison
bf10f63d3d
Mention gpt-4o-mini-audio-preview too #677
!stable-docs
2025-01-22 21:06:12 -08:00
Simon Willison
eb996baeab Documentation for model.attachment_types, closes #705 2025-01-22 20:46:28 -08:00
Simon Willison
2b9a1bbc50 Fixed broken link 2025-01-22 20:39:01 -08:00
Simon Willison
dc127d2a87 Release 0.20
Refs #654, #676, #677, #681, #688, #690, #700, #702, #709
2025-01-22 20:36:10 -08:00
Simon Willison
57d3baac42 Update embedding model names in docs, refs #654
Also ran Black.
2025-01-22 20:35:17 -08:00
Ryan Patterson
59983740e6
Update directory.md (#666) 2025-01-18 14:52:51 -08:00
abrasumente
e1388b27fe
Add llm-deepseek plugin (#517) 2025-01-11 18:56:34 -08:00
Steven Weaver
2b6b00641c
Update tutorial-model-plugin.md (#685)
pydantic.org -> pydantic.dev
2025-01-11 12:05:05 -08:00
Amjith Ramanujam
e3c104b136
Show the default model when listing all available models. (#688) 2025-01-11 12:04:39 -08:00
Simon Willison
1d75792f9b More uv/uvx tips, closes #702
Refs #690
2025-01-11 10:06:32 -08:00
Ariel Marcus
d964d02e90
Add installation docs with uv (#690) 2025-01-11 09:57:10 -08:00
watany
1c61b5addd
doc(plugin): adding AmazonBedrock (#698) 2025-01-10 16:42:39 -08:00
Arjan Mossel
4f4f9bc07d
Add llm-venice to plugin directory (#699) 2025-01-10 16:41:21 -08:00
Simon Willison
6baf1f7d83 o1
Closes #676
2025-01-10 15:57:06 -08:00
Csaba Henk
88a8cfd9e4
llm logs -x/--extract option (#693)
* llm logs -x/--extract option
* Update docs/help.md for llm logs -x
* Added test for llm logs -x/--extract, refs #693
* llm logs -xr behaves same as llm logs -x
* -x/--extract in llm logging docs

---------

Co-authored-by: Simon Willison <swillison@gmail.com>
2025-01-10 15:53:04 -08:00
Simon Willison
b452effa09 llm models -q/--query option, closes #700 2025-01-09 11:37:33 -08:00
Simon Willison
000e984def --extract support for templates, closes #681 2024-12-19 07:16:48 -08:00
Simon Willison
67d4a99645 llm prompt -x/--extract option, closes #681 2024-12-19 06:40:05 -08:00
Simon Willison
6305b86026 gpt-4o-mini-audio-preview, closes #677 2024-12-17 20:28:57 -08:00
Simon Willison
8898584ba6 New OpenAI audio models, closes #677 2024-12-17 11:14:42 -08:00
Simon Willison
b8e8052229 Release 0.19.1
Refs #667
2024-12-05 13:47:28 -08:00
Simon Willison
e78fea17df Fragment hash on 0.19 release
!stable-docs
2024-12-01 16:09:55 -08:00
Simon Willison
c018104083 Release 0.19
Refs #495, #610, #640, #641, #644, #653
2024-12-01 15:58:27 -08:00
Simon Willison
f9af563df5 response.on_done() mechanism, closes #653 2024-12-01 15:47:23 -08:00
Simon Willison
335b3e635a Release 0.19a2
Refs #640
2024-11-20 20:12:43 -08:00
Simon Willison
c52cfee881 llm.get_models() and llm.get_async_models(), closes #640 2024-11-20 20:09:06 -08:00
Simon Willison
845322e970 Release 0.19a1
Refs #644
2024-11-19 21:28:01 -08:00
Simon Willison
02852fe1a5 Release 0.19a0
Refs #610, #641
2024-11-19 20:23:54 -08:00
Simon Willison
cfb10f4afd
Log input tokens, output tokens and token details (#642)
* Store input_tokens, output_tokens, token_details on Response, closes #610
* llm prompt -u/--usage option
* llm logs -u/--usage option
* Docs on tracking token usage in plugins
* OpenAI default plugin logs usage
2024-11-19 20:21:59 -08:00
Simon Willison
a6d62b7ec9 Release 0.18
Refs #507, #600, #603, #608, #611, #612, #614
2024-11-17 12:31:48 -08:00
Simon Willison
73823012ca Release 0.18a1
Refs #632
2024-11-14 15:10:39 -08:00
Simon Willison
cf172cc70a response.text_or_raise() workaround
Closes https://github.com/simonw/llm/issues/632
2024-11-14 15:08:41 -08:00
Simon Willison
041730d8b2 Release 0.18a0
Refs #507, #599, #600, #603, #608, #611, #612, #613, #614, #615, #616, #621, #622, #623, #626, #629
2024-11-13 17:55:28 -08:00
Simon Willison
ba75c674cb
llm.get_async_model(), llm.AsyncModel base class and OpenAI async models (#613)
- https://github.com/simonw/llm/issues/507#issuecomment-2458639308

* register_model is now async aware

Refs https://github.com/simonw/llm/issues/507#issuecomment-2458658134

* Refactor Chat and AsyncChat to use _Shared base class

Refs https://github.com/simonw/llm/issues/507#issuecomment-2458692338

* fixed function name

* Fix for infinite loop

* Applied Black

* Ran cog

* Applied Black

* Add Response.from_row() classmethod back again

It does not matter that this is a blocking call, since it is a classmethod

* Made mypy happy with llm/models.py

* mypy fixes for openai_models.py

I am unhappy with this, had to duplicate some code.

* First test for AsyncModel

* Still have not quite got this working

* Fix for not loading plugins during tests, refs #626

* audio/wav not audio/wave, refs #603

* Black and mypy and ruff all happy

* Refactor to avoid generics

* Removed obsolete response() method

* Support text = await async_mock_model.prompt("hello")

* Initial docs for llm.get_async_model() and await model.prompt()

Refs #507

* Initial async model plugin creation docs

* duration_ms ANY to pass test

* llm models --async option

Refs https://github.com/simonw/llm/pull/613#issuecomment-2474724406

* Removed obsolete TypeVars

* Expanded register_models() docs for async

* await model.prompt() now returns AsyncResponse

Refs https://github.com/simonw/llm/pull/613#issuecomment-2475157822

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-11-13 17:51:00 -08:00
Hiepler
5a984d0c87
docs: add llm-grok (#629)
Adds `llm-grok` for the xAI API (https://github.com/Hiepler/llm-grok) to the plugin directory.

!stable-docs
2024-11-13 17:21:04 -08:00
Simon Willison
7520671176 audio/wav not audio/wave, refs #603 2024-11-12 21:43:07 -08:00
Travis Northcutt
c0cb1697bc
Update default model information (#622)
The default model is now gpt-4o-mini; this change updates the usage page of the docs to reflect that
2024-11-12 19:06:16 -08:00
Simon Willison
dff53a9cae Better --help for llm keys get, refs #623 2024-11-11 09:53:24 -08:00
Simon Willison
561784df6e llm keys get command, refs #623 2024-11-11 09:47:13 -08:00
Simon Willison
febbc04fb6
Run cog -r in PRs, use that to update logging.md with new tables (#616)
* Create cog.yml
* Document attachments and prompt_attachments table schemas

Closes #615

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-11-06 06:56:19 -08:00
Simon Willison
98d2c19876 Promote alternative model providers in llm --help 2024-11-06 06:38:53 -08:00
Simon Willison
245e025270 Ran cog, refs #612 2024-11-05 23:45:17 -08:00
Chris Mungall
3b2e5263a3
Allow passing of can_stream in openai_models.py (#600)
* Allow passing of can_stream in openai_models.py

Fixes #599 

* Only set can_stream: false if it is false

Refs https://github.com/simonw/llm/pull/600#issuecomment-2458825866

* Docs for can_stream: false

---------

Co-authored-by: Simon Willison <swillison@gmail.com>
2024-11-05 23:04:13 -08:00
Simon Willison
12df1a3b2a Show attachment types in llm models --options, closes #612 2024-11-05 22:49:26 -08:00
Simon Willison
0cc4072bcd Support attachments without prompts, closes #611 2024-11-05 21:27:18 -08:00
Simon Willison
41cb5c3387 Ran cog, refs #608 2024-11-05 21:13:36 -08:00
Simon Willison
fe1e09706f
llm-lambda-labs
!stable-docs
2024-11-04 10:26:02 -08:00
Simon Willison
a44ba49c21 Release 0.17
Refs #587, #590, #591
2024-10-28 19:36:12 -07:00
Simon Willison
ba1ccb3a4a Release 0.17a0
Refs #587, #590
2024-10-28 15:46:52 -07:00
Simon Willison
1f822d820b Update docs with cog 2024-10-28 15:41:34 -07:00
Simon Willison
f0ed54abf1 Docs for CLI attachments, refs #587 2024-10-28 15:41:34 -07:00
Simon Willison
570a3eccae Python attachment documentation, plus fixed a mimetype detection bug
Refs #587
2024-10-28 15:41:34 -07:00
Simon Willison
1126393ba1 Docs for writing models that accept attachments, refs #587 2024-10-28 15:41:34 -07:00
Simon Willison
7e6031e382
llm-gguf, llm-jq
!stable-docs
2024-10-26 22:44:06 -07:00
Simon Willison
d654c95212 Release notes for 0.16 2024-09-12 16:20:12 -07:00
Simon Willison
bfcfd2c91b
o1-preview and o1-mini, refs #570 (#573) 2024-09-12 16:08:04 -07:00
Kian-Meng Ang
50520c7c1c
Fix typos (#567)
Found via `codespell -H -L wit,thre`

!stable-docs
2024-09-08 08:44:43 -07:00
Simon Willison
7d6ece2a31 Fix for broken markdown on openai-models page
Refs #558 !stable-docs
2024-08-25 18:03:46 -07:00
Simon Willison
6deed8f976 get_model() improvement, get_default_model() / set_default_model() now documented
Refs #553
2024-08-18 17:37:31 -07:00
Simon Willison
d075336c69 Release 0.15
Refs #515, #525, #536, #537
2024-07-18 12:31:14 -07:00
Simon Willison
562fefb374 Use 3-small in docs instead of ada-002
Spotted while working on #537
2024-07-18 12:23:49 -07:00
Simon Willison
fcba89d73b Update docs to reflect new gpt-4o-mini default, refs #536 2024-07-18 12:16:03 -07:00
Simon Donohue
50454c1957
Update outdated reference to gpt-4-turbo (#525)
Looks like this alias was overlooked in 8171c9a. This commit makes it
match the usage of gpt-4o in the associated example.
2024-07-18 12:10:40 -07:00
Simon Willison
2881576dd0 Re-ran cog, refs #536 2024-07-18 12:00:35 -07:00
Simon Willison
96db13f537
Link to new video
!stable-docs
2024-06-17 10:18:24 -07:00
Simon Willison
68df9721de
github repo static badge
!stable-docs
2024-05-13 18:41:07 -07:00
Simon Willison
45245413bd
GitHub stars badge
!stable-docs
2024-05-13 15:09:56 -07:00
Simon Willison
9a3236db61 gpt-4-turbo model ID, closes #493 2024-05-13 13:37:23 -07:00
Simon Willison
ab1cc4fd1f Release 0.14
Refs #404, #431, #470, #490, #491
2024-05-13 13:26:48 -07:00
Fabian Labat
6cdc29c8d6
Update directory.md (#486)
* Update directory.md

Added support for Bedrock Llama 3
2024-05-13 13:01:33 -07:00
Simon Willison
3cc588f247 List llm-llamafile in plugins directory, closes #470 2024-05-13 12:55:22 -07:00
Simon Willison
8171c9a6bf Update help for GPT-4o, closes #490 2024-05-13 12:53:31 -07:00
Simon Willison
73bbbec372 gpt-4o model, refs #490 2024-05-13 12:49:45 -07:00
Simon Willison
04915e95f8
llm-groq
!stable-docs
2024-04-21 20:33:23 -07:00