Update docs to reflect new gpt-4o-mini default, refs #536

This commit is contained in:
Simon Willison 2024-07-18 12:15:56 -07:00
parent 50454c1957
commit fcba89d73b
3 changed files with 4 additions and 4 deletions

View file

@@ -62,10 +62,10 @@ llm -m orca-mini-3b-gguf2-q4_0 'What is the capital of France?'
```
To start {ref}`an interactive chat <usage-chat>` with a model, use `llm chat`:
```bash
-llm chat -m chatgpt
+llm chat -m gpt-4o-mini
```
```
-Chatting with gpt-3.5-turbo
+Chatting with gpt-4o-mini
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
> Tell me a joke about a pelican

View file

@@ -46,7 +46,7 @@ OpenAI Completion: gpt-3.5-turbo-instruct (aliases: 3.5-instruct, chatgpt-instruct)
See [the OpenAI models documentation](https://platform.openai.com/docs/models) for details of each of these.
-`gpt-3.5-turbo` (aliased to `3.5`) is the least expensive model. `gpt-4o` (aliased to `4o`) is the newest, cheapest and fastest of the GPT-4 family of models.
+`gpt-4o-mini` (aliased to `4o-mini`) is the least expensive model, and is the default if you don't specify a model at all. `gpt-4o` (aliased to `4o`) is the newest and fastest of the GPT-4 family of models.
The `gpt-3.5-turbo-instruct` model is a little different - it is a completion model rather than a chat model, described in [the OpenAI completions documentation](https://platform.openai.com/docs/api-reference/completions/create).
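As an illustrative sketch (not part of the diff itself; the prompt text is made up, and the `3.5-instruct` alias comes from the model list above), a completion model is invoked with the same `llm -m` syntax as the chat models:

```bash
# Completion models take a text prompt to continue, rather than a chat turn
# (example prompt is hypothetical)
llm -m 3.5-instruct 'Once upon a time, a pelican'
```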

View file

@@ -136,7 +136,7 @@ You can configure LLM in a number of different ways.
### Setting a custom default model
-The model used when calling `llm` without the `-m/--model` option defaults to `gpt-3.5-turbo` - the fastest and least expensive OpenAI model, and the same model family that powers ChatGPT.
+The model used when calling `llm` without the `-m/--model` option defaults to `gpt-4o-mini` - the fastest and least expensive OpenAI model.
You can use the `llm models default` command to set a different default model. For GPT-4 (slower and more expensive, but more capable) run this:
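The hunk ends before the command itself; based on the surrounding text, it would look something like this (the `gpt-4` model ID is shown as an illustration):

```bash
# Set a different default model (gpt-4 used here as an example)
llm models default gpt-4
# With no argument, prints the current default model
llm models default
```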