gpt-4o model, refs #490

Simon Willison 2024-05-13 12:49:45 -07:00
parent 04915e95f8
commit 73bbbec372
4 changed files with 24 additions and 3 deletions


@@ -29,6 +29,7 @@ gpt4 : gpt-4
 gpt-4-turbo : gpt-4-turbo-preview
 4-turbo : gpt-4-turbo-preview
 4t : gpt-4-turbo-preview
+4o : gpt-4o
 3.5-instruct : gpt-3.5-turbo-instruct
 chatgpt-instruct : gpt-3.5-turbo-instruct
 ada : ada-002 (embedding)
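The table in this hunk uses an `alias : model` layout. As an aside, lines in that format parse naturally into a lookup dict; the parser below is purely illustrative (the sample lines are copied from the table, but the parsing code is not part of llm):

```python
# Sample lines copied from the aliases table above.
table = """\
4t : gpt-4-turbo-preview
4o : gpt-4o
3.5-instruct : gpt-3.5-turbo-instruct
ada : ada-002 (embedding)
"""

aliases = {}
for line in table.splitlines():
    alias, _, model = line.partition(" : ")
    # Strip trailing annotations such as "(embedding)" after the model ID.
    aliases[alias.strip()] = model.split(" (")[0].strip()

print(aliases["4o"])  # gpt-4o
```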


@@ -23,16 +23,24 @@ Then paste in the API key.
 Run `llm models` for a full list of available models. The OpenAI models supported by LLM are:
+<!-- [[[cog
+from click.testing import CliRunner
+from llm.cli import cli
+result = CliRunner().invoke(cli, ["models", "list"])
+models = [line for line in result.output.split("\n") if line.startswith("OpenAI ")]
+cog.out("```\n{}```".format("\n".join(models)))
+]]] -->
 ```
 OpenAI Chat: gpt-3.5-turbo (aliases: 3.5, chatgpt)
-OpenAI Chat: gpt-3.5-turbo-16k (aliases: chatgpt-16k, 3.5-16k, turbo)
+OpenAI Chat: gpt-3.5-turbo-16k (aliases: chatgpt-16k, 3.5-16k)
 OpenAI Chat: gpt-4 (aliases: 4, gpt4)
 OpenAI Chat: gpt-4-32k (aliases: 4-32k)
 OpenAI Chat: gpt-4-1106-preview
 OpenAI Chat: gpt-4-0125-preview
 OpenAI Chat: gpt-4-turbo-preview (aliases: gpt-4-turbo, 4-turbo, 4t)
-OpenAI Completion: gpt-3.5-turbo-instruct (aliases: 3.5-instruct, chatgpt-instruct, instruct)
-```
+OpenAI Chat: gpt-4o (aliases: 4o)
+OpenAI Completion: gpt-3.5-turbo-instruct (aliases: 3.5-instruct, chatgpt-instruct)```
+<!-- [[[end]]] -->
 See [the OpenAI models documentation](https://platform.openai.com/docs/models) for details of each of these.
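The cog block in this hunk regenerates the docs by filtering `llm models` output down to lines starting with `OpenAI `. The same filtering can be sketched standalone; the sample output below is made up for illustration (the real block captures it with click's `CliRunner`):

```python
# Hardcoded stand-in for `llm models` output; the non-OpenAI line
# is a made-up example showing why the filter is needed.
output = """\
OpenAI Chat: gpt-4o (aliases: 4o)
Anthropic Messages: claude-3-opus
OpenAI Completion: gpt-3.5-turbo-instruct (aliases: 3.5-instruct, chatgpt-instruct)
"""

models = [line for line in output.split("\n") if line.startswith("OpenAI ")]
# Unlike the committed template, this adds a "\n" before the closing
# fence so it lands on its own line.
print("```\n{}\n```".format("\n".join(models)))
```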


@@ -315,6 +315,16 @@ OpenAI Chat: gpt-4-turbo-preview (aliases: gpt-4-turbo, 4-turbo, 4t)
   logit_bias: dict, str
   seed: int
   json_object: boolean
+OpenAI Chat: gpt-4o (aliases: 4o)
+  temperature: float
+  max_tokens: int
+  top_p: float
+  frequency_penalty: float
+  presence_penalty: float
+  stop: str
+  logit_bias: dict, str
+  seed: int
+  json_object: boolean
 OpenAI Completion: gpt-3.5-turbo-instruct (aliases: 3.5-instruct, chatgpt-instruct)
   temperature: float
     What sampling temperature to use, between 0 and 2. Higher values like
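This hunk documents that gpt-4o accepts the same option set as the other chat models. A minimal sketch of validating user-supplied options against such a declared schema follows; the schema dict and `validate_options` function are hypothetical, not llm's implementation:

```python
# Hypothetical schema mirroring the gpt-4o options listed above;
# logit_bias (dict, str) is omitted for simplicity.
GPT_4O_OPTIONS = {
    "temperature": float,
    "max_tokens": int,
    "top_p": float,
    "frequency_penalty": float,
    "presence_penalty": float,
    "stop": str,
    "seed": int,
    "json_object": bool,
}

def validate_options(options: dict) -> dict:
    # Coerce each supplied value to its declared type, rejecting
    # option names the model does not accept.
    validated = {}
    for name, value in options.items():
        if name not in GPT_4O_OPTIONS:
            raise ValueError(f"Unknown option: {name}")
        validated[name] = GPT_4O_OPTIONS[name](value)
    return validated

print(validate_options({"temperature": "0.7", "seed": "42"}))
# {'temperature': 0.7, 'seed': 42}
```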


@@ -31,6 +31,8 @@ def register_models(register):
     register(Chat("gpt-4-1106-preview"))
     register(Chat("gpt-4-0125-preview"))
     register(Chat("gpt-4-turbo-preview"), aliases=("gpt-4-turbo", "4-turbo", "4t"))
+    # GPT-4o
+    register(Chat("gpt-4o"), aliases=("4o",))
     # The -instruct completion model
     register(
         Completion("gpt-3.5-turbo-instruct", default_max_tokens=256),
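The plugin hook in this last hunk registers each model once and attaches its aliases at registration time. A minimal sketch of a registry built around that pattern is below; the `Registry` and `Chat` classes here are illustrative stand-ins, not llm's actual code:

```python
class Chat:
    # Illustrative stand-in for llm's Chat model class.
    def __init__(self, model_id):
        self.model_id = model_id


class Registry:
    def __init__(self):
        self._models = {}   # model_id -> model object
        self._aliases = {}  # alias -> model_id

    def register(self, model, aliases=()):
        self._models[model.model_id] = model
        for alias in aliases:
            self._aliases[alias] = model.model_id

    def get_model(self, name):
        # Accept either a full model ID or one of its aliases.
        model_id = self._aliases.get(name, name)
        return self._models[model_id]


registry = Registry()
registry.register(Chat("gpt-4-turbo-preview"), aliases=("gpt-4-turbo", "4-turbo", "4t"))
registry.register(Chat("gpt-4o"), aliases=("4o",))  # the registration this commit adds

print(registry.get_model("4o").model_id)  # gpt-4o
```

Registering the canonical ID once and mapping every alias to it keeps alias lookup a single dict indirection, which is why new aliases like `4o` cost one tuple entry in the diff above.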