o4-mini, closes #976

Simon Willison 2025-05-04 16:04:28 -07:00
parent 5d0a2bba59
commit 8e68c5e2d9
3 changed files with 24 additions and 0 deletions

@@ -58,6 +58,7 @@ OpenAI Chat: o1-preview
 OpenAI Chat: o1-mini
 OpenAI Chat: o3-mini
 OpenAI Chat: o3
+OpenAI Chat: o4-mini
 OpenAI Completion: gpt-3.5-turbo-instruct (aliases: 3.5-instruct, chatgpt-instruct)
 ```
 <!-- [[[end]]] -->

@@ -935,6 +935,25 @@ OpenAI Chat: o3
   Keys:
     key: openai
     env_var: OPENAI_API_KEY
+OpenAI Chat: o4-mini
+  Options:
+    temperature: float
+    max_tokens: int
+    top_p: float
+    frequency_penalty: float
+    presence_penalty: float
+    stop: str
+    logit_bias: dict, str
+    seed: int
+    json_object: boolean
+    reasoning_effort: str
+  Features:
+  - streaming
+  - schemas
+  - async
+  Keys:
+    key: openai
+    env_var: OPENAI_API_KEY
 OpenAI Completion: gpt-3.5-turbo-instruct (aliases: 3.5-instruct, chatgpt-instruct)
   Options:
     temperature: float

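The new `o4-mini` entry declares the same typed options as the other reasoning models, including `reasoning_effort: str`. A minimal sketch of how `-o name value` string pairs could be coerced to those declared types (a hypothetical helper for illustration, not llm's actual implementation):

```python
# Option name -> type, mirroring the listing above.
OPTION_TYPES = {
    "temperature": float,
    "max_tokens": int,
    "top_p": float,
    "frequency_penalty": float,
    "presence_penalty": float,
    "stop": str,
    "seed": int,
    "json_object": bool,
    "reasoning_effort": str,
}

def coerce_option(name, raw):
    """Convert a raw CLI string to the declared option type."""
    typ = OPTION_TYPES.get(name, str)
    if typ is bool:
        return raw.lower() in ("1", "true", "yes")
    return typ(raw)

print(coerce_option("temperature", "0.7"))        # 0.7
print(coerce_option("reasoning_effort", "high"))  # high
```

In the real library these options are validated with Pydantic models rather than a plain dict, so treat this purely as a sketch of the typing shown in the help output.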
@@ -121,6 +121,10 @@ def register_models(register):
         Chat("o3", reasoning=True, supports_schema=True),
         AsyncChat("o3", reasoning=True, supports_schema=True),
     )
+    register(
+        Chat("o4-mini", reasoning=True, supports_schema=True),
+        AsyncChat("o4-mini", reasoning=True, supports_schema=True),
+    )
     # The -instruct completion model
     register(
         Completion("gpt-3.5-turbo-instruct", default_max_tokens=256),
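The diff registers a paired sync and async model class under the same model ID, following the pattern used for `o3` directly above. A self-contained sketch of that registry pattern, with simplified stand-in classes rather than llm's actual `Chat`/`AsyncChat` implementations:

```python
from dataclasses import dataclass

@dataclass
class Chat:
    model_id: str
    reasoning: bool = False
    supports_schema: bool = False

@dataclass
class AsyncChat(Chat):
    pass

MODELS = {}

def register(model, async_model=None):
    # Index both variants by model_id so a lookup like
    # "llm -m o4-mini" can resolve either one.
    MODELS[model.model_id] = (model, async_model)

register(
    Chat("o4-mini", reasoning=True, supports_schema=True),
    AsyncChat("o4-mini", reasoning=True, supports_schema=True),
)

sync_model, async_model = MODELS["o4-mini"]
print(sync_model.reasoning)  # True
```

The `reasoning=True` flag is what enables the `reasoning_effort` option shown in the models listing, and `supports_schema=True` corresponds to the `schemas` feature.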