llm/default_plugins: add o3 model (#945)

* llm/default_plugins: add o3 model

This is the newest model released by OpenAI and is available through
the API.

* Ran cog

---------

Co-authored-by: Simon Willison <swillison@gmail.com>
Kevin Burke 2025-05-04 16:01:55 -07:00 committed by GitHub
parent 8c7d33ee52
commit 5d0a2bba59
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
3 changed files with 24 additions and 0 deletions

@@ -57,6 +57,7 @@ OpenAI Chat: o1-2024-12-17
 OpenAI Chat: o1-preview
 OpenAI Chat: o1-mini
 OpenAI Chat: o3-mini
+OpenAI Chat: o3
 OpenAI Completion: gpt-3.5-turbo-instruct (aliases: 3.5-instruct, chatgpt-instruct)
 ```
 <!-- [[[end]]] -->

@@ -916,6 +916,25 @@ OpenAI Chat: o3-mini
   Keys:
     key: openai
     env_var: OPENAI_API_KEY
+OpenAI Chat: o3
+  Options:
+    temperature: float
+    max_tokens: int
+    top_p: float
+    frequency_penalty: float
+    presence_penalty: float
+    stop: str
+    logit_bias: dict, str
+    seed: int
+    json_object: boolean
+    reasoning_effort: str
+  Features:
+  - streaming
+  - schemas
+  - async
+  Keys:
+    key: openai
+    env_var: OPENAI_API_KEY
 OpenAI Completion: gpt-3.5-turbo-instruct (aliases: 3.5-instruct, chatgpt-instruct)
   Options:
     temperature: float
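
The options listed for o3 are a name-to-type table. As a minimal self-contained sketch of that mapping (hypothetical helper names, not the llm library's actual validation code), assuming option values arrive as strings from the command line:

```python
# Hypothetical sketch of the o3 option table as a name -> coercer mapping.
# These names and helpers are illustrative, not llm's internals.
def _to_bool(value):
    # Coerce CLI-style string values like "true"/"false" to bool.
    if isinstance(value, bool):
        return value
    return str(value).lower() in ("1", "true", "yes")

O3_OPTIONS = {
    "temperature": float,
    "max_tokens": int,
    "top_p": float,
    "frequency_penalty": float,
    "presence_penalty": float,
    "stop": str,
    "seed": int,
    "json_object": _to_bool,
    "reasoning_effort": str,  # e.g. "low", "medium", "high"
}  # logit_bias (dict or str) omitted for brevity

def validate_options(raw):
    """Coerce raw option values to their declared types; reject unknown names."""
    out = {}
    for name, value in raw.items():
        if name not in O3_OPTIONS:
            raise KeyError(f"Unknown option: {name}")
        out[name] = O3_OPTIONS[name](value)
    return out
```

`reasoning_effort` is the one option specific to the reasoning models; the rest are shared with the other OpenAI chat models in the listing.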

@@ -117,6 +117,10 @@ def register_models(register):
         Chat("o3-mini", reasoning=True, supports_schema=True),
         AsyncChat("o3-mini", reasoning=True, supports_schema=True),
     )
+    register(
+        Chat("o3", reasoning=True, supports_schema=True),
+        AsyncChat("o3", reasoning=True, supports_schema=True),
+    )
     # The -instruct completion model
     register(
         Completion("gpt-3.5-turbo-instruct", default_max_tokens=256),
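
The hunk above registers the same model id twice: once as a sync `Chat` wrapper and once as an `AsyncChat` wrapper. A self-contained sketch of that pattern (class bodies and the registry here are stand-ins, not llm's actual plugin machinery):

```python
# Sketch of the register() pattern in the diff above; Chat/AsyncChat
# bodies and MODEL_REGISTRY are stand-ins, not llm's implementations.
class Chat:
    def __init__(self, model_id, reasoning=False, supports_schema=False):
        self.model_id = model_id
        self.reasoning = reasoning
        self.supports_schema = supports_schema

class AsyncChat(Chat):
    pass

MODEL_REGISTRY = []

def register(*models):
    # Collect registered model instances (stand-in for the plugin hook).
    MODEL_REGISTRY.extend(models)

register(
    Chat("o3", reasoning=True, supports_schema=True),
    AsyncChat("o3", reasoning=True, supports_schema=True),
)
```

After this commit, `llm -m o3 "prompt"` resolves the new model, with the async variant backing `llm`'s async API.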