# Models
List available Codex models and their capabilities.
## Listing Models
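A minimal sketch of the listing call. Since this page does not show client construction, the stub `Client` and sample values below are illustrative placeholders, not the SDK's actual types:

```python
from dataclasses import dataclass


@dataclass
class Model:
    slug: str
    display_name: str


class _Models:
    def list(self) -> list[Model]:
        # The real method hits the API; this stub returns sample data.
        return [Model("gpt-5.1-codex-mini", "GPT-5.1 Codex Mini")]


class Client:
    """Stands in for a real, authenticated client."""

    def __init__(self) -> None:
        self.models = _Models()


client = Client()
for model in client.models.list():
    print(f"{model.slug}: {model.display_name}")
```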
### Model Fields

| Field | Type | Description |
|---|---|---|
| `slug` | `str` | Model identifier (e.g. `"gpt-5.1-codex-mini"`) |
| `display_name` | `str` | Human-readable name |
| `context_window` | `int \| None` | Maximum context length in tokens |
| `reasoning_levels` | `list[str]` | Supported reasoning effort levels |
| `input_modalities` | `list[str]` | Supported input types (e.g. `["text", "image"]`) |
| `supports_parallel_tool_calls` | `bool` | Whether parallel tool calls are supported |
| `priority` | `int` | Display priority |
## Caching

Model results are cached for 5 minutes to avoid redundant API calls. Pass `force_refresh=True` to bypass the cache:

```python
# Uses the cached result if one is available
models = client.models.list()

# Bypasses the cache and fetches fresh data
models = client.models.list(force_refresh=True)
```
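The behavior described above can be sketched as a simple time-based cache. This is an illustration of the documented semantics (5-minute TTL, `force_refresh` bypass), not the SDK's internal implementation:

```python
import time
from typing import Any, Callable

_CACHE_TTL = 300.0  # seconds; the documented 5-minute window
_cache: dict[str, tuple[float, Any]] = {}


def cached(key: str, fetch: Callable[[], Any], force_refresh: bool = False) -> Any:
    """Return a cached value unless it is stale or a refresh is forced."""
    now = time.monotonic()
    entry = _cache.get(key)
    if not force_refresh and entry is not None and now - entry[0] < _CACHE_TTL:
        return entry[1]
    value = fetch()
    _cache[key] = (now, value)
    return value


# Count how many times the (placeholder) fetch actually runs.
calls: list[int] = []


def fetch_models() -> list[str]:
    calls.append(1)
    return ["gpt-5.1-codex-mini"]


cached("models", fetch_models)                      # fetches
cached("models", fetch_models)                      # served from cache
cached("models", fetch_models, force_refresh=True)  # fetches again
print(len(calls))  # 2
```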