
GPT-5.3-Codex

vtrix-gpt-5.3-codex

The most capable agentic coding model to date, optimized for use in Codex or similar agentic environments. Supports low, medium, high, and xhigh reasoning effort settings, a 400K-token context window, and up to 128K output tokens.

Authentication

authorization string required

All API requests require authentication via a Bearer token.

Get an API key:

Visit the API Key Management page to obtain your API key.

Usage:

Add the following header to each request:

Authorization: Bearer YOUR_API_KEY
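A minimal sketch of building this header in Python (`YOUR_API_KEY` is a placeholder, not a real key):

```python
# Headers for any call to this API. Replace YOUR_API_KEY with the
# key from the API Key Management page.
API_KEY = "YOUR_API_KEY"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```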

Parameters

model string required

Model ID to use for the request.

Value: vtrix-gpt-5.3-codex


messages array required

Array of message objects representing the conversation history.

role string required

Message role.

Options: user, assistant, system, developer

content string | array required

A plain text string, or an array of content parts for multimodal input. Supports text and image inputs.
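For example, a single user message can mix text and an image by passing an array of content parts. The part shapes below follow the common OpenAI-style convention and are an assumption here:

```python
# A user message whose content is a multimodal array: one text part and
# one image part (part shapes assume the OpenAI-style convention).
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this screenshot?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/shot.png"}},
    ],
}
```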


reasoning_effort string

Level of reasoning effort to apply. Higher effort may improve quality for complex coding tasks but increases latency and cost.

Options: low, medium, high, xhigh

Default: medium
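Putting the parameters above together, a request body for a complex coding task might look like this sketch (field names are taken from this parameter list; the prompt text is illustrative):

```python
# Request body raising reasoning effort for a harder coding task.
body = {
    "model": "vtrix-gpt-5.3-codex",
    "messages": [
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": "Refactor this function to be iterative."},
    ],
    "reasoning_effort": "high",  # low | medium | high | xhigh; default is medium
    "max_tokens": 4096,          # must be within 1 - 128000
}
```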


max_tokens integer

Maximum number of tokens to generate in the completion.

Range: 1 - 128000


temperature number

Sampling temperature to use.

Default: 1.0

Range: 0.0 - 2.0


top_p number

Nucleus sampling parameter.

Default: 1.0

Range: 0.0 - 1.0


stream boolean

Whether to stream the response back incrementally.

Default: false
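When streaming is enabled, OpenAI-compatible APIs typically deliver the response as server-sent events, where each `data:` line carries a JSON chunk and the stream ends with a `data: [DONE]` sentinel. That delivery format is an assumption here, not stated by this reference. A minimal parser sketch:

```python
import json

def parse_sse_line(line: str):
    """Parse one server-sent-event line into a JSON chunk.

    Returns None for non-data lines and for the [DONE] sentinel.
    """
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):]
    if payload.strip() == "[DONE]":
        return None
    return json.loads(payload)

# Chunk shape below assumes the OpenAI-style streaming delta format.
chunk = parse_sse_line('data: {"choices": [{"delta": {"content": "Hi"}}]}')
```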


functions array

List of functions the model may generate JSON inputs for.

name string required

Function name.

description string

Description of what the function does.

parameters object

Parameters the function accepts, described as a JSON Schema object.


function_call string | object

Controls how the model calls functions.

Options: none, auto, or {"name": "function_name"} to force a specific function

Default: auto
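A sketch of declaring one function and forcing the model to call it; the function name and JSON Schema here are illustrative, not part of this API:

```python
# Declare a single function and force the model to call it by name.
body = {
    "model": "vtrix-gpt-5.3-codex",
    "messages": [{"role": "user", "content": "Run the test suite."}],
    "functions": [
        {
            "name": "run_tests",  # illustrative function name
            "description": "Run the project's test suite and report failures.",
            "parameters": {  # JSON Schema describing the accepted arguments
                "type": "object",
                "properties": {"path": {"type": "string"}},
                "required": ["path"],
            },
        }
    ],
    # Force this specific function instead of letting the model decide ("auto"):
    "function_call": {"name": "run_tests"},
}
```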


response_format object

Format the model's output must conform to. Used for structured outputs.

type string required

Output format type.

Options: text, json_object, json_schema
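For instance, to require output conforming to a JSON Schema, a `response_format` object might look like this; the nested `json_schema` wrapper shape follows the common OpenAI-style convention and is an assumption here:

```python
# Structured-output request: constrain the model to this schema.
# The wrapper shape ("json_schema" key, "name", "schema") is assumed.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "bug_report",  # illustrative schema name
        "schema": {
            "type": "object",
            "properties": {
                "file": {"type": "string"},
                "line": {"type": "integer"},
            },
            "required": ["file", "line"],
        },
    },
}
```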


Response Format

id string

Unique identifier for the completion.


object string

Object type, always chat.completion.


created integer

Unix timestamp of when the completion was created.


model string

The model used for the completion.


choices array

Array of completion choices.

index integer

The index of this choice in the array.

message object

The generated message.

role string

The role of the message author.

content string

The content of the message.

finish_reason string

The reason the model stopped generating tokens.

Values: stop, length, function_call, content_filter


usage object

Token usage statistics.

prompt_tokens integer

Number of tokens in the prompt.

completion_tokens integer

Number of tokens in the completion.

total_tokens integer

Total number of tokens used.
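A sketch of reading the fields documented above from a parsed response; the response dict here is illustrative sample data matching the documented shape, not actual API output:

```python
# Illustrative response matching the documented shape (not real output).
response = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1700000000,
    "model": "vtrix-gpt-5.3-codex",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "def add(a, b):\n    return a + b"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 12, "completion_tokens": 20, "total_tokens": 32},
}

# Pull out the generated text and check whether output was cut off.
answer = response["choices"][0]["message"]["content"]
truncated = response["choices"][0]["finish_reason"] == "length"
```

Checking `finish_reason` for `length` is worthwhile in practice: it signals the completion hit `max_tokens` and the code in `answer` may be incomplete.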