GET /responses/{response_id}

Get a model response
curl --request GET \
  --url https://api.openai.com/v1/responses/{response_id} \
  --header "Authorization: Bearer $OPENAI_API_KEY"
{
  "id": "resp_67ccd3a9da748190baa7f1570fe91ac604becb25c45c1d41",
  "object": "response",
  "created_at": 1741476777,
  "status": "completed",
  "completed_at": 1741476778,
  "model": "gpt-4o-2024-08-06",
  "output": [
    {
      "type": "message",
      "id": "msg_67ccd3acc8d48190a77525dc6de64b4104becb25c45c1d41",
      "status": "completed",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "The image depicts a scenic landscape with a wooden boardwalk or pathway leading through lush, green grass under a blue sky with some clouds. The setting suggests a peaceful natural area, possibly a park or nature reserve. There are trees and shrubs in the background.",
          "annotations": []
        }
      ]
    }
  ],
  "parallel_tool_calls": true,
  "reasoning": {},
  "store": true,
  "background": false,
  "temperature": 1,
  "presence_penalty": 0,
  "frequency_penalty": 0,
  "text": {
    "format": {
      "type": "text"
    }
  },
  "tool_choice": "auto",
  "tools": [],
  "top_p": 1,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 328,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 52,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 380
  },
  "metadata": {}
}
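
The same request can be issued from Python. Below is a minimal sketch using the requests library against the endpoint documented above; it assumes the API key is available in the OPENAI_API_KEY environment variable, and the response_id value is a placeholder copied from the example response.

import os

import requests

# Retrieve a stored model response by ID. Assumes OPENAI_API_KEY is set
# in the environment; the response_id below is a placeholder.
response_id = "resp_67ccd3a9da748190baa7f1570fe91ac604becb25c45c1d41"

resp = requests.get(
    f"https://api.openai.com/v1/responses/{response_id}",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()
print(data["status"], data["model"])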

Path Parameters

response_id
string
required

The ID of the response to retrieve.

Query Parameters

include
string[]

Additional fields to include in the response.

include_obfuscation
boolean

When true, stream obfuscation will be enabled. Stream obfuscation adds random characters to streamed events to normalize payload sizes, as a mitigation against certain side-channel attacks.

starting_after
integer

The sequence number of the event after which to start streaming.

stream
boolean

If set to true, the model response data will be streamed to the client as a sequence of events as it becomes available.
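
As a sketch of how the streaming parameters combine: stream requests the response as a series of events, and starting_after lets a client resume from a known sequence number. The snippet below is a minimal illustration in Python; it assumes events arrive as server-sent-event lines of the form "data: {...}", and both the response ID and the resume point are placeholders.

import os

import requests

# Resume streaming a response's events after a known sequence number.
# The "data: ..." line format is an assumption about the event stream.
response_id = "resp_67ccd3a9da748190baa7f1570fe91ac604becb25c45c1d41"
last_seen = 5  # sequence number of the last event already processed

with requests.get(
    f"https://api.openai.com/v1/responses/{response_id}",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    params={"stream": "true", "starting_after": last_seen},
    stream=True,
    timeout=300,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if line and line.startswith("data: "):
            print(line[len("data: "):])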

Response

200 - application/json

The Response object matching the specified ID.

The complete response object that was returned by the Responses API.

id
string
required

The unique ID of the response that was created.

object
enum<string>
default:response
required

The object type, which is always response.

Available options:
response

created_at
integer
required

The Unix timestamp (in seconds) for when the response was created.

completed_at
integer | null
required

The Unix timestamp (in seconds) for when the response completed, or null if it has not completed.

status
string
required

The status of the response.

incomplete_details
Incomplete details · object
required

Details about why the response was incomplete, if applicable.

model
string
required

The model that generated this response.

previous_response_id
string | null
required

The ID of the previous response in the chain that was referenced, if any.

instructions
string | null
required

Additional instructions that were used to guide the model for this response.

output
(Message · object | Function call · object | Function call output · object | Reasoning item · object)[]
required

The output items that were generated by the model.

An item representing a message, tool call, tool output, reasoning, or other response element.
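
Because output can interleave messages, function calls, function call outputs, and reasoning items, callers typically filter items by their type field. A minimal sketch in Python, assuming the item shapes shown in the example response above (the helper name is hypothetical):

def collect_output_text(response: dict) -> str:
    # Concatenate the text of all assistant message items, skipping
    # function calls and reasoning items. Item shapes follow the
    # example response shown above.
    parts = []
    for item in response.get("output", []):
        if item.get("type") == "message":
            for block in item.get("content", []):
                if block.get("type") == "output_text":
                    parts.append(block["text"])
    return "".join(parts)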

error
Error · object
required

The error that occurred, if the response failed.

tools
Function · object[]
required

The tools that were available to the model during response generation.

tool_choice
string | object
required

How the model selected which tool (or tools) to use when generating the response.

truncation
enum<string>
required

How the input was truncated by the service when it exceeded the model context window.

Available options:
auto,
disabled

parallel_tool_calls
boolean
required

Whether the model was allowed to call multiple tools in parallel.

text
object
required

Configuration options for text output that were used.

top_p
number
required

The nucleus sampling parameter that was used for this response.

presence_penalty
number
required

The presence penalty that was used to penalize new tokens based on whether they appear in the text so far.

frequency_penalty
number
required

The frequency penalty that was used to penalize new tokens based on their frequency in the text so far.

top_logprobs
integer
required

The number of most likely tokens that were returned at each position, along with their log probabilities.

temperature
number
required

The sampling temperature that was used for this response.

reasoning
Reasoning · object
required

Reasoning configuration and outputs that were produced for this response.

usage
Usage · object
required

Token usage statistics that were recorded for the response, if available.

max_output_tokens
integer | null
required

The maximum number of tokens the model was allowed to generate for this response.

max_tool_calls
integer | null
required

The maximum number of tool calls the model was allowed to make while generating the response.

store
boolean
required

Whether this response was stored so it can be retrieved later.

background
boolean
required

Whether this request was run in the background.

service_tier
string
required

The service tier that was used for this response.

metadata
any
required

Developer-defined metadata that was associated with the response.

safety_identifier
string | null
required

A stable identifier that was used for safety monitoring and abuse detection.

prompt_cache_key
string | null
required

A key that was used to read from or write to the prompt cache.
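
As a usage sketch combining this endpoint with the status and background fields above: a response created in the background can be polled until it reaches a terminal status, then its usage can be read. This is a minimal illustration in Python; only the completed status appears in this document, so the other terminal statuses listed are assumptions, and the poll interval is arbitrary.

import os
import time

import requests

def wait_for_response(response_id: str, poll_seconds: float = 2.0) -> dict:
    # Poll GET /v1/responses/{response_id} until the response reaches a
    # terminal status. Only "completed" appears in this document; the
    # other terminal statuses here are assumptions.
    headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
    terminal = {"completed", "failed", "cancelled", "incomplete"}
    while True:
        data = requests.get(
            f"https://api.openai.com/v1/responses/{response_id}",
            headers=headers,
            timeout=30,
        ).json()
        if data.get("status") in terminal:
            return data
        time.sleep(poll_seconds)

final = wait_for_response("resp_67ccd3a9da748190baa7f1570fe91ac604becb25c45c1d41")
print(final["status"], final.get("usage", {}).get("total_tokens"))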