Documentation Index

Fetch the complete documentation index at: https://developers.scrunch.com/llms.txt

Use this file to discover all available pages before exploring further.

Overview

The Responses API provides row-level access to the full AI responses captured by Scrunch. Each record represents a single AI-generated answer observed on a supported platform and includes the complete response text, citation metadata, and brand and competitor evaluations. This API is designed for teams that need maximum fidelity into how AI platforms answer questions about their category, brand, and competitors. Typical use cases include ETL pipelines, research workflows, full-text analysis, citation audits, internal tooling, and advanced modeling.

What the Responses API includes

Each response record may include:
  • Full AI response text (markdown)
  • Citations, including URL, domain, snippet, title, and source type
  • Brand presence, sentiment, and position
  • Competitor presence, sentiment, and position
  • Prompt metadata (persona, tags, key topics, stage)
  • Platform, country, and collection timestamp
Each item corresponds to one AI response, not an aggregate or summary.

When to use the Responses API

Choose the Responses API if you need:
  • Per-response visibility instead of averages
  • The exact text produced by AI platforms
  • Citation-level analysis and influence modeling
  • Competitor comparisons within individual responses
  • Custom pipelines or internal UIs built on raw AI output
  • Daily or periodic ingestion jobs using high watermarks
This API is intentionally verbose and optimized for depth and accuracy rather than aggregation.

When not to use the Responses API

The Responses API is not ideal if you only need:
  • Aggregated metrics (presence percentage, position score, sentiment score)
  • Lightweight dashboards or BI reporting
  • Trend analysis over time without response text
For those use cases, the Query API is more efficient and better suited.

Data mutability and re-evaluation

Not all fields behave the same over time.

Immutable fields:
  • response_text
  • citations
  • created_at
These reflect exactly what was observed at the time the response was captured.

Fields that may be re-evaluated:
  • stage, tags, key_topics
  • brand_present, brand_sentiment, brand_position
  • competitors evaluation fields
These may change if prompt metadata is edited in the Scrunch UI, or if brand configuration is updated and re-evaluation is requested. For ETL workflows, always deduplicate or upsert using the globally unique id.

Request parameters

All parameters are optional.
| Parameter | Type | Description | Notes |
| --- | --- | --- | --- |
| platform | String (Enum) | AI platform to retrieve responses for | Valid values: chatgpt, claude, google_ai_overviews, perplexity, meta, google_ai_mode, google_gemini |
| prompt_id | Number | Specific prompt ID to retrieve responses for | |
| start_date | Date (YYYY-MM-DD) | Start date of responses to retrieve | Inclusive |
| end_date | Date (YYYY-MM-DD) | End date of responses to retrieve | Exclusive |
| limit | Number | Maximum number of responses to return | Must be greater than 1, max 1000 |
| offset | Number | Offset for pagination | Should be a multiple of limit |
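As a minimal Python sketch, a request URL for these parameters might be assembled as below. The base URL is a placeholder, not the documented endpoint, and authentication is omitted; only the query-string assembly reflects the parameters above:

```python
from urllib.parse import urlencode

# Placeholder base URL for illustration only; use the endpoint and
# authentication described in the Responses API Quickstart.
BASE_URL = "https://api.scrunch.example/responses"

def build_responses_url(platform=None, prompt_id=None, start_date=None,
                        end_date=None, limit=None, offset=None):
    """Assemble a request URL. All parameters are optional."""
    params = {
        "platform": platform,      # e.g. "chatgpt"
        "prompt_id": prompt_id,
        "start_date": start_date,  # YYYY-MM-DD, inclusive
        "end_date": end_date,      # YYYY-MM-DD, exclusive
        "limit": limit,            # max 1000
        "offset": offset,          # keep as a multiple of limit
    }
    query = urlencode({k: v for k, v in params.items() if v is not None})
    return f"{BASE_URL}?{query}" if query else BASE_URL

url = build_responses_url(platform="chatgpt", start_date="2024-06-01",
                          end_date="2024-06-02", limit=1000, offset=0)
```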
For incremental ingestion:
  1. Pull responses using a date window
  2. Store the created_at value from the latest record
  3. Use that internally as a high watermark
  4. Alternatively, load the full previous UTC day after midnight to ensure completeness

Denormalized response model

Each API item represents a single response with related data embedded as arrays.
  • citations
  • tags
  • key_topics
  • competitors
The API does not fan out rows for many-to-many relationships. If you are building dimensional tables or star schemas, you will need to normalize these arrays downstream. This differs intentionally from the Query API, which performs aggregation and grouping.
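A minimal sketch of that downstream normalization in Python, fanning the embedded citations array out into flat rows keyed by the response id (the same pattern applies to tags, key_topics, and competitors):

```python
def explode_citations(items):
    """Fan each response's embedded citations out into one row per
    (response, citation) pair, suitable for a normalized citations table."""
    rows = []
    for item in items:
        for citation in item.get("citations", []):
            rows.append({
                "response_id": item["id"],
                "url": citation["url"],
                "domain": citation["domain"],
                "source_type": citation["source_type"],
            })
    return rows
```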

Response schema

Collection

Responses use Scrunch’s standard paginated collection format.
| Field | Type | Description | Notes |
| --- | --- | --- | --- |
| total | Number | Total number of responses available for the current parameters | |
| offset | Number | Current offset retrieved | Add limit to this value to retrieve the next page |
| limit | Number | Limit applied to the current request | |
| items | Response[] | List of Response objects | See below |
To retrieve the next page:
offset = offset + limit
For stable ETL jobs, ensure offset increments in multiples of limit.
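That pagination rule can be sketched as a loop; fetch_page below stands in for whatever HTTP call returns one collection page for a given offset and limit:

```python
def fetch_all(fetch_page, limit=1000):
    """Collect every item from a paginated collection.

    fetch_page(offset, limit) must return one page in the collection
    format above: a dict with "total", "offset", "limit", and "items".
    """
    offset = 0
    items = []
    while True:
        page = fetch_page(offset, limit)
        items.extend(page["items"])
        offset += limit  # offset stays a multiple of limit
        if offset >= page["total"]:
            return items
```

Because total is returned on every page, the loop can stop as soon as the next offset would run past it.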

Response

| Field | Type | Description | Notes |
| --- | --- | --- | --- |
| id | Number | Unique ID for the response | Responses are immutable in normal operations; safe to deduplicate or upsert on id |
| created_at | Timestamp (UTC) | Granular timestamp the response was collected at | |
| prompt_id | Number | Unique ID for the prompt | Can be retrieved from the Prompts API |
| prompt | String | Prompt text | |
| persona_id | Number (Nullable) | Persona ID attached to the prompt | |
| persona_name | String (Nullable) | Persona name attached to the prompt | |
| country | String (Nullable) | 2-character ISO country code the response was retrieved for | |
| stage | String (Enum) | Stage of the customer journey | Advice, Awareness, Comparison, Evaluation, Other |
| tags | String[] | Tags attached to the prompt | |
| key_topics | String[] | Key topics attached to the prompt | |
| platform | String (Enum) | AI platform the response was retrieved from | See platform values under Request Parameters |
| brand_present | Boolean | Whether the brand is present in the response | |
| brand_sentiment | String (Enum) (Nullable) | Sentiment toward the brand | Null when brand not present. Values: Positive, Mixed, Negative, None |
| brand_position | String (Enum) (Nullable) | Position of the brand within the response | Null when brand not present. Values: Top, Middle, Bottom |
| competitors_present | String[] | List of competitor names found in the response | |
| response_text | String | Full AI-generated response text | Markdown format |
| citations | Citation[] | Citations included in the response | See Citation below |
| competitors | CompetitorEvaluation[] | Per-competitor evaluation data | Superset of the information in competitors_present. See CompetitorEvaluation below |

Citation

| Field | Type | Description | Notes |
| --- | --- | --- | --- |
| url | String | URL of the citation | |
| title | String (Nullable) | Title tag of the citation | Not exposed by all platforms |
| snippet | String (Nullable) | Search engine snippet for the citation | Often but not always from the meta description tag. Not exposed by all platforms |
| source_type | String (Enum) | Classification of the citation source | Brand, Competitor, Other ("Other" corresponds to third-party in the Scrunch UI) |
| domain | String | Domain name of the URL | Provided for convenience |
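For citation audits, one common first pass is tallying which third-party domains are cited across a batch of responses; a small sketch using the source_type and domain fields:

```python
from collections import Counter

def third_party_domains(items):
    """Count citations per domain where source_type is "Other"
    (third-party in the Scrunch UI)."""
    counts = Counter()
    for item in items:
        for citation in item.get("citations", []):
            if citation["source_type"] == "Other":
                counts[citation["domain"]] += 1
    return counts
```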

CompetitorEvaluation

| Field | Type | Description | Notes |
| --- | --- | --- | --- |
| name | String | Name of the competitor | |
| id | Number | Unique ID for the competitor | |
| present | Boolean | Whether the competitor is present in the response | |
| position | String (Enum) (Nullable) | Position of the competitor within the response | Null when present is false. Values: Top, Middle, Bottom |
| sentiment | String (Enum) (Nullable) | Sentiment toward the competitor | Null when present is false. Values: Positive, Mixed, Negative, None |
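Reading these evaluations out of a response item might look like the sketch below; sentiment and position are null for competitors whose present flag is false, so the helper keeps only competitors that actually appear:

```python
def competitor_sentiments(response_item):
    """Map competitor name -> sentiment for competitors present in the
    response. Absent competitors carry null sentiment and are skipped."""
    return {
        c["name"]: c["sentiment"]
        for c in response_item.get("competitors", [])
        if c["present"]
    }
```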

Recommended ingestion workflow

  1. Start with start_date (UTC)
  2. Pull responses in batches (limit=1000)
  3. Store created_at from the last record
  4. Use that as the new start date for your next batch
  5. Deduplicate using id (globally unique)
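The steps above can be sketched as a daily job; fetch_page is a stand-in for the actual API call, and the window's end_date is the following day because the parameter is exclusive:

```python
from datetime import date, timedelta

def ingest_day(fetch_page, day, seen_ids, store):
    """Ingest one UTC day of responses, deduplicating on the globally
    unique id.

    fetch_page(start_date, end_date, offset) is assumed to return the
    standard paginated collection for that window.
    """
    start = day.isoformat()
    end = (day + timedelta(days=1)).isoformat()  # end_date is exclusive
    offset, limit = 0, 1000
    while True:
        page = fetch_page(start, end, offset)
        for item in page["items"]:
            if item["id"] not in seen_ids:  # safe: id is immutable
                seen_ids.add(item["id"])
                store.append(item)
        offset += limit
        if offset >= page["total"]:
            break
```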

Typical downstream uses

Customers commonly use the Responses API to:
  • Audit AI hallucinations or brand misrepresentation
  • Analyze which third-party sources influence AI answers
  • Train internal RAG or evaluation systems
  • Perform NLP or sentiment analysis across competitors
  • Build internal review tools for AI output quality
  • Support custom reporting or research workflows

Get started with Responses

Responses API Quickstart →