Documentation Index
Fetch the complete documentation index at https://developers.scrunch.com/llms.txt and use it to discover all available pages before exploring further.
Overview
The Responses API provides row-level access to the full AI responses captured by Scrunch. Each record represents a single AI-generated answer observed on a supported platform and includes the complete response text, citation metadata, and brand and competitor evaluations.
This API is designed for teams that need full-fidelity visibility into how AI platforms answer questions about their category, brand, and competitors.
Typical use cases include ETL pipelines, research workflows, full-text analysis, citation audits, internal tooling, and advanced modeling.
What the Responses API includes
Each response record may include:
Full AI response text (markdown)
Citations, including URL, domain, snippet, title, and source type
Brand presence, sentiment, and position
Competitor presence, sentiment, and position
Prompt metadata (persona, tags, key topics, stage)
Platform, country, and collection timestamp
Each item corresponds to one AI response, not an aggregate or summary.
When to use the Responses API
Choose the Responses API if you need:
Per-response visibility instead of averages
The exact text produced by AI platforms
Citation-level analysis and influence modeling
Competitor comparisons within individual responses
Custom pipelines or internal UIs built on raw AI output
Daily or periodic ingestion jobs using high watermarks
This API is intentionally verbose and optimized for depth and accuracy rather than aggregation.
When not to use the Responses API
The Responses API is not ideal if you only need:
Aggregated metrics (presence percentage, position score, sentiment score)
Lightweight dashboards or BI reporting
Trend analysis over time without response text
For those use cases, the Query API is more efficient and better suited.
Data mutability and re-evaluation
Not all fields behave the same over time.
Immutable fields:
response_text
citations
created_at
These reflect exactly what was observed at the time the response was captured.
Fields that may be re-evaluated:
stage, tags, key_topics
brand_present, brand_sentiment, brand_position
competitor evaluation fields
These may change if prompt metadata is edited in the Scrunch UI or if brand configuration is updated and re-evaluation is requested.
For ETL workflows, always deduplicate or upsert using the globally unique id.
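Because re-evaluated fields can change while `id` never does, the safest write pattern is a last-write-wins upsert keyed on `id`. A minimal sketch, assuming records are plain dicts held in an in-memory store (substitute your warehouse's native upsert):

```python
def upsert_responses(store: dict, items: list[dict]) -> dict:
    """Insert new responses and overwrite re-evaluated ones, keyed by id.

    Immutable fields (response_text, citations, created_at) never change,
    but evaluation fields may, so the latest copy of a record always wins.
    """
    for item in items:
        store[item["id"]] = item
    return store

store: dict = {}
upsert_responses(store, [{"id": 1, "brand_present": False}])
# A later pull returns id 1 again after re-evaluation, plus a new record:
upsert_responses(store, [{"id": 1, "brand_present": True},
                         {"id": 2, "brand_present": False}])
# store now holds two records; id 1 reflects the re-evaluated values
```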
Request parameters
All parameters are optional.
| Parameter | Type | Description | Notes |
| --- | --- | --- | --- |
| platform | String (Enum) | AI platform to retrieve responses for | Valid values: chatgpt, claude, google_ai_overviews, perplexity, meta, google_ai_mode, google_gemini |
| prompt_id | Number | Specific prompt ID to retrieve responses for | |
| start_date | Date (YYYY-MM-DD) | Start date of responses to retrieve | Inclusive |
| end_date | Date (YYYY-MM-DD) | End date of responses to retrieve | Exclusive |
| limit | Number | Maximum number of responses to return | Must be greater than 1, max 1000 |
| offset | Number | Offset for pagination | Should be a multiple of limit |
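Assembling these parameters into a request URL is straightforward. A sketch using only the standard library; the base URL below is a placeholder assumption, not the documented endpoint, so consult the Scrunch API reference for the real one:

```python
from urllib.parse import urlencode

# Hypothetical base URL for illustration only.
BASE_URL = "https://api.scrunch.com/v1/responses"

params = {
    "platform": "chatgpt",
    "start_date": "2024-06-01",  # inclusive
    "end_date": "2024-06-02",    # exclusive
    "limit": 1000,               # max 1000
    "offset": 0,                 # should be a multiple of limit
}
url = f"{BASE_URL}?{urlencode(params)}"
```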
For incremental ingestion:
Pull responses using a date window
Store the created_at value from the latest record
Use that internally as a high watermark
Or load the previous UTC day after midnight to ensure completeness
Denormalized response model
Each API item represents a single response with related data embedded as arrays.
citations
tags
key_topics
competitors
The API does not fan out rows for many-to-many relationships.
If you are building dimensional tables or star schemas, you will need to normalize these arrays downstream.
This differs intentionally from the Query API, which performs aggregation and grouping.
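If you do need flat tables, the downstream normalization is a simple fan-out of each embedded array into its own child table keyed by the response `id`. A sketch (the `normalize` helper and table names are illustrative, not part of the API):

```python
def normalize(responses: list[dict]) -> dict[str, list[dict]]:
    """Fan the denormalized payload out into flat tables keyed by response id."""
    embedded = ("citations", "tags", "key_topics", "competitors")
    tables: dict[str, list[dict]] = {"responses": [], "citations": [],
                                     "tags": [], "key_topics": [], "competitors": []}
    for r in responses:
        rid = r["id"]
        # Scalar fields stay on the fact table; arrays become child rows.
        tables["responses"].append({k: v for k, v in r.items() if k not in embedded})
        for c in r.get("citations", []):
            tables["citations"].append({"response_id": rid, **c})
        for t in r.get("tags", []):
            tables["tags"].append({"response_id": rid, "tag": t})
        for t in r.get("key_topics", []):
            tables["key_topics"].append({"response_id": rid, "key_topic": t})
        for comp in r.get("competitors", []):
            tables["competitors"].append({"response_id": rid, **comp})
    return tables

sample = [{"id": 7, "platform": "chatgpt", "response_text": "example",
           "citations": [{"url": "https://example.com", "domain": "example.com"}],
           "tags": ["pricing"], "key_topics": ["cost"],
           "competitors": [{"id": 3, "name": "Acme", "present": True}]}]
tables = normalize(sample)
```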
Response schema
Collection
Responses use Scrunch’s standard paginated collection format.
| Field | Type | Description | Notes |
| --- | --- | --- | --- |
| total | Number | Total number of responses available for the current parameters | |
| offset | Number | Current offset retrieved | Add limit to this value to retrieve the next page |
| limit | Number | Limit applied to the current request | |
| items | Response[] | List of Response objects | See below |
To retrieve the next page, add limit to the current offset and repeat the request. For stable ETL jobs, ensure offset increments in multiples of limit.
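The offset arithmetic can be sketched as a small generator that always steps in multiples of limit:

```python
def page_offsets(total: int, limit: int):
    """Yield the offset for each page of a collection, stepping by limit."""
    offset = 0
    while offset < total:
        yield offset
        offset += limit

# For a collection of 2500 responses pulled 1000 at a time:
offsets = list(page_offsets(2500, 1000))
```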
Response
| Field | Type | Description | Notes |
| --- | --- | --- | --- |
| id | Number | Unique ID for the response | Responses are immutable in normal operations; safe to deduplicate or upsert on id |
| created_at | Timestamp (UTC) | Granular timestamp the response was collected at | |
| prompt_id | Number | Unique ID for the prompt | Can be retrieved from the Prompts API |
| prompt | String | Prompt text | |
| persona_id | Number (Nullable) | Persona ID attached to the prompt | |
| persona_name | String (Nullable) | Persona name attached to the prompt | |
| country | String (Nullable) | 2-character ISO country code the response was retrieved for | |
| stage | String (Enum) | Stage of the customer journey | Advice, Awareness, Comparison, Evaluation, Other |
| tags | String[] | Tags attached to the prompt | |
| key_topics | String[] | Key topics attached to the prompt | |
| platform | String (Enum) | AI platform the response was retrieved from | See platform values under Request parameters |
| brand_present | Boolean | Whether the brand is present in the response | |
| brand_sentiment | String (Enum) (Nullable) | Sentiment toward the brand | Null when brand not present. Values: Positive, Mixed, Negative, None |
| brand_position | String (Enum) (Nullable) | Position of the brand within the response | Null when brand not present. Values: Top, Middle, Bottom |
| competitors_present | String[] | List of competitor names found in the response | |
| response_text | String | Full AI-generated response text | Markdown format |
| citations | Citation[] | Citations included in the response | See Citation below |
| competitors | CompetitorEvaluation[] | Per-competitor evaluation data | Superset of the information in competitors_present. See CompetitorEvaluation below |
Citation
| Field | Type | Description | Notes |
| --- | --- | --- | --- |
| url | String | URL of the citation | |
| title | String (Nullable) | Title tag of the citation | Not exposed by all platforms |
| snippet | String (Nullable) | Search engine snippet for the citation | Often but not always from the meta description tag. Not exposed by all platforms |
| source_type | String (Enum) | Classification of the citation source | Brand, Competitor, Other; "Other" corresponds to third-party in the Scrunch UI |
| domain | String | Domain name of the URL | Provided for convenience |
CompetitorEvaluation
| Field | Type | Description | Notes |
| --- | --- | --- | --- |
| name | String | Name of the competitor | |
| id | Number | Unique ID for the competitor | |
| present | Boolean | Whether the competitor is present in the response | |
| position | String (Enum) (Nullable) | Position of the competitor within the response | Null when present is false. Values: Top, Middle, Bottom |
| sentiment | String (Enum) (Nullable) | Sentiment toward the competitor | Null when present is false. Values: Positive, Mixed, Negative, None |
ETL recommended workflow
Start with start_date (UTC)
Pull responses in batches (limit=1000)
Store created_at from the last record
Use that as the new start date for your next batch
Deduplicate using id (globally unique)
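The steps above can be sketched end to end. Here `fetch_page` is a stand-in for your HTTP call (it is not a real Scrunch client function); it is assumed to return the standard collection payload (total, offset, limit, items):

```python
def ingest(fetch_page, start_date: str, limit: int = 1000):
    """Batch-ingest all responses since start_date, deduplicating on id.

    fetch_page(start_date, limit, offset) is a placeholder for your own
    request function. Returns the deduplicated records keyed by id and
    the new high watermark (latest created_at seen).
    """
    records: dict = {}
    watermark = None
    offset = 0
    while True:
        page = fetch_page(start_date, limit, offset)
        for item in page["items"]:
            records[item["id"]] = item          # id is globally unique
            if watermark is None or item["created_at"] > watermark:
                watermark = item["created_at"]  # ISO timestamps sort lexically
        offset += limit                          # step in multiples of limit
        if offset >= page["total"]:
            break
    return records, watermark

# Toy run against a stubbed fetch_page (three records, two pages):
_data = [{"id": 1, "created_at": "2024-06-01T00:00:00Z"},
         {"id": 2, "created_at": "2024-06-01T05:00:00Z"},
         {"id": 3, "created_at": "2024-06-01T03:00:00Z"}]

def _fake(start_date, limit, offset):
    return {"total": len(_data), "offset": offset, "limit": limit,
            "items": _data[offset:offset + limit]}

records, watermark = ingest(_fake, "2024-06-01", limit=2)
```

The returned watermark feeds the next run's start_date, and the upsert-on-id step makes reruns of overlapping windows harmless.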
Typical downstream uses
Customers commonly use the Responses API to:
Audit AI hallucinations or brand misrepresentation
Analyze which third-party sources influence AI answers
Train internal RAG or evaluation systems
Perform NLP or sentiment analysis across competitors
Build internal review tools for AI output quality
Support custom reporting or research workflows
Get started with Responses: see the Responses API Quickstart.