Overview
The Query API provides aggregated access to Scrunch’s core AI visibility metrics. It is the same data that powers the Scrunch dashboard, exposed in a flexible, queryable format for analytics and reporting workflows. This API is optimized for scale and performance, making it ideal for BI tools, reporting pipelines, scheduled exports, and automation where response-level detail is not required. Each request returns pre-aggregated metrics based on the dimensions you select.

What the Query API includes
The Query API returns aggregated metrics such as:
- Brand presence percentage
- Brand position score
- Brand sentiment score
- Competitor presence percentage
- Response counts
These metrics can be grouped by dimensions such as:
- Date (day, week, month, quarter, year)
- Prompt or prompt metadata
- Persona
- Tag
- Platform
- Competitor
- Source URL
- Branded vs non-branded
When to use the Query API
Use the Query API when you need:
- Weekly or monthly reporting
- Trend analysis over time
- Brand and competitor visibility metrics
- Aggregation by date, persona, tag, platform, or prompt
- Data for dashboards (Looker, Power BI, Tableau)
- Large batch metric pulls for automation or reporting
It is also well suited to answering questions like “How do we compare to competitors by topic or platform?”
When not to use the Query API
The Query API is not appropriate if you need:
- Raw AI response text
- Citation URLs or snippets
- Per-response competitor sentiment or position
- Full message-level audits or research
Example query
The example below combines filters (pre-aggregation, on dimensions) and having (post-aggregation, on metrics):
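A minimal sketch in Python, assuming an illustrative endpoint URL (https://api.scrunch.ai/v1/query) and a bearer-token Authorization header, neither of which is confirmed by this page. The fields, filters, having, start_date, and end_date query parameters are the ones documented here.

```python
import requests

# Assumed base URL and auth scheme; substitute the values from your Scrunch
# account and the Query API Quickstart.
QUERY_URL = "https://api.scrunch.ai/v1/query"
API_KEY = "YOUR_API_KEY"

params = {
    # Dimensions to group by plus metrics to return (sent as repeated fields= params).
    "fields": ["date_week", "ai_platform", "brand_presence_percentage", "responses"],
    # Pre-aggregation dimension filters: field:value, '|' for IN, '!' to negate.
    "filters": ["ai_platform:chatgpt|perplexity", "branded:false"],
    # Post-aggregation metric filter: metric:operator:value.
    "having": ["responses:gte:25"],
    "start_date": "2025-03-03",
    "end_date": "2025-03-30",
}

response = requests.get(
    QUERY_URL,
    params=params,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```

Because responses is referenced in having, it also appears in fields, per the rule noted under Metric filters below.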
Fields reference
Specify fields in the fields= array to control what is returned. Dimensions determine how metrics are grouped — querying a dimension alone returns its unique values. Metrics are numeric measures — querying a metric alone returns its overall aggregate across all data.
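As a rough illustration of those behaviors, using only the documented fields parameter:

```python
# Dimension alone: returns the unique values of that dimension.
dimension_only = {"fields": ["ai_platform"]}

# Metric alone: returns a single overall aggregate across all data.
metric_only = {"fields": ["brand_presence_percentage"]}

# Dimensions plus metrics: metrics grouped by the selected dimensions.
grouped = {"fields": ["date_week", "ai_platform", "brand_presence_percentage"]}
```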
Dimensions
| Field | Type | Description | Constraints | Prompt relationship |
|---|---|---|---|---|
| prompt_id | Number | Unique identifier for the prompt | — | — |
| prompt | String | Full text of the prompt submitted to the AI platform | — | — |
| date_month | String | Month responses were collected, truncated to the first day of the month. Use for monthly trend reporting. | Last 90 days only | Many to many |
| date_week | String | Calendar week responses were collected, truncated to the start of the week. Use for weekly trend reporting. | Last 90 days only. Recommend keeping date filters week-aligned. | Many to many |
| date | String | Specific date responses were collected. Use for daily granularity. | Last 90 days only | Many to many |
| source_url | String | Full URL of a citation source found in AI responses. Use to analyze which web properties appear most in AI answers. | — | Many to many |
| source_type | String (Enum) | Classification of a citation source. Values: Brand, Competitor, Other (third-party in the Scrunch UI) | — | Many to many |
| persona_id | Number | Unique identifier for the persona associated with the prompt | — | One to many |
| persona_name | String | Name of the persona associated with the prompt. Personas represent audience segments or geographic configurations. | — | One to many |
| competitor_id | Number | Unique identifier for a competitor tracked in your Scrunch brand configuration | — | Many to many |
| competitor_name | String | Name of the competitor. Use alongside competitor metrics to analyze individual competitor visibility. | — | Many to many |
| ai_platform | String (Enum) | AI platform that generated the response. Values: chatgpt, perplexity, google_ai_overviews, meta, claude | — | Many to many |
| tag | String | User-defined tag attached to prompts in the Scrunch UI. Use to group and filter by custom categories. | — | Many to many |
| branded | Boolean | Whether the prompt includes the brand name or an alternate brand name. Use to compare branded vs. unbranded query performance. | — | One to many |
| stage | String (Enum) | Stage of the customer journey the prompt is mapped to. Values: Advice, Awareness, Evaluation, Comparison, Other | — | One to many |
| prompt_topic | String | Key topic extracted from or assigned to the prompt. Use to group performance by topic area. | — | Many to many |
| country | String | 2-letter ISO country code for which the response was retrieved, based on the persona or brand default configuration. | — | One to many |
Metrics
| Field | Type | Description | Aggregation | Constraints |
|---|---|---|---|---|
| responses | Number | Total number of AI responses collected for the selected dimensions. The base volume metric. | Count | — |
| unique_prompts | Number | Distinct count of prompts that produced responses in the selected window. Use to size the underlying prompt set behind any aggregate. | Distinct count | — |
| brand_presence_percentage | Number | Percentage of responses in which your brand was mentioned. The primary measure of AI visibility. | Average | — |
| brand_unique_prompts | Number | Distinct count of prompts where your brand was mentioned in at least one response. | Distinct count | — |
| brand_unique_responses | Number | Distinct count of responses that mentioned your brand. | Distinct count | — |
| brand_position_score | Number | Weighted score (0–100) reflecting how prominently your brand appears across all responses. Derived from the distribution of Top, Middle, and Bottom positions — higher scores indicate more top-of-response appearances. Note: the Scrunch dashboard displays the raw share of Top-position responses; this score is a continuous aggregate and will differ from that figure. | Average | Range: 0–100 |
| brand_sentiment_score | Number | Weighted score (0–100) reflecting overall sentiment toward your brand across all responses. Derived from the distribution of Positive, Mixed, Negative, and None sentiments — higher scores indicate a more positive distribution. Note: the Scrunch dashboard displays the raw share of Positive-sentiment responses; this score is a continuous aggregate and will differ from that figure. | Average | Range: 0–100 |
| competitor_presence_percentage | Number | Percentage of responses in which the competitor was mentioned. | Average | Must be used with competitor_id or competitor_name (or both) |
| competitor_position_score | Number | Weighted score (0–100) reflecting how prominently the competitor appears across all responses. Derived from the distribution of Top, Middle, and Bottom positions — see note on brand_position_score regarding differences from dashboard figures. | Average | Range: 0–100. Must be used with competitor_id or competitor_name (or both) |
| competitor_sentiment_score | Number | Weighted score (0–100) reflecting overall sentiment toward the competitor across all responses. Derived from the distribution of Positive, Mixed, Negative, and None sentiments — see note on brand_sentiment_score regarding differences from dashboard figures. | Average | Range: 0–100. Must be used with competitor_id or competitor_name (or both) |
| competitor_unique_prompts | Number | Distinct count of prompts where the competitor was mentioned in at least one response. | Distinct count | Must be used with competitor_id or competitor_name (or both) |
| competitor_unique_responses | Number | Distinct count of responses that mentioned the competitor. | Distinct count | Must be used with competitor_id or competitor_name (or both) |
Filtering results
The Query API supports two filter parameters that narrow what is returned. Both can be combined in the same request and both can be repeated to apply multiple filters (combined with AND).

Dimension filters (filters)
Use filters to narrow rows before aggregation runs. Each filter takes the form field:value. Combine multiple values with | for an IN match, and prefix the value with ! to negate.
The following fields are filterable: prompt_id, persona_id, persona_name, ai_platform, ai_platform_search_enabled, tag, competitor_id, competitor_name, branded, stage, prompt_topic, country, date, date_week, date_month, date_quarter, date_year.
source_url, source_type, and prompt are not filterable.
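A few filter values showing the documented syntax; apart from the ai_platform enum values, the specific values are hypothetical:

```python
# Single-value match on a documented enum value.
platform_filter = "ai_platform:chatgpt"

# IN match across multiple values, combined with '|'.
platform_in_filter = "ai_platform:chatgpt|perplexity"

# Negated match, prefixed with '!' ("internal" is a hypothetical tag).
tag_filter = "tag:!internal"

# Repeating the filters= parameter applies every entry, combined with AND.
params = {"filters": [platform_in_filter, tag_filter, "branded:false"]}
```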
Metric filters (having)
Use having to filter on aggregated metric values after GROUP BY runs. Each entry takes the form metric:operator:value.
| Operator | Meaning |
|---|---|
| gt | Greater than |
| gte | Greater than or equal |
| lt | Less than |
| lte | Less than or equal |
| eq | Equal |
| neq | Not equal |
Any metric referenced in having must also appear in fields.
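For example, keeping only prompt topics with meaningful response volume. Only the fields and having parameters shown here are documented; the rest of the request follows the earlier sketch.

```python
params = {
    "fields": ["prompt_topic", "brand_presence_percentage", "responses"],
    # Post-aggregation filters: keep topics backed by at least 50 responses
    # and with brand presence above 50%. Both metrics also appear in fields.
    "having": ["responses:gte:50", "brand_presence_percentage:gt:50"],
}
```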
Date range and validation
Use the start_date and end_date query parameters to scope a request to a specific window. Both are optional and accept the YYYY-MM-DD format.
| Behavior | What happens |
|---|---|
| Both omitted | Returns the last 30 days, ending today (UTC). |
| Only start_date set | end_date defaults to today (UTC). |
| Only end_date set | start_date defaults to 30 days before end_date. |
| Empty string (?start_date=) | Treated as missing. The default applies. |
| Trailing or leading whitespace (?start_date=2026-03-20%20) | Trimmed before parsing. The cleaned value is used. |
| Malformed value (?start_date=2024-02-30 or ?start_date=tomorrow) | Returns HTTP 400 with the offending value in detail. |
Example: valid request
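A sketch of a request scoped to an explicit window; the endpoint URL and auth header remain assumptions, and the dates are illustrative.

```python
import requests

resp = requests.get(
    "https://api.scrunch.ai/v1/query",  # assumed endpoint, as in the earlier example
    params={
        "fields": ["date_week", "brand_presence_percentage"],
        "start_date": "2025-03-03",  # YYYY-MM-DD
        "end_date": "2025-03-30",
    },
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=30,
)
resp.raise_for_status()  # a valid window returns 200 with aggregated rows
```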
Example: invalid date returns 400
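The failure case, using one of the malformed values from the table above. The exact shape of the error body may differ from this sketch.

```python
import requests

resp = requests.get(
    "https://api.scrunch.ai/v1/query",  # assumed endpoint, as above
    params={"fields": ["responses"], "start_date": "2024-02-30"},  # not a real date
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=30,
)
print(resp.status_code)  # 400
print(resp.json())       # the error detail includes the offending value
```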
Cardinality and result size
Because the Query API performs grouping dynamically, combining highly granular dimensions can significantly increase the number of rows returned. Dimensions that commonly drive up result cardinality include:
- ai_platform
- tag
- prompt_topic
- competitor_id
- competitor_name
- source_url
- source_type
Limits and performance considerations
- The Query API supports large batch pulls (up to tens of thousands of rows per request)
- Results are pre-aggregated and optimized for analytics and BI ingestion
- Query performance degrades as result cardinality increases
Best practices
- Prefer date_week or date_month over daily granularity when possible
- Run separate queries for different reporting needs and join downstream
- Keep field lists small to control result size
- Use brand-scoped API keys when embedding in client-facing dashboards
- Treat Query API outputs as metrics tables, not raw data logs
Relationship to the Responses API
The Query API and Responses API are complementary:
- Query API: fast, aggregated metrics for reporting and dashboards
- Responses API: full-fidelity response text and citation data for deep analysis
Run your first query
Go to the Query API Quickstart →