

Overview

The Query API provides aggregated access to Scrunch’s core AI visibility metrics. It is the same data that powers the Scrunch dashboard, exposed in a flexible, queryable format for analytics and reporting workflows. This API is optimized for scale and performance, making it ideal for BI tools, reporting pipelines, scheduled exports, and automation where response-level detail is not required. Each request returns pre-aggregated metrics based on the dimensions you select.

What the Query API includes

The Query API returns aggregated metrics such as:
  • Brand presence percentage
  • Brand position score
  • Brand sentiment score
  • Competitor presence percentage
  • Response counts
These metrics can be grouped by dimensions including:
  • Date (day, week, month, quarter, year)
  • Prompt or prompt metadata
  • Persona
  • Tag
  • Platform
  • Competitor
  • Source URL
  • Branded vs non-branded
All results are derived summaries, not raw AI responses.
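For example, selecting one metric and one dimension returns that metric broken out by the dimension's values. A minimal sketch, following the request pattern shown later in this page (1234 stands in for your brand ID):
curl -X GET \
  "https://api.scrunchai.com/v1/1234/query?fields=ai_platform,brand_presence_percentage" \
  -H "Authorization: Bearer $SCRUNCH_API_TOKEN"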

When to use the Query API

Use the Query API when you need:
  • Weekly or monthly reporting
  • Trend analysis over time
  • Brand and competitor visibility metrics
  • Aggregation by date, persona, tag, platform, or prompt
  • Data for dashboards (Looker, Power BI, Tableau)
  • Large batch metric pulls for automation or reporting
The Query API is designed to answer questions like “How is our AI visibility changing over time?” and “How do we compare to competitors by topic or platform?”
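The second question, for instance, maps onto a single query. A sketch using documented fields (swap ai_platform for prompt_topic to compare by topic instead):
curl -X GET \
  "https://api.scrunchai.com/v1/1234/query?fields=ai_platform,competitor_name,competitor_presence_percentage" \
  -H "Authorization: Bearer $SCRUNCH_API_TOKEN"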

When not to use the Query API

The Query API is not appropriate if you need:
  • Raw AI response text
  • Citation URLs or snippets
  • Per-response competitor sentiment or position
  • Full message-level audits or research
For those use cases, use the Responses API, which exposes the underlying response records in full detail.

Example query

curl -X GET \
  "https://api.scrunchai.com/v1/1234/query?fields=date_week,brand_presence_percentage,brand_sentiment_score" \
  -H "Authorization: Bearer $SCRUNCH_API_TOKEN"
You can narrow results with filters (pre-aggregation, on dimensions) and having (post-aggregation, on metrics):
curl -X GET \
  "https://api.scrunchai.com/v1/1234/query?fields=date_week,ai_platform,brand_presence_percentage&filters=ai_platform:ChatGPT|Claude&having=brand_presence_percentage:gt:0.1" \
  -H "Authorization: Bearer $SCRUNCH_API_TOKEN"

Fields reference

Specify fields in the comma-separated fields= parameter to control what is returned. Dimensions determine how metrics are grouped — querying a dimension alone returns its unique values. Metrics are numeric measures — querying a metric alone returns its overall aggregate across all data.
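A sketch of both behaviors, following the same request pattern as the examples above:
# A dimension alone: lists the unique platforms present in your data
curl "https://api.scrunchai.com/v1/1234/query?fields=ai_platform" \
  -H "Authorization: Bearer $SCRUNCH_API_TOKEN"
# A metric alone: one row with the overall response count
curl "https://api.scrunchai.com/v1/1234/query?fields=responses" \
  -H "Authorization: Bearer $SCRUNCH_API_TOKEN"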

Dimensions

Each entry lists the field, its type, a description, and, where applicable, its constraints and relationship to prompts.
  • prompt_id (Number): Unique identifier for the prompt.
  • prompt (String): Full text of the prompt submitted to the AI platform.
  • date_month (String): Month responses were collected, truncated to the first day of the month. Use for monthly trend reporting. Constraints: last 90 days only. Prompt relationship: many-to-many.
  • date_week (String): Calendar week responses were collected, truncated to the start of the week. Use for weekly trend reporting. Constraints: last 90 days only; keeping date filters week-aligned is recommended. Prompt relationship: many-to-many.
  • date (String): Specific date responses were collected. Use for daily granularity. Constraints: last 90 days only. Prompt relationship: many-to-many.
  • source_url (String): Full URL of a citation source found in AI responses. Use to analyze which web properties appear most in AI answers. Prompt relationship: many-to-many.
  • source_type (String, enum): Classification of a citation source. Values: Brand, Competitor, Other (shown as third-party in the Scrunch UI). Prompt relationship: many-to-many.
  • persona_id (Number): Unique identifier for the persona associated with the prompt. Prompt relationship: one-to-many.
  • persona_name (String): Name of the persona associated with the prompt. Personas represent audience segments or geographic configurations. Prompt relationship: one-to-many.
  • competitor_id (Number): Unique identifier for a competitor tracked in your Scrunch brand configuration. Prompt relationship: many-to-many.
  • competitor_name (String): Name of the competitor. Use alongside competitor metrics to analyze individual competitor visibility. Prompt relationship: many-to-many.
  • ai_platform (String, enum): AI platform that generated the response. Values: chatgpt, perplexity, google_ai_overviews, meta, claude. Prompt relationship: many-to-many.
  • tag (String): User-defined tag attached to prompts in the Scrunch UI. Use to group and filter by custom categories. Prompt relationship: many-to-many.
  • branded (Boolean): Whether the prompt includes the brand name or an alternate brand name. Use to compare branded vs. unbranded query performance. Prompt relationship: one-to-many.
  • stage (String, enum): Stage of the customer journey the prompt is mapped to. Values: Advice, Awareness, Evaluation, Comparison, Other. Prompt relationship: one-to-many.
  • prompt_topic (String): Key topic extracted from or assigned to the prompt. Use to group performance by topic area. Prompt relationship: many-to-many.
  • country (String): Two-letter ISO country code for which the response was retrieved, based on the persona or brand default configuration. Prompt relationship: one-to-many.
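For example, pairing a date dimension with source_type shows how often each class of source is cited week over week. A sketch using fields from the list above:
curl "https://api.scrunchai.com/v1/1234/query?fields=date_week,source_type,responses" \
  -H "Authorization: Bearer $SCRUNCH_API_TOKEN"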

Metrics

Each entry lists the field, its type, a description, its aggregation, and any constraints.
  • responses (Number): Total number of AI responses collected for the selected dimensions. The base volume metric. Aggregation: count.
  • unique_prompts (Number): Distinct count of prompts that produced responses in the selected window. Use to size the underlying prompt set behind any aggregate. Aggregation: distinct count.
  • brand_presence_percentage (Number): Percentage of responses in which your brand was mentioned. The primary measure of AI visibility. Aggregation: average.
  • brand_unique_prompts (Number): Distinct count of prompts where your brand was mentioned in at least one response. Aggregation: distinct count.
  • brand_unique_responses (Number): Distinct count of responses that mentioned your brand. Aggregation: distinct count.
  • brand_position_score (Number): Weighted score (0–100) reflecting how prominently your brand appears across all responses. Derived from the distribution of Top, Middle, and Bottom positions; higher scores indicate more top-of-response appearances. Note: the Scrunch dashboard displays the raw share of Top-position responses; this score is a continuous aggregate and will differ from that figure. Aggregation: average. Range: 0–100.
  • brand_sentiment_score (Number): Weighted score (0–100) reflecting overall sentiment toward your brand across all responses. Derived from the distribution of Positive, Mixed, Negative, and None sentiments; higher scores indicate a more positive distribution. Note: the Scrunch dashboard displays the raw share of Positive-sentiment responses; this score is a continuous aggregate and will differ from that figure. Aggregation: average. Range: 0–100.
  • competitor_presence_percentage (Number): Percentage of responses in which the competitor was mentioned. Aggregation: average. Constraints: must be used with competitor_id or competitor_name (or both).
  • competitor_position_score (Number): Weighted score (0–100) reflecting how prominently the competitor appears across all responses. Derived from the distribution of Top, Middle, and Bottom positions; see the note on brand_position_score regarding differences from dashboard figures. Aggregation: average. Range: 0–100. Constraints: must be used with competitor_id or competitor_name (or both).
  • competitor_sentiment_score (Number): Weighted score (0–100) reflecting overall sentiment toward the competitor across all responses. Derived from the distribution of Positive, Mixed, Negative, and None sentiments; see the note on brand_sentiment_score regarding differences from dashboard figures. Aggregation: average. Range: 0–100. Constraints: must be used with competitor_id or competitor_name (or both).
  • competitor_unique_prompts (Number): Distinct count of prompts where the competitor was mentioned in at least one response. Aggregation: distinct count. Constraints: must be used with competitor_id or competitor_name (or both).
  • competitor_unique_responses (Number): Distinct count of responses that mentioned the competitor. Aggregation: distinct count. Constraints: must be used with competitor_id or competitor_name (or both).
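The pairing constraint in practice: a competitor metric must appear alongside competitor_id or competitor_name. A sketch:
# Valid: competitor metric grouped by competitor_name
curl "https://api.scrunchai.com/v1/1234/query?fields=competitor_name,competitor_presence_percentage" \
  -H "Authorization: Bearer $SCRUNCH_API_TOKEN"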

Filtering results

The Query API supports two filter parameters that narrow what is returned. Both can be combined in the same request and both can be repeated to apply multiple filters (combined with AND).

Dimension filters (filters)

Use filters to narrow rows before aggregation runs. Each filter takes the form field:value. Combine multiple values with | for an IN match, and prefix the value with ! to negate.
# Only ChatGPT and Claude, branded prompts only
curl "https://api.scrunchai.com/v1/$BRAND_ID/query?fields=date_week,brand_presence_percentage&filters=ai_platform:chatgpt|claude&filters=branded:true" \
  -H "Authorization: Bearer $SCRUNCH_API_TOKEN"
# Exclude a specific competitor
curl "https://api.scrunchai.com/v1/$BRAND_ID/query?fields=competitor_name,competitor_presence_percentage&filters=competitor_id:!42" \
  -H "Authorization: Bearer $SCRUNCH_API_TOKEN"
Filterable dimensions: prompt_id, persona_id, persona_name, ai_platform, ai_platform_search_enabled, tag, competitor_id, competitor_name, branded, stage, prompt_topic, country, date, date_week, date_month, date_quarter, date_year. source_url, source_type, and prompt are not filterable.

Metric filters (having)

Use having to filter on aggregated metric values after GROUP BY runs. Each entry takes the form metric:operator:value.
Supported operators:
  • gt: greater than
  • gte: greater than or equal
  • lt: less than
  • lte: less than or equal
  • eq: equal
  • neq: not equal
# Weeks where presence is above 10% and at least 50 responses were collected
curl "https://api.scrunchai.com/v1/$BRAND_ID/query?fields=date_week,brand_presence_percentage,responses&having=brand_presence_percentage:gt:0.1&having=responses:gte:50" \
  -H "Authorization: Bearer $SCRUNCH_API_KEY"
The metric you reference in having must also appear in fields.

Date range and validation

Use the start_date and end_date query parameters to scope a request to a specific window. Both are optional and accept the YYYY-MM-DD format.
  • Both omitted: returns the last 30 days, ending today (UTC).
  • Only start_date set: end_date defaults to today (UTC).
  • Only end_date set: start_date defaults to 30 days before end_date.
  • Empty string (?start_date=): treated as missing; the default applies.
  • Leading or trailing whitespace (?start_date=2026-03-20%20): trimmed before parsing; the cleaned value is used.
  • Malformed value (?start_date=2024-02-30 or ?start_date=tomorrow): returns HTTP 400 with the offending value echoed in the detail field of the error body.
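For example, setting only end_date relies on the defaulting rule above; this request resolves to the 30 days ending 2026-03-31. A sketch:
curl "https://api.scrunchai.com/v1/1234/query?end_date=2026-03-31&fields=date_week,brand_presence_percentage" \
  -H "Authorization: Bearer $SCRUNCH_API_TOKEN"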

Example: valid request

curl -X GET \
  "https://api.scrunchai.com/v1/1234/query?start_date=2026-03-01&end_date=2026-03-31&fields=date_week,brand_presence_percentage" \
  -H "Authorization: Bearer $SCRUNCH_API_TOKEN"

Example: invalid date returns 400

curl -X GET \
  "https://api.scrunchai.com/v1/1234/query?start_date=2024-02-30" \
  -H "Authorization: Bearer $SCRUNCH_API_TOKEN"
{
  "detail": "Invalid start_date '2024-02-30': expected YYYY-MM-DD"
}
If you build query strings dynamically, prefer omitting start_date / end_date when you don’t have a value rather than passing an empty string — the result is the same, but the intent is clearer.
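In shell, that might look like conditionally appending the parameter. A minimal sketch, where START_DATE is a hypothetical variable your script may or may not have set:
# Append start_date to the query string only when a value is actually present
QS="fields=date_week,brand_presence_percentage"
if [ -n "$START_DATE" ]; then
  QS="$QS&start_date=$START_DATE"
fi
curl "https://api.scrunchai.com/v1/1234/query?$QS" \
  -H "Authorization: Bearer $SCRUNCH_API_TOKEN"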

Cardinality and result size

Because the Query API performs grouping dynamically, combining highly granular dimensions can significantly increase the number of rows returned. Examples of high-cardinality dimensions include:
  • ai_platform
  • tag
  • prompt_topic
  • competitor_id
  • competitor_name
  • source_url
  • source_type
Each additional high-cardinality field multiplies the number of possible result rows.
Avoid combining multiple high-cardinality dimensions unless required, as this can produce very large result sets and slower queries.
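As an illustration, both requests below are valid, but the second groups by three high-cardinality dimensions at once and can return far more rows. A sketch:
# Focused: one dimension, one metric
curl "https://api.scrunchai.com/v1/1234/query?fields=date_week,brand_presence_percentage" \
  -H "Authorization: Bearer $SCRUNCH_API_TOKEN"
# Exploded: week x platform x tag x source URL multiplies the possible result rows
curl "https://api.scrunchai.com/v1/1234/query?fields=date_week,ai_platform,tag,source_url,responses" \
  -H "Authorization: Bearer $SCRUNCH_API_TOKEN"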

Limits and performance considerations

  • The Query API supports large batch pulls (up to tens of thousands of rows per request)
  • Results are pre-aggregated and optimized for analytics and BI ingestion
  • Query performance degrades as result cardinality increases
For best performance, keep field selections focused and intentional.

Best practices

  • Prefer date_week or date_month over daily granularity when possible
  • Run separate queries for different reporting needs and join downstream (see the sketch after this list)
  • Keep field lists small to control result size
  • Use brand-scoped API keys when embedding in client-facing dashboards
  • Treat Query API outputs as metrics tables, not raw data logs
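For instance, "run separate queries and join downstream" might look like two focused pulls joined on date_week in your BI tool. A sketch; the output filenames are arbitrary:
# Brand trend, written to its own file
curl "https://api.scrunchai.com/v1/1234/query?fields=date_week,brand_presence_percentage" \
  -H "Authorization: Bearer $SCRUNCH_API_TOKEN" > brand_weekly.json
# Competitor trend, pulled separately and joined on date_week downstream
curl "https://api.scrunchai.com/v1/1234/query?fields=date_week,competitor_name,competitor_presence_percentage" \
  -H "Authorization: Bearer $SCRUNCH_API_TOKEN" > competitor_weekly.json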

Relationship to the Responses API

The Query API and Responses API are complementary:
  • Query API: fast, aggregated metrics for reporting and dashboards
  • Responses API: full-fidelity response text and citation data for deep analysis
Most customers use the Query API for ongoing reporting and the Responses API selectively for audits, research, or investigation.

Run your first query

Go to the Query API Quickstart →