Executives want one slide that explains why rankings, traffic, and pipeline no longer move together. The scorecard has one job: translate modern search into a fast read that answers “are we visible,” “are we shaping the answer,” and “is demand rising.” AI summaries can reduce click share, so being cited and correctly framed becomes a first-class KPI, not a nice-to-have.
What You’ll Learn in this Article
- The 6 KPIs that fit on one slide and explain modern search performance in under 60 seconds.
- How to measure AI answer inclusion (citations and narrative accuracy) without pretending there is a clean analytics tag.
- The operating rules that keep the scorecard credible: fixed prompt panel, fixed competitor set, monthly cadence, quarterly resets.
- The action levers that move each KPI, so the scorecard always produces a 30-day plan.
- Executive interpretation patterns for common situations like rankings holding while sessions flatten.
Why the one-slide scorecard exists
The old reporting model assumed: rankings up equals clicks up equals pipeline up. AI Overviews break that assumption on many query types. Pew’s analysis found users clicked a traditional result in 8% of visits when an AI summary appeared versus 15% when it did not, and clicks on links inside the summary were rare.
A modern scorecard treats search as three layers:
- Visibility: classic discoverability and eligibility
- Influence: inclusion inside AI answers (citations and framing)
- Demand: branded lift that proves the market is moving toward you
The one-slide scorecard structure
Three sections, two KPIs each. Keep it stable month to month so executives recognize the pattern.
Visibility (foundation)
1) Non-brand Coverage Index
2) Money Page Health
Influence (AI answer inclusion)
3) AI Overview Footprint
4) Citation Rate
Demand (downstream proof)
5) Brand Search Lift
6) Brand Narrative Accuracy Score
If there is room for one more line, add a small context strip:
- Competitor AI share of voice (SOV) for the same prompt set
The 6 KPIs: definitions, formulas, sources, and levers
1) Non-brand Coverage Index
Definition
Weighted share of priority non-brand queries in Top 10 (or Top 3) across intent groups.
Formula (simple version)
sum over intent groups of (group weight × percent of that group's queries in Top 10) ÷ sum of group weights
Data sources
Rank tracker segmented by intent group: category, best-for, comparisons, integrations, pricing, security, support.
Action levers
Internal linking, content gaps by intent, content refresh, technical hygiene, consolidation of thin pages.
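To make the weighting concrete, here is a minimal Python sketch of the simple-version formula. The intent groups, weights, and query counts are illustrative placeholders, not recommendations.

```python
# Minimal sketch of the Non-brand Coverage Index.
# For each intent group: (weight, queries in Top 10, queries tracked).
# Groups, weights, and counts below are made up for illustration.
intent_groups = {
    "category":     (3, 18, 40),
    "best-for":     (3, 9, 25),
    "comparisons":  (2, 12, 30),
    "integrations": (1, 6, 20),
    "pricing":      (2, 5, 10),
}

def coverage_index(groups):
    # Weighted share: sum of (weight x percent in Top 10) / sum of weights
    weighted = sum(w * (top10 / tracked) for w, top10, tracked in groups.values())
    return weighted / sum(w for w, _, _ in groups.values())

print(f"Non-brand Coverage Index: {coverage_index(intent_groups):.1%}")
```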
2) Money Page Health
Definition
Percent of priority decision assets that are indexable, canonical-correct, and stable (no duplication conflicts).
Formula
healthy decision assets ÷ total decision assets
Data sources
Crawl and index coverage checks, canonical validation, duplication detection.
Action levers
Consolidate duplicates, fix canonicals, improve crawl paths, retire legacy pages, strengthen internal linking from hubs.
3) AI Overview Footprint
Definition
Percent of tracked queries that trigger AI Overviews.
Formula
AIO-triggered queries ÷ tracked queries
Data sources
Tracked query set plus manual spot checks.
Why it matters
Shows how exposed your category is to answer-layer displacement.
Context
Google notes AI Overviews and AI Mode may use query fan-out, meaning one query can expand into multiple related searches and sub-answers.
Action levers
This KPI is not fully controllable. Use it to set expectations, then shift effort toward citations and narrative accuracy on the prompts that matter.
4) Citation Rate
Definition
Percent of prompts where your domain is cited or linked as a supporting source.
Formula
prompts citing your domain ÷ prompts tested
Data sources
Monthly or quarterly prompt panel captures across AI Overviews plus 2 to 3 assistants your buyers use.
Action levers
Answer-first blocks, comparison assets, integration prerequisites, pricing model explainer, security pages, tighter internal linking, removal of conflicting pages that dilute “best source” signals.
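If each panel run captures AIO presence and citations in the same pass, one flat log can feed both the AI Overview Footprint and the Citation Rate. A minimal sketch; the record fields and prompts are assumptions for illustration, not a standard schema.

```python
# Sketch of a prompt-panel capture log and the two ratios it feeds.
# Field names and example prompts are illustrative only.
captures = [
    {"prompt": "best crm for smb",   "aio_shown": True,  "our_domain_cited": True},
    {"prompt": "crm pricing models", "aio_shown": True,  "our_domain_cited": False},
    {"prompt": "acme vs globex crm", "aio_shown": False, "our_domain_cited": False},
]

footprint = sum(c["aio_shown"] for c in captures) / len(captures)
citation_rate = sum(c["our_domain_cited"] for c in captures) / len(captures)

print(f"AI Overview Footprint: {footprint:.0%}")  # AIO-triggered / tracked
print(f"Citation Rate: {citation_rate:.0%}")      # cited / prompts tested
```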
5) Brand Search Lift
Definition
Change in branded impressions and clicks versus baseline.
Formula
(branded impressions this period ÷ branded impressions baseline) minus 1
Data sources
Google Search Console branded queries filter in the Performance report.
Action levers
Increase inclusion on consideration prompts, strengthen third-party corroboration, ship best-for and comparison assets, align brand language across key profiles so AI and buyers repeat the same story.
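As a worked example of the formula, with made-up numbers (in practice both figures come from the GSC Performance report with the branded-query filter applied):

```python
# Brand Search Lift sketch: (branded impressions this period / baseline) - 1.
# Numbers are invented for illustration.
baseline_impressions = 10_000
current_impressions = 12_400

lift = current_impressions / baseline_impressions - 1
print(f"Brand Search Lift: {lift:+.1%}")  # -> +24.0%
```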
6) Brand Narrative Accuracy Score
Definition
Percent of prompts where the AI description matches your intended positioning: correct category placement, correct best-for segment, correct differentiators, correct constraints.
Scoring rubric (fast, exec-friendly)
- Correct
- Partially correct
- Incorrect
- Risky (hallucinated claims, overclaims, compliance risk)
Formula
Correct prompts ÷ prompts scored
Data sources
Scored prompt panel with captured outputs.
Action levers
Ground truth pages, constraint language, entity consistency cleanup, removal of contradictions across site and third-party surfaces, hardening of pricing, integration scope, and compliance language.
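A small sketch of how the rubric turns captured outputs into the score. Labels and prompts are illustrative; only "correct" counts toward the numerator, matching the formula above, while "risky" prompts are surfaced separately for escalation.

```python
# Sketch of the Brand Narrative Accuracy Score using the four-label rubric.
# Prompts and labels are invented for illustration.
scored_panel = [
    {"prompt": "what is acme crm",               "label": "correct"},
    {"prompt": "who is acme best for",           "label": "partially correct"},
    {"prompt": "acme compliance certifications", "label": "risky"},
    {"prompt": "acme integrations",              "label": "correct"},
]

correct = sum(row["label"] == "correct" for row in scored_panel)
accuracy = correct / len(scored_panel)
risky = [row["prompt"] for row in scored_panel if row["label"] == "risky"]

print(f"Narrative Accuracy: {accuracy:.0%}")      # 2/4 -> 50%
print(f"Risky prompts to escalate: {risky}")
```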
Scorecard operating rules
These rules prevent the scorecard from becoming a subjective argument.
- Fixed prompt panel: 40 to 80 prompts, keep 70% unchanged month to month
- Fixed competitor set: 3 to 8 direct competitors scored alongside you for SOV context
- Cadence: monthly scorecard, quarterly deep audit and backlog reset
- Output discipline: every KPI includes one sentence on what moved it and a named owner for next actions
- Single source rule: if branding, product truth, pricing model language, or compliance statements conflict, fix the conflict before publishing new content
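The panel and competitor rules are mechanical enough to encode as checks, so a monthly run fails loudly if the panel drifts. A minimal sketch, assuming prompts are tracked as a set per month; the thresholds mirror the rules above.

```python
# Guardrail checks for the scorecard operating rules.
# Thresholds come from the rules above; the data shapes are assumptions.
PANEL_MIN, PANEL_MAX = 40, 80
MIN_RETAINED_SHARE = 0.70          # keep 70% of prompts month to month
COMPETITOR_MIN, COMPETITOR_MAX = 3, 8

def validate_panel(current_prompts: set, previous_prompts: set, competitors: list):
    assert PANEL_MIN <= len(current_prompts) <= PANEL_MAX, "panel size out of range"
    retained = len(current_prompts & previous_prompts) / len(previous_prompts)
    assert retained >= MIN_RETAINED_SHARE, f"only {retained:.0%} of prompts retained"
    assert COMPETITOR_MIN <= len(competitors) <= COMPETITOR_MAX, "competitor set out of range"
```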
Google states that sites appearing in AI features are included in overall search traffic in Search Console, reported in the Performance report under the Web search type. Treat AI as a layer that changes the meaning of clicks, not a separate analytics channel.
Executive interpretation patterns
Use these to prevent leadership from drawing the wrong conclusions.
- Rankings flat, citations up, brand lift up: clicks may compress, but influence and demand are rising. Double down on comparison and best-for assets to convert.
- Rankings up, citations flat, brand lift flat: classic SEO gains are not translating into answer inclusion. Prioritize decision assets and quote-ready structures.
- Mentions up, accuracy down: presence is becoming a risk. Fix entity clarity and constraints first, then expand.
The 30-day action backlog that should follow the scorecard
Every scorecard should produce a short backlog tied to the KPIs.
Week 1: Fix Money Page Health blockers
Canonical issues, duplication, indexation gaps, broken internal linking to decision assets.
Week 2: Ship quote-ready upgrades on top decision assets
Definition block, best-for, constraints, integration prerequisites, pricing model clarity, security boundaries.
Week 3: Publish or refresh two citation magnets
One best-for hub and one comparison or alternatives page, built to be cited.
Week 4: Corroboration sweep and prompt panel rerun
Update top third-party profiles that show up in citations, then rerun the prompt subset to validate citation and accuracy movement.
Potenture’s Executive LLM Visibility Scorecard Setup operationalizes this: define the prompt panel, build the scoring rubric, connect branded lift reporting, and deliver a one-slide monthly scorecard tied to a 30-day action backlog.


