How to Tell When Your Measurement Stack Is Broken
When four dashboards produce four numbers, the instinct is to ask which one is right. The better question is which decision each tool is allowed to drive. Here's how to tell when the stack itself has stopped serving the business.

Most measurement stacks that look broken aren't actually broken. They're working exactly as designed — each tool answering its own question, none of them designed to converge. The problem is that no one in the room knows that, so every quarterly review becomes a debate about which dashboard is right.
The symptoms that tell you the stack has stopped serving the business are not about data quality. They are about how decisions get made.
The clearest one: your team can describe what each tool does but not what each tool is for. The Meta lead can explain ROAS methodology. The analytics lead can explain GA4's attribution logic. The measurement lead can explain how MMM is fit. But if you ask which tool drives the next budget decision, the answers conflict — and the conflict is treated as a debate to win rather than a structural gap to close.
Three measurement primitives, three different questions
Most measurement stacks look like a pile of tools, but underneath they are three primitives doing three different jobs. Confusing the jobs is what creates the disagreement.
Attribution answers a tactical question. Which campaign or channel touch was associated with which conversion? Useful for optimizing within a channel. Bad for cross-channel budget decisions.
Incrementality answers a causal question. If we had not run this campaign, what revenue would have happened anyway? Useful for budget defense. Slow and expensive to run continuously.
MMM answers a structural question. Across this quarter, how much of total revenue can be assigned to each marketing input given diminishing returns and lagged effects? Useful for top-down budget allocation. Dependent on having enough history to fit a stable model.
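To make the structural question concrete, here is a minimal sketch of the form most MMMs fit: revenue as a baseline plus each channel's spend run through an adstock transform for lagged effects and a saturation curve for diminishing returns. The channel names, decay rates, and coefficients below are invented for illustration; production models are usually Bayesian and fit far more inputs.

```python
import numpy as np

def adstock(spend, decay):
    """Carry a share of each week's effect into later weeks (lagged effects)."""
    out = np.zeros(len(spend))
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        out[t] = carry
    return out

def saturate(x, half_sat):
    """Diminishing returns: incremental effect flattens as adstocked spend grows."""
    return x / (x + half_sat)

# Illustrative weekly spend series; a real model wants 18+ months of history.
rng = np.random.default_rng(0)
weeks = 78
meta = rng.uniform(20_000, 80_000, weeks)
ctv = rng.uniform(10_000, 60_000, weeks)

# The structural claim an MMM encodes:
#   revenue_t = baseline + sum over channels of beta_c * saturate(adstock(spend_c))_t
baseline, beta_meta, beta_ctv = 400_000, 900_000, 600_000
revenue = (baseline
           + beta_meta * saturate(adstock(meta, 0.6), 60_000)
           + beta_ctv * saturate(adstock(ctv, 0.3), 40_000))

meta_contrib = beta_meta * saturate(adstock(meta, 0.6), 60_000)
print(f"Meta's modeled share of revenue: {meta_contrib.sum() / revenue.sum():.0%}")
```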
When a Meta dashboard, a GA4 report, an MMM output, and an incrementality result disagree, they disagree because they are answering different questions. The error is not in the tools. The error is upstream, in expecting them to converge.
| Tool | What it answers | Best for | Where it fails |
|---|---|---|---|
| Attribution / MTA | Which touchpoint was credited | Tactical optimization within a channel | Cross-channel budget allocation |
| Incrementality | What lifted vs. what would have happened anyway | Budget defense, channel-level reads | Always-on, granular tactical reads |
| MMM | Each input's structural contribution to revenue | Quarterly allocation, scenario planning | Same-week tactical decisions |
Attribution is for tactics. Incrementality is for defense. MMM is for allocation.
Attribution is the right tool for the question your in-platform team asks every Tuesday. Which Meta ad set is converting and what should we move budget into next? The credit math is wrong in the macro sense, because last-touch credit and view-through windows over-credit the platform that ran the bottom-funnel ad. For tactical decisions inside a single channel, that bias is consistent enough to be useful.
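A four-line sketch makes the bias visible. The conversion paths are hypothetical, but the mechanics are the ones a last-touch report runs:

```python
from collections import Counter

# Hypothetical conversion paths: ordered channel touches before each purchase.
paths = [
    ["ctv", "youtube", "meta"],   # Meta ran the bottom-funnel retargeting ad
    ["youtube", "meta"],
    ["ctv", "meta"],
    ["meta"],
]

# Last-touch attribution: 100% of the credit goes to the final touch.
last_touch = Counter(path[-1] for path in paths)
print(last_touch)  # Counter({'meta': 4}) -- CTV and YouTube get zero credit
```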
Incrementality answers the question your CFO asks every quarter. Are we wasting money? Geo holdouts, ghost bidding tests, and matched-market lifts are how your team finds out which spend would have happened on its own and which produced new demand. The cost is calendar time and statistical complexity. Most brands run two or three a year per channel, not weekly.
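The core arithmetic of a matched-market read is simple; the cost is in the design, the matching, and the wait. A compressed sketch with invented numbers (a real test adds pre-period matching and a significance check):

```python
import numpy as np

# Hypothetical weekly revenue in matched markets during the test window.
test_geos = np.array([112_000, 118_000, 121_000, 117_000])     # spend running
holdout_geos = np.array([104_000, 106_000, 109_000, 105_000])  # spend paused

# Incremental lift: what the spend produced above what would have happened anyway.
lift = test_geos.mean() - holdout_geos.mean()
lift_pct = lift / holdout_geos.mean()
print(f"Incremental revenue per market-week: ${lift:,.0f} ({lift_pct:.1%} lift)")
```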
MMM answers the question your CMO has to walk into the boardroom with. How should we split the next $20 million across brand, performance, retail media, and CTV? It needs at least 18 months of weekly history to produce a stable read. It does not tell you which Meta ad set to scale on Friday. It tells you what fraction of next quarter's plan should go to Meta in the first place.
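One way to see what MMM buys you: once the model has fit a response curve per channel, the allocation question becomes mechanical. The sketch below reuses the saturation form from the earlier sketch with invented fitted parameters and splits a $20 million budget greedily by marginal return; a real plan would use the model's actual fitted curves.

```python
def response(spend, beta, half_sat):
    """Revenue from a channel at a given spend level (same saturation form as above)."""
    return beta * spend / (spend + half_sat)

# Illustrative (beta, half_sat) pairs per channel -- not a real model's output.
channels = {
    "brand":        (8_000_000, 6_000_000),
    "performance":  (9_000_000, 4_000_000),
    "retail_media": (5_000_000, 3_000_000),
    "ctv":          (6_000_000, 5_000_000),
}

budget, step = 20_000_000, 100_000
alloc = {c: 0 for c in channels}
for _ in range(budget // step):
    # Greedy: each increment goes to the channel with the highest marginal return.
    best = max(channels, key=lambda c: response(alloc[c] + step, *channels[c])
                                     - response(alloc[c], *channels[c]))
    alloc[best] += step

for c, spend in alloc.items():
    print(f"{c:>12}: ${spend:,}")
```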
When a brand picks one of these and calls it the truth, the other two questions go unanswered. That is how stacks break.
The right stack changes by spend level, not by category
A $2 million paid media program does not need an MMM. It needs trustworthy attribution and one or two incrementality reads a year on the channel that takes the most spend. The team is too small to operate three measurement systems without one of them rotting.
A $10 million program is where MMM starts to be worth the investment. There is enough history, enough budget across channels, and enough at stake in cross-channel decisions to justify a quarterly MMM read alongside ongoing attribution and a cadence of two or three incrementality tests a year.
A $50 million program needs all three running continuously, plus a triangulation layer that reconciles them. This is where most brands stall, because they buy more tools without changing how decisions get made. Athena, QRY's internal AI platform, is built for this layer specifically: continuous cross-channel anomaly detection, channel-efficiency shifts, and forecasting signals that sit on top of attribution, incrementality, and MMM outputs. A sketch of the arithmetic this layer runs follows the tiers below.
A $200 million program needs the same stack as a $50 million one, but with dedicated headcount running each component. The shift is organizational, not technical.
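What the triangulation layer computes is not mysterious, even when the orchestration around it is. Here is a generic sketch of one reconciliation pass, not Athena's implementation and with all numbers hypothetical: calibrate the platform-attributed read by the latest incrementality factor, then check it against the MMM.

```python
# Hypothetical quarterly reads for one channel.
platform_attributed = 4_000_000   # what the platform dashboard claims
incrementality_factor = 0.55      # latest geo-holdout: 55% of claimed revenue was incremental
mmm_contribution = 2_400_000      # the MMM's quarterly assignment to the channel

# Calibrated read: platform credit scaled to what the experiment says is real.
calibrated = platform_attributed * incrementality_factor
divergence = abs(calibrated - mmm_contribution) / mmm_contribution
print(f"Calibrated read: ${calibrated:,.0f} vs MMM: ${mmm_contribution:,.0f} ({divergence:.0%} apart)")
if divergence > 0.25:  # escalation threshold is a judgment call, not a standard
    print("Escalate: the reads disagree beyond tolerance; the framework says which wins")
```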
The reconciliation framework comes first, the tools come second
Most measurement projects fail at the same step. The team buys the tools, ships dashboards, and then leadership starts arguing about which dashboard is right. The framework that prevents this argument has to be agreed on before the tools come online, not after.
A working reconciliation framework states three things in writing.
First, which decision each tool drives. Attribution drives tactical reallocation inside a channel. Incrementality drives the question of whether a channel should keep its budget at all. MMM drives cross-channel allocation. CFO conversations are anchored to MMM outputs adjusted by the most recent incrementality reads, not to platform ROAS.
Second, how often each tool is read. Attribution weekly. Incrementality on a published calendar by channel. MMM quarterly. The cadence is in the framework so nobody re-litigates it midyear.
Third, what happens when they disagree. The honest answer is that they will, and the framework should specify which tool wins on which kind of decision. MMM wins on quarterly allocation. Incrementality wins on whether a channel is doing real work. Platform ROAS does not win on anything strategic.
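The memo is short enough to encode literally. A sketch of the three clauses as a structure the team can query when a disagreement surfaces (the naming is illustrative):

```python
FRAMEWORK = {
    "attribution":    {"cadence": "weekly",
                       "drives": "tactical reallocation inside a channel",
                       "wins_on": set()},  # never wins a strategic call
    "incrementality": {"cadence": "published calendar, per channel",
                       "drives": "whether a channel keeps its budget",
                       "wins_on": {"is the channel doing real work"}},
    "mmm":            {"cadence": "quarterly",
                       "drives": "cross-channel allocation",
                       "wins_on": {"quarterly allocation"}},
}

def arbiter(decision: str) -> str:
    """Resolve a disagreement the way the signed memo says to."""
    for tool, spec in FRAMEWORK.items():
        if decision in spec["wins_on"]:
            return tool
    raise LookupError(f"no tool owns '{decision}'; the memo has a gap")

print(arbiter("quarterly allocation"))  # -> mmm
```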
The Impact Methodology's Measure phase exists to install this framework alongside the tools, not after. It is the reason brands working with QRY stop having the four-dashboards-in-a-meeting argument within one quarter.
The CMO who can say 'we measure to one framework, not four' wins more budget conversations than the one with better tools.
The mistakes that turn a measurement stack into a measurement liability
Three patterns show up in nearly every $10M+ stack we review.
Over-investing in attribution precision. Brands hire data engineers to refine identity stitching when the underlying tool was never built for cross-channel allocation. The ceiling on attribution accuracy is set by the platform's deduplication, not by the engineering effort applied to it.
Running MMM without enough history. Eight months of weekly data does not produce a stable model. The team gets numbers anyway and starts making decisions on noise. The right answer is to hold off on MMM until the history exists and to lean on incrementality reads in the interim; a toy simulation below shows how quickly the noise shrinks as history accumulates.
Using attribution to make budget decisions. The single most expensive mistake in this category. A platform reporting 4x ROAS will always look like it deserves more budget than a channel whose contribution the platform cannot see. CTV and YouTube get systematically defunded for this reason, and the brands that defund them watch CAC rise 12 months later. This is the same dynamic that lets demand problems hide as performance problems.
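To see the second pattern in numbers, here is a toy simulation: the same one-channel fit bootstrapped at roughly eight months of weekly history versus eighteen. Real MMMs degrade faster than this toy because they fit many correlated channels, but the direction holds. Everything below is simulated.

```python
import numpy as np

rng = np.random.default_rng(7)

def coefficient_spread(weeks, n_boot=500):
    """Bootstrap a one-channel toy fit; return the spread of the spend coefficient."""
    spend = rng.uniform(20_000, 80_000, weeks)
    revenue = 400_000 + 5.0 * spend + rng.normal(0, 150_000, weeks)
    betas = []
    for _ in range(n_boot):
        i = rng.integers(0, weeks, weeks)               # resample weeks
        X = np.column_stack([np.ones(weeks), spend[i]])
        betas.append(np.linalg.lstsq(X, revenue[i], rcond=None)[0][1])
    return np.std(betas)

print(f"~8 months (35 wks): coefficient spread {coefficient_spread(35):.2f}")
print(f"~18 months (78 wks): coefficient spread {coefficient_spread(78):.2f}")
```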
Build the stack in three phases, not all at once
A team that ships its full measurement stack in six months almost always ends up with a brittle one. The components have not been pressure-tested against real decisions.
Phase one is attribution plus a single annual incrementality read on the largest channel. Six months. The deliverable is one trustworthy weekly read and one honest answer to whether the biggest channel is doing real work.
Phase two adds MMM at quarterly cadence and expands incrementality to two channels. The next six months. The deliverable is the start of cross-channel allocation conversations grounded in MMM, calibrated by the incrementality reads.
Phase three adds the triangulation and orchestration layer. The reconciliation framework gets formalized in a memo signed by the CMO, CFO, and media lead. The team starts running scenario plans against MMM-projected allocations. This is where Athena's forecasting work compounds. It stops being a dashboard and starts being a planning input. We have published a broader framework for measuring what actually matters that walks through the cross-functional alignment piece in more depth.
The reason for the phasing is not budget. It is that the framework has to mature alongside the tools. Standing up all three primitives in the same quarter produces three sources of truth and zero shared decisions.
Frequently asked questions
- How long should it take to set up a working measurement stack?
Eighteen months from a standing start to a full attribution, MMM, and incrementality stack with a reconciliation framework. Six months to phase one (attribution plus one incrementality read), another six to phase two (quarterly MMM and a second incrementality channel), and a final six to triangulate. Anyone selling a faster build is selling tools, not a measurement system.
- What is the difference between MMM and incrementality?
MMM is a structural model that estimates each marketing input's contribution to revenue using historical data. Incrementality is a controlled experiment that estimates a single channel's causal contribution by holding out spend in matched markets or audiences. MMM gives you ongoing cross-channel allocation. Incrementality gives you a sharp causal read on one channel at a time.
- Should we trust platform ROAS as our primary measurement?
No. Platform ROAS, including Meta's and Google's reported numbers, will be higher than your blended ROAS for mechanical reasons related to last-touch credit and view-through windows. Use platform ROAS for tactical optimization within the channel. Use blended ROAS, MMM, and incrementality for everything that touches budget allocation.
- What spend level justifies investing in MMM?
Around $10 million in annual paid media, with at least 18 months of clean weekly data. Below that level, the model becomes unstable and the team is too small to operate it without neglecting attribution and incrementality. Brands at $5M to $10M usually get more value from a disciplined incrementality cadence than from rushing into MMM.
- How do we get our CMO and CFO to agree on which measurement number is real?
Pre-commit. Before the quarter starts, write down which decision each tool drives and which tool wins when they disagree. The CMO, CFO, and media lead sign the memo. The argument shifts from "which number is real" to "are we executing on what we agreed to," which is a much more productive conversation.
The fix is not buying a fifth tool. The fix is the conversation that happens before any of the tools matter, where the leadership team agrees on which tool drives which decision and what to do when they disagree. Most teams skip this conversation because it is uncomfortable. The brands that have it spend the next year making faster decisions on less data, while the brands that do not are still running the four-dashboard argument every Monday.
If your CMO is staring at four numbers and choosing the one she likes best, the question to bring to next week's leadership meeting is not which dashboard to fix. It is which one decision each dashboard is allowed to drive.


