The Measurement Memo: How to Stop the Four-Dashboard Argument Before Next Quarter
There is a one-page document that, if your CMO and CFO sign it before next quarter starts, will eliminate roughly 80% of the measurement arguments your team is having right now. Almost no brand has it. The brands that have it stop having those arguments within a quarter.
The document is a measurement protocol — a written pre-commitment that names which tool drives which decision, at what cadence each tool reads, and which tool wins when they disagree. It is not a measurement framework. It is not a dashboard standard. It is a governance artifact that takes the question “which number is real” off the table for the rest of the year.
The reason most teams don’t have this document is that producing it requires the CMO and CFO to agree, in writing, on something they have never been asked to agree on: that the four tools in their measurement stack are answering four different questions, and that the right response to disagreement is not reconciliation but a written hierarchy of which tool drives which class of decision.
The default response, when leadership confronts the disagreement, is to pick the most flattering number and optimize to it. Most teams pick the platform-reported number for a few months because it is the most generous. Then someone asks why blended ROAS does not match, and the team picks GA4 instead. Then MMM updates and produces a third number, and the conversation starts over.
The other default response is to throw analysts at the reconciliation, or to pick a tool from a head-to-head comparison and call it the new source of truth. Teams hire data engineers to refine identity stitching, build attribution overlays in BigQuery, or stand up a custom MMM in-house. The tools get more sophisticated. The disagreement gets more sophisticated. Leadership confidence does not.
Both responses miss the same thing. The disagreement is not the bug. The disagreement is mechanically guaranteed. Each tool measures something different by design. Reconciling the outputs is the wrong job. Specifying which output drives which decision is the right one.
The four tools answer four different questions. Platform attribution answers which campaign got tactical credit for this conversion. MMM answers what fraction of total revenue this quarter is structurally explained by each marketing input. Incrementality answers what would have happened anyway if the campaign had not run. GA4 sits somewhere between attribution and analytics, with its own credit rules and its own data losses. The four outputs do not converge because the four questions do not converge.
When a team forces convergence, it ends with one of two outcomes. Either the team picks one number and explains away the others, which over time means the brand is making decisions on the most flattering view of performance and ignoring the corrections from the other three tools. Or the team produces a reconciled number that is a weighted average of four outputs, which is mathematically meaningless because it averages four answers to four different questions.
We have written before about triangulation as the honest alternative, where each tool stays in its lane and the combination of outputs informs decisions without collapsing into a single number. The triangulation framing only works if the team has agreed in advance which tool wins on which decision. Without that agreement, triangulation becomes another version of the four-dashboard argument.
This is why the same conversation happens every quarter. The CFO asks for the number. The CMO offers MMM. The growth lead offers blended ROAS from the data warehouse. The platform leads offer their reported numbers. The team spends an hour arguing about which is right, makes a compromise decision, and then re-argues the same question the following quarter. The argument is not a sign that the team is dysfunctional. It is the predictable result of not having a pre-commitment.
In its simplest form, the pre-commitment fits in a five-row table:

| Decision | Tool that wins | Cadence |
|---|---|---|
| Tactical reallocation inside a channel | Platform attribution | Weekly |
| Whether a channel is doing real work | Incrementality | Quarterly, per channel |
| Cross-channel allocation for next quarter | MMM | Quarterly |
| Reporting to CFO and the board | Blended + MMM, adjusted by incrementality | Monthly + quarterly |
| Optimization inside ad platforms | Platform attribution | Weekly |
A working framework has three lines, agreed in writing by the CMO, CFO, and media lead before the quarter starts. First, which decision each tool drives. Second, the cadence at which each tool reads. Third, what happens when they disagree, including which tool wins on which class of decision.
Attribution drives weekly tactical reallocation inside a channel. Incrementality drives the quarterly question of whether each channel is doing real work. MMM drives cross-channel allocation for the next quarter's budget. Platform ROAS drives nothing strategic. Your CFO conversations are anchored to MMM outputs adjusted by the most recent incrementality reads, never to platform numbers. The hierarchy is in the document, not negotiated in the meeting.
This is the conversation that QRY's Marketing Alignment Framework is built around. It produces a memo that the CMO and CFO both sign, ahead of the quarter, naming the measurement primitives and what they drive. Once your team has that memo, the four-dashboard argument stops happening. The team can use platform reports for what they are good at, MMM for what it is good at, and incrementality for what it is good at, without re-litigating the order of operations every Monday.
The four-dashboard argument is the predictable result of not having a pre-commitment. The fix is the document the leadership team signs before the quarter.
Frequently asked questions
- Why do four measurement tools disagree if they are all measuring the same thing?
They are not measuring the same thing. Platform attribution measures campaign-level credit assignment within a single ad system. MMM measures the structural contribution of each marketing input to total revenue across a quarter or longer. Incrementality measures the causal lift of a specific campaign or channel against a controlled holdout. Each is correct in its own frame. The disagreement is mechanical, not a sign that any of them is broken.
- Should we pick one tool as the source of truth and ignore the others?
No. Each tool answers a question the others cannot. Picking one as truth means the brand is making decisions on the most flattering view of performance, and it ignores the answers from the tools that would correct that view. The right move is to specify which tool drives which decision, in writing, ahead of the quarter.
- What goes into a written measurement framework that the CMO and CFO can both sign?
Three things. Which decision each tool drives, for example MMM for cross-channel allocation, incrementality for channel viability, and attribution for tactical optimization within a channel. The cadence at which each tool reads, weekly or quarterly. What happens when the tools disagree, including which tool wins on which kind of decision.
- How do we run this conversation if our team has been making decisions on platform ROAS for years?
Start with one quarter and one decision class. Write down the tool that should drive that one decision and have the CMO and CFO co-sign. Run the next quarter with that one rule enforced. Add a second rule the following quarter. Trying to install the full framework in one meeting almost always produces a document nobody enforces. Rolling it out one decision at a time produces one that survives.
- Where does Athena fit into this?
Athena is QRY's internal AI platform. It monitors signals across the measurement stack continuously, surfaces anomalies, and produces forecasting reads on top of attribution, incrementality, and MMM outputs. It is not a replacement for any of the primitives. It is the layer that makes the framework operational by giving the team faster reads on the same primitives, so decisions can move at weekly cadence without re-litigating the framework.
If your CMO and CFO are walking into next quarter's planning without a written agreement on which number drives which decision, the four-dashboard argument is going to happen again. The fix is not the fifth tool. It is the conversation that happens before the tools matter, where the leadership team agrees in writing on which decision each number is allowed to drive.
Once that document exists, the disagreement does not go away. The argument about the disagreement does.
Samir Balwani is the founder and CEO of QRY, a full-funnel paid media agency he started in 2017. He has 15+ years of advertising experience and previously led brand strategy and digital innovation at American Express. He writes on paid media strategy, measurement, and how agencies should operate.


