In this lesson, we'll explore the overall measurement landscape: the foundational approaches to marketing performance measurement, effective triangulation between systems, and how to choose a stack that works for your specific business.

"All models are wrong." — George Box
Most marketers are working with measurement systems they know are only telling half the truth.
That’s not because they’re doing anything wrong — it’s because marketing has changed, and no single measurement system can give you the full picture of performance anymore.
The truth is that all marketing measurement systems are fundamentally based on one of three foundational approaches, each with its own strengths and weaknesses.
Combining them intelligently is the key to building out a measurement stack that is responsive, holistic and causally robust.
The aim of this lesson is to give you an understanding of these fundamental categories, what they’re good and bad at, and how to choose the best combination for your needs.
There are three primary approaches to marketing measurement, from which all popular measurement tools have developed:

1. Deterministic (rules-based) attribution. These methods rely on observable user-journey data and fixed rules to assign credit to touchpoints along the journey. Examples: Last Click (LC), First Click (FC), Last Non-Direct Click (LNDC), Linear, Position-Based, Multi-Touch Attribution (MTA), and Data-Driven Attribution (DDA). Technically, DDA relies on a statistical approach to produce its attributions; however, because it works from tracked user journeys, it is included here for practical purposes, as it shares the same strengths and weaknesses as the other models in this category.

2. Statistical modelling. These methods use machine learning and statistics on aggregated data to identify how influential different channels, touchpoints, and broader factors are in driving conversions and revenue. Example: Marketing Mix Models (MMM).

3. Incrementality testing. An umbrella term for controlled experiments designed to isolate the incremental effect of a single marketing factor. These tests work by comparing outcomes between a test group that is exposed to your marketing and an otherwise identical control group that is not. Examples: Geo-Lift Tests, A/B Tests, Split Tests, User Holdout Tests, Randomised Controlled Trials (RCTs).
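To make the rules-based category concrete, here is a minimal Python sketch showing how three common rules assign credit to the same tracked journey. The journey and channel names are invented for illustration:

```python
# Illustrative sketch of rules-based attribution (hypothetical journey data).
# Each rule takes an ordered list of touchpoints and returns a credit share per channel.

def last_click(journey):
    # 100% of the credit goes to the final touchpoint before conversion.
    return {journey[-1]: 1.0}

def first_click(journey):
    # 100% of the credit goes to the first touchpoint.
    return {journey[0]: 1.0}

def linear(journey):
    # Credit is split evenly across every touchpoint in the journey.
    credit = {}
    share = 1.0 / len(journey)
    for channel in journey:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

journey = ["Paid Social", "Organic Search", "Email", "Paid Search"]
print(last_click(journey))   # {'Paid Search': 1.0}
print(first_click(journey))  # {'Paid Social': 1.0}
print(linear(journey))       # each touchpoint gets 0.25
```

The same conversion produces three different answers depending on the rule chosen, which is why rules-based reports from different tools rarely agree.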
If you’ve worked with a marketing measurement solution of any kind, it will be primarily based on one or more of these fundamental approaches.
In the next section we’ll discuss their pros and cons in more detail, but if you’re primarily interested in practical decision-making support, you can skip to the bottom for a guide to choosing the best measurement option for your use case.
Basing credit on customer-journey interactions sounds like an airtight measurement model – after all, we know with certainty that the customer actions took place and can assign credit immediately after conversion, without waiting for tests or models to run.
However, there are several reasons why deterministic solutions have limited application in the real world.
The primary issue is that measurement approaches based on customer journeys require those journeys to be consistently tracked. With increasing cookie restrictions, and social platforms that keep their data within their own ecosystems, we often end up with multiple, fragmented user journeys, each claiming full credit for the same conversion.
This also means conversion tracking through deterministic methods is often patchy, so tools using these systems frequently report different total conversions than your ecommerce source of truth—limiting their usefulness.
Finally, these approaches can't account for factors outside the observable part of the user journey (the "attribution window"). For example, if a customer sees an ad on one device but converts on another, or views an impression while logged out, those touchpoints won't be included in the attribution decision—resulting in a well-known bottom-of-funnel bias.
While user journey data remains highly valuable, the key challenge for deterministic measurement approaches is bridging these data gaps and fairly assigning credit to influential but unseen upper-funnel activity.
Statistical techniques are the primary method used to bridge the measurement gaps of deterministic models.
Because they operate on aggregated data (e.g., daily totals for impressions, clicks and conversions), they are not tied to any specific user journey and are inherently robust to privacy shifts and cookie-loss issues. They also excel at modelling the impact of external factors like seasonal trends, price fluctuations, and macro-economic shifts, which may be significantly influential but not explicitly linked to any customer journey.
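To illustrate the core idea behind statistical models like MMM, here is a toy sketch that regresses aggregate daily sales on aggregate daily spend per channel. All numbers are synthetic, the channel names are invented, and real MMMs add much more (adstock, saturation, seasonality, external factors); this is only a sketch of the aggregated-data principle:

```python
# Toy MMM-style regression: estimate each channel's contribution from
# daily aggregate data, with no user journeys involved.
import random

def fit_ols(rows, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y."""
    k = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Solve the k x k system with Gaussian elimination (partial pivoting).
    for col in range(k):
        pivot = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c] for c in range(r + 1, k))) / xtx[r][r]
    return beta

random.seed(0)
days = 365
tv = [random.uniform(0, 1000) for _ in range(days)]       # daily TV spend
search = [random.uniform(0, 500) for _ in range(days)]    # daily search spend
# Hidden "true" relationship: baseline 200 sales/day, TV adds 0.3 per unit
# of spend, search adds 0.8, plus noise.
sales = [200 + 0.3 * t + 0.8 * s + random.gauss(0, 20) for t, s in zip(tv, search)]

X = [[1.0, t, s] for t, s in zip(tv, search)]
base, beta_tv, beta_search = fit_ols(X, sales)
print(f"baseline={base:.1f}, tv={beta_tv:.3f}, search={beta_search:.3f}")
```

With a year of daily aggregates the model recovers the hidden coefficients closely, despite never seeing a single user journey; this is also why granularity is limited, since each extra factor needs enough aggregate data to estimate.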
However, they’re not without limitations. Models such as MMMs are typically slow to report – with 3-6 month reporting cycles the norm – and require years of data for every factor being analysed.
This also limits how granular they can be. For campaign, ad, or creative-level reporting, there simply isn't enough data for robust statistical patterns to be identified.
And while they’re excellent at identifying relationships in historical data (correlation), it’s often unclear how much confidence you can place in them for future action (causation).
Broad in scope, MMMs are powerful tools, but to maximise their value marketers need a mechanism to prove causal relationships and move from descriptive analysis to action.
This need for prescriptive action is precisely where incrementality testing comes into its own.
By using a controlled experiment (test group vs. control group), incrementality tests isolate the effect of a single marketing factor to definitively prove causal impact (i.e., incrementality). But again, while highly valuable, incrementality tests are not a one-size-fits-all solution.
Firstly, tests are deliberately targeted in nature, which means their outputs can only really be applied to the specific time period, geographic location, or channel in question. They’re also expensive and time-consuming to design and execute, making them unsuitable for frequent, holistic measurement across channels and geographies.
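The arithmetic behind any incrementality test is simple. Here is a minimal sketch of the lift calculation for a holdout experiment, with made-up numbers:

```python
# Hypothetical holdout test: the control group sees no ads, the test group does.
# Incremental lift = extra conversion rate in the test group vs. the control baseline.

def incremental_lift(test_users, test_conversions, control_users, control_conversions):
    test_rate = test_conversions / test_users
    control_rate = control_conversions / control_users
    incremental_rate = test_rate - control_rate      # conversions caused by the ads
    lift = incremental_rate / control_rate           # relative lift over baseline
    incremental_conversions = incremental_rate * test_users
    return lift, incremental_conversions

# Made-up example: 100,000 users per group.
lift, extra = incremental_lift(100_000, 2_400, 100_000, 2_000)
print(f"lift: {lift:.0%}, incremental conversions: {extra:.0f}")  # lift: 20%, ~400
```

The expensive part is not this calculation but everything around it: recruiting comparable groups, holding out spend, and running the test long enough for the difference to be statistically meaningful.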
If you rely heavily on ad-platform data, you may find signals conflicting and credit claims overlapping. If you use a Last Click model, you'll likely see heavy bottom-of-funnel bias. And if you work solely with a traditional MMM, you'll be flying blind between refresh cycles.
To make sense of the different insights provided by different tools, marketers typically rely on a technique called triangulation. Triangulation is a common term in marketing measurement but it’s rarely explained in practical terms.
In practice, triangulation isn’t a one-time validation activity, but a workflow. It’s how advanced teams use multiple tools to address the blind spots of individual systems and validate insights for confident decision-making.
Here’s what a typical workflow could look like:
Model (via MMM) — Use your MMM to identify which channels are driving sales across your full funnel. It shows what's working at a strategic level and where you might be over- or under-investing.
Activate (via MTA) — Use MTA data to optimize campaigns and creative based on channel learnings. It guides daily, tactical optimization based on immediate user behaviour.
Validate (via Incrementality Tests) — When you spot unusual outputs or discrepancies between MMM and MTA results, run targeted experiments in specific channels to determine which signal is correct. This provides causal ground truth: proof that a channel or tactic truly drives incremental value.
This is triangulation in practice—using each tool for what it does best and incrementally refining your models to converge on a single, trusted source of truth.
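One simple way to operationalise the validate step is to compare the share of credit each system gives a channel and flag large disagreements as candidates for an incrementality test. A sketch with invented channel shares and an arbitrary disagreement threshold:

```python
# Hypothetical channel-credit shares from an MMM and an MTA tool (each sums to 1).
mmm_share = {"Paid Search": 0.30, "Paid Social": 0.35, "TV": 0.25, "Email": 0.10}
mta_share = {"Paid Search": 0.55, "Paid Social": 0.25, "TV": 0.05, "Email": 0.15}

THRESHOLD = 0.15  # arbitrary cut-off for a "large" disagreement

def channels_to_test(mmm, mta, threshold=THRESHOLD):
    # Flag channels where the two systems disagree by more than the threshold;
    # these are the best candidates for a targeted incrementality test.
    return sorted(ch for ch in mmm if abs(mmm[ch] - mta.get(ch, 0.0)) > threshold)

print(channels_to_test(mmm_share, mta_share))  # ['Paid Search', 'TV']
```

Here MTA over-credits Paid Search (a bottom-of-funnel channel) while the MMM credits TV that MTA barely sees, which is exactly the pattern the validate step is designed to resolve.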
However, not every brand needs to be at the same level of measurement maturity. What matters most is using what you have well, and knowing what step to take to fill in the gaps.
If the idea of procuring and onboarding multiple solutions is not appealing, modern measurement providers are increasingly offering “convergent solutions” that bring strategic breadth, frequent reporting, and incrementality proof into a single tool.
These can be excellent choices for simplifying your reporting while covering the bases important to you.
In the previous lesson we covered one such approach: daily, Bayesian MMMs that deliver MMM’s breadth of insight at MTA speed, while grounding outputs in causal truth.
Regardless of your measurement maturity, your goal is not to find a single, perfect tool or to bolt all available tools together, but to establish a trusted workflow that supports your business decision-making. In the next lesson, we’ll dive deeper into how to balance valuable one-off analyses with always-on measurement approaches.