In our previous lesson, we introduced incrementality testing as one of the three core approaches to measurement. This lesson takes that further by breaking down the main test types, explaining when each is most appropriate, and showing how to design tests that deliver the greatest value.

When we talk about incrementality in marketing, what we mean is the true, additional impact that a specific activity generates.
In practical terms, the incrementality (or "incremental lift") of any marketing activity is the extra revenue, conversions, or downloads it drives that are directly and causally attributable to that activity alone.
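To make the definition concrete, here is a minimal sketch of the arithmetic in Python. All figures (group sizes, spend, order value) are hypothetical:

```python
# Hypothetical test readout: all numbers are made up for illustration.
test_users, test_conversions = 100_000, 2_400        # exposed to the campaign
control_users, control_conversions = 100_000, 2_000  # held out
spend = 50_000.0                                     # campaign spend in dollars
avg_order_value = 60.0

# Incremental lift: conversions beyond what the control group predicts.
expected_baseline = control_conversions / control_users * test_users
incremental_conversions = test_conversions - expected_baseline   # 400

# Incremental ROAS: revenue causally attributable per dollar spent.
incremental_revenue = incremental_conversions * avg_order_value  # $24,000
iroas = incremental_revenue / spend                              # 0.48

print(f"Incremental conversions: {incremental_conversions:.0f}")
print(f"Incremental ROAS: {iroas:.2f}")
```

The key move is that the control group supplies the counterfactual: anything it converts is what would have happened anyway, so only the difference is credited to the campaign.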
It may seem that other measurement approaches already answer this question, but neither deterministic MTA-style measurement nor traditional MMMs can truly isolate the causal impact of a single factor from all the other possible influences happening at the same time.
It's for this reason that incrementality tests are often held up as the gold standard for marketing measurement, particularly when you need definitive proof of how much extra revenue you can expect from a budgetary uplift.
However, despite their strengths, there are good reasons why we can’t rely on tests alone.
At its core, incrementality is the marketing application of a fundamental principle from data science: causality - proving a direct cause-and-effect relationship.
Incrementality testing provides the clearest and most conclusive evidence of marketing impact, but there are practical and operational constraints that prevent its use for continuous, "always-on" measurement:
They're expensive, complex, and time-consuming — Tests can easily be invalidated if not set up and managed correctly. Properly designing and maintaining them requires significant time, effort, and technical expertise, making them costly and difficult to scale.
They need minimum run times and audience sizes — To achieve statistical significance, tests must run long enough against large enough populations (the control and test groups), making it difficult to test small campaigns or channels (see the sample-size sketch after this list).
Results come with a lag — Most tests require a minimum run time of weeks or months, meaning results always arrive delayed and can't be used for real-time campaign optimizations or course corrections.
They come with a built-in opportunity cost — All incrementality tests require a control group who aren't exposed to the marketing activity you're testing. This means sacrificing potential immediate conversions or revenue in that segment.
They only ever provide a snapshot — Tests are deliberately targeted to give you statistical certainty on a single factor, which also makes their results highly specific. You can't extrapolate results beyond the time period, channel, or audience examined, giving them a limited shelf life.
They don’t model diminishing returns — A test only measures the incremental lift for the specific dollar amount invested during the experiment. Since marketing channels are subject to diminishing returns, you also can’t rely on the tested return to remain constant if you significantly change your budget.
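To make the run-time and audience-size constraint concrete, here is a minimal power-analysis sketch using the statsmodels library; the baseline conversion rate and target lift are hypothetical:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical: 2% baseline conversion rate, hoping to detect a 10% relative lift.
baseline_rate = 0.020
lifted_rate = 0.022

# Cohen's h effect size for comparing two proportions.
effect_size = proportion_effectsize(lifted_rate, baseline_rate)

# Users needed per group for 80% power at a 5% significance level.
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(f"Required users per group: {n_per_group:,.0f}")  # roughly 40,000 per group
```

Even for a modest 10% relative lift, tens of thousands of users are needed in each group, which is exactly why small campaigns and niche channels are hard to test.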
Think of incrementality tests as an investment in insights. They don't paint the whole picture, but they confirm the cause-and-effect relationships you need to know for your most important campaign decisions.
Before we explore tactics for maximizing the value of test results in your work, let's review some of the more common test types and what they’re designed for.
All incrementality tests follow the same core principle: compare a Test Group (exposed to the activity) to a Control Group (not exposed).
The method used to divide the audience into these groups determines the test type, and there are two main categories: user-based and market-based.
User-based tests are considered the most precise form of incrementality testing. They rely on the purest principle of experimentation: Randomized Controlled Trials (RCTs).
Individual users (or user IDs) are randomly sorted into either the Test or Control group before the campaign begins.
This randomized selection ensures that both groups are as statistically identical as possible (e.g., in demographics, purchasing history, organic intent, etc.), meaning any difference in outcome is almost certainly causally linked to the marketing activity.
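One way this random split is often implemented in practice is deterministic hashing, so a given user always lands in the same group for the life of the test. A minimal sketch, where the salt string and 10% holdout share are purely illustrative:

```python
import hashlib

def assign_group(user_id: str, salt: str = "campaign_2024_q3", holdout: float = 0.1) -> str:
    """Deterministically assign a user to Test or Control.

    Hashing (salt + user_id) gives a stable pseudo-random split:
    the same user always gets the same group, and changing the salt
    re-randomizes the population for the next experiment.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform value in [0, 1]
    return "control" if bucket < holdout else "test"

print(assign_group("user_12345"))
```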
Here are some terms you may be familiar with:
[Image: common user-based test types]
Market-based tests are necessary when you can't track or segment individual users, or when the activity being tested affects entire regions (like TV or radio).
These tests use large, aggregated groups - typically geographic markets like cities or states - and treat entire markets as either the Test or Control group.
To ensure validity, markets have to be carefully selected so they're as similar as possible in historical sales, population, and demographics.
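As an illustration of that matching step, here is a minimal sketch that ranks candidate control markets by how closely their sales history tracks the test market. The market names and sales figures are hypothetical:

```python
import pandas as pd

# Hypothetical weekly sales history per market.
sales = pd.DataFrame({
    "austin":   [110, 115, 108, 120, 125, 118],  # candidate test market
    "denver":   [112, 118, 110, 123, 127, 120],
    "portland": [90, 140, 70, 160, 95, 150],
    "raleigh":  [108, 112, 105, 119, 122, 116],
})

test_market = "austin"
candidates = sales.drop(columns=test_market)

# Rank candidate control markets by correlation with the test market's history.
correlations = candidates.corrwith(sales[test_market]).sort_values(ascending=False)
print(correlations)  # denver and raleigh track austin closely; portland does not
```

Real matching also weighs population, demographics, and seasonality, but the principle is the same: the control market must be a credible stand-in for what the test market would have done untouched.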
Incrementality tests are too slow and resource-intensive to serve as your only measurement solution. However, smart marketers use their statistical certainty to strengthen more scalable, always-on models like MMM and MTA.
Incrementality tests shouldn't be seen as a replacement for these models, but rather as a tool for critical validation.
Here are two examples of validation workflows using incrementality tests:
Tactical granularity — A traditional MMM might give you a strong strategic insight like "TV is highly effective and you should invest more," but lack the detail needed for tactical decisions: "Should I invest in Network A or Network B?" A targeted Geo-Lift Test can deliver high-confidence incremental findings that let you confidently shift significant budget to the proven network (see the difference-in-differences sketch after this list).
Resolving conflicts between tools — When working with multiple tools, it's not uncommon for different models to assign drastically different values to the same channel. Your MTA (tracking individual clicks) might show a $6 ROAS for Paid Social, while your MMM (tracking high-level budget allocation) credits it with only $3 ROAS. A Conversion Lift Study (CLS) run on that channel can reveal the true incremental ROAS and determine the channel's actual value.
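Reading out a geo test like the Network A example above is commonly done with a difference-in-differences calculation. A minimal sketch, with all figures hypothetical:

```python
# Hypothetical average weekly sales, before and during the geo test.
test_before, test_during = 100_000.0, 126_000.0        # market exposed to Network A
control_before, control_during = 98_000.0, 104_000.0   # matched holdout market

# Difference-in-differences: subtract the control market's organic trend
# from the test market's change to isolate the campaign's incremental effect.
test_change = test_during - test_before            # +26,000
control_change = control_during - control_before   # +6,000 (organic trend)
incremental_weekly_sales = test_change - control_change  # +20,000

print(f"Estimated incremental weekly sales: ${incremental_weekly_sales:,.0f}")
```

Note how a naive before/after comparison would credit the campaign with the full $26,000; the matched control market strips out the $6,000 that was happening anyway.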
But to truly maximize the value of test results beyond the specific question being asked, you need a mechanism to let your MTA or MMM models learn from them. This is achieved through calibration.
Calibration is the process of using the measured incremental lift to correct assumptions in less-precise models.
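One simple form calibration can take is a per-channel multiplier that scales the model's attributed value by the ratio the test revealed. A hypothetical sketch, echoing the Paid Social conflict above and assuming a tested incremental ROAS of $4:

```python
# Hypothetical readings for Paid Social from the conflict example above.
mta_roas = 6.0       # what the click-based MTA model reports
tested_iroas = 4.0   # causally proven by a Conversion Lift Study

# Calibration multiplier: how much the model over- or under-credits the channel.
calibration_factor = tested_iroas / mta_roas  # ~0.67

def calibrated_value(mta_attributed_revenue: float) -> float:
    """Scale MTA-attributed revenue toward the tested incremental truth."""
    return mta_attributed_revenue * calibration_factor

print(calibrated_value(120_000.0))  # $120k of MTA credit is ~$80k incremental
```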
Ultimately, while incrementality tests are essential for validation, their value is maximized through calibration. This ensures that every expensive, resource-intensive test doesn't just answer a single question - it permanently raises the accuracy of your entire measurement framework.
Calibrating your MMM with test results is powerful, but traditional ("frequentist") MMMs require a complete re-run to incorporate new findings. This means you often wait three to six months before the model reflects the causal truth you just discovered.
Modern MMMs built on Bayesian statistics solve this problem. They allow near real-time calibration through the concept of priors.
Unlike traditional models, Bayesian models start with "prior beliefs" - initial assumptions about a channel's effectiveness that serve as the starting point for the learning process. When an incrementality test delivers a causally proven result (e.g., a $4 ROAS), that result becomes a highly confident "strong prior."
Instead of requiring a complete rebuild, plugging this reliable test result into the model guides its learning process by giving it a clearer idea of where the true effectiveness of that channel lies. This focused starting point allows the model to learn the channel's performance faster and with greater confidence. The result is a measurement system that offers the speed and accuracy of direct testing, combined with the comprehensive strategic view of a full-scale media mix model (MMM).
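To make the idea of a strong prior concrete, here is a minimal sketch assuming (purely for illustration) Gaussian uncertainty on a channel's ROAS and a conjugate normal-normal update. Real Bayesian MMMs are far more sophisticated, but the mechanics of evidence-weighting are the same:

```python
import numpy as np

def update_normal(prior_mu, prior_sigma, obs_mu, obs_sigma):
    """Conjugate normal-normal update: combine a prior belief about ROAS
    with new evidence, weighting each by its precision (1 / variance)."""
    prior_prec = 1.0 / prior_sigma**2
    obs_prec = 1.0 / obs_sigma**2
    post_prec = prior_prec + obs_prec
    post_mu = (prior_mu * prior_prec + obs_mu * obs_prec) / post_prec
    return post_mu, np.sqrt(1.0 / post_prec)

# Strong prior from the incrementality test: ROAS ~= 4.0, tight uncertainty.
prior_mu, prior_sigma = 4.0, 0.3

# Noisy in-market evidence the MMM observes for the same channel.
post_mu, post_sigma = update_normal(prior_mu, prior_sigma, obs_mu=5.5, obs_sigma=1.5)
print(f"Posterior ROAS: {post_mu:.2f} +/- {post_sigma:.2f}")  # stays close to 4.0
```

Because the test-derived prior is much tighter than the noisy observational signal, the posterior barely moves from $4; a weak prior would instead let the noisy data dominate.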
We discussed Bayesian MMMs and their benefits in a previous lesson; revisit it for a more detailed explanation of the industry shift they're driving.
Incrementality tests are often considered the gold standard of marketing measurement because they use controlled experimentation to deliver the definitive, causal truth about an activity's value - something other measurement approaches simply can’t guarantee.
However, their high cost, slow speed, and deliberately narrow focus mean they can't be relied upon as a standalone, always-on solution. Instead, smart marketers use tests strategically and maximize their value by calibrating faster, more scalable models like MTA and Bayesian MMMs.
In the next lesson, we’ll talk about how tests and other measurement approaches can be leveraged to quantify impact beyond your owned marketplace to an estimation of the “halo effect.”