You're probably familiar with the challenge of attribution. You want to understand and justify your ad spend, you need to prove your ads are driving growth, and your seniors are waiting for results. So you pull the ad data from a platform like Facebook Ads and put your trust in its numbers. But how do you know how many of those conversions would have happened anyway — marketing campaign or not?
This is where advertising incrementality comes into play.
Incrementality measurement makes the value of your efforts easier to pinpoint. It allows you to isolate the true lift driven by your campaigns, so you can optimize advertising spend, justify marketing budgets and make data-backed decisions that drive real results.
But how do you measure incrementality? In this post, we'll get into marketing incrementality, how to get started, why you need to do it, and what to avoid, as well as our tried-and-tested Funnel method for incrementality. So let's get to it.
What is advertising incrementality?
Measuring incrementality in advertising is about working out how much impact a campaign or channel has had on your overall conversions. It helps you see if a specific channel or campaign has driven more sales or actions than if you had done nothing.
A ‘control versus test’ approach is the best way to determine the incremental effects of a marketing action on a result.
What is incrementality – an example
Using incrementality in marketing, you can isolate conversions directly driven by a specific campaign, separating them from organic sales. This approach helps identify the actual impact of your marketing efforts, beyond what would happen naturally.
What do we mean by "organic sales"?
These are the sales that happen because your brand has gained a trusted position in the market over time. Loyal buyers already know how to find you whether or not you run ads. You may also have other customers who find you through organic search after you have invested years in SEO.
These are sales that would happen even if you did not buy ads. They happen organically, not because of advertising.
An example of incrementality testing
Imagine you’re running a Black Friday campaign across multiple marketing channels. With incrementality measurement, you compare a group exposed to your Instagram ads with another group that isn’t, which allows you to measure the incremental sales increase driven by your marketing campaign. Without incrementality measurement, it would be difficult to determine which sales would have happened anyway as part of the usual seasonal shopping trends.
With this control and test approach, you can also understand marketing impact at a more granular level. For instance, you can figure out if your display ads or social media posts are the primary drivers of an uplift in your YOY holiday sales.
The goal is clear: determine which campaigns and channels contribute the most to your sales and which part of the sales might have occurred anyway, even if you hadn’t shown any ads.
Why does marketing incrementality matter?
Incrementality helps you separate the results that you would have achieved anyway, without marketing actions, from those that were caused by your marketing. That means you're no longer just counting clicks and impressions — you're understanding what's genuinely driving sales or leads.
Attribution models can be a problem when they work in isolation. For example, last-click attribution puts all the credit on the last place a customer visited before they made a purchase — and in doing so completely overlooks all the other marketing elements that might have influenced their decision-making process. With triangulation and incrementality, you get a more unbiased view of attribution, so you can pinpoint where your most valuable marketing efforts are.
Marketing budget is often tight, and often slashed when companies need to tighten their belts. Having incrementality in your toolbox means you're ready to prove which elements of your spend are the most vital and valuable — the things that will give you the biggest bang for your buck. It also makes it easier to prove to your superiors that your efforts are working and making the company more profitable.
Incrementality, marketing attribution and marketing mix modeling: what's the difference?
Marketers use marketing mix modeling, attribution and incrementality testing to measure marketing success. These approaches provide different parts of the story, but combining them offers the juiciest insights into incremental lift.
Marketing mix modeling (MMM)
Marketing mix modeling (MMM) helps uncover your baseline sales while pinpointing how much of your growth comes from your media and marketing campaigns. It’s a powerful way to identify what’s driving results so you can fine-tune your strategy.
However, MMM works best alongside tools like multi-touch attribution (MTA) and incrementality testing. Together, these measurement methods provide a clearer picture of both immediate and sustained impact, helping you make smarter, more confident decisions.
Marketing attribution
Marketing attribution focuses on assigning credit to different touchpoints in a customer’s journey — showing how channels work together to drive a conversion. While MMM and incrementality testing reveal the overall lift of marketing, it's attribution that helps unpack the route customers take along the way. For example, attribution can show how paid search, display, and email collectively influence a purchase, even if one channel appears stronger in isolation.
When layered with incrementality testing and MMM, attribution provides the in-between context — not only showing what drives results, but also how different touchpoints contribute to them.
Incrementality testing
Incrementality testing helps you identify what’s driving results by comparing two groups: one that sees your marketing campaign (exposed) and one that doesn’t (control). It’s a simple but useful way to isolate the impact of your marketing.
This approach works especially well for channel-specific evaluations, like measuring the lift from paid social or email campaigns. It’s all about knowing what moves the needle so you can better manage your marketing spend and drive real growth.
Using MMM, MTA and incrementality testing together in measurement triangulation provides a more holistic overview of ad effectiveness, enabling you to make better data-informed decisions about budget allocation.
The Funnel method for effective incrementality testing
You might have heard us talk before about "triangulation" — this is how we at Funnel combine marketing mix modeling, digital attribution and incrementality testing to minimize bias and create a holistic view of the results of your marketing efforts. Triangulation is the gold standard in marketing measurement, but it requires your data sources to be as accurate as possible.
Other fundamental elements include:
Relying on first-party transaction data for accuracy
Pull data directly from your transaction history to assess campaign impact, rather than relying on media platform-specific metrics (like conversions reported by Facebook or Instagram), which may not capture the full customer journey.
For example, an online fashion retailer segments customers into two groups. One receives SMS campaigns (exposed group), and the other doesn’t (control group). Using the sales data from their customer relationship management system (CRM), not data from the SMS tool, they can track conversions, average order value and purchase frequency over time.
By comparing these metrics between the test and control groups, they can see the incremental lift from SMS, isolating its impact from other channels like email and organic search.
This reveals whether SMS truly drives additional sales, allowing the retailer to make better-informed decisions about future SMS investments. This level of insight helps you manage budgets with greater precision, making justifying your ad spend that much easier.
Looking beyond platform lift studies for cross-channel insights
Platform-specific studies might only reflect single-channel campaign performance and overlook cross-channel interactions. But you can also use incrementality measurement to capture the interplay between channels.
Let’s say a fitness brand uses incrementality measurement to assess the combined impact of Instagram and YouTube ads on online course sales. First, they split the audience into three groups:
- One sees only YouTube ads
- Another, only Instagram ads
- And a third sees both in a set sequence (YouTube first, then Instagram)
Splitting groups for individual testing makes it easier to see incremental lift across channels.
By comparing conversion rates across these groups, the fitness brand finds that sequential exposure (YouTube followed by Instagram) boosts conversions by 40% over single-channel exposure.
This insight allows them to focus their budget on coordinated, cross-channel campaigns rather than isolated ones that don’t have as much bang for their buck.
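The group comparison above can be sketched in a few lines of Python. The group sizes and conversion counts below are hypothetical, chosen so that sequential exposure comes out roughly 40% ahead of the best single channel, as in the example:

```python
# Hypothetical conversion counts per exposure group (illustrative only)
groups = {
    "youtube_only":   {"size": 5000, "conversions": 150},
    "instagram_only": {"size": 5000, "conversions": 160},
    "sequential":     {"size": 5000, "conversions": 224},
}

# Conversion rate for each group
rates = {name: g["conversions"] / g["size"] for name, g in groups.items()}

# Relative uplift of sequential exposure over the best single channel
best_single = max(rates["youtube_only"], rates["instagram_only"])
uplift = (rates["sequential"] - best_single) / best_single * 100
print(f"Sequential uplift vs best single channel: {uplift:.0f}%")
```

With these illustrative numbers, the sequential group converts at 4.48% versus 3.2% for the best single channel — a 40% relative uplift.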
How to measure incrementality: a step-by-step guide
Ready to leverage incrementality testing for quick, actionable insights into specific campaigns or tactics? It all starts with comparing your results against a baseline to create a comprehensive and robust overview.
1. Model your baseline sales
Before we can start testing, we need a baseline to compare against. This will make creating a holistic view of your results much simpler later on.
How to gather historical data
- First, you must collect sales and ad spend data across all marketing channels for a set period.
- Include other potential influencers like seasonality, economic factors and competitor marketing activity to create a full model.
2. Set up incrementality testing
Define your control and treatment groups
- Choose your platform (e.g., Facebook, Google Ads, in-store promotions).
- Identify a representative audience segment for testing.
- Choose campaigns with immediate calls to action (e.g., promotions, product launches).
Set up your campaign environment
- Create campaigns within your ad platform that allow you to isolate the test and control groups for online environments.
- Consider geographic split testing (different regions) or timing variations (specific days for ads) for offline or hybrid environments.
- For geo-lift incrementality testing, for example, find two geographic regions in your audience that are similar.
- Then, pause all ads in one of these regions.
- After a few weeks, compare the regions to see how many fewer sales, if any, the region without ads is driving compared to the region with ads.
- Avoid any other changes in marketing or promotions for the test duration to keep the comparison clean.
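The geo-lift comparison above can be estimated with some simple arithmetic. This is a minimal sketch with hypothetical weekly sales figures: it uses the pre-test ratio between the two matched regions to estimate what the holdout region would have sold if its ads had kept running, then treats the gap as ad-driven sales.

```python
# Hypothetical weekly sales for two matched regions during a geo-lift test.
# Ads keep running in region A; all ads are paused in region B (the holdout).
pre_a  = [100, 104, 98, 102]   # region A, pre-test weeks
pre_b  = [99, 101, 97, 103]    # region B, pre-test weeks
test_a = [110, 108, 112, 110]  # region A, test weeks (ads on)
test_b = [96, 94, 95, 93]      # region B, test weeks (ads paused)

def mean(xs):
    return sum(xs) / len(xs)

# Scale by the pre-test ratio between regions to estimate what region B
# would have sold during the test period if its ads had stayed on
ratio = mean(pre_b) / mean(pre_a)
counterfactual_b = mean(test_a) * ratio

# Incremental weekly sales attributed to the paused ads
lift = counterfactual_b - mean(test_b)
print(f"Estimated weekly sales driven by ads in region B: {lift:.1f}")
```

In practice you'd run this over more weeks and check that the regions really do track each other before the test; a persistent pre-test gap means the regions aren't well matched.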
3. Run the incrementality test
Launch the campaign
- Run your campaign as planned, showing the ad or marketing intervention only to the test group.
- Keep the control group excluded from any exposure to the test ad.
- You can use comprehensive tracking tools, like a central data hub, ad platform pixels and CRM software, to monitor responses from each group in real time.
Monitor data collection
- Track conversions for both test and control groups.
- You can use conversions like purchases, sign-ups, downloads or any specific action you want to measure.
- Confirm that each action is correctly attributed to the relevant group in your tracking setup.
- Track MTA so your interpretation can be more comprehensive and accurate.
- Gather data for a long enough period to capture reliable results (e.g., 2–4 weeks for short campaigns or several months for longer campaigns).
4. Analyze results to determine incrementality
While you might have a tool that will manage this all for you in a central Data Hub, it pays to understand the numbers you’re going to be working with and how to calculate incrementality on the fly.
Compare conversion rates
First, calculate the conversion rate for each group:
Let’s assume the following:
- Test group size: 2,000
- Test group conversions: 300
- Control group size: 2,000
- Control group conversions: 250
Step 1: Calculate the conversion rates
The test conversion rate: 300 / 2000 = 0.15 (15%)
The control conversion rate: 250 / 2000 = 0.125 (12.5%)
The test group conversion rate (15%) is higher than the control group conversion rate (12.5%).
Step 2: Calculate the absolute difference
First, express your conversion percentages as decimals. Then, calculate the absolute difference.
Absolute difference = 0.15 - 0.125 = 0.025 (2.5 percentage points)
Step 3: Calculate relative uplift and incremental lift
When your exec team asks you how much your tests improved sales, you can express it as relative uplift:
Relative uplift (%) = (0.025 / 0.125) × 100 = 20%
Or as incremental lift — the number of extra conversions driven by the campaign. Multiply the control group's conversion rate by the test group's size, then subtract the result from the test group's conversions:
Incremental lift = Test group conversions - (Control conversion rate × Test group size)
Incremental lift = 300 - (0.125 × 2,000) = 300 - 250 = 50 conversions
Step 4: Interpret the results
The absolute improvement is 2.5 percentage points.
The relative uplift is 20%, meaning the test group's ad increased conversions by 20% compared to the control group.
The incremental lift is 50 conversions.
From these simple calculations, you can determine not only if your test was successful but also by how much.
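The same arithmetic is easy to script so you can rerun it for any test. Here is a minimal Python sketch using the numbers from the steps above:

```python
# Worked incrementality example (same numbers as the steps above)
test_size, test_conversions = 2000, 300
control_size, control_conversions = 2000, 250

test_rate = test_conversions / test_size           # 0.15  (15%)
control_rate = control_conversions / control_size  # 0.125 (12.5%)

absolute_diff = test_rate - control_rate           # 2.5 percentage points
relative_uplift = absolute_diff / control_rate * 100

# Incremental conversions: observed conversions minus the number the test
# group would have produced at the control group's rate
incremental = test_conversions - control_rate * test_size

print(f"Absolute difference: {absolute_diff * 100:.1f} percentage points")
print(f"Relative uplift: {relative_uplift:.0f}%")
print(f"Incremental conversions: {incremental:.0f}")
```

Because both groups are the same size here, the incremental conversions also come out as a simple 300 − 250 = 50; the formula above generalizes to unequal group sizes.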
5. Triangulate your data for a holistic overview of your results
All mature businesses have baseline sales, meaning they will generate revenue even without running any marketing campaigns. This concept can be difficult for some marketers to accept, but it is an undeniable reality.
When it comes to marketing measurement, there is no definitive ground level of sales. The true incremental impact of a marketing channel would be the actual revenue minus the revenue in an alternative universe where everything is identical, except the brand didn’t run marketing on that channel (or didn’t run the campaign).
Since alternative universes don’t exist (or at least we have to go with that assumption until proven otherwise), we have to rely on statistical modeling.
While modeling provides valuable insights and can increase our confidence in the results, it doesn't offer absolute certainty. This is why triangulation (using multiple data sources and methods to validate findings) is so important. It brings you that much closer to exactitude.
Step 1: Triangulate findings
- Use data from both the incrementality test and MMM as a cross-check.
- If the lift is similar across both, your results are likely accurate.
- If there are discrepancies, analyze variables and conditions in each test to determine any reasons for variations.
Step 2: Combine data for holistic insights
- Blend results from MMM (long-term, multi-channel view) and MTA (attributions) with incrementality test insights (specific campaign impact).
6. Make decisions based on results
Now, you can produce your data narratives based on your triangulated data and make data-informed decisions about future campaigns.
Consider expanding or maintaining the campaign if the test shows a strong incremental lift. For minimal impact, experiment with creative, targeting or timing adjustments.
Then, implement your findings in future campaigns and monitor them closely. You can use your insights from successful tests to inform similar future efforts.
Refine your approach based on results to continuously optimize your marketing strategy.
This structured approach allows you to systematically test, validate and optimize your campaigns, leading to more effective marketing strategy decisions.
Common pitfalls in incrementality
It all sounds too good to be true, right? Well, there are some common pitfalls to avoid when running incrementality tests as part of your marketing analysis. Here are the ones we'd advise you to watch out for.
1. Misinterpreting results
Incrementality tests show causal lift, but if the experiment isn’t designed correctly, results can be misleading. For example, testing too small a sample size, or not accounting for seasonality, can exaggerate or understate the true impact.
2. Overly narrow scope
Running incrementality tests on single channels or short campaigns gives useful but limited insights. Without combining results across channels or using models like MMM, marketers risk making decisions in silos.
3. Short-term bias
Incrementality often focuses on immediate lift, which can undervalue campaigns with long-term brand impact (like awareness or upper-funnel media). This creates a bias toward tactics with fast, measurable outcomes.
4. Operational complexity
Designing, executing, and analyzing incrementality tests can be resource-intensive. If the process requires constant engineering or analyst support, it may slow down decision-making and frustrate teams.
5. Ignoring attribution context
Incrementality tells you what changed because of marketing, but not which touchpoints along the journey influenced the outcome. Without combining incrementality with attribution or MMM, you can miss the bigger picture.
6. False sense of certainty
It’s easy to treat test outcomes as absolute. But lift can vary across audiences, geographies, or time periods. Treating one test as a universal truth risks oversimplifying the complexity of marketing impact.
Data that paints the whole picture
Stop wasting your budget on ads that aren’t moving the needle and start making data-informed decisions that show the true value of your marketing. Use incrementality measurement as part of your triangulation process to cut through the noise of overinflated metrics and see what’s really driving your results.