Incrementality Testing
An experimental approach that measures the true causal impact of a marketing activity by comparing outcomes between a group exposed to the marketing and a control group that was not, isolating the genuine lift beyond what would have happened organically.
Also known as: lift testing, causal measurement, incrementality measurement
Why It Matters
Incrementality testing answers the hardest question in marketing: "Would this conversion have happened anyway, even without our marketing?" Attribution models can tell you which touchpoints were present in a conversion path, but they cannot tell you which ones genuinely caused the conversion versus merely being correlated with it.
This distinction has massive budget implications. Retargeting ads might show excellent ROAS under any attribution model, but incrementality testing often reveals that 60-80% of retargeted users would have converted without the ad. Branded search might appear to drive huge revenue, but incrementality tests frequently show that most of those searchers were already committed to purchasing.
Incrementality testing provides the only truly causal measure of marketing effectiveness. It uses controlled experiments (geo-holdouts, user-level holdouts, or matched markets) to create a counterfactual: what would have happened without this marketing activity? The difference between test and control is the true incremental impact.
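The test-versus-control arithmetic above can be sketched in a few lines. This is a minimal illustration with made-up numbers, not data from any real campaign; the function name and inputs are ours for the example.

```python
# Minimal sketch: computing incremental lift from a test/control holdout.
# All numbers below are illustrative.

def incremental_lift(test_conversions, test_users, control_conversions, control_users):
    """Return (incremental conversion rate, incrementality share)."""
    test_rate = test_conversions / test_users
    control_rate = control_conversions / control_users
    incremental_rate = test_rate - control_rate   # the causal lift vs. the counterfactual
    incrementality = incremental_rate / test_rate # share of test conversions that are truly incremental
    return incremental_rate, incrementality

rate, share = incremental_lift(
    test_conversions=500, test_users=10_000,       # exposed group: 5.0% convert
    control_conversions=350, control_users=10_000, # holdout group: 3.5% convert
)
print(f"incremental rate: {rate:.1%}, incrementality: {share:.0%}")
```

Here the control group's 3.5% conversion rate is the counterfactual: only the 1.5-point gap (30% of the exposed group's conversions) is attributable to the marketing activity.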
Industry Applications
A DTC clothing brand runs a geo-holdout test on Facebook advertising, pausing ads in 5 matched markets for 4 weeks. The test reveals that only 35% of Facebook-attributed revenue is truly incremental - the remaining 65% would have occurred through organic search and direct visits. This shifts $500K annually from Facebook to channels with higher incrementality.
A B2B software company runs a user-level holdout test on their retargeting program, withholding ads from 20% of eligible users. The test shows only 20% incrementality for retargeting (most users were already in active sales conversations), leading them to redirect budget toward top-of-funnel content that shows 80% incrementality in a subsequent test.
How to Track in KISSmetrics
Design incrementality tests by creating matched control groups that do not receive the marketing activity being tested. Use KISSmetrics to track conversion outcomes for both test and control groups, ensuring consistent measurement. For geo-based tests, use KISSmetrics geographic segmentation to compare conversion rates between exposed and holdout markets.
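For user-level tests, one common way to build the holdout group is to hash each user ID into a bucket, so the same user always lands in the same group across sessions and devices. The sketch below is a generic pattern, not a KISSmetrics API; the 20% holdout fraction and the salt string are illustrative choices.

```python
# Hedged sketch: deterministic holdout assignment by hashing the user ID.
# The salt names the experiment so different tests split users independently.
import hashlib

def assign_group(user_id: str, holdout_pct: float = 0.20, salt: str = "retargeting-test") -> str:
    """Map the user ID to [0, 1] and assign 'holdout' or 'test' deterministically."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # first 32 bits of the hash, scaled to [0, 1]
    return "holdout" if bucket < holdout_pct else "test"

# The same user always gets the same assignment:
assert assign_group("user-123") == assign_group("user-123")
```

You would then record each user's group as a property alongside their tracked events, and compare conversion rates between the two groups at the end of the test window.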
Common Mistakes
- Running incrementality tests for too short a period to capture the full conversion cycle
- Not properly randomizing or matching test and control groups, which introduces selection bias
- Testing low-spend channels first instead of high-spend channels where incrementality insights have the biggest budget impact
- Treating incrementality results as permanent - incremental impact changes as market conditions, creative, and targeting evolve
- Confusing incrementality with attribution - they measure different things and should complement each other
Pro Tips
- Test your highest-spend channels first because even small incrementality corrections on large budgets produce significant savings or reallocation opportunities
- Use geo-based holdouts for channels where user-level targeting is difficult (TV, radio, billboards)
- Run incrementality tests quarterly on major channels because incremental impact changes over time
- Combine incrementality testing with attribution modeling: use incrementality to calibrate your attribution model's credit allocation
- Calculate the incremental cost per conversion (total spend / incremental conversions) to get the true cost of each marketing-driven conversion
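The last tip's formula is worth seeing with numbers, because incremental cost per conversion can be several times the attributed cost. The figures below are illustrative, echoing the ~35% incrementality from the DTC example earlier.

```python
# Incremental cost per conversion: total spend / incremental conversions.
# All inputs are illustrative.

def incremental_cpa(total_spend, attributed_conversions, incrementality):
    """True cost of each marketing-driven conversion."""
    incremental_conversions = attributed_conversions * incrementality
    return total_spend / incremental_conversions

# $50,000 spend, 1,000 attributed conversions, but only 35% truly incremental:
# attributed CPA looks like $50, while the incremental CPA is ~$142.86.
print(round(incremental_cpa(50_000, 1_000, 0.35), 2))
```

In this scenario the channel's real cost per conversion is nearly three times what attribution reporting suggests, which is exactly the kind of gap that drives budget reallocation.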
Related Terms
Attribution Model
A set of rules or algorithms that determine how credit for conversions and revenue is assigned to the marketing touchpoints in a customer's journey, shaping how channel ROI is measured and budget is allocated.
Holdout Group
A randomly selected subset of users permanently excluded from a specific change, feature, or experiment, used to measure the long-term incremental impact of that change by comparing their outcomes to exposed users.
Media Mix Model
A statistical modeling approach that uses regression analysis on historical data to estimate the impact of each marketing channel on business outcomes, accounting for external factors like seasonality, pricing, and competitive activity.
Data-Driven Attribution
An attribution model that uses machine learning algorithms to analyze actual conversion paths and assign credit to touchpoints based on their measured impact on conversion probability, rather than using predetermined rules.
Channel Attribution
The process of assigning conversion credit to specific marketing channels (paid search, email, social media, organic search, etc.) to evaluate each channel's contribution to revenue and guide budget allocation decisions.
See Incrementality Testing in action
KISSmetrics tracks every user across sessions and devices so you can measure what matters. Start free - no credit card required.