In this article, we’ll break down the most common traps where advertising fails to be incremental, and how to fix each one.

Picture this: It’s the first Monday of the month. You’re sitting in the conference room, surrounded by department heads. The KPI dashboard is up on the big screen, presenting last month’s marketing performance.
The dashboard looks great — ROAS is up, CPAs are down, and conversions are climbing. Everyone’s smiling. But then someone asks the question that makes the room go quiet:
“If we turned off all our ads tomorrow, would we actually lose sales?”
Silence.
It’s the kind of question that keeps marketers awake at night. And honestly? Most are too scared to find out the answer. Because deep down, we know: some of those “results” might just be an illusion. Not because the data is wrong, but because it’s telling the wrong story.
Here’s what we don’t like to admit: A significant portion of what we call “successful” advertising isn’t actually driving new business. It’s just taking credit for sales that were going to happen anyway.
This is the core challenge of measuring incremental sales: the sales that occurred only because of your campaign.

Why False Incrementality Occurs #
Just because a sale is attributed to an ad doesn’t mean the ad caused it. Attribution models assign credit; they don’t prove causality.
Let’s be honest about something: Most attribution systems, especially the ones built by advertising platforms, have a conflict of interest. They’re incentivized to show that their ads work, not to give you an unbiased view of what’s really driving your business.
The advertiser is also providing the data on how the advertising is performing […] and that’s always been a little bit suspect.
These systems are algorithmically trained to prove their own value, not question it.
The result? We end up with a distorted view that makes retargeting, branded search, and smart bidding look like performance miracles — when, in fact, they’re often just cleaning up after the heavy lifting has already been done by upper-funnel efforts, organic discovery, or customer intent.
Let’s make this concrete with a story.
Imagine you hire a salesperson named Meta (any resemblance to real ad platforms is, of course, purely coincidental 😉).
You give Meta a promo code to track their performance. After a week, the numbers look incredible — Meta’s code is used by 100 customers a day!
But your total daily customers? Still 220, exactly like before.
When you investigate, you find Meta standing near the checkout counter, handing out their code to people already in line to buy.
Mr Meta wasn’t driving new business. He was just taking credit for sales that were already happening.
That’s what a lot of digital advertising is doing today: showing up at the finish line and claiming the win. Attribution systems typically reward:
- Bottom-funnel tactics that catch users already ready to convert.
- Last-click paths that ignore the multi-touch reality of decision-making.
- View-through conversions that require zero actual engagement.
That inflated sense of performance masks the true impact (or lack thereof) of your ad spend. You’re misled into thinking your paid media is more incremental than it is, and you keep pouring money into what might be a low-impact tactic.
That’s the essence of an incrementality illusion.
The Traps of False Incrementality #
Let’s break down the most common illusions that trick marketers into overestimating their campaigns’ true impact:
1. The Retargeting Trap #
Chasing People Already Running Toward You

→ The Lie: “Our retargeting ROAS is 15x!”
→ The Truth: You’re showing 47 ads to 800 people who’d convert with zero ads.
Retargeting campaigns are the darling of performance marketers everywhere. Conversion rates of 3-5%, CPAs that make your boss happy, and ROAS numbers that look like phone numbers.
But here’s the uncomfortable question: Who exactly are you retargeting? Retargeting, by definition, targets people who have already shown interest. These are people who’ve already visited your website, browsed your products, maybe even added items to their cart. They’re already in the buying mindset. Many of them were going to complete their purchase anyway — they just needed time to think it over, check their bank account, or sleep on the decision.
Your retargeting ad didn’t persuade them to buy; it just reminded them to finish what they started. It’s the difference between convincing someone to go to a restaurant and reminding them they made a reservation.
→ The Fix: Set frequency caps at 3-5 impressions per week and monitor your reach-to-impression ratio. Test different retargeting windows (1 day vs 7 days vs 30 days) and measure incremental lift at each stage. Focus on retargeting audiences that truly need the extra nudge, not everyone who visited your homepage once.
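As a sanity check, here’s a minimal sketch (plain Python, with made-up numbers) of the two checks above: average frequency against a cap, and incremental lift per retargeting window measured against a same-sized holdout. The window labels and figures are illustrative placeholders, not benchmarks.

```python
# Minimal sketch: flag over-frequency retargeting and estimate lift per window.
# The numbers and window labels are illustrative placeholders, not real benchmarks.

windows = {
    # window: (impressions, reach, conversions_exposed, conversions_holdout)
    "1d":  (120_000, 18_000, 540, 500),
    "7d":  (300_000, 40_000, 900, 870),
    "30d": (650_000, 70_000, 1_100, 1_090),
}

FREQUENCY_CAP = 5  # impressions per user per week you consider acceptable

for window, (impressions, reach, conv_exposed, conv_holdout) in windows.items():
    frequency = impressions / reach
    # Incremental lift: how many extra conversions the exposed group produced
    # relative to a same-sized holdout that saw no retargeting ads.
    lift = (conv_exposed - conv_holdout) / conv_holdout
    flag = "over cap!" if frequency > FREQUENCY_CAP else "ok"
    print(f"{window}: frequency {frequency:.1f} ({flag}), incremental lift {lift:.1%}")
```

If the 30-day window shows barely more lift than the 1-day window, you’re paying for reminders, not persuasion.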
2. The Branded Search Trap #
Paying for Traffic You Already Own

→ The Lie: “Our branded CPA is just $2!”
→ The Truth: Those users would’ve clicked your free organic listing.
Someone searches for your company name, clicks your ad, and converts. The metrics look fantastic, and you’re celebrating those “cheap” conversions; it feels like easy money.
But let’s think about this logically. Someone who searches for your brand already knows who you are. They’ve already decided they want to engage with your business. If your paid ad wasn’t there, what would they do? They’d probably click on your organic listing, which sits right below where your ad was.

This isn’t just theoretical. eBay proved this with a controlled experiment where they paused branded search ads in certain markets. The result? No measurable impact on overall sales. People just clicked the organic listings instead.
→ The Fix: Pause your branded campaigns in select markets for 2-4 weeks and measure the impact on total conversions, not just paid conversions. If total sales stay flat, reallocate that budget to actual customer acquisition.
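A rough way to read such a pause test is a difference-in-differences comparison: how did total conversions change in the paused markets versus the markets where branded ads kept running? A minimal sketch, with hypothetical market names and numbers:

```python
# Minimal sketch: did pausing branded search ads in test markets change TOTAL conversions?
# Daily totals per market are hypothetical; pull the real ones from your analytics tool.
from statistics import mean

# Total daily conversions (paid + organic) during the pause period.
paused_markets  = {"Denver": [410, 395, 402, 420], "Austin": [310, 300, 325, 315]}
control_markets = {"Dallas": [520, 505, 515, 530], "Phoenix": [280, 290, 275, 285]}

# Same markets, same weekdays, in the pre-pause baseline period.
baseline_paused  = {"Denver": [405, 400, 410, 415], "Austin": [305, 310, 320, 312]}
baseline_control = {"Dallas": [500, 495, 505, 510], "Phoenix": [270, 280, 268, 275]}

def avg(group):
    return mean(v for days in group.values() for v in days)

# Difference-in-differences: change in paused markets minus change in control markets.
change_paused  = avg(paused_markets)  / avg(baseline_paused)  - 1
change_control = avg(control_markets) / avg(baseline_control) - 1
print(f"Paused markets: {change_paused:+.1%}, control markets: {change_control:+.1%}")
print(f"Estimated effect of pausing branded ads: {change_paused - change_control:+.1%}")
```

If the estimated effect hovers around zero, your branded budget is buying clicks you already owned.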
3. The Existing Customer Trap #
Targeting Yesterday’s Buyers

→ The Lie: “Our prospecting campaign is scaling!”
→ The Truth: 40% of “new customers” bought from you last month.
Here’s a scenario that happens more often than we’d like to admit: Your “new customer acquisition” campaign is actually spending 40% of its budget targeting people who bought from you recently.
Most advertising platforms include existing customers in their targeting by default. Meta’s Advantage+ Shopping campaigns, Google’s Performance Max, and nearly every other “smart” campaign type do this unless you explicitly tell them not to.
Meta Advantage+ Shopping campaigns dynamically optimize your audience but include current customers by default unless you choose to cap their reach.
Google Performance Max targets both new and returning customers, and it will even bid on brand terms unless you explicitly tell the system to focus only on new customers and non-branded terms.
The result? You’re paying to show ads to people who were already planning to buy from you again — whether because of a loyalty email, an SMS offer, or simple habit. You get the conversion, the platform takes the credit, and you pay the bill for a sale that was going to happen anyway.
→ The Fix: Exclude past purchasers from prospecting. Every. Single. Time. Set up proper audience exclusions and monitor customer overlap religiously.
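If you want to quantify how bad the leak is, one simple check is to intersect the platform’s converter list with your CRM. A minimal sketch with placeholder emails (in practice you’d hash the identifiers before comparing):

```python
# Minimal sketch: how much of your "prospecting" spend is hitting existing customers?
# The email lists here are placeholders; in practice, hash and export them from your CRM
# and from the ad platform's converter report.

existing_customers = {"a@example.com", "b@example.com", "c@example.com"}
prospecting_converters = {"b@example.com", "c@example.com", "d@example.com", "e@example.com"}

overlap = prospecting_converters & existing_customers
overlap_rate = len(overlap) / len(prospecting_converters)

print(f"{overlap_rate:.0%} of 'new customer' conversions were existing customers")
# Anything well above zero means your exclusion audiences aren't actually excluding.
```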
4. The MTA Trap #
Stealing from Other Marketing

→ The Lie: “Meta drove 1,000 sales this month!”
→ The Truth: It just got credit after your other channels did the heavy lifting.
Multi-channel marketing creates a credit war. You send out a compelling email newsletter on Monday morning. Sales spike throughout the day — but many customers click through a Meta ad before purchasing, so Meta gets full credit for the conversion.
The result? The platform looks like the hero — you believe the ad caused the sale, when in reality, your email did the real work.
→ The Fix: Treat attribution as a lens of opinion — not a single source of truth. Compare multiple models side by side to uncover the real dynamics behind your conversions. Use post-purchase surveys (“How did you hear about us?”). Match spikes in organic/search traffic to offline campaigns. Look for patterns your attribution data can’t explain.
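To see how much the story changes by model, you can score the same customer journeys under last-click, first-click, and linear attribution. A minimal sketch with hypothetical journeys:

```python
# Minimal sketch: the same journeys scored under three attribution models.
# Journeys are hypothetical ordered lists of touchpoints ending in a purchase.
from collections import defaultdict

journeys = [
    ["email", "meta_ad", "branded_search"],
    ["email", "meta_ad"],
    ["organic", "meta_ad"],
    ["email"],
]

def attribute(journeys, model):
    credit = defaultdict(float)
    for path in journeys:
        if model == "last_click":
            credit[path[-1]] += 1
        elif model == "first_click":
            credit[path[0]] += 1
        elif model == "linear":
            for touch in path:
                credit[touch] += 1 / len(path)
    return dict(credit)

for model in ("last_click", "first_click", "linear"):
    print(model, attribute(journeys, model))
# If a channel only looks good under last-click, treat its "performance" with suspicion.
```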
5. The Algorithmic Optimization Trap #
AI Just Finds Easy Targets, Not New Customers

→ The Lie: “Our AI-optimized campaigns are delivering amazing results!”
→ The Truth: The algorithm is just targeting people who were already going to buy.
Here’s something most marketers don’t realize: When you tell Meta or Google to “optimize for conversions,” their algorithms don’t try to change minds or create demand. They try to find people who are most likely to convert right now.
And who’s most likely to convert? People who are already planning to buy. Frequent customers. Users who’ve been researching your product for weeks. People whose friends just posted about your brand on social media.
The algorithm becomes incredibly efficient at finding these low-hanging fruit. Your conversion rates soar, your CPAs plummet, and everyone thinks you’ve cracked the code. But you haven’t expanded your customer base or changed anyone’s behavior — you’ve just gotten really good at predicting who was going to buy anyway.
→ The Fix: Compare smart bidding performance against manual bidding with the same budget allocation. Use audience exclusions to prevent algorithms from only targeting your warmest prospects. Monitor new customer acquisition rates, not just total conversions. If algorithms are working, you should see genuine audience expansion, not just higher efficiency with the same pool of buyers.
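One way to keep the algorithm honest is to track the new-customer share and cost per new customer for each bidding setup, not just total conversions. A minimal sketch with illustrative figures:

```python
# Minimal sketch: compare smart vs. manual bidding on NEW-customer acquisition,
# not just total conversions. All figures are illustrative placeholders.

campaigns = {
    "smart_bidding":  {"conversions": 1_000, "new_customers": 320, "spend": 20_000},
    "manual_bidding": {"conversions":   700, "new_customers": 330, "spend": 20_000},
}

for name, c in campaigns.items():
    new_share = c["new_customers"] / c["conversions"]
    cac = c["spend"] / c["new_customers"]  # cost per genuinely new customer
    print(f"{name}: {new_share:.0%} new customers, ${cac:,.0f} per new customer")
# Higher "efficiency" with a lower new-customer share usually means the algorithm
# is harvesting warm demand, not expanding your audience.
```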
6. The Channel Overlap Trap #
When Every Campaign Takes Credit

→ The Lie: “All channels’ ROAS is crushing it!”
→ The Truth: You’re counting the same conversion five times across different channels.
Picture this: Sarah sees your YouTube ad on Monday, clicks a Facebook ad on Wednesday, searches for your brand on Friday, and finally converts through an email link on Sunday. How many campaigns claim credit for Sarah’s purchase?
All of them.
YouTube reports a view-through conversion. Facebook logs a click-through conversion. Google Ads attributes it to branded search. Email marketing adds it to their conversion tally. Your affiliate program might even claim a piece.
One customer, one purchase, five different reports showing success. Without proper deduplication, every channel looks like it’s driving incremental value when they’re all just fighting over credit for the same inevitable sale.
→ The Fix: Implement first-party attribution that deduplicates conversions across channels. Use customer journey analysis to understand true touchpoint influence, not just last-click or first-click attribution.
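A quick way to size the double counting is to compare the conversions each platform claims against the number of unique orders behind them. A minimal sketch with hypothetical order IDs:

```python
# Minimal sketch: deduplicate conversions that several platforms claim for the same order.
# Platform reports here are hypothetical lists of (order_id, channel) pairs.

claimed = [
    ("order-1001", "youtube"), ("order-1001", "facebook"), ("order-1001", "google_ads"),
    ("order-1001", "email"),   ("order-1002", "facebook"), ("order-1002", "email"),
    ("order-1003", "google_ads"),
]

total_claimed = len(claimed)
unique_orders = {order_id for order_id, _ in claimed}

print(f"Platforms claim {total_claimed} conversions for {len(unique_orders)} actual orders")
print(f"Inflation factor: {total_claimed / len(unique_orders):.1f}x")
# Anything meaningfully above 1.0x means channels are fighting over the same sales.
```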
7. The Attribution Window Trap #
Taking Credit for Yesterday’s Influence

→ The Lie: “Our brand awareness campaign drove 500 conversions this month!”
→ The Truth: Those conversions happened 28 days after someone accidentally saw your ad for 0.3 seconds.
Attribution windows are supposed to capture the delay between ad exposure and purchase decision. But they’ve become a way for campaigns to claim credit for conversions that have nothing to do with their influence.
A user glimpses your display ad while scrolling through a news article. They don’t click, barely notice it, and forget about it immediately. Three weeks later, a friend recommends your product, they research it thoroughly, and decide to buy. But because they saw your ad within the 30-day attribution window, your display campaign gets full credit for a conversion it had zero influence on.
The longer the attribution window, the more likely you are to capture these phantom conversions. It’s like claiming credit for every sale that happens to anyone who’s ever walked past your billboard.
→ The Fix: Consider the natural purchase cycle of your product — a 30-day window might make sense for cars, but not for coffee. Shorten attribution windows for bottom-funnel campaigns (1-7 days) and analyze conversion timing patterns. If most conversions happen weeks after ad exposure, question whether your ads are actually influencing decisions.
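One useful diagnostic is the lag between exposure and conversion: if most attributed conversions land far outside a sensible window, the window is doing the work, not the ad. A minimal sketch with hypothetical lags:

```python
# Minimal sketch: how long after ad exposure do attributed conversions actually happen?
# Lags (in days) are hypothetical; compute them from your impression and order timestamps.

lags_in_days = [0, 0, 1, 1, 2, 3, 5, 9, 14, 21, 25, 28, 29, 30]

window = 7  # the window you believe matches your product's purchase cycle
late = [lag for lag in lags_in_days if lag > window]

print(f"{len(late) / len(lags_in_days):.0%} of attributed conversions fall outside {window} days")
# A large late share means the attribution window, not the ad, is doing the "converting".
```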
8. The View-Through Trap #
When Seeing Isn’t Believing

→ The Lie: “Our conversion campaign drove 2,000 conversions this month!”
→ The Truth: 80% were view-through conversions from people who never clicked your ad.
View-through conversions aren’t inherently bad. They can signal brand awareness (your ad planted a seed that grew into a purchase), but they’re a red flag for bottom-funnel efforts. If most of your conversions are view-through (people who saw your ad but didn’t click), it suggests users may have already planned to convert and that the ad didn’t change their behavior; it just happened to be visible along the way.
Think about it logically: if your ad was truly compelling and influential, wouldn’t people click on it? When someone sees an ad for something they want, their natural response is to engage. If they’re not clicking but converting later anyway, it’s more likely they were already in market and your ad was just coincidentally in their field of vision.
The result? Your “successful” campaign might just be running ads to people who were going to buy anyway, creating phantom incrementality through passive exposure.
→ The Fix: Analyze your click-through versus view-through conversion rates. For bottom-funnel campaigns, you want to see mostly click-through conversions. If view-through dominates, question whether your ads are truly driving behavior change.
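The check itself is simple arithmetic; a minimal sketch with placeholder counts:

```python
# Minimal sketch: what share of a bottom-funnel campaign's conversions are view-through?
# The counts are illustrative placeholders from a hypothetical platform export.

click_through_conversions = 380
view_through_conversions = 1_620

total = click_through_conversions + view_through_conversions
vt_share = view_through_conversions / total

print(f"View-through share: {vt_share:.0%} of {total} conversions")
# For conversion-objective campaigns, a view-through share this high is a red flag
# that the ads were merely visible, not persuasive.
```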
9. The Platform Cannibalization Trap #
New Channel, Same Customers

→ The Lie: “TikTok is driving incredible incremental growth!”
→ The Truth: You’re just moving existing customers from one platform to another.
Here’s a scenario that’s becoming increasingly common: A brand launches on TikTok and sees impressive conversion lift studies. The platform reports significant incremental sales, and everyone celebrates the successful expansion into a new channel.
But dig deeper, and you discover something troubling: TikTok isn’t actually finding new customers—it’s intercepting existing ones. The same people who used to discover your products through Instagram or Google are now finding you on TikTok instead. Your Amazon sales from previous customers haven’t grown; they’ve just shifted attribution.
One advertiser shared their experience with TikTok’s Conversion Lift Study, which initially reported excellent incremental results. But the “incrementality” couldn’t be sustained. TikTok had picked off the easy conversions—new for TikTok, but not new for the brand. The platform was simply competing with other channels for the same pool of ready-to-buy customers.
This is channel cannibalization disguised as growth. Your total customer base stays the same, but now you’re paying multiple platforms to fight over the same people. It’s like opening a second store across the street from your first one and celebrating when it gets customers—without noticing your original store’s traffic has dropped.
→ The Fix: Monitor total customer acquisition across all channels, not just individual platform performance. Track customer overlap between platforms and measure true business growth, not just channel-specific metrics.
10. The Artificial Causality Trap #
Attribution in a Costume

→ The Lie: “Amazon’s Marketing Cloud shows a 71% lift in conversion!”
→ The Truth: Lift? Sure. Confidence intervals? Optional. Cross-channel causality? Not a chance.
After years of advertisers demanding incrementality measurements, the platforms have finally responded. Amazon now offers incrementality tests through Marketing Cloud. Meta launched “incremental attribution”. Google’s Performance Max touts “New Customer” metrics. Everyone’s celebrating: finally, we have “incrementality testing”!
But here’s the uncomfortable reality: these tests compare an “exposed” group to a so-called “control” group — but both groups still live inside the same platform.
The exposed group? These users are often prequalified by intent — the highest-intent users the algorithm can find. Of course they convert more. They were likely to convert anyway.
The control group? These users aren’t “unexposed”; they’re just not served ads on that platform.
Meta doesn’t account for that email that dropped the day before. Google ignores the influencer collab that boosted your brand searches. Amazon can’t see the TikTok that triggered product discovery. But if a customer happens to click the ad on their way to purchase? That gets counted as “incremental.”
The result? Inflated lift. Artificial causality. A difference that might’ve happened anyway.
→ The Fix: Run true holdout tests — geo-split experiments, customer suppressions, time-based pauses. Then look at real business outcomes: new customer revenue, LTV, and profit. If “incrementality” doesn’t show up in your P&L, it’s not real — it’s just attribution in a costume.
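Here’s what a customer-suppression holdout can look like in its simplest form: randomly withhold ads from part of your list, then compare revenue per customer between the two groups. A minimal sketch with placeholder data (the random revenue here just stands in for your real order data):

```python
# Minimal sketch of a customer-suppression holdout: randomly withhold ads from part of
# your customer list, then compare revenue per customer. IDs and revenue are placeholders.
import random

random.seed(42)
customers = [f"cust-{i}" for i in range(10_000)]
random.shuffle(customers)

holdout = set(customers[:2_000])   # suppressed from all ads for the test period
treated = set(customers[2_000:])   # eligible for ads as usual

# In a real test, revenue_by_customer comes from your order data for the test period;
# here it is randomly generated purely so the sketch runs end to end.
revenue_by_customer = {c: random.choice([0, 0, 0, 50, 120]) for c in customers}

def revenue_per_customer(group):
    return sum(revenue_by_customer[c] for c in group) / len(group)

lift = revenue_per_customer(treated) / revenue_per_customer(holdout) - 1
print(f"Incremental revenue per customer from advertising: {lift:+.1%}")
# If this number is near zero, the "lift" your platforms report isn't showing up in the P&L.
```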
Proving True Incrementality #
To truly understand whether your marketing is working, you need more than surface metrics. You need to separate causation from correlation. That’s where incrementality testing comes in.
The only way to know the truth is a controlled test:
- Geo holdouts (turn off ads in some regions)
- Channel switch-offs (pause a platform entirely)
- Conversion lift studies (controlled platform experiments)
These prove whether sales would’ve happened without the ads.
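Whichever test you run, the arithmetic at the end is the same: subtract the holdout from the exposed group and put the result against spend. A minimal sketch with illustrative numbers:

```python
# Minimal sketch: turn a holdout result into incremental ROAS (iROAS).
# All inputs are illustrative; plug in your own test results and spend.

conversions_test    = 5_400   # conversions in regions/users where ads ran
conversions_control = 5_000   # conversions in a comparable holdout (scaled to the same size)
revenue_per_order   = 80
ad_spend            = 25_000

incremental_orders  = conversions_test - conversions_control
incremental_revenue = incremental_orders * revenue_per_order

print(f"Incremental lift: {incremental_orders / conversions_control:.1%}")
print(f"iROAS: {incremental_revenue / ad_spend:.2f}")
# Compare this with the platform-reported ROAS: the gap is the credit taken for sales
# that would have happened anyway.
```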
The Uncomfortable Truth #
We’ve built entire careers on metrics that might be measuring correlation instead of causation.
However, the most important question in advertising isn’t “Did people buy after seeing my ad?” It’s “Did people buy because they saw my ad?”
But here’s the thing — the marketers who face this reality head-on, who become incrementality detectives, who demand proof instead of pretty dashboards? They’re the ones who will thrive in a world where every dollar needs to justify itself.
As Andrew McInnes, former VP of Marketing at Uber, puts it: “The biggest challenge in marketing today isn’t finding customers — it’s proving that we actually influenced them to buy”.