Oct 14, 2025
Incremental Attribution in Meta: What We’re Learning Right Now
The Attribution Problem No One’s Solved (Yet)
What if you could stop guessing which conversions were actually driven by your Meta ads - and start knowing?
Many brands are spending heavily on Meta but struggling to connect the dots between impressions and real, incremental performance.
Attribution is still messy. Even post-iOS 14.5, Meta’s default reporting credits conversions that may have happened anyway. Someone sees your ad, doesn’t click, and buys later - and your ad gets the credit.
That leads to some costly misreads:
You’re optimizing toward inflated numbers: Campaigns look like they’re winning, but the reported ROAS is padded with conversions that would’ve happened without paid support.
Your creative tests get misleading feedback: Ads that didn’t truly influence behavior still show strong results, while genuinely high-performing creatives don’t get the attention they deserve.
You’re scaling campaigns based on false positives: Budgets get pushed into ad sets that appear to be working, but aren't truly driving net-new outcomes. The result? Rising spend and flat growth.
The numbers show what happened. They don’t tell you if your ads made it happen.
That’s what Incremental Attribution, Meta’s newest attribution setting, aims to solve.
Sounds promising, right? But what does it actually tell us in practice?
We’ve been deep in the data. Here’s what we’re learning, and how to use it without blowing up your performance.
What Is Incremental Attribution (in Plain English)?
Most attribution in Meta shows every conversion that happens after someone sees or clicks your ad - even if they were already planning to buy.
That’s the core problem.
Incremental Attribution is Meta’s new setting inside Ads Manager that redefines how conversions are measured and reported.
Here’s the simplest way to think about it:
It predicts which conversions wouldn’t have happened without your ad - and only counts those.
Instead of assuming every post-click or post-view action was influenced by your campaign, Meta uses machine learning to isolate the ones it believes were actually caused by the ad.
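A toy sketch of that logic is below. The numbers and the flat baseline subtraction are made up for illustration - Meta’s actual model is a machine learning system, not simple arithmetic:

```python
# Toy illustration of the idea behind incremental attribution.
# This is NOT Meta's model - the real system uses machine learning
# and experiment-calibrated estimates. All numbers are hypothetical.

attributed_conversions = 1_000   # everything standard attribution credits
baseline_conversions = 350       # estimate of sales that would have
                                 # happened with no ad exposure at all

incremental_conversions = attributed_conversions - baseline_conversions
print(incremental_conversions)   # 650 - only these get counted
```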
So what makes it different?
It’s less about correlation, and more about causation.
It avoids crediting ads just because they were seen. The focus is on impact.
It gives you a cleaner, more honest read on what your campaigns are really doing.
In real terms, that means:
No more inflated conversion counts.
Cleaner signals on which campaigns are adding value.
Smarter optimization based on actual lift - not noise.
Yes, conversion numbers might look smaller, but they’ll be real. And that’s what growth strategy is built on.
The theory is solid, but can it work in practice?
While it’s still rolling out, we’ve already been testing it across several health and wellness brands.
What We’re Measuring
We’ve been rolling this out with a few high-volume clients in health and wellness - brands where understanding true ad impact matters just as much as driving conversions.
To get a clear picture of how Incremental Attribution stacks up, we’re tracking performance across both attribution models (standard vs. incremental) using these metrics - a quick computation sketch follows the list:
ROAS (Incremental vs. Standard): To understand how much of reported return is ad-driven vs. inflated by attribution bias.
CPA & Cost per Incremental Result: To assess the efficiency of campaigns in driving true, net-new actions.
MER (Blended): To validate whether Meta’s reported results actually align with real revenue on the backend.
Net Margin Impact: We’re watching not just how many conversions clients get, but also how profitable those conversions are under each attribution model. Lower volume with healthier margins under incremental can easily beat inflated ROAS from standard campaigns that rely on discounts or returning buyers.
New Customer %: We’re watching this closely to see where the lift is really coming from. Under standard attribution, repeat buyers can inflate the numbers. Under incremental, a higher new customer rate is a strong sign your campaign is actually driving growth, not just retargeting people who were coming anyway.
Post-Purchase Survey Attribution: This gives us a reality-check against Meta’s model. If Meta says the ad caused the sale but the customer says they found you via a podcast, an influencer, or word-of-mouth, we take a closer look. Especially under incremental attribution, we want to be sure the conversions being credited actually feel ad-driven from the customer’s point of view.
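Here’s a minimal sketch of that side-by-side scorecard, assuming hypothetical figures pulled from Ads Manager and a Shopify backend - none of these numbers come from real accounts:

```python
# Minimal sketch of the per-campaign scorecard we keep under both
# attribution models. All field names and numbers are hypothetical.

spend = 10_000.0
standard_revenue = 32_000.0      # revenue credited by standard attribution
incremental_revenue = 21_000.0   # revenue credited by the incremental setting
standard_results = 800           # conversions under standard attribution
incremental_results = 500        # conversions under incremental attribution
total_revenue = 45_000.0         # backend (e.g. Shopify) revenue, all channels
total_marketing_spend = 14_000.0 # blended spend across all channels
new_customers = 320              # first-time buyers among incremental results

standard_roas = standard_revenue / spend             # 3.2
incremental_roas = incremental_revenue / spend       # 2.1
cpa = spend / standard_results                       # $12.50
cost_per_incremental = spend / incremental_results   # $20.00
blended_mer = total_revenue / total_marketing_spend  # ~3.2
new_customer_pct = new_customers / incremental_results  # 64%

# Attribution inflation: the share of reported ROAS that the
# incremental model does not back up.
inflation = 1 - incremental_roas / standard_roas     # ~34%
```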
Early Insights From the Test
It’s not a silver bullet for instant ROAS gains. But it will help you spot when reported performance is inflated. That’s crucial for making smarter, more confident budget decisions.
After a few weeks of testing, here’s what we’re starting to see in clients’ accounts:
ROAS is lower, but more reliable. Incremental ROAS tends to be 20-40% lower than standard. It’s not bad news. It’s just a more honest view of what your ads are actually contributing.
Creative performance rankings are shifting. Some “top-performing” ads under standard attribution aren’t pulling as much incremental weight. Meanwhile, ads that didn’t stand out before are showing true lift.
Stronger alignment with actual business data. Incremental reporting more closely mirrors Shopify revenue and blended MER, which gives us confidence we’re scaling the right things.
Longer ramp time - but more stable efficiency. With less inflated signal, the learning phase takes longer. But once campaigns lock in, they hold their performance more consistently.
How to pressure-test the read:
If you want to validate incremental lift versus standard attribution, run a holdout test - a controlled comparison that separates true uplift from re-credited conversions. Outcomes typically land in one of three buckets:
Performed worse than standard (incremental setting under-delivers)
Performed about the same (no material difference)
Performed better than standard (incremental setting outperforms)
These holdout tests are the next logical step to validate whether the lift we’re observing holds under controlled conditions. The basic math is sketched below.
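For illustration, here is the lift arithmetic behind a holdout test, with hypothetical audience sizes and conversion counts:

```python
# Sketch of holdout-test lift math. A holdout is a randomly assigned
# slice of the audience that is withheld from ads. Numbers are hypothetical.

exposed_users, exposed_conversions = 100_000, 2_000   # treatment group
holdout_users, holdout_conversions = 100_000, 1_400   # control group

cr_exposed = exposed_conversions / exposed_users   # 2.0%
cr_holdout = holdout_conversions / holdout_users   # 1.4%

# Incremental conversions: what the ads caused beyond the organic baseline.
incremental = (cr_exposed - cr_holdout) * exposed_users   # 600
lift = (cr_exposed - cr_holdout) / cr_holdout             # ~43% relative lift

# Standard attribution would have credited all 2,000 conversions;
# the holdout says only ~600 were truly caused by the ads.
```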
If you’re running high-spend evergreen or lead-gen, Incremental Attribution is a promising option worth testing. But it isn’t a universal switch. Results vary by audience mix, creative, offer, and volume. Our recommendation: give it a shot in a controlled test and see how it performs for you - then scale if it holds up. Use your own benchmarks and validate with a holdout design where you can.
Initial Takeaways (The Good and the Gaps)
We’ve been testing long enough to start seeing patterns. Not just in the numbers, but in how Incremental Attribution behaves in real client accounts.
Here’s what’s stood out so far: what’s working, what’s promising, and where we’re still raising an eyebrow.
What we like so far:
More trustworthy signal. We’re finally seeing which campaigns are truly driving new revenue - not just piggybacking off brand demand.
Better creative decisions. With inflated data out of the way, it’s easier to identify the ads that are actually pulling weight - and direct budget to where it yields the best results.
No need for enterprise tools. For brands priced out of platforms like Measured or Rockerbox, this is a practical, built-in way to get incrementality insights.
What we’re cautious about:
It’s still a black box. You don’t get visibility into how Meta defines “incremental” - you have to trust the model.
No customization. You can’t control attribution windows or define incrementality rules.
Not ideal for low-volume accounts. Smaller brands may not have the conversion volume to feed the model a strong signal, so results can be noisy.
Why This Matters for CPG Brands Specifically
For CPG brands, especially those with lower Average Order Values (AOVs), tighter margins, and little room for wasted spend, incremental clarity is essential.
Every dollar has to work harder. When your product sells for $29 and a customer truly costs $35 to acquire, over-attribution doesn’t just flatter the numbers - it hides the fact that you’re in the red (see the worked math after this list). Incremental Attribution helps surface the campaigns that have meaningful impact, not just the ones showing up in the path to purchase.
Creative fatigue, stockouts, and promotions muddy attribution. CPG brands deal with short product cycles and high volatility. Standard attribution often misreads the signal. Incremental modeling helps cut through noise by isolating the conversions that were truly driven by paid media.
It separates the real winners from those riding brand momentum. Some ads perform well because of timing, product virality, or loyal customers. Incremental Attribution shows you which ones are actually creating demand - not just coasting on brand momentum or repeat buyers.
It brings financial discipline into performance marketing. If you’re managing media efficiency like a P&L (as you should be), incrementality lets you treat ad spend like a testable investment. It’s grounded in impact, not appearances.
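To make that margin math concrete, here is a worked version of the $29 example above. The 60% gross margin and the reported CPA are assumptions for illustration only:

```python
# Worked margin math for the hypothetical $29 product above.
# Gross margin and reported CPA are assumed figures, not client data.

aov = 29.00
gross_margin = 0.60                 # assumed product margin before ad costs
contribution = aov * gross_margin   # $17.40 available to pay for acquisition

reported_cpa = 15.00    # CPA under standard attribution (looks profitable)
true_cpa = 35.00        # cost per truly incremental customer

profit_reported = contribution - reported_cpa   # +$2.40  -> "scale it!"
profit_true = contribution - true_cpa           # -$17.60 -> deeply in the red
```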
When Should You Pay Attention to This?
This isn’t a "flip it on and walk away" feature. But for brands starting to outgrow simple attribution models, there are clear signs it’s time to explore Incremental Attribution:
You’re scaling and attribution is getting murky. Spend is up, conversions are up, but you can’t tie outcomes back to specific campaigns with confidence.
Your media efficiency is flatlining. Traffic’s rising, costs are rising, but blended revenue isn’t keeping pace. You might be over-investing in campaigns that look good under standard attribution, but aren’t actually creating lift.
You’ve tested creative angles, but don’t know what’s working. UGC, branded content, influencers - they all look equally effective under standard attribution because any conversion in the window gets counted. Incrementality helps you see which creative types are actually responsible for driving new customers.
You need to justify spend with more than surface-level metrics. If you’re reporting to a founder, investor, or CFO, “Meta says it worked” isn’t enough. Incrementality adds credibility to your media strategy.
How We’re Planning to Evolve the Test
We’re not stopping here. This is phase one of a deeper dive, and we’re already planning the next layers of testing.
Creative segmentation:
We’re breaking down performance by creative type (UGC vs. branded vs. demo) to identify which formats are actually driving incremental lift. It’s not always the ones with the flashiest ROAS.
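A minimal sketch of how that breakdown might look in pandas - the column names, creative labels, and figures are all hypothetical placeholders, not Ads Manager fields:

```python
import pandas as pd

# Hypothetical per-ad export with revenue under both attribution settings.
ads = pd.DataFrame({
    "creative_type":   ["ugc", "ugc", "branded", "branded", "demo"],
    "spend":           [4000,  3000,  5000,      2000,      1500],
    "standard_rev":    [16000, 9000,  18000,     5000,      6000],
    "incremental_rev": [12000, 7500,  9000,      2400,      5100],
})

by_type = ads.groupby("creative_type").sum(numeric_only=True)
by_type["standard_roas"] = by_type["standard_rev"] / by_type["spend"]
by_type["incremental_roas"] = by_type["incremental_rev"] / by_type["spend"]
# Share of reported ROAS the incremental model backs up - low values
# suggest a format was riding brand demand rather than creating it.
by_type["validated_share"] = by_type["incremental_roas"] / by_type["standard_roas"]
print(by_type.sort_values("validated_share", ascending=False))
```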
SKU-level attribution deep dive:
Some products attract ready-to-buy customers. We’re testing whether these high-intent SKUs are inflating standard attribution - and whether incremental modeling actually surfaces the true revenue drivers.
Landing page testing:
Conversion rate optimization might lift total performance. But does it lift incremental performance? We’re testing different landing experiences to isolate what’s improving real ad-driven impact, not just overall sales.
Cross-account, transparent case study:
This isn’t a black-box internal experiment. We’re turning it into a multi-brand, live case study: we’ll share the process and post regular updates on what’s working, what’s not, and what we’re learning in real time.
The real opportunity here goes beyond optimization. It’s about building trust by being transparent about the process, the pivots, and the imperfect moments in between, not just the wins.
Attribution strategy works best when it’s shared, not siloed. This is how performance marketing becomes more accountable and more valuable.
Final Word: Promising, But Not a Silver Bullet
Early signs are encouraging - but we’re not throwing out the playbook just yet.
You still need strong creative, conversion-ready pages, and a smart media buying strategy.
Meta’s attribution model is evolving. So should your measurement mindset.
Incremental Attribution is a step in the right direction, but it’s not a magic fix. You still need:
High-impact creative, a compelling offer, and a seamless customer journey.
Tracking that covers MER, net margin, and retention - not just what shows up in Ads Manager.
Conversion-ready pages and a smart media buying strategy.
The discipline to test, learn, and adapt - now with a cleaner, more honest signal guiding the way.
The fundamentals haven’t changed. This isn’t about throwing out the playbook. What’s changed is how you interpret it, now with better data behind every decision.
Meta’s attribution tools are evolving.
Your measurement mindset should evolve with them.
Let’s Talk Attribution - For Real
We test attribution models like this every week, so our clients don’t have to guess.
If you're trying to scale smarter, spend more efficiently, or finally understand what’s actually working in your ad account, let’s talk.