Marketing Attribution Problems: How to Make Decisions Without Perfect Data

If you’re running a SaaS company that’s grown beyond the scrappy startup phase, you’ve probably noticed something unsettling about your marketing reports: they don’t add up. Your marketing attribution data tells impossible stories where channels claim credit for more revenue than you actually generated.

Your retargeting vendor claims they drove $500,000 in revenue last month. Facebook says they generated $300,000. Google takes credit for $400,000. Add it all up, and somehow your marketing channels are claiming 200% of your actual revenue.

Welcome to the attribution circus, where every vendor is both the ringmaster and the star performer.

Why marketing attribution problems are getting worse

The fundamental challenge isn’t just that vendors are inflating their numbers – though they absolutely are. It’s that modern customer journeys have become impossible to track accurately. Your prospects are researching you across dozens of touchpoints: organic social media, dark social shares, offline conversations, competitor comparison sites, review platforms, and podcasts you’ve never heard of.

Traditional attribution models try to force these messy, non-linear journeys into neat, linear funnels. First-touch attribution gives all credit to the first interaction. Last-touch attribution credits only the final click. Multi-touch models attempt to distribute credit across touchpoints, but they’re still just making educated guesses about what actually influenced the decision.

Meanwhile, privacy changes like iOS 14.5 and the death of third-party cookies have made tracking even more fragmented. The attribution data you’re getting today is less complete and less accurate than what you had three years ago.

And here’s the kicker: Even if you could track everything perfectly, tracking doesn’t equal causation. Just because someone saw your retargeting ad before converting doesn’t mean the ad caused the conversion.

The vendor attribution shell game

Here’s what’s actually happening with your specific channels: Your retargeting vendor is claiming credit for every purchase made by anyone who saw their ad in the past seven days. But here’s the thing – that audience is basically “everyone who visited your website recently.” Many of these people were already planning to buy. They would have converted anyway, with or without seeing that retargeting ad for the eighth time.

Marketing platforms have a dirty secret: they’re grading their own homework, and surprise – they’re all getting A’s. According to research from Avinash Kaushik, one of the world’s leading analytics experts, the true incrementality of marketing typically ranges from just 0% to 25%. That means 75% or more of the conversions your vendors are claiming would have happened anyway.

But the problems go far beyond vendor self-interest. Even expensive attribution software struggles with the same fundamental issues. Multi-touch attribution models look sophisticated with their algorithmic credit distribution, but they’re still just making educated guesses. They can tell you someone clicked your Facebook ad, then your Google ad, then converted – but they can’t tell you which touchpoint actually influenced the decision.

Jon Loomer, who’s spent years dissecting Meta’s advertising platform, found that view-through conversions in remarketing campaigns often suggest that “many of the people you reached may not have seen your ad and would have purchased anyway — usually as a result of getting an email that same day.” The platforms know this. They’re counting on you not knowing it.

Meanwhile, your attribution software might show completely different numbers than your ad platforms because of data sampling differences, attribution window variations, and tracking pixel conflicts. One tool says LinkedIn drove 50 conversions last month. Another says 12. Your CRM says 31. Which one is right? Probably none of them.

The technical term for this is “double attribution,” and it gets worse. AppsFlyer’s documentation reveals that in retargeting campaigns, in-app events are often double attributed – credited to both the retargeting source and the original acquisition source. Your vendors are literally counting the same conversion multiple times.

Why traditional attribution models fail for modern SaaS

If you’re selling B2B SaaS with a sales cycle longer than six months and multiple stakeholders involved in purchasing decisions, these marketing attribution problems become even more pronounced. Traditional attribution is essentially worthless when dealing with complex B2B buyer journeys. Research from GetUpLead found a shocking example: LinkedIn ads showed 0.5% revenue attribution in last-touch models and 1.5% in multi-touch attribution. The actual impact after proper testing? 20% of conversions.

That’s a 13x difference between what attribution models said and what was actually happening.

The problem is fundamental. Only about 5% of your potential buyers are actively in-market at any given time. The other 95% are on non-linear journeys that traditional tracking can’t follow. They’re reading your blog posts in incognito mode, discussing your product in Slack channels you can’t see, and comparing you to competitors in ways that leave no digital footprint.

Forrester’s 2024 research drives this home: 64% of B2B marketing leaders feel their organization doesn’t trust marketing’s measurement for decision-making. When nearly two-thirds of companies don’t believe their own marketing metrics, you know the attribution challenges have reached a breaking point.

Enter sensitivity analysis: Your solution to marketing attribution problems

Here’s where sensitivity analysis becomes your sanity check for these marketing measurement challenges. Instead of accepting vendor numbers at face value or drowning in complex attribution models that still won’t give you the truth, you flip the question.

The approach is beautifully simple. You ask: “What percentage of this channel’s reported revenue would need to be truly incremental for it to break even?”

Let’s say your retargeting campaign reportedly drove $100,000 in revenue last month, and you spent $20,000 on it. If your gross margin is 80%, then you made $80,000 in gross profit from that reported revenue. For the campaign to break even, you need $20,000 in gross profit, which means you need $25,000 in revenue (at 80% margin).

That’s 25% of the reported $100,000.

So the question becomes: Do you believe at least 25% of those retargeting conversions were truly incremental?

Now you can use your judgment. You know your business. You know that most people who visit your pricing page three times are probably going to convert regardless. You can make an educated guess that maybe 30-40% of retargeting conversions are actually incremental. Great – the campaign is probably worth it.
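The break-even question above reduces to one line of arithmetic. Here's a minimal Python sketch (the function name is my own, not from any library), run on the retargeting example:

```python
def breakeven_incrementality(cost, attributed_revenue, gross_margin):
    """Fraction of attributed revenue that must be truly incremental
    for the campaign to break even on gross profit."""
    return cost / (attributed_revenue * gross_margin)

# The retargeting example: $20,000 spend, $100,000 attributed revenue,
# 80% gross margin.
threshold = breakeven_incrementality(20_000, 100_000, 0.80)
print(f"{threshold:.0%}")  # prints 25%
```

Anything below that threshold and the campaign is losing money on a gross-profit basis; anything above it and the spend pays for itself.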

Building your sensitivity analysis framework

The math is straightforward, but the insights are powerful. Here’s the formula:

Required Incremental Revenue % = (Total Campaign Cost) ÷ (Attributed Revenue × Gross Margin %)

But here’s where it gets interesting. You can create multiple scenarios:

  • Conservative scenario: Assume only 10% incrementality
  • Moderate scenario: Assume 25% incrementality
  • Optimistic scenario: Assume 40% incrementality

For each scenario, calculate your actual ROI. If even your conservative scenario shows positive ROI, you’ve found a winner. If only your optimistic scenario works, maybe it’s time to cut that channel.
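The three scenarios can be sketched in a few lines of Python. This uses the same illustrative retargeting numbers as before ($20,000 spend, $100,000 attributed revenue, 80% margin); the incrementality percentages are the assumed scenario values above, not measured figures:

```python
def scenario_roi(cost, attributed_revenue, gross_margin, incrementality):
    """ROI assuming only `incrementality` of attributed revenue is real."""
    incremental_profit = attributed_revenue * incrementality * gross_margin
    return (incremental_profit - cost) / cost

scenarios = [("conservative", 0.10), ("moderate", 0.25), ("optimistic", 0.40)]
for label, p in scenarios:
    roi = scenario_roi(20_000, 100_000, 0.80, p)
    print(f"{label}: {roi:+.0%}")
# conservative: -60%
# moderate:     +0%
# optimistic:   +60%
```

In this example only the optimistic scenario is positive, so the channel is a candidate for an incrementality test rather than a confident "keep" or "cut."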

Research from McKinsey shows that companies using advanced marketing measurement approaches see significant efficiency improvements without increasing budgets. They’re not spending more – they’re spending smarter.

Sensitivity analysis in action

Let me show you how this works with a real example from M-Squared Analytics’ research. A fashion brand was spending $1.3 million on Facebook advertising and seeing great “results” according to Facebook’s attribution:

  • With a 1-day attribution window: 17,000 new customers at $57 cost per acquisition
  • With a 28-day window: 27,000 customers at $230+ cost per acquisition

That’s a 4x difference just from changing the attribution window. But when they applied actual incrementality testing, the results were even more sobering. The majority of attributed conversions were actually returning customers, not new acquisitions. The real cost per new customer was significantly higher than Facebook reported, and Facebook’s true contribution to new customer acquisition was much lower than their attribution suggested.

Without sensitivity analysis, they would have kept pouring money into Facebook based on inflated metrics. With it, they could make an informed decision about whether that real $120 cost per acquisition was worth it for their business model.

The hidden power of “good enough” measurement

Here’s something that might surprise you: You don’t need perfect attribution data to make smart marketing decisions. You need good enough attribution with intelligent interpretation. Sensitivity analysis gives you exactly that.

Instead of chasing the impossible dream of perfect attribution, you’re acknowledging uncertainty and working with it. You’re saying, “I don’t know exactly how much of this is incremental, but I can figure out what the threshold needs to be for this to make sense.”

This approach also helps you have better conversations with your team and board about marketing attribution challenges. Instead of presenting attribution numbers everyone knows are inflated, you can say: “Even if we assume only 20% of these conversions are incremental – which is conservative – this channel is still delivering a 150% ROI.”

That’s a credible argument that acknowledges reality while still demonstrating value.

What about incrementality testing?

You might be thinking, “Why not just run incrementality tests for everything?” Fair question. Incrementality testing through geo-experiments or holdout groups is the gold standard for measuring true marketing impact. Google, Meta, and other platforms offer conversion lift studies that randomly split audiences into test and control groups.

But here’s the reality: Comprehensive incrementality testing is expensive, time-consuming and often impractical for mid-sized SaaS companies. A proper geo-experiment needs to run for 6-12 weeks minimum. You need sufficient sample sizes for statistical significance. And you’re literally turning off marketing to some portion of your potential customers during that time.

Sensitivity analysis bridges this gap. It’s not as precise as incrementality testing, but it’s immediate, free and good enough for most decisions. Use incrementality testing for your biggest channels and highest-stakes decisions. Use sensitivity analysis for everything else.

Implementing sensitivity analysis in your SaaS company

Start simple. Pick your three largest marketing channels and run this analysis:

  1. Gather the numbers: Campaign cost, reported revenue and your gross margin
  2. Calculate break-even incrementality: Use the formula above
  3. Apply your judgment: Based on your understanding of the channel and customer behavior, estimate the likely incrementality percentage
  4. Create scenarios: Run conservative, moderate and optimistic scenarios
  5. Make decisions: Cut channels that don’t work even in optimistic scenarios, double down on those that work even in conservative ones
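The five steps above can be wired together into a simple decision rule. This is a sketch only: the channel names, costs, and low/high incrementality estimates are illustrative placeholders you would replace with your own numbers and judgment.

```python
GROSS_MARGIN = 0.80  # assumed; use your own

# Illustrative inputs: cost, attributed revenue, and your own
# (low, high) incrementality estimates per channel.
CHANNELS = {
    "retargeting":    {"cost": 20_000, "revenue": 100_000, "inc": (0.10, 0.40)},
    "branded_search": {"cost": 15_000, "revenue": 120_000, "inc": (0.05, 0.15)},
    "cold_outreach":  {"cost": 20_000, "revenue":  60_000, "inc": (0.60, 0.90)},
}

def decide(cost, revenue, inc_low, inc_high, margin=GROSS_MARGIN):
    roi = lambda p: (revenue * p * margin - cost) / cost
    if roi(inc_low) > 0:
        return "double down"   # positive even in the conservative case
    if roi(inc_high) <= 0:
        return "cut"           # negative even in the optimistic case
    return "test"              # in between: worth an incrementality test

for name, c in CHANNELS.items():
    print(f"{name}: {decide(c['cost'], c['revenue'], *c['inc'])}")
```

With these placeholder numbers, cold outreach clears even its conservative bar, branded search fails even its optimistic one, and retargeting lands in the middle where a proper test earns its cost.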

Channels in the middle are where it’s worth investing in proper incrementality testing to get a definitive answer.

Remember to factor in your specific context. Branded search campaigns might have only 10% incrementality (people were searching for you anyway). Cold outreach might have 80% incrementality (they wouldn’t have found you otherwise). Your retargeting probably sits somewhere in between.

The fractional CMO advantage

If all of this feels overwhelming, you’re not alone. Most SaaS companies at your stage struggle with these exact challenges. You’re big enough that marketing attribution matters, but not big enough to have a full team of data scientists figuring it out.

This is precisely where fractional CMOs excel. They’ve seen these attribution challenges across dozens of companies. They know which channels typically have high versus low incrementality. They can implement sensitivity analysis frameworks quickly because they’ve done it before.

A fractional CMO can also help you navigate the politics of attribution. When your retargeting vendor pushes back on your incrementality estimates, an experienced CMO has the credibility and expertise to defend your methodology. They can translate between the technical reality and business implications in a way that resonates with your board and executive team.

Your next steps

Stop accepting vendor attribution at face value. Stop pretending that multi-touch attribution models solve the incrementality problem. They don’t.

Instead, embrace uncertainty and use sensitivity analysis to make smarter decisions despite that uncertainty. Here’s your action plan:

  1. This week: Run sensitivity analysis on your largest marketing channel
  2. This month: Extend the analysis to your top five channels
  3. This quarter: Implement a formal sensitivity analysis framework for all marketing decisions
  4. This year: Run incrementality tests on 1-2 critical channels to calibrate your estimates

The goal isn’t perfect measurement. It’s better decisions. And sensitivity analysis gets you there faster, cheaper and with less organizational friction than any other approach.

Your marketing channels are all claiming credit for the same conversions. Your vendors are marking their own homework. Your traditional attribution models are lying to you. But with sensitivity analysis, you finally have a tool that acknowledges these realities while still enabling smart, defensible marketing decisions.

Welcome to the post-attribution age. Your CFO is going to love this.
