Every marketing team has a dashboard. Every dashboard tells a story. Most of those stories are wrong — not because the data is bad, but because the models interpreting the data are fundamentally limited.
The Attribution Illusion
Last-touch attribution is the default in most analytics setups. It’s simple: whoever touched the customer last before conversion gets credit. But think about what this actually means.
A prospect reads your blog post. Three weeks later, they see a retargeting ad. A week after that, they Google your brand name and click a paid search ad. They sign up.
In last-touch attribution, paid search gets 100% of the credit. The blog post that created awareness? Zero credit. The retargeting that maintained consideration? Zero credit.
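To see how blunt that logic is, here's what last-touch attribution actually computes. A minimal sketch in Python; the journey and channel names are the hypothetical ones from the example above:

```python
# Last-touch attribution: 100% of credit goes to the final touchpoint.
# The journey mirrors the hypothetical example above.
journey = ["blog_post", "retargeting_ad", "paid_search"]

def last_touch(touchpoints: list[str]) -> dict[str, float]:
    """Assign all conversion credit to the last touchpoint."""
    credit = {tp: 0.0 for tp in touchpoints}
    credit[touchpoints[-1]] = 1.0
    return credit

print(last_touch(journey))
# {'blog_post': 0.0, 'retargeting_ad': 0.0, 'paid_search': 1.0}
```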
The Measurement Stack
An honest measurement framework has three layers:
Layer 1: Quantitative Attribution
Use multi-touch models as a starting point. They’re still imperfect, but less wrong than last-touch; a sketch of all three follows this list:
- Linear attribution — splits credit equally across all touchpoints
- Time-decay — gives more credit to touchpoints closer to conversion
- Position-based — weights first and last touch heavily, with remaining credit spread across the middle
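Here's a minimal sketch of all three, assuming one touch per channel per journey. The 7-day half-life and the 40/20/40 position split are common illustrative defaults, not standards:

```python
def linear(touches: list[str]) -> dict[str, float]:
    """Split credit equally across all touchpoints."""
    return {t: 1.0 / len(touches) for t in touches}

def time_decay(touches: list[str], days_out: list[float],
               half_life: float = 7.0) -> dict[str, float]:
    """More credit to touches closer to conversion (exponential decay).
    days_out[i] is how many days before conversion touch i happened."""
    weights = [0.5 ** (d / half_life) for d in days_out]
    total = sum(weights)
    return {t: w / total for t, w in zip(touches, weights)}

def position_based(touches: list[str]) -> dict[str, float]:
    """40% to first touch, 40% to last, 20% spread across the middle."""
    if len(touches) == 1:
        return {touches[0]: 1.0}
    if len(touches) == 2:
        return {touches[0]: 0.5, touches[1]: 0.5}
    mid_share = 0.2 / (len(touches) - 2)
    credit = {t: mid_share for t in touches[1:-1]}
    credit[touches[0]] = 0.4
    credit[touches[-1]] = 0.4
    return credit

journey = ["blog_post", "retargeting_ad", "paid_search"]
print(linear(journey))                           # ~0.33 each
print(time_decay(journey, days_out=[28, 7, 0]))  # paid search weighted most
print(position_based(journey))                   # 0.4 / 0.2 / 0.4
```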
Layer 2: Incrementality Testing
Attribution tells you what correlated with conversion. Incrementality testing tells you what caused it.
The basic framework (sketched in code after the list):
- Randomly hold out a control group that isn’t exposed to a specific channel
- Measure the conversion rate difference between exposed and control groups
- The delta is the true incremental impact of that channel
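Here's what that math looks like at the end of a test. The sample counts are made up, and the two-proportion z-test is one common significance check, not the only option:

```python
from math import sqrt

# Hypothetical results from a 50/50 holdout test on one channel.
exposed_users, exposed_conversions = 50_000, 1_200  # saw the channel
control_users, control_conversions = 50_000, 950    # held out

cr_exposed = exposed_conversions / exposed_users    # 2.4%
cr_control = control_conversions / control_users    # 1.9%

# The delta is the channel's incremental impact.
lift = cr_exposed - cr_control                      # 0.5 points absolute
relative_lift = lift / cr_control                   # ~26% relative

# Two-proportion z-test to check the delta isn't just noise.
pooled = (exposed_conversions + control_conversions) / (exposed_users + control_users)
se = sqrt(pooled * (1 - pooled) * (1 / exposed_users + 1 / control_users))
print(f"lift={lift:.4f} ({relative_lift:.0%} relative), z={lift / se:.1f}")
```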
Layer 3: Qualitative Research
Ask customers. Not in a survey with five-star ratings — in actual conversations.
“How did you first hear about us?” is the single most underrated question in marketing analytics. It captures awareness channels that no digital attribution model can see — podcast mentions, word of mouth, conference talks, Slack community recommendations.
Building Decision-Oriented Dashboards
The goal of measurement isn’t to fill dashboards. It’s to make better decisions. Every metric should be tied to a decision:
- If this metric goes up, we do X
- If this metric goes down, we do Y
- If this metric stays flat, we investigate Z
If a metric doesn’t tie to a decision, it’s decoration, not measurement.
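One way to enforce that rule is to make the decisions part of the metric's definition, so a metric can't ship to the dashboard without them. This is a hypothetical structure; the metric name and actions below are placeholders:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    """A metric only counts as measurement if every movement maps to a decision."""
    name: str
    on_up: str    # what we do if it rises
    on_down: str  # what we do if it falls
    on_flat: str  # what we investigate if it stays flat

email_reply_rate = Metric(
    name="email_reply_rate",
    on_up="scale the current sequence to the next segment",
    on_down="audit list quality and rewrite the weakest copy",
    on_flat="check whether the audience is saturated",
)

def validate(metric: Metric) -> None:
    """Refuse to dashboard decoration: every branch needs an action."""
    for action in (metric.on_up, metric.on_down, metric.on_flat):
        assert action, f"{metric.name} has no decision attached; it's decoration"
```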
The Honest Dashboard
Here’s what I recommend as a starting framework:
- Leading indicators — metrics that predict future performance (pipeline velocity, content engagement depth, email reply rates)
- Lagging indicators — metrics that confirm past performance (revenue, conversion rates, customer acquisition cost)
- Health indicators — metrics that signal system health (site performance, deliverability rates, data quality scores)
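As a sketch of how the three layers might sit in a dashboard config; every metric name here is a placeholder to swap for ones tied to your own decisions:

```python
# Hypothetical starting layout; metric names are placeholders.
DASHBOARD = {
    "leading": ["pipeline_velocity", "content_engagement_depth", "email_reply_rate"],
    "lagging": ["revenue", "conversion_rate", "customer_acquisition_cost"],
    "health":  ["site_p95_latency_ms", "deliverability_rate", "data_quality_score"],
}
```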
The most dangerous thing in analytics is false precision. A model that says “paid search drove 47.3% of conversions” creates an illusion of accuracy that doesn’t exist. A model that says “paid search is one of our top 3 conversion-contributing channels” is less precise but more honest — and leads to better decisions.
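One practical guard is to attach uncertainty to every attribution share and report ranges or ranks, not decimals. A minimal sketch, assuming per-conversion credit data is available; the bootstrap here is one simple way to get an interval:

```python
import random

# Hypothetical per-conversion credit for paid search (the fraction of each
# conversion the attribution model assigned to the channel).
credits = [random.random() for _ in range(2_000)]  # stand-in data

def bootstrap_interval(samples: list[float], n_boot: int = 1_000) -> tuple[float, float]:
    """95% bootstrap interval for the channel's mean credit share."""
    means = sorted(
        sum(random.choices(samples, k=len(samples))) / len(samples)
        for _ in range(n_boot)
    )
    return means[int(0.025 * n_boot)], means[int(0.975 * n_boot)]

lo, hi = bootstrap_interval(credits)
# Report the range, or just the channel's rank, never "47.3%".
print(f"paid search share: roughly {lo:.0%} to {hi:.0%}")
```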