The Personalisation Metrics That Actually Matter (And the Vanity Ones to Ignore)
Most personalisation reports I see are full of the wrong numbers. Opens, clicks, coverage percentages, time on site. They look like progress. They're not.
When we sit down with a client to review how their personalisation is performing, the first job is usually to change what they're looking at. Their numbers tell them how busy the personalisation is. They don't tell them whether it's working.
So let's sort the signal from the noise. Here are the metrics that feel meaningful but aren't, the ones that actually tell you personalisation is doing its job, and a simple way to report the ones that matter without needing a data science team.
The Vanity Metrics Problem
Try this test on any number on your personalisation dashboard: could it go up whilst revenue stays flat? If yes, it's a vanity metric.
The usual suspects:
- Email open rates. A proxy for subject line quality and sender reputation. Tells you next to nothing about whether the personalised content inside the email changed what anyone did.
- Click-through rates on their own. Clicks aren't purchases. A customer can click through ten recommended products and still buy nothing.
- Personalisation coverage. The share of visitors who saw a personalised experience. Serving 80% of your traffic a tailored homepage doesn't matter if conversion didn't move.
- Time on site and pages per session. These can mean engagement. They can also mean a customer who can't find what they came for. Without context, they tell you nothing useful.
- Banner engagement on its own. We've seen composite banner work lift engagement significantly. But engagement numbers with no revenue story behind them are just noise.
None of these are bad metrics in themselves. They're useful when you're diagnosing a problem. They're not proof that personalisation is working.
The Metrics That Actually Indicate Personalisation Is Working
These are the ones we watch with clients:
- Revenue per visitor by segment. If personalisation is working for a segment, this goes up. Compare the personalised group against a holdout group and you've got your answer.
- Conversion rate by segment. Same logic. Are the people who saw personalised content converting at a higher rate than those who didn't?
- Average order value by segment. Good recommendations should lift basket size. This is usually where the clearest personalisation wins show up.
- Time to second purchase. For retention personalisation (nudge emails, win-back, loyalty), does the gap between first and second order shorten? One of the most reliable signs that something is actually changing behaviour.
- Win-back revenue as a share of total sales. For one beverage client I've mentioned before, this sits at 19% of monthly sales. Not 19% of email revenue, 19% of total sales. That's the kind of number worth putting on the board.
- Zero-result search rate. If you're personalising discovery, this should fall over time as you stock and surface what customers actually want.
- Return visit rate for personalised versus non-personalised segments. Personalisation should build habit. If returning visitor rates aren't improving for your personalised cohorts, it isn't landing.
Every one of these answers the same question: did the customer do something different, and did it make you money?
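If you want to see what the first three look like in practice, here's a rough sketch in Python. Everything in it is an assumption about your data: a one-row-per-visitor extract with a segment label, a personalised/holdout flag, revenue, and an order count. Swap in whatever your platform actually exports.

```python
import pandas as pd

# Assumed extract: one row per visitor over the window, with columns
# segment, saw_personalised (True/False), revenue (0.0 for non-buyers)
# and orders (0 for non-buyers). Names are illustrative.
visitors = pd.read_csv("visitor_extract.csv")

by_arm = visitors.groupby(["segment", "saw_personalised"])
summary = by_arm.agg(
    visitors=("revenue", "size"),
    revenue_per_visitor=("revenue", "mean"),
    conversion_rate=("orders", lambda s: (s > 0).mean()),
)

# AOV only makes sense over visitors who actually bought.
buyers = visitors[visitors["orders"] > 0]
by_arm_buyers = buyers.groupby(["segment", "saw_personalised"])
summary["avg_order_value"] = (
    by_arm_buyers["revenue"].sum() / by_arm_buyers["orders"].sum()
)

print(summary.round(2))
```

Read it side by side: if the personalised arm of a segment beats the holdout arm on revenue per visitor, you have your answer for that segment.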
The Attribution Problem (And a Practical Way Around It)
Personalisation attribution is genuinely hard. A customer sees a personalised homepage on Monday, gets a nudge email on Thursday, converts through a paid ad on Saturday. Which channel gets the credit?
You can spend six figures on attribution tooling and still not answer that cleanly. So don't try.
The practical alternative: run simple holdout tests. Show 50% of a segment the personalised experience, 50% the default. Measure the revenue gap over the test window. That gap is your attribution answer. Imperfect, but actionable.
One caveat. This only works if you've got enough traffic for the test to reach statistical significance. For a high-volume consumer brand, 30 days at a 50/50 split usually gives a clean read. For a B2B business or a smaller retailer, you may need 60 to 90 days, or run the test at segment level rather than site-wide. Better a slower test than a noisy one.
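If you'd rather do the read-out in code than a spreadsheet, here's a minimal sketch. The figures are placeholders, and the bootstrap simply asks whether the revenue gap between the two arms could plausibly be zero:

```python
import random
import statistics

def holdout_gap(personalised, holdout, n_boot=10_000, seed=1):
    """Revenue-per-visitor gap between the two arms, with a bootstrap
    95% interval. Inputs are per-visitor revenue figures (0.0 for
    non-buyers) collected over the same test window."""
    rng = random.Random(seed)
    gap = statistics.mean(personalised) - statistics.mean(holdout)
    diffs = sorted(
        statistics.mean(rng.choices(personalised, k=len(personalised)))
        - statistics.mean(rng.choices(holdout, k=len(holdout)))
        for _ in range(n_boot)
    )
    return gap, diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]

# Placeholder data: swap in your real per-visitor exports.
personalised = [0.0, 0.0, 42.50, 0.0, 18.00, 0.0, 65.00, 0.0]
holdout = [0.0, 0.0, 0.0, 35.00, 0.0, 0.0, 12.50, 0.0]

gap, lo, hi = holdout_gap(personalised, holdout)
print(f"RPV gap: £{gap:.2f} (95% interval £{lo:.2f} to £{hi:.2f})")
# If the interval straddles zero, the test hasn't proven anything yet:
# run it longer, or at segment level, as above.
```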
The Measurement Cadence
Most teams either measure too often or not often enough. Weekly dashboards stuffed with vanity metrics. Or quarterly revenue reviews that don't isolate personalisation at all.
The cadence we recommend:
- Weekly: anomaly watching only. Has anything broken, spiked, or collapsed? If yes, investigate. If no, move on.
- Monthly: segment performance. Conversion, AOV, and time to repurchase for your key segments. Thirty minutes with a spreadsheet, not a dashboard marathon.
- Quarterly: strategy review. Which personalisation rules are earning their keep? Which aren't? What's worth testing next?
Weekly is for spotting problems. Monthly is for tracking progress. Quarterly is for making decisions. Keep them separate and you'll waste a lot less time.
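The weekly pass in particular doesn't need tooling. A rough sketch, with made-up numbers and a three-sigma threshold you should tune to your own traffic:

```python
import statistics

def weekly_anomalies(history, threshold=3.0):
    """Flag metrics whose latest value sits more than `threshold`
    standard deviations from the trailing mean. `history` maps
    metric name -> daily values, oldest first."""
    flags = []
    for name, values in history.items():
        trailing, latest = values[:-1], values[-1]
        mean, sd = statistics.mean(trailing), statistics.stdev(trailing)
        if sd > 0 and abs(latest - mean) / sd > threshold:
            flags.append((name, latest, mean))
    return flags

# Made-up numbers: conversion collapses on the last day and gets
# flagged; zero-result search is steady, so it stays quiet.
history = {
    "conversion_rate": [0.031, 0.029, 0.030, 0.032, 0.030, 0.012],
    "zero_result_search": [0.08, 0.07, 0.09, 0.08, 0.08, 0.08],
}
for name, latest, mean in weekly_anomalies(history):
    print(f"Investigate {name}: {latest} vs trailing mean {mean:.3f}")
```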
The Baseline Question
A number means nothing without a baseline. "Conversion is up 2%" is meaningless if you don't know where it started, or whether it was already heading that way before you switched anything on.
Before you switch anything on, record 30 days of pre-implementation metrics for every segment you plan to target. Conversion rate, AOV, revenue per visitor, time to repurchase. That's your reference point. Every "improvement" claim afterwards has to be measured against it.
We generally do this on every personalisation project, and it's saved more than one client from celebrating a win that wasn't really there.
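The capture itself can be as simple as a dated CSV you write once and never edit. A minimal sketch; the segments and figures are placeholders:

```python
import csv
from datetime import date

# Placeholder figures: replace with your real 30-day pre-launch pulls.
baseline = [
    {"segment": "lapsed_buyers", "conversion_rate": 0.021,
     "aov": 38.50, "revenue_per_visitor": 0.81},
    {"segment": "new_visitors", "conversion_rate": 0.014,
     "aov": 29.00, "revenue_per_visitor": 0.41},
]

path = f"baseline_{date.today().isoformat()}.csv"
with open(path, "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=baseline[0].keys())
    writer.writeheader()
    writer.writerows(baseline)
```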
A Simple Reporting Framework
You don't need a BI tool for any of this. A one-page monthly report does the job:
| Segment | Baseline metric | Current metric | Change % | Revenue impact (£) |
Five columns. One row per segment. The number that actually matters (revenue impact) in the final column.
If you can't fill in the revenue impact column, either the personalisation isn't earning its keep, or you're not measuring it properly. Either way, that's the conversation worth having.
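Producing that page is a join and two arithmetic columns. A sketch, assuming a baseline snapshot like the one above and a matching current-month extract; the column names and the revenue-impact arithmetic are illustrative, not gospel:

```python
import pandas as pd

# Assumes two CSVs with matching columns: segment,
# revenue_per_visitor, visitors. "baseline" is the pre-launch
# snapshot; "current" is this month's pull.
baseline = pd.read_csv("baseline.csv").set_index("segment")
current = pd.read_csv("current_month.csv").set_index("segment")

report = pd.DataFrame({
    "Baseline metric": baseline["revenue_per_visitor"],
    "Current metric": current["revenue_per_visitor"],
})
report["Change %"] = (
    (report["Current metric"] / report["Baseline metric"] - 1) * 100
).round(1)
# Crude but honest: per-visitor uplift scaled by this month's traffic.
# It ignores seasonality, which is exactly what the holdout test is for.
report["Revenue impact (£)"] = (
    (report["Current metric"] - report["Baseline metric"])
    * current["visitors"]
).round(0)

print(report)
```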
The Point of All This
Personalisation is a commercial tool. It's there to grow revenue, shorten repeat purchase cycles, and build the habits that make customers stick around. Every metric you track should tell you whether it's doing that job; the ones that don't are a distraction.
Open rates will tell you your subject lines work. Coverage percentages will tell you your rules are firing. Neither will tell you whether any of it is making your business more money.
Measure the right things. Report fewer numbers. Make better decisions.
If you're not sure what your personalisation is actually doing, get in touch. Happy to talk it through.