A/B Testing ain’t for settling your disagreements

Tom Kerwin
3 min read · Nov 15, 2017
Using A/B testing to settle your design disagreements is like using a JCB to dig a child’s sandpit.

Decision time in the design meeting. Sally thinks design A is better. Karl thinks design B is better. Both of them know they’re right.

To break the deadlock, some bright spark says, “Let’s A/B test it!”

Eurgh!

It’s not that it won’t work — it sort of will. It’s just the wrong tool for that job, like using a massive JCB to dig a child’s sandpit.

First, it’s the same problem we hit when we set out to validate hypotheses: we’re coming at testing from the wrong end. When we test Sally’s Idea™ vs. Karl’s Idea™, we’re focused on finding out which one is “right”, and we’re missing every other possible design in the universe, many of which may be much better than either.

Much more importantly though, we’re making the biggest mistake in A/B testing: we’re testing in the wrong place. We’re running experiments based on ideas we’ve had and ignoring the very real possibility that the thing we’re testing doesn’t actually matter to our customers.

Taken to an extreme, we become the team charged with designing a nuclear power plant who spend months arguing about what colour to paint the bike sheds.

Businesses have limited resources and precious few opportunities to run A/B tests. It’s vital that we increase our chances of running experiments that make a real difference. (And limit our discussions of what shade of green would best match our lycra cycling gear.)

How do we get to more valuable tests?

Well, that’s a pretty big topic, but first we have to agree on the outcome we want.

This shouldn’t be hard, should it? After all, we all broadly want the same thing: a better product, sold better, so the customers and the business are happy.

But when we don’t agree on exactly what “better” means before we start designing experiments, we run into trouble. “Better” means different things to different people: faster response time, prettier design, clearer words, more robust security, more options, fewer options, all of the above… it depends on perspective.

We tend to fall in love with ideas that we believe will make things generically better. Our ideas feel precious, so we’re tempted to defend them. That’s when we start arguing at cross-purposes and missing the point. Ideas are not the point.

Drop your precious ideas and start caring about outcomes

Start with your desired outcome and explore what factors might influence that. Do your research. Map out the customer journeys. Usability test. Find the bottlenecks. Let the ideas flow from there.

This involves a little research work, but it opens our minds to what really matters, instead of just our own opinions. It also opens us up to ideas that might not look so great at first glance.

Agree on your main outcome

Go beyond agreeing on an outcome for each experiment.

Many teams agree on some metrics, but they change them from test to test. The argument goes: each experiment is trying to change something different, so that’s what we should measure.

But then how can we know the relative business value of each place we experiment? Does it matter more if we get 30% more sign-ups to an email list, 20% more items added to the shopping basket, or 10% more referrals? We can’t tell.

To know the value of an experiment, we have to measure what it means to the business. If one of the above translates into 6% more revenue, that’s its value. That’s the place to double down and test more.
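To make that concrete, here’s a rough back-of-the-envelope sketch in Python. The baselines and per-event values are entirely made up; the point is just that once every lift is translated into the same revenue figure, the experiments become comparable:

```python
# A rough sketch with made-up numbers: translate different metric lifts
# into a single revenue estimate so experiments can be compared.

# Hypothetical monthly baselines and hypothetical revenue per event.
experiments = {
    "email sign-ups": {"baseline": 10_000, "lift": 0.30, "revenue_per_event": 1.50},
    "basket adds":    {"baseline": 25_000, "lift": 0.20, "revenue_per_event": 4.00},
    "referrals":      {"baseline": 2_000,  "lift": 0.10, "revenue_per_event": 30.00},
}

for name, e in experiments.items():
    extra_events = e["baseline"] * e["lift"]
    extra_revenue = extra_events * e["revenue_per_event"]
    print(f"{name}: ~{extra_events:,.0f} extra events, about ${extra_revenue:,.0f} extra revenue")
```

With these (invented) numbers, the 20% basket-add lift is worth far more than the 30% sign-up lift, which you’d never know by comparing the percentages alone.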

We have to agree what “better” means not just before we start each experiment, but before we design our testing programme in the first place.

Get precise about “better” – define your Single Success Metric

Find a metric that’s as close as you can get to the company’s bottom line without being too hard to measure. You’re going to use this one metric to decide which of the variants in your experiments is better.

Andrew Anderson calls it the Single Success Metric. Sean Ellis calls it the North Star Metric.

Whatever you call it, agree on it before you do anything else.

If you want more like this, join awesome designers and smart business owners and get my weekly letter. I write about evidence-based product design, contrarian A/B testing, and how to be profitably wrong.

