The evidence-based UX design guide

A/B Tests

What you can learn

A/B tests have long been seen as the ultimate method for an evidence-based and data-driven approach to designing websites and apps. The theory is that you take a new website design (version B) and serve it up to some of your users whilst showing the rest of your users the original design (version A). This can be anything from a different version of a button to a complete page redesign. You then measure to see which version provides a better rate of conversion, and the winner is put live on the site for all of your users.
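
To make the mechanics concrete, here is a minimal sketch (in TypeScript) of how a test might split traffic and count conversions, assuming you bucket visitors deterministically by their ID. The function and experiment names are illustrative only, not any particular tool's API.

    // Bucket each visitor into variation A or B and count conversions per bucket.
    type Variation = "A" | "B";

    // Simple FNV-1a string hash; good enough for illustrative bucketing.
    function hash(input: string): number {
      let h = 0x811c9dc5;
      for (let i = 0; i < input.length; i++) {
        h ^= input.charCodeAt(i);
        h = Math.imul(h, 0x01000193) >>> 0;
      }
      return h;
    }

    // The same visitor always gets the same variation for a given experiment.
    function assignVariation(userId: string, experiment = "checkout-redesign"): Variation {
      return hash(`${experiment}:${userId}`) % 2 === 0 ? "A" : "B";
    }

    const visitors = { A: 0, B: 0 };
    const conversions = { A: 0, B: 0 };

    function recordVisit(userId: string, converted: boolean): void {
      const bucket = assignVariation(userId);
      visitors[bucket] += 1;
      if (converted) conversions[bucket] += 1;
    }

Tools like the ones mentioned later handle this assignment (and the reporting) for you; the point is simply that each visitor is randomly but consistently assigned to one version.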

You can also measure secondary goals and other interactions on the website to see whether the change has had an effect on more than just your main conversion rate. Something might not increase page conversions but might improve another desirable metric. You can also run multivariate tests, where you compare more than two options at once.

In principle this type of testing is simple and allows you to measure the success of your designs in the real world and with your actual users. In practice it is somewhat more complex than that (see the 'watch out for' section) and isn't something that should be undertaken lightly. Doing so risks getting inaccurate results that lead you to make the wrong decisions for your product, so it's well worth getting a professional data analyst to do the work.

How to do it

You're going to need to add a snippet of code from your chosen tool to your site and, as with most other things like this, it's a pretty straightforward copy-and-paste job. Once installed, you can then use the software to set up your A/B tests. However, before doing this it's worth going through a few checks, covered in the 'watch out for' section below.

Watch out for

Do you have enough traffic?

This is the biggest problem for a lot of startups and small sites, and it's not as simple as knowing how much traffic you get to the website overall. Even with 100,000 users a month you may not have enough traffic to run the test you want in a reasonable time. Let me explain with an example:

Let's say you want to get more users to reach checkout from your product page, so you redesign a section of it. The current conversion rate of that page is 5%, and to consider the redesign a success you want that to increase by 10%, to 5.5%. This means that with a statistical significance of 90% (which isn't amazing; 95% is more commonly used) you need around 30,000 individual visitors to go through your test per variation to be confident about whether it really is 10% better. So in an A/B test you'll need 60,000 users to go through your test. If your product page only gets about a third of your site's 100,000 monthly users, you're going to need to run that test for around two months before you have a result.
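
As a rough check on those numbers, here is a sketch of the standard two-proportion sample-size approximation. The formula and the 80% statistical power figure are assumptions on my part rather than numbers from the example above, but they land in the same ballpark.

    // Back-of-the-envelope sample size per variation for detecting a lift
    // from `baseline` to `target` conversion rate.
    function sampleSizePerVariation(
      baseline: number,   // current conversion rate, e.g. 0.05
      target: number,     // rate you want to be able to detect, e.g. 0.055
      zAlpha = 1.645,     // two-sided z value for 90% significance
      zBeta = 0.84        // z value for 80% power (assumed)
    ): number {
      const variance = baseline * (1 - baseline) + target * (1 - target);
      const delta = target - baseline;
      return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (delta * delta));
    }

    // Roughly 24,600 per variation at 80% power; raising power to 90%
    // (zBeta = 1.28) pushes it to around 34,000, so ~30,000 is a fair figure.
    console.log(sampleSizePerVariation(0.05, 0.055));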

Two months is a long time for a lot of companies, and they would probably be better off spending that time gathering several other forms of evidence (such as audience data, guerrilla user testing, conversion funnels, etc.), which will surface lots of areas for improvement.

Are you just testing for confirmation?

The biggest problem with A/B testing is that people use it at the wrong time. Too often they have already redesigned and built their site, and are just testing to see how much better it is than the current version. They've already put the time and money in and are not interested in knowing whether it is actually worse; they just want a number to boast about how much better the new one is.

Do you understand statistics?

Ultimately, running A/B tests properly involves a good knowledge of statistics and experience of having done it before. There are lots of things to understand, like sample sizes, statistical significance, statistical power and one/two-tailed tests, just to know whether you're doing the right things. Taking a punt and doing it on your own almost guarantees that you'll make mistakes. I know, I've been there. The software can be very reassuring and make you think you're getting great results, but when you come to launch them you're left with something that doesn't work.
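
To give a flavour of what those concepts feed into, here is a minimal sketch of a two-tailed, two-proportion z-test on raw A/B counts. The counts are made up for illustration and the thresholds assume the usual normal approximation.

    // z statistic for the difference between two observed conversion rates.
    function zScore(convA: number, visitsA: number, convB: number, visitsB: number): number {
      const pA = convA / visitsA;
      const pB = convB / visitsB;
      const pooled = (convA + convB) / (visitsA + visitsB);
      const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitsA + 1 / visitsB));
      return (pB - pA) / se;
    }

    // For a two-tailed test, |z| > 1.645 is significant at 90% and > 1.96 at 95%.
    console.log(zScore(1500, 30000, 1650, 30000)); // ≈ 2.7, significant at 95%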

There are many things to watch out for in A/B testing, most of which are solved by getting an A/B testing pro to help you out.

Example tools (and cost)

The leader in this field is probably Optimizely (from $49/mo), which has some very user-friendly software for running A/B tests, even if it can encourage you to think you have results earlier than you really do. There are other equally powerful options out there, like VWO (from $49/mo), and even Google Analytics (free) offers a simple version called Content Experiments.

How long does it take?

Once your designs are ready, setup should be a matter of an hour or two. Running a test can take a long time (often several weeks).

How often should you use it?

Often / Sometimes / Rarely

Resources

Last updated on 5 December 2016

Note: the examples in this guide are for website design, but most of the content is also applicable for native apps and software.
