Category: UX advice / Evidence-based design methods
User testing is arguably the most useful evidence-gathering method of all. In my experience, a single session of user testing generates more ideas for site improvements than any other method.
If you've followed the process of gathering quantitative data first and you know you have conversion issues on your site, user testing can tell you why those issues are happening. You'll be able to watch users go through your flow and (providing your test is well designed) you'll see where they get stuck, and hear them tell you why they don't like something or can't find things.
You can gain this knowledge from approaches such as lab user testing or even guerrilla user testing. However, I find remote testing has a few big advantages.
There’s no excuse not to set up a quick unmoderated remote test with a few users ahead of each redesign project or as a regular monthly thing.
There are three main methods of remote user testing:

1. Facilitated and moderated by you
2. Facilitated (and possibly moderated) by someone else
3. Facilitated by you but unmoderated

Each of them works a bit differently, which I'll explain here.
The moderated and facilitated by you approach involves the most work for you but can potentially cost nothing. You'll need to find the users, organise a time for a video call, record the call, and then write up notes. I ask clients to suggest users for me to contact, then email them to book in a time for a call using the handy Calendly.
On the call itself I use Skype so participants can share their screen with me if they're on desktop, or a tool like Validately to get access to their screen if they're on mobile. I then share a link to a prototype or website and can see and hear them as they navigate it. The call can be recorded with screen-recording software (QuickTime is handy for this), and immediately afterwards I write up my main observations.
The facilitated by someone else approach means hiring a company to set up and run your user test, which may or may not be moderated by them as well. Moderation is generally useful when you're testing a prototype or an early version of something that requires a bit of explaining or isn't fully working.
Either way, your role is to specify what you want to test and liaise with them as they develop the test. They'll then run and analyse it, and you'll get a report at the end with the findings.
The unmoderated option consists of you setting up the test and putting it out to a panel of users who are ready to go. You then get back videos of the users navigating the site, for you to analyse and draw insights from. If you're testing a real, live website, I think unmoderated is the best way to go, as it's closer to the reality of how users actually browse the web.
You will need to develop some skills in putting together a decent test and you’ll need the patience to watch videos of people going through your site. As painful as this can be at times, as a UX designer or product manager, there are few better ways to understand what your users face.
An important part of writing a user test is making sure you don't include leading instructions. Just as with leading questions in surveys, you don't want to push users towards certain actions, or you'll never learn what they would do naturally. Keep tasks simple by saying things like 'show how you would search' rather than 'click the search button in the top right and fill out your dates and location'.
Some people simplify the whole test by setting users just one task, like 'show how you would buy a product'. The danger here is that users whizz through the process and you don't get to see them interact with all parts of your site, which is why I prefer a bit of guidance, with a task per step of the flow I'm testing.
Make sure you recruit accurately for your tests. You'll want people who match your actual users (you can use your audience data to discover this). It’s very rare that a website is designed to appeal to absolutely everybody so you want users who are going to provide authentic feedback.
Five users is usually fine for each test—any more and you just tend to see repeated behaviours—but make sure you have five per major device category, as people can behave very differently on them. For example, I most commonly test with five on desktop and five on mobile.
When it comes to analysing your own tests, try to stick to recording observed behaviours. Users might say that they don't like a feature (especially if it's new) only to be perfectly competent at using it. Quotes are useful to put in reports to explain behaviours, but shouldn't be used if they don't reflect what actually happened.
Aim to watch all your tests through and annotate them first, so you have a good sense of events, before summarising the repeated insights and critical issues in a lightweight report.
If you want someone else to facilitate, a company that I've heard good things about is WhatUsersDo (from £3,000/project).
When it comes to unmoderated testing, I'm a huge fan of Validately (from $199/month) which offers a great set of tools for creating and watching back your tests, including being able to time stamp your videos with notes and create clips to share. A cheaper, pay as you go option that seems to offer a very similar set of features is UserFeel (from $49/test).
Assuming five users, unmoderated testing can be done in a matter of days. If you're moderating it yourself, the extra organising tends to mean it takes about a week.