I’m a huge fan of remote unmoderated user testing, where you set tasks for users to complete by themselves on your website. I love how quick it is to get results and how you get honest feedback from users in their real environment. It cuts out a lot of the hassle of setting up your own user testing lab.
I mostly use this approach to test websites—often at the start of a project to improve a client’s ecommerce site—but you can also use this approach to test apps and even prototypes. Though I prefer moderating tests of unfinished prototypes as these need more explaining.
Remote unmoderated tests mean that you obviously aren't there to correct any issues that may arise. So here are my top tips for success, based on over six years of experience using this method.
The rule of testing with only five users is a solid one that has always stood me in good stead—it allows you to find most of the issues quickly. However, testing a website these days means you need to consider the different devices it can be viewed on, as the size of screen creates a very different experience.
For the average site I recommend testing with five users on desktop, and five on mobile, so ten in total. If you just test five users in total across both device types (e.g. two on desktop, three on mobile) then you’re not going to see enough common behaviours to be sure you’ve caught everything.
I don’t test with tablets very often as they tend to be a small chunk of the user base. Also most tablets either display a large version of the mobile site or a slightly smaller version of the desktop one, which I’ll already be testing.
However, do check your stats to see if tablets represent a big section of your users—if it’s greater than 15-20%, it’s probably worth running some tests for them.
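As a rough sketch of that check, here’s how you might apply the 15-20% threshold to a device-category breakdown exported from your analytics tool. The session counts and the `device_share` helper are invented for illustration; they aren’t part of any particular analytics API.

```python
# Hypothetical sketch: deciding whether tablets warrant their own tests.
# The session counts below are made-up example data, not real analytics.

def device_share(sessions_by_device):
    """Return each device category's share of total sessions."""
    total = sum(sessions_by_device.values())
    return {device: count / total for device, count in sessions_by_device.items()}

sessions = {"desktop": 5200, "mobile": 4100, "tablet": 700}
shares = device_share(sessions)

# Flag tablets if they exceed the rough 15-20% threshold mentioned above.
if shares["tablet"] > 0.15:
    print("Tablet share is significant - consider running tablet tests.")
else:
    print(f"Tablet share is only {shares['tablet']:.0%} - desktop and mobile tests should cover it.")
```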
You should make sure the users you test with are similar to the actual users of your site. The demographic selection is often quite lightweight on remote user testing tools but you can at least set gender, age, device, and location. These are all things that you can find out about your audience by checking Google Analytics.
To get more specific in finding similar users, you need to make use of the screener questions. You can ask your users multiple choice questions and only allow those who answer as you want to take your test. This is helpful for getting users who might actually be interested in your product or service, and thus act more like real customers.
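To make the screener idea concrete, here’s a tool-agnostic sketch of how a multiple-choice screener question gates who takes the test. The question text, options, and qualifying answers are invented examples, not any specific platform’s API.

```python
# Hypothetical screener for an ecommerce clothing site: only people who
# shop online at least monthly qualify to take the test.
SCREENER = {
    "question": "How often do you shop for clothes online?",
    "options": ["Never", "A few times a year", "Monthly", "Weekly"],
    "qualifying": {"Monthly", "Weekly"},
}

def qualifies(answer):
    """Return True if the participant's answer lets them into the test."""
    return answer in SCREENER["qualifying"]

print(qualifies("Monthly"))  # a frequent shopper gets through
print(qualifies("Never"))    # a non-shopper is screened out
```

The point is that a qualifying answer looks no different to the participant than any other option, so they can’t easily guess their way in.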
You don’t need to get too worried if your participants don’t match your users exactly as it’s the usability you’re mainly testing. Just seeing how real humans behave in front of your website will help you improve it.
It can be tempting to just ask users to do a single task on your site (e.g. complete a sign-up process, buy a product) and then set your test going. The danger is that some users will rush through the test and you’ll only get back a five-minute video, missing key parts of your site. It’s a good idea to cover only one user flow per test, but that doesn’t mean you need just one task.
I generally aim for a task to cover each of the main pieces of functionality on the site. A task set for the user on a search results screen might be ‘compare the results available and choose one, explaining why you chose it’. On a fully functioning site, 4-5 tasks is usually enough to get back videos of 15-20 minutes.
Be careful and clear in how you write your tasks as this is the main interface you have with your users—you’re not actually there to clarify your points so the words have to do the work.
Try to summarise each task in short, snappy sentences and avoid ambiguity. In the past I’ve made tasks too wordy, which left users confused as to what was being asked of them.
It’s worth quickly testing your tasks by reading them aloud or getting someone else to read them. Also if a task relies on the user being on a certain page and you want to be sure users are there, then you can always put the relevant link in the task itself.
Once you’ve written your tasks it’s obviously quickest to order all your tests at once. The trouble with this approach is if there's a mistake in the instructions then all of your users could trip up on it. This could mean a load of useless tests and it can start to get expensive to replace them.
It’s better to start with one user and check the test has run smoothly before rolling it out to more people. After the first one I generally run two or three tests at a time. This gives you a chance to tweak any confusing wording or spot any technical issues with the site that are blocking users from completing the tasks.
Be aware, though, that you don’t want to be wildly changing the test between users; if you do, you’ll be testing different things with different people and the results won’t be comparable.
You are often able to set questions to ask users after they’ve completed the test. This is a handy place for gathering a bit of extra user research, especially if you’ve screened your users to be like your actual customers.
You can ask questions that help form broader research, like ‘what features are most important to you when buying a product like this?’, or learn about potential competitors by asking ‘what similar websites have you used?’.
It's important to have a process for analysing the videos of your tests. Don't just watch them and try to remember the things you need to change.
I watch each video through and make timestamped notes of any interesting things that happen—mainly usability issues or bugs. After I've watched them all I then look through these notes to spot common themes or issues. I then form these into a list for each part of the website, which becomes my lightweight report to share with my client.
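That note-taking step can be sketched as a simple grouping exercise: each observation gets a timestamp and a tag for the part of the site it relates to, then notes are grouped by tag so common themes stand out. The notes, tags, and structure below are invented examples, not the output of any testing tool.

```python
from collections import defaultdict

# Hypothetical timestamped notes: (user, timestamp, site area, observation).
notes = [
    ("user1", "02:14", "checkout", "missed the promo code field"),
    ("user2", "05:40", "search", "filtered by price, results didn't update"),
    ("user3", "03:05", "checkout", "missed the promo code field"),
]

# Group observations by site area so repeated issues surface.
by_area = defaultdict(list)
for user, timestamp, area, observation in notes:
    by_area[area].append((user, timestamp, observation))

# Issues seen by multiple users are the strongest candidates for the report.
for area, entries in sorted(by_area.items()):
    print(f"{area}: {len(entries)} note(s)")
```

The timestamps are what make the next step cheap: once a theme emerges, you already know where in each video to cut the illustrative clip.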
Once I know what these issues are, I go through the notes and make a clip for each, which I link to from my report to illustrate my findings. Clients love being able to quickly see real demonstrations of the problems you discover—this is much more powerful than you just telling them.
When it comes to actually writing your notes and reports it’s tempting to put solutions in there, e.g. ‘a bigger button would solve this’. However, you’re getting ahead of yourself and probably putting down the first answer that comes to mind.
Stick to highlighting what actually happened. Tackle solutions when you come to design improvements.
It’s also easy to focus only on the broken things or issues with the product, but cover the positives as well. You want to remember what users liked so you don’t remove the bits that are working.
There are several different tools out there for running remote unmoderated user tests. I used to be a big fan of UserTesting until they moved to only target the enterprise market with huge prices to match.
So I shopped around to find something that offered the same functionality: the ability to set audience demographics, screen participants with questions, notate videos, create clips, and share those clips easily. The answer turned out to be Validately, which I’ve found to be great. Pricing starts at $2,388 for a year (+$30 per remote user recruited from their panel). It also allows you to run moderated tests with your own users—it handles mobile particularly well.
Others I considered were Userfeel (seemed to have most of the functionality other than notes, $49/user), Loop11 (didn’t seem to allow notes or clips, $1,788 per year), and Usertest.io (didn’t seem to allow notes or clips, £18/user).