I’ve written a lot on this website about the techniques I use to be evidence-based when carrying out my UX consulting projects. This includes my framework showing the full evidence-based design process you can use (and the order to use different methods).
In this step-by-step guide I will outline the standard approach I use to improve my clients' websites, linking off to specific articles for more detail.
Sometimes it’s a bit of a pick-and-mix of the different elements depending on the time and budget available. I’ve used a website as the example but this process would work for designing mobile apps as well. As I work a lot with startups, I try and go for lean methods that don’t get bogged down in excessive deliverables and can be done on a small budget.
In most projects I do with a client I will aim to get a decent period of research in up front. This enables me to get up to speed with their product and understand their users, which is vital before I can make informed suggestions about how to improve things. Most of the data gathering happens at the start of a project, as I need to learn early so I can apply those lessons in my design work.
The first thing to get my head around is how the website currently works, and often the quickest way to do this is to speak to the client. This is certainly a form of evidence. Unfortunately it’s often the only point of data that many people use, and it’s a dangerous one because clients can be biased.
However the client will be great at explaining what they hope users can achieve on the site (the goals) and all the different features it has, so it’s a quick way of getting up to speed. Get them to define the KPI for the project—are you doing it for more sales, more enquiries, or something else?
If you're remote this can be achieved through an hour-long Skype call. It can also be handy if the client can write it up in a shared doc, but it’s always worth having some time to get them to explain things. I then sketch out the key user flows on paper, or will sometimes put them together on screen, to confirm that I’ve heard them correctly.
I've written more here on using client knowledge.
Once I have an understanding of how the site works, it’s then time to get to grips with some data to see how users are actually behaving. Every company I’ve worked with has had at least Google Analytics (GA) installed, so I use this to drill into the performance of the pages in the user flows and work out conversion rates.
At this point it is worth understanding how Google Analytics tracks users, which I write about in this article:
GA offers three types of traffic metric by default: users, sessions and pageviews. You can also set events manually on specific pages or interactions.
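To make the distinction between those three metrics concrete, here’s a toy sketch. The hit log below is entirely made up (GA doesn’t expose its raw data in this shape), but it shows why the three numbers differ: one person can have several sessions, and one session can contain several pageviews.

```python
# Hypothetical hit log: (user_id, session_id, page). The names and
# structure are illustrative, not GA's actual schema.
hits = [
    ("u1", "u1-s1", "/home"),
    ("u1", "u1-s1", "/pricing"),
    ("u1", "u1-s2", "/home"),
    ("u2", "u2-s1", "/home"),
]

users = len({user for user, _, _ in hits})           # distinct people
sessions = len({session for _, session, _ in hits})  # distinct visits
pageviews = len(hits)                                # every page load

print(users, sessions, pageviews)  # 2 users, 3 sessions, 4 pageviews
```

The same person browsing twice in one day counts once as a user, twice as a session, and once per page loaded as a pageview, which is why the metric you pick changes the story the numbers tell.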
If I want a clearer idea of how the site is performing as I design it, I might build a funnel in GA. This may only be suitable for a longer project because it will take time to gather data. Or I might install Mixpanel for this job, as it is better at tracking individual users. As I explained in chapter 2.2 of my book:
The Google Analytics interface gives you a visualisation of a funnel and shows you how many people move forward and how many drop out. Due to the fact that it’s based around sessions, the GA funnel isn’t the most accurate at telling you how your users are moving through your site (Mixpanel is the one for that) but if you want an aggregate overview or can’t install Mixpanel for any reason, GA can do a job.
Mixpanel offers a lot more customisation around its funnels: you can slice and dice them by lots of different properties and it will accurately represent what is happening at a user level. Setting up funnels in Mixpanel is very simple; you just need to have some events set up.
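A user-level funnel of the kind Mixpanel reports can be sketched in a few lines. This is a simplified toy (the event names and log format are invented, and real funnels also check the order events happened in), but it shows the core idea: at each step you only count users who completed every earlier step.

```python
# Toy event log: (user_id, event_name). Entirely hypothetical data.
events = [
    ("u1", "view_product"), ("u1", "add_to_cart"), ("u1", "purchase"),
    ("u2", "view_product"), ("u2", "add_to_cart"),
    ("u3", "view_product"),
]

steps = ["view_product", "add_to_cart", "purchase"]

def funnel(events, steps):
    """Count, per step, the users who have done this step and all prior ones.
    Ignores event ordering for simplicity; a real funnel checks order too."""
    remaining = {user for user, _ in events}  # start with everyone seen
    counts = []
    for step in steps:
        step_users = {u for u, e in events if e == step}
        remaining &= step_users  # must also have done this step
        counts.append(len(remaining))
    return counts

print(funnel(events, steps))  # [3, 2, 1]
```

Because the counting is per user rather than per session, someone who browses on Monday and buys on Friday still shows up as one person moving all the way through, which is the accuracy advantage over a session-based funnel.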
Now that I know how the site is performing, I can also use Google Analytics to understand who the users are. It’s rare to get a project where there’s time and budget to do a piece of user research and interviewing up front. As a result I need a technique that will enable me to learn about the users without actually meeting them.
For this I delve into the Google Analytics user data. It’s never going to give the insight of interviewing but it is real data about the site’s users and gives a lot of information about their identity. I cover how I create these outline personas in chapter 3 of my book and in this article.
I’m a fan of keeping personas minimal and not filling them with lots of irrelevant stuff: keep it focused on things that will help you design. The kind of data I’m looking for from Google Analytics is age, gender, device, location, source of traffic, new vs returning, and whether they convert or not.
One of the key things to understand at this point is the split of users by device, i.e. how many mobile, tablet, and desktop users there are. This comes in handy when deciding which devices to user test on.
I don't always have this option available but increasingly I encourage my clients to install a tool like Hotjar or Mouseflow. This enables me to see richer user behaviour data in the form of heatmaps of where they spend time and click on the page. The scroll heatmaps can be great at diagnosing problems with page layout, an example of which I write about in this guide:
Where scroll heatmaps are most useful is when they show a sudden drop in the percentage of users at a point near the top or the middle of your page. This means that the content has for some reason caused users to stop scrolling and could be the sign of a 'false floor’, where the design makes it look as if the page has finished.
These tools also offer visitor recordings, which are a great way to watch the actual journeys users are making through the website. It lacks the explanation of a user test but gives you a window on real user behaviour:
This isn’t a user test that you have set up, it is just someone going about their business because they’ve arrived on your site through their own choice (and are presumably interested in what you are offering).
These two methods usually give me plenty of ideas about the problems on the existing site, and give me hypotheses to check in the user tests.
Copying the competition might be obvious but a decent critical analysis of similar sites is a useful point of data to gather. I find looking at five sites is enough to get a sense of what users expect. I’ve written about it in this article on competitor analysis:
Along with quantitative metrics and qualitative user tests, competitor inspiration is another valuable source of data. At its heart you are looking for things that work and patterns that users will already understand. Only copying one site is like only having one data point: liable to lead you completely astray.
The final chunk of up-front work I like to do is to run a user test. This gives me lots of evidence for why users behave the way they do and why they are—or aren’t—converting. It also acts as a benchmark of how the website was performing before I started work, and helps to prioritise which pages need the most attention.
In-person testing can be seen as a bit excessive when testing the current website (as clients are usually more interested in making a new version), so I need to use quick methods. Remote testing offers that speed as well as a way to record the sessions. There are two remote methods I will use depending on the project.
This involves me facilitating the test with users in a different location. There’s no fancy equipment needed and it’s great for projects where the product is internal or for a specialist audience. I’ve written about how I do it with Skype here.
Once you’ve started the video call, ask them to share their screen with you. You’ll then get a view of their screen so you can see how they use your website, alongside a small shot of their head, which is handy for checking their reaction to different things (helping you judge if they are happy or confused).
The most common type of user test I run is the unmoderated test, using the site UserTesting. This is great for getting the real reaction of users as they browse around the web and is ideal for customer-facing sites. I also love the UserTesting platform as it records everything and provides videos that can easily be clipped for sharing. I’ve had a lot of practice with UserTesting and have written here about how to get great results when setting up your tasks:
Be careful and clear in how you write your tasks as this is the main interface you have with your users. You’re not actually there to clarify your points so the words have to do the work. Try and summarise each task in short, snappy sentences and try to avoid ambiguity. In the past I’ve made them a touch too wordy and that can instantly throw people as they get confused with what is being asked of them.
At this point in the process it’s mostly about wireframing or prototyping. As I’m spending most of my time on this there’s less evidence-gathering going on, but there are a few things that can be done to support the design work.
During the design phase it’s possible I'll come up with a few options and we can't decide which approach to take. UsabilityHub is a site that allows you to run micro user tests with just JPGs or flat designs, so it’s perfect for testing work in progress.
This guide contains my advice on how to do design tests, based on my years of experience:
This is a chance to gather user feedback early in the process and shape your design decisions with evidence before committing to building anything. It allows you to quickly test out a couple of options and solve arguments if your team can't agree on a way forward. It's fairly lightweight and won't give you huge insight but it is suitable for answering targeted questions and helping you course-correct.
As I get closer to finishing the design work and creating a prototype, I’ll want to user test it early and tweak it rather than waiting for it to be coded. For this I need a quick method that I can run with an incomplete project. I find it’s best to do this in-person as you can explain the context and talk around any unfinished areas.
So the best method to use here is a guerrilla test, and I’ve written up my full one-person method here:
Ideally you’d have two people to carry out a test, one to facilitate and one to take notes but that isn’t always possible. So this method works for when you need some user feedback on your product and you’re a one-man band or the only person prepared to do it. It works best for testing on mobiles as you can do it anywhere.
Once the project is finished and the website is coded up, it’s time to see if what I’ve designed is offering any improvements. There are a few things worth doing at this stage and if you want a fair comparison, you should carry them out in a similar way to previous steps.
The most important of these is more user testing, as it gives me the clearest sense of whether my work has improved on the initial benchmark test. It is also the method with the quickest turnaround on results, so we can quickly tweak anything that hasn’t worked.
See above for my thoughts on user testing methods, but generally I’ll be using UserTesting at this point and I've written more about remote testing here.
Of course if I’ve set up a funnel or a summary dashboard, post-project I’ll be checking in on the key metrics here, usually just once a week. I explained my thinking on this in my (now retired) book:
I like to split my data by date, in particular by week, to see how the performance is changing over time. Weekly measures are in the Goldilocks zone: a day is too short to learn anything meaningful as there are often big fluctuations by day of the week, a month is too long and you’re leaving it too late to solve any problems, but a week is just right.
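Splitting metrics by ISO week is a one-liner sort of job in most analytics tools, but here’s a minimal sketch of the idea using invented daily conversion counts, to show how the noisy day-of-week swings get smoothed into comparable weekly totals:

```python
# Group daily conversion counts into ISO weeks to smooth out
# day-of-week fluctuations. The daily figures are made up.
from collections import defaultdict
from datetime import date

daily = {
    date(2019, 1, 7): 12, date(2019, 1, 8): 30, date(2019, 1, 9): 25,
    date(2019, 1, 14): 28, date(2019, 1, 15): 31, date(2019, 1, 16): 20,
}

weekly = defaultdict(int)
for day, conversions in daily.items():
    year, week, _ = day.isocalendar()  # (year, ISO week number, weekday)
    weekly[(year, week)] += conversions

for (year, week), total in sorted(weekly.items()):
    print(f"{year}-W{week}: {total}")
```

The daily numbers above bounce around by more than a factor of two, while the weekly totals sit close together, which is exactly why a week is the more trustworthy unit for spotting a genuine change in performance.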
Finally, we could A/B test the new design against the old. It’s something I used to do when working in-house but I do a lot less of it now that I work with multiple clients, mainly because a meaningful test on most sites’ traffic involves running it for a long time, and because in my experience most A/B tests reveal very little difference between the options.
I have written about why A/B testing isn't a great use of time for most small websites. Also, the more I’ve learned about it and read the latest thinking on the subject, the more I think tests should only be run by data scientists. There’s just too much room for the average user to misinterpret the results.
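A back-of-envelope sample size calculation shows why the traffic problem bites. This sketch uses the standard normal approximation for comparing two proportions (alpha = 0.05 two-sided, 80% power); the baseline rate and lift are hypothetical, and a real test design should use proper statistical tooling rather than this shorthand:

```python
# Rough per-variant sample size needed to detect a relative lift over a
# baseline conversion rate, via the two-proportion normal approximation.
from math import ceil

def sample_size(baseline, lift, z_alpha=1.96, z_beta=0.84):
    """z_alpha: 0.05 two-sided significance; z_beta: 80% power."""
    p1 = baseline
    p2 = baseline * (1 + lift)
    p_bar = (p1 + p2) / 2  # pooled rate
    n = ((z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar)) / (p1 - p2) ** 2
    return ceil(n)

# A 2% conversion rate and a hoped-for 10% relative lift:
print(sample_size(0.02, 0.10))  # tens of thousands of visitors per variant
```

With a typical small-site conversion rate and a modest lift, the required sample runs to tens of thousands of visitors per variant, which for many sites means months of traffic before the test can say anything at all.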