Category: UX advice / Evidence-based design methods

Net Promoter Score

What you can learn

Net Promoter Score (NPS) is a popular method for measuring customer satisfaction with your product or service. It is used across many industries, allowing comparison between very different businesses. It has become hugely popular in recent years and you've probably found yourself answering the question for several services.

At its heart is a simple one-question survey asking 'How likely is it that you would recommend [brand] to a friend or colleague?'. The user answers on a scale of 0-10. A score of 0-6 is considered negative (a detractor), 7-8 is neutral, and 9-10 is positive (a promoter). A simple formula, the percentage of promoters minus the percentage of detractors, is applied to the results to give a total score between -100 (very bad) and 100 (excellent).
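As a quick sketch, the scoring described above can be computed like this (the function name and the sample scores are illustrative, not part of any standard tool):

```python
def nps(scores):
    """Compute a Net Promoter Score from a list of 0-10 survey answers.

    Promoters score 9-10, passives 7-8, detractors 0-6.
    Returns a value between -100 and 100.
    """
    if not scores:
        raise ValueError("no responses to score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 4 promoters, 3 passives, 3 detractors out of 10 responses:
print(nps([10, 9, 9, 10, 8, 7, 7, 6, 3, 0]))  # → 10.0
```

Note that passives don't appear in the formula at all: they dilute the score by counting towards the total number of responses, which is one of the quirks of the system.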

The reason this score is seen as a good metric for understanding whether your customers like your service is because they are being asked whether they would put their reputation on the line and promote you to their nearest and dearest. Potentially it’s a good indication of how you compare to competitors (if you can get that data) and tracked over time it can tell you whether you're improving things or things are getting worse for your customers.

However, it is a bit of a strange system and does have its flaws (covered in detail below). On its own it doesn't tell you much, but like most quantitative metrics it can prompt further investigation when things change. To be truly effective it should be part of a customer satisfaction survey that also gathers more detail on the reasons behind the score.

How to do it

There are a few ways that companies tend to ask this question of their users, the main ones I've experienced are:

To use NPS data to inform the design process, keep track of the scores in a spreadsheet with columns for the written feedback alongside them. The score acts only as a rough positive/negative signal; the written feedback fields are where to pay real attention.

Individual pieces of feedback aren't much use, but as the spreadsheet grows you can categorise the feedback and look for patterns. For example, label entries 'struggles with search filters', 'wants bigger product images', or 'stuck on sign up'. Then keep count of how often each issue appears and focus attention on those that cause the most problems.
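A minimal sketch of that tallying step, assuming you've exported the spreadsheet rows as score/label pairs (the example labels and data are hypothetical):

```python
from collections import Counter

# Hypothetical spreadsheet rows: (score, category labels assigned
# to that respondent's written feedback)
responses = [
    (3, ["struggles with search filters"]),
    (6, ["struggles with search filters", "stuck on sign up"]),
    (9, ["wants bigger product images"]),
    (5, ["stuck on sign up"]),
    (2, ["struggles with search filters"]),
]

# Count how often each categorised issue appears across all feedback
tally = Counter(label for _, labels in responses for label in labels)

# Most frequently mentioned issues first
for label, count in tally.most_common():
    print(f"{count}x {label}")
```

Sorting by frequency like this is what lets you focus design attention on the issues causing the most problems, rather than reacting to whichever piece of feedback arrived last.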

Combining this with live chat and general customer feedback helps give a sense of the scale of issues on a website. They would of course need investigating further with user testing to truly understand why people were struggling.

Watch out for

There are many problems with an over-reliance on NPS; these are just some of them:

There are even more reasons to be cautious covered in this article.

Example tools (and cost)

You can just use your normal email provider to send the question to users, or use Google Forms (free), Wufoo (free & from £12/mo), or SurveyMonkey (free & from £26/mo) to do something more comprehensive.

There are purpose-made plug-ins such as Delighted (free and paid) or Promoter.io (from $50/mo).

How long does it take?

Setting up the method of gathering will probably take about half a day. Checking the results and relevant feedback should be a job that takes about 30 minutes per week.

How often should you use it?


Resources

Last updated on 29 January 2018
14 tools to gather evidence – guide
