
User testing tool Loop11 reveals top tips for effective usability testing on prototypes, from web or app benchmarking to mobile prototypes

As most of you already realise, creating prototypes is an integral part of product design. You would be hard pressed to find any successful company that doesn’t create prototypes of its website or app experiences as part of the overall development flow.

Of course, the main reason for creating prototypes is to quickly and cost effectively test ideas and see if the new design is likely to help you achieve your larger goals. At Loop11 we see hundreds of users running usability tests on prototypes of different shapes and sizes. Being able to support our users and see how they are running their tests has given us insight into some of the most common mistakes people make when attempting to run user testing on a prototype.




Here are the top 7 tips on how to ensure a successful round of user testing on your prototype.

1: Provide affordance for alternative navigational paths

We frequently see prototypes that omit all functionality and navigation except the very specific path the designer wants the user to follow. This prevents participants from providing any meaningful feedback through their behaviour. Sure, you can view heatmaps to see which areas of the prototype participants clicked or tapped; however, you won’t capture other valuable data, such as clickstream analysis. In short, by funnelling participants through a one-way navigation you can only hope to confirm your own assumptions.

This doesn’t mean you need to build out your entire website or product. Rather, simply allow users to navigate to other parts of your product or website, which may lead to content instructing them to ‘go back’ and try again. Doing this will surface important insights, such as incorrect information architecture (IA) or labelling assumptions.


2: Form benchmarks against which to accurately measure your prototypes

Most companies, except for complete newbies, will have an existing website or product. If your prototype designs are based around improving something which already exists, then you should first measure what’s already there. This seems obvious; however, many people attempt to make design improvements without having a base to measure their changes against.

Recently, one Loop11 client did have the foresight to run a benchmarking study prior to their prototype test. Once the prototype had been tested, the new design measured favourably against the incumbent design, showing improvement in both System Usability Scale (SUS) and Net Promoter Score (NPS) metrics. Strangely, though, the task completion rate on the new prototype was lower. This led the team to dive deeper into the test data, where they found (by isolating the problem tests and then watching those videos) that error-checking logic in their prototype was stopping users from completing otherwise straightforward tasks.

If the benchmarking data had not been available, then the task success rate from the new prototype would not have stood out as unusual and the problem would have been replicated in the live website design.

3: When a prototype involves a form, ensure there is adequate instruction

Many prototypes will contain a form as part of a sign-up process or similar. Often these require very specific inputs, such as credit card details. Forms are one of the classic situations where not enough thought is given because “it’s only a prototype”. The previous example highlights how this logic can harm your testing. Had a little more effort and time been put into the form, its instructions and its validation, the test would not have been adversely affected and analysis time would have been saved.

Another tip for forms: if the data itself is not important, provide dummy information for participants to copy and paste into the form. This limits the chance of unrelated errors and increases the completion rates of your tests. Faced with a lot of fields to fill out, participants may simply abandon your test.
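To make the idea concrete, here is a minimal sketch of what lenient prototype-form validation might look like. Everything in it is illustrative: the dummy card number, field names and function names are assumptions for this example, not part of any real payment flow. The point is simply that the prototype accepts the dummy data you hand participants, so validation logic never blocks an otherwise straightforward task.

```typescript
// Illustrative sketch only: a prototype form validator that accepts the
// dummy data supplied in the task instructions, instead of enforcing
// real payment rules. The dummy number and field names are made up.

const DUMMY_CARD = "4111 1111 1111 1111"; // test number given to participants

interface CardForm {
  cardNumber: string;
  name: string;
}

// Return an error message, or null if the form should be accepted.
function validatePrototypeForm(form: CardForm): string | null {
  const digits = form.cardNumber.replace(/\s/g, "");
  // In a prototype, accept the dummy number outright so error checking
  // can never stop a participant from completing the task.
  if (digits === DUMMY_CARD.replace(/\s/g, "")) return null;
  if (digits.length < 13) {
    return "Please paste the test card number shown in the instructions.";
  }
  if (form.name.trim() === "") {
    return "Please enter any name, e.g. Test User.";
  }
  return null;
}
```

Note the error messages point participants back to the dummy data rather than asking for real details, which keeps unrelated errors out of your task-completion metrics.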

Find out how to create forms in your Justinmind prototypes!

4: Limit your tests to 5 tasks and 7 questions

Having too many tasks and questions is a classic sign that you are either trying to do too much in the one test, or you do not have a clear focus on what you are trying to discover. We’ve found the best returns are generated with a maximum of 5 tasks and 7 questions. This allows for one follow-up question after each task and two additional questions at the end.

There are of course exceptions to the rule where more tasks or questions are required, but if you aim for a maximum of 5 and 7 and only allow ‘very good reasons’ to add to these then you’ll be in good shape.

Another reason to keep tasks and questions low is the completion rate of your tests. Many companies will either pay participants to complete a test, or offer a reward. An incentive alone will not ensure people will hang around for half an hour completing dozens of questions and tasks. We see completion rates plummet once a test increases beyond 5 tasks and 7 questions. So, consider running multiple tests rather than making participants sit through one big study.

5: Ensure you are testing with the right demographic

When a test is designed with a clear hypothesis in mind, there should also be a very clear picture of who the user is. If you don’t have a user persona in mind, stop and figure one out. Why? If you don’t think about the ‘who’, then you probably aren’t measuring or recruiting to those specifications either. This means that once you have completed a test and are poring over the wonderful data, there is a strong chance that many of your participants are not your target market and you are getting feedback from people who would never use your product. There is no quicker way to build a product failure than to source feedback from people outside your target market.

6: Explain to users why a prototype is low fidelity

Often a prototype is missing some of the bells and whistles of the live website or app, whether that’s functionality or design polish. Participants need to have their expectations properly set, and even if the rationale seems obvious, you should still explain simply and clearly what a prototype is and how it differs from the live product. This allows participants to focus on the task at hand rather than be distracted by elements they feel should be different.

7: Ensure mobile prototypes are tested on mobile phones

This sounds like a no-brainer, right? Well, not always. If your prototype is designed for mobile phones or tablets (or phablets), then go out of your way to ensure that your participants are using the correct devices. Generally, this can be achieved via a screener or some smart auto-redirecting. Loop11 often provides custom assistance to clients who have specific device requirements. A great example is one client who had developed prototypes for iPhone and Android. We set up some simple automatic redirects which could identify first whether a participant was on a mobile phone at all, and then whether they were using an iPhone or an Android device. Participants were then redirected to the correct prototype test.

Imagine if your iPhone prototype had the majority of its test participants using Android phones. There could be some pretty skewed feedback which would in turn send designers down the wrong path for future iterations.
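The redirect logic described above can be sketched with simple user-agent string checks. This is an assumption-laden illustration, not Loop11’s actual implementation: the function names and prototype URLs are placeholders, and user-agent sniffing is a heuristic (some tablets and embedded browsers will not match cleanly), so a screener question remains a sensible backstop.

```typescript
// Sketch of user-agent-based device routing for a prototype test.
// URLs and names are hypothetical placeholders.

type Device = "iphone" | "android" | "desktop";

function detectDevice(userAgent: string): Device {
  const ua = userAgent.toLowerCase();
  if (ua.includes("iphone")) return "iphone";
  // Android phones typically report both "Android" and "Mobile";
  // Android tablets usually omit "Mobile".
  if (ua.includes("android") && ua.includes("mobile")) return "android";
  return "desktop";
}

// Route each participant to the prototype built for their device,
// or to a screen-out page if they are not on a phone at all.
function prototypeUrl(userAgent: string): string {
  switch (detectDevice(userAgent)) {
    case "iphone":
      return "https://example.com/prototype-ios"; // placeholder URL
    case "android":
      return "https://example.com/prototype-android"; // placeholder URL
    default:
      return "https://example.com/screen-out"; // wrong device type
  }
}
```

In a browser, the redirect itself would then be a one-liner along the lines of `window.location.href = prototypeUrl(navigator.userAgent);` run on the test’s landing page.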

User testing prototypes: a final tip

Finally, always run through your test first to see if any problems exist. You may find anything from spelling errors right through to prototype issues. Loop11 provides users with preview functionality which allows for sharing and completion of the full test prior to launch.

Loop11 has a helpful article detailing how Justinmind and Loop11 can be used together to create superior prototype usability tests, and if you’d like some help running a usability study then get in touch with Loop11 consulting to see how they can assist.

Cassandra is Marketing Lead at Justinmind
