TABLE OF CONTENTS
Step 1: Defining the objectives of your study
Step 2: Defining tasks & questions
Step 3: Recruiting your participants
Step 4: Piloting the study
Step 5: Analyzing videos
Usability testing is a technique where representative participants perform tasks on an interface of a website or application.
The tasks typically reflect the most common and important activities a user needs to perform. While testing, a facilitator observes the participant's behavior to identify any issues that prevent task completion.
UXprobe supports unmoderated remote usability testing, which is similar to moderated usability testing except that participants complete the tasks in their own environment, on their own device, and without a facilitator present. The tasks are pre-determined and presented to the participant through the platform. UXprobe records video of participants interacting with the interface while they think aloud.
Before you can start testing, you need to understand why the study is being conducted.
1. Why do you want to conduct the study?
- Can the user actually make a transfer from their bank account to another?
- Why do users drop out of the flow just before buying our products?
2. What specifically do you want to get answers about?
- Do users understand the terminology used in the labels of buttons, menus, navigation, etc.?
- Can users easily navigate from one screen to the other, or do they get stuck?
To get insights from your study, your participants need something to do; these assignments are referred to as tasks. The tasks typically reflect the most important activities a user wants to perform, or flows you believe your users will struggle with.
During an unmoderated remote usability test, participants cannot receive any assistance, so it needs to be clear to them what is expected. On the one hand, you need to provide enough detail for participants to complete the tasks on their own. On the other hand, you need to avoid excessive information that is not crucial to the task.
Extra tip: do not tell your participants what to do, but rather describe what you expect from them.
Although users might be able to complete these tasks successfully, they might not be happy about them because, for example, it took too long or they had to provide too much information. It is good practice to ask questions after every task about how satisfied participants were, whether they were missing features, and so on. At the end of the study, ask questions to understand their overall assessment of the experience.
- Closed questions after each individual task to measure ease of task completion.
- Open-ended questions allow participants to describe their experience and any issues they encountered.
- Customer satisfaction with the product, Net Promoter Score and System Usability Scale (SUS), which can be used to benchmark the product over time and against competitors.
- Open-ended questions should be used to allow participants to provide feedback and to understand why issues are occurring.
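The Net Promoter Score and System Usability Scale mentioned above have standard scoring formulas: SUS converts ten 1-5 item responses into a 0-100 score (odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is multiplied by 2.5), while NPS is the percentage of promoters (9-10 on a 0-10 "would you recommend" question) minus the percentage of detractors (0-6). A minimal sketch of both calculations (function names are my own, not part of UXprobe):

```python
def sus_score(responses):
    """SUS score (0-100) from the ten standard SUS items, each answered 1-5.
    Odd-numbered items (index 0, 2, ...) score (r - 1); even-numbered
    items score (5 - r); the total is scaled by 2.5."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

def nps(scores):
    """Net Promoter Score from 0-10 recommendation answers:
    % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)
```

A neutral participant answering 3 on every SUS item scores exactly 50, which is a handy sanity check when wiring up your own analysis.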
Keep the number of tasks limited to about 3 to 5, and make sure the study does not take longer than about 15 minutes; otherwise your participants will become tired and abandon the study without giving you any useful feedback.
Once you have your tasks and questions defined, you need to share them with participants who are representative of your product's users. There are several ways to distribute the study:
- Email the study to a number of existing customers; they are representative participants because they are already using your product.
- Social media can be used to recruit participants, but you have to make sure they are representative for your audience.
- Intercept people who are actually using the website or application. Using a pop-up window, you can invite them to participate in your study.
- A panel can be used when you do not have existing customers yet.
Start by writing down your goals and the content of your test (context, tasks, questions, etc.), for example in a Word document. Feel free to use UXprobe's template (in Word or in PDF). This document is a good basis for discussing the content of the test with your colleagues or your boss.
Once the content is final, you can enter it into UXprobe's User Testing tool. Find here a complete guide to help you set up your study.
Then, try out your study with a subset of participants or in a preview mode. This will allow issues with the prototype, technology, tasks or questions to be ironed out.
Once you have collected your results it is analysis time.
To begin with, look at some overarching metrics such as overall task completion and customer satisfaction. These results are automatically calculated in UXprobe. This will give you an overall feel for the effectiveness of the product.
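Task completion is the simplest of these metrics: the share of sessions in which the participant finished the task. UXprobe computes this for you, but as an illustration of what the number means, here is a sketch using a hypothetical session structure (the "completed" flag is my own naming, not UXprobe's data model):

```python
def completion_rate(sessions):
    """Percentage of sessions in which the task was completed.
    Each session is a dict with a boolean 'completed' flag
    (hypothetical structure for illustration only)."""
    completed = sum(1 for s in sessions if s["completed"])
    return 100 * completed / len(sessions)

sessions = [{"completed": True}, {"completed": True}, {"completed": False}]
rate = completion_rate(sessions)  # about 67%: worth reviewing this task
```

A low rate on one task relative to the others is the signal to drill into the individual session videos, as described next.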
For tasks where users are not performing well, look at the sessions and start with the ones where users abandoned the task and try to understand why. Observe behavior to identify the elements of the interface that are causing the problems.
Also watch the sessions of participants who gave low ratings or negative feedback on your questions, to understand why they were not satisfied.