Whether you’re creating a new mobile application, website, or piece of software, usability testing is a critical part of the design process. User testing provides insight into the user experience (UX) that informs revisions before a product launch. This article defines user testing, explains why it matters for UX design, and offers 5 tips for achieving success.
Usability testing is a technique used in UX design to evaluate a product by testing it on real users. During testing, select target users try out digital prototypes to provide data on the user experience, which helps to validate (or invalidate) product ideas and assumptions. In many cases, designers think they know what users want, when in reality, those assumptions have never been validated by actual users. Usability testing provides the opportunity to test against those assumptions and determine exactly what users want to see in an end product. It helps designers evaluate whether they’re on the right track, and ultimately reduces the risk of wasting time, money, and resources building a product that doesn’t meet user needs or expectations.
There are multiple types of usability testing, such as surveys, interviews, and A/B testing. There are also several usability metrics that can be tracked for added insight. That said, at Checkmate Digital we focus primarily on prototype testing: putting concepts in front of our target audience and gathering feedback. While we used to take pages of notes and track numerous usability metrics, we’ve refined our process over time to generate useful results more quickly. Below are a few tips on how to conduct effective user tests for your own product.
Testing a realistic prototype may sound like a no-brainer, but some types of usability testing involve reviewing sketched-out paper prototypes or A/B testing specific copy or features. We’ve tried different approaches, and we’ve found the most effective is to put a realistic, digital prototype of a given design in front of real target users so we can learn from their experience and the feedback they provide.
While we’ve done both moderated and unmoderated user testing, most of our sessions are moderated—meaning one of our team members observes the user as they’re completing tasks and asks questions as they come up. We’ve found this approach works best because, by watching users as they walk through the prototype and perform tasks, we can make observations about the features they use and don’t use. If users get stuck or seem confused during testing, we’re able to dig deeper and learn more about why a particular website or application feature was difficult to navigate.
Many people say you can never have too much data, but that’s not always the case. Instead of collecting as much information as possible, it’s smarter to identify specific questions you’d like answered during the tests. What insights are you hoping to glean? Go in with a strong idea of three specific points you want to validate, then watch how users address them throughout the testing. The interviewer and notetaker should both reference these questions during the tests and note each time the user does something that validates or invalidates one of them.
It’s important to help people feel as comfortable as possible during user tests. Avoid portraying the experience as a test that can be passed or failed; you want participants to feel comfortable sharing negative comments, because that’s usually the most helpful information. If someone offers only positive comments and no constructive criticism, they probably didn’t allow themselves to be vulnerable, perhaps out of fear that their intelligence would be questioned.
To help avoid this, explain before the session begins that the test has no bearing on their intelligence; it’s simply meant to determine whether the product design is moving in the right direction. Something we like to emphasize when conducting our own user tests is that we’re not testing the person, we’re testing the product.
Make sure the scenario you give people at the beginning of the user test provides the right amount of context without giving anything away. This helps maintain the integrity of the test. Sometimes we like to start by saying, “We’ll be asking you questions throughout the session. You can ask as many questions as you’d like; however, we may not answer all of them, so as not to skew the results.” This helps us avoid injecting our own personal biases into the process.
We believe the best product designs are founded on a data-driven validation process that includes real user feedback. That’s why we reserve day four of our design sprint process for putting a prototype of our clients’ products in front of actual target users to learn critical insights about the direction of the design. If you have an awesome product idea of your own, we’d love to talk strategy and discuss how we can help you bring it to life. Get in touch to get started.