User Research Methods: How to Choose the Right One?
By Adam Fard, Lead UX Designer
There are many different user research methods that can help you learn more about your users’ behavior. However, unless you have unlimited time and budget, chances are you won’t be able to use multiple research methods at the same time. Instead, you will have to settle on a specific method at each stage of your design process.
“The goal of a designer is to listen, observe, understand, sympathize, empathize, synthesize, and glean insights that enable him or her to make the invisible visible.”
— Hillman Curtis
Now, this is the part where things can get a bit tricky.
How do you know which user research method is the right one for you? What makes one method better than the other? Is user research even necessary? While it may seem overwhelming at first, choosing — and using — a user research method doesn’t have to be complicated.
Here’s an overview of different user research methods that will help you make the right choice.
Interviews

Interviews are a great way to gather qualitative data about your users. As the name suggests, this research method requires you to meet with users, typically one at a time, and discuss a wide range of user-related topics, such as their feelings, motivations, objectives, routines, and pain points.
To get the most authentic results out of user interviews, it’s advisable that you:
Ask open-ended questions.
Avoid questions that elicit “yes” or “no” answers, as well as leading and vague questions. These kinds of questions won’t give you the information you need to understand how and why users interact with your product. Instead, ask dialogue-provoking questions to get more out of the user.
Construct follow-up questions.
Be ready for different answers, and prepare follow-up questions based on your research goals. Do not paraphrase the user’s answer. Instead, get them to elaborate on their answer as a means to get more information out of them.
Use a script.
For more reliable and valid results, prepare an interview script beforehand. Doing so will ensure consistency and eliminate variables that may distort — or influence — answers.
Control your reaction.
Keep your reactions neutral at all times during the interview. Strongly reacting to users’ answers may affect how they respond to the rest of the questions.
Organize the collected user data in an easy-to-read format. Drawing up a mind map is one effective way to record and present gathered feedback to the rest of the team.
One of the best things about user interviews is that they are fast, straightforward, and specific. Plus, you can arrange interviews at any stage of your design process, which makes them even more convenient.
Surveys and Questionnaires
Surveys and questionnaires are another way to get feedback from users. They are similar to interviews in the type of information you can gather, but differ in the level of detail. With interviews, you can have a list of follow-up questions ready so you can get down to the very bottom of a user’s experience, while with surveys and questionnaires, you cannot. This is mainly because you do not have the same level of direct interaction with users as you do in interviews.
But there are pros to using surveys and questionnaires. For instance, they allow you to reach many more users in a short period of time, collect a larger volume of responses, and are relatively inexpensive to run. Plus, there are countless online tools that you can use to create, analyze, and group user responses. So, they’re really useful if you need quick feedback or are working with a tighter budget.
A/B Testing

A/B testing lets you test two or more variations of a design to determine which one users respond to most positively. With A/B testing, you can measure the performance of various design elements, including:
Placement of text
Placement of images
Graphs and charts
It’s all about the numbers with A/B testing. So, use this method as a means to get quantitative results about live products or early prototypes.
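Since A/B testing is about the numbers, the results usually need a quick significance check before you act on them. Here is a minimal sketch of a two-proportion z-test in Python; the visitor and conversion counts are hypothetical, purely for illustration.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: variant A converted 120/1000 visitors, variant B 150/1000
z, p = two_proportion_z_test(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a p-value below 0.05 suggests a real difference
```

A p-value above your significance threshold means the observed difference could plausibly be noise, in which case the test should run longer or reach more users.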
Card Sorting

Card sorting is a cost-effective research method that helps you validate your information architecture. The way it works is simple: write down the major features or topics associated with your product and ask users to group them in whatever way feels most logical to them. You can even ask users to name the different groups so you can compare their choice of words to yours.
“We tend to be distracted by the voices in our own heads telling us what the design should look like.”
— Michael Bierut
The key purpose of a card sort is to understand how intuitive your workflow is. Namely, how easy is it for users to navigate through your product? For example, can users correctly guess which feature can be found under a specific category? Does your perception of the information hierarchy align with your users’ perception?
Taking the guesswork out of workflow design is exactly what card sorting is all about. Rather than focusing on how you envision the product, find out how users see it. As soon as the results from a card sort are in, you can start creating a new workflow or fine-tuning your existing user journey map.
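A common way to analyze open card-sort results is a co-occurrence count: how often did participants place two cards in the same group? The sketch below, with hypothetical card names and groupings, shows the idea in Python.

```python
from itertools import combinations
from collections import Counter

# Hypothetical open card-sort results: each participant's groupings of cards
participants = [
    [{"Invoices", "Payments"}, {"Profile", "Settings"}],
    [{"Invoices", "Payments", "Settings"}, {"Profile"}],
    [{"Invoices", "Payments"}, {"Profile", "Settings"}],
]

# Count how often each pair of cards landed in the same group
co_occurrence = Counter()
for groups in participants:
    for group in groups:
        for pair in combinations(sorted(group), 2):
            co_occurrence[pair] += 1

for pair, count in co_occurrence.most_common():
    agreement = count / len(participants)
    print(f"{pair}: grouped together by {agreement:.0%} of participants")
```

Pairs with high agreement are safe to place under one navigation category; pairs that split participants are the ones worth rethinking.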
Usability Testing

If you have to choose just one user research method to learn about your users, then a usability test is, by far, your best choice. This is because usability tests give you access to comprehensive data about your design in terms of its look, feel, and usability.
Usability testing measures five key components:
1. Learnability. How easy is it for users to accomplish tasks the first time they encounter your design?
2. Efficiency. How fast can users successfully complete tasks?
3. Memorability. Do users remember how to effectively accomplish tasks after not using the product for a certain period of time?
4. Errors. How many errors do users make, and how easily can they recover from them?
5. Satisfaction. How do users feel about using your product?
Essentially, the goal of usability testing is to identify usability problems, collect qualitative and quantitative data, observe different behaviors, and determine user satisfaction.
To successfully conduct a usability test, you need to first find representative users and prepare a list of tasks for the users to complete. Then you need to observe, listen, and take notes.
With the conclusion of your usability test, you should have enough data to distinguish which parts of your design work, which don’t, and which processes need to be tweaked or removed.
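Turning session notes into the metrics above is straightforward. Here is a minimal sketch, assuming hypothetical per-session notes for one task, that computes completion rate (learnability), average time on successful runs (efficiency), and average error count.

```python
# Hypothetical usability-test session notes for a single task
sessions = [
    {"completed": True,  "seconds": 42,  "errors": 0},
    {"completed": True,  "seconds": 71,  "errors": 2},
    {"completed": False, "seconds": 120, "errors": 4},
    {"completed": True,  "seconds": 55,  "errors": 1},
]

# Share of participants who finished the task at all
completion_rate = sum(s["completed"] for s in sessions) / len(sessions)

# Average time, counting only sessions where the task was completed
completed = [s for s in sessions if s["completed"]]
avg_time = sum(s["seconds"] for s in completed) / len(completed)

# Average number of errors across all sessions
avg_errors = sum(s["errors"] for s in sessions) / len(sessions)

print(f"Completion rate: {completion_rate:.0%}")
print(f"Avg time (successful runs): {avg_time:.0f}s")
print(f"Avg errors per session: {avg_errors:.2f}")
```

Tracking these numbers across test rounds is an easy way to show whether a redesign actually moved the needle.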
Learning more about your users and the way they interact with your product, in both moderated and unmoderated settings, is essential to building a product that users will love.
Use any one — or a combination — of these user research methods the next time you decide to do your homework. It’s well worth the effort.