Preference testing: Evaluating what users like & why

By Jack Wolstenholm
May 28, 2024

Imagine you are designing a new product. You've mapped out the features, perfected the interface, and you’re confident it will resonate well with the market. However, there’s a crucial step you might be overlooking — understanding your audience's true preferences.

Sure, you can predict what they might like, but without concrete data, you're essentially shooting in the dark. This is where preference testing comes into play. It enables you to refine product features based on concrete data, rather than guesswork.

Through a systematic comparison of design variants, you gather essential feedback directly from your intended audience, helping to ensure that your product resonates with them on every level. Below, we explore the nuances of preference testing, including what it entails, why it's crucial for UX researchers, and how to conduct this type of test well.

What is preference testing?

Preference testing, also known as desirability testing, is a research method that allows you to gauge user reactions to different design options to determine which one your audience prefers. During the process, you present participants with two or more variations of a product's design element — such as a webpage layout, icon style, or color scheme — and ask them to choose the one they like best.

The test focuses solely on gauging aesthetic and emotional aspects without digging into deeper usability issues. You ask participants to provide their choice and, optionally, explain their preference.

Follow-up questions help you understand the reasons behind their preferences. Whether it's the visual appeal, content clarity, or trustworthiness conveyed by the design, each piece of feedback provides insights vital for refining your prototypes.

Preference testing vs. A/B testing

Preference testing and A/B testing have similarities, which is why they're commonly confused. But they aren't the same.

Whereas preference tests seek qualitative data to understand the why behind what users say they like and dislike, A/B testing is a quantitative research method that measures performance based on user behavior. Think engagement, conversion, or any other rate you can calculate.

Preference tests are typically interactive, either in an interview or survey format, so a user must agree to participate in a researcher's study. (Get our free preference test survey template here.)

On the other hand, A/B tests are typically run asynchronously with a higher volume of participants to collect a significant amount of data. Oftentimes, users don't even know when they're part of an A/B test, since this functionality is built into many modern web hosting and product development tools.
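To make the contrast concrete, here's a minimal sketch, in Python with hypothetical visitor and conversion counts, of the kind of calculation an A/B test ultimately produces: a conversion rate per variant plus a two-proportion z-test to check whether the gap between them is likely real rather than noise.

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare two conversion rates; return the z statistic and two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)               # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided, via normal CDF
    return z, p_value

# Hypothetical counts: conversions out of total visitors for each variant
z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=158, n_b=2450)
print(f"A: {120/2400:.2%}  B: {158/2450:.2%}  z = {z:.2f}, p = {p:.3f}")
```

A preference test, by contrast, pairs a participant's stated choice with their reasoning, which no conversion metric can capture on its own.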

Why is preference testing important?

Preference testing is crucial because it grounds your design decisions on actual user feedback rather than assumptions. When you present various design options and gather feedback on preferences, you gain a clear understanding of what resonates best with users.

You learn their likes and dislikes early in the design process and can act on that insight before investing too much time in designs your users won't find appealing. This approach prevents costly redesigns and ensures your product's features are optimized before full-scale development begins.

Preference testing not only helps you evaluate your design choices but also highlights potential areas for improvement. It ensures that the final product effectively meets user expectations, enhancing satisfaction and usability. Through this method, you can refine your designs to better communicate your intended message and evoke the desired emotional response.

When to run a preference test

Determining when to run a preference test is key to maximizing its benefits. The ideal time is during the early stages of your design process, when you can still evaluate design concepts before committing significant resources to development.

Another optimal moment for preference testing is before a major redesign or update. Here, you can compare new design proposals with existing ones, gauging user reaction to proposed changes. In this case, a preference test helps you prevent costly design mistakes and ensures that updates contribute positively to the user experience.

How to conduct a preference test

Conducting a preference test involves the following steps to help you gather the most relevant and actionable data: 

Define objectives

Start by clearly defining what you aim to learn from the preference test. Are you comparing visual appeal, understanding message clarity, or evaluating emotional responses? Setting specific objectives will help you structure the test effectively and determine what to measure.

Select measurement methods

How you measure responses depends on whether your focus is on qualitative or quantitative data. For qualitative feedback, encourage participants to describe their preferences using adjectives or short phrases, either by selecting words from a provided list or by freely describing their impressions.

For quantitative feedback, use scales such as numerical ratings to quantify how much a participant prefers one design over another.
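As a rough illustration of how both kinds of responses might be tallied afterwards, here is a small Python sketch with hypothetical answers for a single design variant: the qualitative word picks are counted, and the numerical ratings are averaged.

```python
from collections import Counter
from statistics import mean

# Hypothetical responses collected for one design variant
word_choices = ["clean", "trustworthy", "clean", "modern", "clean"]  # qualitative word picks
ratings = [4, 5, 3, 4, 5]                                            # quantitative, 1-5 scale

print(Counter(word_choices).most_common(3))                  # most frequently chosen adjectives
print(f"Average preference rating: {mean(ratings):.1f} / 5")
```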

Create design variants

Develop multiple design variants based on the objectives you've set. Make sure the variants are distinct enough to allow meaningful comparison; if the designs are too similar, participants may struggle to discern differences, which can skew your results.

Recruit participants

Recruit a diverse group of participants who represent your target audience. To streamline this process, use a UX research platform like Great Question. You can upload a list from your CRM to create a panel of your users or access a broad panel from Respondent's 3 million B2B and B2C research participants.

Start recruiting research participants with Great Question.

Conduct the test

Present the design variants to participants in a consistent, controlled environment, and randomize or rotate the order in which each participant sees them. This reduces order bias and variability in the data you collect. Ask participants to choose their preferred design and to explain their choice, focusing on the aspects outlined in your objectives.

Ask specific follow-up questions

To deepen your understanding, avoid vague questions such as "Which design is best?" Instead, focus on specific attributes you want to test, like "Which design best communicates a sense of trust?" or "Which layout is easier to navigate?" These targeted questions help you collect actionable feedback that directly addresses your research objectives.

Analyze feedback

Once the test is complete, analyze the feedback to draw conclusions that align with your initial objectives. Look for patterns or common themes in the responses that can inform future design decisions. 
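For a simple head-to-head test, one way to check whether a pattern is more than chance is an exact binomial test: would the winning variant's margin be plausible if participants were effectively picking at random? The Python sketch below uses hypothetical counts; with more than two variants, a chi-square test serves the same purpose.

```python
from collections import Counter
from math import comb

# Hypothetical choices from 40 participants in a two-variant preference test
choices = ["A"] * 27 + ["B"] * 13
tally = Counter(choices)
n, k = len(choices), tally["A"]

# Exact two-sided binomial test against a 50/50 "no preference" baseline:
# sum the probability of every outcome at least as extreme as the observed one.
p_value = sum(comb(n, i) for i in range(n + 1) if comb(n, i) <= comb(n, k)) * 0.5 ** n

print(tally)                                       # Counter({'A': 27, 'B': 13})
print(f"{k}/{n} prefer variant A, p = {p_value:.3f}")
```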

3 tools you can use for preference testing

Here’s a rundown of some popular tools that can help you streamline your preference testing efforts:

UXtweak

UXtweak specializes in helping researchers set up tasks where respondents can choose their favorite design from a lineup using media such as pictures, sound, and video. This tool also allows you to ask follow-up questions to delve deeper into the respondents' reasoning.

Lyssna

Lyssna offers access to a large and diverse panel of over 530,000 people across more than 100 countries, making it an excellent choice for global preference testing. It supports testing various materials, including products, videos, logos, packaging, and animations.

Useberry

Useberry guides you through setting up a test by selecting a preference type, defining actions for testers, and providing context for the tasks. It enables the addition of multiple variations for direct comparison and includes options for follow-up questions to gather detailed insights on each variation’s performance.

The bottom line

Preference testing is essential in shaping products that truly resonate with users. By integrating this research method early in your design process, you gain crucial insights that align your product with user expectations. This approach enhances user satisfaction and streamlines your design process, saving time and resources while maximizing impact.

As you continue to refine your product or website, remember that each preference test you conduct brings you closer to delivering experiences that users not only need but genuinely enjoy.

Related read: The 2024 prototype testing guide

Jack is the Content Marketing Lead at Great Question, the end-to-end UX research platform for customer-centric teams. Previously, he led content marketing and strategy as the first hire at two insurtech startups, Breeze and LeverageRx. He lives in Omaha, Nebraska.
