Your stakeholders are eager to know if it will convert, so they ask you to run a study with a few simple research questions, the main one being: "Would people buy this?"
It seems straightforward. But as experienced UX researchers know, what people say they'll do and what they actually do are often worlds apart. Designing studies that actually get to the bottom of this can be very tricky.
Teresa Torres illustrates this challenge really well in one of her articles:
"I recently asked a woman what factors she considered when buying a new pair of jeans. She didn't hesitate in her answer. She said, 'Fit is my number one factor.' That seems reasonable. It's hard to find a pair of jeans that fit well. I then asked her to tell me about the last time she bought a pair of jeans. She said, 'I bought them on Amazon.' I laughed and asked, 'How did you know if they fit?' She said, 'I didn't, but they were a brand I liked and they were on sale.'"
This example highlights what is known as the attitude-behavior gap. The woman thinks fit drives her jeans-purchasing decisions, but her actual behavior reveals that brand loyalty and price are the real motivators.
When we ask direct questions about future intent ("Would you buy this?" or "Would you sign up here?"), we're not measuring actual behavior; we're measuring how people think they would behave in a hypothetical situation. The problem is that we humans are notoriously bad at predicting our own future actions.
Getting this wrong isn't just a minor research flaw. It can lead to:
So how do we bridge this gap between what people say and what they'll actually do? Let's explore seven proven methods for uncovering true intent—without relying on what users say they'll do.
To get closer to what really drives people's behavior, we can apply a set of tried-and-tested UX research techniques and methods.
The approach: Instead of asking "Would you buy this?", ask "Tell me about the last time you purchased *this* or *something very similar*."
Why it works: Past behavior is a strong predictor of future behavior. By understanding how people have actually made similar decisions before, you gain insight into their decision-making process and mental models, which helps you assess whether an idea fits into the world of its anticipated users.
Example questions:
The approach: Create a button, link, or form that appears to offer access to a new feature but instead captures click data and explains that the feature isn't available yet.
Why it works: Fake door testing measures actual behavior (clicks) rather than stated intentions, providing real data on interest levels without requiring full feature development. Because most users won't be familiar with this pattern, consider the tips below to make sure you're not hurting the overall experience.
Implementation tips:
Keep in mind that while this is a cost-effective way to test interest without building the whole feature, it should be used intentionally and sparingly, as it can frustrate and disappoint users.
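If you're curious what this looks like in practice, here's a minimal front-end sketch. The `trackEvent` helper, element ID, and copy are placeholder assumptions rather than a prescribed implementation; the point is simply that the click gets logged and the user gets an honest explanation right away.

```typescript
// Minimal fake-door sketch: the button looks like a real feature entry point,
// but clicking it only records interest and shows an honest "not yet" message.

// Stand-in for whatever analytics call your stack provides.
function trackEvent(name: string, props: Record<string, string>): void {
  console.log(`(analytics would record ${name} here)`, props);
}

function mountFakeDoor(buttonId: string, featureName: string): void {
  const button = document.getElementById(buttonId);
  if (!button) return;

  button.addEventListener(
    "click",
    () => {
      // Measure the behavior (the click), not a stated intention.
      trackEvent("fake_door_click", { feature: featureName, path: location.pathname });

      // Be transparent right away so users don't feel tricked,
      // and give them a way to stay in the loop.
      button.insertAdjacentHTML(
        "afterend",
        `<p role="status">This feature isn't available yet. Thanks for your interest!
          <a href="/waitlist">Join the waitlist</a> to hear when it launches.</p>`
      );
    },
    { once: true } // One honest reveal per visitor; no repeated dead ends.
  );
}

// Hypothetical feature entry point used for illustration.
mountFakeDoor("export-to-pdf-button", "export_to_pdf");
```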
The approach: To learn about true intent in a familiar environment, such as a new option on an existing website, have users interact with your product in their natural context without a moderator present, then analyze their behavior.
Why it works: When people know they're being watched, they often change their behavior, a phenomenon known as the Hawthorne effect. Even with some limitations, unmoderated studies capture more natural interactions.
Tools and techniques:
Unmoderated studies allow participants to interact with products in their natural environment, often providing more authentic behavioural insights than facilitated conversations. This approach works best for concepts similar to existing products or services; it may not yield many useful insights for new, innovative concepts.
The approach: Trigger short surveys at specific points in the user journey to capture in-the-moment intentions.
Why it works: By asking questions while users are actually engaged in relevant tasks, you reduce recall bias and capture more accurate data about their motivations.
Best practices:
Intercepts can feel disruptive to users, but they are also a very powerful way of learning about user intent in an extremely contextualised way with minimal bias. They are most useful when you pair an opening question like "Why are you visiting this website today?" with follow-up questions about task success and overall experience.
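For illustration, here's a minimal sketch of how a journey-triggered intercept might work, assuming a hypothetical `showSurvey` helper and a "checkout completed" trigger; in practice you'd typically lean on your survey tool's SDK rather than hand-rolling this.

```typescript
// Sketch of a journey-triggered intercept: fire one short survey right after
// a meaningful moment (here, a completed checkout), at most once per visitor.

// Stand-in for your survey tool's display call.
function showSurvey(surveyId: string): void {
  console.log(`(survey tool would render ${surveyId} here)`);
}

const SURVEY_ID = "post-checkout-intent";
const SEEN_KEY = `survey_seen_${SURVEY_ID}`;

function maybeIntercept(eventName: string): void {
  // Trigger at a specific point in the journey, not on page load,
  // so answers reflect what the user was actually doing.
  if (eventName !== "checkout_completed") return;

  // Keep it non-disruptive: never show the same intercept twice.
  if (localStorage.getItem(SEEN_KEY)) return;
  localStorage.setItem(SEEN_KEY, "1");

  // Keep it short: one opening question ("Why are you visiting today?")
  // plus a couple of follow-ups on task success and overall experience.
  showSurvey(SURVEY_ID);
}

maybeIntercept("checkout_completed");
```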
The approach: Collect both stated preferences through interviews and actual behavior through analytics, then analyze the discrepancies.
Why it works: This approach directly measures the gap between what people say and what they do, providing insight into unconscious motivations.
Implementation example:
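As one hypothetical way to operationalise this, the sketch below joins each participant's stated top factor with the factors their analytics events actually point to, then flags the mismatches. All field names and the action-to-factor mapping are illustrative assumptions.

```typescript
// Sketch: compare what each participant said mattered most with what their
// usage data shows they actually did. All names here are illustrative.

interface StatedPreference {
  userId: string;
  topFactor: string; // e.g. "fit", "price", "brand" from an interview or survey
}

interface UsageEvent {
  userId: string;
  action: string; // e.g. "used_size_guide", "applied_coupon", "filtered_by_brand"
}

// Map each observed action to the decision factor it is evidence for.
const ACTION_TO_FACTOR: Record<string, string> = {
  used_size_guide: "fit",
  applied_coupon: "price",
  filtered_by_brand: "brand",
};

function findSayDoGaps(stated: StatedPreference[], events: UsageEvent[]): string[] {
  const gaps: string[] = [];
  for (const pref of stated) {
    const observedFactors = events
      .filter((e) => e.userId === pref.userId)
      .map((e) => ACTION_TO_FACTOR[e.action])
      .filter((f): f is string => Boolean(f));
    // Flag participants whose stated top factor never shows up in their behavior.
    if (observedFactors.length > 0 && !observedFactors.includes(pref.topFactor)) {
      gaps.push(`${pref.userId}: said "${pref.topFactor}", behaved like "${observedFactors[0]}"`);
    }
  }
  return gaps;
}

// The jeans example in miniature: "fit" is stated, but the behavior says "price".
console.log(
  findSayDoGaps(
    [{ userId: "p1", topFactor: "fit" }],
    [{ userId: "p1", action: "applied_coupon" }]
  )
);
```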
The approach: Create interactive prototypes that include measurable elements like clickable pricing options, feature toggles, or commitment indicators. Put some extra effort into creating realistic and detailed scenarios so there is enough context for participants to really put themselves in the situation.
Why it works: This bridges the gap between hypothetical questions and real product interaction by creating a semi-realistic decision environment. Many usability studies focus on task success and generic satisfaction questions but lack metrics tied to the specific research questions at hand; deeper alignment and planning upfront can close that gap.
Measurement techniques:
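As a rough sketch of what those measurable elements can look like, here's one hypothetical way to instrument a prototype so that pricing-tier choices, feature toggles, and time-to-decision become loggable signals. The event names and the `logPrototypeEvent` helper are assumptions, not any specific tool's API.

```typescript
// Sketch: instrument a clickable prototype so decisions made inside the
// scenario (pricing choice, feature toggles, time-to-decision) become
// measurable signals instead of talking points.

// Stand-in for your prototype tool's or app's logging call.
function logPrototypeEvent(name: string, props: Record<string, string | number | boolean>): void {
  console.log(`(would log ${name} here)`, props);
}

const sessionStart = Date.now();

function onPricingTierClick(tier: "basic" | "pro" | "enterprise"): void {
  logPrototypeEvent("pricing_tier_selected", {
    tier,
    // Time-to-decision often says more than the click itself.
    secondsElapsed: Math.round((Date.now() - sessionStart) / 1000),
  });
}

function onFeatureToggle(feature: string, enabled: boolean): void {
  logPrototypeEvent("feature_toggled", { feature, enabled });
}

// Example: a participant picks "pro" and switches on a hypothetical feature.
onPricingTierClick("pro");
onFeatureToggle("offline_mode", true);
```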
Need fast feedback on your prototypes? Try Unmoderated Prototype Testing in Great Question for free.
The approach: Instead of asking about your specific product, ask users about their experiences with similar products they've already used.
Why it works: This leverages existing behavior patterns and removes the hypothetical element from your questions.
Example methods:
Often, the challenge isn't knowing the right research methods; it's convincing stakeholders not to rely on overly direct intent questions. Here are some strategies for handling those conversations:
Share examples like Teresa Torres' jeans study to illustrate why direct questions about future behavior are unreliable and can do more harm than good. If possible, share a firsthand experience where you've seen the attitude-behavior gap in action with your own users.
When a stakeholder asks for a "Would you buy this?" study, have alternatives ready that will provide more reliable data instead of simply saying “No” to the request. Act as an advisor, not a gatekeeper.
Instead of a single "Would you buy this?" question, suggest a combination of metrics that together provide a more reliable indicator of intent. It's not forbidden to ask the question, but it should always be backed up with probing follow-ups, requests for examples, and observation of actual behavior.
Collaborating with go-to-market or product teams earlier can help you avoid being pushed to ask problematic questions late in the development process. Many of the negative examples described above are really just shortcuts, so it helps to acknowledge that the underlying question is more complex and often needs a combination of methods and studies to generate a reliable answer.
For truly innovative solutions where users have no reference point, traditional intent testing becomes even more problematic. In these cases:
A recent experience I had while working on an innovative B2B platform in a heavily regulated industry highlights this well. Directly asking whether users would like the new system would have been meaningless: it would have introduced so many changes at once that users couldn't even imagine the result, and we didn't have any wireframes or prototypes at the time. Instead, we focused on identifying current frustrations and unfulfilled wishes and found strong alignment with the new platform's concept, while also uncovering potential adoption barriers.
The next time stakeholders ask you to find out if users "would buy" a product or feature, remember: the direct approach is often the least reliable one. By employing these alternative methods, you'll gather more accurate insights into user intent—and ultimately build better products that people will actually use.
True intent testing isn't about asking users to predict their future behavior; it's about understanding the complex factors that drive their actual decisions.
By focusing on observation over declaration, past behavior over future intentions, and real metrics over hypothetical scenarios, you'll bridge the attitude-behavior gap and deliver insights your team can truly count on.
Johanna is a freelance Senior UX researcher and UX advisor, co-founder of the UX consulting firm Jagow Speicher, and a researcher at heart. Working with diverse UX teams, she helps them mature, run impactful research, manage and optimise their UX practice, design powerful personalisation campaigns, and tackle change management challenges. Outside of work, she writes about all things UX Research, UX Management, and ResearchOps. Feel free to reach out to her here or visit her website to learn more. 👋🏼