What they do, not what they say: Smarter ways to test real user intent

By Johanna Jagow
March 27, 2025

Picture this: Your team has spent weeks designing a new feature.

Your stakeholders are eager to know if it will convert, so they ask you to run a study with a few simple research questions, the main one being: "Would people buy this?"

It seems straightforward. But as experienced UX researchers know, what people say they'll do and what they actually do are often worlds apart. Designing studies that actually get to the bottom of this can be very tricky. 

The attitude-behavior gap: Why you shouldn't ask "Would you buy this?"

Teresa Torres illustrates this challenge really well in one of her articles:

"I recently asked a woman what factors she considered when buying a new pair of jeans. She didn't hesitate in her answer. She said, 'Fit is my number one factor.' That seems reasonable. It's hard to find a pair of jeans that fit well. I then asked her to tell me about the last time she bought a pair of jeans. She said, 'I bought them on Amazon.' I laughed and asked, 'How did you know if they fit?' She said, 'I didn't, but they were a brand I liked and they were on sale.'"

This example highlights what is called the attitude-behavior gap. The woman believes fit drives her jeans purchases, but her actual behavior reveals that brand loyalty and price are the real motivators.

When we ask direct questions about future intent ("Would you buy this?" or "Would you sign up here?"), we're not measuring actual behavior; we're measuring how people think they would behave in a hypothetical situation. The problem is that we humans are notoriously bad at predicting our own future actions.

The real cost of asking the wrong questions

Getting this wrong isn't just a minor research flaw. It can lead to:

  • Product teams building features that never get used
  • Marketing teams creating campaigns that don't convert
  • Companies investing resources based on misleading data

So how do we bridge this gap between what people say and what they'll actually do? Let's explore seven proven methods for uncovering true intent—without relying on what users say they'll do.

Seven approaches to test true intent

To get closer to the real drivers of behavior, we can apply a set of tried-and-tested UX research methods and techniques.

1. Focus on past behavior, not future intentions

The approach: Instead of asking "Would you buy this?", ask "Tell me about the last time you purchased *this* or *something very similar*."

Why it works: Past behavior is a strong predictor of future behavior. By understanding how people have actually made similar decisions before, you gain insight into their decision-making process and mental models. This then helps assess whether or not an idea might fit into the world of its anticipated users.

Example questions:

  • "Walk me through the last time you signed up for a subscription service."
  • "What was the last app you downloaded, and what made you decide to get it?"
  • "Tell me about the most recent time you abandoned an online purchase."

2. Run fake door testing

The approach: Create a button, link, or form that appears to offer access to a new feature but instead captures click data and explains that the feature isn't available yet.

Why it works: Fake door testing measures actual behavior (clicks) rather than stated intentions, providing real data on interest levels without requiring full feature development. Because most users won't be familiar with this pattern, follow the tips below to make sure you're not hurting the overall experience.

Implementation tips:

  • Always explain to users after they click that this was a test
  • Offer something valuable in return for their time (waitlist signup, discount code, etc.)
  • Be transparent about what you're measuring and why
  • Track metrics like click-through rates and compare against baseline engagement

Keep in mind that while this is a cost-effective way to test interest without building the whole feature, it should be used intentionally and sparingly, as it can leave users frustrated and disappointed.
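
To make this concrete, here is a minimal fake-door sketch in TypeScript using plain DOM APIs. The button id, the /api/events endpoint, the waitlist URL, and the helper names are all assumptions for illustration, not a prescribed implementation:

```typescript
// Hypothetical names: trackEvent, showWaitlistModal, the button id, and the
// /api/events endpoint are placeholders for your own stack.

function trackEvent(name: string, props: Record<string, unknown>): void {
  // sendBeacon needs no SDK and survives navigation; swap in your analytics call.
  navigator.sendBeacon("/api/events", JSON.stringify({ name, props, ts: Date.now() }));
}

function showWaitlistModal(): void {
  // Be transparent right after the click and offer something in return.
  const dialog = document.createElement("dialog");
  dialog.innerHTML = `
    <p><strong>This feature isn't available yet.</strong></p>
    <p>You clicked a test button we use to gauge interest. Sorry for the detour!</p>
    <a href="/waitlist/pdf-export">Join the waitlist for early access</a>`;
  document.body.appendChild(dialog);
  dialog.showModal();
}

// The fake door itself: the button looks real, but clicking it only records
// interest (the behavioral data point) and then explains the test.
document
  .querySelector<HTMLButtonElement>("#pdf-export-button")
  ?.addEventListener("click", () => {
    trackEvent("fake_door_click", { feature: "pdf_export" });
    showWaitlistModal();
  });
```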

3. Conduct unmoderated observation studies

The approach: If you want to learn about true intent in a familiar setting, such as an additional option on an existing website, have users interact with your product in their natural environment without a moderator present, then analyze their behavior.

Why it works: When people know they're being watched, they often change their behavior; this is known as the Hawthorne effect. Even with some limitations, unmoderated studies capture more natural interactions.

Tools and techniques:

  • Session recording tools to capture natural browsing behavior
  • Heat maps to visualize where users focus attention
  • Navigation path analysis to understand decision flows
  • Conversion funnels to identify drop-off points

Unmoderated studies allow participants to interact with products in their natural environment, often providing more authentic behavioral insights than facilitated conversations. This approach is best for concepts that resemble existing products or services; it may not yield many useful insights for new, innovative concepts.

4. Deploy intercept surveys at key moments

The approach: Trigger short surveys at specific points in the user journey to capture in-the-moment intentions.

Why it works: By asking questions while users are actually engaged in relevant tasks, you reduce recall bias and capture more accurate data about their motivations.

Best practices:

  • Keep surveys very short (1-3 questions maximum)
  • Target specific actions or pages
  • Ask about current goals rather than future intentions
  • Include open-ended questions to capture reasoning and supporting insights

Intercepts can feel disruptive to users, but they are also a powerful way of learning about user intent in a highly contextualized way with minimal bias. They work best when you open with a question like "Why are you visiting this website today?" and follow up with questions about task success and overall experience.
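
As a rough sketch of what such a trigger can look like in the browser, the TypeScript snippet below fires a one-question intercept after a hypothetical order-confirmation event, caps it at one appearance per visitor, and asks about the current goal rather than future intentions. The event name, endpoint, and markup are assumptions; in practice, most teams would use a survey tool's SDK:

```typescript
// Sketch only: the "order-confirmed" event, the /api/survey endpoint, and
// the markup are assumptions, not a real product's API.

const SURVEY_ID = "intercept_checkout_v1";

function maybeShowIntercept(): void {
  // Show at most once per visitor to limit disruption.
  if (localStorage.getItem(SURVEY_ID)) return;
  localStorage.setItem(SURVEY_ID, "1");

  const dialog = document.createElement("dialog");
  dialog.innerHTML = `
    <form method="dialog">
      <label>What brought you to our site today?
        <input name="goal" autofocus />
      </label>
      <button>Send</button>
    </form>`;
  dialog.addEventListener("close", () => {
    // Capture the in-the-moment answer once the participant submits.
    const answer = dialog.querySelector<HTMLInputElement>("input[name=goal]")?.value ?? "";
    if (answer) {
      navigator.sendBeacon("/api/survey", JSON.stringify({ survey: SURVEY_ID, answer }));
    }
    dialog.remove();
  });
  document.body.appendChild(dialog);
  dialog.showModal();
}

// Fire at the moment of interest, e.g. right after the confirmation page renders.
window.addEventListener("order-confirmed", maybeShowIntercept);
```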

5. Compare stated preferences against behavioral data

The approach: Collect both stated preferences through interviews and actual behavior through analytics, then analyze the discrepancies.

Why it works: This approach directly measures the gap between what people say and what they do, providing insight into unconscious motivations.

Implementation example:

  1. Ask users to rank features by importance in interviews and give a reason for their ranking
  2. Track actual feature usage through product analytics
  3. Compare the rankings and rationales against usage statistics
  4. Investigate instances where stated preferences and behavior diverge
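
Here is a small TypeScript sketch of steps 3 and 4, with illustrative numbers rather than real data: it converts usage shares into ranks so both signals are comparable, then flags features where stated importance and observed usage diverge:

```typescript
// Illustrative data only. statedRank comes from interviews (1 = most
// important); usageShare comes from product analytics (fraction of
// sessions in which the feature was used).

interface FeatureSignal {
  feature: string;
  statedRank: number;
  usageShare: number;
}

const signals: FeatureSignal[] = [
  { feature: "advanced filters", statedRank: 1, usageShare: 0.04 },
  { feature: "saved searches",   statedRank: 2, usageShare: 0.31 },
  { feature: "price alerts",     statedRank: 3, usageShare: 0.27 },
];

// Convert usage shares to ranks so both signals live on the same scale.
const usageRank = new Map(
  [...signals]
    .sort((a, b) => b.usageShare - a.usageShare)
    .map((s, i): [string, number] => [s.feature, i + 1]),
);

// A large gap between stated and behavioral rank marks a discrepancy
// worth investigating in follow-up research (step 4 above).
for (const s of signals) {
  const gap = Math.abs(s.statedRank - (usageRank.get(s.feature) ?? 0));
  if (gap >= 2) {
    console.log(
      `Investigate "${s.feature}": stated #${s.statedRank}, actual usage #${usageRank.get(s.feature)}`,
    );
  }
}
// => Investigate "advanced filters": stated #1, actual usage #3
```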

6. Use prototype tests with built-in metrics

The approach: Create interactive prototypes that include measurable elements like clickable pricing options, feature toggles, or commitment indicators. Put some extra effort into creating realistic and detailed scenarios so there is enough context for participants to really put themselves in the situation.

Why it works: This bridges the gap between hypothetical questions and real product interaction by creating a semi-realistic decision environment. Many usability studies focus on task success and generic satisfaction questions but lack metrics specific to individual research questions; deeper alignment and planning upfront can address this.

Measurement techniques:

  • Track which pricing tier users select in a prototype
  • Measure time spent exploring different features
  • Record how far users progress through a signup flow
  • Capture which configuration options they select
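
As one way to wire up the first two techniques, the sketch below instruments a clickable prototype so pricing-tier selections and time-to-decision become measurable. The data attribute, endpoint, and participant-id query parameter are assumptions:

```typescript
// Assumed markup: each pricing option in the prototype is a button with a
// data-pricing-tier attribute; the endpoint and ?pid= parameter are made up.

const shownAt = performance.now();
const participant = new URLSearchParams(location.search).get("pid");

document.querySelectorAll<HTMLButtonElement>("[data-pricing-tier]").forEach((button) => {
  button.addEventListener("click", () => {
    navigator.sendBeacon(
      "/api/prototype-metrics",
      JSON.stringify({
        metric: "pricing_tier_selected",
        tier: button.dataset.pricingTier,                      // e.g. "free" | "pro" | "team"
        msToDecision: Math.round(performance.now() - shownAt), // time from load to choice
        participant,
      }),
    );
  });
});
```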

Need fast feedback on your prototypes? Try Unmoderated Prototype Testing in Great Question for free.

7. Employ competitive analysis techniques

The approach: Instead of asking about your specific product, ask users about their experiences with similar products they've already used.

Why it works: This leverages existing behavior patterns and removes the hypothetical element from your questions.

Example methods:

  • Have users walk through their experience with competitor products
  • Ask what features they've paid for in similar products. This can also work across adjacent markets, e.g. comparing eBay (very broad) with Vinted (clothing-focused)
  • Explore why they chose one solution over another
  • Examine switching behavior between similar products

Strategies for working with stakeholders

Often, the challenge isn't knowing the right research methods; it's convincing stakeholders not to rely on overly direct intent questions. Here are some strategies for handling those conversations:

1. Educate on the attitude-behavior gap

Share examples like Teresa Torres' jeans study to illustrate why direct questions about future behavior are unreliable and can do more harm than good. If possible, share a firsthand experience where you've seen the attitude-behavior gap in action with your own users.

2. Propose alternative approaches

When a stakeholder asks for a "Would you buy this?" study, have alternatives ready that will provide more reliable data instead of simply saying “No” to the request. Act as an advisor, not a gatekeeper.

3. Use composite metrics

Instead of a single "Would you buy this?" question, suggest a combination of metrics that together provide a more reliable indicator of intent. Asking the question isn't forbidden, but it should always be backed up with probing questions, requests for examples, and observation of actual behavior.

4. Start collaboration earlier

Collaborating with go-to-market or product teams earlier can help you avoid situations where you're pushed to ask problematic questions late in the development process. Many of the problematic requests described above are really just shortcuts; it's helpful to acknowledge that the underlying interest is more complex and often needs a combination of methods and studies to generate a reliable answer.

Special case: Testing innovative concepts

For truly innovative solutions where users have no reference point, traditional intent testing becomes even more problematic. In these cases:

  1. Focus on needs and pain points rather than direct reactions to the concept.
  2. Be wary of "shiny object syndrome," where users express interest simply because something is new. Be careful how you frame these concepts: "What do you think about this new concept?" already introduces this bias into participants' mindset. More neutral phrasing like "this idea," "this concept," or "this product" will give you more accurate responses.
  3. Look for matches between current frustrations and your solution's value proposition, and don't expect participants to be ready and able to explain this to you. Often, this is the final piece of analysis and synthesis across a set of data points, not a simple question people can answer for us.

A recent experience I had while working on an innovative B2B platform in a heavily regulated industry illustrates this well. Directly asking whether users would like the new system would have been meaningless: it introduced so many changes at once that users couldn't realistically imagine it, and we didn't have any wireframes or prototypes at the time. Instead, we focused on identifying current frustrations and unfulfilled wishes and found strong alignment with the new platform's concept, while also uncovering potential adoption barriers.

The bottom line: Better questions lead to better products

The next time stakeholders ask you to find out if users "would buy" a product or feature, remember: the direct approach is often the least reliable one. By employing these alternative methods, you'll gather more accurate insights into user intent—and ultimately build better products that people will actually use.

True intent testing isn't about asking users to predict their future behavior; it's about understanding the complex factors that drive their actual decisions.

By focusing on observation over declaration, past behavior over future intentions, and real metrics over hypothetical scenarios, you'll bridge the attitude-behavior gap and deliver insights your team can truly count on.

Johanna is a freelance Senior UX researcher and UX advisor, co-founder of UX consulting firm Jagow Speicher, and a researcher at heart. Working with diverse UX teams, she helps them mature, run impactful research, manage and optimize their UX practice, design powerful personalization campaigns, and tackle change management challenges. Outside of work, she writes about all things UX Research, UX Management, and ResearchOps. Feel free to reach out or visit her website to learn more. 👋🏼
