What is product validation? How to test your idea before you build

By Tania Clarke · Published April 6, 2026

TL;DR

Product validation is the process of testing whether a product solves a real problem for real people. Does anyone actually want it? Will they pay for it? Can they figure out how to use it? Validation happens at every stage: before you build (concept validation), while you're building (prototype testing), and after launch (live product validation). The goal is always the same: reduce the risk that you've built something nobody wants.

Why validation matters

Most startup failures don't come from bad technology. They come from building something nobody wanted.

Imagine you spend six months building a product. You ship it. Nobody uses it. The code was fine. The design was fine. The problem was that you were solving a problem that either didn't exist, or that people weren't willing to pay to solve.

Validation short-circuits this. Before you invest six months, invest one week: run five conversations with the people you think want your product. Ask them the question you're trying to answer. Watch their reaction. Most of the time you'll learn whether you're on the right track or chasing a ghost.

Types of validation

Concept validation: Does anyone want this idea?

Before you build, test the concept. Will people care? Is this a problem people have? Are they willing to do anything about it?

Tools: Figma prototype, problem interviews, landing page (how many signups?), surveys

When: As early as possible, ideally before code

Example: "I think operations managers spend too much time on manual reporting. I want to build a tool that automates it. Let me talk to five ops managers and ask: how much time do you spend reporting each week? Would you pay for a tool that cut that in half?"

Prototype validation: Can they actually use it?

You've built something (or a prototype of it). Can people figure out how to use it? Can they complete the core task?

Tools: Working prototype, Figma mockup, usability testing

When: Before launch, ideally before the final build

Example: "I built the automation tool. Can an ops manager sign up, connect their data, and generate a report without getting stuck?"

Market validation: Can you reach people who want it?

You have a product people like. Now, can you find them at scale? Can your distribution actually reach users?

Tools: Ad spend, organic growth, B2B sales process, product-led growth metrics

When: Post-launch, as you scale

Example: "I can sell to one customer. Can I reach 100? Do my go-to-market assumptions actually work?"

Revenue validation: Will they pay?

People like your product. Will they actually exchange money for it?

Tools: Pricing tests, trial-to-paid conversion, willingness-to-pay research

When: Before or shortly after launch

Example: "Operations managers love the tool. But at $200/month, how many actually convert from trial to paying customer?"

Validation at different product stages

Pre-launch validation

Focus: Do people want this? Can they use it?

Method: Interviews, prototype testing with outsiders, landing pages

Time: One to three weeks

Sample: 5-10 real people outside your company

Question you're answering: "Should I build this thing?" or "Have I built something people can actually use?"

Pre-launch validation is the highest-leverage research you can do. It's also the cheapest. Five people, 30 minutes each, maybe $500 total in incentives. The return: you don't build something nobody wants.

Post-launch, growth-stage validation

Focus: Can I reach customers? Will they pay?

Method: Ad spend, trial conversions, revenue metrics, user interviews

Time: Ongoing

Sample: Hundreds (real customer data)

Question you're answering: "What's the actual demand? What are my unit economics?"

Post-launch validation is quantitative. You have real customers, real transaction data, real behavior. The challenge is interpreting it. Does low conversion mean the product is wrong or the marketing is wrong? One qualitative interview with someone who churned can answer that.

Build-stage validation

Focus: Am I building the right thing?

Method: Prototype testing, working prototypes, in-the-wild testing with beta users

Time: Throughout build

Sample: 10-50 users (beta testers or research participants)

Question you're answering: "Should I change direction or keep going?"

Build-stage validation keeps you honest while you're shipping. It catches bad assumptions before they become expensive features.

Running validation conversations

Concept validation interviews (problem interviews)

Goal: Does this problem actually exist? Will anyone care?

Setup: Ask about their current workflow without mentioning your idea.

Question: "How do you currently handle [thing you're trying to solve]? What's frustrating about it?"

Listen to whether they:

Describe a problem you assumed existed
Describe a different problem
Say they don't have a problem

Red flag: "I don't really think about it." That means low salience. Low salience means low motivation to pay or change behavior.

Solution validation (prototype testing)

Goal: Can they use it? Does it solve the problem they described?

Setup: Show them the prototype. Give them a task.

Question: "You mentioned you spend a lot of time on [task]. With this tool, try to [specific goal]."

Watch for:

Can they figure out what it does?
Can they complete the task?
Does it actually solve the problem they mentioned?
Are there unexpected needs or workflows you missed?

Red flag: They can't figure out the core task or they complete it but say "I wouldn't use this."

Pricing validation

Goal: What's the right price? Will they actually pay?

Setup: Describe the product and show them different prices.

Question: "If this product costs $50/month, would you use it? $200/month? $500/month?"

Or: "What would you need to pay for this to feel worth it?"

Watch for:

Where price becomes an objection
Whether willingness to pay aligns with your target business model
Differences by user segment (big companies often pay more; small companies are more price-sensitive)

Red flag: They love the product but balk at every price you suggest. That means they want it free. Check your unit economics.

Distribution validation

Goal: Can you actually reach the people who want this?

Setup: Talk to people in your target market about where they spend time, what they follow, what convinces them to try new tools.

Question: "Where would you find a tool like this? What would make you actually try it?"

Watch for:

Whether distribution matches your plan (maybe you planned on SEO, but ops managers don't discover tools via search)
Influencers or trusted voices in the space
Gatekeepers (procurement, managers who decide for teams)

Red flag: Target customers find tools through channels that don't match your strategy. Your go-to-market plan might be broken.

How many people do you need to talk to?

For concept validation: 5-10 people. You're looking for patterns, not statistical significance. Do they all describe the same problem? Or totally different ones?

For prototype validation: five people is the standard. Five participants catch roughly 85% of usability issues.
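The 85% figure comes from the standard problem-discovery model popularized by Nielsen and Landauer: if each participant independently uncovers a given issue with probability p (roughly 0.31 on average in their data), the expected share of issues found by n participants is 1 - (1 - p)^n. A quick sketch of that arithmetic:

```python
def share_of_issues_found(n_participants: int, p: float = 0.31) -> float:
    """Expected share of usability issues uncovered by n participants,
    assuming each participant independently finds a given issue with
    probability p (p = 0.31 is the average Nielsen and Landauer reported)."""
    return 1 - (1 - p) ** n_participants

for n in (1, 3, 5, 10):
    print(n, round(share_of_issues_found(n), 2))
# 1 → 0.31, 3 → 0.67, 5 → 0.84, 10 → 0.98
```

Note the diminishing returns: going from five to ten participants buys you far less than going from zero to five, which is why small samples are enough at this stage.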

For pricing validation: 10-15 people to see if price elasticity differs by user segment.

For distribution validation: As many conversations as it takes to find 2-3 repeating themes (which channels people use, which voices they trust).

Red flags that mean "don't build this"

Nobody describes the problem you're trying to solve
People describe the problem but say it's low priority
They describe a totally different problem
They can't figure out your prototype
They love your prototype but won't pay any amount you suggest
The people you need to reach aren't reachable through channels you can afford

Red flags that mean "change direction"

You built the right solution for the wrong problem
The people who want it aren't the people you planned to sell to
A competitor is solving it better
Your unit economics don't work (the problem isn't worth enough to justify your pricing)
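A quick way to sanity-check that last red flag is the LTV-to-CAC ratio: what a customer is worth over their lifetime versus what it costs to acquire them. A common rule of thumb wants this ratio at 3 or higher. The numbers below are hypothetical, borrowed from the $200/month tool in the examples above:

```python
def ltv_to_cac(price_per_month: float, avg_retention_months: float,
               gross_margin: float, cac: float) -> float:
    """Gross-margin-adjusted lifetime value divided by customer
    acquisition cost. A ratio below ~3 suggests the economics are thin."""
    ltv = price_per_month * avg_retention_months * gross_margin
    return ltv / cac

# Hypothetical inputs: $200/month, 12-month average retention,
# 80% gross margin, $900 to acquire a customer.
ratio = ltv_to_cac(price_per_month=200, avg_retention_months=12,
                   gross_margin=0.8, cac=900)
print(round(ratio, 2))  # 1920 LTV / 900 CAC ≈ 2.13 — below the rule of thumb
```

If validation conversations suggest the problem isn't worth your target price, this ratio is usually where it shows up first.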

How to stay honest during validation

Ask why, not yes/no

Don't ask: "Would you use this?" (Everyone says yes.)
Do ask: "What would need to be different for you to actually use this?" (Now they describe reality.)

Watch behavior, not words

People are polite. "This is great!" followed by never using it is a false signal. Watch whether they:

Actually try the prototype
Struggle with the core task or breeze through
Ask questions about features you didn't plan
Describe workflows you didn't anticipate

Look for patterns, not outliers

One person loved it, one hated it, three were neutral. That's not a pattern. That's noise. Look for the thing that appears in three of five conversations.

Talk to people outside your circle

Your friends and family will be nice. Talk to strangers who match your user profile. They'll be honest.

The output of validation

After five conversations, you should be able to answer:

Does this problem actually matter to the people I want to sell to?
Can they figure out my solution?
Will they pay?
Can I reach them?

If the answer to all four is yes, you have a shot. If the answer to any is no, you've learned something valuable that saves you months of work building the wrong thing.

Frequently asked questions

Do I need a dedicated researcher to run validation?

No. Concept validation and prototype testing are straightforward enough that any founder or PM can run them. You don't need a research background to ask a good question and listen to the answer.

What's the difference between validation and user testing?

User testing (usability testing) answers: "Can they use it?" Validation answers: "Should I build it?" They're complementary. Run validation early (before or while building) to make sure you're building the right thing. Run user testing before launch to make sure people can figure out how to use it.

I have one user who says they want it. Is that validation?

No. One person wanting something is an anecdote. Validation is five or more people describing the same problem and reacting the same way. One person might be an outlier.

Can I do validation with existing customers only?

For concept validation, no. Your existing customers are biased (they're already paying you). Talk to people outside your current base. For prototype validation of new features, yes, existing customers are perfect. They know your product and can give meaningful feedback.

Should I tell people I'm going to build something with their feedback?

No. Don't promise. Explain that you're testing an idea and might go in a totally different direction. Honesty prevents disappointment later.

What if I'm validating something in a market where customers don't exist yet?

Find proxies. If you're building for a market that doesn't exist, find people who have the underlying problem (even if they don't think of themselves as your target market yet). Ask them the validation questions anyway.

How long does validation take?

Concept validation: 1-2 weeks (recruiting + five conversations)
Prototype validation: 1-2 weeks (build prototype + recruit + test)
Pricing validation: 1 week
Distribution validation: 2-4 weeks

Can I validate asynchronously?

For live interviews (concept and prototype), synchronous is better. You need to see reactions and ask follow-up questions. For some validation (pricing surveys, distribution patterns), async is fine.

At what stage am I "done" validating?

You're never done. Validation is continuous. At pre-launch, you're done when you can answer the four questions above with confidence. Post-launch, you're watching real customer data and iterating. The goal changes from "should I build this?" to "am I building the right version?"

Validation is one of the highest-leverage activities a founder can do. One week of conversations can save you six months of building the wrong thing. Do it early. Do it before you fall in love with your idea. Do it before you've sunk months into building.

Ready to start? Great Question supports concept validation, prototype testing, pricing research, and distribution research. Set up a study, recruit your users, and find out whether you should actually build what you're thinking about building.

Related: How to validate your vibe-coded app with real users · Prototype testing: the complete guide · User interviews and recruitment · How to test your Lovable app with real users

Tania Clarke is a B2B SaaS product marketer focused on using customer research and market insight to shape positioning, messaging, and go-to-market strategy.
