Continuous discovery habits: how to build and sustain them

By Tania Clarke · May 6, 2026

Teresa Torres spent years watching product teams do discovery wrong. Big batch research every quarter. Insights that arrived too late to change anything. Researchers who couldn't keep up, or product teams who didn't bother trying.

Her book, Continuous Discovery Habits, describes a different way to work. Weekly customer touchpoints, woven into how the product trio operates week to week. Not a research project with a start and end date. An ongoing practice.

Thousands of teams have tried it. Most have struggled to keep it going.

This guide explains what the continuous discovery habits actually are, why they're hard to maintain in practice, and what infrastructure teams need to make them stick.

What are continuous discovery habits?

Continuous discovery habits are five practices defined by Teresa Torres in her book Continuous Discovery Habits: (1) interview at least one customer per week, (2) map opportunities in an opportunity solution tree, (3) surface and test assumptions before building, (4) run small experiments continuously, and (5) involve the whole product trio (PM, designer, engineer) in discovery. The goal is to weave customer research into the weekly rhythm of product work, not run it as a periodic project.

TL;DR: Most teams that try continuous discovery stall by week twelve. The habits themselves aren't the problem. Torres' framework is clear. The problem is infrastructure: recruitment that takes weeks instead of days, manual analysis that piles up, and no central place for findings. You need automated recruitment, AI-assisted analysis, and a research management system. Great Question's continuous discovery features handle that infrastructure layer so the habits can actually stick.

The five continuous discovery habits Teresa Torres defines

Before getting into what goes wrong, it's worth being precise about what Torres is actually proposing. The word "habits" is doing real work here: she's describing a way of working that becomes automatic over time, not a process to follow on a project-by-project basis.

The core habits from the book:

1. Interview at least one customer per week

Each product trio (product manager, designer, engineer) should talk to at least one customer per week. Not one researcher per week. The whole trio. Together. The goal is for the people building the product to have a continuous, direct connection to the people using it, not filtered through a research report that arrives six weeks later.

2. Map your opportunities in an opportunity solution tree

Rather than jumping from a problem to a solution, Torres asks teams to map the opportunity space first. An opportunity solution tree is a visual framework that connects your desired outcome to the opportunities you've discovered, the solutions you're considering, and the experiments you're running (a code sketch of the tree's shape follows this list). It keeps the team in discovery mode rather than building before they've understood the problem.

3. Surface your assumptions before you build

Most product decisions rest on untested assumptions. Torres' habit is to make those assumptions explicit: write them down, rank them by risk and importance, and test the most dangerous ones first. This is what makes continuous discovery genuinely useful. You're not just collecting opinions; you're stress-testing the reasoning behind product decisions.

4. Run small experiments

Testing assumptions doesn't require shipping features. Torres describes a range of experiment types (customer interviews, surveys, prototypes, fake doors) that help teams learn faster and with less risk. The habit is to always have experiments running, not just when a major decision is approaching.

5. Involve the whole product trio, not just researchers

This is probably the hardest habit for research-heavy organizations to adopt. Torres is explicit that discovery should not be delegated entirely to a single researcher. The PM, designer, and engineer should all be part of the weekly customer conversations. Researchers can support and scale the work, but they shouldn't be the only ones doing it.
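
Because the tree is just a hierarchy, its shape is easy to show in code. Here's a minimal sketch with invented contents; it illustrates the structure, not a format Torres prescribes:

```python
# An opportunity solution tree as nested data: the desired outcome at the
# root, opportunities under it, candidate solutions under each opportunity,
# and experiments under each solution. All contents are invented examples.
tree = {
    "outcome": "Increase week-1 activation",
    "opportunities": [
        {"opportunity": "New users don't invite teammates",
         "solutions": [
             {"solution": "Invite prompt during setup",
              "experiments": ["prototype walkthrough", "fake-door test"]},
         ]},
        {"opportunity": "Setup takes more than one session",
         "solutions": []},  # still in discovery: no solution committed yet
    ],
}

for opp in tree["opportunities"]:
    status = f'{len(opp["solutions"])} solution(s)' if opp["solutions"] else "still exploring"
    print(f'{opp["opportunity"]}: {status}')
```

Keeping unsolved opportunities on the tree, like the second one above, is the point: the map shows where the team is still learning, not just what it plans to build.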

These five habits are what distinguish continuous discovery from a well-run research practice. You don't need a research ops team to implement them. You need a product team that takes the practice seriously.

What you do need (and what most teams underestimate) is infrastructure.

Why teams start the habits and why they stall

The appeal is immediate. Instead of quarterly research sprints that feed into decisions already half-made, you talk to customers every week. You catch problems early. You kill bad ideas before committing three months to them.

In practice, the pattern looks like this.

Weeks one through three go well. Researchers recruit a handful of participants. Analysis happens at night and on weekends. Findings get shared Monday morning. Something changes. Product leadership actually adjusts their thinking based on what the team learned. People are excited.

By weeks four through eight, cracks appear. Recruiting the next batch takes longer because you've already contacted most of your panel. Researchers are tired. Analysis takes two weeks now instead of one. There's a backlog of interviews waiting to be coded.

By week twelve, someone asks if you're still doing weekly research. The answer is: not really. The team has settled back into ad-hoc studies whenever a specific decision needs validation.

The teams that describe this pattern aren't lazy. They hit a structural wall. The habits require a rhythm, and the rhythm requires infrastructure that most teams haven't built.

What makes the habits stick

Torres' book focuses on what to do. The infrastructure question is how to do it consistently, week after week, without burning out the people responsible. Great Question's continuous discovery features are built specifically to support this layer.

Habit one requires recruitment that takes days, not weeks

The weekly interview habit falls apart first.

You need one new participant per week, per product trio. That sounds manageable until you realize most teams have a pool of ten to fifteen people they know and trust, and traditional recruitment takes three to four weeks. If you're running weekly research, you'll exhaust your panel by week four.

What works is automation. ServiceNow cut recruitment from 118 days to 6 days by building automated qualification workflows. Candidates self-qualify through a short screening form. You get a vetted list ready to schedule the same week.

You also need diversity controls. Running weekly interviews from the same participant pool means you end up talking to your most enthusiastic customers repeatedly. A good recruitment system tracks who you've spoken to, when, and about what. It surfaces people you haven't reached yet.
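
To make the mechanics concrete, here's a minimal sketch of the selection logic such a system runs each week. Field names and rules are hypothetical, not Great Question's API:

```python
from datetime import date, timedelta

# Hypothetical records; a real system pulls these from your customer data.
participants = [
    {"email": "a@example.com", "plan": "pro",  "last_contacted": date(2026, 3, 1)},
    {"email": "b@example.com", "plan": "free", "last_contacted": None},
    {"email": "c@example.com", "plan": "pro",  "last_contacted": date(2026, 4, 28)},
]

def qualifies(p):
    # Screener rule for this study: paying customers only.
    return p["plan"] == "pro"

def rested(p, today=date(2026, 5, 6), cooldown=timedelta(days=30)):
    # Diversity control: skip anyone contacted inside the cooldown window.
    return p["last_contacted"] is None or today - p["last_contacted"] > cooldown

# Candidates ready to schedule this week, least-recently-contacted first.
pool = sorted(
    (p for p in participants if qualifies(p) and rested(p)),
    key=lambda p: p["last_contacted"] or date.min,
)
print([p["email"] for p in pool])
```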

Great Question's recruitment features connect directly to your customer data and automate qualification and scheduling. You define your criteria once; it handles the rest.

Habits two through four require AI-assisted analysis

The opportunity solution tree, assumption testing, and experiment habits all depend on analysis. You can't spot patterns across five weekly interviews if you're still manually transcribing and coding interview three.

One hour of interview time typically requires three to four hours of analysis: listening back, coding themes, pulling highlights, synthesizing patterns. Five interviews a week means roughly twenty hours of analysis before anyone has written a word of findings. Most researchers can't sustain that workload alongside everything else.
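
Spelled out, using the ratios above (your team's numbers will differ):

```python
interviews_per_week = 5
analysis_hours_per_interview = 3.5   # midpoint of the 3-4 hour ratio above

analysis_hours = interviews_per_week * analysis_hours_per_interview
total_hours = analysis_hours + interviews_per_week  # plus the sessions themselves

print(analysis_hours)  # 17.5 hours of analysis per week
print(total_hours)     # 22.5 hours before a word of findings is written
```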

AI doesn't eliminate the judgment calls. Those still need a human. But it handles the mechanical work: transcription, initial coding, highlight extraction. Roller's team found that AI-assisted analysis cut their research workload roughly in half. That's the difference between sustainable and not.

The habit of surfacing assumptions also becomes more practical with good tooling. When transcripts are automatically coded and searchable, you can ask what customers have said about onboarding friction across every interview you've ever run, not just this week's batch.
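
Once highlights carry codes, that kind of cross-study question becomes a one-line query. A sketch, assuming an invented data shape rather than any specific tool's schema:

```python
# Hypothetical coded highlights accumulated across weeks of interviews.
highlights = [
    {"study": "2026-W14 onboarding", "tags": ["onboarding", "friction"],
     "quote": "I couldn't find where to invite my team."},
    {"study": "2026-W15 activation", "tags": ["pricing"],
     "quote": "The plan names confused me."},
    {"study": "2026-W18 onboarding", "tags": ["onboarding"],
     "quote": "Setup took me two sessions to finish."},
]

def search(tag):
    """Return every highlight tagged `tag`, across all studies ever run."""
    return [h for h in highlights if tag in h["tags"]]

for h in search("onboarding"):
    print(f'{h["study"]}: {h["quote"]}')
```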

Habit five requires a research management layer

The whole-trio habit creates a coordination problem that grows with team size. If the PM, designer, and engineer are all conducting interviews, where do findings go? How does the researcher know what the PM learned last Tuesday? How do you prevent three different people from contacting the same participant in the same week?

A research management system (sometimes called a research CRM) solves this. One place where every study lives: transcripts, findings, participant history. An insights view that surfaces what the team is learning without requiring people to dig through raw transcripts. And because contact history is tracked centrally, no one accidentally reaches out to the same person twice in a week.
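
A toy version of the data model makes the idea concrete. These shapes are invented for illustration; a real research CRM layers scheduling, permissions, and search on top:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    summary: str
    supporting_quotes: list

@dataclass
class Study:
    title: str
    owner: str                      # PM, designer, engineer, or researcher
    participant_emails: list = field(default_factory=list)
    findings: list = field(default_factory=list)

# Everything in one place, so the researcher can see what the PM
# learned last Tuesday without asking around.
studies = [
    Study("Onboarding friction, week 14", owner="pm@example.com",
          participant_emails=["a@example.com"],
          findings=[Finding("Invite flow is hard to find",
                            ["I couldn't find where to invite my team."])]),
]
for s in studies:
    for f in s.findings:
        print(f"{s.title} ({s.owner}): {f.summary}")
```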

Brex scaled from a handful of researchers to over a hundred by centralizing research this way. When everyone can see what's being learned, more of the organization can act on it.

Great Question's research CRM is built for this: studies, participants, and findings all in one place. Contact history is tracked automatically.

How to structure continuous discovery for your team

There's more than one way to run this, depending on team size and how much you want to centralize.

The dedicated research track model

One researcher owns continuous discovery for a specific area: activation, onboarding, or churn. They recruit weekly, interview alongside a PM or designer, analyze, and share findings.

This works because the researcher builds real context over time. They spot trends that someone doing ad-hoc studies would miss. It requires dedicated time and good recruitment infrastructure, or the researcher ends up spending half their week on logistics.

The embedded researcher model

Researchers sit within product teams and treat continuous discovery as part of how those teams operate.

This works well when there's already strong research culture and enough researchers to go around. It requires clear participant sourcing so embedded teams aren't all drawing from the same pool, and good research management so findings flow between teams.

The hybrid model

A research ops team manages recruitment, the participant database, and analysis infrastructure. Individual researchers or embedded researchers use that infrastructure to run their own studies.

This is the model that scales. Research ops handles the operational parts. Researchers do the parts that require judgment. Asana cut research setup from two weeks to two or three days using this kind of structure.

Setting up the weekly cadence

Continuous discovery is really about regularity. Weekly is the most common rhythm, but bi-weekly works for some teams. Some run two parallel study tracks. The point is picking something sustainable and building a system around it.

A rough setup timeline:

Week one: Define your research questions. What are you trying to understand continuously? Pick one area to start: onboarding, activation, or churn. You can expand later.

Week two: Set up recruitment. Define who you need. Build a screening form. Decide whether you're recruiting from your customer base, an external panel, or both. Great Question's recruitment features connect to your customer data if you want to recruit from existing users.

Week three: Build your research management setup. Decide where findings will live, who can access what, and how insights will reach the people who need them. If your team lives in Slack, findings should get to Slack.

Week four: Run your first study. Five or six participants, open-ended questions. Treat this as a test of the infrastructure, not a search for perfect findings.

Week five: Analyze and share. Use AI-assisted analysis for the heavy lifting. Note what worked in the process and what didn't.

Weeks six onward: Run the system. Recruit, interview, analyze, share. Adjust as you go.

The first three weeks are setup. Then the rhythm takes over.

The competitive case for building these habits

Teams that maintain continuous discovery habits hear different things than teams that research quarterly. They spot trends earlier. They understand how customers actually use their product, not just how customers describe using it when asked directly.

Two consequences follow. Bad ideas get killed before the team spends months on them. That saved time compounds quickly. And the product evolves faster, because weekly feedback enables weekly iteration rather than quarterly course corrections.

The hard part is not the framework. Torres' book is clear enough. The hard part is sustaining the practice when recruitment stalls, analysis piles up, and findings stop reaching decision-makers. Fixing it means solving logistics, not rethinking the strategy.

Common obstacles

"We don't have enough participants."

You probably do. You haven't set up automated recruitment yet. Define your target profile, open a screening form to your user base, and you'll typically have more candidates than expected. If your user base is small, supplement with external recruitment, but try your own customers first.

"Researchers don't have time."

They don't if they're manually transcribing and coding everything. With AI handling transcription and initial coding, one researcher can manage five to six interviews per week without working nights. The time math changes substantially.

"We don't have the right tools."

Great Question includes automated recruitment, a research CRM, and AI-assisted analysis built specifically for continuous discovery. If you're using something else, the same principles apply. The infrastructure exists, and most tools connect to each other.

"Our team doesn't see the value yet."

Run continuous discovery in one area for three months. Show the findings. Show the decisions those findings influenced. Show the difference between catching something in month one versus discovering it in month four. The value becomes obvious pretty quickly.

Tools for always-on research

If you're setting up continuous discovery, you need:

  1. Participant recruitment infrastructure. Great Question's recruitment features automate qualification and scheduling so researchers spend time on interviews, not admin.
  2. A research management system. Great Question's research CRM centralizes studies, participants, and findings in one place, accessible to anyone who needs it.
  3. Interview infrastructure. Great Question's interview features let you run interviews in-platform, transcribe automatically, and organize findings.
  4. Survey capability. Some continuous discovery involves quantitative validation. Great Question's survey platform lets you run this alongside qualitative research.
  5. AI-assisted analysis. Great Question includes AI analysis that transcribes, codes, and highlights key findings so researchers spend time thinking, not transcribing.

Scaling continuous discovery as you grow

Three researchers doing this is manageable. Thirty requires systems.

As you scale: centralize participant management so researchers aren't independently contacting the same people. Build research ops so someone owns the infrastructure and maintains quality. Create documentation for what a good research brief looks like, how many participants per study, and what the minimum bar is before publishing findings. New people onboard faster, and consistency improves.

Brex went from a handful of researchers to over a hundred by building this operational structure deliberately. It didn't happen by accident.

How to measure whether it's working

After three months, check:

How many interviews did you run? A reasonable target is roughly one per researcher per week, sustained.

How many findings did you publish? Roughly one per study, or at minimum one every two weeks.

Are insights making it into decisions? At least one decision per quarter should be directly traceable to continuous discovery findings.

Are researchers satisfied? The practice should feel sustainable. If researchers are burning out, something in the infrastructure is broken.

Are you repeating research? If the same questions keep surfacing in new studies, findings aren't reaching the people making decisions.
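
If studies and findings live in one system, these checks reduce to a few lines. A sketch with invented numbers:

```python
# Hypothetical three-month log exported from wherever your studies live.
log = {
    "weeks": 12,
    "researchers": 2,
    "interviews": 22,
    "findings_published": 9,
    "decisions_citing_findings": 2,
}

rate = log["interviews"] / (log["researchers"] * log["weeks"])
print(f"{rate:.2f} interviews per researcher per week")          # target: ~1, sustained
print(f'{log["findings_published"]} findings in {log["weeks"]} weeks')         # target: 1 per 2 weeks
print(f'{log["decisions_citing_findings"]} decisions traceable to findings')   # target: 1+ per quarter
```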

FAQ

What are the continuous discovery habits Teresa Torres defines?

The five continuous discovery habits Teresa Torres defines are: (1) interview at least one customer per week as a full product trio, (2) map opportunities using an opportunity solution tree, (3) surface and test assumptions before building, (4) run small experiments to validate ideas, and (5) involve the full product trio (not just researchers) in weekly discovery. Torres describes all five in her book Continuous Discovery Habits, including how to start when teams are skeptical or overstretched.

How often should we run customer interviews?

Weekly is the cadence Torres recommends: at least one per product trio per week. Bi-weekly works for some teams. Monthly is too infrequent to maintain the habit; the learning velocity drops and teams start treating it like a project again.

How is "continuous discovery habits" different from just doing more user research?

The main difference is who owns it and when it happens. Traditional user research is typically researcher-driven, project-scoped, and delivered as a report. The continuous discovery habits model has the full product trio interviewing together every week and connecting what they learn directly to the opportunity solution tree and product decisions in real time. The insights don't go through a report. They shape the next week's work.

Does this replace the need for dedicated UX researchers?

No. Torres is clear that researchers have an important role: they support the practice, train non-researchers to interview well, run more in-depth studies when needed, and help teams avoid common pitfalls. What changes is that researchers aren't the only people doing discovery.

Should we interview our own customers or external users?

Both, for different reasons. Existing customers tell you what's working and what isn't once they're using the product. Prospects and people in your target market who don't yet use your product tell you whether you're solving a real problem at all. Rotating between them keeps the picture complete.

How many participants do we need per study?

For qualitative research: five to eight. You'll spot the major themes by five. Eight gets you a bit more nuance. Beyond ten, you're spending time for diminishing returns.

How do we prevent participant fatigue?

Track every interaction. Know when each person was last contacted. Don't interview the same person more than once a month. Great Question's research management system tracks this automatically.

What if executives don't see the value?

Show them decisions it changed. "We were about to build this. Three customer conversations in week two told us nobody wanted it. We built something else, and it's our most-used feature." Frame it as risk reduction. That language tends to land.

The gap between knowing the habits and maintaining them

Most teams that read Torres' book understand the habits. They agree they're the right way to work. Then they try it and the logistics become the bottleneck.

The teams that make continuous discovery stick aren't different in their research philosophy. They've invested in infrastructure that makes the habits sustainable: recruitment that doesn't stall, analysis that doesn't pile up, findings that actually reach the people making decisions.

That's the gap. And it's solvable. It doesn't require a large research organization, just systems that keep the rhythm going.

Start this week. Pick one research question. Set up recruitment. Run five interviews. Analyze, share, then do it again next week.

Tania Clarke is a B2B SaaS product marketer focused on using customer research and market insight to shape positioning, messaging, and go-to-market strategy.
