
In 2024, Figma hired Barron Webster as one of the world's first "Model Designers" - someone who understands how AI models work well enough to shape their behavior, and understands design well enough to make that behavior feel right. He wrote about sycophancy metrics, aesthetic regression testing, and the challenge of defining "quality" when the machine can produce a thousand options before lunch. Since then, an entire community has formed around this shift. The AI Design Field Guide now catalogs techniques from practitioners at OpenAI, Anthropic, Figma, and Notion who represent a new kind of designer entirely.
The same thing is happening in UX research. Quietly, without a job title yet.
The industry narrative says UX researchers should be worried right now.
That's the narrative. Here's what researchers actually talk about when they're being candid.
We ran an anonymized meta-analysis of more than 1,500 conversations with research teams over the past two years (sales calls, demos, support conversations, community discussions) and catalogued the themes that kept coming up. The results weren't what we expected.
We went into this expecting to find anxiety about AI replacing researchers. We'd seen the think pieces, the X and Reddit threads, the survey stat that 68% of UX researchers are "concerned about AI's impact." We figured our data would confirm it.
It didn't. The industry narrative says researchers are worried about replacement. The researchers themselves are worried about being expected to deliver more research now that AI exists.
Every research platform (including ours) publishes content about "AI transforming UX research." And it's true, in the abstract. AI-assisted analysis is now the top use case at 88%, and adoption has roughly doubled in the past two years.
But there's a gap between what the industry talks about and what's actually driving behavior. When a Head of Research at a HealthTech company told us, "I am the only researcher, it's more about empowering people internally to conduct research, people maybe not being researchers," she wasn't asking about AI. She was describing a structural problem that AI might solve.
When a researcher at an enterprise SaaS company described their workflow - "Someone takes notes, records the call 10 to 20 times, does manual analysis of key themes, puts it in a deck to present back" - they weren't requesting AI features. They were describing a system that's broken.
The AI Researcher isn't a futuristic concept. It's someone who's overwhelmed by the volume of research their organization needs, and is using AI to keep up.
From our meta-analysis, we see three distinct profiles forming. None of them have formal job titles. All of them exist today.
Who they are: The 10x Solo is the only researcher at a company with 50 to 500 employees, expected to cover every team, every launch, every customer question. Before AI, they ran maybe one study a quarter. Now they're running four a week and still falling behind.
One researcher told us: "I'm a researcher team of one. I'm their first hire, two years ago." Another: "We don't have dedicated researchers. The designers are doing the research themselves."
What their workflow looks like now:
The most advanced solos run five or six AI-moderated studies simultaneously, using custom AI agents for everything from discussion guide creation to cross-study synthesis. They're not hiring. AI is the hiring plan. But what actually blocks them isn't AI capability, it's infrastructure: CRMs that can handle hundreds of thousands of contacts, automated incentive processing, email batching across concurrent studies. AI can moderate and analyze. It can't fix a broken contact import or process gift cards from a CSV.
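To make that infrastructure gap concrete, here's a minimal sketch of the plumbing a 10x Solo ends up scripting themselves: batching incentive payouts from a participant CSV. Everything in it is hypothetical (the CSV columns, the send_gift_card stub, the batch size); it only illustrates what "process gift cards from a CSV" actually involves.

```python
import csv
from itertools import islice

BATCH_SIZE = 50  # hypothetical per-call limit of a gift card provider


def send_gift_card(email: str, amount_usd: int) -> None:
    """Stub standing in for a real gift card provider's API client."""
    print(f"queued ${amount_usd} gift card for {email}")


def process_incentives(path: str) -> None:
    """Pay every completed participant listed in a CSV, in batches."""
    with open(path, newline="") as f:
        completed = [r for r in csv.DictReader(f) if r.get("status") == "completed"]
    rows = iter(completed)
    while batch := list(islice(rows, BATCH_SIZE)):
        for row in batch:
            send_gift_card(row["email"], int(row["incentive_usd"]))


if __name__ == "__main__":
    # Expects columns: email, status, incentive_usd
    process_incentives("participants.csv")
```

Unglamorous, but this is the work that turns one researcher's AI leverage into an actual program rather than a pile of unpaid participants.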
Skills they're building: prompt engineering for research contexts, taxonomy design, and evaluation frameworks for judging AI output.
The 10x Solo doesn't care about "AI in research" as a category. They care about getting through Tuesday.
The second profile is the Democratizer. Around 36% of conversations in our analysis touched on the same theme: non-researchers needing to do research. Companies aren't trying to eliminate researchers. Demand for customer insight has outstripped the supply of people who know how to get it.
Here's what that looks like day to day: A Head of Research sees a product manager running their own AI-moderated interviews. Great, in theory. In practice, that PM wrote a screener that only qualified people who already loved the product. The AI moderated beautifully. The insights were useless.
That's the Democratizer's job now. Mornings reviewing AI-generated output from studies they didn't design. Afternoons training PMs to write screeners that don't lead the witness. Whatever time is left goes to updating the governance playbooks that keep everything from turning into garbage.
The skills are genuinely new: teaching non-researchers how to do research without destroying the data, building evaluation frameworks for AI-generated insights, and designing taxonomy architecture, the classification systems that AI uses to organize everything. Less "researcher," more "research quality engineer."
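To give a feel for what "taxonomy architecture" means in practice, here's a minimal sketch with an entirely hypothetical taxonomy: a controlled vocabulary that free-form, AI-generated theme tags must resolve to before an insight enters the repository, so synthesis stays consistent across studies.

```python
# A hypothetical research taxonomy: canonical themes plus accepted aliases.
TAXONOMY: dict[str, set[str]] = {
    "onboarding_friction": {"onboarding", "first-run", "setup pain"},
    "pricing_confusion": {"pricing", "billing", "plan confusion"},
    "feature_discovery": {"discoverability", "hidden features"},
}


def resolve_tag(raw_tag: str) -> str | None:
    """Map a free-form, AI-generated tag onto a canonical theme, or reject it."""
    needle = raw_tag.strip().lower()
    for canonical, aliases in TAXONOMY.items():
        if needle == canonical or needle in aliases:
            return canonical
    return None  # unknown tag: route to a human reviewer instead of auto-filing


# An AI tagger might emit "Setup Pain"; the taxonomy normalizes it.
assert resolve_tag("Setup Pain") == "onboarding_friction"
assert resolve_tag("latency") is None
```

The design choice is the point: the AI proposes tags, but the taxonomy (owned by a human) decides what counts as a theme.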
If the 10x Solo is using AI as a multiplier, the Democratizer is the quality layer standing between "anyone can do research now" and "anyone can do bad research now."
The third profile isn't a new role at all. It's what senior researchers and research leaders have always been, but with a fundamentally different daily workflow.
Before AI, even senior researchers spent significant time in execution: running studies, reviewing transcripts, compiling reports. The strategic work (designing research programs, connecting findings across studies, translating insight into executive action) happened in the margins.
AI flipped that ratio. The senior researchers thriving right now spend their mornings reviewing AI synthesis across all active studies, flagging what the machine missed. Their afternoons are strategy: designing next quarter's research program, presenting cross-study patterns to leadership, building the evaluation frameworks that determine whether AI output is trustworthy enough to act on.
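What might such an evaluation framework look like? Here's a minimal, entirely hypothetical sketch: a few mechanical checks an AI-generated insight must pass before anyone acts on it. Real frameworks are richer, but the shape (explicit, testable criteria rather than gut feel) is what matters.

```python
from dataclasses import dataclass, field


@dataclass
class Insight:
    claim: str
    supporting_quotes: list[str] = field(default_factory=list)
    participant_count: int = 0
    source_studies: list[str] = field(default_factory=list)


def failed_checks(insight: Insight) -> list[str]:
    """Return every check the insight fails; an empty list means act on it."""
    failures = []
    if len(insight.supporting_quotes) < 2:
        failures.append("fewer than two verbatim quotes")
    if insight.participant_count < 5:
        failures.append("observed in fewer than five participants")
    if len(insight.source_studies) < 2:
        failures.append("not corroborated across studies")
    return failures


weak = Insight(claim="Users love the new dashboard", participant_count=2)
print(failed_checks(weak))  # fails all three checks: don't take it to leadership
```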
The skillset isn't new. Research program design, organizational influence, pattern recognition across large datasets: these always mattered for senior researchers. AI just cleared the calendar so they could actually use them full-time. And increasingly, like Barron Webster at Figma, they're building the quality standards and feedback loops for an entirely new way of working.
Priyanka Kuvalekar, a senior UX researcher at Microsoft working on AI experiences for Teams, recently described her transition from architecture to AI research. Her three lessons: learn to evaluate AI in practice, approach AI through an accessibility lens, and understand the technology well enough to ask the right questions, even if you never build it yourself.
What's telling is what she focuses on. Evaluation. Design. Asking the right questions. Not coding. Not becoming a data scientist.
Skills that matter more now: taxonomy design, prompt engineering for research contexts, evaluation frameworks for AI output, and research orchestration across concurrent studies.
Skills that matter less: pure execution, such as moderating every session yourself, reviewing transcripts line by line, and compiling reports by hand.
Skills that haven't changed: empathy, question framing, reading a room, and the judgment to know when to trust a finding and when to dig deeper.
If you're a team of one or two: The 10x Solo archetype isn't aspirational — it's what's happening. Focus on prompt engineering, taxonomy design, and evaluation. But don't underestimate infrastructure: a scalable participant CRM, automated incentives, and batching capabilities turn "I can use AI" into "I can run a research program."
If you're scaling a team: The Democratizer role is the one you need but probably haven't hired for. Someone needs to own research quality as more non-researchers run studies with AI assistance.
If you're a senior researcher: Lean into strategy. The researchers thriving right now stopped identifying as "the person who runs studies" and started identifying as "the person who makes sure the organization understands its customers."
If you're hiring: Look for researchers who ask good questions about AI output. Anyone can learn the tools. The judgment about when to trust the output and when to dig deeper is the scarce skill.
Barron Webster's title didn't exist until Figma created it. The AI Design Field Guide exists because someone recognized a new kind of practitioner was emerging.
The same thing is happening in research, in the daily work of thousands of researchers figuring out, study by study, what it means to do research when the machine handles the parts that used to fill their day.
The AI Researcher doesn't replace the human researcher. It is the human researcher: the one who figured out the game changed and decided to change with it.
Will AI replace UX researchers?
No. Our meta-analysis found only 0.2% of researchers mentioned replacement concerns, while 31% worried about drowning in work. AI is a tool for keeping up, and the human skills (empathy, question framing, reading a room) remain what makes research valuable.
What skills should researchers build now?
Taxonomy design, prompt engineering for research contexts, evaluation frameworks (knowing when to trust AI output), and research orchestration. The core research skills haven't changed.
What is an AI Researcher?
A UX researcher who's integrated AI into their workflow while maintaining the judgment to evaluate and improve AI output. Think Figma's "Model Designer," but for research.
What are the emerging AI Researcher profiles?
Three patterns: "10x Solos" using AI to do the work of a full team, "Democratizers" owning quality as non-researchers adopt AI tools, and senior researchers shifting from execution to full-time strategy and evaluation.
Jack is the Content Marketing Lead at Great Question, the all-in-one UX research platform built for the enterprise. Previously, he led content marketing and strategy as the first hire at two insurtech startups, Breeze and LeverageRx. He lives in Omaha, Nebraska.