More AI, More UXR

By Carly Hartshorn
Published March 18, 2026

AI advancements continue reshaping product, design, and research at breakneck speed. And with each new capability comes the same persistent question: will artificial intelligence eliminate our jobs? I hear variations of these fears constantly. Researchers worry that AI will dilute their expertise, that companies are trading quality for cost reduction, that the professional ground beneath their feet is shifting dangerously.

And the anxiety makes sense.

UX teams face a brutal job market while automation headlines flood their feeds. Yet the fear stems from a fundamental misunderstanding of how technology and human expertise interact economically. History and established economic principles tell a different story, one where AI efficiency expands opportunities rather than eliminating them.

Just because more skill is brought in from any source, doesn't mean there's less value for our skills, or for skills that we can grow into.

▶ Watch the full webinar recording on YouTube

Understanding the Roots of Fear

Several factors fuel the widespread anxiety. For starters, many organizations treat UX as non-essential, which makes researchers feel easier to replace. Others struggle to communicate their value relative to other roles. Beyond these concerns, negative news spreads faster than positive developments, which amplifies threats before evidence of value surfaces.

The most damaging misconception, though, involves zero-sum thinking. When AI handles tasks researchers once owned exclusively, people assume less value remains for them to contribute. However, economic theory demonstrates why such thinking is fundamentally flawed. You see, bringing more skill into a system doesn't diminish human value. In fact, the opposite often holds true.

We're more important sometimes when other skills are brought in, which is counterintuitive. When AI tooling is brought in, sometimes that makes the human in the loop more critical and not less critical.

The thing with AI is that it still faces substantial limitations, and we remain uncertain about when those constraints will resolve. What makes this challenging is that the effects compound exponentially: changes seem slow until the entire environment transforms within months. But I've found that rather than fixating on every new tool, we need to examine the broader trajectory and how we fit into that future.

Take a deep breath. It's a marathon, not a sprint, and a lot is happening that is going to change the way we work forever.

Comparative Advantage: The Foundation for Optimism

Let's consider comparative advantage. It represents the most important economic principle for UX researchers to understand right now. David Ricardo introduced the theory in 1817 while explaining international trade patterns. He asked why countries would trade when one nation could produce everything more efficiently. Why wouldn't the superior producer operate in isolation?

Ricardo's answer revolutionized economics. Total output increases when each party specializes where it holds the lowest opportunity cost, not necessarily where it holds an absolute advantage. Absolute advantage means being better at a task. Comparative advantage focuses on opportunity cost: what you give up to perform that task.

When I spend time writing surveys, I cannot simultaneously conduct interviews or synthesize findings. My opportunity cost for producing surveys equals the foregone value of those alternatives. Comparative advantage lives where opportunity cost registers lowest, and the principle applies at every scale. Consider the extreme scenario in which AI outperforms humans across every UX research task. Even then, economic efficiency dictates a division of labor: AI handles the tasks where its performance gap is largest, while researchers concentrate on the work where human capability shines brightest.

People confuse the comparative piece with the absolute piece, and they look at AI and they say, well, if AI can write a survey quicker, and it's basically the same quality, then I should never write a survey. Well, no, that's not at all what this theory states.

The company employing both ends up with more total output. Perhaps AI drafts initial surveys while I refine questions to eliminate bias. Perhaps I conduct sensitive interviews while AI generates transcripts and identifies patterns for deeper investigation. Comparative advantage persists whether we're discussing entire studies or individual components. Researchers should audit how they spend time and identify where they hold comparative advantage.
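
To make the arithmetic concrete, here is a minimal sketch with entirely hypothetical numbers: an AI tool that is faster at both survey drafting and interview work (an absolute advantage in both) still has the lower opportunity cost only for surveys, so dividing the work by comparative advantage yields more total output than having either party cover everything alone.

```python
# Toy comparative-advantage calculation with hypothetical hours per task.
# The AI is faster at both tasks (an absolute advantage), yet specializing
# by opportunity cost still produces more total output.
hours_per_unit = {
    "ai":         {"survey": 1, "interview": 4},
    "researcher": {"survey": 2, "interview": 5},
}
weekly_hours = 40  # hours each party can spend

def opportunity_cost(party, task, alternative):
    """Units of the alternative task foregone to produce one unit of task."""
    h = hours_per_unit[party]
    return h[task] / h[alternative]

print(opportunity_cost("ai", "survey", "interview"))          # 0.25 interviews
print(opportunity_cost("researcher", "survey", "interview"))  # 0.40 interviews

# The AI gives up fewer interviews per survey, so it specializes in surveys
# (40 per week); the researcher gives up fewer surveys per interview
# (2.5 vs. 4), so they specialize in interviews (8 per week). If the AI alone
# split its 40 hours across both tasks, it would produce only 20 surveys and
# 5 interviews -- less total output than the two specializing together.
```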

Focus energy on higher-risk tasks where AI performs poorly or where errors carry severe consequences. Shifting task loads toward areas where your edge generates disproportionate value creates more output while securing your role's relevance.

O-Ring Theory: Why Humans Become More Critical

Michael Kremer developed the O-ring theory to explain why skilled workers naturally gravitate toward teammates with similar skill levels rather than mixing with less capable colleagues. His theory revealed that teams remain only as strong as their weakest link, and that weakness exerts disproportionate drag on overall performance.

For example, when ten people collaborate on one outcome, a single person at 30% capacity pulls the team down far more than intuition suggests. You might expect a 10% efficiency loss, but reality often looks like a 50% to 75% loss, and for mission-critical tasks, one failure point sinks an entire project.
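
A rough way to see why the loss is so much larger than intuition suggests is Kremer's multiplicative production function, where team output scales with the product of individual quality levels rather than their average. The quality scores below are purely illustrative.

```python
# Minimal sketch of an O-ring style (multiplicative) production function.
# Output is the product of individual qualities, so one weak contributor
# drags the whole team down far more than averaging would predict.
from math import prod

def team_output(qualities):
    """Team output as a fraction of what an all-1.0 team would produce."""
    return prod(qualities)

team = [1.0] * 9 + [0.3]                 # ten people, one at 30% capacity
print(round(team_output(team), 2))       # 0.3  -> a 70% loss
print(round(sum(team) / len(team), 2))   # 0.93 -> the ~7% loss intuition expects
```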

As we automate tasks and build complex AI systems, humans remaining in those loops become more valuable, not less. Each human represents a potential failure point whose quality disproportionately affects outcomes. Product teams already experience this dynamic. When a PM, designer, researcher, content writer, and engineers collaborate, one weak link drags the project down substantially. As AI handles routine work, remaining human touchpoints grow increasingly critical to success.

Jevons Paradox: Why Efficiency Increases Demand

Perhaps the most counterintuitive reason for optimism comes from Jevons Paradox. William Stanley Jevons observed coal consumption increasing as steam engines became more efficient, contradicting expectations that efficiency would reduce consumption. The paradox requires three conditions: technological change increases efficiency, efficiency gains reduce prices for consumers, and reduced prices drastically increase the quantity demanded.

All three exist for UX research.

As AI makes research faster and cheaper, organizations that couldn't afford extensive programs suddenly can. Product managers who waited weeks get preliminary answers in days. Teams that ran quarterly studies now run monthly ones. Efficiency gains don't eliminate researchers; they unleash previously constrained demand.
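
As a toy illustration with hypothetical numbers: suppose AI assistance halves the researcher-hours a study requires, and the cheaper, faster turnaround unlocks enough latent demand that the team runs three times as many studies. Total research effort rises even though each study takes less work.

```python
# Hypothetical rebound-effect arithmetic: efficiency lowers the cost per
# study, latent demand more than makes up for it, and total research
# effort grows instead of shrinking.
hours_per_study_before, studies_before = 40, 4    # per quarter
hours_per_study_after,  studies_after  = 20, 12   # cheaper, so more demanded

print(hours_per_study_before * studies_before)    # 160 researcher-hours
print(hours_per_study_after * studies_after)      # 240 researcher-hours
```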

Historical Patterns Reveal Transformation

Economic theory aligns with what actually happened when automation reached other industries. Consider accounting. When bookkeeping became automated, accountants shifted toward strategic thinking, analysis, and client advisory work where they held comparative advantage. Bureau of Labor Statistics data confirms accounting employment has grown faster than average despite ongoing automation.

This pattern reveals a common thread across automation: labor shifts rather than vanishes, with new roles opening in higher-order thinking, strategic planning, and oversight. Demand for human expertise often grows stronger than before automation arrived.

There's a lot of unknown, there's a lot of risk in both directions. We always talk about the risk of replacing tasks with AI, but there's the opposite, too. There's the risk that we don't adopt automation or AI for tasks that we should.

Where to Focus Your Energy

Current AI capabilities remain limited where human researchers excel. AI struggles with unique design challenges, cultural nuance, novel user behaviors, and ethical considerations requiring value judgments.

Focus on work requiring interpretation of ambiguous findings, synthesis across disparate data sources, strategic recommendations tied to business outcomes, ethical judgment calls, and stakeholder relationship building. These capabilities represent where human researchers add irreplaceable value even as AI handles tactical execution.

Bank tellers, farmers, and accountants embraced evolution through technological change. UX researchers can follow the same path with confidence grounded in economic reality and historical precedent. The transformation ahead will reshape our work, but transformation and elimination represent fundamentally different outcomes.

Editor's note: This article is based on a webinar with Caleb from February 2026. Watch the full webinar recording on YouTube.
