The AI vs. human dilemma: How to make the right call for research

By Jack Wolstenholm

I’ve spent most of my career trying to understand how humans think, behave, and make decisions. I was trained as a psychologist, built and led research teams, and now focus on AI Insights at Figma where I spend my days experimenting, building, and challenging my own assumptions about how humans and intelligent systems collaborate.

That combination puts me at the center of a tension a lot of teams are feeling right now:

When do you lead with AI, and when do you keep things human?

People want simple rules. I wish I could give you those. But the truth is: this dilemma cuts deeper than tools or workflows. It hits identity, expertise, and our assumptions about what makes us uniquely human.

And if we don’t unpack that honestly, we’ll keep making the wrong calls.

Let me walk you through the framework I use. It’s the same one I teach teams, coach leaders on, and use myself when I’m deciding how to bring AI into research.

Start with the mirror, not the model

Every time a team “tries AI,” someone walks away disappointed. Evangelists expect a flawless superhuman colleague while skeptics expect an incompetent threat.

Both groups are wrong for the same reason:

AI is really a mirror, a reflection of our own decision-making, our own patterns of thinking. When AI fails us, it just reveals that we never understood our own processes in the first place.

This is the hard part most teams skip. You can’t integrate AI without first understanding the tacit knowledge, shortcuts, and invisible expertise sitting in your organization.

If you don’t know how your decisions get made, you can’t expect the alien intern (that brilliant but contextless system) to magically figure it out.

And yes, I call AI an alien intern because… well, it’s brilliant. It has tons of capabilities. But aliens have never been to Earth, so they don’t know what’s going on in your business.

That’s why, until you teach it context, it will look incompetent. Not because it is, but because your process isn’t visible enough for it to learn.

So before choosing human or AI, first ask:

  • Do we understand the decision-making behind this task?
  • Can we articulate it clearly enough to teach another human?
  • If not, are we projecting our own confusion onto the tool?

This mirror test prevents 90% of “AI vs. human” mistakes I see today.

Identity threat is driving more decisions than capability

When people resist AI, they often defend that resistance with logic:

“It’s biased.”

“It doesn’t understand context.”

“It lacks taste.”

Sometimes that’s true, sometimes it’s not. Often, it’s identity. People embrace AI when it empowers them without threatening their identity and resist when they feel their expertise is being undermined.

Whichever side you’re on, AI forces us to make our thinking visible. And for many experts, that’s deeply uncomfortable. So before you conclude “AI isn’t ready,” pause long enough to ask:

  • Is the limitation real?
  • Or do I feel exposed?

This is one reason I push people (and my own daughter) to explain their thinking out loud. It’s the only way to see what’s tacit, vague, or assumed.

Because thriving with AI requires something researchers haven’t always been trained to do:

Show your work, not just the final insight.

A decision framework: When to choose AI-first vs. human-first

Here’s the mental model I use. It’s simple, practical, and grounded in how teams work.

Choose AI-first when:

  1. You need scale, speed, or iteration. AI is phenomenal at producing 30 prototypes in two seconds or summarizing 300 interview transcripts without burning out your team.
  2. The stakes are reversible. If the decision is not a “one-way door,” AI gives you cheap optionality.
  3. You’re exploring or improving your approach. My take on synthetic users sums it up: it’s a great tool to iterate on your research approach… but not a replacement for connecting with actual people.
  4. The work is structured, defined, and repeatable. Coding, sentiment analysis, SQL queries, survey design scaffolding… AI’s already a powerful partner here.

Choose human-first when:

  1. The outcome requires judgment, taste, or moral weight. For now (and this window may be shrinking) humans remain the arbiters of meaning.
  2. Context is complex, messy, or culturally sensitive. AI will get better, but it still can’t see the nuanced organizational realities researchers swim in daily.
  3. Participants expect relational cues. A high-stakes interview with a VC is not the same as usability testing an onboarding flow.
  4. The call requires deep empathy or emotional nuance. AI will eventually catch up. But right now, emotional inference is still on shaky ground.

The main goal is to choose the right cognitive signature for the task.
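The framework above can be sketched as a simple decision helper. This is a toy illustration, not a prescription from the article: the task attributes and the tie-breaking order are my own assumptions about how the criteria might be weighed.

```python
# A toy sketch of the AI-first vs. human-first heuristic.
# Field names and the rule ordering are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    needs_scale_or_iteration: bool   # e.g., 300 transcripts to summarize
    reversible: bool                 # not a "one-way door" decision
    structured_and_repeatable: bool  # coding, SQL, survey scaffolding
    requires_judgment_or_ethics: bool
    culturally_sensitive_context: bool
    relational_stakes: bool          # e.g., a high-stakes VC interview

def choose_mode(task: Task) -> str:
    # Human-first criteria win ties: judgment, context, and relationships
    # stay human even when AI could technically do the work.
    if (task.requires_judgment_or_ethics
            or task.culturally_sensitive_context
            or task.relational_stakes):
        return "human-first"
    if (task.needs_scale_or_iteration
            or task.structured_and_repeatable) and task.reversible:
        return "AI-first"
    return "human-first"  # default to human when in doubt
```

Note the deliberate asymmetry: any single human-first signal overrides all the AI-first ones, which mirrors the idea that reversibility and scale only matter once judgment and stakes are cleared.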

Breaking default patterns in AI adoption

Most teams drift into AI decisions. One group goes all-in. Another dips a toe in. Both are operating on autopilot.

Not sure what the middle ground is? Ask:

What problem were we originally trying to solve?

Because the “AI is amazing” camp and the “AI is unethical” camp share something surprising:

Both groups are making fundamental errors in how they’re treating AI.

The first group treats it like a magical colleague, the second treats it like a hostile one.

It’s neither.

AI is a tool of context. If you haven’t done the work to understand your own, you’re not ready for integration.

Where humans stay non-negotiable

One question I get constantly is this: What will remain uniquely human?

I wish I had a clean answer, but I don’t.

I’m actually not quite sure anymore what’s uniquely human. I can tell you what I think is uniquely human now, but I don’t think it will necessarily stay that way.

But here’s where I believe humans hold the line, at least for the foreseeable horizon:

  • Taste and curation
  • Ethical discernment
  • Meaning-making
  • Prioritization
  • Wisdom

As for my boldest prediction, I think the scarce resource ceases to be information and becomes wisdom about what matters. That wisdom might be the final stronghold of humanity as humans become the arbiters of what moves forward.

Real-world use cases: Moderation, quant, fraud & the messy middle

Because this article is meant for practitioners, let me be concrete about some of AI’s roles in quant research, moderation, and data fraud.

AI moderation

I’m bullish. Not because it’s perfect (it’s not yet) but because the vector is strong.

AI moderators are infinitely patient. They don’t get tired and they don’t unconsciously lead participants, but they still miss context. That’s why the right framing is:

AI = partner. Human = orchestrator.

Quant research

AI is already a powerful co-pilot for:

  • Building surveys
  • Choosing scales
  • Writing SQL
  • Summarizing logs
  • Accelerating analysis
  • Testing scenarios

But, and this is important, I wouldn’t rely on AI for everything. I view AI as a partner, not an equal.

Data fraud

Fraud is exploding because AI makes it easy. But AI will also be the thing that helps us detect it through:

  • Voice-based screeners
  • Articulation analysis
  • Multimodal cues
  • Pattern detection
  • Contextual validation
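To make “pattern detection” concrete, here is a minimal sketch of a fraud screen that flags duplicated open-ended answers and implausibly fast completion times, two common bot signatures. The field names (`id`, `open_end`, `seconds`) and the 30-second threshold are illustrative assumptions, not a real product’s API.

```python
# Minimal sketch of pattern-based survey fraud screening.
# Response schema and threshold are illustrative assumptions.
from collections import Counter

def flag_suspicious(responses, min_seconds=30):
    """Flag respondents whose open-end duplicates another respondent's,
    or whose completion time is implausibly fast."""
    text_counts = Counter(r["open_end"].strip().lower() for r in responses)
    flags = []
    for r in responses:
        reasons = []
        if text_counts[r["open_end"].strip().lower()] > 1:
            reasons.append("duplicate open-end")
        if r["seconds"] < min_seconds:
            reasons.append("too fast")
        if reasons:
            flags.append((r["id"], reasons))
    return flags
```

Real screening pipelines layer many more signals (voice, articulation, multimodal cues), but even a check this simple catches copy-pasted bot panels.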

We’re heading into an arms race, but one where researchers have leverage if we stay curious and adaptive.

The call you make counts – because your role is changing

As researchers, we often assume our value lies in seeing what others don’t.

But in an AI-saturated world, that changes. Our role becomes:

  • Teaching models how to think
  • Curating meaning from unlimited options
  • Bringing wisdom to the table
  • Protecting the human non-negotiables

That requires courage, clarity, and a willingness to step into the arena even when the tech isn’t perfect.

With that in mind, be an early adopter so you can influence how these tools are developed. That gives you a voice you wouldn’t otherwise have.

AI isn’t coming for research but rather for the parts of research we haven’t been honest about.

If you’re wrestling with when to choose humans, when to choose AI, or how to build systems that combine both intelligently, reach out. This is the puzzle I’m obsessed with.

Editor’s note: This article is based on a webinar with Noam from August 2025. Watch the full webinar recording on YouTube.

Jack is the Content Marketing Lead at Great Question, the all-in-one UX research platform built for the enterprise. Previously, he led content marketing and strategy as the first hire at two insurtech startups, Breeze and LeverageRx. He lives in Omaha, Nebraska.
