
I’ve spent most of my career trying to understand how humans think, behave, and make decisions. I was trained as a psychologist, built and led research teams, and now focus on AI Insights at Figma where I spend my days experimenting, building, and challenging my own assumptions about how humans and intelligent systems collaborate.
That combination puts me at the center of a tension a lot of teams are feeling right now:
When do you lead with AI, and when do you keep things human?
People want simple rules. I wish I could give you those. But the truth is: this dilemma cuts deeper than tools or workflows. It hits identity, expertise, and our assumptions about what makes us uniquely human.
And if we don’t unpack that honestly, we’ll keep making the wrong calls.
Let me walk you through the framework I use. It’s the same one I teach teams, coach leaders on, and use myself when I’m deciding how to bring AI into research.
Every time a team “tries AI,” someone walks away disappointed. Evangelists expect a flawless superhuman colleague while skeptics expect an incompetent threat.
Both groups are wrong for the same reason:
AI is really a mirror, a reflection of our own decision-making, our own patterns of thinking. When AI fails us, it just reveals that we never understood our own processes in the first place.
This is the hard part most teams would rather skip. You can’t integrate AI without first understanding the tacit knowledge, shortcuts, and invisible expertise sitting in your organization.
If you don’t know how your decisions get made, you can’t expect the alien intern (that brilliant but contextless system) to magically figure it out.
And yes, I call AI an alien intern because… well, it’s brilliant. It has tons of capabilities. But this alien has never been on Earth, so it doesn’t know what’s going on in your business.
That’s why, until you teach it context, it will look incompetent. Not because it is, but because your process isn’t visible enough for it to learn.
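To make that concrete, here is a minimal sketch of what “teaching context” can look like in a prompt. Everything in it is a placeholder: `ask_model` stands in for whatever LLM client you use, and the briefing details are an invented example, not a real team’s rules.

```python
# A toy illustration of briefing the "alien intern" before asking for work.
# ask_model is a hypothetical stand-in for whatever LLM client you use.

def ask_model(prompt: str) -> str:
    """Placeholder: wire this up to your own model call."""
    raise NotImplementedError

# Contextless ask: the model has to guess what "good" means here.
bare_prompt = "Summarize the key themes in these interview notes."

# Context-rich ask: the tacit knowledge is made explicit up front.
briefing = (
    "You are assisting a UX research team at a B2B design-tool company.\n"
    "We call something a 'theme' only if 3+ participants raise it.\n"
    "Stakeholders care most about onboarding friction and pricing.\n"
    "Flag anything that contradicts our current roadmap assumptions.\n"
)
grounded_prompt = briefing + "\nSummarize the key themes in these interview notes."
```

Same request, very different odds of a useful answer. The second prompt is just the mirror work written down.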
So before choosing human or AI, first ask: do we actually understand how this decision gets made today?
This mirror test prevents 90% of the “AI vs. human” mistakes I see today.
When people resist AI, they often defend it with logic:
“It’s biased.”
“It doesn’t understand context.”
“It lacks taste.”
Sometimes that’s true, sometimes it’s not. Often, it’s identity. People embrace AI when it empowers them without threatening their identity, and resist it when they feel their expertise is being undermined.
Whichever side you’re on, AI forces us to make our thinking visible. And for many experts, that’s deeply uncomfortable. So before you conclude “AI isn’t ready,” pause long enough to ask: is it really the technology that’s falling short, or the discomfort of making your expertise visible?
This is one reason I push people (and my own daughter) to explain their thinking out loud. It’s the only way to see what’s tacit, vague, or assumed.
Because thriving with AI requires something researchers haven’t always been trained to do:
Show your work, not just the final insight.
Here’s the mental model I use. It’s simple, practical, and grounded in how teams work.
The main goal is to choose the right cognitive signature for the task.
Most teams drift into AI decisions. One group goes all-in. Another barely dips a toe in. Both are operating on autopilot.
Not sure what the middle ground is? Ask which parts of the task genuinely need human judgment, and which can be delegated.
Because the “AI is amazing” camp and the “AI is unethical” camp share something surprising:
Both groups are making fundamental errors in how they’re treating AI.
The first group treats it like a magical colleague, the second treats it like a hostile one.
It’s neither.
AI is a tool of context. If you haven’t done the work to understand your own, you’re not ready for integration.
One question I get constantly is this: What will remain uniquely human?
I wish I had a clean answer, but I don’t.
I can tell you what I think is uniquely human right now, but I don’t think it will necessarily stay that way.
But I do believe there are places where humans hold the line, at least for the foreseeable horizon.
As for my boldest prediction, I think the scarce resource ceases to be information and becomes wisdom about what matters. That wisdom might be the final stronghold of humanity as humans become the arbiters of what moves forward.
Because this article is meant for practitioners, let me be concrete about some of AI’s roles in quant research, moderation, and data fraud.
I’m bullish. Not because it’s perfect (it’s not yet) but because the trajectory is strong.
AI moderators are infinitely patient. They don’t get tired and they don’t unconsciously lead participants, but they still miss context. That’s why the right framing is:
AI = partner. Human = orchestrator.
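Here is a sketch of what that orchestration can look like in practice. The `draft_follow_up` helper is hypothetical, standing in for whatever moderation model you use; the point is where the approval step lives.

```python
# Human-in-the-loop moderation sketch: the AI drafts, the human decides.
# draft_follow_up is a hypothetical stand-in for a model call.

def draft_follow_up(transcript: list[str]) -> str:
    """Placeholder: plug in your own AI moderator here."""
    raise NotImplementedError

def next_question(transcript: list[str]) -> str:
    draft = draft_follow_up(transcript)
    print(f"AI-drafted probe: {draft}")
    edited = input("Press Enter to send, or type a replacement: ").strip()
    # The human can accept, rewrite, or redirect entirely.
    # Nothing reaches the participant without review.
    return edited or draft
```

The loop itself isn’t the point. What matters is that the orchestrating judgment stays with the human.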
AI is already a powerful co-pilot for quantitative work.
But – and this is important – I wouldn’t rely on AI for everything. I view AI as a partner, not an equal.
Fraud is exploding because AI makes it easy. But AI will also be the thing that helps us detect it.
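What does detection look like in practice? Here is a rough sketch of the rule-based checks a fraud pipeline might start from, before any model gets involved. The column names and thresholds are assumptions for illustration, not a standard.

```python
# Heuristic fraud checks on survey responses: duplicate open-ends,
# implausible speed, and straight-lining. Assumed columns: respondent_id,
# seconds_to_complete, open_text, and q1..q5 Likert items.
import pandas as pd

def flag_suspects(df: pd.DataFrame) -> pd.DataFrame:
    likert = [c for c in df.columns if c.startswith("q")]

    # 1. Near-duplicate open-ended answers (copy-pasted or bot-generated).
    normalized = (
        df["open_text"].str.lower().str.replace(r"\W+", " ", regex=True).str.strip()
    )
    df["dup_text"] = normalized.duplicated(keep=False)

    # 2. Speeders: finishing far faster than the typical respondent.
    df["speeder"] = df["seconds_to_complete"] < df["seconds_to_complete"].median() * 0.3

    # 3. Straight-lining: zero variance across the Likert items.
    df["straight_line"] = df[likert].nunique(axis=1) == 1

    df["suspect"] = df[["dup_text", "speeder", "straight_line"]].any(axis=1)
    return df
```

Model-based detection can layer on top of flags like these, but even simple checks catch a surprising share of low-effort fraud.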
We’re heading into an arms race, but one where researchers have leverage if we stay curious and adaptive.
As researchers, we often assume our value lies in seeing what others don’t see.
But in an AI-saturated world, that changes. Our role becomes deciding what matters, and having the conviction to act on it.
That requires courage, clarity, and a willingness to step into the arena even when the tech isn’t perfect.
With that in mind, be an early adopter so you can influence how these tools are developed. That gives you a voice you wouldn’t otherwise have.
AI isn’t coming for research but rather for the parts of research we haven’t been honest about.
If you’re wrestling with when to choose humans, when to choose AI, or how to build systems that combine both intelligently, reach out. This is the puzzle I’m obsessed with.
Editor’s note: This article is based on a webinar with Noam from August 2025. Watch the full webinar recording on YouTube.
Jack is the Content Marketing Lead at Great Question, the all-in-one UX research platform built for the enterprise. Previously, he led content marketing and strategy as the first hire at two insurtech startups, Breeze and LeverageRx. He lives in Omaha, Nebraska.