Recruitment & Admin: The 8 Pillars, 8 Years Later

By Jack Wolstenholm | Published October 17, 2025

When the ResearchOps community first defined the 8 Pillars of User Research, it gave a name to a problem every researcher knew intimately.

Recruitment and admin, for example, was a "massive time suck." 

In an era of smaller teams and immense pressure to "do more with less," this logistical burden has only grown heavier. Now, the explosion of AI in user research has arrived as both a potential savior and a new source of complexity.

To help navigate this new landscape, we brought together a panel of industry leaders: Harri Thomas, Chief of Staff at Great Question; Jenni Darroch from Atlassian; and Holly Cole, Executive Director of the ResearchOps Community. 

This article recaps their key insights on the evolution of participant recruitment and how AI is reshaping one of research's most foundational (and frustrating) pillars.

The "admin tax" is getting automated (finally)

For every hour you spend in an insightful user interview, you’ve likely spent three more on the logistical grind to make it happen. You’re navigating time zones, sending endless scheduling polls, tracking down consent forms, and manually processing gift card codes.

This is the "admin tax"—the relentless operational overhead that consumes time, your most valuable resource.

How AI is clearing the logistical hurdles

Industry studies consistently show that researchers spend up to 40% of a project’s timeline not on research, but on logistics. During our panel, Jenni spoke to this universal friction, highlighting the sheer volume of coordination required to stand up even a simple study. The panel agreed that this is precisely where AI is making its first, and most welcome, impact.

AI is giving researchers back their time

The consensus was clear—the low-judgment, repetitive tasks that cause the most burnout are the perfect candidates for automation. AI-powered tools are now capable of handling the entire scheduling process and automating the distribution of research incentives, turning hours of manual work into a process that runs in the background. 

It's about finally freeing you up to do the work that only you can do, not replacing you.

Recruitment's new challenges in the age of AI

The panel was optimistic about AI's potential to accelerate participant recruitment. The conversation highlighted how AI models can help you screen and segment your own customer panel faster than a human ever could.

This allows you to quickly identify and reach out to the right users for a study, dramatically shortening your time-to-insight.

The rise of the "generative AI participant"

But the discussion quickly turned to a more cutting-edge threat: the "generative AI participant." The panel addressed the emerging problem of individuals who use tools like ChatGPT to craft polished screener survey responses.

They look like your ideal participant on paper, but provide shallow, inauthentic feedback during the session, rendering your hard work useless.

Related read: How to write a great screener survey

Moving from simple screeners to reputation scoring

The industry must evolve beyond simple screeners to combat this. Harri explained that the most advanced platforms are now developing Participant Reputation Scoring, where AI looks at a participant's entire history, not just their answers to one survey. It tracks their no-show rate and behavioral flags to build a score that reflects their reliability and authenticity.

This allows you to filter not just for participants who say they are a fit, but for those with a proven track record of providing high-quality, human feedback.
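To make the idea concrete, here is a minimal sketch of how a reputation score like the one Harri described might combine a no-show rate with behavioral flags. The field names, weights, and thresholds are entirely illustrative assumptions, not the scoring model of any real platform:

```python
from dataclasses import dataclass

@dataclass
class ParticipantHistory:
    """Hypothetical participant record; all fields are illustrative."""
    sessions_invited: int
    no_shows: int
    behavioral_flags: int  # e.g., suspected AI-written screener answers

def reputation_score(h: ParticipantHistory) -> float:
    """Toy reliability score in [0, 1]: start at 1.0, then penalize
    the no-show rate and any behavioral flags."""
    if h.sessions_invited == 0:
        return 0.5  # no history yet: neutral prior
    no_show_rate = h.no_shows / h.sessions_invited
    flag_penalty = min(h.behavioral_flags * 0.1, 0.5)
    score = 1.0 - 0.5 * no_show_rate - flag_penalty
    return max(0.0, min(1.0, score))

# Filter a panel down to participants with a proven track record.
panel = [
    ParticipantHistory(sessions_invited=10, no_shows=0, behavioral_flags=0),
    ParticipantHistory(sessions_invited=10, no_shows=4, behavioral_flags=2),
    ParticipantHistory(sessions_invited=0, no_shows=0, behavioral_flags=0),
]
reliable = [h for h in panel if reputation_score(h) >= 0.7]
```

The key design choice, mirroring the panel's point, is that the score reflects demonstrated behavior over time rather than the answers to any single screener.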

The new role of ResearchOps

The conversation naturally led to a question on everyone's mind: If AI automates the work that has historically defined the Research Operations role, what does that mean for our jobs? The panel's answer was both clear and empowering—the role isn't disappearing, it's elevating.

Holly framed this evolution as a strategic Admin-to-Advocacy ratio shift. For years, the majority of a ResearchOps professional's time has been consumed by tactical "Admin" work: coordinating schedules, managing tools, putting out fires.

AI is now poised to handle a significant portion of that administrative load. This doesn't make the ResearchOps role redundant. It frees you up to invest your time in the high-value "Advocacy" work that the business desperately needs, like building research enablement programs or partnering with customer success teams to create a holistic voice of the customer program.

AIR: A framework for using AI ethically 

The rush to adopt AI in user research comes with serious risks.

As a research leader, you are now on the front lines of navigating the ethical tightrope of algorithmic bias, data privacy, and insight validity.

The conversation pointed toward the need for a simple framework for evaluating any new AI capability. Enter the "AIR" Framework: automation, insight, and responsibility.

🟢 Automation: The green light for efficiency

This is the safest use case. Ask yourself: "Does this tool automate a low-judgment, repetitive task?" Think of scheduling, transcription, or incentive payments. These are the areas where AI is a clear win, reducing the "admin tax" and increasing your team's operational efficiency.

🟡 Insight: The yellow light for analysis

This is where you must apply caution. Ask yourself: "Does this tool attempt to generate high-judgment insights or explain the 'why'?" AI-powered summary tools should be treated as an assistant, not an analyst. All AI-generated insights must be rigorously reviewed and validated by a human researcher.

🔴 Responsibility: The red light for data privacy

This is the hard stop that requires deep scrutiny. Does this tool handle my participants' Personally Identifiable Information (PII)? Any tool that touches this sensitive information must be thoroughly vetted by your legal and security teams. Never feed participant PII into a public AI model.
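The three AIR checks above can be read as a triage in order of severity: PII first, insight generation second, pure automation last. Here is a minimal sketch of that decision logic; the boolean questions and the default-to-caution rule are one reading of the framework, not an official evaluation tool:

```python
from enum import Enum

class AirRating(Enum):
    GREEN = "automation"      # low-judgment, repetitive task: clear win
    YELLOW = "insight"        # generates insights: needs human review
    RED = "responsibility"    # touches participant PII: legal/security vetting

def air_triage(automates_repetitive_task: bool,
               generates_insights: bool,
               handles_pii: bool) -> AirRating:
    """Apply the AIR checks in order of severity."""
    if handles_pii:
        return AirRating.RED       # hard stop: deep scrutiny required
    if generates_insights:
        return AirRating.YELLOW    # assistant, not analyst
    if automates_repetitive_task:
        return AirRating.GREEN
    return AirRating.YELLOW        # unclear cases default to caution

# A scheduling bot that never sees PII and generates no insights
# would land on the green light.
rating = air_triage(automates_repetitive_task=True,
                    generates_insights=False,
                    handles_pii=False)
```

Note that the checks are ordered so the most serious concern always wins: a tool that both schedules sessions and stores participant PII is a red light, not a green one.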

A look ahead

The Recruitment & Admin pillar still stands as a core component of any successful research practice, but its foundation is being rebuilt with AI. The challenges of finding quality participants and managing the complex logistics of research remain. However, as our panel made clear, the tools and the very skillsets required to meet those challenges are changing at a rapid pace.

The discussion highlighted a clear path forward for every research leader.

The goal is to thoughtfully leverage AI to handle the mechanics of the research process. Doing so allows you, your researchers, and your ReOps team to focus on the uniquely human and most valuable parts of the craft: empathy, deep understanding, and strategic insight.

As your team explores how to integrate these new capabilities, we're here to help. See how Great Question is adding powerful AI features—like AI moderated interviews—to our all-in-one UX research platform built for the enterprise.

Jack is the Content Marketing Lead at Great Question, the all-in-one UX research platform built for the enterprise. Previously, he led content marketing and strategy as the first hire at two insurtech startups, Breeze and LeverageRx. He lives in Omaha, Nebraska.
