The 8 Pillars, 8 Years Later: People & Environment

By Jack Wolstenholm
Published August 8, 2025

In 2017, the ResearchOps Community published The Eight Pillars of User Research — a seminal work that's helped countless teams build stronger research practices across the globe.

But eight years is a long time.

Since then, layoffs have rocked the research profession, teams have been restructured, and AI is rewriting the way we all work. “Doing more with less” has become the norm.

Recently, we partnered with the ResearchOps Community to revisit those pillars through a new lens. This 5-part series explores what’s changed (and what hasn’t) in how research is practiced and systemized today.

In Part 1, Emma Boulton, Holly Cole, and Benson Low revisited the two most human pillars: People and Environment.

Here’s what we took away.

The research environment has changed, and not just because of AI

The original Eight Pillars described Environment as the conditions that shape how research happens:

  • Organizational maturity
  • Internal politics
  • Access to labs
  • Levels of support

That definition still holds. But as the panel pointed out, a lot of those conditions have fundamentally shifted.

“Layoffs for a start,” said Emma Boulton, UX Research Leader at Waitrose & Partners and co-author of the original Eight Pillars. “And going from supporting larger research teams to smaller ones. Those have been the two key things that have changed for me personally.”

Holly Cole, Research Operations Consultant at ZeroDegrees and Executive Director of the ResearchOps Community, noted that physical usability labs, once a research staple, have largely disappeared, with a few exceptions for highly in-person workflows. Booking rooms has become its own challenge. And as hybrid becomes the new baseline, some teams are starting to bring back dedicated research spaces just to cut through the logistical noise.

Headcount and tools make a difference, but so do pace and pressure. More researchers are being asked to do more, with less support, under tighter timelines.

“I really do feel like business is trying to force the same quality of work that you get with a decently sized research team, by using AI,” said Cole. “And you run into problems.”

The tension between speed & quality

That pressure for speed came up again and again. And while AI was framed as a useful support tool, there was strong caution against letting it replace core research work, especially the kind that requires context, nuance, and ethical judgment.

Cole put it simply: “I think like so many people, I am very worried about the push by business to use AI for research in place of human-adjudicated research.”

We’ve seen the same pattern. AI is being explored for everything from note-taking to analysis to synthesis, and in many cases, it’s helping. But the risks grow when AI is expected to explain “why” people behave a certain way, or generate insights without oversight.

“We’ve been running experiments,” Boulton explained. “Using (AI) for assistant tasks like finding quotes to support synthesis, pulling together video clips. But when we’ve tried it for summarizing... it lacks the human quality. It lacks the nuance. It lacks the storytelling.”

The takeaway here: Use AI responsibly, and define where human judgment is non-negotiable.

People are still the core of the work

The People pillar — originally defined as the humans doing the research, supporting it, and building the community around it — feels more important than ever.

But the shape of those roles is changing.

“I’m probably now looking more than ever for mixed methods (researchers),” said Boulton. “And minimum proficiency with data.”

The panel also touched on the shift toward specialization, especially in freelance and consulting environments. Clients want experts in specific areas: survey design, tooling, knowledge management, research enablement.

That doesn’t mean generalists are out of place. But even they are expected to sharpen core skills and speak the language of business more fluently.

“Learn to prompt better,” said Cole. “And learn more about the business aspects – how research impacts business. Be able to talk about that eloquently. That’s going to serve you best as times change and as more companies lay people off.”

What AI can do (if used well)

Despite some concerns, there’s excitement, too. Boulton described recent internal experiments with prototyping:

“We created a prototype with Google Gemini last week. It was scary how fast it came together. But that also meant we could get early feedback on an idea without waiting for a designer to create a polished version.”

Other use cases included:

  • Prompting AI to sift through research repositories and identify relevant past work (see the sketch after this list)
  • Using AI to generate rough stimulus for discovery
  • Speeding up video synthesis
  • Drafting research summaries to be edited (not published) by humans
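
To make the first of those use cases concrete, here’s a minimal sketch of repository retrieval: embed short summaries of past studies, embed a new research question, and rank past work by similarity. It’s a sketch only: it assumes the OpenAI Python SDK with an API key in the environment, and the toy repository, model name, and scoring are illustrative stand-ins, not a recommended setup.

    import numpy as np
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Stand-in for a real research repository: one-line summaries of past studies.
    repository = [
        "Checkout usability study: shoppers abandoned carts at the address step",
        "Diary study on grocery substitutions: trust drops after a bad swap",
        "Pricing survey: willingness to pay for delivery slots varies by household",
    ]

    def embed(texts):
        """Return one embedding vector per input text."""
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([item.embedding for item in resp.data])

    repo_vectors = embed(repository)

    # A new research question, compared against everything already on file.
    question = "What do we already know about cart abandonment at checkout?"
    question_vector = embed([question])[0]

    # Cosine similarity: higher scores suggest more relevant past work.
    scores = repo_vectors @ question_vector / (
        np.linalg.norm(repo_vectors, axis=1) * np.linalg.norm(question_vector)
    )
    for score, summary in sorted(zip(scores, repository), reverse=True):
        print(f"{score:.2f}  {summary}")

In practice, a team would point something like this at its actual repository index, and a researcher would still judge which hits are genuinely relevant, which is exactly the human-in-the-loop point the panel kept returning to.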

But again, speed without care can lead to superficial outcomes.

“You’re working with something that, on many levels, is an idiot,” said Cole. “It needs very specific instructions. That’s really irritating. And we are not used to that.”

Which brings us to the other side of AI: the operational responsibilities.

Governance, training & ethical guardrails

Several questions during the Q&A focused on data privacy risks, hallucinations, and the unauthorized use of tools like ChatGPT.

The panel didn’t downplay the risks. Instead, they pointed to the need for clearer governance and better training, particularly for researchers outside of big companies with robust policies.

“We’re only able to use Google Gemini (at Waitrose) because we have a business agreement with Google,” Boulton explained. “If we put our company strategy into it, it’s not going to end up out there... We need to think carefully about what we’re putting into these different AI models.”

And more importantly: researchers and operations pros need to be involved in shaping those policies, not just abiding by them.

This might look like:

  • Setting up internal guidelines for AI usage
  • Running training on prompt hygiene, data handling, and consent
  • Working with privacy and infosec teams to map new workflows

Looking ahead

So, what happens to the 8 Pillars in the next 8 years?

No one on the panel thought the framework itself needed to change. As Boulton put it:

“These are really themes. I don’t think those things will necessarily change hugely... but who knows? The context and capability around them has.”

Cole added that the biggest shift might come from aligning more with product ops. As more product teams do their own research, ResearchOps will need to meet them in the middle while still advocating for rigor, depth, and user representation.

The pillars are holding. But the environment around them is shifting fast.

And it’s clear: this moment is an inflection point. Not a crisis, not a resolution, just a very real opportunity to reimagine how we support research, enable people, and use new tools without losing what makes this craft worth defending.

Part 2 of the series will cover the Scope and Organizational Context pillars.

In the meantime, if this discussion sparked anything you’d like to write about or speak to, reach out to the ResearchOps Community. And if you’re exploring how AI can support your team’s research practice, we’re happy to share what we’re building at Great Question.

Jack is the Content Marketing Lead at Great Question, the all-in-one UX research platform built for the enterprise. Previously, he led content marketing and strategy as the first hire at two insurtech startups, Breeze and LeverageRx. He lives in Omaha, Nebraska.
