Scaling research that rocks đŸ€Ÿ (or at least isn’t rubbish)

With Kate Towsey, Research Operations Manager at Atlassian, author of Research That Scales: The Research Operations Handbook, and keynote speaker at People Who Do Research 2023

By Jack Wolstenholm
February 16, 2023
“Scale. It’s the business word of the decade.”

But what does it all mean? Kate Towsey – Research Operations Manager at Atlassian, author of Research That Scales: The Research Operations Handbook, and creator of the phrase “People Who Do Research” – has some ideas, which she shared in her keynote presentation at the inaugural People Who Do Research virtual conference hosted by Great Question on Wednesday, February 8, 2023.

The word ‘scale’ has become business shorthand for companies that grow or expand in a proportional, profitable way. Everyone wants to scale everything, but it’s not so cut-and-dried. Towsey sees a particular problem when the word is applied to the field of research, where “there’s not a lot that’s absolutely constant, in ratio, or purely proportional.”

In this session, Towsey shares how her team of 10 has successfully scaled research operations to support 500 people at Atlassian. Listen to Towsey’s talk or read our recap below.


The problem with scaling research

“Research is not cookie-cutter work. It isn’t in any way similar to making millions of chocolate bars, or hundreds of thousands of the same motor car. Every single study you do is unique.”

That’s the problem with trying to scale research. Variation rules, and that doesn’t scale. Interviews, usability testing, ethnographic research, longitudinal studies, and quantitative research are just a few of the different methods that researchers run, all of which require their own specific types of operations, tools, and approaches in order to be effective.

“Research is less of the cookie-cutter manufacturing line, and more like an assortment of confectioneries. It runs on variety.”

This variety is on display not just in UX research methods, but also in the various types of people who do research and the people who need research – a careful distinction Towsey breaks down in detail.

People Who Do Research vs. People Who Need Research

People Who Do Research (PWDR) consist of UX researchers, product managers, designers, and anyone else who conducts research to better understand their customers so they can do their jobs well. PWDR need research operations to find out new information. This includes access to participants, tools and vendors, training, support, ethics and privacy, and funding.

But there’s another important group that PWDR don’t think about enough – People Who Need Research (PWNR). These are the product managers, executives, designers, and others who need research operations to understand what other people in the organization already know.

PWNR need access to PWDR, whether in-house or at an agency. Ideally, they’re already observing research that’s going on in the organization, and consuming the reports and videos that follow. Hopefully, they’re engaging through meetings and presentations, and able to find the insights they need in their research repository or by networking.

Understanding the relationship between PWDR and PWNR helps explain the role of research operations.

“It’s a combination of two things: giving people ways to find out and giving people ways to understand.”

Levels of rigor in research, from chatting to academia

“You can’t talk about People Who Do Research without talking about democratization.”

This sheds light on a key variable at play – rigor. More specifically, the various levels of rigor that are required to meet the standards of quality research, depending on the person who is doing the research. Towsey shares examples of research being conducted with different levels of rigor by working her way along a spectrum, from chatting to academia.

  • There’s the product manager who chats with a customer once a week over coffee.
  • There’s the product designer who recruits a cohort of participants and plans a study, but isn’t rigorous on analysis.
  • There’s the researcher (or diligent product manager or designer) who goes to a great level of rigor with applied research and analysis.
  • Then there are academic levels of research rigor, which may go through an independent review board before ever being deemed worthy of publication.

Given the variety in rigor applied by PWDR in any given organization, research operations requires variety, too. It requires that you account for various types of people using various methods to research various products, all of which have unique needs, speeds, and levels of rigor.

An “assortment of confectioneries” like this is hard to scale. Whether you’re a team of one or manage a team of 10 like Towsey, you’ll need more resources to deliver high-quality research to everyone. As the number of PWDR and PWNR increases, so does the assortment of research that’s required – a reality research operations can’t survive without the right strategy.

The Atlassian Insights Flywheel

Towsey leads a team of 10 research operations professionals responsible for looking after 500 people who do research. They know a thing or two about scale – and where it falls down.

To operationalize the ways people use research to find out and to understand, Towsey believes these worlds should be brought together with a cohesive model. At Atlassian, they call it The Insights Flywheel.

What a learning organization should look like

The inner ring of The Insights Flywheel consists of different ways of learning, while the outer rings are made up of the things Towsey and her team can do to support people in engaging with each of those ways of learning. All of it revolves around Insights in the center.

“This is what a learning organization should look like. It’s what research is all about.”

Towsey breaks the flywheel down by quadrant to explain.

  1. Original research: ReOps helps people get access to a full-time researcher or hire a contractor.
  2. Existing insights: ReOps helps people search their repository and sign up to receive new alerts based on their research needs.
  3. Empathy building: ReOps helps people develop weekly customer touchpoints and observe research sessions.
  4. DIY research: ReOps helps people get the tools and coaching they need to succeed.

From there, Towsey highlights the primary use cases of each quadrant, with a specific focus on DIY research.

Democratization through DIY research

DIY research is where many research democratization efforts begin, including those of Towsey’s team at Atlassian. It’s the quadrant of the flywheel where the necessary tools and systems are put in place, and people are coached and trained on how to use them.

“We give them operations to enable them to do great research, or at least that isn’t rubbish, in ways that are really efficient and fast enough for them or slow enough for them depending on their needs.”

Towsey then slices the flywheel into “ways to find out” and “ways to understand” – the two key components of research operations she touched on earlier. The top half of the flywheel consists of ways to find out new things that the organization hasn’t learned yet. The bottom half consists of ways to understand what the organization already knows from past research.


To keep the flywheel running smoothly, constant communication is required between these two halves. But even then, The Insights Flywheel presented so far still has its limitations for scaling research, because, as Towsey reminds us, “variation rules.”

That’s why the next part is so important.

Start with standard-issue, then build on top

The Insights Flywheel works best if you start with a cookie. A standard-issue research ops cookie.

“It’s vanilla. Not bad, but not exciting. An excellent baseline for lots of other types of flavors. If you’re starving, a vanilla cookie is better than nothing.”

Standard-issue research operations sets a basic level of requirements and expectations for research in an organization, from participants and consent forms to training and tools. It’s also highly scalable.

“As a team of 10, we’ve scaled standard issue to 500 Atlassians. And it works ok. It’s a good starting point. No one is ever going to starve.”

But the larger the team, the more specialists you’ll work with, each of whom has their own particular needs. If you ignore these needs over time in favor of scalability, the cookie will crumble.


Standard-issue operations will leave many people without the resources they need to conduct specific, important projects, such as enterprise, churn, or accessibility research.

“Standard issue is a great baseline for operations to start with. It’s not rubbish. You can build things on top of it and it’s highly scalable. You just have to remember that a vanilla cookie won’t satisfy everyone.”

If Towsey had to start all over again, this is where she’d begin – with one caveat. One of the mistakes Towsey and her team made was presenting their standard-issue operations to the organization without sharing their plans to build on top of it and meet all the other needs people were asking for.

“Make your standard issue as efficient as possible so you buy yourself space to start looking at these alternative important needs. It’s up to you to look across your organization to determine the standard issue research operations kit that people need to get off the ground and not be hungry. And then later, build on it to help them develop research that’s great.”

The baseline of tools, structures, handbooks, and training you build must be strong enough and flexible enough that you can later build on top of them to meet the specialized needs of your organization.

“If you don’t think about diversity up front, the cookie will crumble.”

Key takeaways

To scale research that rocks đŸ€Ÿ (or at least isn’t rubbish), Towsey recommends:

  • Setting a baseline with standard-issue research operations
  • Figuring out your most important specializations (enterprise, churn, accessibility, etc.)
  • Working out what you’ve already got and what you can specialize to meet the research needs of your organization

Because variation rules, and vanilla cookies scale.


You can follow Kate on Twitter or visit her website.

Jack is the Content Marketing Lead at Great Question, the end-to-end UX research platform for customer-centric teams. Previously, he led content marketing and strategy as the first hire at two insurtech startups, Breeze and LeverageRx. He lives in Omaha, Nebraska.
