
When the ResearchOps Community named the 8 Pillars of User Research, it gave shape to a messy truth:
Research only works when knowledge can be trusted, found, and used.
Data and knowledge management sits at the center. It's where years of studies, notes, and clips become shared memory. It's also where things break. Teams set up a repository, but insights stall in folders. AI speeds things up while adding new risks. Smaller teams feel the squeeze to do more with less.
To ground this moment, we recently held a webinar with two guests, Johanna and Jake.
This recap covers their discussion about how data and knowledge management has evolved, why "findable" isn't enough, where AI helps and hurts, and the moves that get insights into planning.
Years ago, many teams felt they had nothing. As Johanna said: "Even if it's just a Google Drive folder, you always have something to begin with."
Today, most teams have moved beyond that starting point. Repository tools are mainstream. Studies are documented. The pillar has visibility. But results remain mixed. Storage improved while use still lags. As Johanna observed:
"It's still not quite solved, but at least people are making it more visible for sure and seeing it as it is. It's like a key pillar of a research practice as well."
The work has fundamentally shifted from standing something up to making it useful in daily decisions. This shift reveals a deeper challenge: the gap between having insights and using them.
Storage is solved. Discovery is better. But insights still don't reliably change decisions. Jake called out the trap many teams fall into: "What I’ve seen is findability is too often seen as the end goal."
Search helps people locate studies. It doesn't change product decisions on its own. Real value emerges when knowledge flows into planning, prioritization, and day-to-day calls. It shapes what gets built, not just what gets found.
This means working backward from decision moments: naming the forums you want to influence, understanding who needs what and when, then building for those specific handoffs rather than generic "discoverability."
Related read: The research repository paradox by Thomas Stokes
This shift toward integration is now colliding with AI capabilities. The demos look easy: just drop everything into a tool, add AI search, ask questions in natural language. In practice, adoption often stalls.
Research questions are messy. A one-line chatbot reply rarely settles a product decision. There's also the social friction: asking questions in a public channel feels performative. And tools without supporting rituals or ongoing coaching fade back into old habits.
Johanna warned about a perception problem AI creates:
"It's making it look so easy. Like we don't even have to do anything. We just have to put all our stuff somewhere and then the AI is going to sort it out."
The illusion of effortlessness can actually devalue the knowledge management work itself. It makes teams underestimate the curation, data governance, and quality control still required to keep outputs consistent and trustworthy.
There's also a technical concern. Jake flagged a stability issue to watch: "Your implementation of AI discovery on top of your research content could present people with different answers every time."
When the same question returns different answers across queries, you lose the alignment that makes research valuable. One of research's core contributions is identifying customer problems and building shared understanding around them. Jake emphasized:
"Make sure that researchers’ language, their carefully articulated insights, remains central to the conversation, and they're durable over time."
AI should amplify researcher expertise, not replace the careful articulation that makes insights actionable.
Related read: Building an insight ecosystem with AI & service design by Dhairya Sathvara
These AI challenges point back to a persistent truth: tools don't onboard themselves. Repositories fail when rollouts are rushed, training happens once, and no one owns the follow-through. The fix isn't more features. It's better change management.
Start small. Onboard a pilot group. Watch how they actually search, save, and share. Remove real friction, not assumed friction. Build simple rituals that fit existing workflows: a weekly "what's new" summary, a short show-and-tell in planning sessions, nudges to cite insight summaries in decision docs. Keep coaching as the system evolves. Every new hire needs the same support.
Johanna's approach:
"I always recommend having phased rollouts for things like repositories."
This extends timelines, which can feel risky when budgets are tight and stakeholders want quick wins. But rushing to full rollout often means lower quality data, confused users, and ultimately, a tool that gets abandoned. Phased adoption protects the long-term value you're building.
Related read: Change management for UX research teams by Johanna Jagow
This patient, iterative approach reframes what research operations work actually is. It's system design, not vendor management. Use what the org already has when it lowers barriers (Confluence, SharePoint, Drive). Layer in low-code solutions or vendor features only where they truly pay off. Pilot tiny experiments. Ship fast. Observe what sticks. Adjust.
Johanna has embraced this identity shift, framing researchers as builders:
"I'm here to build systems, and I'm here to find creative solutions… If I just came in and were saying, for example, ‘I'm building a user panel. I'm just doing it the same way every single time.’ I'd be out of a job so fast because people wouldn’t find it helpful."
This builder mindset recognizes that context varies wildly. What works in a 50-person startup won't work in a regulated financial services company. What works for a team running 10 studies a quarter won't work for one running 100. Jake posed the evolution this way: "Is the research operations role in some cases moving into system definers and builders for these highly tailored automations and use cases?"
The answer increasingly appears to be yes, especially as no-code and AI tools make custom solutions more accessible. But this builder role requires sustained attention, not just launch energy.
Launch isn't the finish line. Many teams roll out a repository, run one training session, then slip into reactive maintenance mode. Adoption drops. Confidence erodes. Value stalls. Jake noted this pattern:
"Many folks switch to just ‘keeping the lights on’ too soon."
The antidote is treating knowledge management like product work. Work backward from outcomes you want to achieve. Pick two specific decision forums (maybe the quarterly roadmap review and the weekly product sync). Design how insights will push into those forums. Decide who owns the nudge, what format it takes, and how research gets cited. Run small experiments. Assign clear owners. Track visible wins. Iterate.
This sustained effort reveals whether insights are actually shaping decisions or just sitting in a better-organized folder.
One challenge in pushing insights into decisions: stakeholders often chase "what's new." This bias isn't wrong. It's human — use it strategically.
Jake's advice:
"Take all the fresh content from your repository and pipe it around as many places as you can."
Surface recent studies where leaders already look: Slack channels, newsletter summaries, dashboard widgets. Fresh content builds visibility and trust. But pair freshness with context so one new data point doesn't oversteer an entire plan.
Johanna offered the counterbalance: "The more you have to confirm something, the more weight you can add to something."
Keep durable insight summaries accessible. Single pages where evidence accumulates over time. When five studies across two years all point to the same customer pain point, that weight matters. New evidence adds to the pile rather than competing with it. This is where metadata and structure become essential.
Related read: How to increase UX research visibility with an internal newsletter by Mia Mishek
Do you still need a taxonomy if AI can "just find it"?
Yes, but keep it small and useful. Tag for things that route insights to action: topic, product area, owning or impacted team, confidence level. Let AI suggest tags to accelerate the work. Keep humans in the loop to ensure quality. Prefer durable insight pages with stable URLs so new evidence lands in one place over time.
The taxonomy isn't primarily for search anymore. It's for pushing the right insights to the right teams at the right moments. And it guards against Jake's earlier caution about AI instability. If your core insights live in researcher-authored summaries with consistent structure and stable links, AI becomes a helpful layer on top rather than the unreliable foundation underneath.
Keep researcher language front and center. Tag it. Link to it. Push it into planning docs. Don't let model updates wobble your core messages.
This care in structure and language serves a bigger purpose. What you build sends a message about the value of research itself.
If you drop raw support calls next to carefully crafted insights from moderated studies, you blur the line between noise and evidence. If AI suggests answers that ignore research rigor, you signal that any data is as good as any other. These choices can dilute the value of staffed, skilled research work. Jake warned:
"We don't want to just become another data warehouse."
Design for care instead: clear insight statements in researcher language, evidence trails showing where claims come from, named owners, explicit update rules. Make it easy to cite research in planning docs, not just search it. Show that research insights deserve different treatment than support tickets. Not because researchers are precious, but because the methods and samples are different, and decisions should account for that.
This messaging work matters more as budget pressures increase and teams face questions about research staffing. Your knowledge system is an artifact that either reinforces or undermines the case for dedicated research.
You don't need to solve everything this quarter. Make progress with three small moves:
The first move: connect insights to two decision forums. Pick your quarterly roadmap review and one weekly forum (stand-up, product sync, design critique). Decide the trigger (a new study ships, a planning doc opens), the owner (who sends it), and the format (a summary in Slack, a link in the doc). Keep it simple. Measure whether insights get cited. Adjust.
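To make the trigger-owner-format decision concrete, here is a minimal sketch in Python that posts a short, citable summary to a Slack channel through an incoming webhook whenever a study ships. The webhook URL, the study fields, and the example links are placeholders rather than any specific tool's API; adapt them to whatever your repository exports.

```python
# Minimal sketch: push a short study summary into one decision forum (a Slack
# channel) when a study ships. The webhook URL and study fields are placeholders.
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def post_study_summary(study: dict, webhook_url: str = SLACK_WEBHOOK_URL) -> None:
    """Send a one-paragraph summary with a stable link teams can cite."""
    message = (
        f"*New research shipped:* {study['title']}\n"
        f"{study['summary']}\n"
        f"Cite it here: {study['url']}"
    )
    response = requests.post(webhook_url, json={"text": message}, timeout=10)
    response.raise_for_status()

# Example trigger: call this from whatever marks a study as shipped.
post_study_summary({
    "title": "Checkout friction interviews, Q3",
    "summary": "Five of eight participants stalled at the address step.",
    "url": "https://example.com/insights/checkout-friction",
})
```

The owner in this sketch is whoever wires up and maintains the trigger; the format is a three-line message with a stable link, so citing the insight later takes one click.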
The second move: add lightweight metadata. Tag just four things: topic, product area, owning or impacted team, and confidence level. Let AI suggest tags; a human confirms. Resist the urge to create elaborate taxonomies. You can always add more later once these four prove useful.
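As a sketch of what "AI suggests, a human confirms" could look like, the snippet below defines the four tags as a small data structure and gates AI-suggested values behind a manual confirmation step. The field names and the three-level confidence scale are assumptions; use whatever vocabulary your team already shares.

```python
# Minimal sketch of the four-tag schema with a human confirmation gate.
# Field names and the confidence scale are assumptions, not a standard.
from dataclasses import dataclass

CONFIDENCE_LEVELS = {"low", "medium", "high"}  # assumed scale

@dataclass
class InsightTags:
    topic: str
    product_area: str
    team: str         # owning or impacted team
    confidence: str   # one of CONFIDENCE_LEVELS

def confirm_tags(suggested: InsightTags) -> InsightTags:
    """A human reviews AI-suggested tags before anything is saved."""
    if suggested.confidence not in CONFIDENCE_LEVELS:
        raise ValueError(f"Unknown confidence level: {suggested.confidence}")
    answer = input(f"Accept tags {suggested}? [y/n] ")
    if answer.strip().lower() != "y":
        raise RuntimeError("Tags rejected; edit the suggestion and resubmit.")
    return suggested
```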
The third move: create durable insight pages. One page each, with a clear statement in researcher language, an evidence list linking to supporting studies, a named owner, update rules (when does this get refreshed?), and a stable URL. Ask teams to cite these pages in planning docs instead of individual studies. Watch which ones get used.
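A durable insight page can be as simple as a structured record that evidence keeps attaching to. The sketch below is illustrative only; the field names, the update rule, and the example URLs are assumptions, and in practice this structure would live in your wiki or repository rather than in code.

```python
# Minimal sketch of a durable insight page: one stable place where evidence
# accumulates over time. Field names and example URLs are illustrative only.
from dataclasses import dataclass, field

@dataclass
class InsightPage:
    statement: str    # clear insight statement in researcher language
    owner: str        # named owner responsible for keeping it current
    update_rule: str  # when does this get refreshed?
    url: str          # stable URL that teams cite in planning docs
    evidence: list[str] = field(default_factory=list)  # links to supporting studies

    def add_evidence(self, study_url: str) -> None:
        """New evidence adds weight to the insight instead of competing with it."""
        if study_url not in self.evidence:
            self.evidence.append(study_url)

page = InsightPage(
    statement="Customers abandon checkout when forced to create an account.",
    owner="jane.doe",
    update_rule="Review whenever a new checkout study ships.",
    url="https://example.com/insights/guest-checkout",
)
page.add_evidence("https://example.com/studies/2023-checkout-interviews")
```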
Jake's north star for this work:
"If we work backwards from those end-result impacts, we can clarify the actions that can help us get there."
Start with the decision you want to influence. Build the minimum system to get evidence there. Prove it works. Expand.
If data and knowledge management is central to research operations, and AI is automating parts of that work, what happens to the role?
The answer: compete on outcomes, not features.
AI can suggest tags, summarize transcripts, surface related studies. It can't define what "good" looks like for your organization. It can't decide which decision moments matter most. It can't coach teams through adoption friction. It can't set the quality bar for what counts as an insight versus an observation. It can't build the rituals that make citing research feel normal instead of effortful.
ReOps work is increasingly about defining the system, maintaining standards, and designing the flows that make insights stick. This becomes even more essential as AI scales the mechanical parts. The teams that protect research value will be the ones who clearly articulate where human judgment is required, what quality looks like, and how research gets cited in planning.
That's system design work. Builder work. It's not going away. It's evolving.
Get started free: Evolve your research practice & career with Great Question AI.
Data and knowledge management isn't a feature set. It's a system that turns learning into action.
Eight years in, most teams have tools. The gap is usage. The teams that win treat "findable" as the starting line, not the finish. They build small, deliberate flows into real decisions. They keep researcher language stable and central, even as AI speeds the mechanical work. They phase rollouts carefully, coach continuously, and protect adoption as a long-term investment.
The path forward is surprisingly simple: Pick a few decision moments that matter. Push your best evidence into them. Measure whether plans actually change. Adjust what's not working. Keep going.
The work is contextual, iterative, and never finished. But it's also where research proves its worth. Not in perfectly organized folders, but in better decisions.
Jack is the Content Marketing Lead at Great Question, the all-in-one UX research platform built for the enterprise. Previously, he led content marketing and strategy as the first hire at two insurtech startups, Breeze and LeverageRx. He lives in Omaha, Nebraska.