
Research teams have access to more tools than ever before. Yet most still struggle with scattered data, unclear governance, and platforms that refuse to work together. The explosion from 80 research tools in 2019 to over 800 today hasn't solved the fundamental challenges.
The real question isn't which tool to buy next. It's how to choose the right ones, make them actually integrate, and protect participant privacy while scaling research operations. We recently hosted the fifth webinar in our series on how the 8 Pillars of User Research have evolved since 2019, featuring experts who know these challenges firsthand.
Their conversation reveals what's actually working, what's breaking, and how AI is reshaping the research tool landscape.
The research environment has fundamentally transformed. Pert walked through the evolution from physical usability labs with one-way glass to today's remote-first world. Teams once recruited locally because geography limited options. Now researchers access global, on-demand, and niche participants without leaving their desks. Recording technology tells the same story. Researchers used to watch hours of footage at 2x speed while taking manual notes. Transcripts felt revolutionary when they first arrived. Today, AI handles transcription automatically while flagging themes and generating summaries before you've finished your coffee.
Tools evolved from single-purpose platforms into integrated systems. Teams no longer need one tool for card sorts, another for interviews, and a third for surveys. But growth introduced new complications. Security concerns shifted from protecting business secrets through NDAs to safeguarding participant data through rigorous privacy protections. GDPR compliance became non-negotiable, and study agreements now explicitly detail what happens to participant information and how long teams retain it.
Brad highlighted a staggering statistic from User Interviews' UX tool map. In 2019, researchers chose from roughly 80 tools across five categories. Six years later, that number exploded to 800 tools spanning 42 different use cases. More options should mean better solutions, but that’s not always the case. Instead, many teams face tool sprawl with fragmented ecosystems where platforms don't integrate, training becomes endless, and renewal dates create constant vendor management work.
Brad described spending significant time at previous roles either training people on new tools or negotiating contracts. The costs compound quickly. Teams pay for seats that go unused, build brittle custom integrations that break when vendors push updates, and lose institutional knowledge when data scatters across disconnected platforms. Integration has now become the name of the game. Research tools need to talk to each other and connect with broader company systems like Slack, Atlassian, and data warehouses.
Brad offered clear guidance for cutting through the noise. Start with requirements, not flashy demos. Build a scorecard that maps must-have capabilities against available options. Share that scorecard with vendors to confirm understanding and with finance teams when justifying investments.
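To make the scorecard idea concrete, here is a minimal sketch in Python. The requirement names, weights, and vendor ratings are illustrative assumptions, not recommendations; the point is simply that must-haves get explicit weights and every candidate is scored against the same list you later share with vendors and finance.

```python
# A minimal tool-evaluation scorecard. Requirements, weights, and vendor
# ratings below are hypothetical placeholders; replace them with your own.

REQUIREMENTS = {
    # requirement: weight (higher = more critical)
    "participant_recruiting": 3,
    "gdpr_data_deletion": 3,
    "slack_integration": 2,
    "ai_transcription": 1,
}

# Hypothetical vendor ratings on a 0-2 scale (0 = missing, 1 = partial, 2 = full).
VENDORS = {
    "Platform A": {"participant_recruiting": 2, "gdpr_data_deletion": 2,
                   "slack_integration": 1, "ai_transcription": 2},
    "Platform B": {"participant_recruiting": 1, "gdpr_data_deletion": 2,
                   "slack_integration": 2, "ai_transcription": 1},
}

def score(ratings: dict[str, int]) -> int:
    """Weighted sum of how well a vendor covers each must-have requirement."""
    return sum(weight * ratings.get(req, 0) for req, weight in REQUIREMENTS.items())

max_score = 2 * sum(REQUIREMENTS.values())
for name, ratings in sorted(VENDORS.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(ratings)} / {max_score}")
```

However the spreadsheet or script is built, the value is in the shared artifact: everyone evaluating a vendor argues about the same weighted list instead of reacting to the demo.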
The all-in-one versus best-of-breed debate divides teams considerably. Consolidated platforms simplify budgeting, reduce training overhead, and eliminate integration headaches. Teams onboard people once, manage one vendor relationship, and avoid building a career around Zapier. But what are the tradeoffs? Consolidated platforms may require seat juggling when occasional users don't need year-round access. Individual features might not match best-of-breed point solutions in head-to-head comparisons.
Point solutions let teams hand-pick exactly what they want for each research method. You only pay for capabilities you actually use. The flip side includes a fractured ecosystem with multiple training guides, constant contract negotiations on different renewal cycles, and integration work that never ends. Brad framed the choice practically. Research breaks down into six core areas: planning, recruiting and scheduling, gathering data, analysis and synthesis, sharing findings, and knowledge management.
"If you don't have an answer to each of these six things, I would argue you don't have a mature tool stack."
Coverage for each area matters more than the delivery mechanism. Whether teams get there with one platform or ten depends entirely on context, including team size, budget constraints, technical capabilities, and existing infrastructure.
Every conversation about research tools now includes AI. Brad offered needed perspective to cut through the hype. AI moderation can run a thousand interviews across any language in 20 minutes. It transcribes automatically, suggests codes, and surfaces patterns humans might miss when overwhelmed with data. These capabilities unlock research at scales previously impossible.
Yet AI introduces new problems as fast as it solves old ones, because the easier research gets, the more demand teams face. More research burns through participants faster, strains budgets, and risks over-contacting the same people until they stop responding. Participant validity has become a critical concern as well: bots fill out screeners and professional fakers cycle through identities, so research teams need robust verification methods and platforms with strong fraud detection built in.
"AI is not going to take your job, but someone using AI probably will."
Brad's advice for actually using AI well centers on fundamentals: take free courses from Anthropic, practice meta prompting to improve results, and focus on tasks where AI genuinely excels. Repetitive, well-defined work like formatting emails or auto-tagging repository entries benefits most. Teams can't just dump 10 transcripts into a chat window and ask what they learned. Break tasks into smaller steps, manage context windows carefully, and keep humans in the loop for making sense of results.
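As a rough illustration of "break tasks into smaller steps," here is a hedged Python sketch of a chunked workflow: summarize each transcript separately, synthesize across the summaries rather than the raw text, and keep a researcher review at the end. The call_llm() function is a stand-in for whatever model or API a team actually uses; nothing here reflects a specific product.

```python
# Sketch of a chunked transcript-analysis workflow, assuming a generic
# call_llm() placeholder rather than any particular AI tool or vendor API.

def call_llm(prompt: str) -> str:
    """Placeholder: swap in your team's approved model or API here."""
    return f"[placeholder response to a {len(prompt)}-character prompt]"

def summarize_transcript(transcript: str, research_question: str) -> str:
    # One small, well-defined task: notes on a single interview only.
    prompt = (
        f"Research question: {research_question}\n\n"
        "Summarize the key observations in this single interview transcript. "
        "Quote the participant where possible and do not speculate.\n\n"
        f"{transcript}"
    )
    return call_llm(prompt)

def synthesize(summaries: list[str], research_question: str) -> str:
    # Work from per-interview summaries, not raw transcripts, to keep the
    # context window manageable.
    joined = "\n\n---\n\n".join(summaries)
    prompt = (
        f"Research question: {research_question}\n\n"
        "These are summaries of separate interviews. Identify recurring themes "
        "and note how many interviews support each one.\n\n"
        f"{joined}"
    )
    return call_llm(prompt)

def analyze(transcripts: list[str], research_question: str) -> str:
    summaries = [summarize_transcript(t, research_question) for t in transcripts]
    draft = synthesize(summaries, research_question)
    # Human in the loop: a researcher checks the draft against the
    # per-interview summaries before anything is shared.
    return draft
```

The structure matters more than the tooling: each step is small enough to verify, and the person, not the model, decides what the findings mean.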
Data privacy requirements will only intensify. Pert made a bold prediction that teams will soon use customer recordings to train synthetic users and predictive models. That future demands bulletproof informed consent and crystal-clear data usage agreements built today.
Here is a simple test. Could you delete a participant from every system if they requested it two years after a study? The honest answer is usually no. Data retention policies and actual deletion capabilities need to be built into infrastructure now, not later.
Accessibility falls squarely under governance as well. When platforms don't work with assistive devices, teams can't conduct inclusive research.
"We can't be successful if we can't be inclusive, and we can't be inclusive if we can't have accessibility."
Governance extends beyond compliance checkboxes. It shapes who accesses what information, how teams tag and organize insights, and which decision forums receive research proactively versus on request. Ned emphasized the importance of data governance built directly into the platform. Eligibility criteria prevent over-contacting participants, role-based access controls protect sensitive research, and retention policies automatically reduce data footprints.
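To show what governance built into the stack can look like when it is enforced in code rather than in a policy document, here is a small hedged sketch. The field names, thresholds, and role names are assumptions for illustration, not any particular platform's API, and the retention check is the kind of thing the "could you delete a participant two years later?" test depends on.

```python
# Hypothetical governance rules expressed as simple, testable checks.
# Thresholds, tags, and roles below are illustrative assumptions.

from datetime import date

POLICY = {
    "min_days_between_studies": 90,   # eligibility: avoid over-contacting people
    "max_retention_days": 730,        # retention: expire raw data after ~2 years
    "restricted_tags": {"unreleased-product"},  # access: role-gated studies
}

def is_eligible(last_participation: date | None, today: date) -> bool:
    """Eligibility criteria: has enough time passed since the last study?"""
    if last_participation is None:
        return True
    return (today - last_participation).days >= POLICY["min_days_between_studies"]

def can_view(user_roles: set[str], study_tags: set[str]) -> bool:
    """Role-based access: restricted studies require an explicit role."""
    if study_tags & POLICY["restricted_tags"]:
        return "restricted-research" in user_roles
    return True

def is_expired(collected_on: date, today: date) -> bool:
    """Retention: flag records whose raw data is due for deletion."""
    return (today - collected_on).days > POLICY["max_retention_days"]
```

When rules like these live in the platform, they run on every study automatically instead of depending on someone remembering the policy.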
Strong research operations work now requires system design skills beyond vendor management. The role continues evolving. Ned described it as moving toward "system definers and builders for highly tailored automations and use cases."
"New technologies like Model Context Protocol enable AI agents to interact across multiple tools securely. That opens possibilities for automated workflows that route insights to appropriate teams, execute data deletion requests across platforms, and surface relevant past research during planning sessions."
Pert fully embraced the shift in mindset. Generic approaches no longer work. A 50-person startup's needs differ wildly from a regulated enterprise's requirements. What works for 10 studies quarterly won't scale to 100. Context determines everything, which means research operations professionals must design custom solutions rather than deploy cookie-cutter approaches.
Teams can make immediate progress without solving everything at once.
Pert offered a thought-provoking vision for where research is headed. In the future, researchers won't moderate traditional studies. Instead, they'll become curators and the central nervous system of organizational learning, the place people come to get all kinds of questions answered. That transformation requires preparation today. The data teams generate now becomes training sets for tomorrow's predictive models and strategic intelligence systems. Set up repositories correctly. Tag consistently. Document clearly. Build durable insight summaries with stable links that accumulate evidence over time.
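As one hedged sketch of what a durable insight record could look like, here is a small Python data structure with a stable ID, controlled tags, and evidence that accumulates over time. The field names are illustrative assumptions, not a repository product's schema.

```python
# Hypothetical shape of a durable insight summary in a research repository.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class Insight:
    insight_id: str              # stable identifier, safe to link to from anywhere
    summary: str                 # one- or two-sentence finding, written to last
    tags: list[str]              # drawn from a consistent, controlled vocabulary
    evidence: list[str] = field(default_factory=list)  # study IDs or clip links
    last_updated: date = field(default_factory=date.today)

    def add_evidence(self, study_ref: str) -> None:
        """New studies strengthen an existing insight instead of duplicating it."""
        if study_ref not in self.evidence:
            self.evidence.append(study_ref)
            self.last_updated = date.today()
```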
The path forward focuses less on chasing every new feature or AI capability and more on building systems that make research findable, usable, and protective of both participant privacy and institutional knowledge. Choose tools that solve real problems for specific contexts. Consolidate where it reduces friction. Integrate what remains. Govern consistently. Research operations work becomes more essential as AI automates mechanical tasks, not less. Teams still need people to define quality standards, set up flows that make insights stick, and coach colleagues through adoption challenges. That is system design and builder work, and it evolves rather than disappears.
Jack is the Content Marketing Lead at Great Question, the all-in-one UX research platform built for the enterprise. Previously, he led content marketing and strategy as the first hire at two insurtech startups, Breeze and LeverageRx. He lives in Omaha, Nebraska.