Democratization at Dropbox: Empowering everyone to learn from customers

By Allison Cole | Published October 27, 2025

When I joined Dropbox to lead its research democratization efforts, I was fortunate to have leadership in Maria Everton and Kevin Pinkley who backed the vision and empowered me to build. One year later, the results speak for themselves.

We have a culture where learning from customers feels routine, not rare, and where non-researchers contribute without compromising rigor.

The approach is simple on purpose. Invite more people to participate. Give them methods that fit their work. Wrap everything in guardrails. Measure what matters. Repeat.

Here’s a closer look at how we empower everyone to learn from customers at Dropbox.

What democratization means, minus the jargon

Democratization does not mean anyone can run any study at any time: strategic projects and advanced methods like conjoint analysis and diary studies remain reserved for researchers. It means the most common research needs are easier to meet, while the expertise of researchers stays intact.

Designers can use structured usability sessions to pressure test flows before a build. Product managers can validate value propositions and copy when a launch date is approaching. I've even worked with a governance program manager who needed to better understand how customers feel about AI. This program is available to everyone at Dropbox.

Each path begins the same way: a shared process. Then you can choose a method that fits the question and check in with research before getting started.

Quality stays high, waste stays low, everyone moves faster, and most importantly — researchers call the shots.

Related read: Lean Thinking in UX Research by Sarah Ulius-Sabel

The three pillars that make access real

At Dropbox, our democratization system has three pillars that support most use cases:

  1. Customer Coffee Chats handle lightweight discovery and concept tests
  2. Office Hours provide coaching, triage, and method checks
  3. Self-Serve Testing on approved platforms powers quick moderated and unmoderated studies at scale

Each pillar does one job well, and together they form an end-to-end path from idea to insight. When these pieces are present, non-researchers do better work and researchers spend their time where it counts.

1. Customer coffee chats: Built for speed & signal

Customer coffee chats are recurring sessions where internal teams meet with customers on a fixed cadence. People opt in as moderators or observers. They bring prototypes, open questions, and clear tasks.

The goal is simple: learn something useful in under an hour.

Structure is the secret sauce. A reusable discussion guide template keeps conversations focused. A moderator checklist removes fumbles like forgotten consent or missing links. A short primer shows how to ask neutral questions and when to pause for detail.

Coaching closes the loop. If someone hasn’t moderated before, a researcher or the program owner joins the first session, then shares direct feedback. That simple practice raises quality quickly, and it builds confidence faster than any slide deck.

AI, as you might’ve guessed, assists in the background. Summaries and transcripts capture the conversation without a human notetaker. Moderators are fully present. Observers focus on patterns and themes, not typing. This is a big reason we switched to Great Question. With Great Question’s transcripts, AI summaries, and Ask AI feature, everyone can get quick insights from the sessions they just had and easily share them with their teams.

2. Office hours: The friendly guardrail

Before any new study launches, teams need to book office hours. This is the first guardrail, and it’s intentionally lightweight. A researcher reviews the problem statement, method, and any prior work that might answer the question already.

Sometimes the best next step is a literature review. Sometimes the right study is smaller than the request. Sometimes an unmoderated test beats a survey that tries to boil the ocean. 

This checkpoint prevents leading questions, misaligned tasks, and expensive studies that do not match the decision at hand.

Office hours also handle prioritization. Research calendars are mapped by quarter. When a researcher cannot take on a study, they still advise on the guide, the screener, and the tasks. 

If teams need help, it’s important to give them a place they can get it, without losing ownership of their projects.

As a fallback to office hours, Kevin also built our “Ask a Researcher” Slack channel, which lets anyone surface insights from prior research simply by asking the Slack bot. It’s great for research enablement, engagement, and visibility across the whole company.

3. Self-serve testing: Fast & in-bounds

Everyone at Dropbox has a seat in approved platforms. That access matters. It means a designer can run a five-task unmoderated test with a prebuilt audience by tomorrow morning. It also means that a PM can validate whether a headline lands with the actual target customer, not a generic persona.

Templates do the heavy lifting. Study plans come prewritten with goals, hypotheses, and success criteria. Task banks cover common flows. Screener libraries and audience presets shave days off recruiting. By the time someone clicks launch, most of the thinking has been vetted.

Guardrails remain. A quick review checks for leading language, stacked tasks, or mismatched metrics. 

The goal is not bureaucracy; it is to keep the signal clean and ensure the study answers the question it was built to answer.

How Dropbox measures democratization success  

Good vibes can only take you so far.

Dropbox tracks demand and adoption: who (or what) is asking for help, which teams launch studies and how often, and what portion of those studies use approved templates and audiences.
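As a rough sketch of how the adoption side of these metrics could be tallied (the article doesn't describe Dropbox's actual tracking stack, and the field names below are illustrative, not a real schema), a simple study log is enough:

```python
from collections import Counter

# Hypothetical study log; fields are illustrative assumptions.
studies = [
    {"team": "Design", "used_template": True,  "approved_audience": True},
    {"team": "PM",     "used_template": True,  "approved_audience": False},
    {"team": "Design", "used_template": False, "approved_audience": True},
]

# Which teams launch studies, and how often.
launches_per_team = Counter(s["team"] for s in studies)

# What portion of studies use approved templates.
template_rate = sum(s["used_template"] for s in studies) / len(studies)

print(launches_per_team)        # Counter({'Design': 2, 'PM': 1})
print(round(template_rate, 2))  # 0.67
```

Even a spreadsheet export works here; the point is that adoption questions become answerable the moment launches are logged consistently.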

Outcome measures matter more. Did an insight inform a product decision? Did a design ship cleaner because a usability signal showed what to fix? Did a policy or AI language change land better because customers reviewed it first?

Over time, a simple effort score helps. How hard is it for a PM or designer to access customer insight? The lower that number, the healthier the system.
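As a minimal illustration (the article doesn't specify the survey wording or scale, so the 1–7 Likert scale and question framing here are assumptions), an effort score can be as simple as the average of self-reported responses:

```python
def effort_score(responses):
    """Average response to 'How hard was it to access customer insight?'
    on an assumed 1-7 scale (1 = effortless, 7 = painful).
    Lower is healthier: insight is easy to reach."""
    if not responses:
        raise ValueError("need at least one response")
    return sum(responses) / len(responses)

# Hypothetical quarterly responses from PMs and designers.
quarterly = [2, 3, 1, 4, 2, 2]
print(round(effort_score(quarterly), 2))  # 2.33
```

Tracked quarter over quarter, the trend matters more than the absolute number.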

Related read: The Researcher Effort Score by Pedro Vargas

The payoff, in plain terms

When more people can learn from customers, two good things happen: 

  1. Velocity increases, because the people closest to the work can answer tactical questions quickly. 
  2. Quality improves, simply because those same people build the habit of validating assumptions with real users. 

Over time, the organization becomes more customer-centric through daily practice.

The most important shift, though, is cultural.

Research transforms from a last resort visited in emergencies into a company habit. Leaders back it. Researchers guide it. Non-researchers practice it with care. And ultimately, customers feel the difference in the product.

Editor’s note: This article is based on a webinar with Allison from September 2025. The full webinar recording is available on YouTube.

Allison Cole is a UX researcher and operations specialist with a rich blend of qualitative and quantitative expertise. Currently, she is the research program manager at Dropbox. Before that, she was a UX researcher at Amazon, TaxAct, and Root. Her passion lies in unraveling the intricate dance between user behaviors and product, guiding teams to create intuitive and impactful digital experiences. From ideation to iteration, her research serves as a compass, guiding decision-making and ensuring that user-centricity remains at the forefront of every endeavor.
