Democratizing AI in Research without blowing your foot off

By Ned Dwyer
April 9, 2024

As expected, AI was a hot topic at Advancing Research 2024 hosted by Rosenfeld Media.

At Great Question, we’re on a mission to democratize UX research at scale, which includes the responsible and practical use of AI. So naturally, a question we heard again and again at the conference was:

“How do you democratize AI without blowing your foot off?” (Or something to that effect.)

The answer is: it’s complicated.

In light of yesterday's launch of Great Question AI (Beta), I'd like to share how what we're building can help teams of all sizes democratize the use of AI in research — safely, securely, and at scale.

Protect your AI neck

First, I want to acknowledge the risks that come with democratizing access to AI. These include:

  • Exposing customer data, specifically personally identifiable information (PII), to third-party companies and models, where it could leak outside the business
  • Inexperienced folks interpreting and distributing AI outputs without regard to potential bias or hallucination

I also want to acknowledge that Great Question, as a tool, can play a role in reducing these risks; the burden doesn't fall purely on the environment, team, or processes the tool is used in.

In other words, it’s not all on researchers to act as the gatekeepers or champions for responsible use of AI.

With this in mind, there are four main ways you can protect your customers, the business, and yourselves from the risks of democratized UX research in the world of AI.

1. Put the safety of PII and sensitive data first

I was surprised to hear how many experienced UX researchers will import a customer interview into ChatGPT to generate summaries or query it for themes. Why the surprise? Because doing this makes your customers' data available for use in OpenAI's training data, which means it could potentially be accessed by folks outside of your company.

We also use OpenAI at Great Question, but we've developed a series of controls to protect the sanctity of your business and your customers' data. These include:

  • An enterprise partnership with OpenAI, which prevents them from using any of your data in their training sets. We don't use customer data to train our models either.
  • Tokenizing all PII (e.g., names, emails, locations) from customer interviews, surveys, or prototype tests so it's never exposed to any training models, then rehydrating the data once it's returned to our platform (see the sketch after this list).
  • Access controls in Great Question to prevent people outside your team from accessing this data.
  • Additional security measures that are independently audited as part of our SOC 2 Type II compliance.
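
To make the tokenize-then-rehydrate pattern concrete, here's a minimal sketch. The regexes, names, and function signatures are illustrative only, not Great Question's actual implementation, and a production system would lean on a proper PII-detection service rather than hand-rolled patterns.

```python
import re
import uuid

# Illustrative patterns only: real PII detection typically relies on a dedicated
# NER or PII-detection service rather than simple regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "NAME": re.compile(r"\b(?:Alice Chen|Bob Okafor)\b"),  # stand-in for a real name detector
}


def tokenize_pii(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected PII with opaque tokens before the text leaves your systems."""
    mapping: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for value in set(pattern.findall(text)):
            token = f"<{label}_{uuid.uuid4().hex[:8]}>"
            mapping[token] = value
            text = text.replace(value, token)
    return text, mapping


def rehydrate(text: str, mapping: dict[str, str]) -> str:
    """Swap the original values back in once the model's response returns."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text


transcript = "Alice Chen (alice@example.com) said onboarding felt slow."
safe_text, mapping = tokenize_pii(transcript)
# safe_text is what goes to the LLM; the mapping never leaves your infrastructure.
ai_summary = f"Summary: {safe_text}"  # placeholder for the actual model call
print(rehydrate(ai_summary, mapping))
```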

2. Tailor access controls to fit your org

When we say Great Question can help you democratize UX research at scale, it's important to note this means as much or as little as you want.

You might democratize access to UX insights via your research repository, a fairly benign concept. Or, you might democratize the ability for anyone to run any research method with anyone from your customer database, a slightly more controversial topic. That's not a decision for us to make. It’s up to you.

It's our job to build flexible, secure tools to help teams safely scale to whatever level of research democratization they desire.

One way we facilitate this is by giving you controls to decide which users can access which features, methods, data, or participants. In the case of AI, that means you can decide which users can leverage AI functionality and which cannot.

For example, you might decide that only UX researchers can access AI querying given the risks — a perfectly reasonable guardrail to put in place. Maybe you want product managers to only be able to run customer interviews with participants who you select on their behalf.
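
For teams wiring this up themselves, the underlying idea is just a role-to-capability map with AI features gated as their own capability. The roles and capability names below are hypothetical:

```python
# Hypothetical role-to-capability map; gating AI as its own capability
# lets you widen access one step at a time.
PERMISSIONS = {
    "researcher":      {"run_interviews", "recruit_participants", "use_ai_analysis"},
    "product_manager": {"run_interviews"},   # interviews only, with pre-selected participants
    "stakeholder":     {"view_repository"},
}


def can(role: str, capability: str) -> bool:
    """Check whether a given role is allowed to use a capability."""
    return capability in PERMISSIONS.get(role, set())


assert can("researcher", "use_ai_analysis")
assert not can("product_manager", "use_ai_analysis")
```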

Whatever you decide, it's important to choose a UX research tool that lets you tailor your access controls to fit your organization's needs.

3. Require disclaimers on AI — and democratized insights

As a UX researcher, you probably have some sense of the biases inherent in AI outputs. But your exec team might not.

They might take the output as reality without further validation and — at the extreme — make a decision that negatively impacts customers, employees, or the business.

One way we can help offset this risk is by injecting disclaimers into AI responses to make sure folks know what they’re looking at, and the risks present.

If something was generated by AI, it should be clear as day. Every. Single. Time.
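
As a rough illustration of the pattern (not our exact implementation), the disclaimer can be attached to every AI-generated artifact at render time so it can't be forgotten:

```python
AI_DISCLAIMER = (
    "Generated by AI from your research data. It may contain errors or bias; "
    "verify against the source interviews before acting on it."
)


def render_ai_insight(insight_text: str) -> str:
    """Attach the disclaimer to every AI-generated artifact, every single time."""
    return f"{insight_text}\n\n⚠️ {AI_DISCLAIMER}"
```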

4. Create traceability to verify accuracy

Large language models (LLMs) can produce amazing results, but there are plenty of good reasons to be cautious about the veracity of their outputs. LLMs have been known to overgeneralize, be inconsistent in data selection, and even hallucinate.

The best way to combat this is to make the inputs that went into the generative output traceable.

Which customer interviews, with which customers, led to this summary or list of themes? And what part of the interview reinforces that?

In Great Question, all quotes used in AI responses are traceable, meaning they link to the original transcript so you can quickly jump to the exact moment they occurred, get more context, and verify accuracy. When you run a query across a study, we give you a summary of the output and a list of themes. But we also give you direct quotes you can play back from the interview, along with a list of the folks who expressed those themes.
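
As a sketch of what that traceability can look like under the hood, each quote in an AI response carries a pointer back to its source. The field names here are illustrative, not our actual schema:

```python
from dataclasses import dataclass


@dataclass
class TraceableQuote:
    """A quote surfaced in an AI response, linked back to its source."""
    quote: str
    interview_id: str     # which interview it came from
    participant_id: str   # which customer said it
    start_seconds: float  # where in the recording the moment occurs
    transcript_url: str   # deep link so anyone can verify it in context


@dataclass
class AIThemeSummary:
    """An AI-generated theme whose every claim points back to evidence."""
    theme: str
    summary: str
    supporting_quotes: list[TraceableQuote]
```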

The bottom line

You can democratize AI research without blowing your foot off — if you understand the risks it presents to your business and customers, and take the measures needed to do so responsibly.

These measures include:

  • Putting the safety of PII and sensitive data first
  • Tailor-fitting permissions and access controls to meet your organization’s level of democratization
  • Requiring disclaimers on all insights, artifacts, and responses generated by AI
  • Creating traceability on all AI-generated insights so you can easily identify the source and verify the information

At Great Question, we’ve been working on all of the above, so you can democratize UX research at scale (as much or as little as you want) with the responsible use of AI. Try it for free below and be sure to let us know what you think — because we're just getting started.

Ned is the co-founder and CEO of Great Question. He has been a technology entrepreneur for over a decade and after three successful exits, he’s founded his biggest passion project to date, focused on customer research. With Great Question he helps product, design and research teams better understand their customers and build something people want.
