It has simultaneously resulted in a great deal of attention and hype for the technology, which, as we’ll see in this report, are not always warranted. People in creative fields are greeting this technology with a mixture of enthusiasm and trepidation, and Design is no exception.
The current hype is one of the reasons I set out to do this research project: I wanted to better understand what’s actually happening with AI in Design and Research, to see where people are finding actual utility and applications for the technology, and what the limitations and best practices are.
The goal of this report is to give people a sense of how, where, and why AI is being applied to work in these fields, without succumbing to the temptation of an easy narrative that promises more than it can deliver, or predicts the coming demise of our fields.
Over the course of two months, I spoke with design and research practitioners in a variety of roles, from junior ICs to heads of design. I also interviewed subject matter experts who have been following AI and LLMs for years, to better contextualize the current moment. And what I found, perhaps unsurprisingly, is that the all-or-nothing nature of the public conversation about AI simply doesn’t reflect the reality of the situation. The use cases for AI-assisted tools in Design and Research are real, but they don’t cover a particularly large swath of the actual work, and all of them still require expertise of one kind or another.
With AI tools becoming more widely available, people working in Design and Research are finding new ways to incorporate AI into their work. They are using a human-centered framework to identify use cases, and then applying best practices around trust, accuracy, bias, and utility wherever AI is used.
There are three ways AI can be used in people’s work: additive, augmentative, and substitutive.
Using this framework, rather than a tool-based “generative vs assistive” lens, enables people to find areas of their work where AI provides value.
Participants in this research had collectively come to a set of best practices that enabled them to feel confident in how they were using AI in their work:
With these best practices, designers and researchers are maintaining a sense of agency and control over their work product, as well as ensuring that they are ultimately responsible for its quality.
I gave a talk on my findings at Config 2023. You can listen to it here.
In order to understand the emerging best practices and use cases for AI in Design and Research, I conducted 12 in-depth interviews with a range of individuals working in Design and Research, as well as subject matter experts who have been keeping up with the AI space for the last several years. In these interviews, I asked about their own applications for AI in their work, their perspective on the risks and opportunities it presents within the fields of Design and Research, and what their approach is to figuring out best practices for the tools that are currently available.
Above all else, it’s useful to note that not everyone is using AI in their practice. In fact, most product designers haven’t found use cases that work for them, although they’re keeping up with the current trends, especially when it comes to thinking about how to thoughtfully design experiences that incorporate AI. And while a lot of researchers have found use cases, there are plenty out there who aren’t incorporating AI into their work at all.
For almost everyone who has found a use for AI tools in their own work, and even for those who are just keeping up with the technology without using it in their own work, there’s a defensive component. People are worried about not keeping up with advances in technology more generally, and this is amplified with AI because of the hype that has been surrounding it since the launch of more broadly available tools like Midjourney and ChatGPT.
Nearly every participant spoke about feeling a sense of concern, or even threat, about what the increasing availability and prevalence of AI tools might mean for their work.
It’s also important to note that the utility of these tools is not exclusively hype. People are finding ways to improve their own craft, spend more time on the parts of their practice they enjoy most, and even expand and augment their abilities and skills. A variety of use cases for AI tools in Design and Research exist, and researchers and designers are finding new ways to discover and hone their approach to applying AI to their own work.
Another outcome of this research is a human-centered framework for applying AI to workflows. The most successful use cases came from people asking what they wanted AI to do for them and then determining whether the technology was capable of it yet, rather than from people trying to figure out what the technology does and then shoehorning it into their existing work.
In other words, people are having the best results with AI by starting from their own needs and then evaluating the technology’s ability to meet them.
This means that the generative vs assistive distinction becomes less important in thinking about how to apply AI to work. Instead, participants who found practical applications and use cases started by saying, “This is something I would prefer not to have to do” or “This is something I don’t have time to do”. A human-centered framework for determining where AI can be useful in different workflows can help identify use cases without the need to try to exhaustively determine what the technology is capable of, and puts human needs back at the center of where AI is applied.
Moving away from generative vs assistive and towards a human-centered perspective, the findings from this research suggest we instead look at the three different ways AI can be used in workflows: additive, augmentative, and substitutive.
This framework also reduces the burden on people to try to constantly stay up to date on new developments. Participants who used a workflow-first approach were also less likely to consider AI as a threat to their work. Instead, they viewed it as an additional tool to be used to aid them in their work.
Many of us are working in situations where we are not simply trying to understand how and where AI fits into our work; we’re simultaneously trying to help our organizations define where AI fits into the products and services we work on.
One of the other key things that emerged from this research is how individualized people’s preferences are around where and how AI is used in their work.
This is part of why the human-centered approach to designing tools that leverage AI is so critical — because there is no single solution that will fit the spectrum of individual preferences.
The common thread is that every participant involved in the research was focused on finding ways to use AI to help with the parts of their job they found least exciting or satisfying, or that they felt were not “core” to their role.
The challenge is that those parts varied from participant to participant, even within the same discipline. Where one designer was excited to use AI to speed up their research synthesis process, another was eager to keep that part of their work for themselves. Some researchers were excited to have AI handle the first pass at their research plan outlines or interview guides, while others relied on doing those things themselves in order to clarify and refine their research approach.
Above all, this shows the importance of flexibility as we move into a world where AI is incorporated in more and more of the tools we use for work. The spectrum of preferences on how and where AI appears, and on what type of use (i.e. additive, augmentative, or substitutive) it’s enabling, presents both an opportunity and a challenge for those who are building with AI. Supporting this spectrum will mean making an effort both to deeply understand what users want and to figure out how to build flexibly with AI. It won’t always be possible to ensure that each individual is supported in exactly the way they prefer, but understanding the spectrum of preferences and what it’s possible for the tool to support will enable us to create tools that are genuinely useful to a wide array of people.
Through this research, several best practices emerged as we sought to understand how people are finding ways to apply AI to the practice of research and design. Five principles for successfully using AI in creative workflows were identified:
This was a concern for 100% of the participants in this research project. Every participant had an example of a situation where AI returned results that were not only inaccurate, but also stated with a high degree of confidence.
In the absence of an effective way to understand why the tool is giving the answer it is, or what its sources are, the burden falls entirely on the user to determine accuracy.
The main way researchers and designers are approaching this best practice is by being thoughtful about where and how they use AI in their own practice, and ensuring they aren’t putting themselves in situations where they might be introducing error into their work without realizing it.
A major concern for participants was bias, and particularly the inability to accurately gauge bias ingrained in the output of AI tools because of the opacity of the source material.
With no accurate view into the training data sets, it’s difficult to assess bias.
There was no consensus on how to best mitigate bias ingrained in the tool, but participants started by identifying and making explicit their own bias, as well as applying a critical lens focused on the types of bias most likely to be replicated by large language models. In the absence of an AI-specific set of best practices for mitigating bias, designers and researchers are relying on existing best practices for identifying and mitigating bias in their work.
One of the major concerns with AI tools is understanding exactly what third-party tools are doing with the data that users put into them. In the absence of clear terms of service that specifically state that data will not be used by the company to further train or refine the model powering the AI, it’s vital to be thoughtful about what is and isn’t shared. Ensuring that organizational, participant, and user data is protected is key to navigating the rapidly-evolving landscape of AI-assisted tools available to designers and researchers.
Organizations are still in the midst of trying to understand how and where AI might apply both to the work being done in the organization, and to what the organization is producing. This is especially true for software companies, who are simultaneously navigating the role of AI in software itself and the role of AI in software development. By continually sharing information with their colleagues, participants were able to help define their organization’s AI policies and best practices.
Much of the conversation around AI has centered on the nature of the technology itself, rather than its place in our work. Returning the focus to the user, rather than the tool, means participants were better able to find applicable use cases that mattered to them. In the absence of this user-centered focus, participants felt like they were just “playing around”, or that the technology was a tool in search of a solution.
Participants who found use cases they returned to over and over again started from a perspective of, “What would I be interested in having AI do for me?” instead of asking, “What can AI do?” and then trying to fit the answer into their workflow.
It may feel as though AI is taking over the technology space, but the reality is far more nuanced. Designers and researchers are just beginning to feel out where AI does and doesn’t apply to their work. Many of them have tested AI tools extensively and still haven’t identified places where AI is more than just a novelty. Others have found use cases, but still recognize the limitations of the current tools. But regardless of their current AI usage, the common thread was an effort to understand the responsibilities, trade-offs, and ways of thinking about AI applications.
The framework and best practices discussed above are a starting point as we continue our collective journey towards a world where AI plays a larger role in our work and our lives.
As AI technology advances, we can use this framework to find opportunities where AI can work for each of us individually, while simultaneously applying these best practices to our broader scope of work. Doing this will allow us to maintain a human-centered approach to AI that focuses on enabling everyone to do their best work, rather than on what the AI itself can do.
Jane is the Principal Researcher at Great Question, where she helps companies make better decisions faster through insights. Prior to joining Great Question, she created and ran the UX Research practice for Zoom, was head of UX Research for Zapier, ran the Growth Research team for Dropbox, and led UX Research for BitTorrent. She’s worked as a researcher, a designer, a librarian, and an event planner. She lives in Oakland with her partner and their two children, and spends as much time as humanly possible outdoors. In her heart of hearts, she believes research is actually a sales role.