No matter where you stand, there’s a lot to discuss — from the rapid ascent of AI and the ongoing debate about democratization, to an explosion in research tools and widespread layoffs in tech. But, as with any discussion, there’s also a lot of noise. Let’s take a look at each of these trends, both historically and contemporaneously. Hopefully, by considering balanced ideas and actions to take, we can find clarity and comfort in what may be tumultuous times.
Despite an increase in discussion around AI in the media and pop culture recently, the debate as it pertains to jobs is nothing new. Concerns about automation eliminating jobs go back as far as the Industrial Revolution, and AI entered the conversation in the 1960s when some of the first research into artificial intelligence began. While there are philosophical concerns about widespread unemployment, the enduring counterargument is that new technologies create new opportunities: for every job that is eliminated, just as many new ones, if not more, will be created. (Remember: every elevator used to have a human operating it.)
In the context of research, there are two main threads to pull here: the fear that AI will take our jobs, and the opportunity for AI to make us better at them.
Hint: if you think it’s writing scripts, you’re wrong.
There’s been much hand-wringing about AI in the research world lately. From the idea of “artificial users” to LLMs writing our scripts for us, many researchers are worried about the long-term prospects of our jobs (even though work requiring human interaction sits on every list of jobs least likely to be impacted). I think, at the end of the day, it depends on how you define your role. Are you here to guide decisions and provide insights, or are you here to “do research”?
"It depends on how you define your role. Are you here to guide decisions and provide insights, or are you here to do research?"
AI is forcing a discussion that’s long overdue: what is the role of research? If you define your role as the person who “does research” (plans, executes, analyzes, and shares), I believe you’re missing the point.
Anyone beyond the first few years of their career should see themselves as more than a “do-er”. Learning the craft of research is important, but that’s the means to an end of the business value we provide. It’s our job to improve decision-making across our organization by guiding research activities, analyzing and synthesizing feedback into insights, and presenting those findings in a way that’s easy to understand and appropriate for the audience.
Instead of worrying about AI taking our jobs, let’s think about the ways it can help us be more efficient and impactful, and do all-around better work. Authors and designers already use AI to solve the “blank page/canvas problem”: getting stuck and not knowing where to start. This affects researchers as well: ever face writer’s block when trying to generate interesting or novel questions about your subject matter? You can use LLMs to generate a set of candidate questions to pick and choose from, refine, or otherwise use as a basis.
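As a concrete (and purely illustrative) sketch, the question-generation idea can be as simple as a reusable prompt template. The function name and prompt wording below are my own assumptions, not any particular product’s API:

```python
# Hypothetical sketch: assembling a prompt that asks an LLM to draft
# candidate interview questions you can pick from, refine, or discard.
# The wording here is illustrative; tune it to your own study and model.

def draft_question_prompt(topic: str, audience: str, n: int = 10) -> str:
    """Build a prompt asking an LLM for candidate research questions."""
    return (
        f"You are assisting a UX researcher studying {topic}.\n"
        f"Draft {n} open-ended, non-leading interview questions "
        f"for {audience}.\n"
        "Avoid yes/no phrasing and avoid suggesting an answer."
    )

# Example: seed a discussion guide for a hypothetical checkout study.
prompt = draft_question_prompt("checkout abandonment", "first-time shoppers", n=8)
```

Send `prompt` to whichever LLM you use, then treat the output as raw material to curate, not a finished discussion guide.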
Some companies and researchers are also experimenting with leveraging AI to analyze and synthesize results. While it might not be wise to let ChatGPT ingest entire transcripts, it might be interesting to train a model specifically to do research analysis (in a way we’re all comfortable with) or pull novel insights from a pile of data. Others are leveraging chat interfaces to interact with their Insights Repository instead of constructing a convoluted query. This makes it easier for non-researchers to generate their own reports.
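To make the repository-chat idea concrete, here is a deliberately naive sketch of the retrieval step such an interface might perform behind the scenes. The `Insight` data model and keyword scoring are assumptions for illustration; a real system would likely use embeddings or full-text search rather than substring matching:

```python
# Minimal sketch of the retrieval step behind a "chat with your insights
# repository" interface: match a plain-language question against stored
# insights instead of making stakeholders write a structured query.

from dataclasses import dataclass

@dataclass
class Insight:
    title: str
    summary: str
    tags: list

def search_insights(question: str, repo: list) -> list:
    """Rank insights by how many words from the question appear in them."""
    words = {w.lower().strip("?.,") for w in question.split()}

    def score(insight: Insight) -> int:
        text = f"{insight.title} {insight.summary} {' '.join(insight.tags)}".lower()
        return sum(1 for w in words if w in text)

    return sorted((i for i in repo if score(i) > 0), key=score, reverse=True)

# Toy repository with two illustrative entries.
repo = [
    Insight("Checkout friction", "Users abandon carts at the shipping step", ["checkout"]),
    Insight("Onboarding wins", "New users love the guided tour", ["onboarding"]),
]
results = search_insights("Why do users abandon checkout?", repo)
```

The point isn’t the scoring scheme; it’s that a thin retrieval layer like this is what lets non-researchers self-serve answers without learning your repository’s query syntax.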
Over the past few years, there’s been an explosion of research tooling. As early as 2021, legendary investment firm Andreessen Horowitz took note of the growing market for research tooling and began expanding its investments into the space. Some may worry about there being too many options, but I think this is an instance where more is actually better (with the caveat that interoperability is paramount).
Research can vary widely across industry, company size, team composition, and other factors. Having a diverse set of options for tooling allows each of us to make the best decision given our unique set of strengths and constraints. As with any growing market, though, expect to see consolidation, as some of the bigger players acquire smaller ones, or as some companies fail and sell out to others. We’re already seeing this, with UserZoom’s acquisition of EnjoyHQ, Thoma Bravo buying companies to combine them, and whatever is going on with Qualtrics.
When building your tool stack, there are a few major categories to cover. Notably:
Depending on your situation, having a separate tool for each of those may make sense. Or, if you’d like an all-in-one solution, check out Great Question.
What’s important to note, though, is that you don’t need a new tool for each category. Meet your colleagues where they are: you can do a lot with G Suite and Atlassian if needed.
In almost an echo of the concerns about machines and AI taking jobs, the more tooling a practice gets, the less its practitioners understand the fundamentals. In some cases, that’s not a bad thing (just ask any architect, engineer, or physicist to do long division). But in others, it can make people dependent on a tool in a way that is detrimental to their career and their effectiveness.
Josh Williams, Director of UX Research at Indeed, had an excellent talk at UXRConf 2023 about the growth of tooling in the research field and how to handle tool dependence. As nice as it is to have a tool that can do everything for you, it’s also useful to make sure you know how it’s doing those things. Josh says it (and more) better than I could, so watch his talk for the full run-down.
While access to more tools makes it easier for us to do research ourselves, it’s also proven a major catalyst in the rise of research democratization. Proper tooling is an essential piece of impactful and effective democratization. Not only does it reduce overhead, it actually helps you set up the proper guardrails and guidelines for your people who do research (PWDR).
Notice I said “effective democratization”, or what dscout calls “responsible democratization”. One of the misconceptions (and outright mistakes, for some) in the discussion of research democratization is that “anyone can do research” or that it becomes a free-for-all. I wouldn’t trust myself to design something without a proper style guide (and I spent my early career doing Design work), so why would we trust non-researchers to execute research without any guidance?
“Proper democratization still has boundaries, support, and great research behind it. The best way to do this is by focusing on what we’re putting into the equation and what’s coming out of it.” — Zoë Glas, Senior UX Researcher at Google
Instead, enabling our peers to transform from nagging stakeholders or requesters to empowered PWDR who can self-serve their low-stakes needs is one of the most effective ways to increase your team’s bandwidth. By minimizing the amount of time you spend supporting these requests, you free your team to take on larger, more complex, and more impactful research projects. It’s one of the ways to grow from a reactive, tactical, support team into a proactive, strategic, innovative, and impactful function.
There have been a variety of discussions on LinkedIn suggesting democratization is why we’re seeing layoffs in the research field. This goes back to the earlier discussion about AI: if you see your job solely as doing the core research activities of planning, execution, and analysis, it’s natural to draw this conclusion. If you find yourself in that boat, I invite you to reflect on whether that was satisfying, fulfilling work for you, and what business value it actually provided.
The end-goal of democratization was never to let everyone think they can do research. If anything, it’s to give our colleagues first-hand understanding of how hard it is to do properly and why our expertise is invaluable. Self-service research without a trained, experienced researcher at the helm is quite possibly more dangerous than no research, as it provides a false sense of security while potentially leading teams down the wrong path. I’ve yet to see anyone (researcher or otherwise) suggesting this is how to move forward.
It’s time to talk about layoffs. As of this writing, Layoffs.fyi is reporting 210k layoffs across 794 companies in the tech industry in 2023 (with another 164k in 2022). This is an absolutely staggering number, and likely the largest hit the tech sector has taken since the dot-com bust. However, according to Zippia, this represents roughly 3% of the entire tech workforce. Nobody seems to be disclosing the composition of these layoffs, but a lot of people are claiming they’re disproportionately impacting UX and UX research.
Without concrete data to comb through, I think there are two hypotheses to explore: first, that research isn’t actually being hit disproportionately and it only looks that way from inside our networks; and second, that research really is being targeted because it’s seen as non-essential.
To address the former: even before counting, I felt I’d seen just as many of my colleagues and connections in other disciplines impacted as I had researchers. It actually hit recruiting first: if you don’t plan to hire as much (or at all) and money is tight, why keep a recruiting team around? I’ve seen it impact sales, marketing, product, design, and even engineering across my network. Do I see a lot more researchers posting, sharing, and commenting? Yes, I do, but the majority of my network is composed of UXers.
As for the latter: if research is being targeted, what’s at the root of it? At the end of the day, research isn’t an essential function. We aren’t responsible for keeping the lights on, and as much as I will argue with any founder that getting research involved early and often is beneficial, the vast majority of successful startups don’t involve research until they’ve reached a certain size and scale. So, similar to recruiting, when money gets tight and growth isn’t on the roadmap, research is going to be one of those functions under scrutiny. Which leads us to the second point.
At the end of the day, layoffs are a statement of business value. If you haven’t made yourself an irreplaceable and invaluable part of your organization, you’re a candidate for a layoff. If you aren’t doing impactful, insightful research that’s driving important change at the organization, you probably aren’t seen as irreplaceable.
So, how do you change this? Say ‘no’ to the less important things and focus on the work that matters. Manage up and ensure all of your stakeholders understand the value of your work. Show how research can be an essential tool when deciding how to allocate limited resources. These are all productive habits you can start building today.
Research layoffs aren’t AI’s fault. They aren’t democratization’s fault. They aren’t anyone’s fault but our own: we as an industry have collectively failed at making ourselves indispensable.
Regardless of why it’s happening, and despite all the turmoil and distress it’s causing individually, I think this is a good thing. While some have pointed the finger at democratization, likening it to “putting yourself out of a job”, that’s a shallow and myopic view. At the end of the day, if our work isn’t being seen as valuable, we need to change.
One of the things that’s been fascinating for me personally (as someone who’s intentionally avoided BigTech) has been seeing the gap between the world I’ve experienced and where the rest of the industry is. By volume, BigTech employs more of us than any other sector. They have more employees, bigger organizations, larger budgets, and more resources. They do a higher volume of research. I’ve been operating under the assumption, then, that they’re more mature, and that we should be modeling their structure and their behavior.
"If anything, it’s less of a reckoning and more about BigTech catching up to the past ~5 years of startup innovation."
What this reckoning (and my work over the past 3 years with Auth0) has led me to believe is that this actually isn’t the case. The absence of pressure and surplus of resources has allowed these teams to become too comfortable and stagnate. They fell into the habits, patterns, and expectations that led us to where we are now. What’s being called for in this “reckoning” (e.g. pushing back, saying no, being strategic, working cross-functionally beyond product and design, democratizing responsibly, speaking in business terms) is what I’ve seen myself and my peers at smaller startups doing for years. If anything, it’s less of a reckoning and more about BigTech catching up to the past ~5 years of startup innovation.
The goal of this essay isn’t to propose surefire solutions to these problems; it’s to start a balanced, thoughtful discussion about the current UXR landscape. Each of these topics deserves its own post, and as the conversation evolves, so will our coverage.
I learned something very valuable during my time with 1010data: there are two ways to respond to times of change. You can focus on the turmoil, get lost in the noise, and ultimately wind up in a state of despair, or you can identify the opportunity, get excited about the future, and set yourself up for success.
You’re free to make your own decision. I believe in the latter.
Brad (they/them) is a UX Leader, User Researcher, Coach, and Dancer who's been helping companies from early-stage startup to Fortune 500 develop engaging, fulfilling experiences and build top-tier Research & Design practices since 2009. They have helped launch dozens of products, touched hundreds of millions of users, managed budgets ranging from $0 to $10M+, and coached hundreds of Researchers. Born in Buffalo and currently based in Brooklyn, NY, Brad dances with the Sokolow Theatre Dance Ensemble and Kanopy Dance Company, co-organizes the NYC User Research meetup, and served on the Board of ResearchOps from 2018-2021.