How to recognise, prevent & respond to fraud in qualitative UX research
During a recent discovery project I led on car finance, I interviewed a participant who, based on the screener, appeared to meet the eligibility criteria. They even left thoughtful responses to the open-ended questions and on paper appeared perfect for the study. Early in the session, however, their responses didn’t align with what they’d previously shared. As I asked a few warm-up questions (something I sometimes do to confirm participant fit), it became clear they hadn’t just exaggerated their experience. They had never owned a car!
I’ve come across participants before who overstate their use of a product or slightly misrepresent details to qualify for a study. That’s not uncommon. This was the first time, however, I’d encountered someone who had fabricated their eligibility entirely. I ended the session early and reported the incident to the recruitment platform.
This experience made me reflect on how we identify and manage fraud in qualitative research, particularly in remote contexts, where verification is limited. While fraudulent responses are often discussed in survey-based research, they’re becoming more relevant in qualitative work, too.
Fraudulent participation refers to individuals who deliberately misrepresent their identity or experience to qualify for a study. This might include faking product usage, claiming a health condition they don’t have, or copying someone else’s story to gain entry. Their motivations are usually financial: many studies offer compensation, and that alone can be enough to attract opportunistic behaviour.
This is not the same as inattentiveness or exaggeration. A participant who forgets a detail or inflates how often they use an app is not necessarily being dishonest, but when someone fabricates their eligibility from the outset, like claiming to be a car owner when they’ve never held a licence, that’s fraud.
This type of behaviour is a growing concern, especially in remote qualitative research, where identity checks are minimal or inadequate. It isn’t unique to UX research; it affects most types of paid research. It’s worth noting that this kind of fraud not only harms data quality but can also leave us researchers questioning our own judgement and feeling personally responsible. That can lead to burnout and mistrust of future participants, which is particularly damaging in qualitative work that relies on rapport and trust.
Several overlapping factors have made qualitative research more vulnerable to fraud:
Since the pandemic, most interviews and diary studies are run online. While this makes accessing participants easier and faster, it also makes it easier for people to fabricate information. Without face-to-face interaction and ID checks, participants can hide behind text or audio, and we often have no way to verify whether their story is real.
Offering financial compensation for time is fair and ethical, but it does open the door to dishonest participation. As Santinele Martino et al. (2024) note, compensation can create “perverse incentives” when study eligibility is tightly defined and desirable. At a time when unemployment is rising, people are looking for ways to make money, and online research participation is one of them. In fact, there are popular online communities dedicated to just that: identifying paid research opportunities.
This is a relatively new development, but with tools like ChatGPT it’s now possible for participants to generate believable screener responses or interview answers without real experience. Even though AI detection tools exist, their effectiveness at detecting the authenticity of content requires further validation (Mistry et al., 2024), and at this point they cannot be trusted.
Fraudulent participants often give themselves away — if you know what to look for. Below we discuss several common warning signs.
One of the most reliable signs is inconsistency between the screener and the session. For example, someone may claim to use a product every day, but during the interview they can't name any features or describe their usage in detail.
Fraudulent participants often give generic or scripted answers — especially in open-ended questions. They may struggle to share specific experiences, timelines, or terminology. These responses often lack detail and feel “rehearsed.”
If a participant sounds like they’re paraphrasing a product page rather than describing their own interaction, that’s a red flag.
While there are valid reasons participants may prefer audio-only, a pattern of camera refusal, especially when paired with inconsistent responses and other red flags, can signal deception. Several studies on fraudulent participation in online research (e.g., Mistry et al., 2024; Sefcik et al., 2023) have reported clusters of participants who avoided any visual interaction and provided conflicting information about their background.
A participant who focuses heavily on payment (e.g., asking when and how they’ll receive it, or appearing disinterested in the study itself) may be motivated purely by the incentive. While not inherently fraudulent, it’s worth noting when this behaviour appears alongside other warning signs.
In some documented cases, multiple participants gave nearly identical accounts of rare experiences, or told stories that didn’t align with known realities (e.g., someone in their early twenties claiming decades of experience). Repetition, contradiction, or improbable combinations of attributes are all worth noting.
A recent study by Panicker et al. (2024), which involved interviewing 16 HCI researchers, reported that fraudulent participants often use generic or copy-paste Gmail addresses, sometimes pairing common names with number strings. Addresses following that format (a common first name plus a run of digits) were more likely to belong to fraudulent participants.
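As an illustration, here is a minimal sketch of how that pattern could be checked automatically. Everything in it is an assumption for demonstration purposes: the name list, the domain list, and the regex are placeholders, and johnsmith1234@gmail.com is a made-up address. A match should only ever be one weak signal to queue for manual review, never grounds for automatic rejection.

```python
import re

# Illustrative placeholders, not drawn from the study; swap in your own lists.
COMMON_NAMES = {"john", "jane", "david", "sarah", "michael"}
FREE_MAIL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com"}

def looks_generic(email: str) -> bool:
    """True if the address matches the 'commonname1234@freemail' pattern."""
    match = re.fullmatch(r"([a-z]+)(\d{2,})@([\w.\-]+)", email.strip().lower())
    if not match:
        return False
    letters, _digits, domain = match.groups()
    return domain in FREE_MAIL_DOMAINS and any(name in letters for name in COMMON_NAMES)

# One weak signal among many: flag for manual review, never auto-reject.
print(looks_generic("johnsmith1234@gmail.com"))    # True
print(looks_generic("a.lovelace@example.ac.uk"))   # False
```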
Fraudulent participants may seem distracted, disengaged, or difficult to connect with. They might provide one-word answers or fail to pay full attention to the study, resulting in unusually short interviews in which rapport is impossible.
There is no way to completely eliminate fraudulent participants from our research. There are, however, a number of steps we can take to help us detect them.
Screeners should include open-ended questions that require specific, experience-based answers. For example, “Tell us about the last time you used your insurer’s app.” You can also include logic-check questions (e.g., ask for both age and year of birth and cross-check them, as sketched below) or rephrase key questions to test consistency. Manual review of screener responses is often essential to catch subtle red flags.
Related: How to write great screener surveys
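If your screener data is structured, that age and year-of-birth cross-check can be partly automated. Below is a minimal sketch assuming hypothetical field names (stated_age, birth_year); the one-year tolerance allows for respondents who haven’t had their birthday yet this year, and mismatches go to manual review rather than automatic rejection.

```python
from datetime import date

def age_matches_birth_year(stated_age: int, birth_year: int, tolerance: int = 1) -> bool:
    """Check that a stated age is consistent with a stated year of birth."""
    implied_age = date.today().year - birth_year
    return abs(implied_age - stated_age) <= tolerance

# Hypothetical screener response: claims to be 25 but born in 1965.
response = {"stated_age": 25, "birth_year": 1965}
if not age_matches_birth_year(response["stated_age"], response["birth_year"]):
    print("Flag for manual review: stated age and year of birth don't match.")
```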
A brief onboarding call, or even a short confirmation message, can help validate participants before the session. Asking for details like the name of the product they use, or what region they’re in, can be enough to spot inconsistencies early. If you’re conducting B2B research, you can use LinkedIn to verify participants’ identities.
If possible, use trusted panels or verified communities. Avoid advertising large incentives in public forums. Most popular platforms are working on ways to improve fraud detection — check what steps the one you are using is taking.
Consider delaying or splitting payments (e.g., part after a pre-task, part after the interview), or using gift card platforms that require identity verification. Some platforms also let you flag suspicious participants so others don’t recruit them again.
Related: How incentives impact bias in UX research
Make fraud a standard topic in study planning. Decide in advance what to do if someone turns out to be ineligible. Document incidents, and share them internally as learning moments, not just one-off problems.
If you discover during a session that a participant is ineligible due to misrepresentation, don’t panic: stay polite, bring the session to an early close, report the incident to the recruitment platform, and make a note of what happened.
Panicker et al. (2024) recommend documenting these events internally, both for transparency and to build resilience across teams. It is also worth preparing a plan in advance so that you know what to do in the session if fraud is suspected.
It’s important not to conflate fraud with unfamiliarity. A participant who speaks briefly, seems nervous, or has a different communication style may still be a valid contributor. People from marginalised groups or those with less experience in research may appear “inconsistent” simply because they don’t use the language we expect.
Fraud prevention must be balanced with inclusion. No single red flag from the list above is definitive; instead, base decisions on holistic patterns reviewed in team discussions. Overly aggressive screening or rigid assumptions about how “real” users behave can exclude those who already face barriers to participation.
Stay critical, not cynical, and use multiple data points before making a judgement.
Fraudulent participation is a growing issue in qualitative research, but it’s manageable. With the right mix of awareness, process design, and ethical care, we can reduce risk while keeping research open, inclusive, and human-centred.
This recent incident reminded me that good research isn’t just about asking the right questions; it’s also about ensuring we’re speaking to the right people. In an age of AI-generated stories (and even users) and global participant platforms, that’s a challenge worth preparing for.
Maria is an experienced UX researcher with a PhD in Cognitive Psychology and over a decade of experience across academia and industry. She has built and scaled UX research practices in fast-paced SaaS environments, and recently founded Decaf Before Death, a specialty decaf coffee business. She writes the UX Psychology newsletter and lives in Sheffield, UK, with her partner and two cats.