How to write screener questions that keep bots out

By Tania Clarke · Published March 11, 2026

If you've spent any time recruiting participants recently, you already know the problem. Panel quality is a persistent issue for UX teams. Too many of the participants who get through turn out to be a poor fit for the study, and you don't find out until you're mid-session, when it's too late.

The screener is supposed to prevent all of this. But most screener advice is surface-level: "ask behavioral questions," "keep it short." That doesn't help when you're trying to weed out someone who's gaming your survey for the incentive.

We've been paying attention to what works across the studies running on Great Question, and the difference between screeners that reliably get the right people and ones that don't usually comes down to a few specific structural choices. This is what we've seen.

The 5 rules

  1. Ask behavioral questions, not just demographic ones. "What project management software tools do you currently use in your day-to-day work?" beats "What's your job title?" Job titles lie. Behavior doesn't.
  2. Don't ask leading questions. "Are you interested in improving your budgeting skills?" telegraphs the "right" answer. Every incentive-motivated respondent says yes. A better version: "Which of the following tools have you used to manage your personal finances in the past 6 months?" or "Which of the following actions do you take to manage your personal finances?" — with a mix of behaviors you're looking for and not looking for.
  3. Let people self-disqualify. Always include "None of the above." Forcing people to pick from options that don't fit them gives you contaminated data.
  4. Order strategically. Start broad, narrow later. If someone doesn't meet your basic criteria, there's no reason to ask them five more questions. Respect their time and yours.
  5. Match question type to filter. Rankings reveal priorities. Multiple choice maps behaviors. Open-ended questions verify depth of experience (more on this below). Pick the format that gives you signal, not the one that's easiest to set up.

Use red herring options to catch bots and bad-faith respondents

This is the single most underused screener technique we see. Include one or two answer options that sound realistic but don't actually exist: a fake product name in a "which tools do you use?" question, a made-up company in an industry list, or a nonexistent feature in a product familiarity check.

Real participants skip them. Bots and unqualified panelists guessing their way through will pick one, making them easy to spot and disqualify.

For example, if you're screening for project management tool users, your options might include Asana, Monday, Jira, Trello — and "Planstack." Anyone who selects Planstack is either not paying attention or not real. Research shows that implementing trap questions like these catches an average of 15% of respondents, with some studies seeing unqualified rates above 30%.

A few guidelines for red herrings: make them sound plausible (a made-up name that fits the category), don't make them obviously fake, and rotate them between studies so they don't become known in panelist communities.
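At analysis time, flagging red-herring picks is a one-liner. A minimal sketch, assuming responses are exported as respondent-to-selections mappings; "Planstack" is the made-up tool from the example above, not a real product:

```python
# Flag respondents who selected a red herring option.
RED_HERRINGS = {"Planstack"}  # rotate these between studies

def picked_red_herring(selected_options):
    """True if the respondent chose any option that doesn't exist."""
    return bool(RED_HERRINGS & set(selected_options))

responses = {
    "p1": ["Asana", "Jira"],
    "p2": ["Trello", "Planstack"],  # guessing their way through
}
disqualified = [pid for pid, tools in responses.items() if picked_red_herring(tools)]
# disqualified == ["p2"]
```

The same check works for any question type that has a fake option, as long as you keep the red-herring list in one place so it's easy to rotate between studies.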

Good vs. bad questions

Behavior over demographics:
Bad: "What is your primary job function?"
Good: "In your current role, do you make purchasing decisions for software tools?"
The answer directly tells you if someone has buying power. No guessing from titles.

Avoid leading questions:
Bad: "Our research shows most successful companies use data-driven decisions. Do you?"
Good: "How does your team typically make decisions about product changes?"
Open-ended. People describe what they do, not what you want to hear. As one of our webinar guests put it: "You will never ostracize somebody by speaking plainly and clearly." That applies to screeners too: write like you're talking, not like you're administering a test.

Questions that actually filter:
Bad: "Do you use analytics tools?" (Everyone says yes.)
Good: "How often does your team review website analytics?" [Daily, Weekly, Monthly, Rarely, Never]
Now you can set a threshold. The frequency question also makes it harder for someone to bluff: "yes" is easy, but committing to "daily" when you actually use analytics quarterly feels wrong, and people hesitate.

Verify with open-ended follow-ups:
If you need high-confidence qualification, add one open-ended question after your multiple-choice filters. "Briefly describe how you use [tool/process] in your typical week." Bots generate vague, generic responses. Real practitioners give specific, idiosyncratic details. This is especially useful for moderated studies where you're investing significant time per session.

Screener structure

Length: 4–7 questions. Completion drops sharply after question 8. If you can't explain what you'd do differently with the answer, cut the question. Over-processing is one of the most common forms of hidden waste in research, and it applies to screeners too, not just reports.

Order:

  1. Warm-up (1–2 questions): Broad and easy. Don't start with the hardest filter.
  2. Core qualification (2–3 questions): Your must-haves. The questions that determine yes or no.
  3. Depth (1–2 questions): Segmentation, scheduling, or logistics.

Logic branching: If someone answers "No" to "Do you use our product?", skip the next five questions. No point asking about features of something they don't use.

Two core templates

Usability testing screener

  1. What's your primary role? [Text]
  2. Do you currently use [product category]? [Yes / No / Used it before]
  3. Which of the following [product category] tools do you use? [List real options + 1 red herring]
  4. How often do you [specific task]? [Daily / Weekly / Monthly / Rarely / Never]
  5. Comfortable sharing your screen for 45 min? [Yes / Audio only / No]

Qualify if: Uses product category AND did not select the red herring AND at least weekly AND can share screen.
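The qualify-if rule above is plain boolean logic, so it can be written down directly. A minimal sketch; the field names and answer strings are assumptions, so map them to however your screener tool exports responses:

```python
# Qualification logic for the usability testing screener above.
RED_HERRING = "Planstack"  # the fake option from question 3 (hypothetical)

def qualifies(answers):
    uses_category = answers["uses_category"] == "Yes"        # Q2
    clean = RED_HERRING not in answers["tools"]              # Q3
    frequent = answers["frequency"] in ("Daily", "Weekly")   # Q4: at least weekly
    can_share = answers["screen_share"] == "Yes"             # Q5
    return uses_category and clean and frequent and can_share

candidate = {
    "uses_category": "Yes",
    "tools": ["Asana", "Jira"],
    "frequency": "Weekly",
    "screen_share": "Yes",
}
# qualifies(candidate) -> True
```

Writing the rule out like this before you launch is a useful forcing function: if a question doesn't appear anywhere in the logic, it probably shouldn't be in the screener.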

Customer interview screener

  1. How long have you been a customer? [Less than 3mo / 3–6mo / 6–12mo / 1–2yr / 2yr+]
  2. Which features do you use regularly? [Checkbox, min 2, include 1 red herring]
  3. How would you describe your experience? [Very satisfied to Very dissatisfied]
  4. Available for a 30-minute call next week? [Yes / No / Maybe]

Qualify if: Customer 3+ months AND uses 2+ real features AND did not select the red herring. Don't disqualify dissatisfied customers — they're often the most valuable.

5 mistakes killing your screener

1. Leading questions. "We're looking for innovative companies. Do you see yours as innovative?" Everyone says yes. Fix: Give neutral options from "early adopter" to "wait and see." Never reveal what you're looking for in the question itself.

2. Yes/no questions that don't filter. "Do you use mobile apps?" (90% say yes.) Fix: "How many mobile apps do you use for work weekly?" [0–2, 3–5, 6–10, 10+]. Force specificity. It's harder to lie about a number than a yes/no.

3. Too many questions. 12 questions = 60% completion rate. Every question you add costs you participants. Fix: Cut to 5–6. Every question must answer "Would this person be good for our study?" If it doesn't change your decision, it's waste.

4. Demographic-heavy screeners. Job title, company size, industry, education... and you still don't know if they do the thing you're studying. Fix: Lead with behavioral questions. Add demographics only if they genuinely predict fit for your specific study.

5. No escape hatch. Forcing people to pick from options that don't fit them contaminates your data. Fix: Always include "None of the above" and "Other." Let people tell you they don't fit so you can trust the people who do.

Measuring effectiveness

Completion rate: Target 70%+. Below that, your screener is too long or confusing. Check if you're asking questions that require too much thought or recall.

Qualification rate: 20–50% is healthy. Below 5% means criteria are too strict — you might be screening out good participants with unnecessary filters. Above 80% means you're not actually filtering. Your screener is decorative.

Show rate: The one that matters most. Target 75%+. Below 60%, review whether your screener oversells ease or undersells time commitment. A screener that qualifies the right people but sets wrong expectations is still broken.
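All three are simple ratios over your funnel counts. A sketch with illustrative numbers, not data from any real study:

```python
# Screener funnel metrics from the targets above.
def screener_metrics(started, completed, qualified, showed):
    return {
        "completion_rate": completed / started,       # target: 0.70+
        "qualification_rate": qualified / completed,  # healthy: 0.20-0.50
        "show_rate": showed / qualified,              # target: 0.75+
    }

m = screener_metrics(started=200, completed=150, qualified=45, showed=36)
# {'completion_rate': 0.75, 'qualification_rate': 0.3, 'show_rate': 0.8}
```

Tracking these per study makes it obvious which screener change moved which number: shortening the screener shows up in completion rate, tightening criteria in qualification rate, and expectation-setting in show rate.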

FAQ

How long should a screener take?

2–4 minutes. Each question should take 15–20 seconds to answer. If you're writing questions that require people to pause and think hard, they're too complicated for a screener.

Should I ask demographic questions?

Only if demographics predict fit for your specific study. If you're testing a B2B software tool and job title doesn't determine whether someone uses it, skip it. Use behavioral proxies instead.

Can I reuse screeners across studies?

Keep your core qualification criteria consistent, but swap in 1–2 study-specific questions and rotate your red herring options. Don't copy verbatim: panelists who've seen your screener before will game it.

How do I make screeners feel less like a test?

Write like you're talking. "What tools does your team use?" not "Please identify the software tools currently utilized by your organization." Clarity is kindness — in screeners just as much as in research reports.

Start here

Copy a template above. Add a red herring option. Set your qualification logic. Run it.

After two rounds, you'll know which questions actually filter and which are just taking up space. Cut the dead weight.

Good luck, and don't forget that Great Question has surveys and incentives already built into the platform.

Tania Clarke is a B2B SaaS product marketer focused on using customer research and market insight to shape positioning, messaging, and go-to-market strategy.
