
If you've spent any time recruiting participants recently, you already know the problem. Panel quality is a persistent headache for UX teams: too many of the participants who get through turn out to be a poor fit for the study, and you don't find out until you're mid-session and it's too late.
The screener is supposed to prevent all of this. But most screener advice is surface-level stuff - "ask behavioral questions," "keep it short." That doesn't actually help when you're trying to figure out how to weed out someone who's gaming your survey for the incentive.
We've been paying attention to what works across the studies running on Great Question, and the difference between screeners that reliably get the right people and the ones that don't tends to come down to a few specific structural choices. Here's what we've seen.
The red herring is the single most underused screener technique we see. Include one or two answer options that sound realistic but don't actually exist: a fake product name in a "which tools do you use?" question, a made-up company in an industry list, or a nonexistent feature in a product familiarity check.
Real participants skip it. Bots and unqualified panelists who are guessing their way through will select it, making them easy to spot and disqualify.
For example, if you're screening for project management tool users, your options might include Asana, Monday, Jira, Trello — and "Planstack." Anyone who selects Planstack is either not paying attention or not real. Research shows that implementing trap questions like these catches an average of 15% of respondents, with some studies seeing unqualified rates above 30%.
A few guidelines for red herrings: make them sound plausible (a made-up name that fits the category), don't make them obviously fake, and rotate them between studies so they don't become known in panelist communities.
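If you export responses for review rather than disqualifying automatically, catching trap selections is a one-line filter. Here's a minimal Python sketch; the field names, the sample data, and the Planstack option carried over from the example above are all illustrative, not any real export schema:

```python
# A minimal sketch: flag anyone who selected a red herring option in an
# exported batch of screener responses. Field names and sample data are
# made up for illustration.

RED_HERRINGS = {"Planstack"}  # rotate these between studies

responses = [
    {"respondent_id": "r1", "tools_used": ["Asana", "Jira"]},
    {"respondent_id": "r2", "tools_used": ["Trello", "Planstack"]},
]

def flag_trap_selections(responses, traps=RED_HERRINGS):
    """Return IDs of respondents who picked an option that doesn't exist."""
    return [
        r["respondent_id"]
        for r in responses
        if traps.intersection(r.get("tools_used", []))
    ]

print(flag_trap_selections(responses))  # ['r2'] -> disqualify
```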
Behavior over demographics:
Bad: "What is your primary job function?"
Good: "In your current role, do you make purchasing decisions for software tools?"
The answer directly tells you if someone has buying power. No guessing from titles.
Avoid leading questions:
Bad: "Our research shows most successful companies use data-driven decisions. Do you?"
Good: "How does your team typically make decisions about product changes?"
Open-ended. People describe what they do, not what you want to hear. As one of our webinar guests put it: "You will never ostracize somebody by speaking plainly and clearly." That applies to screeners too: write like you're talking, not like you're administering a test.
Questions that actually filter:
Bad: "Do you use analytics tools?" (Everyone says yes.)
Good: "How often does your team review website analytics?" [Daily, Weekly, Monthly, Rarely, Never]
Now you can set a threshold. The frequency question also makes it harder for someone to bluff - "yes" is easy, but committing to "daily" when you actually use analytics quarterly feels wrong, and people hesitate.
Verify with open-ended follow-ups:
If you need high-confidence qualification, add one open-ended question after your multiple-choice filters. "Briefly describe how you use [tool/process] in your typical week." Bots generate vague, generic responses. Real practitioners give specific, idiosyncratic details. This is especially useful for moderated studies where you're investing significant time per session.
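Reviewing these answers is usually a human job, but if you're screening at volume, a rough first pass can surface the suspiciously generic ones for closer scrutiny. A minimal sketch; the word-count threshold and phrase list below are illustrative guesses, not validated rules:

```python
# Rough first-pass triage of open-ended verification answers: flag very
# short or boilerplate-sounding responses for a closer human look.
# The threshold and phrase list are illustrative, not validated.

GENERIC_PHRASES = ("i use it every day", "for my work", "very useful")

def needs_review(answer: str, min_words: int = 12) -> bool:
    text = answer.lower().strip()
    too_short = len(text.split()) < min_words
    sounds_canned = any(phrase in text for phrase in GENERIC_PHRASES)
    return too_short or sounds_canned

print(needs_review("I use it every day for my work."))  # True -> review
print(needs_review(
    "Every Monday I pull last week's funnel numbers, annotate the "
    "drop-offs in a shared doc, and flag anything odd in our team channel."
))  # False -> reads like a real practitioner
```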
Length: 4–7 questions. Completion drops sharply after question 8. If you can't explain what you'd do differently with the answer, cut the question. Over-processing is one of the most common forms of hidden waste in research, and it applies to screeners too, not just reports.
Order: Lead with the question most likely to disqualify so poor fits exit early, and save anything sensitive or effortful for the end.
Logic branching: If someone answers "No" to "Do you use our product?", skip the next five questions. No point asking about features of something they don't use.
Qualify if: Uses product category AND did not select the red herring AND at least weekly AND can share screen.
Qualify if: Customer 3+ months AND uses 2+ real features AND did not select the red herring. Don't disqualify dissatisfied customers — they're often the most valuable.
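If you post-process responses programmatically instead of relying on built-in qualification logic, rules like the first "Qualify if" example above reduce to a few boolean checks. A minimal sketch; the field names and answer values are assumptions, not any particular tool's export schema:

```python
# A minimal sketch: the first "Qualify if" rule above expressed as code.
# Field names and answer values are assumptions, not a real schema.

FREQUENCY_RANK = {"Never": 0, "Rarely": 1, "Monthly": 2, "Weekly": 3, "Daily": 4}

def qualifies(answers: dict) -> bool:
    """Uses the product category, skipped the red herring,
    reviews at least weekly, and can share their screen."""
    return (
        answers.get("uses_product_category") is True
        and not answers.get("selected_red_herring", False)
        and FREQUENCY_RANK.get(answers.get("usage_frequency"), 0)
            >= FREQUENCY_RANK["Weekly"]
        and answers.get("can_share_screen") is True
    )

print(qualifies({
    "uses_product_category": True,
    "selected_red_herring": False,
    "usage_frequency": "Daily",
    "can_share_screen": True,
}))  # True
```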
1. Leading questions. "We're looking for innovative companies. Do you see yours as innovative?" Everyone says yes. Fix: Give neutral options from "early adopter" to "wait and see." Never reveal what you're looking for in the question itself.
2. Yes/no questions that don't filter. "Do you use mobile apps?" (90% say yes.) Fix: "How many mobile apps do you use for work weekly?" [0–2, 3–5, 6–10, 10+]. Force specificity. It's harder to lie about a number than a yes/no.
3. Too many questions. 12 questions = 60% completion rate. Every question you add costs you participants. Fix: Cut to 5–6. Every question must help you answer "Would this person be good for our study?" If it doesn't change your decision, it's waste.
4. Demographic-heavy screeners. Job title, company size, industry, education... and you still don't know if they do the thing you're studying. Fix: Lead with behavioral questions. Add demographics only if they genuinely predict fit for your specific study.
5. No escape hatch. Forcing people to pick from options that don't fit them contaminates your data. Fix: Always include "None of the above" and "Other." Let people tell you they don't fit so you can trust the people who do.
Completion rate: Target 70%+. Below that, your screener is too long or confusing. Check if you're asking questions that require too much thought or recall.
Qualification rate: 20–50% is healthy. Below 5% means criteria are too strict — you might be screening out good participants with unnecessary filters. Above 80% means you're not actually filtering. Your screener is decorative.
Show rate: The one that matters most. Target 75%+. Below 60%, review whether your screener oversells ease or undersells time commitment. A screener that qualifies the right people but sets wrong expectations is still broken.
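If you track these by hand, the math is just three ratios. A quick sketch with made-up counts, assuming qualification rate is measured against completed screeners:

```python
# A quick sketch of the three health metrics, with made-up counts.
# Assumes qualification rate is measured against completed screeners.

started, completed = 200, 150   # opened the screener vs. finished it
qualified = 60                  # met the qualification logic
scheduled, showed = 40, 32      # booked sessions vs. actually attended

completion_rate = completed / started       # target 70%+
qualification_rate = qualified / completed  # 20-50% is healthy
show_rate = showed / scheduled              # target 75%+

print(f"completion {completion_rate:.0%}, "
      f"qualification {qualification_rate:.0%}, "
      f"show {show_rate:.0%}")
# completion 75%, qualification 40%, show 80%
```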
2–4 minutes. Each question should take 15–20 seconds to answer. If you're writing questions that require people to pause and think hard, they're too complicated for a screener.
Only if demographics predict fit for your specific study. If you're testing a B2B software tool and job title doesn't determine whether someone uses it, skip it. Use behavioral proxies instead.
Keep your core qualification criteria consistent, but swap in 1–2 study-specific questions and rotate your red herring options. Don't copy verbatim; panelists who've seen your screener before will game it.
Write like you're talking. "What tools does your team use?" not "Please identify the software tools currently utilized by your organization." Clarity is kindness — in screeners just as much as in research reports.
Copy a template above. Add a red herring option. Set your qualification logic. Run it.
After two rounds, you'll know which questions actually filter and which are just taking up space. Cut the dead weight.
Good luck, and don't forget that Great Question has surveys and incentives already built into the platform.
Tania Clarke is a B2B SaaS product marketer focused on using customer research and market insight to shape positioning, messaging, and go-to-market strategy.