Incentivised surveys attract both genuine respondents and fraudsters seeking easy rewards. Protecting data integrity requires a balanced approach: make it difficult for dishonest participants to succeed without penalising honest users. In-survey quality checks are a key part of a broader fraud-prevention strategy; surveys without such checks become fraud magnets, allowing inattentive or deceitful respondents to claim incentives they haven't earned.
In a random sample of 61 surveys submitted to Toluna in August 2025, 64% lacked any quality checks, highlighting a gap in industry practices and the need for greater education on fraud prevention.
This article presents three core principles for designing surveys that are both fraud-resistant and user-friendly.

1. Make it difficult for fraudsters, easy for real respondents
The most effective way to fraud-proof a survey is to weave detection into the design, not bolt it on after the fact. When quality checks are subtle and embedded, they don’t interrupt the experience for real participants, but they trip up fraudsters quickly.
Recommendations include:
Avoid obvious screening questions. Fraudsters can easily identify typical screening questions, such as “Did you rent a car in the last seven days?” and answer strategically. To prevent this, embed screeners within neutral groupings — for example, “Which of the following activities have you done in the last seven days?” using a mix of target and decoy options. Nearly 20% of surveys in our dataset contained obvious or leading questions.
- Why it’s important: Fraudsters try to qualify for as many surveys as possible, so an easy screening section is an open invitation. If repeatedly screened out, they waste attempts, while honest respondents complete the survey, reducing fraud opportunities and ensuring higher-quality data.
Use fake or unlikely options. For example, add a made-up brand to a list of real car brands, or 'space travel' to a list of hobbies.
- Why it’s important: Fraudsters ignore content and often don’t understand the topic, making it hard for them to spot impossible or illogical answer options. As a result, they are more likely to fall into traps embedded in the survey.
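As a rough illustration, a decoy check like this can be automated at submission time. The sketch below assumes invented decoy names ("Zentara", "space travel") and a simple set-based answer format, neither of which comes from the article:

```python
# Flag respondents who select options that don't exist.
# "Zentara" and "space travel" are illustrative decoys, not real survey content.
DECOYS = {"Zentara", "space travel"}

def picked_decoy(selections: set[str]) -> bool:
    """Return True if any selected option is a planted decoy."""
    return bool(selections & DECOYS)
```

In practice the decoy list would be rotated per study so fraudsters cannot learn it.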
Overstatement traps work well too. Include one or two rare or unlikely items in multi-choice lists. If someone claims they’ve bought a DVD, car insurance, and chocolates all in the past seven days, they may be over-claiming.
- Why it’s important: Fraudsters know that selecting many options in multi-select questions, especially in screening sections, increases their chances of staying in a survey. As a result, they often over-select items, attempting to maximize their likelihood of qualifying.
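A minimal sketch of how an overstatement trap might be scored, assuming hypothetical item names and a cut-off chosen purely for illustration:

```python
# Count how many planted rare/unlikely items a respondent claims.
# Item names and the cut-off are assumptions for this sketch.
RARE_ITEMS = {"bought a DVD", "bought car insurance", "bought a luxury watch"}

def over_claiming(selections: set[str], max_plausible: int = 1) -> bool:
    """Flag respondents who claim more rare activities than is plausible."""
    return len(selections & RARE_ITEMS) > max_plausible
```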
Include open ends. Balance is key here, as too many open ends can annoy genuine respondents — however, open ends create an extra layer of complexity for fraudsters.
- Why it’s important: Fraudsters have to program bots to provide coherent, non-repetitive responses to open-ended questions. This adds a hurdle, causing some to abandon the survey or submit low-effort answers, which can then be easily identified through open-end automated checks.
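Such automated open-end checks can start very simply. The sketch below flags answers that are too short or repeated verbatim; the word-count threshold is an illustrative assumption:

```python
def flag_open_ends(answers: list[str], min_words: int = 3) -> list[int]:
    """Return indices of open-end answers that look low-effort or duplicated."""
    seen: set[str] = set()
    flagged = []
    for i, text in enumerate(answers):
        norm = " ".join(text.lower().split())  # normalise case and whitespace
        if len(norm.split()) < min_words or norm in seen:
            flagged.append(i)
        seen.add(norm)
    return flagged
```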
Leverage automation to remove poor responses in real time.
- Why it’s important: Fraudsters know most quality checks happen after survey completion. By then, incentives are often already awarded and cashed out, making surveys without real-time terminations especially vulnerable to targeted fraud attacks.
These checks are unobtrusive to honest users but stop bad actors before the data is compromised or rewards are issued.
2. Basic quality checks have their place, but smart ones are the future
Yes, speeding flags, straight-lining, and attention-check questions still help. But they likely catch more real respondents than fraudsters: tired ones, distracted ones, the very people you want to hear from. Fraudsters? They know these checks inside out and have adapted their algorithms and tools to circumvent them.
Evolve your defences with smarter techniques:
- Logic checks
These flag combinations of answers that are very unlikely. For example: saying you're 18 years old with five children, or claiming to buy a brand while also being unaware of it.
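Logic checks of this kind can be expressed as simple rules. The field names and thresholds below are assumptions for illustration, not a standard schema:

```python
def logic_flags(resp: dict) -> list[str]:
    """Return human-readable flags for implausible answer combinations."""
    flags = []
    if resp.get("age", 99) <= 18 and resp.get("num_children", 0) >= 5:
        flags.append("very young respondent claiming five or more children")
    if resp.get("buys_brand") and not resp.get("aware_of_brand"):
        flags.append("claims to buy a brand they are not aware of")
    return flags
```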
- Behavioural signals
Monitor how respondents interact with a survey, not just their answers. Behaviours like copy-pasting, unusually fast typing, or non-human mouse movements provide behavioural metadata, offering valuable insight into respondent authenticity and helping identify inattentive or fraudulent participants.
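As a hedged sketch, such signals might be reduced to flags like these. The metadata field names and the typing-speed cut-off are assumptions; real platforms capture far richer telemetry:

```python
def behaviour_flags(meta: dict) -> list[str]:
    """Derive simple authenticity flags from interaction metadata."""
    flags = []
    if meta.get("pasted_into_open_end"):
        flags.append("copy-paste into open end")
    chars = meta.get("chars_typed", 0)
    secs = meta.get("typing_seconds", 0)
    # ~15 chars/sec sustained is beyond plausible human survey typing
    if secs > 0 and chars / secs > 15:
        flags.append("unusually fast typing")
    return flags
```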
- Aggregated data checks
Checking data at an aggregate level is as important as examining individual respondents. Unexpected KPIs may indicate the survey was compromised by fraud that isn't visible at the individual level. Reviewing overall data patterns can reveal these anomalies, so it's essential to question our data to ensure high-quality, reliable results.
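One simple aggregate-level check is comparing a study KPI against an expected benchmark; the tolerance below is purely illustrative, and real checks would use historical norms per market and category:

```python
def kpi_anomaly(observed: float, benchmark: float, tolerance: float = 0.15) -> bool:
    """Flag a KPI (as a fraction, e.g. 0.60 = 60%) that deviates from its
    benchmark by more than `tolerance` absolute points."""
    return abs(observed - benchmark) > tolerance
```

For example, a 92% qualification rate against a 60% benchmark would be flagged for review.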
The trick is to continuously evolve in-survey checks alongside fraudsters’ tactics. Traditional quality checks remain important, but as fraudsters constantly advance their methods, we must adapt our approaches to stay one step ahead.
3. Common pitfalls to avoid
With all the quality checks, data points, and talk about fraud, it’s easy to start seeing patterns or issues that may not be there. Even well-meaning quality control can backfire. Here’s what to watch out for:
Over-correcting. Using too many trap questions, strict speed limits, or harsh “one strike and you’re out” rules can remove genuine respondents. This not only risks biased data but also a poor survey experience, reducing future participation and creating more opportunities for fraudsters.
Treating stylistic cues as proof of AI. ChatGPT often uses hyphens and semicolons, but so do some humans. Perfect punctuation and spelling alone don't indicate a bot response, though they're worth considering alongside other data points.
Ignoring cultural, generational, and device nuance. Some regions prefer short responses, others detailed ones. Language, device habits, and survey etiquette vary by market, making these factors especially important in multi-country studies.
Assuming automation is everything. Automated real-time checks are essential, but human review remains important for open ends, contradictions, and context-specific fraud. Combining human and artificial intelligence ensures the highest data quality.
Remember, balance is everything
To fraud-proof a survey, think like a fraudster, then design like a researcher! The best systems are layered, adaptive, and invisible to genuine users. Combine thoughtful questionnaire design, smart quality logic, and agility — always asking if it feels fair to real respondents, because we’re not just filtering out fraud, we’re protecting the very voices we want to hear.
This article was first published in the Q4 2025 edition of Asia Research Media