Hello everyone.
We are operating a SaaS-based research platform where we collect data from customers through surveys and provide data analysis tools. In an effort to prevent bot responses on our survey response pages, we have integrated reCAPTCHA v3.
Here is how we are currently using reCAPTCHA:
1) When a user clicks the "Start Survey" button, we load the reCAPTCHA script to begin monitoring user behavior.
2) When the user clicks the final "Submit Answer" button on the last question, we call grecaptcha.execute() with the action name "submit" to retrieve a token, which we verify server-side to obtain a score.
3) Based on the score, we assess whether the respondent may be a bot.
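For context, the flow above roughly corresponds to the following sketch. This is an illustration, not our exact code; SITE_KEY, loadRecaptcha, getToken, isLikelyBot, and the 0.5 threshold are all placeholders we chose for this post.

```javascript
// Placeholder site key; the real key is configured per environment.
const SITE_KEY = "your-recaptcha-v3-site-key";

// 1) Loaded when the user clicks "Start Survey".
function loadRecaptcha() {
  const script = document.createElement("script");
  script.src = `https://www.google.com/recaptcha/api.js?render=${SITE_KEY}`;
  document.head.appendChild(script);
}

// 2) Executed on the final "Submit Answer" click; returns a token,
// which we then send to our backend for verification.
function getToken() {
  return new Promise((resolve) => {
    grecaptcha.ready(() => {
      grecaptcha.execute(SITE_KEY, { action: "submit" }).then(resolve);
    });
  });
}

// 3) Server-side: after calling the siteverify API with the token,
// we classify the respondent by score. The threshold is our own
// arbitrary choice, not something reCAPTCHA prescribes.
function isLikelyBot(score, threshold = 0.5) {
  return score < threshold;
}
```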
However, we are currently facing some issues:
1) Some legitimate users are receiving unexpectedly low scores.
2) In contrast, automated scripts using Selenium are sometimes receiving high scores.
We suspect that our current implementation may not be making the best use of action names or behavior signals. We would appreciate any guidance on how to improve the accuracy of bot detection in our case.
Specifically, we are wondering:
1) Would using a more specific action name than "submit" (e.g., "survey_final_submit") help provide better context to reCAPTCHA for scoring?
2) Are there any best practices for dynamically loading reCAPTCHA scripts or tracking more user interactions across the survey that might improve scoring reliability?
3) Are there configuration tips or common pitfalls we might be overlooking?
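To make questions 1) and 2) concrete, this is the direction we are considering: calling grecaptcha.execute() at several points in the survey with distinct, descriptive action names, so reCAPTCHA sees more of the session. The action names and helper functions below are illustrative, not an existing implementation.

```javascript
const SITE_KEY = "your-recaptcha-v3-site-key"; // placeholder

// Execute reCAPTCHA with a per-step action name, e.g. "survey_start",
// "survey_question_answer", or "survey_final_submit".
function executeWithAction(action) {
  return new Promise((resolve) => {
    grecaptcha.ready(() => {
      grecaptcha.execute(SITE_KEY, { action }).then(resolve);
    });
  });
}

// Per Google's documentation, v3 action names may only contain
// alphanumeric characters, slashes, and underscores; this guard
// mirrors that documented constraint.
function isValidActionName(action) {
  return /^[A-Za-z0-9\/_]+$/.test(action);
}
```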
Any advice or shared experiences would be greatly appreciated. Thank you for your time and support.