
SnatchBot conversation is not behaving as programmed

Dear SnatchBot Team,

My team and I are currently using SnatchBot in a study. Of the 250 participants who have taken part so far, two have now reported that our bot is somehow "misbehaving". These two participants describe conversation turns and grammar mistakes that we certainly did not program; for example, instead of the two answer options "good" and "not so good", they were offered "not so good" and "bowel". Do you have any idea why these mistakes happen and what we can do to prevent them?

Thank you very much in advance!