I'm using a Playbook alongside a Dialogflow agent, handing over from a flow and going through an "add passenger" flow. I've got examples in place and everything was working until recently.
Now I've started to get ridiculous DANGEROUS_CONTENT errors from the model when I ask perfectly innocuous questions that had been working fine. This is the debug output:
Transitioned to playbook: "Add Passenger" with context. context: _last_utterance_in: ' I want to add 3 passengers to my booking' Conversation context in prompt using 24 input tokens. Error! Failed to get the next action due to Responsible AI filtering. Blocked by categories: DANGEROUS_CONTENT
TBH, this sort of behaviour makes me completely lose confidence in the Gemini models and Playbooks as a working solution.
Is there any way I can stop this happening?
This documentation shows how to configure the safety filters.
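For a rough mental model of what those console levels mean: as I understand it, "Block few" / "Block some" / "Block most" correspond to the harm-block thresholds in the underlying Gemini safety settings. The sketch below isn't the Playbooks configuration itself, just an illustration of those thresholds using the Vertex AI Python SDK; the project, location and model name are placeholders.

```python
import vertexai
from vertexai.generative_models import (
    GenerativeModel,
    HarmBlockThreshold,
    HarmCategory,
    SafetySetting,
)

# Placeholder project/location/model -- substitute your own.
vertexai.init(project="my-project", location="us-central1")

# My understanding of how the console levels map to thresholds:
#   "Block most" -> BLOCK_LOW_AND_ABOVE
#   "Block some" -> BLOCK_MEDIUM_AND_ABOVE
#   "Block few"  -> BLOCK_ONLY_HIGH
safety_settings = [
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold=HarmBlockThreshold.BLOCK_ONLY_HIGH,  # "Block few"
    ),
]

model = GenerativeModel("gemini-1.0-pro")
response = model.generate_content(
    "I want to add 3 passengers to my booking",
    safety_settings=safety_settings,
)

if response.candidates:
    # finish_reason is SAFETY (rather than STOP) when the reply was filtered.
    print(response.candidates[0].finish_reason)
    print(response.candidates[0].safety_ratings)
else:
    # The prompt itself was filtered before any candidate was generated.
    print(response)
```

In the Playbooks console you can only pick from the predefined levels, so this is purely to show what those levels translate to underneath.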
Thanks for the response @AndrewB. I'm aware of the documentation. We currently have "Block few", which is the most permissive standard setting, so I can't see how this will stop it happening, although I've since realised that the problem is intermittent. However, it also shows that the safety filtering isn't working properly if it can be triggered by:
I want to add 3 passengers to my booking
Which I guess is the real issue here and needs to be flagged as poor performance with Gemini 1 Pro.
Hello, I'm having the same issue.
I'm trying to build an agent that gives safety recommendations to workers in a factory. Sometimes it does handle sensitive content, but it's not really dangerous, and we also have the "Block few" rule in our settings.
Any ideas what we can do @AndrewB ?
Just in case Andrew is busy: I fixed this by filling out this form to get access to the restricted "Block none" filter level.
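For anyone who gets that access approved: as far as I can tell, the restricted level corresponds to the BLOCK_NONE threshold. A minimal sketch of what that looks like when calling Gemini directly via the Vertex AI SDK, assuming your project has been allowlisted (the model name and prompt are just placeholders):

```python
from vertexai.generative_models import (
    GenerativeModel,
    HarmBlockThreshold,
    HarmCategory,
    SafetySetting,
)

# "Block none" equivalent: my understanding is this is only honoured once the
# access form has been approved for your project.
unblocked = SafetySetting(
    category=HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
    threshold=HarmBlockThreshold.BLOCK_NONE,
)

model = GenerativeModel("gemini-1.0-pro")
response = model.generate_content(
    "What protective equipment should workers use near the stamping press?",
    safety_settings=[unblocked],
)
print(response.text)
```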