I'm testing Gemini Pro and Bison to generate an HTML script from a description of what the script should do. Sometimes I get a correct response, but most of the time the model starts generating the HTML and stops partway through, and the final response I get is:
The model response was blocked because of a quality issue or a parameter setting. Try increasing the temperature, top-k, or top-p parameters to generate a different response. (error code: {errorCodes})
The error code is 210 in the case of Bison.
I tried what the message advises, but the error persists.
Please assist me through this issue.
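For reference, the parameters the error message mentions live in the request's `generationConfig`. A minimal sketch of the request body I'm sending (the values are illustrative and the field names follow the Vertex AI `generateContent` request shape; verify them against the current docs):

```javascript
// Sketch of a generateContent request body. Values are illustrative;
// field names are assumptions based on the Vertex AI request shape.
const req = {
  contents: [{ role: 'user', parts: [{ text: 'Generate an HTML page with a login form.' }] }],
  generationConfig: {
    temperature: 0.9,      // higher values give more varied output
    topK: 40,              // sample from the 40 most likely tokens
    topP: 0.95,            // nucleus sampling cutoff
    maxOutputTokens: 2048, // raise this if long HTML output is being cut off
  },
};
console.log(Object.keys(req.generationConfig).join(','));
```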
Hello,
I have the same error in the GCP prompt interface. When I use the streaming API, I get a partial or empty answer. I think the interface crashes due to an empty answer part.
const streamingResp = await generativeModel.generateContentStream(req);
for await (const item of streamingResp.stream) {
  // When a chunk is blocked, candidates/content/parts can be missing, so
  // indexing parts[0] directly crashes ("parts[0] is undefined").
  const text = item.candidates?.[0]?.content?.parts?.[0]?.text;
  if (text) process.stdout.write('stream chunk: ' + text);
}
Regards,
Cedric
Yup, same here.
Were you trying to submit a request that would produce a high token count? I was getting this error on such requests and was surprised at how small the response's token limit was. I kept reducing the request's complexity until it responded.
Did you try adjusting the safety filters?
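If the safety filters are the cause, they can be relaxed per category in the request. A hedged sketch (the category and threshold strings are assumptions based on the Vertex AI API enums; check the docs for the full list and defaults):

```javascript
// Sketch: per-category safety settings for a generateContent request.
// Enum strings are assumptions based on the Vertex AI API; verify in the docs.
const safetySettings = [
  { category: 'HARM_CATEGORY_HARASSMENT', threshold: 'BLOCK_ONLY_HIGH' },
  { category: 'HARM_CATEGORY_HATE_SPEECH', threshold: 'BLOCK_ONLY_HIGH' },
  { category: 'HARM_CATEGORY_DANGEROUS_CONTENT', threshold: 'BLOCK_ONLY_HIGH' },
];
// BLOCK_ONLY_HIGH blocks less aggressively than the default threshold.
const req = { contents: [/* your prompt */], safetySettings };
console.log(req.safetySettings.length);
```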