Online prediction against text-bison fails if the prompt contains the string "ATTEMPTSMandatory". The API call returns HTTP 200, but the response content is empty. I discovered this by accident: a bug in my chunking code generated this odd token.
For example, the following input returns the expected response:
{
  "instances": [
    { "prompt": "How to make tea?" }
  ],
  "parameters": {
    "temperature": 0.2,
    "maxOutputTokens": 256,
    "topK": 40,
    "topP": 0.95
  }
}
The following input returns an empty response:
{
  "instances": [
    { "prompt": "How to make tea? ATTEMPTSMandatory" }
  ],
  "parameters": {
    "temperature": 0.2,
    "maxOutputTokens": 256,
    "topK": 40,
    "topP": 0.95
  }
}
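As a workaround until the behavior is explained, here is a minimal sketch (assuming the payload shape above; the token name and sanitizer are my own) that strips the problematic token from the prompt before building the request body:

```python
import json

# Token observed to trigger empty responses (from my chunking bug)
PROBLEM_TOKEN = "ATTEMPTSMandatory"

def build_request(prompt: str) -> str:
    """Build the text-bison predict request body, sanitizing the prompt."""
    clean = prompt.replace(PROBLEM_TOKEN, "").strip()
    body = {
        "instances": [{"prompt": clean}],
        "parameters": {
            "temperature": 0.2,
            "maxOutputTokens": 256,
            "topK": 40,
            "topP": 0.95,
        },
    }
    return json.dumps(body)

print(build_request("How to make tea? ATTEMPTSMandatory"))
```

This only hides the symptom on the client side; it does not explain why the model returns an empty prediction for that token.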