
Possible bug in text-bison prediction?

Online prediction against text-bison fails if the prompt contains "ATTEMPTSMandatory". The API call returns success (HTTP 200), but the response content is empty. I discovered this accidentally: an issue in my chunking code generated this odd token.

 

For example, the following input returns the expected response:

{
  "instances": [
    { "prompt": "How to make tea?" }
  ],
  "parameters": {
    "temperature": 0.2,
    "maxOutputTokens": 256,
    "topK": 40,
    "topP": 0.95
  }
}

The following input returns an empty response:

{
  "instances": [
    { "prompt": "How to make tea? ATTEMPTSMandatory" }
  ],
  "parameters": {
    "temperature": 0.2,
    "maxOutputTokens": 256,
    "topK": 40,
    "topP": 0.95
  }
}
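For reference, a minimal sketch of how the two prompts can be compared with the Vertex AI Python SDK (assuming the vertexai package is installed; the project ID and region below are placeholders, and the model version string may differ):

import vertexai
from vertexai.language_models import TextGenerationModel

# Placeholders: replace with your own project and region.
vertexai.init(project="YOUR_PROJECT_ID", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison")

params = {
    "temperature": 0.2,
    "max_output_tokens": 256,
    "top_k": 40,
    "top_p": 0.95,
}

# Same prompt with and without the odd token.
for prompt in ["How to make tea?", "How to make tea? ATTEMPTSMandatory"]:
    response = model.predict(prompt, **params)
    # An empty response.text here reproduces the behavior described above.
    print(repr(prompt), "->", repr(response.text))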

 

2 REPLIES

It's possible that the token "ATTEMPTSMandatory" triggers certain behavior within the model and is being interpreted as a special command or instruction, which leads to an empty response.

Aside from removing "ATTEMPTSMandatory" as a workaround, you can also try tuning your parameters by setting temperature, maxOutputTokens, topP, and topK to different values rather than using the defaults.
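For example, something along these lines could be used to retry the same prompt with non-default sampling values (a rough sketch using the Vertex AI Python SDK; the project, region, and specific parameter values are only illustrative):

import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="YOUR_PROJECT_ID", location="us-central1")  # placeholders
model = TextGenerationModel.from_pretrained("text-bison")

# Illustrative, non-default sampling values; the exact numbers are arbitrary.
response = model.predict(
    "How to make tea? ATTEMPTSMandatory",
    temperature=0.8,        # higher than the 0.2 in the original request
    max_output_tokens=256,
    top_k=10,               # lower than the original 40
    top_p=0.8,              # lower than the original 0.95
)
print(repr(response.text))  # check whether the response is still empty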

I would not expect an LLM to fail like that over a single token. If you place this token anywhere in any prompt, the text-bison model fails to respond.