
Truncated response of PaLM 2 model

When sending text with a length near the input token limit, the output is truncated.
This also happens in Vertex AI Studio.
We expect the input and output token limits to be independent; what could be the possible reasons for this behaviour?
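To make the expectation above concrete: if the input and output token limits are truly independent, a prompt near the input limit should not reduce the tokens available for the response; truncation of the kind described would instead be consistent with a shared context window. This is a minimal illustrative sketch of that distinction, not the actual PaLM 2 accounting; the function name and the token figures (8192/1024) are assumptions for illustration only.

```python
def available_output_tokens(input_tokens, output_limit=1024, shared_window=None):
    """Illustrative budget calculation (hypothetical helper, not an SDK call).

    If limits are independent, the output budget is always output_limit.
    If a shared context window applies, the budget shrinks as the prompt grows.
    """
    if shared_window is None:
        # Independent limits: prompt length does not affect the output budget.
        return output_limit
    # Shared window: input and output compete for the same token budget.
    return min(output_limit, max(0, shared_window - input_tokens))

# Independent limits: a near-limit prompt still allows the full output.
print(available_output_tokens(8000))                       # 1024
# Shared 8192-token window: the same prompt leaves far fewer output tokens.
print(available_output_tokens(8000, shared_window=8192))   # 192
```

Under the second model, a response would end mid-sentence once the remaining budget is exhausted, which matches the behaviour described.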
