When I send text whose length is close to the input token limit, the output is truncated. The same happens in Vertex AI Studio. Since the input and output token limits should be independent of each other, what could explain this behaviour?