
Help with Gemini-1.5 Pro Model output Token Limit in Vertex AI

Hi everyone,

I’m currently using the Gemini 1.5 Pro model on Vertex AI to transcribe audio to text. However, I’ve run into an issue: the output gets cropped because of the 8,192-token output limit.

  1. How can I work around this limitation? Are there any techniques or best practices for handling larger transcription outputs with this model?

  2. I’m also curious: does Gemini internally use Chirp for transcription, or is its transcription capability entirely native to Gemini itself?
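For context on question 1, the workaround I’m considering is splitting the recording into overlapping time windows and transcribing each window in a separate request, so no single response has to fit under the output cap. Here’s a rough sketch; the window lengths are arbitrary, and the Vertex AI call in the comment is from memory, so please correct me if the API shape is wrong:

```python
def segment_windows(duration_s, window_s=600, overlap_s=5):
    """Return (start, end) second offsets covering the recording,
    with a small overlap so words cut at a boundary can be stitched."""
    windows = []
    start = 0
    while start < duration_s:
        end = min(start + window_s, duration_s)
        windows.append((start, end))
        if end == duration_s:
            break
        start = end - overlap_s  # back up slightly to overlap the next window
    return windows

# A 25-minute file becomes three ~10-minute windows:
print(segment_windows(1500))  # [(0, 600), (595, 1195), (1190, 1500)]

# For each window I'd slice the audio (e.g. with ffmpeg) and send it, roughly:
#   from vertexai.generative_models import GenerativeModel, Part
#   model = GenerativeModel("gemini-1.5-pro")
#   resp = model.generate_content(
#       [Part.from_data(chunk_bytes, mime_type="audio/wav"),
#        "Transcribe this audio verbatim."],
#       generation_config={"max_output_tokens": 8192},
#   )
# ...then concatenate the per-window transcripts, deduplicating the overlap.
```

Does this seem like a reasonable approach, or is there a better-supported pattern?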

Any help or insights would be greatly appreciated! Thanks in advance!
