Hi everyone,
I’m currently using the Gemini 1.5 Pro model on Vertex AI to transcribe audio. However, I’ve run into an issue: the output is getting truncated because of the 8,192-token output limit.
How can I overcome this limitation? Are there any techniques or best practices to handle larger transcription outputs while using this model?
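
For context, the workaround I’ve been sketching is to split the audio client-side into fixed-length chunks, transcribe each chunk separately, and stitch the results together. The chunk length, project ID, file names, and prompt below are placeholders, and I’m not sure this is the recommended approach:

```python
# Rough workaround idea: split the audio into fixed-length chunks client-side,
# transcribe each chunk with Gemini 1.5 Pro, then stitch the pieces together.
# Chunk length, project ID, file paths, and the prompt are placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel, Part
from pydub import AudioSegment  # local audio splitting; any splitter would do

vertexai.init(project="my-project", location="us-central1")  # placeholder project
model = GenerativeModel("gemini-1.5-pro")

CHUNK_MS = 10 * 60 * 1000  # 10-minute chunks; tuned so each transcript stays under the output limit


def transcribe_long_audio(path: str) -> str:
    audio = AudioSegment.from_file(path)
    transcripts = []
    for start in range(0, len(audio), CHUNK_MS):
        chunk = audio[start:start + CHUNK_MS]
        chunk_bytes = chunk.export(format="mp3").read()  # export chunk to in-memory mp3
        response = model.generate_content(
            [
                Part.from_data(data=chunk_bytes, mime_type="audio/mp3"),
                "Transcribe this audio verbatim.",
            ],
            generation_config={"max_output_tokens": 8192},
        )
        transcripts.append(response.text)
    return "\n".join(transcripts)


print(transcribe_long_audio("meeting_recording.mp3"))
```

The obvious downside is that words can get lost at chunk boundaries (overlapping chunks might help), so I’d still love to hear whether there’s a proper best practice for this.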
I’m also curious: does Gemini use Chirp internally for transcription, or is its transcription capability entirely native to Gemini itself?
Any help or insights would be greatly appreciated! Thanks in advance!