
Running a batch job on an LLM

I tried to follow the guide for getting batch text predictions. The job ran successfully, but some rows in the result say "resource exhausted" and have no result. Is there any setting I can use to automatically retry the rows that failed?

I am using the Python Vertex AI SDK.
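As far as I know there is no built-in per-row retry setting for batch prediction jobs, so a common workaround is to parse the output JSONL, collect the failed rows, and resubmit them as a new batch job. A minimal sketch of that filtering step, assuming each output line carries the original request under a `"request"` key and any error text under a `"status"` key (the exact field names in your output file may differ, so adjust accordingly):

```python
import json

def collect_failed_requests(output_lines):
    """Scan batch-prediction output JSONL lines and return the original
    requests for rows that hit a resource-exhausted error, so they can
    be written to a new input file and resubmitted as a fresh job.
    """
    retries = []
    for line in output_lines:
        row = json.loads(line)
        # Failed rows are assumed to have the error text in "status";
        # successful rows have a "response" and an empty/absent "status".
        status = str(row.get("status", "")).lower()
        if "resource exhausted" in status or "429" in status:
            retries.append(row["request"])
    return retries

lines = [
    '{"request": {"id": 1}, "response": {"text": "ok"}}',
    '{"request": {"id": 2}, "status": "Resource exhausted"}',
]
print(collect_failed_requests(lines))  # → [{'id': 2}]
```

You could then serialize the returned requests back to JSONL, upload them to Cloud Storage, and start a second batch job over just those rows.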
