Is there a way to increase the timeout? The current limit of 1800 seconds is too restrictive. I am using Workflows together with GCP Batch (each step is a GCP Batch job), and my jobs need to run for more than 1800 seconds. If Workflows cannot raise the timeout limit, is there an alternative within the GCP API that would let me schedule several GCP Batch jobs as sequential steps?
Hi @gradientopt,
Welcome to Google Cloud Community!
At the moment, per this documentation on invoking an HTTP endpoint, the maximum timeout is 1800 seconds before an exception is thrown.
I would suggest filing a feature request so that our engineers can take a look at this. Please be advised that we don't have a specific ETA for this, but you can keep track of its progress once the ticket has been created.
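For reference, here is a minimal sketch of how the timeout argument is set on an HTTP call step (the URL and body are placeholders, not a complete Batch request); values above 1800 are rejected:

```yaml
# Sketch only: placeholder URL and empty body, just to show where the timeout goes.
- call_batch_api:
    call: http.post
    args:
      url: https://batch.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/jobs
      auth:
        type: OAuth2
      timeout: 1800  # maximum allowed for HTTP call steps
      body: {}
    result: api_response
```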
I am facing the same difficulty when using GCP Batch. Is there a way to address it? Right now my workflow reports a failure due to the 1800s timeout, but the Batch job itself keeps running normally.
For those interested in increased Cloud Workflows timeouts, please DM me with the following details to help the Workflows team advise on the best approach.
Trying to run a dbt pipeline in a Cloud Run job as part of a workflow; it takes more than 1800s to complete, so the workflow always fails.
The step needs to be able to run for longer than 1800s. Ideally it would respect the timeout of the Cloud Run job it's calling.
Steps should be able to have timeouts > 1800s, at least 1 hour.
europe-west4
not a lot
Cloud Run
Just to share that I found the solution, which applies to you too, @borft.
Connectors to GCP services, as opposed to HTTP request invocations, have a default timeout of 1800 seconds that can be increased to up to 31536000 seconds (one year).
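For example, a minimal sketch of creating a Batch job through the connector with a longer connector timeout (the project, region, image, and job details below are placeholders) could look like this:

```yaml
main:
  steps:
    - create_batch_job:
        # Batch connector: blocks until the job completes or the connector timeout elapses.
        call: googleapis.batch.v1.projects.locations.jobs.create
        args:
          parent: projects/PROJECT_ID/locations/europe-west4
          jobId: my-long-running-job
          body:
            taskGroups:
              - taskSpec:
                  runnables:
                    - container:
                        imageUri: gcr.io/PROJECT_ID/my-image
          connector_params:
            timeout: 14400  # 4 hours; connectors accept values up to 31536000 seconds
        result: batch_job
    - finish:
        return: ${batch_job}
```

Because the connector waits for the Batch job to finish, the step no longer fails at 1800s. There is also a Cloud Run Admin API connector for running Cloud Run jobs, so the same connector_params timeout approach should cover the dbt-in-Cloud-Run case above.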