
Cloud Run: Sidecar not respecting timeout

We are seeing a situation where a Cloud Run service deployed with a sidecar is ignoring the request timeout setting.

Our setup has a reverse proxy in front of this sidecar container. When we execute a request lasting more than 60 seconds (the default timeout), we get the following error:
"The request has been terminated because it has reached the maximum request timeout. To change this limit, see https://cloud.google.com/run/docs/configuring/request-timeout".
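For reference, the service-level request timeout itself can be raised with gcloud (up to 3600 seconds); SERVICE and REGION below are placeholders for your own values:

```shell
# Raise the Cloud Run request timeout for an existing service to 300s.
# SERVICE and REGION are placeholders.
gcloud run services update SERVICE \
  --region REGION \
  --timeout 300
```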

Here is a screenshot showing the request length:

[screenshot: xanmanza_0-1706710226811.png]

And a screenshot showing the request timeout setting:

[screenshot: xanmanza_1-1706710255300.png]

Has anyone ever encountered an issue like this?

Solved
1 ACCEPTED SOLUTION

I had the same issue and figured out the problem: I was using an Nginx reverse proxy in front of my container, and Nginx's default timeout is 60 seconds. Increasing it to match the "Request timeout" setting on Cloud Run (default 300 seconds) solved the issue.

You can add the following to the nginx conf file:
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_send_timeout 300;
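For anyone unsure where these directives go, here is a minimal sketch of an nginx server block; the listen port and upstream address are placeholders:

```nginx
server {
    listen 8080;

    location / {
        # Match Cloud Run's request timeout (default 300s) so nginx
        # doesn't terminate the request before Cloud Run does.
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_send_timeout 300;

        proxy_pass http://127.0.0.1:8888;  # placeholder sidecar port
    }
}
```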


5 REPLIES

Yes, but it was in my YAML. The way I approached checking it was:

1) What is the startup order of the containers?
2) What is the status of the health check and readiness probes?
3) Verify the sidecar's IAM policy to ensure the invoker account has permission.
4) Ensure the Cloud Run service agent has permission.
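A quick way to check (1) and (2) above is to describe the service; the startup order (container-dependencies annotation) and probe configuration appear in the YAML. SERVICE and REGION are placeholders:

```shell
# Dump the full service spec; look for the container-dependencies
# annotation (startup order) and startupProbe/livenessProbe blocks.
gcloud run services describe SERVICE \
  --region REGION \
  --format yaml
```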

Hi @djs_75

  1) The reverse proxy starts first, then the sidecar container.
  2) All health check and readiness probes are healthy by the time I send the request, as shown in the picture.
  3) and 4) I'm not sure what you mean here. What permissions are necessary for the sidecar/service to respect the request timeout setting?

3/4. The account set on the Cloud Run service requires invoker permission, and usually the Cloud Run service agent needs permission as well. Additionally, both accounts need access to the deployed container image (if it's in Artifact Registry); these failures typically show in the Logs Explorer as IAM errors. If this crosses projects, you need to consider that too. I would also ensure the accounts have Network User on the VPC and on the Serverless VPC Connector. The IAM binding for Cloud Run would look like:

SA_EMAIL="$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com"

# Create the service account if it doesn't already exist.
if [ -z "$(gcloud iam service-accounts list --filter "$SA_EMAIL" --format="value(email)" --project "$PROJECT_ID")" ]; then
  gcloud iam service-accounts create "$SA_NAME" \
    --description="Multi-Container Cloud Run Demo" --project "$PROJECT_ID"
fi

gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:$SA_EMAIL" \
  --role='roles/secretmanager.secretAccessor'

CLOUD_RUN_NAME=sidecar-cloudrun-example
gcloud run deploy "$CLOUD_RUN_NAME" --image docker.io/imagename \
  --allow-unauthenticated --vpc-connector "$NETWORK" \
  --service-account "$SA_EMAIL" --region "$REGION" --port 80
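If you want to verify the invoker binding mentioned in (3), this lists the members holding roles/run.invoker on the service; SERVICE and REGION are placeholders:

```shell
# Show who can invoke the service; roles/run.invoker should include
# the calling identity (or allUsers if the service is public).
gcloud run services get-iam-policy SERVICE --region REGION
```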


@mkhattat you beauty! That was it.

I had fiddled around with the nginx timeout settings but not the `proxy_*` timeout settings!

Thank you!