Google Cloud Run job configuration and its peak memory usage

In a Google Cloud Run job configured with 4GB of memory and 2 CPUs, consisting of 20 tasks in total with the parallelism (number of concurrently running tasks) set to 10, what would the peak memory usage be?

Is the configured memory allocated per task, i.e. 4GB per task × 10 concurrent tasks (40GB in total)?
Or is the memory shared across all the tasks, leaving 4GB for all 10 tasks together?

Considering that the tasks are memory-intensive, with each task averaging 3GB of utilization, would the total memory usage at peak be 30GB (3GB × 10), or 3GB for all 10 tasks combined?
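
For reference, a job with this shape would be created with something like the following sketch (the job name and image are placeholders; the flags are the standard `gcloud run jobs` flags):

```sh
# Hypothetical job matching the configuration above:
# 4GB memory and 2 CPUs per task, 20 tasks in total, at most 10 running at once.
gcloud run jobs create my-batch-job \
  --image=us-docker.pkg.dev/my-project/my-repo/my-task-image \
  --memory=4Gi \
  --cpu=2 \
  --tasks=20 \
  --parallelism=10
```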


Hello @raj_vihari,

Welcome to the Google Cloud Community!

In Google Cloud Run, each task of a job runs in its own container instance, so the configured resources are allocated per task rather than shared. If a Cloud Run job is configured with 4GB of memory and 2 CPUs, each task individually gets up to 4GB of memory and 2 CPUs.

If a task typically uses 3GB of memory and you set the parallelism to 10, you can have up to 10 tasks running at the same time, each with its own 4GB limit. The theoretical ceiling is therefore 40GB (10 tasks × 4GB each), while the actual peak is likely around 30GB (10 tasks × 3GB each), given that each task averages 3GB.
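
As a quick sanity check on that arithmetic, assuming the full parallelism of 10 is reached:

```sh
# Back-of-the-envelope peak memory with 10 concurrent tasks (bash arithmetic):
echo "hard ceiling:  $((10 * 4)) GB"   # 10 tasks x 4GB per-task limit = 40GB
echo "expected peak: $((10 * 3)) GB"   # 10 tasks x 3GB average usage = 30GB
```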

It's important to note that Cloud Run enforces the memory limit per task: no single task can exceed 4GB, and a task that tries to is terminated (out of memory) and marked as failed. This setup allows multiple tasks to use their maximum allowed resources concurrently without affecting one another.
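
If you want to confirm or adjust the per-task limit on an existing job, something like the following works (the job name `my-batch-job` is a placeholder carried over from the sketch above):

```sh
# Show the job's current per-task resource limits:
gcloud run jobs describe my-batch-job

# Raise the per-task memory limit if tasks are being OOM-killed near 4GB:
gcloud run jobs update my-batch-job --memory=8Gi
```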

For a more in-depth explanation, you can refer to this Stack Overflow post.