
Cloud Storage Egress Bandwidth limit for each bucket

Soost
New Member

Hi, I am looking to replicate part of the data from a Google Cloud Storage bucket (data size: 2 TB) to Alibaba Cloud, so the content can be reached from mainland China. We are aware that egress costs will be incurred. However, we need to know whether there is any default bandwidth limit for each Cloud Storage bucket, so we could configure a maximum bandwidth utilization on the replication tool. That would help avoid degrading end-user read access to the objects while the replication is in progress.


Hi @Soost,

Welcome to Google Cloud Community!

Google Cloud Storage does not have a bandwidth limit per bucket. Bandwidth quotas for Cloud Storage are applied on a per-project, per-region basis; refer to the table below:

Quota: Maximum bandwidth for each region or dual-region that has data egress from Cloud Storage to Google services
Value: 200 Gbps per project, per region
Notes:

Egress to Cloud CDN and Media CDN is exempt from this quota. 

Data egress from Cloud Storage dual-regions to Google services counts towards the per-project quota of one of the regions that make up the dual-region. For example, if a Compute Engine instance in us-central1 reads data from a bucket in the nam4 dual-region, the bandwidth usage is counted as part of the overall quota for the us-central1 region.

 You can request a quota increase for regions on a per-project basis. If you want a quota increase for a dual-region, your increase request should be made for one or both of the regions that make up the dual-region.
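To put the 2 TB replication in perspective against that quota, here is a back-of-the-envelope sketch (not an official tool; the bandwidth caps chosen are illustrative assumptions) of how long the transfer would take at a given throttle, and how small a fraction of the default 200 Gbps regional quota each cap consumes:

```python
# Rough estimate: time to replicate a fixed amount of data at a
# sustained bandwidth cap, and that cap's share of the default
# 200 Gbps per-project, per-region egress quota.
# The cap values below are illustrative assumptions, not recommendations.

DATA_TB = 2                        # data to replicate, in terabytes
DATA_BITS = DATA_TB * 8 * 10**12   # 2 TB expressed in bits

def replication_hours(cap_gbps: float) -> float:
    """Hours to move DATA_BITS at a sustained rate of cap_gbps."""
    return DATA_BITS / (cap_gbps * 10**9) / 3600

for cap in (1, 5, 10):
    share = cap / 200 * 100  # percentage of the 200 Gbps regional quota
    print(f"{cap:>2} Gbps cap: ~{replication_hours(cap):.1f} h, "
          f"{share:.1f}% of the 200 Gbps quota")
```

Even a modest 1 Gbps cap finishes a 2 TB copy in under five hours while using only 0.5% of the regional quota, which suggests the per-bucket bottleneck you need to worry about is end-user read latency, not the project quota itself.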

Quota: Maximum egress bandwidth for Google services accessing data from buckets in a given multi-region
Value: 50 Gbps per project, per region
Notes:

Egress to Cloud CDN and Media CDN is exempt from this quota. 

For example, say the project my-project has several Compute Engine instances in us-east1 and several Compute Engine instances in us-west1. The us-east1 instances have a combined 50 Gbps bandwidth quota when reading data from buckets in the us multi-region. The us-west1 instances have a separate 50 Gbps bandwidth quota when reading from buckets in the us multi-region. 

We strongly recommend that you use buckets located in regions or dual-regions for workloads with high egress rates to Google services. For existing multi-region buckets that run large workloads in Google services, you can use Storage Transfer Service to move your data to a region or dual-region bucket.

Except in rare cases, increase requests for multi-region bandwidth quotas are unlikely to be approved. To request a quota increase, contact Google Cloud Support.

You can find this table in Google's public documentation on Cloud Storage quotas and limits.

Hope this helps.