
Rubrik archiving to GCP bucket suddenly comes to a crawl. Showing poor throughput. Help!!

We have been using a GCP bucket to archive Rubrik backup data. It ran fine for over a year, but recently network monitoring shows the transfer rate is low and the process is crawling, which is causing a backlog of Rubrik archive jobs. Is there anything on the GCP side that could be causing this? Throttling on the GCP side? Is the bucket too large? It currently stands at 326 TiB.


The easiest explanation is that something else in the project is using bandwidth and competing for quota.   Is that a possibility?
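
One way to check that from the GCP side is to chart the bucket ingress metric (storage.googleapis.com/network/received_bytes_count) in Cloud Monitoring, either in the console's Metrics Explorer or with a small script. Below is a rough sketch, assuming the google-cloud-monitoring Python client library; the project ID is a placeholder. It lists hourly ingress rates for every bucket in the project, so you can see when the slowdown started and whether anything else is moving data at the same time.

# Rough sketch (not production code): pull per-bucket ingress throughput from
# Cloud Monitoring for the last 7 days. Assumes the google-cloud-monitoring
# client library and Application Default Credentials; project ID is a placeholder.
import time
from google.cloud import monitoring_v3

project_id = "my-project"  # placeholder

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 7 * 24 * 3600}}
)
# Align the DELTA byte-count metric to an hourly rate (bytes/sec) per bucket.
aggregation = monitoring_v3.Aggregation(
    {
        "alignment_period": {"seconds": 3600},
        "per_series_aligner": monitoring_v3.Aggregation.Aligner.ALIGN_RATE,
    }
)

series_list = client.list_time_series(
    request={
        "name": f"projects/{project_id}",
        "filter": 'metric.type = "storage.googleapis.com/network/received_bytes_count"',
        "interval": interval,
        "aggregation": aggregation,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for series in series_list:
    bucket = series.resource.labels["bucket_name"]
    for point in series.points:
        ts = point.interval.end_time
        print(f"{bucket}\t{ts}\t{point.value.double_value / 1e6:.1f} MB/s")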

No other process but the backup archive to GCP; it's 100% ingress traffic to the GCP bucket. Again, it had been working fine for over a year until recently, and there have been no changes in our network.

Do you have the option to submit a support case so we can investigate this in detail?

I'm tempted to suggest running the Cloud Storage perfdiag tests (gsutil perfdiag) described here. Maybe run those and report what you find, making sure to mask out any sensitive info (e.g. bucket names). Try also with a new bucket that you can delete afterwards. You might also try from source locations other than your own enterprise, to rule out outbound network congestion originating in your environment.
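
If gsutil perfdiag isn't convenient, a quick-and-dirty check along the same lines can be scripted with the google-cloud-storage client. This is only a minimal sketch, not the official diagnostic; the bucket name is a placeholder, and it should point at a throwaway test bucket rather than the production archive.

# Rough throughput check (no substitute for gsutil perfdiag): time a few
# uploads/downloads of a fixed-size object against a throwaway test bucket.
# Assumes the google-cloud-storage client library; bucket name is a placeholder.
import os
import time
from google.cloud import storage

TEST_BUCKET = "my-throwaway-test-bucket"  # placeholder: create/delete it separately
OBJECT_SIZE = 64 * 1024 * 1024            # 64 MiB per test object
RUNS = 5

client = storage.Client()
bucket = client.bucket(TEST_BUCKET)
payload = os.urandom(OBJECT_SIZE)

for i in range(RUNS):
    blob = bucket.blob(f"perf-test/object-{i}")

    start = time.perf_counter()
    blob.upload_from_string(payload, content_type="application/octet-stream")
    up_secs = time.perf_counter() - start

    start = time.perf_counter()
    blob.download_as_bytes()
    down_secs = time.perf_counter() - start

    mib = OBJECT_SIZE / (1024 * 1024)
    print(f"run {i}: upload {mib / up_secs:.1f} MiB/s, download {mib / down_secs:.1f} MiB/s")

    blob.delete()  # clean up the test object

Comparing the numbers from a fresh throwaway bucket against the existing 326 TiB bucket, and from a different source network against your own, should help narrow down whether the slowdown is bucket-specific, project-wide, or on the outbound path from your environment.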