
CloudSQL Storage Capacity

Has anyone been able to get the GCP team to reduce the storage capacity of a CloudSQL instance? My instance was set to automatically expand, and I ran a Vacuum Full which inflated my capacity way beyond my needs and beyond my budget. I can't get anyone from GCP to return my calls. Before I start the nightmarish task of recreating my instance, I wanted to see if anyone has been able to get GCP support to help with this.


Hi @Degree73,

Welcome to Google Cloud Community!

It's definitely concerning when your CloudSQL instance unexpectedly expands and balloons your storage costs. Here are some workarounds you can try to manage the situation and minimize the impact on your budget.

1. Reduce Data:

  • Vacuum Full: VACUUM FULL rewrites each table and temporarily needs roughly as much free space as the table itself, which is what triggered the auto-expansion. To avoid needing it at all, run a plain Vacuum or Vacuum Analyze more frequently (or tune autovacuum) so dead space never builds up to that point.
  • Data Pruning: Analyze your data and see if any historical data can be safely deleted or archived.
  • Data Compression: Investigate if you can compress your data within your database to reduce storage footprint.
  • Data Partitioning: This can help you manage your data more effectively, especially if you have large tables with data spread across different time periods.
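As a sketch of the first two points above, a plain VACUUM can be run from any client that can reach the instance; unlike VACUUM FULL, it reclaims dead tuples in place without rewriting tables or needing extra disk. The host, database, and user names below are placeholders (this requires a live instance and credentials):

```shell
# Reclaim dead tuples in place; plain VACUUM does not take an exclusive
# lock or temporarily double a table's footprint the way VACUUM FULL does.
psql "host=INSTANCE_IP dbname=mydb user=myuser" \
  -c "VACUUM (VERBOSE, ANALYZE);"

# Find the tables with the most dead tuples, to target pruning or tuning:
psql "host=INSTANCE_IP dbname=mydb user=myuser" \
  -c "SELECT relname, n_dead_tup FROM pg_stat_user_tables
      ORDER BY n_dead_tup DESC LIMIT 10;"
```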

2. Resizing the Instance: You can change the machine type (tier) in the Cloud Console to match your compute needs. Note, however, that CloudSQL storage capacity can only be increased, never decreased, so resizing alone won't lower a storage-based bill.
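For completeness, changing the machine tier can be done in place with the CLI (instance name and tier below are placeholders); keep in mind this restarts the instance and does not reduce its provisioned storage:

```shell
# Change the machine type of an existing instance (placeholder names).
# Requires an authenticated gcloud session against your project.
gcloud sql instances patch my-instance --tier=db-custom-2-7680
```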

3. Replicating Data: Export your data to a smaller instance or to a different storage service like Google Cloud Storage. Then, import the data into a new CloudSQL instance with a smaller capacity.
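The export/import path above is the supported way to end up with a smaller provisioned capacity. A hedged sketch of the workflow with gcloud; the project, bucket, instance, and database names are all placeholders, and the new instance's service account needs read access to the bucket before the import step:

```shell
# Placeholder names throughout; adjust for your project.
BUCKET=gs://my-sql-exports

# 1. Export the current database to Cloud Storage as a compressed SQL dump.
gcloud sql export sql old-instance "$BUCKET/dump.sql.gz" \
  --database=mydb

# 2. Create a new instance sized for your actual data. Storage can only
#    grow later, so start small and leave auto-increase enabled.
gcloud sql instances create new-instance \
  --database-version=POSTGRES_15 \
  --tier=db-custom-2-7680 \
  --storage-size=20GB \
  --storage-auto-increase

# 3. Import the dump into the new, smaller instance.
gcloud sql import sql new-instance "$BUCKET/dump.sql.gz" \
  --database=mydb
```

Once the import is verified, traffic can be cut over to the new instance and the old one deleted, which is what actually stops the capacity-based billing.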

4. Consider an Alternative: If your use case allows it, consider switching to Cloud Spanner, a fully managed database service that scales automatically and provides high availability.

I hope the above information is helpful.

It's interesting that you mention VACUUM FULL as an option, because that is what caused this problem in the first place. To perform the operation it used more than twice my storage and automatically expanded my storage capacity to do it. Now I have less data but a higher provisioned capacity, and since I am billed on capacity, it didn't help; it hurt in a big way.

I've tried decreasing the storage capacity in both the web console and the CLI. In both cases it tells me that storage capacity cannot be decreased. Is this something that a GCP resource could do on my behalf?