
Disk quota exceeded but cannot see where it is used

My quota is 4096 GB, and it was reached after I triggered a serverless Spark job (which failed). The quota still shows as being used, and I cannot trigger another job because of the error "Insufficient 'DISKS_TOTAL_GB' quota. Requested 1200.0, available 46.0."

I checked all the "Disks" sections under Compute Engine, Bare Metal, etc., and I don't see any such disk there.
All I have is 5 VMs using about 100 GB in total (gcloud compute disks list shows the same).

Has anyone faced this issue? Is there any way to resolve it? Please help!

Solved
1 ACCEPTED SOLUTION

I understand your predicament. It seems you've deleted all the batches, but the disk quota still hasn't been freed up. Here are a few steps you can take:

  1. Check for any active jobs or instances: Even though you've deleted all batches, there may still be active batches or instances running and consuming disk quota. You can check their status in the Cloud Console, or list them from the command line (see the command sketch after this list).

  2. Delete unused disks: If there are any unused persistent disks, delete them to free up quota. You can do this in the 'Disks' section of the Cloud Console, or with gcloud as shown below.

  3. Check for snapshots: Snapshots of your disks can also consume storage. Check for any snapshots and delete the ones you no longer need.
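
Here is a minimal command-line sketch for the three checks above. The region us-central1 and the names BATCH_ID, DISK_NAME, ZONE, and SNAPSHOT_NAME are placeholders for your own values:

    # 1. List Dataproc Serverless batches and delete any leftovers
    gcloud dataproc batches list --region=us-central1
    gcloud dataproc batches delete BATCH_ID --region=us-central1

    # 2. List persistent disks and delete ones that are no longer needed
    gcloud compute disks list
    gcloud compute disks delete DISK_NAME --zone=ZONE

    # 3. List snapshots and delete the ones you no longer need
    gcloud compute snapshots list
    gcloud compute snapshots delete SNAPSHOT_NAME

You can also see the current usage and limit of the regional DISKS_TOTAL_GB quota with "gcloud compute regions describe us-central1"; the quotas section of the output includes DISKS_TOTAL_GB.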

As for billing, you're right that increasing the quota might lead to additional costs. However, controlling the disk size with the spark.dataproc.driver.disk.size and spark.dataproc.executor.disk.size properties can help manage those costs. The minimum required might be 250 GB in total, but you can adjust these values based on your specific needs and budget.
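
As a rough sketch, those properties can be passed when submitting a Serverless batch; the region, bucket path, main class, and the 250g sizing below are illustrative placeholders, not values from your setup:

    # Submit a Spark batch with explicit driver/executor disk sizes (illustrative values)
    gcloud dataproc batches submit spark \
        --region=us-central1 \
        --jars=gs://YOUR_BUCKET/your-spark-job.jar \
        --class=com.example.YourSparkJob \
        --properties=spark.dataproc.driver.disk.size=250g,spark.dataproc.executor.disk.size=250g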

