The region us-east1 does not have enough resources available to fulfill the request. Please try again later.

Hi,

I have an application running on App Engine Flexible, and the instance has been down for more than 12 hours. I didn't make any changes.

I have tried to redeploy many times, and I get this response:

{"version": "0.0.1", "verbosity": "ERROR", "timestamp": "2022-11-27T18:57:27.406Z", "message": "(gcloud.app.deploy) Error Response: [9] An internal error occurred while processing task /app-engine-flex/flex_await_healthy/flex_await_healthy>2022-11-27T18:57:15.897Z19870.vt.2: The region us-east1 does not have enough resources available to fulfill the request. Please try again later."}

I searched a lot on the internet, but all the information I found about this error says to wait or to change the region. You can't change the region of an App Engine Flexible app. I could start a new project, but I would have to reconfigure a lot of things.

Does anybody know what's happening?

It's sad that on our basic plan we can't even get technical support, since the problem was caused by Google, not by us.


I am having a similar issue in us-central1. We need some help from Google.

We get a similar error when deploying our applications. Hope this gets resolved as soon as possible.

The region us-east1 does not have enough resources available to fulfill the request.

We're having the same issue in us-central1.

Hi,

Five minutes ago I created a simple App Engine Flex Django app in region us-east1 without any issues. Something is definitely happening, or was happening, because during the weekend the US-CENTRAL1 region wasn't available for GCE.

best,
DamianS

I am still failing to restart the instance (11 PM Pacific on 11/27/22).

Same behaviour for more than 12 hours.

I'm having the same issue on us-central1.

More than 30 hours and no response from Google.

Been having the same issue for the last 3 days.

I am getting the same error trying to restart a VM instance on Compute Engine.

Failed to start instance-1: A e2-micro VM instance is currently unavailable in the us-east1-d zone. Alternatively, you can try your request again with a different VM hardware configuration or at a later time. For more information, see the troubleshooting documentation.
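If anyone wants to try the "different VM hardware configuration" suggestion from that message, it translates to roughly this (a sketch only; the instance name, zone, and e2-small target are placeholders, and the instance has to be stopped before its machine type can be changed):

# Sketch: switch the stopped VM to another machine type, then try starting it again.
# instance-1, us-east1-d, and e2-small are placeholders for your own values.
gcloud compute instances set-machine-type instance-1 \
    --zone=us-east1-d --machine-type=e2-small
gcloud compute instances start instance-1 --zone=us-east1-d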

Is this issue due to the increased demand caused by Black Friday/Cyber Monday? Google should plan better, having more resources available and employees on hand to monitor this during the weekend. Or, at least, give us some feedback or an expected time to resolution.

All these people and companies (including me) are taking heavy losses because of these days that Google Cloud has been out.

I chose Google Cloud because I thought Google had a good reputation and was a reliable company.

I'm very disappointed.

We've been having the same issue since Saturday.

Same here at us-east4. I wasted the available time I had researching and dealing with this issue rather than coding. Nothing you can do about it either, even after putting in a support ticket. I looked into alternatives late last night, and I'm leaning toward Heroku. I can't have days where I can't deploy; it's what you do when you are a startup. It appears this whole business is about selling you the leftover compute rather than a dedicated allocation of compute. I tried reserving some compute; no go there either. I've been happy for 2+ years with GAE Flex, but I think it's time to fire App Engine. This is not acceptable.

Same here, one week down. It only happens with the Flex environment. The services that are already up are working without problems, but updates and new services cannot be deployed because of that error. We use Flex to deploy socket-based services. I tried adding roles to the service account, without luck.
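For reference, the role-granting attempt was roughly the following (project ID, service account, and role are placeholders/examples; as noted, it did not help with the capacity error):

# Sketch: grant a role to the App Engine default service account (placeholders throughout).
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:PROJECT_ID@appspot.gserviceaccount.com" \
    --role="roles/compute.instanceAdmin.v1"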

Same issue here, trying to start my self-configured Kubernetes cluster of 4x e2-medium VMs (2 shared-core vCPUs, 4 GB RAM, 20 GB standard persistent disk each) located in us-east1-b, which is failing with the error "A e2-medium VM instance is currently unavailable in the us-east1-b zone. Alternatively, you can try your request again with a different VM hardware configuration or at a later time."

This is not a GKE managed cluster.

This is the first time I've had this error in over a year and a half of running my Kubernetes cluster, but the concern is that this seems to be ongoing in us-east1-b...

I don't want to go through the schlep of creating machine images and recreating the whole 4-VM cluster setup in a new zone, including the load balancer and routing, which was all done manually and may not even work, possibly failing with the same out-of-resource error! Also, the pricing will be prohibitive: my 4x 20 GB disks reside in us-east1-b and will also need to be copied/recreated in the new zone, with the associated network data transfer costs.
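For what it's worth, the per-disk copy itself would look roughly like this (a sketch only, with placeholder disk, snapshot, and zone names; it doesn't remove the manual load balancer/routing work or the snapshot and transfer costs):

# Sketch: copy one 20 GB disk from us-east1-b to another zone via a snapshot.
# Disk, snapshot, and target-zone names are placeholders.
gcloud compute disks snapshot node1-disk \
    --zone=us-east1-b --snapshot-names=node1-disk-snap
gcloud compute disks create node1-disk-copy \
    --zone=us-east1-c --source-snapshot=node1-disk-snap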

I'm running this as a test/learning environment, so dollars are the bottom line, as it's paid out of my own pocket.

I'm not based in the US, so I'll try again in the morning when hopefully the US zones are quieter, but I have my doubts after reading others' similar ongoing issues.

The cynic in me says it's unlikely to be a Black Friday special-discount issue with additional resources created (I certainly wasn't offered any for my 18 months of loyalty :)).

More likely it's Google wanting/nudging/trying to coerce us into moving to more expensive hardware or zones. Happy for someone to prove me wrong or offer a reasonable explanation.

That would be very weird, since they didn't build a way to migrate easily to another region. I would do it if I could migrate my App Engine app, but you can't change the region, nor delete an app and create it again. Google is handling this badly. Creating another project and migrating everything is too much work; if I have to do that, I'd rather do it on AWS. I'm tired! Two days with the same problem, and nobody from Google has said anything...

I've had this same issue since Friday. It seems to be recovering. Can anyone else report overprovisioning on the autoscaler now?

Yes, I think so. We have an app configured with a minimum of 1 instance and a maximum of 4, and right now, with no requests going to it, there are 2 instances running.
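For context, those limits come from the automatic_scaling block in the Flex app.yaml; a simplified sketch with example values (not our exact file), plus the command to see what's actually serving:

# Sketch: example automatic_scaling block for an App Engine Flex app.yaml.
cat >> app.yaml <<'EOF'
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 4
EOF
# List the instances that are currently running for the deployed versions.
gcloud app instances list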

Three days and no response from Google. No public notice about it, no support, still not working...

I think it's time to go to court to get our losses back.

Still having this issue. This is getting out of hand; it's been 5 days.

If you have the option, reduce your instance class. We've had some success with this.
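If it helps, for the Flex environment "instance class" effectively means the resources block in app.yaml (Standard apps use instance_class instead); a rough sketch with example values:

# Sketch: request a smaller Flex machine shape (example values), then redeploy.
# Standard environment apps would lower instance_class in app.yaml instead.
cat >> app.yaml <<'EOF'
resources:
  cpu: 1
  memory_gb: 1
EOF
gcloud app deploy app.yaml --quiet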

Is it time to move to AWS? This is beyond ridiculous!

Finally, my application is running again after trying to deploy many times over the last 3 days.