
unconditional drop overload error on load balancers deployed through Kubernetes Gateway API

Hi, around the time of the GCP outage, at 2025-06-12 13:13:40 EDT, I started having issues with multiple load balancer rules. When accessing URLs that resolve to a load balancer created from Gateway API resources and pointing to specific backends, I only get an "unconditional drop overload" error as the response.
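
For context, the Gateway API resources behind the affected URLs look roughly like this; the names, namespace, hostname, and ports below are simplified placeholders rather than our actual manifests:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
  namespace: example-ns
spec:
  parentRefs:
    - name: example-gateway        # existing Gateway in the cluster
  hostnames:
    - "app.example.com"
  rules:
    - backendRefs:
        - name: example-service    # ClusterIP Service below
          port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: example-service
  namespace: example-ns
spec:
  type: ClusterIP
  selector:
    app: example-app               # pods of the healthy Deployment
  ports:
    - port: 80
      targetPort: 8080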

The backends are created by Gateway API HTTPRoutes (like the ones sketched above) and point to ClusterIP Kubernetes Services, which in turn select a healthy Kubernetes Deployment. I know the Deployment is healthy because it has enough resources to operate and its pods can be reached and respond to in-cluster requests. I also know that requests to the load balancer IPs end at the load balancer itself, because these entries show up in the load balancer logs whenever I query the URLs that route to the faulty backends:

 

 

jsonPayload: {
  @type: "type.googleapis.com/google.cloud.loadbalancing.type.LoadBalancerLogEntry"
  backendTargetProjectNumber: "projects/xxxxxxxxxxxx"
  cacheDecision: [2]
  remoteIp: "x.x.x.x"
  statusDetails: "failed_to_connect_to_backend"
}
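
For reference, a Logs Explorer filter along these lines should surface the affected entries; the resource type assumes an external HTTP(S) load balancer:

resource.type="http_load_balancer"
jsonPayload.statusDetails="failed_to_connect_to_backend"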

 



After looking at the load balancer backend service availability, I noticed that none of the backends were actually available (0 of 0 in all backends). I'm confused, though, because there is no documentation on Google's side for the "unconditional drop overload" error; after further inspection it looks like an error coming from Envoy Proxy, probably on the GCP side.
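
For reference, per-backend health can also be checked directly with something like the command below; the backend service name here is just a placeholder, since the real ones are auto-generated by the Gateway controller, and the service may be regional rather than global depending on the Gateway class:

gcloud compute backend-services get-health example-backend-service --global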

 


The backends have been slowly regenerating in the hours since the issue started, but I am still hitting the same problem, even when attaching new HTTPRoutes and Services for these kinds of workloads to our Gateways.
 
I'd be really grateful to know whether this is an issue Google Cloud acknowledges, whether there's a workaround, or even whether someone else is experiencing this issue.
 
Thanks in advance.