I'm experiencing a persistent 503 Service Unavailable error when accessing the service through the load balancer:
```
curl http://<domain> -v
< HTTP/1.1 503 Service Unavailable
< Content-Length: 13
< content-type: text/plain
< via: 1.1 <provider>
< date: <timestamp>
drop overload
```
My current configuration:
```yaml
# HTTPRoute
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: <route-name>
  namespace: <namespace>
spec:
  parentRefs:
  - name: <gateway-name>
  hostnames:
  - "<domain>"
  rules:
  - backendRefs:
    - name: <service-name>
      port: <port>
---
# Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: <gateway-name>
spec:
  gatewayClassName: <gateway-class>
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    allowedRoutes:
      namespaces:
        from: All
    tls:
      mode: Terminate
      certificateRefs:
      - name: <cert-name>
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
```
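For context, here is roughly how I checked whether the Gateway and HTTPRoute were accepted by the controller (resource and namespace names are placeholders):
```
# Gateway status conditions (Accepted/Programmed) and which routes attached
kubectl describe gateway <gateway-name> -n <namespace>

# HTTPRoute status; the Accepted/ResolvedRefs conditions appear under status.parents
kubectl get httproute <route-name> -n <namespace> -o yaml
```
If either condition reports False, the 503 usually points at the route binding rather than the backends.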
Questions:
1. Why am I receiving a 503 Service Unavailable error?
2. What could be causing the "drop overload" message?
3. Are there any misconfigurations in my Gateway or HTTPRoute setup?
I've already tried:
- Verifying the service is running
- Checking network connectivity
- Confirming port configurations
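In case it's useful, here is a sketch of how I verified the backend side (the backend-service name is a placeholder; GKE generates it automatically for Gateway backends):
```
# Confirm the Service has ready endpoints behind it
kubectl get endpointslices -n <namespace> -l kubernetes.io/service-name=<service-name>

# List the auto-created backend services, then check the health
# the load balancer actually sees
gcloud compute backend-services list
gcloud compute backend-services get-health <backend-service-name> --global
```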
Environment:
- Kubernetes: 1.31
- Gateway API: v1beta1
- Trying to access via HTTP and HTTPS
Any guidance would be greatly appreciated!
Hi @rsrockbot,
Welcome to Google Cloud Community!
When using Terraform to update the load balancer's URL map to add a new host and path rule, requests to an existing host and path rule can fail with errors such as config_not_found and internal_error. This happens when Terraform occasionally performs a destructive update: it removes the existing host and path rules from the URL map, then creates the new rules and re-creates the old ones.
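As a quick check (a sketch only; the resource address is a placeholder), a targeted plan shows whether Terraform intends a destructive replace of the URL map:
```
# A "-/+ destroy and then create replacement" line on the URL map means
# existing host and path rules will briefly disappear during apply
terraform plan -target=google_compute_url_map.<name>
```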
If you have any questions or need further assistance with specific configurations, please reach out to our Google Cloud Support team.
Was this helpful? If so, please accept this answer as “Solution”. If you need additional assistance, reply here within 2 business days and I’ll be happy to help.
I believe the issue is that our cluster is routes-based rather than VPC-native (the GKE Gateway controller relies on NEG backends, which require a VPC-native cluster). I am going to rebuild our cluster as VPC-native and retest.
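For anyone else hitting this, a quick way to confirm whether a cluster is VPC-native (cluster name and location are placeholders):
```
# Prints "True" for VPC-native (alias IP) clusters; empty or "False" for routes-based
gcloud container clusters describe <cluster-name> \
  --region <region> \
  --format="value(ipAllocationPolicy.useIpAliases)"
```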