
Internal passthrough Network Load Balancer doesn't pass health checks after scaling a GKE node down and up

Hello,

I have an issue with the Internal passthrough Network Load Balancer:

I use GKE (a single worker node) and expose applications via an internal load balancer by creating a Service of type LoadBalancer. Everything works after the load balancer has been created, but when I scale the cluster down to 0 nodes and then back up to 1 node, the load balancer no longer passes its health check.
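For reference, this is roughly how I scale the node pool down and back up (the cluster name, node pool name, and zone below are placeholders for my actual values):

    # Scale the only node pool down to 0 nodes, then back up to 1
    gcloud container clusters resize my-cluster \
        --node-pool default-pool \
        --num-nodes 0 \
        --zone asia-southeast1-a

    gcloud container clusters resize my-cluster \
        --node-pool default-pool \
        --num-nodes 1 \
        --zone asia-southeast1-a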

In the GCP Console I saw that the load balancer had lost its backend. I enabled health check logging and saw no health check logs after scaling the cluster back up. As a temporary workaround, I recreate the load balancer or change its name in GKE; after that the health check succeeds and the load balancer works again.

Has anyone run into this case? Please help me handle this issue. Thank you!

Solved
1 ACCEPTED SOLUTION

Hi @LongNguyen,

Welcome to Google Cloud Community!

Enabling GKE subsetting can help maintain the association between the Load Balancer and its backends, even when scaling the cluster down to zero nodes. GKE subsetting ensures that the load balancer's backend service is aware of the backend Pods, facilitating consistent traffic routing as nodes are scaled down and up.

  • Modify your GKE cluster to enable subsetting by updating the cluster's configuration.
  • This can typically be done with the gcloud command-line tool (see the example below) or through the Google Cloud Console.
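As a rough sketch, the update looks like this with the gcloud CLI (CLUSTER_NAME and REGION are placeholders; use --zone instead of --region for a zonal cluster):

    # Enable GKE subsetting for internal L4 load balancers on an existing cluster
    gcloud container clusters update CLUSTER_NAME \
        --enable-l4-ilb-subsetting \
        --region REGION

Note that, per the GKE documentation, subsetting cannot be disabled once it has been enabled, so it is worth confirming this fits your setup first.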

Reference:

GKE Networking Known Issues

By enabling GKE subsetting, you can mitigate the issue of the load balancer losing its backend association during scaling operations.

Was this helpful? If so, please accept this answer as “Solution”. If you need additional assistance, reply here within 2 business days and I’ll be happy to help.

 


3 REPLIES

I have 2 clusters with identical LB configurations. With the 2-node cluster, the LB still works after scaling down and up, but with the 1-node cluster the LB loses its backend as described above.


Hello @diannemcm , 

I appreciate your help; however, I have not tried the above solution. I am now scaling the cluster to more than 1 node and have not had the problem.

Thanks again. If I try the above solution in the future and it works, I will report back.
