Nmap scan of the GKE node:
nmap -T4 172.30.X.1
Starting Nmap 7.92 ( URL Removed by Staff ) at 2025-03-08 12:51 IST
Nmap scan report for gke-pa-private-gke-u-gke-uat-n2d-std--89b06e39-zctf.asia-south1-a.c.myproject.xyz.internal (172.30.X.1)
Host is up (0.00031s latency).
Not shown: 991 closed tcp ports (conn-refused)
PORT STATE SERVICE
22/tcp open ssh
987/tcp open unknown
2020/tcp open xinupageserver
2021/tcp open servexec
7001/tcp open afs3-callback
7002/tcp open afs3-prserver
12000/tcp open cce4x
31038/tcp filtered unknown
31337/tcp filtered Elite
Nmap scan of the GKE LoadBalancer:
nmap -T4 172.30.X.2
Starting Nmap 7.92 ( URL Removed by Staff ) at 2025-03-08 12:52 IST
Nmap scan report for 172.30.X.2
Host is up (0.00058s latency).
Not shown: 992 closed tcp ports (conn-refused)
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
987/tcp open unknown
2020/tcp open xinupageserver
2021/tcp open servexec
7001/tcp open afs3-callback
7002/tcp open afs3-prserver
12000/tcp open cce4x
There are a few options I can think of.
1. Break your service into multiple LoadBalancer services, each exposing 5 or fewer ports. This avoids GKE triggering the “all ports” fallback mode.
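For option 1, a minimal sketch of what the split might look like. The Service names, labels, and port numbers below are hypothetical placeholders; the point is that each LoadBalancer Service declares 5 or fewer ports, so the load balancer keeps a fixed port list instead of falling back to "all ports":

```yaml
# Hypothetical example: one multi-port workload split across two
# LoadBalancer Services, each exposing at most 5 ports.
apiVersion: v1
kind: Service
metadata:
  name: myapp-lb-a          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: myapp              # hypothetical label; both Services can share it
  ports:
    - name: port-2020
      port: 2020
      targetPort: 2020
    - name: port-2021
      port: 2021
      targetPort: 2021
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-lb-b          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - name: port-7001
      port: 7001
      targetPort: 7001
    - name: port-7002
      port: 7002
      targetPort: 7002
```

Note that each Service gets its own external IP, so clients need to know which address serves which ports.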
2. If you control the traffic (HTTP/S or even gRPC), use an Ingress or Gateway with a limited port exposure (like port 80/443), and route internally to services on different ports.
Pros:
Only ports 80/443 exposed publicly.
Services behind Ingress aren't directly exposed.
Cons:
Only viable for Layer 7 traffic.
Ingress config may become complex with non-HTTP protocols.
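For option 2, a rough Ingress sketch. The Ingress name, paths, and backend Service names are hypothetical; the idea is that only 80/443 are reachable at the edge, while the Ingress routes internally to ClusterIP Services listening on the higher ports:

```yaml
# Hypothetical Ingress: the edge exposes only HTTP(S); paths route
# to internal ClusterIP Services on different ports.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress        # hypothetical name
spec:
  rules:
    - http:
        paths:
          - path: /svc-a
            pathType: Prefix
            backend:
              service:
                name: svc-a  # hypothetical ClusterIP Service on 7001
                port:
                  number: 7001
          - path: /svc-b
            pathType: Prefix
            backend:
              service:
                name: svc-b  # hypothetical ClusterIP Service on 7002
                port:
                  number: 7002
```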
3. Even if "all ports" are exposed, you can restrict access using:
Kubernetes Network Policies
VPC Firewall rules
This doesn't prevent the NLB from exposing ports, but it blocks access to those ports based on IP or other rules.
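For option 3, a hedged NetworkPolicy sketch (pod label, policy name, CIDR, and port are all hypothetical). It drops all ingress to the selected pods except traffic from a trusted range to one port, which limits who can actually reach the ports the NLB leaves open:

```yaml
# Hypothetical NetworkPolicy: allow ingress to the app pods only
# from a trusted CIDR on one port; all other ingress to those pods
# is denied once they are selected by a policy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-myapp       # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: myapp             # hypothetical label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/8 # hypothetical trusted source range
      ports:
        - protocol: TCP
          port: 7001
```

Remember NetworkPolicies only take effect if your cluster has policy enforcement enabled (e.g. Dataplane V2 or Calico on GKE); VPC firewall rules apply at the network layer regardless.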