
Can't access Kubernetes service using curl

Hi everyone!

I have created a private GKE Autopilot cluster and deployed Prometheus to it. The Prometheus service is in the metrics namespace.

The problem is that I can access this service with the kubectl port-forward command, but I can't access it with the following setup:

  1. execute kubectl proxy
  2. execute curl http://localhost:8001/api/v1/namespaces/metrics/services/prometheus:80/proxy
    Getting the response:
    {
    "kind": "Status",
    "apiVersion": "v1",
    "metadata": {},
    "status": "Failure",
    "message": "error trying to reach service: dial tcp 10.115.128.78:9090: i/o timeout",
    "reason": "ServiceUnavailable",
    "code": 503
    }
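For comparison, the port-forward path that does work looks roughly like this (service and namespace names are taken from above; the local port 9090 and the /-/healthy endpoint are my assumptions, not from the original post):

```shell
# Forward a local port to port 80 of the prometheus service in the
# metrics namespace. The local port (9090 here) is an arbitrary choice.
kubectl -n metrics port-forward svc/prometheus 9090:80 &

# In another shell, the service should now answer locally, e.g.:
curl http://localhost:9090/-/healthy
```

Unlike kubectl proxy, this tunnels through the API server and kubelet rather than having the control plane dial the pod directly.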

But the thing is that:

  1. I can access other Kubernetes API endpoints (services, nodes, pods, etc.)
  2. On a public GKE Autopilot cluster I can access this Prometheus endpoint

So it seems that the GKE API blocks part of the requests. This is most probably expected, but is there any way to overcome this restriction (maybe some GKE configuration needs to be changed)?

Solved
1 ACCEPTED SOLUTION

Hello nekwar,

This is the intended behavior. We only allow ports 443 (https) and 10250 (kubelet) by default for master-to-node traffic. Customers can add additional firewall rules if needed:

https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules

It's a more secure default, and this way customers at least have the workaround of adding additional rules. If we opened up all TCP ports for master-to-node communication and customers didn't want this, they would have to modify the firewall rules GKE created or add deny rules at a higher priority, which gets complicated.
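The workaround from the linked docs can be sketched as a single firewall rule. This is an illustrative sketch only: the rule name, network name, control-plane CIDR, and node tag below are placeholders you must replace with your cluster's actual values (the linked page explains how to look them up); port 9090 matches the Prometheus target port from the error above.

```shell
# Allow the GKE control plane (master) range to reach nodes on port 9090.
# All names and the CIDR are placeholders -- substitute your own:
#   my-cluster-network  : the cluster's VPC network
#   172.16.0.0/28       : the cluster's control-plane (master) CIDR
#   my-cluster-node-tag : the network tag on the cluster's nodes
gcloud compute firewall-rules create allow-master-to-prometheus \
    --network my-cluster-network \
    --source-ranges 172.16.0.0/28 \
    --target-tags my-cluster-node-tag \
    --allow tcp:9090
```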

If you need to send traffic to a specific pod (rather than a K8S service), I'd recommend using "kubectl port-forward" rather than "kubectl proxy". The former routes traffic through the kubelet, which works, while the latter routes traffic directly from the master to the pod, which doesn't work unless the port is 443.


2 REPLIES


@dionv Thank you very much for your explanation!
