I have a use case in GCP to create a GKE Autopilot cluster with a private endpoint. I was able to deploy it through Terraform with dedicated IP ranges for GKE without any issue. Now I'm trying to access the nodes of this private GKE cluster via `kubectl`. I believe we need a Linux VM instance in the same project where this private GKE Autopilot cluster exists, so I created a Linux VM with a different assigned IP range (shared VPC network/subnetwork).
Issue: I'm unable to download the `kubectl` package on this Linux VM instance to access the GKE cluster.
(https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl)
Do I have to create this Linux VM instance in the same subnetwork that the GKE Autopilot cluster uses for its nodes?
Any suggestions/references to overcome this issue would be great.
What error are you seeing? Are you having trouble connecting or actually downloading `kubectl`? Which command(s) are you running?
@garisingh The command I'm running to download the kubectl package is `sudo yum install kubectl`.
The timeout error below appeared after a few minutes:
Hmm ... are the subnets used by your VM private (e.g. RFC 1918)?
Do you have Private Google Access enabled on the subnets?
Correct. Private Google Access is set to true on the subnets.
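For reference, this is roughly how I checked it (the subnet and region names are placeholders for my actual ones):

```
# check whether Private Google Access is enabled on the VM's subnet
gcloud compute networks subnets describe my-subnet \
    --region us-central1 \
    --format="value(privateIpGoogleAccess)"
```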
Private Google Access is only for accessing Google APIs and services. @Dg03cloud is trying to install this via `yum`, which reaches out to external RHEL package repositories. For your private VM/subnet, do you have a Cloud Router/NAT gateway? i.e. can you reach the Internet from it?
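A quick way to check from the VM itself would be something like this (the URL is just an example of an external endpoint):

```
# if this also times out, the VM has no path to the Internet (no NAT and no external IP)
curl -sI --max-time 10 https://packages.cloud.google.com
```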
@glen_yu Yes, Cloud NAT is enabled on the shared VPC (I've put how I confirmed that below). Does the Linux VM instance have to be in the same subnetwork as the GKE cluster's subnetwork?
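This is roughly how I confirmed the NAT setup (the router/NAT/region names are placeholders for my actual ones):

```
# list the NAT gateways configured on the Cloud Router
gcloud compute routers nats list --router=my-router --region=us-central1

# check which subnet ranges the NAT applies to
gcloud compute routers nats describe my-nat \
    --router=my-router --region=us-central1 \
    --format="value(sourceSubnetworkIpRangesToNat)"
```

If that last value is LIST_OF_SUBNETWORKS, the VM's subnet has to be in that list for outbound traffic like `yum` to get out.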
No, it doesn't work like that. When you create a private GKE cluster, there are a couple of things you have to do:
1. specify a CIDR range for the control plane (in a Standard cluster this needs to be a /28 range, but it might be different in Autopilot, so consult the docs). But since you already created the cluster, you can just look at its settings and see what you set it to. NOTE: THIS SUBNET/CIDR/NETWORK IS GOOGLE-MANAGED
2. you need to specify a control-plane authorized network. These are the CIDRs that are authorized to connect to the control-plane's network (from step 1 above). NOTE: THIS SHOULD BE A CIDR FROM YOUR VPC/SUBNET
3. this is not an actual step that you do per se, but Google then effectively peers the networks from step 1 and step 2, allowing communication between them
4. build your VM in one of the CIDRs you declared in step 2, and from there you should be able to connect to the cluster (assuming you have kubectl installed, etc.); see the gcloud sketch below
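Here's a rough sketch of what that looks like with gcloud once the VM is in an authorized range (the cluster name, region, and CIDR are placeholders, and I'm assuming a regional Autopilot cluster):

```
# step 1: look up the Google-managed control-plane CIDR
gcloud container clusters describe my-autopilot-cluster --region us-central1 \
    --format="value(privateClusterConfig.masterIpv4CidrBlock)"

# step 2: list the currently authorized networks
gcloud container clusters describe my-autopilot-cluster --region us-central1 \
    --format="yaml(masterAuthorizedNetworksConfig)"

# add your VM's subnet CIDR as an authorized network if it isn't there yet
# (note: this replaces the existing list, so include any CIDRs you want to keep)
gcloud container clusters update my-autopilot-cluster --region us-central1 \
    --enable-master-authorized-networks \
    --master-authorized-networks 10.10.0.0/24

# step 4: from the VM, fetch credentials against the private endpoint and test access
gcloud container clusters get-credentials my-autopilot-cluster --region us-central1 --internal-ip
kubectl get nodes
```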