
Unable to enter a pod in the gke cluster

Hi there!

We have our k8s cluster set up with our app, including a Neo4j DB deployment and other artifacts. Overnight, we started facing an issue in our GKE cluster when trying to exec into, or otherwise interact with, any pod running in the cluster. A sample of the error we get is shown below.


error: unable to upgrade connection: Authorization error (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)
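
For example, a plain exec into any pod (the pod name here is just an example) reproduces it:

kubectl exec -it my-app-pod -- /bin/bash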

Our GKE cluster was created as Standard (not Autopilot). The node pool and cluster versions are shown in the attached screenshots:

[screenshot: node pool version]

[screenshot: cluster version]

As mentioned, everything was working fine despite the warning about the versions, but we haven't yet been able to identify what changed between the last time it worked and now.

Any clue about what authorization setup might have changed and made it incompatible now is very welcome.

Thank you so much for your attention and participation.

Diego


Hello diego-martinez,

I saw a similar post on Stack Overflow about the same error: "error: unable to upgrade connection: Authorization error (user=kube-apiserver, verb=create, resource=nodes, subresource=proxy)".

According to Harsh Manvar, it is not a kubectl issue but an authorization issue. It could be failing at the kubelet level: the kubelet might be configured to authenticate/authorize all requests, while the API server is not providing the credentials the kubelet expects.

You can first verify your own access:

kubectl auth can-i create pods/exec

yes

kubectl auth can-i get pods/exec

yes
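
Since the denied user in the message is kube-apiserver, you can also check whether that identity has the nodes/proxy permission the kubelet is asking for. This needs your own account to be allowed to impersonate users, so treat it as an optional diagnostic:

kubectl auth can-i create nodes/proxy --as=kube-apiserver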

By default, the system:kubelet-api-admin cluster role defines the permissions required to access the kubelet API. You can grant that permission to your API server's kubelet client user with:

kubectl create clusterrolebinding apiserver-kubelet-api-admin --clusterrole system:kubelet-api-admin --user kubernetes
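
Note that the user in the binding has to match the identity the API server presents to the kubelet. In your error it shows up as kube-apiserver, so if the binding above does not help, a variant like the following (the binding name is just an example) might be worth trying:

kubectl create clusterrolebinding apiserver-kubelet-api-admin-gke --clusterrole=system:kubelet-api-admin --user=kube-apiserver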

The API server uses the --kubelet-client-certificate and --kubelet-client-key flags to authenticate to the kubelet.

You can read more about it at https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authentication
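
For reference, on clusters where the kubelet is set up the way that page describes, the relevant part of the KubeletConfiguration usually looks roughly like this (the CA path is illustrative and differs per cluster):

# KubeletConfiguration fragment (illustrative)
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt  # CA used to verify the API server's client certificate
  webhook:
    enabled: true
authorization:
  mode: Webhook  # kubelet delegates authorization checks to the API server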

 

Thanks for your quick response, @dionv!

We already have a ClusterRoleBinding object that seems similar to the one you mentioned:

[screenshot: existing ClusterRoleBinding]

Even so, we created a new one as suggested, but we still have the same issues trying to interact with the cluster.

The docs say we must ensure the cluster uses the --kubelet-client-certificate and --kubelet-client-key flags for auth, but I'm not entirely sure how to check this on our GKE cluster.
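
The closest thing I can think of is SSHing into one of the nodes and looking at the kubelet config file directly; I'm not sure the path below is right for GKE, so this is just a guess:

gcloud compute ssh NODE_NAME --zone=ZONE
sudo cat /home/kubernetes/kubelet-config.yaml   # check the authentication/authorization sections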

I also found this troubleshooting point in the docs, so we'll explore that path a bit more.

Thanks!

 

Did you solve the issue? I'm experiencing the same problem.
