I am having a problem provisioning a GKE node pool in a multi-zonal setup (asia-southeast2-a, -b, -c). Three instances are created (one per zone), but only the asia-southeast2-a instance is registered as a node; the asia-southeast2-b and asia-southeast2-c instances are not.
GKE version is
v1.29.8-gke.1096000
What I have tried: running the Node Registration Checker on one of the problematic instances, which completed successfully:
0m0.001s [ 440.775499] node-registration-checker.sh[2020]: Thu Oct 17 10:05:51 UTC 2024 - ** Here is a summary of the checks performed: **
[ 440.775656] node-registration-checker.sh[4561]: ------------------------------
[ 440.775723] node-registration-checker.sh[2020]: Service    DNS     Reachable
[ 440.775877] node-registration-checker.sh[4563]: ------------------------------
[ 440.775928] node-registration-checker.sh[2020]: LOGGING    true    true
[ 440.775959] node-registration-checker.sh[2020]: GCS        true    true
[ 440.775994] node-registration-checker.sh[2020]: GCR        true    true
[ 440.776094] node-registration-checker.sh[2020]: Master     N/A     true
[ 440.776210] node-registration-checker.sh[4565]: ------------------------------
[ 440.776252] node-registration-checker.sh[2020]: Master Healthz    Version
[ 440.776369] node-registration-checker.sh[4567]: ------------------------------
[ 440.776409] node-registration-checker.sh[2020]: ok                v1.29.8-gke.1096000
[ 440.776512] node-registration-checker.sh[4569]: ------------------------------
[ 440.776557] node-registration-checker.sh[2020]: Service Account: 74465351471-compute@developer.gserviceaccount.com - enabled: true
[ 440.776592] node-registration-checker.sh[2020]: Kubelet logs available: Yes(see above)
[ 440.776624] node-registration-checker.sh[2020]: Thu Oct 17 10:05:51 UTC 2024 - ** Completed running Node Registration Checker **
How should I resolve this issue?
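For anyone hitting the same symptom, the mismatch between created instances and registered nodes can be confirmed with something like the following (CLUSTER and POOL are placeholder names; adjust the zone to your cluster's location):

```shell
# Nodes that actually registered with the control plane
# (in my case, only the asia-southeast2-a instance shows up)
kubectl get nodes -o wide

# Compute Engine instances the node pool actually created,
# to compare against the node list above
gcloud compute instances list \
  --filter="name~gke-CLUSTER-POOL" \
  --format="table(name,zone,status)"
```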
I fixed it by upgrading the Kubernetes version to v1.30, and now the instances register as nodes normally.
For context: I had been experimenting with zonal deployment, consolidating all deployments into zone a. It seems that because the old 1.29 version was being deprecated, once all nodes and instances had vacated zones b and c, GKE could no longer register new instances in zone b or zone c as nodes.
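The upgrade can be done from the CLI roughly as follows (CLUSTER is a placeholder; I show --zone for a zonal cluster with multiple node locations, use --region for a regional cluster, and pick a concrete 1.30 patch version from the ones your location actually offers):

```shell
# See which 1.30 versions are available in this location
gcloud container get-server-config \
  --zone=asia-southeast2-a \
  --format="yaml(validMasterVersions)"

# Upgrade the control plane first
gcloud container clusters upgrade CLUSTER \
  --master \
  --cluster-version=1.30 \
  --zone=asia-southeast2-a

# Then upgrade the node pool; new instances should now register in all zones
gcloud container clusters upgrade CLUSTER \
  --node-pool=POOL \
  --cluster-version=1.30 \
  --zone=asia-southeast2-a
```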