
Credentials for GKE deployed image app?

Hi - total newbie here.... I have a Python app that runs in Docker locally if I set the environment to point to the local file path where the Google Cloud service credentials file is held.
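For context, the local setup amounts to something like this (a sketch; the key path and image name are examples, not my actual values): mount the service-account key into the container and point GOOGLE_APPLICATION_CREDENTIALS at it so the client libraries can find it.

```shell
# Mount the key file read-only and tell the Google client libraries where it is
docker run \
  -v "$HOME/keys/sa-key.json:/secrets/sa-key.json:ro" \
  -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/sa-key.json \
  my-python-app
```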

I have deployed a GKE Autopilot container using the Docker image pushed into Artifact Registry, but the workload falls over. I can't tell why, but I haven't done anything specifically about the credentials, so I'm sure that doesn't help. I have a Compute Engine service account and I guess I was hoping that this would be associated with the GKE cluster and 'just work'.... I'd really appreciate a beginner's guide to setting up GKE so that it can see the cloud service credentials for the project I'm running.

Thanks for any advice!

Solved
1 ACCEPTED SOLUTION

The way that ChatGPT diagnosed this: when I ran kubectl logs <pod_name> in the CLI, it said "exec format error". I have now rebuilt the image by uploading to Cloud Build, and the workload has deployed successfully - now to move on to solving the various role/credentials issues 😉


7 REPLIES

Glad to see you trying out Autopilot. In order for workloads to access other Google services, Autopilot uses the concept of Workload Identity. The basic steps are:

- Create a Kubernetes Service Account for your workload
- Create a Google Service Account which has the roles/permissions needed to access the desired service(s). For testing, you should be able to use the default Compute Engine account, but for a production environment it's a best practice to create a service account with the minimal permissions needed

- Bind the KSA to the GSA

You should be able to follow this guide
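The steps above can be sketched roughly as follows (names like my-ksa, my-gsa, and my-project are placeholders; this assumes Workload Identity is enabled on the cluster, which it is by default on Autopilot):

```shell
# 1. Create a Kubernetes Service Account in your namespace
kubectl create serviceaccount my-ksa --namespace default

# 2. Create a Google Service Account (skip if reusing an existing one)
gcloud iam service-accounts create my-gsa --project=my-project

# 3. Allow the KSA to impersonate the GSA (the "binding" step)
gcloud iam service-accounts add-iam-policy-binding \
  my-gsa@my-project.iam.gserviceaccount.com \
  --role=roles/iam.workloadIdentityUser \
  --member="serviceAccount:my-project.svc.id.goog[default/my-ksa]"

# 4. Annotate the KSA so GKE knows which GSA it maps to
kubectl annotate serviceaccount my-ksa --namespace default \
  iam.gke.io/gcp-service-account=my-gsa@my-project.iam.gserviceaccount.com
```

After that, set serviceAccountName: my-ksa in the pod spec so the workload runs as the annotated account and the Google client libraries pick up its credentials automatically.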

Thanks again for the quick response. I followed all of the instructions for the credentials and have a Kubernetes service account. Just checking on the sequence here: I created the container, did all of the IAM stuff, then deployed my Artifact Registry image to GKE. The workload created by that process now says: Pod errors: Unschedulable, Does not have minimum availability. I thought Autopilot would look after things in the black box. My app is a Python app that polls a database and does things based on what it sees. There is no external HTTP access required, but it does use client libraries for e.g. Mongo, OpenAI, Azure, Google... feeling like I'm watching an autopilot crash my plane 😉

What do you see with kubectl get pod <pod-name>? It will tell you the pod's status. Maybe scheduling is off because a node is cordoned.

You can check node status the same way, and if a node is cordoned you can uncordon it:

kubectl uncordon <node-name>


thanks - 4 nodes - all show 'Ready' status,  each one says it is already uncordoned...

I deleted the last workload and redeployed and it fell over immediately with 

Pod errors: CrashLoopBackOff
Does not have minimum availability

Describing my pods, I see that they all pull the image from the registry and start the container, then after the container is created they get "Back-off restarting failed container".

ChatGPT tells me: "Your Docker image is built for the ARM64 architecture, but your Kubernetes nodes are using the AMD64 architecture", so it thinks this is the fundamental problem I am encountering. I'll look at how I can build a new image for my Python app for AMD64...
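Two common ways to do this (a sketch; the project, repo, and image names are placeholders): cross-build for linux/amd64 locally with Docker Buildx, or build remotely with Cloud Build, whose default workers produce amd64 images.

```shell
# Option 1: cross-build locally for amd64 and push to Artifact Registry
docker buildx build --platform linux/amd64 \
  -t us-central1-docker.pkg.dev/my-project/my-repo/my-app:latest \
  --push .

# Option 2: build remotely with Cloud Build
gcloud builds submit \
  --tag us-central1-docker.pkg.dev/my-project/my-repo/my-app:latest .
```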

