
Removing Original Google User from Project with GKE

Within our Google Workspace organization, userA set up a Google Cloud Project that contains an essential application using Google Kubernetes Engine. We need to delete userA from Google Workspace and have several other users manage the project instead. We have added these other users to the project as Owners.

userA set up the application by connecting to Cloud Shell, so the installation via helm charts, along with various customizations, was all done in userA's Cloud Shell. When other users sign in to Cloud Shell, we do not see the files set up by userA, and we do not see a way to sign in as userA like we would be able to do with a Compute Engine user.

How can we make the application files created by userA accessible to all users and ensure they are not deleted along with userA?

We are new to Kubernetes, so we may be missing some essential understanding of how the cloudshell and/or file creation and ownership works in this context. Any guidance is much appreciated!

Thank you.

1 ACCEPTED SOLUTION

Generally speaking, Cloud Shell is only a temporary environment and should not be used across the Org by default for this kind of task, unless the config files are stored - as they should be - in a private repository, whether on GCP, GitHub, Bitbucket, etc.

Getting back to your issue: I'd try to get all the details from that Cloud Shell instance by logging in and pulling everything out of the "history" command.

With the output of the history command you'll be able to piece together information that may help you understand how everything was configured and managed by userA.
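Something like this would capture that history before the session is lost (file names here are just examples):

```shell
# Dump the interactive history to a file so it survives the session.
history > "$HOME/userA-command-history.txt"
# The persisted history file can be copied too (ignore if it doesn't exist).
cp "$HOME/.bash_history" "$HOME/userA-history-raw.txt" 2>/dev/null || true
# Pull out the commands most likely to explain the deployment.
grep -E 'helm|kubectl|gcloud' "$HOME/userA-command-history.txt" \
    > "$HOME/userA-k8s-commands.txt" || true
```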

Gather all the files you can find on that Cloud Shell instance (from userA) into a directory and push them to a Git repository. That way you'll at least have a copy of those files in case you need them while you put all the pieces together.
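As a sketch, run something like this inside userA's Cloud Shell (the directory names are only examples - adjust them to what you actually find in that home directory):

```shell
# Collect the recovered files into one place and commit them.
BACKUP="$HOME/usera-cloudshell-backup"
mkdir -p "$BACKUP"
# Example directories; replace with whatever userA actually has.
for d in helm-values manifests scripts; do
  if [ -d "$HOME/$d" ]; then cp -r "$HOME/$d" "$BACKUP/"; fi
done
cd "$BACKUP"
git init -q
git add -A
git -c user.name="backup" -c user.email="backup@example.com" \
    commit --allow-empty -q -m "Backup of userA Cloud Shell files"
# Then push to your private repo (placeholder URL):
# git remote add origin git@github.com:your-org/usera-backup.git
# git push -u origin main
```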

Your co-workers should install kubectl locally on their laptops and manage Kubernetes from their own machines instead of Cloud Shell; same for helm. Make sure all of them have the same level of access and visibility - this will help with the knowledge transfer later when userB and userC leave the Org.
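A sketch of the per-machine setup - the cluster name, zone and project below are placeholders, so substitute your own before running:

```shell
# Give each co-owner cluster access from their own machine.
setup_cluster_access() {
  gcloud components install kubectl
  gcloud container clusters get-credentials my-cluster \
      --zone us-central1-a --project my-project
  kubectl get nodes   # confirm cluster access works
  helm list -A        # confirm helm can see the existing releases
}
```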

You mentioned a 3rd-party company that created the helm charts / deployment process for you. If you're still under contract with them, or have a good relationship, I'd write to them and ask for a runbook / documentation of how the entire process works.

Either way, you should be able to reverse engineer pretty much everything from that Cloud Shell output and what you have actually deployed into your K8s cluster. You can get everything you need just from the cluster itself: deployment files, ConfigMaps, Secrets, Ingresses, etc. Helm isn't doing any magic; it deploys exactly those same files onto your K8s cluster.
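For example, something like this would export the live objects to YAML so the deployment can be reconstructed (the "myapp" namespace is a placeholder):

```shell
# Export the main object kinds from a namespace into local YAML files.
export_k8s_state() {
  ns="${1:-myapp}"
  mkdir -p k8s-export
  for kind in deployments configmaps secrets services ingresses; do
    kubectl get "$kind" -n "$ns" -o yaml > "k8s-export/$kind.yaml"
  done
}
# Usage (with kubectl pointed at your cluster): export_k8s_state myapp
```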

"I suppose in reality the only thing we cannot figure out how to access/move from userA is the files and directories that customize the application to our organization."

Just push everything from that Cloud Shell instance to a Git repo.

The app configuration (users, env vars, etc.) should be present in the K8s ConfigMaps. API keys and passwords should all be in K8s Secrets. As for the image versions (app versions), those should be inside the K8s Deployments. All of these can be exported and stored safely.
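Also worth knowing: even if the original values.yaml files are gone, Helm keeps the user-supplied values inside the release itself, so you can recover userA's customizations straight from the cluster. A sketch (release and namespace names are placeholders):

```shell
# Recover a release's customized values and its rendered manifests.
recover_helm_release() {
  release="$1"; ns="$2"
  # The overrides supplied at install time (the customized values):
  helm get values "$release" -n "$ns" > "$release-values.yaml"
  # The fully rendered manifests exactly as deployed:
  helm get manifest "$release" -n "$ns" > "$release-manifest.yaml"
}
# Usage: run `helm list -A` to find the release name, then e.g.:
# recover_helm_release myapp-release myapp-namespace
```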


4 REPLIES

I'm not quite an expert and I may give you the wrong information here.

If userA is part of your Org, have you tried to impersonate that userA session? I mean, try to log in as userA and use their Cloud Shell instance. I know it sounds like a bit of a privacy issue, but if that user is under your Org then IMO you can do that / force majeure.

As far as I know, Cloud Shell is a per-user service. I may be wrong, but it makes sense from a security perspective for it to be isolated at the user level. Also, if a Cloud Shell "instance" isn't used for 120 days, the disk attached to it gets deleted.

If that's a critical / production-related issue, I'd open a support ticket with GCP ASAP.

Thank you for the help, Sebastian, that information is very helpful. userA is part of our organization and we are able to at least sign into that user for now to continue managing the project, but need to remove them eventually.

The other users added as co-owners are, for example, able to run kubectl commands in Cloud Shell to view the Kubernetes application's pods, namespaces, etc. I suppose in reality the only thing we cannot figure out is how to access/move from userA the files and directories that customize the application to our organization. For context, another company created and updates the general Kubernetes application; we simply install it via helm charts and then customize it by updating certain files, like the values YAMLs, with our own API keys and other organization-specific details.

If we are unable to access these files from another user, I wonder: if userA were deleted, would the application keep running unaffected, with the only downside being that when there is an update, we would have to re-add the helm chart and re-customize the values specific to our organization?


This has given us a lot of helpful information to get on the right track. We have been working with Kubernetes for the first time to implement this application, so we are very appreciative of you taking the time to clarify these points. Thank you, Sebastian.
