
Compute Engine VM deletion

I have three Compute Engine resources that keep coming back no matter how often I delete them through the dashboard. This means I keep incurring costs for resources I am no longer using, and I do not know how to proceed with deleting them. I have looked everywhere, and even the Google docs say you can just press delete for these VM instances to go away.

I also have Kubernetes Engine instances that behave the same way. I believe the two are linked, but no matter which order I disable and delete items in, they always come back.

Any advice on what other steps I can take would be greatly appreciated.

1 ACCEPTED SOLUTION

It is a little hard to know exactly from your description, but the first thing I would check is Managed Instance Groups (MIGs); see here:

https://cloud.google.com/compute/docs/instance-groups

A MIG is used to deploy a set of VMs to fulfil a task, and if you manually delete a VM that is part of a MIG, the MIG will take steps to restore it. So check the instance groups section of the console to see what you have configured there. Assuming there is a MIG, deleting it will also delete its instances, and they shouldn't reappear.
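
As a rough sketch from the command line (assuming the gcloud CLI is set up for your project; the group name and zone below are placeholders), you can list any MIGs and delete the offending one:

# List all managed instance groups in the project
gcloud compute instance-groups managed list

# Delete a MIG (placeholder name/zone); this also deletes the VMs it manages
gcloud compute instance-groups managed delete my-mig --zone=us-central1-a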

Kubernetes also uses similar constructs underneath, but I would suggest deleting the cluster (or adjusting the node pools) from the GKE area of the console if you want to remove those resources.
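
If you prefer the command line, a minimal sketch (the cluster name and zone are placeholders) would be:

# List GKE clusters and their locations
gcloud container clusters list

# Delete a cluster; this removes its node pools and the underlying VMs/MIGs
gcloud container clusters delete my-cluster --zone=us-central1-a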

Failing that, you can also review the audit logs to see which account is creating the resources. Look for a 'compute.instances.insert' call; in the expanded log message you should also see a 'principalEmail', which should indicate which person or process deployed the resource.
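
One way to pull those entries from the command line (a sketch, assuming Admin Activity audit logs are enabled, which they are by default):

# Show recent VM-creation audit entries; the principalEmail field identifies who made the call
gcloud logging read 'logName:"cloudaudit.googleapis.com%2Factivity" AND protoPayload.methodName:"compute.instances.insert"' --limit=5 --format=json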

Hope that helps,

Alex

 


4 REPLIES


Thanks for the reply. I will take a look at the MIGs, as I had been deleting them before, but somehow the VMs still came back whenever I did that. Currently I only have one VM that I have set to Stop and zero MIGs, so I am hopeful that I am close to solving my issue.

The GKE clusters seem to be gone as well, so the VM appears to be decoupled from those now.

I will take a look at the audit logs, as those will hopefully explain more about who owns what and where the start-up signals are coming from. Thanks for the info on which calls to look for.

In any case, I will hopefully be able to come back with good news later.

One more possibility might be that you have "scheduled VMs" in play ... see:

https://cloud.google.com/compute/docs/instances/schedule-instance-start-stop
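
A quick way to check for those from the command line (a sketch; instance start/stop schedules show up as resource policies, and the VM name and zone below are placeholders):

# List resource policies, which include instance start/stop schedules
gcloud compute resource-policies list

# Check whether a specific VM has a schedule attached
gcloud compute instances describe my-vm --zone=us-central1-a --format="value(resourcePolicies)"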

In the end it seemed to be a combination of going into the VMs themselves, making sure to first delete the Instance Groups as mentioned by @alexmoore in the accepted solution, and finally deleting the VMs themselves. There also seemed to be some delay before actions took effect, so keeping the page focused after deleting/stopping elements was a key factor as well. I could not check the scheduled VMs part mentioned by @kolban, as the Instance Groups were already gone and I only had a single stopped VM at the time, but I believe scheduled VMs were definitely playing a part, given that Kubernetes Engine was acting up as well.

Overall it was a trial of patience: first disconnecting the clusters from the groups, then deleting the groups, and finally deleting the VMs that had been instantiated. I was also able to check the API Overview section to confirm that Compute and Kubernetes were finally off and making no requests.
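
For anyone else cleaning up, a quick command-line check (a sketch, assuming the gcloud CLI is configured for the affected project) to confirm nothing is left running:

# Confirm no VM instances, managed instance groups, or GKE clusters remain
gcloud compute instances list
gcloud compute instance-groups managed list
gcloud container clusters list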