As shown above, I set object version management to keep the 3 most recent versions. However, the previous backup files are not automatically deleted.
Hi,
Am I right in understanding that your goal here is to delete older backups but always keep at least 3?
Object Versioning applies to objects with the same name - i.e. when you overwrite an object, the previous version is kept as a noncurrent version. In your case each backup has a different name (e.g. with a datestamp), so each object only ever has one version; it never reaches 3 versions and the policy never triggers. For more details on object versioning see: https://cloud.google.com/storage/docs/object-versioning
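To make this visible, here's a minimal Python sketch using the google-cloud-storage client (the bucket name `my-backup-bucket` is a placeholder):

```python
from google.cloud import storage

client = storage.Client()
# versions=True also returns noncurrent (overwritten) generations of each object.
for blob in client.list_blobs("my-backup-bucket", versions=True):
    print(blob.name, blob.generation)
# Datestamped backups each appear exactly once: one generation per unique
# name, so a "keep newest 3 versions" rule never has anything to delete.
```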
There are several ways to approach this; some options:
1. Carry out the backup using the same object name each time and let bucket versioning keep the historic copies for you; that way the lifecycle policy will work as you intended (see the first sketch after this list).
2. You could use an 'Age'-based approach: for example, if you create backups daily, delete backups older than 3 days (also shown in the first sketch). This carries a risk, though - if your backup process stops working and you don't notice, the data would be gone after 3 days.
3. Use an external approach to cycle out older copies - for example, with Cloud Workflows you could create a very simple workflow that scans the bucket and removes older files based on the filename (see the second sketch below). This should cost little or, if you can stay within the free tier, nothing: https://cloud.google.com/workflows/docs
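As a first sketch for options 1 and 2, here's how the versioning and lifecycle setup could look with the google-cloud-storage Python client (the bucket name is a placeholder, and the 3-day age rule is just an example value):

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-backup-bucket")  # placeholder name

# Option 1: write every backup to the same object name (e.g. backup.sql),
# keep history as noncurrent versions, and prune beyond the newest 3.
bucket.versioning_enabled = True
bucket.add_lifecycle_delete_rule(number_of_newer_versions=3, is_live=False)

# Option 2 (alternative): delete any object older than 3 days, whatever
# its name. Uncomment to use instead - note the risk described above.
# bucket.add_lifecycle_delete_rule(age=3)

bucket.patch()  # apply the versioning and lifecycle changes to the bucket
```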
There may be other approaches, but these are a few that come to mind.
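A Cloud Workflows definition has its own YAML syntax, so as a second sketch here is the same idea for option 3 expressed in Python instead (runnable anywhere with storage permissions, e.g. a scheduled Cloud Function); the bucket name, prefix, and retention count are placeholders:

```python
from google.cloud import storage

BUCKET = "my-backup-bucket"  # placeholder
PREFIX = "backup-"           # placeholder, e.g. backup-2024-01-31.sql
KEEP = 3                     # how many of the newest backups to retain

client = storage.Client()
# Datestamped names sort chronologically, so a name sort is enough here.
blobs = sorted(client.list_blobs(BUCKET, prefix=PREFIX), key=lambda b: b.name)
for blob in blobs[:-KEEP]:   # everything except the newest KEEP files
    print("deleting", blob.name)
    blob.delete()
```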
Hope that helps,
Alex
Yes, thank you. I'll try the age-based approach from option 2. Thank you so much for your reply - it was very helpful.