I understand your concerns about the unexpected impact of GCS soft delete on your operations. It's worth reaching out to Google Cloud Support for clarification on the billing side, flagging the negative performance impact and asking for advance communication on future feature rollouts. You could also ask whether Google has benchmarking details for the Hadoop GCS connector with soft-deleted objects, to better understand the performance implications.
But how can Google run tests on their resources without permission?!
Have you found a way to disable the soft delete feature on a bucket (or better yet, project- or org-wide)? I have had trouble finding a soft-delete-related setting in the UI or the Terraform provider.
The command below disables soft delete. It works per bucket, or you can use gs://* to target every bucket in a project.
### To disable soft delete
$ gcloud storage buckets update --clear-soft-delete gs://BUCKET-NAME
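For anyone who would rather script this than run gcloud per bucket, below is a rough Python sketch using the google-cloud-storage client. Treat it as an assumption-laden outline: it relies on the `Bucket.soft_delete_policy` property (added around client v2.16), and the function names here are mine, not an official recipe — verify the attribute against your installed client version before relying on it.

```python
"""Sketch: disable soft delete on every bucket in a project.

Assumption: google-cloud-storage >= 2.16, which exposes
Bucket.soft_delete_policy with a settable retention_duration_seconds.
"""


def needs_disable(retention_seconds):
    """A bucket needs patching only when soft delete retention is currently non-zero."""
    return bool(retention_seconds)


def disable_soft_delete(project_id):
    """Clear the soft delete policy on all buckets; returns the names updated."""
    # Imported lazily so the helper above stays usable without the GCP client installed.
    from google.cloud import storage

    client = storage.Client(project=project_id)
    updated = []
    for bucket in client.list_buckets():
        policy = bucket.soft_delete_policy  # assumed attribute, see note above
        if policy is not None and needs_disable(policy.retention_duration_seconds):
            policy.retention_duration_seconds = 0  # 0 means soft delete disabled
            bucket.patch()
            updated.append(bucket.name)
    return updated
```

The same caveat from the issue tracker link below may apply here too: if the gcloud command hits errors, the underlying PATCH call this client makes may hit the same ones.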
However, I faced some issues executing the disable commands; they are reported in the link below:
https://issuetracker.google.com/issues/327390321
To automatically change the soft delete retention duration each time a new GCS bucket is created in your org, you can use a combination of Audit Logs -> Eventarc -> Cloud Run. See this code example.
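To make the Audit Logs -> Eventarc -> Cloud Run chain concrete, here is one possible shape for the Cloud Run handler, sketched in Python. Everything here is an assumption rather than a verified recipe: the Eventarc trigger is assumed to be a Cloud Audit Log trigger on `storage.buckets.create`, the payload path (`protoPayload.resourceName`, e.g. `projects/_/buckets/my-bucket`) matches Cloud Audit Log entries but should be checked for your trigger type, and `handle_event`/`set_retention` are names I made up.

```python
"""Sketch of a Cloud Run handler that normalizes soft delete retention
on newly created buckets (assumed Eventarc Audit Log trigger on
storage.buckets.create)."""


def bucket_name_from_resource(resource_name):
    """Extract the bucket name from an Audit Log resourceName string."""
    # resourceName looks like "projects/_/buckets/<bucket>"
    prefix = "projects/_/buckets/"
    if not resource_name.startswith(prefix):
        raise ValueError("unexpected resourceName: %r" % resource_name)
    return resource_name[len(prefix):]


def set_retention(bucket_name, retention_seconds=0):
    """Patch the bucket's soft delete policy (assumes client >= 2.16)."""
    # Imported lazily; requires google-cloud-storage in the Cloud Run image.
    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket(bucket_name)
    bucket.soft_delete_policy.retention_duration_seconds = retention_seconds
    bucket.patch()


def handle_event(event):
    """Entry point wired to the Eventarc Audit Log trigger."""
    name = bucket_name_from_resource(event["protoPayload"]["resourceName"])
    set_retention(name, retention_seconds=0)  # 0 disables soft delete
    return name
```

The design point is that the handler stays idempotent: re-delivered events just re-apply the same retention value, so Eventarc's at-least-once delivery does no harm.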