Hi Team,
We have integrated AWS CloudTrail and AWS GuardDuty with Chronicle SecOps SIEM, and our AWS S3 cost has spiked.
Is there a way we can reduce the cost (apart from creating a new S3 bucket or limiting the S3 bucket data)?
One possible reason is the total number of files inside the S3 bucket that are not deleted after transfer. What option did you set for the files after transfer?
You may also open a GCP support case to check the transfer and see how many files were transferred.
Thank you @hzmndt
There’s also a tool within AWS that helps manage S3 costs, called S3 Storage Lens.
You may also want to look into what compression is available.
Thank you @dnehoda !
Adding some generic comments to hzmndt's answer:
1. If you are ingesting some non-AWS logs alongside your CloudTrail logs, you could try offloading those logs to Chronicle Forwarders instead, though this adds some administration overhead. Or you could rely on other ingestion methods, such as the Ingestion API or pulling from the source instead of pushing to your buckets/data lake, unless you have analytics running on top of them.
2. For AWS logs, I think CloudWatch supports data aggregation by time bins in Logs Insights (see the query sketch after this list), though I am not sure whether those aggregated results can be pushed to the S3 buckets or not.
3. Some log sources may support data aggregation at the expense of losing some field values (like removing source ports from network logs), for example: https://www.cisco.com/c/en/us/support/docs/smb/switches/cisco-350-series-managed-switches/smb5275-co...
but that comes at the expense of losing possibly valuable information.
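On the CloudWatch point above, here is a minimal sketch of what a time-binned aggregation query could look like with boto3 and Logs Insights. The log group name and the query itself are assumptions for illustration, not something from this thread:

```python
# Rough sketch (names are assumptions): run a CloudWatch Logs Insights
# query that aggregates events into 5-minute bins instead of keeping
# every raw record.
import time
import boto3

logs = boto3.client("logs")

query = "stats count(*) as events by bin(5m)"   # time-binned aggregation
start = logs.start_query(
    logGroupName="/aws/cloudtrail/example",     # hypothetical log group
    startTime=int(time.time()) - 3600,          # last hour
    endTime=int(time.time()),
    queryString=query,
)

# Logs Insights queries run asynchronously; poll until the query finishes.
while True:
    result = logs.get_query_results(queryId=start["queryId"])
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result.get("results", []):
    print({field["field"]: field["value"] for field in row})
```

Whether the aggregated output can then be exported to S3 for ingestion is the open question mentioned above.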
Thank you @AbdElHafez !
Hi,
Yes, cost can spike if there is a lot of data in your bucket.
With the feed, Google SecOps will, every 15 minutes, list all elements in the bucket to determine which new elements need to be ingested.
The AWS cost comes from the ListObjects API calls on the bucket. Each call returns at most 1,000 elements, so if you have more than that in the bucket, every 15-minute poll results in multiple calls, which explains the increase in overall cost.
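To see how big that multiplier is in practice, a minimal boto3 sketch (bucket name is a placeholder) can count the objects and estimate how many LIST calls each poll needs:

```python
# Rough sketch (not from the original thread): count the objects in the
# feed bucket and estimate how many ListObjectsV2 calls each 15-minute
# poll requires, and roughly how many that makes per month.
import math
import boto3

s3 = boto3.client("s3")
bucket = "my-chronicle-feed-bucket"   # hypothetical bucket name

paginator = s3.get_paginator("list_objects_v2")
object_count = 0
for page in paginator.paginate(Bucket=bucket):
    object_count += page.get("KeyCount", 0)

# Each ListObjectsV2 call returns at most 1,000 keys.
calls_per_poll = max(1, math.ceil(object_count / 1000))
polls_per_month = 30 * 24 * 4         # one poll every 15 minutes
print(f"{object_count} objects -> ~{calls_per_poll} LIST calls per poll, "
      f"~{calls_per_poll * polls_per_month} LIST calls per month")
```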
To reduce it, you can:
- Delete non-relevant items from the bucket (either by authorizing Google SecOps to delete files after reading them, or by setting up a lifecycle rule on those elements; see the sketch after the note below).
- Use an AWS SQS queue so that Google SecOps is only notified of new elements arriving in the bucket (also shown in the sketch below).
Note: Google recommends, when possible, having a dedicated bucket for Google SecOps ingestion and not reusing one already set up for your internal needs, which may keep those files in the bucket with a high retention.
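A minimal boto3 sketch of the two options above. The bucket name, prefix, expiration, and queue ARN are placeholders, not values from this thread; adjust them to your retention needs and your Google SecOps feed configuration:

```python
# Rough sketch (not from the original thread): expire old objects with a
# lifecycle rule, and publish object-created events to an SQS queue so the
# feed can be notification-driven instead of repeatedly listing the bucket.
import boto3

s3 = boto3.client("s3")
bucket = "my-chronicle-feed-bucket"        # hypothetical bucket name

# Option 1: expire objects a few days after ingestion, so LIST calls
# never have to page through old files.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-ingested-cloudtrail",
                "Filter": {"Prefix": "AWSLogs/"},   # hypothetical prefix
                "Status": "Enabled",
                "Expiration": {"Days": 3},          # pick your own retention
            }
        ]
    },
)

# Option 2: send "object created" events to an SQS queue. The queue policy
# must already allow s3.amazonaws.com to send messages, or this call fails.
s3.put_bucket_notification_configuration(
    Bucket=bucket,
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:chronicle-feed",  # placeholder ARN
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```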
Thank you @jpetitg