Hello,
I'm looking for best practices/tips on crafting SentinelOne EDR Cloud Funnel queries / field selection for exporting to SecOps.
Perhaps unsurprisingly, we're running into issues due to the huge volume of data SentinelOne generates, and we're looking for ways to cut the data down without losing detection capability.
Thus far we've reduced the number of fields to only include those mapped by the SecOps UDM parser and we're investigating if we can make do with only the "Behavioral Indicator" type of events.
I would greatly appreciate it if anyone here with hands-on experience using SentinelOne Cloud Funnel could share some of their experiences and advice!
Are you asking about SecOps search queries and how to refine those? Or are you asking how to collect less data? If the latter, I don't think that's possible without some kind of intermediary tool, such as Cribl, that would reject certain logs. The way this currently functions is that the whole /activities endpoint is queried when we GET logs from S1.
I would think you'd want all that data regardless, though. You may just want to look at specific event types or patterns when creating a SecOps search.
In practical terms I am looking to optimize the data selection query and field selection in SentinelOne CloudFunnel for data of use for threat detection in SecOps SIEM.
A default/open query within CloudFunnel results in roughly 600-700 MB of data per endpoint per day; with several thousand endpoints this quickly adds up to an unmanageable volume.
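To put that in perspective, here's a quick back-of-envelope calculation. The midpoint (650 MB) and the endpoint count (3000) are assumptions standing in for the thread's "600-700 MB" and "several thousand" figures:

```python
# Back-of-envelope daily ingest estimate.
# 650 MB = assumed midpoint of the 600-700 MB/endpoint/day observation;
# 3000 endpoints stands in for "several thousand".
def daily_ingest_tb(mb_per_endpoint: float, endpoints: int) -> float:
    """Total daily ingest in TB (decimal units: 1 TB = 1,000,000 MB)."""
    return mb_per_endpoint * endpoints / 1_000_000

print(f"{daily_ingest_tb(650, 3000):.2f} TB/day")  # roughly 2 TB/day before any filtering
```

At that rate you're looking at tens of terabytes a month before any filtering, which is why field selection and event-type filters matter so much here.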
Filtering out file events reduces our CloudFunnel data export by 2 orders of magnitude. Here's a filter that works for us. All fields selected.
NOT(event.type in ('File Creation', 'File Modification', 'File Deletion', 'File Rename'))
Side note: In our experience the built-in parser for 'SentinelOne Singularity Cloud Funnel' has an ingestion error rate of 1%, significantly higher than other log types we're ingesting. The latest update from 2025-03-25 was not an improvement. Your mileage may vary, but I wanted to throw it out there as something to expect.
Hey @robhoopr, did you open a ticket on this for the error rate?
What's your reasoning for excluding all file events, aside from reducing data volume?
Reducing the data volume was the only motivation. For us, it was the difference between >100GB and <100MB daily ingest (per S1's estimates). I'll leave it to you to decide whether those file events provide enough security value to justify the cost of logging them in SecOps.
Hey @robhoopr some of those types may be needed in an IR. The Deletions specifically.
Maybe, but those events can also be reviewed in and collected from the S1 console in the case of an emergency, so logging to the SecOps SIEM isn't the only option.
It might be possible to use a SOAR action to collect file events on demand when an IR starts. Some combination of SentinelOne's Deep Visibility query APIs should get the job done.