Hello, community,
Today I'm facing something weird and can't figure out how this is possible...
I've implemented a custom detection rule on my Chronicle SIEM instance that fires when a particular user deletes more than 20 files within 10 seconds. To test the rule, I deleted 26 files.
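For context, the detection logic boils down to a per-user sliding window over file deletion events. Here's a rough Python sketch of that logic only (the actual rule is written in YARA-L; the threshold and window below just mirror my rule):

from collections import defaultdict, deque

WINDOW_SECONDS = 10
THRESHOLD = 20

def detect_mass_deletion(events):
    # events: (timestamp_in_seconds, user) tuples for FILE_DELETION events, sorted by time
    windows = defaultdict(deque)  # user -> deletion timestamps inside the current window
    for ts, user in events:
        win = windows[user]
        win.append(ts)
        # drop deletions that fell outside the 10-second window
        while ts - win[0] > WINDOW_SECONDS:
            win.popleft()
        if len(win) > THRESHOLD:
            yield user, len(win)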
On the SIEM side the rule correctly detects file deletion events and raises an alert grouping all 26 file deletions.
On the SOAR side I correctly received a new case with an alert, but inside the alert I can see 120 events... Looking at these events, I find a lot of duplicates...
I can't figure out the reason for this strange behavior. The Chronicle connector has been configured for a while and works fine with other alerts, pulling the correct number of events.
What am I missing?
Thank you!
A
this could be due to the ontology mapping configured for the source of the alert but I could be wrong...
If it were 26 events I could maybe see one event per deleted file, but good luck... I would look at the ontology mapping personally, though, and maybe inspect the raw alert to see what the duplicated fields have in common within the ingested raw alert data.
Hi Nalyd,
the problem is that the deleted files were about twenty... I can't figure out why there are duplicates.
Hello,
I actually just reached out to support about this. This is what they said:
"""
Duplicate events have to be created because during ingestion we do not allow multiple similar entities (e.g. 2 or more email entities in the same event) per event. For this reason, when the Connector sends raw alert data having an event with multiple entries of email IDs (same behavior will be observed for multiple Hostnames or IP addresses and all "blue" entities, i.e. non-artifacts), the ETL will create multiple events to prevent data loss. -- Support
"""
So basically, if your logs have multiple values in one field, the connector will create a separate event for each value under that field. This happens for every field that is used to extract an entity.
Hi @mokatsu, thanks for your response.
Based on your suggestions I've checked the events and found that the user has multiple fields containing email addresses that are used to create user entities. In fact, in the graph I can find as many user entities as the number of these email fields.
Now my question is: how can I solve this? I need to have as many events as the number of deleted files (that is the real number of events).
I wonder if the expansion is happening on each entity, so hashes, emails, etc.
So a new row or event is created for each value instance, by hash and by email.
Example:
example event: {"message": "some message", "emails": ["email@gmail.com", "email2@gmail.com"], "hash": ["hash1", "hash2", "hash3"]}
expanded events:
{"message": "some message", "emails": "email@gmail.com", "hash": "hash1"}
{"message": "some message", "emails": "email@gmail.com", "hash": "hash2"}
{"message": "some message", "emails": "email@gmail.com", "hash": "hash3"}
{"message": "some message", "emails": "email2@gmail.com", "hash": "hash1"}
{"message": "some message", "emails": "email2@gmail.com", "hash": "hash2"}
{"message": "some message", "emails": "email2@gmail.com", "hash": "hash3"}
Essentially, all possible combinations of the original log, but with no arrays.
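If you want to see it concretely, here's a minimal Python sketch (my own illustration, not the connector's actual code) that reproduces the expansion with itertools.product:

from itertools import product

raw_event = {
    "message": "some message",
    "emails": ["email@gmail.com", "email2@gmail.com"],
    "hash": ["hash1", "hash2", "hash3"],
}

def split_event(event, entity_fields):
    # one flattened event per combination of single values from the array fields
    for combo in product(*(event[f] for f in entity_fields)):
        flat = dict(event)
        flat.update(zip(entity_fields, combo))
        yield flat

for e in split_event(raw_event, ["emails", "hash"]):
    print(e)  # 2 emails x 3 hashes = 6 events, matching the example above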
Hi @mokatsu, this is exactly what happens: for every entity that comes as an array of values, new events are created to cover all possible combinations of the elements of these arrays...
This is so annoying... any suggestions on how to solve it?
As far as I know, you cannot. This is being done to ensure there is no data loss and entities are properly extracted.
What exactly is your issue (other than more events, and it looking scary at times)? Is this causing some automation/playbook issues?
Hi @mokatsu,
the problem comes when I implement detection rules on the SIEM side, like the one I described above.
If I have a rule that is meant to match when a user deletes a number of files above a threshold, I would expect one event for each file deletion... As it is now, I see one alert where the same deleted file appears duplicated 4-5 times... This makes things hard for our analysts.
@AThebrand Unfortunately I don't really have a solution. The only thing I can think of would be to create a new events panel within some playbook automations, then use HTML to display a table.
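If the goal is just a readable table for the analysts, you could also dedupe the events inside that automation before rendering. A minimal sketch, assuming each event is a dict and that a file name plus timestamp identifies a real deletion (both field names are hypothetical; use whatever your mapped events actually carry):

def dedupe_events(events, keys=("fileName", "timestamp")):
    seen = set()
    unique = []
    for event in events:
        fingerprint = tuple(event.get(k) for k in keys)
        if fingerprint not in seen:
            seen.add(fingerprint)
            unique.append(event)
    return unique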
Not a direct answer, but you may find some inspiration in this post about repeated fields: https://medium.com/@thatsiemguy/working-with-repeated-fields-in-chronicle-siem-91289a8051c
Hi,
Facing the same issue, did you find any solution?
Thanks.
Hello. This behavior was recently addressed in version 51 of the Google Chronicle integration. 'Disable Event Splitting' should be the new default in the connector, and the end result should be much less duplication and overlap with the ingested events, with the expectation that the updated Ontology will split out the necessary entities using the comma delimiter.
Thank you very much for the information! I have upgraded to version 51 and will monitor the results. Google rolls out updates faster than I can keep up 😄