This will be broken up into multiple posts due to post length requirements. One of our SOC workstreams is reviewing BlackBerryPROTECT Script Control events (Blocks or Alerts for ActiveScripts, MacroScripts, PowerShell Scripts, and PowerShell Console Commands). One of the primary challenges in triaging these is that they can be very noisy and time-consuming to investigate. Therefore, noise reduction was a priority while building the playbooks for these Siemplify Cases. Please note that we’re not quite done with the automation in the playbooks, and additional improvements are being developed and tested. However, I wanted to share our current process for the benefit of the community in developing noise reduction strategies.
Our BlackBerry events are routed to our client Kibana, and every hour we pull alerts from Kibana into Siemplify. Before the alerts are ever ingested into Siemplify, we tune out a large number of Script Control events that we know are benign and noisy. These tunings must be updated consistently and regularly to address new, noisy Script Control events as they begin occurring within a client environment. However, when it’s not yet known whether a script is benign, we needed a way to eliminate the noise those alerts could cause within Siemplify. To simplify the complex Script Control triage process, we do not group Script Control cases, so each alert becomes an individual case in Siemplify, which makes noise reduction even more important. In short, our current noise reduction strategy ensures that duplicate Script Filepaths do not need to be triaged by analysts unless they have already been determined to be malicious, or unless client-specific exclusions need to be created or followed up on with the client.
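As a rough illustration of the pre-ingestion tuning described above, the sketch below filters alerts against a list of known-benign filepath patterns before they would be forwarded to Siemplify. The pattern values and field name (`script_filepath`) are hypothetical; real tunings are built per client environment and updated regularly.

```python
import fnmatch

# Hypothetical patterns for Script Control events known to be benign and noisy.
# In practice these are maintained per client and revised as new noise appears.
KNOWN_BENIGN_PATTERNS = [
    "c:\\program files\\vendorapp\\*.ps1",
    "c:\\windows\\ccm\\*",
]

def should_ingest(alert):
    """Return False for alerts whose script filepath matches a known-benign pattern."""
    path = alert.get("script_filepath", "").lower()
    return not any(fnmatch.fnmatch(path, pattern) for pattern in KNOWN_BENIGN_PATTERNS)

alerts = [
    {"script_filepath": "C:\\Program Files\\VendorApp\\update.ps1"},
    {"script_filepath": "C:\\Users\\Public\\suspicious.vbs"},
]
ingested = [a for a in alerts if should_ingest(a)]
```

Only the unrecognized script survives the filter; the known-benign vendor script is dropped before Siemplify ever sees it.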
Tagging and Noise Reduction Playbook
As the only reliable behavior identifier in Script Control alerts, the most important Script Control attribute is the Script Filepath, which is mapped as a Process Entity. A Script Filepath can either be a path to a specific script file (e.g., .vbs, .js, .ps1, .xlsb, etc.), or it can be the contents of an actual PowerShell Console command (e.g., [*COMMAND*] [System.Security.Principal.WindowsIdentity]::GetCurrent().Name).
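The two shapes a Process Entity can take could be told apart with a simple heuristic like the one below. This is our own assumption for illustration, not product logic: a value ending in a known script extension is treated as a filepath, and anything else as PowerShell console command content.

```python
# Extensions named in the post; a real list would likely be longer.
SCRIPT_EXTENSIONS = (".vbs", ".js", ".ps1", ".xlsb")

def classify_process_entity(value):
    """Heuristic sketch: filepath vs. PowerShell console command content."""
    if value.lower().rstrip().endswith(SCRIPT_EXTENSIONS):
        return "script_filepath"
    return "console_command"
```

For example, `C:\Users\bob\login.vbs` would classify as a filepath, while `[System.Security.Principal.WindowsIdentity]::GetCurrent().Name` would classify as console command content.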
After adding the initial Script Control tags, we check whether the Process Entity is on one of six Custom Lists:
If the Process Entity is contained in the Badlist Custom List or the Script SHA256 is contained in the Badlist SHA256 Custom List, an internal ticket is automatically opened in our ticketing system and a notification is sent to the Tier 2 Analyst Slack channel to follow up before opening a client-facing ticket. If the Process Entity is found in any of the other four Custom Lists, the alert is automatically closed with an “AutoClosed” case tag to assist with search filters later if needed. Otherwise, the case moves to an Enrichment Playbook.
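The routing above can be sketched as a small decision function. Only the Badlist and Badlist SHA256 lists are named at this point in the post, so the four auto-close lists are passed generically; the return values are illustrative labels, not Siemplify action names.

```python
def route_alert(process_entity, script_sha256, badlist, badlist_sha256, autoclose_lists):
    """Decide the next step for a Script Control case.

    badlist / badlist_sha256: sets backing the two Badlist Custom Lists.
    autoclose_lists: the other four Custom Lists (passed generically here).
    """
    # Known-malicious script or hash: open an internal ticket and notify Tier 2.
    if process_entity in badlist or script_sha256 in badlist_sha256:
        return "internal_ticket_and_tier2_notification"
    # Any other list match: close automatically with the "AutoClosed" case tag.
    if any(process_entity in lst for lst in autoclose_lists):
        return "autoclose"
    # No list match: continue to the Enrichment Playbook.
    return "enrichment_playbook"

badlist = {"c:\\users\\public\\dropper.vbs"}
badlist_sha256 = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}
assessment = {"c:\\scripts\\inventory.ps1"}
decision = route_alert("c:\\scripts\\inventory.ps1", "0" * 64,
                       badlist, badlist_sha256, [assessment])
```

Here the script is already under assessment, so the case is auto-closed rather than landing in an analyst's queue.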
Enrichment Playbook
The entire Enrichment Playbook isn’t directly relevant to noise reduction, but one of its first steps is to add the Process Entity to the Assessment Custom List. As a result, any future alerts that come in, from any client environment, for this specific script will be AutoClosed during the Tagging and Noise Reduction Playbook described above. This prevents a flood of duplicate-script cases, which generally don’t require individual triage and investigation. Following enrichment and the initial triage instructions, the case moves to the Tier 1 Analysis Playbook.
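The auto-close mechanism this step enables boils down to two operations, sketched below with a plain set standing in for the Siemplify Custom List (in Siemplify this would be an add-to-custom-list action and a corresponding list check; the function names here are our own):

```python
# A plain set stands in for the Assessment Custom List.
assessment_list = set()

def begin_enrichment(process_entity):
    """First enrichment step: record the script so duplicates auto-close later."""
    assessment_list.add(process_entity)

def seen_before(process_entity):
    """Check performed by the Tagging and Noise Reduction playbook on new alerts."""
    return process_entity in assessment_list

# First alert for this script enters enrichment and is recorded...
begin_enrichment("c:\\users\\public\\report_macro.xlsb")
# ...so any later alert for the same script, from any client, auto-closes.
duplicate = seen_before("c:\\users\\public\\report_macro.xlsb")
```

The key design point is that the list is written at the *start* of enrichment, so duplicates arriving while the first case is still being worked are suppressed too.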
Link to Part 2.