Tier 1 Analysis Playbook
The Tier 1 analyst uses enrichment data and the playbook's instructions to perform the initial analysis and to make an initial determination as to whether a script should be 1) added to the Safelist Custom List, 2) sent to Tier 2 as malicious or for further investigation, 3) added to the Exclusion Custom List, or 4) added to the Exclusion Follow-Up Custom List (these options will later be re-arranged).
If the Tier 1 analyst chooses 2, the Tier 1 analyst is given additional triage instructions, which differ depending on whether the Process Entity is a script file or a PowerShell Console Command, before the case is escalated to Tier 2 and moves to the Tier 2 Analysis Playbook.
If the Tier 1 analyst chooses 1, 3, or 4, the case is tagged accordingly and sent to the Tuning queue for tuning verification before any Process Entities are permanently added to the Safelist, Exclusion, or Exclusion Follow-Up Custom Lists.
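As a rough illustration, the sketch below models how the Tier 1 determination could drive this routing in a playbook step. The Case object, queue names, and disposition values are hypothetical stand-ins for the platform's own case model and actions, not actual Siemplify SDK calls.

```python
from dataclasses import dataclass, field
from enum import Enum


class Tier1Disposition(Enum):
    SAFELIST = 1             # 1) add to the Safelist Custom List
    ESCALATE_TIER2 = 2       # 2) malicious / needs further investigation
    EXCLUSION = 3            # 3) add to the Exclusion Custom List
    EXCLUSION_FOLLOW_UP = 4  # 4) add to the Exclusion Follow-Up Custom List


@dataclass
class Case:
    # Minimal stand-in for a SOAR case object; the real platform provides its own.
    notes: list = field(default_factory=list)
    tags: list = field(default_factory=list)
    queue: str = "Tier 1"


def route_tier1_case(case: Case, disposition: Tier1Disposition, is_script_file: bool) -> None:
    """Route a case according to the Tier 1 analyst's determination."""
    if disposition is Tier1Disposition.ESCALATE_TIER2:
        # Triage instructions differ for script files vs. PowerShell console commands.
        case.notes.append("script-file triage steps" if is_script_file
                          else "PowerShell console command triage steps")
        case.queue = "Tier 2"  # continues in the Tier 2 Analysis Playbook
    else:
        # Choices 1, 3, and 4 all go through tuning verification before any
        # Custom List change becomes permanent.
        case.tags.append(disposition.name)
        case.queue = "Tuning"
```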
Tier 2 Analysis Playbook
The Tier 2 analyst uses input from the Tier 1 analyst and further analysis to determine whether a script 1) is malicious and requires a client ticket, 2) can be added to the Safelist Custom List, 3) can be added to the Exclusion Custom List, or 4) can be added to the Exclusion Follow-Up Custom List.
If the Tier 2 analyst chooses 1, the Process Entity is added to the Badlist Custom List. If the script event is for an actual script file rather than a PowerShell Console Command, then that malicious script’s SHA256 is also added to the Badlist SHA256 Custom List. The Process Entity is also removed from the Assessment List to ensure that future alerts for that script are appropriately triaged and ticketed to the client. The case then moves to the ticketing playbook, which doesn’t directly impact our noise reduction strategy and isn’t included in this post.
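Here is a minimal sketch of what choice 1 does to the lists, assuming the Custom Lists are modeled as a plain dictionary of named sets. The list names mirror the ones described above, but the helper itself is hypothetical rather than a real platform action.

```python
from typing import Optional


def handle_malicious_script(process_entity: str, sha256: Optional[str],
                            custom_lists: dict[str, set]) -> None:
    """Apply the 'malicious' disposition (Tier 2 choice 1) to the Custom Lists."""
    custom_lists["Badlist"].add(process_entity)
    if sha256 is not None:
        # Only actual script files carry a hash; PowerShell Console Commands
        # are tracked by Process Entity alone.
        custom_lists["Badlist SHA256"].add(sha256)
    # Removing the entity from the Assessment List ensures future alerts for
    # this script are triaged and ticketed to the client, not auto-closed.
    custom_lists["Assessment"].discard(process_entity)
```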
If the Tier 2 analyst chooses 2, then the Process Entity is added to the Safelist Custom List, which will still result in auto-closing of future duplicate alerts; the Process Entity is also removed from the Assessment List so that it is categorized correctly.
If the Tier 2 analyst chooses 3, then the Tier 2 analyst will add an exclusion for that script in that client’s BlackBerryPROTECT tenant. The Process Entity is also added to the Exclusion Custom List, which will result in auto-closing of future duplicate alerts in that specific client environment. The Process Entity is also removed from the Assessment List to ensure that future alerts in other client environments for that same script are appropriately triaged and ticketed to the client.
If the Tier 2 analyst chooses 4, then the Tier 2 analyst will follow up with the client to verify whether the client requires an exclusion for that script in their BlackBerryPROTECT tenant. The Process Entity is also added to the Exclusion Follow-Up Custom List, which will result in auto-closing of future duplicate alerts in that specific client environment. The Process Entity is also removed from the Assessment List to ensure that future alerts in other client environments for that same script are appropriately triaged and ticketed to the client. The next improvement on the Noise Reduction roadmap for this playbook will be for the Tier 2 analyst to record whether or not the client requires an exclusion for the script. Once that response is recorded, the Process Entity will be removed from the Exclusion Follow-Up Custom List and, depending on the client’s answer, moved to the Exclusion Custom List or to a client-specific Safelist.
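The remaining Tier 2 choices differ mainly in which Custom List receives the Process Entity and whether the entry is scoped to a single client. The sketch below, using the same dictionary-of-sets stand-in as above, also includes the planned follow-up resolution; keying client-scoped entries by a (client_id, process_entity) pair is an assumption about how that scoping might be represented, not how the playbook actually stores it.

```python
def handle_benign_disposition(process_entity: str, client_id: str, choice: int,
                              custom_lists: dict[str, set]) -> None:
    """Apply Tier 2 choices 2-4 to the Custom Lists."""
    if choice == 2:                                    # Safelist: applies across clients
        custom_lists["Safelist"].add(process_entity)
    elif choice == 3:                                  # Exclusion: this client only
        custom_lists["Exclusion"].add((client_id, process_entity))
    elif choice == 4:                                  # Exclusion Follow-Up: this client only
        custom_lists["Exclusion Follow-Up"].add((client_id, process_entity))
    # In every case the entity leaves the Assessment List so that alerts from
    # other client environments are still triaged and ticketed.
    custom_lists["Assessment"].discard(process_entity)


def resolve_follow_up(process_entity: str, client_id: str, client_wants_exclusion: bool,
                      custom_lists: dict[str, set]) -> None:
    """Planned roadmap step: record the client's answer and retire the follow-up entry."""
    custom_lists["Exclusion Follow-Up"].discard((client_id, process_entity))
    if client_wants_exclusion:
        custom_lists["Exclusion"].add((client_id, process_entity))
    else:
        # A client-specific Safelist, per the roadmap item described above.
        custom_lists.setdefault(f"Safelist ({client_id})", set()).add(process_entity)
```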
Tuning Verification Playbook
A case moves to the Tuning Verification Playbook if the determination during the Tier 1 Analysis Playbook is that the script should be 1) added to the Safelist Custom List, 2) added to the Exclusion Custom List, or 3) added to the Exclusion Follow-Up Custom List. Essentially, the Tier 2 or Tier 3 analyst reviews the Tier 1 analyst’s notes and determination before approving or rejecting the Process Entity being moved to one of these Custom Lists and removed from the Assessment List.
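A sketch of that approval step is below, assuming the Tier 1 playbook hands the reviewer a small proposal record naming the Process Entity and the target Custom List; the record and its field names are hypothetical, not part of the actual playbook.

```python
from dataclasses import dataclass


@dataclass
class TuningProposal:
    # Hypothetical shape of what the Tier 1 playbook hands to the tuning review.
    process_entity: str
    proposed_list: str   # "Safelist", "Exclusion", or "Exclusion Follow-Up"
    tier1_notes: str


def verify_tuning(proposal: TuningProposal, approved: bool,
                  custom_lists: dict[str, set]) -> None:
    """Tier 2/3 approval or rejection of a Tier 1 tuning determination."""
    if approved:
        custom_lists[proposal.proposed_list].add(proposal.process_entity)
        # Only on approval does the entity leave the Assessment List.
        custom_lists["Assessment"].discard(proposal.process_entity)
    # On rejection the Custom Lists are left untouched and the case is handled
    # per the reviewing analyst's notes.
```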
While this noise reduction strategy is specific to a single workstream in our SOC, and there are plans and opportunities for further customization and automation improvements, it can be applied to other similar use cases where a particular alert type is potentially very noisy and where reaching a conclusion requires time and detailed analysis, preventing a flood of duplicate cases in the meantime. I’d love to hear any feedback that others may have and hope that this is helpful to at least some other users within the Siemplify Community!
Noise Reduction Strategy Summary
Link to Part 1.