The suggested Security Assessment Framework outlined here provides a structured approach to evaluating the resilience of a security stack against various specified attack use cases, such as malware delivery, command and control (CnC) communication, and lateral movement within a network. Each scenario is described with detailed prerequisites, including network configurations and the specific technologies involved, such as NGFW (Next-Generation Firewall), Network Sandbox, SIEM (Security Information and Event Management), and others. The framework focuses on actions within different network environments (e.g., Desktop LAN; Cloud can be added later). It categorizes actions and suggests tags for easier selection of actions.
This framework can help MSV customers maximize the utility and effectiveness of Mandiant Security Validation in their security environments.
In Part 1, I will outline the MSV evaluation process, and in Part 2, I will post some of the use cases that can be used as guidelines when choosing and executing attacks.
MSV life cycle
The following high-level MSV assessment life cycle is one approach to ensuring the security stack remains effective against evolving cybersecurity threats.
1. Plan
2. Run Attacks
3. Analyze Results
4. Close Gaps
5. Retest
6. Ongoing Monitoring and Adjustment
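As a rough illustration, the loop above can be sketched in a few lines of Python. This is not MSV code; every name here (run_attacks, assessment_cycle, the stack dictionary) is hypothetical and stands in for the real platform and remediation work:

```python
def run_attacks(actions, stack):
    """Simulate running attack actions against a security stack;
    returns which actions were blocked (step 2: Run Attacks)."""
    return {action: action in stack["blocked_actions"] for action in actions}

def assessment_cycle(planned_actions, stack):
    results = run_attacks(planned_actions, stack)
    # Step 3: Analyze Results -- any unblocked action is a gap.
    gaps = [a for a, blocked in results.items() if not blocked]
    # Step 4: Close Gaps -- here we just mark the action as blocked,
    # standing in for real remediation (e.g. tuning a firewall rule).
    for gap in gaps:
        stack["blocked_actions"].add(gap)
    # Step 5: Retest the previously failed actions.
    retest = run_attacks(gaps, stack)
    return gaps, retest

# Step 1: Plan -- a small set of attack use cases from the framework.
stack = {"blocked_actions": {"malware_delivery"}}
gaps, retest = assessment_cycle(["malware_delivery", "cnc_beacon"], stack)
print(gaps)                   # ['cnc_beacon']
print(all(retest.values()))   # True: the gap is closed after remediation
```

Step 6 (Ongoing Monitoring and Adjustment) is simply this cycle run on a schedule rather than once.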
Until the next post, stay tuned!
Hello @tameri , thank you for this post.
I agree with your framework and we regularly apply it. Internally we call this framework "scenario based tests" because we select a scenario (e.g. existing threat landscape, threat profile, security stack, ...) and we run tests following all the points you outlined.
I'd also like to share the second type of test in our testing strategy. We call it "recurring tests".
We create a set of evaluations (we call them "special evaluations") that contain a sample of every type of attack we can select from the MSV library. The way we choose these actions is driven by:
- recent threat intelligence coming from our CSIRT/SOC group (we include the latest malware, TTPs, and actors related to our vertical)
- the need to cover all stages of attacks (reconnaissance, delivery, exploitation, execution, c&c, action on target)
At the moment our special evaluations contain about 300 actions. We run these evaluations every weekend.
We then collect the results and plot them against time, to check that our security posture remains consistent over time.
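A minimal sketch of that trend check, assuming the weekly results can be exported as (blocked, total) counts per run (the function names and data layout here are illustrative, not an MSV export format):

```python
def detection_rate(blocked, total):
    return blocked / total

def find_drift(weekly_results, threshold=0.05):
    """Flag weeks where the detection rate dropped by more than `threshold`
    versus the previous week -- e.g. a firewall silently failing."""
    drifts = []
    weeks = sorted(weekly_results)
    for prev, curr in zip(weeks, weeks[1:]):
        prev_rate = detection_rate(*weekly_results[prev])
        curr_rate = detection_rate(*weekly_results[curr])
        if prev_rate - curr_rate > threshold:
            drifts.append((curr, round(prev_rate - curr_rate, 2)))
    return drifts

# (blocked, total) per weekly run of ~300 actions -- made-up numbers
weekly = {
    "2024-W01": (270, 300),
    "2024-W02": (268, 300),
    "2024-W03": (240, 300),  # e.g. a partially failed firewall upgrade
}
print(find_drift(weekly))  # [('2024-W03', 0.09)]
```

Plotting the same rates over time (with any charting tool) gives the posture-over-time view described above; the numeric check just makes the drop machine-detectable.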
Thanks to recurring tests, in the past we spotted situations where the firewall had stopped blocking some actions due to a partially failed upgrade, or where alerts coming from the SIEM had stopped working due to an overload.
The special evaluation is updated 2-3 times a year to be sure it contains the latest threats.
Every week we also try to "close gaps" (or at least some of them) for actions not detected/prevented/alerted.
The following week's test is also used to confirm the effectiveness of the change.
Thanks for starting this interesting discussion.
Paolo
Hello @paolocarrara , Thanks for sharing your experience.
Regarding the weekly special evaluation exercise, do you run it manually? If yes, did you consider adding the Advanced Environmental Drift Analysis (AEDA) module to MSV?
It can automate this manual process at scale. You create monitors, which are scheduled jobs that repeatedly run security content with explicitly defined expected results; if those results are not met, it generates and sends you notifications.
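Conceptually, a monitor boils down to "run content, compare against the defined expected result, notify on mismatch". A toy sketch of that idea (this is not the AEDA API; the function names and result strings are hypothetical):

```python
def check_monitor(expected, run_action):
    """Run each configured action and collect notifications for any whose
    observed outcome differs from the explicitly defined expected result."""
    notifications = []
    for action, want in expected.items():
        got = run_action(action)
        if got != want:
            notifications.append(f"{action}: expected {want}, got {got}")
    return notifications

# Expected outcomes per action, as a monitor would define them.
expected = {"malware_delivery": "blocked", "cnc_beacon": "alerted"}

# Stand-in for actually executing content against the security stack.
fake_run = lambda action: "blocked" if action == "malware_delivery" else "missed"

print(check_monitor(expected, fake_run))
# ['cnc_beacon: expected alerted, got missed']
```

The real module adds the scheduling and notification delivery on top, which is exactly the part of the weekly routine that is tedious to do by hand.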
Here is more information about it: https://docs.mandiant.com/home/msv-monitors-advanced-environmental-drift-analysis-aeda
Hello @tameri,
Yes, we do it manually. I know the AEDA module but, at the moment, we don't have the possibility to buy it :)