Google Chronicle Silent Log Source Monitoring

Is there a way for Google Chronicle Enterprise customers to monitor and receive notifications about silent log sources through Google Chronicle?


@kasundssc Please take a look at https://bindplane.com/docs/how-to-guides/secops-silent-host-monitoring to see if that solution can meet your needs.

Hi @kentphelps ,

Thanks for the update. Unfortunately, Bindplane is not feasible for AIX, Sun Solaris, and devices for which we don't install agents. 

@kentphelps  Thanks for the response. 

However, this might not be practical, as we use AIX, Solaris, and devices on which we don't install agents.

You could use a dashboard in SecOps to track ingestion, but for notifications the recommendation is Cloud Monitoring - https://cloud.google.com/chronicle/docs/ingestion/ingestion-notifications-for-health-metrics
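
For a concrete starting point on the notification side, a metric-absence alert policy in Cloud Monitoring can fire when a given log type stops ingesting. The sketch below uses the google-cloud-monitoring Python client; the metric type chronicle.googleapis.com/ingestion/log/record_count, the log_type label, and the project ID are assumptions based on the doc linked above, so confirm them in Metrics Explorer before relying on this.

    # Minimal sketch (not a tested solution): alert when the assumed Chronicle
    # ingestion metric reports nothing for WINEVTLOG for 30 minutes.
    from google.cloud import monitoring_v3

    PROJECT_ID = "my-secops-project"  # placeholder

    client = monitoring_v3.AlertPolicyServiceClient()

    policy = monitoring_v3.AlertPolicy(
        display_name="Chronicle ingestion silent: WINEVTLOG",
        combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
        conditions=[
            monitoring_v3.AlertPolicy.Condition(
                display_name="No WINEVTLOG records for 30m",
                # Metric-absence condition: fires when no data points arrive for `duration`.
                condition_absent=monitoring_v3.AlertPolicy.Condition.MetricAbsence(
                    filter=(
                        'metric.type = "chronicle.googleapis.com/ingestion/log/record_count" '
                        'AND metric.label.log_type = "WINEVTLOG"'  # assumed label name
                    ),
                    duration={"seconds": 1800},
                ),
            )
        ],
        # Attach notification channels (email, Pub/Sub, etc.) created separately, e.g.:
        # notification_channels=["projects/my-secops-project/notificationChannels/123"],
    )

    created = client.create_alert_policy(name=f"projects/{PROJECT_ID}", alert_policy=policy)
    print(created.name)

One policy per log type (or per collector label) is the granularity this gives you; it does not see individual hosts, which is the limitation raised below.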

Cloud Monitoring does not provide asset-wise ingestion.
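
To illustrate that limitation, listing the ingestion time series shows which labels Cloud Monitoring actually exposes. In the rough sketch below (same assumed metric name as in the sketch above; confirm it in Metrics Explorer), the labels are at log-type/collector level, with no hostname.

    # Minimal sketch (assumed metric name): print per-time-series labels and
    # record counts for the last 30 minutes to see the available granularity.
    import time
    from google.cloud import monitoring_v3

    PROJECT_ID = "my-secops-project"  # placeholder

    client = monitoring_v3.MetricServiceClient()
    now = int(time.time())
    interval = monitoring_v3.TimeInterval(
        {"end_time": {"seconds": now}, "start_time": {"seconds": now - 1800}}
    )

    series = client.list_time_series(
        request={
            "name": f"projects/{PROJECT_ID}",
            "filter": 'metric.type = "chronicle.googleapis.com/ingestion/log/record_count"',
            "interval": interval,
            "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        }
    )

    for ts in series:
        # Labels here are per log type / collector, not per asset or hostname.
        total = sum(p.value.int64_value or p.value.double_value for p in ts.points)
        print(dict(ts.metric.labels), total)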

@cmorris @kentphelps Do you have any alternative for this?

Non-agent silent log monitoring is being considered for the product feature roadmap, but there is no date as yet.

You could create a rule with a specific threshold for each host, perhaps using a data table, that triggers every time one of the hosts doesn't meet a certain marker.

I tried that, and the result is not accurate because the rule works on UDM values, and UDM requires the data to be normalized first. Normalization takes time, so the result is inaccurate. Therefore, YARA-L is not the way forward.

YARA-L is the only way to write rules in SecOps. How is the result inaccurate? I would think you can use a threshold rather than an absolute value, something greater than X.

I did the same, and it only works if you set the window to more than 30 minutes; otherwise the results are inaccurate, because the logs have to be normalized before the rule can evaluate them.

Yes, but otherwise you would not have a solution. Anything you do with any SIEM tool has a delay. SecOps is not technically a network monitoring tool; it's a security analysis tool. We can use SecOps for these "other" use cases, but that is the trade-off. Even in rules, dashboards, and other areas of the tool, there is a minor delay. There is also the first seen / last seen capability, as well as Cloud Monitoring, which I believe is what support recommended. I can try to come up with a Cloud Logging solution and share that here over the next couple of days.

@dnehoda  I will partially accept what you said. However, availability is an important factor to consider in the CIA triad. Additionally, if logs are not being ingested properly, overall monitoring may not be accurate. That's why I emphasize the importance of silent log source monitoring. Furthermore, I will share the rule I use to monitor silent log sources, and you can advise me if any changes are needed.

rule Silent_Log_Monitoring {
    meta:
        version = "1.5"
        description = "Asset Visibility"
        severity = "Medium"

    events:
        // Windows event logs only; track hosts listed in the reference list.
        $e.metadata.log_type = "WINEVTLOG"
        $e.principal.hostname = $hostname
        $hostname in %VISIBILITY_WATCHLIST

    match:
        // Group events per hostname over a 30-minute window.
        $hostname over 30m

    outcome:
        // Newest event timestamp seen for the host, and when the rule checked it.
        $last_ingested = timestamp.get_timestamp(max($e.metadata.event_timestamp.seconds))
        $last_checked_at = timestamp.get_timestamp(timestamp.current_seconds())
        // Seconds since the newest event; more than 1800s (30m) counts as offline.
        $diff = timestamp.current_seconds() - max($e.metadata.event_timestamp.seconds)
        $status = if($diff > 1800, "Offline", "Online")
        $host = array_distinct($hostname)

    condition:
        $e and $status = "Offline"

}

Log ingest and validation take some time, so no matter the solution, there will always be a delay. As long as you go in with that expectation, the above detection rule with the reference list should work well for your use case.

@dnehoda Do you have a better solution than this?