Hi Team,
Can anyone provide some insight into how we can create an alert if a log source (let's assume a principal.hostname) stops sending logs for 30 minutes?
Hi @srijankafle, you can achieve this by using ingestion notifications; you can find the documentation here:
https://cloud.google.com/chronicle/docs/ingestion/ingestion-notifications-for-health-metrics
Hi @tameri,
What I understand from those docs is that you can monitor and alert if a Log Type stops sending logs.
Assuming I have multiple O365 feeds ingested into my Chronicle instance and use the namespace field to identify my organization's sub-tenants, I would only be able to detect when both feeds stop forwarding logs.
This does not help me achieve what I am looking for. If we were able to get an alert based on a particular field (the namespace in this example, or principal.namespace), that would be great.
Hello srijankafle,
I had the same need for individual log source (LS) monitoring, and I was also unsuccessful using only the SIEM.
As you said, the health metrics are not granular enough (they cannot go deeper than a whole log_type, so there is no way to detect the absence of logs from individual Domain Controllers, for example).
The workaround I worked on is to use BigQuery (events table).
By defining a tuple "hostname :: log_type" as a log source ID and grouping the logs by this field, you can get the last log time for each individual log source, so you can detect which ones are not sending logs correctly (by evaluating the last log time against a threshold).
The difficulty with this workaround is that the field containing the hostname of a log source varies depending on the log_type (most of the time it resides in principal.hostname, but it can also be in intermediary, for example), so you have to establish a mapping for your logs to know which field to query when building the logsource_id (hostname :: log_type).
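To make the idea concrete, here is a minimal sketch of that query, assuming the standard Chronicle data lake export (a `datalake.events` table exposing `metadata.log_type`, `principal.hostname` and `metadata.event_timestamp.seconds`); the project ID and the 30-minute threshold are placeholders, and you would swap the hostname field per log_type according to the mapping described above:

```python
from google.cloud import bigquery

# Assumptions: the Chronicle data lake export lives in `datalake.events` and
# exposes metadata.log_type, principal.hostname and metadata.event_timestamp.seconds.
# Verify the dataset and column names against your own export.
PROJECT = "your-gcp-project"  # placeholder

QUERY = f"""
SELECT
  CONCAT(principal.hostname, ' :: ', metadata.log_type) AS logsource_id,
  MAX(TIMESTAMP_SECONDS(metadata.event_timestamp.seconds)) AS last_seen
FROM `{PROJECT}.datalake.events`
WHERE principal.hostname IS NOT NULL
GROUP BY logsource_id
HAVING last_seen < TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 MINUTE)
"""

client = bigquery.Client(project=PROJECT)
for row in client.query(QUERY).result():
    # Each row is a log source whose newest event is older than the threshold.
    # In practice you would also restrict the scanned time range (partition filter)
    # and swap principal.hostname per log_type, as described above.
    print(f"{row.logsource_id} has been silent since {row.last_seen}")
```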
Also, if you want to automate the missing-logs detection, you'll need something like a Cloud Function to run the BigQuery queries and push the faulty log sources it finds back to Chronicle, so you can then create a rule based on these custom logs and send alerts to your SOAR.
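As a rough sketch of that automation (the Ingestion API endpoint, OAuth scope, customer ID and custom log type below are placeholders to confirm for your own tenant, and the query helper is just a stub standing in for the previous snippet):

```python
import json

import functions_framework
from google.auth.transport.requests import AuthorizedSession
from google.oauth2 import service_account

# Assumptions: regional Ingestion API endpoint, OAuth scope, customer ID and
# log type are placeholders; confirm them against your Chronicle tenant.
INGESTION_URL = "https://malachiteingestion-pa.googleapis.com/v2/unstructuredlogentries:batchCreate"
SCOPES = ["https://www.googleapis.com/auth/malachite-ingestion"]


def find_silent_logsources():
    """Placeholder: wrap the BigQuery query from the previous sketch and
    return a list of (logsource_id, last_seen) tuples."""
    return []


@functions_framework.http
def report_silent_logsources(request):
    """HTTP-triggered entry point, e.g. invoked every 30 minutes by Cloud Scheduler."""
    silent = find_silent_logsources()
    if not silent:
        return "all log sources healthy", 200

    credentials = service_account.Credentials.from_service_account_file(
        "ingestion-sa.json", scopes=SCOPES  # Chronicle ingestion service account key
    )
    session = AuthorizedSession(credentials)
    body = {
        "customer_id": "YOUR_CHRONICLE_CUSTOMER_ID",
        "log_type": "YOUR_CUSTOM_LOG_TYPE",  # must map to a parser in your tenant
        "entries": [
            {"log_text": json.dumps({"logsource_id": ls, "last_seen": str(ts)})}
            for ls, ts in silent
        ],
    }
    response = session.post(INGESTION_URL, json=body)
    return f"pushed {len(silent)} silent log sources", response.status_code
```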
I cannot help you further. Even though it's crucial to control the perimeter over which you apply detections, imho it's a lot of complexity for a feature that should be available natively. So this topic is on stand-by on my side at the moment 😕
Can you use the feed_name filter in Cloud Monitoring to detect if an individual O365 feed goes down?
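Something along these lines (using the Cloud Monitoring Python client) is what I have in mind; the metric type and the feed_name label are assumptions on my part, so check the exact names in Metrics Explorer for your tenant:

```python
import time

from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/your-gcp-project"  # placeholder

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 1800}}  # last 30 min
)

# Assumption: the Chronicle ingestion metric and its feed_name label.
# Confirm the exact metric type and label names in Metrics Explorer.
results = client.list_time_series(
    request={
        "name": project_name,
        "filter": (
            'metric.type = "chronicle.googleapis.com/ingestion/log/record_count" '
            'AND metric.labels.feed_name = "o365-tenant-a"'  # hypothetical feed name
        ),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

if not list(results):
    print("No records ingested for this feed in the last 30 minutes")
```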
Hi @adam9,
I will give it a try, but there are also logs coming in from collectors and other agent-based sources. We need to be able to categorize agents as critical and then get alerts if any of them go silent.
I assume that since the agents we use are not native agents (nxlog, Wazuh, or Cribl), this is not provided by Chronicle. But this seems like a big limitation for Chronicle, since almost every other platform provides equivalent information.