Has anyone dealt with instances where sample events are showing later event timestamps than the ingested timestamps? If so, any troubleshooting recommendations?
Thank you
Most commonly the reason is that the time zone isn't being parsed, so the event timestamp is assumed to be GMT. This is very common for certain log types like firewalls that don't consider it necessary to report the time zone they're logging in. The firewalls log in "local time", whatever that is, but don't include the time zone in the logs, so Chronicle assumes UTC. This is extremely common.
Best solution is to set local time on the firewalls to GMT.
If that doesn't work for some reason, Google support can change the time zone on the back end, as long as your sources are consistent (i.e. always GMT-5 or something). Send a ticket with your collector ID (the forwarder ID if you're using forwarders; for Bindplane this can be obtained via the dashboard or a stats search against ingestion_metrics), the log type, and the desired time zone.
If your time zones are all over the place, this unfortunately has to be fixed at the parser level, but I recommend normalizing the firewall time zones; it fixes a lot of headaches.
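For reference, the parser-level fix is usually just telling the date extraction which zone the source logs in. A minimal sketch in Chronicle's Logstash-style parser syntax, assuming the raw timestamp was already extracted into a field called ts (the field name, format string and zone are placeholders; check the parser docs for the exact options your date filter supports):

filter {
  # ts was extracted earlier in the parser (e.g. by grok or json)
  date {
    match => ["ts", "yyyy-MM-dd HH:mm:ss"]
    # Interpret the zone-less timestamp in the firewall's actual local zone
    # instead of letting it default to UTC
    timezone => "America/New_York"
    on_error => "no_ts"
  }
}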
Thanks for that input; very insightful. These are actually Defender for Endpoint logs.
@AJMSecOps Use the search below or the native dashboard to find the average delay across the three key timestamps, then review each timestamp to see where the delay is being introduced:
The following timestamps are related to events (definitions: https://cloud.google.com/chronicle/docs/detection/timestamp-definitions):

metadata.event_timestamp - UDM field. Rules and UDM searches use the metadata.event_timestamp field for queries.
metadata.collected_timestamp - UDM field.
metadata.ingested_timestamp - UDM field.

Example UDM search:

target.ip != ""
match:
  principal.ip
outcome:
  $avg_seconds = avg(metadata.event_timestamp.seconds)
  $avg_collected_seconds = avg(metadata.collected_timestamp.seconds)
  $avg_ingested_seconds = avg(metadata.ingested_timestamp.seconds)
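If you'd rather see the delays as single numbers instead of eyeballing the three averages, the same search can subtract the aggregates in the outcome section. This is just a sketch, assuming outcome expressions accept arithmetic between aggregations (the variable names are illustrative):

target.ip != ""
match:
  principal.ip
outcome:
  $avg_collection_delay_sec = avg(metadata.collected_timestamp.seconds) - avg(metadata.event_timestamp.seconds)
  $avg_ingestion_delay_sec = avg(metadata.ingested_timestamp.seconds) - avg(metadata.collected_timestamp.seconds)
  $avg_event_to_ingest_sec = avg(metadata.ingested_timestamp.seconds) - avg(metadata.event_timestamp.seconds)

A negative $avg_event_to_ingest_sec means the event timestamps are running ahead of ingestion time, which usually points back at the time zone or clock issue described above.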