Hi everyone,
I am working on SecOps Native Dashboards and encountered an issue while using the avg function in YARA-L.
Below is my query:
metadata.vendor_name = "ABC"
$uid = about.labels["uid"]
match:
  $uid
outcome:
  $duration = avg(duration.seconds)
The results are automatically rounded down (floored) to integers, but I want them to retain floating-point precision.
I tried the window.avg function, and also applied math.round, as shown below:
metadata.vendor_name = "ABC"
$uid = about.labels["uid"]
match:
  $uid
outcome:
  $duration = math.round(avg(duration.seconds), 3)
However, the results are still rounded down to integers.
Why is this happening? Is there any alternative or solution to achieve the desired results?
Thanks,
Prashant Nakum
I'm not seeing this behavior in the dashboard currently. The system I ran this on is comparable to a production tenant. You can see the same search being run, first with avg...
And then with window.avg and math.round to 3 decimals. I'm not aware of any update that would change this behavior, and I'm not sure what to tell you, but based on this example I'm not seeing the behavior you are describing.
Thanks for your quick response!
In my scenario, I need to calculate the average duration for each uid. To achieve this, I included uid in the match section and calculated the average of the duration in the outcome section.
However, the results (as shown in the screenshot) are rounded down.
The actual values I want are shown in the screenshot below:
Is there any alternative solution to retain the floating-point precision? Any suggestions would be greatly appreciated.
Regards,
Prashant Nakum
Hi @prashant_nakum,
Very weird that it's returning a whole number. Does the below work by any chance?
Thanks @AymanC,
I tried this approach as well, but I’m still getting the same result as before.
I also tried the following approach, but the result remains the same as before.
Are there any other alternative solutions to achieve the desired result?
Regards,
Prashant Nakum
Alright, I did a bit of a deep dive on seconds and nanos, which I believe is where the issue resides. The duration piece, and the fact that the calculation is done on a time value, was the crucial clue we were missing about what you are trying to achieve. It also looks like you are trying to bring across something like Splunk's transaction command, which is fine.
While we display the time in the UI down to the millisecond, we use two fields to represent it: seconds and nanos. The relationship between these two fields is the piece we were missing. The seconds field holds the epoch time as an integer, and the nanos field holds the fractional remainder of that time multiplied by 1,000,000,000.
So, to assemble a time down to the millisecond (3 decimal places), we need to recombine seconds and nanos, which can be done with this formula:
metadata.ingested_timestamp.seconds + (metadata.ingested_timestamp.nanos / 1000000000)
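Outside YARA-L, the same arithmetic looks like this in plain Python (the seconds/nanos values are just illustrative, not taken from a real event):

```python
# Hypothetical example values; a real UDM event supplies these two fields.
seconds = 1743594157        # epoch time, integer part of the timestamp
nanos = 452000000           # fractional remainder, scaled by 1,000,000,000

# Recombine into a single floating-point timestamp with sub-second precision.
timestamp = seconds + (nanos / 1_000_000_000)
print(timestamp)  # 1743594157.452
```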
If we apply that to a query or dashboard, I think this will get you going in the direction you are trying to go. I have a few sample outputs here based on computing a duration using ingested and collected timestamps.
metadata.event_type = "NETWORK_HTTP"
network.session_id = "K-JwS3Wio0KILemx4lQ-AA"
// Example dates
// 2025-04-02T11:42:37.452
// 2025-04-02T12:00:57.603
match:
  network.session_id
outcome:
  $ingest_stamp = timestamp.get_timestamp(max(metadata.ingested_timestamp.seconds))
  $event_stamp = timestamp.get_timestamp(max(metadata.event_timestamp.seconds))
  $ingest = max(metadata.ingested_timestamp.seconds + (metadata.ingested_timestamp.nanos / 1000000000))
  $event = max(metadata.event_timestamp.seconds + (metadata.event_timestamp.nanos / 1000000000))
  $duration = avg((metadata.ingested_timestamp.seconds + (metadata.ingested_timestamp.nanos / 1000000000)) - (metadata.event_timestamp.seconds + (metadata.event_timestamp.nanos / 1000000000)))
  $duration_round = math.round(avg((metadata.ingested_timestamp.seconds + (metadata.ingested_timestamp.nanos / 1000000000)) - (metadata.event_timestamp.seconds + (metadata.event_timestamp.nanos / 1000000000))), 2)
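As a plain-Python sketch of what $duration and $duration_round compute (assuming UTC epoch values for the two example dates; the nanos values are taken from the millisecond parts):

```python
# Assumed UTC epoch seconds/nanos for the two example timestamps.
event_seconds, event_nanos = 1743594157, 452000000      # 2025-04-02T11:42:37.452 UTC
ingest_seconds, ingest_nanos = 1743595257, 603000000    # 2025-04-02T12:00:57.603 UTC

# Recombine each timestamp into a float, then subtract to get the duration.
event = event_seconds + event_nanos / 1_000_000_000
ingest = ingest_seconds + ingest_nanos / 1_000_000_000

duration = ingest - event          # 18 minutes 20.151 seconds
print(round(duration, 2))  # 1100.15
```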
Thanks for your response!
I’m not calculating the duration from the ingested and event timestamps. Instead, I have the network.session_duration.seconds field available, which is an integer number of seconds, and I need to compute its average directly.
Both fields, uid and network.session_duration.seconds, are available. My goal is to calculate avg(network.session_duration.seconds) grouped by uid. However, as mentioned earlier, the values are rounded down, and I want them in floating-point format.
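For illustration, outside YARA-L: the floor behavior described here looks like integer division over an integer field. In plain Python terms (the duration values are made up):

```python
durations = [3, 4, 4]  # hypothetical per-event session durations, whole seconds

# Integer (floor) division discards the fractional part, like the behavior described:
floor_avg = sum(durations) // len(durations)
print(floor_avg)  # 3

# True division keeps floating-point precision:
float_avg = sum(durations) / len(durations)
print(round(float_avg, 3))  # 3.667
```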
Regards,
Prashant Nakum
Sorry, to clarify: while I used those timestamps, the same concept can be applied to fields like network.session_duration.seconds and network.session_duration.nanos.
network.session_duration.seconds > 0
network.session_id = "CsfELbFMoai1SMbBl"
outcome:
  $network_session_duration_seconds = network.session_duration.seconds
  $network_session_duration_nanos = network.session_duration.nanos
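To mirror the match/outcome grouping in plain Python (session ids and seconds/nanos values here are made up, not real data):

```python
# Hypothetical events: (session_id, duration_seconds, duration_nanos)
events = [
    ("CsfELbFMoai1SMbBl", 3, 500000000),
    ("CsfELbFMoai1SMbBl", 4, 250000000),
]

# Group by session id (the "match" step), recombining seconds and nanos
# into floating-point durations.
totals = {}
for sid, secs, nanos in events:
    totals.setdefault(sid, []).append(secs + nanos / 1_000_000_000)

# Average per group (the "outcome" step) keeps sub-second precision.
for sid, values in totals.items():
    print(sid, round(sum(values) / len(values), 3))  # CsfELbFMoai1SMbBl 3.875
```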
I went back and expanded on this, and also laid it into a dashboard chart (see below) to make sure there isn't a difference between search and dashboarding (there shouldn't be). I got decimal points, then rounding as expected, using just seconds, and a bit more precision with nanos if needed, so I'm not sure what to tell you on this. At this point I would suggest opening a support ticket if you are certain that you should be getting non-integer results from your data set.
The other thing, and this is a bit speculative, is that