Log-based metrics visualisation issues

I have set up a bunch of log-based metrics to track certain KPIs across our microservice architecture. The specific one I'm looking into at the moment is meant to extract Latency from a specific log label. The logs themselves aren't very complex, and the filter seems to be doing its job - the visualisation, however, is a different story.
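
For context, the definition looks roughly like this (a sketch of the LogMetric resource shape; every name below is made up, not my literal config):

```python
# Rough shape of the metric as a LogMetric resource (what
# projects.metrics.create in the Logging API accepts); all names
# here are placeholders.
log_metric = {
    "name": "service_latency",
    "filter": 'resource.type="k8s_container" AND labels.latency:*',
    # Pull the numeric value out of the log label.
    "valueExtractor": "EXTRACT(labels.latency)",
    "metricDescriptor": {
        "metricKind": "DELTA",        # user-defined log-based metrics are DELTA
        "valueType": "DISTRIBUTION",  # needed for percentile aligners
        "unit": "ms",
    },
}
```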

The problem:
The 99th percentile somehow shows a much higher value than anything that actually appears in the logs. The peaks of the chart also tend to shift by quite a lot with every refresh.

Basic facts:
I've set the alignment window to 1m (sketched in API terms right after this list).
The extracted Latency field ranges from ~100 ms to ~12,000 ms.
No other labels are attached to this metric.
The issue occurs both in the Metrics Explorer and in an external Grafana instance.
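
For what it's worth, in API terms the chart's aggregation corresponds roughly to the following (a sketch using the google-cloud-monitoring Python client; the metric name is a placeholder):

```python
from google.cloud import monitoring_v3

# The 1m alignment window combined with a 99th-percentile aligner,
# as the chart applies it; "service_latency" is a placeholder.
metric_filter = 'metric.type = "logging.googleapis.com/user/service_latency"'
aggregation = monitoring_v3.Aggregation(
    {
        "alignment_period": {"seconds": 60},  # the 1m window
        "per_series_aligner": monitoring_v3.Aggregation.Aligner.ALIGN_PERCENTILE_99,
    }
)
```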

Question(s):
Could it be a "cardinality issue" caused by the large variance in the extracted field?
If not... what am I doing wrong?

Hello @Osterholm,

As stated in this document:

The number of time series in a metric depends on the number of different combinations of label values. The number of time series is called the cardinality of the metric, and it must not exceed 30,000.

Because you can generate a time series for every combination of label values, if you have one or more labels with a high number of values, it isn't difficult to exceed 30,000 time series. You want to avoid high-cardinality metrics.

As the cardinality of a metric increases, the metric can get throttled and some data points might not be written to the metric. Charts that display the metric can be slow to load due to the large number of time series that the chart has to process.
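
If you want to check how many time series the metric actually produces, you can list just the series headers for a recent window and count them. A minimal sketch with the google-cloud-monitoring Python client (project and metric names are placeholders):

```python
import time

from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
)

# HEADERS returns only the time-series identities (no data points),
# which is enough to count distinct label combinations.
series = client.list_time_series(
    request={
        "name": "projects/my-project",  # placeholder project
        "filter": 'metric.type = "logging.googleapis.com/user/service_latency"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.HEADERS,
    }
)
print(sum(1 for _ in series), "time series in the last hour")
```

Since you mention the metric has no labels attached, its cardinality should be 1, so if this prints a small number, the 30,000-series limit is unlikely to be the cause.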


You can confirm that logs are continuously flowing and that they meet the metric's filter criteria. You might also want to review the metric configuration to ensure the filter and aggregation settings match your requirements.
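
On the aggregation side, one thing worth keeping in mind: if this is a distribution-type log-based metric, the backend stores histogram bucket counts rather than raw values, and a percentile aligner such as ALIGN_PERCENTILE_99 has to estimate the percentile from bucket boundaries. That estimate can land above any value that was actually logged. A toy illustration of the effect (plain Python, with made-up buckets and samples):

```python
import bisect

# Hypothetical exponential bucket upper bounds (ms), in the spirit of the
# buckets a distribution-type log-based metric might use.
bounds = [100, 200, 400, 800, 1600, 3200, 6400, 12800, 25600]

# Toy latency samples (ms); nothing above ~12,000 ms is ever "logged".
samples = [120, 250, 900, 4000, 11800, 300, 150, 700, 2500, 11900]

# Bucket the samples; the bucket counts are all the backend retains.
counts = [0] * (len(bounds) + 1)
for s in samples:
    counts[bisect.bisect_left(bounds, s)] += 1

def percentile_from_buckets(counts, bounds, pct):
    """Estimate a percentile by linear interpolation inside the bucket
    that contains the target rank."""
    target = pct / 100 * sum(counts)
    cumulative = 0
    for i, c in enumerate(counts):
        cumulative += c
        if c and cumulative >= target:
            lower = bounds[i - 1] if i > 0 else 0
            upper = bounds[i] if i < len(bounds) else 2 * lower
            return lower + (target - (cumulative - c)) / c * (upper - lower)

print(max(samples))                                 # 11900
print(percentile_from_buckets(counts, bounds, 99))  # 12480.0, above any sample
```

Interpolation toward a bucket's upper edge could explain a p99 that sits above anything in the logs, and since each 1m window re-estimates the percentile from whichever buckets happen to be populated, it may also contribute to the peaks moving between refreshes.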