
Units for Cloud SQL Database - Memory utilization in GCP

I have set an alert for "Cloud SQL Database - Memory utilization" with a threshold value of 0.8, assuming the range to be from 0-1.0. However, I frequently receive alerts. Is this something that should be set in terms of percentages? If I want to set the threshold at 80%, should I simply input 80?

Solved

3 REPLIES

In Google Cloud, alert thresholds for utilization metrics like this one are expressed as a fraction from 0 to 1 rather than as a percentage. So if you want to alert at 80%, the threshold value should be 0.8, not 80, because the metric reports memory usage as a fraction of total memory.

Your setup is therefore correct: a threshold of 0.8 for "Cloud SQL Database - Memory utilization" corresponds to 80% of total memory.
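
If you manage alerting as code, here is a minimal sketch of where that 0.8 goes when the condition is created with the google-cloud-monitoring Python client (v2+). The project ID, display names, duration, and alignment settings below are illustrative placeholders, not values from your setup:

```python
from google.cloud import monitoring_v3
from google.protobuf import duration_pb2

project_id = "my-project"  # placeholder -- replace with your project ID
client = monitoring_v3.AlertPolicyServiceClient()

# Condition: mean memory utilization above 0.8 (i.e. 80%) for 5 minutes.
condition = monitoring_v3.AlertPolicy.Condition(
    display_name="Cloud SQL memory utilization > 80%",
    condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
        filter=(
            'resource.type = "cloudsql_database" AND '
            'metric.type = "cloudsql.googleapis.com/database/memory/utilization"'
        ),
        comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
        threshold_value=0.8,  # fraction of total memory, not 80
        duration=duration_pb2.Duration(seconds=300),
        aggregations=[
            monitoring_v3.Aggregation(
                alignment_period=duration_pb2.Duration(seconds=300),
                per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_MEAN,
            )
        ],
    ),
)

policy = monitoring_v3.AlertPolicy(
    display_name="Cloud SQL high memory utilization",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.AND,
    conditions=[condition],
)

created = client.create_alert_policy(
    name=f"projects/{project_id}", alert_policy=policy
)
print("Created policy:", created.name)
```

Notification channels are omitted for brevity; in practice you would attach them to the policy before creating it.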

Thank you for your response!

By the way, I have a service running on the shared-core machine type "db-f1-micro" with the minimum 10 GB HDD configuration. Even when the service is not in use, the database's memory utilization is constantly over 80%. Is this due to the small memory size, and is it unavoidable?

The "db-f1-micro" machine type in Google Cloud is a shared-core machine type, which means it has access to a shared pool of physical CPU performance. This machine type comes with 614 MB of memory, which is quite small.

High memory utilization, especially on a machine type with a relatively small amount of memory like db-f1-micro, could indeed be due to the small memory size. The memory of a database is used for multiple purposes such as caching data, query execution, and maintaining connections. Even if the service is not currently in use, it's possible that the database is using memory for background processes or maintaining connections.

This behavior may be unavoidable to an extent with this machine type and its limited memory. However, it's always a good idea to understand what is causing the high memory usage. For example, if you have a lot of idle connections, you might be able to reduce memory usage by closing them; or if a lot of memory is going to caching, you might be able to tune the database's caching settings, as in the sketch below.
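
As a starting point for that investigation, here is a minimal sketch assuming a MySQL-based Cloud SQL instance and the pymysql client, connected for example through the Cloud SQL Auth Proxy; the connection details are placeholders:

```python
import pymysql

# Placeholder connection details -- replace with your instance's values
# (e.g. 127.0.0.1 when going through the Cloud SQL Auth Proxy).
conn = pymysql.connect(
    host="127.0.0.1", user="root", password="***", database="mysql"
)

with conn.cursor() as cur:
    # How many connections are open, and how many are sitting idle?
    cur.execute("SHOW STATUS LIKE 'Threads_connected'")
    print(cur.fetchone())
    cur.execute(
        "SELECT COUNT(*) FROM information_schema.processlist "
        "WHERE command = 'Sleep'"
    )
    print("idle connections:", cur.fetchone()[0])

    # How large is the InnoDB buffer pool? On db-f1-micro this alone can be
    # a sizable share of the 614 MB of RAM.
    cur.execute("SHOW VARIABLES LIKE 'innodb_buffer_pool_size'")
    print(cur.fetchone())

conn.close()
```

If the buffer pool turns out to be the main consumer, note that on Cloud SQL such settings are changed via database flags on the instance rather than by editing my.cnf, and not every flag is supported on every tier.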

If your application can tolerate the potential for increased latency due to the small memory size and shared CPU performance, then this might not be a problem. However, if you're seeing performance issues, or if you're worried about running out of memory, you might want to consider moving to a machine type with more memory.
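
If you do decide to move up, the tier can be changed in place (the instance restarts during the change). Below is a minimal sketch using the Cloud SQL Admin API through the google-api-python-client library; the project, instance name, and target tier (db-g1-small, a shared-core tier with 1.7 GB of RAM) are placeholders, and application default credentials are assumed:

```python
from googleapiclient import discovery

# Placeholders -- replace with your project and instance names.
project = "my-project"
instance = "my-sql-instance"

service = discovery.build("sqladmin", "v1beta4")

# Patch only the tier; other settings are left untouched.
# Note: changing the tier restarts the instance.
body = {"settings": {"tier": "db-g1-small"}}
request = service.instances().patch(project=project, instance=instance, body=body)
operation = request.execute()
print(operation["name"], operation["status"])
```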