Monitoring Cloud SQL's Point-in-Time Recovery Storage Usage?

Hello,

I read in this answer that the Cloud Storage bucket used to store WAL archives for the "Point-in-Time Recovery" (PITR) feature cannot be accessed directly by users.

Is there any way, then, to monitor the size of the archived WAL logs? My main goal is to keep track of the extra charges that come with enabling the Point-in-Time Recovery feature.

For extra context, I am using Cloud SQL for PostgreSQL, and the archived WAL logs have been stored in Cloud Storage since the PITR feature was enabled earlier this week.

Thanks beforehand.


Monitoring the storage usage of archived WAL logs for PITR in Cloud SQL is essential for managing costs and ensuring adequate storage capacity. While the Cloud Storage bucket used for WAL archives is not directly accessible to users, there are still effective ways to track storage usage and set up alerts so you aren't surprised by charges.

Verify the Metric First

Before relying on the bytes_used_by_data_type metric (full type: cloudsql.googleapis.com/database/disk/bytes_used_by_data_type), confirm in Metrics Explorer that it is available for your instance and actually reports archived WAL log storage usage.
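If the metric is present, a minimal sketch of the filter string you would pass to the Monitoring API could look like this. Note the `WAL_LOG` label value is an assumption here; check which `data_type` values your instance actually reports in Metrics Explorer:

```python
# Sketch: build a Cloud Monitoring time-series filter for the
# Cloud SQL bytes_used_by_data_type metric.
# ASSUMPTION: "WAL_LOG" as the data_type label value -- confirm the
# real values in Metrics Explorer before relying on this.

METRIC_TYPE = "cloudsql.googleapis.com/database/disk/bytes_used_by_data_type"

def wal_usage_filter(database_id: str, data_type: str = "WAL_LOG") -> str:
    """Return a Monitoring API filter string scoped to one instance."""
    return (
        f'metric.type = "{METRIC_TYPE}"'
        f' AND resource.labels.database_id = "{database_id}"'
        f' AND metric.labels.data_type = "{data_type}"'
    )

print(wal_usage_filter("my-project:my-instance"))
```

The same filter string works in Metrics Explorer's filter field, so you can validate it interactively before wiring it into any automation.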

Clarify API and Console Usage

The Cloud Monitoring API offers a programmatic way to track PITR storage usage. For those less comfortable with APIs, the Google Cloud Console's Metrics Explorer provides a user-friendly interface for browsing metrics, building dashboards, and configuring alerts; it is usually the best starting point for basic monitoring tasks.
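To illustrate the API route, here is a sketch of the request payload shape that the `google-cloud-monitoring` client's `list_time_series` method accepts. The client itself is not imported; only the request dict is built, and the metric details should be verified against your project:

```python
# Sketch of the request dict one would pass to
# MetricServiceClient.list_time_series in the google-cloud-monitoring
# client library. Only the shape of the request is shown here.
import time

def build_list_time_series_request(project_id: str,
                                   lookback_seconds: int = 3600) -> dict:
    """Request covering the last `lookback_seconds` of metric data."""
    now = int(time.time())
    return {
        "name": f"projects/{project_id}",
        "filter": (
            'metric.type = '
            '"cloudsql.googleapis.com/database/disk/bytes_used_by_data_type"'
        ),
        "interval": {
            "end_time": {"seconds": now},
            "start_time": {"seconds": now - lookback_seconds},
        },
    }

request = build_list_time_series_request("my-project")
print(request["name"])  # -> projects/my-project
```

In a real script you would pass this dict to the client and iterate the returned time series to read the point values.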

Examples and Tutorials

Google Cloud provides a wealth of examples and tutorials to guide users through the process of using monitoring tools and setting up alerts. These resources can be invaluable for those new to cloud monitoring and alert management.

Cost Management

Regularly review billing and usage reports to stay informed about PITR storage usage and identify any unexpected increases in costs. This proactive approach helps manage expenses and optimize storage utilization.
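As a quick illustration of the billing-review side, a back-of-the-envelope estimator can translate observed byte counts into an approximate monthly charge. The $0.08/GB-month rate below is a made-up placeholder; use the per-region rate from the current Cloud SQL pricing page:

```python
# Back-of-the-envelope PITR log storage cost estimator.
# ASSUMPTION: price_per_gb_month is a placeholder rate -- look up the
# real per-region price on the Cloud SQL pricing page.

def estimate_monthly_cost(bytes_used: int, price_per_gb_month: float) -> float:
    """Convert a byte count to an approximate monthly storage charge."""
    gib = bytes_used / (1024 ** 3)
    return round(gib * price_per_gb_month, 2)

# Example: 50 GiB of archived logs at an assumed $0.08/GB-month
print(estimate_monthly_cost(50 * 1024 ** 3, 0.08))  # -> 4.0
```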

Because cloud services and their APIs evolve continually, refer to the latest Google Cloud documentation for the most current details. Used together, these resources and recommendations let you monitor PITR storage usage effectively and keep cloud storage costs under control.

Hi @ms4446 ,

Thanks for the answer, but I don't think I fully understand your explanation regarding my concerns. Is the bytes_used_by_data_type metric (specifically its WAL value) sufficient on its own to monitor the Cloud Storage usage of the PITR feature?

Or do I instead have to check the usage manually whenever the bill arrives? I wasn't able to find any documentation that mentions PITR storage usage.

Thanks beforehand

Hi @stevensim ,

Indeed, whether the bytes_used_by_data_type metric covers archived WAL log storage should be verified thoroughly. Consult the latest Cloud SQL and Cloud Monitoring documentation, or contact Google Cloud support directly, to confirm the metric provides the information you need.

API Usage

In practice, the Cloud Monitoring API should be used to retrieve and analyze existing metric data for archived WAL log storage usage. The documentation and examples provided by Google Cloud should be consulted for the correct implementation details.

Practical Approach

Given the potential for metric limitations, it's advisable to consider a combined approach of monitoring general storage metrics and reviewing billing reports to gain a comprehensive understanding of the costs associated with PITR. This dual approach provides a broader perspective on storage usage.

Setting Up Alerts

Setting up alerts based on storage usage thresholds remains a valid strategy for proactive cost management. However, the implementation depends on the availability of a specific metric to track PITR WAL storage usage. If the metric is not available, alternative methods, such as monitoring general storage metrics and setting alerts accordingly, may be necessary.
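For the alerting route, here is a hedged sketch of an alert-policy body as accepted by the Monitoring alertPolicies.create REST method. The field names follow the AlertPolicy resource, but verify them against the current API reference, and treat the filter and threshold as illustrative only:

```python
# Hedged sketch of an AlertPolicy body for Cloud Monitoring's
# alertPolicies.create REST method. Verify field names against the
# current API reference; filter and threshold are illustrative.

def wal_storage_alert_policy(threshold_bytes: int) -> dict:
    """Alert when Cloud SQL data-type storage exceeds threshold_bytes."""
    return {
        "displayName": "Cloud SQL WAL storage above threshold",
        "combiner": "OR",
        "conditions": [{
            "displayName": "bytes_used_by_data_type too high",
            "conditionThreshold": {
                "filter": (
                    'metric.type = "cloudsql.googleapis.com/database/disk/'
                    'bytes_used_by_data_type"'
                ),
                "comparison": "COMPARISON_GT",
                "thresholdValue": threshold_bytes,
                "duration": "300s",
            },
        }],
    }

policy = wal_storage_alert_policy(10 * 1024 ** 3)  # alert above 10 GiB
print(policy["displayName"])
```

The same policy can be created interactively in the Console under Monitoring > Alerting, which is often easier for a one-off threshold.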

In short: the bytes_used_by_data_type metric is a potential avenue for monitoring PITR storage usage, but verifying its availability and specificity comes first. Combining general storage metrics with billing reports gives the most reliable and comprehensive picture of the storage costs associated with PITR.

As per the documentation, there should be no extra cost for log storage in Cloud Storage, but I would recommend checking with GCP support regarding cost and PITR monitoring. Also, once PITR stores its logs in Cloud Storage, `bytes_used_by_data_type` will report 0 for WAL, as no WAL bytes are stored on the instance itself.

As per doc:
For instances having write-ahead logs stored in Cloud Storage, the logs are stored in the same region as the primary instance. This log storage (up to 35 days for Cloud SQL Enterprise Plus edition and seven days for Cloud SQL Enterprise edition, the maximum length for point-in-time recovery) generates no additional cost per instance.