
How do we monitor subnet usage in GCP

Task Description: We have a Google Cloud Platform (GCP) project where we need to monitor the IP address utilization of our subnets continuously. The Network Analyzer tool in GCP provides insights on the subnet allocation ratios and priorities. We want to create a solution that retrieves this data, sends metrics to Cloud Monitoring, and creates an alarm to alert us when the IP utilization reaches a certain threshold.

The solution should include:

  1. A Cloud Function (or any other suitable GCP service) that periodically retrieves the subnet data and allocation ratios from the Network Analyzer API.
  2. The Cloud Function should process the retrieved data and send custom metrics to Cloud Monitoring, including the subnet name, allocation ratio, and priority.
  3. An alert policy should be created in Cloud Monitoring to monitor the custom metrics and trigger an alarm when the allocation ratio exceeds a specified threshold (e.g., 75%) or when the priority changes from LOW or MEDIUM to HIGH or CRITICAL.
  4. The solution should be implemented using Infrastructure as Code (IaC) principles, preferably using Terraform or Google Cloud Deployment Manager. The IaC script should provision the necessary resources, such as the Cloud Function, Cloud Monitoring metrics, and alert policy.

Hello @vinay1469, welcome to the Google Cloud Community.

It looks like you are asking for an end-to-end, complex, complete solution. Let me ask you: what have you done so far with those requirements? Any function code? Maybe an MQL query? Anything? Don't get me wrong, we are always happy to help, but your question looks like a typical task for a Cloud Architect, and we are not going to build a ready-to-go solution without a single attempt from your end.

So please describe what you've done so far to achieve this task and where you ran into issues, and we will be more than happy to help or guide you in this matter.

--
cheers,
DamianS
LinkedIn medium.com Cloudskillsboost

Hey @DamianS 
Sorry for not providing the full context.
This is the script I was using for the Cloud Function, but I didn't have much luck:

import logging

from google.cloud import logging as cloud_logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Project and metric details
PROJECT_ID = "your-project-id"
METRIC_NAME = "criticalHighNetworkIssue"
DESCRIPTION = "Critical or High Impact insight from Network Analyzer"

# Log filter matching Network Analyzer reports with CRITICAL or HIGH priority
LOG_FILTER = (
    'LOG_ID("networkanalyzer.googleapis.com%2Fanalyzer_reports") '
    'AND (jsonPayload.priority="CRITICAL" OR jsonPayload.priority="HIGH") '
    'AND jsonPayload.type="ERROR"'
)


def create_log_metric(request):
    """HTTP-triggered entry point: create the log-based metric."""
    logging_client = cloud_logging.Client(project=PROJECT_ID)

    # The high-level client manages log-based metrics through Client.metric();
    # it has no `metrics.create` attribute, which made the original call fail.
    metric = logging_client.metric(
        METRIC_NAME, filter_=LOG_FILTER, description=DESCRIPTION
    )

    try:
        if not metric.exists():
            metric.create()
        logger.info("Log metric created: %s", metric.name)
        return f"Log metric created: {metric.name}"
    except Exception as e:
        logger.error("Error creating log metric: %s", e)
        return f"Error creating log metric: {e}"


def monitor_logs(event, context):
    """Event-triggered entry point: list log entries matching the metric's filter."""
    logging_client = cloud_logging.Client(project=PROJECT_ID)

    # Reload the metric to pick up its current server-side filter
    metric = logging_client.metric(METRIC_NAME)
    metric.reload()

    # Query the logs using the metric's filter and process the entries
    for entry in logging_client.list_entries(filter_=metric.filter_):
        logger.info("Critical or High Impact insight: %s", entry.payload)

    return "Log monitoring completed successfully."

I've also gone through some Medium posts where people implemented this via the SDK and were able to create log-based metrics and an alarm. I want to automate this process with Terraform so I can roll it out to all the existing clusters. So what is the best and easiest way to get subnet IP monitoring?
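For the Terraform side, I was thinking of something along these lines, i.e. a log-based metric over the Network Analyzer reports plus an alert policy that fires whenever a matching entry is logged. This is only a sketch I haven't verified end to end; the resource names are placeholders and the notification channel list still needs to be filled in:

```hcl
# Log-based metric counting CRITICAL/HIGH Network Analyzer insights
resource "google_logging_metric" "network_analyzer_critical" {
  name   = "criticalHighNetworkIssue"
  filter = <<-EOT
    LOG_ID("networkanalyzer.googleapis.com%2Fanalyzer_reports")
    AND (jsonPayload.priority="CRITICAL" OR jsonPayload.priority="HIGH")
    AND jsonPayload.type="ERROR"
  EOT

  metric_descriptor {
    metric_kind = "DELTA"
    value_type  = "INT64"
  }
}

# Alert whenever at least one matching log entry appears in a 5-minute window
resource "google_monitoring_alert_policy" "network_analyzer_alert" {
  display_name = "Network Analyzer critical/high insights"
  combiner     = "OR"

  conditions {
    display_name = "Critical or High Network Analyzer insight logged"
    condition_threshold {
      filter          = "metric.type=\"logging.googleapis.com/user/${google_logging_metric.network_analyzer_critical.name}\" AND resource.type=\"global\""
      comparison      = "COMPARISON_GT"
      threshold_value = 0
      duration        = "0s"
      aggregations {
        alignment_period   = "300s"
        per_series_aligner = "ALIGN_SUM"
      }
    }
  }

  notification_channels = [] # add notification channel IDs here
}
```

Since both resources live in Terraform, rolling this out to all existing projects would just mean applying the module once per project.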
