I set up a GCP aggregated sink from the whole organization into a separate project.
Further, I can view events from the entire organization through the Logs Explorer, but only if I manually specify "Refine scope" -> "Scope by storage" -> "MYNAMESTORAGE/_AllLogs".
But I cannot select the aggregated receiver when I create an Alerting Policy: there is no "Scope by storage" option, and the filter runs against the project's own logs, not against the bucket that aggregates the logs of the entire organization.
Please tell me how to configure an Alerting Policy based on an aggregated sink?
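For reference, the setup looks roughly like this (the organization ID, project and bucket names below are placeholders, not my real ones):

gcloud logging buckets create MYNAMESTORAGE \
  --project=central-logging-project --location=global

gcloud logging sinks create org-aggregated-sink \
  logging.googleapis.com/projects/central-logging-project/locations/global/buckets/MYNAMESTORAGE \
  --organization=123456789012 --include-children

(plus granting the sink's writer identity logging.bucketWriter on the destination bucket).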
Can nobody help me?
Hello @SergeyN , log-based alerts are defined based on log filters. Can you provide the description of the log filter that you used? Can you see any logs when you press the "PREVIEW LOGS" button?
Thank you
@leoy , please read my questions more carefully; all the information is already there.
I set up an organization-level aggregated sink and forwarded it to another project, XX. After that I go to the XX project and set up my filter, but it only works if I switch to the storage that I previously specified as the destination for the aggregated sink ("Scope by storage" -> "MYNAMESTORAGE/_AllLogs") - then it works.
How can I make the alerting mechanism similarly use not the project's default storage, but the bucket I created where all the aggregated log data is collected?
@SergeyN , I apologize for the displeasure caused. I was misled by the unfamiliar term "aggregated receiver" that you previously used. This is why I asked for more information. Looking at the current version of the Alert Policy UI, I see the following (screenshots attached):
This is why I was asking for the Alert query. When you define a scope in Log Explorer before you start creating an alert, you get the following notification in the second step:
(I redacted my scope in the screenshot)
This should help to define an alerting policy. If you do not see the expected logs in the preview when defining the policy, please post the query. In general, the preview should work exactly like a Logs Explorer query and display the same logs for the same query filter and scope.
@leoy Thank you. This solution doesn't work for me. A filter of the form
logName: "projects/myname/logs/cloudaudit.googleapis.com%2Factivity"
does not indicate in any way that the logs should be taken from the user-defined storage (where the aggregated logs of the entire organization are stored) instead of the _Required and _Default buckets.
I would need to explicitly specify something like
source: "projects/myname/locations/global/buckets/mybucketname/views/_AllLogs"
but I didn't find any such option.
@SergeyN I think you might be confusing the scope and filter definitions. Let me rephrase the problem.
Logs from multiple sources (i.e. having different prefixes before the [LOG_ID] part of the logName field) are ingested into a single logging bucket for aggregation purposes. You would like to define one or more alert policies for these log entries in one of your projects. If the project does not "own" the logging bucket resource, then to view it in Logs Explorer you need to define the scope by storage. However, the values of the logName field for these entries remain the same as before they went through the sink, AFAIK.

So, if you want to define an alert policy for all logs that were generated by resources of a specific project, you will need to set the scope by storage to point to your aggregation bucket and define the filter as something like:
logName: projects/[PROJECT_ID]
See the documentation to learn more about logging query language syntax.
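By the way, to double-check outside the UI that the entries really are in the aggregation bucket, you can read them with gcloud by naming the bucket and view explicitly (the project, bucket and location below are placeholders):

gcloud logging read 'logName:"projects/[PROJECT_ID]"' \
  --project=central-logging-project \
  --bucket=MYNAMESTORAGE --location=global --view=_AllLogs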
Hope it will help.
Of course, let's continue to talk only about "scope"; I'm fine with filters.
Please tell me how to create an alert whose scope is the log bucket I created to collect the aggregated sink at the organization level.
When you create an alert from Logs Explorer, the alert "inherits" the scope you defined in Logs Explorer. In the screenshots that I attached to my previous reply you can see the screen for creating an alert with the "default" (current project) scope in the first screenshot, and step #2 of the flow after the scope was changed in the second screenshot.
So, the order of actions to create an alert for a scope by storage would be:
1. Open Logs Explorer and use "Refine scope" -> "Scope by storage" to select your logging bucket.
2. Enter your query and check in the preview that the expected log entries are shown.
3. Create the alert from Logs Explorer so that it inherits the refined scope.
Please let me know if you have a problem previewing your log entries.
@leoy , why are you repeating banal things to me? I'm trying to explain to you that if the log storage is user-defined, then it is not taken into account in the alert mechanism.
1. Create an aggregated sink at the organization level.
2. Set its destination to a user-defined logging bucket in another project.
3. Try to create at least one alert rule that works based on the previously created user-defined logging storage.
It won't work at the moment. Why?
Even in Logs Explorer, you need to "refine scope" to see the aggregated logs.
@SergeyN I'd like to point out that this forum is a community resource and not an official support channel. While I do want to help you to resolve your problem, I do not have capacity at the moment to fully reconstruct the exact setup of what you are describing and reproduce the problem and the solution. You can open a support ticket if the problem is urgent and you need a dedicated professional resource to address it.
I am trying to understand what exactly does not work for you, but you will need to help me there. Your previous explanations aren't detailed enough. In your last post you wrote
Try to create at least one alert rule that works based on the previously created user-defined logging storage.
When I try to follow this description, it works for me. Here is what I do in the Cloud Console window: I follow the steps I listed above, previewing the log entries under the refined scope before creating the alert.
The alert will be created for the refined scope. I described these steps in detail to show exactly what I did to get the desired result. If these steps do not work for you, please point out at which step you get stuck.
You wrote "The alert will be created for the refined scope.", but most likely you did not complete these steps or did them in a different way.
When I select "Refine scope", when viewing in Logs Explorer I see the logs I need, but when I create an alert, it does not work with the same filter, because the "Refine scope" parameter is not inherited.
Therefore, when you say that you get the desired result, it can be for one of two reasons:
1) you are using the default logging storage in the project, not custom storage (mine is an organization-wide aggregation bucket), or
2) the standard logs of your project happen to match the logs in the user-defined storage.
Below I show screenshots:
- in Logs Explorer I see the logs I need
- when creating an alert, they are no longer found
Hi @SergeyN , thank you for explaining the problem in detail. I researched the support for non-project scopes for log-based alerts. Unfortunately, this functionality is not supported. 🙁 There is work in progress to add scope support to log-based alerts.
If you or your employer have a customer support subscription with Google Cloud, please open a ticket and ask them to create a technical blocker issue about this problem.
I'm frustrated, I hope you can help me)
My company currently does not have technical support, otherwise I would not have written on this forum.
@leoy Could you initiate the development of this functionality?
I have emphasized the importance of the problem; I can only promise to follow up about it. Can you please provide the name of your company here, or mail it to me? All work is prioritized based on customer requests, so having the customer name can help to prioritize this work.
@SergeyN as a temporary workaround you can create log-based metrics on your log bucket and then use the created log-based metrics for defining new alert(s).
This is the best workaround I can propose at the moment.
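A rough sketch of the idea, with placeholder names (the filter here is just an example):

gcloud alpha logging metrics create agg_error_count \
  --description="Errors arriving in the aggregation bucket" \
  --log-filter='severity>=ERROR' \
  --bucket-name="projects/central-logging-project/locations/global/buckets/MYNAMESTORAGE"

The resulting metric is then available to regular metric-based alert policies as the metric type logging.googleapis.com/user/agg_error_count.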
@leoy , I don't understand. How can I set up a single sink (so as not to pay for the logs 3 times) and send the logs to 3 different places:
1) GCS
2) a logging storage bucket
3) Splunk
So far, I only see the possibility of making 3 separate sinks for these purposes and therefore paying for the data 3 times.
Is it possible that this reply was meant for a different post? I will post the answer to your question there.
I'm also experiencing this problem (non-project scopes for log-based alerts). We are aggregating logs from multiple projects via a single sink into a single log bucket. Not being able to create alerts on events in this bucket seems weird.
@amojamo does the workaround I previously posted not work for you?
Hey - no, but I might have done something wrong. I tried to create the logging metric as in the alpha docs, declaring a bucket:
gcloud alpha logging metrics create ownership_assignments_changes \
  --description="Ensure log metric filter and alerts exist for project ownership changes" \
  --log-filter='(protoPayload.serviceName="cloudresourcemanager.googleapis.com")
    AND (ProjectOwnership OR projectOwnerInvitee)
    OR (protoPayload.serviceData.policyDelta.bindingDeltas.action="REMOVE"
      AND protoPayload.serviceData.policyDelta.bindingDeltas.role="roles/owner")
    OR (protoPayload.serviceData.policyDelta.bindingDeltas.action="ADD"
      AND protoPayload.serviceData.policyDelta.bindingDeltas.role="roles/owner")' \
  --bucket-name="projects/project123/locations/location123/buckets/audit-bucket"
But I still have to click "REFINE SCOPE" and select the bucket when selecting "View logs for metric" in the "Log-based Metrics" screen (Screenshot 1 and Screenshot 2)
This means I can't create alerts on this logging metric, right? Because even if I select the audit bucket as the refined scope, creating an alert won't take that into consideration.
Also, selecting "Create alert from metric" from the "Log-based metrics" view doesn't seem to work either (Screenshot 3).
And FYI, there are logs in said bucket (Screenshot 4):
Sincerely,
Adrian M
Thank you for the detailed description. I will try to address your questions.
I tried to create the logging metrics as in the alpha docs, declaring a bucket. But I still have to click "REFINE SCOPE" and select the bucket when selecting "View logs for metric" in the "Log-based Metrics". Might I have done something wrong?
The syntax looks valid to me. The UI might be confusing, but it makes sense that when you select "View logs..." you see the results of the filtered query. So, you still have to define the explicit scope to provide the information that you passed to the log-based metric using --bucket-name.
This means I can't create alerts on this logging metric, right? Because even if I select the audit bucket as the refined scope, creating an alert won't take that into consideration.
The reason why you see nothing in the alert's preview chart is that log-based metrics are not calculated back in time; they start measuring only from the moment you create them.
Try to generate events that create new log entries and check that you can see the metric change. You do not need to go to the alert UI for this; you can check it in Metrics Explorer.
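For example, something like the following (the project and member are placeholders) produces a new cloudresourcemanager SetIamPolicy audit entry that your filter should match:

gcloud projects add-iam-policy-binding some-test-project \
  --member="user:someone@example.com" --role="roles/owner"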
You're right! I generated a few new logs by adding the owner role to a few accounts in different projects, and they show up in the alert policy based on the logging metric I created 🙂 Thanks a lot for taking the time to assist me on this random thread... I hope this helps others as well.
Great! Would you mind marking it as an accepted answer?
I'm not the original poster, so I don't think I can mark it as an answer unfortunately.
Hello, @leoy
I'm having a similar issue, but in my case it hasn't been resolved by adding in more logs.
I've created the following metric:
gcloud alpha logging metrics create prod-bucket-monitoring \
  --description="monitor prod logging bucket" \
  --log-filter="resource.type=gce_instance" \
  --bucket-name="projects/prod-project/locations/global/buckets/prod"
It was created fine and I can see it in the Log-based Metrics panel; however, when I try to view the logs of that metric, nothing comes up. If I switch the scope to the prod bucket, I can see the logs flowing.
Keep in mind that in my setup there is a logging sink set up from the original project "origin-prod" to the "prod" bucket in the prod-project project.
There are dozens of new log entries being added every second, so I should be able to see them via the metric, right? But it says 0 Bytes and I can't see anything there.
Any ideas?
Oh, I think I understand: it doesn't display the logs themselves; rather, it creates a kind of counter metric that counts each log entry (logName="logthing1", etc.) matching the filter that you provide in the gcloud command.
So for my example, I created a new metric; I have about 10 different logName=",,," entries with different names, and the filter I provided was --log-filter="healthCheck".
So now it will count however many "healthCheck" entries it finds and create a graph of it in Metrics Explorer for that metric.
I guess that works for me.
Thank you for the update. You are correct. Log-based metrics do not produce logs themselves; they generate metrics from the logs using a provided query to select the log entries and, optionally, a regular expression to extract information - or they simply count the matching entries for a counter metric.
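For illustration only (the field names and the regex here are made up): a counter metric might use a query like

textPayload:"healthCheck"

while a distribution metric would additionally use an extractor expression such as

REGEXP_EXTRACT(textPayload, "latency=(\d+)ms")

to pull a value out of each matching entry instead of just counting entries.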