Hello,
We need to implement monitoring and alerting to observe our Data Lake system.
Could you please point me to how to create an alert when a Workflow fails for some reason?
I spent some time trying to figure it out from the GCP documentation, and I also checked Cloud Metrics, but I can't find a resource related to Dataproc Workflows. I only see Dataproc Cluster resources.
I'm asking just for directions. I have more experience with AWS, and this is my first GCP project, so maybe I have overlooked something.
Thank you in advance.
You can use Stackdriver for this purpose. I've found these SO posts [1, 2] that can help you with the setup and serve as a reference for your scenario.
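As a rough sketch of that approach: you can create a log-based metric on the failure entries and then attach a Cloud Monitoring alerting policy to it. The log filter below is an assumption (the metric name, resource type, and payload match are illustrative); inspect the entries Dataproc actually writes for a failed workflow in the Logs Explorer before using it.

```shell
# Hypothetical example: create a log-based metric that counts Dataproc
# workflow failure log entries. The --log-filter value is an assumption;
# verify the exact resource.type and payload in your own project's logs.
gcloud logging metrics create dataproc-workflow-failures \
  --description="Counts failed Dataproc workflow runs" \
  --log-filter='resource.type="cloud_dataproc_cluster" AND severity>=ERROR AND textPayload:"WorkflowTemplate"'

# Then create an alerting policy on this metric in the Cloud Monitoring UI
# (Monitoring > Alerting > Create policy) and attach a notification channel.
```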
Thank you.
Maybe I misunderstood, but isn't that for analyzing job logs?
I'm more interested in how to get notified when the Workflow status is failed, like this.
The jobs were not triggered at all, since the workflow execution failed before that; in this case, because of the N2_CPU quota.
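For that kind of failure (the workflow never got to run its jobs), one option is to check the long-running operation that `workflowTemplates.instantiate` returns and alert when it finishes with an error. Below is a minimal sketch in plain Python over a dict shaped like such an operation; treat the exact field names (`done`, `error`, `name`) as assumptions to verify against the Dataproc API.

```python
# Hypothetical sketch: decide whether a finished Dataproc workflow operation
# should trigger a notification. The dict layout mirrors a long-running
# Operation (done/error fields); field names are assumptions to verify.
from typing import Optional


def workflow_failed(operation: dict) -> bool:
    """Return True when the operation finished and carries an error."""
    return bool(operation.get("done")) and "error" in operation


def alert_message(operation: dict) -> Optional[str]:
    """Build a short notification text for a failed workflow, else None."""
    if not workflow_failed(operation):
        return None
    err = operation["error"]
    return (
        f"Dataproc workflow {operation.get('name', '<unknown>')} failed: "
        f"{err.get('message', 'no message')} (code {err.get('code', '?')})"
    )


# Example: a run that failed before any job started, e.g. due to quota.
failed_op = {
    "name": "operations/projects/p/regions/r/operations/123",
    "done": True,
    "error": {"code": 8, "message": "Insufficient N2_CPU quota"},
}
```

A small script like this could be run on a schedule (e.g. Cloud Scheduler plus a function) and push the message to whatever notification channel you use.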
If the references have been unhelpful, you may file a PIT Feature Request. An engineer working closely with the product will check your scenario and offer a solution.
I'm getting
by clicking that link
Can you try accessing this link instead: https://issuetracker.google.com