Hi Team, I have a question regarding one of the requirements we are working on.
We have integrated Chronicle with our GCP (via webhook) to get the Cloud Armor logs. We leveraged Pub/Sub with Cloud Armor to push the logs to Chronicle.
Once the integration was successful, we started getting messages in Chronicle, but the data was encoded with base64 encoding. Does it have to be this way for data transmission?
The second question is: how do we decode it in Chronicle (any parser) so that we can read it and trigger some alerts/further actions?
Any help would be greatly appreciated. Thanks in advance!
We're expanding our direct integration with GCP to collect more logs and have some new features in private and public preview. They include:
Webhook preview (https://cloud.google.com/chronicle/docs/preview/feed-management/configure-http-endpoints) was intended for 3rd party products (not GCP), but the Pub/Sub support is built on top of that and includes built-in decoding of the base64 payload.
If you want to try getting Cloud Armor logs via native ingestion, add this to your GCP log export filter:
log_id("requests")
Note that this will pull in all load balancer requests, so you may want to filter it down specifically to Cloud Armor depending on your log volume.
Thanks @adam9. Sorry, I just noticed that I put both implementation options in the previous message, but I want to confirm that we have done the cloud-native integration with the help of Pub/Sub.
The screenshot that I shared is what we got on the Chronicle side. My question was: when the logs were pushed to Chronicle, the data parameter is showing in an encoded format. Is that the request data that we have to decode?
So, how do we filter down to only the Cloud Armor logs?
I don't have any sample filters yet for Cloud Armor, but you can experiment with it in GCP Log Explorer first. You can reference https://cloud.google.com/armor/docs/request-logging and https://cloud.google.com/armor/docs/audit-logging depending on which logs you're looking for.
If you get the right filter into the native ingestion (log export filter), then you don't need to worry about the base64-encoded message.
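For example, based on the enforcedSecurityPolicy field described in the request-logging doc above, an untested starting point to experiment with in Log Explorer could be a filter that keeps only requests evaluated by a Cloud Armor policy:
log_id("requests") AND jsonPayload.enforcedSecurityPolicy.name:*
or, to keep only the blocked requests:
log_id("requests") AND jsonPayload.enforcedSecurityPolicy.outcome="DENY"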
Cool. Will try at my end and update here if I run into any issues. Thanks for your quick response @adam9 . Appreciate your support!
@spawar_apex how are you sending to Chronicle from Pub/Sub?
I think the issue is that you are sending the Pub/Sub message directly into Chronicle, but Pub/Sub has added its own information onto the message and encoded your payload. Can you try the following:
Also, due to the way base64 works, we can see parts of that data. Not sure if you're comfortable with that? If not, it might be worth removing the screenshot.
@Ion_Todd - I understood that Pub/Sub must be sending the data base64-encoded, which it is, but decoding it before sending it to Chronicle adds an additional step: then I have to leverage Cloud Functions to decode the payload and send it through the Ingestion API.
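For anyone following along, here is a rough sketch of what that Cloud Function decode-and-forward step could look like. This is purely illustrative, not our actual implementation; the Ingestion API endpoint, request fields, and environment variable names are assumptions to verify against the Chronicle Ingestion API docs.
import base64
import os

import functions_framework
import requests  # add to requirements.txt

# Assumed endpoint/shape of the Chronicle Ingestion API v2 -- verify before use.
INGESTION_URL = os.environ.get(
    "CHRONICLE_INGESTION_URL",
    "https://malachiteingestion-pa.googleapis.com/v2/unstructuredlogentries:batchCreate",
)

@functions_framework.cloud_event
def pubsub_to_chronicle(cloud_event):
    # Pub/Sub wraps the original log entry in an envelope and base64-encodes it
    # under message.data -- which is why the payload looks encoded in Chronicle.
    message = cloud_event.data["message"]
    log_entry = base64.b64decode(message["data"]).decode("utf-8")

    body = {
        "customer_id": os.environ["CHRONICLE_CUSTOMER_ID"],  # hypothetical env var
        "log_type": "GCP_LOADBALANCING",
        "entries": [{"log_text": log_entry}],
    }
    # Auth (OAuth credentials for the ingestion service account) is omitted here.
    resp = requests.post(INGESTION_URL, json=body, timeout=30)
    resp.raise_for_status()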
We used CF for a while to achieve a similar use case. It becomes weird when you're doing high-volume logs though, so we moved away from it.
Did you find a good filter for Cloud Armor-only logs? We just pull in all load balancer logs and then run queries on them to identify WAF hits. Here's a basic search that works in our environment (and assumes you've turned on the default OWASP rules):
metadata.log_type = "GCP_LOADBALANCING" AND security_result.rule_id = /.*owasp.*/
Hi @spawar_apex
What filter did you end up using, or have you moved on from using native ingestion for Cloud Armor?
Thanks
Hi @J0x, Google Chronicle has export filters that ingest Cloud Armor logs, so you can add the filter below to the global export/ingestion filter configuration to streamline the CA logs.
GCP_LOADBALANCING:
log_id("requests") This includes logs from Google Cloud Armor and Cloud Load Balancing
Thank you @spawar_apex
Best,
Jeremy