
Logging Best practice

Hello,

We have a requirement to log northbound requests/responses and southbound requests/responses, along with any error responses. At the moment we've considered two approaches:

  1. (Option 1) Place multiple JavaScript policies in the preflow, conditional flows, postflow, and default fault rule to capture the log parameters as context variables, then reference all of those variables in a single MessageLogging policy in the PostClientFlow.
  2. (Option 2) Place multiple MessageLogging policies in the preflow, conditional flows, and postflow, and force the default fault rule.

My question is: which of the above is the better approach? Would there be a performance impact if we used too many MessageLogging policies?

  1. As a best practice, what should be done if I want to generalize this logging strategy across multiple proxies for consistency and reusability? Could this be achieved with a shared flow, for example?
  2. What if I want to set different log levels for error/info? How can this be done with a single MessageLogging policy in the PostClientFlow, where JavaScript policies can't be used to check the status code? Someone also mentioned that a Condition checking the status code doesn't work there either.

Also, I could not find any article or docs on logging best practices; if someone can share a code sample or reference, that would be really helpful.

 


Is this the same question that was asked before? @dareenhamdy  are you a chatbot? 🙂

I have some extra questions as well, as you can see. It also never hurts to ask about best practices.

There are always new, fast-paced updates. If you come across any article, please share it with me.


True!

In terms of how to collect information for logging, the practice hasn't changed: use JavaScript policies in the preflow, conditional flows, postflow, and fault rules to gather the necessary variables, then log them all at once with a single MessageLogging policy in the PostClientFlow.
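As a rough sketch of the collection step: a JavaScript policy copies whatever you want logged into custom variables that the MessageLogging policy later references. The `log.*` variable names are illustrative, not Apigee built-ins, and the stub `context` below exists only so the logic can be exercised outside a proxy; in a real JavaScript policy, `context` is provided by the runtime.

```javascript
// Minimal stand-in for the Apigee `context` object, for running this
// sketch outside a proxy. In an actual JavaScript policy, do not define
// this; the runtime supplies `context`.
function makeStubContext() {
  var vars = {};
  return {
    getVariable: function (name) { return vars[name]; },
    setVariable: function (name, value) { vars[name] = value; }
  };
}

// Collection step: stash the values to be logged in custom variables
// (the `log.*` names here are assumptions, chosen for this example).
function collectLogVars(context) {
  context.setVariable("log.requestVerb", context.getVariable("request.verb"));
  context.setVariable("log.statusCode", context.getVariable("message.status.code"));
  context.setVariable("log.isError", context.getVariable("message.status.code") >= 400);
}

var ctx = makeStubContext();
ctx.setVariable("request.verb", "GET");
ctx.setVariable("message.status.code", 500);
collectLogVars(ctx);
// ctx now carries log.requestVerb = "GET" and log.isError = true
```

The MessageLogging policy in the PostClientFlow can then reference `{log.requestVerb}`, `{log.statusCode}`, and so on in its Message template.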

To make this logging approach reusable across multiple proxies, you can bundle the logging policies into a shared flow and attach it to the postclientflow of each proxy. This way, you maintain consistency without duplicating effort.
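For example, a FlowCallout in each proxy's PostClientFlow can invoke the shared flow. The names `FC-Logging` and `SF-Logging` below are illustrative; note that Apigee only allows a FlowCallout in the PostClientFlow when the shared flow it calls contains nothing but MessageLogging policies.

```xml
<FlowCallout name="FC-Logging">
    <SharedFlowBundle>SF-Logging</SharedFlowBundle>
</FlowCallout>
```

```xml
<PostClientFlow>
    <Response>
        <Step>
            <Name>FC-Logging</Name>
        </Step>
    </Response>
</PostClientFlow>
```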

Regarding log levels: unfortunately, the logLevel attribute isn't something you can set dynamically from a flow variable, so you'd likely need separate MessageLogging policies with different static logLevel values. However, you can define a custom flow variable like flow.logLevel based on your own logic (e.g., checking for errors or status codes) and use it as a step condition to decide which policy executes.

By the way, services like Dynatrace support sending logs over HTTP with more flexibility. For example, you can include a severity level directly in the request body, which lets you use a single logging policy while keeping granular control over log levels.
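A sketch of that conditional attachment in the PostClientFlow (the policy names ML-Error and ML-Info and the variable flow.logLevel are illustrative, not built-ins):

```xml
<PostClientFlow>
    <Response>
        <Step>
            <Name>ML-Error</Name>
            <Condition>flow.logLevel = "ERROR"</Condition>
        </Step>
        <Step>
            <Name>ML-Info</Name>
            <Condition>flow.logLevel != "ERROR"</Condition>
        </Step>
    </Response>
</PostClientFlow>
```

Each policy carries its own static logLevel, and the condition on flow.logLevel picks which one actually fires.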

One more thing to consider is security. Logging might seem harmless, but if done poorly, it can introduce risks, especially when user input is involved. There’s something called log poisoning, and it’s a good example of how things can go wrong. Let me explain.

When you log raw user input, like headers or query parameters, without sanitising it, attackers can exploit this to inject malicious content into your logs. For instance, they could break JSON structure with unescaped quotes or even inject characters that trigger unexpected behavior in your log management system.

Here’s a real-world example: during one assessment, I discovered a case where an Apigee proxy logged the User-Agent header directly into a JSON log string. The attacker crafted a User-Agent value with a stray double-quote, which corrupted the JSON. As a result, the logs for that request were invalid and ignored by the logging system. This gave the attacker a perfect way to hide their tracks.

To avoid this, always sanitise, validate or encode inputs before logging. In Apigee, you can use the escapeJSON() function to escape any user-provided values. For example:

 

<MessageLogging name="Log-Secure">
    <Syslog>
        <Message>
            {"timestamp": "{system.timestamp}","ua": "{escapeJSON(request.header.user-agent)}", "method": "{request.verb}"}
        </Message>
        <Host>your_log_endpoint</Host>
        <Port>your_port</Port>
        <Protocol>TCP</Protocol>
    </Syslog>
</MessageLogging>

 

Another important thing to keep in mind is being careful with sensitive information in logs. It’s surprisingly easy to accidentally log something confidential, like API keys, tokens, or PII, especially when you’re dealing with request headers or query parameters.

Ideally, sensitive data should be masked or redacted at the level of your log management system, but you can also handle this directly in your proxy. For example, I wrote a script that can identify and mask sensitive information in request headers and query parameters before it gets logged. You can find it here.
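As a rough sketch of that kind of masking (the key list and the maskSensitive helper below are illustrative assumptions, not taken from the linked script or any Apigee API):

```javascript
// Illustrative masking helper: redacts the values of sensitive headers or
// query parameters before they reach the log. The key list is an example;
// extend it to match your own APIs.
var SENSITIVE_KEYS = ["authorization", "apikey", "x-api-key", "cookie"];

function maskSensitive(params) {
  var masked = {};
  for (var key in params) {
    if (SENSITIVE_KEYS.indexOf(key.toLowerCase()) !== -1) {
      masked[key] = "***REDACTED***";
    } else {
      masked[key] = params[key];
    }
  }
  return masked;
}

var headers = { "Authorization": "Bearer abc123", "User-Agent": "curl/8.0" };
var safe = maskSensitive(headers);
// safe.Authorization is "***REDACTED***"; safe["User-Agent"] is unchanged
```

In a JavaScript policy you would run the masked object through your escaping step and set the result in a context variable for the MessageLogging policy to pick up.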