Hello everyone,
I’m currently working on a project where I have three different backend APIs that serve the same business purpose, but are technically very different. Specifically:
- They differ in HTTP methods (e.g., some use GET, others POST)
- The request body structures, query parameters, headers, and response formats are all different
What I want to achieve is to expose a single unified API facade via Apigee X. This unified API would act as the public interface for clients, while internally routing requests to the appropriate backend and transforming both the request and response payloads accordingly.
The goal is to shield clients from backend differences, so they only interact with one stable, unified API regardless of which backend actually handles the request.
My questions are:
1) Is this architecture achievable with Apigee X alone, given the level of transformation needed for requests/responses across multiple diverse backends? (If yes, how would I do that in API proxies?)
2) If not fully feasible with Apigee X, what Google Cloud products or architectural patterns would be better suited for building this kind of middleware layer?
3) Are there any best practices or examples from similar use cases?
To be transparent, I have some doubts that Apigee alone is flexible enough to handle the depth of transformation required here, especially since we cannot add significant latency to API calls (we handle more than 5B calls per month). But I would really appreciate input from experts in the community to either validate or challenge this assumption.
Thanks in advance!
Hi @pato17,
Yes, Apigee X can handle this — it supports API proxies with custom request/response transformations, conditional flows, and policies to route across different backends. If Apigee alone isn’t enough, consider adding Cloud Functions or Cloud Run behind it for more complex logic.
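For the transformation piece specifically, an AssignMessage policy per backend is the usual starting point. Here's a minimal sketch, assuming a JSON backend; the policy name, header, and payload fields are placeholders rather than your real contract:

```xml
<!-- Rewrites the unified client request into the shape backend A expects.
     The verb, header, and payload fields below are illustrative only. -->
<AssignMessage name="AM-BuildBackendARequest">
  <Set>
    <Verb>POST</Verb>
    <Headers>
      <Header name="X-Api-Version">2.0</Header>
    </Headers>
    <Payload contentType="application/json">{"customerId": "{request.queryparam.id}"}</Payload>
  </Set>
  <!-- Modify the current request in place rather than creating a new message. -->
  <AssignTo createNew="false" type="request"/>
</AssignMessage>
```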
Hi,
Yes, Apigee can absolutely handle this use case; it is a standard one for Apigee. Proxies in Apigee are constructed of two parts: Proxy Endpoints (the client-facing side) and Target Endpoints (the backend-facing side).
For this use case, a single Proxy Endpoint would be defined to represent the combined API to the client, handle any AuthN/AuthZ needs, and determine which backend is needed.
Then three Target Endpoints would be added to the proxy, one for each backend. Each Target Endpoint handles the backend-specific transformations of the request sent to, and the response received from, its backend.
Conceptually, take a look at the Apigee flow: https://cloud.google.com/apigee/docs/api-platform/fundamentals/what-are-flows#designingflowexecution...
The flow would then look like:
-> Client calls the Proxy Endpoint's request side: AuthN/AuthZ runs and the routing logic selects a Target Endpoint.
-> Selected Target Endpoint: the request is transformed into that backend's format, sent to the backend, and the backend's response is transformed back into the unified format.
-> Proxy Endpoint's response side: the unified response is returned to the client.
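To make the routing concrete, here's a minimal ProxyEndpoint sketch; the base path, condition variable, and endpoint names are placeholders for whatever your real selection logic keys on:

```xml
<!-- Single public-facing endpoint; one RouteRule per backend. -->
<ProxyEndpoint name="default">
  <HTTPProxyConnection>
    <BasePath>/v1/unified</BasePath>
  </HTTPProxyConnection>
  <!-- RouteRules are evaluated in order; the first matching Condition wins. -->
  <RouteRule name="to-backend-a">
    <Condition>routing.backend = "a"</Condition>
    <TargetEndpoint>backend-a</TargetEndpoint>
  </RouteRule>
  <RouteRule name="to-backend-b">
    <Condition>routing.backend = "b"</Condition>
    <TargetEndpoint>backend-b</TargetEndpoint>
  </RouteRule>
  <!-- No Condition: acts as the default route, so it must come last. -->
  <RouteRule name="to-backend-c">
    <TargetEndpoint>backend-c</TargetEndpoint>
  </RouteRule>
</ProxyEndpoint>
```

Here `routing.backend` stands in for a flow variable your Proxy Endpoint request flow would set; each of the three Target Endpoints then carries its own transformation policies.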
Thanks for the help. I'll be dealing with heavy data transformations. Would it be better to handle these in individual API proxies for each backend, and then have a facade API proxy call those? Would that add a lot of overhead and cost more (double the number of calls)? What about analytics: will I still get analytics per target if I use the direct, single-proxy approach? And would it increase the overall proxy processing time significantly? Also, are there any examples of proxy patterns for this kind of scenario in the GitHub community?
Breaking the backends out into separate proxies behind an additional facade proxy will increase overhead and add a small amount of latency from the proxy-to-proxy call chain. I'd recommend keeping them together in a single proxy.
If the transformations are heavy, I recommend the JavaCallout policy over JavaScript or Python. JavaScript is the easiest and is good for lightweight transformations, but Java is much more efficient within our platform: https://cloud.google.com/apigee/docs/api-platform/reference/policies/java-callout-policy
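Wiring a Java callout in is just a small policy definition. A minimal sketch, where the class and JAR names are hypothetical and the JAR containing your transformation code is uploaded as a proxy resource:

```xml
<!-- Invokes a compiled Java class for the heavy transformation work. -->
<JavaCallout name="JC-TransformBackendA">
  <!-- Fully qualified class implementing the Execution interface
       from the Apigee Java callout API (hypothetical name). -->
  <ClassName>com.example.BackendATransformer</ClassName>
  <!-- JAR uploaded under the proxy's resources/java directory. -->
  <ResourceURL>java://backend-a-transformer.jar</ResourceURL>
</JavaCallout>
```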
For analytics, you will be able to see the client calls to the proxy, and you'll be able to see activity against the backends. If you need to correlate that information, or need more detail on why a backend was chosen for a particular call, I can see at least two approaches. Both add extra logging information; the difference is where it is sent (minimal sketches of both follow the list):
1) Data Capture Policy
https://cloud.google.com/apigee/docs/api-platform/reference/policies/data-capture-policy
The DataCapture policy captures data (such as payload, HTTP headers, and path or query parameters) from an API proxy for use in Analytics. You can use captured data in custom Analytics reports, as well as to implement monetization and monitoring rules.
2) Message Logging Policy
https://cloud.google.com/apigee/docs/api-platform/reference/policies/message-logging-policy
The MessageLogging policy lets you log custom messages to Cloud Logging or syslog. You can use the information in the logs for various tasks, such as tracking down problems in the API runtime environment.
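Minimal sketches of both, assuming your routing logic sets a flow variable (here called `selected.backend`) and that you've created the data collector and log name yourself:

```xml
<!-- 1) DataCapture: writes the chosen backend into a data collector
     (dc_selected_backend must exist already) so it can appear as a
     custom dimension in Analytics reports. -->
<DataCapture name="DC-SelectedBackend">
  <Capture>
    <DataCollector>dc_selected_backend</DataCollector>
    <Collect ref="selected.backend" default="unknown"/>
  </Capture>
</DataCapture>

<!-- 2) MessageLogging: sends the same information to Cloud Logging instead. -->
<MessageLogging name="ML-SelectedBackend">
  <CloudLogging>
    <LogName>projects/{organization.name}/logs/unified-api-routing</LogName>
    <Message contentType="application/json">{"backend":"{selected.backend}","messageId":"{messageid}"}</Message>
  </CloudLogging>
</MessageLogging>
```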
Examples:
LLM Routing: selectively routing calls to different LLMs
https://github.com/GoogleCloudPlatform/apigee-samples/tree/main/llm-routing
Cheers,
@pato17 wrote:
especially since we cannot add significant latency to API calls
What is your latency budget? What would be "a significant latency" to you? How much time does the existing API consume, end-to-end, without an Apigee proxy?
Without Apigee X it's about 100 ms to 130 ms end-to-end, so a significant latency for us would be anything taking the total above 150 ms.