We need an interim solution to convert an async process to a sync process, and I'm trying to understand if it can be done via Apigee.
In the current async process, here are the high-level steps:
1. Client X sends a request to proxy AP1, which forwards to target T1.
2. Target T1 sends the request to proxy AP2, which forwards to backend target T2,
- AND target T1 also returns a default response (R1) to proxy AP1.
- Proxy AP1 returns the response R1 to client X. --- SO THIS COMPLETES THE ASYNC REQUEST FROM CLIENT X.
3. Then target T2 makes a request by calling proxy AP3, which forwards to target endpoint T3. -- THIS IS THE ASYNC RESPONSE.
NOTE: Client X will retrieve this response independently (it does not wait for it) and the cycle completes.
To make this SYNC using Apigee, the idea is:
1. Client X sends a SYNC request to proxy AP1, which forwards to target T1.
2. Target T1 forwards the request to proxy AP2, also as a SYNC request.
3. Now proxy AP2 needs to forward the request to backend target T2 and wait. Can this be done with a waitForComplete()?
(Not sure if this will work, since we want it to wait for the response call to come in, not for the current request to complete.)
Any alternative?
A couple of things to NOTE here:
- Target T1 does not return the default response to AP1, as it does in the async process, so T1 is waiting for the response, and so are AP1 and client X.
- The request payload includes a unique ID (like a transactionId) that we're hoping to use to match up with the response.
4. Target T2 makes a request (this is the async response) by calling proxy AP3, and includes the unique transactionId from the payload in AP2.
5. Proxy AP3 will forward the request to proxy AP2.
6. Proxy AP2 will:
- have a conditional flow to check whether this is coming from AP3 (could it check the User-Agent HTTP header?),
- check the transactionId from the payload and match it against the transactionId of the waiting request (how to do this if there are multiple waiting requests?),
- extract the response (R1) from AP3,
- set R1 as the response to the waiting request in AP2.
7. Proxy AP2 will now send response R1 to the waiting request from target T1.
8. Target T1 will send response R1 to the waiting request from proxy AP1.
9. Proxy AP1 will send response R1 to the waiting request from client X.
I know it sounds convoluted, but this would be an interim solution until a longer-term one can be rolled out.
Hope this makes sense. Is this even possible? TIA for your thoughts and suggestions.
What problem are you REALLY trying to solve? You said you want to convert an async exchange into a synchronous one. What is the purpose of THAT? WHY do you want to do it?
To answer your specific questions:
Could you make an API Proxy (AP1) that
Yes, you can do that. But Apigee, in general, is not a good "integration engine". It's possible to stitch together a few synchronous calls into one API in Apigee, using JavaScript/httpClient or ServiceCallout. But for async, in general, Apigee will not be the right tool. Why? Think about observability. What is the average runtime for this polling-and-retry sequence? What percentage of calls returns with a timeout after N retries? What is the average latency of a single retried call? You can't see any of that if you bury all the retry logic inside an API proxy.
There are better tools for this. In particular, Application Integration. It lets you build loops, retries, timeouts, scatter/gather, and so on, including processing with asynchronicity. I'd rather build it in something like that.
Not all async things SHOULD be converted into a synchronous call. Something that takes two weeks to run should not be converted to synchronous. That's why I asked WHY you are investigating this. What is the goal?
BTW, httpClient.Request.waitForComplete() is no longer recommended for use. The recommended method is to use a callback function. But that doesn't matter much here; it works similarly. The callback gets invoked in your JS logic when the request completes.
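For illustration, the callback style looks roughly like the sketch below. This is only a sketch: the real httpClient exists only inside an Apigee JavaScript policy, so a mock stand-in (which invokes the callback immediately, rather than when a real request completes) is used here to show the (response, error) callback shape; the URL is made up.

```javascript
// Mock stand-in for Apigee's httpClient. The real object exists only inside
// a JavaScript policy and invokes the callback when the request completes;
// this mock invokes it immediately, just to illustrate the shape.
const httpClient = {
  get: function (url, callback) {
    callback({ status: 200, content: '{"ok":true}' }, undefined);
  }
};

let result = null;
// The callback receives (response, error); exactly one of them is set.
httpClient.get("https://t2.example.internal/status", function (response, error) {
  if (error) {
    result = { failed: true, message: String(error) };
  } else {
    result = { failed: false, status: response.status, body: response.content };
  }
});
```

With the real httpClient, the policy's execution continues and the callback fires later, which is what makes it more efficient than blocking on waitForComplete().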
I know it sounds convoluted,
yes it does. That is a little concerning.
Maybe it's not as complicated as it sounds. I'd like to see two sequence diagrams that show the BEFORE and AFTER/PROPOSED interactions and actors. Also, the transaction ID - where does it originate, which actor holds it, when is it used, etc.? And why are there so many actors? Why T1, T2, T3, with multiple layers of API proxy (AP1, AP2, AP3) between? It seems like a lot of moving pieces.
Thanks for taking the time to review, and for your prompt and detailed response. All valid questions - I guess my post had too many unnecessary details, so I've tried to clean it up and keep just the relevant parts. Attached are the before & after sequence diagrams.
BEFORE [sequence diagram attachment]
AFTER [sequence diagram attachment]
So basically, it's originally implemented as an ASYNC request: the client sends the async request to the API proxy AP1, which forwards it to the target T1 - end of the request transaction. T1, at the SOURCE_SYS level, triggers a response transaction by collecting data from another backend source, and invokes AP2 to deliver the response to the CLIENT (referred to as target T2 in the diagram). This works, and there are no problems here.
Now the client is expecting a SYNC interface. While the team takes its time to come up with the SYNC response solution at the SOURCE_SYS level (a cleaner, long-term solution, but a larger effort), we want to implement an interim/quicker workaround.
I agree that not ALL ASYNC requests can be converted to SYNC, but this one is intended to be SYNC, so I'm hoping latency between T1 and the backend source is not a concern here. But we want to do this with very minimal, if not zero, changes on the T1 side.
What I need help with is:
- How can I keep the client waiting, given that AP1 would have to forward the request as async and will receive a default ACK from T1, thus completing the request? Can this be achieved without changes on the T1 side? Would httpClient.Request.waitForComplete() OR a callback function work in this case, and how? How can AP1 be set up to poll for the response?
- Assuming AP2 would be calling AP1 with the response, how will the async response, sent by T1 to AP2, match up with the correct waiting request in AP1? We were considering a 'transactionId', which seems to be a unique field included in the request payload from the client. But could we just use the callback function URL to connect to the right request? Again, we're keeping changes to SOURCE_SYS (T1) to a minimum.
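For what it's worth, the correlation idea in this question can be sketched in plain Node.js. The names here (pendingRequests, waitForResponse, completeTransaction) are hypothetical, not any Apigee API: park each waiting request in a map keyed by transactionId, and resolve it when the async response arrives. Note this only works when the waiter and the incoming async response land in the same process, which is part of why it is hard to do across a distributed set of Apigee message processors.

```javascript
// Hypothetical in-process correlation of async responses to waiting requests.
const pendingRequests = new Map();

// Called when a client request arrives and must wait for the async response.
// Returns a promise that resolves with the response payload, or rejects on timeout.
function waitForResponse(transactionId, timeoutMs) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => {
      pendingRequests.delete(transactionId);
      reject(new Error("timeout waiting for " + transactionId));
    }, timeoutMs);
    pendingRequests.set(transactionId, { resolve, timer });
  });
}

// Called when the async response arrives; returns false if no request is waiting.
function completeTransaction(transactionId, payload) {
  const waiter = pendingRequests.get(transactionId);
  if (!waiter) return false;
  clearTimeout(waiter.timer);
  pendingRequests.delete(transactionId);
  waiter.resolve(payload);
  return true;
}
```

A waiter created with waitForResponse("txn-1", 30000) would then be resolved by completeTransaction("txn-1", responsePayload) when the async leg comes back.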
Thanks for your time.
would httpClient.Request.waitForComplete() OR Callback function work in this case? and how? How can AP1 be setup to poll for the response.
httpClient.waitForComplete() and httpClient with a callback function are functionally equivalent. The latter is more efficient and is therefore strongly recommended; waitForComplete is an antipattern at this point. (At some point Apigee may deprecate waitForComplete.) Given that, people should always just use httpClient with the callback function.
Which brings us to.... How can you configure the proxy to poll?
You can do it, but not in JavaScript. There is no setTimeout() in the JavaScript available in Apigee, and there's no "sleep" function. So there is no good way to "wait a little" between polls with httpClient.
You MIGHT be able to do it in jython, but I am not a python/jython expert, so I cannot give you specifics.
If I were doing this, I would embed the poll-with-retry-and-backoff logic in an external system like App Integration or a Cloud Run service container.
Thank you. Which of the two options (App Integration and Cloud Run) is better in terms of effort and complexity (considering the learning curve for both)?
Cloud Run is super simple. With the "Deploy from source" approach, you can build an app and run "gcloud run deploy" and you have your logic running in the cloud.
Conversely, App Integration is a more fully-featured integration platform, with a myriad of connectors for plugging into datastores, queueing systems, and 3rd party apps.
If you're just doing "polling with retry and backoff", to me that sounds like 100 lines of Node.js code, or a similar amount of Python or C# code, and you could deploy that to Cloud Run pretty easily.
const MAX_CYCLES = 8;
const BASE_WAIT_TIME_MS = 100;
const app = require("express")(),
  morganLogging = require("morgan");
require("console-stamp")(console, "[HH:MM:ss.l]");
app.use(morganLogging("combined"));
const port = process.env.PORT || 5950;

// Linear backoff with roughly +/-15% jitter between poll cycles.
const delayTime = function (cycleCount) {
  const jitterFactor = 1 + 0.3 * (0.5 - Math.random());
  return (
    BASE_WAIT_TIME_MS + cycleCount * (BASE_WAIT_TIME_MS / 2) * jitterFactor
  );
};

/*
 * return {continuePolling, reason} based on the payload.
 **/
const checkContinue = function (cycleCount, _ignoredPayload) {
  if (cycleCount >= MAX_CYCLES) {
    return { continuePolling: false, reason: "cycles" };
  }
  // The following is contrived logic - it just examines the current time.
  // It should examine the payload here instead.
  const d = new Date().valueOf();
  if (String(d).endsWith("3")) {
    return { continuePolling: false, reason: "endsIn3" };
  }
  if (String(d).endsWith("7")) {
    return { continuePolling: false, reason: "endsIn7" };
  }
  return { continuePolling: true, reason: null };
};

app.get("/", function (_request, response) {
  const url = "https://my-service-to-poll.com/whatever";
  let cycleCount = 0;
  async function sendOne() {
    const options = {
      method: "GET",
      headers: {
        Accept: "application/json, text/plain, */*",
        "Content-Type": "application/json"
      }
    };
    console.log(`cycle ${cycleCount} GET ${url}`);
    // Uses the global fetch available in Node 18+. Guard res.json() so a
    // non-JSON error body does not reject the chain.
    const [status, _headers, json] = await fetch(url, options).then(
      async (res) => [res.status, res.headers, await res.json().catch(() => ({}))]
    );
    cycleCount++;
    if (status === 200) {
      // successful call, check if completed.
      const { continuePolling, reason } = checkContinue(cycleCount, json);
      if (continuePolling) {
        // delay before trying again.
        setTimeout(sendOne, delayTime(cycleCount));
        return;
      }
      console.log(`stop polling reason: ${reason}`);
    } else {
      console.log(`unsuccessful: ${status}`);
    }
    // If we reach here, we stopped polling because the call succeeded, failed,
    // or we reached the limit of poll cycles. Either way, terminate the chain.
    console.log(`poll-tries: ${cycleCount.toFixed(0)}`);
    console.log(`content: ${JSON.stringify(json, null, 2)}\n`);
    json.cycles = cycleCount;
    response
      .header("Content-Type", "application/json")
      .status(200)
      .send(JSON.stringify(json, null, 2) + "\n");
  }
  // start polling; catch errors from the initial call so the client
  // is not left hanging on a network failure.
  sendOne().catch((e) => response.status(502).send(String(e) + "\n"));
});

app.use(function (_request, response) {
  response
    .header("Content-Type", "text/plain")
    .status(404)
    .send("unknown request");
});

app.listen(port, function () {
  console.log("polling proxy server listening on " + port);
});