Hi community,
We are onboarding a number of APIs onto Apigee which are deployed either to AWS or to an on-premises data center. Is there any benefit to running Edge Microgateway if the target APIs may not be co-located?
We're particularly interested in reducing latency and the number of requests which have to route externally.
Thanks
Can you help me understand more about co-location? Are you saying they are on the same machine or the same network?
But let's take an example:
As a simple rule of thumb, you should offer API management functionality close to the client. So the answer would be to deploy Edge Microgateway on-premises in this scenario.
The latency will still be there, and you can deal with it using a CDN or a local server. However, the benefit is that you get API management features like OAuth, quota, and spike arrest at the point where they matter most.
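To make "spike arrest" concrete, here is a minimal sketch of the idea: smoothing traffic by enforcing a minimum interval between allowed requests. This is illustrative only, not Apigee's actual spike-arrest implementation; the class name and the rate value are assumptions.

```python
import time

class SpikeArrest:
    """Smooth traffic to at most `rate_per_second` requests by enforcing
    a minimum gap between consecutive allowed requests."""

    def __init__(self, rate_per_second):
        self.min_interval = 1.0 / rate_per_second
        self.last_allowed = float("-inf")

    def allow(self, now=None):
        """Return True if the request may proceed, False if it should be rejected."""
        now = time.monotonic() if now is None else now
        if now - self.last_allowed >= self.min_interval:
            self.last_allowed = now
            return True
        return False

# Usage with explicit timestamps: 10 req/s means one request per 100 ms.
sa = SpikeArrest(10)
first = sa.allow(0.00)   # allowed
second = sa.allow(0.05)  # rejected, only 50 ms since the last allowed request
third = sa.allow(0.10)   # allowed again
```

Running this gateway-side (in the same network as the clients) is what lets it reject excess traffic before that traffic ever leaves the network.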
Our infrastructure is as below, and we want to use the Apigee OAuth2 policy to verify client access:
If A calls B or B calls A, the traffic currently goes out to Apigee Edge in the cloud. We could run Edge Microgateway either in AWS or on-prem, but the traffic would still have to leave the respective networks.
Is Edge micro going to help us reduce latency in this scenario or are we better off just using Edge cloud?
Thanks
This doesn't look like a usual microgateway pattern. I believe the cost of adding an Edge Microgateway will offset any benefit of local token validation.
@Srinandan Sridhar - Your suggestion please.
1) MG does not support Apigee Edge's OAuth v2 policy. It only supports JWT-based tokens.
2) In the use cases described, I don't see how MG can reduce latency if the consumer is on AWS and the provider is on GCP (or vice versa). MG helps when the consumers and the provider are in the same network (by adding the gateway to that same network).
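Since MG validates JWTs locally instead of calling back to Edge for each request, the core of that check can be sketched in plain Python. This is a hypothetical HS256 signature-and-expiry check for illustration, not Apigee's actual implementation; the secret, claim names, and helper functions are all assumptions.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url_decode(data):
    # Restore the padding that JWTs strip before decoding
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def b64url_encode(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def verify_jwt_hs256(token, secret):
    """Verify an HS256 JWT's signature and expiry locally; return its claims."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    claims = json.loads(b64url_decode(payload_b64))
    if claims.get("exp", float("inf")) < time.time():
        raise ValueError("token expired")
    return claims

# Build a sample token with an assumed shared secret, then verify it.
secret = b"demo-secret"
header = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url_encode(json.dumps({"sub": "client-123",
                                    "exp": int(time.time()) + 60}).encode())
sig = b64url_encode(hmac.new(secret, f"{header}.{payload}".encode(),
                             hashlib.sha256).digest())
token = f"{header}.{payload}.{sig}"
claims = verify_jwt_hs256(token, secret)
```

The point of the self-contained check is that no network round trip is needed at verification time, which is exactly why JWT support matters for a gateway deployed away from Edge.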
Great thanks for the information.