This article aims to explain how routing works in Apigee X. It covers the northbound flow into Apigee, the southbound flow to the backends, and routing inside the API proxy itself.
Apigee X is a SaaS offering hosted inside a Google-managed project and VPC, whose implementation is invisible to the Google Cloud customer. The Google-managed Apigee X VPC is peered with a customer-managed VPC via Service Networking and is therefore accessible from that VPC. This holds true for all service projects inside a shared VPC as well.
Each Apigee X instance comes with an IP address that is reachable from the customer VPC, and VMs or clusters inside that VPC can call this IP address to access the APIs exposed via Apigee.
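As a quick sanity check (a sketch; the instance IP and hostname below are placeholders for your own values), a VM in the peered customer VPC can call the instance directly, resolving the environment-group hostname to the instance IP:

```shell
# Resolve the environment group hostname to the Apigee instance IP
# and call a proxy deployed at /mock. All values are placeholders.
curl -k --resolve api.example.com:443:INSTANCE_IP \
  "https://api.example.com/mock"
```

The `--resolve` flag avoids needing DNS set up while testing; `-k` skips TLS verification for self-signed test certificates.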
This whole setup of traffic coming into Apigee is called the northbound flow. There are a few approaches to implementing it; two of the most common are discussed below.
In this approach, a Private Service Connect (PSC) network endpoint group (NEG) forwards requests to the Apigee runtime instead of managed instance groups. This removes the need to manage Compute Engine instances.
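A rough provisioning sketch of this approach (all names are placeholders; the service attachment URI comes from your Apigee instance) creates the PSC NEG and wires it into a backend service behind a global external load balancer:

```shell
# Create a PSC NEG pointing at the Apigee instance's service attachment.
gcloud compute network-endpoint-groups create apigee-psc-neg \
  --network-endpoint-type=private-service-connect \
  --psc-target-service=SERVICE_ATTACHMENT_URI \
  --region=us-central1 \
  --network=my-vpc \
  --subnet=my-subnet

# Create a backend service for the external HTTPS load balancer.
gcloud compute backend-services create apigee-backend \
  --load-balancing-scheme=EXTERNAL_MANAGED \
  --protocol=HTTPS \
  --global

# Attach the PSC NEG as the backend.
gcloud compute backend-services add-backend apigee-backend \
  --network-endpoint-group=apigee-psc-neg \
  --network-endpoint-group-region=us-central1 \
  --global
```

The load balancer's frontend (forwarding rule, target proxy, certificate) is configured separately on top of this backend service.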
In both of these cases, the load balancer can be external and/or internal depending on where the API clients are.
The traffic going from Apigee X to the backends is called the southbound flow. There are four scenarios, based on where the backends reside.
A backend in the same VPC is accessible directly from Apigee, provided appropriate firewall rules are in place.
If the backends are in VPCs other than the one Apigee is peered with, additional configuration is required. If VPC peering is set up between the Apigee VPC and the backend VPCs, routing works as it would for any other GCP service-to-service communication.
If peering cannot be used, another option is Private Service Connect (PSC). This establishes connectivity between the two VPCs without any kind of peering.
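One hedged sketch of this, assuming the backend VPC exposes a PSC service attachment: an Apigee endpoint attachment is created against that attachment, and the attachment's host is then used as the target. All organization, project, and resource names below are placeholders.

```shell
# Create an Apigee endpoint attachment that connects to a PSC
# service attachment exposed by the backend VPC.
curl -X POST \
  "https://apigee.googleapis.com/v1/organizations/MY_ORG/endpointAttachments?endpointAttachmentId=my-backend" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
    "location": "us-central1",
    "serviceAttachment": "projects/backend-project/regions/us-central1/serviceAttachments/my-attachment"
  }'
```

Once created, the attachment's host can be referenced from a target endpoint or target server in place of a direct backend address.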
For networks residing on-premises or in other clouds, some form of VPN or Interconnect setup is required.
Apigee has the constructs of an Environment and an Environment Group. Each Environment Group can contain one or more Environments.
https://www.example.com/shopping/cart/addItem
        \_____________/\____________/\______/
               |             |           |
           hostname       basepath   resource
Proxies are deployed to an Environment, while the hostnames on which clients access the API proxies are defined at the Environment Group level.
Each Environment Group can be configured to listen for requests on one or more hostnames, but a hostname cannot belong to more than one group.
The hostname and a proxy basepath (across multiple proxy deployments) make a unique combination in Apigee. For more information on this, see the Apigee documentation.
The diagram below shows how a request flows through Apigee.
The code for a basic proxy endpoint looks like this.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ProxyEndpoint name="default">
  <PreFlow name="PreFlow">
    <Request/>
    <Response/>
  </PreFlow>
  <Flows/>
  <PostFlow name="PostFlow">
    <Request/>
    <Response/>
  </PostFlow>
  <HTTPProxyConnection>
    <BasePath>/mock</BasePath>
  </HTTPProxyConnection>
  <RouteRule name="default">
    <TargetEndpoint>default</TargetEndpoint>
  </RouteRule>
</ProxyEndpoint>
The parent blocks for PreFlow and PostFlow can be seen here; each contains Request and Response blocks, which are the actual containers for Apigee policies.
There is an HTTPProxyConnection block, which defines the base path on which Apigee expects traffic for this proxy. It is followed by a RouteRule block, which is responsible for routing the traffic and is discussed later in this article.
The code for a target endpoint looks like this:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<TargetEndpoint name="default">
  <PreFlow name="PreFlow">
    <Request/>
    <Response/>
  </PreFlow>
  <Flows/>
  <PostFlow name="PostFlow">
    <Request/>
    <Response/>
  </PostFlow>
  <HTTPTargetConnection>
    <URL>https://mocktarget.apigee.net</URL>
  </HTTPTargetConnection>
</TargetEndpoint>
The PreFlow and PostFlow blocks are similar to the ones in the proxy endpoint. Instead of HTTPProxyConnection, there is an HTTPTargetConnection block here, which defines which backend (or target, in Apigee's nomenclature) the request is sent to.
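Instead of a hardcoded URL, HTTPTargetConnection can also reference named target servers. This is a sketch, assuming target servers named backend-1 and backend-2 have been defined in the environment; it enables per-environment backend configuration and load balancing without changing the proxy bundle:

```xml
<HTTPTargetConnection>
  <LoadBalancer>
    <Algorithm>RoundRobin</Algorithm>
    <!-- Names refer to TargetServer definitions in the environment -->
    <Server name="backend-1"/>
    <Server name="backend-2"/>
  </LoadBalancer>
  <Path>/v1</Path>
</HTTPTargetConnection>
```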
The diagram above shows two target endpoints, which are configured for two different backends.
There can be multiple target endpoints within the same proxy. These target endpoints are referenced in the proxy endpoint via route rules.
When a proxy is created from the console, one default proxy endpoint and one default target endpoint are created. The default proxy endpoint references the default target endpoint in its default route rule.
Additional target endpoints can be created, each pointing to a different backend server. Apart from the default route rule, additional rules can be created, each with a condition that drives the routing decision. Only the default route rule has no condition.
A conditional route rule block in the proxy endpoint looks like this:
<RouteRule name="test">
  <TargetEndpoint>test</TargetEndpoint>
  <Condition>proxy.pathsuffix MatchesPath "/test"</Condition>
</RouteRule>
There are two sub-blocks here: the first names the TargetEndpoint the route rule points to, and the second defines the condition under which the rule executes, which in this case is a path match.
Imagine a microservice architecture for an e-commerce application. There are different services for catalog, cart, users, etc., each with its own hostname. The APIs can be exposed from a single endpoint via Apigee, with the routing decision made on the path suffix. The route rules for this use case would look something like this:
<RouteRule name="product">
  <TargetEndpoint>product-target-endpoint</TargetEndpoint>
  <Condition>proxy.pathsuffix MatchesPath "/product"</Condition>
</RouteRule>
<RouteRule name="catalog">
  <TargetEndpoint>catalog-target-endpoint</TargetEndpoint>
  <Condition>proxy.pathsuffix MatchesPath "/catalog"</Condition>
</RouteRule>
<RouteRule name="user">
  <TargetEndpoint>user-target-endpoint</TargetEndpoint>
  <Condition>proxy.pathsuffix MatchesPath "/user"</Condition>
</RouteRule>
<RouteRule name="default">
  <TargetEndpoint>default</TargetEndpoint>
</RouteRule>
Notice that the default route rule is defined last and has no condition. Route rules are evaluated in order, so the unconditional default rule should always be placed last during proxy development.
Note: Route rules are only one way to handle routing. Custom code or Apigee policies can also edit the target URL directly, which achieves the same effect.
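For example, an AssignMessage policy attached to the target endpoint's request flow can overwrite the target.url flow variable (a sketch; the policy name and backend URL are placeholders, and target.url is only writable in the target request flow):

```xml
<AssignMessage name="AM-SetTargetUrl">
  <AssignVariable>
    <Name>target.url</Name>
    <!-- Placeholder backend; the value could also be computed from request data -->
    <Value>https://backend.example.com/v1</Value>
  </AssignVariable>
</AssignMessage>
```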
This is how routing works into and out of Apigee. I hope this article provides some clarity on the traffic flow to and from Apigee; any feedback would be appreciated.