How routing works in Apigee X

This article explains how routing works in Apigee X. It is divided into three sections:

  1. The first section explains how traffic reaches Apigee X from various clients.
  2. The second section explains how to configure traffic flow to backends in different networks and data centers.
  3. The third section focuses on two things:
    1. First, which proxy deployment to execute, based on the request hostname and base path.
    2. Second, how the request and response flow works inside an Apigee X proxy, and how to customize routing behavior for multiple backends.

 

Routing to Apigee X

Apigee X is a SaaS offering hosted inside a Google-managed project and VPC, whose implementation details are invisible to the Google Cloud customer. The Google-managed Apigee X VPC is peered with a customer-managed VPC via service networking and is therefore accessible from that VPC. This also holds true for all service projects inside a shared VPC.

Each Apigee X instance comes with an IP address that is reachable from the customer VPC; VMs and clusters inside that VPC can call this IP address to access the APIs exposed via Apigee.

[Diagram: Apigee X instance peered with the customer VPC]

This flow of traffic into Apigee is called the northbound flow. There are a few approaches to implementing it; the two most common are discussed below.

Load Balancer + Managed instance group

 
[Diagram: load balancer with a managed instance group backend]
In the architecture diagram above, a managed instance group of VMs performs IP forwarding and serves as the backend of the load balancer. All requests coming from clients through the load balancer are forwarded to the Apigee runtime.
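As a rough sketch of this pattern, the instance template below creates VMs that forward port 443 to the Apigee instance endpoint, and a regional managed instance group is built from it. All names, the region, and the Apigee endpoint IP (10.0.0.2) are placeholders, and the startup script is a simplified stand-in for the bridge script Google provides:

```shell
# Instance template: VMs enable IP forwarding and DNAT port 443
# to the Apigee instance endpoint (10.0.0.2 is an assumed value).
gcloud compute instance-templates create apigee-proxy-template \
  --machine-type=e2-small \
  --network=my-vpc --subnet=my-subnet \
  --tags=apigee-proxy \
  --can-ip-forward \
  --metadata=startup-script='sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2
iptables -t nat -A POSTROUTING -j MASQUERADE'

# Regional managed instance group used as the load balancer backend
gcloud compute instance-groups managed create apigee-proxy-mig \
  --region=us-central1 --size=2 \
  --template=apigee-proxy-template
```

The MIG is then attached to the load balancer's backend service like any other instance-group backend.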

 

Load Balancer + PSC NEG

[Diagram: load balancer with a PSC NEG backend]

In this approach, instead of a managed instance group, a PSC (Private Service Connect) network endpoint group (NEG) forwards requests to the Apigee runtime. This removes the need to manage Compute Engine instances.
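A sketch of this setup with gcloud, assuming the Apigee instance exposes a service attachment (the names, region, and service-attachment URI below are placeholders):

```shell
# PSC NEG pointing at the Apigee instance's service attachment
gcloud compute network-endpoint-groups create apigee-psc-neg \
  --region=us-central1 \
  --network=my-vpc --subnet=my-subnet \
  --network-endpoint-type=private-service-connect \
  --psc-target-service=projects/SERVICE_PROJECT/regions/us-central1/serviceAttachments/apigee-sa

# Use the NEG as the backend of the load balancer
gcloud compute backend-services create apigee-backend \
  --load-balancing-scheme=EXTERNAL_MANAGED \
  --protocol=HTTPS --global
gcloud compute backend-services add-backend apigee-backend \
  --network-endpoint-group=apigee-psc-neg \
  --network-endpoint-group-region=us-central1 \
  --global
```

The service attachment URI for an instance can be found in the instance details in the Apigee console or API.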

In both cases, the load balancer can be external, internal, or both, depending on where the API clients are.

Routing from Apigee X

Traffic going from Apigee X to the backends is called the southbound flow. There are four scenarios, based on where the backends reside.

Apigee to backends in the same VPC

A backend in the same VPC is directly reachable from Apigee, provided the appropriate firewall rules are in place.

[Diagram: Apigee to backends in the same VPC]

Apigee to backends in different Google Cloud VPCs

If the backends are in VPCs other than the one Apigee is peered with, additional configuration is required. If VPC peering is set up between the Apigee VPC and the backend VPCs, routing works as it does for any other GCP service-to-service communication.

If peering cannot be used, another option is Private Service Connect (PSC), which connects the two VPCs without any kind of peering.
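At the time of writing, southbound PSC is configured through Apigee endpoint attachments. A sketch using the Apigee REST API, where the organization, location, and the producer's service-attachment URI are all placeholders:

```shell
# Create an endpoint attachment that connects Apigee to a
# service attachment published in the backend VPC.
curl -X POST \
  "https://apigee.googleapis.com/v1/organizations/MY_ORG/endpointAttachments?endpointAttachmentId=backend-ea" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
        "location": "us-central1",
        "serviceAttachment": "projects/BACKEND_PROJECT/regions/us-central1/serviceAttachments/backend-sa"
      }'
```

Once created, the endpoint attachment's host can be used as the target of an Apigee proxy.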

[Diagram: Apigee to backends in different Google Cloud VPCs]

Apigee to backends on-premises or in other clouds

For backends residing in on-premises networks or other clouds, some type of VPN or Interconnect setup is required.

[Diagram: Apigee to backends on-premises or in other clouds]

Apigee to backends on the internet

 
In this case, services hosted on the internet can be referenced directly by Apigee via their IP address or hostname.
Some internet services require allowlisting of the IP addresses that access them. Apigee uses Cloud NAT for egress, and static IP addresses can be reserved for it. These addresses can then be allowlisted at the internet service.
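The Apigee API exposes a natAddresses resource for this; a sketch of reserving and activating a static NAT IP, with the organization and instance names as placeholders:

```shell
# Reserve a static NAT IP for Apigee egress, then activate it.
TOKEN=$(gcloud auth print-access-token)

curl -X POST \
  "https://apigee.googleapis.com/v1/organizations/MY_ORG/instances/MY_INSTANCE/natAddresses" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "nat-ip-1"}'

# The reserved address must be activated before it is used for egress.
curl -X POST \
  "https://apigee.googleapis.com/v1/organizations/MY_ORG/instances/MY_INSTANCE/natAddresses/nat-ip-1:activate" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{}'
```

The reserved IP can be read back from the API response and supplied to the internet service's allowlist.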
 
 

Routing inside Apigee X

Which Proxy deployment to execute?

Apigee has the constructs of an Environment and an Environment Group. Each Environment Group can contain one or more Environments.

https://www.example.com/shopping/cart/addItem
        |_____________| |___________| |_____|
               |             |           |
            hostname      basepath     resource

Proxies are deployed to an Environment while the hostnames on which the clients are going to access the API proxy are defined at an Environment Group level.

Each Environment Group can be configured to listen to requests from one or more hostnames but one hostname cannot be in more than one group.

The combination of a hostname and a proxy base path (across all proxy deployments) must be unique in Apigee. For more information, see the Apigee documentation on environment groups.
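As an illustration, an environment group and its hostnames can be created through the Apigee API (the organization and group names and the hostname below are placeholders):

```shell
# Create an environment group that listens on one hostname.
curl -X POST \
  "https://apigee.googleapis.com/v1/organizations/MY_ORG/envgroups" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"name": "prod-group", "hostnames": ["www.example.com"]}'
```

Environments are then attached to the group, and any proxy deployed to an attached environment becomes reachable on the group's hostnames.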

How does traffic flow work inside a proxy?

The diagram below shows how a request flows through Apigee.

[Diagram: request and response flow through an Apigee proxy]
  • Each request passes through a series of flows inside Apigee.
  • The flow from client to server (left to right) is called the request flow.
  • The flow from server to client (right to left) is called the response flow.
  • An Apigee proxy is divided into two parts: the proxy endpoint and the target endpoint.
  • Traffic passes through each endpoint twice: first during the request flow and again during the response flow.
  • Each of these flow executions is further divided into three phases: PreFlow, Conditional Flows, and PostFlow. A conditional flow runs only when the condition defined on it evaluates to true; conditional flows are optional.
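As an illustration, a conditional flow defined inside an endpoint's Flows block might look like this (the flow name, path, and verb are hypothetical):

```xml
<Flows>
    <!-- Runs only for POST requests to /cart/addItem; PreFlow and
         PostFlow run for every request regardless of conditions. -->
    <Flow name="addItem">
        <Condition>(proxy.pathsuffix MatchesPath "/cart/addItem") and (request.verb = "POST")</Condition>
        <Request/>
        <Response/>
    </Flow>
</Flows>
```

Policies attached inside this flow's Request and Response blocks execute only when the condition matches.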

 

[Diagram: proxy endpoint and target endpoint flow phases]

The code for a basic proxy endpoint looks like this.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ProxyEndpoint name="default">
    <PreFlow name="PreFlow">
        <Request/>
        <Response/>
    </PreFlow>
    <Flows/>
    <PostFlow name="PostFlow">
        <Request/>
        <Response/>
    </PostFlow>
    <HTTPProxyConnection>
        <BasePath>/mock</BasePath>
    </HTTPProxyConnection>
    <RouteRule name="default">
        <TargetEndpoint>default</TargetEndpoint>
    </RouteRule>
</ProxyEndpoint>

The parent blocks for PreFlow and PostFlow can be seen here; each contains Request and Response blocks, which are the actual containers for Apigee policies.

There is also an HTTPProxyConnection block, which defines the base path on which Apigee expects traffic for this proxy. The RouteRule block follows; it is responsible for routing the traffic and is discussed later in this article.

The code for a target endpoint looks like this:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<TargetEndpoint name="default">
    <PreFlow name="PreFlow">
        <Request/>
        <Response/>
    </PreFlow>
    <Flows/>
    <PostFlow name="PostFlow">
        <Request/>
        <Response/>
    </PostFlow>
    <HTTPTargetConnection>
        <URL>https://mocktarget.apigee.net</URL>
    </HTTPTargetConnection>
</TargetEndpoint>

The PreFlow and PostFlow blocks are similar to those in the proxy endpoint. Instead of HTTPProxyConnection, there is an HTTPTargetConnection block, which defines which backend (or target, in Apigee's nomenclature) to send the request to.

The diagram above shows two target endpoints, configured for two different backends.

There can be multiple target endpoints within the same proxy. These target endpoints are referenced in the proxy endpoint via route rules.

When a proxy is created from the console, one default proxy endpoint and one default target endpoint are created. The default proxy endpoint references the default target endpoint in its default route rule.

Additional target endpoints can be created, each pointing to a different backend server. Besides the default route rule, additional rules can be created, each with a condition that drives the routing decision. Only the default route rule has no condition.

A conditional route rule block in the proxy endpoint looks like this:

<RouteRule name="test">
    <TargetEndpoint>test</TargetEndpoint>
    <Condition>proxy.pathsuffix MatchesPath "/test"</Condition>
</RouteRule>

There are two sub-blocks here: the first names the TargetEndpoint that the route rule points to, and the second defines the condition under which the route rule executes, in this case a path match.

Example

Imagine a microservices architecture for an e-commerce application, with separate services for catalog, cart, users, and so on, each with its own hostname. These APIs can be exposed from a single endpoint via Apigee, with the routing decision made on the path suffix. The route rules for this use case look something like this:

<RouteRule name="product">
    <TargetEndpoint>product-target-endpoint</TargetEndpoint>
    <Condition>proxy.pathsuffix MatchesPath "/product"</Condition>
</RouteRule>
<RouteRule name="catalog">
    <TargetEndpoint>catalog-target-endpoint</TargetEndpoint>
    <Condition>proxy.pathsuffix MatchesPath "/catalog"</Condition>
</RouteRule>
<RouteRule name="user">
    <TargetEndpoint>user-target-endpoint</TargetEndpoint>
    <Condition>proxy.pathsuffix MatchesPath "/user"</Condition>
</RouteRule>
<RouteRule name="default">
    <TargetEndpoint>default</TargetEndpoint>
</RouteRule>

Notice that the default route rule is defined last and has no condition. Route rules are evaluated in order, so the conditionless default rule should always come last during proxy development; otherwise it would match every request before the conditional rules are evaluated.

Note: Route rules are only one way to handle routing. Apigee policies or custom code can also modify the target URL directly, which can likewise be used for routing.
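For example, the target.url flow variable can be overridden at runtime. A sketch of an AssignMessage policy doing this, where the policy name and backend URL are placeholders (the policy would be attached in the target endpoint's request flow, since target.url is only writable there):

```xml
<!-- Overrides the target URL chosen by the route rule. -->
<AssignMessage name="AM-SetDynamicTarget">
    <AssignVariable>
        <Name>target.url</Name>
        <Value>https://backend.example.com/v1</Value>
    </AssignVariable>
</AssignMessage>
```

A JavaScript policy or other custom logic could compute the value dynamically instead of assigning a fixed URL.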

This is how routing works into, out of, and inside Apigee. I hope this article provides some clarity on these traffic flows; any feedback would be appreciated.

Last update: 06-26-2023 11:50 PM