How to globally expose Apigee for internal traffic

TL;DR

This article provides a step-by-step guide on how to leverage the “global access” feature of the GCP Internal Load Balancer to expose API services within your organisation across multiple regions. The end-to-end architecture uses Apigee as the API Gateway between backends and Load Balancers, but the approach is applicable to other scenarios as well.

Context

While working with many customers, I quickly realised that the first step for most companies willing to expose API services to the Internet is to start “internally”, within the organisation. This simplifies the process of linking back-end systems and data between the multitude of applications that control internal operations. The reason behind this choice is that most companies prefer to start with a “friends and family” approach, wait for their systems to be properly tested and validated, and only then make the APIs available to the world (and maybe start monetising them).

Internal API exposure with Apigee X is quite easy, but until a few weeks ago (Q4 2022) the entire architecture relied on the GCP L7 Regional Internal Load Balancer as the single entry point for incoming API traffic. As you can imagine, this approach was region-centric: each Apigee instance, deployed in a specific location, could only serve traffic coming from the same region it was deployed in.
This is quite annoying, in particular if you want to embrace a multi-region approach, where API calls come from different regions and you would need to rely on the clients to implement the routing logic towards the Internal Load Balancer appointed to answer their requests.

[Figure: Standard approach for internal API exposure with Apigee before “Global Access”]

Everything changes with the introduction of the global access feature for the GCP Internal Load Balancer. By pairing this capability with the existing DNS Geolocation routing policy, every cloud architect can now create a cloud architecture that automatically steers internal API traffic to the proper service based on the client’s geolocation. Before deep diving into the technical steps needed to create such an architecture, let’s take a closer look at these two enabling features.

Global Access

Looking at the official documentation, when enabled, the global access flag “makes your Internal Load Balancer (either HTTP or TCP/UDP) accessible to clients in all regions”.

Now, think about what this means from an architectural perspective: clients across different regions can send internal API requests to a single entry point without having to deal with the complexity, and the burden, of deciding which Regional Internal Load Balancer is appointed to serve their requests.
From a technical perspective, it is quite simple: you just need to create a new forwarding rule (or update an existing one) and specify “--allow-global-access”.
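
For example, here is a minimal sketch of how you could enable global access on an existing forwarding rule (the rule name “my-ilb-fr” is just a placeholder):

# Enable global access on an existing Internal Load Balancer forwarding rule
gcloud compute forwarding-rules update my-ilb-fr \
--region=us-central1 \
--allow-global-access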

DNS Geolocation routing policy

The geolocation policy allows you to map source geographies to a target list of IP addresses. This lets you deploy regional instances of your workloads and use DNS to direct traffic to the right instance based on where the traffic originates.

Looking at our use case with an example: you can put your workloads behind an Internal HTTP(S) Load Balancer in us-central1, duplicate the setup in europe-west2, and use the geolocation routing policy to direct US traffic to the us-central1 Internal HTTP(S) Load Balancer and European traffic to the europe-west2 one. In addition, traffic originating from other areas would be routed to us-central1 or europe-west2 depending on which is closer.
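
As a sketch, such a policy could look like this (the record name, zone and IPs here are placeholders; the actual commands for our architecture follow in the step-by-step guide below):

# Hypothetical A record with a geolocation routing policy
gcloud dns record-sets create api.example.internal \
--ttl=60 \
--type=A \
--zone=example-private-zone \
--routing-policy-type=GEO \
--routing-policy-data="us-central1=10.0.1.10;europe-west2=10.0.2.10"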

Let’s put all pieces together

As simple as that: the following diagram shows what your architecture looks like once you combine the global access and DNS Geolocation routing policy capabilities.

[Figure: The new recommended architecture for internal API traffic exposure]

Step-by-step guide

The following steps allow you to recreate the solution we just designed from scratch. Well, actually I’m assuming one thing: that you already have your own Apigee deployment spread across two regions, “europe-west2” and “us-central1”.

1. First of all, create a dedicated Virtual Private Cloud (VPC) for the internal traffic and allocate three IP ranges as subnets across the world

gcloud compute networks create internal-vpc --subnet-mode=custom

gcloud compute networks subnets create us-subnet \
--network=internal-vpc \
--range=10.1.2.0/24 \
--region=us-central1

gcloud compute networks subnets create europe-subnet \
--network=internal-vpc \
--range=10.3.4.0/24 \
--region=europe-west2

gcloud compute networks subnets create asia-subnet \
--network=internal-vpc \
--range=10.5.6.0/24 \
--region=asia-east1

2. Second, let’s allocate two “proxy-only” subnets, which the L7 Regional Internal Load Balancers use to run and scale their managed proxies

gcloud compute networks subnets create proxy-only-subnet-usc1 \
--purpose=REGIONAL_MANAGED_PROXY \
--role=ACTIVE \
--region=us-central1 \
--network=internal-vpc \
--range=10.129.0.0/23

gcloud compute networks subnets create proxy-only-subnet-euw2 \
--purpose=REGIONAL_MANAGED_PROXY \
--role=ACTIVE \
--region=europe-west2 \
--network=internal-vpc \
--range=10.130.0.0/23

3. Third, create the necessary firewall rules to let traffic flow (and to SSH into the test VMs and simulate requests); 130.211.0.0/22 and 35.191.0.0/16 are the ranges used by Google health-check probes

gcloud compute firewall-rules create fw-allow-ssh \
--network=internal-vpc \
--action=allow \
--direction=ingress \
--target-tags=allow-ssh \
--rules=tcp:22

gcloud compute firewall-rules create fw-allow-health-check \
--network=internal-vpc \
--action=allow \
--direction=ingress \
--source-ranges=130.211.0.0/22,35.191.0.0/16 \
--target-tags=load-balanced-backend \
--rules=tcp

gcloud compute firewall-rules create fw-allow-proxies \
--network=internal-vpc \
--action=allow \
--direction=ingress \
--source-ranges=10.129.0.0/23,10.130.0.0/23 \
--target-tags=load-balanced-backend \
--rules=tcp:80,tcp:443,tcp:8080

4. The fourth step is to create the PSC Network Endpoint Groups (NEGs) that the Internal Load Balancers use to expose the multi-region Apigee deployment (see below for a way to retrieve the service attachment URIs)

gcloud compute network-endpoint-groups create apigee-pscneg-usc1 \
--network-endpoint-type=private-service-connect \
--psc-target-service=<SERVICE_ATTACHMENT_USC1> \
--region=us-central1 \
--network=internal-vpc \
--subnet=us-subnet \
--project=<PROJECT_ID>

gcloud compute network-endpoint-groups create apigee-pscneg-euw2 \
--network-endpoint-type=private-service-connect \
--psc-target-service=<SERVICE_ATTACHMENT_EUW2> \
--region=europe-west2 \
--network=internal-vpc \
--subnet=europe-subnet \
--project=<PROJECT_ID>
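
The <SERVICE_ATTACHMENT_USC1> and <SERVICE_ATTACHMENT_EUW2> placeholders are the service attachment URIs of your two Apigee instances. Assuming a standard Apigee X organisation (whose name matches the project ID), one way to look them up is via the Apigee API:

# List the Apigee instances and look for the "serviceAttachment" field
curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://apigee.googleapis.com/v1/organizations/<PROJECT_ID>/instances"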

5. Almost there: let’s create the Internal Load Balancer components we need (backend services, URL maps and target proxies)

# Starting with the backend services
gcloud compute backend-services create l7-ilb-backend-usc1 \
--load-balancing-scheme=INTERNAL_MANAGED \
--protocol=HTTPS \
--region=us-central1

gcloud compute backend-services create l7-ilb-backend-euw2 \
--load-balancing-scheme=INTERNAL_MANAGED \
--protocol=HTTPS \
--region=europe-west2

gcloud compute backend-services add-backend l7-ilb-backend-usc1 \
--network-endpoint-group=apigee-pscneg-usc1 \
--network-endpoint-group-region=us-central1 \
--region=us-central1 \
--project=<PROJECT_ID>

gcloud compute backend-services add-backend l7-ilb-backend-euw2 \
--network-endpoint-group=apigee-pscneg-euw2 \
--network-endpoint-group-region=europe-west2 \
--region=europe-west2 \
--project=<PROJECT_ID>

# Then the url-maps
gcloud compute url-maps create url-l7-ilb-usc1 \
--default-service=l7-ilb-backend-usc1 \
--region=us-central1 \
--project=<PROJECT_ID>

gcloud compute url-maps create url-l7-ilb-euw2 \
--default-service=l7-ilb-backend-euw2 \
--project=<PROJECT_ID> \
--region=europe-west2

# And finally the target proxies
gcloud compute target-http-proxies create l7-ilb-proxy-usc1 \
--url-map=url-l7-ilb-usc1 \
--url-map-region=us-central1 \
--region=us-central1

gcloud compute target-http-proxies create l7-ilb-proxy-euw2 \
--url-map=url-l7-ilb-euw2 \
--url-map-region=europe-west2 \
--region=europe-west2

6. Finally, let’s leverage the “global access” feature to make the Internal Load Balancers available from everywhere within your organisation

gcloud compute forwarding-rules create l7-ilb-fr-usc1 \
--load-balancing-scheme=INTERNAL_MANAGED \
--network=internal-vpc \
--subnet=us-subnet \
--address=10.1.2.99 \
--ports=80 \
--region=us-central1 \
--target-http-proxy=l7-ilb-proxy-usc1 \
--target-http-proxy-region=us-central1 \
--allow-global-access

gcloud compute forwarding-rules create l7-ilb-fr-euw2 \
--load-balancing-scheme=INTERNAL_MANAGED \
--network=internal-vpc \
--subnet=europe-subnet \
--address=10.3.4.99 \
--ports=80 \
--region=europe-west2 \
--target-http-proxy=l7-ilb-proxy-euw2 \
--target-http-proxy-region=europe-west2 \
--allow-global-access

Note: as you can see, the flag we need is “--allow-global-access”, which exposes the forwarding rule (and therefore the chosen IP) across regions.

7. And last but not least, let’s create the DNS record set that routes the incoming internal traffic based on the client’s geolocation (I’ve used “test.apigee.com” as my DNS record)

gcloud dns managed-zones create apigee-ilb-global-zone \
--dns-name=test.apigee.com \
--networks=internal-vpc \
--visibility=private \
--description=for-apigee

gcloud dns record-sets create test.apigee.com \
--ttl=60 \
--type=A \
--zone=apigee-ilb-global-zone \
--routing-policy-type=GEO \
--routing-policy-data="us-central1=10.1.2.99;europe-west2=10.3.4.99"
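
As a quick sanity check (not strictly required), you can list the record sets in the zone and confirm the geolocation routing policy is in place:

# Verify the A record and its geolocation routing policy
gcloud dns record-sets list \
--zone=apigee-ilb-global-zone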

Test, test, test …

To show how the newly created architecture works, let’s create three Virtual Machines: one in Europe, one in the US and one in Asia.

gcloud compute instances create europe-client-vm \
--zone=europe-west2-b \
--image-family=debian-11 \
--image-project=debian-cloud \
--tags=allow-ssh \
--subnet=europe-subnet

gcloud compute instances create us-client-vm \
--zone=us-central1-b \
--image-family=debian-11 \
--image-project=debian-cloud \
--tags=allow-ssh \
--subnet=us-subnet

gcloud compute instances create asia-client-vm \
--zone=asia-east1-b \
--image-family=debian-11 \
--image-project=debian-cloud \
--tags=allow-ssh \
--subnet=asia-subnet
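
Once the VMs are up, you can connect to each of them over SSH, making sure the zone matches the VM, for example:

# SSH into the test VM in the US
gcloud compute ssh us-client-vm \
--zone=us-central1-b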

Now, what happens if you call an API proxy deployed on both Apigee instances from each of these VMs?

curl "http://test.apigee.com/httpbin" -i -v

If we perform the request from the “us-client-vm”, as soon as it is evaluated, Cloud DNS recognises the client’s location and resolves “test.apigee.com” to the IP associated with the forwarding rule in the US.

fpreli@us-client-vm:~$ curl "http://test.apigee.com/httpbin" -i -v
* Trying 10.1.2.99:80...
* Connected to test.apigee.com (10.1.2.99) port 80 (#0)
> GET /httpbin HTTP/1.1
> User-Agent: curl/7.74.0
> Accept: */*

Similarly, if the caller is deployed in Europe, the “l7-ilb-fr-euw2” forwarding rule takes care of the API request, as this is the closest Internal Load Balancer from a geolocation perspective.

fpreli@europe-client-vm:~$ curl "http://test.apigee.com/httpbin" -i -v
* Trying 10.3.4.99:80...
* Connected to test.apigee.com (10.3.4.99) port 80 (#0)
> GET /httpbin HTTP/1.1
> User-Agent: curl/7.74.0
> Accept: */*

Last but not least, if we perform the request from the “asia-client-vm”, Cloud DNS recognises its location and directs the request to the Internal Load Balancer deployed in us-central1, the closest of the two. This cross-region call is exactly what global access makes possible. Well done!

fpreli@asia-client-vm:~$ curl "http://test.apigee.com/httpbin" -i -v
* Trying 10.1.2.99:80...
* Connected to test.apigee.com (10.1.2.99) port 80 (#0)
> GET /httpbin HTTP/1.1
> User-Agent: curl/7.74.0
> Accept: */*
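
If you want to verify the geolocation resolution without issuing a full API call, you can simply resolve the name from each VM (getent is available on the Debian images used above):

# From asia-client-vm: should return 10.1.2.99 (us-central1)
getent hosts test.apigee.com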

So what’s next?

So now customers can expose APIs internally with a global footprint, but there are still a few things we should know:

As per the current feature availability, the backend service exposed by the forwarding rule must reside in the same region as the Internal (Regional) Load Balancer exposing the service.

For the scope of this article this is not the end of the world, since it does not impact the way Apigee exposes its multi-region capabilities, but for other cases, where the backend should be “globally scoped”, it might be a limitation.

A “health-check” capability is not available for DNS Geolocation routing policies that point to L7 forwarding rules.

This capability would allow you to smoothly fail over to a healthy forwarding rule in case the exposed backend service is not available but unfortunately, as of today (Q4 2022), it is not there yet.

So what should we do for both these potential enhancements? Just be a little bit patient and cross your fingers, something might be boiling in the pot.

This is the end; I hope this article helps you satisfy your requirements or solve a customer’s request. If you have any feedback or questions, feel free to reach out or drop a comment.
