In the world of API management, Apigee emerges as a leading platform, offering robust solutions for designing, securing, and scaling APIs. Choosing the right networking approach can significantly impact the performance, security, and scalability of your API infrastructure. This article aims to demystify Apigee's networking blueprints, guiding you through the available options and helping you select the best fit for your specific requirements.
Virtual machines in Google Cloud VPC networks and on-premises hosts can access Google and third-party APIs and services privately, without external IP addresses. The access method depends on whether the service runs in a VPC network or in Google's production infrastructure. Services running in VPC networks, such as Apigee X, are reached through Private Service Access (VPC peering) or Private Service Connect, whereas services in Google's production infrastructure are reached through Private Google Access or Private Service Connect, as you can see in the diagram below:
Available private access methods are the following:
More information about general private access models can be found here: https://cloud.google.com/vpc/docs/private-access-options
When discussing Apigee X connectivity, we use the terms "northbound" and "southbound" to describe the direction in which API traffic is flowing.
When provisioning a new Apigee X organization, you have three distinct networking options to choose from, each with its own advantages and considerations:
This table in the public documentation describes the features and approaches available with the PSA-based (VPC peering) and PSC-only options.
Selecting the appropriate connectivity options for Apigee X can be complex given the diverse range of possibilities. A structured approach, using network blueprints and customer profile mappings, simplifies this decision-making process.
The diagram below visually represents the concept of aligning customer profiles with Apigee X southbound and northbound network blueprints. These customer profiles are shaped by three factors, which in turn drive the selection of the most suitable network blueprint. This process ensures the chosen connectivity solution fits the organization's specific needs and constraints.
Let’s dive deep into each of the Network Blueprints.
Let’s get started with Northbound Blueprints, where we identify two customer profiles and their associated network blueprints:
For most customers with single or multi-region needs, the recommended default blueprint leverages Google Cloud's Application Load Balancer (ALB) with Private Service Connect (PSC). This setup provides flexible and secure access to Apigee X.
The diagram below illustrates this blueprint for a Hub & Spoke Architecture.
External and internal Application Load Balancers are used with PSC backends to reach Apigee X. The traffic flow is as follows:
Key Considerations:
Multi-regional setups:
Global Proximity Routing: The Global External and Cross-region Internal ALBs distribute incoming traffic based on proximity to the API consumer.
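To make the wiring concrete, here is a minimal Terraform sketch of the PSC backend for this blueprint. The resource names, region, and the surrounding VPC/subnet references are assumptions for illustration, not part of the blueprint itself:

```hcl
# Minimal sketch with assumed names and region: a PSC NEG targeting the
# Apigee instance's service attachment, used as an ALB backend.
resource "google_compute_region_network_endpoint_group" "apigee_psc" {
  name                  = "apigee-psc-neg"   # hypothetical name
  region                = "europe-west1"     # assumed region
  network_endpoint_type = "PRIVATE_SERVICE_CONNECT"
  psc_target_service    = google_apigee_instance.instance.service_attachment
  network               = google_compute_network.vpc.id
  subnetwork            = google_compute_subnetwork.psc.id
}

resource "google_compute_backend_service" "apigee" {
  name                  = "apigee-backend"
  protocol              = "HTTPS"
  load_balancing_scheme = "EXTERNAL_MANAGED" # internal ALBs use INTERNAL_MANAGED

  backend {
    group = google_compute_region_network_endpoint_group.apigee_psc.id
  }
}
```

For a multi-region setup, you would create one PSC NEG per Apigee instance region and attach all of them to the same backend service, letting the global ALB route by proximity.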
Some customers have strict requirements for high availability and disaster recovery, needing their APIs to remain accessible even if one region experiences an outage. Customers with multi-regional setups and cross-regional failover needs should use Google Cloud's Application Load Balancer (ALB) with Managed Instance Groups (MIGs) and PSA (VPC peering).
The diagram below illustrates a robust architecture designed for high availability, leveraging Managed Instance Groups (MIGs) with Apigee X to achieve cross-regional failover, ensuring your APIs remain accessible even in the face of regional outages.
The core of this architecture is to deploy Managed Instance Groups (MIGs) in front of your Apigee X instances in multiple regions.
The traffic flow is similar to the previous blueprint, except that the load balancer forwards traffic to the closest MIG instead of to a PSC backend, and the MIG forwards the traffic to the Apigee instance over VPC peering. Another difference is that, in the event of a regional outage, traffic is automatically rerouted to the healthy region.
Key considerations:
The following link provides details on how to configure this blueprint for multi-regional setups with MIGs.
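As an illustration, here is a minimal Terraform sketch of a regional bridge MIG, with assumed names, image, and region; the startup script shows one way a bridge VM can DNAT incoming TLS traffic to the Apigee endpoint reachable over VPC peering:

```hcl
# Minimal sketch: instance template for the bridge VMs.
resource "google_compute_instance_template" "apigee_bridge" {
  name_prefix  = "apigee-bridge-"   # hypothetical name
  machine_type = "e2-small"

  disk {
    source_image = "debian-cloud/debian-12"
  }

  network_interface {
    network    = google_compute_network.vpc.id
    subnetwork = google_compute_subnetwork.bridge.id
  }

  metadata_startup_script = <<-EOT
    #!/bin/bash
    sysctl -w net.ipv4.ip_forward=1
    # Forward :443 to the Apigee instance endpoint ("host" is an exported
    # attribute of google_apigee_instance).
    iptables -t nat -A PREROUTING -p tcp --dport 443 \
      -j DNAT --to-destination ${google_apigee_instance.instance.host}:443
    iptables -t nat -A POSTROUTING -j MASQUERADE
  EOT
}

# One regional MIG per Apigee region; the ALB health-checks these and fails
# over to the other region's MIG during an outage.
resource "google_compute_region_instance_group_manager" "apigee_bridge" {
  name               = "apigee-bridge-mig"
  region             = "europe-west1"       # assumed region, repeat per region
  base_instance_name = "apigee-bridge"
  target_size        = 2

  version {
    instance_template = google_compute_instance_template.apigee_bridge.id
  }
}
```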
Let’s dive deep into Southbound Blueprints. The diagram below provides a visual representation of how to choose the right Apigee X Southbound Blueprint by matching it to the customer profile:
The most suitable network blueprint will depend on the following factors that determine a customer profile:
Southbound Blueprints are divided into two main categories:
Now, let’s map these four customer profiles to southbound blueprints.
The deciding factor in profile selection is the customer's non-overlapping private IP address space. If a customer has enough IP address space available (a non-overlapping /22 range per Apigee instance), Private Service Access (PSA), based on VPC peering, can be used.
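As a sketch, reserving the /22 and creating the PSA (service networking) connection in Terraform could look like this, with assumed names:

```hcl
# Minimal sketch: reserve a non-overlapping /22 for PSA and create the
# VPC peering connection to the service producer network.
resource "google_compute_global_address" "apigee_range" {
  name          = "apigee-peering-range"   # hypothetical name
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 22
  network       = google_compute_network.vpc.id
}

resource "google_service_networking_connection" "apigee" {
  network                 = google_compute_network.vpc.id
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.apigee_range.name]
}
```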
Customers with ample IP address space and isolated GCP backends can use this blueprint, which leverages PSA to access hybrid backends and PSC to access GCP backends.
Key considerations:
If a customer uses the common Hub & Spoke architecture with VPC peering and has sufficient non-overlapping /22 ranges available, VPC peering (PSA) can be used. However, because VPC peering is non-transitive, Apigee X cannot reach GCP backends in the spoke VPCs unless it is connected to a Shared Services VPC that is linked via VPN, as shown in the diagram below:
Key considerations:
Large enterprise customers commonly face IP address shortages and cannot spare a non-overlapping /22 range for each Apigee X instance. For these customers, we recommend a combined approach using Private Service Connect (PSC) and Private Service Access (PSA): PSC provides the private connectivity, while PSA enables DNS peering to your VPC, which is necessary because PSC alone doesn't support DNS resolution.
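On the Apigee side, PSC southbound connectivity is established through endpoint attachments that consume a PSC service attachment published in your VPC. A minimal sketch, with assumed names and inputs:

```hcl
# Minimal sketch: an Apigee endpoint attachment consuming a PSC service
# attachment that exposes a backend (or proxy) in your VPC.
resource "google_apigee_endpoint_attachment" "backend" {
  org_id                 = google_apigee_organization.org.id
  endpoint_attachment_id = "backend-ea"                  # hypothetical ID
  location               = "europe-west1"                # assumed region
  service_attachment     = var.backend_service_attachment # assumed input
}
```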
The deciding factor in the next blueprint selection then becomes the number and volatility of on-premises/multi-cloud backends.
If there are no or very few on-premises/multi-cloud backends, and these backends don’t change much over time, PSC with a TCP internal proxy and hybrid NEGs can be used for private connectivity, as depicted in the diagram below:
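A minimal Terraform sketch of the hybrid NEG and internal TCP proxy pieces, with assumed region, zone, and on-premises IP:

```hcl
# Minimal sketch: a hybrid NEG with one on-premises endpoint, fronted by a
# regional internal proxy Network Load Balancer (TCP proxy).
resource "google_compute_network_endpoint_group" "onprem" {
  name                  = "onprem-neg"       # hypothetical name
  zone                  = "europe-west1-b"   # assumed zone
  network               = google_compute_network.vpc.id
  network_endpoint_type = "NON_GCP_PRIVATE_IP_PORT"
}

resource "google_compute_network_endpoint" "onprem_backend" {
  network_endpoint_group = google_compute_network_endpoint_group.onprem.name
  zone                   = "europe-west1-b"
  ip_address             = "10.10.0.5"       # assumed on-premises backend IP
  port                   = 443
}

resource "google_compute_region_health_check" "onprem" {
  name   = "onprem-hc"
  region = "europe-west1"
  tcp_health_check {
    port = 443
  }
}

resource "google_compute_region_backend_service" "onprem" {
  name                  = "onprem-backend"
  region                = "europe-west1"
  protocol              = "TCP"
  load_balancing_scheme = "INTERNAL_MANAGED"
  health_checks         = [google_compute_region_health_check.onprem.id]

  backend {
    group                        = google_compute_network_endpoint_group.onprem.id
    balancing_mode               = "CONNECTION"
    max_connections_per_endpoint = 100
  }
}

resource "google_compute_region_target_tcp_proxy" "onprem" {
  name            = "onprem-tcp-proxy"
  region          = "europe-west1"
  backend_service = google_compute_region_backend_service.onprem.id
}
# A forwarding rule for this proxy is then published as a PSC service
# attachment, which Apigee consumes via an endpoint attachment.
```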
Key Considerations:
If there are many on-premises/multi-cloud backends and/or the set of backends changes frequently, PSC with Secure Web Proxy (SWP) as an explicit forward proxy can be used, as depicted in the diagram below:
Key Considerations:
See this article, which explains how to configure Apigee southbound connectivity with PSC and SWP.
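For orientation, here is a minimal Terraform sketch of an SWP gateway and a single allow rule. The names, proxy IP, certificate reference, and the session matcher host are all assumptions:

```hcl
# Minimal sketch: a Secure Web Proxy gateway acting as an explicit forward
# proxy for southbound calls, plus one allow rule.
resource "google_network_security_gateway_security_policy" "swp" {
  name     = "swp-policy"     # hypothetical name
  location = "europe-west1"   # assumed region
}

resource "google_network_security_gateway_security_policy_rule" "allow_backend" {
  name                    = "allow-backend"
  location                = "europe-west1"
  gateway_security_policy = google_network_security_gateway_security_policy.swp.name
  enabled                 = true
  priority                = 1
  session_matcher         = "host() == 'backend.example.internal'" # placeholder host
  basic_profile           = "ALLOW"
}

resource "google_network_services_gateway" "swp" {
  name                    = "swp-gateway"
  location                = "europe-west1"
  type                    = "SECURE_WEB_GATEWAY"
  addresses               = ["10.0.1.10"]   # assumed proxy IP in your subnet
  ports                   = [443]
  scope                   = "swp-scope"
  certificate_urls        = [var.swp_certificate_url] # assumed Certificate Manager cert
  gateway_security_policy = google_network_security_gateway_security_policy.swp.id
  network                 = google_compute_network.vpc.id
  subnetwork              = google_compute_subnetwork.swp.id
}
```

Apigee then reaches the SWP endpoint over PSC, and SWP forwards to whichever on-premises or multi-cloud backends the policy rules allow, so adding a backend only requires a rule change.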
Apigee networking offers a range of flexible blueprints to cater to diverse customer needs. Careful consideration of factors such as network topology, IP address space, and the number and volatility of external backends is essential for selecting the appropriate blueprint.
This Terraform code, found in the cloud-foundation-fabric repository, creates the resources needed to set up Apigee X on Google Cloud. It supports a variety of Apigee architectures.
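A hypothetical consumption sketch of that module; the input names here are illustrative and should be checked against the module's documented variables:

```hcl
# Hypothetical usage sketch of the cloud-foundation-fabric Apigee module;
# inputs are assumptions, verify them in the module's README.
module "apigee" {
  source     = "github.com/GoogleCloudPlatform/cloud-foundation-fabric//modules/apigee"
  project_id = var.project_id
  organization = {
    analytics_region = "europe-west1"
    runtime_type     = "CLOUD"
  }
}
```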
The development of these blueprints would not have been possible without the support and expertise of Pieter Leys and Luis Cuellar. Their contributions were crucial in defining the robust and practical blueprints outlined in this article.
I enjoyed reading this networking guide! Thanks @iigeregi 🤙🏻
Thanks!
@iigeregi Thank you for thoughtfully laying out the networking blueprints for Apigee X and providing valuable insights into the details of networking design patterns! Any thoughts on how we can ensure optimal northbound and southbound traffic monitoring? Are there specific monitoring templates or configurations recommended for such setups?
Apart from what Apigee can provide, PSC Metrics can be used: https://cloud.google.com/vpc/docs/monitor-private-service-connect-connections
As well as VPC Flow Logs (https://cloud.google.com/vpc/docs/flow-logs) enabled on Interconnect or VPC subnets, together with Flow Analyzer to visualize these flows: https://cloud.google.com/network-intelligence-center/docs/flow-analyzer/overview
Hey @iigeregi, in our use case, even with non-overlapping /22 and /28 ranges and a lot of hybrid backends, we went with SWP, but we peered Apigee X directly with our Shared VPC host project.
This way we have "direct" access to GCP backends and can use private DNS to reach our on-prem through Interconnect. It's even an additional option 🙂
Thanks Diogo, yes, I know, that is another great option! In fact, there are many options, but the idea of these blueprints is to give some initial architectures that will fit a high percentage of customer requirements. I'll see if I can include it 🙂