Exploring Apigee Networking Blueprints

Introduction

In the world of API management, Apigee emerges as a leading platform, offering robust solutions for designing, securing, and scaling APIs. Choosing the right networking approach can significantly impact the performance, security, and scalability of your API infrastructure. This article aims to demystify Apigee's networking blueprints, guiding you through the available options and helping you select the best fit for your specific requirements.

Google Cloud Private Access Models

Virtual machines in Google Cloud VPC networks, as well as on-premises hosts, can access Google and third-party APIs and services privately, without external IP addresses. The access method differs depending on whether the target service runs in a VPC network or in Google's production infrastructure. Services in VPC networks, such as Apigee X, are reached through Private Service Access (VPC peering) or Private Service Connect, whereas services in Google's production infrastructure are reached through Private Google Access or Private Service Connect, as you can see in the diagram below:

private access models.png

The available private access methods are the following:

  • Private Google Access (PGA): This service provides private connectivity to Google-managed services running in Google's infrastructure, such as BigQuery, Vertex AI, and Cloud Storage.
  • Private Service Access (PSA): This service facilitates communication with Google Cloud managed services through VPC Peering, encompassing services like Cloud SQL, Memorystore, and Apigee X itself.
  • Private Service Connect (PSC): This service enables private, service-oriented connectivity within a VPC network to Google infrastructure and Google Cloud managed services, offering enhanced scalability, granularity, and multi-tenancy.

More information about general private access models can be found here: https://cloud.google.com/vpc/docs/private-access-options
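
To make PSA concrete, here is a minimal Terraform sketch of the pattern: reserve a non-overlapping internal range and peer it with the Service Networking service. All names and ranges below are hypothetical, and the VPC is assumed to already exist; this is the same mechanism the Apigee X VPC Peering option relies on.

```hcl
# Minimal PSA sketch (hypothetical names and ranges).
# Reserve an internal range for managed services...
resource "google_compute_global_address" "psa_range" {
  name          = "psa-range"    # hypothetical name
  project       = "my-project"   # hypothetical project
  network       = "projects/my-project/global/networks/apigee-network" # assumed existing VPC
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 22             # a /22, as Apigee X requires per instance
}

# ...and peer it with servicenetworking.googleapis.com (PSA).
resource "google_service_networking_connection" "psa" {
  network                 = "projects/my-project/global/networks/apigee-network"
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.psa_range.name]
}
```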

Apigee X Connectivity Models

When discussing Apigee X connectivity, we use the terms "northbound" and "southbound" to describe the direction in which API traffic is flowing.

  • Northbound connectivity refers to the communication that originates from the client and is directed towards Apigee X. This typically encompasses API requests, data submissions, or any other form of client-initiated interaction with the API proxy.
  • Southbound connectivity refers to communication that flows from Apigee X towards the backend services. This encompasses the forwarding of API requests to the appropriate backend, fetching data from backend services, and relaying responses back to the client.

When provisioning a new Apigee X organization, you have three distinct networking options to choose from, each with its own advantages and considerations:

  • PSA-based (VPC Peering-based option): This model leverages managed VPC Peering and DNS Peering to establish seamless communication, offering a robust and integrated solution for many use cases.
  • PSC-only based (Non-VPC Peering-based option): This model eliminates the need for VPC Peering, resulting in a simplified connectivity option. However, this simplicity may come at the cost of reduced flexibility compared to the other models. At the time of this writing, PSC-only does not support DNS peering, which is usually necessary for Southbound HTTPS communications. Therefore, this option is not used for Southbound connectivity in this article.
  • PSA in combination with PSC: This hybrid approach combines the strengths of both PSA and PSC models. It frequently incorporates load balancers, enabling advanced traffic management capabilities and flexibility to tailor the setup to specific requirements. Additionally, this model allows for different methods to be used for northbound and southbound traffic.

This table in the public documentation describes the features and approaches available with the PSA-based and PSC-only options.
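
As a rough illustration of how this choice surfaces at provisioning time, the hypothetical Terraform sketch below shows the two mutually exclusive paths on the google_apigee_organization resource: the PSA-based option supplies a peered VPC, while the PSC-only option disables peering instead. Project and network names are assumptions, and a real project can hold only one of the two.

```hcl
# PSA-based (VPC Peering): supply the VPC to peer with.
resource "google_apigee_organization" "psa_based" {
  project_id         = "my-project"   # hypothetical project
  analytics_region   = "europe-west1"
  runtime_type       = "CLOUD"
  authorized_network = "projects/my-project/global/networks/apigee-network"
}

# PSC-only (no VPC Peering): no network, peering explicitly disabled.
resource "google_apigee_organization" "psc_only" {
  project_id          = "my-project"  # hypothetical project
  analytics_region    = "europe-west1"
  runtime_type        = "CLOUD"
  disable_vpc_peering = true
}
```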

Customer Profile to Blueprint Mapping

The selection of appropriate connectivity options within Apigee X can be a complex task given the diverse range of possibilities. This is why a structured approach, like using network blueprints and customer profile mappings, simplifies this decision-making process.

The diagram below visually represents the concept of aligning customer profiles with Apigee X Southbound and Northbound network blueprints. These customer profiles are shaped by three factors, which subsequently influence the selection of the most suitable network blueprint. This process ensures the chosen connectivity solution fits the organization's specific needs and restrictions.

apigee-network-blueprints.png

Let’s dive deep into each of the Network Blueprints.

Apigee X Northbound Network Blueprints

Let’s get started with Northbound Blueprints, where we identify two customer profiles and their associated network blueprints:

image9.png

Northbound Blueprint 1 - Application Load Balancer with PSC Backends

For most customers with single or multi-region needs, the recommended default blueprint leverages Google Cloud's Application Load Balancer (ALB) with Private Service Connect (PSC). This setup provides flexible and secure access to Apigee X.

The diagram below illustrates this blueprint for a Hub & Spoke Architecture.

image6.png

External and Internal Application Load Balancers are used with PSC backends to reach Apigee X. The flow of traffic is as follows:

  • External Traffic:
    • API consumers on the internet send HTTPS requests.
    • The External ALB receives the requests and applies Cloud Armor security policies.
    • The External ALB then forwards the traffic to the Apigee X instance via PSC NEG and Service Attachment.
  • Internal Traffic:
    • API consumers within the customer's network send requests. If API consumers are on-premises, they reach the Internal ALB through hybrid connectivity (VPN or Interconnect). If consumers are in spoke VPCs in GCP, communication happens through the central Hub VPC via VPC Peering.
    • The Internal ALB receives the requests.
    • The Internal ALB then forwards the traffic to the Apigee X instance via PSC NEG and Service Attachment.

Key Considerations:

  • Network Topology: Works with any architecture, including Hub & Spoke with VPC Peering or multiple Shared VPCs.
  • Load Balancers: 
    • Supported load balancers are those that support PSC backends: all External and Internal ALBs except the Classic external Application Load Balancer. More information is available in this public link.
    • Multi-regional setups:
      • Global Proximity Routing: The Global External and Cross-region Internal ALBs distribute incoming traffic based on proximity to the API consumer.
      • Cross-regional failover is not possible because PSC NEGs do not support health checks.
  • Security: Cloud Armor (WAF) security policies can be applied to the PSC NEG backend services of an Internal or External ALB.
  • Custom Domains (FQDN): The ALB handles the routing of traffic based on custom domain names.
  • TLS: The ALB terminates TLS connections, offloading the processing from the backend services. Mutual TLS (mTLS) is supported on both External ALB and Internal ALB for enhanced security.
  • Scalability: Up to 50 consumer projects, each with up to 20 PSC endpoints, can connect to a single service attachment (that is, to a single Apigee instance).
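
A minimal Terraform sketch of the external half of this blueprint follows, assuming an existing VPC and subnet and taking the service attachment URI from the Apigee instance; all names are hypothetical. The URL map, target HTTPS proxy, and forwarding rule that complete the ALB are omitted for brevity.

```hcl
# PSC NEG pointing at the Apigee X instance's service attachment.
resource "google_compute_region_network_endpoint_group" "apigee_psc" {
  name                  = "apigee-psc-neg"  # hypothetical name
  region                = "europe-west1"
  network_endpoint_type = "PRIVATE_SERVICE_CONNECT"
  # Value comes from the Apigee instance's serviceAttachment output field.
  psc_target_service    = "projects/my-tenant-project/regions/europe-west1/serviceAttachments/apigee-sa"
  network               = "apigee-lb-network" # assumed existing VPC
  subnetwork            = "apigee-lb-subnet"  # assumed existing subnet
}

# Backend service for the External ALB, with Cloud Armor attached.
resource "google_compute_backend_service" "apigee" {
  name                  = "apigee-backend"
  protocol              = "HTTPS"
  load_balancing_scheme = "EXTERNAL_MANAGED"
  security_policy       = "my-cloud-armor-policy" # hypothetical Cloud Armor policy
  backend {
    group = google_compute_region_network_endpoint_group.apigee_psc.id
  }
  # Note: no health check; PSC NEGs do not support health checks, which is
  # why this blueprint cannot do cross-regional failover.
}
```
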
Northbound Blueprint 2 - Application Load Balancer with MIGs and VPC Peering

Some customers have specific high availability and disaster recovery requirements: their APIs must remain accessible even if one region experiences an outage. These customers, with multi-regional setups and cross-regional failover needs, should use Google Cloud's Application Load Balancer (ALB) with Managed Instance Groups (MIGs) and PSA (VPC Peering).

The diagram below illustrates a robust architecture designed for high availability, leveraging Managed Instance Groups (MIGs) with Apigee X to achieve cross-regional failover, ensuring your APIs remain accessible even in the face of regional outages.

image8.png

The core of this architecture is to deploy Managed Instance Groups (MIGs) in front of your Apigee X instances in multiple regions.

The flow of traffic is similar to the previous blueprint, except that this time the load balancer forwards traffic to the closest MIG instead of a PSC backend, and the MIG forwards the traffic to the Apigee instance via VPC Peering. Another difference is that in the event of a regional outage, traffic is automatically rerouted to the healthy region.

Key considerations:

  • Network Topology: Works with any architecture, including Hub & Spoke with VPC Peering or multiple Shared VPCs.
  • Load Balancers:
    • Supported load balancers are the Global External ALB (Classic included) and the Cross-region Internal ALB.
    • Multi-regional setups:
      • Global Proximity Routing: The Global External and Cross-region Internal ALBs distribute incoming traffic to the MIGs based on proximity to the API client. Each MIG is associated with an Apigee X instance in a specific region.
      • Cross-regional failover: The MIG with VPC Peering option supports end-to-end health checks, so if an Apigee X instance in one region becomes unavailable, the ALBs' health checks detect the failure and route traffic to the MIG in the healthy region, ensuring uninterrupted service.
  • Security: Cloud Armor (WAF) security policies can be applied to the External ALB backend service composed of MIGs.
  • Custom Domains (FQDN): The ALB handles the routing of traffic based on custom domain names.
  • TLS: The ALB terminates TLS connections, offloading the processing from the backend services. Mutual TLS (mTLS) is supported on both External ALB and Internal ALB for enhanced security.

The following link provides details on how to configure this blueprint for multi-regional setups with MIGs.
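
The sketch below illustrates, in hedged Terraform form, the part that differs from Blueprint 1: a global backend service whose backends are regional MIGs of proxy VMs, with a health check that drives the cross-regional failover. The MIG URLs, health-check path, and names are all assumptions.

```hcl
# Health check probed end-to-end through the MIG proxy VMs to Apigee.
resource "google_compute_health_check" "apigee_mig" {
  name = "apigee-mig-hc" # hypothetical name
  https_health_check {
    port         = 443
    request_path = "/healthz/ingress" # assumption: a path served by the proxy VMs
  }
}

# One backend per regional MIG; unhealthy regions are taken out of rotation,
# which is what enables the cross-regional failover described above.
resource "google_compute_backend_service" "apigee_migs" {
  name                  = "apigee-mig-backend"
  protocol              = "HTTPS"
  load_balancing_scheme = "EXTERNAL_MANAGED"
  health_checks         = [google_compute_health_check.apigee_mig.id]
  backend {
    group = "https://www.googleapis.com/compute/v1/projects/my-project/regions/europe-west1/instanceGroups/apigee-mig-ew1" # hypothetical MIG
  }
  backend {
    group = "https://www.googleapis.com/compute/v1/projects/my-project/regions/europe-west4/instanceGroups/apigee-mig-ew4" # hypothetical MIG
  }
}
```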

Apigee X Southbound Network Blueprints

Let’s dive deep into Southbound Blueprints. The diagram below provides a visual representation of how to choose the right Apigee X Southbound Blueprint by matching it to the customer profile:

southbound-apigee.png

The most suitable network blueprint will depend on the following factors that determine a customer profile:

  • Non-overlapping /22 range available per region: This indicates that the customer has a block of IP addresses (a /22 CIDR range) per Apigee X instance that does not overlap with other networks, which is crucial for establishing certain types of network connections, such as VPC Peering.
  • GCP Backends in Isolated VPCs: This means that the customer's backend services are deployed within Google Cloud Platform (GCP) Virtual Private Clouds (VPCs), and these VPCs are isolated from each other.
  • Quantity and stability of hybrid backends Apigee X needs to reach: backends that change often may require frequent configuration updates, causing operational overhead.

Southbound Blueprints are divided into two main categories:

  • Hybrid Connectivity: This refers to connecting Apigee X to backend services that are located outside of GCP, such as on-premises data centers or other cloud environments.
  • GCP Connectivity: This refers to connecting Apigee X to backend services that are located within GCP.

Now, let’s map these four customer profiles to southbound blueprints.

Customers with enough private IP address space

The deciding factor in profile selection is the customer's non-overlapping private IP address space. If a customer has enough IP address space available (a non-overlapping /22 range per Apigee instance), Private Service Access based on VPC Peering can be used.

Southbound Blueprint 1 - Mixed PSA (VPC Peering) for Hybrid and PSC for GCP backends

Customers with ample IP address space and isolated GCP backends can utilize this blueprint, which leverages PSA to access hybrid backends and PSC to access GCP backends.

  • If a customer has non-overlapping /22 ranges available, VPC Peering (PSA) can be used for Hybrid Connectivity.
  • If the customer has isolated GCP backends to reach, PSC is the recommended way to reach them.

image10.png

Key considerations:

  • Network Topology: The architecture works well with any Hub & Spoke topology with VPC Peering and Isolated VPCs.
  • PSA IP Addressing Requirements per instance:
    • A non-overlapping /22 CIDR range is required to run each Apigee X runtime.
    • A non-overlapping, available /28 CIDR range is used by Apigee X for troubleshooting purposes and cannot be customized.
  • Mutual TLS (mTLS) can be used for end-to-end encryption to on-premises environments.
  • Scalability: Up to 1000 Endpoint Attachments (PSC Endpoints) can be created per Apigee X organization.
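
A hedged Terraform sketch of the PSC leg follows: the isolated backend VPC publishes a service attachment in front of its internal load balancer, and Apigee consumes it as an endpoint attachment. Every name below is a hypothetical placeholder.

```hcl
# Producer side: publish the backend's internal forwarding rule over PSC.
resource "google_compute_service_attachment" "backend" {
  name                  = "backend-sa"      # hypothetical name
  region                = "europe-west1"
  connection_preference = "ACCEPT_AUTOMATIC"
  enable_proxy_protocol = false
  nat_subnets           = ["psc-nat-subnet"] # assumed PSC NAT subnet in the backend VPC
  target_service        = "projects/backend-project/regions/europe-west1/forwardingRules/backend-ilb" # assumed internal LB
}

# Consumer side: Apigee endpoint attachment pointing at that service attachment.
resource "google_apigee_endpoint_attachment" "backend" {
  org_id                 = "organizations/my-apigee-org" # hypothetical org
  endpoint_attachment_id = "backend-ea"
  location               = "europe-west1"
  service_attachment     = google_compute_service_attachment.backend.id
}
```

The API proxy's target endpoint then addresses the host that Apigee generates for the endpoint attachment, instead of the backend's private IP.
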
Southbound Blueprint 2 - PSA-only (VPC Peering) and VPN

If a customer uses a commonly used Hub & Spoke architecture with VPC Peering and has sufficient non-overlapping /22 ranges available, VPC Peering (PSA) can be used. However, due to the non-transitive nature of VPC Peering, Apigee X will not be able to access GCP backends in the VPC spokes unless it is attached to a Shared Services VPC that is connected via VPN, as shown in the diagram below:

image4.png

Key considerations:

  • Network Topology: This blueprint is suitable for Hub & Spoke with VPC Peering topologies. Apigee X is deployed in a Shared Services VPC connected via VPN.
  • VPN Considerations: When using VPNs, be aware of the 3 Gbps / 250k pps bandwidth limit per tunnel.
  • PSA IP addressing requirements per instance are the same:
    • A non-overlapping /22 CIDR range to run the Apigee X runtime; this range can be specified.
    • A non-overlapping, available /28 CIDR range, used by Apigee X to access the instance for troubleshooting purposes; it cannot be customized or changed.
  • Mutual TLS (mTLS) can be used for end-to-end encryption to on-premises environments.
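
As a rough Terraform sketch (names, ASN, and secrets are hypothetical), this is one side of the HA VPN pair that links the Shared Services VPC to the Hub VPC; the second tunnel on interface 1 and the BGP sessions follow the same pattern.

```hcl
# HA VPN gateway and Cloud Router in the Shared Services VPC.
resource "google_compute_ha_vpn_gateway" "services" {
  name    = "services-vpn-gw"         # hypothetical name
  region  = "europe-west1"
  network = "shared-services-network" # assumed existing VPC
}

resource "google_compute_router" "services" {
  name    = "services-router"
  region  = "europe-west1"
  network = "shared-services-network"
  bgp {
    asn = 64514                        # hypothetical private ASN
  }
}

# First of two tunnels towards the Hub VPC's HA VPN gateway.
resource "google_compute_vpn_tunnel" "to_hub_if0" {
  name                  = "services-to-hub-0"
  region                = "europe-west1"
  vpn_gateway           = google_compute_ha_vpn_gateway.services.id
  peer_gcp_gateway      = "projects/my-project/regions/europe-west1/vpnGateways/hub-vpn-gw" # assumed Hub gateway
  router                = google_compute_router.services.id
  shared_secret         = "change-me"  # hypothetical pre-shared key
  vpn_gateway_interface = 0
  # Remember the 3 Gbps / 250k pps limit per tunnel noted above.
}
```
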
Customers with IP address shortage

Large enterprise customers commonly experience IP address shortages, resulting in an insufficient number of /22 IP ranges for each Apigee X instance. For these customers, we recommend a combined approach using Private Service Connect (PSC) and Private Service Access (PSA). PSC facilitates private connectivity, while PSA enables DNS peering to your VPC, which is necessary because PSC alone doesn't support DNS resolution.

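To show what the DNS side of this combination can look like, here is a hedged Terraform sketch of a private peering zone: lookups for an assumed on-premises domain made from the Apigee-peered VPC are answered by the Hub VPC's DNS configuration. All names are hypothetical.

```hcl
# DNS peering zone: resolve corp.example.com. via the Hub VPC.
resource "google_dns_managed_zone" "onprem_peering" {
  name       = "onprem-peering"    # hypothetical name
  dns_name   = "corp.example.com." # hypothetical on-premises domain
  visibility = "private"

  # The VPC that Apigee is peered with performs the lookups...
  private_visibility_config {
    networks {
      network_url = "projects/my-project/global/networks/apigee-internal-network" # assumed
    }
  }

  # ...and the Hub VPC answers them.
  peering_config {
    target_network {
      network_url = "projects/hub-project/global/networks/hub-network" # assumed
    }
  }
}
```
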
The deciding factor in the next blueprint selection then becomes the number and volatility of on-premises/multi-cloud backends.

Southbound Blueprint 3 - Mixed PSC with TCP Internal Proxy and PSA for DNS Peering

If there are few or no on-premises/multi-cloud backends, and these backends don’t change much over time, PSC with a TCP Internal Proxy and Hybrid NEGs can be used for private connectivity, as depicted in the diagram below:

image7.png

Key Considerations:

  • Network Topology:
    • Supports any topology with Shared VPC, including Hub & Spoke and multiple Shared VPCs.
    • Private connectivity will be provided by PSC with TCP Internal Proxy:
      • Needs 1 Service Attachment and 1 TCP Proxy per on-premises backend.
      • Hybrid NEGs require IP:port configuration, and do not support FQDNs.
    • We configure PSA for DNS resolution. To avoid direct VPC peering to the Hub VPC, a Shared VPC model is used: Apigee is deployed in a service project with its own isolated internal VPC, and DNS peering is set up between this internal VPC and PSA.
  • Mutual TLS (mTLS) can be used for end-to-end encryption to on-premises environments.
  • Scalability: Up to 1000 PSC endpoints or Endpoint Attachments are allowed per organization.
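
The per-backend chain in this blueprint can be sketched in Terraform as follows (every name, zone, and address is a hypothetical placeholder): a hybrid NEG holding the on-premises IP:port, a regional internal TCP proxy in front of it, and then, as in Blueprint 1, a service attachment consumed by Apigee as an endpoint attachment.

```hcl
# Hybrid NEG: one on-premises endpoint, addressed by IP:port (no FQDN support).
resource "google_compute_network_endpoint_group" "onprem" {
  name                  = "onprem-backend-neg"
  zone                  = "europe-west1-b"
  network               = "transit-network" # assumed VPC with hybrid connectivity
  network_endpoint_type = "NON_GCP_PRIVATE_IP_PORT"
}

resource "google_compute_network_endpoint" "onprem" {
  network_endpoint_group = google_compute_network_endpoint_group.onprem.name
  zone                   = "europe-west1-b"
  ip_address             = "192.168.10.5"   # hypothetical on-prem backend
  port                   = 443
}

resource "google_compute_region_health_check" "onprem" {
  name   = "onprem-hc"
  region = "europe-west1"
  tcp_health_check {
    port = 443
  }
}

# Regional internal TCP proxy in front of the hybrid NEG.
resource "google_compute_region_backend_service" "onprem" {
  name                  = "onprem-backend"
  region                = "europe-west1"
  protocol              = "TCP"
  load_balancing_scheme = "INTERNAL_MANAGED"
  health_checks         = [google_compute_region_health_check.onprem.id]
  backend {
    group           = google_compute_network_endpoint_group.onprem.id
    balancing_mode  = "CONNECTION"
    max_connections = 100                   # hypothetical capacity
  }
}

resource "google_compute_region_target_tcp_proxy" "onprem" {
  name            = "onprem-tcp-proxy"
  region          = "europe-west1"
  backend_service = google_compute_region_backend_service.onprem.id
}
# A forwarding rule pointing at this proxy is then published through a
# google_compute_service_attachment and consumed from Apigee as an endpoint
# attachment, exactly as shown in Southbound Blueprint 1.
```
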
Southbound Blueprint 4 - PSC with SWP as a Forward Proxy (and PSA for DNS Peering)

If there are many on-premises/multi-cloud backends, and/or the number of backends is dynamic, PSC with Secure Web Proxy (SWP) as an explicit/forward proxy can be used, as depicted in the diagram below:

image1.png

Key Considerations:

  • Network Topology:
    • Supports any topology with Shared VPC, including Hub & Spoke and multiple Shared VPCs.
    • Requires DNS Peering to the Internal VPC in the Service Project if we want to reach SWP via HTTPS.
    • A single Service Attachment and SWP per region can reach all on-premises backends.
    • SWP will be configured without TLS Intercept.
    • SWP can be shared by Apigee and GCP Workloads to access the internet.
  • Mutual TLS (mTLS) can be used for end-to-end encryption to on-premises environments.
  • Cost: One SWP per region is needed, and it has an hourly cost that the TCP Internal Proxy does not have.

Visit this article, which explains how to configure Apigee Southbound with PSC and SWP.
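
For orientation, here is a hedged Terraform sketch of the two pieces specific to this blueprint: the SWP gateway itself and the environment-level forward proxy setting that sends Apigee's target traffic through it. Everything here is an assumption for illustration: the names, the address, the certificate and policy references, and the premise that Apigee reaches the SWP address through its PSC endpoint attachment.

```hcl
# Secure Web Proxy gateway (no TLS intercept: no TLS inspection policy is set).
resource "google_network_services_gateway" "swp" {
  name                                 = "swp-gateway"   # hypothetical name
  location                             = "europe-west1"
  type                                 = "SECURE_WEB_GATEWAY"
  addresses                            = ["10.128.0.99"] # hypothetical proxy address
  ports                                = [443]
  certificate_urls                     = ["my-certificate-manager-cert-id"] # assumed cert
  gateway_security_policy              = "my-gateway-security-policy-id"    # assumed allow-list policy
  network                              = "shared-vpc-network"               # assumed VPC
  subnetwork                           = "swp-subnet"                       # assumed subnet
  delete_swg_autogen_router_on_destroy = true
}

# Point the Apigee environment's forward proxy at the SWP (reached over PSC).
resource "google_apigee_environment" "prod" {
  org_id            = "organizations/my-apigee-org" # hypothetical org
  name              = "prod"
  forward_proxy_uri = "http://10.128.0.99:443"      # hypothetical SWP address as seen from Apigee
}
```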

Recap and Resources

Apigee networking offers a range of flexible blueprints to cater to diverse customer needs. Careful consideration of factors such as network topology, available IP address space, and the number and volatility of external backends is essential for selecting the appropriate blueprint.

apigee-network-blueprints.png


This Terraform code, found in the cloud-foundation-fabric repository, creates the resources needed to establish Apigee X on Google Cloud. It supports the creation of a variety of Apigee architectures.

Acknowledgements

The development of these blueprints would not have been possible without the support and expertise of Pieter Leys and Luis Cuellar. Their contributions were crucial in defining the robust and practical blueprints outlined in this article.

Comments
franblanco82
Bronze 1

I enjoyed reading this networking guide! Thanks @iigeregi 🤙🏻

iigeregi
Staff

Thanks!

aramkrishna6
Bronze 5

@iigeregi Thank you for thoughtfully laying out the networking blueprints for Apigee X and providing valuable insights into the details of networking design patterns! Any thoughts on how we can ensure optimal northbound and southbound traffic monitoring? Are there specific monitoring templates or configurations recommended for such setups?

iigeregi
Staff

Apart from what Apigee can provide, PSC Metrics can be used: https://cloud.google.com/vpc/docs/monitor-private-service-connect-connections

As well as VPC Flow logs https://cloud.google.com/vpc/docs/flow-logs enabled on interconnect or VPC subnets, together with Flow Analyzer to visualize these flows https://cloud.google.com/network-intelligence-center/docs/flow-analyzer/overview 

dgteixeira
Bronze 3

Hey @iigeregi, in our use case, even with non-overlapping /22 and /28 ranges and a lot of hybrid backends, we went with SWP, but we peered Apigee X directly with our Shared VPC host project.
This way we have "direct" access to GCP backends and we can use private DNS to reach our on-prem through Interconnect. It's even an additional option 🙂

iigeregi
Staff

Thanks Diogo, yes, I know, that is an additional great option! In fact, there are many options, but the idea of these blueprints is to give some initial architectures that will fit a high percentage of customer requirements. I'll see if I can include it 🙂
