The (not so) mysterious journey of an incoming API request in Apigee hybrid

This article was co-authored by Daniel Strebel and Joel Gauci

[NOTE for users of Apigee hybrid 1.8+] The Apigee-managed ingress gateway of Apigee hybrid introduced breaking changes for the approach described in this article. In particular, the ingress configuration via Istio resources is no longer supported. Please refer to the most recent Apigee hybrid documentation for this topic.

This article aims to shed some light on how incoming API requests are routed internally within Apigee hybrid. We explain how Apigee routes traffic to the correct environment's message processor pods and how you can control this behaviour using Kubernetes Custom Resources and the apigeectl overrides functionality. Even though all the necessary routing components are generated automatically by Apigee tooling, as described in the official installation documentation, the lower-level concepts described in this article should help you design more sophisticated topologies and troubleshoot connectivity issues in your deployments.

Overview of the Ingress Path

This section presents an overview of the ingress path and the main components of the Apigee runtime plane that are part of this journey. It also describes the different configuration options that are available when installing Apigee hybrid.

Did you know that the Apigee hybrid runtime plane is composed of different Custom Resources (CR), which play a critical role in the operation and functioning of the runtime plane?

These Custom Resources are extensions of the Kubernetes API. They are added to a cluster by creating Custom Resource Definitions (CRDs). Regardless of how they are installed, the new resources are referred to as Custom Resources to distinguish them from built-in Kubernetes resources (like pods). If you want to learn more about CRDs in Kubernetes, here is the link to the official Kubernetes documentation.

The Apigee hybrid runtime defines the following CRDs, used in the apigee namespace:

  • ApigeeOrganization
  • ApigeeEnvironment
  • ApigeeDeployment
  • ApigeeRouteConfig
  • ApigeeRoute
  • ApigeeDataStore
  • ApigeeTelemetry

These CRDs are managed by an Apigee controller manager that is part of the runtime and is installed in the apigee-system namespace during the initialization phase of the runtime installation.

You can see the pod of the apigee controller manager using the following command:

kubectl get pods -n apigee-system


Coming back to the overview of the ingress path, here is a picture that puts the different components of the runtime in perspective:

We highlight here the main components involved in the ingress path.

The different components defined in the istio-system namespace are provisioned during the Anthos Service Mesh (ASM) installation (cf. Install ASM).

The only exceptions are the cryptographic objects (private key and certificate) used by the Istio Gateway, which are created as a Kubernetes opaque secret during the Apigee runtime installation process.

The name of this secret is a concatenation of the Apigee organization name ($ORG in the previous picture) and the environment group name that has been set on this organization ($ENV_GROUP in the previous picture).
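As a quick sketch of the naming convention (the organization and envgroup names below are hypothetical placeholders):

```shell
# Hypothetical values, for illustration only.
ORG="my-org"
ENV_GROUP="test-group"

# The ingress secret name is simply the concatenation of both, joined by a dash.
SECRET_NAME="${ORG}-${ENV_GROUP}"
echo "${SECRET_NAME}"   # -> my-org-test-group
```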

The private key and certificate are defined in the Apigee hybrid overrides (YAML) configuration file.

You have two options to create these crypto objects:

  1. Using your own PKI and trusted Certificate Authority (CA)
  2. Using cert-manager, a Kubernetes operator for certificate management that is installed on the Apigee hybrid runtime cluster (cf. Install Cert Manager)

Please refer to the following community article if you want to know more about this topic: Free, trusted SSL Certificates for Apigee hybrid ingress on GKE.

Here is a table, based on the previous picture, which sums up the apigee- and istio-system-namespaced components of the hybrid runtime and the Apigee CR responsible for their creation:

| Apigee hybrid component | K8s resource name | Apigee CR |
| --- | --- | --- |
| Gateway | $ORG-$ENV_GROUP-$id | ApigeeRoute |
| VirtualService | $ORG-$ENV_GROUP-$id | ApigeeRoute |
| DestinationRule | apigee-runtime-$ORG-$ENV-$id | ApigeeDeployment |
| Service | apigee-runtime-$ORG-$ENV-$id | ApigeeDeployment |
| ReplicaSet | apigee-runtime-$ORG-$ENV-$id-$version-$code | ApigeeDeployment |
| ApigeeDeployment | apigee-runtime-$ORG-$ENV-$id | ApigeeEnvironment |
| Secret (for ingress gateway) | $ORG-$ENV_GROUP or $ORG-$ENV_GROUP-cacert | N/A |
  • $ORG: name of the Apigee organization
  • $ENV: Apigee environment
  • $ENV_GROUP: Apigee environment group
  • $id: unique identifier generated during the hybrid runtime installation
  • $version: Apigee hybrid version
    • examples: v133, v134, v140...
  • $code: code for the component type
    • example: xxvi7 for runtime type of components
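Putting the placeholders together, the generated names from the table above can be assembled like this (all values below are hypothetical; the real $id and $code are generated at install time):

```shell
# Hypothetical placeholder values, for illustration only.
ORG="my-org"; ENV="test"; ID="11b180d"; VERSION="v140"; CODE="hvszw"

# Service and DestinationRule share the same name pattern.
SERVICE_NAME="apigee-runtime-${ORG}-${ENV}-${ID}"
# The ReplicaSet additionally carries the version and component-type code.
REPLICASET_NAME="apigee-runtime-${ORG}-${ENV}-${ID}-${VERSION}-${CODE}"

echo "${SERVICE_NAME}"       # -> apigee-runtime-my-org-test-11b180d
echo "${REPLICASET_NAME}"    # -> apigee-runtime-my-org-test-11b180d-v140-hvszw
```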

It is important to note that the runtime pod (at the bottom of the previous picture) is created by a ReplicaSet and exposed by a Kubernetes service. This runtime pod contains the Apigee Message Processor (MP), which executes policies: security, mediation, traffic management and extensions.

In the next chapter, we describe how the different Apigee CRs are created, as well as the ingress gateway’s secret.

Configuration Options

In this section, we describe the different options to configure the Apigee hybrid runtime.

First, we present the Apigee hybrid runtime configuration file (overrides.yaml), then we discuss the apigeectl command and its different outputs regarding the components of the ingress path.

Overrides.yaml

If you have already installed Apigee hybrid, you may have already asked yourself this question: “why is the Apigee hybrid runtime configuration file named overrides.yaml?”

apigeectl is a command-line interface (CLI) for installing and managing Apigee hybrid in a Kubernetes cluster. For information on downloading and installing apigeectl, see Download and install apigeectl

Once apigeectl has been installed, here are the different files and directories that you can see from the root (the example below is based on version 1.3.4 of apigeectl):
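The original article showed this listing as a screenshot. As a rough sketch, an apigeectl 1.3.x home directory typically contains something like the following (exact contents vary by version, so treat this as an approximation):

```
apigeectl                 # the CLI binary itself
README.md
VERSION.txt
config/
  values.yaml             # default runtime configuration
examples/                 # sample overrides files
plugins/
  apigee-operators/       # files used during the init phase
templates/
  virtualhosts.yaml       # template used to build the ingress secret
tools/                    # helper scripts
```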

The default Apigee hybrid runtime configuration is defined in the ./config/values.yaml file.

The configuration you want to promote to your runtime cluster “overrides” a part of these values, and this is the reason for the name of the configuration file: overrides.yaml.

The list of all of the configuration properties that you can use to customize the runtime plane of your Apigee hybrid deployment is presented in the Configuration property reference doc.

Regarding the ingress path presented in this article, here are the properties of the overrides.yaml file that come into play in the creation of various kubernetes components of the runtime:

  • Virtualhosts: virtual hosts allow Apigee hybrid to handle API requests for a specified environment group. The list of configuration properties for virtual hosts is presented here. Most notably, a virtualhost entry configures the SSL certificate for the hostnames that are configured on the environment group (see below).
  • Environments: this property defines the environments to which you can deploy your API proxies. Each environment provides an isolated context for running API proxies; therefore, a runtime pod (Message Processor) only processes API traffic for a specific, dedicated environment. Environments and their related properties are presented here.

As an example, here is an extract of a simple overrides.yaml file. It emphasizes how virtual hosts and environments can be configured:
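The original article showed this extract as a screenshot. A minimal sketch of what such an extract might look like follows; the org name, certificate paths and service account paths here are hypothetical placeholders:

```yaml
org: my-org                        # hypothetical organization name

virtualhosts:
  - name: test-group               # must match the envgroup name
    sslCertPath: ./certs/fullchain.pem
    sslKeyPath: ./certs/privkey.pem

envs:
  - name: test                     # environment attached to the envgroup
    serviceAccountPaths:
      synchronizer: ./sa/synchronizer.json
      udca: ./sa/udca.json
      runtime: ./sa/runtime.json
```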

At least one hostname must be configured for each environment group (envgroup). As envgroups are defined on the Apigee management plane, the envgroup’s configuration must be done through the User Interface (UI) or the Apigee API.

In this example, the envgroup’s name is test-group. Cryptographic objects (certificate and key) are configured for each envgroup and relate to the hostnames used by the Istio ingress gateway.

Envgroups are associated with at least one environment (test in this example).

In the Apigee UI, you can see the envgroup and its hostname(s), as well as the environment(s) associated with this envgroup, as shown in the following picture:

In the example above the characteristics of the envgroup are:

  • Group name: test-group
  • Hostnames: hybrid.iloveapis.io
  • Environments: test

Should you need detailed explanations and examples of how to use envgroups and environments, please refer to the Apigee documentation on Environments and Environment Groups.

Apigeectl

In this section, we present the apigeectl CLI and the Kubernetes resources that are created during the initialization and configuration phases of the Apigee hybrid runtime installation.

Initialization phase

Before the initialization phase, it is necessary to install two types of resources on the target cluster:

  1. Cert manager
  2. Anthos Service Mesh (ASM)

All the details about the installation of these resources are provided in the Apigee hybrid documentation.

The output of the ASM installation consists of the following istio-system-namespaced components:

  • Pods: ingress gateway and istiod related pods
  • Services: istio-ingressgateway of type LoadBalancer and istiod related services
  • Deployment and ReplicaSets
  • HorizontalPodAutoscaler (HPA)
  • Secrets: these do not include the secret used on the istio ingress gateway of the hybrid runtime

The downloading and installation of the apigeectl CLI is presented here.

The command used to initialize the Apigee hybrid runtime operates in two steps:

  1. A dry-run initialization, which differs depending on the kubectl version you are using
  2. The initialization itself, if no errors have been raised during the dry-run phase

If your kubectl version is 1.17 or older, use the following dry-run command for initialization:

apigeectl init -f overrides.yaml --dry-run=true

If your kubectl version is 1.18 or newer, use the following dry-run command for initialization:

apigeectl init -f overrides.yaml --dry-run=client

If there are no errors, execute the init command as follows:

apigeectl init -f overrides.yaml

What is created during the init phase?

The apigeectl CLI uses the apigee-operators plugin files in order to install the following components:

  • Apigee Custom Resource Definitions (CRDs)
  • Apigee resources: some of them are used by the Apigee Controller Manager (Apigee Operator), such as service accounts, cluster roles and bindings… these resources are created in the apigee-system namespace
  • Apigee Admission Controller and Admission Webhooks
  • Apigee Controller Manager components, installed in the apigee-system namespace, whose aim is to execute requests originating from the Apigee Admission Controller
  • Apigee EnvoyFilter: a dedicated access log file is added to the Envoy ingress gateway using EnvoyFilter mechanism. For more details, please refer to Enriching Envoy Access Logs with Custom Data for Apigee hybrid
Here is the list of the Apigee CRDs created during the init phase:

  • apigeedatastorebackups.apigee.cloud.google.com
  • apigeedatastores.apigee.cloud.google.com
  • apigeedeployments.apigee.cloud.google.com
  • apigeeenvironments.apigee.cloud.google.com
  • apigeeorganizations.apigee.cloud.google.com
  • apigeerouteconfigs.apigee.cloud.google.com
  • apigeeroutes.apigee.cloud.google.com
  • apigeetelemetries.apigee.cloud.google.com
  • cassandradatareplications.apigee.cloud.google.com

Details of each CRD can be found in the apigee-operators.yaml file present in the apigeectl tool:

$APIGEECTL_HOME/plugins/apigee-operators/apigee-operators.yaml


...where APIGEECTL_HOME is the home directory of an apigeectl installation.

Once the init phase has been completed, you should see the “Apigee controller manager” and “Apigee resources install” pods when executing the following command:

kubectl get pods -n apigee-system

While the controller manager pod must be in the Running state, the other pod should be Completed, as it is created by a Kubernetes Job (apigee-resources-install) whose aim is to install the different types of resources described above.

Apigee runtime components configuration phase

The next step is the installation of the runtime components, including, among others, the different Apigee Custom Resources.

As for the init phase, we proceed in two steps.

If your kubectl version is 1.17 or older, use the following dry-run command for the configuration phase:

apigeectl apply -f overrides.yaml --dry-run=true

If your kubectl version is 1.18 or newer, use the following dry-run command for the configuration phase:

apigeectl apply -f overrides.yaml --dry-run=client


If there are no errors, execute the apply command as follows:

apigeectl apply -f overrides.yaml


To check the status of the deployment, run the following command:

apigeectl check-ready -f overrides.yaml


Please refer to the Apigee hybrid runtime installation documentation for more details.

Once all the pods are in the Running or Completed state, the installation of the Apigee hybrid runtime components is complete.

What is created during the configuration phase?

In this section, we focus on the runtime components related to the ingress path, which are created during the final configuration phase.

Based on the values of the configuration properties of your Apigee hybrid runtime (overrides.yaml), here are the different CRs created:

| Configuration property (overrides.yaml) | CR created | Purpose of the CR |
| --- | --- | --- |
| virtualhosts[].sslCertPath and virtualhosts[].sslKeyPath, or virtualhosts[].sslSecret | ApigeeRouteConfig | References the secret used by the Istio Gateway for each hostname of an envgroup |
| envs[] | ApigeeEnvironment | Creates an ApigeeDeployment CR for each environment-scoped resource: runtime(*), udca, synchronizer |
| virtualhosts[].name | ApigeeRoute | Contains hostnames and routing information; used to create the Istio Gateway and VirtualService |

(*) Regarding the ingress path, the ApigeeDeployment CR is responsible for creating the following components in the apigee namespace:

  • Istio DestinationRule, to route traffic from the Istio Gateway and VirtualService to the right Message Processor (MP)/runtime pod. Subsets are used to route to a specific environment and version of a runtime pod
  • Service: a service is created in front of the MP/runtime pods of each environment
  • ReplicaSet: a ReplicaSet is created for each environment of an Apigee organization to manage the MP/runtime pods dedicated to a specific environment and version. The ReplicaSet is responsible for the creation of the MP/runtime pods

Istio Gateway and VirtualService resources are created in the apigee namespace based on the ApigeeRoute CR. Gateways and VirtualServices are scoped to an envgroup of an Apigee organization.

The Gateways are applied to the Envoy proxy running on pods with the label app: istio-ingressgateway. Their specification describes the port (443) that should be exposed, the protocol to use (HTTPS), the TLS credential name, and the hostnames.

The VirtualService contains the routing information based on the basePath property of the different API proxies. This resource is automatically updated when new proxies are deployed to the environment.

Secrets of the Ingress Gateway

The private keys and certificates used by the ingress gateway are stored in a Kubernetes opaque secret. These cryptographic objects are defined in the overrides.yaml file and are transformed into a secret during the Apigee configuration step, through the virtualhosts.yaml template file.

This file is present in the apigeectl tool:

$APIGEECTL_HOME/templates/virtualhosts.yaml

...where APIGEECTL_HOME is the home directory of an apigeectl installation.
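Although this secret is generated for you, knowing its shape can help when inspecting it later. A hypothetical example of the generated opaque secret (its name is built from the $ORG and $ENV_GROUP values discussed earlier; the base64 payloads are truncated placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-org-test-group          # $ORG-$ENV_GROUP
  namespace: istio-system
type: Opaque
data:
  cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...   # base64-encoded certificate chain
  key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0t...    # base64-encoded private key
```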

API Proxy deployment with Apigee hybrid

There are two runtime components that play an active role in the deployment process of an API proxy:

  • Synchronizer: scoped at the environment level of an Apigee organization
  • Watcher: scoped at the Apigee organization level

This means there are at least as many synchronizers as there are environments defined in an organization, and there is at least one watcher per organization.

Let’s look at the exact roles of the synchronizer and the watcher...

API proxy deployment is a two-stage process in Apigee hybrid:

  • Stage #1: the synchronizer downloads the proxy revision. The MP/runtime pod polls the synchronizer and loads the proxy revision into the runtime. The watcher component polls the runtime for the deployment status and, at a specified interval, sends it to the control plane. When the control plane receives a successful deployment status, stage #2 is triggered.
  • Stage #2: the watcher polls the control plane for ingress changes (envgroup, routing information). When the watcher detects a new API proxy (and therefore a new basePath), it updates the necessary ApigeeRoute CR for Istio (Gateway and VirtualService) in order to configure the ingress. Once this configuration is successfully applied, the watcher communicates the ingress status to the control plane. ApigeeRoute CRs are created from the envgroups defined in the virtualhosts.

The ApigeeRoute CR is then able to generate/modify the Istio Gateway (new envgroup) and VirtualService (new API proxy or basePath). As the API proxy configuration has already been deployed on MP/runtime pods, the full ingress path is now operational.

Incoming Request Step by Step

The purpose of this section is to step through the end-to-end routing path of an incoming request. We look at a correctly configured setup here; if you are interested in troubleshooting possible routing errors, please check the subsequent section. The scenario assumes that we have deployed an Apigee hybrid runtime for an environment “env1” that is part of an environment group “envgroup1”.

The environment group has a hostname configured as “api.envgroup1.example.com”. We also have an API proxy deployed in “env1” with a base path of “/my-proxy/v1”.

1. Client issues a request

The request path starts with a client making a request against the API proxy running on Apigee hybrid. In our example this could look something like this:

curl https://api.envgroup1.example.com/my-proxy/v1/something

2. Resolve server IP via DNS

The client application resolves the hostname api.envgroup1.example.com to an IP address. This IP address corresponds to the load balancer of the ingress service of the runtime cluster.

3. TLS Handshake with the ASM ingress

The client app performs a TLS handshake with the ASM ingress. The ASM ingress has the TLS credentials for each environment group hostname configured via the Gateway object (this resource is automatically generated by Apigee). You can see the Gateway configuration via the Kubernetes API:

 

kubectl get gateway -n apigee -o yaml

This should list something like the following configuration, which points to the Kubernetes secret that contains the TLS credentials:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata: …
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - hosts:
    - api.envgroup1.example.com
    port:
      name: apigee-https-443
      number: 443
      protocol: HTTPS
    tls:
      credentialName: xxx-kvm-envgroup1
      mode: SIMPLE

4. ASM Ingress routing

Once the request reaches the ASM ingress, the traffic is decrypted and re-encrypted to be sent to the runtime pods of environment “env1”. The routing information for this is contained in the VirtualService resource in the apigee namespace.

This resource is also automatically generated for every environment group by the apigeectl tool and the user-provided overrides, as described above. The VirtualService resource is automatically updated when you deploy proxies to an environment that is part of the environment group.

kubectl get virtualservice -n apigee -o yaml

The VirtualService references the Gateway resource from above and contains the routing rules that direct the traffic to the correct destination. The hosts array again specifies the hostname, to match only requests for a specific environment group. The uri matches are used to match the base paths of the proxies, such that all traffic for a given basepath is routed to the correct environment.

 

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata: …
spec:
  gateways:
  - ...-envgroup1-f8181c8
  hosts:
  - api.envgroup1.example.com
  http:
  - match:
    - uri:
        regex: /my-proxy/v1(/[^/]+)*/?
    route:
    - destination:
        host: apigee-runtime-...--env1-11b180d.apigee.svc.cluster.local
        port:
          number: 8443
        subset: v140-hvszw
      weight: 100
    timeout: 300s
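You can rehearse the effect of the uri match locally. Istio evaluates this regex as a full-string match, which we can approximate with `grep -Ex`; the regex below is copied from the generated VirtualService, while the request paths are hypothetical:

```shell
# Full-string regex match, approximating Istio's uri.regex behaviour.
REGEX='/my-proxy/v1(/[^/]+)*/?'

for path in /my-proxy/v1 /my-proxy/v1/something /my-proxy/v2/other; do
  if printf '%s\n' "$path" | grep -qEx "$REGEX"; then
    echo "$path -> routed to the env1 runtime service"
  else
    echo "$path -> no matching route"
  fi
done
```

Only the paths under the proxy basepath /my-proxy/v1 match; anything else falls through and is rejected by the ingress.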

Looking at the destination entry you will see that the destination is defined by a specific subset identifier.

This is again automatically managed for you; it is used to identify different versions of the Apigee runtime and helps with rolling updates.

You can inspect the destination rule for a specific runtime by running this command:

kubectl get destinationrules apigee-runtime-DR_NAME -n apigee -o yaml

This should contain something like this:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata: …
spec:
  host: apigee-runtime-...--env1-11b180d.apigee.svc.cluster.local
  subsets:
  - labels:
      com.apigee.apigeedeployment: apigee-runtime-...--env1-11b180d
      com.apigee.revision: v140-hvszw
      com.apigee.version: v140
    name: v140-hvszw
  trafficPolicy:
    tls:
      mode: SIMPLE


Troubleshooting

Scenario 1: Proxy is not reachable

Problem

A proxy is deployed to an Apigee environment but is not reachable via the ingress hostname/IP.

Troubleshooting

  • Can we see the call in the ingress logs?

Details about how to access the Envoy access logs of the Apigee hybrid runtime are presented in this community article.

  • Is the environment attached to an environment group?
curl -H "Authorization: Bearer $TOKEN" https://apigee.googleapis.com/v1/organizations/my-org/envgroups/envgroup1/attachments
  • Does the host header correspond to a hostname that is registered for that environment group?
curl -H "Authorization: Bearer $TOKEN" https://apigee.googleapis.com/v1/organizations/my-org/envgroups/envgroup1

If not, set the correct hostname(s) using the Apigee API or UI

  • Is there a VirtualService resource that tells the ASM ingress:
  • To accept traffic for the environment group hostname
  • To route traffic to the correct environment runtime based on the proxy basepath
kubectl get virtualservice -n apigee -o yaml

And look for something like this

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata: …
spec:
  gateways:
  - ...-envgroup1-f8181c8
  hosts:
  - api.envgroup1.example.com
  http:
  - match:
    - uri:
        regex: /my-proxy/v1(/[^/]+)*/?
    route:
    - destination:
        host: apigee-runtime-...--env1-11b180d.apigee.svc.cluster.local
        port:
          number: 8443
        subset: v140-hvszw
      weight: 100
    timeout: 300s
  • Can the proxy be called from within the runtime pod?
kubectl exec -it $(kubectl get pods -n apigee -l org=${ORG},env=${ENV},app=apigee-runtime --output=jsonpath='{.items[0].metadata.name}' | head -n 1) -n apigee -- curl -k https://localhost:8443/httpbin/v0/anything

If this fails, the proxy was most likely not deployed properly. Try to undeploy and redeploy the proxy from the Apigee API or the UI.

Scenario 2: TLS credentials missing or invalid

Problem

When calling the API, we get a TLS error. When doing a curl against the API, the error message looks like this:

LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to api.envgroup1.example.com

Troubleshooting

This usually means that the certificate referenced on the Gateway is missing or invalid. Check the secrets for your gateways with the following command:

kubectl get gateway -n apigee -o jsonpath='{range .items[*].spec.servers[*]} {.tls.credentialName}{"\t"}{.hosts[0]}{"\n"}{end}'

This lists the TLS credential secrets together with the hostnames that they are used for:

xxx-envgroup1 api.envgroup1.example.com

To validate that all referenced secrets exist you can do the following:

for SECRET_REF in $(kubectl get gateway -n apigee -o jsonpath='{range .items[*].spec.servers[*]}{.tls.credentialName}{" "}{end}'); do kubectl get secret -n istio-system $SECRET_REF; done;
  • If you use a secret reference in your Apigee overrides YAML, you should make sure that the secret is created. If there are any errors about missing secrets, these secrets need to be created.
  • If you use a file path for your certificate and key within your overrides YAML, you should verify that the secret was properly generated by apigeectl. You can use the --print-yaml flag together with the --dry-run=client flag on apigeectl to see whether the secret is created.

If the secret exists, check if it is valid:

kubectl get secret -n istio-system [SECRET_NAME] --template={{.data.cert}} | base64 -d | openssl x509 -text
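If you want to rehearse this inspection pipeline without a cluster, you can generate a throwaway self-signed certificate (the hostname below is hypothetical) and run it through the same base64/openssl steps that the secret data goes through:

```shell
# Create a short-lived self-signed cert; in a real cluster this PEM
# comes out of the Kubernetes secret instead.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/tls.key -out /tmp/tls.crt \
  -days 1 -subj "/CN=api.envgroup1.example.com" 2>/dev/null

# Secrets store the PEM base64-encoded; decoding and inspecting it mirrors
# the kubectl --template | base64 -d | openssl x509 pipeline above.
base64 < /tmp/tls.crt | base64 -d | openssl x509 -noout -subject -enddate
```

The subject line should show the hostname you expect the ingress to serve; the enddate line tells you whether the certificate has expired.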

A useful command to see which certificate is returned by the ingress is the following, based on openssl:

openssl s_client -connect $(kubectl get svc -l app=istio-ingressgateway -o custom-columns=:status.loadBalancer.ingress[0].ip -n istio-system):443

Conclusion

We demonstrated the routing components of Apigee hybrid and how they can be configured using the apigeectl overrides functionality. Apigee users are advised not to create any of the intermediary Kubernetes resources themselves and to stick to the official apigeectl overrides process whenever possible. Furthermore, the Apigee-generated routing resources, such as the Istio custom resources, are managed by the Apigee controllers and do not allow manual modification: any edit is automatically overridden to prevent config drift that would make it impossible to incorporate changes like newly deployed API proxies. The underlying concepts introduced in this article should help you understand the internal routing processes and can be helpful when troubleshooting connectivity issues.

Version history
Last update:
‎01-05-2024 02:15 AM