How to upgrade Apigee MIGs without disrupting traffic

TL;DR: How to upgrade your Apigee network bridge MIG VMs to a newer image without disrupting your existing API traffic.

Problem

Some Apigee customers are running into compliance issues because the image running in their Apigee network bridge MIGs has reached end of life (EOL).

This article explains how to upgrade the MIG VMs without disrupting API traffic.

There are three main strategies for updating the image in a MIG:

  • Rolling updates.
  • Canary updates.
  • Blue/green deployment.

You can read more about it in the official documentation.

For this particular use case, I would recommend a canary update or a blue/green deployment.

You can also decide to migrate your current setup to Private Service Connect (see the documentation).

Canary update

You can follow this update strategy when you want to update only a subset of the VMs to the new image. This is recommended so that you can verify that everything is working as expected before moving forward with the update.

You can follow the steps listed in the documentation.

Update Options

If you want the update to be applied immediately, choose the Automatic (also called proactive) update type.

You will have to define Maximum surge and Maximum unavailable. Maximum surge limits how many additional instances are created above your target size during the update, letting you control how fast the update is applied and how much extra capacity you pay for. For the Apigee MIGs you can leave the default value.

The Maximum unavailable option allows you to configure how many instances are unavailable at any time during an automated update. For example, if you set maxUnavailable to 5, then only 5 instances are taken offline for updating at a time. Use this option to control how disruptive the update is to your service and to control the rate at which the update is deployed.

You should always set Maximum unavailable to 0 to avoid any disruption of API traffic.
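
To make this concrete, here is a sketch of how those two options map to gcloud flags when starting a rolling update (the $OLD_MIG_NAME and $NEW_MIG_NAME variables are defined later in this article):

# Roll out the new template with up to 3 extra VMs and zero VMs taken offline
gcloud compute instance-groups managed rolling-action start-update $OLD_MIG_NAME \
    --version=template=$NEW_MIG_NAME \
    --max-surge=3 \
    --max-unavailable=0 \
    --region=$REGION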

The first step is to create a new instance template with the new image or machine type. My example follows the setup from the Apigee documentation.

The instructions in this section use environment variables to refer to repeatedly used strings. We recommend that you set these before continuing:

PROJECT_ID=your-project-id
NEW_MIG_NAME=apigee-mig-latest   # You can choose a different name if you like
VPC_NAME=default       # If you are using a shared VPC, use the shared VPC name
VPC_SUBNET=default     # Private Google Access must be enabled for this subnet
REGION=us-central1        # The same region as your Apigee runtime instance
APIGEE_ENDPOINT=10.16.0.2 # See the tip below for details on getting this IP address value
AUTH=$(gcloud auth print-access-token)

Tip: To get the correct Apigee instance IP address, use the Instances API, as the following example shows:

No data residency:

curl -i -X GET -H "Authorization: Bearer $AUTH" \
  "https://apigee.googleapis.com/v1/organizations/$PROJECT_ID/instances"

Data residency:

curl -i -X GET -H "Authorization: Bearer $AUTH" \
  "https://$CONTROL_PLANE_LOCATION-apigee.googleapis.com/v1/organizations/$PROJECT_ID/instances"

Apigee responds with details about the instance; for example:

{
  "instances": [
    {
      "name": "my-runtime-instance",
      "location": "us-west1",
      "host": "10.16.0.2",
      "port": "443"
    },
    ...
  ]
}

The instance IP address, which you can assign to the $APIGEE_ENDPOINT environment variable, is the value of the host field. In this example, the value is 10.16.0.2.
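
If you have jq installed, you can extract the host value and set the variable in one step (a small convenience sketch, not part of the official instructions):

APIGEE_ENDPOINT=$(curl -s -H "Authorization: Bearer $AUTH" \
  "https://apigee.googleapis.com/v1/organizations/$PROJECT_ID/instances" \
  | jq -r '.instances[0].host')   # Picks the first instance; adjust the index in multi-region setups
echo $APIGEE_ENDPOINT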

In a multi-region installation, the API call returns instance details for each regional location. In that case, you need to create a separate managed instance group (MIG) for each location when following the steps in the next section.

Create the new instance template:

gcloud compute instance-templates create $NEW_MIG_NAME \
--project $PROJECT_ID \
--region $REGION \
--network $VPC_NAME \
--subnet $VPC_SUBNET \
--tags=https-server,apigee-mig-proxy,gke-apigee-proxy \
--machine-type e2-medium --image-family debian-12 \
--image-project debian-cloud --boot-disk-size 20GB \
--no-address \
--metadata ENDPOINT=$APIGEE_ENDPOINT,startup-script-url=gs://apigee-5g-saas/apigee-envoy-proxy-release/latest/conf/startup-script.sh
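
Optionally, you can sanity-check that the template was created with the expected image before proceeding:

gcloud compute instance-templates describe $NEW_MIG_NAME \
  --project $PROJECT_ID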

In this example I’m assuming that your existing Apigee MIG is named `apigee-proxy-us-central1` and is running an older Debian 10 image. You can check the name of your existing Apigee MIG under the load balancer backend section in the GCP console or under the Compute Engine section.

Set an environment variable with the old MIG Name:

OLD_MIG_NAME=apigee-proxy-us-central1
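
If you are not sure of the exact name, you can also list the MIGs in your project from the CLI:

gcloud compute instance-groups managed list --project $PROJECT_ID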

Once the new instance template is created, we can start the update process:

gcloud compute instance-groups managed rolling-action start-update INSTANCE_GROUP_NAME \
    --version=template=CURRENT_INSTANCE_TEMPLATE_NAME \
    --canary-version=template=NEW_TEMPLATE,target-size=SIZE \
    [--zone=ZONE | --region=REGION]

Replace the following:

  • INSTANCE_GROUP_NAME: the instance group name.
  • CURRENT_INSTANCE_TEMPLATE_NAME: the instance template that the instance group is currently running.
  • NEW_TEMPLATE: the new template that you want to canary.
  • SIZE: the number or percentage of instances that you want to apply this update to. You must apply the target-size property to the --canary-version template. You can only set a percentage if the group contains 10 or more instances.
  • ZONE: for zonal MIGs, provide the zone.
  • REGION: for regional MIGs, provide the region.

For example, the following command performs a canary update that rolls out apigee-mig-latest to 50% of instances in the group:

gcloud compute instance-groups managed rolling-action start-update $OLD_MIG_NAME \
    --version=template=$OLD_MIG_NAME \
    --canary-version=template=$NEW_MIG_NAME,target-size=50% \
    --region=$REGION

You can monitor the status of the MIG VMs by running this command:

gcloud compute instance-groups managed describe $OLD_MIG_NAME \
  --region=$REGION
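
To see which template each individual VM is running during the canary, you can also list the instances of the group:

gcloud compute instance-groups managed list-instances $OLD_MIG_NAME \
    --region=$REGION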

Rolling forward the canary update

Once you have verified that the canary instances handle traffic as expected, roll the update forward to the entire group:
gcloud compute instance-groups managed rolling-action start-update $OLD_MIG_NAME \
    --version=template=$NEW_MIG_NAME \
    --region=$REGION
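
You can wait for the update to finish from the CLI; the following command blocks until all VMs in the group run the new template (a convenience step, not strictly required):

gcloud compute instance-groups managed wait-until $OLD_MIG_NAME \
    --version-target-reached \
    --region=$REGION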

Blue/green deployment

Another approach, perhaps the most conservative ("safe") one, is a blue/green deployment: deploy a new MIG from the new template and add it to your existing backend service.

The instructions in this section use environment variables to refer to repeatedly used strings. We recommend that you set these before continuing:

PROJECT_ID=your-project-id
NEW_MIG_NAME=apigee-mig-green   # You can choose a different name if you like
VPC_NAME=default       # If you are using a shared VPC, use the shared VPC name
VPC_SUBNET=default     # Private Google Access must be enabled for this subnet
REGION=us-central1        # The same region as your Apigee runtime instance
APIGEE_ENDPOINT=10.16.0.2 # See the tip in the canary update section for details on getting this IP address value
AUTH=$(gcloud auth print-access-token)

Tip: To get the correct Apigee instance IP address, use the Instances API, as shown in the tip in the canary update section above. Assign the value of the host field to $APIGEE_ENDPOINT.

Create a new instance template:

gcloud compute instance-templates create $NEW_MIG_NAME \
--project $PROJECT_ID \
--region $REGION \
--network $VPC_NAME \
--subnet $VPC_SUBNET \
--tags=https-server,apigee-mig-proxy,gke-apigee-proxy \
--machine-type e2-medium --image-family debian-12 \
--image-project debian-cloud --boot-disk-size 20GB \
--no-address \
--metadata ENDPOINT=$APIGEE_ENDPOINT,startup-script-url=gs://apigee-5g-saas/apigee-envoy-proxy-release/latest/conf/startup-script.sh

Create a new MIG (green):

gcloud compute instance-groups managed create $NEW_MIG_NAME \
  --project $PROJECT_ID \
  --base-instance-name apigee-mig-green \
  --size 2 \
  --template $NEW_MIG_NAME \
  --region $REGION
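
Before wiring the new MIG to the load balancer, you can confirm that its VMs have come up:

gcloud compute instance-groups managed list-instances $NEW_MIG_NAME \
  --project $PROJECT_ID \
  --region $REGION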

Set autoscaling:

gcloud compute instance-groups managed set-autoscaling $NEW_MIG_NAME \
  --project $PROJECT_ID \
  --region $REGION \
  --max-num-replicas 3 \
  --target-cpu-utilization 0.75 \
  --cool-down-period 90

Set named ports for the new MIG:

gcloud compute instance-groups managed set-named-ports $NEW_MIG_NAME \
  --project $PROJECT_ID \
  --region $REGION \
  --named-ports https:443
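
As a quick check, you can verify the named port mapping:

gcloud compute instance-groups get-named-ports $NEW_MIG_NAME \
  --project $PROJECT_ID \
  --region $REGION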

Add the new MIG (green) to the backend service. Make sure you add it to the Apigee proxy backend service. (In this example, the Apigee load balancer backend service is called apigee-proxy-backend):

gcloud compute backend-services add-backend apigee-proxy-backend \
  --project $PROJECT_ID \
  --instance-group $NEW_MIG_NAME \
  --instance-group-region $REGION \
  --balancing-mode UTILIZATION \
  --max-utilization 0.8 \
  --global

Once you verify that everything is working as expected, you can remove the old MIG from the backend service. An easy way to check whether the new MIG is proxying traffic is to confirm that its health check is passing under the Load Balancer backend service configuration in the console.
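
Alternatively, you can query the health of all backends from the CLI:

gcloud compute backend-services get-health apigee-proxy-backend \
  --project $PROJECT_ID \
  --global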


You can now remove the old MIG from the backend service:

gcloud compute backend-services remove-backend apigee-proxy-backend \
  --project $PROJECT_ID \
  --instance-group $OLD_MIG_NAME \
  --instance-group-region $REGION \
  --global
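
Once traffic has fully drained from the old MIG and you are confident in the new deployment, you can optionally delete the old MIG to avoid paying for idle VMs:

gcloud compute instance-groups managed delete $OLD_MIG_NAME \
  --project $PROJECT_ID \
  --region $REGION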

 
