
Help Needed: HTTP to HTTPS Redirection Issue with GKE Ingress

Hello,

I am hosting my application on GKE and configured an Ingress in YAML to create the load balancer, with a FrontendConfig to redirect HTTP to HTTPS. This configuration worked well for the past year, but a couple of months ago I noticed that the HTTP to HTTPS redirection stopped working.

Here are the steps I've taken to troubleshoot the issue:

  • Checked my load balancer's health checks, and all were green.
  • Reviewed the Ingress logs but did not find anything abnormal.
  • Tried deleting the FrontendConfig and recreating it. I wasn't able to do this cleanly because the Ingress automatically recreates the underlying resources; when I try to edit them, I receive an error stating that the static IP is invalid because it is in use by another resource. My understanding is that the Ingress keeps the IP reserved even after the FrontendConfig is deleted.
  • Deleted and reapplied the Ingress configuration, but the issue persists.

When I use `curl -I http://domain-name`, it returns a 200 status code.
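
For comparison, when the redirect is working I would expect the same check to return a 301 pointing at HTTPS, roughly like this (output abbreviated; the host shown is the one from my Ingress below):

curl -I http://staging.the-grower.com
HTTP/1.1 301 Moved Permanently
Location: https://staging.the-grower.com/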

Here is my Ingress YAML configuration:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: staging-ingress
  namespace: staging
  annotations:
    networking.gke.io/FrontendConfig: https-redirect-grower
    kubernetes.io/ingress.global-static-ip-name: staging-web-static-ip2
    networking.gke.io/managed-certificates: staging-the-grower-com-managed-cert
spec:
  rules:
    - host: staging.the-grower.com
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: frontend
                port:
                  number: 80
          - path: /api/*
            pathType: ImplementationSpecific
            backend:
              service:
                name: backend
                port:
                  number: 3033

---
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: https-redirect-grower
  namespace: staging
spec:
  sslPolicy: gke-ingress-ssl-policy
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
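
For completeness, these are the commands I have been using to confirm that both objects exist in the cluster and that the Ingress annotation references the FrontendConfig (just a sketch of my checks):

kubectl get frontendconfig https-redirect-grower -n staging -o yaml
kubectl describe ingress staging-ingress -n staging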

Any idea why this configuration stopped working and how to solve it?

Thank you for your assistance.


An Ingress object is associated with one or more Service objects, each of which is associated with a set of Pods. Can you share your Service manifest?

Are you doing container-native LB through Ingress or through Standalone Zonal NEGs? 

Hi dar10,
Thank you for your response. Here are the manifests for my Service objects:

apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: staging
spec:
  type: NodePort
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 3033
      targetPort: 3033
      name: http
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: staging
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      name: http

Regarding load balancing, I am using container-native load balancing through the GKE Ingress controller. I have not set up standalone Zonal NEGs.
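
If it helps, this is roughly how I double-check whether NEGs were created for the two services; as far as I understand, the NEG controller writes a cloud.google.com/neg-status annotation once a NEG exists (sketch only):

kubectl get service frontend -n staging -o jsonpath='{.metadata.annotations.cloud\.google\.com/neg-status}'
kubectl get service backend -n staging -o jsonpath='{.metadata.annotations.cloud\.google\.com/neg-status}'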

I appreciate your help in diagnosing this issue.

Best regards,

It looks like the frontend service manifest is missing the annotation below in the metadata section:

cloud.google.com/neg: '{"ingress": true}'

Also, make sure the service port number in the ingress manifest matches the port number in the service manifest (80).
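
If you prefer not to edit the manifest, a rough alternative is to add the annotation with kubectl (a sketch; adjust the quoting to your shell):

kubectl annotate service frontend -n staging cloud.google.com/neg='{"ingress": true}'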

Thanks! 

I've added the annotation under metadata, and the frontend manifest now looks like the following:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: staging
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      name: http

And I've double-checked that the ports in the ingress manifest and the service manifests match.

Did you run the `kubectl apply -f <your-resource-kind-manifest>` commands?

You should run that command once for your Deployment, once for your Service, and once for your Ingress resource kinds.
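
For example (file names are illustrative placeholders for your own manifests):

kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml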

Upon completion, you should have:

1. A global external HTTP(S) load balancer (classic).
2. A target HTTP(S) proxy.
3. A backend service in each zone.
4. A global health check attached to the backend service.
5. A Network Endpoint Group (NEG). The endpoints in the NEG and the endpoints of the Service are kept in sync.
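
As a rough way to confirm these, you can list the provisioned resources with gcloud (a sketch; no project-specific names assumed):

gcloud compute forwarding-rules list
gcloud compute target-http-proxies list
gcloud compute target-https-proxies list
gcloud compute backend-services list
gcloud compute health-checks list
gcloud compute network-endpoint-groups list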

A detailed walkthrough of this setup is described in my book:

Cabianca, Dario. Google Cloud Platform (GCP) Professional Cloud Network Engineer Certification Companion: Learn and Apply Network Design Concepts to Prepare for the Exam (Certification Study Companion Series) (p. 388). Apress. Kindle Edition.

Hello again,

I've deleted both the ingress and frontend service, and applied the changes using `kubectl apply -f ingress` and `kubectl apply -f frontend`. 

The issue persists; there is still no redirect from HTTP to HTTPS.
After applying the Ingress, it created the following:

  • Load Balancer:
    • Type: Application (Classic)
    • Access Type: External
    • Protocol: HTTP and HTTPS
    • Scope: Global
    • It has 3 linked backends, all of which are healthy
  • 3 Backend Services:
    • Type: Backend service (Classic)
    • Scope: Global
    • Protocol: HTTP
  • Frontend:
    • Type: Application (Classic)
    • Scope: Global
    • Addresses: one on port 80 and one on port 443
    • IP version: IPv4
    • Protocol: HTTP
    • Network tier: Premium
  • Forwarding Rules
  • Target Proxy
  • Certificates
And here are the YAML manifests for the services and the Ingress:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: staging
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      name: http
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: staging
spec:
  type: NodePort
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 3033
      targetPort: 3033
      name: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: staging-ingress
  namespace: staging
  annotations:
    networking.gke.io/FrontendConfig: https-redirect-grower
    kubernetes.io/ingress.global-static-ip-name: staging-web-static-ip2
    networking.gke.io/managed-certificates: staging-the-grower-com-managed-cert
spec:
  rules:
    - host: staging.the-grower.com
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: frontend
                port:
                  number: 80
          - path: /api/*
            pathType: ImplementationSpecific
            backend:
              service:
                name: backend
                port:
                  number: 3033


---
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: https-redirect-grower
  namespace: staging
spec:
  sslPolicy: gke-ingress-ssl-policy
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT

Note: The Service’s annotation, cloud.google.com/neg: '{"ingress": true}', enables container-native load balancing. However, the load balancer is not created until you create an Ingress for the Service.

Cabianca, Dario. Google Cloud Platform (GCP) Professional Cloud Network Engineer Certification Companion: Learn and Apply Network Design Concepts to Prepare for the Exam (Certification Study Companion Series) (p. 387). Apress. Kindle Edition.