Hello,
I am trying to get my web app published from my GKE cluster behind an HTTPS load balancer, but I could not get it to work; I always get "error: server error".
I can confirm the pods are running and Apache is serving the pages locally and directly from each pod. I have checked the following:
1. Apache is running and serving directly from the pods.
2. Firewall rules for the load balancer IP ranges are open for my whole network (I was not sure how to add a rule only for a cluster; the docs show how to add a tag to a VM instance, not to a cluster of pods).
3. I created a layer 4 load balancer with a simple config and a temporary IP; it works on port 80 and shows the web page from all pods.
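On the firewall point: instead of opening the whole network, the rule only needs to admit Google's documented health-check/load-balancer source ranges (130.211.0.0/22 and 35.191.0.0/16) to the nodes' NodePort range. A sketch, assuming the default network; the target tag here is a placeholder (GKE nodes get an auto-generated tag like gke-&lt;cluster&gt;-&lt;hash&gt;-node, which you can read off any node VM):

```shell
# Allow Google Cloud load balancer health checks to reach the NodePort
# range on the cluster nodes only (tag is an example, not a real one).
gcloud compute firewall-rules create allow-gclb-health-checks \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:30000-32767 \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --target-tags=gke-mycluster-node
```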
---------------------------------------------------------------------------------------
Now I am trying to use a layer 7 load balancer with HTTPS.
I have reserved a global static IP and pointed a domain at it (I called it ingress-webapps),
and configured the Ingress and Service according to what I found in the documentation.
The HTTPS load balancer is created, but it does not actually serve my pages; instead it returns a 502 for both HTTP and HTTPS.
I would love some help getting this going. I could not find any more support resources for this case; every blog/forum post I found is either outdated or does not cover the scope of this issue directly.
My Service YAML is:
apiVersion: v1
kind: Service
metadata:
  name: mctwordpress
spec:
  selector:
    app: mctwordpress
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
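One thing worth checking here: with the GCE ingress and container-native load balancing, the Service usually needs to be annotated for NEGs (on newer VPC-native clusters this is on by default, on older ones it is not). A hedged sketch of the same Service with the annotation added:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mctwordpress
  annotations:
    # Ask GKE to create network endpoint groups for this Service, so the
    # ingress-created load balancer can route directly to pod IPs.
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    app: mctwordpress
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
```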
My Ingress YAML is:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-webapps
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "ingress-webapps"
spec:
  rules:
  - http:
      paths:
      - path: "/*"
        pathType: ImplementationSpecific
        backend:
          service:
            name: mctwordpress
            port:
              number: 80
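A side note on the HTTPS part: an Ingress with no TLS configuration only provisions an HTTP frontend, so for HTTPS you would add either spec.tls with a certificate Secret or a GKE-managed certificate. A sketch using a ManagedCertificate; the certificate name and domain here are placeholders, not from the original post:

```yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: webapps-cert          # example name
spec:
  domains:
  - example.com               # replace with the domain pointed at the static IP
```

The certificate is then attached by adding the annotation `networking.gke.io/managed-certificates: "webapps-cert"` to the Ingress metadata; provisioning only completes once the domain's DNS record resolves to the reserved static IP.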
In my Deployment I created a readiness probe under the container spec:
ports:
- containerPort: 80
readinessProbe:
  httpGet:
    scheme: HTTP
    path: /live.html
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 5
live.html
html]# cat live.html
server is live
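A common cause of 502s with the GCE ingress is the load balancer's own health check failing; it is a separate check from the kubelet readiness probe, and by default it probes `/`. On GKE you can pin the health-check path explicitly with a BackendConfig; the resource name below is an example, not from the original post:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: webapp-backendconfig   # example name
spec:
  healthCheck:
    type: HTTP
    requestPath: /live.html    # same path the readiness probe serves
    port: 80
```

It is attached by annotating the Service with `cloud.google.com/backend-config: '{"default": "webapp-backendconfig"}'`; the load balancer's backend then health-checks the page the pods are known to serve.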
Thanks in advance for any help!
Have you checked, using port forwarding, whether the service is accessible?
I did not check that; how do I set up port forwarding?
You can check with this command; it forwards a local port to the service port:
kubectl port-forward svc/mctwordpress 8080:80
Check this link to learn more: https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
I could not run this command; it gives a "timeout":
)$ kubectl port-forward svc/mctwordpress 8080
error: timed out waiting for the condition
But I tried accessing each pod individually on the LAN, and the web page loads correctly from each pod's LAN IP.
root@mct ~]# curl -v 10.104.2.18|head -n 1
* Rebuilt URL to: 10.104.2.18/
* Trying 10.104.2.18...
* TCP_NODELAY set
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to 10.104.2.18 (10.104.2.18) port 80 (#0)
> GET / HTTP/1.1
> Host: 10.104.2.18
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Fri, 03 Feb 2023 21:12:44 GMT
< Server: Apache/2.4.38 (Debian)
< Vary: Accept-Encoding
< Transfer-Encoding: chunked
< Content-Type: text/html; charset=UTF-8
I have a small update on this issue:
it seems the configuration does not propagate the network endpoint addresses and ports from the NEG to the load balancer.
I ran kubectl get pod -o wide to get each pod IP from the cluster,
then added them manually to the load balancer backend configuration, and now the web page loads correctly.
But this is counterproductive to my goal of having this fully automated: if I put the cluster in Autopilot mode to scale, it will not update the NEG correctly, and the change will not be reflected in the load balancer backend service until I manually look up each pod IP and add it by hand...
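Rather than patching backends by hand, it may help to check whether the NEG controller ever populated the groups. A diagnostic sketch; the NEG name and zone in the last command are placeholders to fill in from the list output:

```shell
# Did the NEG controller write its status annotation onto the Service?
kubectl get svc mctwordpress \
  -o jsonpath='{.metadata.annotations.cloud\.google\.com/neg-status}'

# List the NEGs GKE created and how many endpoints each one holds
gcloud compute network-endpoint-groups list

# Inspect a single NEG's endpoints (substitute the real name and zone)
gcloud compute network-endpoint-groups list-network-endpoints NEG_NAME --zone=ZONE
```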
Any insight from someone at Google about what I am doing wrong here?