Hi, I can't explain the network behavior of the LB IP.
This is the LB configuration:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-ingress-nginx-controller LoadBalancer 10.8.10.xxx 35.238.xxx.xxx 80:30827/TCP,443:31284/TCP 110m
This LB routes requests to an NGINX deployment, which is mapped via a selector.
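For anyone reproducing this, the selector-to-pod mapping can be checked with something like the following (service name taken from the output above, namespace assumed to be default):

# pod IPs the Service selector resolves to (assuming namespace "default")
kubectl get endpoints nginx-ingress-ingress-nginx-controller -n default
# selector configured on the Service
kubectl get svc nginx-ingress-ingress-nginx-controller -n default -o jsonpath='{.spec.selector}'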
I have verified that the LB IP is configured on the Kubernetes node as a NAT rule:
sudo iptables -t nat -L -n -v | grep 35.238.xxx.xxx
13 760 KUBE-EXT-DVY6TQGQIGJDUZSA tcp -- * * 0.0.0.0/0 35.238.xxx.xxx /* default/nginx-ingress-ingress-nginx-controller:https loadbalancer IP */ tcp dpt:443
2 100 KUBE-EXT-VLCQLM6DY3B5RMCV tcp -- * * 0.0.0.0/0 35.238.xxx.xxx /* default/nginx-ingress-ingress-nginx-controller:http loadbalancer IP */ tcp dpt:80
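The KUBE-EXT chain referenced above can be followed further to see how kube-proxy translates the LB IP towards the service endpoints, for example (chain name copied from the output above; exact chain layout may differ by kube-proxy version):

# show where the loadbalancer chain jumps next (typically a KUBE-SVC-... chain)
sudo iptables -t nat -L KUBE-EXT-DVY6TQGQIGJDUZSA -n -v
# list the rules kube-proxy programs for this service, including the DNAT targets
sudo iptables -t nat -S | grep nginx-ingress-ingress-nginx-controller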
If I try to connect from the internet to the service, I can see that the request arrives at the Kubernetes node with the destination IP equal to the LB IP:
22:26:49.256026 IP 87.1.xxx.xxx.57683 > 35.238.xxx.xxx.443: Flags [S], seq 116677831, win 65535, options [mss 1452,nop,wscale 6,nop,nop,TS val 467467743 ecr 0,sackOK,eol], length 0
This is confirmed by the connection being NATted by the node:
sudo conntrack -L | grep 87.1.xxx.xxx
tcp 6 86387 ESTABLISHED src=87.1.xxx.xxx dst=35.238.xxx.xxx sport=57683 dport=443 src=10.4.1.7 dst=10.4.1.1 sport=443 dport=23026 [ASSURED] mark=0 use=1
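The reply tuple in that conntrack entry (dst=10.4.1.7, dport=443) should be the ingress controller pod the node DNATted the connection to; that can be cross-checked with something like:

# find the pod that owns the DNAT target IP seen in the conntrack entry above
kubectl get pods -A -o wide | grep 10.4.1.7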
My question is: how is it possible that the Kubernetes node receives the request with the destination IP of the LB?
I would expect that IP (35.238.xxx.xxx) to be active on the load balancer itself. I pictured the architecture as follows:
client > LB (35.238.xxx.xxx) > kubernetes_node:nodePort (NGINX Controller) > Service (Application deployment)
Thanks in advance for the help
Have a look at https://cloud.google.com/load-balancing/docs/network. The TCP load balancer configured by GKE is actually a pass-through load balancer, so the backend (in your case nginx) receives packets with the original source and destination IP addresses set by the client; the LB does not terminate the connection or rewrite the destination, and the node's kube-proxy iptables rules handle the translation to the pod.
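A quick way to see this on the GCP side is to look at the forwarding rule GKE created for the Service; a pass-through external network LB shows up as a regional forwarding rule with loadBalancingScheme EXTERNAL pointing at the node instances (rule name and region below are placeholders to fill in from the list output):

# find the forwarding rule that owns the Service's external IP
gcloud compute forwarding-rules list | grep 35.238.xxx.xxx
# inspect it; loadBalancingScheme: EXTERNAL indicates the pass-through network LB
gcloud compute forwarding-rules describe FORWARDING_RULE_NAME --region REGION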