Hello,
I've been trying to get VRRP (IP proto 112) working for the last few days, with no joy.
So far, I've tried to get traffic flowing between pods within a single node (and also across multiple nodes), with both the default and a custom VPC. I used a cluster with the default configuration and also one with Dataplane V2 enabled. TCP/UDP/ICMP IPv4 traffic flows just fine. I'm unable to find any documentation that sheds light on how to get VRRP working through the primary/default VPC assigned to eth0. VRRP works just fine over additional networks assigned to other interfaces.
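For example, basic reachability between two pods can be checked like this (the pod name, peer IP, and ports are placeholders, and the checks assume ping and nc are available in the image):
$ kubectl exec pod-a -- ping -c 1 10.8.0.25      # ICMP
$ kubectl exec pod-a -- nc -zv 10.8.0.25 8080    # TCP
$ kubectl exec pod-a -- nc -zvu 10.8.0.25 5353   # UDP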
Has anyone encountered similar limitations?
I believe VRRP is associated with Keepalived. As per the documentation[1], Keepalived uses the IP Virtual Server (IPVS) kernel module to provide Layer 4 (transport layer) load balancing, and it implements the Virtual Router Redundancy Protocol (VRRP) to achieve high availability. I suggest reviewing the documentation[1] for more information on how Keepalived relates to VRRP and how to configure it.
As for using Keepalived with VRRP for the Kubernetes control plane, the documentation[2] shows how Keepalived is used for the control plane, how VRRP is configured, and how Keepalived is deployed as a static pod.
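For illustration, a minimal VRRP instance in keepalived.conf could look something like the sketch below; the interface name, router ID, priority, and virtual IP are placeholders, not values taken from this thread:
cat <<'EOF' | sudo tee /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER                # BACKUP on the peer
    interface eth0              # interface that should carry the proto 112 advertisements
    virtual_router_id 51        # must match on both peers
    priority 100                # use a lower priority on the BACKUP peer
    advert_int 1                # advertisement interval in seconds
    virtual_ipaddress {
        10.8.0.100/32           # floating VIP; placeholder address
    }
}
EOF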
[1] https://github.com/shawnsong/kubernetes-handbook/blob/master/load-balancer/setup-keepalived.md
Thank you @VannGuce.
Actually, I had pods with keepalived configured and running, but traffic was stuck on both nodes.
Today I found that the CILIUM_FORWARD chain and the CILIUM_OUTPUT_raw chain (in the raw table) in iptables contain rules matching lxc+ interfaces, while no such interfaces exist with Dataplane V2; the pod interfaces are named gke* instead.
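The lxc+ matches can be seen by dumping the two chains, e.g.:
$ sudo iptables -S CILIUM_FORWARD | grep lxc
$ sudo iptables -t raw -S CILIUM_OUTPUT_raw | grep lxc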
$ ip -br link
lo UNKNOWN 00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP>
eth0 UP 42:01:0a:80:00:2b <BROADCAST,MULTICAST,UP,LOWER_UP>
docker0 DOWN 02:42:7a:18:c1:62 <NO-CARRIER,BROADCAST,MULTICAST,UP>
cilium_net@cilium_host UP 2a:59:ba:e6:35:1d <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP>
cilium_host@cilium_net UP 26:d9:2f:af:3b:45 <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP>
gke28dda7e5534@if2 UP ba:bc:27:f3:0e:be <BROADCAST,MULTICAST,UP,LOWER_UP>
gkec3b8ab422a1@if2 UP de:1f:2a:52:97:28 <BROADCAST,MULTICAST,UP,LOWER_UP>
gkebacabba5ca7@if2 UP 0e:4d:fe:9e:88:11 <BROADCAST,MULTICAST,UP,LOWER_UP>
gke4caf43461d6@if2 UP b6:7e:f9:94:48:78 <BROADCAST,MULTICAST,UP,LOWER_UP>
gke021bec221f9@if2 UP 1a:72:b4:8c:d1:9e <BROADCAST,MULTICAST,UP,LOWER_UP>
gke21461b5a532@if2 UP 7e:9d:e8:74:f6:ac <BROADCAST,MULTICAST,UP,LOWER_UP>
gkef7a867c3028@if2 UP 2a:fe:67:7a:c3:b9 <BROADCAST,MULTICAST,UP,LOWER_UP>
I added a few rules matching gke+ interfaces on all nodes, and proto 112 traffic started flowing between pods.
sudo iptables -I CILIUM_FORWARD 7 -o gke+ -m comment --comment "cilium: any->cluster on gke+ forward accept" -j ACCEPT
sudo iptables -I CILIUM_FORWARD 7 -i gke+ -m comment --comment "cilium: cluster->any on gke+ forward accept (nodeport)" -j ACCEPT
sudo iptables -I CILIUM_OUTPUT_raw 2 -t raw -o gke+ -m mark --mark 0xa00/0xfffffeff -m comment --comment "cilium: NOTRACK for proxy return traffic" -j CT --notrack
sudo iptables -I CILIUM_OUTPUT_raw 5 -t raw -o gke+ -m mark --mark 0x800/0xe00 -m comment --comment "cilium: NOTRACK for L7 proxy upstream traffic" -j CT --notrack
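To confirm that the advertisements actually reach the other side, proto 112 traffic can be captured on the node, e.g.:
$ sudo tcpdump -ni any 'ip proto 112'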
This raises the question: why do rules referencing non-existent lxc+ interfaces exist in the CILIUM_FORWARD chain? Is that intended, and if so, when are they used?