
How do veth interfaces connect to each other on a GKE node?

I have been playing with a GKE cluster recently and have a question about its CNI. I have read in GCP documentation and other articles that there is a bridge to which all the veth interfaces connect. However, I cannot find it on the nodes.

See below: there is no cbr0 interface when I use brctl to list the bridge devices.

gke-xxxxxxxxx /home/uuuuuuu # brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242fd0b0cf4	no		
gke-xxxxxxxxxx /home/uuuuuuu # 

veth6eba4cdf: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1460
        inet 10.188.2.1  netmask 255.255.255.255  broadcast 10.188.2.1
        inet6 fe80::c4e3:b0ff:fe5f:63da  prefixlen 64  scopeid 0x20<link>
        ether c6:e3:b0:5f:63:da  txqueuelen 0  (Ethernet)
        RX packets 518695  bytes 133530347 (127.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 461561  bytes 118274839 (112.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth8bcf1494: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1460
        inet 10.188.2.1  netmask 255.255.255.255  broadcast 10.188.2.1
        inet6 fe80::70cb:c4ff:fe8c:a747  prefixlen 64  scopeid 0x20<link>
        ether 72:cb:c4:8c:a7:47  txqueuelen 0  (Ethernet)
        RX packets 50  bytes 3455 (3.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 28  bytes 2842 (2.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethbb2135c7: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1460
        inet 10.188.2.1  netmask 255.255.255.255  broadcast 10.188.2.1
        inet6 fe80::1469:daff:fea0:8b5b  prefixlen 64  scopeid 0x20<link>
        ether 16:69:da:a0:8b:5b  txqueuelen 0  (Ethernet)
        RX packets 216393  bytes 79901345 (76.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 231128  bytes 58157122 (55.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vetheee4e8e3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1460
        inet 10.188.2.1  netmask 255.255.255.255  broadcast 10.188.2.1
        inet6 fe80::ec6c:3bff:fef3:70c2  prefixlen 64  scopeid 0x20<link>
        ether ee:6c:3b:f3:70:c2  txqueuelen 0  (Ethernet)
        RX packets 301000  bytes 39175118 (37.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 294063  bytes 606562084 (578.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

What is behind these veth interfaces?

The doc I referred to in the question is here.
Thanks in advance

glen_yu
Google Developer Expert

Is your GKE cluster running in VPC-native mode? And what settings did you enable? Do you have Network Policy enabled, or did you install a separate CNI such as Cilium (or maybe you enabled Dataplane V2)? The choices you make when creating a GKE cluster affect which components it uses for networking.

 

For example, I believe the default networking mode is VPC-native, which means that in addition to assigning a primary CIDR range for your nodes, you assign secondary CIDR ranges for Pods and Services. You can read more about VPC-native networking here, but essentially your Pods are routable to other Services, VMs, Pods, etc. as if they were VMs on your VPC.
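If that's the case, one quick way to confirm it on the node is to look at the routing table rather than the bridges: with no bridge in play, each Pod veth should be reached through its own host route. A rough sketch (the veth name below is taken from the output in the question; yours will differ):

```shell
# Run on the GKE node itself (e.g. via SSH).

# With VPC-native networking there is no cbr0; instead each Pod IP shows up
# as a host-scoped route pointing straight at a veth interface:
ip route show

# The other end of each veth pair sits inside the Pod's network namespace;
# the 'link-netnsid' field in the detailed output identifies that peer namespace:
ip -d link show veth6eba4cdf
```

So "behind" each veth is simply the Pod's own network namespace, with the node routing to it directly instead of switching through a bridge.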

 

The purpose of the cbr0 bridge is to route traffic between containers, but with VPC-native networking there's no need for that, so cbr0 isn't created. Also, as per the note in the documentation:

"The virtual network bridge cbr0 is only created if there are Pods which set hostNetwork: false"  

I haven't really tried it personally, but you could probably create a GKE cluster that uses the routes-based networking mode, then create a Deployment whose Pods have hostNetwork: false (the default), and see what happens there.
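A hypothetical sketch of that experiment (cluster name and zone are placeholders; --no-enable-ip-alias is the flag that selects routes-based rather than VPC-native networking):

```shell
# Create a routes-based (non-VPC-native) cluster.
gcloud container clusters create routes-test \
    --zone us-central1-a \
    --no-enable-ip-alias

# Run a Pod; hostNetwork defaults to false, which per the quoted note is the
# condition under which cbr0 gets created.
kubectl create deployment nginx --image=nginx

# Then SSH to a node and list the bridges again; the docs suggest cbr0
# should appear this time:
brctl show
```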

 

In general, though, I think you'd want to use VPC-native networking, so I wouldn't expect to see cbr0.

 

Hope that helps!

 
