
SingleStack IPv6 LoadBalancer Issues

I'm using the Strimzi Kafka operator (https://github.com/strimzi/strimzi-kafka-operator) to deploy a cluster with both IPv4 and IPv6 listeners. I've confirmed with the maintainer that everything looks fine as far as they can tell; you can follow that conversation if you're interested.

Here's a snippet of what I'm trying to apply:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: esnet-kafka
  annotations:
    strimzi.io/node-pools: enabled
    strimzi.io/kraft: enabled
  labels:
    app: kafka
spec:
  kafka:
    version: 3.7.0
    replicas: 6
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
      - name: external
        port: 9094
        type: loadbalancer
        tls: true
        authentication:
          type: tls
        configuration:
          ipFamilyPolicy: SingleStack
          ipFamilies:
              - IPv4
      - name: externalv6
        port: 9095
        type: loadbalancer
        tls: true
        authentication:
          type: tls
        configuration:
          ipFamilyPolicy: SingleStack
          ipFamilies:
              - IPv6

The last listener, externalv6, is the most relevant one. The behavior I'm seeing is very strange: right after the initial deployment everything seems to work correctly, and I get an IPv6 load balancer with both internal and external IPv6 addresses.
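For reference, this is roughly how I check the generated Service right after the rollout (just a sketch; the namespace and Service name are the ones from the manifests below):

kubectl -n kafka get svc esnet-kafka-kafka-externalv6-bootstrap \
  -o jsonpath='{.spec.ipFamilyPolicy}{"\n"}{.spec.ipFamilies}{"\n"}{.status.loadBalancer.ingress}{"\n"}'

At that point it shows SingleStack / IPv6 and an IPv6 ingress address, which matches what I described above.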

After a few minutes the IPv6 address disappears and an IPv4 address is assigned instead. Does anyone have any insight into what could be causing this? Here's the load balancer Service:

kubectl get svc esnet-kafka-kafka-externalv6-bootstrap
NAME                                     TYPE           CLUSTER-IP                          EXTERNAL-IP      PORT(S)          AGE
esnet-kafka-kafka-externalv6-bootstrap   LoadBalancer   2600:2d00:0:4:4afb:e970:5bb5:9a54   104.198.50.144   9095:32183/TCP   20m
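To try to catch whatever is rewriting it, I've been watching the Service and checking its events (nothing exotic, just a sketch):

kubectl -n kafka get svc esnet-kafka-kafka-externalv6-bootstrap -o yaml --watch
kubectl -n kafka describe svc esnet-kafka-kafka-externalv6-bootstrap
kubectl -n kafka get events --field-selector involvedObject.name=esnet-kafka-kafka-externalv6-bootstrap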

The full YAML of the Service can be seen below:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2024-07-07T23:19:24Z"
  labels:
    app: kafka
    app.kubernetes.io/instance: esnet-kafka
    app.kubernetes.io/managed-by: strimzi-cluster-operator
    app.kubernetes.io/name: kafka
    app.kubernetes.io/part-of: strimzi-esnet-kafka
    strimzi.io/cluster: esnet-kafka
    strimzi.io/component-type: kafka
    strimzi.io/kind: Kafka
    strimzi.io/name: esnet-kafka-kafka
  name: esnet-kafka-kafka-externalv6-bootstrap
  namespace: kafka
  ownerReferences:
  - apiVersion: kafka.strimzi.io/v1beta2
    blockOwnerDeletion: false
    controller: false
    kind: Kafka
    name: esnet-kafka
    uid: f4913405-0dcd-4dce-9afa-c40630bb1e2f
  resourceVersion: "12181"
  uid: 5f861754-923f-4dd6-a30e-bf0804a156e5
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 2600:2d00:0:4:4afb:e970:5bb5:9a54
  clusterIPs:
  - 2600:2d00:0:4:4afb:e970:5bb5:9a54
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv6
  ipFamilyPolicy: SingleStack
  ports:
  - name: tcp-externalv6
    nodePort: 32183
    port: 9095
    protocol: TCP
    targetPort: 9095
  selector:
    strimzi.io/broker-role: "true"
    strimzi.io/cluster: esnet-kafka
    strimzi.io/kind: Kafka
    strimzi.io/name: esnet-kafka-kafka
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 104.198.50.144

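Note that spec.ipFamilies still says IPv6 even though the load-balancer ingress IP is now IPv4. To see which controller last touched the object, the managed fields can be dumped (sketch):

kubectl -n kafka get svc esnet-kafka-kafka-externalv6-bootstrap -o yaml --show-managed-fields
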
I should also mention that the original IPv6 address allocated to the service is reachable, but connecting to the cluster through it fails.
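When that connection fails, I also check which addresses Strimzi is actually advertising for the external listeners via the Kafka resource status (a sketch; I believe the addresses show up under status.listeners, but treat the exact field path as an assumption):

kubectl -n kafka get kafka esnet-kafka -o jsonpath='{.status.listeners}{"\n"}'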

Any ideas? I tried kube-dns and Cloud DNS with the same results. Is there something I could check that would shed some light on what is going on?
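One thing I haven't fully ruled out is the dual-stack configuration of the cluster and subnet on the GKE side. Roughly (CLUSTER, REGION, and SUBNET are placeholders for my actual names, and the field paths are my assumption of where the stack settings live):

gcloud container clusters describe CLUSTER --region REGION \
  --format='value(ipAllocationPolicy.stackType)'
gcloud compute networks subnets describe SUBNET --region REGION \
  --format='value(stackType,ipv6AccessType)'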
