GKE using CSI Driver with standard-rwx

Hi everyone,

I had been working with GKE and standard-rwo storage, but now I need to use standard-rwx so that a PVC can be shared by pods running on different nodes.

I managed to configure my GKE deployment with the CSI driver enabled, and I can see standard-rwx among my StorageClass options.
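
For reference, this is how I checked that the class is available (a plain kubectl call):

kubectl get storageclass

standard-rwx is listed there with the filestore.csi.storage.gke.io provisioner.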

I created a pvc.yml like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ include "odoo.fullname" . }}-data-pvc
  labels:
    {{- include "odoo.labels" . | nindent 4 }}
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx
  resources:
    requests:
      storage: 1Ti

But when deploying, my pod complains with this error:
Warning  FailedMount  3m8s (x4 over 13m)  kubelet  MountVolume.MountDevice failed for volume "pvc-c1cdd0f2-a9ee-4639-88d2-af8711c4d735" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
Warning  FailedMount  113s (x6 over 13m)  kubelet  Unable to attach or mount volumes: unmounted volumes=[odoo-data], unattached volumes=[], failed to process volumes=[]: timed out waiting for the condition
I read the documentation at https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/filestore-csi-driver but haven't found the root of the problem. I hope someone can point me in the right direction so I can use standard-rwx.

Thanks in advance!

Can you share/post your StorageClass definition for `standard-rwx`?
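
For example, the output of:

kubectl get storageclass standard-rwx -o yaml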

Also, is your GKE cluster using the `default` network or a custom network?

Hi @garisingh 

This is the definition:
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    components.gke.io/component-name: filestorecsi
    components.gke.io/component-version: 0.10.14
    components.gke.io/layer: addon
  creationTimestamp: "2024-01-09T21:48:38Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    k8s-app: gcp-filestore-csi-driver
  managedFields:
  - apiVersion: storage.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:allowVolumeExpansion: {}
      f:metadata:
        f:annotations:
          .: {}
          f:components.gke.io/component-name: {}
          f:components.gke.io/component-version: {}
          f:components.gke.io/layer: {}
        f:labels:
          .: {}
          f:addonmanager.kubernetes.io/mode: {}
          f:k8s-app: {}
      f:parameters:
        .: {}
        f:tier: {}
      f:provisioner: {}
      f:reclaimPolicy: {}
      f:volumeBindingMode: {}
    manager: kube-addon-manager
    operation: Update
    time: "2024-01-09T21:48:38Z"
  name: standard-rwx
  resourceVersion: "494"
  uid: e8d4526b-423c-46bc-848f-8b3c064cbe2a
parameters:
  tier: standard
provisioner: filestore.csi.storage.gke.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

The network is defined as follows in GKE, using Terraform:
resource "google_container_cluster" "primary" {
  name     = "${local.stage}-cluster"
  location = var.region

  deletion_protection = false
  
  release_channel {
    channel = "REGULAR"
  }

  enable_autopilot = true

  network    = google_compute_network.vpc.name
  subnetwork = google_compute_subnetwork.subnet.name

  ip_allocation_policy {
    services_ipv4_cidr_block      = "10.108.0.0/20"
  }

  addons_config {

    gce_persistent_disk_csi_driver_config {
      enabled = true
    }

  }

}
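
The referenced VPC and subnet are defined separately; roughly like this (a sketch, where the names and CIDR range are placeholders rather than the real values):

resource "google_compute_network" "vpc" {
  name                    = "${local.stage}-vpc"  # placeholder name
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "subnet" {
  name          = "${local.stage}-subnet"  # placeholder name
  ip_cidr_range = "10.0.0.0/16"            # placeholder range
  region        = var.region
  network       = google_compute_network.vpc.id
}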

Thanks a lot!

I believe you'll need to create your own StorageClass, as the built-in ones assume your clusters are using the "default" network. (See here for more info.)

You'll need to create a StorageClass like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-example
provisioner: filestore.csi.storage.gke.io
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  tier: standard
  network: $NETWORK_NAME

You'd replace $NETWORK_NAME with the value you set for google_compute_network.vpc.name in your Terraform configuration.
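
The PVC from the original post would then point at the new class instead of standard-rwx; a minimal sketch (PVC name simplified from the Helm template):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: odoo-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: filestore-example  # the custom class carrying the network parameter
  resources:
    requests:
      storage: 1Ti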

We have the same problem. I enabled the Filestore CSI driver, and it automatically created several storage classes:

NAME                        PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
enterprise-multishare-rwx   filestore.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   2d6h
enterprise-rwx              filestore.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   2d6h
premium-rwo                 pd.csi.storage.gke.io          Delete          WaitForFirstConsumer   true                   59d
premium-rwx                 filestore.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   2d6h
standard                    kubernetes.io/gce-pd           Delete          Immediate              true                   59d
standard-rwo (default)      pd.csi.storage.gke.io          Delete          WaitForFirstConsumer   true                   59d
standard-rwx                filestore.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   2d6h
zonal-rwx                   filestore.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   2d6h

When I create a PVC with standard-rwx, the PV is created as soon as the Pod using the PVC starts, but the Pod doesn't manage to mount it. I get this event:

MountVolume.MountDevice failed for volume "pvc-a02dcb94-0867-4dc1-aa09-b92108e66a63" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
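
One thing worth checking is whether the Filestore instance backing the PV was actually created; a quick check, assuming the gcloud CLI is pointed at the right project:

gcloud filestore instances list

If the instance exists but the mount still times out, the likely suspect is the network, i.e. the instance living on a different VPC than the nodes.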

We've got the same error.

@oliv Please double-check that the network you provided in the StorageClass and your cluster's network are the same.
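
A quick way to compare the two; a sketch, assuming kubectl and gcloud are pointed at the right cluster and project (replace CLUSTER_NAME and REGION with your own):

# network parameter the StorageClass passes to the Filestore CSI driver
# (empty output means it falls back to the "default" network)
kubectl get storageclass standard-rwx -o jsonpath='{.parameters.network}'

# network the GKE cluster actually runs on
gcloud container clusters describe CLUSTER_NAME --region REGION --format="value(network)"

If the two values differ, the Filestore instance is created on a network the nodes can't reach, and the mount times out exactly as shown above.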
