
NetApp volume will no longer mount

I had a NetApp volume mounted on my Google Compute Engine VM for a few weeks and it worked fine. Today I noticed that a simple df command was hanging on the VM. After investigating, I found the volume was no longer responding, so I unmounted it. Now when I try to mount the volume again, the mount command just hangs.

Here is how I try to mount it, this worked just fine before:
mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=4.1,tcp 10.54.0.4:/datashare  /datashare

I tried to ping 10.54.0.4 from my server, but it did not work. Should ping to 10.54.0.4 work?

What else can I do to debug this?
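Since both ping and mount hang, one quick check is whether the volume's NFS endpoint is reachable over TCP at all. This is a minimal, generic sketch (not NetApp-specific); 10.54.0.4 is the volume IP from the mount command above, and 2049 is the standard NFS port:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timeouts, refused connections, unreachable routes
        return False

# NFS over TCP listens on port 2049. If this prints False, the problem is
# network routing to the volume, not the NFS server or the mount options.
print(tcp_reachable("10.54.0.4", 2049))
```

If this returns False while the volume state is READY, the issue is almost certainly in the VPC routing/peering rather than on the NetApp side.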


Some other notes:
To create my first storage pool and NetApp volume, I first had to set up networking:

I pressed the SET UP CONNECTION button; a popup window appeared:
Step 1 already showed a check mark next to ENABLE SERVICE NETWORKING API.
Step 2 offered:
SELECT ONE OR MORE IP RANGES or CREATE A NEW ONE
or
USE AN AUTOMATICALLY ALLOCATED IP RANGE <- I selected this one and then pressed the CONTINUE button.
Step 3 showed:
Network = default
Allocated IP range = default-ip-range (automatically allocated)
A CREATE CONNECTION button, which I pressed.
After a long wait I saw:
Private services access connection for network default has been successfully created. You will now be able to use the same network across all your project's managed services. If you would like to change this connection, please visit the Networking page.

That Networking page is the VPC NETWORK page in the GCP console.

On the PRIVATE SERVICES ACCESS tab, under ALLOCATED IP RANGES, I saw:
name= default-ip-range
Internal IP range= 10.54.0.0/20
service provider = netapp.servicenetworking.goog
connection name= sn-netapp-prod

under the PRIVATE CONNECTIONS TO SERVICE tab:
connection name= sn-netapp-prod
assigned allocation= default-ip-range
service producer= netapp.servicenetworking.goog
export custom routes= enabled
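A sanity check on the values above: the volume IP (10.54.0.4 from the mount command) must fall inside the allocated private services range (10.54.0.0/20), otherwise the VPC has no route to it. This can be confirmed with Python's standard ipaddress module, using only the values from this post:

```python
import ipaddress

allocated = ipaddress.ip_network("10.54.0.0/20")   # default-ip-range from the VPC page
volume_ip = ipaddress.ip_address("10.54.0.4")      # volume IP from the mount command

# The volume address must be inside the range that is peered to
# netapp.servicenetworking.goog for the default network to route to it.
print(volume_ip in allocated)  # → True
```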

ACCEPTED SOLUTION

I enabled the NetApp API. Then, when I went to create my first storage pool via the GUI, some first-time network setup was done by walking through the GUI: allocating an IP range for a private network and creating a private services access connection for peering. When that was done and the volume was working, here is how the networking looked:
Screenshot from 2024-09-16 10-27-57.png

A few weeks later I installed BACKUP AND DR, which also required allocating an IP range for a private network and creating a private services access connection for peering, just as with NetApp above. During the networking setup for Backup, I chose to reuse the network items that were created for NetApp; since they were being offered, I assumed that would be OK. In hindsight, if you are setting up anything new that requires these network items, I think it is best to allocate a new IP range and peering for that application. Here is how the networking looks after installing BACKUP AND DR:
Screenshot from 2024-09-16 10-25-48.png

Notice in the picture below that the peering subnet ranges changed from 172.24.224.0/28 and 172.24.224.0/28 to 172.24.224.0/28 and 172.24.224.64/26. Note that this change did not occur until the Backup appliance VM was deployed during the BACKUP install process, and this is what broke routing to the NetApp volume IP address!
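The route change above can be checked mechanically with the stdlib ipaddress module. This sketch uses only the CIDRs from the screenshots: the original pair are the identical /28 block, while after the Backup install the second route moved to a disjoint block, so traffic that used the old addresses no longer matches it:

```python
import ipaddress

before = [ipaddress.ip_network(c) for c in ("172.24.224.0/28", "172.24.224.0/28")]
after  = [ipaddress.ip_network(c) for c in ("172.24.224.0/28", "172.24.224.64/26")]

# Before: both routes cover the same /28 (172.24.224.0 - 172.24.224.15).
print(before[0].overlaps(before[1]))  # → True

# After: the second route covers 172.24.224.64 - 172.24.224.127,
# a disjoint block, so the two ranges no longer overlap at all.
print(after[0].overlaps(after[1]))    # → False
```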

SOLUTION
To fix the fact that the NetApp volume IP address was no longer pingable from our VM, I disabled the BACKUP API, then deleted the servicenetworking-googleapis-com VPC NETWORK PEERING that was created during the BACKUP install. Deleting that peering also had the effect of removing its corresponding entry under PRIVATE CONNECTIONS TO SERVICES.

After those were removed, the NetApp volume was working again; we could ping the volume IP address and the routes looked like they did before.


4 REPLIES

We can retrieve this data about the volume using this command:
   gcloud netapp volumes describe datashare --project=xxxxxx --location=xxxxxx

backupConfig:
  backupChainBytes: 'xxxxxxx'
  backupPolicies:
  - projects/xxxxxx/locations/us-central1/backupPolicies/xxxxxxxxxxxx
  backupVault: projects/xxxxx/locations/xxxxx/backupVaults/xxxx
  scheduledBackupEnabled: true
capacityGib: '100'
createTime: '2024-07-23T01:20:48.985Z'
encryptionType: SERVICE_MANAGED
exportPolicy:
  rules:
  - accessType: READ_WRITE
    allowedClients: 0.0.0.0/0
    hasRootAccess: 'true'
    kerberos5ReadOnly: false
    kerberos5ReadWrite: false
    kerberos5iReadOnly: false
    kerberos5iReadWrite: false
    kerberos5pReadOnly: false
    kerberos5pReadWrite: false
    nfsv3: false
    nfsv4: true
mountOptions:
- export: /datashare
  exportFull: 10.54.0.4:/datashare
  instructions: |-
    Setting up your instance
    Open an SSH client and connect to your instance.
    Install the nfs client on your instance.
    On Red Hat Enterprise Linux or SuSE Linux instance:
    sudo yum install -y nfs-utils
    On an Ubuntu or Debian instance:
    sudo apt-get install nfs-common

    Mounting your volume
    Create a new directory on your instance, such as "/datashare":
    sudo mkdir /datashare
    Mount your volume using the example command below:
    sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=4.1,tcp 10.54.0.4:/datashare /datashare
    Note. Please use mount options appropriate for your specific workloads when known.
  protocol: NFSV4
name: projects/xxxxxx/locations/xxxxxx/volumes/datashare
network: projects/xxxxxx/global/networks/default
protocols:
- NFSV4
restrictedActions:
- DELETE

securityStyle: UNIX
serviceLevel: STANDARD
shareName: datashare
state: READY
stateDetails: Available for use
storagePool: xxxxxxxx
unixPermissions: '0770'
usedGib: '1'
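If you script against this output, the server address can be pulled straight out of the exportFull line. A sketch with the stdlib only, where describe_output is a trimmed stand-in for the captured gcloud output above:

```python
import re

# Stand-in for the captured output of:
#   gcloud netapp volumes describe datashare ...
describe_output = """\
mountOptions:
- export: /datashare
  exportFull: 10.54.0.4:/datashare
state: READY
"""

# Pull the server IP and export path out of the exportFull line so a
# reachability check can target the volume IP directly.
match = re.search(r"exportFull:\s*([\d.]+):(\S+)", describe_output)
server_ip, export_path = match.group(1), match.group(2)
print(server_ip, export_path)  # → 10.54.0.4 /datashare
```

For real scripting, `gcloud ... --format=json` piped into a JSON parser is more robust than regexing the YAML-ish default output.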


Do I need to ENABLE the servicenetworking-googleapis-com connection circled below in red?
If yes, how was this working before?

ksnip_20240903-155533.png
ksnip_20240903-155438.png
ksnip_20240903-155321.png

Also, I cannot traceroute to the volume IP. Does that make sense?
# traceroute 10.54.0.4
traceroute to 10.54.0.4 (10.54.0.4), 30 hops max, 60 byte packets
1 * * *
2 * * *
3 * * *
4 * * *
5 * * *
