
Filestore directory becomes inaccessible/undeletable when containing long link - "Remote I/O error"

Summary:

I recently created a Compute Engine instance (Ubuntu 24.04 boot image) and successfully created and mounted a Filestore share (NFSv4) via NFS/mount.
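For context, the share was mounted along these lines (the server IP, share name, and mount point below are illustrative placeholders, not my actual values):

# Mount the Filestore share over NFSv4.1.
sudo mkdir -p /mnt/filestore
sudo mount -t nfs -o vers=4.1 10.0.0.2:/share1 /mnt/filestore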

When restoring my data from a 500 GB+ .tar file, I found that some (just a few) directories could neither be listed nor deleted.
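The restore itself was a plain extraction into the mounted share, along these lines (archive path illustrative):

# Extract the backup archive onto the Filestore mount.
cd /mnt/filestore
tar -xvf /path/to/backup.tar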

Details:

For the problematic directory "foo" in the restored ./bar/foo/, I can run

$ cd bar 

and

ls -1 .

which produces "foo". In addition,

cd ./bar/foo

works, but then

ls 

results in  

reading directory '.': Remote I/O error

while trying to remove foo with

 cd ./bar; sudo rm -rf foo

(or a variety of similar actions) results in

rm: cannot remove 'foo': Remote I/O error

I CAN mv/rename "foo" to something else (e.g., "baz"), but then I cannot delete the "something else".

A bit of exploration creating directories, files, and links shows that after moving foo to a new name, I can recreate a new subdirectory called foo and then create regular files and links in the recreated foo. However, when I create a symbolic link in foo AND that link's target name is longer than 127 characters (!), the problem manifests: any preexisting contents of foo become unreadable and foo itself becomes undeletable (as above).
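Concretely, the pattern that triggers it looks like this (a minimal sketch; the mount point /mnt/filestore and all names are illustrative):

# Regular files and short-target links in a fresh directory are fine.
cd /mnt/filestore
mkdir -p bar/foo
touch bar/foo/ok.txt

# Build a symlink target longer than 127 characters (it need not exist);
# printf '%0.s' repeats the literal path fragment 8 times (~200 chars).
long_target=$(printf '/very/long/absolute/path/%0.s' {1..8})

# Creating the link appears to succeed, but afterwards the directory is
# broken: listing it yields "reading directory '.': Remote I/O error".
ln -s "$long_target" bar/foo/longlink
ls bar/foo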

This is not intermittent; I have over a dozen examples of this behavior.

Request (Help!):

  1. How can I manage links with targets longer than 127 characters on Filestore with NFSv4.1, so that I can usefully restore my tar files? The archive command (tar -xvf) seems to complete without error, and the archive contains a few links with long targets, generally long absolute paths. (A sketch for spotting such links follows this list.)
  2. How can I delete these old directories and release their content (short of a backup and restore to a new file share), given that any attempt to remove them (or rename and then remove them) results in "rm: cannot remove 'foo': Remote I/O error"? [The alternative I've considered is a backup and restore to a new Filestore share, and the backup may simply not save the problematic links.]
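For what it's worth, to spot the archive members that would trigger this before extracting, one can scan the verbose tar listing for symlinks with long targets. A sketch (archive name illustrative; it assumes no spaces in the paths, since it splits fields on whitespace):

# In `tar -tvf` output, symlink lines start with 'l' and end with
# "name -> target", so the target is the last field.
tar -tvf backup.tar | awk '/^l/ { if (length($NF) > 127) print length($NF), $(NF-2), "->", $NF }'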

Note this problem does not arise on AWS EC2/EFS/NFSv4, nor on local directories in my local Ubuntu 20.04 VM.

Many thanks in advance,

ACCEPTED SOLUTION

Resolution!

Indeed, it looks like this was a server-side defect. Based on the reports I received, it was an NFS software issue that affected regional shares in us-central1. The problematic software version was 3.26.0.1, and the problem was fixed by an update to 3.27.0.4 that was rolled out a few days after my report.

The files generated by the script (and my migration) were indeed there (albeit inaccessible) and became visible (and deletable) once the server software was updated. My thanks to Google support and the Filestore team.
