Summary:
I recently created a Compute Engine instance (boot image: Ubuntu 24.04) and successfully created and mounted a Filestore share (NFSv4) via NFS/mount.
When restoring my data from a 500GB+ .tar file, I found that some (just a few) directories could not be listed nor deleted.
Details:
For a problematic directory "foo" in the restored ./bar/foo/, I can run
$ cd bar
and
$ ls -1 .
which produces "foo". In addition,
$ cd ./bar/foo
works, but then
$ ls
results in
reading directory '.': Remote I/O error
Trying to remove foo with
$ cd ./bar; sudo rm -rf foo
(or a variety of similar actions) results in
rm: cannot remove 'foo': Remote I/O error
I CAN mv/rename "foo" to something else (e.g. "baz"), but then I cannot delete the renamed directory either.
A bit of exploration with creating directories, files, and links shows that after moving foo to a new name, I can recreate a new subdirectory called foo and then create regular files and links in the recreated foo. However, when I create a symbolic link in foo AND that link's target file name length exceeds 127 characters (!), the problem manifests: any preexisting contents of foo become unreadable and foo itself becomes undeletable (as above).
This is not intermittent, I have over a dozen examples of this behavior.
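The exploration above can be sketched as a small reproduction script. All paths and names here are hypothetical examples; it is meant to be run from inside the mounted Filestore share, and the failure only appears on the affected server version:

```shell
#!/bin/sh
# Reproduction sketch, run from the root of the NFS mount (paths are examples).
mkdir -p ./bar/foo
cd ./bar/foo

touch regular_file              # creating regular files works
ln -s regular_file short_link   # short symlink targets work

# Build a target name longer than 127 characters (here, 128 'x' characters).
long_target=$(printf 'x%.0s' $(seq 1 128))
ln -s "$long_target" long_link  # on the affected share, this triggers the bug

# After the link above, on the broken server version:
#   ls .              -> reading directory '.': Remote I/O error
#   cd ..; rm -rf foo -> rm: cannot remove 'foo': Remote I/O error
ls .
```

On a healthy NFS server (or a local filesystem) every step succeeds, which matches the observation below that the problem does not occur elsewhere.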
Request (Help!):
Note that this problem does not arise on AWS EC2/EFS/NFSv4, nor on local directories in my Ubuntu 20.04 VM.
Many thanks in advance,
Resolution!
Indeed, it looks like this was a server-side defect. Based on reports to me, it was an NFS software issue that affected regional shares in us-central1. The problematic software version was 3.26.0.1, and the problem was fixed by an update to 3.27.0.4 that was rolled out a few days after my report.
The files generated by the script (and my migration) were indeed there (albeit inaccessible) and became visible (and deletable) once the server software was updated. My thanks to Google support and the Filestore team.