Is there some kind of rate limit that would make Filestore much slower than Persistent Disk for copying many small files?
I have a Filestore instance and a Persistent Disk that look similar on paper (bandwidth and IOPS), but in practice Filestore can be 10x slower.
For instance, copying 30k files takes 10 seconds to Persistent Disk but 100 seconds to Filestore, using gsutil -m rsync for both. Filestore's reported IOPS and bandwidth sit at 1-2% of maximum during this time.
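For reference, the invocations looked roughly like this (in my case the source was a GCS bucket; the bucket name and mount paths below are placeholders):

```
# ~10 seconds to the Persistent Disk mount
time gsutil -m rsync -r gs://my-bucket/many-small-files /mnt/pd/dest

# ~100 seconds to the Filestore mount
time gsutil -m rsync -r gs://my-bucket/many-small-files /mnt/filestore/dest
```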
On the other hand, fio benchmarks make Filestore and Persistent Disk look similar, in line with the advertised rates.
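The fio runs were along these lines (a sketch; the exact job parameters are from memory and the mount path is a placeholder):

```
# Random 4 KiB reads against the Filestore mount; swap --directory for the PD mount.
fio --name=randread --directory=/mnt/filestore --rw=randread \
    --bs=4k --size=1G --numjobs=8 --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting
```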
My guess is that there's some kind of undocumented rate limit, perhaps on small-file creation. I can't get past about 300 file creations per second, even after moving to the High Scale tier with 60 TB provisioned. Any ideas?
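This is roughly how I measured the creation rate (directory name is illustrative; xargs is used to batch the calls so per-process spawn overhead doesn't dominate):

```
# Crude microbenchmark: create 10k empty files and time it.
mkdir -p /mnt/filestore/createtest
cd /mnt/filestore/createtest
time seq -f 'f%g' 1 10000 | xargs touch
# On Filestore this plateaus around 300 files/second for me; on PD it's far faster.
```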
The practical impact is that operations like `pip install` and `conda install`, which create many small files, take much longer on Filestore than on Persistent Disk.