When using GCP Batch, I'd like to download an "index" file at the start of a job and cache it on the local container's file system.
But that only helps if the same container is reused across tasks; if the container is restarted, any files in it are lost, since container file systems are ephemeral. So I'm wondering whether there's a way to persist the cached file across tasks, for example on the host VM's file system.
I tried mounting /cache:/cache, but this gave the error:
docker: Error response from daemon: error while creating mount source path '/cache': mkdir /cache: read-only file system.
So I'm not sure whether we're allowed to write to the host's base file system at all.
If your tasks contain only container runnables, they are likely running on a host VM with Container-Optimized OS (COS), whose root file system is mostly read-only; that's why the mkdir /cache fails. You can check the COS documentation for the list of writable paths on its file system (for example, under /var or /home) and pick one to bind-mount into your containers.
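For illustration, here is a minimal sketch of submitting such a job with the google-cloud-batch Python client. The host path /var/cache/batch-index, the image URI, project, region, and job name are all placeholder assumptions, not part of the original post:

```python
from google.cloud import batch_v1


def create_job(project_id: str, region: str, job_name: str) -> batch_v1.Job:
    """Sketch: run a container with a host directory bind-mounted for caching."""
    client = batch_v1.BatchServiceClient()

    runnable = batch_v1.Runnable()
    runnable.container = batch_v1.Runnable.Container()
    runnable.container.image_uri = "us-docker.pkg.dev/my-project/my-repo/my-image"  # placeholder
    # HOST_PATH:CONTAINER_PATH; /var is writable on COS, unlike a new dir at /.
    runnable.container.volumes = ["/var/cache/batch-index:/cache"]

    task = batch_v1.TaskSpec()
    task.runnables = [runnable]
    task.max_retry_count = 1
    task.max_run_duration = "3600s"

    group = batch_v1.TaskGroup()
    group.task_count = 10
    group.task_spec = task

    job = batch_v1.Job()
    job.task_groups = [group]
    job.logs_policy = batch_v1.LogsPolicy(
        destination=batch_v1.LogsPolicy.Destination.CLOUD_LOGGING
    )

    return client.create_job(
        batch_v1.CreateJobRequest(
            parent=f"projects/{project_id}/locations/{region}",
            job_id=job_name,
            job=job,
        )
    )
```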
Note that if you let Batch retry tasks on failure, the retry might be scheduled on a different VM, so the index file would need to be downloaded again there. And if multiple tasks in the job run on the same VM, you'll probably want to coordinate them so the index file is downloaded only once, as sketched below.
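One simple way to do that coordination is to serialize the download on a lock file in the shared host-mounted directory. This is a minimal sketch, assuming the container mounts the host cache directory at /cache and has gsutil available; gs://my-bucket/index.bin is a placeholder for your actual index object:

```python
import fcntl
import os
import subprocess

CACHE_DIR = "/cache"  # container path bind-mounted from the host VM
INDEX_PATH = os.path.join(CACHE_DIR, "index.bin")
LOCK_PATH = os.path.join(CACHE_DIR, ".index.lock")


def ensure_index() -> str:
    """Download the index once per VM; other tasks wait on the lock and reuse it."""
    with open(LOCK_PATH, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)  # only one task downloads at a time
        try:
            if not os.path.exists(INDEX_PATH):
                tmp_path = INDEX_PATH + ".tmp"
                # Placeholder bucket/object; download to a temp file, then
                # rename so a half-finished download never looks complete.
                subprocess.run(
                    ["gsutil", "cp", "gs://my-bucket/index.bin", tmp_path],
                    check=True,
                )
                os.rename(tmp_path, INDEX_PATH)
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)
    return INDEX_PATH
```

Each task calls ensure_index() at startup; whichever task gets the lock first does the download, and the rest find the file already in place.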