Is all disk space shown by `df -h` available to GCP Batch?

When running `df -h` on a GCP Batch VM, I see the following:

```
Filesystem      Size  Used Avail Use% Mounted on
overlay          28G  3.1G   25G  12% /
```
Does this mean I have around 25 GB of scratch space on a GCP Batch VM, or is the container restricted to using only a small portion of this space?

Hi @vedantroy-genmo,

Below is an example from submitting a small Batch container-only Job that uses the Batch Container-Optimized OS image as the default image (with a 30 GB boot disk) and machine type `e2-highcpu-2`:

```
~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       2.0G  1.2G  820M  59% /
devtmpfs        986M     0  986M   0% /dev
tmpfs           989M     0  989M   0% /dev/shm
tmpfs           396M  460K  395M   1% /run
tmpfs           989M  964K  988M   1% /etc/machine-id
tmpfs           256K     0  256K   0% /mnt/disks
tmpfs           989M  4.0K  989M   1% /tmp
overlayfs       989M  964K  988M   1% /etc
/dev/sda8        11M   24K   11M   1% /usr/share/oem
/dev/sda1        26G  1.9G   24G   8% /mnt/stateful_partition
tmpfs           2.0M  108K  1.9M   6% /var/lib/cloud
```
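
For reference, a Job along these lines can be submitted with a request like the sketch below. It is not the exact request from the run above; the job name, location, and container image are placeholders, while the boot disk size and machine type match the example:

```
# Sketch only: a container-only Batch Job roughly matching the example above
# (Batch Container-Optimized OS image by default, 30 GB boot disk, e2-highcpu-2).
# The job name, location, and container image below are placeholders.
cat > job.json <<'EOF'
{
  "taskGroups": [
    {
      "taskSpec": {
        "runnables": [
          {
            "container": {
              "imageUri": "busybox",
              "entrypoint": "/bin/sh",
              "commands": ["-c", "df -h"]
            }
          }
        ]
      }
    }
  ],
  "allocationPolicy": {
    "instances": [
      {
        "policy": {
          "machineType": "e2-highcpu-2",
          "bootDisk": { "sizeGb": 30 }
        }
      }
    ]
  },
  "logsPolicy": { "destination": "CLOUD_LOGGING" }
}
EOF
gcloud batch jobs submit df-test-job --location=us-central1 --config=job.json
```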

Which file system is used depends on where the file is stored; for example, `/etc` is under overlayfs. By default, the majority of the disk space on the Batch Container-Optimized OS image is under `/mnt/stateful_partition`, and Batch actions (for example, GCS bucket mounting) will usually use the space in that file system.
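
To illustrate the GCS mounting case, the relevant part of a job's `taskSpec` would look roughly like the fragment below; the bucket name and mount path are made-up placeholders:

```
"taskSpec": {
  "volumes": [
    {
      "gcs": { "remotePath": "my-example-bucket" },
      "mountPath": "/mnt/disks/share"
    }
  ]
}
```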

Out of curiosity, have you run into any specific error, such as `No space left on device`, with your Batch Job? If so, could you provide more details (e.g. the Job UID and the Job request JSON)?
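
If it helps, the Job UID and related details can be looked up with the Batch CLI; the job name and location below are placeholders:

```
# Placeholders: replace JOB_NAME and us-central1 with your job's name and region.
gcloud batch jobs describe JOB_NAME --location=us-central1
```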

Thanks!