
Setting permissions for mount point

I'm using a persistent disk mount with:

"volumes": [
{
"deviceName": "scratchSSD",
"mountPath": "${workDir}",
"mountOptions": "rw,async"
}
]

in my POST request to the Batch API. This disk gets mounted at ${workDir} with permissions rwxr-xr-x and is owned by root. However, my Docker image does not run as root, so I can't write to the disk.

Is there a way to control the user and/or permissions of the mount point? I've tried setting uid and umask in mountOptions, but neither is recognized.

Thanks!

Hi jimv,

We use the "mount" command for disk mounting, and if mount options are specified we pass them through directly. What uid and umask settings did you use? From https://superuser.com/questions/175987/how-can-i-automatically-set-write-permissions-on-mounting-a-u... could "umask=0,uid=nobody,gid=nobody" help?
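
Spelled out in the volumes block, that suggestion would look something like this sketch (whether the options are accepted depends on the filesystem on the disk):

"volumes": [
  {
    "deviceName": "scratchSSD",
    "mountPath": "${workDir}",
    "mountOptions": "rw,async,umask=0,uid=nobody,gid=nobody"
  }
]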

Thanks,

Wen

These options don't appear to be supported. I get the error:

mount: /mnt/disks/scratchSSD: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error.

Hi jimv,

I am trying to reproduce this issue. May I know more about the disk settings you use? Is it an existing disk (for which we only support ro) or a new disk, and what type and size? What is the OS for the job?

Thanks,

Wen

It's a new persistent disk (pd-ssd, 50 GB). The image is based on gcr.io/google.com/cloudsdktool/google-cloud-cli:alpine, but configured to run as non-root.

The problem is easy to reproduce with any Docker image that doesn't run as root. For example, this random image from Docker Hub: frjaraur/non-root-nginx

To reproduce with the above image, POST something like this to Batch (you might need to edit the serviceAccount):

{
  "allocationPolicy": {
    "instances": [
      {
        "policy": {
          "disks": [
            {
              "deviceName": "mySSD",
              "newDisk": {
                "sizeGb": "50",
                "type": "pd-ssd"
              }
            }
          ],
          "machineType": "e2-standard-2"
        }
      }
    ],
    "labels": {
      "batch-job-id": "redacted"
    },
    "location": {
      "allowedLocations": [
        "regions/us-central1",
        "zones/us-central1-a",
        "zones/us-central1-b",
        "zones/us-central1-c",
        "zones/us-central1-f"
      ]
    },
    "serviceAccount": {
      "email": "redacted"
    }
  },
  "logsPolicy": {
    "destination": "CLOUD_LOGGING"
  },
  "taskGroups": [
    {
      "parallelism": "1",
      "taskCount": "1",
      "taskSpec": {
        "computeResource": {
          "cpuMilli": "2000",
          "memoryMib": "2000"
        },
        "maxRetryCount": 5,
        "maxRunDuration": "3600s",
        "runnables": [
          {
            "container": {
              "imageUri": "frjaraur/non-root-nginx",
              "entrypoint": "/bin/sh",
              "commands": [
                "-c",
                "echo Hello world! This is task ${BATCH_TASK_INDEX}. This job has a total of ${BATCH_TASK_COUNT} tasks. >/mnt/disks/mySSD/out.txt"
              ]
            }
          }
        ],
        "volumes": [
          {
            "deviceName": "mySSD",
            "mountPath": "/mnt/disks/mySSD",
            "mountOptions": "rw,async"
          }
        ]
      }
    }
  ]
}
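
One way to POST the spec (a sketch, with the spec saved as job.json and PROJECT_ID / JOB_ID as placeholders to substitute):

curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d @job.json \
  "https://batch.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/jobs?job_id=JOB_ID"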

I could modify our Docker image to run as root, but this doesn't seem like a great solution. I was hoping there was another way to control the permissions of the destination mount. The documentation is not clear on the available mount options -- as the man page for mount indicates, the options differ depending on the type of filesystem being mounted. I don't know what a persistent disk would be -- is it NFS? If so, that's a problem, because NFS doesn't have an option to set the uid and gid.
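
One way to check what the mount actually is, from a script runnable or over SSH on the Batch VM:

# Inspect the mounted disk. A new Batch persistent disk is a block
# device, typically formatted ext4 (not NFS); ext4 stores ownership in
# the filesystem itself, which would explain why uid=/umask= were
# rejected -- those options apply to filesystems like vfat that have
# no native ownership.
df -T /mnt/disks/mySSD          # shows the filesystem type
mount | grep /mnt/disks/mySSD   # shows the active mount options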

This might be confusing things, but I'd actually also be OK with using a local SSD instead of a persistent disk; I just can't get that to work at all. It immediately fails with no error message (i.e., if I change pd-ssd above to local-ssd, and 50 to 375).

I was able to get a local SSD mounted by changing the machine type to n1-standard-2 and setting the type to 'local-ssd'. (This contradicts the example in the documentation, which uses 'local_ssd'; that didn't work for me.)
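
Spelled out, the disk and machine settings that worked look like this sketch:

"policy": {
  "disks": [
    {
      "deviceName": "mySSD",
      "newDisk": {
        "sizeGb": "375",
        "type": "local-ssd"
      }
    }
  ],
  "machineType": "n1-standard-2"
}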

However, I have the same problem as with the persistent disk: there's seemingly no way to set the permissions of the mount point unless I rejigger my Docker image to run as root, run some chmod commands, switch back to a regular user, and then proceed with the rest of my commands.

Would granting permissions for all users on the mounted disk be a better default behavior for you? We can consider making this more controllable in the future.

Thanks for reporting the doc bug.

Hi jimv,

I found a shortcut workaround. You can add a script runnable to grant the permissions before the container runnable runs, as in the following job spec:

{
  "allocationPolicy": {
    "instances": [
      {
        "policy": {
          "disks": [
            {
              "deviceName": "mySSD",
              "newDisk": {
                "sizeGb": "50",
                "type": "pd-ssd"
              }
            }
          ],
          "machineType": "e2-standard-2"
        }
      }
    ],
    "location": {
      "allowedLocations": [
        "regions/us-central1"
      ]
    }
  },
  "logsPolicy": {
    "destination": "CLOUD_LOGGING"
  },
  "taskGroups": [
    {
      "parallelism": "1",
      "taskCount": "1",
      "taskSpec": {
        "computeResource": {
          "cpuMilli": "2000",
          "memoryMib": "2000"
        },
        "maxRetryCount": 5,
        "maxRunDuration": "3600s",
        "runnables": [
          {
            "script": {
              "text": "chmod 777 /mnt/disks/mySSD"
            }
          },
          {
            "container": {
              "imageUri": "frjaraur/non-root-nginx",
              "entrypoint": "/bin/sh",
              "commands": [
                "-c",
                "echo Hello world! This is task ${BATCH_TASK_INDEX}. This job has a total of ${BATCH_TASK_COUNT} tasks. >/mnt/disks/mySSD/out.txt; sleep 300"
              ]
            }
          }
        ],
        "volumes": [
          {
            "deviceName": "mySSD",
            "mountPath": "/mnt/disks/mySSD",
            "mountOptions": "rw,async"
          }
        ]
      }
    }
  ]
}

I got the following result:

[Screenshot: TestResult.png]

Please let me know if it works for you.
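
As an aside, a narrower variation of the same workaround is to chown the mount point to your container's user instead of opening it to everyone; the 1001:1001 below is only an assumption and must match the non-root user your image actually runs as:

{
  "script": {
    "text": "chown 1001:1001 /mnt/disks/mySSD"
  }
}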

Thanks!

Wen

 

 

 

Hi jimv,

We now grant permissions for all users on new disks by default. Please let us know whether or not it works for you.

Thanks,

Wen

Hi Wen,

Can you elaborate on how this is specified? I can't find a reference to this, and I seem to be running into the exact same issue.

Kind regards

Sander