Hi,
I am trying to run my ML model in Colab utilizing a custom VM with multiple GPUs. I can successfully spin up a 2 GPU DeepLearning VM and connect to a Colab notebook via port-forwarding to a locally-hosted connection (Jupyter Notebook), as shown here.
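For reference, my current setup looks roughly like the following (instance name, zone, and port are placeholders; the jupyter flags are the ones from Colab's local-runtime instructions, reproduced from memory):

```shell
# 1) Forward the VM's Jupyter port to my machine over SSH
#    (instance name, zone, and port are placeholders):
gcloud compute ssh my-dl-vm --zone=us-central1-a -- -L 8888:localhost:8888

# 2) On the VM, start a Jupyter server that Colab can attach to:
jupyter notebook \
  --NotebookApp.allow_origin='https://colab.research.google.com' \
  --port=8888 --NotebookApp.port_retries=0

# 3) In Colab: Connect -> "Connect to a local runtime" -> paste the
#    http://localhost:8888/?token=... URL printed by Jupyter.
```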
Although I can connect to custom runtimes directly WITHOUT port-forwarding to a locally-hosted connection, I can only access 1 of the 2 GPUs this way (i.e., the notebook only ever sees a single GPU).
I can successfully connect to a locally-hosted, port-forwarded runtime and verify that the notebook can access both GPUs; however, I am running into issues when trying to mount my Google Drive.
I know that google-drive-ocamlfuse was offered as a suggestion for this Drive issue; however, none of the download options work. Specifically, it seems that a locally-hosted, port-forwarded runtime doesn't accept terminal input, so I can't "Press [ENTER]" to allow the download, as shown below:
The user input cursor shows up for a direct connection to a custom or hosted runtime:
The user input cursor fails to show up/accept input on a locally-hosted, port-forwarded custom VM.
In general, it seems that interactive terminal commands don't work in Colab on a locally-hosted runtime.
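For concreteness, the install sequence I'm attempting is roughly the commonly suggested one below (reproduced from memory, so exact flags may differ); it's the add-apt-repository step that hangs waiting for [ENTER]:

```shell
# Run in a notebook cell; commands reproduced from memory, flags may differ.
!apt-get install -y -qq software-properties-common
# This step prints "Press [ENTER] to continue" and hangs, since the
# port-forwarded runtime never delivers my keystroke:
!add-apt-repository ppa:alessandro-strada/ppa
!apt-get update -qq
!apt-get install -y -qq google-drive-ocamlfuse fuse
```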
Another option is PyDrive, which I've used in the past. However, since PyDrive relies on authentication through a local port, I can't get it to work on my locally-hosted custom VM.
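For reference, my (failing) PyDrive flow is roughly the standard one below; LocalWebserverAuth() is the step that conflicts with the port-forwarded setup, since it needs to catch the OAuth redirect on a local port:

```python
# Standard PyDrive auth flow (sketch; file query is just an example).
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

gauth = GoogleAuth()
# LocalWebserverAuth() spins up a local HTTP server to receive the OAuth
# redirect -- this is the step that clashes with my port-forwarded runtime:
gauth.LocalWebserverAuth()

drive = GoogleDrive(gauth)
# Example: list files in the Drive root.
file_list = drive.ListFile({'q': "'root' in parents"}).GetList()
```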
In short, I'm looking for tips/suggestions on any of the following issues:
1) An alternative workflow for running my ML model on multiple GPUs (i.e., one that doesn't require port-forwarding to a locally-hosted connection)
2) How to get the user input cursor to show up (enabling me to install google-drive-ocamlfuse)
3) How to authenticate with PyDrive, given that a local port is already being used to host my runtime.
4) Alternatives to accessing my Drive/Drive files.
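Regarding (3): a quick stdlib check I can run to confirm whether the port PyDrive's OAuth redirect wants is already taken by the forwarded runtime (the port number would be whichever one my setup uses):

```python
import socket

def port_in_use(port, host="localhost"):
    # Returns True if something is already listening on host:port
    # (connect_ex returns 0 when the connection succeeds).
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) == 0
```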
Thank you so much!