Hi everybody,
I've successfully mounted a Cloud Storage bucket on Workbench from both the JupyterLab interface and the JupyterLab terminal (with "gcsfuse").
The mounted bucket works well: I can read, write, delete, and otherwise manage my Python files.
There's a problem, though: when I stop and restart the instance, I can still see the name of the mounted bucket, but I can't access my files from either the JupyterLab interface or the terminal.
It seems that at restart time the bucket is unmounted by the system. If this supposition is true, the mounting facility would be almost useless.
Can someone help me understand what's going on and suggest a solution?
Thank you all.
Marco
Hi @marcousescloud,
Welcome to Google Cloud Community!
You're experiencing an issue where a GCS bucket mounted with gcsfuse is not automatically remounted after restarting your Workbench instance. gcsfuse mounts your GCS bucket as a local filesystem, and that mount is temporary: it doesn't persist across reboots. To ensure consistent access to your GCS bucket, you'll need to automate the mounting process at startup.
1. Create a Startup Script:
Create a script (e.g., mount-gcs-bucket.sh) with the following content:

```
#!/bin/bash
# Script to mount GCS bucket using gcsfuse

# Create a directory for the mount point if it doesn't exist
mkdir -p /path/to/mount/point

# Mount the GCS bucket
gcsfuse my-bucket /path/to/mount/point
```

Replace my-bucket with your bucket name and /path/to/mount/point with the local directory where you want to mount the bucket. Then make the script executable:

```
chmod +x /path/to/mount-gcs-bucket.sh
```
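As a sketch, the script can also guard against double-mounting by checking whether the mount point is already active before calling gcsfuse. The file is written to /tmp here purely for illustration, and the bucket name and path are placeholders:

```shell
# Sketch: a slightly more defensive variant of mount-gcs-bucket.sh.
# It skips the gcsfuse call when the directory is already a live mount,
# so re-running the script (or the service restarting) is harmless.
cat > /tmp/mount-gcs-bucket.sh <<'EOF'
#!/bin/bash
BUCKET="my-bucket"                  # placeholder: your bucket name
MOUNT_POINT="/path/to/mount/point"  # placeholder: your mount directory

mkdir -p "$MOUNT_POINT"
# 'mountpoint -q' exits 0 only if the path is already an active mount
if ! mountpoint -q "$MOUNT_POINT"; then
  gcsfuse "$BUCKET" "$MOUNT_POINT"
fi
EOF
chmod +x /tmp/mount-gcs-bucket.sh
bash -n /tmp/mount-gcs-bucket.sh && echo "script syntax OK"   # prints "script syntax OK"
```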
2. Add the Script to Startup (Workbench-Specific):
If your Workbench instance supports startup scripts, place mount-gcs-bucket.sh in the designated location.

3. Add the Script to Startup (Linux Instance):
Using /etc/rc.local: add this line to /etc/rc.local before the exit 0 line:

```
/path/to/mount-gcs-bucket.sh
```
Using systemd: create a systemd service file (e.g., /etc/systemd/system/gcs-mount.service):

```
[Unit]
Description=Mount GCS Bucket
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/path/to/mount-gcs-bucket.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then reload systemd, enable, and start the service:

```
sudo systemctl daemon-reload
sudo systemctl enable gcs-mount
sudo systemctl start gcs-mount
```
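After a reboot, you can confirm that the service actually produced a live mount rather than just an empty directory. A small sketch using mountpoint(1) from util-linux (the helper name check_mount is ours, not part of any tool):

```shell
# Sketch: report whether a directory is an active mount.
# 'mountpoint -q' exits 0 only for a live mount, so this distinguishes
# a successfully mounted bucket from a plain directory with the same path.
check_mount() {
  local dir="$1"
  if mountpoint -q "$dir"; then
    echo "$dir: mounted"
  else
    echo "$dir: not mounted"
  fi
}

check_mount /   # the root filesystem is always a mount; prints "/: mounted"
```

Run it against your gcsfuse mount point; "not mounted" after a reboot means the startup automation did not fire.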
4. Verify Permissions and Configuration:
Make sure the credentials on the instance (typically its service account) have access to the bucket so gcsfuse can authenticate.

5. Check Logs and Errors:
For systemd services: use sudo journalctl -u gcs-mount.service to view service logs.
For /etc/rc.local: check the general system logs or any specific logs added to your script.

Additional Considerations:
Use a standard mount location such as /mnt or /media for your GCS bucket.
Consider alternatives such as rclone or the Cloud Storage client libraries for different use cases.

I hope the above information is helpful.
Hi dawnberdan,
thank you for your kind and detailed answer and sorry for the delay in answering.
Following your suggestions, I've tried a number of different ways and configurations.
At the end of the day, I've found that the following works well for me:

```
gcsfuse --implicit-dirs --dir-mode "777" --file-mode "777" -o allow_other "CS_bucket" "path"
```
Thank you again.
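For readers who prefer a declarative setup over a startup script: the same flags can usually be expressed as an /etc/fstab entry. This is a sketch, assuming the gcsfuse mount(8) helper is installed and user_allow_other is enabled in /etc/fuse.conf; the bucket name and mount path are placeholders:

```
# /etc/fstab entry (sketch)
CS_bucket  /path/to/mount/point  gcsfuse  rw,_netdev,allow_other,implicit_dirs,file_mode=777,dir_mode=777  0  0
```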