
Running an instance group with two Docker containers per VM

So... here's the story:

I have a managed instance group which does heavy processing tasks. The machines in the instance group have to read input data from, and write output data to, a storage bucket.

Now, in order to have cleaner code (the application should be able to read/write from either a POSIX file system or a bucket), I want to mount the bucket as a drive. I looked around and found rclone to be the right tool for the job.

Now, since I am running on COS, I can't really do a lot on the host system (I tried...), so I thought the right solution is to run rclone from its official Docker container. This container mounts the bucket into a folder that's shared with the host. The host folder is then mounted into the other Docker container, which runs the application.

To set everything up, I started a machine in my instance group, SSH'd into it, set it all up, and it worked great.

Now, in order to run it automatically, I added the "docker run" line for the rclone container to the startup script of the instance template. The result is that the VMs in the instance group start and the bucket mount works, but the system does not even seem to attempt to start the application container. It looks to me like whichever entity is in charge of starting the container sees that a container is already running and refrains from starting the application container because the rclone container is running.

I also tried running the application container from the startup script, but I can't authenticate with Artifact Registry: since the startup script is run by root, when I try to authenticate with a service account, Docker tries to write the credentials to /root/.docker, which is unwritable on COS.

So basically, I'm looking for any advice to resolve this before I give up and go write some code to read/write/list objects in the storage bucket using the APIs.


Hi @talkenig,

Welcome to Google Cloud Community!

You can interact with Google Cloud services such as Artifact Registry and Secret Manager directly through their REST APIs or client libraries, without using the gcloud command-line tool. Here are some guides, sample commands, and workarounds that might help:

1. Authenticate with Google Cloud APIs
Using OAuth 2.0 for Authentication:

  1. Create and Download a Service Account Key:

    • Navigate to IAM & Admin > Service Accounts.

    • Create a service account with the necessary permissions.

    • Generate a JSON key file for this service account and download it.

  2. Set Up Authentication:
    Use the service account key in your application or script by setting the GOOGLE_APPLICATION_CREDENTIALS environment variable:

 

export GOOGLE_APPLICATION_CREDENTIALS=/path/to/your-service-account-key.json

 


2. Access Artifact Registry
The documentation includes a sample curl command for getting an access token.
Using the REST API:

1. Obtain an OAuth 2.0 Token
Use the service account JSON key to obtain an access token. Note that the JWT-bearer grant expects a signed JWT assertion built from the key (not the raw private key).

Example using curl and jq:

 

# SIGNED_JWT must be a JWT signed with the key's private_key; see the sketch below
ACCESS_TOKEN=$(curl -s -X POST \
    --data-urlencode "grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer" \
    --data-urlencode "assertion=$SIGNED_JWT" \
    "https://oauth2.googleapis.com/token" | jq -r .access_token)

 

2. List Repositories
Example using curl:

 

curl -H "Authorization: Bearer $ACCESS_TOKEN" \
    "https://artifactregistry.googleapis.com/v1/projects/your-project-id/locations/your-location/repositories"

 

3. List Images in a Repository
Example using curl:

 

curl -H "Authorization: Bearer $ACCESS_TOKEN" \
    "https://artifactregistry.googleapis.com/v1/projects/your-project-id/locations/your-location/repositories/your-repository/dockerImages"

 

4. Pull an Image
Use the Docker client directly if you have the image URL and the access token.
Example:

 

# With an access token, the docker login user is oauth2accesstoken (_json_key is for raw key files)
echo "$ACCESS_TOKEN" | docker login -u oauth2accesstoken --password-stdin https://your-registry-url
docker pull your-registry-url/your-image:tag

 


3. Access Secret Manager
Using the REST API:

1. Obtain an OAuth 2.0 Token
Example using curl and jq:

 

# Reuse SIGNED_JWT built as in section 2
ACCESS_TOKEN=$(curl -s -X POST \
    --data-urlencode "grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer" \
    --data-urlencode "assertion=$SIGNED_JWT" \
    "https://oauth2.googleapis.com/token" | jq -r .access_token)

 

2. Access Secrets:

List Secrets
Example using curl:

 

curl -H "Authorization: Bearer $ACCESS_TOKEN" \
    "https://secretmanager.googleapis.com/v1/projects/your-project-id/secrets"

 


Access a Specific Secret Version
Example using curl:

 

curl -H "Authorization: Bearer $ACCESS_TOKEN" \
    "https://secretmanager.googleapis.com/v1/projects/your-project-id/secrets/your-secret-id/versions/latest:access"

 


Get the Secret Value
Example using jq:

 

SECRET_VALUE=$(curl -s -H "Authorization: Bearer $ACCESS_TOKEN" \
    "https://secretmanager.googleapis.com/v1/projects/your-project-id/secrets/your-secret-id/versions/latest:access" \
    | jq -r .payload.data | base64 --decode)
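
For example, the decoded value could then be passed into the application container as an environment variable (APP_SECRET and the image path are placeholders):

docker run -d -e APP_SECRET="$SECRET_VALUE" your-registry-url/your-app-image:tag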

 


4. Automate with Startup Script

Here's an example of a startup script that:
1. Authenticates with Artifact Registry.
2. Pulls an image.
3. Starts the required containers.

Example Startup Script:

 

#!/bin/bash

# Authenticate with Artifact Registry (build SIGNED_JWT as in section 2)
ACCESS_TOKEN=$(curl -s -X POST \
    --data-urlencode "grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer" \
    --data-urlencode "assertion=$SIGNED_JWT" \
    "https://oauth2.googleapis.com/token" | jq -r .access_token)

echo "$ACCESS_TOKEN" | docker login -u oauth2accesstoken --password-stdin https://your-registry-url

# Pull the application image
docker pull your-registry-url/your-app-image:tag

# Run the rclone container (FUSE access and shared propagation so the mount is visible on the host;
# assumes the rclone remote "yourbucket:" is already configured, e.g. via a mounted rclone.conf)
docker run -d \
    --name rclone \
    --device /dev/fuse --cap-add SYS_ADMIN --security-opt apparmor:unconfined \
    --mount type=bind,source=/path/on/host,target=/mnt,bind-propagation=rshared \
    rclone/rclone:latest mount yourbucket: /mnt --allow-other --umask 002

# Run the application container (rslave propagation so a later remount by rclone stays visible)
docker run -d \
    --name my-application \
    --mount type=bind,source=/path/on/host,target=/data,bind-propagation=rslave \
    your-registry-url/your-app-image:tag

 


I hope the above information is helpful.

Hi and thanks for the script!

Two issues with this script:

1. You need some way to get the service account key file onto the VM, which is not trivial, or you need to authenticate by a different method, for example using an OAuth token fetched directly from the metadata server:

curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" -H "Metadata-Flavor: Google"

2. This script will fail on this line:

echo "$ACCESS_TOKEN" | docker login -u oauth2accesstoken --password-stdin https://your-registry-url

When run on COS: since the script runs as root, the Docker config dir defaults to /root/.docker, which is unwritable on COS.

So you first need to:

export DOCKER_CONFIG=/path/to/any/writable/dir/
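
Putting the two together, a key-less login from the startup script could look like this (the config dir and registry host are placeholders, and jq is assumed to be available on the host):

# Use a writable directory for the Docker config on COS
export DOCKER_CONFIG=/var/lib/docker-config
mkdir -p "$DOCKER_CONFIG"

# Fetch an access token for the VM's attached service account (no key file needed)
ACCESS_TOKEN=$(curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
    | jq -r .access_token)

# With an access token, the docker login user is oauth2accesstoken
echo "$ACCESS_TOKEN" | docker login -u oauth2accesstoken --password-stdin https://us-central1-docker.pkg.dev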

So after I figured this out, I authenticated with Artifact Registry. I did use a service account key file, since I have a Filestore instance mounted to the VM, so it can simply be done with:

cat /path/to/service/account/key.json | docker login -u _json_key --password-stdin https://us-central1-docker.pkg.dev

Then, I pulled the image and ran it with some additional options:

--device /dev/fuse --cap-add SYS_ADMIN --security-opt apparmor:unconfined

And mounted the storage bucket from within the container entrypoint. 
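
Roughly, the resulting run command looked like this (image path, names, and the entrypoint mount command are placeholders for illustration):

docker run -d \
    --name my-application \
    --device /dev/fuse --cap-add SYS_ADMIN --security-opt apparmor:unconfined \
    us-central1-docker.pkg.dev/your-project/your-repo/your-app-image:tag

# ...and inside the container's entrypoint, before starting the app:
#   rclone mount yourbucket: /data --allow-other --daemon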

Bottom line: this should be made way simpler:

1. Instance templates should allow a user to specify any number of running containers.

2. Instance templates should allow the user to specify any additional arguments to be passed to docker run.

I would appreciate it if you could convey this to the product team.

The only thing I dislike about starting the container manually from the startup script is that the application log messages appear twice: once from the container logging service and once from the startup-script logging service. I wonder if there's a solution for that.

Thanks again for the willingness to assist here!