Hi, just wondering if anyone else has come across a similar scenario and might be able to help me out.
I'm looking at using Cloud Build to build our software - we currently use Maven for our builds and would like to continue doing so.
At the moment, each build server has its own local Maven repository, which helps build performance: artifacts are cached locally as needed, avoiding round trips to JFrog Artifactory to download dependencies. This need for a local repository is one of the obstacles we face in moving our builds to a serverless architecture.
I've done a proof of concept using Cloud Build and a Cloud Storage bucket to persist the local Maven repository:
steps:
# Grab the compressed Maven dependencies repo from the bucket and uncompress it
- id: 'Get cached Maven dependencies'
  name: gcr.io/cloud-builders/gsutil
  entrypoint: bash
  volumes:
  - name: 'maven-repository'
    path: '/root/.m2'
  args:
  - '-c'
  - |
    gsutil cp gs://authentic-devops-maven/m2.tar.gz m2.tar.gz || exit 0
    tar -xpzf m2.tar.gz --directory / || exit 0
# Use Authentic custom Maven settings.xml file
- name: 'gcr.io/cloud-builders/gsutil'
  id: 'Get Maven settings.xml'
  args: ['cp', 'gs://authentic-devops-maven/authenticSettings.xml', 'custom-settings.xml']
# Run the actual build. Use the maven.repo.local setting so Maven treats the uncompressed cache as
# its local repo in the container. Pull in the relevant JFrog secrets as environment variables for
# the build (the block mapping the secretEnv names to actual secrets is shown after the config)
- id: 'run-maven-build'
  name: maven:3-jdk-8
  entrypoint: mvn
  volumes:
  - name: 'maven-repository'
    path: '/root/.m2'
  args: ['--settings', 'custom-settings.xml', 'clean', 'deploy']
  secretEnv: ['JFROG_USERNAME', 'JFROG_PASSWORD', 'BASE_REPOSITORY']
# Re-compress the updated M2 repo cache and save it to the storage bucket
- id: 'Upload updated Maven dependencies cache'
  waitFor: ['run-maven-build']
  name: gcr.io/cloud-builders/gsutil
  entrypoint: bash
  volumes:
  - name: 'maven-repository'
    path: '/root/.m2'
  args:
  - '-c'
  - |
    tar -cpzf m2.tar.gz /root/.m2
    gsutil cp m2.tar.gz gs://authentic-devops-maven/m2.tar.gz
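I've omitted the block that supplies those secretEnv values from the snippet above. With Secret Manager, for example, it would sit at the top level of the cloudbuild.yaml and look roughly like this (project and secret names are just placeholders):

# Hypothetical Secret Manager entries backing the secretEnv names used in the build step
availableSecrets:
  secretManager:
  - versionName: projects/my-project/secrets/jfrog-username/versions/latest
    env: 'JFROG_USERNAME'
  - versionName: projects/my-project/secrets/jfrog-password/versions/latest
    env: 'JFROG_PASSWORD'
  - versionName: projects/my-project/secrets/base-repository/versions/latest
    env: 'BASE_REPOSITORY'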
However, I've read that using a Cloud Storage bucket this way in a production setting isn't ideal, as it isn't really suited to concurrent read/write operations.
Has anyone experimented with using persistent disks with Cloud Build for such a scenario, i.e. attaching a persistent disk to a Cloud Build instance? That seems like it would be better, as a persistent disk should be able to handle parallel, concurrent reads and writes.
Thanks in advance, Geoff