
Fine-tuning a computer vision model from Hugging Face

Hello,

I am relatively new to Google Cloud and have spent the past few weeks fine-tuning a customized Vision Transformer model using Hugging Face and PyTorch. My training data consists of images stored in a Google Cloud Storage bucket, and I have successfully tested my training script on AWS SageMaker. The script has been containerized, and the image is currently stored in Artifact Registry.
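For context, the core of my script follows the standard Hugging Face image-classification fine-tuning pattern, roughly like the simplified sketch below (the checkpoint name, local data path, and hyperparameters are placeholders, and the images are assumed to have been copied down from the bucket first):

    import torch
    from datasets import load_dataset
    from transformers import (
        AutoImageProcessor,
        AutoModelForImageClassification,
        Trainer,
        TrainingArguments,
    )

    MODEL_NAME = "google/vit-base-patch16-224-in21k"  # placeholder checkpoint

    # "imagefolder" expects class-labelled image directories; the images are
    # assumed to have been copied locally from the GCS bucket beforehand
    # (e.g. with `gsutil -m cp -r` or a gcsfuse mount).
    dataset = load_dataset("imagefolder", data_dir="/tmp/training_images")

    processor = AutoImageProcessor.from_pretrained(MODEL_NAME)
    labels = dataset["train"].features["label"].names
    model = AutoModelForImageClassification.from_pretrained(
        MODEL_NAME,
        num_labels=len(labels),
        ignore_mismatched_sizes=True,  # swap in a fresh classification head
    )

    def transform(batch):
        # Turn PIL images into the pixel tensors the ViT expects.
        inputs = processor(
            [img.convert("RGB") for img in batch["image"]], return_tensors="pt"
        )
        inputs["labels"] = batch["label"]
        return inputs

    dataset = dataset.with_transform(transform)

    def collate(batch):
        return {
            "pixel_values": torch.stack([x["pixel_values"] for x in batch]),
            "labels": torch.tensor([x["labels"] for x in batch]),
        }

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="/tmp/model",
            num_train_epochs=3,
            per_device_train_batch_size=16,
            remove_unused_columns=False,  # keep the "image" column for transform
        ),
        train_dataset=dataset["train"],
        data_collator=collate,
    )
    trainer.train()
    trainer.save_model("/tmp/model")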

My goal is to train this custom model on Google Cloud and subsequently deploy it. Using a combination of the official documentation and assistance from ChatGPT, I have managed to initiate the training process. However, I am finding it challenging to locate comprehensive guidance on the best practices or recommended approaches for building an end-to-end pipeline for model training and deployment on Google Cloud.
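For reference, the way I am currently kicking off the containerized training, based on the google-cloud-aiplatform SDK documentation, looks roughly like this (the project ID, region, bucket, and image URIs are all placeholders):

    from google.cloud import aiplatform

    aiplatform.init(
        project="my-project",                    # placeholder project ID
        location="us-central1",                  # placeholder region
        staging_bucket="gs://my-staging-bucket", # placeholder staging bucket
    )

    job = aiplatform.CustomContainerTrainingJob(
        display_name="vit-finetune",
        # Placeholder Artifact Registry URI for my training container.
        container_uri="us-central1-docker.pkg.dev/my-project/my-repo/vit-trainer:latest",
        # Optional: supplying a serving image lets job.run() register the
        # trained model; this prebuilt PyTorch prediction image is a placeholder.
        model_serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/pytorch-gpu.2-1:latest"
        ),
    )

    # The training container is expected to write its final artifacts to the
    # directory Vertex AI passes in via the AIP_MODEL_DIR environment variable.
    model = job.run(
        model_display_name="vit-finetuned",
        replica_count=1,
        machine_type="n1-standard-8",
        accelerator_type="NVIDIA_TESLA_T4",
        accelerator_count=1,
    )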

Could anyone provide insights or references to resources that outline the recommended steps for constructing such a pipeline within the Google Cloud ecosystem? Any advice on integrating the relevant Google Cloud services, such as Vertex AI, into this workflow would be greatly appreciated.
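For what it's worth, my current and possibly naive understanding of the deployment step, pieced together from the SDK docs, is something like the sketch below (the model resource name, machine type, and request payload are placeholders):

    from google.cloud import aiplatform

    # Continues from the job.run() sketch above; alternatively, load an
    # already-registered model by its resource name (placeholder below).
    model = aiplatform.Model(
        "projects/my-project/locations/us-central1/models/1234567890"
    )

    endpoint = model.deploy(
        machine_type="n1-standard-4",
        min_replica_count=1,
        max_replica_count=1,
    )

    # The request payload format depends on the serving container;
    # this instance is purely a placeholder.
    response = endpoint.predict(instances=[{"placeholder": []}])
    print(response.predictions)

Is this the right general shape for a Vertex AI workflow, or is there a more idiomatic way to wire the training, registration, and deployment steps together?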

Thank you for your assistance.
