
Can I iteratively train a fine-tuned model?

If I train a model and later gather a new batch of training material, can I further fine-tune my existing model? Or do I need to run a fine-tuning job from scratch on a base model using the combined training material?


Hey @MuaazOsaidTahir,

 

Thanks for bringing this question to the Google Cloud Community. I am learning alongside all of you! You can further fine-tune your existing model with a new batch of training material, which is actually one of the game-changing benefits of fine-tuning. Here's why it works and how to approach it:

 

Why Fine-Tuning Works This Way:

  • Knowledge Retention: When you fine-tune a model, you start with a pre-trained model that has already learned general patterns and features relevant to your task. Fine-tuning adjusts those existing weights rather than starting from completely random weights. This means your model retains the knowledge from the previous training.
  • Efficiency: Further fine-tuning leverages the knowledge your model already has. This usually leads to faster convergence and better results with less data compared to training a new model from scratch.

How to Fine-Tune with New Data:

  1. Combine Datasets: Simply combine your original training data with your new batch of training material.
  2. Fine-Tune Again: Use the fine-tuning process on your existing fine-tuned model. Generally, you'll want to use a slightly lower learning rate than the initial fine-tuning as you're primarily making smaller adjustments.
  3. Save and Deploy: Save your further fine-tuned model for deployment or further use.
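To make step 1 concrete, here is a minimal sketch in plain Node.js of merging an original JSONL training file with a new batch into one combined dataset (JSONL is the format Vertex AI supervised tuning accepts; the function name and file names are just for illustration, not part of any SDK):

```javascript
// Step 1 sketch: merge the original JSONL training data with a new
// batch into one combined dataset, dropping exact-duplicate examples
// so repeated rows don't get extra weight during tuning.
function mergeJsonlDatasets(originalJsonl, newJsonl) {
  const seen = new Set();
  const merged = [];
  for (const line of [...originalJsonl.split('\n'), ...newJsonl.split('\n')]) {
    const trimmed = line.trim();
    if (trimmed === '' || seen.has(trimmed)) continue; // skip blanks and dupes
    JSON.parse(trimmed); // fail fast on lines that are not valid JSON
    seen.add(trimmed);
    merged.push(trimmed);
  }
  return merged.join('\n');
}

// Example usage against real files (hypothetical file names):
// const fs = require('fs');
// const combined = mergeJsonlDatasets(
//   fs.readFileSync('original.jsonl', 'utf8'),
//   fs.readFileSync('new_batch.jsonl', 'utf8'));
// fs.writeFileSync('combined.jsonl', combined);
```

The dedupe step is optional, but it avoids accidentally double-weighting examples that appear in both batches.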

Important Considerations:

  • Data Distribution: Ensure that your new data has a similar distribution to your original data. A significantly different distribution can shift the model away from what it learned originally and decrease performance.
  • Overfitting: Be mindful of overfitting, especially if your new batch of data is small. You might need to adjust regularization or early stopping techniques.
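On the overfitting point, here's a toy early-stopping check in plain Node.js (not tied to any SDK; the function name is made up for illustration): stop tuning once the validation loss hasn't improved for `patience` consecutive evaluations.

```javascript
// Return true when the best validation loss seen so far occurred
// at least `patience` evaluations ago, i.e. no improvement since.
function shouldStopEarly(valLosses, patience = 3) {
  if (valLosses.length <= patience) return false; // not enough history yet
  const best = Math.min(...valLosses);
  const lastImprovement = valLosses.lastIndexOf(best);
  return valLosses.length - 1 - lastImprovement >= patience;
}
```

Managed tuning services typically handle this for you via epoch/step settings, so this is only to illustrate the idea of watching validation loss rather than training loss.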

I am by no means an expert in this area, so I encourage others to jump in and share their perspective! Let me know if you have any specific scenarios or technologies in mind, and I can see if one of our internal SMEs can provide more tailored guidance!

Thank you for the response @Roderick !

Can I get any help on how to start a tuning job on the custom-tuned model through the Node.js SDK?

I have previously tuned a base model through it; now I can't find anything in the docs about how to create a tuning job on a tuned model.

As I am still learning, any help will be appreciated 😊.
Thanks.

Are you perhaps looking for this documentation: 

 
Although this is just a sample, it might be a good place to start.

Thanks @nceniza!

I've followed this doc and was able to start the job for tuning the base model.
But I haven't been able to find a way to further tune the model which I have already tuned.

Hi, has there been any update on this? I don't see a way of fine-tuning an already tuned model anywhere else.

Hi, I also don't see any further documentation regarding this. Is there any way you can provide us with some direction? My assumption is to follow the same steps but change the model parameter to the tuned model, though I am not sure whether that would work.
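For reference, that assumption would look roughly like the following `tuningJobs` request body (sketched from the Vertex AI supervised tuning API; the project, region, model ID, and bucket paths are placeholders). Whether `baseModel` actually accepts a tuned-model resource name instead of a base-model ID is exactly the open question in this thread, so treat this as untested:

```json
{
  "baseModel": "projects/my-project/locations/us-central1/models/1234567890",
  "supervisedTuningSpec": {
    "trainingDatasetUri": "gs://my-bucket/combined-training-data.jsonl"
  },
  "tunedModelDisplayName": "my-model-v2"
}
```

If the service rejects a model resource name there, the documented fallback is what the original answer described: re-run the tuning job on the base model with the combined dataset.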

@Roderick  @nceniza  

How do I re-tune a tuned model? The model that I tuned from the base model and deployed is not showing up in "Create Tuning".

The tuned model is deployed and visible in the Model Registry, and its version is labeled 1, so it seems that a version 2 should be possible. However, there is no documentation on how to re-tune a tuned model.