
AutoML Training Accuracy

Hi experts,

I am new to AutoML; however, I successfully trained a model on annotated data, then deployed and tested it.

The only issue I am facing is that I have the evaluation metrics of my model, but I am unable to track down its training accuracy or validation accuracy. When I click Training > Training pipelines > My model, it takes me to the evaluation window, which I reckon shows the final result.

Where can I find the training accuracy so I can compare it against the testing/validation results?

I would greatly appreciate your help.


Hi @samy255,

Welcome to Google Cloud Community!

Users often face challenges accessing detailed training metrics on AutoML platforms. While these platforms typically offer comprehensive evaluation metrics, they don't always surface training and validation accuracy clearly.

Possible Reasons for Missing Metrics: 

  • Some AutoML platforms focus on high-level performance metrics and may not provide detailed training information.
  • To safeguard sensitive information, platforms might avoid storing extensive training logs.
  • Tracking and saving every metric during training can be computationally intensive, especially for complex models.

Here are some workarounds and potential solutions:

  • Review Platform Documentation: Examine the documentation in detail to identify any hidden options or features that might provide access to training metrics.
  • Utilize Experiment Tracking Tools: Consider integrating external tools like MLflow or TensorBoard to log training metrics throughout the AutoML process. While this requires additional setup, it offers detailed control over metrics.
  • Inspect Model Artifacts: If your AutoML platform allows for the download of model artifacts, you may be able to retrieve training metrics from the saved model files.
  • Rebuild the Training Process: For enhanced control and transparency, try reconstructing the model from scratch using a traditional machine learning framework such as TensorFlow or PyTorch. This approach provides access to all training metrics.
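As a rough illustration of the experiment-tracking idea above, here is a minimal pure-Python sketch of what tools like MLflow or TensorBoard automate: logging each epoch's metrics yourself so they remain available after training ends. The `MetricsLogger` class and the accuracy numbers are invented for illustration; they are not part of any AutoML API.

```python
# Minimal sketch of external experiment tracking: record each epoch's
# metrics so they survive after training finishes. Tools like MLflow
# or TensorBoard do this (and much more) for you.

class MetricsLogger:
    """Stores per-epoch metrics so they can be reviewed later."""

    def __init__(self):
        self.history = []  # one dict per epoch

    def log(self, epoch, **metrics):
        self.history.append({"epoch": epoch, **metrics})

    def final(self, name):
        """Return the last logged value of a metric."""
        return self.history[-1][name]


logger = MetricsLogger()

# Stand-in for a real training loop; the accuracy numbers are made up.
for epoch, (train_acc, val_acc) in enumerate(
    [(0.72, 0.70), (0.85, 0.80), (0.93, 0.82)], start=1
):
    logger.log(epoch, train_accuracy=train_acc, val_accuracy=val_acc)

print(f"final training accuracy:   {logger.final('train_accuracy'):.2f}")
print(f"final validation accuracy: {logger.final('val_accuracy'):.2f}")
```

If you rebuild the training loop in TensorFlow or PyTorch, the same pattern applies: you own the loop, so you can log whatever you need at every epoch.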

Some factors to consider in training metrics on AutoML:

  • Monitor training and validation accuracy to assess model generalization. High training accuracy with low validation accuracy suggests overfitting.
  • To avoid overfitting, AutoML frequently employs early stopping. As a result, training may conclude before the maximum number of epochs is reached, preventing you from seeing the final training accuracy.
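To make the two points above concrete, here is a small illustrative sketch of checking the train/validation gap and simulating patience-based early stopping. All function names, thresholds, and per-epoch numbers are invented for illustration, not an AutoML API.

```python
# Sketch: spot overfitting via the train/validation accuracy gap, and
# show why early stopping can end training before the last epoch.

def detect_overfitting(train_acc, val_acc, gap_threshold=0.10):
    """A large train/validation gap suggests the model memorized the data."""
    return (train_acc - val_acc) > gap_threshold

def early_stop_epoch(val_accuracies, patience=2):
    """Return the 1-based epoch where training would stop: once validation
    accuracy has failed to improve for `patience` consecutive epochs."""
    best, best_epoch = float("-inf"), 0
    for epoch, acc in enumerate(val_accuracies, start=1):
        if acc > best:
            best, best_epoch = acc, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return len(val_accuracies)

val_history = [0.70, 0.78, 0.81, 0.80, 0.79]  # improvement stalls after epoch 3
print("early stopping would end training at epoch", early_stop_epoch(val_history))
print("overfitting?", detect_overfitting(train_acc=0.95, val_acc=0.80))
```

Real frameworks (e.g. Keras's `EarlyStopping` callback) additionally restore the best weights; the sketch only shows when training would halt.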

I hope the above information is helpful.