My company runs a job board, and one of the things we were using Google Cloud for is to classify each job posting (e.g., marketing, HR, finance, sales, etc.).
Four years ago, in AutoML Natural Language, we created/imported a dataset, trained the model, and deployed it - everything worked smoothly and has continued to do so ever since.
Now we are told that product is deprecated and we need to migrate to Vertex AI. That sucks, but we have no option. Fortunately, Google provides an AutoML-to-Vertex migration tool, so we followed all the steps and migrated the dataset and the model to Vertex AI.
The problem is that we can't get it to work. When I navigate to the model on Vertex AI, click on the Deploy & Test tab, and try to enter some sample prediction data, we get an error: "Unable to test model [our model name here]". There's a link on the error to "View Issues", which returns the exceptionally helpful "The operation failed due to the following error(s): Internal error encountered."
It seems like our only option is to upload a new dataset to Vertex AI, train a model on that dataset, and see if that works. The problem is this costs around $100 (at least it did on AutoML), and I'd rather not spend $100 on something Google is forcing me to do.
I have very little hope that anyone at Google is even looking at these support items but thought I'd throw it out there just to see if someone else has this problem.
Screenshot of the error: https://drive.google.com/file/d/1FU43yuv_SvSJ5_yxmbf2n805jNFpdnZ7/view?usp=sharing
Solved! Go to Solution.
I appreciate your reply but, frankly, it was too general to be helpful. Can you tell me where in the Google Cloud Console I can check the logs to find out what is happening when testing the model produces the "Internal error encountered" message?
I ended up creating a new dataset, creating a new model, and training that model from scratch. This time it worked, but it leads me to think Google did not do a thorough job with its migration tool.
Then, once I had it all working, it took me a LONG time to figure out how to call the model properly through an endpoint instead of calling the model directly like in AutoML. The only code examples I could find for .NET were here (link) and, frankly, those were not terribly clear either. Fortunately, I was able to figure out the correct code with help from StackOverflow (link). Putting it here so that if others need it at a future date, this might save them some headaches.
(Something like below would have been FAR more helpful from Google.)
//First, make sure the service account (principal) used in authjson below
//has the Vertex AI Service Agent role assigned to it, because the deprecated AutoML Predictor role won't work.
//Then, make sure you are using the Google.Cloud.AIPlatform.V1 package in your code.
using Google.Api;
using Google.Cloud.AIPlatform.V1;
using Google.Protobuf.WellKnownTypes;
...
//[Put this code wherever you want to generate the prediction request.]
//Get the authorization json key for the service account: go into IAM & Admin > Service Accounts,
//select your principal, click the "Keys" tab, and click the "Add Key" button. It will save a json
//file to your computer with the correct info to use.
//Note: inside a C# verbatim string (@"..."), embedded double quotes must be doubled ("") to escape them.
string authjson = @"
{
""type"": ""service_account"",
""project_id"":
etc...
}
";
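//(Alternative sketch, if you'd rather not embed the key in your source code: set the
//GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of the downloaded
//json file and call GoogleCredential.GetApplicationDefault(), or load the file
//directly with GoogleCredential.FromFile(@"C:\path\to\key.json"). The path here is
//hypothetical - substitute wherever you saved the key.)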
string projectID = "12345"; //project ID of the deployed model.
string endpointPredictionID = "56789"; //endpoint ID of the deployed model.
//Go to Vertex AI in Google Cloud, and go to Online Prediction menu under "Deploy and Use"
//Find the endpoint you want and copy the ID
GoogleCredential credential1 = GoogleCredential.FromJson(authjson).CreateScoped(PredictionServiceClient.DefaultScopes);
//We actually need two "endpoints"! One is the regional API host for the
//PredictionServiceClientBuilder, and the other is the endpoint resource name for the PredictRequest below.
string apiEndpoint = "us-central1-aiplatform.googleapis.com";
PredictionServiceClientBuilder clientBuilder = new PredictionServiceClientBuilder
{
    Credential = credential1,
    Endpoint = apiEndpoint
};
PredictionServiceClient client = clientBuilder.Build();
//Build the prediction instance. Value and Struct come from Google.Protobuf.WellKnownTypes,
//which is already imported above.
var structVal = Value.ForStruct(new Struct
{
    Fields =
    {
        ["mimeType"] = Value.ForString("text/plain"),
        ["content"] = Value.ForString("text string to be categorized goes here")
    }
});
PredictRequest predictRequest = new PredictRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint(projectID, "us-central1", endpointPredictionID),
    Instances = { structVal }
};
PredictResponse response = client.Predict(predictRequest);
return response.ToString();
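(If you want something friendlier than the raw proto dump from response.ToString(), below is a rough sketch of pulling the labels and scores out of the response. The "displayNames" and "confidences" field names are an assumption based on the AutoML text classification response schema - verify them against your model's actual response before relying on them.)

//Rough sketch: parse labels and confidence scores out of the PredictResponse.
//Assumes each prediction is a Struct with "displayNames" and "confidences" list
//fields, per the AutoML text classification schema - check your own response first.
foreach (Value prediction in response.Predictions)
{
    var fields = prediction.StructValue.Fields;
    var labels = fields["displayNames"].ListValue.Values;
    var scores = fields["confidences"].ListValue.Values;
    for (int i = 0; i < labels.Count; i++)
    {
        Console.WriteLine($"{labels[i].StringValue}: {scores[i].NumberValue:F3}");
    }
}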
Hi @mdgibbons,
Welcome and thank you for reaching out to our community.
I understand that migrating an application to a new platform is challenging, as there are a number of things that need to be considered prior to the move, such as compatibility.
Aside from the "Internal error encountered" message that you got, consider checking your logs (Logging > Logs Explorer in the Cloud Console), as there might be relevant entries that can help you narrow down the problem.
The service status summary page suggests that there are no reported issues related to your use case.
Adding here some resources you can use as a reference to ensure you didn't miss anything:
You can also reach out to Google Cloud Vertex AI support to discuss this more in detail.
Hope this helps.