Custom extractors failing

Is anyone else seeing errors when making requests to their custom extractor versions? I'm getting the following response when sending a process request via curl:

{ "error": { "code": 500, "message": "Internal error encountered.", "status": "INTERNAL" } }

They were working earlier today, but now all of them are failing. I've checked the GCP status page, but predictably everything there is fine 🐶🔥

Interestingly, it's only happening for my own trained versions. Requests to the pretrained foundation model version pretrained-foundation-model-v1.0-2023-08-22 are still working.

6 REPLIES

I have the same issue with custom extractors. In the Document AI Workbench area, my latest model shows an F1 score of 0 (it was 71 yesterday, and no changes have been made). Trying to upload a test document results in "Failed to Preview Document", and of course the endpoint no longer functions. Everything was fine yesterday. Previous custom models are working; it's only affecting two of mine.

Interesting. None of my custom models work at all, but the F1 score hasn't changed. If I test via the console, I see the same error as you: "Failed to Preview Document".

The new model I trained overnight is showing an F1 score of 0, and it also fails.

I'm also seeing my trained custom extractor versions failing with 500s. Quite concerning, and it's not clear that GCP knows about it.

Per the other response, the F1 score for a new model was also 0 (I suspect the evaluation can't run because inference against the model is broken), and both the "Upload Sample Doc" and auto-labeling features are failing as well.

I've raised a case in the Google Cloud support portal. I'm told that the "Document AI Product team is already working on this with priority", but no further detail. The GCP status page is still showing green ticks.

If the errors are specific to your custom extractor versions while the pre-trained foundation models still work, that suggests the issue may lie with the custom models themselves rather than being a widespread problem with the platform or infrastructure.

Here are a few troubleshooting steps you can take:

  • Check Logs: Look into the logs for your custom extractor versions to see if there's any additional information about the internal error (see the example query after this list). This might give you more context on what's going wrong.
  • Review Recent Changes: Consider any recent changes you or your team might have made to the custom extractor versions. Did you recently update the models or change any configurations? Sometimes seemingly small changes can have unintended consequences.
  • Test with Different Inputs: Try running your custom extractor versions with different inputs to see if the error persists across all inputs or if it's specific to certain cases. This can help narrow down the problem.
  • Rollback Changes: If you recently made any changes to your custom extractor versions, consider rolling back those changes to a previous version that was known to work. This can help determine if the issue is related to the recent changes.
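
On the first point, a quick way to pull recent Document AI errors out of Cloud Logging is something like the sketch below; it assumes Data Access audit logs are enabled for Document AI in your project, and you may need to adjust the filter:

gcloud logging read \
  'protoPayload.serviceName="documentai.googleapis.com" AND severity>=ERROR' \
  --freshness=1d --limit=20 --format=json

Any entries there should at least show which method failed and whether the error carries more detail than the generic INTERNAL status.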

Thanks @Poala_Tenorio. That was my suspicion as well; however, the same model worked just fine a day earlier. The logs were also no help, as the Document AI API was returning "Internal error encountered." with no other useful information.

The issue appears to have been resolved; however, I didn't hear anything back from the team on the cause. A postmortem would've been nice, to understand what exactly went wrong and whether there's anything that can be done in future to avoid or work around the issue.