Once data is extracted using Document AI and lands in a bucket, how do I do further processing and show the results on my frontend? Also, is it possible to perform batch processing locally, i.e., can Document AI use documents from my local storage instead of requiring them to be saved in Google Cloud Storage?
Extracted data in a Google Cloud Storage bucket: When Document AI processes your documents in batch mode, it writes the results as JSON to a Google Cloud Storage (GCS) bucket. You can set up a Cloud Function or a simple script that listens for new objects in this bucket and triggers further processing whenever new output arrives.
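For illustration, a minimal sketch of such a trigger as a 2nd-gen Cloud Function in Python might look like the following; the bucket layout and function name are placeholders for your own setup, and it assumes a Cloud Storage "object finalized" trigger:

```python
# Minimal sketch: react to new Document AI batch output JSON in the bucket.
import json

import functions_framework
from google.cloud import storage


@functions_framework.cloud_event
def on_docai_output(cloud_event):
    """Triggered by a Cloud Storage object-finalized event."""
    data = cloud_event.data
    bucket_name = data["bucket"]
    blob_name = data["name"]

    # Document AI batch output files end in .json; ignore anything else.
    if not blob_name.endswith(".json"):
        return

    blob = storage.Client().bucket(bucket_name).blob(blob_name)
    document = json.loads(blob.download_as_text())

    # Hand off to your own processing logic (see the next step).
    print(f"Received Document AI output with {len(document.get('pages', []))} pages")
```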
Further processing: Your processing logic can be implemented using various technologies and languages like Python, Node.js, or Java. You can use Google Cloud services, such as Cloud Functions or Compute Engine, to execute your processing code. This can involve data transformation, enrichment, or integration with other services or databases.
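As an illustration of that processing step, the sketch below assumes the Document AI output JSON has already been loaded into a Python dict (as in the trigger above) and pulls out the extracted entities; the entity type names you get depend on the processor you use:

```python
# Minimal sketch: turn Document AI entities into a simple field/value map.
def extract_fields(document: dict) -> dict:
    """Collect entity types, values, and confidences from a Document AI result."""
    fields = {}
    for entity in document.get("entities", []):
        fields[entity.get("type", "unknown")] = {
            "value": entity.get("mentionText", ""),
            "confidence": entity.get("confidence", 0.0),
        }
    return fields
```

From here you could enrich the fields, validate them, or write them to a database of your choice before exposing them to the frontend.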
Display results on the frontend: Once processing is complete, you can expose the results to your frontend through an API, WebSockets, or another communication method. The frontend itself can be built with standard web technologies like HTML, CSS, and JavaScript, or as a mobile app if that is your interface.
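One common pattern is a small HTTP API in front of wherever you persist the processed results. The sketch below uses Flask and Firestore purely as illustrative choices; the collection name and route are placeholders:

```python
# Minimal sketch: serve processed results to the frontend over HTTP.
from flask import Flask, jsonify
from google.cloud import firestore

app = Flask(__name__)
db = firestore.Client()


@app.route("/documents/<doc_id>")
def get_document(doc_id: str):
    """Return the processed fields for one document, if they exist."""
    snapshot = db.collection("documents").document(doc_id).get()
    if not snapshot.exists:
        return jsonify({"error": "not found"}), 404
    return jsonify(snapshot.to_dict())
```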
Regarding your question about batch processing from local storage: Document AI batch processing reads its input from and writes its output to Google Cloud Storage, so documents on your local machine need to be uploaded to a bucket before a batch job can use them. For individual documents, however, the online (synchronous) processing API accepts the raw file content directly, so you can send a local file to a processor without staging it in Cloud Storage first.
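For that online case, here is a minimal sketch using the Python client library; the project, location, and processor IDs are placeholders you would replace with your own:

```python
# Minimal sketch: synchronous processing of a local PDF, no GCS staging.
from google.cloud import documentai_v1 as documentai


def process_local_file(path: str) -> documentai.Document:
    client = documentai.DocumentProcessorServiceClient()
    name = client.processor_path("my-project", "us", "my-processor-id")

    with open(path, "rb") as f:
        raw_document = documentai.RawDocument(
            content=f.read(), mime_type="application/pdf"
        )

    result = client.process_document(
        request=documentai.ProcessRequest(name=name, raw_document=raw_document)
    )
    return result.document
```

Keep in mind that online processing has tighter page and size limits than batch processing, so for large volumes the upload-to-GCS-then-batch approach is still the practical route.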
In summary, Document AI is tightly integrated with Google Cloud services, but you can use Google Cloud Functions and other technologies to automate and extend the processing and integrate the results into your frontend, even if your documents originate from local storage.