Goal: serve prediction requests from a Vertex AI Endpoint by executing custom prediction logic.
Expected Workflow:
1. Upload a pretrained image_quality.pb model (developed in a Python environment outside Vertex AI) to a GCS bucket.
2. Port the existing image inference logic into a custom container and serve predictions through a Vertex AI endpoint (rough container and deployment sketches follow this list).
3. Use the Vertex AI API to log and capture metrics inside the custom inference logic.
4. Pass a list of images (stored in another GCS bucket) to that endpoint.
5. View those logs and metrics in TensorBoard (a logging/metrics sketch also follows the list).
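For step 2, here is roughly what I have in mind for the serving container — a minimal Flask sketch, assuming the model is a TensorFlow artifact and that each instance in the request body is the GCS URI of one image; the loading call and run_image_quality are placeholders for my existing inference logic:

```python
import os

import tensorflow as tf
from flask import Flask, jsonify, request

app = Flask(__name__)

# Vertex AI injects these into the container; the defaults are only for
# running the server locally.
HEALTH_ROUTE = os.environ.get("AIP_HEALTH_ROUTE", "/health")
PREDICT_ROUTE = os.environ.get("AIP_PREDICT_ROUTE", "/predict")
PORT = int(os.environ.get("AIP_HTTP_PORT", "8080"))

# AIP_STORAGE_URI points at the artifact_uri the model was uploaded with;
# adjust the loading call to however image_quality.pb was actually exported
# (SavedModel vs. frozen graph).
model = tf.saved_model.load(os.environ["AIP_STORAGE_URI"])


def run_image_quality(loaded_model, gcs_uri):
    # Placeholder for the existing image-quality inference logic being ported.
    raise NotImplementedError


@app.route(HEALTH_ROUTE, methods=["GET"])
def health():
    return "ok", 200


@app.route(PREDICT_ROUTE, methods=["POST"])
def predict():
    # Vertex AI wraps the request payload as {"instances": [...]}.
    instances = request.get_json()["instances"]
    # Each instance is assumed here to be a GCS URI of an image.
    scores = [run_image_quality(model, uri) for uri in instances]
    return jsonify({"predictions": scores})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=PORT)
```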
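And the registration/deployment side, assuming the container above has been pushed to Artifact Registry (project, region, bucket, and image names below are placeholders):

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Register the custom container as a Vertex AI Model; artifact_uri is the
# GCS folder holding image_quality.pb from step 1.
model = aiplatform.Model.upload(
    display_name="image-quality",
    artifact_uri="gs://my-model-bucket/image_quality/",
    serving_container_image_uri=(
        "us-central1-docker.pkg.dev/my-project/my-repo/image-quality-server:latest"
    ),
    serving_container_predict_route="/predict",
    serving_container_health_route="/health",
    serving_container_ports=[8080],
)

# Deploy it to an endpoint that can later be called with endpoint.predict.
endpoint = model.deploy(
    machine_type="n1-standard-4",
    min_replica_count=1,
    max_replica_count=1,
)
```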
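For steps 3 and 5, my current guess is to write structured logs with the Cloud Logging client and push metrics through Vertex AI Experiments backed by a Vertex AI TensorBoard instance, but I am not sure this is the intended pattern for code running inside a prediction container. Experiment, run, and resource names below are placeholders:

```python
from google.cloud import aiplatform
from google.cloud import logging as cloud_logging

# Structured logging from inside the custom inference code; these entries
# show up in Cloud Logging for the endpoint.
log_client = cloud_logging.Client()
logger = log_client.logger("image-quality-endpoint")
logger.log_struct({"event": "prediction", "image": "gs://my-image-bucket/img.jpg", "score": 0.87})

# Metrics intended to surface in TensorBoard, via Vertex AI Experiments
# backed by an existing Vertex AI TensorBoard instance.
aiplatform.init(
    project="my-project",
    location="us-central1",
    experiment="image-quality-serving",
    experiment_tensorboard=(
        "projects/my-project/locations/us-central1/tensorboards/1234567890"
    ),
)
aiplatform.start_run("prediction-batch-001")
aiplatform.log_time_series_metrics({"mean_quality_score": 0.87}, step=1)
aiplatform.end_run()
```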
The existing Vertex AI code samples cover custom training and invoking model.batch_predict / endpoint.predict, but they don't show how to execute custom prediction code.
It would be great if someone could provide guidelines and links to documentation/code for implementing the above steps.
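The calling side of step 4 seems to follow those samples directly; a minimal sketch, assuming the predict handler accepts GCS URIs as instances (endpoint ID and bucket paths are placeholders):

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Endpoint created by model.deploy() in step 2.
endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/1234567890"
)

instances = [
    "gs://my-image-bucket/images/cat_001.jpg",
    "gs://my-image-bucket/images/cat_002.jpg",
]
response = endpoint.predict(instances=instances)
print(response.predictions)
```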
Thanks