How can I utilize the megagpu machine type during endpoint deployment in Vertex AI? Are there any example models or other resources I can use to better grasp this?
What kind of model are you deploying on the endpoint? Can you clarify what you mean by "utilizing the GPU during endpoint deployment"?
Thank you for responding. It is a simple TensorFlow tabular classification model, and deployment takes about 18-20 minutes after model registration. I decided to use a megagpu machine type because I want to shorten this time as much as I can, hoping that choice would reduce the deployment time.
Thank you for clarifying. As far as I know, the deployment of an endpoint is handled on the GCP backend, so it is not possible to use a GPU to shorten the deployment time. The GPU you select only configures the serving nodes once the endpoint is up.
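To show where the GPU setting actually applies, here is a minimal sketch using the Vertex AI Python SDK. The project, region, model ID, and display name are placeholders, and you should confirm that the machine type and accelerator combination is supported for online prediction in your region:

```python
from google.cloud import aiplatform

# Placeholders: substitute your own project, region, and registered model ID.
aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model(model_name="1234567890")  # ID from the Vertex AI Model Registry

# The accelerator settings below configure the serving nodes only;
# they do not speed up the deployment operation itself.
endpoint = model.deploy(
    deployed_model_display_name="tabular-classifier",
    machine_type="a2-megagpu-16g",       # A2 machine type that comes with A100 GPUs
    accelerator_type="NVIDIA_TESLA_A100",
    accelerator_count=16,
    min_replica_count=1,
    max_replica_count=1,
)
print(endpoint.resource_name)
```

For a small tabular classification model, a CPU-only machine type (or a single smaller GPU) is usually sufficient for serving; the larger machine type will not make the deployment operation finish any faster.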
Okay, thanks. But when I run the same deployment from another account, it's done within 2-3 minutes!