
Deploying a model on a Vertex AI deploymentResourcePool to an endpoint located in another project

I'm trying to deploy a custom-trained model to a deployment resource pool located in project-1, with the endpoint located in project-2. I have granted the Editor role on project-1 to a user account (u1), which also has the Editor role on project-2. When I try to deploy the model as user account (u1), I get the following error:

grpc_message:"DeploymentResourcePool 'projects/{project-1}/locations/us-central1/deploymentResourcePools/drlpool' does not exist."

Note: the deployment resource pool (drlpool) does exist, and the deployment succeeds if the endpoint and the deployment resource pool are in the same project.

Solved
1 ACCEPTED SOLUTION

Could you please check the roles granted to your service account as advised in this Stack Overflow question:

For example, suppose you have Project A and Project B, where Project A hosts the model.

  • Add the service account of Project B to Project A and grant it at least the roles/aiplatform.user predefined role. See the predefined roles and look up roles/aiplatform.user to see the complete set of permissions it contains.
  • This role contains the aiplatform.endpoints.* and aiplatform.batchPredictionJobs.* permissions, which are the permissions needed to run predictions.

    See IAM permissions for Vertex AI

| Resource | Operation | Permissions needed |
|---|---|---|
| batchPredictionJobs | Create a batchPredictionJob | aiplatform.batchPredictionJobs.create (permission needed on the parent resource) |
| endpoints | Predict an endpoint | aiplatform.endpoints.predict (permission needed on the endpoint resource) |
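As a sketch of the grant in the first bullet (the project ID and service-account email below are placeholders, not values from this thread), the roles/aiplatform.user binding can be added programmatically with the Resource Manager client:

```python
# Sketch: grant Project B's service account the Vertex AI user role on
# Project A. Requires permission to set IAM policy on Project A.

AIPLATFORM_USER = "roles/aiplatform.user"


def service_account_member(email: str) -> str:
    """IAM member string for a service account, e.g. 'serviceAccount:<email>'."""
    return f"serviceAccount:{email}"


def grant_aiplatform_user(project_id: str, sa_email: str) -> None:
    # Requires the google-cloud-resource-manager package; imported lazily
    # so the helpers above stay usable without it.
    from google.cloud import resourcemanager_v3

    client = resourcemanager_v3.ProjectsClient()
    resource = f"projects/{project_id}"
    # Read-modify-write of the project IAM policy.
    policy = client.get_iam_policy(request={"resource": resource})
    policy.bindings.add(role=AIPLATFORM_USER,
                        members=[service_account_member(sa_email)])
    client.set_iam_policy(request={"resource": resource, "policy": policy})
```

Calling, say, grant_aiplatform_user("project-a", "predictor@project-b.iam.gserviceaccount.com") mirrors what `gcloud projects add-iam-policy-binding` would do from the console or CLI.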

With this setup, Project B will be able to use the model in Project A to run predictions.

Note: just make sure that the script in Project B points to the resources in Project A, such as the project_id and endpoint_id.
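For instance (a sketch with a placeholder project ID, region, and endpoint ID, not values from this thread), the Project B script would reference Project A's endpoint like this:

```python
# Sketch: a script running as Project B's service account, calling an
# endpoint hosted in Project A. All IDs below are placeholders.


def endpoint_resource_name(project_id: str, location: str, endpoint_id: str) -> str:
    """Fully qualified endpoint name; note it names Project A, not Project B."""
    return f"projects/{project_id}/locations/{location}/endpoints/{endpoint_id}"


def predict(instances):
    # Requires the google-cloud-aiplatform package, authenticated as the
    # service account that was granted roles/aiplatform.user on Project A.
    from google.cloud import aiplatform

    endpoint = aiplatform.Endpoint(
        endpoint_name=endpoint_resource_name("project-a", "us-central1", "1234567890")
    )
    return endpoint.predict(instances=instances)
```

The common mistake is initializing the SDK with Project B's own ID; the endpoint name must point at the project that hosts it.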


If after that you are still having issues, it would be better to export the model from project-1 and import it into project-2, as shown in the documentation:

The Model and Endpoint components expose the functionalities of the Vertex AI endpoint and model resources. You can import existing model resources that you've trained outside of Vertex AI, or that you've trained using Vertex AI and exported. After you import your model, this resource is available in Vertex AI. You can deploy this model to an endpoint and then send prediction requests to this resource.
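The export/import route above can be sketched with the Vertex AI Python SDK (the bucket, model ID, and serving image below are hypothetical placeholders, not values from this thread):

```python
# Sketch: export a model's artifacts from one project and re-import them
# into another. All resource IDs and the bucket are placeholders.


def model_resource_name(project_id: str, location: str, model_id: str) -> str:
    """Fully qualified Vertex AI model name, as accepted by the SDK."""
    return f"projects/{project_id}/locations/{location}/models/{model_id}"


def copy_model(source_name: str, dest_project: str, staging_bucket: str,
               serving_image: str) -> None:
    # Requires the google-cloud-aiplatform package and access to both
    # projects plus the staging bucket.
    from google.cloud import aiplatform

    # Export the trained artifacts from the source project to Cloud Storage.
    source = aiplatform.Model(model_name=source_name)
    source.export_model(
        export_format_id="custom-trained",
        artifact_destination=f"gs://{staging_bucket}/export/",
    )

    # Import the exported artifacts as a new model in the destination
    # project. Note the export lands in a timestamped subdirectory of
    # artifact_destination; point artifact_uri at that directory.
    aiplatform.Model.upload(
        project=dest_project,
        display_name="imported-model",
        artifact_uri=f"gs://{staging_bucket}/export/",
        serving_container_image_uri=serving_image,
    )
```

Once the model exists in project-2, the deployment resource pool and the endpoint are in the same project, which the original poster confirmed already works.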

