I'm trying to deploy a custom-trained model to a deployment resource pool located in project-1, with the endpoint located in project-2. I have granted the Editor role on project-1 to a user account (u1), which also has the Editor role in project-2. When I try to deploy the model from user account (u1), I get the following error:
grpc_message:"DeploymentResourcePool 'projects/{project-1}/locations/us-central1/deploymentResourcePools/drlpool' does not exist.
*The deployment resource pool (drlpool) does exist, and the deployment succeeds when the endpoint and the deployment resource pool are in the same project.
Could you please check the roles granted to your service account, as advised in this Stack Overflow question.
For example, say you have Project A and Project B, and Project A hosts the model.
In Project A, grant the service account the roles/aiplatform.user predefined role. See the predefined roles documentation and look for roles/aiplatform.user to see the complete set of permissions it contains. This role includes aiplatform.endpoints.* and aiplatform.batchPredictionJobs.*, which are the permissions needed to run predictions.
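For reference, here is a minimal sketch of granting that role programmatically with the Resource Manager Python client; the project ID and member email are placeholders, and the same grant can of course be done in the Cloud Console or with gcloud instead.

```python
# Sketch: grant roles/aiplatform.user on Project A (the model-hosting project)
# to the principal that will run predictions from Project B.
# "project-a-id" and the member email below are placeholders, not real values.
from google.cloud import resourcemanager_v3

def grant_aiplatform_user(project_id: str, member: str) -> None:
    client = resourcemanager_v3.ProjectsClient()
    resource = f"projects/{project_id}"

    # Read-modify-write of the project's IAM policy; the returned etag
    # protects against overwriting concurrent policy changes.
    policy = client.get_iam_policy(request={"resource": resource})
    policy.bindings.add(role="roles/aiplatform.user", members=[member])
    client.set_iam_policy(request={"resource": resource, "policy": policy})

grant_aiplatform_user(
    "project-a-id",
    "serviceAccount:sa-from-project-b@project-b-id.iam.gserviceaccount.com",
)
```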
See IAM permissions for Vertex AI
| Resource | Operation | Permissions needed |
|---|---|---|
| batchPredictionJobs | Create a batchPredictionJob | aiplatform.batchPredictionJobs.create (permission needed on the parent resource) |
| endpoints | Predict an endpoint | aiplatform.endpoints.predict (permission needed on the endpoint resource) |
With this setup, Project B will be able to use the model in Project A to run predictions.
Note: Just make sure that the script in Project B points to the resources in Project A, i.e. the project_id and endpoint_id.
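For illustration, a minimal sketch of what that prediction call from Project B's script could look like; the project ID, location, endpoint ID, and instance payload are placeholder values, not taken from your setup.

```python
# Sketch: Project B's script calls the endpoint that lives in Project A.
# PROJECT_A_ID, ENDPOINT_ID and the instance payload are placeholders.
from google.cloud import aiplatform

PROJECT_A_ID = "project-a-id"   # project that hosts the model/endpoint
LOCATION = "us-central1"
ENDPOINT_ID = "1234567890"      # hypothetical endpoint ID in Project A

# Initialize the SDK against Project A, not Project B.
aiplatform.init(project=PROJECT_A_ID, location=LOCATION)

endpoint = aiplatform.Endpoint(endpoint_name=ENDPOINT_ID)

# The instance format depends on your model's serving signature.
response = endpoint.predict(instances=[{"feature_1": 1.0, "feature_2": 2.0}])
print(response.predictions)
```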
If after that you are still having issues, it would be better to export the model from project-1 and import it into project-2, as shown in the documentation:
The Model and Endpoint components expose the functionality of the Vertex AI endpoint and model resources. You can import existing model resources that you've trained outside of Vertex AI, or that you've trained using Vertex AI and exported. After you import your model, this resource is available in Vertex AI. You can deploy this model to an endpoint and then send prediction requests to this resource.
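If you go the export/import route, a rough sketch with the Vertex AI Python SDK might look like the following; the model ID, display name, Cloud Storage bucket, and serving container image are hypothetical placeholders, and the export format available depends on how your model was trained.

```python
# Sketch: export a model's artifacts from project-1, then register ("upload")
# them as a new Model resource in project-2 and deploy it there.
# All IDs, bucket names, and the serving image below are placeholders.
from google.cloud import aiplatform

# --- In project-1: export the trained model's artifacts to Cloud Storage ---
aiplatform.init(project="project-1", location="us-central1")
source_model = aiplatform.Model("1111111111")      # hypothetical model ID
source_model.export_model(
    export_format_id="custom-trained",             # format depends on the model
    artifact_destination="gs://shared-bucket/exported-model/",
)

# --- In project-2: import the artifacts as a new Model and deploy it ---
aiplatform.init(project="project-2", location="us-central1")
imported_model = aiplatform.Model.upload(
    display_name="imported-custom-model",
    # Adjust to the exact subfolder the export step created in the bucket.
    artifact_uri="gs://shared-bucket/exported-model/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)
endpoint = imported_model.deploy(machine_type="n1-standard-2")
```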