AI-powered applications with Application Integration and Vertex AI
Vertex AI is a unified platform for harnessing the potential of generative AI in Google Cloud. It provides a comprehensive suite of tools and services that enable data scientists and ML engineers to seamlessly develop, deploy, and manage generative AI models.
We are excited to introduce the new Vertex AI - Predict task in Application Integration, which helps you integrate with 100+ pre-existing foundation models as well as custom-trained models of your choice.
To use the Vertex AI task with a pre-existing model, simply create a new Integration and add a Vertex AI task. In the task configuration, specify the pre-existing model you want to use in the endpoint field (for example, set publishers/google/models/text-bison@001 as the endpoint for the PaLM 2 text model).
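Under the hood, this endpoint path is resolved against the regional Vertex AI prediction REST API. As a rough sketch (the project ID and region below are placeholder assumptions, and the task manages the actual call for you):

```python
# Rough sketch of the regional REST URL the Vertex AI task resolves to.
# PROJECT and REGION are placeholders (assumptions), not real values.
PROJECT = "my-project"
REGION = "us-central1"
MODEL = "publishers/google/models/text-bison@001"

# The model path from the task's endpoint field is appended to the
# regional Vertex AI prediction API, ending in the :predict verb.
url = (
    f"https://{REGION}-aiplatform.googleapis.com/v1/"
    f"projects/{PROJECT}/locations/{REGION}/{MODEL}:predict"
)
```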
The Vertex AI task accepts a JSON payload as input. You can use the Data Mapper to create a local JSON variable that represents the payload. This variable can then serve as a template for the input prompt, with its placeholders "rendered" by the Resolve Template function in the Data Mapper.
For example, you can create a local variable named "PalmPromptRequest" of type JSON and set its default value to:
{
  "instances": [{
    "prompt": "$TextPrompt$"
  }],
  "parameters": {
    "temperature": 0.2,
    "maxOutputTokens": 768.0,
    "topP": 0.8,
    "topK": 40.0
  }
}
Notice that we can create a fully "rendered" JSON value with the help of a single variable (TextPrompt, which can come from an input variable). With this TextPrompt string variable set up, we simply apply the Resolve Template function to the PalmPromptRequest variable.
That's it! With this, we can dynamically adjust the prompt values for the integration!
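To make the rendering step concrete, here is a minimal Python stand-in for what Resolve Template does with the PalmPromptRequest payload. The resolve_template helper is illustrative only, not the actual Data Mapper implementation:

```python
import json

# Illustrative stand-in for the Data Mapper's Resolve Template function:
# substitute each $Name$ placeholder in the JSON template with its value.
# (Assumes the substituted values are JSON-safe, i.e. no embedded quotes.)
PAYLOAD_TEMPLATE = """{
  "instances": [{
    "prompt": "$TextPrompt$"
  }],
  "parameters": {
    "temperature": 0.2,
    "maxOutputTokens": 768.0,
    "topP": 0.8,
    "topK": 40.0
  }
}"""

def resolve_template(template: str, variables: dict) -> dict:
    rendered = template
    for name, value in variables.items():
        rendered = rendered.replace(f"${name}$", value)
    return json.loads(rendered)

request = resolve_template(
    PAYLOAD_TEMPLATE, {"TextPrompt": "Write a poem about a rainbow"}
)
```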
A word of caution: Vertex AI endpoints are not available in every Google Cloud region. The native GET_INTEGRATION_REGION() Data Mapper function can be used to set the Vertex AI region, but if the Integration runs in a region that has no Vertex AI endpoint, the task will fail with an error. In that situation, you can simply fall back to a default region (such as us-central1).
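The fallback logic can be sketched as follows. The region set below is illustrative only, not an authoritative availability list; check the Vertex AI documentation for current regions:

```python
# Sketch of the region fallback: use the integration's region when Vertex AI
# supports it, otherwise a default. The region set is illustrative only.
VERTEX_AI_REGIONS = {"us-central1", "us-west1", "europe-west4", "asia-southeast1"}
DEFAULT_REGION = "us-central1"

def vertex_region(integration_region: str) -> str:
    # integration_region would come from GET_INTEGRATION_REGION()
    if integration_region in VERTEX_AI_REGIONS:
        return integration_region
    return DEFAULT_REGION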
Outputs from the Vertex AI API are quite comprehensive and include details such as metadata, safety attributes, citations, and more. Below is a sample of such a response.
{
  "predictions": [
    {
      "citationMetadata": {
        "citations": []
      },
      "safetyAttributes": {
        "blocked": false,
        "scores": [
          0.1,
          0.1
        ],
        "categories": [
          "Death, Harm & Tragedy",
          "Religion & Belief"
        ]
      },
      "content": "I saw a rainbow in the sky today,\nIt was a beautiful sight to see.\nThe colors were so bright and vibrant,\nIt made me feel happy inside.\n\nI thought about all the things that make me happy,\nAnd I realized that there are so many things to be grateful for.\nI have a loving family and friends,\nI have a roof over my head,\nAnd I have food to eat.\n\nI am so lucky to be alive,\nAnd I am grateful for every day that I get to spend on this earth.\nI will never take anything for granted,\nAnd I will always remember to appreciate the beauty of the world around me.\n\nThank you for the rainbow,\nIt was a reminder that there is still good in the world.\nIt gave me hope for the future,\nAnd it made me believe that anything is possible."
    }
  ],
  "metadata": {
    "tokenMetadata": {
      "inputTokenCount": {
        "totalBillableCharacters": 13,
        "totalTokens": 3
      },
      "outputTokenCount": {
        "totalTokens": 183,
        "totalBillableCharacters": 569
      }
    }
  }
}
Often, when creating a simple integration, all we want is the final content of the prediction. We can again use the visual Data Mapper in Application Integration to return only that value as the final output.
For our sample integration, the output is simply the content. We created a simple Data Mapper task that extracts that value and assigns it to our Integration output variable.
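As an illustration, here is the equivalent extraction in Python. The response dict is abbreviated from the sample above, and extract_content is just a sketch of the mapping, not product code:

```python
# Minimal sketch of the Data Mapper step: map predictions[0].content
# to the integration's output variable. Response abbreviated from the
# sample above.
response = {
    "predictions": [
        {
            "citationMetadata": {"citations": []},
            "safetyAttributes": {"blocked": False},
            "content": "I saw a rainbow in the sky today, ...",
        }
    ],
    "metadata": {"tokenMetadata": {}},
}

def extract_content(resp: dict) -> str:
    # Equivalent to the mapping predictions[0].content -> output variable
    return resp["predictions"][0]["content"]

output = extract_content(response)
```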
You could even design your integration to be used as a sub-integration. That is, other Integrations (say, before adding text to a Jira ticket or inserting a row into a database) could invoke our "GenAI" sub-integration to generate, summarize, or classify text before inserting the final output! A sample integration built this way is available as a reference for this post.
Custom models must first be deployed to an endpoint in Vertex AI. That endpoint can then be referenced in the Vertex AI task as "endpoints/<endpoint-id>", instead of the "publishers/google/models/<model-id>" format used for pre-existing models.
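The two formats can be contrasted with a small helper. The model_path function is a hypothetical name for illustration, not part of the product:

```python
# Hypothetical helper contrasting the two endpoint formats the task accepts.
def model_path(model_id=None, endpoint_id=None):
    if endpoint_id is not None:
        # Custom model deployed to a Vertex AI endpoint
        return f"endpoints/{endpoint_id}"
    # Pre-existing foundation model
    return f"publishers/google/models/{model_id}"
```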
Vertex AI and Application Integration are powerful tools that can be used to create and deploy innovative AI-powered applications. The new Vertex AI - Predict task in Application Integration makes it even easier to integrate Vertex AI with your existing applications.
The Vertex AI task can be used to build many kinds of innovative AI-powered applications. With Vertex AI and Application Integration, the possibilities are endless.
Note: Special thanks to @carloscabral for helping me write this post.
Great to see. This is really putting Gen AI to use in business processes.
Now users can make their integrations and automations intelligent with pre-built Gen AI tasks in Application Integration. This gives access to hundreds of models in the Vertex AI Model Garden as well as custom models deployed by the user.