OneTwo and Vertex AI Reasoning Engine: exploring advanced AI agent development on Google Cloud

lolejniczak17

The emergence of generative AI has opened new possibilities for intelligent applications capable of complex problem-solving and sophisticated interactions. Google Cloud's Vertex AI Reasoning Engine is a purpose-built platform designed to streamline the development and deployment of such applications in enterprise environments.

At the heart of these applications are intelligent "agents"—sophisticated programs powered by advanced language models like Gemini. These agents are designed to tackle complex user queries by reasoning through problems, planning solutions, and coordinating with external systems. In essence, they serve as the brains behind a new generation of AI-driven experiences.

This blog post offers a practical guide for experienced Google Cloud practitioners on how to build and deploy intelligent agents using the OneTwo open-source framework and the efficiency of Vertex AI Reasoning Engine.

We'll explore a real-world example, break down the code, and show you how easy it is to leverage this technology to create next-generation AI-driven applications. Let's go! 

OneTwo: a lightweight and versatile framework

OneTwo stands out for its lightweight and versatile nature, making it perfect for both research and production environments. Key advantages include:

  • a model-agnostic API for easy model swapping
  • support for complex computation graphs
  • automatic request batching for efficiency
  • caching for reproducibility

For a deeper understanding of this fantastic framework, we recommend exploring its official website.
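
To make the model-agnostic API concrete, here is a minimal sketch of calling a model through OneTwo's builtins. It assumes the same VertexAIAPI backend and Gemini model we use later in this post; swapping models or backends only changes the registration lines, not the calling code.

from onetwo import ot
from onetwo.backends import vertexai_api
from onetwo.builtins import llm

# Register a backend once; all llm.* builtins are routed through it.
backend = vertexai_api.VertexAIAPI(generate_model_name='gemini-1.5-flash-001')
backend.register()

# The same generate_text call works unchanged against any registered backend.
print(ot.run(llm.generate_text('Name three major world currencies.')))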

Vertex AI Reasoning Engine: simplifying the development lifecycle

In a previous blog post, we showed you how to develop, deploy, and query agents using the OneTwo library and Cloud Run in a Do It Yourself (DIY) approach. This involved creating a Dockerfile, building a Docker image, pushing it to Artifact Registry, and finally deploying it as a Cloud Run service.

Reasoning Engine simplifies this process with a standardized interface to a variety of frameworks, single-click deployments, and a centralized Agent Registry. This registry functions like Vertex AI's Model Registry or a service registry for distributed applications, allowing you to manage, monitor, and maintain your agents efficiently.

Building an agent with OneTwo

Let's build a practical AI agent that can provide real-time currency exchange rates. We'll use Vertex AI Colab, a cloud-based Jupyter notebook environment with easy access to GPUs and pre-installed machine learning and AI libraries, as our working environment.

Step 1: Setting up your environment

The first thing we need to do is install the OneTwo library. You can do this directly in your Vertex AI Colab notebook using the following command:

!pip install git+https://github.com/google-deepmind/onetwo

Our agent will have access to a tool that calls the Frankfurter API (https://api.frankfurter.app/), a reliable source for exchange rate data. This tool will be implemented as a Python function!

import requests

def get_exchange_rate(
   currency_from: str = "USD",
   currency_to: str = "EUR",
   currency_date: str = "latest",
):
   """Retrieves the exchange rate between two currencies on a specified date.

   Uses the Frankfurter API (https://api.frankfurter.app/) to obtain exchange rate data.

   Args:
       currency_from: The base currency (3-letter currency code). Defaults to "USD" (US Dollar).
       currency_to: The target currency (3-letter currency code). Defaults to "EUR" (Euro).
       currency_date: The date for which to retrieve the exchange rate. Defaults to "latest" for the most recent exchange rate data. Can be specified in YYYY-MM-DD format for historical rates.

   Returns:
       dict: A dictionary containing the exchange rate information.
            Example: {"amount": 1.0, "base": "USD", "date": "2023-11-24", "rates": {"EUR": 0.95534}}
   """

   response = requests.get(
       f"https://api.frankfurter.app/{currency_date}",
       params={"from": currency_from, "to": currency_to},
   )
   return response.json()
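
Before wiring this function into an agent, it is worth calling it once directly to confirm the API is reachable. This is just a quick sanity check; the exact rate and date in the output will vary.

# Quick local test of the tool function; output values change over time.
print(get_exchange_rate(currency_from="USD", currency_to="PLN"))
# Expected shape: {"amount": 1.0, "base": "USD", "date": "...", "rates": {"PLN": ...}}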

To use this Python function as a tool, we need the following declaration:

from onetwo.stdlib.tool_use import llm_tool_use

# Wrap the Python function as a OneTwo tool so the agent can call it.
exchange_rate_tool = llm_tool_use.Tool(
   name='get_exchange_rate',
   function=get_exchange_rate,
   # example=EXAMPLES,
)

We’ll also use a tool called “Finish” to present the final answer to the user once the model has finished reasoning. This helps ensure a clear and concise response that summarizes the relevant information gathered by the agent.

finish_tool = llm_tool_use.Tool(
   name='Finish',
   function=lambda x: x,
   description='Function for returning the final answer.',
)

Step 2: Creating the OneTwo Agent with templates

I personally find one of the biggest advantages of the OneTwo framework to be how easy it is to customize its ReAct template with your own examples. Here is mine:

from typing import Callable, Sequence
from onetwo.agents.react import ReActStep, ReActState

REACT_FEWSHOTS_EXCHANGE = [
   ReActState(
       inputs="What's the exchange rate from US dollars to British currency today??",
       updates=[
           ReActStep(
               thought=(
                   'The user wants to know the exchange rate between USD and GBP today'
               ),
               action=llm_tool_use.FunctionCall(
                   function_name='get_exchange_rate',
                   args=('USD','GBP','latest'),
                   kwargs={},
               ),
               observation='{"amount": 1.0, "base": "USD", "date": "2023-11-24", "rates": {"GBP": 0.95534}}',
               fmt=llm_tool_use.ArgumentFormat.JSON,
           ),
           ReActStep(
               is_finished=True,
                thought='The API response contains the exchange rate for today. The user should be informed about the current exchange rate. Extract the rate for GBP from the rates dictionary.',
                action=llm_tool_use.FunctionCall(
                    function_name='Finish',
                    args=('The exchange rate from USD to GBP today (2023-11-24) is 1 USD = 0.95534 GBP',),
                    kwargs={},
                ),
                observation='The exchange rate from USD to GBP today (2023-11-24) is 1 USD = 0.95534 GBP',
               fmt=llm_tool_use.ArgumentFormat.PYTHON,
           ),
       ],
   ),

]
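
If you also want the agent to handle historical-rate questions, you can append another exemplar in exactly the same shape. Here is a sketch; the question, date, and rate values below are placeholders chosen purely for illustration.

REACT_FEWSHOTS_EXCHANGE.append(
    ReActState(
        inputs='How many euros was 1 US dollar worth on 2023-01-02?',
        updates=[
            ReActStep(
                thought=(
                    'The user wants the historical USD to EUR rate for 2023-01-02.'
                ),
                action=llm_tool_use.FunctionCall(
                    function_name='get_exchange_rate',
                    args=('USD', 'EUR', '2023-01-02'),
                    kwargs={},
                ),
                observation='{"amount": 1.0, "base": "USD", "date": "2023-01-02", "rates": {"EUR": 0.93}}',
                fmt=llm_tool_use.ArgumentFormat.JSON,
            ),
            ReActStep(
                is_finished=True,
                thought='The API response contains the historical rate. Extract the EUR value and report it together with the date.',
                action=llm_tool_use.FunctionCall(
                    function_name='Finish',
                    args=('On 2023-01-02, 1 USD was worth 0.93 EUR.',),
                    kwargs={},
                ),
                observation='On 2023-01-02, 1 USD was worth 0.93 EUR.',
                fmt=llm_tool_use.ArgumentFormat.PYTHON,
            ),
        ],
    )
)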

With our groundwork laid, we’re ready to create a custom template that harnesses the power of the OneTwo Agent. This template will act as the blueprint for our agent, defining its structure and behavior.

class OneTwoAgent:
   def __init__(
           self,
           model: str,
           tools: Sequence[Callable],
           project: str,
           location: str,
       ):
       self.model_name = model
       self.tools = tools
       self.project = project
       self.location = location

   def set_up(self):
       """All unpickle-able logic should go here.

       The .set_up() method should not be called for an object that is being
       prepared for deployment.
       """
       import vertexai

       import os
       from onetwo.backends import vertexai_api
       from onetwo import ot
       from onetwo.builtins import llm
       from onetwo.agents import react
       from onetwo.stdlib.tool_use import llm_tool_use
       from onetwo.stdlib.tool_use import python_tool_use

       backend = vertexai_api.VertexAIAPI(generate_model_name=self.model_name)
       backend.register()


        self.react_agent = react.ReActAgent(
            exemplars=REACT_FEWSHOTS_EXCHANGE,
            environment_config=python_tool_use.PythonToolUseEnvironmentConfig(
                tools=self.tools,
            ),
            # max_steps=20,
            # stop_prefix='',
        )


   def query(self, input: str):
       """Query the application.

       Args:
           input: The user prompt.

       Returns:
           The output of querying the application with the given input.
       """
       from onetwo import ot
       answer = ot.run(self.react_agent(inputs=input))
       return {"answer": answer}

In this template:

  1. __init__: Initializes the agent with the model name, tools (functions it can use), project, and location details.
  2. set_up: This method handles logic that can't be easily serialized for deployment (like connecting to the Vertex AI API). Here we set up a ReActAgent, a type of OneTwo agent that works in a "reason and act" loop.
  3. query: This is the heart of our agent. It takes the user's input, runs the ReAct agent to get a response, and returns that response as a dictionary.

Next, we’ll create an instance of our OneTwoAgent, providing it with the model name, the tools it can use (exchange_rate_tool and finish_tool), and project/location details. We then call the set_up method to prepare it for action.

agent = OneTwoAgent(
   model='gemini-1.5-flash-001',  # Required.
   tools=[exchange_rate_tool, finish_tool],  # Optional.
   project='genai-app-builder',
   location='us-central1',
)
agent.set_up()

Now our agent is ready to answer questions about exchange rates! We can use it like this:

response = agent.query(
    input="What is the exchange rate from US dollars to Polish currency?"
)
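
Because our query method returns a dictionary, the final answer produced by the Finish tool can be read from the "answer" key:

# The agent returns a dict, as defined in OneTwoAgent.query above.
print(response["answer"])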

Step 3: Deploying with Reasoning Engine

One option is to deploy OneTwo agents to Cloud Run through a multi-step process: creating a Dockerfile, building an image, pushing it to Artifact Registry, and finally deploying it as a Cloud Run service.

Reasoning Engine transforms this complex workflow into a single, intuitive action. All we need is the reasoning_engines.ReasoningEngine.create method. Let's see how it works:

import vertexai
from vertexai.preview import reasoning_engines

PROJECT_ID = "genai-app-builder"  # @param {type:"string"}
LOCATION = "us-central1"  # @param {type:"string"}
STAGING_BUCKET = "gs://lolejniczak-imagen2-segmenttaion"  # @param {type:"string"}
vertexai.init(project=PROJECT_ID, location=LOCATION, staging_bucket=STAGING_BUCKET)

remote_agent = reasoning_engines.ReasoningEngine.create(
    OneTwoAgent(
        model='gemini-1.5-flash-001',  # Required.
        tools=[exchange_rate_tool, finish_tool],  # Optional.
        project='genai-app-builder',
        location='us-central1',
    ),
    requirements=[
        "google-cloud-aiplatform==1.55.0",
        "pydantic",
        "requests",
        "blinker==1.4",
        "sentencepiece",
        "git+https://github.com/google-deepmind/onetwo",
    ],
    display_name="OneTwoAgent",
    description="A conversational agent that can answer questions about the world.",
)

In this code we:

  1. Import the necessary packages: vertexai and reasoning_engines.
  2. Initialize Vertex AI: use your project ID, location, and staging bucket to set up the connection to Vertex AI.
  3. Create the remote agent: The magic happens here!
    1. We pass our agent instance (the OneTwoAgent we created earlier).
    2. We list the requirements – the Python libraries needed to run our agent.
    3. We give our agent a display_name (like "OneTwoAgent") and a description.

This single command handles the entire deployment process for you!
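
Because the deployed agent now lives in the Agent Registry mentioned earlier, you can also manage it from the same SDK. Here is a short sketch, assuming the vertexai preview SDK initialized above:

# List the Reasoning Engine deployments in this project and location.
for engine in reasoning_engines.ReasoningEngine.list():
    print(engine.resource_name)

# Delete the deployment when the agent is no longer needed (uncomment to run).
# remote_agent.delete()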

Step 4: Querying the deployed agent

Once deployment is complete, your agent is ready to be queried, either through the REST API or the Python SDK. Let's demonstrate querying with the Python SDK:

remote_agent.query(
    input="What's the exchange rate from US dollars to Polish currency today?"
)
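
You can also reconnect to the deployed agent from a completely new session using its resource name. The resource name below is a placeholder; use the one returned by ReasoningEngine.create or shown in the Vertex AI console.

import vertexai
from vertexai.preview import reasoning_engines

vertexai.init(project=PROJECT_ID, location=LOCATION)

# Placeholder resource name; replace with your deployed agent's resource name.
agent_from_registry = reasoning_engines.ReasoningEngine(
    "projects/PROJECT_NUMBER/locations/us-central1/reasoningEngines/ENGINE_ID"
)
print(agent_from_registry.query(input="What's the exchange rate from EUR to USD today?"))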

Conclusion: advanced AI agent development on Google Cloud

This guide demonstrated how easily OneTwo agents, intelligent applications powered by advanced language models, can be deployed onto Vertex AI Reasoning Engine. The key to this simplicity is a custom template that streamlines the development process and integrates seamlessly with Reasoning Engine's infrastructure, eliminating the need for manual Dockerfile creation, image building, and the other steps of a DIY deployment.

This streamlined approach empowers developers to rapidly prototype, test, and deploy their agents, accelerating the development cycle and enabling them to focus on building innovative AI-driven applications.

Get started

Now it's your turn! Experiment with building your own AI agents using the code examples provided in this post, together with the OneTwo and Vertex AI Reasoning Engine developer documentation.

We encourage you to share your experiences, questions, and feedback in the comments below. Happy coding!
