Hi,
I am building a conversational agent in the Reasoning Engine framework (also called LangChain on Vertex AI), inspired by this example: https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/reasoning-engine/tutorial_vert...
Now, I want to pass system instructions to the agent, and I discovered there are two ways to do it: either pass the system_instruction argument directly, or define the system instruction in a custom prompt template. See code snippets (1.) and (2.) below.
Both of them work fine when I query the agent locally; I can clearly see that the agent follows the specified instructions. However, when I deploy the agent to the reasoning engine, snippet (3.), the instructions are ignored, even for the exact same queries within a clean session. For example, I instructed the agent to introduce itself a certain way, and when I ask it "Who are you?" locally, it answers according to my instructions. But once deployed to the reasoning engine, it gives Gemini's default answer, "I am Gemini, a large language model created by Google.", completely ignoring my instructions. I wasn't able to find a way to make the same instructions effective for deployments to the reasoning engine.
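For illustration, this is roughly how I test it (a minimal sketch; "Who are you?" is just my test query, and I'm assuming the usual query() interface and response shape from the tutorial, with the answer under the "output" key):

# Local test: here the agent follows my system instruction.
response = agent.query(input="Who are you?")
print(response["output"])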
I understand the product is still in preview, but this is very basic stuff in my opinion; it's hard to believe I couldn't find any helpful documentation or tutorial for it. Does anybody have any advice or ideas, please?
1. Using the system_instruction argument
from vertexai.preview import reasoning_engines

agent = reasoning_engines.LangchainAgent(
    model=model,
    system_instruction="some testable system instruction",
    chat_history=get_session_history,
    model_kwargs={"temperature": 0},
    tools=[search_datastore],
    agent_executor_kwargs={"return_intermediate_steps": True},
)
2. Using a custom prompt template
from langchain_core import prompts
from langchain.agents.format_scratchpad.tools import format_to_tool_messages
from vertexai.preview import reasoning_engines

system_instruction = "some testable system instructions"

prompt = {
    "history": lambda x: x["history"],
    "input": lambda x: x["input"],
    "agent_scratchpad": lambda x: format_to_tool_messages(x["intermediate_steps"]),
} | prompts.ChatPromptTemplate.from_messages(
    [
        ("system", system_instruction),
        prompts.MessagesPlaceholder(variable_name="history"),
        ("user", "{input}"),
        prompts.MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

agent = reasoning_engines.LangchainAgent(
    model=model,
    prompt=prompt,
    chat_history=get_session_history,
    model_kwargs={"temperature": 0},
    tools=[search_datastore],
    agent_executor_kwargs={"return_intermediate_steps": True},
)
3. Deployment to Reasoning Engine
remote_agent = reasoning_engines.ReasoningEngine.create(
    agent,
    requirements=[
        "google-cloud-aiplatform[langchain,reasoningengine]==1.69.0",
        "langchain-google-community==1.0.8",
        "google-cloud-discoveryengine==0.12.2",
    ],
    display_name="assistant",
)
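Once deployed, I query the remote agent the same way (same assumptions as in the local sketch above):

# Remote test: the exact same query, but the system instruction is ignored.
response = remote_agent.query(input="Who are you?")
print(response["output"])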
After some more experiments, I have finally found how the prompt should be defined so that the system instruction is also reflected in the reasoning engine, i.e. the remote agent.
In case it might help someone, it should look like this:
prompt = {
    "history": lambda x: x["history"],
    "input": lambda x: x["input"],
    "agent_scratchpad": lambda x: format_to_tool_messages(x["intermediate_steps"]),
    "system_instruction": lambda x: system_instruction,
} | prompts.ChatPromptTemplate.from_messages(
    [
        ("system", "{system_instruction}"),
        prompts.MessagesPlaceholder(variable_name="history"),
        ("user", "{input}"),
        prompts.MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)
Note that I had to use a lambda function, even though my instruction is a string constant. Otherwise, this exception is raised: TypeError: Expected a Runnable, callable or dict.
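As far as I understand, this is because LangChain coerces the dict on the left-hand side of the pipe into a RunnableParallel, and each dict value must itself be a Runnable, a callable, or a dict; a bare string is none of those, hence the TypeError. A minimal sketch of the two equivalent forms that do work (RunnableLambda is just the explicit variant of the lambda):

from langchain_core.runnables import RunnableLambda

# A plain callable is coerced into a Runnable automatically...
ok = {"system_instruction": lambda x: system_instruction}
# ...and wrapping it in RunnableLambda explicitly does the same thing.
also_ok = {"system_instruction": RunnableLambda(lambda x: system_instruction)}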
I also want to reply to this, as I ran into the same issue, and it was indeed an issue in the Vertex AI SDK:
https://github.com/googleapis/python-aiplatform/issues/5046