Hi, I'm trying to create a playbook in the Conversational Agents module. I've created a tool that takes the user's question text as input; the tool calls an external API that in turn queries other models.
The problem is that the agent passes the user's text to the tool verbatim and doesn't rewrite it to make references to the earlier conversation explicit.
Example:
- User: What are the 3 projects with the highest sales?
- Response: project_1: 10, project_2: 9, project_3: 8
- User: What were the monthly sales for the third project?
Here the agent sends that last question to the tool verbatim, but I need it to send: "What were the monthly sales for project project_3?"
I've tried modifying the instruction prompt and I've also tried adding several examples, but it keeps sending the plain question without context. What could I do in this case?
prompt:
- When the user asks about sales or supplies:
- 1. Analyze if the question has implicit references to the previous conversation (pronouns, relative terms, etc.)
- 2. If there are any, modify the text by replacing implicit references with their specific value from the context
- 3. Use ${TOOL:general_tool} with the modified text
- When modifying the text:
- Replace "this", "that", "those" with the specific name
- Replace "the previous", "the last" with the concrete value
- Convert relative quantities ("the third", "the first ones") into exact values
- Keep any additional parameters from the original question
- Don't add information or request clarifications
- Use only the information available in the context
- Return the tool's response without modifications
Hi @gebejaranod,
Welcome to Google Cloud Community!
I understand that you are trying to create a playbook in which the conversational agent forwards the user's query to an external API without enriching it with context from the ongoing conversation.
Here are some approaches that you can try to enhance the instruction prompt:
1. Ensure that the instruction prompt explicitly mentions analyzing the entire conversation history, not just the most recent message.
2. Include explicit patterns or examples of common references and how they should be resolved (see the sketch after this list).
3. Add a fallback mechanism: before querying ${TOOL:general_tool}, confirm that the context replacement was successful. If the replacement is invalid, or the tool returns an error or an empty response, provide a predefined fallback response, such as "I'm sorry, I couldn't find an answer to your question using the available information."
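For reference, here is a minimal sketch of a restructured instruction block that combines these points. It reuses the ${TOOL:general_tool} reference and the project_3 example from your post, so treat the exact wording as an assumption to adapt rather than a verified template:
- When the user asks about sales or supplies:
- 1. Re-read the entire conversation history, not only the latest user message.
- 2. Rewrite the question so that every pronoun, ordinal ("the third"), or relative term is replaced by the concrete name or value from earlier turns (for example, "the third project" becomes "project_3").
- 3. Call ${TOOL:general_tool} with the rewritten question, never with the original text.
- 4. If a reference cannot be resolved, or the tool returns an error or an empty response, reply: "I'm sorry, I couldn't find an answer to your question using the available information."
- Example of the expected rewrite:
- User: What are the 3 projects with the highest sales?
- Agent: project_1: 10, project_2: 9, project_3: 8
- User: What were the monthly sales for the third project?
- Query sent to ${TOOL:general_tool}: What were the monthly sales for project project_3?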
In addition, you may enhance the agent's ability to handle implicit references by incorporating context-aware patterns. Include diverse examples in the prompt illustrating how to translate various common reference patterns (e.g., pronouns, relative quantifiers, temporal references like "last month," elliptical constructions) into explicit, tool-ready queries. The more varied and comprehensive the examples, the better the agent's performance will be.
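As a hedged illustration, such few-shot pairs could be written directly into the instructions as "original question → rewritten question" lines; the second and third pairs below are hypothetical and only illustrate the pattern types mentioned above:
- "What were the monthly sales for the third project?" → "What were the monthly sales for project project_3?"
- "And how did it do last month?" → "What were the sales for project project_3 last month?" (hypothetical pronoun and temporal reference)
- "Show me the supplies for that one" → "Show me the supplies for project project_3" (hypothetical demonstrative reference)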
Was this helpful? If so, please accept this answer as “Solution”. If you need additional assistance, reply here within 2 business days and I’ll be happy to help.