I'm developing a Vertex AI agent using Agent Builder to answer user questions about investment products. The agent covers 12 products across equity, fixed income, real estate, private equity, and real assets. It uses an OpenAPI-based tool to access the dataset and respond to queries.
The agent sometimes fabricates answers when asked questions that are outside its dataset, instead of admitting it doesn't know. This is particularly problematic as it's dealing with financial information.
The agent answers accurately when the question is covered by our dataset.
So here are my questions.
1. How can we instruct the agent to rely solely on data in the dataset and not fabricate answers?
2. Is it possible to instruct the agent to refuse to answer questions about specific topics within the domain?
3. Are there best practices for preventing AI agents from making up answers in financial contexts?
4. Has anyone successfully implemented a "confidence threshold" for Vertex AI agents?
5. Are there specific prompting techniques that work well for maintaining strict boundaries on an agent's knowledge?
Any insights, experiences, or suggestions would be greatly appreciated. Thank you in advance for your help!
Hi @AnanthM,
Welcome to Google Cloud Community!
You're encountering a common challenge in AI development: getting an agent to stick strictly to its knowledge base and refrain from inventing information. This is especially crucial in the financial sector, where accuracy is paramount.
Here's a breakdown of your questions and potential solutions:
1. Relying Solely on Dataset Data: instruct the agent, in its goal and instructions, to answer only from what the dataset tool returns and to explicitly say it doesn't know otherwise (see the instruction sketch after this list).
2. Refusing to Answer Specific Topics: yes; list the off-limits topics explicitly in the agent's instructions, and consider a deterministic pre-filter in your application layer as a backstop (see the topic-filter sketch below).
3. Best Practices for Preventing Fabrication: ground every answer in retrieved passages, keep the generation temperature low, and regularly test with questions you know are outside the dataset.
4. Confidence Thresholds: depending on your setup there may be a grounding-confidence setting available; otherwise, you can gate answers on your own retrieval scores (see the threshold sketch below).
5. Techniques for Boundary Maintenance: restate the boundary in the system instruction, specify an exact fallback phrase, and include example refusals so the model has a pattern to follow.
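For question 1, the most reliable lever is the instruction text itself. Below is a minimal sketch of a grounding-focused system instruction; it uses the vertexai SDK for illustration, but the same wording can go into your Agent Builder agent's goal/instructions field. The project ID, location, and model name are placeholders.

```python
# Minimal sketch: a grounding-focused system instruction applied via the
# vertexai SDK. Project, location, and model name are placeholder values.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")

SYSTEM_INSTRUCTION = """
You are an assistant for investment product questions.
- Answer ONLY using information returned by the product-dataset tool.
- If the tool returns no relevant data, reply exactly:
  "I don't have that information in my dataset."
- Never estimate, infer, or invent figures, returns, or product details.
"""

model = GenerativeModel(
    "gemini-1.5-pro",  # example model name; use whichever model your agent runs on
    system_instruction=SYSTEM_INSTRUCTION,
)

response = model.generate_content("What is the expense ratio of Product X?")
print(response.text)
```

Giving the model an exact fallback phrase, rather than just "say you don't know," tends to make refusals more consistent and easier to detect downstream.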
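For question 2, instructions alone can drift, so a deterministic pre-filter in your application layer is a useful backstop. This is a generic sketch; the blocked topics and keywords are hypothetical examples you'd replace with your own.

```python
# Sketch of a deterministic topic filter applied before the query reaches the
# agent. Topics and keywords below are hypothetical examples.
BLOCKED_TOPICS = {
    "tax advice": ["tax", "deduction", "write-off"],
    "personalized recommendations": ["should i buy", "recommend for me"],
}

REFUSAL = "I'm not able to answer questions about {topic}. Please consult a qualified advisor."

def check_blocked(query: str) -> str | None:
    """Return a refusal message if the query touches a blocked topic, else None."""
    q = query.lower()
    for topic, keywords in BLOCKED_TOPICS.items():
        if any(kw in q for kw in keywords):
            return REFUSAL.format(topic=topic)
    return None

msg = check_blocked("Should I buy more of the equity fund?")
print(msg or "forward query to the agent")
```

Because this check runs outside the model, it can't be talked around by a cleverly phrased prompt, which matters for compliance-sensitive topics.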
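For question 4, some data store agent configurations expose a grounding-confidence setting in the console; if yours doesn't, a common pattern is to impose your own threshold on retrieval scores before the model is allowed to answer. In this sketch, search_dataset is a hypothetical wrapper around your OpenAPI tool, and the 0.75 threshold is an arbitrary starting point to tune.

```python
# Sketch of a "confidence threshold" gated on retrieval scores. search_dataset
# is a hypothetical wrapper around the OpenAPI dataset tool, stubbed so the
# example runs standalone.
from dataclasses import dataclass

@dataclass
class Hit:
    text: str
    score: float  # similarity score in [0, 1]

MIN_SCORE = 0.75  # tune using questions you know are in (and not in) the dataset

def search_dataset(query: str) -> list[Hit]:
    """Hypothetical wrapper: replace with a real call returning scored passages."""
    return []

def answer_or_refuse(query: str) -> str:
    hits = search_dataset(query)
    if not hits or max(h.score for h in hits) < MIN_SCORE:
        # Below the threshold: fail closed rather than let the model improvise.
        return "I don't have that information in my dataset."
    # Above the threshold: hand the top passages to the model as grounding context.
    context = "\n".join(h.text for h in hits[:3])
    return f"Answer using only this context:\n{context}"

print(answer_or_refuse("What is the expense ratio of Product Z?"))
```

The key design choice is failing closed: when retrieval confidence is low, the fallback phrase is returned instead of anything generated.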
Additionally, ensure your dataset is comprehensive, accurate, and up to date, and continuously monitor the agent's performance, adjusting your training data, instructions, and prompts as needed.
By implementing these strategies and consistently testing and refining your agent, you can improve its accuracy and reliability, minimizing the risk of fabricated answers in a financial context.
I hope the above information is helpful!
I'm having the same issue. The bot responds with random answers, without any limits.