I noticed when posting to the PaLM REST API that the examples field is completely ignored. See my code below. Can you help: is the field currently unsupported, or do I have a syntax error?
main.py
from dotenv import load_dotenv
import requests
import json
import os


def get_prompt():
    print("Type a prompt:")
    return input()


def main():
    load_dotenv()
    API_KEY = os.getenv("PALM_API_KEY")
    MODEL_NAME = os.getenv("MODEL_NAME")
    api_url = f"https://generativelanguage.googleapis.com/v1beta3/models/{MODEL_NAME}:generateMessage?key={API_KEY}"
    json_request_body = {
        "prompt": {
            "context": os.getenv("CONTEXT"),
            "examples": [
                {
                    "input": { "content": "What about insurance. What insurance do you recommend?" },
                    "output": { "content": "Protect your financial well-being! Ensure you have comprehensive insurance coverage for health, life, and property. This serves as a vital safeguard against unforeseen financial challenges that may arise." }
                }
            ],
            "messages": [
                { "content": get_prompt() }
            ],
        },
        "temperature": float(os.getenv("TEMPERATURE")),
        "candidateCount": int(os.getenv("CANDIDATE_COUNT")),
        "topP": float(os.getenv("TOP-P")),
        "topK": int(os.getenv("TOP-K")),
    }
    response = requests.post(api_url, json=json_request_body)
    print(json.dumps(response.json(), indent=2))


if __name__ == "__main__":
    main()
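For what it's worth, the request body above does match the documented shape of the generateMessage payload (a prompt holding context, examples, and messages, where each example is an input/output pair of Message objects), so a structural syntax error seems unlikely. Here is a minimal, local-only sketch that checks that shape before sending anything over the network; the literal strings in it are placeholders, not values from my .env:

```python
# Sketch: validate a generateMessage request body's structure locally,
# without making an API call. Field names follow the v1beta3 REST shape
# used above (prompt -> context / examples / messages).

def validate_message_prompt(body: dict) -> None:
    prompt = body["prompt"]
    # context is optional but, if present, must be a string
    assert isinstance(prompt.get("context", ""), str)
    # each example must be an input/output pair of Message objects
    for example in prompt.get("examples", []):
        assert isinstance(example["input"]["content"], str)
        assert isinstance(example["output"]["content"], str)
    # at least one message is required
    assert prompt["messages"], "messages must contain at least one Message"
    for message in prompt["messages"]:
        assert isinstance(message["content"], str)


body = {
    "prompt": {
        "context": "You are a frugal financial advisor.",
        "examples": [
            {
                "input": {"content": "What insurance do you recommend?"},
                "output": {"content": "Comprehensive health, life, and property cover."},
            }
        ],
        "messages": [{"content": "How should I budget?"}],
    },
    "temperature": 0.7,
}

validate_message_prompt(body)  # raises AssertionError if malformed
print("payload structure OK")
```

Since this passes for the body my script builds, I lean towards the field being ignored server-side rather than rejected.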
I suspect it is a limitation of the chat-bison@001 model rather than of the PaLM API itself. I ran into similar behavior on the GA Vertex AI endpoint when using chat-bison@001 instead of chat-bison. So the real question is probably: when will the PaLM API support the latest chat-bison model?