
Gemini 1.5 Pro model returns the same joke when invoked by API

Hello Experts!

I recently started with Gemini and wrote a simple backend that queries Gemini models to retrieve a joke/pun. However, the API always returns the same joke. This does not happen with the Gemini chat app: https://gemini.google.com/

I tried to give context to the model using "System Instructions", and the prompt is as simple as "Tell me a joke". It's the same joke that is returned every time. Is this a bug?


Model: gemini-1.5-pro-001

Region: asia-south1

Any idea what am I doing wrong? How do I ensure that the joke is not repeated?


5 REPLIES

AndrewB
Community Manager

You are using a single-turn prompt, so each request does not have the context of the previous one. If you test the same prompt in a multi-turn chat, you'll see the joke change each time. The Gemini chat app you also tested holds the context unless you click 'New chat'.
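To make the difference concrete, here is a minimal sketch of the two request shapes for the `generateContent` REST endpoint. This assumes the standard v1 payload structure (`contents` with `user`/`model` roles); the history entries and prompts are made-up examples, not the original poster's actual requests.

```python
# Sketch: single-turn vs. multi-turn request bodies for generateContent.
# A single-turn call sends only the new prompt, so every request looks
# identical to the model and tends to produce the same joke. A multi-turn
# call replays the prior user/model turns as history, so the model can
# avoid repeating itself.

def single_turn_body(prompt: str) -> dict:
    # Every call is identical to the model: no memory of earlier jokes.
    return {"contents": [{"role": "user", "parts": [{"text": prompt}]}]}

def multi_turn_body(history: list[dict], prompt: str) -> dict:
    # history is a list of {"role": "user"|"model", "parts": [...]} turns
    # that the client accumulated from earlier exchanges.
    return {"contents": history + [{"role": "user", "parts": [{"text": prompt}]}]}

# Hypothetical accumulated chat history from an earlier exchange:
history = [
    {"role": "user", "parts": [{"text": "Tell me a joke"}]},
    {"role": "model", "parts": [{"text": "Why did the scarecrow win an award? ..."}]},
]
body = multi_turn_body(history, "Tell me another joke")
```

The client is responsible for appending each model reply to `history` before the next call; the API itself is stateless between requests.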

Hello @AndrewB! Thank you for the response.

Even with chat history turned on and served as context, I see significant overlap in the responses returned.

Is there a way to avoid this repetition?

(Screenshot attached: Screenshot 2024-08-01 at 14.23.40.png)

Hello,

Thank you for contacting the Google Cloud Community.

I have gone through your reported issue; however, it seems to be specific to your environment and would need more detailed debugging and analysis. To ensure a faster resolution and dedicated support, I kindly request that you file a support ticket. Our support team will prioritize your request and provide the assistance you need.

For individual support issues, it is best to utilize the support ticketing system. We appreciate your cooperation!


I can say I have the same issue with Gemini Pro. I tried the Gemini Node.js library and plain REST calls. Via Postman I get different jokes, but from code it's always the same.
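Besides the missing chat history, one other thing worth checking in this situation is the sampling configuration: the `generateContent` REST body accepts a `generationConfig` with `temperature` and `topP`, and a very low temperature makes the output nearly deterministic across calls. A minimal sketch of the payload, assuming the standard v1 field names (the specific values here are illustrative defaults, not a recommendation):

```python
# Sketch: adding sampling settings to a generateContent request body.
# A higher temperature increases randomness between otherwise identical
# calls; at temperature 0 the model tends to return the same completion.

def body_with_sampling(prompt: str, temperature: float = 1.0, top_p: float = 0.95) -> dict:
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {"temperature": temperature, "topP": top_p},
    }

body = body_with_sampling("Tell me a joke", temperature=1.0)
```

If the library or client you use sets a low temperature by default, identical prompts will often produce identical jokes even though the same prompt varies in a chat UI.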

I wonder if there was a solution to this? ChatGPT, on the other hand, is doing OK.

Sorry, the error was on my part: the prompt was wrong. I corrected it with history and it worked.