I am using Gemini Pro models to generate text for job descriptions. However, the generated output is not consistently complete; there is often missing information. Is there a limit on the response generated by the generator?
I always get incomplete text (I want to generate job descriptions using a generator and Gemini models). I don't know if it is an issue with the generator or with Dialogflow CX.
Did you check the number of characters? The output could be incomplete because you are hitting the 4K-character limit.
It is always around 90 to 95 characters.
That is weird; I cannot help you more with this topic. Not sure why this is happening.
Hi,
I found the solution. It is due to the token limit that is set initially: generation stops once that limit is reached, which is why the output comes back incomplete.
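For anyone else hitting this: if you call Gemini directly through the Vertex AI Python SDK, you can raise the limit explicitly. A minimal sketch, assuming the `vertexai` package; the project ID, location, and prompt are placeholders:

```python
# Minimal sketch: raise the output token limit when calling Gemini
# through the Vertex AI Python SDK. Project ID, location, and prompt
# below are placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel, GenerationConfig

vertexai.init(project="your-project-id", location="us-central1")

model = GenerativeModel("gemini-pro")
response = model.generate_content(
    "Write a complete job description for a senior data engineer.",
    generation_config=GenerationConfig(
        max_output_tokens=2048,  # raise this if responses are cut off
        temperature=0.4,
    ),
)
print(response.text)
```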
ooooh!! good catch!
This only works if it is done from Vertex AI. I have the same problem from Dialogflow CX, and I cannot find any section where the ML configuration for the token limit and temperature can be set.
You can modify the LLM config in any Gen AI feature on Dialogflow CX.
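It can also be done programmatically. Below is a hedged sketch against the Dialogflow CX v3beta1 Generators API; the generator path is a placeholder, and the `ModelParameter` field names (`max_decode_steps`, `temperature`) are assumptions based on the v3beta1 client library, so verify them against the current API reference:

```python
# Hedged sketch: update a Dialogflow CX generator's model parameters
# via the v3beta1 API. The generator resource name is a placeholder,
# and the ModelParameter field names are assumptions -- check the
# current API reference before relying on them.
from google.cloud import dialogflowcx_v3beta1 as cx
from google.protobuf import field_mask_pb2

client = cx.GeneratorsClient()

generator = client.get_generator(
    name="projects/your-project/locations/us-central1/agents/your-agent/generators/your-generator"
)
generator.model_parameter.temperature = 0.4
generator.model_parameter.max_decode_steps = 1024  # token limit (assumed field name)

client.update_generator(
    generator=generator,
    update_mask=field_mask_pb2.FieldMask(paths=["model_parameter"]),
)
```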
Gemini specifications are given here:
https://ai.google.dev/models/gemini
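If you want to check programmatically how close you are to the limits listed at that link, here is a minimal sketch using the `google-generativeai` package; the model name "gemini-pro" is an assumption, so pick the model you actually use:

```python
# Minimal sketch: inspect Gemini's token limits and count tokens with
# the google-generativeai package. Model name is an assumption.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Input/output token limits for the model.
info = genai.get_model("models/gemini-pro")
print(info.input_token_limit, info.output_token_limit)

# Token count for a concrete prompt.
model = genai.GenerativeModel("gemini-pro")
print(model.count_tokens("Write a job description for a data engineer."))
```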
I am looking for a similar but opposite use case: a prompt that enforces that the response does NOT exceed 105 characters. I have stated this requirement in multiple places in the prompt, but I am still getting some responses over 130 characters.
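Prompt instructions alone rarely guarantee a hard length cap, so one workaround is to validate the response in code and retry or truncate. A hedged sketch, assuming the Vertex AI SDK; the model name, token budget, and retry count are assumptions:

```python
# Hedged sketch: enforce a hard character cap by validating the model
# output and retrying, with truncation as a last resort. Model name,
# max_output_tokens, and retry count are assumptions.
import vertexai
from vertexai.generative_models import GenerativeModel, GenerationConfig

vertexai.init(project="your-project-id", location="us-central1")

MAX_CHARS = 105

def generate_short(prompt: str, retries: int = 3) -> str:
    model = GenerativeModel("gemini-pro")
    for _ in range(retries):
        response = model.generate_content(
            f"{prompt}\nAnswer in at most {MAX_CHARS} characters.",
            # ~105 characters is roughly 30-60 tokens, so cap generation too.
            generation_config=GenerationConfig(max_output_tokens=60),
        )
        text = response.text.strip()
        if len(text) <= MAX_CHARS:
            return text
    # Fall back to hard truncation if the model keeps running long.
    return text[:MAX_CHARS]
```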