I'm currently doing the Google Cloud Skills Boost course "Conversational AI on Vertex AI and Dialogflow CX". One of the videos states:
"You can add several generators in one fulfillment, which would be executed sequentially, one taking the output from the other as input. That way you can chain multiple LLMs, which might make it easier for debugging specific steps. One generator could provide information about destinations and the other could format or summarize that destination output to the user. That way you only have one concise response, but you have 2 separate prompts with clear instructions."
And the final answer, the one I print, is $request.generative.fallback_location. I can see that the second generator isn't taking the first generator's output as a parameter:
Both generators are giving me an output, and the second generator's output should be the [AI] text:
But it's just asking me what text I want to extract. I wonder if I misunderstood something or am doing something wrong?
Hi @dzelt17,
Welcome to Google Cloud Community!
It's possible that the chaining between your generators isn't set up correctly. Check whether the second generator is actually consuming the output of the first generator, and that the values are flowing through properly. If the output is only partially passed along, the second generator might not receive all the values it needs. The same thing happens when the first generator terminates prematurely: the second generator stops receiving values and might not function as expected.
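The chaining pattern from the video can be sketched in plain Python. This is only an illustration of the data flow, not Dialogflow CX itself: the function names and the `call_llm` stub are hypothetical stand-ins for the two generators, and the comments note where the console's parameter mapping (e.g. a placeholder filled from `$request.generative.<output parameter>`) would do the equivalent work.

```python
def call_llm(prompt: str) -> str:
    """Stand-in for an LLM call; a real generator would invoke the model."""
    return f"[AI] response to: {prompt}"

def destination_generator(destination: str) -> str:
    # Generator 1: produce information about the destination.
    # Its output would be stored under the generator's output parameter.
    return call_llm(f"Provide information about {destination}")

def summary_generator(destination_info: str) -> str:
    # Generator 2: its prompt embeds Generator 1's output, the way a
    # placeholder in the second prompt would be mapped to the first
    # generator's output parameter in the fulfillment configuration.
    return call_llm(f"Summarize the following for the user: {destination_info}")

# Chained execution: the second generator only works if it actually
# receives the first generator's full output as its input.
info = destination_generator("Lisbon")
final_response = summary_generator(info)
print(final_response)
```

If the second generator's input parameter isn't wired to the first generator's output, it behaves like calling `summary_generator("")` here: it has no text to work with, which would explain it asking what text you want it to process.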
Hope this helps