
Reasoning Engine Error

Hi,

I am consistently getting this error after a couple of inputs and then it is hosed:

FailedPrecondition: 400 Reasoning Engine Execution failed. Error Details:

"Please ensure that function response turn comes immediately after a function call turn. And the number of function response parts should be equal to number of function call parts of the function call turn."

The above exception was the direct cause of the following exception:

Traceback (most recent call last): ...

Has anyone encountered this error?

 

Thanks

 


Hello,

I believe you are facing the error "FailedPrecondition: 400 Reasoning Engine Execution failed." You can try the following troubleshooting step:
Use pip show google-cloud-aiplatform to double-check the SDK version. Make sure it is v1.47.0 or higher, which includes the Reasoning Engine SDK.
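As a quick sanity check in code, a dotted version string (for example, the one reported by pip show) can be compared against the v1.47.0 minimum. This is a minimal stdlib-only sketch that assumes plain numeric version parts with no pre-release suffixes:

```python
# Minimal sketch: compare a dotted version string (e.g. from
# `pip show google-cloud-aiplatform`) against the v1.47.0 minimum.
# Assumes plain numeric parts, no pre-release suffixes.
def meets_minimum(version: str, minimum: str = "1.47.0") -> bool:
    """Return True if `version` is at least `minimum`."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(version) >= parse(minimum)

print(meets_minimum("1.57.0"))  # True: recent enough for Reasoning Engine
print(meets_minimum("1.46.1"))  # False: upgrade needed
```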

Regards,
Jai Ade

 

Hello,

Thank you for your engagement regarding this issue. We haven't heard back from you for some time now, so I'm going to close this issue, and it will no longer be monitored. However, if you run into any new issues, please don't hesitate to create a new post. We will be happy to assist you.

Regards,
Jai Ade

Hello @jaia . I found this post and made sure I'm running on the latest SDK version (1.57.0), but I'm also getting the same InvalidArgument error as the original post. Any thoughts on what else might be the problem?

Would we be able to get more context into what you are trying to run?

Hi @annawhelan, @jaia, I have the same error. I am doing parallel function calling with Gemini 1.5 Pro. After the first request to the Gemini model, it responds with 3 function calls. So far, all is OK. But when I try to send the API responses back to the model, I get the error:

Error:
400 Please ensure that function response turn comes immediately after a function call turn. And the number of function response parts should be equal to number of function call parts of the function call turn.

My code that returns the API response to the model is:

 
response = chat.send_message(
    Part.from_function_response(
        name="search_wikipedia",
        response={
            "content": api_response,
        },
    ),
)

When the model responds with just one function call, the code above works fine, but when the model responds with more than one function call, it doesn't.

Hi @Enrique_Suarez: I've run into this myself with function calling. I believe this issue happens when Gemini returns 2 or more function calls: you then need to respond with the same number of function responses, otherwise that conversation turn is not considered complete.

For example, if I have three tools defined and Gemini predicts a function call for each of the three tools, then I should return all three API responses in the next chat response to the Gemini API, as in:

response = chat.send_message(
    [Part.from_function_response(
        name="search_wikipedia",
        response={
            "content": api_response_search_wikipedia,
        },
    ),
    Part.from_function_response(
        name="suggest_wikipedia",
        response={
            "content": api_response_suggest_wikipedia,
        },
    ),
    Part.from_function_response(
        name="summarize_wikipedia",
        response={
            "content": api_response_summarize_wikipedia,
        },
    ),],
)

See if that approach works for you. I also see that I need to update the sample notebook at https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/function-calling/parallel_func... to account for this error handling, so I'll go off and do that now!

Thanks, I will try that.

Just one question: what if the model output contains the same function call three times in a parallel function call? What should the code look like to return the API responses to the model?

I mean, you declare just one function and pass it as a tool to the model.

Each function call has an associated set of function parameters, and in the case of multiple function calls to the same function, the parameters will usually vary. For example, I'm updating the sample notebook to make 3 different queries to the same tool:

api_response = []

# Loop over multiple function calls
for function_call in function_calls:
    print(function_call)

    # Make external API call
    result = wikipedia.summary(function_call["search_wikipedia"]["query"])

    # Collect all API responses
    api_response.append(result)

response = chat.send_message(
    [
        Part.from_function_response(
            name="search_wikipedia",
            response={
                "content": api_response[0],
            },
        ),
        Part.from_function_response(
            name="search_wikipedia",
            response={
                "content": api_response[1],
            },
        ),
        Part.from_function_response(
            name="search_wikipedia",
            response={
                "content": api_response[2],
            },
        ),
    ],
)

I also try to return the function parameters in the API response object when possible, to make it clear to Gemini which API response came from which set of function parameters.
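To keep call and response parts aligned automatically, one option is a small helper that pairs each predicted call with its API result and fails loudly on a mismatch. This is an illustrative sketch using plain dicts rather than the SDK's actual types; in real code each dict would become a Part.from_function_response(...):

```python
# Illustrative helper (plain dicts stand in for SDK Part objects): build one
# function-response entry per predicted function call, echoing the call's
# args back so the model can match responses to calls.
def build_function_responses(function_calls, api_results):
    if len(function_calls) != len(api_results):
        raise ValueError("need exactly one API result per function call part")
    return [
        {
            "name": call["name"],
            "response": {"content": result, "args": call.get("args", {})},
        }
        for call, result in zip(function_calls, api_results)
    ]

calls = [{"name": "search_wikipedia", "args": {"query": q}}
         for q in ("solar eclipse", "lunar eclipse", "eclipse season")]
results = ["summary 1", "summary 2", "summary 3"]
parts = build_function_responses(calls, results)
print(len(parts))  # 3 — one response part per call part
```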

I have an open PR (https://github.com/GoogleCloudPlatform/generative-ai/pull/938/) to update the sample notebook and address the issue you were running into.

Hello @koverholt, I think the problem is still happening, and the response to function calls is not stable. The strange part for me is that sometimes I receive "InvalidArgument: 400 Please ensure that function response turn comes..." and other times I don't.

I think that happened to me in the lab "Enhance Gemini with access to external services with function calling: Challenge Lab L400", and it still happens even outside the lab.

 

Sorry to hear you are running into trouble! Feel free to +1 and/or add a comment to https://issuetracker.google.com/issues/344921847 or https://issuetracker.google.com/issues/331927553 or other related issues in that public issue tracker if they match up with what you're experiencing related to function calling. Or you can open a new bug if you're seeing something different from those issues. You might also consider trying gemini-pro-experimental to see if the behavior is any different.

I am getting the INVALID_ARGUMENT status consistently, which is odd because I had this working before switching to FirebaseVertexAI. From what I can tell, the model response containing the function call is getting added to the chat history.

example:

Optional(
[
FirebaseVertexAI.ModelContent(role: Optional("user"), parts: [FirebaseVertexAI.ModelContent.Part.text("Search my collection for 1792")]),
FirebaseVertexAI.ModelContent(
role: Optional("model"),
parts: [
FirebaseVertexAI.ModelContent.Part.text("Hey Bryan, you\'ve got a great collection going! You\'re a user since December 21st, 2022, and have been building an impressive 492 items across four collections. Let\'s see what you\'ve got in 1792. \n\n"),
FirebaseVertexAI.ModelContent.Part.functionCall(FirebaseVertexAI.FunctionCall(name: "itemsInMyCollection", args: ["query": FirebaseVertexAI.JSONValue.string("1792")])) ] ) ])

I set up my function response:

let responseContent: [ModelContent] = [ModelContent(role: "function", parts: [.functionResponse(FunctionResponse(name: functionCall.name, response: ["searchResults": googleJSONValue]))])]

I try to respond: 

if let response = try await chat?.sendMessage(responseContent)

Then the error:

[FirebaseVertexAI] Response payload: {
  "error": {
    "code": 400,
    "message": "Please ensure that function call turn comes immediately after a user turn or after a function response turn.",
    "status": "INVALID_ARGUMENT"
  }
}

The "FailedPrecondition" error you're encountering indicates that the reasoning engine is having trouble processing your function calls correctly. Here are some steps to troubleshoot this issue:

  1. Check Function Call Structure: Ensure that the function response follows immediately after the function call. There should be no other actions or delays in between.

  2. Match Response Parts: Verify that the number of parts in your function response matches the number of parts in your function call. This alignment is crucial for the reasoning engine to process correctly.

  3. Input Validation: Double-check the inputs you're sending in the function calls. Ensure they are valid and in the expected format.

  4. Rate Limits: If you’re sending multiple requests in quick succession, consider implementing rate limiting or adding delays to avoid overwhelming the engine.

  5. Debugging: Enable detailed logging to capture the input and output of each function call. This will help identify where the error occurs.

  6. Review Documentation: Consult the latest documentation for any updates or changes in API behavior that may affect your implementation.

  7. Reach Out for Support: If the issue persists, consider reaching out to support with the error logs for further assistance.

These steps should help you diagnose and resolve the issue.
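Steps 1 and 2 above can also be checked mechanically before sending a request. The sketch below walks a simplified chat history built from plain dicts (the "function_call" and "function_response" keys are illustrative, not the SDK's real types) and flags any function-call turn that is not immediately followed by a matching number of function-response parts:

```python
# Hypothetical validator over a simplified history: each turn is a dict with
# "role" and "parts"; a part is a dict that may carry a "function_call" or a
# "function_response". Returns True only when every function-call turn is
# immediately followed by a turn with an equal count of response parts.
def history_is_valid(history):
    for i, turn in enumerate(history):
        calls = [p for p in turn["parts"] if "function_call" in p]
        if not calls:
            continue
        if i + 1 >= len(history):
            return False  # call turn has no following response turn
        responses = [p for p in history[i + 1]["parts"]
                     if "function_response" in p]
        if len(responses) != len(calls):
            return False  # response part count must match call part count
    return True

good = [
    {"role": "user", "parts": [{"text": "Search for 1792"}]},
    {"role": "model", "parts": [{"function_call": {"name": "search"}}] * 2},
    {"role": "function", "parts": [{"function_response": {"name": "search"}}] * 2},
]
bad = good[:2] + [
    {"role": "function", "parts": [{"function_response": {"name": "search"}}]},
]
print(history_is_valid(good))  # True: 2 call parts, 2 response parts
print(history_is_valid(bad))   # False: 2 call parts, only 1 response part
```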

I am having the same issue. I completed the challenge lab "Enhance Gemini with access to external services with function calling: Challenge Lab L400" and am still not getting the score. Not even the support team could figure this out.

 

Hi @sokpara, I hope you're doing well. Have you successfully completed the lab? If so, could you kindly share the solution with me? I'm facing the same issue. Thanks in advance!