Use tool output as agent output #117
Comments
Apologies if there are mistakes in my understanding of how AutoGen works, or how this library works. New to the space (as many of us are, I imagine!)
That's a cool feature. We currently don't support it, but we'll talk about it and see if we can add it!
Sometimes I query a lot of web data through an agent, but the agent summarizes the web information, so details get lost. I'd like to receive the complete information obtained through the web search. Perhaps an intermediate variable could achieve this, but this solution is cool.
Wasn't expecting anything so quickly. Thanks! Had a play, and this definitely solves my use case. Left 2 small comments on the PR, but even if it was merged exactly as-is I'd be a happy user.
Hey @TimoVink @rm-openai Thanks so much for raising and implementing this feature, it's super helpful! 🙌 I'd love to try this out in my own project. Could you share any details or code samples on how to use the new tool_use_behavior feature with an agent? A quick example or some guidance on how to set it up would be really appreciated! Thanks again for the quick turnaround and the awesome work!
## Context

By default, the outputs of tools are sent to the LLM again. The LLM gets to read the outputs and produce a new response. There are cases where this is not desired:

1. Every tool call results in another round trip, and sometimes the output of the tool is enough.
2. If you force tool use (via the model setting `tool_choice=required`), then the agent will just loop forever.

This enables you to have different behavior, e.g. use the first tool output as the final output, or write a custom function to process tool results and potentially produce an output.

## Test plan

Added new tests and ran existing tests. Also added examples.

Closes #117
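The custom-function option described above can be sketched in plain Python. This is purely illustrative, not the SDK's actual types or signatures: `ToolResult` and the `(is_final, output)` return convention are assumptions made up for this sketch.

```python
from dataclasses import dataclass

@dataclass
class ToolResult:
    # Illustrative stand-in for one tool invocation's record.
    tool_name: str
    output: str

def tools_to_final_output(results):
    """Custom handler: decide whether tool results already answer the query.

    Returns (is_final, output). If is_final is False, the caller would fall
    back to the default behavior of sending the results to the LLM again.
    """
    for r in results:
        if r.tool_name == "fetch_data":
            # The main tool's raw output is the answer; skip the LLM round trip.
            return True, r.output
    return False, None

is_final, output = tools_to_final_output(
    [ToolResult("lookup_params", "{...}"), ToolResult("fetch_data", "big payload")]
)
```

The key design choice is that the handler can decline (`is_final=False`), so agents keep the default summarizing behavior except when a designated tool has produced the final answer.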
Is there some equivalent to AutoGen's `reflect_on_tool_use=False`? That is: some way for an agent to use the output of one of its tools as its own output, without needing another roundtrip to the LLM?

Worked example:
I believe the way this works is:

1. The user asks the agent for, say, 1000 words of lorem ipsum.
2. The LLM decides to call the agent's `generate_lorem(1000)` tool.
3. The tool runs and returns the generated text.
4. The tool's output is sent back to the LLM.
5. The LLM produces a new response, which becomes the agent's final output.

In practice we already have our answer after step 3, but then have this extra roundtrip in steps 4 & 5, which can get expensive in terms of both tokens and time if the output from our tool is large.
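The loop in the worked example can be simulated in plain Python. This is purely illustrative; `run_agent` and the `stop_on_first_tool` flag are made-up names for this sketch, not the SDK's API.

```python
def generate_lorem(n_words):
    # Stand-in tool: produces a large block of text.
    return " ".join(["lorem"] * n_words)

def llm(prompt):
    # Stand-in model call: paraphrases its input, losing the full payload.
    return f"Here is the text you asked for: {prompt[:20]}..."

def run_agent(request, stop_on_first_tool=False):
    tool_output = generate_lorem(1000)   # tool is chosen and run
    if stop_on_first_tool:
        return tool_output               # answer after step 3, no round trip
    return llm(tool_output)              # the extra LLM round trip of steps 4 & 5

full = run_agent("1000 words of lorem ipsum, please", stop_on_first_tool=True)
summarized = run_agent("1000 words of lorem ipsum, please")
```

With the flag set, the caller receives the tool's full output verbatim; without it, the second model call both costs tokens and may drop content.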
Is there functionality or a design pattern to work around this? I think it would really help push the composability of agents even further than you already have with this SDK!
Real-world use case: I have an agent whose goal is to retrieve a chunky piece of content. It has ~5 tools at its disposal. The "main" tool actually retrieves the data; the other tools help the agent determine the correct parameters with which to invoke the main tool. I would like to then use this "data retrieval agent" inside other agents, which can then use this dataset to do interesting things.

I can work around this by making my "data retrieval agent" only return the correct parameters with which to call the main tool, and leave the actual retrieval to other agents, but then it feels like my agent isn't encapsulating a nice isolated chunk of work.
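The composition pattern described above can be sketched in plain Python. All names here (`lookup_parameters`, `fetch_dataset`, the two agent functions) are hypothetical stand-ins, not SDK code: the point is that when the inner agent's main-tool output becomes its final output, outer agents receive the full dataset rather than a summary.

```python
def lookup_parameters(question):
    # Helper tool: works out how the main tool should be called.
    return {"table": "sales", "year": 2024}

def fetch_dataset(table, year):
    # Main tool: actually retrieves the (potentially large) data.
    return f"full contents of {table}/{year} ..."

def data_retrieval_agent(question):
    params = lookup_parameters(question)
    # With "use tool output as agent output", the main tool's raw result
    # is returned directly instead of being summarized by the LLM.
    return fetch_dataset(**params)

def analysis_agent(question):
    # An outer agent composing the retrieval agent as one of its tools.
    data = data_retrieval_agent(question)
    return f"analysis over {len(data)} chars of data"
```

This keeps the retrieval agent as an encapsulated unit of work: callers never need to know which parameters it chose, only that they get the full payload back.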
Love the library so far, thanks for your work!