
Dealing with output from assistant agent #165

Closed
sridhar21111976 opened this issue Oct 9, 2023 · 19 comments
Labels
0.2 Issues which are related to the pre 0.4 codebase documentation Improvements or additions to documentation

Comments

@sridhar21111976

Hi Team,

Is there a way to hide the exchanges between the assistant agent and the user proxy agent, and get only the required final output from the assistant agent once the user proxy agent has terminated the conversation?

I am trying to use this to solve complex multi-step scenarios, but I am interested only in the final answer, unless human input is required for a given step.

@qingyun-wu
Contributor

There is no such fine-grained printing mechanism yet. However, one workaround is to log the full history and retrieve the conversations that satisfy certain conditions by post-processing the logged history. See these code examples on logging:

Enable logging: https://github.com/microsoft/autogen/blob/main/test/agentchat/test_assistant_agent.py#L122

Check the logged info: https://github.com/microsoft/autogen/blob/main/test/agentchat/test_assistant_agent.py#L150

Please let me know if this does not address your needs.
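The post-processing step can be sketched in plain Python. The message layout below (a list of role/content dicts) is an assumption for illustration; adapt the keys to whatever your logged history actually contains.

```python
# Sketch: filter a logged conversation down to the assistant-side
# messages only. The {"role": ..., "content": ...} layout is an
# assumption; adjust it to your actual log structure.

def assistant_turns(history):
    """Collect non-empty assistant messages from a logged conversation."""
    return [
        m["content"]
        for m in history
        if m.get("role") == "assistant" and (m.get("content") or "").strip()
    ]

history = [
    {"role": "user", "content": "Summarise the data."},
    {"role": "assistant", "content": "Loading the data..."},
    {"role": "user", "content": ""},
    {"role": "assistant", "content": "Summary: 3 rows, 2 columns."},
]
print(assistant_turns(history))
# -> ['Loading the data...', 'Summary: 3 rows, 2 columns.']
```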

@sridhar21111976
Author

Hi @qingyun-wu,

The idea was to avoid all the intermediate outputs, so logging and parsing through the entire content becomes complex.
What might be useful is something like a verbose toggle that returns only the final output after the TERMINATE exchange.

Also, I am seeing that the agent often does not recognise an existing function and says the function does not exist, even though it identifies the right function needed. This is intermittent; probably a cache issue, but I'm not sure.

Also, is there an option to flush the cache? It would be nice to understand what level of information is cached.

@sonichi
Contributor

sonichi commented Oct 10, 2023

@sridhar21111976
Author

Thank you @sonichi, will give that a try. Any sample code is appreciated.
This is great stuff, team. I have been doing LLM-to-LLM talk to achieve this so far; this makes it much simpler. A bit more stability on function recognition and respecting the description text is needed; it sometimes ignores the text in the function definition.

@sonichi sonichi added the documentation Improvements or additions to documentation label Oct 22, 2023
@sonichi
Contributor

sonichi commented Oct 22, 2023

Some of the answers are worth adding to the documentation website.

@sridhar21111976
Author

sridhar21111976 commented Oct 25, 2023

Hi @sonichi,

I tried the last-message option. Since the last interaction is a TERMINATE command to end the conversation, the last message printed is TERMINATE. I have worked around this by prompting the agent to format the final answer as {answer} TERMINATE and then trimming the word TERMINATE from it. Is there a more elegant way to do this?
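The trimming step described above can be done with a small helper; a minimal sketch in plain Python (the sentinel word is the only assumption):

```python
def strip_terminate(text, sentinel="TERMINATE"):
    """Remove a trailing termination sentinel, plus surrounding
    whitespace, from an agent's final message."""
    text = text.strip()
    if text.endswith(sentinel):
        text = text[: -len(sentinel)].rstrip()
    return text

print(strip_terminate("The capital is Paris. TERMINATE"))  # -> The capital is Paris.
```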

Also, a question around memory: what is the default chat history / memory length, and is there any option to control or reset it?
I can see a chat_history parameter (Boolean); is this the only option?

Also, if I build an application using AutoGen, is it expected to remain and be supported long-term?

@sonichi
Contributor

sonichi commented Oct 28, 2023

The chat history keeps growing in memory until you call clear_history:
https://microsoft.github.io/autogen/docs/reference/agentchat/conversable_agent#clear_history
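If a hard cap is needed rather than explicit resets, a bounded buffer is the usual pattern. This is a framework-agnostic sketch in plain Python, not an autogen API:

```python
from collections import deque

# Keep only the most recent N messages; older entries are dropped
# automatically as new ones arrive. Generic pattern, not an autogen API.
MAX_MESSAGES = 3
history = deque(maxlen=MAX_MESSAGES)

for i in range(5):
    history.append({"role": "user", "content": f"message {i}"})

print([m["content"] for m in history])
# -> ['message 2', 'message 3', 'message 4']
```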

No one can promise forever, but AutoGen has been known to the public for only a month and already has a big, vibrant community. It's not going to die anytime soon.

@karlo-franic

karlo-franic commented Dec 7, 2023

Hi Sonichi,

> I tried the last message option. Given the last interaction is a TERMINATE command to end conversation, the last message printed is TERMINATE. I have just worked around by asking the agent to format the final answer thru prompt as something like {answer} TERMINATE. Then I am trimming TERMINATE word from the final answer.... Is there any other elegant way to do this..?
>
> Also have question around memory - what is the default chat history / memory length, any option to control or reset this.. I can see chat_history parameter - Boolean - is this the only option...?

I still haven't found a way to suppress all output from initiate_chat(), but individual messages from the assistants can be retrieved with:

print(list(assistant._oai_messages.values())[0][-3]['content'])

For me the index was -3 because -1 was 'TERMINATE' and -2 was blank, but it could vary depending on the output you get.
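Rather than hard-coding an index like -3, a backwards scan for the first substantive entry is more robust. A plain-Python sketch (note that _oai_messages is an internal attribute and its layout may change between versions):

```python
def last_substantive(messages, sentinel="TERMINATE"):
    """Walk a message list backwards and return the first content
    that is non-empty and not the termination sentinel."""
    for msg in reversed(messages):
        content = (msg.get("content") or "").strip()
        if content and content != sentinel:
            return content
    return None

messages = [
    {"role": "assistant", "content": "Here is the final report."},
    {"role": "assistant", "content": ""},
    {"role": "assistant", "content": "TERMINATE"},
]
print(last_substantive(messages))  # -> Here is the final report.
```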

@ksivakumar

I am trying to get the last message and for context, have called something like:

user_proxy.initiate_chat(group_chat_manager, message="This is my message")

This works and I get the full conversation. I'm trying to use the last_message() function, but when I do something like,

group_chat_manager.last_message()

It says group_chat_manager was not part of any recent conversations. I tried calling last_message() with other agents that were part of the group chat and I get the same error. Can someone please provide an actual usage example of the function? Linking to the documentation is not clear enough.

@Neeraj319

silent = True only works for the first message; after that it prints the messages again.

@sonichi
Contributor

sonichi commented Apr 16, 2024

> silent = True only works for the first message; after that it prints the messages again.

That sounds like a bug. Help is appreciated!

cc @cheng-tan @Hk669 @giorgossideris @krishnashed @WaelKarkoub

@sonichi
Contributor

sonichi commented Apr 16, 2024

> I am trying to get the last message and for context, have called something like:
>
> user_proxy.initiate_chat(group_chat_manager, message="This is my message")
>
> This works and I get the full conversation. I'm trying to use the last_message() function, but when I do something like,
>
> group_chat_manager.last_message()
>
> It says group_chat_manager was not part of any recent conversations. I tried calling last_message with other agents that was part of the groupchat and I get the same error. Can someone please provide actual use of the function, linking to the documentation is not clear enough.

Could you try the new "ChatResult" returned from the chat? https://microsoft.github.io/autogen/docs/tutorial/conversation-patterns#group-chat
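The access pattern for ChatResult can be sketched as follows. The dataclass below is only a stand-in so the example is self-contained; the real autogen ChatResult exposes fields including chat_history and summary:

```python
from dataclasses import dataclass, field

# Stand-in for autogen's ChatResult, used here so the example runs
# without the library; the real class exposes chat_history and summary.
@dataclass
class ChatResult:
    chat_history: list = field(default_factory=list)
    summary: str = ""

# In real code: chat_result = user_proxy.initiate_chat(group_chat_manager, message=...)
chat_result = ChatResult(
    chat_history=[
        {"role": "user", "content": "This is my message"},
        {"role": "assistant", "content": "Final answer: done"},
    ],
    summary="Final answer: done",
)

# Read the summary, or the last history entry, instead of last_message().
print(chat_result.summary)                      # -> Final answer: done
print(chat_result.chat_history[-1]["content"])  # -> Final answer: done
```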

@WaelKarkoub
Contributor

@Neeraj319 would you be able to open a new issue? If possible, include a minimal example to reproduce it.

@krishnashed
Contributor

>> silent = True only works for the first message after that it again prints the messages
>
> That sounds a bug. Help is appreciated!
>
> cc @cheng-tan @Hk669 @giorgossideris @krishnashed @WaelKarkoub

Sure @sonichi, let me check it out!

@Neeraj319

I have opened an issue: #2402

@yonitjio

yonitjio commented May 24, 2024

Not sure if this can help, but you can completely turn off the output by creating a silent console, something like this:

from typing import Any

from autogen.io import base, console

class SilentConsole(console.IOConsole):
    def print(self, *objects: Any, sep: str = " ", end: str = "\n", flush: bool = False) -> None:
        # Swallow all console output instead of printing it.
        pass

and set it as the default:

base.IOStream.set_global_default(SilentConsole())
base.IOStream.set_default(SilentConsole())

@teyang-lau

It would be a useful feature to set the verbosity level for each agent in a group chat. Some parts of the conversation are not relevant or interesting to the end user, so we might not want to show them. As the conversation gets longer, the amount of content shown in a UI also grows, so being able to hide it would improve the end-user experience.

jackgerrits added a commit that referenced this issue Oct 2, 2024
@rysweet rysweet added 0.2 Issues which are related to the pre 0.4 codebase needs-triage labels Oct 2, 2024
@rysweet
Collaborator

rysweet commented Oct 12, 2024

Several solutions were proposed in this issue, and other relevant issues have been opened. Marking as won't-fix for 0.2.

@rysweet closed this as not planned Oct 12, 2024
@jacobodetunde

generate_reply() will produce the last message.
