SK assistant for agentchat #5134
base: main
Conversation
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##             main    #5134      +/-   ##
==========================================
+ Coverage   70.08%   70.30%   +0.21%
==========================================
  Files         179      181       +2
  Lines       11620    11715      +95
==========================================
+ Hits         8144     8236      +92
- Misses       3476     3479       +3

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
python/packages/autogen-ext/src/autogen_ext/agents/semantic_kernel/_sk_assistant_agent.py
        kernel=self._kernel,
    )
    # Convert SK's list of responses into a single final text
    assistant_reply = "\n".join(r.content for r in sk_responses if r.content)
For SK's list of responses, are they separate messages or a single message? Is it possible for this list to contain messages with tool calls?
It returns a list:
async def get_chat_message_contents(
self,
chat_history: "ChatHistory",
settings: "PromptExecutionSettings",
**kwargs: Any,
) -> list["ChatMessageContent"]:
But it can return tool calls; good catch. Whether it does depends on the configuration of the prompt execution settings. In the model adapter we force Semantic Kernel to return tool calls so that it keeps the same contract as the chat completion client. Here in the sample I'm letting Semantic Kernel execute the function automatically, but this is not enforced in the configuration. Do you think we should force it, or allow it to return tool calls? We may need to be more explicit about this behavior in the docs and maybe log some warnings.
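
For reference, a minimal sketch of the two configurations being discussed, assuming Semantic Kernel's Python `FunctionChoiceBehavior` API (the settings class and the `auto_invoke` flag come from SK, not from this PR, and may differ by SK version):

```python
# Sketch only: shows how prompt execution settings could make the tool-call
# behavior explicit. The auto_invoke flag is assumed from the Semantic Kernel
# Python API and may differ between SK versions.
from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
from semantic_kernel.connectors.ai.open_ai import OpenAIChatPromptExecutionSettings

# Option A: let SK execute the functions itself, so the agent only ever sees text.
auto_invoke_settings = OpenAIChatPromptExecutionSettings(
    function_choice_behavior=FunctionChoiceBehavior.Auto(),
)

# Option B: surface the tool calls to the caller instead of executing them,
# matching the contract the model adapter enforces for the chat completion client.
return_tool_calls_settings = OpenAIChatPromptExecutionSettings(
    function_choice_behavior=FunctionChoiceBehavior.Auto(auto_invoke=False),
)
```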
Why are these changes needed?
Adds an agentchat chat agent based on Semantic Kernel that allows integration with most of the Semantic Kernel ecosystem.
Message conversion still needs to be updated to handle non-text types, docstrings need to be added, etc., but the basics are there. A rough usage sketch is shown below.
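
A rough usage sketch of what this enables; the `SKAssistantAgent` name and constructor arguments are inferred from the `_sk_assistant_agent.py` module path in this PR and may not match the final API:

```python
# Illustrative only: the class name and constructor signature below are
# assumptions based on the module path, not the finalized API of this PR.
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

from autogen_ext.agents.semantic_kernel import SKAssistantAgent  # assumed export


async def main() -> None:
    # Build a Semantic Kernel kernel with a chat completion service.
    kernel = Kernel()
    kernel.add_service(OpenAIChatCompletion(ai_model_id="gpt-4o-mini"))

    # Wrap the kernel in an agentchat agent.
    agent = SKAssistantAgent(
        name="sk_assistant",
        description="An agentchat agent backed by Semantic Kernel.",
        kernel=kernel,
    )

    # Standard agentchat entry point.
    result = await agent.run(task="Summarize what Semantic Kernel provides.")
    print(result.messages[-1].content)


asyncio.run(main())
```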
Related issue number
Closes #4741
Checks