What happened?
The model client may need extra creation arguments (such as the Semantic Kernel kernel), but AssistantAgent in AgentChat provides no option to set them or pass them through to the client.
I discovered this while trying to use the SKChatCompletionAdapter with AssistantAgent and deepseek-r1.
(python) gaganbansal@MacBook-Pro ~ % python ~/Downloads/ollamask.py
Error processing publish message for assistant/d88e2d75-f480-4e3a-990d-692ca482e085
Traceback (most recent call last):
File "/Users/gaganbansal/workspace/autogen/python/packages/autogen-core/src/autogen_core/_single_threaded_agent_runtime.py", line 409, in _on_message
return await agent.on_message(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gaganbansal/workspace/autogen/python/packages/autogen-core/src/autogen_core/_base_agent.py", line 113, in on_message
return await self.on_message_impl(message, ctx)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gaganbansal/workspace/autogen/python/packages/autogen-agentchat/src/autogen_agentchat/teams/_group_chat/_sequential_routed_agent.py", line 48, in on_message_impl
return await super().on_message_impl(message, ctx)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gaganbansal/workspace/autogen/python/packages/autogen-core/src/autogen_core/_routed_agent.py", line 485, in on_message_impl
return await h(self, message, ctx)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gaganbansal/workspace/autogen/python/packages/autogen-core/src/autogen_core/_routed_agent.py", line 268, in wrapper
return_value = await func(self, message, ctx) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gaganbansal/workspace/autogen/python/packages/autogen-agentchat/src/autogen_agentchat/teams/_group_chat/_chat_agent_container.py", line 53, in handle_request
async for msg in self._agent.on_messages_stream(self._message_buffer, ctx.cancellation_token):
File "/Users/gaganbansal/workspace/autogen/python/packages/autogen-agentchat/src/autogen_agentchat/agents/_assistant_agent.py", line 386, in on_messages_stream
model_result = await self._model_client.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gaganbansal/workspace/autogen/python/packages/autogen-ext/src/autogen_ext/models/semantic_kernel/_sk_chat_completion_adapter.py", line 342, in create
kernel = self._get_kernel(extra_create_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gaganbansal/workspace/autogen/python/packages/autogen-ext/src/autogen_ext/models/semantic_kernel/_sk_chat_completion_adapter.py", line 301, in _get_kernel
raise ValueError("kernel must be provided either in constructor or extra_create_args")
ValueError: kernel must be provided either in constructor or extra_create_args
@lspinheiro @ekzhu
What did you expect to happen?
There should be a way to pass the kernel, or extra kwargs in general, from the agent to the model client. I think the same is true for settings like JSON output.
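To illustrate the kind of API I would expect (this parameter does not exist today; the name is made up), the agent could accept pass-through kwargs and forward them to model_client.create. Something similar would also cover JSON output and other per-call settings.

# Hypothetical sketch only: AssistantAgent has no such parameter today.
# The idea is that the agent forwards these kwargs to every model_client.create(...) call,
# which is where SKChatCompletionAdapter looks for the kernel.
assistant = AssistantAgent(
    name="assistant",
    model_client=model_client,
    extra_create_args={"kernel": kernel},  # hypothetical pass-through
)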
How can we reproduce it (as minimally and precisely as possible)?
import asyncio
from semantic_kernel import Kernel
from semantic_kernel.memory.null_memory import NullMemory
from semantic_kernel.connectors.ai.ollama import OllamaChatCompletion, OllamaChatPromptExecutionSettings
from autogen_agentchat.agents import AssistantAgent, CodeExecutorAgent
from autogen_agentchat.conditions import TextMentionTermination, MaxMessageTermination
from autogen_agentchat.ui import Console
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.semantic_kernel import SKChatCompletionAdapter
from autogen_ext.code_executors.local import LocalCommandLineCodeExecutor


async def main():
    # Step 1: Initialize the Semantic Kernel
    kernel = Kernel(memory=NullMemory())
    # Step 2: Configure the Ollama client
    ollama_client = OllamaChatCompletion(
        service_id="ollama",
        host="http://localhost:11434",  # Local Ollama server
        ai_model_id="deepseek-r1",  # Replace with your desired Ollama model
    )
    model_client = SKChatCompletionAdapter(sk_client=ollama_client)
    assistant = AssistantAgent(
        name="assistant",
        system_message="You are a helpful assistant. Write all code in python. If the correct output is generated by code execution by user reply 'TERMINATE'. Code must be returned using ``` ```",
        model_client=model_client,
    )
    code_executor = CodeExecutorAgent(
        name="code_executor",
        code_executor=LocalCommandLineCodeExecutor(work_dir="coding"),
    )
    # The termination condition combines text termination and max message termination; either will end the chat.
    termination = TextMentionTermination("TERMINATE") | MaxMessageTermination(10)
    # The group chat will alternate between the assistant and the code executor.
    group_chat = RoundRobinGroupChat([assistant, code_executor], termination_condition=termination)
    # `run_stream` returns an async generator to stream the intermediate messages.
    stream = group_chat.run_stream(task="Print hello world.")
    # `Console` is a simple UI to display the stream.
    await Console(stream)


asyncio.run(main())
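A workaround that seems to unblock the repro above is to supply the kernel when constructing the adapter, so the create call does not need extra_create_args. This assumes the SKChatCompletionAdapter constructor accepts a kernel argument, as the error message implies; I have not confirmed the exact parameter name.

# Workaround sketch: give the adapter the kernel up front instead of per create() call.
# Assumes a `kernel` constructor argument, per the "provided either in constructor
# or extra_create_args" error message.
model_client = SKChatCompletionAdapter(sk_client=ollama_client, kernel=kernel)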
AutoGen version
main
Which package was this bug in
AgentChat
Model used
deepseek-r1
Python version
No response
Operating system
No response
Any additional info you think would be helpful for fixing this bug
No response