MCP Workbench does not trigger get_prompt() in Anthropic-style MCP servers #6652

@oscar60805

Description

What is the doc issue?

Summary

I'm developing an Anthropic MCP-style server and attempting to use AutoGen's AssistantAgent with McpWorkbench to interact with it. However, the get_prompt() handler defined on the server is never triggered during task execution, which makes it impossible for the agent to receive the structured task workflow the server defines.

What I'm trying to do

I have an MCP server that defines a prompt template using the @server.get_prompt() decorator. This prompt is crucial because it provides detailed task instructions and tool usage flow to the LLM.

In the Claude-style agent flow, this decorator is triggered automatically, and the prompt gets injected before the task starts.

Example:

```python
@server.get_prompt()
async def handle_get_prompt(name: str, arguments: dict[str, str]) -> GetPromptResult:
    ...
    return GetPromptResult(messages=[PromptMessage(role="user", content=TextContent(type="text", text=prompt_text))])
```

But when I use AutoGen with the following setup:

```python
server_params = StdioServerParams(
    command="python",
    args=["main.py"],
    cwd="./servers/my-mcp-server",
)

async with McpWorkbench(server_params) as workbench:
    agent = AssistantAgent(
        name="csv_agent",
        model_client=my_model,
        workbench=workbench,
        ...
    )
    await agent.run(task="Analyze defects in the CSV.")
```

get_prompt() is never called, and the agent starts reasoning without the workflow the server provides.

What I expected
I expected AutoGen to:

Call get_prompt() on the server.

Receive the structured prompt.

Use that as the full task input to the agent (instead of requiring the developer to manually input the full task string).

Questions
Is AutoGen currently designed to ignore the MCP get_prompt() endpoint?

If so, is there a recommended way to inject the prompt dynamically from MCP into the agent’s task?

Would you consider a feature or API like agent.run_from_prompt_name(prompt_name, arguments) that does this automatically?
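In the meantime, one workaround is to open a plain MCP client session yourself, call get_prompt() manually, flatten the returned messages into a task string, and pass that string to agent.run(). This is only a sketch, not an AutoGen API: `flatten_prompt` is a hypothetical helper, and the session calls in the usage comment follow the `mcp` Python SDK (`ClientSession.get_prompt`), so verify them against your SDK version.

```python
# Sketch of a workaround: fetch the MCP prompt manually and use it as the
# agent task. `flatten_prompt` is a hypothetical helper, not part of AutoGen.

def flatten_prompt(messages) -> str:
    """Join the text parts of MCP prompt messages into a single task string.

    Assumes each message has a `.content` with a `.text` attribute
    (e.g. TextContent); non-text content is skipped.
    """
    parts = []
    for msg in messages:
        text = getattr(msg.content, "text", None)
        if text:
            parts.append(text)
    return "\n\n".join(parts)

# Usage (assumes a running MCP server described by `server_params`, and an
# already-constructed `agent`; prompt name and arguments are illustrative):
#
# from mcp import ClientSession
# from mcp.client.stdio import stdio_client
#
# async with stdio_client(server_params) as (read, write):
#     async with ClientSession(read, write) as session:
#         await session.initialize()
#         result = await session.get_prompt(
#             "analyze_csv", arguments={"file": "defects.csv"}
#         )
#         task = flatten_prompt(result.messages)
#         await agent.run(task=task)
```

The helper keeps the workaround independent of AutoGen internals: the prompt is resolved over a plain MCP session, and the agent only ever sees an ordinary task string.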

Why it matters
The structured prompt returned by get_prompt() often contains step-by-step instructions and tool references that are essential for complex tasks. Without this endpoint being called, AutoGen agents may misinterpret the task or ignore the available tools altogether.

Thanks for any insights!

Link to the doc page, if applicable

Not a documentation issue. This is a behavior inconsistency when using MCP servers with AutoGen.
