Merged
1 change: 1 addition & 0 deletions AGENTS.md
@@ -37,6 +37,7 @@ Each example .py file should have a corresponding file with the same name under
* Identifiers (functions/classes/vars): English
* User-facing output/data (e.g., example responses, sample values): Spanish
* HITL control words: bilingual (approve/aprobar, exit/salir)
* Agent and workflow names: English ("TravelPlannerAgent" should be the same in both versions, not "AgentePlanificadorDeViajes")

Use informal (tuteo) LATAM Spanish: "tú" not "usted", "puedes" not "podés", etc. The content is technical, so if a word is best kept in English, keep it.

10 changes: 10 additions & 0 deletions README.md
@@ -194,6 +194,7 @@ You can run the examples in this repository by executing the scripts in the `exa
| [agent_with_subagent.py](examples/agent_with_subagent.py) | Context isolation with sub-agents to keep prompts focused on relevant tools. |
| [agent_without_subagent.py](examples/agent_without_subagent.py) | Context bloat example where one agent carries all tool schemas in a single prompt. |
| [agent_summarization.py](examples/agent_summarization.py) | Context compaction via summarization middleware to reduce token usage in long conversations. |
| [workflow_magenticone.py](examples/workflow_magenticone.py) | A MagenticOne multi-agent workflow. |
| [agent_middleware.py](examples/agent_middleware.py) | Agent, chat, and function middleware for logging, timing, and blocking. |
| [agent_knowledge_aisearch.py](examples/agent_knowledge_aisearch.py) | Knowledge retrieval (RAG) using Azure AI Search with AgentFrameworkAzureAISearchRAG. |
| [agent_knowledge_sqlite.py](examples/agent_knowledge_sqlite.py) | Knowledge retrieval (RAG) using a custom context provider with SQLite FTS5. |
@@ -204,15 +205,24 @@ You can run the examples in this repository by executing the scripts in the `exa
| [agent_mcp_local.py](examples/agent_mcp_local.py) | An agent connected to a local MCP server (e.g. for expense logging). |
| [openai_tool_calling.py](examples/openai_tool_calling.py) | Tool calling with the low-level OpenAI SDK, showing manual tool dispatch. |
| [workflow_rag_ingest.py](examples/workflow_rag_ingest.py) | A RAG ingestion pipeline using plain Python executors: fetch a document with markitdown, split into chunks, and embed with an OpenAI model. |
| [workflow_fan_out_fan_in_edges.py](examples/workflow_fan_out_fan_in_edges.py) | Fan-out/fan-in with explicit edge groups using `add_fan_out_edges` and `add_fan_in_edges`. |
| [workflow_aggregator_summary.py](examples/workflow_aggregator_summary.py) | Fan-out/fan-in with LLM summarization: synthesize expert outputs into an executive brief. |
| [workflow_aggregator_structured.py](examples/workflow_aggregator_structured.py) | Fan-out/fan-in with LLM structured extraction into a typed Pydantic model (`response_format`). |
| [workflow_aggregator_voting.py](examples/workflow_aggregator_voting.py) | Fan-out/fan-in with majority-vote aggregation across multiple classifiers (pure logic tally). |
| [workflow_aggregator_ranked.py](examples/workflow_aggregator_ranked.py) | Fan-out/fan-in with LLM-as-judge ranking: score and rank multiple candidates into a typed list. |
| [workflow_agents.py](examples/workflow_agents.py) | A workflow with AI agents as executors: a Writer drafts content and a Reviewer provides feedback. |
| [workflow_agents_sequential.py](examples/workflow_agents_sequential.py) | A sequential orchestration using `SequentialBuilder`: Writer and Reviewer run in order while sharing full conversation history. |
| [workflow_agents_streaming.py](examples/workflow_agents_streaming.py) | The same Writer → Reviewer workflow using `run(stream=True)` to observe `executor_invoked`, `executor_completed`, and streaming `output` events in real time. |
| [workflow_agents_concurrent.py](examples/workflow_agents_concurrent.py) | Concurrent orchestration using `ConcurrentBuilder`: run specialist agents in parallel and collect merged conversations. |
| [workflow_conditional.py](examples/workflow_conditional.py) | A minimal workflow with conditional edges: the Reviewer routes to a Publisher (approved) or Editor (needs revision) based on a sentinel token. |
| [workflow_conditional_structured.py](examples/workflow_conditional_structured.py) | The same conditional-edge routing pattern, but with structured reviewer output (`response_format`) for typed branch decisions instead of sentinel string matching. |
| [workflow_conditional_state.py](examples/workflow_conditional_state.py) | A stateful conditional workflow with iterative revision loops: stores the latest draft in workflow state and publishes from that state after approval. |
| [workflow_conditional_state_isolated.py](examples/workflow_conditional_state_isolated.py) | The stateful conditional workflow using a `create_workflow(...)` factory to build fresh agents/workflow per task for state isolation and thread safety. |
| [workflow_switch_case.py](examples/workflow_switch_case.py) | A workflow with switch-case routing: a Classifier agent uses structured outputs to categorize a message and route to a specialized handler. |
| [workflow_multi_selection_edge_group.py](examples/workflow_multi_selection_edge_group.py) | LLM-powered multi-selection routing using `add_multi_selection_edge_group` to activate one-or-many downstream handlers. |
| [workflow_converge.py](examples/workflow_converge.py) | A branch-and-converge workflow: Reviewer routes to Publisher or Editor, then converges before final summary output. |
| [workflow_handoffbuilder.py](examples/workflow_handoffbuilder.py) | Autonomous handoff orchestration using `HandoffBuilder` (agents transfer control without human-in-the-loop). |
| [workflow_handoffbuilder_rules.py](examples/workflow_handoffbuilder_rules.py) | Handoff orchestration with explicit routing rules using `HandoffBuilder.add_handoff()`. |
| [agent_otel_aspire.py](examples/agent_otel_aspire.py) | An agent with OpenTelemetry tracing, metrics, and structured logs exported to the [Aspire Dashboard](https://aspire.dev/dashboard/standalone/). |
| [agent_otel_appinsights.py](examples/agent_otel_appinsights.py) | An agent with OpenTelemetry tracing, metrics, and structured logs exported to [Azure Application Insights](https://learn.microsoft.com/azure/azure-monitor/app/app-insights-overview). Requires Azure provisioning via `azd provision`. |
| [agent_evaluation_generate.py](examples/agent_evaluation_generate.py) | Generate synthetic evaluation data for the travel planner agent. |
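The `workflow_aggregator_voting.py` row above describes a pure-logic majority-vote tally across classifier outputs. A minimal sketch of that aggregation step (the function name is illustrative, not the framework's API):

```python
from collections import Counter

def majority_vote(labels: list[str]) -> str:
    """Tally classifier labels and return the winner.

    Pure logic, no LLM call: Counter.most_common is stable, so ties
    resolve in favor of the label that appeared first in the input.
    """
    if not labels:
        raise ValueError("need at least one classifier label")
    return Counter(labels).most_common(1)[0][0]
```

With three classifiers voting `["spam", "ham", "spam"]`, the tally returns `"spam"`.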
2 changes: 1 addition & 1 deletion examples/agent_middleware.py
@@ -88,7 +88,7 @@
client = OpenAIChatClient(
base_url="https://models.github.ai/inference",
api_key=os.environ["GITHUB_TOKEN"],
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4o"),
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
)
else:
client = OpenAIChatClient(api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4o"))
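The hunk above only changes the GitHub Models default, but the surrounding selection logic in each example follows the same shape. A hedged sketch of that pattern — `resolve_client_kwargs` is a hypothetical helper, and the dict stands in for the real `OpenAIChatClient` keyword arguments:

```python
import os

def resolve_client_kwargs() -> dict:
    """Prefer GitHub Models when GITHUB_TOKEN is set; otherwise fall back
    to the OpenAI API. Defaults mirror the values in this diff."""
    if os.getenv("GITHUB_TOKEN"):
        return {
            "base_url": "https://models.github.ai/inference",
            "api_key": os.environ["GITHUB_TOKEN"],
            "model_id": os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
        }
    return {
        "api_key": os.environ["OPENAI_API_KEY"],
        "model_id": os.environ.get("OPENAI_MODEL", "gpt-4o"),
    }
```

Keeping the default in one place like this would avoid repeating the same model-id bump across eight files, which is what this PR has to do.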
2 changes: 1 addition & 1 deletion examples/agent_summarization.py
@@ -77,7 +77,7 @@
client = OpenAIChatClient(
base_url="https://models.github.ai/inference",
api_key=os.environ["GITHUB_TOKEN"],
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4o"),
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
)
else:
client = OpenAIChatClient(api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4o"))
2 changes: 1 addition & 1 deletion examples/agent_with_subagent.py
@@ -82,7 +82,7 @@
client = OpenAIChatClient(
base_url="https://models.github.ai/inference",
api_key=os.environ["GITHUB_TOKEN"],
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4o"),
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
)
else:
client = OpenAIChatClient(api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4o"))
2 changes: 1 addition & 1 deletion examples/agent_without_subagent.py
@@ -70,7 +70,7 @@
client = OpenAIChatClient(
base_url="https://models.github.ai/inference",
api_key=os.environ["GITHUB_TOKEN"],
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4o"),
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
)
else:
client = OpenAIChatClient(api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4o"))
10 changes: 10 additions & 0 deletions examples/spanish/README.md
@@ -189,6 +189,7 @@ Puedes ejecutar los ejemplos en este repositorio ejecutando los scripts en el di
| [agent_with_subagent.py](agent_with_subagent.py) | Aislamiento de contexto con subagentes para mantener los prompts enfocados en herramientas relevantes. |
| [agent_without_subagent.py](agent_without_subagent.py) | Ejemplo de inflado de contexto cuando un solo agente carga todos los esquemas de herramientas en un mismo prompt. |
| [agent_summarization.py](agent_summarization.py) | Compactación de contexto mediante middleware de resumen para reducir el uso de tokens en conversaciones largas. |
| [workflow_magenticone.py](workflow_magenticone.py) | Un workflow multi-agente MagenticOne. |
| [agent_middleware.py](agent_middleware.py) | Middleware de agente, chat y funciones para logging, timing y bloqueo. |
| [agent_knowledge_aisearch.py](agent_knowledge_aisearch.py) | Recuperación de conocimiento (RAG) usando Azure AI Search con AgentFrameworkAzureAISearchRAG. |
| [agent_knowledge_sqlite.py](agent_knowledge_sqlite.py) | Recuperación de conocimiento (RAG) usando un proveedor de contexto personalizado con SQLite FTS5. |
@@ -199,15 +200,24 @@ Puedes ejecutar los ejemplos en este repositorio ejecutando los scripts en el di
| [agent_mcp_local.py](agent_mcp_local.py) | Un agente conectado a un servidor MCP local (p. ej. para registro de gastos). |
| [openai_tool_calling.py](openai_tool_calling.py) | Llamadas a herramientas con el SDK de OpenAI de bajo nivel, mostrando despacho manual de herramientas. |
| [workflow_rag_ingest.py](workflow_rag_ingest.py) | Un pipeline de ingesta para RAG con ejecutores Python puros: descarga un documento con markitdown, lo divide en fragmentos y genera embeddings con un modelo de OpenAI. |
| [workflow_fan_out_fan_in_edges.py](workflow_fan_out_fan_in_edges.py) | Fan-out/fan-in con grupos de aristas explícitos usando `add_fan_out_edges` y `add_fan_in_edges`. |
| [workflow_aggregator_summary.py](workflow_aggregator_summary.py) | Fan-out/fan-in con resumen por LLM: sintetiza salidas de expertos en un brief ejecutivo. |
| [workflow_aggregator_structured.py](workflow_aggregator_structured.py) | Fan-out/fan-in con extracción estructurada por LLM en un modelo Pydantic tipado (`response_format`). |
| [workflow_aggregator_voting.py](workflow_aggregator_voting.py) | Fan-out/fan-in con agregación por voto mayoritario entre clasificadores (conteo de lógica pura). |
| [workflow_aggregator_ranked.py](workflow_aggregator_ranked.py) | Fan-out/fan-in con LLM como juez: puntúa y ordena múltiples candidatos en una lista tipada. |
| [workflow_agents.py](workflow_agents.py) | Un workflow con agentes de IA como ejecutores: un Escritor redacta contenido y un Revisor da retroalimentación. |
| [workflow_agents_sequential.py](workflow_agents_sequential.py) | Una orquestación secuencial usando `SequentialBuilder`: Escritor y Revisor se ejecutan en orden compartiendo todo el historial de la conversación. |
| [workflow_agents_streaming.py](workflow_agents_streaming.py) | El mismo workflow Escritor → Revisor usando `run(stream=True)` para observar los eventos `executor_invoked`, `executor_completed` y `output` en tiempo real. |
| [workflow_agents_concurrent.py](workflow_agents_concurrent.py) | Orquestación concurrente usando `ConcurrentBuilder`: ejecuta agentes especialistas en paralelo y junta las conversaciones. |
| [workflow_conditional.py](workflow_conditional.py) | Un workflow mínimo con aristas condicionales: el Revisor enruta al Publicador (aprobado) o al Editor (necesita revisión) según una señal de texto. |
| [workflow_conditional_structured.py](workflow_conditional_structured.py) | El mismo patrón de enrutamiento con aristas condicionales, pero usando salida estructurada del revisor (`response_format`) para decisiones tipadas en vez de matching por cadena. |
| [workflow_conditional_state.py](workflow_conditional_state.py) | Un workflow condicional con estado y bucle iterativo: guarda el último borrador en el estado del workflow y publica desde ese estado tras la aprobación. |
| [workflow_conditional_state_isolated.py](workflow_conditional_state_isolated.py) | El workflow condicional con estado usando una fábrica `create_workflow(...)` para crear agentes/workflow nuevos por tarea y así aislar estado e hilos de agente. |
| [workflow_switch_case.py](workflow_switch_case.py) | Un workflow con enrutamiento switch-case: un agente Clasificador usa salidas estructuradas para categorizar un mensaje y enrutarlo al manejador especializado. |
| [workflow_multi_selection_edge_group.py](workflow_multi_selection_edge_group.py) | Enrutamiento multi-selección con LLM usando `add_multi_selection_edge_group` para activar uno o varios manejadores. |
| [workflow_converge.py](workflow_converge.py) | Un workflow con rama y convergencia: Revisor enruta a Publicador o Editor y luego converge antes del resumen final. |
| [workflow_handoffbuilder.py](workflow_handoffbuilder.py) | Orquestación de handoff autónoma usando `HandoffBuilder` (los agentes se transfieren el control sin HITL). |
| [workflow_handoffbuilder_rules.py](workflow_handoffbuilder_rules.py) | Orquestación de handoff con reglas explícitas usando `HandoffBuilder.add_handoff()`. |
| [agent_otel_aspire.py](agent_otel_aspire.py) | Un agente con trazas, métricas y logs estructurados de OpenTelemetry exportados al [Aspire Dashboard](https://aspire.dev/dashboard/standalone/). |
| [agent_otel_appinsights.py](agent_otel_appinsights.py) | Un agente con trazas, métricas y logs estructurados de OpenTelemetry exportados a [Azure Application Insights](https://learn.microsoft.com/azure/azure-monitor/app/app-insights-overview). Requiere aprovisionamiento de Azure con `azd provision`. |
| [agent_evaluation_generate.py](agent_evaluation_generate.py) | Genera datos sintéticos de evaluación para el agente planificador de viajes. |
2 changes: 1 addition & 1 deletion examples/spanish/agent_middleware.py
@@ -89,7 +89,7 @@
client = OpenAIChatClient(
base_url="https://models.github.ai/inference",
api_key=os.environ["GITHUB_TOKEN"],
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4o"),
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
)
else:
client = OpenAIChatClient(api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4o"))
2 changes: 1 addition & 1 deletion examples/spanish/agent_summarization.py
@@ -76,7 +76,7 @@
client = OpenAIChatClient(
base_url="https://models.github.ai/inference",
api_key=os.environ["GITHUB_TOKEN"],
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4o"),
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
)
else:
client = OpenAIChatClient(api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4o"))
2 changes: 1 addition & 1 deletion examples/spanish/agent_with_subagent.py
@@ -82,7 +82,7 @@
client = OpenAIChatClient(
base_url="https://models.github.ai/inference",
api_key=os.environ["GITHUB_TOKEN"],
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4o"),
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
)
else:
client = OpenAIChatClient(api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4o"))
2 changes: 1 addition & 1 deletion examples/spanish/agent_without_subagent.py
@@ -70,7 +70,7 @@
client = OpenAIChatClient(
base_url="https://models.github.ai/inference",
api_key=os.environ["GITHUB_TOKEN"],
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4o"),
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
)
else:
client = OpenAIChatClient(api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-4o"))
7 changes: 5 additions & 2 deletions examples/spanish/workflow_agents.py
@@ -37,7 +37,7 @@
client = OpenAIChatClient(
base_url="https://models.github.ai/inference",
api_key=os.environ["GITHUB_TOKEN"],
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4o-mini"),
model_id=os.getenv("GITHUB_MODEL", "openai/gpt-4.1-mini"),
)
else:
client = OpenAIChatClient(
@@ -76,7 +76,10 @@ async def main():
prompt = "Escribe una publicación de LinkedIn de 2 frases: \"Por qué tu piloto de IA se ve bien, pero falla en producción.\""
print(f"Prompt: {prompt}\n")
events = await workflow.run(prompt)
print(events.get_outputs())

for output in events.get_outputs():
print("===== Salida =====")
print(output)

if async_credential:
await async_credential.close()
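The loop added in this hunk replaces a raw `print(events.get_outputs())` with one "Salida" banner per output. Extracted as a plain function for clarity (a sketch — `outputs` stands in for the list returned by `events.get_outputs()`):

```python
def format_outputs(outputs: list) -> str:
    """Render each workflow output under its own 'Salida' banner,
    matching the loop added in this diff."""
    lines: list[str] = []
    for output in outputs:
        lines.append("===== Salida =====")
        lines.append(str(output))
    return "\n".join(lines)
```

This makes multi-output runs readable instead of dumping the whole list as a single Python repr.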