
Commit af44c28 (2 parents: 34412d4 + 45c5417)
Merge pull request #9 from madebygps/feature/agent-otel-appinsights
Add OTel App Insights examples

File tree: 10 files changed, +683 −18 lines

.env.sample
Lines changed: 2 additions & 0 deletions

@@ -10,5 +10,7 @@ OPENAI_MODEL=gpt-3.5-turbo
 GITHUB_MODEL=gpt-5-mini
 GITHUB_TOKEN=YOUR-GITHUB-PERSONAL-ACCESS-TOKEN
 OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
+# Optional: Set to export telemetry to Azure Application Insights
+APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=YOUR-KEY;IngestionEndpoint=https://YOUR-REGION.in.applicationinsights.azure.com/
 # Optional: Set to log evaluation results to Azure AI Foundry for rich visualization
 AZURE_AI_PROJECT=https://YOUR-ACCOUNT.services.ai.azure.com/api/projects/YOUR-PROJECT

README.md
Lines changed: 39 additions & 0 deletions

@@ -174,6 +174,7 @@ You can run the examples in this repository by executing the scripts in the `examples` directory:
 | [openai_tool_calling.py](examples/openai_tool_calling.py) | Tool calling with the low-level OpenAI SDK, showing manual tool dispatch. |
 | [workflow_basic.py](examples/workflow_basic.py) | A workflow-based agent. |
 | [agent_otel_aspire.py](examples/agent_otel_aspire.py) | An agent with OpenTelemetry tracing, metrics, and structured logs exported to the [Aspire Dashboard](https://aspire.dev/dashboard/standalone/). |
+| [agent_otel_appinsights.py](examples/agent_otel_appinsights.py) | An agent with OpenTelemetry tracing, metrics, and structured logs exported to [Azure Application Insights](https://learn.microsoft.com/azure/azure-monitor/app/app-insights-overview). Requires Azure provisioning via `azd provision`. |
 | [agent_evaluation.py](examples/agent_evaluation.py) | Evaluate a travel planner agent using [Azure AI Evaluation](https://learn.microsoft.com/azure/ai-foundry/concepts/evaluation-evaluators/agent-evaluators) agent evaluators (IntentResolution, ToolCallAccuracy, TaskAdherence, ResponseCompleteness). Optionally set `AZURE_AI_PROJECT` in `.env` to log results to [Azure AI Foundry](https://learn.microsoft.com/azure/ai-foundry/how-to/develop/agent-evaluate-sdk). |

 ## Using the Aspire Dashboard for telemetry

@@ -233,6 +234,44 @@ If you're running locally without Dev Containers, you need to start the Aspire Dashboard:

 For the full Python + Aspire guide, see [Use the Aspire dashboard with Python apps](https://aspire.dev/dashboard/standalone-for-python/).

+## Exporting telemetry to Azure Application Insights
+
+The [agent_otel_appinsights.py](examples/agent_otel_appinsights.py) example exports OpenTelemetry traces, metrics, and structured logs to [Azure Application Insights](https://learn.microsoft.com/azure/azure-monitor/app/app-insights-overview).
+
+### Setup
+
+This example requires an `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable. You can get this automatically or manually:
+
+**Option A: Automatic via `azd provision`**
+
+If you run `azd provision` (see [Using Azure AI Foundry models](#using-azure-ai-foundry-models)), the Application Insights resource is provisioned automatically and the connection string is written to your `.env` file.
+
+**Option B: Manual from the Azure Portal**
+
+1. Create an Application Insights resource in the [Azure Portal](https://portal.azure.com).
+2. Copy the connection string from the resource's Overview page.
+3. Add it to your `.env` file:
+
+```sh
+APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=...;IngestionEndpoint=...
+```
+
+### Running the example
+
+```sh
+uv run examples/agent_otel_appinsights.py
+```
+
+### Viewing telemetry
+
+After running the example, navigate to your Application Insights resource in the Azure Portal:
+
+* **Transaction search**: See end-to-end traces for agent invocations, chat completions, and tool executions.
+* **Live Metrics**: Monitor real-time request rates and performance.
+* **Performance**: Analyze operation durations and identify bottlenecks.
+
+Telemetry data may take 2–5 minutes to appear in the portal.
+
 ## Resources

 * [Agent Framework Documentation](https://learn.microsoft.com/agent-framework/)
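The connection string added above follows a `Key=Value;Key=Value` shape. As a rough illustration of that format (the `parse_connection_string` helper below is hypothetical, not part of the `azure-monitor-opentelemetry` SDK), you could sanity-check the value before handing it to `configure_azure_monitor`:

```python
def parse_connection_string(value: str) -> dict[str, str]:
    """Hypothetical helper: split 'Key=Value;Key=Value' pairs into a dict."""
    pairs = (item.split("=", 1) for item in value.split(";") if item)
    return {key: val for key, val in pairs}


conn = "InstrumentationKey=abc123;IngestionEndpoint=https://westus2.in.applicationinsights.azure.com/"
parsed = parse_connection_string(conn)
print(parsed["InstrumentationKey"])  # prints: abc123
```

A check like this can surface a truncated or mis-pasted string early, before the exporter silently fails to ingest telemetry.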

examples/agent_otel_appinsights.py
Lines changed: 98 additions & 0 deletions (new file)

import asyncio
import logging
import os
import random
from datetime import datetime, timezone
from typing import Annotated

from agent_framework import ChatAgent
from agent_framework.observability import create_resource, enable_instrumentation
from agent_framework.openai import OpenAIChatClient
from azure.identity.aio import DefaultAzureCredential, get_bearer_token_provider
from azure.monitor.opentelemetry import configure_azure_monitor
from dotenv import load_dotenv
from pydantic import Field
from rich import print
from rich.logging import RichHandler

# Setup logging
handler = RichHandler(show_path=False, rich_tracebacks=True, show_level=False)
logging.basicConfig(level=logging.WARNING, handlers=[handler], force=True, format="%(message)s")
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# Configure OpenTelemetry export to Azure Application Insights
load_dotenv(override=True)
configure_azure_monitor(
    connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"],
    resource=create_resource(),
    enable_live_metrics=True,
)
enable_instrumentation(enable_sensitive_data=True)
logger.info("Azure Application Insights export enabled")

# Configure OpenAI client based on environment
API_HOST = os.getenv("API_HOST", "github")

async_credential = None
if API_HOST == "azure":
    async_credential = DefaultAzureCredential()
    token_provider = get_bearer_token_provider(async_credential, "https://cognitiveservices.azure.com/.default")
    client = OpenAIChatClient(
        base_url=f"{os.environ['AZURE_OPENAI_ENDPOINT']}/openai/v1/",
        api_key=token_provider,
        model_id=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
    )
elif API_HOST == "github":
    client = OpenAIChatClient(
        base_url="https://models.github.ai/inference",
        api_key=os.environ["GITHUB_TOKEN"],
        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-5-mini"),
    )
else:
    client = OpenAIChatClient(
        api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-5-mini")
    )


def get_weather(
    city: Annotated[str, Field(description="City name, spelled out fully")],
) -> dict:
    """Returns weather data for a given city, a dictionary with temperature and description."""
    logger.info(f"Getting weather for {city}")
    weather_options = [
        {"temperature": 72, "description": "Sunny"},
        {"temperature": 60, "description": "Rainy"},
        {"temperature": 55, "description": "Cloudy"},
        {"temperature": 45, "description": "Windy"},
    ]
    return random.choice(weather_options)


def get_current_time(
    timezone_name: Annotated[str, Field(description="Timezone name, e.g. 'US/Eastern', 'Asia/Tokyo', 'UTC'")],
) -> str:
    """Returns the current date and time in UTC (timezone_name is for display context only)."""
    logger.info(f"Getting current time for {timezone_name}")
    now = datetime.now(timezone.utc)
    return f"The current time in {timezone_name} is approximately {now.strftime('%Y-%m-%d %H:%M:%S')} UTC"


agent = ChatAgent(
    name="weather-time-agent",
    chat_client=client,
    instructions="You are a helpful assistant that can look up weather and time information.",
    tools=[get_weather, get_current_time],
)


async def main():
    response = await agent.run("What's the weather in Seattle and what time is it in Tokyo?")
    print(response.text)

    if async_credential:
        await async_credential.close()


if __name__ == "__main__":
    asyncio.run(main())
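The script above reads `os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"]` directly, so a missing variable surfaces as a bare `KeyError`. A minimal sketch of a friendlier guard (the `require_env` helper is illustrative, not part of the example or any SDK):

```python
import os
import sys


def require_env(name: str) -> str:
    """Illustrative helper: exit with a readable hint instead of raising a bare KeyError."""
    value = os.environ.get(name)
    if not value:
        sys.exit(f"{name} is not set; run `azd provision` or add it to your .env file")
    return value


# Usage sketch: resolve the connection string before configuring the exporter.
os.environ.setdefault("APPLICATIONINSIGHTS_CONNECTION_STRING", "InstrumentationKey=demo")
conn = require_env("APPLICATIONINSIGHTS_CONNECTION_STRING")
```

This keeps the failure mode actionable for readers who run the example before provisioning anything.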

examples/spanish/README.md
Lines changed: 39 additions & 0 deletions

@@ -175,6 +175,7 @@ You can run the examples in this repository by executing the scripts in the directory:
 | [openai_tool_calling.py](openai_tool_calling.py) | Tool calling with the low-level OpenAI SDK, showing manual tool dispatch. |
 | [workflow_basic.py](workflow_basic.py) | Uses Agent Framework to create a workflow-based agent. |
 | [agent_otel_aspire.py](agent_otel_aspire.py) | An agent with OpenTelemetry tracing, metrics, and structured logs exported to the [Aspire Dashboard](https://aspire.dev/dashboard/standalone/). |
+| [agent_otel_appinsights.py](agent_otel_appinsights.py) | An agent with OpenTelemetry tracing, metrics, and structured logs exported to [Azure Application Insights](https://learn.microsoft.com/azure/azure-monitor/app/app-insights-overview). Requires Azure provisioning via `azd provision`. |
 | [agent_evaluation.py](agent_evaluation.py) | Evaluates a travel planner agent using [Azure AI Evaluation](https://learn.microsoft.com/azure/ai-foundry/concepts/evaluation-evaluators/agent-evaluators) agent evaluators (IntentResolution, ToolCallAccuracy, TaskAdherence, ResponseCompleteness). Optionally set `AZURE_AI_PROJECT` in `.env` to log results to [Azure AI Foundry](https://learn.microsoft.com/azure/ai-foundry/how-to/develop/agent-evaluate-sdk). |

 ## Using the Aspire Dashboard for telemetry

@@ -234,6 +235,44 @@ If you're running locally without Dev Containers, you need to start the Aspire Dashboard:

 For the full Python + Aspire guide, see [Use the Aspire Dashboard with Python apps](https://aspire.dev/dashboard/standalone-for-python/).

+## Exporting telemetry to Azure Application Insights
+
+The [agent_otel_appinsights.py](agent_otel_appinsights.py) example exports OpenTelemetry traces, metrics, and structured logs to [Azure Application Insights](https://learn.microsoft.com/azure/azure-monitor/app/app-insights-overview).
+
+### Setup
+
+This example requires the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable. You can get it automatically or manually:
+
+**Option A: Automatic via `azd provision`**
+
+If you run `azd provision` (see [Using Azure AI Foundry models](#usar-modelos-de-azure-ai-foundry)), the Application Insights resource is provisioned automatically and the connection string is written to your `.env` file.
+
+**Option B: Manual from the Azure Portal**
+
+1. Create an Application Insights resource in the [Azure Portal](https://portal.azure.com).
+2. Copy the connection string from the resource's Overview page.
+3. Add it to your `.env` file:
+
+```sh
+APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=...;IngestionEndpoint=...
+```
+
+### Running the example
+
+```sh
+uv run examples/spanish/agent_otel_appinsights.py
+```
+
+### Viewing telemetry
+
+After running the example, navigate to your Application Insights resource in the Azure Portal:
+
+* **Transaction search**: See end-to-end traces for agent invocations, chat completions, and tool executions.
+* **Live Metrics**: Monitor real-time request rates and performance.
+* **Performance**: Analyze operation durations and identify bottlenecks.
+
+Telemetry data may take 2–5 minutes to appear in the portal.
+
 ## Resources

 * [Agent Framework Documentation](https://learn.microsoft.com/agent-framework/)
examples/spanish/agent_otel_appinsights.py
Lines changed: 98 additions & 0 deletions (new file)

import asyncio
import logging
import os
import random
from datetime import datetime, timezone
from typing import Annotated

from agent_framework import ChatAgent
from agent_framework.observability import create_resource, enable_instrumentation
from agent_framework.openai import OpenAIChatClient
from azure.identity.aio import DefaultAzureCredential, get_bearer_token_provider
from azure.monitor.opentelemetry import configure_azure_monitor
from dotenv import load_dotenv
from pydantic import Field
from rich import print
from rich.logging import RichHandler

# Set up logging
handler = RichHandler(show_path=False, rich_tracebacks=True, show_level=False)
logging.basicConfig(level=logging.WARNING, handlers=[handler], force=True, format="%(message)s")
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# Configure OpenTelemetry export to Azure Application Insights
load_dotenv(override=True)
configure_azure_monitor(
    connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"],
    resource=create_resource(),
    enable_live_metrics=True,
)
enable_instrumentation(enable_sensitive_data=True)
logger.info("Exportación a Azure Application Insights habilitada")

# Configure the OpenAI client based on the environment
API_HOST = os.getenv("API_HOST", "github")

async_credential = None
if API_HOST == "azure":
    async_credential = DefaultAzureCredential()
    token_provider = get_bearer_token_provider(async_credential, "https://cognitiveservices.azure.com/.default")
    client = OpenAIChatClient(
        base_url=f"{os.environ['AZURE_OPENAI_ENDPOINT']}/openai/v1/",
        api_key=token_provider,
        model_id=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
    )
elif API_HOST == "github":
    client = OpenAIChatClient(
        base_url="https://models.github.ai/inference",
        api_key=os.environ["GITHUB_TOKEN"],
        model_id=os.getenv("GITHUB_MODEL", "openai/gpt-5-mini"),
    )
else:
    client = OpenAIChatClient(
        api_key=os.environ["OPENAI_API_KEY"], model_id=os.environ.get("OPENAI_MODEL", "gpt-5-mini")
    )


def get_weather(
    city: Annotated[str, Field(description="City name, spelled out fully")],
) -> dict:
    """Devuelve datos meteorológicos para una ciudad: temperatura y descripción."""
    logger.info(f"Obteniendo el clima para {city}")
    weather_options = [
        {"temperature": 22, "description": "Soleado"},
        {"temperature": 15, "description": "Lluvioso"},
        {"temperature": 13, "description": "Nublado"},
        {"temperature": 7, "description": "Ventoso"},
    ]
    return random.choice(weather_options)


def get_current_time(
    timezone_name: Annotated[str, Field(description="Timezone name, e.g. 'US/Eastern', 'Asia/Tokyo', 'UTC'")],
) -> str:
    """Devuelve la fecha y hora actual en UTC (timezone_name es solo para contexto de visualización)."""
    logger.info(f"Obteniendo la hora actual para {timezone_name}")
    now = datetime.now(timezone.utc)
    return f"La hora actual en {timezone_name} es aproximadamente {now.strftime('%Y-%m-%d %H:%M:%S')} UTC"


agent = ChatAgent(
    name="weather-time-agent",
    chat_client=client,
    instructions="Eres un asistente útil que puede consultar información del clima y la hora.",
    tools=[get_weather, get_current_time],
)


async def main():
    response = await agent.run("¿Cómo está el clima en Ciudad de México y qué hora es en Buenos Aires?")
    print(response.text)

    if async_credential:
        await async_credential.close()


if __name__ == "__main__":
    asyncio.run(main())

infra/main.bicep
Lines changed: 30 additions & 0 deletions

@@ -144,6 +144,33 @@ module openAi 'br/public:avm/res/cognitive-services/account:0.7.1' = {
   }
 }

+// Log Analytics workspace for Application Insights
+var logAnalyticsName = '${prefix}-loganalytics'
+module logAnalytics 'br/public:avm/res/operational-insights/workspace:0.9.1' = {
+  name: 'loganalytics'
+  scope: resourceGroup
+  params: {
+    name: logAnalyticsName
+    location: location
+    tags: tags
+  }
+}
+
+// Application Insights for OpenTelemetry export
+var appInsightsName = '${prefix}-appinsights'
+module appInsights 'br/public:avm/res/insights/component:0.4.2' = {
+  name: 'appinsights'
+  scope: resourceGroup
+  params: {
+    name: appInsightsName
+    location: location
+    tags: tags
+    workspaceResourceId: logAnalytics.outputs.resourceId
+    kind: 'web'
+    applicationType: 'web'
+  }
+}
+
 output AZURE_LOCATION string = location
 output AZURE_TENANT_ID string = tenant().tenantId
 output AZURE_RESOURCE_GROUP string = resourceGroup.name

@@ -154,3 +181,6 @@ output AZURE_OPENAI_CHAT_MODEL string = azureOpenaiChatModel
 output AZURE_OPENAI_CHAT_DEPLOYMENT string = azureOpenaiChatDeployment
 output AZURE_OPENAI_EMBEDDING_MODEL string = azureOpenaiEmbeddingModel
 output AZURE_OPENAI_EMBEDDING_DEPLOYMENT string = azureOpenaiEmbeddingDeployment
+
+// Specific to Application Insights
+output APPLICATIONINSIGHTS_CONNECTION_STRING string = appInsights.outputs.connectionString

infra/write_dot_env.ps1
Lines changed: 2 additions & 0 deletions

@@ -17,3 +17,5 @@ Add-Content -Path .env -Value "AZURE_OPENAI_CHAT_DEPLOYMENT=$azureOpenAiChatDepl
 Add-Content -Path .env -Value "AZURE_OPENAI_CHAT_MODEL=$azureOpenAiChatModel"
 Add-Content -Path .env -Value "AZURE_OPENAI_EMBEDDING_DEPLOYMENT=$azureOpenAiEmbeddingDeployment"
 Add-Content -Path .env -Value "AZURE_OPENAI_EMBEDDING_MODEL=$azureOpenAiEmbeddingModel"
+$appInsightsConnectionString = azd env get-value APPLICATIONINSIGHTS_CONNECTION_STRING
+Add-Content -Path .env -Value "APPLICATIONINSIGHTS_CONNECTION_STRING=$appInsightsConnectionString"

infra/write_dot_env.sh
Lines changed: 1 addition & 0 deletions

@@ -12,3 +12,4 @@ echo "AZURE_OPENAI_CHAT_DEPLOYMENT=$(azd env get-value AZURE_OPENAI_CHAT_DEPLOYM
 echo "AZURE_OPENAI_CHAT_MODEL=$(azd env get-value AZURE_OPENAI_CHAT_MODEL)" >> .env
 echo "AZURE_OPENAI_EMBEDDING_DEPLOYMENT=$(azd env get-value AZURE_OPENAI_EMBEDDING_DEPLOYMENT)" >> .env
 echo "AZURE_OPENAI_EMBEDDING_MODEL=$(azd env get-value AZURE_OPENAI_EMBEDDING_MODEL)" >> .env
+echo "APPLICATIONINSIGHTS_CONNECTION_STRING=$(azd env get-value APPLICATIONINSIGHTS_CONNECTION_STRING)" >> .env
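Both scripts append `KEY=value` lines to `.env` from `azd env get-value` output. A minimal Python sketch of how one might verify the result after provisioning (the `env_file_has_key` helper is hypothetical, not part of azd or this repo):

```python
from pathlib import Path


def env_file_has_key(path: str, key: str) -> bool:
    """Hypothetical check: does a .env-style file define the given key?"""
    if not Path(path).exists():
        return False
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    # A key is "defined" if some line starts with `KEY=`.
    return any(line.split("=", 1)[0].strip() == key for line in lines if "=" in line)


print(env_file_has_key(".env", "APPLICATIONINSIGHTS_CONNECTION_STRING"))
```

If this prints `False` after `azd provision`, the post-provision hook likely did not run and the example script will fail at startup.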

pyproject.toml
Lines changed: 1 addition & 0 deletions

@@ -14,6 +14,7 @@ dependencies = [
     "faker",
     "fastmcp",
     "opentelemetry-exporter-otlp-proto-grpc",
+    "azure-monitor-opentelemetry",
     "azure-ai-evaluation>=1.15.0",
     "agent-framework-core @ git+https://github.com/microsoft/agent-framework.git@98cd72839e4057d661a58092a3b013993264d834#subdirectory=python/packages/core",
     "agent-framework-devui @ git+https://github.com/microsoft/agent-framework.git@98cd72839e4057d661a58092a3b013993264d834#subdirectory=python/packages/devui",
