
Add httpx to auto-instrumented libraries in configure_azure_monitor()#46284

Open
benke520 wants to merge 2 commits into Azure:main from benke520:fix/add-httpx-instrumentation

Conversation

Member

@benke520 benke520 commented Apr 13, 2026

Description

Fixes #46286

configure_azure_monitor() auto-instruments several HTTP client libraries (requests, urllib, urllib3) so that outgoing HTTP requests inject W3C traceparent headers for distributed tracing. However, httpx is not included in the supported instrumentation list.

The OpenAI Python SDK — which is the HTTP transport for Microsoft's own Azure AI Agent Framework (MAF) — uses httpx as its HTTP client, not requests. This means:

  1. configure_azure_monitor() sets up tracing correctly for local spans
  2. But outgoing HTTP calls from the OpenAI SDK to Azure AI Foundry do not carry traceparent headers
  3. The Foundry platform starts independent traces with separate trace_ids
  4. Distributed tracing between local agent code and the Foundry platform is broken
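The traceparent header at the center of this breakage follows the W3C Trace Context format. A minimal stdlib sketch (the helper name and header values are illustrative, not from the SDK) of what an instrumented client injects and what the receiving platform needs in order to join the caller's trace:

```python
import re

# W3C Trace Context traceparent: version-trace_id-parent_id-trace_flags,
# all lowercase hex. The example value below is illustrative.
_TRACEPARENT = re.compile(r"^00-[0-9a-f]{32}-[0-9a-f]{16}-[0-9a-f]{2}$")

def joins_existing_trace(headers):
    """True only if a valid traceparent arrived with the request;
    otherwise the server must start a fresh, independent trace_id."""
    return bool(_TRACEPARENT.match(headers.get("traceparent", "")))

# What an instrumented client (requests/urllib/urllib3 today) injects:
instrumented = {
    "traceparent": "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
}
# What uninstrumented httpx sends today: no traceparent at all.
uninstrumented = {}

print(joins_existing_trace(instrumented))    # True
print(joins_existing_trace(uninstrumented))  # False
```

When the header is absent, Foundry has no parent span context to continue, which is exactly why its server-side spans land under separate trace_ids.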

Before / After

Before (without httpx instrumentation)

Foundry server-side spans (invoke_agent, chat gpt-5.4) are missing from the trace — they exist in App Insights but under separate, independent trace_ids. The local trace shows only MAF's local spans with no visibility into what happened on the Foundry platform:

Foundry Agent Chat (21.3 s)
├── WeatherAgent-RemoteOnFoundry  invoke_agent (7.9 s)
│   ├── gpt-5.4  chat (5.6 s)           ← local span only, no server-side child
│   ├── get_weather  execute_tool (902 ms)
│   └── gpt-5.4  chat (1.4 s)           ← local span only
├── WeatherAgent-RemoteOnFoundry  invoke_agent (10.4 s)
│   ├── gpt-5.4  chat (2.3 s)
│   ├── get_weather  execute_tool (401 ms)
│   └── gpt-5.4  chat (7.8 s)
└── WeatherAgent-RemoteOnFoundry  invoke_agent (2.9 s)
    └── gpt-5.4  chat (2.9 s)

After (with httpx instrumentation)

With traceparent propagated via httpx, the Foundry server-side spans join the same distributed trace. Each HTTP POST to Foundry now shows the server-side invoke_agent and chat gpt-5.4 spans as children:

Foundry Agent Chat (24.1 s)
├── WeatherAgent-RemoteOnFoundry  invoke_agent (8.3 s)
│   ├── gpt-5.4  chat (5.8 s)
│   │   └── POST /api/projects/.../openai/v1/responses (2.9 s)  ← httpx span
│   │       └── invoke_agent (1.4 s)                             ← Foundry server
│   │           └── gpt-5.4-2026-03-05  chat (83 ms)            ← Foundry model call
│   ├── get_weather  execute_tool (608 ms)
│   └── gpt-5.4  chat (1.9 s)
│       └── POST /api/projects/.../openai/v1/responses (1.8 s)
├── WeatherAgent-RemoteOnFoundry  invoke_agent (6.6 s)
│   ├── gpt-5.4  chat (2.6 s)
│   │   └── POST /api/projects/.../openai/v1/responses (2.5 s)
│   │       ├── invoke_agent (1.7 s)
│   │       │   └── gpt-5.4-2026-03-05  chat (85.7 ms)
│   │       └── get_weather  execute_tool (707 ms)
│   └── gpt-5.4  chat (3.3 s)
│       └── POST /api/projects/.../openai/v1/responses (3.2 s)
│           └── invoke_agent (1.4 s)
│               └── gpt-5.4-2026-03-05  chat (585.3 ms)
└── WeatherAgent-RemoteOnFoundry  invoke_agent (9.2 s)
    └── gpt-5.4  chat (9.2 s)
        └── POST /api/projects/.../openai/v1/responses (9.2 s)
            └── invoke_agent (3.2 s)
                └── gpt-5.4-2026-03-05  chat (2.0 s)

Key differences:

  • httpx POST spans now appear under each chat span — these are the actual HTTP calls to Foundry
  • Foundry server-side invoke_agent + chat gpt-5.4 spans are children of the HTTP POST, confirming traceparent propagation works
  • The entire agent invocation is visible as a single end-to-end distributed trace

Root Cause

In _constants.py, the allowlist does not include httpx:

_FULLY_SUPPORTED_INSTRUMENTED_LIBRARIES = (
    _AZURE_SDK_INSTRUMENTATION_NAME,
    "django",
    "fastapi",
    "flask",
    "psycopg2",
    "requests",
    "urllib",
    "urllib3",
)

Even if a user manually installs opentelemetry-instrumentation-httpx, _setup_instrumentations() in _configure.py skips it, because "httpx" is not in _ALL_SUPPORTED_INSTRUMENTED_LIBRARIES.
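The gating just described can be sketched as follows; this is a simplified stand-in for _setup_instrumentations(), not the distro's actual code:

```python
# Simplified sketch of the allowlist gating described above; the real
# logic lives in _setup_instrumentations() in _configure.py.
_FULLY_SUPPORTED_INSTRUMENTED_LIBRARIES = (
    "django", "fastapi", "flask", "psycopg2",
    "requests", "urllib", "urllib3",
)
_ALL_SUPPORTED_INSTRUMENTED_LIBRARIES = _FULLY_SUPPORTED_INSTRUMENTED_LIBRARIES

def enabled_instrumentations(installed):
    """Return only the installed instrumentors the distro will enable."""
    return [lib for lib in installed
            if lib in _ALL_SUPPORTED_INSTRUMENTED_LIBRARIES]

# opentelemetry-instrumentation-httpx is installed, but httpx is skipped:
print(enabled_instrumentations(["requests", "httpx"]))  # ['requests']
```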

Changes

  • _constants.py: Add "httpx" to _FULLY_SUPPORTED_INSTRUMENTED_LIBRARIES
  • setup.py: Add opentelemetry-instrumentation-httpx==0.61b0 as a dependency

Current Workaround

Users must manually install and instrument httpx:

pip install opentelemetry-instrumentation-httpx

from opentelemetry.instrumentation.httpx import HTTPXClientInstrumentor

HTTPXClientInstrumentor().instrument()

Impact

This affects all Python users of:

  • Azure AI Agent Framework (MAF): agent-framework, agent-framework-foundry
  • OpenAI Python SDK: openai>=1.0 (uses httpx)
  • Any Python library using httpx as an HTTP client with Azure Monitor

The OpenAI Python SDK (used by Azure AI Agent Framework / MAF) uses httpx
as its HTTP client. Without httpx instrumentation, outgoing requests to
Azure AI Foundry do not carry W3C traceparent headers, breaking distributed
tracing between local agent code and the Foundry platform.

Changes:
- _constants.py: Add 'httpx' to _FULLY_SUPPORTED_INSTRUMENTED_LIBRARIES
- setup.py: Add opentelemetry-instrumentation-httpx==0.61b0 dependency
Copilot AI review requested due to automatic review settings April 13, 2026 19:48
@github-actions github-actions bot added the Monitor - Distro Monitor OpenTelemetry Distro label Apr 13, 2026
Contributor

Copilot AI left a comment


Pull request overview

This PR extends configure_azure_monitor()’s auto-instrumentation support to include httpx, enabling W3C traceparent propagation for outgoing HTTP calls made via httpx (notably used by the OpenAI Python SDK).

Changes:

  • Add httpx to the allowlist of fully supported auto-instrumented libraries.
  • Add opentelemetry-instrumentation-httpx==0.61b0 to the package dependencies so the instrumentor is available by default.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

  • sdk/monitor/azure-monitor-opentelemetry/setup.py: Bundles the httpx OpenTelemetry instrumentation package so it can be auto-enabled.
  • sdk/monitor/azure-monitor-opentelemetry/azure/monitor/opentelemetry/_constants.py: Allows httpx to be discovered/enabled by the distro's instrumentation setup.

Comment on lines 78 to 82
"django",
"fastapi",
"flask",
"httpx",
"psycopg2",

Copilot AI Apr 13, 2026


httpx was added to the supported instrumentation allowlist, but there is no corresponding instrumentation smoke test (there are per-library tests under tests/instrumentation/ for django/fastapi/flask/psycopg2/requests/urllib/urllib3). Add a test_httpx.py (or equivalent) to ensure opentelemetry.instrumentation.httpx can be imported/instrumented in CI and that this new support doesn’t silently regress.

Comment on lines 80 to 82
"flask",
"httpx",
"psycopg2",

Copilot AI Apr 13, 2026


The README’s “Officially supported instrumentations” table currently lists the bundled/supported libraries, but it doesn’t include httpx. Since httpx is now treated as fully supported, the docs should be updated to keep the public supported-instrumentations list in sync with _FULLY_SUPPORTED_INSTRUMENTED_LIBRARIES.

"django",
"fastapi",
"flask",
"httpx",

Copilot AI Apr 13, 2026


Adding httpx to _FULLY_SUPPORTED_INSTRUMENTED_LIBRARIES will change the default instrumentation_options produced by _get_configurations(). There are tests that assert an exact instrumentation_options dict (e.g., in tests/utils/test_configurations.py around test_get_configurations_env_vars_rate_limited / test_get_configurations_rate_limited_sampler_param), and they currently don’t include httpx, so this change will break the test suite unless those expected dictionaries are updated accordingly.

Suggested change
"httpx",

- Add tests/instrumentation/test_httpx.py smoke test (matches existing
  pattern for django/flask/requests/etc.)
- Add 'httpx' to all instrumentation_options expected dicts in
  tests/utils/test_configurations.py (13 assertions)
- Add httpx row to README 'Officially supported instrumentations' table
  with link definitions

Labels

Monitor - Distro Monitor OpenTelemetry Distro


Development

Successfully merging this pull request may close these issues.

configure_azure_monitor() does not auto-instrument httpx — breaks distributed tracing for OpenAI SDK / Azure AI Agent Framework (MAF)

2 participants