Support OTel gen_ai semconv v1.40 #109

Merged
krisztianfekete merged 3 commits into main from feature/otel-semcon-v1-40 on Apr 7, 2026

Conversation

@krisztianfekete (Contributor) commented Apr 7, 2026

This PR adds support for OTel GenAI semantic conventions v1.40.0 to the trace consumer pipeline. Previously we only extracted a handful of gen_ai.* attributes from incoming traces. Now we pull out provider identity, response model metadata, finish reasons, cache token usage, tool type/description, error classification, and request parameters like temperature and max tokens when instrumentors include them.

The practical effect is richer trace analysis out of the box. Sessions now show which provider served each request, the actual model that responded (which can differ from what was requested), and prompt caching stats for cost visibility.
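
The attribute extraction described above can be sketched roughly as follows. This is an illustration only, not the PR's actual code (which lives in src/agentevals/extraction.py and uses shared constants from trace_attrs.py); in particular, the cache-token attribute key is an assumption.

```python
# Illustrative sketch of the kind of gen_ai.* extraction this PR adds.
# Attribute names follow OTel GenAI semconv; the cache-token key is an
# assumption -- check trace_attrs.py for the exact constant used.

def extract_genai_metadata(attrs: dict) -> dict:
    """Pull provider identity, response metadata, and request params
    from a span's attributes, with fallbacks for older instrumentors."""
    return {
        # gen_ai.provider.name (v1.37+), falling back to the older gen_ai.system
        "provider": attrs.get("gen_ai.provider.name") or attrs.get("gen_ai.system"),
        "response_model": attrs.get("gen_ai.response.model"),
        "finish_reasons": attrs.get("gen_ai.response.finish_reasons", []),
        "temperature": attrs.get("gen_ai.request.temperature"),
        "max_tokens": attrs.get("gen_ai.request.max_tokens"),
        # assumed key name for prompt-cache usage
        "cache_read_tokens": attrs.get("gen_ai.usage.cache_read.input_tokens", 0),
    }
```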

Fixes #106

Copilot AI left a comment

Pull request overview

Adds end-to-end support for richer OTel GenAI semantic conventions (up to v1.40.0) by extracting additional gen_ai.* attributes in the backend and surfacing them in the UI (provider, response model, cache token usage, etc.), improving trace/session observability.

Changes:

  • Backend: centralizes new GenAI attribute keys and expands extraction of provider/response metadata, cache token usage, finish reasons, error types, and request params.
  • UI: extends trace/session/inspector types and rendering to display provider/response model and cache token metrics.
  • Examples/deps: updates OpenAI-related dependencies and sets semconv stability opt-in for the OpenAI Agents example.

Reviewed changes

Copilot reviewed 15 out of 16 changed files in this pull request and generated 4 comments.

Summary per file:

| File | Description |
| --- | --- |
| uv.lock | Bumps the openai dependency version. |
| ui/src/lib/types.ts | Adds provider/responseModel/agentId fields and cache token metrics types. |
| ui/src/context/TraceProvider.tsx | Propagates new metadata fields into table rows during streaming evaluation. |
| ui/src/components/streaming/SessionMetadata.tsx | Refactors metadata rendering and adds cache token display (currently has an unused local). |
| ui/src/components/streaming/SessionCard.tsx | Displays provider badge and cache token summary; passes invocations into metadata panel. |
| ui/src/components/inspector/PerformanceSection.tsx | Displays provider/response model and cache token metrics in the inspector performance table. |
| ui/src/components/inspector/InspectorView.tsx | Passes trace info (provider/model/responseModel) into the comparison/performance panel. |
| ui/src/components/inspector/ComparisonPanel.tsx | Threads traceInfo through to PerformanceSection. |
| ui/src/api/client.ts | Maps streamed traceMetadata fields into partial TraceResult. |
| src/agentevals/trace_metrics.py | Extracts provider/response model and cache token totals into trace metadata/metrics. |
| src/agentevals/trace_attrs.py | Adds new GenAI semconv attribute constants (v1.40.0 coverage). |
| src/agentevals/streaming/ws_server.py | Extracts richer per-invocation model info from spans (provider/response models/finish reasons/cache tokens/etc.). |
| src/agentevals/extraction.py | Adds extract_extended_model_info_from_attrs plus tool type/description extraction. |
| src/agentevals/api/models.py | Extends TraceConversionMetadata with agentId/provider/responseModel. |
| examples/zero-code-examples/openai-agents/run.py | Enables OTEL_SEMCONV_STABILITY_OPT_IN=gen_ai_latest_experimental. |
| examples/zero-code-examples/openai-agents/requirements.txt | Updates example requirements for OpenAI/OpenAI Agents. |
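
The opt-in used by the OpenAI Agents example works by setting an environment variable before OTel instrumentation initializes. A minimal sketch (the value comes from the example's run.py in this PR; setting it in Python rather than the shell is just one option):

```python
import os

# Must be set before the OTel instrumentor is imported/initialized,
# since the stability opt-in is read once at startup.
os.environ.setdefault("OTEL_SEMCONV_STABILITY_OPT_IN", "gen_ai_latest_experimental")
```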


```python
elif finish_reasons_raw:
    finish_reasons = [finish_reasons_raw]

temperature = attrs.get(OTEL_GENAI_REQUEST_TEMPERATURE)
```

A lot of repeated functionality; consider creating a helper that tries to cast to a provided type and returns the value (or a default).
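
The helper the reviewer suggests might look like this (a sketch under the reviewer's description; the name `get_typed` is hypothetical):

```python
from typing import Callable, Optional, TypeVar

T = TypeVar("T")

def get_typed(attrs: dict, key: str, cast: Callable[[object], T],
              default: Optional[T] = None) -> Optional[T]:
    """Look up attrs[key] and coerce it with `cast`; return `default`
    when the key is missing or the value cannot be coerced."""
    value = attrs.get(key)
    if value is None:
        return default
    try:
        return cast(value)
    except (TypeError, ValueError):
        return default
```

Usage would then collapse the repeated try/except blocks, e.g. `temperature = get_typed(attrs, OTEL_GENAI_REQUEST_TEMPERATURE, float)`.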

```python
    for provider when gen_ai.provider.name is absent (backward compat with
    pre-v1.37.0 instrumentors).
    """
    provider = attrs.get(OTEL_GENAI_PROVIDER_NAME)
```

`provider = attrs.get(OTEL_GENAI_PROVIDER_NAME) or attrs.get(OTEL_GENAI_SYSTEM)`

```python
    if not provider:
        provider = attrs.get(OTEL_GENAI_SYSTEM)

    finish_reasons_raw = attrs.get(OTEL_GENAI_RESPONSE_FINISH_REASONS)
```

Extract this into a separate function (e.g. `_parse_finish_reasons`).

```python
    except (TypeError, ValueError):
        cache_read = 0

    return {
```

Consider using a TypedDict for this return value.
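
A TypedDict for the returned dict might look like this. The field names are guesses based on the PR description, not the actual implementation:

```python
from typing import List, Optional, TypedDict

class ExtendedModelInfo(TypedDict, total=False):
    """Hypothetical shape for the extended per-invocation model info;
    field names are assumptions, not the PR's actual keys."""
    provider: Optional[str]
    response_model: Optional[str]
    finish_reasons: List[str]
    cache_read_tokens: int
    temperature: Optional[float]
```

A TypedDict keeps the runtime value a plain dict (no serialization changes) while letting type checkers catch misspelled or missing keys at the call sites.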


@peterj (Contributor) left a comment


minor feedback

@krisztianfekete krisztianfekete merged commit 81bf034 into main Apr 7, 2026
4 checks passed
@krisztianfekete krisztianfekete deleted the feature/otel-semcon-v1-40 branch April 7, 2026 15:58


Development

Successfully merging this pull request may close these issues.

Support OpenTelemetry Semantic Conventions v1.40.0

3 participants