Telemetry
Tracing spans, cost tracking, and tool execution metrics
A3S Code emits structured telemetry via OpenTelemetry-compatible tracing spans. Every LLM call, tool execution, and agent turn is instrumented with detailed attributes for observability, cost tracking, and performance analysis.
Span Hierarchy
The agent produces a nested span tree for each execution:
a3s.agent.execute ← top-level span per send()/stream()
├── a3s.agent.turn ← one per agent turn (LLM call + tool loop)
│ ├── a3s.context.resolve ← context provider resolution
│ ├── a3s.llm.completion ← LLM API call
│ ├── a3s.tool.execute ← tool execution (one per tool call)
│ ├── a3s.llm.completion ← follow-up LLM call after tool results
│ └── a3s.tool.execute ← ...
├── a3s.agent.turn ← next turn
│ └── ...
└── (end)

Each span carries attributes that describe what happened during that phase.
Span Attributes
Agent-level (a3s.agent.execute, a3s.agent.turn)
LLM-level (a3s.llm.completion)
Tool-level (a3s.tool.execute)
Context-level (a3s.context.resolve)
Cost Tracking
The telemetry module records the cost of each LLM call in an LlmCostRecord.
Model Pricing
Cost is calculated using ModelPricing:
cost_usd = (prompt_tokens * input_per_million / 1_000_000)
         + (completion_tokens * output_per_million / 1_000_000)

A built-in pricing registry (default_model_pricing()) covers common models from Anthropic, OpenAI, and others. Custom pricing can be specified per model in the agent config's cost block.
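The pricing formula above can be sketched as a small self-contained function. The field names on ModelPricing (input_per_million, output_per_million, in USD per one million tokens) are assumptions for illustration, not the verified A3S API:

```rust
// Sketch of the per-call cost formula. Field names are assumed
// for illustration; rates are USD per one million tokens.
struct ModelPricing {
    input_per_million: f64,
    output_per_million: f64,
}

fn cost_usd(pricing: &ModelPricing, prompt_tokens: u64, completion_tokens: u64) -> f64 {
    (prompt_tokens as f64) * pricing.input_per_million / 1_000_000.0
        + (completion_tokens as f64) * pricing.output_per_million / 1_000_000.0
}

fn main() {
    // e.g. $3 / $15 per million tokens, 2000 prompt + 500 completion tokens:
    // 2000 * 3 / 1e6 + 500 * 15 / 1e6 = 0.006 + 0.0075 = 0.0135 USD
    let pricing = ModelPricing { input_per_million: 3.0, output_per_million: 15.0 };
    let cost = cost_usd(&pricing, 2000, 500);
    assert!((cost - 0.0135).abs() < 1e-9);
}
```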
Cost Aggregation
CostSummary provides aggregated cost data with breakdowns:
- By model — cost per model across all sessions
- By day — daily cost trends
- Total — overall cost across the aggregation window
Use aggregate_cost_records() to produce summaries from a collection of LlmCostRecord entries, with optional session and time-range filters.
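The by-model breakdown can be sketched as follows. This is a minimal stand-in, not the actual aggregate_cost_records() implementation; the fields on LlmCostRecord and the summary shape are assumptions:

```rust
use std::collections::HashMap;

// Minimal stand-in for an LLM cost record; real fields may differ.
struct LlmCostRecord {
    model: String,
    cost_usd: f64,
}

// Sum cost per model, as in the "by model" breakdown of CostSummary.
fn aggregate_by_model(records: &[LlmCostRecord]) -> HashMap<String, f64> {
    let mut by_model: HashMap<String, f64> = HashMap::new();
    for r in records {
        *by_model.entry(r.model.clone()).or_insert(0.0) += r.cost_usd;
    }
    by_model
}

fn main() {
    let records = vec![
        LlmCostRecord { model: "model-a".into(), cost_usd: 0.01 },
        LlmCostRecord { model: "model-a".into(), cost_usd: 0.02 },
        LlmCostRecord { model: "model-b".into(), cost_usd: 0.05 },
    ];
    let summary = aggregate_by_model(&records);
    assert!((summary["model-a"] - 0.03).abs() < 1e-9);
    assert!((summary["model-b"] - 0.05).abs() < 1e-9);
}
```

A session or time-range filter would simply drop records before this summation step.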
Tool Metrics
ToolMetrics tracks per-tool execution statistics within a session.
These metrics are recorded on tracing spans via record_tool_result() and can be aggregated for dashboards and alerting.
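A hypothetical sketch of such per-tool counters is below. The actual ToolMetrics fields are not shown in this page, so the names here (calls, failures, total_duration_ms) are assumptions:

```rust
// Hypothetical per-tool execution counters; field names are assumed.
#[derive(Default)]
struct ToolMetrics {
    calls: u64,
    failures: u64,
    total_duration_ms: u64,
}

impl ToolMetrics {
    // Record one tool execution outcome, as record_tool_result() might.
    fn record(&mut self, success: bool, duration_ms: u64) {
        self.calls += 1;
        if !success {
            self.failures += 1;
        }
        self.total_duration_ms += duration_ms;
    }

    fn success_rate(&self) -> f64 {
        if self.calls == 0 {
            return 1.0;
        }
        (self.calls - self.failures) as f64 / self.calls as f64
    }
}

fn main() {
    let mut m = ToolMetrics::default();
    m.record(true, 120);
    m.record(false, 300);
    assert_eq!(m.calls, 2);
    assert!((m.success_rate() - 0.5).abs() < 1e-9);
}
```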
Integration
The telemetry module uses standard tracing spans and attributes. To collect telemetry data:
- Configure a tracing subscriber — Any OpenTelemetry-compatible collector works (Jaeger, Zipkin, OTLP exporters)
- Spans are emitted automatically — No additional code needed beyond subscriber setup
- Cost records — Available via the LlmCostRecord type for custom aggregation
use tracing_subscriber::prelude::*;
// Example: export to OTLP collector
let tracer = opentelemetry_otlp::new_pipeline()
.tracing()
.install_batch(opentelemetry_sdk::runtime::Tokio)?;
tracing_subscriber::registry()
.with(tracing_opentelemetry::layer().with_tracer(tracer))
.init();
// All agent operations now emit structured spans
let result = session.send("Analyze this codebase").await?;

Helper functions record_llm_usage() and record_tool_result() record metrics on the current active span. The TimedSpan guard automatically measures elapsed time for any scoped operation.
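The guard pattern behind TimedSpan can be sketched with std::time::Instant and Drop. This stand-in prints the elapsed time; the real TimedSpan records it onto the active tracing span instead:

```rust
use std::time::Instant;

// Stand-in for a TimedSpan-style guard: starts a timer on creation
// and reports elapsed wall time when it goes out of scope.
struct TimedGuard {
    name: &'static str,
    start: Instant,
}

impl TimedGuard {
    fn new(name: &'static str) -> Self {
        Self { name, start: Instant::now() }
    }
}

impl Drop for TimedGuard {
    fn drop(&mut self) {
        // The real guard would attach this to the current span.
        println!("{} took {:?}", self.name, self.start.elapsed());
    }
}

fn main() {
    let _guard = TimedGuard::new("tool.execute");
    // ... scoped work happens here ...
    let sum: u64 = (0..1000).sum();
    assert_eq!(sum, 499_500);
} // guard drops here and reports the elapsed time
```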
API Reference
Span hierarchy
Cost tracking fields
Setup (Rust)
use tracing_subscriber::prelude::*;
// stdout (development)
tracing_subscriber::fmt().init();
// OTLP (production)
let tracer = opentelemetry_otlp::new_pipeline()
.tracing()
.install_batch(opentelemetry_sdk::runtime::Tokio)?;
tracing_subscriber::registry()
.with(tracing_opentelemetry::layer().with_tracer(tracer))
    .init();

Environment variables