Telemetry
Tracing spans, cost tracking, and tool execution metrics
A3S Code emits structured telemetry via OpenTelemetry-compatible tracing spans. Every LLM call, tool execution, and agent turn is instrumented with detailed attributes for observability, cost tracking, and performance analysis.
Span Hierarchy
The agent produces a nested span tree for each execution:
a3s.agent.execute ← top-level span per send()/stream()
├── a3s.agent.turn ← one per agent turn (LLM call + tool loop)
│ ├── a3s.context.resolve ← context provider resolution
│ ├── a3s.llm.completion ← LLM API call
│ ├── a3s.tool.execute ← tool execution (one per tool call)
│ ├── a3s.llm.completion ← follow-up LLM call after tool results
│ └── a3s.tool.execute ← ...
├── a3s.agent.turn ← next turn
│ └── ...
└── (end)

Each span carries attributes that describe what happened during that phase.
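To make the parent/child relationships concrete, the nesting above can be mimicked with a toy recorder. The `span` context manager here is a hypothetical illustration, not part of the A3S Code API; it only records which span is active when a child starts:

```python
from contextlib import contextmanager

spans = []   # list of (name, parent_name) pairs
_stack = []  # currently open spans

@contextmanager
def span(name):
    """Record a span and its parent, mirroring the tree above."""
    parent = _stack[-1] if _stack else None
    spans.append((name, parent))
    _stack.append(name)
    try:
        yield
    finally:
        _stack.pop()

# One agent turn: context resolution, an LLM call, then a tool call
with span("a3s.agent.execute"):
    with span("a3s.agent.turn"):
        with span("a3s.context.resolve"):
            pass
        with span("a3s.llm.completion"):
            pass
        with span("a3s.tool.execute"):
            pass
```

Every child under `a3s.agent.turn` records that turn as its parent, which is exactly the shape an OpenTelemetry backend reconstructs from the real spans.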
Span Attributes
Agent-level (a3s.agent.execute, a3s.agent.turn)
LLM-level (a3s.llm.completion)
Tool-level (a3s.tool.execute)
Context-level (a3s.context.resolve)
Cost Tracking
The telemetry module tracks LLM costs per call using LlmCostRecord.
Model Pricing
Cost is calculated using ModelPricing:
cost_usd = (prompt_tokens * input_per_million / 1_000_000)
         + (completion_tokens * output_per_million / 1_000_000)

A built-in pricing registry (default_model_pricing()) covers common models from Anthropic, OpenAI, and others. Custom pricing can be specified per model in the agent config's cost block.
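As an illustration, the per-call arithmetic can be sketched directly from the formula above. The function and the pricing figures here are purely illustrative; consult your provider's current rates and the library's ModelPricing type for the real field names:

```python
def call_cost_usd(prompt_tokens, completion_tokens,
                  input_per_million, output_per_million):
    """Cost of one LLM call given per-million-token pricing."""
    return (prompt_tokens * input_per_million / 1_000_000
            + completion_tokens * output_per_million / 1_000_000)

# Example: 12,000 prompt tokens and 1,500 completion tokens
# at $3 / $15 per million tokens (illustrative rates only)
cost = call_cost_usd(12_000, 1_500, 3.0, 15.0)
print(f"${cost:.4f}")  # → $0.0585
```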
Cost Aggregation
CostSummary provides aggregated cost data with breakdowns:
- By model — cost per model across all sessions
- By day — daily cost trends
- Total — overall cost across the aggregation window
Use aggregate_cost_records() to produce summaries from a collection of LlmCostRecord entries, with optional session and time-range filters.
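The shape of that aggregation can be sketched as follows. The record fields (`model`, `day`, `cost_usd`) and the plain-dict records are stand-ins for illustration; the library's actual LlmCostRecord and CostSummary types may differ:

```python
from collections import defaultdict

def aggregate(records):
    """Roll illustrative cost records up into by-model, by-day, and total sums."""
    by_model = defaultdict(float)
    by_day = defaultdict(float)
    for r in records:
        by_model[r["model"]] += r["cost_usd"]
        by_day[r["day"]] += r["cost_usd"]
    return {
        "by_model": dict(by_model),
        "by_day": dict(by_day),
        "total": sum(by_model.values()),
    }

records = [
    {"model": "claude-sonnet", "day": "2025-01-01", "cost_usd": 0.05},
    {"model": "gpt-4o",        "day": "2025-01-01", "cost_usd": 0.02},
    {"model": "claude-sonnet", "day": "2025-01-02", "cost_usd": 0.03},
]
summary = aggregate(records)
```

Session and time-range filtering amounts to dropping records before this rollup, which is what the optional filters on aggregate_cost_records() describe.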
Tool Metrics
ToolMetrics tracks per-tool execution statistics within a session.
These metrics are recorded on tracing spans via record_tool_result() and can be aggregated for dashboards and alerting.
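A minimal sketch of the kind of counters such metrics cover, assuming per-tool call counts, failure counts, and durations (the class and field names here are hypothetical, not the library's ToolMetrics API):

```python
import statistics

class ToolStats:
    """Illustrative per-tool counters, analogous to what ToolMetrics tracks."""

    def __init__(self):
        self.calls = 0
        self.failures = 0
        self.durations_ms = []

    def record(self, duration_ms, ok=True):
        # One entry per tool execution, as record_tool_result() would capture
        self.calls += 1
        if not ok:
            self.failures += 1
        self.durations_ms.append(duration_ms)

    @property
    def mean_ms(self):
        return statistics.mean(self.durations_ms)

stats = ToolStats()
stats.record(120.0)           # successful call
stats.record(80.0, ok=False)  # failed call
```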
Integration
The telemetry module emits standard OpenTelemetry spans. They fire automatically during every send() / stream() call, with no changes to your agent code. Configure the collector via environment variables before starting your process:
import { Agent } from '@a3s-lab/code';
// Set env vars before running (or in your shell):
// OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
// OTEL_SERVICE_NAME=my-agent
const agent = await Agent.create('agent.hcl');
const session = agent.session('/project');
// Spans are emitted automatically — no extra code needed
const result = await session.send('Analyze this codebase');

import os
from a3s_code import Agent
# Set env vars before running (or in your shell):
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
# OTEL_SERVICE_NAME=my-agent
os.environ['OTEL_EXPORTER_OTLP_ENDPOINT'] = 'http://localhost:4318'
agent = Agent.create('agent.hcl')
session = agent.session('/project')
# Spans are emitted automatically — no extra code needed
result = session.send('Analyze this codebase')
Setup
Configure the OTLP collector endpoint via environment variables — no subscriber setup needed in application code:
# Development: stdout logging
export RUST_LOG=a3s_code_core=debug
# Production: OTLP exporter
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_SERVICE_NAME=my-agent

Spans are collected and exported automatically by the native Rust layer.