# Accessing Traces

Retrieve and analyze trace data from your agents and pools.
After running agents with tracing enabled, you can access the trace data to analyze execution details, costs, performance, and errors.
## Getting the Tracer
Get the global tracer instance to access stored data.
```python
from peargent.observability import get_tracer

tracer = get_tracer()
```
## Listing Traces

Retrieve a list of traces with optional filtering.
```python
traces = tracer.list_traces(
    agent_name=None,  # filter by agent name (str)
    session_id=None,  # filter by session ID (str)
    user_id=None,     # filter by user ID (str)
    limit=100,        # maximum number of traces to return (int)
)
```
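For example, to pull the most recent traces for a single agent and skim them (the agent name `"research_agent"` is a hypothetical placeholder; the attributes used are documented in the table below):

```python
recent = tracer.list_traces(agent_name="research_agent", limit=20)
for t in recent:
    print(t.id, t.agent_name, f"${t.total_cost:.4f}")
```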
## Getting a Single Trace

Retrieve a full trace object by its unique ID.
```python
trace = tracer.get_trace(trace_id)  # trace_id: str
```
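For example, re-fetching the first trace returned by `list_traces()` by its ID:

```python
traces = tracer.list_traces(limit=1)
trace = tracer.get_trace(traces[0].id)
```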
## Trace Object Structure

The `Trace` object contains the following properties:
| Property | Type | Description |
|---|---|---|
| `id` | `str` | Unique identifier for the trace. |
| `agent_name` | `str` | Name of the agent that executed. |
| `session_id` | `str` | Session ID (if set). |
| `user_id` | `str` | User ID (if set). |
| `input_data` | `Any` | Input provided to the agent. |
| `output` | `Any` | Final output from the agent. |
| `start_time` | `datetime` | When execution started. |
| `end_time` | `datetime` | When execution ended. |
| `duration_ms` | `float` | Total duration in milliseconds. |
| `total_tokens` | `int` | Total tokens used (prompt + completion). |
| `total_cost` | `float` | Total cost in USD. |
| `error` | `str` | Error message if execution failed. |
| `spans` | `List[Span]` | List of operations within the trace. |
Example:

```python
print(f"Trace ID: {trace.id}")
```
## Span Object Structure

The `Span` object represents a single operation (an LLM call, a tool execution, etc.):
| Property | Type | Description |
|---|---|---|
| `span_type` | `str` | Type of span: `"llm"`, `"tool"`, or `"agent"`. |
| `name` | `str` | Name of the model or tool. |
| `start_time` | `datetime` | Start timestamp. |
| `end_time` | `datetime` | End timestamp. |
| `duration_ms` | `float` | Duration in milliseconds. |
| `cost` | `float` | Cost of this specific operation. |
Example:

```python
print(f"Span duration: {span.duration_ms}ms")
```
**LLM Spans only:**

| Property | Type | Description |
|---|---|---|
| `llm_model` | `str` | Model name (e.g., `"gpt-4o"`). |
| `llm_prompt` | `str` | The prompt sent to the model. |
| `llm_response` | `str` | The response received. |
| `prompt_tokens` | `int` | Token count for the prompt. |
| `completion_tokens` | `int` | Token count for the completion. |
Example:

```python
print(f"Model used: {span.llm_model}")
```
**Tool Spans only:**

| Property | Type | Description |
|---|---|---|
| `tool_name` | `str` | Name of the tool executed. |
| `tool_args` | `dict` | Arguments passed to the tool. |
| `tool_output` | `str` | Output returned by the tool. |
Example:

```python
print(f"Tool output: {span.tool_output}")
```
## Printing Traces

Print traces to the console for debugging.
```python
tracer.print_traces(
    limit=10,        # number of traces to print (int)
    format="table",  # "table", "json", "markdown", or "terminal" (str)
)
```
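For example, to dump the five most recent traces as JSON instead of a table:

```python
tracer.print_traces(limit=5, format="json")
```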
## Printing Summary

Print a high-level summary of usage and costs.
```python
tracer.print_summary(
    agent_name=None,  # filter by agent name (str)
    session_id=None,  # filter by session ID (str)
    user_id=None,     # filter by user ID (str)
    limit=None,       # max number of traces to include (int)
)
```
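The same filters narrow the summary to a slice of your data, for example a single session (the session ID here is a hypothetical placeholder):

```python
tracer.print_summary(session_id="session-123")
```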
## Aggregate Statistics

Get a dictionary of aggregated metrics programmatically.
```python
stats = tracer.get_aggregate_stats(
    agent_name=None,  # filter by agent name (str)
    session_id=None,  # filter by session ID (str)
    user_id=None,     # filter by user ID (str)
    limit=None,       # max number of traces to include (int)
)
```
**Returned Stats Dictionary:**

| Key | Type | Description |
|---|---|---|
| `total_traces` | `int` | Total number of traces matching the filters. |
| `total_cost` | `float` | Total cost in USD. |
| `total_tokens` | `int` | Total tokens used. |
| `total_duration` | `float` | Total duration in milliseconds. |
| `total_llm_calls` | `int` | Total number of LLM calls. |
| `total_tool_calls` | `int` | Total number of tool executions. |
| `avg_cost_per_trace` | `float` | Average cost per trace. |
| `avg_tokens_per_trace` | `float` | Average tokens per trace. |
| `avg_duration_ms` | `float` | Average duration per trace in milliseconds. |
| `agents_used` | `List[str]` | List of unique agent names found. |
Example:

```python
print(f"Total cost: ${stats['total_cost']}")
```
## What's Next?

**Cost Tracking**: a deep dive into cost analysis, token counting, and optimization strategies.