
Accessing Traces

Retrieve and analyze trace data from your agents and pools

After running agents with tracing enabled, you can access the trace data to analyze execution details, costs, performance, and errors.

Getting the Tracer

Get the global tracer instance to access stored data.

from peargent.observability import get_tracer
tracer = get_tracer()

Listing Traces

Retrieve a list of traces with optional filtering.

traces = tracer.list_traces(
    agent_name=None,  # str, optional: filter by agent name
    session_id=None,  # str, optional: filter by session ID
    user_id=None,     # str, optional: filter by user ID
    limit=100,        # int: maximum number of traces to return
)
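
For example, you can scan recent runs for one agent. A minimal sketch: "support_agent" is a placeholder name, and the printed properties come from the Trace table below.

from peargent.observability import get_tracer

tracer = get_tracer()

# "support_agent" is a hypothetical agent name; substitute your own
for trace in tracer.list_traces(agent_name="support_agent", limit=5):
    print(f"{trace.id}: {trace.duration_ms:.0f} ms, ${trace.total_cost:.4f}")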

Getting a Single Trace

Retrieve a full trace object by its unique ID.

trace = tracer.get_trace(trace_id)  # trace_id: the trace's unique ID (str)

Trace Object Structure

The Trace object contains the following properties:

| Property | Type | Description |
| --- | --- | --- |
| id | str | Unique identifier for the trace. |
| agent_name | str | Name of the agent that executed. |
| session_id | str | Session ID (if set). |
| user_id | str | User ID (if set). |
| input_data | Any | Input provided to the agent. |
| output | Any | Final output from the agent. |
| start_time | datetime | When execution started. |
| end_time | datetime | When execution ended. |
| duration_ms | float | Total duration in milliseconds. |
| total_tokens | int | Total tokens used (prompt + completion). |
| total_cost | float | Total cost in USD. |
| error | str | Error message if execution failed. |
| spans | List[Span] | List of operations within the trace. |

Example:

print(f"Trace ID: {trace.id}")

Span Object Structure

The Span object represents a single operation (LLM call, tool execution, etc.):

| Property | Type | Description |
| --- | --- | --- |
| span_type | str | Type of span: "llm", "tool", or "agent". |
| name | str | Name of the model or tool. |
| start_time | datetime | Start timestamp. |
| end_time | datetime | End timestamp. |
| duration_ms | float | Duration in milliseconds. |
| cost | float | Cost of this specific operation. |

Example:

print(f"Span duration: {span.duration_ms}ms")

Properties available only on LLM spans (span_type == "llm"):

| Property | Type | Description |
| --- | --- | --- |
| llm_model | str | Model name (e.g., "gpt-4o"). |
| llm_prompt | str | The prompt sent to the model. |
| llm_response | str | The response received. |
| prompt_tokens | int | Token count for the prompt. |
| completion_tokens | int | Token count for the completion. |

Example:

print(f"Model used: {span.llm_model}")

Properties available only on tool spans (span_type == "tool"):

| Property | Type | Description |
| --- | --- | --- |
| tool_name | str | Name of the tool executed. |
| tool_args | dict | Arguments passed to the tool. |
| tool_output | str | Output returned by the tool. |

Example:

print(f"Tool output: {span.tool_output}")

Printing Traces

Print traces to the console for debugging.

tracer.print_traces(
    limit=10,        # int: number of traces to print
    format="table",  # str: "table", "json", "markdown", or "terminal"
)

Printing Summary

Print a high-level summary of usage and costs.

tracer.print_summary(
    agent_name=None,  # str, optional: filter by agent name
    session_id=None,  # str, optional: filter by session ID
    user_id=None,     # str, optional: filter by user ID
    limit=None,       # int, optional: max number of traces considered
)

Aggregate Statistics

Get a dictionary of aggregated metrics programmatically.

stats = tracer.get_aggregate_stats(
    agent_name=None,  # optional filters, same as print_summary()
    session_id=None,
    user_id=None,
    limit=None,
)

Returned Stats Dictionary:

| Key | Type | Description |
| --- | --- | --- |
| total_traces | int | Total number of traces matching the filters. |
| total_cost | float | Total cost in USD. |
| total_tokens | int | Total tokens used. |
| total_duration | float | Total duration in milliseconds. |
| total_llm_calls | int | Total number of LLM calls. |
| total_tool_calls | int | Total number of tool executions. |
| avg_cost_per_trace | float | Average cost per trace. |
| avg_tokens_per_trace | float | Average tokens per trace. |
| avg_duration_ms | float | Average duration per trace. |
| agents_used | List[str] | List of unique agent names found. |

Example:

print(f"Total cost: ${stats['total_cost']}")

What's Next?

Cost Tracking: a deep dive into cost analysis, token counting, and optimization strategies.