Decision Graph

The Decision Graph reconstructs agent decision trees from trace events stored in Cortex. It assembles events into tree structures for causal analysis, enabling questions like "why did the agent take this action?" and "what happened in this session?"

TraceEventNode

Each event in the graph is represented as a TraceEventNode:

```typescript
type TraceEventNode = {
  id: string;                    // Unique event identifier
  type: string;                  // tool_call | llm_call | decision | delegation | error
  agentId: string;               // Which agent produced this event
  timestamp: string;             // ISO 8601
  children: TraceEventNode[];    // Child events caused by this event
  fields: Record<string, string>;// Event-specific metadata
};
```

Nodes form a forest (multiple root events per session). Parent-child relationships are established through the parentEvent field on each trace event.
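As a concrete illustration, an llm_call node with one tool_call child might look like this (all IDs and field values are hypothetical; the type is repeated so the example stands alone):

```typescript
// TraceEventNode repeated from above so this example is self-contained.
type TraceEventNode = {
  id: string;
  type: string;
  agentId: string;
  timestamp: string;
  children: TraceEventNode[];
  fields: Record<string, string>;
};

// A hypothetical llm_call event that caused a tool_call child.
const llmCall: TraceEventNode = {
  id: "evt_1",
  type: "llm_call",
  agentId: "orchestrator",
  timestamp: "2025-01-15T10:00:00Z",
  fields: { model: "claude-sonnet-4-6", totalTokens: "1200" },
  children: [
    {
      id: "evt_2",
      type: "tool_call",
      agentId: "orchestrator",
      timestamp: "2025-01-15T10:00:01Z",
      fields: { toolName: "Read", input: "src/index.ts", output: "file contents" },
      children: [],
    },
  ],
};
```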

Event types

| Type | Fields | Description |
| --- | --- | --- |
| tool_call | toolName, input, output | A tool invocation with duration |
| llm_call | model, promptTokens, completionTokens, totalTokens | An LLM API call |
| decision | description, alternatives, chosen, reasoning | A decision point with options |
| delegation | parentId, childId, task, runId | Subagent delegation |
| error | error, context | An error occurrence |

Decision tree construction

The buildFromSession(sessionKey) method reconstructs a full session tree:

  1. Fetches all trace events for the session from Cortex
  2. Sorts events by timestamp (ascending)
  3. Builds a node map keyed by event ID
  4. Links parent-child relationships via parentEvent
  5. Identifies root nodes (events without a parent in the session)
  6. Calculates tree depth
The result is a DecisionTree:

```typescript
type DecisionTree = {
  rootEventId: string;          // First root event
  events: TraceEventNode[];     // Root-level event nodes (forest)
  depth: number;                // Maximum tree depth
};
```
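The steps above can be sketched as follows. The flat event shape returned by Cortex is an assumption, and the fetch step is omitted so the sketch stays self-contained; the helper (here called `buildTree`) takes already-fetched events:

```typescript
// Flat trace event as assumed to come back from Cortex (shape is illustrative).
type RawTraceEvent = {
  id: string;
  type: string;
  agentId: string;
  timestamp: string;
  parentEvent?: string;          // ID of the causing event, if any
  fields: Record<string, string>;
};

type TraceEventNode = RawTraceEvent & { children: TraceEventNode[] };

type DecisionTree = {
  rootEventId: string;
  events: TraceEventNode[];
  depth: number;
};

function buildTree(raw: RawTraceEvent[]): DecisionTree {
  // Step 2: sort events by timestamp (ascending); ISO 8601 sorts lexically.
  const sorted = [...raw].sort((a, b) => a.timestamp.localeCompare(b.timestamp));

  // Step 3: build a node map keyed by event ID.
  const nodes = new Map<string, TraceEventNode>();
  for (const ev of sorted) nodes.set(ev.id, { ...ev, children: [] });

  // Steps 4-5: link children via parentEvent; events whose parent is not
  // in the session become roots.
  const roots: TraceEventNode[] = [];
  for (const node of nodes.values()) {
    const parent = node.parentEvent ? nodes.get(node.parentEvent) : undefined;
    if (parent) parent.children.push(node);
    else roots.push(node);
  }

  // Step 6: maximum depth over the forest.
  const depthOf = (n: TraceEventNode): number =>
    1 + Math.max(0, ...n.children.map(depthOf));

  return {
    rootEventId: roots[0]?.id ?? "",
    events: roots,
    depth: Math.max(0, ...roots.map(depthOf)),
  };
}
```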

Example tree

```
session_start
├── llm_call (claude-sonnet-4-6, 1200 tokens)
│   └── tool_call (Read, 45ms)
├── decision ("use vitest or jest", chosen: "vitest")
│   └── tool_call (Write, 120ms)
└── llm_call (claude-sonnet-4-6, 800 tokens)
```

Causal chain analysis

The explainAction(eventId) method answers "why did this happen?" by walking up the parent chain:

  1. Starts from the target event
  2. Follows parentEvent links to the root
  3. Builds a chain of CausalChainLink entries
  4. Stops when there is no parent or a cycle is detected (via a visited set)
```typescript
type CausalChainLink = {
  eventId: string;
  type: string;
  agentId: string;
  timestamp: string;
  summary: string;    // Human-readable summary of the event
};
```
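A minimal sketch of that walk, assuming a flat event shape with a `parentEvent` field and a summary callback (the lookup map and callback are stand-ins, not the documented API):

```typescript
// Flat trace event shape, assumed for this sketch.
type RawTraceEvent = {
  id: string;
  type: string;
  agentId: string;
  timestamp: string;
  parentEvent?: string;
  fields: Record<string, string>;
};

type CausalChainLink = {
  eventId: string;
  type: string;
  agentId: string;
  timestamp: string;
  summary: string;
};

// Walk the parentEvent chain from the target event toward the root,
// stopping on a missing parent or a cycle (tracked with a visited set).
function explainAction(
  eventId: string,
  events: Map<string, RawTraceEvent>,
  summarize: (ev: RawTraceEvent) => string,
): CausalChainLink[] {
  const chain: CausalChainLink[] = [];
  const visited = new Set<string>();
  let current = events.get(eventId);
  while (current && !visited.has(current.id)) {
    visited.add(current.id);
    chain.push({
      eventId: current.id,
      type: current.type,
      agentId: current.agentId,
      timestamp: current.timestamp,
      summary: summarize(current),
    });
    current = current.parentEvent ? events.get(current.parentEvent) : undefined;
  }
  return chain; // target event first, root cause last
}
```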

Event summaries

The graph generates natural-language summaries for each event type:

  • tool_call: `Tool call: Read (45ms)`
  • llm_call: `LLM call: claude-sonnet-4-6 (1200 tokens, 350ms)`
  • decision: `Decision: use vitest or jest -> vitest`
  • delegation: `Delegation: orchestrator -> reviewer (code review)`
  • error: `Error: ENOENT: no such file`
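A summarizer producing these strings could be sketched as below; the `durationMs` field (mentioned under Performance analysis) is assumed to live in each event's `fields` map:

```typescript
// Build a human-readable summary for a trace event.
// Assumes string-valued fields, including a durationMs field on
// tool_call and llm_call events.
function summarize(ev: { type: string; fields: Record<string, string> }): string {
  const f = ev.fields;
  switch (ev.type) {
    case "tool_call":
      return `Tool call: ${f.toolName} (${f.durationMs}ms)`;
    case "llm_call":
      return `LLM call: ${f.model} (${f.totalTokens} tokens, ${f.durationMs}ms)`;
    case "decision":
      return `Decision: ${f.description} -> ${f.chosen}`;
    case "delegation":
      return `Delegation: ${f.parentId} -> ${f.childId} (${f.task})`;
    case "error":
      return `Error: ${f.error}`;
    default:
      return `${ev.type} event`; // fallback for unknown types
  }
}
```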

Event querying

The queryEvents(agentId, from?, to?, types?) method supports filtered queries:

  • Filter by agent ID (required)
  • Filter by time range (optional)
  • Filter by event types (optional, comma-separated)
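The filtering logic can be sketched as a pure function over already-fetched events (the flat event shape is assumed; ISO 8601 timestamps compare correctly as strings):

```typescript
// Flat trace event shape, assumed for this sketch.
type RawTraceEvent = {
  id: string;
  type: string;
  agentId: string;
  timestamp: string;
  fields: Record<string, string>;
};

// Filter by required agentId, optional ISO 8601 time range, and an
// optional comma-separated list of event types.
function queryEvents(
  events: RawTraceEvent[],
  agentId: string,
  from?: string,
  to?: string,
  types?: string,
): RawTraceEvent[] {
  const wanted = types
    ? new Set(types.split(",").map((t) => t.trim()))
    : undefined;
  return events.filter(
    (e) =>
      e.agentId === agentId &&
      (!from || e.timestamp >= from) &&
      (!to || e.timestamp <= to) &&
      (!wanted || wanted.has(e.type)),
  );
}
```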

Use cases

Debugging agent behavior

Trace a tool call back to the decision that caused it:

```bash
mayros observe explain <eventId>
```

Session replay

Reconstruct the full decision tree for a session to understand the agent's reasoning path.

Performance analysis

Identify slow operations by examining durationMs on tool_call and llm_call nodes.
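A minimal traversal for this, assuming `durationMs` is stored in each node's `fields` map, might look like:

```typescript
// Tree node shape, repeated so this sketch is self-contained.
type TraceEventNode = {
  id: string;
  type: string;
  agentId: string;
  timestamp: string;
  children: TraceEventNode[];
  fields: Record<string, string>;
};

// Collect tool_call and llm_call nodes whose durationMs exceeds a threshold.
function findSlowOps(roots: TraceEventNode[], thresholdMs: number): TraceEventNode[] {
  const slow: TraceEventNode[] = [];
  const visit = (n: TraceEventNode): void => {
    const ms = Number(n.fields.durationMs); // NaN if absent, so the check fails safely
    if ((n.type === "tool_call" || n.type === "llm_call") && ms > thresholdMs) {
      slow.push(n);
    }
    n.children.forEach(visit);
  };
  roots.forEach(visit);
  return slow;
}
```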

Multi-agent coordination

Track delegation chains across parent and child agents to understand distributed workflows.