Instrument your first agent

Add behavioral tracing to your agent in two lines of code.

Basic setup

my_agent.py
import spooled
from spooled.wrappers import wrap_openai
from openai import OpenAI

spooled.init(agent_id="my_agent")
client = wrap_openai(OpenAI())  # Wraps the client for auto-capture

# Your agent code runs normally below this line.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)

spooled.shutdown()

Wrap your LLM client, and Spooled records execution structure automatically:

  • LLM calls — model name, token counts, latency
  • Tool calls — function name, argument shapes, results
  • HTTP requests — method, URL, status code
  • Hash chain — SHA-256 linked ordering for tamper evidence
Note
Spooled captures execution structure, not call content. Prompts, responses, and tool argument values are stripped at the SDK level. See Privacy Architecture for exactly what is and isn't transmitted.
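
The hash chain in the last bullet works by having each record commit to the SHA-256 hash of the one before it, so editing any earlier record breaks every link after it. A minimal conceptual sketch (not Spooled's actual record format):

```python
import hashlib
import json

def chain_records(records):
    """Link records so each entry commits to its predecessor's hash."""
    prev_hash = "0" * 64  # genesis value for the first record
    chained = []
    for record in records:
        entry = {"record": record, "prev_hash": prev_hash}
        # Hash the canonical JSON form of the entry itself.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        chained.append(entry)
        prev_hash = entry["hash"]
    return chained

def verify_chain(chained):
    """Recompute every hash; any edit to an earlier record breaks the chain."""
    prev_hash = "0" * 64
    for entry in chained:
        expected = hashlib.sha256(
            json.dumps(
                {"record": entry["record"], "prev_hash": entry["prev_hash"]},
                sort_keys=True,
            ).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

Tamper with any record after the chain is built and `verify_chain` returns `False`, which is what makes the ordering tamper-evident.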

Decorator API (recommended)

For explicit control, use the @spooled.trace decorator:

my_agent.py
import spooled

@spooled.trace(agent_id="my_agent")
def run_agent(query: str):
    # Your agent logic here
    result = call_llm(query)
    data = search_tool(result)
    return summarize(data)

run_agent("What's the weather?")

You can also mark individual functions as tools or observation points:

import requests

@spooled.tool()
def search_tool(query: str):
    return requests.get(f"https://api.example.com/search?q={query}")

@spooled.observe()
def validate_result(result):
    assert result is not None
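
The "argument shapes" mentioned earlier refer to structure like types and sizes, never values. A toy illustration of shape extraction (the field names here are assumptions, not Spooled's actual schema):

```python
def arg_shape(value):
    """Describe a value's structure without exposing its content."""
    if isinstance(value, str):
        return {"type": "str", "length": len(value)}
    if isinstance(value, (list, tuple)):
        return {"type": type(value).__name__, "length": len(value)}
    if isinstance(value, dict):
        # Key names are structure; the values behind them are not recorded.
        return {"type": "dict", "keys": sorted(value)}
    return {"type": type(value).__name__}
```

So a query string becomes `{"type": "str", "length": 5}` in the trace rather than the string itself.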

Auto-instrumented libraries

Spooled automatically captures calls to these libraries with no additional code:

  • OpenAI — sync and async (chat.completions)
  • Anthropic — sync and async (messages.create)
  • AWS Bedrock — invoke model calls
  • requests — HTTP requests
  • httpx — sync and async HTTP
  • aiohttp — async HTTP sessions

Framework integrations (LangChain, LlamaIndex, AutoGen) use callback handlers. See Frameworks.
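
Auto-capture of this kind generally works by wrapping a library's call sites so that metadata (call name, latency, status) is recorded while arguments and return content are not. A generic sketch of the idea, not Spooled's actual mechanism:

```python
import functools
import time

captured = []

def instrument(func, kind):
    """Wrap a callable, recording structure (name, latency) but no content."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            captured.append({
                "kind": kind,
                "name": func.__name__,
                "latency_ms": (time.perf_counter() - start) * 1000,
                # args/kwargs and the result are deliberately not stored.
            })
    return wrapper

def fake_http_get(url):
    return {"status": 200}

fake_http_get = instrument(fake_http_get, kind="http")
```

The `finally` block ensures the call is recorded even if the wrapped function raises, mirroring how failed requests still appear in a trace.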

Shutdown

Always call spooled.shutdown() when your agent finishes to flush the trace:

spooled.shutdown(success=True)

If using the @spooled.trace decorator, shutdown is handled automatically.

View the trace

spooled list traces
spooled view trace <run-id>

List traces first, then view one by run ID to see the full interaction sequence with types, timing, and hash chain.

Generate a baseline

Run your agent at least 3 times, then:

spooled ci update-baseline \
    --from .spooled/traces/ \
    --out baselines/ \
    --min-runs 3

Commit the baselines directory to git alongside your code.

Compare against baseline

spooled ci compare \
    .spooled/traces/<latest-trace>.jsonl \
    --baseline baselines/

This compares the latest trace against the baseline and exits with code 1 if the agent is blocked by policy. For automated CI on every PR, see the CI/CD guide.