AI agent observability: see what they do, why, and what it costs

Debug agent decisions, monitor LLM calls, and track exactly where every token and dollar goes.

Automatic setup
# 1. Install the Spanora skill
npx skills add spanora/skills

# 2. Ask your AI agent to set it up
> Integrate Spanora into this project
[Live demo: trace timeline for support-agent (status: Success) — LLM call gpt-4o ($0.0089), tool call search-kb (480ms), LLM call gpt-4o ($0.0053); total duration 2.4s, with running token, cost, and duration counters]
Works with: OpenTelemetry, Vercel AI SDK, LangChain, OpenAI, Anthropic

LLM cost tracking and analytics: how much, how often, how reliable

Track spend, success rates, and agent performance — all in one dashboard.

[Dashboard preview: spanora.ai/dashboard — Total Traces, Total Cost, Success Rate, and Avg Duration counters; 7-day cost trend and outcomes charts]
Agent leaderboard

Agent           Cost    Traces  Success
support-agent   $5.42   486     96.1%
code-reviewer   $4.21   412     93.4%
data-pipeline   $2.85   386     91.7%

LLM monitoring and debugging built for AI engineers

Everything you need to trace, debug, and monitor your AI agent executions.

OTEL-Native

Works with standard OpenTelemetry data. No vendor lock-in, no proprietary SDK required.

Trace Timeline

Gantt-style visualization of every LLM call, tool invocation, and operation in your execution.

Cost Tracking

Auto-calculated costs for 100+ models. Know exactly how much each execution costs.
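Under the hood, per-model cost tracking is token arithmetic against a price table. A minimal sketch of how such a calculation could work — the model names and per-million-token prices below are illustrative assumptions, not Spanora's actual pricing data:

```typescript
// Illustrative per-million-token prices (assumed values, not real pricing data).
const PRICES: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 2.5, output: 10.0 },
  "claude-sonnet": { input: 3.0, output: 15.0 },
};

// Cost = (inputTokens / 1M) * inputPrice + (outputTokens / 1M) * outputPrice
function llmCallCost(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICES[model];
  if (!p) throw new Error(`unknown model: ${model}`);
  return (inputTokens / 1_000_000) * p.input + (outputTokens / 1_000_000) * p.output;
}

console.log(llmCallCost("gpt-4o", 1200, 350).toFixed(4)); // "0.0065"
```

Summing this per-call figure across every span in a trace yields the per-execution cost shown in the timeline.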

Prompt Inspection

Full prompt input and output for every LLM call. Debug unexpected behaviors instantly.

Tool Monitoring

Track tool call status, inputs, outputs, and durations across your agent executions.

Multi-Tenant

Group traces by session, user, or organization. Understand who drives usage and cost.
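Grouping like this typically rides on span attributes. A hedged sketch of tagging a span's attribute map with tenant context — the attribute keys (`session.id`, `user.id`, `org.id`) are assumptions for illustration, not a documented Spanora schema:

```typescript
interface SpanAttributes { [key: string]: string }

// Attach tenant context to a span's attribute map so traces can be
// grouped by session, user, or organization downstream.
// NOTE: the attribute key names here are illustrative assumptions.
function withTenant(
  attrs: SpanAttributes,
  tenant: { sessionId: string; userId: string; orgId?: string }
): SpanAttributes {
  const tagged: SpanAttributes = {
    ...attrs,
    "session.id": tenant.sessionId,
    "user.id": tenant.userId,
  };
  if (tenant.orgId) tagged["org.id"] = tenant.orgId;
  return tagged;
}

const attrs = withTenant(
  { "llm.model": "gpt-4o" },
  { sessionId: "s-1", userId: "u-42", orgId: "acme" }
);
console.log(attrs["org.id"]); // "acme"
```

Setting these attributes on the root span of each agent run is enough for a backend to slice usage and cost by tenant.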

OpenTelemetry-native: open standards, not vendor lock-in

Your observability data should be yours. We use OpenTelemetry natively so you're never locked into a proprietary format.

OTEL-native
Works with standard OpenTelemetry data out of the box. No custom protocol required.
SDK-optional
Our SDK adds DX sugar, but raw OTLP HTTP works perfectly on its own.
Standard protocols
OTLP HTTP for ingestion — the same protocol you already use.
Zero lock-in
Switch providers anytime. Your traces are standard OTEL data.
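To ground the SDK-optional claim: an OTLP/HTTP trace export is just a JSON POST. A minimal sketch that builds a single-span payload in the OTLP JSON encoding and sends it with `fetch` — the ingestion URL and auth header are placeholders, and the span is simplified; see the OTLP specification for the full field set:

```typescript
// Build a minimal OTLP/JSON trace payload for one span.
// Field names follow the OTLP JSON encoding; IDs are hex strings.
function buildOtlpPayload(spanName: string, traceId: string, spanId: string) {
  const now = BigInt(Date.now()) * 1_000_000n; // OTLP wants Unix nanoseconds
  return {
    resourceSpans: [{
      resource: {
        attributes: [{ key: "service.name", value: { stringValue: "support-agent" } }],
      },
      scopeSpans: [{
        scope: { name: "manual-otlp-example" },
        spans: [{
          traceId,                                       // 32 hex chars
          spanId,                                        // 16 hex chars
          name: spanName,
          kind: 1,                                       // SPAN_KIND_INTERNAL
          startTimeUnixNano: String(now),
          endTimeUnixNano: String(now + 2_400_000_000n), // +2.4 s
          attributes: [],
        }],
      }],
    }],
  };
}

// Placeholder endpoint and API key — substitute your own ingestion URL.
async function exportSpan() {
  await fetch("https://ingest.example.com/v1/traces", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: "Bearer <API_KEY>" },
    body: JSON.stringify(buildOtlpPayload("agent.run", "0".repeat(32), "0".repeat(16))),
  });
}
```

Any standard OTEL SDK or collector produces this same shape, which is what makes the data portable across backends.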
100+ models priced · < 5 min setup time · OTLP HTTP standard protocol
Proprietary approach
  • Custom SDK required — vendor-specific format
  • Data locked in proprietary storage
  • Migration means re-instrumenting everything
  • Black-box pricing tied to data volume
Our approach
  • Standard OTEL data — works with any exporter
  • Your data in your format, always portable
  • Switch providers without changing a line of code
  • Transparent pricing, no data lock-in premium

How AI agent monitoring works

Get from zero to full observability in under five minutes.

1

Run one command

Auto-detects your framework, AI SDK, and package manager. Installs and configures everything.

npx skills add spanora/skills
2

See everything

Traces, costs, prompts, and tool calls appear in real-time on your dashboard.

spanora.ai/traces

Simple, transparent pricing for AI observability

Start free, scale as you grow. No surprises.

Free

$0/month
  • 1,000 spans/month
  • 24-hour retention
  • Trace timeline
  • Prompt inspection
Get Started Free
Popular

Starter

$29/month
  • 50,000 spans/month
  • 30-day retention
  • Everything in Free
  • Email support
Get Started

Pro

$99/month
  • 500,000 spans/month
  • 90-day retention
  • Everything in Starter
  • Priority support
  • Session/user/org grouping
Get Started

Enterprise

Custom
  • Unlimited spans
  • Custom retention
  • Everything in Pro
  • Dedicated support
  • Custom integrations
Contact Sales

Stop guessing — start monitoring your AI agents

Free AI observability tier. No credit card required. Set up in under 5 minutes.