You're shipping AI features. But you have no idea what they cost, whether they work, or if they're actually helping your business.
💸
"We spent $18K on AI last month."
"We have no idea which feature caused it."
LLM costs are invisible until the invoice arrives. By then, the damage is done. Most teams discover cost spikes weeks late with no way to trace them back to a feature, user segment, or prompt change.
🤷
"Our AI quality dropped 20%."
"We found out from a customer complaint."
There's no HTTP 500 for a bad AI response. A model can confidently hallucinate thousands of times a day and your error monitoring shows zero errors. You only find out when users churn.
🧩
"We use 5 different tools."
"None of them talk to each other."
User analytics in one tool. Infrastructure monitoring in another. LLM tracking in a third. Error tracking somewhere else. Revenue in Stripe. None of it is connected. Answering "is our AI driving retention?" requires a week of data work.
Most AI teams track LLM calls in one place, users in another, and revenue somewhere else — and still can't answer "are our AI features actually driving value?" ObsrvAI was built to answer that question by bringing every signal into a single system.
Everything your AI product needs to observe
AI metrics, user analytics, revenue, and infrastructure — in one self-hosted dashboard.
AI Cost Overview
$569.90
↑ 18% vs last period
This month
29,693 calls
claude-opus-4-5 · $284.20 · 12,481 calls
gpt-4o · $196.50 · 8,204 calls
gemini-1.5-pro · $89.40 · 5,116 calls
claude-haiku-3-5 · $12.80 · 3,892 calls
LLM Calls Today
9,502
↑ 12.4%
Success rate 99.2%
Avg tokens 1,842
Avg cost $0.019
Quality Score
Avg eval score
Pass rate 91.2%
Hallucinations 0.4%
Safety flags 0.1%
Agent Session Trace
research-agent · session_a9f2
Goal: Compile Q2 competitor analysis
Completed · 2.73s
search_web · 820 ms
extract_data · 340 ms
llm_summarise · 1,240 ms
validate_output · 180 ms
write_memory · 90 ms
Tool calls 5
LLM calls 3
Memory ops 2
Total tokens 4,820
AI Assistant
Which model is costing the most this week?
◈
claude-opus-4-5 is your top spend at $284.20 this week (42% of total). It has the highest per-call cost at $0.023 avg but also the best quality score at 91%. Consider routing simpler tasks to claude-haiku-3-5. You could save ~$180/week with minimal quality impact.
Why did p95 latency spike on Tuesday?
◈
Your p95 jumped from 890 ms to 2,340 ms on Tuesday between 14:00–17:00 UTC. Call volume to claude-opus-4-5 rose 3x during that window — likely a batch job. The spike has since resolved.
PDF Reports
Investor Report · Founder
AI Cost Analysis · AI Ops
Unit Economics · Finance
Safety Audit · Compliance
+ 7 more report types · browser PDF
Business MRR
$11,400
↑ 11.8% MoM
ARR $136,800
Churn 2.1%
New MRR +$1,600
Smart Alerts
p95 latency
2,340 ms > 2,000 ms
cost / hour
$4.20 > $3.50
error rate
0.8% · resolved
Slack · PagerDuty · Email · Webhook
API Latency
264 ms
p95 · ↓ 8%
p50 148 ms
p99 890 ms
Error rate 0.8%
Token Usage · Last 14 days
44.2M
↑ 26% vs prior period
Input tokens 31.8M
Output tokens 12.4M
User Analytics
DAU 2,847
MAU 18,240
DAU/MAU · 15.6%
New users (30d) 1,204
Retention D7 64%
Live dashboard preview
See exactly what you get
This is the actual dashboard — click through the tabs to explore AI metrics, business analytics, and infrastructure monitoring.
app.obsrvai.com/ai/overview
ObsrvAI
Overview
AI Metrics
Business
Infrastructure
Users
Alerts
Reports
LLM Calls
29.7K
↑ 12.4% vs last week
Total Cost
$569
↑ 18% vs last week
Avg Latency
264ms
↓ 8% p95
Success Rate
99.2%
↑ 0.3% vs prior
Call volume trend
Active alerts
p95 latency 2,340 ms
cost/hour $4.20
error rate ✓ normal
Why ObsrvAI
Built differently, from the ground up
Every decision in ObsrvAI was made for one reason: give AI product teams answers, not more dashboards.
🔗AI + Business in one place
Connect AI quality directly to business outcomes
See that your p95 latency spiked on Tuesday, quality scores dropped 15%, and high-value users churned within 48 hours — all in one place. ObsrvAI brings AI observability, user analytics, and revenue data together so you can see the full picture without writing SQL.
🤖Agent-first design
First-class observability for AI agents
Most tools track LLM calls. ObsrvAI tracks agent sessions end-to-end: every step, every tool invocation, every memory read/write, every RAG retrieval — in a waterfall view that shows exactly where your agent spent time, where it failed, and why.
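As a rough illustration of that session model — the type and field names below are assumptions for this sketch, not ObsrvAI's actual schema — an agent session can be represented as an ordered list of timed spans, from which the waterfall view and its totals are derived:

```typescript
// Hypothetical span shape for one step of an agent session.
// Names and fields are illustrative, not ObsrvAI's real schema.
type Span = {
  name: string;
  kind: 'tool' | 'llm' | 'memory';
  durationMs: number;
};

// Steps matching the sample research-agent trace shown above.
const session: Span[] = [
  { name: 'search_web', kind: 'tool', durationMs: 820 },
  { name: 'extract_data', kind: 'tool', durationMs: 340 },
  { name: 'llm_summarise', kind: 'llm', durationMs: 1240 },
  { name: 'validate_output', kind: 'tool', durationMs: 180 },
  { name: 'write_memory', kind: 'memory', durationMs: 90 },
];

// Total time inside instrumented steps (scheduling overhead between
// steps accounts for the rest of the session's wall-clock time).
const stepTimeMs = session.reduce((sum, s) => sum + s.durationMs, 0);

// Time per category — the basis for the waterfall breakdown.
const byKind = session.reduce<Record<string, number>>((acc, s) => {
  acc[s.kind] = (acc[s.kind] ?? 0) + s.durationMs;
  return acc;
}, {});
```

Summing spans this way is what lets the trace view answer "where did the agent spend its time?" without you reconstructing it from raw logs.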
🔒Your infra. Your data. Full control.
Runs on your servers — data never leaves your infrastructure
Deploy the full stack with one docker-compose command. Your database, your Redis, your servers. ObsrvAI is software you run — not a cloud you send data to. No vendor lock-in, no data residency concerns, no surprise bills based on event volume.
💬AI-powered analytics
Ask your data questions in plain English
The built-in AI assistant knows your metrics. Ask "which model is costing the most this week?" or "show me users who churned after a low-quality AI response" and get an answer instantly — no SQL, no pivot tables, no BI tool required.
Everything included out of the box
All features available on every plan — no add-ons, no hidden limits.
💰
LLM cost tracking
Per-call cost by model, feature, and user
🤖
Agent session tracing
Full waterfall: steps, tools, memory, RAG
📊
Quality scoring & evals
Pass/fail evals, hallucination and safety flags
📈
User analytics
DAU/MAU, retention, funnels, segments
💳
Revenue tracking
MRR, ARR, churn from your revenue events
🏗️
Infrastructure monitoring
API latency, error rates, health checks
🔔
Smart alerts
Slack, PagerDuty, email, webhook delivery
💬
AI chat assistant
Ask questions about your data in plain English
📄
PDF reports
11 report types including investor and founder reports
🔒
Self-hosted & private
Your data never leaves your infrastructure
🧩
Model drift detection
Catch quality regressions across model versions
🔑
PII detection
Flag sensitive data in prompts and responses
Three lines to full visibility
Works with any LLM provider. Zero config. Data in seconds.
npm install @useobsrvai/sdk
import { ObsrvAI } from '@useobsrvai/sdk'

const obs = new ObsrvAI({
  apiKey: 'oai_your_key',
  baseUrl: 'https://collector.yourdomain.com',
})
// Data flows to your database
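Per-call cost is derived from token counts and a per-model rate table. A minimal sketch of that calculation — the function name and the rates below are illustrative assumptions for this example, not ObsrvAI's actual API or pricing data:

```typescript
// Token usage for a single LLM call.
type Usage = { inputTokens: number; outputTokens: number };

// Hypothetical per-million-token rates in USD, for illustration only.
// Real model prices vary and change over time.
const PRICING: Record<string, { input: number; output: number }> = {
  'gpt-4o': { input: 2.5, output: 10 },
  'claude-haiku-3-5': { input: 0.8, output: 4 },
};

// Estimate what one call cost, the way a cost dashboard would.
function estimateCost(model: string, usage: Usage): number {
  const rate = PRICING[model];
  if (!rate) throw new Error(`unknown model: ${model}`);
  return (
    (usage.inputTokens / 1_000_000) * rate.input +
    (usage.outputTokens / 1_000_000) * rate.output
  );
}
```

Doing this per call, rather than per invoice, is what makes it possible to attribute spend to a model, feature, or user segment.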
🗄️
TimescaleDB
Your data, your disk
⚡
Redis
Caching layer
📡
Collector API
Ingest + query
📊
Dashboard
Next.js web UI
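The four components above could be wired together with a compose file along these lines. This is a hedged sketch only — service names, image tags, and ports are assumptions, not ObsrvAI's published compose file:

```yaml
# Illustrative docker-compose sketch; consult the deployment docs
# for the real file. Image names for the collector and dashboard
# are hypothetical.
services:
  timescaledb:
    image: timescale/timescaledb:latest-pg16
    volumes:
      - obsrvai-data:/var/lib/postgresql/data   # your data, your disk
  redis:
    image: redis:7-alpine                        # caching layer
  collector:
    image: obsrvai/collector:latest              # ingest + query API
    depends_on: [timescaledb, redis]
  dashboard:
    image: obsrvai/dashboard:latest              # Next.js web UI
    ports:
      - "3000:3000"
volumes:
  obsrvai-data:
```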
One dashboard, everything included
Connect what you use. See what's relevant.
Every plan includes the full dashboard. What populates depends on what you instrument with the SDK — no feature gates, no add-ons.
🤖
AI Observability
→ Track LLM calls, agents, or RAG pipelines
✓ LLM cost tracking (50+ models)
✓ Agent session & tool tracing
✓ RAG pipeline monitoring
✓ Quality scoring & evals
✓ Model drift detection
✓ Safety & hallucination flags
✓ Prompt version management
✓ Token usage analytics
📈
Business Analytics
→ Track user events or revenue webhooks
✓ DAU / WAU / MAU analytics
✓ MRR, ARR, churn, LTV
✓ Funnel & conversion analysis
✓ Cohort retention
✓ Feature adoption tracking
✓ User explorer & segments
✓ Error grouping & tracking
✓ Founder & investor reports
⬡
Infrastructure
→ Track API requests from your backend
✓ Endpoint health & uptime
✓ P50 / P95 / P99 latency
✓ Error rates by endpoint
✓ Environment comparison
✓ Alert rules & notifications
✓ AI chat assistant for your data
✓ 11 PDF report types
✓ Cross-domain alert rules
All views included in every plan — nothing gated
Wire up AI, business, and infrastructure independently. Each section populates as you instrument it.
Most teams stitch together separate tools for AI tracing, user analytics, infrastructure monitoring, and error tracking. ObsrvAI replaces all of them — with full context across every layer in one place.
LLM Tracing
AI observability
+
User Analytics
Business metrics
+
Infra Monitoring
API & errors
+
Error Tracking
Crash reporting
→
ObsrvAI
All of the above + more
4→1
Tools replaced
One SDK, one dashboard, one bill
< 5min
Time to first insight
vs. weeks of data engineering
3–6mo
Engineering time saved
vs. building it yourself
Pricing based on scale, not features
Every plan includes the full dashboard. You pay for the number of projects and level of support — not for which metrics you can see.
14-day free trial — no credit card required. Your data lives in your own database.
All plans are self-hosted. Your data lives in your PostgreSQL database on your servers. ObsrvAI never sees your telemetry data. License fees cover software access and support — not storage or compute.
Stop flying blind. Start observing.
Full platform. All features. Your data stays on your infrastructure.