Now in public beta

Understand how AI actually behaves inside your product

See cost, latency, and errors per product feature, using real runtime data from your application.

Start for free
No credit card required
[Dashboard preview]
Features tracked: chat-assistant $0.0037 · code-generator $0.0042 · summarizer $0.0018
Total AI Cost (live): $127.43 this month · 4 features (Jan 1 – Jan 12)
Avg Latency: 1.2s (↓ 18% from last week)
Total Requests: 12.4k in the last 30 days · 3 errors detected
Feature-level analytics

Know exactly which AI features drive cost and risk

Vendor dashboards show usage by model or API key. Orbit shows usage by product feature — so you can understand which parts of your product are expensive, slow, or failing in production.

Every AI call is tagged with the feature that triggered it. This lets you see cost, latency, and errors in the product context where decisions are made.
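
In practice, tagging is just the wrapOpenAI call from the integration section below, applied once per feature. A minimal sketch (wrapping a separate client per feature is an illustrative pattern, not the only way to do it):

import { Orbit } from '@with-orbit/sdk';
import OpenAI from 'openai';

const orbit = new Orbit({ apiKey: process.env.ORBIT_API_KEY });

// One wrapped client per product feature: every request made through a
// wrapped client is attributed to that feature's cost, latency, and errors.
const chatAssistant = orbit.wrapOpenAI(new OpenAI(), { feature: 'chat-assistant' });
const summarizer = orbit.wrapOpenAI(new OpenAI(), { feature: 'summarizer' });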

Cost per feature: Identify the features driving most of your AI spend
Request volume: Understand usage patterns and changes across features
Error attribution: Pinpoint which product features are failing
[Feature analytics preview] Features · 7 tracked

Feature           Cost      Requests   Latency   Errors
chat-assistant    $12.84    2.4k       1.8s      2.1%
code-generator    $45.20    890        3.2s      0.8%
content-writer    $8.90     1.2k       2.1s      1.2%
summarizer        $3.45     3.1k       0.9s      0.3%

Top Feature: code-generator (64% of total cost)
Cost Trend: last 30 days (Dec 4 – Jan 3)
By Environment: Production 75% · Staging 20% · Dev 5%
Avg / Request: $0.0024 · Total Tokens: 2.4M
Cost intelligence

See which AI costs matter before they escalate

Understand AI spend as it happens using deterministic calculations from real runtime data — not delayed invoices or estimates.

View cost trends over time, break down spend by environment, and pinpoint the specific requests and features driving usage.
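
As a rough illustration of what deterministic means here, per-request cost can be derived directly from the token counts a provider returns and a per-token price table. The prices below are placeholders for the example, not Orbit's actual rate data:

// Illustration only: cost computed from real token usage and a price table.
// The per-1M-token prices are placeholder values, not actual provider pricing.
const PRICES: Record<string, { input: number; output: number }> = {
  'gpt-4o-mini': { input: 0.15, output: 0.60 }, // USD per 1M tokens (placeholder)
};

function requestCost(model: string, inputTokens: number, outputTokens: number): number {
  const price = PRICES[model];
  return (inputTokens * price.input + outputTokens * price.output) / 1_000_000;
}

// A 1,200-token prompt with a 300-token completion:
// (1200 * 0.15 + 300 * 0.60) / 1,000,000 = $0.00036
console.log(requestCost('gpt-4o-mini', 1200, 300));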

Real-time cost visibility: Updated as requests occur
Environment breakdown: Clearly separate prod, staging, and dev spend
Token-level detail: See input and output tokens per request
Error visibility

Debug failures before users notice

See error rates by feature and model. Understand which parts of your product are breaking and why — with detailed error logs and failure reasons.

Track error trends over time. Catch regressions early. Know exactly which model and feature combination is causing issues.
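
Because the wrapped client is used like a normal OpenAI client, failures still surface as ordinary exceptions in your code. A sketch, assuming a client wrapped with the 'code-generator' feature and mirroring the model_not_found example in the preview below:

import { Orbit } from '@with-orbit/sdk';
import OpenAI from 'openai';

const orbit = new Orbit({ apiKey: process.env.ORBIT_API_KEY });
const codeGenerator = orbit.wrapOpenAI(new OpenAI(), { feature: 'code-generator' });

async function generateCode(prompt: string) {
  try {
    const res = await codeGenerator.chat.completions.create({
      model: 'gpt-5', // mirrors the model_not_found example in the preview below
      messages: [{ role: 'user', content: prompt }],
    });
    return res.choices[0].message.content;
  } catch (err) {
    // The call fails in your code as usual, and the failure is attributed
    // to the 'code-generator' feature rather than just an API key.
    console.error('code-generator request failed:', err);
    throw err;
  }
}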

Error rate by feature: See which features are failing
Error type breakdown: Invalid models, rate limits, timeouts
Recent error logs: Full context for debugging
[Error dashboard preview]
Total Errors: 47 · Error Rate: 2.8% · Success Rate: 97.2% · Affected: 3 features
By Type: model_not_found 24 · rate_limit_exceeded 15 · invalid_request 8
Recent Error: model_not_found · Feature: code-generator · Model 'gpt-5' does not exist
[Model analytics preview]
Cost by Provider: OpenAI $89.40 · Anthropic $32.10 · Other $5.93
Model Performance (latency · cost per request · error rate):
gpt-4o           1.8s · $0.003  · 0.2%
gpt-4o-mini      0.9s · $0.0004 · 0.1%
claude-3-opus    2.4s · $0.015  · 0.5%
Insight: gpt-4o-mini is 7x cheaper with similar error rates
Model analytics

Compare models across your product

See which models power each feature. Compare cost, latency, and error rates to make better decisions about model selection.

Track cost per provider, performance by model, and identify opportunities to optimize your model choices.
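
A simple way to produce that comparison data is to route part of a feature's traffic to a candidate model and let the dashboard split the results by model. A sketch, assuming the wrapped client accepts the standard OpenAI parameters:

import { Orbit } from '@with-orbit/sdk';
import OpenAI from 'openai';

const orbit = new Orbit({ apiKey: process.env.ORBIT_API_KEY });
const summarizer = orbit.wrapOpenAI(new OpenAI(), { feature: 'summarizer' });

// Send roughly 20% of summarizer traffic to a cheaper model; cost, latency,
// and error rate then show up per model for the same feature.
async function summarize(text: string) {
  const model = Math.random() < 0.2 ? 'gpt-4o-mini' : 'gpt-4o';
  const res = await summarizer.chat.completions.create({
    model,
    messages: [{ role: 'user', content: `Summarize the following:\n\n${text}` }],
  });
  return res.choices[0].message.content;
}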

Cost by provider: OpenAI, Anthropic, and more
Latency comparison: Average response times by model
Error rates: Reliability metrics per model
Privacy & Security

Built for correctness

Orbit shows you what's actually happening in your application — without proxies, scraping, or hidden assumptions.

SDK-based collection

Metrics are captured directly from your application runtime. No external monitoring or traffic interception.

No request interception

Orbit never sits between your app and your AI provider. Your requests go directly to OpenAI, Anthropic, etc.

Deterministic metrics

Cost, latency, and error rates are calculated from real request data — not estimates or statistical sampling.

No API key access

Your provider API keys stay in your application. Orbit only receives usage metadata, never credentials.
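
To make "usage metadata, never credentials" concrete, the per-request record Orbit needs is roughly the shape below. The field names are hypothetical, shown only to illustrate the kind of data involved:

// Hypothetical shape, for illustration only: the metrics described on this
// page (feature, model, tokens, latency, errors) with no credentials at all.
interface UsageRecord {
  feature: string;          // e.g. 'chat-assistant'
  model: string;            // e.g. 'gpt-4o-mini'
  environment: string;      // e.g. 'production' | 'staging' | 'dev'
  inputTokens: number;
  outputTokens: number;
  latencyMs: number;
  status: 'ok' | 'error';
  errorType?: string;       // e.g. 'rate_limit_exceeded'
}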

Why Orbit

Vendor dashboards show API usage. Orbit shows how your product uses AI.

Use Orbit alongside OpenAI and Anthropic dashboards for product-level visibility.

Capability                 Providers   Orbit
View AI usage by model     ✓           ✓
View total cost            ✓           ✓
Feature-level cost         ✗           ✓
Feature-level latency      ✗           ✓
Feature-level errors       ✗           ✓
Product-centric view       ✗           ✓
SDK-based runtime data     ✗           ✓
Integration

Get started in minutes

One npm package. Wrap your OpenAI client. See your data instantly.

01

Install the SDK

npm install @with-orbit/sdk

02

Wrap your client

One line to instrument OpenAI

03

See your data

Real-time metrics in your dashboard

app.ts
import { Orbit } from '@with-orbit/sdk';
import OpenAI from 'openai';

// Initialize Orbit
const orbit = new Orbit({
  apiKey: process.env.ORBIT_API_KEY
});

// Wrap your OpenAI client
const openai = orbit.wrapOpenAI(new OpenAI(), {
  feature: 'chat-assistant'
});
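
From there, the wrapped client is used like a regular OpenAI client (the snippet assumes the wrapper preserves the standard interface, as the example above suggests), and every call is recorded under the 'chat-assistant' feature:

// Use the wrapped client exactly as you would the OpenAI SDK.
const response = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.choices[0].message.content);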

Understand AI behavior at the feature level

Feature-level cost, latency, and error visibility from real runtime data — so you know what to fix and optimize.

Start for free