BurnGuard

Documentation

Everything you need to set up and use BurnGuard for real-time AI cost monitoring.

Quick Start

Get up and running with BurnGuard in under a minute.

1. Sign in with Google

Go to burnguard.dev and sign in with Google. Find your API key on the Settings page — it starts with bg_.

2. Install the agent

Install the BurnGuard agent globally via npm.

npm install -g @burnguard/agent

3. Set your API key

Export your API key as an environment variable, or pass it directly when starting the agent with burnguard start --api-key=bg_your_api_key_here.

export BURNGUARD_API_KEY=bg_your_api_key_here

4. Start monitoring

Start the daemon and open your dashboard to see real-time metrics.

burnguard start

CLI Reference

All available commands for the BurnGuard CLI agent.

burnguard start

Start the monitoring daemon. Runs in the background by default.

burnguard start --foreground

Start in foreground mode. Logs are printed directly to stdout instead of written to a log file.

burnguard start --api-key=bg_xxx

Start the daemon with an explicit API key instead of reading from the environment variable.

burnguard stop

Gracefully stop the running daemon process.

burnguard status

Check whether the daemon is currently running and display its PID.

burnguard logs

Show recent daemon log output.

burnguard logs -f

Follow daemon logs in real-time (similar to tail -f).

burnguard help

Show help information and list all available commands.

Backward Compatibility

Running burnguard --api-key=bg_xxx without a subcommand still works and starts the agent in foreground mode. This preserves compatibility with scripts written for versions prior to v1.2.0.

Configuration

All configuration options can be passed as CLI flags or environment variables.

| Option | Env Variable | Default | Description |
|---|---|---|---|
| --api-key | BURNGUARD_API_KEY | — | Your BurnGuard API key (required) |
| --backend-url | BURNGUARD_BACKEND_URL | wss://api.burnguard.dev | WebSocket endpoint for the backend |
| --log-dir | BURNGUARD_LOG_DIR | ~/.openclaw | Directory to watch for JSONL log files |
| --reconnect-interval | — | 5000 | Reconnection interval in milliseconds |
| --max-reconnect | — | 50 | Maximum reconnection attempts |
| --batch-size | — | 50 | Maximum metrics per batch |
| --batch-interval | — | 2000 | Batch flush interval in milliseconds |

File Locations

Key file paths used by BurnGuard and OpenClaw.

~/.burnguard/agent.pid
PID file for the daemon process. Used by the CLI to detect whether the agent is already running.
~/.burnguard/agent.log
Daemon log output. View with burnguard logs or follow in real-time with burnguard logs -f.
~/.openclaw/agents/<agentId>/sessions/<sessionId>.jsonl
OpenClaw session log files containing nested JSONL with token usage and cost data. Automatically detected and parsed by the agent.
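
Each line of a session file is a standalone JSON object. A minimal parsing sketch in Python; the nested field names (`message.usage` with `input_tokens`/`output_tokens`) are illustrative assumptions, since the exact OpenClaw schema is not documented here:

```python
import json

def extract_usage(jsonl_text: str) -> dict:
    """Sum token usage across the lines of a session log.

    The field names used here are assumptions for illustration,
    not a documented OpenClaw schema.
    """
    totals = {"input_tokens": 0, "output_tokens": 0}
    for line in jsonl_text.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip partial lines while the file is still being written
        usage = entry.get("message", {}).get("usage")
        if usage:
            totals["input_tokens"] += usage.get("input_tokens", 0)
            totals["output_tokens"] += usage.get("output_tokens", 0)
    return totals

sample = "\n".join([
    '{"message": {"usage": {"input_tokens": 1200, "output_tokens": 350}}}',
    '{"type": "system"}',
])
print(extract_usage(sample))  # {'input_tokens': 1200, 'output_tokens': 350}
```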

Supported Models

BurnGuard supports 51 models across 9 providers. Pricing is shown per 1 million tokens.

Anthropic (9 models)

| Model | Input / 1M tokens | Output / 1M tokens |
|---|---|---|
| Claude Opus 4 | $15.00 | $75.00 |
| Claude Opus 4.5 | $5.00 | $25.00 |
| Claude Sonnet 4 | $3.00 | $15.00 |
| Claude Sonnet 4.5 | $3.00 | $15.00 |
| Claude Sonnet 3.5 | $3.00 | $15.00 |
| Claude Opus 3 | $15.00 | $75.00 |
| Claude Sonnet 3 | $3.00 | $15.00 |
| Claude Haiku 3.5 | $0.80 | $4.00 |
| Claude Haiku 3 | $0.25 | $1.25 |

OpenAI (12 models)

| Model | Input / 1M tokens | Output / 1M tokens |
|---|---|---|
| GPT-4o | $2.50 | $10.00 |
| GPT-4o Mini | $0.15 | $0.60 |
| GPT-4 Turbo | $10.00 | $30.00 |
| GPT-4 | $30.00 | $60.00 |
| GPT-3.5 Turbo | $0.50 | $1.50 |
| GPT-4.5 | $75.00 | $150.00 |
| o1 | $15.00 | $60.00 |
| o1 Mini | $3.00 | $12.00 |
| o1 Pro | $150.00 | $600.00 |
| o3 | $10.00 | $40.00 |
| o3 Mini | $1.10 | $4.40 |
| o4 Mini | $1.10 | $4.40 |

Google Gemini (6 models)

| Model | Input / 1M tokens | Output / 1M tokens |
|---|---|---|
| Gemini 2.5 Pro | $1.25 | $10.00 |
| Gemini 2.5 Flash | $0.15 | $0.38 |
| Gemini 2.0 Flash | $0.10 | $0.40 |
| Gemini 2.0 Flash Lite | $0.08 | $0.30 |
| Gemini 1.5 Pro | $1.25 | $5.00 |
| Gemini 1.5 Flash | $0.08 | $0.30 |

Meta Llama (6 models)

| Model | Input / 1M tokens | Output / 1M tokens |
|---|---|---|
| Llama 4 Scout | $0.15 | $0.50 |
| Llama 4 Maverick | $0.20 | $0.60 |
| Llama 3.3 70B | $0.60 | $0.60 |
| Llama 3.1 405B | $3.00 | $3.00 |
| Llama 3.1 70B | $0.60 | $0.60 |
| Llama 3.1 8B | $0.05 | $0.05 |

Mistral (6 models)

| Model | Input / 1M tokens | Output / 1M tokens |
|---|---|---|
| Mistral Large | $2.00 | $6.00 |
| Mistral Small | $0.10 | $0.30 |
| Codestral | $0.30 | $0.90 |
| Mixtral 8x22B | $2.00 | $6.00 |
| Mixtral 8x7B | $0.70 | $0.70 |
| Mistral Nemo | $0.15 | $0.15 |

DeepSeek (3 models)

| Model | Input / 1M tokens | Output / 1M tokens |
|---|---|---|
| DeepSeek V3 | $0.14 | $0.28 |
| DeepSeek R1 | $0.55 | $2.19 |
| DeepSeek Coder V2 | $0.14 | $0.28 |

Cohere (3 models)

| Model | Input / 1M tokens | Output / 1M tokens |
|---|---|---|
| Command R+ | $2.50 | $10.00 |
| Command R | $0.15 | $0.60 |
| Command | $1.00 | $2.00 |

Amazon Nova (3 models)

| Model | Input / 1M tokens | Output / 1M tokens |
|---|---|---|
| Amazon Nova Pro | $0.80 | $3.20 |
| Amazon Nova Lite | $0.06 | $0.24 |
| Amazon Nova Micro | $0.04 | $0.15 |

xAI (3 models)

| Model | Input / 1M tokens | Output / 1M tokens |
|---|---|---|
| Grok 3 | $3.00 | $15.00 |
| Grok 3 Mini | $0.30 | $0.50 |
| Grok 2 | $2.00 | $10.00 |

OpenRouter Support

All models above work seamlessly via OpenRouter. BurnGuard automatically strips provider prefixes (e.g. anthropic/claude-sonnet-4 becomes claude-sonnet-4) so costs are tracked correctly regardless of how you route requests.

Cost Tracking

BurnGuard uses a three-tier approach to calculate costs as accurately as possible.

1. Pre-calculated cost (preferred): highest accuracy

If OpenClaw includes a cost.total field inside usage data, BurnGuard uses that value directly. The value is converted from dollars to cents. This works for any model, even those not in our pricing table.

2. Cache-aware calculation: built-in pricing

For models in our pricing table, BurnGuard calculates costs using cache-aware rates when the pre-calculated cost is unavailable.

| Token type | Rate |
|---|---|
| Standard input tokens | Full input rate |
| Cache read tokens | 10% of input rate |
| Cache creation tokens | 125% of input rate |
| Output tokens | Full output rate |
3. Fallback: tokens only

If neither a pre-calculated cost nor a matching model price is found, tokens and sessions are still tracked but cost shows as $0.00.
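
The three tiers above can be sketched in Python. The cache multipliers (10% read, 125% creation) and the sample rate come from this page; the usage field names (`cost.total`, `cache_read_tokens`, `cache_creation_tokens`) are illustrative assumptions:

```python
# (input, output) in dollars per 1M tokens, one sample row from the table above
PRICES = {
    "claude-sonnet-4": (3.00, 15.00),
}

def cost_cents(model: str, usage: dict) -> int:
    # Tier 1: prefer a pre-calculated total, converting dollars to cents.
    pre = usage.get("cost", {}).get("total")
    if pre is not None:
        return round(pre * 100)

    # Tier 2: cache-aware calculation for models in the pricing table.
    if model in PRICES:
        input_rate, output_rate = (p / 1_000_000 for p in PRICES[model])
        dollars = (
            usage.get("input_tokens", 0) * input_rate                    # full rate
            + usage.get("cache_read_tokens", 0) * input_rate * 0.10      # 10%
            + usage.get("cache_creation_tokens", 0) * input_rate * 1.25  # 125%
            + usage.get("output_tokens", 0) * output_rate                # full rate
        )
        return round(dollars * 100)

    # Tier 3: unknown model, no pre-calculated cost -> tokens tracked, $0.00.
    return 0
```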

Model Name Normalization

Different providers append dates, version strings, or prefixes to model names. BurnGuard normalizes all model identifiers so costs are matched correctly.

| Pattern | Raw input | Normalized |
|---|---|---|
| OpenRouter prefixes | anthropic/claude-sonnet-4 | claude-sonnet-4 |
| Anthropic dates | claude-sonnet-4-20250514 | claude-sonnet-4 |
| OpenAI dates | gpt-4o-2024-08-06 | gpt-4o |
| Google previews | gemini-2.5-pro-preview-05-06 | gemini-2.5-pro |
| Mistral dates | mistral-large-2411 | mistral-large |
| DeepSeek aliases | deepseek-chat | deepseek-v3 |
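
The rules above can be approximated with a few regular expressions. This is a sketch of the idea, not BurnGuard's actual implementation:

```python
import re

def normalize_model(raw: str) -> str:
    """Approximate BurnGuard's model-name normalization (illustrative)."""
    name = raw.lower().split("/")[-1]                      # OpenRouter prefixes
    name = re.sub(r"-\d{8}$", "", name)                    # Anthropic dates (-20250514)
    name = re.sub(r"-\d{4}-\d{2}-\d{2}$", "", name)        # OpenAI dates (-2024-08-06)
    name = re.sub(r"-preview(-\d{2}-\d{2})?$", "", name)   # Google previews
    name = re.sub(r"-\d{4}$", "", name)                    # Mistral dates (-2411)
    aliases = {"deepseek-chat": "deepseek-v3"}             # DeepSeek aliases
    return aliases.get(name, name)

print(normalize_model("anthropic/claude-sonnet-4"))  # claude-sonnet-4
```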

API Reference

The BurnGuard REST API is available at https://api.burnguard.dev.

Authentication

Sign in with Google OAuth to get your API key. All endpoints except the Google OAuth endpoints require the X-API-Key header with your API key. The WebSocket endpoint uses the api_key query parameter instead.

GET /api/auth/google

Returns Google OAuth consent URL. Pass redirect_uri as query parameter.

POST /api/auth/google/callback

Exchange Google authorization code for BurnGuard API key. Creates or links user account.

Request body

{
  "code": "google_auth_code",
  "redirect_uri": "https://burnguard.dev/login/callback"
}

GET /api/auth/me (requires auth)

Get authenticated user info.

Response

{
  "success": true,
  "data": {
    "id": "usr_abc123",
    "email": "user@example.com",
    "tier": "free",
    "created_at": "2026-01-15T10:30:00Z"
  }
}

POST /api/metrics (requires auth)

Batch ingest an array of metrics. Used by the BurnGuard agent to stream data.

Request body

[
  {
    "session_id": "sess_abc123",
    "model": "claude-sonnet-4",
    "input_tokens": 1200,
    "output_tokens": 350,
    "cost_cents": 5,
    "timestamp": 1707840000
  }
]
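
Using only the Python standard library, here is a sketch of how a client could build such a request. The endpoint path, header name, and body shape come from this page; the helper function itself is illustrative:

```python
import json
import urllib.request

def build_metrics_request(api_key: str, metrics: list,
                          base_url: str = "https://api.burnguard.dev"):
    """Build (without sending) an authenticated batch-ingest request.

    Illustrative helper, not part of an official SDK; error handling
    and retries are omitted.
    """
    body = json.dumps(metrics).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/metrics",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "X-API-Key": api_key,
        },
    )

req = build_metrics_request("bg_example_key", [{
    "session_id": "sess_abc123",
    "model": "claude-sonnet-4",
    "input_tokens": 1200,
    "output_tokens": 350,
    "cost_cents": 5,
    "timestamp": 1707840000,
}])
# urllib.request.urlopen(req) would send it; here we only inspect it.
```
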

GET /api/metrics?range=1h|1d|7d|30d (requires auth)

Query metrics filtered by time range. Returns individual metric entries.

GET /api/metrics/summary (requires auth)

Aggregated daily totals and a by_model breakdown of tokens and cost.

GET /api/metrics/daily?days=7 (requires auth)

Daily aggregated metrics including day, session count, total tokens, and cost.

GET /api/sessions (requires auth)

List user sessions. Returns the 50 most recent sessions ordered by started_at descending.

GET /api/budget (requires auth)

Get current budget settings for the authenticated user.

PUT /api/budget (requires auth)

Update budget settings.

Request body

{
  "daily_limit_cents": 1000,
  "monthly_limit_cents": 10000
}

GET /api/ws?api_key=bg_xxx

WebSocket upgrade endpoint for real-time metric streaming. Authentication is provided via the api_key query parameter instead of the X-API-Key header.


Response Format

All API responses follow a consistent envelope format. Successful responses include { success: true, data: ... }. Error responses include { success: false, error: "..." }.
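
A small helper illustrating how a client might unwrap this envelope (illustrative, not part of an official SDK):

```python
def unwrap(envelope: dict):
    """Return `data` from a success envelope, raise on an error envelope."""
    if envelope.get("success"):
        return envelope.get("data")
    raise RuntimeError(envelope.get("error", "unknown error"))

print(unwrap({"success": True, "data": {"tier": "free"}}))  # {'tier': 'free'}
```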

Plans & Pricing

Start free and upgrade as your usage grows.

| Feature | Free | Pro ($9/mo) | Team ($29/mo) |
|---|---|---|---|
| Real-time dashboard | | | |
| Session history | 24 hours | 30 days | 90 days |
| Agents monitored | 1 | 5 | Unlimited |
| Budget alerts | | | |
| Cost optimization tips | | | |
| Kill switch | | | |
| Shared dashboard | | | |
| Budget allocation | | | |
| Support | Community | Email | Priority |
Get started for free

No credit card required. Upgrade anytime from your dashboard.