Track and visualize your AI coding tool usage
Website • Quick Start • Methodology • Contributing
Vibetracking aggregates usage statistics from AI coding assistants and presents them in a gamified interface. Compare your AI coding habits with developers worldwide.
- Global Leaderboards - Ranked by estimated API spending
- Personal Dashboards - Activity heatmaps and usage patterns
- Model Breakdowns - See which AI models you use most
- Fun Comparisons - Your tokens = novels written, marathons run
- Streak Tracking - Track your coding consistency
| Tool | Data Location |
|---|---|
| Claude Code | ~/.claude/projects/*/conversation.jsonl |
| Cursor | CSV from cursor.com API |
| Codex | ~/.codex/sessions/*/conversation.jsonl |
| Gemini | ~/.gemini/tmp/*/chats/session-*.json |
| OpenCode | ~/.local/share/opencode/storage/message/**/*.json |
| Amp | ~/.ampcode/sessions/**/*.json |
| Droid | ~/Library/.../googleAiStudio/history/*.json |
```bash
# Scan your local AI tool data and open your dashboard
bunx vibetracking
```

This will:
- Scan your machine for AI coding tool session files (sketched below)
- Fetch pricing data for accurate cost estimates
- Open your browser to import the data via GitHub OAuth
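The scan step essentially globs the session-file locations listed in the table above. Here is a minimal sketch of that idea using Bun's `Glob` API; the patterns come from the table (Cursor is excluded since its data comes from the cursor.com API, and Droid's truncated path is omitted), and this is not the actual CLI implementation, which uses the native Rust core:

```ts
import { Glob } from "bun";
import { homedir } from "node:os";

// Session-file locations from the table above, relative to the home directory.
const patterns = [
  ".claude/projects/*/conversation.jsonl",            // Claude Code
  ".codex/sessions/*/conversation.jsonl",             // Codex
  ".gemini/tmp/*/chats/session-*.json",               // Gemini
  ".local/share/opencode/storage/message/**/*.json",  // OpenCode
  ".ampcode/sessions/**/*.json",                      // Amp
];

// Collect every matching session file under the home directory.
async function findSessionFiles(): Promise<string[]> {
  const home = homedir();
  const files: string[] = [];
  for (const pattern of patterns) {
    const glob = new Glob(pattern);
    for await (const path of glob.scan({ cwd: home, absolute: true })) {
      files.push(path);
    }
  }
  return files;
}

console.log(await findSessionFiles());
```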
We track 5 distinct token types for accurate cost calculation:
| Type | Description |
|---|---|
| Input Tokens | Tokens sent to the model (prompts, context, files) |
| Output Tokens | Tokens generated by the model (responses, code) |
| Cache Read | Cached prompt tokens reused from previous requests |
| Cache Write | Prompt tokens cached for future requests |
| Reasoning | Thinking/chain-of-thought tokens (extended thinking models) |
Formula:
```
cost = (input × input_rate)
     + ((output + reasoning) × output_rate)
     + (cache_read × cache_read_rate)
     + (cache_write × cache_write_rate)
```
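In code, the formula maps directly onto a per-model rate lookup. A minimal sketch (the type and field names are illustrative, not the CLI's actual ones):

```ts
// The five tracked token counts for one model's usage.
interface TokenUsage {
  input: number;
  output: number;
  cacheRead: number;
  cacheWrite: number;
  reasoning: number;
}

// Per-token rates in USD for one model.
interface ModelRates {
  inputRate: number;
  outputRate: number;
  cacheReadRate: number;
  cacheWriteRate: number;
}

// Direct translation of the formula above: reasoning tokens are billed at the output rate.
function estimateCost(usage: TokenUsage, rates: ModelRates): number {
  return (
    usage.input * rates.inputRate +
    (usage.output + usage.reasoning) * rates.outputRate +
    usage.cacheRead * rates.cacheReadRate +
    usage.cacheWrite * rates.cacheWriteRate
  );
}
```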
Pricing Sources:
- LiteLLM - Primary source for most models
- OpenRouter - Fallback for specialized models
Pricing data is cached for 24 hours.
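A sketch of the 24-hour cache idea, assuming LiteLLM's public model-pricing JSON as the primary source; the URL, cache path, and field handling are assumptions, not necessarily what the CLI does:

```ts
import { readFile, writeFile, stat } from "node:fs/promises";

// Assumed location of LiteLLM's public pricing table; verify before relying on it.
const PRICING_URL =
  "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json";
const CACHE_FILE = "/tmp/vibetracking-pricing.json"; // hypothetical cache location
const TTL_MS = 24 * 60 * 60 * 1000; // pricing data is cached for 24 hours

async function loadPricing(): Promise<Record<string, unknown>> {
  // Reuse the cached copy if it is younger than 24 hours.
  try {
    const { mtimeMs } = await stat(CACHE_FILE);
    if (Date.now() - mtimeMs < TTL_MS) {
      return JSON.parse(await readFile(CACHE_FILE, "utf8"));
    }
  } catch {
    // No cache yet; fall through to a fresh fetch.
  }
  const res = await fetch(PRICING_URL);
  if (!res.ok) throw new Error(`pricing fetch failed: ${res.status}`);
  const pricing = await res.json();
  await writeFile(CACHE_FILE, JSON.stringify(pricing));
  return pricing;
}
```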
- Claude Code - JSONL with global deduplication via `messageId:requestId` composite keys (see the sketch below)
- Codex - JSONL with delta calculation from `total_token_usage`
- Cursor - CSV from the cursor.com API; cache write calculated from input differentials
- Gemini - JSON with `tokens.thoughts` mapped to reasoning tokens
- OpenCode, Amp, Droid - Standard JSON parsing with direct token extraction
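A sketch of the composite-key deduplication used for Claude Code: each parsed JSONL entry is keyed by `messageId:requestId`, and repeats are skipped so the same request is never counted twice across session files. The record shape here is illustrative:

```ts
// Minimal shape of a parsed Claude Code JSONL entry for this sketch.
interface ParsedEntry {
  messageId: string;
  requestId: string;
  inputTokens: number;
  outputTokens: number;
}

// Keep only the first occurrence of each messageId:requestId pair.
function deduplicate(entries: ParsedEntry[]): ParsedEntry[] {
  const seen = new Set<string>();
  const unique: ParsedEntry[] = [];
  for (const entry of entries) {
    const key = `${entry.messageId}:${entry.requestId}`;
    if (seen.has(key)) continue;
    seen.add(key);
    unique.push(entry);
  }
  return unique;
}
```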
Privacy:
- Local scanning only - All parsing happens on your machine via the CLI
- Explicit submission - Data only leaves your device when you click submit
- No telemetry - We don't track your usage or send analytics
- Open source - The CLI is fully open source
Project structure:

```
vibetracking/
├── src/                   # Next.js web application
│   ├── app/               # App Router pages & API routes
│   ├── components/        # React components
│   └── lib/               # Utilities & Supabase client
├── packages/
│   ├── cli/               # Bun-based CLI tool
│   └── core/              # Native Rust module (NAPI-RS)
├── supabase/migrations/   # Database schema
└── public/                # Static assets
```
| Component | Technology |
|---|---|
| Web Framework | Next.js 16 + React 19 |
| Database | Supabase (PostgreSQL) |
| Authentication | GitHub OAuth |
| Styling | Tailwind CSS v4 |
| Charts | Recharts v3 |
| CLI Runtime | Bun |
| Native Parsing | Rust (NAPI-RS) |
| Deployment | Vercel |
Prerequisites:
- Bun >= 1.0
- Node.js >= 22
- Rust (for native module)
- Supabase account
- GitHub OAuth app
```bash
# Clone and install
git clone https://github.com/lfglabs-dev/vibetracking.dev.git
cd vibetracking.dev
bun install

# Build native module
cd packages/core && bun run build && cd ../..

# Start dev server
bun dev
```

Create `.env.vibetracking`:

```env
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key
SUPABASE_SERVICE_ROLE_KEY=your-service-role-key
```
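These variables feed the Supabase client on the web side. A minimal sketch using `@supabase/supabase-js`; the real setup lives in `src/lib/` and may differ:

```ts
import { createClient } from "@supabase/supabase-js";

// Anon key for browser-side access; the service-role key stays server-only.
export const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);
```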
Data flow:

```
┌─────────────────────────────────┐
│          Your Machine           │
│   Claude, Cursor, Codex, etc.   │
└───────────────┬─────────────────┘
                │
                ▼
┌─────────────────────────────────┐
│    CLI (bunx vibetracking)      │
│  - Parallel file scanning       │
│  - SIMD-accelerated parsing     │
│  - Price lookup & aggregation   │
└───────────────┬─────────────────┘
                │
                ▼
┌─────────────────────────────────┐
│        Browser Import           │
│     /import#encoded_data        │
│      GitHub OAuth login         │
└───────────────┬─────────────────┘
                │
                ▼
┌─────────────────────────────────┐
│      Supabase (PostgreSQL)      │
│     users, daily_activity,      │
│     token_usage, user_stats     │
└─────────────────────────────────┘
```
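The hand-off from CLI to browser goes through the `/import` page's URL fragment, which is never sent to a server. The exact encoding is not documented here; a sketch assuming the aggregated stats are serialized as base64url-encoded JSON (the payload shape is illustrative):

```ts
// Aggregated stats produced by the CLI (illustrative shape only).
const stats = {
  totalCostUsd: 42.5,
  models: { "claude-sonnet-4": { input: 1_200_000, output: 350_000 } },
};

// URL fragments stay client-side, so the payload only reaches the import page.
const encoded = Buffer.from(JSON.stringify(stats)).toString("base64url");
const url = `https://vibetracking.dev/import#${encoded}`;

// Decoding the fragment back into the stats object.
const decoded = JSON.parse(
  Buffer.from(url.split("#")[1], "base64url").toString("utf8")
);
console.log(url, decoded);
```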
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
The CLI is based on tokscale by @junhoyeo.
MIT