A unified web console for managing AI models, testing them in an interactive playground, monitoring usage and costs in real-time, and browsing the full model catalog — all from a single dashboard.
```shell
# Query the AI Hub API
curl "https://ai-hub.koder.dev/v1/chat/completions" \
  -H "Authorization: Bearer $KODER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ],
    "stream": true
  }'
```
A complete platform for managing, testing, and monitoring AI model usage across your organization.
Monitor requests, token consumption, costs, and provider health status at a glance. Track trends with interactive charts and real-time metrics.
Test any model with configurable parameters. Adjust temperature, max tokens, and system prompts. Compare two models side-by-side in real time.
Browse all available models with pricing, capabilities, and context window details. Filter by provider, type, or search by name.
Create, revoke, and manage API keys with per-key rate limits. Track last usage, set expiration dates, and copy keys securely.
Deep-dive into usage patterns with breakdowns by model, API key, and provider. Configurable time periods and CSV export options.
Manage team members, set spending budgets, configure rate limits, and set up alerts for cost and quota thresholds across your organization.
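Threshold-based alerts of this kind are usually expressed as declarative rules. A hypothetical configuration sketch (the field names below are illustrative, not the Hub's actual schema):

```json
{
  "budgets": {
    "monthly_cap_usd": 2000,
    "alerts": [
      { "type": "cost", "threshold_pct": 80, "notify": ["email", "slack"] },
      { "type": "quota", "threshold_pct": 90, "notify": ["email"] }
    ]
  },
  "rate_limits": { "default_rpm": 600, "per_key_rpm": 60 }
}
```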
Test any model instantly with full parameter control. No code required — just pick a model, type a prompt, and see results in real time.
```
// Playground session: GPT-4o
// Temperature: 0.7 | Max Tokens: 2048

User: "Explain the difference between fine-tuning and RAG in two sentences."

Assistant: Fine-tuning modifies the model's weights by training on
domain-specific data, permanently embedding new knowledge into the model
itself. RAG (Retrieval-Augmented Generation) keeps the model unchanged but
augments each prompt with relevant documents retrieved at query time.
```
Browse every model available through the Koder AI Gateway. Compare pricing, context windows, and capabilities at a glance.
```shell
# List available models via API
curl "https://ai-hub.koder.dev/v1/models" \
  -H "Authorization: Bearer $KEY"
```

```json
{
  "data": [
    { "id": "gpt-4o", "provider": "openai",
      "pricing": { "input": 2.50, "output": 10.00 } },
    { "id": "claude-4-sonnet", "provider": "anthropic",
      "pricing": { "input": 3.00, "output": 15.00 } },
    ...
  ]
}
```
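The pricing figures in the catalog translate directly into per-request cost estimates. A minimal sketch, assuming the published rates are USD per million tokens (the rates below are copied from the sample response; the token counts are made up):

```python
# Per-million-token rates (assumed USD / 1M tokens), from the sample catalog.
PRICING = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
    "claude-4-sonnet": {"input": 3.00, "output": 15.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for a single request, given token counts."""
    rates = PRICING[model]
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

# A 1,500-token prompt with a 500-token reply on gpt-4o:
cost = estimate_cost("gpt-4o", 1500, 500)
print(f"${cost:.4f}")  # 1500*2.50/1e6 + 500*10.00/1e6 = $0.0088
```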
Understand exactly how your organization uses AI. Break down costs by model, API key, team, and time period.
```shell
# Fetch usage analytics
curl "https://ai-hub.koder.dev/v1/usage?period=30d" \
  -H "Authorization: Bearer $KEY"
```

```json
{
  "total_requests": 284391,
  "total_tokens": 1247000000,
  "total_cost": "$1,842.50",
  "top_models": [
    { "model": "gpt-4o", "cost": "$980.20" },
    { "model": "claude-4-sonnet", "cost": "$520.10" }
  ]
}
```
See how Koder AI Hub stacks up against other AI management consoles.
| Feature | Koder AI Hub | OpenRouter | AWS Bedrock | Azure AI Studio | Google Vertex AI |
|---|---|---|---|---|---|
| Interactive playground | ✓ | ✓ | Partial | ✓ | ✓ |
| Side-by-side model comparison | ✓ | — | — | ✓ | — |
| Unified multi-provider API | ✓ | ✓ | — | — | — |
| Real-time usage analytics | ✓ | Partial | ✓ | ✓ | ✓ |
| Per-key rate limits | ✓ | — | Partial | ✓ | Partial |
| Open-source | ✓ | — | — | — | — |
| Self-hosted option | ✓ | — | — | — | — |
| No vendor lock-in | ✓ | ✓ | — | — | — |
| Budget alerts & spending caps | ✓ | — | ✓ | ✓ | ✓ |
Koder AI Hub is a unified web console for managing AI model access across multiple providers. It includes a playground for testing models, API key management, real-time usage analytics, and organization settings — all in one place.
We support OpenAI, Anthropic, Google, Meta (Llama), Mistral, and more. New providers are added regularly. All models are accessible through a single OpenAI-compatible API.
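Because the gateway speaks the OpenAI wire format, any HTTP client or OpenAI-compatible SDK can target it by swapping the base URL. A minimal standard-library sketch that builds (but does not send) a chat request; the API key is a placeholder:

```python
import json
import urllib.request

BASE_URL = "https://ai-hub.koder.dev/v1"
API_KEY = "sk-example"  # placeholder; use a real key from the console

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-style chat completion request for the Hub gateway."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# The same request shape works for every provider behind the gateway:
req = build_chat_request("claude-4-sonnet", "Hello!")
```

Swapping `"claude-4-sonnet"` for `"gpt-4o"` or any other catalog model requires no other changes, which is the point of the unified API.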
Yes! The Playground includes a comparison mode where you can send the same prompt to two different models simultaneously and see their responses side by side. Great for evaluating quality, latency, and cost trade-offs.
Absolutely. We use TLS encryption for all API traffic, authenticate via Koder ID (OIDC), and never store your prompt data beyond the session. API keys are hashed at rest and can be revoked instantly.
Yes. Koder AI Hub is open-source and can be deployed on your own infrastructure. The entire stack (gateway, billing, console) runs as a single binary or Docker container.
You pay for the tokens you consume at each model's published rate. The Hub console itself is free to use. Self-hosted deployments have no platform fees at all.
One console. Every model. Full visibility into usage and costs.