AI Command Center

Meet AI Hub

A unified web console for managing AI models, testing them in an interactive playground, monitoring usage and costs in real-time, and browsing the full model catalog — all from a single dashboard.

# Query the AI Hub API
curl "https://ai-hub.koder.dev/v1/chat/completions" \
  -H "Authorization: Bearer $KODER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ],
    "stream": true
  }'

Everything You Need

A complete platform for managing, testing, and monitoring AI model usage across your organization.

Dashboard

Monitor requests, token consumption, costs, and provider health status at a glance. Track trends with interactive charts and real-time metrics.

Playground

Test any model with configurable parameters. Adjust temperature, max tokens, and system prompts. Compare two models side-by-side in real time.

Model Catalog

Browse all available models with pricing, capabilities, and context window details. Filter by provider, type, or search by name.

API Keys

Create, revoke, and manage API keys with per-key rate limits. Track last usage, set expiration dates, and copy keys securely.

Usage Analytics

Deep-dive into usage patterns with breakdowns by model, API key, and provider. Configurable time periods and CSV export options.

Settings

Manage team members, set spending budgets, configure rate limits, and set up alerts for cost and quota thresholds across your organization.
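As a rough sketch of the check a spending alert implies (the budget, spend, and 90% threshold below are invented for illustration, not product defaults):

```shell
# Hypothetical spending-alert check: fire when spend crosses a
# percentage threshold of the monthly budget. All values are made up.
budget=2000        # monthly budget in USD
spent=1842.50      # spend so far this month
threshold=90       # alert when 90% of the budget is used

awk -v b="$budget" -v s="$spent" -v t="$threshold" 'BEGIN {
  pct = s / b * 100
  if (pct >= t)
    printf "ALERT: %.1f%% of $%d budget used\n", pct, b
  else
    printf "OK: %.1f%% of $%d budget used\n", pct, b
}'
# → ALERT: 92.1% of $2000 budget used
```

The Hub evaluates thresholds server-side; this only shows the arithmetic behind an alert.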

Interactive Playground

Test any model instantly with full parameter control. No code required — just pick a model, type a prompt, and see results in real time.

  • Chat with any model from any provider
  • Adjust temperature, top-p, max tokens, and system prompts
  • Compare two models side-by-side
  • View token counts, latency, and cost per request
// Playground session: GPT-4o
// Temperature: 0.7 | Max Tokens: 2048

User: "Explain the difference between
  fine-tuning and RAG in two sentences."

Assistant:
Fine-tuning modifies the model's weights
by training on domain-specific data, permanently
embedding new knowledge into the model itself.

RAG (Retrieval-Augmented Generation) keeps
the model unchanged but augments each prompt
with relevant documents retrieved at query time.

Model Catalog

Browse every model available through the Koder AI Gateway. Compare pricing, context windows, and capabilities at a glance.

  • OpenAI, Anthropic, Google, Meta, Mistral, and more
  • Filter by provider, modality, or capability
  • View per-token pricing for input and output
  • New models added automatically as providers release them
# List available models via API (pricing shown in USD per 1M tokens)
curl "https://ai-hub.koder.dev/v1/models" \
  -H "Authorization: Bearer $KODER_API_KEY"

{
  "data": [
    { "id": "gpt-4o",
      "provider": "openai",
      "pricing": { "input": 2.50, "output": 10.00 } },
    { "id": "claude-4-sonnet",
      "provider": "anthropic",
      "pricing": { "input": 3.00, "output": 15.00 } },
    ...
  ]
}

Usage Analytics

Understand exactly how your organization uses AI. Break down costs by model, API key, team, and time period.

  • Real-time dashboard with interactive charts
  • Cost breakdown by model, key, and provider
  • Set budget alerts and spending caps
  • Export reports as CSV for accounting
# Fetch usage analytics
curl "https://ai-hub.koder.dev/v1/usage?period=30d" \
  -H "Authorization: Bearer $KODER_API_KEY"

{
  "total_requests": 284391,
  "total_tokens": 1247000000,
  "total_cost": "$1,842.50",
  "top_models": [
    { "model": "gpt-4o", "cost": "$980.20" },
    { "model": "claude-4-sonnet", "cost": "$520.10" }
  ]
}

How It Compares

See how Koder AI Hub stacks up against other AI management consoles.

Features compared across Koder AI Hub, OpenRouter, AWS Bedrock, Azure AI Studio, and Google Vertex AI:

  • Interactive playground
  • Side-by-side model comparison
  • Unified multi-provider API
  • Real-time usage analytics
  • Per-key rate limits
  • Open-source
  • Self-hosted option
  • No vendor lock-in
  • Budget alerts & spending caps

Frequently Asked Questions

What is Koder AI Hub?

Koder AI Hub is a unified web console for managing AI model access across multiple providers. It includes a playground for testing models, API key management, real-time usage analytics, and organization settings — all in one place.

Which providers and models are supported?

We support OpenAI, Anthropic, Google, Meta (Llama), Mistral, and more. New providers are added regularly. All models are accessible through a single OpenAI-compatible API.

Can I compare models side by side?

Yes! The Playground includes a comparison mode where you can send the same prompt to two different models simultaneously and see their responses side by side. Great for evaluating quality, latency, and cost trade-offs.

Is my data secure?

Absolutely. We use TLS encryption for all API traffic, authenticate via Koder ID (OIDC), and never store your prompt data beyond the session. API keys are hashed at rest and can be revoked instantly.
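For context on "hashed at rest", here is a minimal sketch of the general technique: store a salted digest of the key and verify a presented key by recomputing it. The key, salt, and digest choice below are invented; the Hub's actual scheme is an implementation detail.

```shell
# Sketch only: keep a salted SHA-256 digest of the key, never the
# plaintext. Key and salt values here are made up.
api_key="khub_live_example_key"
salt="per-key-random-salt"

stored=$(printf '%s%s' "$salt" "$api_key" | sha256sum | cut -d' ' -f1)

# Verifying a presented key: recompute and compare digests.
presented_key="khub_live_example_key"
digest=$(printf '%s%s' "$salt" "$presented_key" | sha256sum | cut -d' ' -f1)
[ "$digest" = "$stored" ] && echo "key accepted"
# → key accepted
```

A production system would typically use an HMAC or a dedicated key-derivation function rather than bare SHA-256; the point here is only that the plaintext key never needs to be stored.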

Can I self-host Koder AI Hub?

Yes. Koder AI Hub is open-source and can be deployed on your own infrastructure. The entire stack (gateway, billing, console) runs as a single binary or Docker container.

How does pricing work?

You pay for the tokens you consume at each model's published rate. The Hub console itself is free to use. Self-hosted deployments have no platform fees at all.
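Using the catalog rates shown earlier (gpt-4o at $2.50 per 1M input tokens and $10.00 per 1M output tokens), a bill can be estimated directly from token counts; the token figures below are invented:

```shell
# Estimate cost from token usage at gpt-4o catalog rates
# ($2.50 per 1M input tokens, $10.00 per 1M output tokens).
input_tokens=1200000
output_tokens=450000

awk -v i="$input_tokens" -v o="$output_tokens" 'BEGIN {
  cost = i / 1e6 * 2.50 + o / 1e6 * 10.00
  printf "Estimated cost: $%.2f\n", cost
}'
# → Estimated cost: $7.50
```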

Take control of your AI infrastructure

One console. Every model. Full visibility into usage and costs.

Download
View on Flow