Multi-Model AI Routing
SuperBuilder connects to multiple AI providers and lets you choose the right model for each task. Swap models mid-conversation, configure per-project defaults, or let SuperBuilder route automatically based on task complexity.
Supported Providers
SuperBuilder works with:
- Anthropic — Claude Haiku, Sonnet, and Opus, directly through the Anthropic API
- OpenRouter — access to dozens of models including Claude, Gemini, Llama, Mistral, and more through a single API key
- Vercel AI Gateway — enterprise-grade routing with rate limiting, caching, and spend controls
- Local models — run models locally via compatible inference servers (Ollama, LM Studio, etc.)
You configure provider credentials once in settings and can then use any model those providers support from any thread.
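As a rough mental model, provider setup boils down to a map from provider name to credentials or endpoint. The sketch below is purely illustrative — the field names are hypothetical, not SuperBuilder's actual settings schema:

```python
# Hypothetical sketch of a provider-credentials config. Field names and
# key prefixes are illustrative, not SuperBuilder's real settings format.
providers = {
    "anthropic": {"api_key": "sk-ant-placeholder"},       # direct Anthropic access
    "openrouter": {"api_key": "sk-or-placeholder"},       # one key, many models
    "local": {"base_url": "http://localhost:11434/v1"},   # OpenAI-compatible server
}

def available_providers(config):
    """Return provider names that have credentials or an endpoint configured."""
    return sorted(
        name for name, cfg in config.items()
        if cfg.get("api_key") or cfg.get("base_url")
    )

print(available_providers(providers))  # ['anthropic', 'local', 'openrouter']
```

Once a provider appears in this map, every thread can select any of its models without further setup.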
The Model Lineup
Different tasks warrant different models. Here's how to think about it:
Claude Haiku — the fast, cheap model. Use it for:
- Searching and reading your codebase to answer questions
- Quick explanations of what code does
- Low-stakes refactors on small files
- Exploratory tasks where you're not sure what you want yet
Cost: roughly 1/20th of Opus. Speed: noticeably faster.
Claude Sonnet — the balanced model. Use it for:
- Most day-to-day coding tasks
- Writing tests, fixing bugs, implementing features
- Multi-file changes with moderate complexity
- Standard refactors and code reviews
Sonnet is the right default for most work. Good quality, reasonable cost, solid speed.
Claude Opus — the most capable model. Use it for:
- Complex architectural decisions
- Large refactors spanning many files
- Hard bugs that require deep reasoning
- Anything where you've tried a lighter model and it didn't get it right
Cost: the most expensive. Worth it when the task genuinely requires it.
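The tiering above can be sketched as a simple routing heuristic. The function name, task labels, and thresholds below are invented for illustration — they are not how SuperBuilder's automatic routing actually works:

```python
def pick_model(task, files_touched, failed_with_lighter=False):
    """Illustrative heuristic mirroring the tiering described above."""
    if failed_with_lighter or files_touched > 10:
        return "claude-opus"      # deep reasoning, large refactors
    if task in ("search", "explain", "explore"):
        return "claude-haiku"     # fast, cheap reads and explanations
    return "claude-sonnet"        # balanced default for day-to-day coding

print(pick_model("explain", files_touched=1))    # claude-haiku
print(pick_model("implement", files_touched=3))  # claude-sonnet
print(pick_model("refactor", files_touched=25))  # claude-opus
```

The escalation path in the last bullet — retry with a heavier model after a lighter one fails — maps to the `failed_with_lighter` flag here.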
Switching Models Mid-Conversation
You can switch models mid-thread without losing context. Common pattern:
- Start with Haiku to explore the codebase and understand the problem
- Switch to Sonnet to implement the solution
- If the solution gets complex, upgrade to Opus for the hard parts
The conversation history carries over. The new model sees everything that was said before.
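Conceptually, a mid-thread switch just means sending the same message history to a different model. In the sketch below, `call_model` is a stand-in for a real provider call, used only to show that the new model receives the full thread:

```python
def call_model(model, history):
    """Stand-in for a provider call; records what context the model saw."""
    return {"model": model, "context_messages": len(history)}

history = [
    {"role": "user", "content": "Where is rate limiting implemented?"},
    {"role": "assistant", "content": "In the request middleware."},
    {"role": "user", "content": "Now add a per-user limit."},
]

# Started the thread on Haiku for exploration; upgrade to Sonnet to implement.
reply = call_model("claude-sonnet", history)
print(reply["context_messages"])  # 3 — the new model sees the whole thread
```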
Per-Project Model Defaults
Configure default models per project in settings:
- Interactive threads — the model used when you're actively in a conversation
- Background agents — the model used for scheduled or queued tasks
- Event loop — the model used for GitHub event responses
For most teams, Sonnet as the default with Opus available for complex tasks covers 90% of cases.
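A per-project default configuration amounts to a small mapping from context to model, with explicit choices taking precedence. The key names below are hypothetical, not SuperBuilder's real settings keys:

```python
# Illustrative per-project defaults; key names are hypothetical.
project_defaults = {
    "interactive": "claude-sonnet",  # live conversations
    "background": "claude-haiku",    # scheduled or queued tasks
    "event_loop": "claude-sonnet",   # GitHub event responses
}

def model_for(context, override=None):
    """Resolve the model for a context; an explicit choice wins."""
    return override or project_defaults[context]

print(model_for("background"))                   # claude-haiku
print(model_for("interactive", "claude-opus"))   # claude-opus, explicit upgrade
```

This matches the recommendation above: a Sonnet-heavy default table, with Opus passed explicitly when a task warrants it.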
Provider Profiles
SuperBuilder supports multiple accounts per provider — useful if you have a personal API key and a team API key, or if you're managing multiple OpenRouter accounts.
Provider profiles let you:
- Switch between API keys without re-entering credentials
- Set per-profile spending limits
- Use different providers for different projects
OpenRouter Access
OpenRouter gives you access to models from many labs through one API key. This is useful when:
- You want to try models from different providers without managing multiple API keys
- You need a specific model that isn't available through your primary provider
- You want fallback routing — if one provider is down, use another
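Fallback routing is conceptually a loop that tries providers in order until one answers. The sketch below uses stand-in callables rather than real provider APIs, and the names are invented for illustration:

```python
class ProviderDown(Exception):
    """Raised by a stand-in provider to simulate an outage."""

def with_fallback(providers, prompt):
    """Try each (name, call) pair in order; return the first success."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except ProviderDown as exc:
            errors.append((name, str(exc)))
    raise RuntimeError(f"all providers failed: {errors}")

def flaky(prompt):      # simulates a provider outage
    raise ProviderDown("503 Service Unavailable")

def healthy(prompt):
    return f"answer to: {prompt}"

name, answer = with_fallback([("primary", flaky), ("openrouter", healthy)], "hi")
print(name)  # openrouter — the fallback handled the request
```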
SuperBuilder's model selector shows OpenRouter models alongside native provider models so you can switch seamlessly.
Local Model Support
For teams with data sensitivity requirements or cost constraints, SuperBuilder can route to local models running on your machine or a local server. Support comes via any OpenAI-compatible inference endpoint:
- Ollama — run Llama, Mistral, Gemma, and others locally
- LM Studio — local model runner with a model library
- Custom endpoints — any server that speaks the OpenAI API format
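"Speaks the OpenAI API format" means accepting a chat-completions request like the one below. This sketch only builds the payload; a real client would POST it to the server's chat completions route (Ollama's default is `http://localhost:11434/v1/chat/completions` — adjust for your setup), and the model name here is just an example of whatever your server has loaded:

```python
import json

# Chat request in the OpenAI-compatible format accepted by servers like
# Ollama and LM Studio. Built locally here; no network call is made.
payload = {
    "model": "llama3",  # whichever model your local server has loaded
    "messages": [
        {"role": "system", "content": "You are a code assistant."},
        {"role": "user", "content": "Summarize this diff."},
    ],
    "temperature": 0.2,
}

body = json.dumps(payload)
print(json.loads(body)["model"])  # llama3
```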
Local models have no per-token cost, but their output quality is lower than that of frontier models. Good for: bulk operations, tasks where data can't leave your network, cost-sensitive batch work.
Frequently Asked Questions
How do I know which model is being used for a given response?
The model name appears in the thread view alongside each response. The cost display also reflects which model was used.
Can I use different models for different tools within the same task?
Not at the sub-task level today — a task uses one model. But you can switch models between tasks in a conversation.
What if a provider is down?
SuperBuilder shows provider status indicators. If your primary provider is unavailable, you can switch to another in seconds without losing your work.
Do I need an API key for each provider?
Yes. SuperBuilder uses your credentials to call providers directly — it's not a proxy. This means you get your provider's pricing directly and you're in control of your keys.
Can I set a fallback model?
Not yet through the UI, but it's on the roadmap. For now, OpenRouter's native fallback routing covers this use case.