
Multi-Model AI Routing

SuperBuilder connects to multiple AI providers and routes each task to the right model: automatically, based on task complexity, or per your choice. Use fast, cheap models for exploration and powerful models for complex work, swap models mid-conversation, and set per-project defaults, all from one interface.

Supported Providers

SuperBuilder works with Anthropic's Claude models, OpenRouter, and any OpenAI-compatible local endpoint.

You configure your provider credentials once in settings and use any model they support from any thread.

The Model Lineup

Different tasks warrant different models. Here's how to think about it:

Claude Haiku — the fast, cheap model. Best for quick exploration and other lightweight tasks.

Cost: roughly 1/20th of Opus. Speed: noticeably faster.

Claude Sonnet — the balanced model. It's the right default for most work: good quality, reasonable cost, solid speed.

Claude Opus — the most capable model. Reserve it for the hardest problems.

Cost: the most expensive. Worth it when the task genuinely requires it.
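One way to picture complexity-based routing is a function that maps a task's difficulty to a model tier. This is a hypothetical sketch, not SuperBuilder's actual logic; the model names and thresholds are illustrative only:

```python
# Hypothetical sketch of complexity-based routing — not SuperBuilder's
# actual implementation. Model names and tiers are illustrative.

def route_model(complexity: str) -> str:
    """Pick a model tier for a task of the given complexity."""
    tiers = {
        "low": "claude-haiku",      # exploration, quick lookups
        "medium": "claude-sonnet",  # day-to-day implementation
        "high": "claude-opus",      # genuinely hard problems
    }
    # Sonnet is the sensible default when complexity is unknown.
    return tiers.get(complexity, "claude-sonnet")
```

Under this sketch, `route_model("low")` picks the cheap tier, and anything unrecognized falls back to the balanced default.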

Switching Models Mid-Conversation

You can switch models mid-thread without losing context. Common pattern:

  1. Start with Haiku to explore the codebase and understand the problem
  2. Switch to Sonnet to implement the solution
  3. If the solution gets complex, upgrade to Opus for the hard parts

The conversation history carries over. The new model sees everything that was said before.
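History carries over because chat APIs are stateless: the client re-sends the full message list on every call, so switching models simply means sending the same history with a new model name. A minimal sketch, with the provider call stubbed out and the message format following the common chat-API shape:

```python
# Sketch of a mid-thread model switch. The provider call is stubbed;
# the point is that the same history list is re-sent to the new model.

history = [
    {"role": "user", "content": "Explore the codebase and find the bug."},
    {"role": "assistant", "content": "The bug is in the retry logic."},
]

def send(model: str, messages: list) -> dict:
    """Stand-in for a real provider call: returns the request we'd send."""
    return {"model": model, "messages": list(messages)}

# Step 1 ran on Haiku; upgrade to Sonnet for the implementation.
history.append({"role": "user", "content": "Now implement the fix."})
request = send("claude-sonnet", history)
# The new model receives everything said before the switch.
```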

Per-Project Model Defaults

You can set a default model for each project in settings.

For most teams, Sonnet as the default with Opus available for complex tasks covers 90% of cases.
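The resolution order can be pictured like this — a hypothetical sketch, since SuperBuilder's actual settings format isn't documented here. The keys and structure are illustrative:

```python
# Hypothetical settings resolution: per-project default, falling back to
# a global default. Keys and structure are illustrative, not SuperBuilder's.

settings = {
    "default_model": "claude-sonnet",
    "projects": {
        "billing-service": {"default_model": "claude-opus"},
    },
}

def model_for(project: str) -> str:
    """Return the project's default model, or the global default."""
    project_cfg = settings["projects"].get(project, {})
    return project_cfg.get("default_model", settings["default_model"])
```

In this sketch, the `billing-service` project resolves to Opus while every other project falls back to the global Sonnet default.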

Provider Profiles

SuperBuilder supports multiple accounts per provider — useful if you have a personal API key and a team API key, or if you're managing multiple OpenRouter accounts. Provider profiles keep those credentials separate so you can switch between them as needed.

OpenRouter Access

OpenRouter gives you access to models from many labs through one API key. This is useful when you want to try models from labs you don't hold a direct account with.

SuperBuilder's model selector shows OpenRouter models alongside native provider models so you can switch seamlessly.

Local Model Support

For teams with data sensitivity requirements or cost constraints, SuperBuilder can route to local models running on your machine or a local server. Any OpenAI-compatible inference endpoint works.

Local models have no per-token cost, but their quality is lower than frontier models. Good for: bulk operations, tasks where data can't leave your network, cost-sensitive batch work.
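"OpenAI-compatible" means the server accepts the standard `/v1/chat/completions` request shape, so any client that builds that payload can talk to it. A minimal sketch using only the standard library; the base URL and model name are placeholders for whatever your local server exposes, and the request is constructed but not actually sent here:

```python
import json
import urllib.request

def chat_payload(model: str, prompt: str) -> dict:
    """Standard chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def call_local(base_url: str, payload: dict) -> bytes:
    """POST the payload to an OpenAI-compatible endpoint (not invoked here)."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Model name is illustrative — use whatever your local server serves.
payload = chat_payload("llama-3-8b", "Summarize this diff.")
```

Because the request shape is the same everywhere, switching from a hosted provider to a local endpoint is just a change of base URL.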

Frequently Asked Questions

How do I know which model is being used for a given response?

The model name appears in the thread view alongside each response. The cost display also reflects which model was used.

Can I use different models for different tools within the same task?

Not at the sub-task level today — a task uses one model. But you can switch models between tasks in a conversation.

What if a provider is down?

SuperBuilder shows provider status indicators. If your primary provider is unavailable, you can switch to another in seconds without losing your work.

Do I need an API key for each provider?

Yes. SuperBuilder uses your credentials to call providers directly — it's not a proxy. This means you get your provider's pricing directly and you're in control of your keys.

Can I set a fallback model?

Not yet through the UI, but it's on the roadmap. For now, OpenRouter's native fallback routing covers this use case.


Try it with SuperBuilder

Free to download. Bring your own API key. No subscription required to get started.

Download for Mac