Models
TatsuCode lets you switch between cloud, subscription, and local models quickly.
Use:
/models
This opens the model selector with filtering and capability labels.
Provider Groups
Models are grouped by source:
- OpenAI Plus/Pro (subscription OAuth)
- GitHub Copilot (subscription OAuth)
- OpenRouter (API key)
- Local Providers (Ollama, LM Studio, custom endpoints)
Some provider routes can be temporarily limited by upstream authorization status. If one route is unavailable, use another provider (for example, Gemini via OpenRouter).
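The fallback pattern above can be sketched as trying provider routes in order until one succeeds. This is an illustrative sketch, not TatsuCode's implementation; the provider names and call interface are hypothetical.

```python
# Hypothetical sketch of the provider-fallback idea: try each route in
# order and return the first successful response. Names are illustrative.

def complete_with_fallback(prompt, providers):
    """Try each (name, callable) provider in order; return first success."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # e.g. upstream auth failure or rate limit
            errors.append((name, exc))
    raise RuntimeError(f"All provider routes failed: {errors}")

# Example: the direct route is down, so OpenRouter picks up the request.
def gemini_direct(prompt):
    raise ConnectionError("upstream authorization unavailable")

def gemini_via_openrouter(prompt):
    return f"response to: {prompt}"

route, text = complete_with_fallback(
    "explain this stack trace",
    [("gemini-direct", gemini_direct), ("openrouter/gemini", gemini_via_openrouter)],
)
print(route)  # openrouter/gemini
```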
Notable Model Families
Claude Family
| Model | Context | Best For |
|---|---|---|
| Claude Opus 4.6 | 200K | Deep reasoning, architecture decisions |
| Claude Sonnet 4.5 | 200K | Strong day-to-day coding |
| Claude Haiku 4.5 | 200K | Fast edits and iteration |
OpenAI Family
| Model | Context | Best For |
|---|---|---|
| GPT-5.3 Codex | 400K | Advanced coding and refactoring |
| GPT-5.3 Codex Spark | 128K | Very fast coding iteration |
| GPT-5.2 / 5.1 Codex variants | 400K | High-quality coding and reasoning |
Gemini Family
| Model | Context | Best For |
|---|---|---|
| Gemini 3 Pro | ~1M | Large codebase context and long tasks |
| Gemini 3 Flash | ~1M | Faster long-context tasks |
GitHub Copilot Route (Examples)
Depending on plan, client, and current availability, the Copilot route can expose a mix of models such as:
- GPT-5 family (including coding-focused variants)
- Claude family (including Opus/Sonnet variants)
- Gemini family (including Pro/Flash variants)
- Fast coding models (for example Grok Code Fast)
Official pricing reference (subject to change):
- Copilot Free: $0 (limited)
- Copilot Pro: $10/month or $100/year
- Copilot Pro+: $39/month or $390/year
- Copilot Business: $19/user/month
- Copilot Enterprise: varies by enterprise agreement
Model Capabilities
Context Window
The context window controls how much code and history the model can process in one turn.
- ~1M context: Excellent for wide codebase understanding
- 400K context: Great for serious multi-file coding tasks
- 200K context: Strong for most production workflows
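To get a feel for these tiers, here is a rough sketch that estimates whether a set of files fits a given window, using the common ~4 characters-per-token approximation. This is a heuristic for intuition only, not TatsuCode's actual tokenizer, and the window sizes are taken from the tables above.

```python
# Rough sketch: estimate whether text fits a model's context window,
# using the common ~4 characters-per-token approximation (a heuristic,
# not a real tokenizer). Window sizes come from the model tables above.

CONTEXT_WINDOWS = {
    "gemini-3-pro": 1_000_000,
    "gpt-5.3-codex": 400_000,
    "claude-sonnet-4.5": 200_000,
}

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fits(model: str, texts: list[str], reserve: int = 20_000) -> bool:
    """Leave `reserve` tokens free for the system prompt and the reply."""
    total = sum(estimate_tokens(t) for t in texts)
    return total + reserve <= CONTEXT_WINDOWS[model]

# A ~500K-character codebase (~125K tokens) fits a 200K window...
big = ["x" * 500_000]
print(fits("claude-sonnet-4.5", big))  # True
# ...but a ~3M-character one (~750K tokens) needs a ~1M window.
huge = ["x" * 3_000_000]
print(fits("claude-sonnet-4.5", huge), fits("gemini-3-pro", huge))  # False True
```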
Vision
Many flagship models support image input for:
- UI screenshot debugging
- visual regression checks
- diagram or mockup understanding
Reasoning Controls
For supported models, tune reasoning depth:
/reason-effort
/reason-display
Use higher effort for complex debugging and planning; lower effort for speed.
Choosing the Right Model
Quick Pick Guide
| Task | Recommended Starting Point |
|---|---|
| Fast coding iterations | Claude Haiku 4.5, GPT-5.3 Codex Spark |
| Main coding workflow | Claude Sonnet 4.5, GPT-5.2/5.3 Codex |
| Very large context tasks | Gemini 3 Pro |
| Deepest analysis | Claude Opus 4.6, high-effort GPT-5.x Codex |
Cost Strategy
- Use your connected subscription routes first
- Keep OpenRouter for breadth and fallback
- Use local models for private/offline workflows
Model List Management
/models-add
/models-remove
- Add only what you actively use
- Keep a “fast + deep” pair in your favorites
- Switch models by task rather than sticking to one for everything
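The "fast + deep pair" habit above can be sketched as a small task-to-model mapping. The model names come from the tables earlier on this page; the mapping itself is just an example, not a TatsuCode feature.

```python
# Illustrative sketch of switching models by task instead of using one
# model for everything. The mapping is an example; model names are from
# the tables above.

FAVORITES = {
    "quick-edit": "claude-haiku-4.5",     # fast iteration
    "main-coding": "claude-sonnet-4.5",   # day-to-day work
    "large-context": "gemini-3-pro",      # whole-repo tasks
    "deep-analysis": "claude-opus-4.6",   # architecture decisions
}

def pick_model(task: str) -> str:
    """Fall back to the main coding model for unlisted task types."""
    return FAVORITES.get(task, FAVORITES["main-coding"])

print(pick_model("quick-edit"))   # claude-haiku-4.5
print(pick_model("refactor"))     # claude-sonnet-4.5 (fallback)
```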
Usage and Quota
/usage
/usage-quota
- /usage shows session token usage
- /usage-quota shows subscription/API limit information where supported
If context grows too large:
/compact
Recent builds improved auto-compaction reliability and long-session recovery.
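Conceptually, compaction keeps the system prompt and the most recent turns while dropping (or summarizing) the oldest ones once the conversation outgrows a token budget. The sketch below illustrates that idea under a rough chars-per-token heuristic; it is not TatsuCode's implementation.

```python
# Conceptual sketch of context compaction: keep the system prompt and the
# newest turns under a token budget, dropping the oldest turns. This
# illustrates the idea behind /compact, not TatsuCode's implementation.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)   # rough chars-per-token heuristic

def compact(messages, budget):
    """messages: list of (role, text); the first entry is the system prompt."""
    system, rest = messages[0], messages[1:]
    kept = []
    used = estimate_tokens(system[1])
    for role, text in reversed(rest):        # walk newest-first
        cost = estimate_tokens(text)
        if used + cost > budget:
            break
        kept.append((role, text))
        used += cost
    return [system] + kept[::-1]             # restore chronological order

history = [("system", "You are a coding assistant.")] + [
    ("user", f"question {i}: " + "x" * 400) for i in range(50)
]
trimmed = compact(history, budget=2_000)
print(len(history), "->", len(trimmed))      # older turns dropped, newest kept
```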
Next Steps
- Providers — connect and manage provider access
- Settings — reasoning, temperature, and display options
- AI Capabilities — what TatsuCode can do with these models