Docs v0.9.93

Models

TatsuCode lets you switch between cloud, subscription, and local models quickly.

Use:

/models

This opens the model selector with filtering and capability labels.


Provider Groups

Models are grouped by source:

  • OpenAI Plus/Pro (subscription OAuth)
  • GitHub Copilot (subscription OAuth)
  • OpenRouter (API key)
  • Local Providers (Ollama, LM Studio, custom endpoints)

Some provider routes can be temporarily limited by upstream authorization status. If one route is unavailable, use another provider (for example, Gemini via OpenRouter).
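As an illustration of what a local provider endpoint looks like, here is a hedged sketch of building a chat request against an OpenAI-compatible local server. The base URL assumes Ollama's default address, and "llama3" is a placeholder model name; neither is a TatsuCode setting, and LM Studio or a custom endpoint would use a different address.

```python
import json

# Sketch: build a chat request for an OpenAI-compatible local endpoint.
# Assumes Ollama's default address; adjust base_url for LM Studio or a
# custom server. "llama3" is a placeholder model name.
def build_chat_request(prompt, model="llama3",
                       base_url="http://localhost:11434/v1"):
    url = f"{base_url}/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(payload)

url, body = build_chat_request("Explain this stack trace")
```

Because the request shape is the standard chat-completions format, the same sketch applies to any of the local providers listed above.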

Notable Model Families

Claude Family

Model              Context  Best For
Claude Opus 4.6    200K     Deep reasoning, architecture decisions
Claude Sonnet 4.5  200K     Strong day-to-day coding
Claude Haiku 4.5   200K     Fast edits and iteration

OpenAI Family

Model                         Context  Best For
GPT-5.3 Codex                 400K     Advanced coding and refactoring
GPT-5.3 Codex Spark           128K     Very fast coding iteration
GPT-5.2 / 5.1 Codex variants  400K     High-quality coding and reasoning

Gemini Family

Model           Context  Best For
Gemini 3 Pro    ~1M      Large codebase context and long tasks
Gemini 3 Flash  ~1M      Faster long-context tasks

GitHub Copilot Route (Examples)

Depending on plan, client, and current availability, Copilot can expose mixes such as:

  • GPT-5 family (including coding-focused variants)
  • Claude family (including Opus/Sonnet variants)
  • Gemini family (including Pro/Flash variants)
  • Fast coding models (for example Grok Code Fast)

Official pricing reference (subject to change):

  • Copilot Free: $0 (limited)
  • Copilot Pro: $10/month or $100/year
  • Copilot Pro+: $39/month or $390/year
  • Copilot Business: $19/user/month
  • Copilot Enterprise: varies by enterprise agreement

Model Capabilities

Context Window

The context window controls how much code and history the model can process in one turn.

  • ~1M context: Excellent for wide codebase understanding
  • 400K context: Great for serious multi-file coding tasks
  • 200K context: Strong for most production workflows
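To gauge whether a task fits a given context window, a rough estimate is enough. The sketch below uses the common ~4 characters-per-token heuristic, which is only an approximation (real tokenizers vary by model and language), and the output reserve is an assumed value, not a TatsuCode constant.

```python
# Rough sketch: estimate whether text fits a model's context window.
# Uses the ~4 characters-per-token heuristic; actual token counts
# depend on the model's tokenizer.
def estimate_tokens(text: str) -> int:
    return len(text) // 4

def fits_in_context(text: str, context_tokens: int,
                    reserve_for_output: int = 8_000) -> bool:
    # Leave headroom for the model's reply and system overhead.
    return estimate_tokens(text) <= context_tokens - reserve_for_output

code = "x = 1\n" * 50_000                    # ~300K characters of source
print(fits_in_context(code, 200_000))        # ~75K tokens  -> True
print(fits_in_context(code * 20, 200_000))   # ~1.5M tokens -> False
```

When an estimate like this comes out close to the limit, prefer a larger-context model (for example a ~1M-context route) rather than trimming input aggressively.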

Vision

Many flagship models support image input for:

  • UI screenshot debugging
  • visual regression checks
  • diagram or mockup understanding

Reasoning Controls

For supported models, tune reasoning depth:

/reason-effort
/reason-display

Use higher effort for complex debugging and planning; lower effort for speed.


Choosing the Right Model

Quick Pick Guide

Task                      Recommended Starting Point
Fast coding iterations    Claude Haiku 4.5, GPT-5.3 Codex Spark
Main coding workflow      Claude Sonnet 4.5, GPT-5.2/5.3 Codex
Very large context tasks  Gemini 3 Pro
Deepest analysis          Claude Opus 4.6, high-effort GPT-5.x Codex

Cost Strategy

  • Use your connected subscription routes first
  • Keep OpenRouter for breadth and fallback
  • Use local models for private/offline workflows

Model List Management

/models-add
/models-remove

  • Add only what you actively use
  • Keep a “fast + deep” pair in your favorites
  • Switch models by task rather than sticking to one for everything

Usage and Quota

/usage
/usage-quota

  • /usage shows session token usage
  • /usage-quota shows subscription/API limit information where supported

If context grows too large:

/compact

Recent builds improved auto-compaction reliability and long-session recovery.
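Conceptually, compaction keeps a conversation under a token budget by shedding the oldest turns first. The sketch below illustrates that idea only; it is not TatsuCode's actual algorithm, and the character-based token estimate is the same rough heuristic as above.

```python
# Conceptual sketch of context compaction (not TatsuCode's actual
# algorithm): when a conversation exceeds a token budget, evict the
# oldest turns first, always keeping the most recent one.
def compact(messages, budget_tokens, chars_per_token=4):
    def tokens(msgs):
        return sum(len(m["content"]) for m in msgs) // chars_per_token

    kept = list(messages)
    while len(kept) > 1 and tokens(kept) > budget_tokens:
        kept.pop(0)  # drop the oldest turn
    return kept

history = [{"role": "user", "content": "a" * 4000},
           {"role": "assistant", "content": "b" * 4000},
           {"role": "user", "content": "c" * 400}]
trimmed = compact(history, budget_tokens=1200)
```

Real compaction typically summarizes the evicted turns instead of dropping them outright, which is why results can vary between builds.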


Next Steps

  • Providers — connect and manage provider access
  • Settings — reasoning, temperature, and display options
  • AI Capabilities — what TatsuCode can do with these models
