Reasoning is becoming a core primitive in modern AI systems — not just what model you use, but how much thinking it does before answering.
In Tellers, this is exposed through three presets: FAST, AUTO, and MAX.
These are not just “speed modes.” They are evolving configurations that control:
- which model is used
- how much reasoning is applied
- how much latency and cost you’re willing to trade for quality
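A preset, in other words, bundles a model choice with a reasoning level and a cost profile. A minimal sketch of what such a bundle might look like — all field names and model identifiers below are illustrative assumptions, not Tellers' actual internals:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch only: the class, field names, and model identifiers
# are illustrative, not Tellers' real configuration.
@dataclass(frozen=True)
class Preset:
    model: str                     # which model is used
    reasoning: str                 # how much reasoning is applied
    cost_vs_nano: Optional[float]  # rough cost multiplier vs GPT-5.4 Nano

PRESETS = {
    "FAST": Preset(model="gpt-5.4-nano", reasoning="minimal", cost_vs_nano=1.0),
    "AUTO": Preset(model="gpt-5.4-mini", reasoning="dynamic", cost_vs_nano=None),  # varies per task
    "MAX":  Preset(model="gpt-5.5",      reasoning="medium",  cost_vs_nano=25.0),
}

print(PRESETS["MAX"].model)  # gpt-5.5
```

Because a preset is a single object, the product can re-point it at newer models later without changing how you select it.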
## What “Reasoning” Actually Means
When a model uses reasoning, it spends more compute internally before producing an answer.
That usually means:
- breaking down the problem into steps
- evaluating multiple options
- maintaining more context across a complex task
In video editing, this matters a lot.
There’s a big difference between:
- “cut the last 5 seconds”
- “turn 2 hours of interviews into a coherent 60-second story with pacing, b-roll, and narrative structure”
The second task benefits massively from deeper reasoning.
But reasoning is not free — it increases both latency and cost.
## The Three Modes
### FAST
FAST is designed for speed and iteration.
- Model: GPT-5.4 Nano
- Reasoning: minimal
- Cost: very low
Use it when:
- doing quick edits (cuts, trims, transitions)
- testing prompts
- generating many variations
- iterating rapidly
FAST keeps the feedback loop extremely tight.
### AUTO
AUTO is the default balance.
- Model: GPT-5.4 Mini (adaptive)
- Reasoning: dynamic
AUTO decides how much reasoning is needed per task.
Simple actions stay fast.
Complex requests get more thinking.
This is the best mode for:
- everyday editing
- rough cuts
- b-roll insertion
- general iteration
You don’t need to think about optimisation — it handles it for you.
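One way to picture AUTO's per-task decision is a small router that assigns more reasoning to requests that look complex. This is a toy heuristic under stated assumptions — a real adaptive system would judge complexity far more robustly than keyword matching, and the hint list below is invented for illustration:

```python
# Illustrative heuristic only: a real router would not rely on keywords.
COMPLEX_HINTS = ("restructure", "narrative", "story", "derush", "multi-layer")

def pick_reasoning(request: str) -> str:
    """Assign a reasoning level in the spirit of AUTO:
    simple actions stay fast, complex requests get more thinking."""
    text = request.lower()
    if any(hint in text for hint in COMPLEX_HINTS):
        return "high"
    if len(text.split()) > 12:  # longer briefs usually need more planning
        return "medium"
    return "minimal"

print(pick_reasoning("cut the last 5 seconds"))  # minimal
```

The point is not the heuristic itself but the shape of the trade: AUTO spends reasoning tokens only where they are likely to pay off.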
### MAX
MAX is designed for complex, high-quality outputs.
- Model: GPT-5.5
- Reasoning: medium (by default)
Use it when:
- restructuring long-form content
- derushing large volumes of footage
- building multi-layer timelines
- generating sequences with dependencies
- making creative decisions across an entire video
MAX is slower and significantly more expensive — but much more capable.
## Cost & Model Trade-offs
There is a real cost difference between modes:
- GPT-5.5 is ~25× more expensive than Nano
- reasoning adds additional token usage
- deeper reasoning increases latency
GPT-5.5 is also more efficient than previous models in how it uses reasoning tokens, but the cost gap remains substantial.
You can also manually select:
- GPT-5.5 Pro → ~6× more expensive than GPT-5.5
This is useful if you want to push the limits of the agent — but it will consume credits very quickly.
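Putting the stated multipliers together shows how quickly costs compound. The arithmetic below uses only the figures mentioned above (Nano as the baseline, ~25× for GPT-5.5, ~6× on top of that for GPT-5.5 Pro); actual per-token prices are not stated here:

```python
# Rough relative-cost arithmetic from the multipliers stated above.
NANO = 1.0
GPT_5_5 = 25 * NANO        # ~25x more expensive than Nano
GPT_5_5_PRO = 6 * GPT_5_5  # ~6x more expensive than GPT-5.5

print(GPT_5_5_PRO)  # 150.0 -> roughly 150x the per-token cost of Nano
```

And that is before reasoning tokens, which add usage on top of the base multiplier.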
## Why Presets Exist
Most users don’t want to manage:
- model selection
- reasoning levels
- cost vs performance trade-offs
FAST, AUTO, and MAX abstract this away.
They will continue to evolve as models improve — the goal is simple:
> give you the best configuration for your intent, without manual tuning.
## Picking the Right Mode
| Task | Recommended mode |
|---|---|
| Quick trims, cuts, transitions | FAST |
| Rapid iteration / testing | FAST |
| Everyday editing | AUTO |
| B-roll / captions / adjustments | AUTO |
| Long-form restructuring | MAX |
| Derushing large footage sets | MAX |
| Complex AI-generated sequences | MAX |
You can switch modes at any time — they are session-level, not project-level.
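The table above can be read as a plain lookup with AUTO as the sensible fallback. The task keys below are invented labels for the table's rows, not identifiers from Tellers:

```python
# The recommendation table above as a lookup; keys are illustrative labels.
RECOMMENDED_MODE = {
    "quick_trims": "FAST",
    "rapid_iteration": "FAST",
    "everyday_editing": "AUTO",
    "broll_captions": "AUTO",
    "longform_restructuring": "MAX",
    "derushing": "MAX",
    "complex_sequences": "MAX",
}

def recommend(task: str, default: str = "AUTO") -> str:
    """Return the recommended preset for a task, falling back to AUTO."""
    return RECOMMENDED_MODE.get(task, default)

print(recommend("derushing"))  # MAX
```

Defaulting to AUTO mirrors the guidance above: when in doubt, let the adaptive mode decide.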
## FAQ

### What is the difference between FAST, AUTO, and MAX modes in Tellers?
FAST, AUTO, and MAX are evolving presets that control which model and reasoning level Tellers uses. FAST prioritises speed and low cost, AUTO balances speed and intelligence, and MAX prioritises deeper reasoning and higher-quality outputs for complex tasks.
### What models are used behind each mode?
Currently, FAST uses GPT-5.4 Nano, AUTO defaults to GPT-5.4 Mini (but can adapt), and MAX uses GPT-5.5 with medium reasoning. These mappings may evolve as models improve.
### Does MAX mode cost more?
Yes. GPT-5.5 is significantly more expensive than Nano (around 25×), and reasoning adds additional token usage. MAX should be used when the task justifies the cost.
### Can I select a model manually?
Yes. You can override presets in settings and choose specific models and reasoning levels, including GPT-5.5 Pro for maximum capability.
### What is “reasoning” in AI models?
Reasoning refers to how much compute the model uses to internally work through a problem before answering. More reasoning generally improves quality on complex tasks but increases latency and cost.
Reasoning control is not just a technical feature — it directly impacts how fast you work, how much you spend, and the quality of your output.
The key is simple:
- use FAST to move
- use AUTO to work
- use MAX when it really matters