As of April 28, 2026, Tellers has made Seedance 2 accessible to all users. If you have been waiting to try the current top-ranked AI video generation model in your workflow, the wait is over.
What Seedance 2 Is
Seedance 2.0 is ByteDance’s flagship video model. It currently holds an Elo score of 1,269 on the Artificial Analysis Video Arena — a crowdsourced blind evaluation benchmark — ranking ahead of Sora 2, Veo 3, and Runway Gen-4.5.
What separates Seedance 2 from earlier models is not raw resolution but motion quality. The model penalizes physically implausible motion during generation, not as a post-processing step. This means gravity, fabric draping, water displacement, and momentum in character movement behave correctly at the frame level, a detail that becomes obvious when you scrub through a clip on a timeline rather than watch it at normal speed.
Key Capabilities
- Multi-shot generation: A single prompt can produce multiple cuts with natural transitions — up to 15 seconds total. You are not limited to one continuous clip.
- Native audio: Dialogue, sound effects, and ambient noise are generated in the same pass as the video. No separate sync step.
- Rich reference inputs: Feed up to 9 reference images, 3 video clips, and 3 audio files into a single generation. This makes it practical for brand-consistent content, not just one-off clips (see the sketch after this list).
- Face reference and lip-sync: Upload a portrait and generate footage with lifelike expressions and synchronized dialogue.
- Resolution: Up to 1080p across multiple aspect ratios.
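What that looks like in a request depends on your integration. As a rough sketch, assuming hypothetical field names (they mirror the capabilities above, not a documented schema), a single brand-consistent generation might combine inputs like this:

```typescript
// Illustrative only: field names are hypothetical, not the actual
// Tellers or Seedance API schema. The limits mirror the list above.
interface GenerationRequest {
  model: string;
  prompt: string;
  referenceImages: string[]; // up to 9
  referenceVideos: string[]; // up to 3
  referenceAudio: string[];  // up to 3
  resolution: "720p" | "1080p";
  aspectRatio: "16:9" | "9:16" | "1:1";
}

const request: GenerationRequest = {
  model: "seedance-2",
  prompt:
    "Three-shot product reveal: macro close-up, rotating hero shot, lifestyle scene.",
  referenceImages: ["brand-logo.png", "product-front.jpg", "product-side.jpg"],
  referenceVideos: ["previous-campaign-clip.mp4"],
  referenceAudio: ["brand-voiceover-sample.wav"],
  resolution: "1080p",
  aspectRatio: "16:9",
};

console.log(JSON.stringify(request, null, 2));
```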
Multi-Shot Generation, Beyond a Single Model
Generating multiple shots inside a single generation is not new — models like Sora already handle this particularly well. It remains one of the most efficient ways to get short, coherent sequences with strong internal consistency.
Where Tellers adds value is at a different layer.
Instead of relying on a single model to carry an entire sequence, Tellers orchestrates multiple generations across models when needed — while maintaining continuity in characters, voice, visual style, and timing. The agent decides when to use a single multi-shot generation (for efficiency and local coherence), and when to break a scene into multiple controlled generations to ensure consistency at a larger scale.
In practice, this means you can combine the strengths of different models while keeping a unified result — even across longer videos or more complex structures.
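As a loose mental model of that decision (not Tellers' actual implementation, and the threshold here is an assumption drawn from Seedance 2's clip limit), the trade-off looks roughly like this:

```typescript
// A rough sketch of the orchestration trade-off, not Tellers' real logic.
interface Scene {
  durationSeconds: number;
  needsCrossSceneConsistency: boolean; // recurring characters, voice, style
}

// Seedance 2 caps a single generation at 15 seconds.
const MAX_SINGLE_GENERATION_SECONDS = 15;

function planGenerations(scene: Scene): "single-multi-shot" | "chained" {
  // Short, self-contained scenes fit one multi-shot generation:
  // cheaper, and the cuts stay locally coherent.
  if (
    scene.durationSeconds <= MAX_SINGLE_GENERATION_SECONDS &&
    !scene.needsCrossSceneConsistency
  ) {
    return "single-multi-shot";
  }
  // Longer or consistency-critical scenes are split into multiple
  // controlled generations, carrying characters, voice, and style across.
  return "chained";
}

const trailer: Scene = { durationSeconds: 40, needsCrossSceneConsistency: true };
console.log(planGenerations(trailer)); // "chained"
```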
When to Reach for Seedance 2
Seedance 2 performs particularly well when motion realism matters — action sequences, product demonstrations with physical interaction, or any footage you intend to review frame-by-frame in an editor.
For static shots or simple text-to-video where speed and cost are the priority, other models on Tellers may be more efficient. But when you need footage that holds up under scrutiny in post, Seedance 2 is currently the strongest option available.
Using Seedance 2 on Tellers
Tellers is first and foremost a chat-based video editing interface.
You can describe your intent at a high level — a prompt, a script, or just a rough idea — and the agent will determine which models to use, how to sequence generations, and how to assemble the result into a coherent video.
This includes:
- chaining multiple generations when needed,
- maintaining consistent voices across longer content,
- mixing generated footage with your own media,
- adding editable overlays, music, and effects as separate layers.
Everything is editable. You can iterate conversationally, refine specific parts, or take manual control of the timeline at any point.
Under the hood, the agent can call multiple models autonomously — including Seedance 2 — to produce the final result you asked for, even if it requires several coordinated generations.
If you are building with the Tellers API, Seedance 2 is accessible through the same endpoints you already use for other video models.
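The endpoint and request fields in the sketch below are placeholders (check the Tellers API documentation for the real shape); the point is that, in principle, only the model identifier should need to change:

```typescript
// Placeholder endpoint and fields, not documented Tellers routes.
// Swapping models should come down to the "model" identifier.
async function generateClip(prompt: string): Promise<unknown> {
  const response = await fetch("https://api.tellers.example/v1/video/generations", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.TELLERS_API_KEY}`,
    },
    body: JSON.stringify({
      model: "seedance-2", // previously e.g. "veo-3.1" or "kling"
      prompt,
      resolution: "1080p",
    }),
  });
  if (!response.ok) throw new Error(`Generation failed: ${response.status}`);
  return response.json();
}
```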
FAQ
Is Seedance 2 available to all Tellers users?
Yes. As of April 28, 2026, Seedance 2 is accessible to all users on Tellers — no waitlist or special tier required.
What is Seedance 2?
Seedance 2.0 is ByteDance's flagship AI video generation model, released in early 2026. It currently ranks #1 on the Artificial Analysis Video Arena with an Elo score of 1,269 — ahead of Sora 2, Veo 3, and Runway Gen-4.5.
How long can Seedance 2 clips be?
Up to 15 seconds per generation. Within that duration, the model can produce multiple shots with natural cuts, so a single output can contain more than one scene.
Does Seedance 2 generate audio?
Yes — audio is generated alongside the video in the same pass. Dialogue, ambient sound, and effects stay in sync without post-processing.
What input does Seedance 2 support?
Text prompts, reference images (up to 9), video clips (up to 3, with a combined length of 15 seconds), and audio files (up to 3). You can combine multiple input types in a single generation.
What other video models are available on Tellers?
Tellers supports Runway Gen-4.5, LTX Video (with first and last frame control), Kling, Hailuo, Veo 3.1, and more. Seedance 2 is now part of that lineup for all users.
Seedance 2 is now part of the default Tellers lineup. Open the app and try it in your next project.