Google I/O 2026: What AI Video Creators Should Watch

Google I/O 2026 runs May 19-20 — nine days from now. For AI video creators, the relevant question is not “will Google announce something.” It is: what would actually change about your workflow if they do?

Here is what is confirmed, what is likely, and what is worth watching for if you build with AI video generation today.

What Is Confirmed

The dates are public. I/O runs May 19-20, 2026, with Android sessions starting May 12. The keynote and developer tracks are streamed and free to access online. Google’s public framing covers “AI breakthroughs and updates in products across the company, from Gemini to Android and more.”

That is the entire confirmed surface area. Everything else in this post is informed speculation.

What Is Likely: Veo 4

Google has used I/O for Veo announcements twice before. Veo 1 launched at I/O 2024. Veo 3 launched at I/O 2025. Veo 3.1 shipped in January 2026. A May 2026 reveal of Veo 4 fits the cadence, and major outlets have flagged it as expected. Prediction markets put the odds of a Veo 4 launch before June 2026 around 69%.

What Google has not done is confirm any of it. There is no published feature list, no API documentation, no pricing. Reporting around expected capabilities — longer clips, native 4K, better character consistency, finer camera controls — is synthesized from rumors, not sourced from Google.

This matters because the gap between “expected” and “shipped” is wide in AI video. Models miss windows. Features get cut. APIs lag the demo by weeks or months. Treat the rumored feature list as a starting point for evaluation, not a roadmap to plan against.

What to Actually Watch For

If Veo 4 is announced, the headline numbers will not tell you whether to integrate it into your pipeline. These are the dimensions that will:

  • API availability and pricing: A model behind a closed waitlist is not a production tool. Look for clear pricing, rate limits, and a documented endpoint, not just a research demo.
  • Clip length and conditioning: How long can a single generation run, and what conditioning is supported — start frame, end frame, reference image, motion vectors, audio?
  • Latency and time to first frame: For agent-driven editing, generation latency dominates the user experience. A model that takes four minutes per clip is unusable for iterative workflows regardless of output quality.
  • Audio synchronization: Native dialogue, ambience, and SFX layers matter more than visual fidelity for most narrative video work.
  • Camera control vocabulary: Whether the model accepts cinematic terms — dolly, rack focus, whip pan, orbital — at the level of accuracy a director would expect.
  • Output rights and watermarks: Commercial use, indemnification, and visible or invisible watermark policies vary across providers and matter for paid work.

These are the same dimensions that matter for any new model, not just Veo 4. The point is: do not let the announcement narrative override the evaluation discipline.
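The checklist above can be sketched as a simple gating structure. Every field name, threshold, and value here is an illustrative assumption, not a published rubric or real benchmark data:

```python
from dataclasses import dataclass

# Hypothetical evaluation record for a newly announced video model.
# Field names and the 240-second latency cutoff are illustrative.
@dataclass
class ModelEvaluation:
    name: str
    has_public_api: bool          # documented endpoint, pricing, rate limits
    max_clip_seconds: float       # longest single generation
    seconds_per_clip: float       # wall-clock latency per generation
    native_audio: bool            # dialogue / ambience / SFX layers
    camera_vocabulary: bool       # understands dolly, rack focus, whip pan
    commercial_use_allowed: bool  # output rights for paid work

    def production_ready(self) -> bool:
        """A model earns a slot only if every gating dimension passes."""
        return (
            self.has_public_api
            and self.commercial_use_allowed
            # Multi-minute clips are unusable for iterative editing.
            and self.seconds_per_clip < 240
        )

candidate = ModelEvaluation(
    name="veo-4",            # hypothetical identifier
    has_public_api=False,    # no documented endpoint yet
    max_clip_seconds=60.0,
    seconds_per_clip=90.0,
    native_audio=True,
    camera_vocabulary=True,
    commercial_use_allowed=True,
)
print(candidate.production_ready())  # → False: a closed waitlist fails the gate
```

Note that the gate is conjunctive on purpose: a model that dazzles on visual quality but fails on API access or output rights still fails the evaluation.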

How Tellers Plans to React

Tellers is multi-model by design. The agent already orchestrates across Veo, Seedance, Runway, Kling, HappyHorse, and others, picking the right tool for each shot rather than locking the platform to one provider.

When Veo 4 — or any other significant model — becomes available with a stable API, we evaluate it on the dimensions above and ship it as another option if it earns its place. We do not pre-announce integrations we have not built, and we do not gate the platform on any single launch. That is the model-agnostic posture we have written about before, and it is what lets us treat I/O as one signal among many rather than a deadline.

If Veo 4 ships and meets the bar, Tellers users will see it appear in the agent’s toolset shortly after we have validated it. If it ships and does not meet the bar, the rest of the stack continues to work without change.

FAQ

When is Google I/O 2026?

Google I/O 2026 runs May 19-20, 2026, with Android-focused content starting May 12. The keynote and developer sessions are streamed and free to attend online.

Will Veo 4 be announced at Google I/O 2026?

Google has not confirmed Veo 4. However, Google announced Veo 1 at I/O 2024 and Veo 3 at I/O 2025, and Veo 3.1 shipped in January 2026. A May 2026 reveal fits the established cadence, but until Google says so officially, treat it as expectation rather than fact.

What features are likely in Veo 4?

Industry reporting points to longer clip durations, native 4K output, stronger character consistency, and more precise camera controls. None of this is officially confirmed by Google. Treat the feature list as informed speculation until the announcement lands.

How does Tellers handle new model launches?

Tellers is multi-model by design. When a new generation or editing model becomes available via API and meets our quality and reliability bar, we evaluate it and ship it as another option in the agent’s toolset. We do not bet the platform on any single provider.

Should I delay video projects until Veo 4 ships?

Probably not. Existing video models — including Veo 3.1, Seedance 2, Runway Gen-4.5, Kling 3.0, and HappyHorse 1.0 — are already production-ready for most workflows. Wait only if your project requires capabilities none of them currently support.


If you want to start building AI video editing workflows that do not depend on any single model launch, open Tellers. The agent already handles model selection — so when the next generation of video models ships, your tools do not need to change.