On August 2, 2026 — 97 days from today — Article 50 of the EU AI Act becomes fully enforceable. For anyone creating or deploying AI-generated video content professionally, this is a compliance deadline worth planning for now.
The obligations are broader than most people assume: they apply not only to companies building AI tools but also to deployers — any business that uses AI-generated video in professional communications, marketing, or content distributed to the public.
What Article 50 Actually Requires
The core obligation is two-part:
- Providers of generative AI systems must ensure outputs are marked in a machine-readable format as AI-generated or AI-manipulated.
- Deployers (brands, agencies, and teams using AI video in professional contexts) must disclose to audiences when content is AI-generated and must not strip those markings from the final output.
For video, “machine-readable marking” means embedding provenance data into the file — either via a standard like C2PA or via steganographic watermarks embedded at the pixel level.
The Technical Landscape
C2PA Content Credentials
C2PA (Coalition for Content Provenance and Authenticity) is the closest thing to an industry standard today. It embeds a cryptographically signed record into file metadata — a tamper-evident log of how the content was created, by whom, and with which tools.
As of 2026, over 6,000 organisations have joined the coalition, and platforms including LinkedIn and TikTok now display a “CR” (Content Credentials) badge on supported content.
The key limitation: social media platforms systematically strip C2PA metadata during upload processing. Instagram, X, YouTube, and Facebook all remove these credentials when you publish. Your file can leave your system compliant and arrive at the viewer without any provenance signal.
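One practical consequence: you can spot-check whether anything C2PA-shaped survived a platform round-trip by parsing the container yourself. The sketch below walks the top-level boxes of an MP4 (ISO BMFF) file — each box is a 4-byte big-endian size followed by a 4-byte type — and looks for a `uuid` box, which is where C2PA embeds its manifest in BMFF containers. Treat this as a heuristic, not a verifier: it checks presence, not signature validity, and you should confirm the exact embedding location against the current C2PA specification for your container format.

```python
import struct

def top_level_boxes(data: bytes):
    """Yield (box_type, size) for each top-level ISO BMFF box in `data`."""
    offset = 0
    while offset + 8 <= len(data):
        size, = struct.unpack_from(">I", data, offset)
        box_type = data[offset + 4:offset + 8].decode("ascii", errors="replace")
        if size == 1:
            # A size of 1 means a 64-bit "largesize" follows the type field.
            size, = struct.unpack_from(">Q", data, offset + 8)
        elif size == 0:
            # A size of 0 means the box extends to the end of the file.
            size = len(data) - offset
        if size < 8:
            break  # malformed box; stop rather than loop forever
        yield box_type, size
        offset += size

def may_carry_c2pa(data: bytes) -> bool:
    # C2PA places its manifest in a `uuid` box in BMFF files (assumption:
    # check the spec for your container), so a missing `uuid` box is a
    # strong hint the credentials were stripped on upload.
    return any(t == "uuid" for t, _ in top_level_boxes(data))
```

Download the file back from the platform after publishing and run the check against both the original and the re-served copy; a `uuid` box that disappears in transit tells you the metadata route alone will not carry your provenance signal.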
Steganographic Watermarks
A more resilient approach is steganographic watermarking — hiding provenance data inside the image or video pixels themselves, not just the metadata container. This survives re-encoding, platform processing, and format conversion.
The tradeoff: there is not yet a universal open standard for steganographic video watermarks the way C2PA has standardised metadata.
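To make the pixel-level idea concrete, here is a deliberately minimal least-significant-bit (LSB) sketch. It is a toy: raw LSB embedding is exactly what production video watermarks are designed to improve on, because LSBs do not survive lossy re-encoding. Real schemes spread the payload across transform coefficients so it withstands compression; this sketch only illustrates the core concept of carrying provenance bits inside the pixel data rather than the metadata container.

```python
def embed_bits(pixels: bytearray, payload: bytes) -> bytearray:
    """Hide `payload` in the least-significant bit of each carrier byte.

    Toy illustration only: raw LSB embedding is destroyed by lossy
    re-encoding, which robust video watermarks are built to withstand.
    """
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("payload too large for carrier")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract_bits(pixels: bytes, n_bytes: int) -> bytes:
    """Recover `n_bytes` of payload from the carrier's LSBs."""
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (pixels[i * 8 + j] & 1)
        out.append(byte)
    return bytes(out)
```

Because each payload bit perturbs a pixel value by at most 1, the marked frame is visually indistinguishable from the original — which is the property that makes pixel-level marking compatible with the "machine-readable, not necessarily visible" framing of Article 50.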
In practice, combining C2PA metadata with steganographic watermarks gives the most defensible compliance posture.
Who Is Affected
If you are:
- Producing AI-generated video for clients with EU audiences
- Publishing AI video in advertising, editorial, or brand communications
- Building applications or automated workflows on top of AI video APIs
…then Article 50’s deployer obligations apply to you, regardless of where you are based.
The determining factor is not your location — it is whether EU audiences receive the content in a professional context.
What to Do Before August 2
Five concrete steps worth taking now:
- Audit your pipeline. Map where AI video is generated in your workflow and what happens to provenance metadata during editing, transcoding, and export.
- Check your platform’s documentation. Does your AI video tool embed C2PA credentials in output files? If not, plan an additional step.
- Decide on disclosure format. A machine-readable watermark satisfies the technical marking requirement; a visible label or end-card satisfies the audience disclosure requirement and tends to be simpler to audit.
- Plan for metadata stripping. If you distribute to social platforms, embedded metadata will likely not survive the upload. Maintain a separate provenance record on your own infrastructure.
- Document your governance process. Regulators look for evidence of intentional compliance design, not just a watermark present on one file.
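The "maintain a separate provenance record" step above needs nothing more exotic than a content hash and an append-only log. A minimal sketch, with the caveat that the field names are illustrative — Article 50 does not prescribe a record schema, so adapt the fields to whatever your governance process documents:

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(video_bytes: bytes, tool: str, disclosed: bool) -> dict:
    """Build one audit-trail entry for a published asset.

    Field names are illustrative, not a mandated schema.
    """
    return {
        "sha256": hashlib.sha256(video_bytes).hexdigest(),
        "generator": tool,
        "ai_generated": True,
        "audience_disclosure": disclosed,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

def append_record(ledger_path: str, record: dict) -> None:
    # One JSON object per line (JSONL): cheap to append, easy to audit,
    # and it survives even when platforms strip the file's own metadata.
    with open(ledger_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record, sort_keys=True) + "\n")
```

Because the record lives on your infrastructure and is keyed by the file's hash, you can match any stripped copy that surfaces on a platform back to its provenance entry — which is precisely the evidence of intentional compliance design that regulators look for.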
The Broader Shift
August 2, 2026 is not an isolated deadline. It sits inside a broader move toward mandatory content provenance. Hardware manufacturers — Sony, Nikon, Canon, Google — now ship devices with C2PA support by default. Major platforms are building detection and labelling UI. The window to treat content provenance as a “future problem” is closing.
For teams building AI video workflows — whether through a platform like Tellers or directly on model APIs — now is the time to understand what provenance your current pipeline provides and where the gaps are. The teams with clean, auditable content pipelines before August will be in a significantly stronger position than those scrambling to retrofit compliance afterward.
When does EU AI Act Article 50 become enforceable?
August 2, 2026 — 97 days from today.
What does Article 50 require?
Providers of generative AI systems must mark outputs in machine-readable formats as AI-generated. Deployers — businesses using AI-generated content in professional communications — must disclose AI use to audiences and ensure watermarks are not stripped.
Does it apply to me if I am not based in the EU?
If your content is distributed to EU audiences or your platform is accessible in the EU, the obligations apply regardless of where you are headquartered.
Do I need a visible watermark?
No. Article 50 requires machine-readable marking, not a visible one. C2PA metadata or steganographic watermarks embedded in the pixel data both satisfy the technical requirement, though a visible label also satisfies the disclosure obligation.
What is C2PA?
C2PA (Coalition for Content Provenance and Authenticity) is an open technical standard for embedding cryptographically signed content credentials into media files. It is one of the primary mechanisms used to satisfy Article 50's watermarking requirements.
What are the fines for non-compliance?
Up to €15 million or 3% of total global annual turnover, whichever is higher. These penalties apply to both AI providers and deployers.