LLM Watch
Oct 8, 2025
Gov. Gavin Newsom speaks during the State of the State in Sacramento on March 8, 2022. Photo Credit: Miguel Gutierrez Jr., CalMatters
AI content must be disclosed: SB 53 mandates clear labels on AI-generated audio or visual content that impersonates or resembles real people [1].
Focus on preventing deception: The law is designed to counter deepfakes, misinformation, and impersonation risks in media, political campaigns, and consumer-facing material.
Setting a U.S. precedent: As the home to many leading AI firms, California’s new law may influence regulatory frameworks across other states and at the federal level [2].
Businesses must move quickly: Any company producing AI-generated content—marketing, media, virtual personas—needs to embed disclosure workflows to remain compliant.
On September 29, 2025, Governor Gavin Newsom signed Senate Bill 53 into law, creating a new transparency requirement for generative AI in California. The law requires that any AI-generated content designed to appear like a real person must be labeled “clearly and conspicuously” as AI-generated [1]. In doing so, California positions itself at the vanguard of AI regulation in the U.S., aiming to balance innovation with protections against deception.
SB 53’s mandate is relatively narrow but powerful: any entity using generative AI to create content (video, audio, images) portraying a person realistically must include a conspicuous disclosure that the content is AI-generated. The goal is to eliminate ambiguity where viewers might believe they’re seeing or hearing real people.
The disclosure must be noticeable to a typical user, not hidden in fine print or buried metadata. That helps safeguard against scenarios where AI-generated personas or “deepfakes” are passed off as genuine in public media, campaigns, or consumer targeting.
The bill draws heavily from the state's report on frontier AI policy, produced by an expert working group co-led by Fei-Fei Li, which recommended a framework for transparency, safety reporting, and whistleblower protections.
The rapid sophistication of generative AI tools has made it all too easy to fabricate convincing audio, images, and video. SB 53 is explicitly aimed at curbing misuse: disinformation, political manipulation, extortion, and scams using impersonation. In his signing statement, Newsom emphasized that California must lead both in innovation and in guarding against misuse of powerful AI tools.
However, critics argue SB 53 has limitations: it does not mandate independent audits, “kill switches,” or liability for harms, potentially diluting its enforcement strength. The law omits intrusive enforcement provisions in part to avoid chilling innovation in California’s massive AI ecosystem.
AI firms and content creators must now build disclosure mechanisms into their workflow pipelines: digital watermarks, visible textual labels, or embedded metadata that platforms can interpret. This is no longer optional; it is a compliance requirement.
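To make that concrete, here is a minimal Python sketch of what a labeling step in a generation pipeline might look like, assuming a simple visible caption plus a machine-readable PNG metadata tag. The function name, field names, and label placement are illustrative assumptions, not anything SB 53 prescribes; production systems would more likely adopt a provenance standard such as C2PA.

```python
# Illustrative sketch only: SB 53 does not prescribe a technical format.
# Field names and the labeling approach here are assumptions, not a standard.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def add_ai_disclosure(in_path: str, out_path: str,
                      label: str = "AI-generated content") -> None:
    """Stamp a visible disclosure on an image and embed machine-readable metadata."""
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Visible label: drawn near the bottom edge so a typical viewer notices it,
    # rather than burying the disclosure in metadata alone.
    draw.text((10, img.height - 24), label, fill="white")

    # Machine-readable metadata: a PNG text chunk that platforms could inspect.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")      # hypothetical key
    meta.add_text("disclosure_label", label)   # hypothetical key
    img.save(out_path, "PNG", pnginfo=meta)

# Example usage with placeholder file names.
add_ai_disclosure("synthetic_portrait.png", "synthetic_portrait_labeled.png")
```

The point of the sketch is the two-layer pattern: a disclosure a human can see plus a tag software can verify, applied automatically at generation time rather than left to manual review.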
SB 53 also requires large AI developers to disclose safety protocols and report critical incidents (e.g., misuse, unexpected behavior). These requirements raise the bar for operational accountability in AI deployment [2].
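As a rough illustration of what an internal record feeding such an incident report might look like, here is a hedged Python sketch. The schema, class name, and field names are assumptions made for illustration; the law itself does not define a technical reporting format.

```python
# Hypothetical internal record format; SB 53 does not define a schema,
# and every field name here is an assumption for illustration only.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class CriticalIncidentReport:
    model_name: str
    incident_type: str        # e.g. "misuse", "unexpected_behavior"
    description: str
    detected_at: str          # ISO 8601 timestamp
    mitigations: list[str]

report = CriticalIncidentReport(
    model_name="example-frontier-model",   # placeholder name
    incident_type="misuse",
    description="Model output used to impersonate a public figure.",
    detected_at=datetime.now(timezone.utc).isoformat(),
    mitigations=["revoked API key", "added content filter rule"],
)

# Serialize for submission through whatever channel regulators specify.
print(json.dumps(asdict(report), indent=2))
```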
The law includes whistleblower protections for insiders who raise safety or misuse concerns, aiming to strengthen internal accountability [2].
Noncompliance risks legal exposure, reputational damage, and regulatory scrutiny. For AI companies, transparency may become a competitive differentiator in the eyes of regulators, users, and investors.
SB 53 is not merely a state law; it is a turning point. For organizations using generative AI, transparency is now a legal imperative, not an ethical option. California's move signals to tech companies, investors, and regulators that trust, accountability, and safety will weigh as heavily as product performance. The ripple effects could prompt national rules, create marketplace advantages for "safe and transparent AI" vendors, and shift public expectations around media authenticity. SB 53 may well shape the contours of future North American AI governance. In short: we are entering a new regulatory era where labeling is just the start, not the finish.
[1] "SB 53, the landmark AI transparency bill, is now law in California." The Verge, Sep 29, 2025. https://www.theverge.com/ai-artificial-intelligence/787918/sb-53-the-landmark-ai-transparency-bill-is-now-law-in-california
[2] "California's Newsom signs law requiring AI safety disclosures." Reuters, Sep 29, 2025. https://www.reuters.com/legal/litigation/californias-newsom-signs-law-requiring-ai-safety-disclosures-2025-09-29/
[3] "California Gov. Gavin Newsom signs landmark bill creating AI safety measures." AP News, Sep 29, 2025. https://apnews.com/article/9f888a7cbaa57a7dec9e210785b83280