LLM Watch

California Enacts Landmark AI Transparency Law with SB 53 

By: Daniel Brooks

Friday, October 3, 2025

3 min read

Gov. Gavin Newsom speaks during the State of the State in Sacramento on March 8, 2022. Photo Credit: Miguel Gutierrez Jr., CalMatters

Key Takeaways

  • AI content must be disclosed: SB 53 mandates clear labels on AI-generated audio or visual content that impersonates or resembles real people [1].

  • Focus on preventing deception: The law is designed to counter deepfakes, misinformation, and impersonation risks in media, campaign content, and consumer-facing content.

  • Setting a U.S. precedent: As the home to many leading AI firms, California’s new law may influence regulatory frameworks across other states and at the federal level [2].

  • Businesses must move quickly: Any company producing AI-generated content—marketing, media, virtual personas—needs to embed disclosure workflows to remain compliant.

On September 29, 2025, Governor Gavin Newsom signed Senate Bill 53 into law, creating a new transparency requirement for generative AI in California. The law requires that any AI-generated content designed to appear like a real person must be labeled “clearly and conspicuously” as AI-generated [1]. In doing so, California positions itself at the vanguard of AI regulation in the U.S., aiming to balance innovation with protections against deception.

SB 53 forces disclosure on AI-generated personas

SB 53’s mandate is relatively narrow but powerful: any entity using generative AI to create content (video, audio, images) portraying a person realistically must include a conspicuous disclosure that the content is AI-generated. The goal is to eliminate ambiguity where viewers might believe they’re seeing or hearing real people.

The disclosure must be noticeable to a typical user, not hidden in fine print or buried metadata. That helps safeguard against scenarios where AI-generated personas or “deepfakes” are passed off as genuine in public media, campaigns, or consumer targeting.

The bill draws heavily from the California report on frontier AI policy, produced by an expert working group led by Fei-Fei Li, which recommended a framework for transparency, safety reporting, and whistleblower protections.

Lawmakers are targeting deepfakes and deception risks

The rapid advance of generative AI tools has made it all too easy to fabricate convincing audio, images, and video. SB 53 is explicitly aimed at curbing misuse: disinformation, political manipulation, extortion, and scams using impersonation. In his signing statement, Newsom emphasized that California must lead both in innovation and in guarding against misuse of powerful AI tools.

However, critics argue SB 53 has limitations: it does not mandate independent audits, “kill switches,” or liability for harms, potentially diluting its enforcement strength. The law omits intrusive enforcement provisions in part to avoid chilling innovation in California’s massive AI ecosystem.

Companies must embed transparency into their workflows

Disclosure systems are no longer optional

AI firms and content creators must now build disclosure mechanisms into their workflow pipelines: digital watermarks, visible textual labels, or embedded metadata that platforms can interpret. This is no longer optional; it is a compliance requirement.
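To make the idea concrete, here is a minimal sketch of one way a disclosure step might look inside a content pipeline. It uses the Pillow imaging library to stamp a visible label on an AI-generated image and to embed a machine-readable marker in PNG metadata; the function name, label wording, and metadata keys are illustrative assumptions, not language drawn from SB 53.

```python
# Minimal sketch (not legal advice): add a visible AI-disclosure label to an image
# and embed a machine-readable marker in the PNG metadata.
from PIL import Image, ImageDraw, ImageFont
from PIL.PngImagePlugin import PngInfo

DISCLOSURE_TEXT = "AI-generated content"  # hypothetical wording, not statutory language

def add_disclosure(in_path: str, out_path: str) -> None:
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()

    # Visible label in the bottom-left corner so a typical viewer can see it.
    margin = 10
    draw.text((margin, img.height - 20), DISCLOSURE_TEXT, fill="white", font=font)

    # Machine-readable marker that downstream platforms could inspect.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")        # hypothetical metadata key
    meta.add_text("disclosure", DISCLOSURE_TEXT)

    img.save(out_path, format="PNG", pnginfo=meta)

if __name__ == "__main__":
    add_disclosure("generated.png", "generated_labeled.png")
```

In practice, a pipeline would apply a step like this automatically at export time and log the result, so that every published asset carries both a human-visible label and a marker other systems can check.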

Incident reporting and public accountability are mandatory

SB 53 also requires large AI developers to disclose safety protocols and report critical incidents (e.g., misuse, unexpected behavior). These requirements raise the bar for operational accountability in AI deployment [2].
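As a rough illustration of what operational accountability can look like internally, the snippet below sketches a minimal record a developer might keep so that critical incidents can be tracked and escalated. The fields are assumptions for illustration only, not the reporting format SB 53 or any regulator prescribes.

```python
# Illustrative sketch only: a minimal internal record for tracking critical incidents
# before an external report is filed. Field names are assumptions, not a statutory format.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SafetyIncident:
    category: str                  # e.g. "misuse" or "unexpected_behavior"
    description: str               # what happened, in plain language
    model_version: str             # which system was involved
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reported_to_regulator: bool = False  # flipped once the external report is filed

incident = SafetyIncident(
    category="misuse",
    description="Generated persona used in an impersonation attempt",
    model_version="video-gen-2.1",  # hypothetical identifier
)
print(incident)
```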

Whistleblower protections create internal accountability

The law includes protections for insiders who raise safety or misuse concerns, aiming to enforce internal accountability [2].

Legal exposure grows as transparency becomes a trust signal

Noncompliance risks legal exposure, reputational damage, and regulatory scrutiny. For AI companies, transparency may become a competitive differentiator in the eyes of regulators, users, and investors.

Why this matters: transparency is now law, not ethics

SB 53 is not merely a state law; it is a turning point. For organizations using generative AI, transparency is now a legal imperative, not an ethical option. California's move signals to tech companies, investors, and regulators that trust, accountability, and safety will weigh as heavily as product performance. The ripple effects could prompt national regulation, create marketplace advantages for "safe and transparent AI" vendors, and shift public expectations around media authenticity. SB 53 may well shape the contours of future North American AI governance. In short: we are entering a new regulatory era in which labeling is just the start, not the finish.

Sources

  1. SB 53, the landmark AI transparency bill, is now law in California. The Verge. Sep 29, 2025. https://www.theverge.com/ai-artificial-intelligence/787918/sb-53-the-landmark-ai-transparency-bill-is-now-law-in-california

  2. California’s Newsom signs law requiring AI safety disclosures. Reuters. Sep 29, 2025. https://www.reuters.com/legal/litigation/californias-newsom-signs-law-requiring-ai-safety-disclosures-2025-09-29/

  3. California Gov. Gavin Newsom signs landmark bill creating AI safety measures. AP News. Sep 29, 2025. https://apnews.com/article/9f888a7cbaa57a7dec9e210785b83280
