California’s New Regulations about Artificial Intelligence (AI): What SB 53, SB 243, AB 316 and AB 853 Mean for Your Business
California has passed four new AI laws: SB 53, SB 243, AB 316, and AB 853.
SB 53 (Transparency in Frontier Artificial Intelligence Act): Applies to developers of “frontier” AI models (high compute, high revenue). Requires them to publish governance and safety frameworks, report critical incidents within 15 days, protect whistleblowers, and publish transparency reports.
SB 243 (Companion Chatbots): Regulates AI companion chatbots, requiring clear AI disclosures (with periodic reminders), protections for minors, and mental-health safety protocols with annual reporting.
AB 316 (Artificial Intelligence: Defenses Act): Bars a business from defending itself by claiming its AI acted “autonomously” to excuse harm it caused. In other words, developers and users remain liable.
AB 853 (Amendment to the California AI Transparency Act): Delays certain compliance dates and adds obligations on large online platforms and device makers, such as AI detection tools for users, provenance metadata in content, and disclosure mechanisms on platforms and capture devices.
With these regulations, California establishes a new standard for AI transparency, safety, and accountability. Instead of banning AI, the state’s recent laws emphasize disclosures, incident reporting, liability, and provenance. Here’s a practical, high-level guide to what each law covers, who is affected, and how to approach compliance. We at KendraCyber believe these regulations could serve as a foundation for federal rules.
SB 53 - Transparency in Frontier AI (TFAIA)
What it does (in brief)
SB 53 targets developers of very large “frontier” models and requires transparency and safety processes. It defines a frontier model as one trained with more than 10^26 operations, and a large frontier developer as a frontier developer with more than $500 million in prior-year gross revenue. Large frontier developers must publish a “frontier AI framework” covering catastrophic-risk assessment and mitigations; all frontier developers must publish transparency reports before deploying new or substantially modified frontier models. The law creates critical safety incident reporting to the Office of Emergency Services (OES) and adds whistleblower protections for covered employees. Civil penalties, enforced by the Attorney General, can reach $1 million per violation. Signed Sept 29, 2025; the law is expected to take effect Jan 1, 2026 (with some agency reports due later).
Who’s impacted?
Frontier‐model labs and tech companies training or significantly fine-tuning cutting-edge models - think major foundation‑model developers and any well-resourced company above the revenue threshold. Cloud and platform providers that train large models should also assess exposure.
How to think about compliance.
Set up a cross‑functional program to
(1) draft and publish the frontier AI framework aligned with national/international standards (e.g., NIST AI RMF),
(2) operationalize catastrophic‑risk thresholds and third‑party testing,
(3) implement OES incident reporting SOPs (with 24‑hour/15‑day triggers as applicable),
(4) stand up anonymous internal reporting channels and anti‑retaliation policies, and
(5) maintain immutable evidence of assessments and decisions.
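On point (5), an append-only, hash-chained log is one common way to keep tamper-evident records of assessments and decisions. The `EvidenceLog` class below is a hypothetical minimal sketch of that pattern, not a mechanism prescribed by SB 53:

```python
import hashlib
import json
import time

class EvidenceLog:
    """Append-only log where each entry's hash covers the previous
    entry's hash, so any later edit breaks the chain and is detectable."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"timestamp": time.time(), "record": record, "prev_hash": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})
        return digest

    def verify(self) -> bool:
        """Recompute every hash and check the chain links up."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: entry[k] for k in ("timestamp", "record", "prev_hash")}
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev_hash or recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

# Illustrative entries; event names are hypothetical.
log = EvidenceLog()
log.append({"event": "catastrophic-risk assessment", "result": "below threshold"})
log.append({"event": "third-party red-team test", "result": "completed"})
```

In practice you would anchor such a log in write-once storage or a timestamping service; the chaining alone only makes tampering evident, not impossible.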
SB 243 - Companion Chatbots (Minors’ Safety & Disclosures)
What it does (in brief)
SB 243 is the first state law specifically regulating companion chatbots - AI designed to sustain social, human-like interactions. It requires clear disclosure that users are interacting with AI (including periodic reminders), protocols to prevent exposure to sexual content for minors, and mental-health safety measures (e.g., directing users to crisis help). The law includes annual reporting on harms related to suicidal ideation - phased in with key requirements and full reporting by mid-2027. (Note: A broader minors bill was vetoed the same day; SB 243’s guardrails were signed.)
Who’s impacted?
Companies offering AI companion apps (wellness, “friend”/relationship bots), social platforms that host or distribute companion chatbots in California, and any service likely to interact with minors in an emotionally supportive way. (Exemptions include chatbots used solely for customer support, certain game-limited bots, and some stand-alone devices.)
How to think about compliance.
Implement age-appropriate design and age gating; add up-front and periodic AI disclosures; deploy content filters to block sexual content for minors; adopt evidence-based suicide/self-harm response protocols; and build a pipeline to collect de-identified metrics for the Office of Suicide Prevention starting in 2027. Train trust and safety teams and log interventions for auditability.
AB 316 - “AI Did It” Is Not a Defense
What it does (in brief)
AB 316 bars defendants who developed, modified, or used AI from arguing in civil cases that “the AI autonomously caused the harm.” In other words, deploying AI doesn’t shift responsibility away from the human or company behind the product or use case. Chaptered Oct 13, 2025; effective Jan 1, 2026.
Who’s impacted?
Any sector using AI in products or operations: health and medical devices (diagnostics/triage), fintech/insurtech (underwriting/decisioning), consumer platforms (recommendations, chatbots), advertising, robotics, and autonomous features. You can’t point to autonomy as a shield if AI contributes to harm (e.g., defamation, faulty advice, physical injury).
How to think about compliance.
Treat AI as part of your product-safety and QA program: pre-deployment testing, human-in-the-loop controls, guardrails, and monitoring. Maintain traceable logs and model/feature documentation to prove due care. Update contracts and insurance (indemnities, allocation of risk), and align your incident-response plan to include AI failures.
AB 853 - California AI Transparency Act (CAITA) 2.0: Provenance & Labels
What it does (in brief)
AB 853 updates CAITA to make AI content detectable and labeled end-to-end. It:
(1) requires GenAI providers with >1,000,000 monthly users to make an AI-detection tool available to the public;
(2) requires large online platforms (≥2,000,000 monthly users) to detect provenance metadata, display it in the UI, allow inspection/download, and not strip compliant metadata (operative Jan 1, 2027);
(3) requires capture-device manufacturers (phones, cameras, recorders) to embed latent provenance by default in new devices starting Jan 1, 2028; and
(4) shifts CAITA’s general operative date to Aug 2, 2026.
Civil penalties are typically $5,000 per violation per day.
Who’s impacted?
- GenAI providers at scale (model/tool creators with >1M MAU).
- Large platforms (social, file‑sharing, mass‑messaging, stand‑alone search) with ≥2M monthly users distributing user content.
- Device OEMs shipping cameras/phones/recorders in California.
Expect emerging reliance on C2PA-style standards for interoperability.
How to think about compliance.
GenAI providers: Ship a public detection tool and ensure your systems apply persistent, standards-compliant latent disclosures.
Platforms: Build a provenance pipeline - ingest, detect, surface labels clearly, and avoid stripping metadata.
OEMs: Design default provenance embeds and user options.
All: Map owners, deadlines (2026/2027/2028), and evidence (test plans, UI proofs, logs).
A simple compliance lens
- Scope & thresholds. Confirm whether you’re a frontier developer/large frontier developer (SB 53), an operator of companion chatbots (SB 243), a business deploying AI in any risky context (AB 316), or a GenAI provider/platform/OEM (AB 853). Document your determination.
- Programs & policies. Establish the frontier AI framework (if applicable), trust and safety protocols (minors, mental health), incident reporting playbooks, and provenance labeling implementation guides.
- Engineering workstreams. Build or retrofit logging, red‑teaming, model cards, filters, provenance detection/embedding, and UI disclosures; verify they’re standards‑aligned.
- Legal posture. Update terms, DPAs, vendor contracts, and insurance for AB 316’s liability posture; prepare AG-facing evidence for SB 53 and OSP reporting for SB 243.
- Timeline. Plan for Jan 1, 2026 (SB 53; AB 316), Aug 2, 2026 (CAITA operative), Jan 1, 2027 (platform provenance UI), July 1, 2027 (SB 243 OSP reporting), and Jan 1, 2028 (device‑level provenance).
This post is a high-level summary, not legal advice. For specific obligations and exceptions, consult the statutory text and counsel.