The rapid advancement of artificial intelligence (“AI”) has spurred remarkable innovation in the healthcare industry, while also prompting swiftly evolving regulatory frameworks. On October 13, 2025, Governor Gavin Newsom signed into law California Senate Bill 243 (“SB 243”) – the first law in the nation to address the “human interface” of AI chatbots, especially those used by minors, by establishing strict requirements around transparency, safety, and behavioral integrity. Healthcare providers, technology companies, and digital platform operators must now anticipate and prepare for a regulatory landscape that imposes meaningful obligations around AI’s emotional and psychological impact on users. SB 243 takes effect on January 1, 2026.
SB 243: What You Need to Know
SB 243 adds Chapter 22.6 to California’s Business and Professions Code, imposing new requirements on operators of companion AI chatbots with the aim of protecting minors from emotional manipulation, unsafe interactions, and the misuse of artificial intimacy. Critical provisions of SB 243 include:
- AI Notification: Operators must clearly and conspicuously notify users when they are engaging with an AI-powered chatbot whenever a reasonable person could be misled into believing they are interacting with a human.
- Prevention Protocols: Before permitting chatbots to interact with users, operators must establish robust protocols to prevent the generation of content related to suicide or self-harm. If a user expresses suicidal ideation or intent, the operator must promptly direct them to crisis service providers, such as suicide hotlines or crisis text lines. Operators must also make these intervention protocols readily accessible by publishing them on their website. (A simplified implementation sketch of these safeguards appears after this list.)
- Enhanced Protections for Minors: For users identified as minors, operators must:
- AI Disclosure: Disclose the chatbot’s artificial nature.
- Break Reminders: Provide break reminders at least every three hours during extended interactions.
- Restriction of Harmful Content: Enforce measures to prevent the chatbot from generating or encouraging sexually explicit content.
- Audit and Reporting: Beginning July 1, 2027, operators must report annually to California’s Office of Suicide Prevention on crisis-related chatbot interactions, including referrals to crisis service providers, while adhering to strict privacy requirements and ensuring their prevention and reporting processes are grounded in established best practices.
- Civil Remedies: Individuals injured by a violation may bring a civil action for injunctive relief, damages (with a minimum of $1,000 per violation), and reasonable attorneys’ fees and costs.
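To make these obligations concrete, below is a minimal sketch, in Python, of how an operator might layer SB 243-style safeguards around an existing chatbot. Everything here is illustrative: the `CompliantChatSession` wrapper, the static keyword screen, and the disclosure language are hypothetical stand-ins, and a production system would rely on clinically reviewed crisis-detection models and counsel-approved notices rather than this simplified logic.

```python
from dataclasses import dataclass, field
import time

# Hypothetical keyword screen: a real deployment would use a vetted,
# clinically reviewed classifier, not a static list.
CRISIS_TERMS = ("suicide", "kill myself", "self-harm", "end my life")

CRISIS_REFERRAL = (
    "If you are in crisis, help is available: call or text 988 to reach "
    "the Suicide & Crisis Lifeline."
)

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

# SB 243 requires break reminders for minors at least every three hours.
BREAK_INTERVAL_SECONDS = 3 * 60 * 60


@dataclass
class CompliantChatSession:
    """Illustrative wrapper that enforces SB 243-style safeguards."""

    is_minor: bool
    disclosed: bool = False
    last_break_reminder: float = field(default_factory=time.monotonic)

    def preprocess(self, user_message: str) -> list[str]:
        """Return system notices that must accompany the chatbot's reply."""
        notices: list[str] = []

        # Clear and conspicuous AI notification on first contact.
        if not self.disclosed:
            notices.append(AI_DISCLOSURE)
            self.disclosed = True

        # Crisis escalation: direct the user to crisis service providers.
        if any(term in user_message.lower() for term in CRISIS_TERMS):
            notices.append(CRISIS_REFERRAL)

        # Periodic break reminders for users identified as minors.
        if self.is_minor and (
            time.monotonic() - self.last_break_reminder >= BREAK_INTERVAL_SECONDS
        ):
            notices.append("Reminder: consider taking a break from this chat.")
            self.last_break_reminder = time.monotonic()

        return notices


# Example usage: the disclosure is issued on first contact.
session = CompliantChatSession(is_minor=True)
for notice in session.preprocess("hello"):
    print(notice)
```

Structuring the safeguards as a preprocessing layer, rather than embedding them in the model itself, keeps the disclosure, crisis-referral, and break-reminder logic auditable, which will matter once the recordkeeping and reporting obligations take effect in 2027.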
Why SB 243 Matters for Healthcare Organizations
For healthcare providers and digital health innovators, SB 243 brings new challenges but also new opportunities to lead in responsible AI use. For organizations utilizing virtual support services, behavioral health applications, or educational platforms, SB 243 introduces compliance risks if their existing systems: (i) permit chatbots to simulate intimate or emotionally supportive relationships without appropriate safeguards; (ii) lack effective protocols to escalate crisis situations; or (iii) do not provide clear, conspicuous disclosures identifying interactions as AI-driven rather than human. Healthcare organizations deploying chatbot technologies must carefully assess whether their offerings classify them as “operators” under state law and ensure their systems and administrative practices comply with all related regulations to mitigate compliance risks and legal exposure.
Simultaneously, SB 243 heralds a new era of “Artificial Integrity” and the expectation that AI systems should reflect human values and safeguard the vulnerable. For providers serving minors or managing sensitive patient interactions, missteps in regulatory compliance or ethical boundaries could result not only in legal penalties but also in reputational harm.
Looking Ahead: The Future of AI Integrity in Healthcare
SB 243 introduces a major change in healthcare AI regulation by emphasizing the quality and integrity of AI interactions to enhance patient safety and transparency. Healthcare organizations, technology companies, and other operators can reduce legal and compliance risks and strengthen patient trust by implementing clear disclosures, effective crisis-response protocols, and strong safeguards for minors.