A broad coalition of more than two dozen advocacy organizations, technology policy experts, and civil rights groups has formally called upon OpenAI to withdraw a controversial California ballot initiative. The measure, titled the Parents & Kids Safe AI Act, has come under intense scrutiny from critics who argue that the proposal is a strategic attempt to establish weak regulatory standards, shield artificial intelligence developers from legal liability, and undermine the ability of state legislators to enact more stringent protections in the future. In a detailed letter sent to OpenAI leadership on Wednesday, the coalition urged the ChatGPT creator to dissolve its ballot committee and cease its efforts to bypass the traditional legislative process in Sacramento.
The coalition, which includes prominent organizations such as the Center for Humane Technology, the Electronic Privacy Information Center (EPIC), and the AI policy non-profit Encode AI, contends that the initiative—while marketed as a safety measure—actually contains significant loopholes. These provisions, they argue, could prevent families from seeking justice in cases of AI-related harm and could restrict the state’s capacity to respond to the rapidly evolving risks associated with generative artificial intelligence. The dispute highlights a growing tension between Silicon Valley’s leading innovators and the advocacy community over who should be responsible for drafting the "rules of the road" for the next generation of technology.
The Parents & Kids Safe AI Act: A Contentious Proposal
The Parents & Kids Safe AI Act was initially introduced as a collaborative effort between OpenAI and Common Sense Media, a prominent non-profit focused on children’s digital well-being. At its launch, the initiative was framed as the strongest youth AI safety measure in the United States. Its stated goals include establishing rigorous safety requirements for AI chatbots that interact with minors, mandating compliance standards, and ensuring that AI models are designed with the developmental needs of children in mind.
However, the advocacy coalition argues that the fine print of the initiative tells a different story. According to the letter reviewed by reporters, the measure’s definition of "harm" is dangerously narrow. It focuses almost exclusively on "severe harm," which the text defines as physical injury tied to suicide or extreme violence. Critics point out that this definition excludes a vast spectrum of documented mental health impacts, including deep-seated psychological distress, the escalation of eating disorders, and the reinforcement of self-harming delusions—issues that have already been central to several high-profile legal actions against AI companies.
Furthermore, the coalition highlights provisions within the initiative that would effectively bar parents and children from bringing private claims against AI developers under the act’s specific protections. By limiting enforcement primarily to state officials and restricting the "private right of action," the measure could significantly reduce the legal risks faced by companies like OpenAI, even when their products contribute to demonstrable harm.
Strategic Use of the California Ballot Initiative System
The controversy surrounding the Parents & Kids Safe AI Act is deeply rooted in the unique political landscape of California. In the Golden State, corporations often use the ballot initiative process as a dual-track strategy: either to pass industry-friendly laws directly through the electorate or to use the threat of a ballot measure as leverage in negotiations with state legislators.
Adam Billen, the co-executive director of Encode AI, noted that OpenAI currently maintains a ballot committee with approximately $10 million in funding. Although the company has recently paused active efforts to gather signatures for the 2024 or 2026 cycles, the committee remains active. "The main demand here is for OpenAI to withdraw from the ballot," Billen stated. He characterized the current situation as a "common tactic" where a company keeps a well-funded initiative "on the table" as a deterrent against more aggressive legislative action in Sacramento.
The coalition’s letter warns that if the initiative were to pass, it would require a two-thirds supermajority in the California State Legislature to amend. This high threshold would make it nearly impossible for lawmakers to update the law as AI technology evolves. Additionally, the measure includes language that ties future amendments to the support of "economic progress," a vague standard that advocates fear could be used by corporate lawyers to strike down any future safety regulations that might impact a company’s bottom line.
Chronology of AI Safety Debates in California
The push for the Parents & Kids Safe AI Act comes on the heels of a tumultuous year for AI regulation in California. The state, which serves as the global hub for AI development, has become a primary battleground for setting national and international standards.
- Early 2024: State Senator Scott Wiener introduces SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill seeks to mandate safety testing for the largest AI models and to establish "kill switches" to prevent catastrophic events.
- Summer 2024: OpenAI and other major tech firms voice strong opposition to SB 1047, arguing it would stifle innovation and drive developers out of California. Simultaneously, OpenAI begins promoting its collaboration with Common Sense Media on the Parents & Kids Safe AI Act.
- September 2024: Governor Gavin Newsom vetoes SB 1047, citing concerns that the bill focused too heavily on the largest models while ignoring smaller, potentially risky ones. Newsom signs several other, more targeted AI bills related to deepfakes and election integrity.
- Late 2024: Advocacy groups increase their scrutiny of the OpenAI-backed ballot initiative, culminating in the current demand for a full withdrawal as the 2025 legislative session approaches.
This timeline reflects a broader pattern where tech giants have successfully pivoted from opposing state-led regulations to proposing their own frameworks, which critics often label as "regulatory capture."
Data and Evidence: The Real-World Impact of Chatbot Harms
The coalition’s concerns are not merely theoretical; they are backed by a growing body of evidence and a series of tragic real-world incidents. The letter to OpenAI specifically points to the risk of "black box" algorithms and the difficulty of accessing evidence when AI-related harms occur.
A significant point of contention is the initiative’s treatment of user data. The groups argue that the measure’s definition of "encrypted user content" could be interpreted in a way that prevents families from accessing chat logs during legal discovery. In several recent cases, these logs have been the only way for grieving families to understand the final interactions their children had with AI systems.
For instance, the family of Jonathan Gavalas recently filed a lawsuit against Google, alleging that the Gemini AI fueled a delusion that escalated toward violence and eventually led to Gavalas's suicide. Similarly, a widely publicized case involving Character.ai saw a teenager die by suicide after developing an intense emotional relationship with a chatbot. In both instances, the ability to review the specific prompts and responses generated by the AI was crucial to the legal proceedings.
The advocacy coalition argues that by narrowing the definition of harm to physical violence and suicide, and by potentially restricting access to chat logs, the OpenAI-backed initiative would create a legal environment where many of these cases would have no standing.
Broader Industry Trends and the "Lobbying Playbook"
The strategy employed by OpenAI is viewed by many policy experts as an extension of the "Big Tech playbook" used by companies like Meta, Google, and Amazon over the last two decades. By positioning themselves as proactive partners in regulation, these companies can often steer the conversation away from structural changes—such as data privacy mandates or strict liability—and toward self-regulatory frameworks or narrow safety standards.
Adam Billen emphasized that the technology industry has a long history of using ballot initiatives and heavy lobbying to preempt more restrictive state laws. By "writing the rules that regulate them," Billen argues, companies ensure that the resulting protections are not meaningful. This approach allows firms to maintain their rapid pace of development and deployment without the "friction" of independent oversight.
The coalition’s letter notes that the AI industry is currently at a "pivotal crossroads." With the federal government largely deadlocked on comprehensive AI legislation, California’s actions carry immense weight. If the state adopts a corporate-friendly framework through a ballot initiative, it could set a low bar for the rest of the country and the world.
Implications for the Future of AI Governance
The outcome of this standoff will likely influence the trajectory of AI governance for years to come. If OpenAI heeds the coalition’s demand and withdraws the initiative, it would signal a willingness to engage with the traditional legislative process, where various stakeholders—including civil rights groups and mental health experts—have a seat at the table.
Conversely, if OpenAI maintains its ballot committee and continues to use the initiative as a "threat" against the legislature, it could deepen the divide between the tech industry and the public. The "economic progress" clause in the initiative is of particular concern to legal scholars, as it could potentially prioritize corporate profits over human safety in the eyes of the court.
The broader implications of this dispute include:
- The Definition of AI Harm: Will future laws recognize psychological and mental health impacts, or will they remain focused on physical injury?
- Transparency and Discovery: Will families have the right to access the data that explains how an AI influenced a loved one?
- Legislative Sovereignty: Can a state legislature effectively regulate a trillion-dollar industry when that industry has the resources to bypass the legislature through the ballot box?
As of the time of publication, OpenAI has not issued a formal response to the coalition’s letter. The company has previously stated that its goal is to ensure that AI is "safe and beneficial for all of humanity," and it has often pointed to its internal safety teams and red-teaming efforts as evidence of its commitment to responsible development. However, for the more than two dozen organizations that signed Wednesday’s letter, internal corporate policies are no substitute for robust, independent, and enforceable law.
The coalition remains focused on ensuring that the California legislature, rather than the companies being regulated, remains the primary architect of AI safety standards. "It’s really important… to not be the ones who are writing the rules that regulate them," Billen concluded, "because that’s not meaningful protections." As the 2025 legislative session nears, the pressure on OpenAI to choose between a collaborative legislative path and a confrontational ballot strategy continues to mount.