California Governor Gavin Newsom has blocked legislation aimed at addressing catastrophic safety and security risks associated with the development and deployment of artificial intelligence. The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) sought to mitigate “novel threats to public safety and security” that could arise from AI systems that are not properly subject to human control. These potential threats include the creation and proliferation of weapons of mass destruction, such as biological, chemical, and nuclear weapons, as well as cyberattacks on critical infrastructure. In certain respects, the bill seemed straight out of a sci-fi film, reminiscent of The Terminator.

California Governor Vetoes AI Safety Bill

Among its provisions, the bill would have required developers of large, high-value AI models to implement various safety measures, such as the ability to perform a full system shutdown (i.e., a kill switch), establish a written safety and security protocol prior to training a model, and conduct safety testing before releasing a model to the public. Developers would also have been required to retain an independent third-party auditor for annual compliance audits, which would be shared with the Attorney General upon request, and to report any AI safety incidents. Additionally, the bill would have prohibited the use of large AI models for purposes unrelated to the training or reasonable evaluation of the model, or for commercial use, if there is an unreasonable risk of causing or enabling “critical harm.”

Since its introduction, the controversial legislation has sharply divided political and tech industry leaders. Opponents argued that the bill would have stifled open-source innovation and made California an unfavorable home for tech companies. Jason Kwon, OpenAI’s Chief Strategy Officer, reportedly stated that “SB 1047 would threaten that growth, slow the pace of innovation, and lead California’s world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere.” Critics also emphasized that SB 1047 is too narrowly focused on catastrophic harms, which may not pose risks as imminent as the bill suggests.

Senator Scott Wiener, who authored the bill, responded directly to OpenAI’s opposition, stating, “SB 1047 is a highly reasonable bill that asks large AI labs to do what they’ve already committed to doing, namely, test their large models for catastrophic safety risk.” The bill also garnered support from billionaire tech mogul and xAI owner Elon Musk, as well as more than 100 current and former employees of frontier AI companies, including OpenAI, Anthropic, Google DeepMind, and Meta.

Given the spirited debate surrounding the bill, it comes as little surprise that Governor Newsom ultimately chose to veto SB 1047. In his veto message, Governor Newsom agreed with SB 1047 supporters, emphasizing that California “cannot wait for catastrophe to occur before taking action” and “safety guardrails should be implemented.” However, he raised several concerns about the bill’s framework.

Notably, Governor Newsom highlighted that the bill’s regulatory framework broadly applies stringent requirements across the board, without regard to “whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data.” In contrast, the European Union’s Artificial Intelligence (AI) Act, adopted earlier this year, introduced a tiered regulatory approach, categorizing AI systems into four risk levels: unacceptable, high, limited, and minimal. The EU’s framework specifically takes into account the very factors raised by Governor Newsom, such as the environment in which AI is deployed. For instance, AI systems operating in “high-risk” sectors like education, healthcare, and critical infrastructure face heightened compliance requirements.

Newsom also argued the bill was “not informed” and lacked “an empirical analysis of AI systems and capabilities,” suggesting that any effective AI regulation must “keep pace with the technology itself.” This echoed a statement on X by Yann LeCun, Meta’s Chief AI Scientist, that the regulation was “predicated on the illusion of ‘existential risks’ pushed by a handful of delusional think-tanks, and dismissed as nonsense (or at least widely premature) by the vast majority of researchers and engineers in academia, startups, larger companies, and investment firms.”

As the nation’s lodestar for progressive regulation and tech innovation, California has certainly not seen the end of AI regulation. Rather, Governor Newsom indicated that the state is actively working with experts to identify AI’s specific risks and craft targeted legislation accordingly. Any AI regulatory measures passed in California are likely to influence similar efforts across the country. At Quandary Peak Research, software experts with experience in AI and ML are closely monitoring developments to provide insightful and effective consultation on critical issues emerging from new regulatory frameworks.

AI Compliance Consulting by Software Experts at Quandary Peak

Our experts have extensive knowledge of software that employs artificial intelligence and machine learning techniques. Contact us to speak with an ML/AI expert who understands the unique challenges of evaluating AI software and source code for litigation and due diligence matters.