California Governor Gavin Newsom has vetoed a landmark artificial intelligence (AI) safety bill that faced strong opposition from major tech companies. The bill aimed to introduce some of the first AI regulations in the US, requiring advanced AI models to undergo safety testing and include a “kill switch” to shut down systems if they became a threat. It also mandated official oversight for developing powerful “Frontier Models” of AI.
However, Mr. Newsom argued that the bill could stifle innovation and potentially drive AI developers out of the state. He expressed concerns that it would impose stringent regulations on all AI systems, regardless of their risk level or use of sensitive data. “The bill applies stringent standards to even the most basic functions—so long as a large system deploys it,” he stated.
The bill’s author, Senator Scott Wiener, criticized the veto, saying it leaves companies free to continue developing “extremely powerful technology” without government oversight. Despite this, Mr. Newsom announced plans to develop AI safeguards with the help of experts, aiming to protect the public from potential AI risks.
California, home to many leading AI companies like OpenAI, plays a critical role in the global tech industry. Newsom’s decision has significant implications for AI regulation both nationally and internationally. Alongside the veto, the governor has signed 17 other bills, including legislation to combat misinformation and deep fakes created by generative AI.
Mr. Wiener said the decision to veto the bill leaves AI companies with “no binding restrictions from US policy makers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way.”
Efforts by Congress to impose safeguards on AI have stalled.
OpenAI, Google and Meta were among several major tech firms that voiced opposition to the bill and warned it would hinder the development of a crucial technology.