SACRAMENTO, Calif. (AP) — Efforts in California to establish first-in-the-nation safety measures for the largest artificial intelligence systems cleared an important vote Wednesday that could pave the way for U.S. regulations on the technology evolving at warp speed.
The proposal, aiming to reduce potential risks created by AI, would require companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state's electric grid or help build chemical weapons — scenarios experts say could be possible in the future with such rapid advancements in the industry.
The bill is among hundreds lawmakers are voting on during the Legislature's final week of session. Gov. Gavin Newsom then has until the end of September to decide whether to sign them into law, veto them or allow them to become law without his signature.
The measure squeaked by in the Assembly Wednesday and requires a final Senate vote before reaching the governor's desk.
Supporters said it would set some of the first much-needed safety ground rules for large-scale AI models in the United States. The bill targets systems that cost more than $100 million to train. No current AI models have hit that threshold.
"It's time that Big Tech plays by some kind of a rule, not a lot, but something," Republican Assemblymember Devon Mathis said in support of the bill Wednesday. "The last thing we need is for a power grid to go out, for water systems to go out."
The proposal, authored by Democratic Sen. Scott Wiener, faced fierce opposition from venture capital firms and tech companies, including OpenAI, Google and Meta, the parent company of Facebook and Instagram. They say safety regulations should be established by the federal government and that the California legislation takes aim at developers instead of targeting those who use and exploit the AI systems for harm.
Several California House members also opposed the bill, including former House Speaker Nancy Pelosi.
Chamber of Progress, a left-leaning Silicon Valley-funded industry group, said the bill is "based on science fiction fantasies of what AI could look like."
"This bill has more in common with Blade Runner or The Terminator than the real world," Senior Tech Policy Director Todd O'Boyle said in a statement after the Wednesday vote. "We shouldn't hamstring California's leading economic sector over a theoretical scenario."
The legislation is supported by Anthropic, an AI startup backed by Amazon and Google, after Wiener adjusted the bill earlier this month to include some of the company's suggestions. The amended bill removed a penalty-of-perjury provision, limited the state attorney general's power to sue violators and narrowed the responsibilities of a new AI regulatory agency. Social media platform X owner Elon Musk also threw his support behind the proposal this week.
Anthropic said in a letter to Newsom that the bill is crucial to prevent catastrophic misuse of powerful AI systems and that "its benefits likely outweigh its costs."
Wiener said his legislation took a "light touch" approach.
"Innovation and safety can go hand in hand — and California is leading the way," Wiener said in a statement after the vote.
He also slammed critics earlier this week for dismissing potential catastrophic risks from powerful AI models as unrealistic: "If they really think the risks are fake, then the bill should present no issue whatsoever."
Wiener's proposal is among the measures California lawmakers proposed this year to build public trust, fight algorithmic discrimination and outlaw deepfakes that involve elections or pornography. With AI increasingly affecting the daily lives of Americans, state legislators have tried to strike a balance, reining in the technology and its potential risks without stifling the booming homegrown industry.
California, home to 35 of the world's top 50 AI companies, has been an early adopter of AI technologies and could soon use the technology to address highway congestion and road safety, among other things.
Newsom, who declined to weigh in on the measure earlier this summer, had warned against AI overregulation.