A California bill that would require AI developers to implement safety measures to prevent “critical harms” to humanity has generated buzz across Silicon Valley’s tech sector.
Under California’s “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” known as SB 1047, AI developers would be required to follow safety procedures designed to prevent events such as mass casualties or large-scale cyberattacks.
The proposed rules also mandate an “emergency stop” button for artificial intelligence models, call for yearly third-party audits of AI safety practices, establish a new Frontier Model Division (FMD) to oversee compliance, and impose hefty fines for violations.
Objections have also come from Washington, with US Congressman Ro Khanna releasing a statement on Aug. 13 opposing SB 1047 and expressing concern that “the bill as currently written would be ineffective, punish individual entrepreneurs and small businesses, and hurt California’s spirit of innovation.”
Khanna, who represents Silicon Valley, said AI legislation is needed “to protect workers and address potential risks including misinformation, deepfakes, and an increase in wealth disparity.”
Silicon Valley has also pushed back, with venture capital firms such as Andreessen Horowitz arguing the bill will burden entrepreneurs and stifle innovation.
On Aug. 2, a16z chief legal officer Jaikumar Ramaswamy sent a letter to Senator Scott Wiener, one of the bill’s authors, claiming the legislation would “burden startups because of its arbitrary and shifting thresholds.”
Prominent AI researchers such as Fei-Fei Li and Andrew Ng have also objected, arguing the bill will harm the AI ecosystem and open-source development.
Computer scientist Li told Fortune on Aug. 6: “If passed into law, SB-1047 will harm our budding AI ecosystem, especially the parts of it that are already at a disadvantage to today’s tech giants: the public sector, academia, and ‘little tech.’”
Big tech firms argue, meanwhile, that overregulation of artificial intelligence could stifle free expression and drive tech innovation outside of California.
In a June post on X, Meta’s chief AI scientist, Yann LeCun, warned that the law would impede research, arguing that “regulating R&D would have apocalyptic consequences for the AI ecosystem.”
The measure passed the Senate with bipartisan backing in May and is now headed to the Assembly, where it must pass by Aug. 31.