California Bill Could Regulate AI Safety

California Senate Bill 1047: A Landmark Step in AI Governance

A new bill advancing toward the California Assembly floor represents both a significant step forward in AI governance and a potential risk to the technology’s innovative growth. Officially called California Senate Bill 1047 – and also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act – the bill is meant to regulate large-scale AI models in the state of California.

What Does California Senate Bill 1047 Entail?

Authored by State Senator Scott Wiener, this bill would require AI companies to test their models for safety. Specifically, the bill targets “covered models,” which are AI models that exceed certain compute and cost thresholds. Any model that costs more than $100 million to train would fall within the scope of the bill.


Key Provisions of Senate Bill 1047

As of August 27, 2024, the bill has passed the California Assembly Appropriations Committee and will soon advance to the Assembly floor for a final vote. California Senate Bill 1047 imposes a variety of requirements on builders of large AI models. One of these is a “full shutdown” capability that would enable someone in authority to immediately shut down a model if it becomes unsafe or is being misused.

Moreover, developers would be required to produce a written safety and security protocol covering worst-case scenarios for the AI model. Companies such as Amazon, Google, Meta, and OpenAI have already made voluntary pledges to the Biden Administration to ensure the safety of their AI products. This new bill would give the California government certain powers to enforce the bill’s regulations.

Accountability and Oversight

California Senate Bill 1047 would require companies to retain an unredacted and unchanged copy of the model’s safety and security protocol for as long as the model is in use, plus five years. This is meant to ensure that developers maintain a complete and accurate record of their safety measures, allowing for thorough audits and investigations if needed. If an adverse event were to occur with a model, this record would help establish whether or not the developer was adhering to safety standards.

In essence, the bill seeks to prohibit companies from making a model commercially available if there is an unreasonable risk of causing or enabling harm. The bill aims to provide a structured framework for accountability while safeguarding public interest.

Regulatory Framework and Potential Impact

The bill also proposes the establishment of the Board of Frontier Models within the Government Operations Agency. This group would provide high-level guidance on AI policy and regulation, approve regulations proposed by the Frontier Model Division, and ensure that oversight measures keep pace with the explosion of AI technology.

Additionally, the California Attorney General would gain the power to address potential harms caused by AI models. This includes taking action against developers whose AI models cause severe harm or pose imminent public safety threats. The Attorney General would also be empowered to bring civil actions against non-compliant developers and impose penalties for violations.

The Debate: Innovation vs. Regulation

If the bill passes, developers would have until January 1, 2026, to begin annually retaining a third-party auditor to perform an independent compliance audit. Developers would also be required to retain an unredacted copy of the audit report and grant the Attorney General access to it upon request.

This bill has sparked significant debate among the Silicon Valley elite. Critics argue that it could hamper innovation in the AI community. Given that many of the U.S.’s AI companies are based in California, the implications of this bill could reverberate throughout the entire U.S. tech industry. Some see the regulations as potentially slowing companies down and allowing foreign organizations to gain ground.

There are also questions regarding the definitions of “covered models” and “critical harm.” While both phrases appear numerous times within the bill, some consider their actual definitions too broad or vague, raising concerns about potential overregulation.

On the other hand, the bill has notable supporters, including Elon Musk. Musk stated on X that he has been “an advocate for AI regulation, just as we regulate any product/technology that is a potential risk.”

Conclusion: A Pivotal Moment for AI Development

As of right now, we do not know whether or when the bill will pass its final vote on the Assembly floor. If it does, it will go to the Governor for either a signature or a veto. California has the opportunity to shape the future of AI development with this bill, and it will be interesting to see which way the decision swings.
