California State Senator Scott Wiener officially introduced amendments to his latest legislation, SB 53, on Wednesday, aiming to require some of the world’s largest artificial intelligence companies to publicly disclose their safety and security protocols and to issue reports following any safety incidents. If enacted, California would become the first state to impose significant transparency measures on leading AI developers, including major players like OpenAI, Google, Anthropic, and xAI.
Wiener’s earlier bill, SB 1047, had proposed similar transparency requirements, but encountered staunch opposition from Silicon Valley and was ultimately vetoed by Governor Gavin Newsom. Following this setback, Newsom formed an AI policy group featuring prominent leaders, such as Fei-Fei Li, a leading researcher from Stanford and co-founder of World Labs, to develop safety standards for AI in the state.
The AI policy group recently released its recommendations, arguing that the industry should be required to disclose information about its systems in order to build a “robust and transparent evidence environment.” According to a press release from Senator Wiener’s office, the amendments to SB 53 were substantially shaped by this report.
“The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be,” Wiener stated in the announcement.
SB 53 seeks to achieve a balance that Governor Newsom asserted had been lacking in SB 1047 by instituting important transparency requirements for major AI developers while simultaneously avoiding disruption to California’s burgeoning AI industry.
“These are concerns that my organization and others have been talking about for a while,” Nathan Calvin, vice president of state affairs at the nonprofit AI safety organization Encode, said in an interview with TechCrunch. “Having companies explain to the public and government what measures they’re taking to address these risks feels like a bare minimum, reasonable step to take.”
The proposed legislation also includes protections for whistleblowers at AI labs who believe their company’s technology poses a “critical risk” to society, defined as contributing to the death or injury of more than 100 people or causing more than $1 billion in damage.
Furthermore, SB 53 plans to establish CalCompute, a public cloud computing resource intended to support startups and researchers engaged in large-scale AI development.
Unlike SB 1047, the new bill does not hold AI model developers accountable for potential harms caused by their models. It has been crafted to avoid imposing burdens on startups and researchers who refine AI models developed by others or utilize open-source models.
With these updated amendments, SB 53 is now set to advance to the California State Assembly Committee on Privacy and Consumer Protection for consideration. If it secures approval there, the bill will undergo further legislative scrutiny before potentially landing on Governor Newsom’s desk.
In parallel, New York Governor Kathy Hochul is reviewing a similar proposal, the RAISE Act, which would also require large AI firms to release safety and security reports.
The progress of state-level AI legislation, including SB 53 and the RAISE Act, faced uncertainty earlier this year as federal lawmakers contemplated a 10-year moratorium on state-level AI regulations to avoid a fragmented landscape of laws. However, this proposal was overwhelmingly rejected in a Senate vote of 99-1 in July.
“Ensuring AI is developed safely should not be controversial — it should be foundational,” remarked Geoff Ralston, former president of Y Combinator, in a statement to TechCrunch. “Congress should be leading, demanding transparency and accountability from the companies building frontier models. But with no serious federal action in sight, states must step up. California’s SB 53 is a thoughtful, well-structured example of state leadership.”
Despite ongoing efforts, lawmakers have yet to garner backing from AI companies for mandated transparency measures. While Anthropic has broadly supported the push for greater transparency and expressed cautious optimism about the AI policy group’s recommendations, major firms like OpenAI, Google, and Meta have shown more resistance.
Leading AI developers generally issue safety reports for their AI models, though their consistency has waned in recent months. For instance, Google withheld a safety report for its latest advanced AI model, Gemini 2.5 Pro, until months after its release, and OpenAI declined to publish a safety report for its GPT-4.1 model; a later study suggested that model may be less aligned than its predecessors.
While SB 53 is a more tempered approach compared to previous AI safety proposals, it still has the potential to compel AI companies to share more information than they currently do. For now, these companies will be watching closely as Senator Wiener once again tests those boundaries.