Dec 25, 2025
NewDecoded
Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act into law on December 19, 2025, establishing New York as a primary regulator of the world's most advanced artificial intelligence systems. The legislation requires developers of frontier AI models to publish comprehensive safety protocols and to report incidents of critical harm to the state within a strict 72-hour window of determining that one has occurred. The move positions New York at the forefront of AI governance, setting a rigorous benchmark for transparency aimed at large developers whose models require aggregate computing costs exceeding $100 million to train. [https://www.governor.ny.gov/news/governor-hochul-signs-nation-leading-legislation-require-ai-frameworks-ai-frontier-models]
Under the new law, a dedicated oversight office will be established within the New York State Department of Financial Services to monitor developer compliance. This office will assess safety frameworks and issue annual reports to ensure that powerful models are not being used to facilitate bioweapon creation, large-scale cyberattacks, or automated criminal activity. The Attorney General is authorized to pursue civil actions for non-compliance, with penalties reaching up to $1 million for a first violation and $3 million for subsequent offenses. [https://www.dfs.ny.gov/]
The RAISE Act reflects a strategic collaboration between state leaders and the technology industry following extensive negotiations to balance safety with innovation. Earlier versions of the bill carried significantly higher fines, but the final signed legislation takes a transparency-first approach designed to work alongside the Empire AI consortium, the state's public-private partnership that gives academic researchers access to large-scale computing power. Governor Hochul noted that this unified framework with California fills a critical void left by the federal government, which has struggled to implement common-sense regulations to protect the public.
For businesses, the impact of this legislation is immediate and operationally demanding. Developers of frontier models must now build formal state reporting mechanisms into their internal safety cycles to meet the 72-hour notification requirement, shifting from voluntary internal safety testing toward mandatory, public-facing accountability. Organizations will likely need to expand their legal and technical compliance teams to navigate the intersection of the New York and California regimes.
Beyond individual company operations, the law signifies a change in how technology hubs view the risks of frontier models. By leveraging the expertise of the Department of Financial Services, New York is treating AI safety as a core component of economic and public security. This framework ensures that while innovation continues to thrive through public-private partnerships, the state retains the power to intervene when digital breakthroughs pose a threat to the general welfare.
The enactment of the RAISE Act marks a pivotal shift in the American technology landscape as New York and California form a regulatory pincer around the AI industry. By aligning with California’s safety framework, New York creates a de facto national standard that forces global developers to adopt high-level transparency protocols regardless of federal inaction. The decision to house oversight within the Department of Financial Services indicates that state leaders now view AI risk as a matter of systemic stability, comparable to financial market volatility. This move signals the end of the "move fast and break things" era for frontier model development, replacing it with a rigorous state-supervised compliance regime.