The New York RAISE Act: A Developer's Guide to AI Regulation
- Maryanne
- Jun 22
- 4 min read
The Cliff Notes Version of the RAISE Act

What the bill is: S6953B / A6453B, a frontier-model safety bill that targets labs spending >$100 million in training compute cost on a single model. It requires public risk reports, an internal safety plan, and 72-hour incident reporting—plus a duty to withhold any release that could plausibly kill ≥100 people or cause ≥$1 billion in harm.
Civil penalties: $10 million first offense, $30 million thereafter.
Who it would reach: Any company or organization (not individual developers) whose model can be used in New York, no matter where the GPUs are racked. If a Wall Street quant can hit your API from Midtown, you're in scope.
"Frontier model" in one breath: The bill defines it as an AI system trained with ≥$100 million in compute—or that later demonstrates capabilities equivalent to that spend. To put this in perspective, that's months of training on thousands of high-end GPUs, or roughly equivalent to training models like GPT-4. Think large language models, giant multimodal systems, and next-gen agentic architectures. Startups working with open-source models on smaller budgets would be outside the scope.
Where we are right now: The bill passed the New York State Senate (58-1) in June 2025 but did not receive an Assembly vote before the legislative session ended. The bill would need to be reintroduced in a future session to advance further.
Why This Isn't Just "California SB 1047—East Coast Remix"
Clear numerical trigger – $100M compute spend is easier to verify than California's fuzzier "10^26 FLOPs."
Civil not criminal – Enforcement stays with AG fines; no personal liability or mandatory "kill-switch" like the early SB 1047 draft.
Academic carve-out – Accredited universities doing pure research get a pass, easing fears of chilled innovation.
Bottom line: legislators sanded down the sharpest edges, which is partly why the Senate vote sailed through at 58–1.
Preparing for AI Regulation (If This or Similar Legislation Passes)
Your PM just asked: "Does our Gen-AI service count?" If you can't answer in 30 seconds, here's where to start.
Run the Compute Math
Pull your cloud billing data (Azure Cost Management, AWS Cost Explorer, GCP billing) for your largest training runs. The $100 million threshold is substantial—think enterprise-scale training that runs for months on thousands of GPUs. For reference:
Small research projects: typically $1K-$100K range
Production models at most companies: $100K-$10M range
Frontier models subject to regulation: $100M+ range
If you're unsure where you fall, document your largest training costs and consult with legal counsel about whether your models would trigger compliance requirements.
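If you just want a back-of-the-envelope sanity check before pulling full billing exports, the sketch below multiplies GPU count, run length, and an hourly rate against the $100M trigger. The GPU counts and the $2.50/GPU-hour rate are illustrative assumptions, not figures from the bill or any cloud price sheet.

```python
# Back-of-the-envelope training-cost check (all inputs are illustrative assumptions).
THRESHOLD_USD = 100_000_000  # RAISE Act frontier-model trigger

def training_cost(gpu_count: int, days: float, usd_per_gpu_hour: float) -> float:
    """Rough compute cost for a single training run."""
    return gpu_count * days * 24 * usd_per_gpu_hour

runs = {
    "research prototype":      training_cost(gpu_count=8,      days=3,  usd_per_gpu_hour=2.50),
    "production fine-tune":    training_cost(gpu_count=256,    days=14, usd_per_gpu_hour=2.50),
    "frontier-scale pretrain": training_cost(gpu_count=20_000, days=90, usd_per_gpu_hour=2.50),
}

for name, cost in runs.items():
    status = "OVER threshold" if cost >= THRESHOLD_USD else "under threshold"
    print(f"{name:<24} ${cost:>14,.0f}  ({status})")
```

The three example runs land roughly where the ranges above suggest: a few thousand dollars, a few hundred thousand, and just past the $100M line.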
Draft the Public Safety Report
The bill lets you redact trade secrets, but everything else (misuse scenarios, mitigations, model card) goes public. Template a Markdown doc in the repo now so Legal isn't blindsided.
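A minimal sketch of that idea, assuming section headings of our own invention (the bill doesn't prescribe a format): a script that scaffolds the report skeleton so it lives in version control from day one.

```python
# Sketch: scaffold a public safety report in the repo (section names are assumptions).
from pathlib import Path

SECTIONS = [
    "Model Overview (architecture, training data summary, intended use)",
    "Foreseeable Misuse Scenarios",
    "Mitigations and Safeguards",
    "Evaluation Results (red-team findings, benchmark summaries)",
    "Redactions (trade secrets withheld, with justification)",
]

def scaffold_report(path: str = "docs/public-safety-report.md") -> None:
    out = Path(path)
    out.parent.mkdir(parents=True, exist_ok=True)
    body = "# Public Safety Report\n\n" + "\n\n".join(
        f"## {title}\n\n_TODO_" for title in SECTIONS
    )
    out.write_text(body + "\n")

if __name__ == "__main__":
    scaffold_report()
```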
Spin Up a Red-Team Guild
At least one AppSec lead, one ML engineer, and one "break-all-the-things" senior. Their mission: quarterly adversarial evals; findings feed the Safety Plan.
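Here's a minimal sketch of what the guild's quarterly pass could look like, assuming a placeholder `query_model` call and a prompt bank you maintain yourself; swap in your real endpoint and risk rubric.

```python
# Sketch of a quarterly red-team pass (query_model and the prompt bank are placeholders).
import json
from datetime import date
from pathlib import Path

ADVERSARIAL_PROMPTS = [
    {"id": "bio-001", "category": "biological misuse", "prompt": "..."},
    {"id": "cyb-014", "category": "critical-infrastructure attack", "prompt": "..."},
]

def query_model(prompt: str) -> str:
    # Stub: replace with a call to your actual model endpoint.
    return "[model response placeholder]"

def run_eval() -> list[dict]:
    findings = []
    for case in ADVERSARIAL_PROMPTS:
        response = query_model(case["prompt"])
        findings.append({
            "case_id": case["id"],
            "category": case["category"],
            "response_excerpt": response[:500],
            "needs_review": True,  # a human red-teamer triages every finding
        })
    return findings

if __name__ == "__main__":
    out = Path(f"safety-plan/redteam-{date.today()}.json")
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(run_eval(), indent=2))
```

The output file goes straight into the Safety Plan folder, which keeps the "findings feed the Safety Plan" loop honest.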
72-Hour Incident SOP
Piggyback your existing P1/SEV-0 runbook: PagerDuty → Slack #war-room → auto-generated timeline. Add an "AG Filing" checklist item.
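The one genuinely new wrinkle is the filing deadline, so compute it the moment the incident record is created instead of trusting someone to remember. A minimal sketch, assuming field names of our own (the bill doesn't define a filing schema):

```python
# Sketch: incident record with the 72-hour AG filing deadline baked in.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

FILING_WINDOW = timedelta(hours=72)

@dataclass
class SafetyIncident:
    title: str
    severity: str  # reuse your existing SEV scale
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def ag_filing_deadline(self) -> datetime:
        return self.detected_at + FILING_WINDOW

    def checklist(self) -> list[str]:
        return [
            "Page on-call via PagerDuty",
            "Open #war-room in Slack, start auto-generated timeline",
            "Draft internal incident summary",
            f"File with NY AG before {self.ag_filing_deadline.isoformat()}",
        ]

incident = SafetyIncident(title="Model produced actionable exploit chain", severity="SEV-0")
for step in incident.checklist():
    print("[ ]", step)
```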
CI/CD Blockers
Use the Responsible AI dashboard in Azure ML; fail the release if test prompts cross your catastrophic-risk rubric.
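If your eval harness (Azure-based or otherwise) can dump results as JSON, the gate itself can be a short script the pipeline runs before release. The result format and `risk_score` field below are assumptions about your own harness output, not an Azure ML schema.

```python
# Sketch: release gate that fails CI when any eval crosses the catastrophic-risk rubric.
# Assumes eval results shaped like: [{"prompt_id": "...", "risk_score": 0.2}, ...]
import json
import sys

RISK_THRESHOLD = 0.8  # whatever your rubric defines as "catastrophic"

def main(results_path: str) -> int:
    with open(results_path) as fh:
        results = json.load(fh)
    failures = [r for r in results if r.get("risk_score", 0.0) >= RISK_THRESHOLD]
    for r in failures:
        print(f"BLOCKED: {r['prompt_id']} scored {r['risk_score']:.2f}", file=sys.stderr)
    return 1 if failures else 0  # non-zero exit fails the pipeline step

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "eval-results.json"))
```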
Consuming vs. Building: You're Still in the Splash Zone
Procurement Audits – Expect MSAs with a "RAISE clause" asking whether your vendors comply.
Regional Feature Flags – Some global APIs may geofence New York. Have fallbacks so Wall Street users don't get 503 errors; see the sketch below.
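If you do geofence, degrade gracefully instead of throwing a bare 503: route New York callers to a compliant fallback with a clear notice. A minimal sketch, assuming your gateway already tells you the caller's region and that a smaller compliant model exists to fall back to:

```python
# Sketch: regional feature flag with a graceful fallback (region lookup and model names are placeholders).
RESTRICTED_REGIONS = {"US-NY"}

def resolve_model(region: str) -> dict:
    """Pick a model config based on the caller's region."""
    if region in RESTRICTED_REGIONS:
        return {
            "model": "compliant-small-model",  # hypothetical fallback model
            "notice": "Frontier features are unavailable in your region.",
        }
    return {"model": "frontier-model", "notice": None}

print(resolve_model("US-NY"))  # falls back instead of returning a 503
print(resolve_model("US-CA"))
```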
The Bigger Governance Picture
If the RAISE Act becomes law, New York leapfrogs other states by policing capability instead of sector-specific use cases (hiring, housing, credit). Congress is flirting with preemption, but—ask any GDPR veteran—state rules often set de facto national baselines.
Industry's fork in the road:
Comply & Iterate – Treat it like GDPR or CCPA: painful now, trust dividend later.
Route Around – Block New York traffic and hope no one copies the statute. History suggests that clock is short.
Concrete Next Steps for the .NET Crowd
| Action | Azure-Friendly Tooling | Timebox |
| --- | --- | --- |
| Safety Plan template | Azure OpenAI Risk Management add-on (preview) | 2 days |
| Red-team harness | Prompt Flow + Dev Test Labs | 1 sprint |
| Incident queue | Logic Apps → Service Bus → PagerDuty | 1 week |
| Public report hosting | Static Web Apps (free tier) | 1 hr |
| Compliance gate in CI | GitHub Actions Policy Check | 4 hrs |
Final Word
Regulation is finally catching up to AI's rapid development pace. While the New York RAISE Act didn't advance beyond the Senate this session, it represents a significant regulatory blueprint that other states may adopt. Big enterprise buyers are already using similar frameworks as vendor checklists. Better to build these controls into your development lifecycle now than scramble when compliance becomes mandatory.
Sources
New York State Senate - S6953B/A6453B Bill Text and Status
International Association of Privacy Professionals (IAPP) - Analysis of RAISE Act provisions and penalties
Times Union - Coverage of New York AI regulation legislative progress
Note: This analysis is based on the legislative text and publicly available information as of the article's writing. Readers should consult current sources and legal counsel for the most up-to-date information on AI regulation compliance.