The next phase of AI regulation will not wait for the Executive Branch, Congress, or the lawyer at the end of the release cycle. Compliance evidence has to be assembled while the system is being built, or it will not assemble fast enough.
China offers the clearest working example. On Aug. 31, 2023, Baidu opened Ernie Bot to the Chinese public, one of the first government-approved consumer generative AI services in the world. The rule authorizing the launch had been in force for only 16 days. The operating model behind the approval had been years in the making.
The same operating pressure is now arriving in the U.S., but it is arriving through state legislatures more than federal agencies. A draft federal preemption executive order was paused in November 2025. A congressional attempt to impose a 10-year moratorium on state AI laws failed in the Senate by a 99-1 vote. Into that gap, the states moved. California enacted the Transparency in Frontier Artificial Intelligence Act in September 2025, requiring frontier developers to publish safety frameworks and protecting whistleblowers, with key obligations taking effect later. California’s training-data transparency law took effect in January 2026. Colorado’s AI Act, which governs algorithmic discrimination in consequential decisions such as employment, lending, and healthcare, becomes enforceable on June 30, 2026. The EU AI Act adds another layer for U.S. firms serving European customers.

The patchwork is the regulatory environment. Most U.S. compliance models still treat AI governance as a legal review gate at the end of the development cycle. That model will break at AI velocity. Chinese firms moved through a stricter version of this operating problem because they rebuilt the operating model.
Compliance as an engineering constraint
The substance is not foreign to U.S. compliance officers. Algorithm filing addresses transparency. Security assessment addresses model risk. Content moderation addresses fraud, deepfake disclosure, and consumer harm. Pre-launch review establishes accountability. The Chinese regime adds political content controls on top of these categories, but the underlying compliance work maps to concerns any market with consumer-facing AI has to address.
U.S. compliance vocabulary reads this as a heavy regime. Chinese AI firms treat it as a published evidence schema. The filing package becomes a release artifact. Compliance staff defines the evidence schema upfront. Engineering teams then populate it during development, documenting the model’s purpose, data inputs, safety controls, test results, and content moderation procedures as the product is built rather than after it ships. Product and engineering treat the schema as part of launch readiness, not as a post-release task.
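What that looks like in practice is easiest to see as code. The sketch below is illustrative, not drawn from any actual filing regime: compliance defines the schema as a typed artifact in the repository, engineering populates it field by field during development, and an unpopulated field is a launch blocker. All class and field names here are hypothetical.

```python
from dataclasses import dataclass, field, fields

@dataclass
class FilingEvidence:
    """Evidence schema defined by compliance, populated by engineering.

    Field names are illustrative; a real schema would mirror the
    regulator's published filing template.
    """
    model_purpose: str | None = None
    data_inputs: list[str] = field(default_factory=list)
    safety_controls: list[str] = field(default_factory=list)
    test_results: dict[str, float] = field(default_factory=dict)
    moderation_procedures: str | None = None

    def missing_fields(self) -> list[str]:
        """Return schema fields still unpopulated -- each one a launch blocker."""
        return [
            f.name for f in fields(self)
            if getattr(self, f.name) in (None, [], {})
        ]

# Populated incrementally during development, not scrambled at release time.
evidence = FilingEvidence(model_purpose="Customer-support summarization")
print(evidence.missing_fields())  # the remaining launch blockers
```

The point is not the language; it is that the schema is versioned, testable, and visible to the same tooling that gates every other release artifact.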
The result is what I call compliance velocity: the ability to move from feature freeze to regulatory readiness without rebuilding the evidence package after the fact. Filing materials are assembled during model training. Review windows compress because the artifacts arrive complete. Legal review confirms compliance posture rather than discovering it.
Why standardization compressed, rather than stalled, deployment
The contrarian finding will register quickly with most compliance officers. Prescriptive regulation did not slow Chinese deployment. Under the right operating model, it compressed it.
When regulators publish exact filing schemas, security assessment criteria, and approval timelines, compliance becomes engineering work. Engineers automate against published specifications. Compliance officers build templates against published criteria. Auditors review against the same standard that the regulator uses. Variance shrinks. Cycle time drops.
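When the filing schema is published in machine-readable form, that automation is an ordinary CI gate. A minimal sketch, assuming a hypothetical filing_schema.json that mirrors the regulator's published template and the widely used jsonschema library:

```python
import json
import sys
from jsonschema import validate, ValidationError  # pip install jsonschema

def check_filing_package(package_path: str, schema_path: str) -> bool:
    """Fail the build if the evidence package deviates from the published schema."""
    with open(package_path) as f:
        package = json.load(f)
    with open(schema_path) as f:
        schema = json.load(f)
    try:
        validate(instance=package, schema=schema)
    except ValidationError as err:
        print(f"Filing package invalid: {err.message}", file=sys.stderr)
        return False
    return True

if __name__ == "__main__":
    # Both paths are hypothetical; a real pipeline would pin schema versions.
    ok = check_filing_package("evidence/filing_package.json",
                              "schemas/filing_schema.json")
    sys.exit(0 if ok else 1)
```

The same check runs identically for the engineer, the compliance officer, and the auditor, which is where the variance reduction comes from.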
The U.S. model inverts this. Fragmented state requirements and case-by-case enforcement leave each company building its own interpretation. Each interpretation requires its own legal review. Each review extends the cycle.
The throughput tells the story. China’s generative AI filing category grew from 64 services in 2023 to 238 in 2024, reaching 346 by April 2025. That trajectory depends on standardized inputs, and standardized inputs require regulators to publish them in operational form. Most U.S. state frameworks have not yet reached that specificity, which means U.S. compliance organizations have to build the operational layer themselves.
What this means for U.S. compliance officers
The U.S. environment will not standardize to Chinese specifications. Few U.S. companies want to sell into China at the cost of disclosing model details, training data, and IP to the Chinese government. The practical regulatory floor for most U.S. companies is being set elsewhere, by the EU AI Act and by states with large markets, such as California, Colorado, and New York. The operational lesson from China transfers anyway, because the bottleneck is structural, not jurisdictional.
NIST AI RMF subcategories, EU AI Act Annex IV documentation, and state disclosure rules each map to artifacts that data science teams already produce or should produce as a matter of course. Treat those requirements as an evidence specification rather than a compliance interpretation. Colorado’s AI Act makes this more than good governance. Organizations that discover and cure violations and can show compliance with the latest NIST AI RMF, or another recognized framework, may have an affirmative defense. That turns schema work into legal evidence, not just operational hygiene.
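One way to operationalize that evidence specification is a requirements-to-artifacts map kept under version control, so every obligation points at the artifact that evidences it and coverage gaps are queryable. The identifiers and file names below are illustrative placeholders, not the actual clause numbers:

```python
# Illustrative mapping; real entries would cite the actual NIST AI RMF
# subcategories, EU AI Act Annex IV sections, and state-law provisions.
EVIDENCE_MAP: dict[str, dict[str, str]] = {
    "NIST-AI-RMF/MAP": {
        "requirement": "Document intended purpose and context of use",
        "artifact": "model_card.md",
    },
    "EU-AI-Act/Annex-IV": {
        "requirement": "Technical documentation for high-risk systems",
        "artifact": "annex_iv_package/",
    },
    "CO-AI-Act/impact-assessment": {
        "requirement": "Algorithmic discrimination impact assessment",
        "artifact": "impact_assessment.pdf",
    },
}

def audit_coverage(produced_artifacts: set[str]) -> list[str]:
    """List obligations whose evidencing artifact has not been produced."""
    return [
        req_id
        for req_id, entry in EVIDENCE_MAP.items()
        if entry["artifact"] not in produced_artifacts
    ]
```

Under Colorado's affirmative-defense provision, a map like this doubles as the index of the record a company would produce to show framework compliance.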
That work fails if compliance staff sit at the end of the pipeline. The legal-as-final-gate model breaks when models retrain monthly and deploy weekly. Evidence has to be assembled during development, or it does not assemble fast enough. The compliance officer’s seat moves into the product squad, not the end-of-cycle review.
Then the metrics follow. The time from feature freeze to compliance sign-off becomes a product KPI, tracked alongside time-to-market and security posture. Reducing it means the operating model is working. Letting it grow means the legal review gate has quietly reasserted itself.
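Instrumenting that KPI requires nothing exotic: timestamp the feature freeze and the compliance sign-off for each release and watch the trend. A minimal sketch with invented release data:

```python
from datetime import date

# Hypothetical release log: (release, feature_freeze, compliance_sign_off)
releases = [
    ("v1.4", date(2026, 1, 5), date(2026, 2, 9)),
    ("v1.5", date(2026, 3, 2), date(2026, 3, 27)),
    ("v1.6", date(2026, 4, 6), date(2026, 4, 22)),
]

for name, freeze, signoff in releases:
    lag = (signoff - freeze).days
    print(f"{name}: {lag} days from feature freeze to compliance sign-off")
# A shrinking lag means the operating model is working; a growing lag
# means the end-of-cycle legal gate has quietly reasserted itself.
```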
Chinese AI firms did not achieve faster compliance by lowering standards. They achieved it by industrializing the workflow. The regulatory environment matters, but the immediate bottleneck within most U.S. companies is the architecture for producing evidence of compliance.
The firms that launched on Aug. 31, 2023, did not treat compliance as a document scramble. They treated it as launch infrastructure. U.S. compliance leaders do not need China’s regulatory model. They do need to answer the operational question China’s system exposes: when AI rules become enforceable, can your company produce compliance evidence at product speed?
Collin Hogue-Spears is an independent researcher writing on AI governance and regulatory compliance. His forthcoming book, From Lab to Life: How AI Works in China (Gatekeeper Press, July 2026), examines how regulation, capability, and distribution operate as a single system in the Chinese AI market.