AI tools are arriving through the back door of enterprise software — no contract, no due diligence, no TPRM trigger — and most manufacturing compliance functions have no idea they are already inside.
Picture this: A business unit requests a predictive procurement tool to optimize raw material sourcing. It clears standard approval channels: procurement review, a data processing agreement (DPA), and IT access. Eighteen months later, during a GDPR review, someone notices the platform’s latest release activated a generative AI feature now processing supplier pricing data, production volume forecasts, and preferred vendor lists through a large language model. The foundation model operator does not appear in the vendor register. No new contract, no updated DPA, no TPRM trigger. The vendor issued a changelog entry. Nobody flagged it.
I have had this kind of conversation with compliance leads at four different manufacturing groups in the past year. The details vary. But the structural gap is identical every time.
About the Author

Lydia Montalbano is an attorney who has spent the better part of the last decade building, developing, and leading integrity and compliance infrastructures in places where they barely existed yet — most recently at AkzoNobel, a Dutch-American manufacturer, where she covers 45+ countries. The views expressed in this article are the author’s own and do not represent the positions or opinions of her employer or any affiliated organization.
Why manufacturing carries more risk than most
The TPRM conversation has so far been dominated by financial services and healthcare. The data exposure profile in manufacturing is arguably more complex, and the intake gaps are just as wide. S/4HANA, SAP’s flagship enterprise resource planning (ERP) platform and a mainstay across manufacturing, now ships with Joule, an embedded generative AI assistant that recent releases activate by default.
Product lifecycle management platforms like PTC Windchill and Siemens Teamcenter have introduced AI-assisted design review features. Predictive maintenance tools across OT infrastructure — compressors, coating lines, mixing equipment — are increasingly feeding operational data into cloud-hosted models.
The data these tools are processing is not generic. It includes proprietary formulations, process parameters representing decades of R&D investment, raw material supplier terms, and in some sectors, technical specifications carrying dual-use classification under EU export control regulation or the U.S. Export Administration Regulations (EAR). If that data is processed by a foundation model whose operator, training methodology, and retention practices are unknown to your TPRM team — and in most programs today, they are — no DPA clause will close that gap after the fact.
The trigger that never fires
Traditional TPRM runs on commercial events: A contract is signed, a purchase order is raised, and an IT access request is submitted. AI capabilities in SaaS products do not arrive that way. They arrive as feature updates. A vendor activates a new capability in your existing licensed environment, pushes a changelog entry that procurement never reads, and the risk profile of a tool you approved two years ago changes overnight without triggering a single workflow in your vendor management program.
The EU AI Act makes this operationally urgent. Article 26 places explicit obligations on deployers — including manufacturers — to document the intended purpose of AI systems, monitor outputs, and maintain evidence of controls. You cannot satisfy that obligation for a system whose AI component you did not know existed. Not knowing is not a mitigating factor in a regulatory examination. It is the finding.
The sub-processor chain nobody is mapping
Even organizations that have started asking vendors about AI use are missing the next layer.
The SaaS vendor is your third party. The foundation model powering their AI feature is your fourth party: OpenAI, Anthropic, Google DeepMind, or a model the vendor itself may have limited visibility into.
That provider may run on infrastructure operated by a fifth party. In manufacturing, where even tier-1 supplier risk is difficult to map, few programs can credibly claim visibility into fifth-party AI data flows. A vendor that cannot tell you which foundation model processes your production data, or whether it retrains on customer inputs, has not done the assessment needed to answer you. Treat it as a material finding, not an administrative gap.
Three changes that close the gap
None of this requires a new framework — only targeted changes to what manufacturing compliance and procurement teams already have.
- Decouple AI intake from commercial events. Build a workflow that triggers TPRM re-assessment whenever a vendor release note or changelog references AI or large language model functionality — regardless of whether any purchasing action has occurred. For platforms like SAP or PTC, where update cycles are predictable, this is a scheduling exercise, not a technology investment. The gap exists because no one closed it, not because closing it is difficult.
- Add an AI-specific addendum to your due diligence questionnaire. The Standardized Information Gathering questionnaire (SIG) and the Cloud Security Alliance’s Consensus Assessments Initiative Questionnaire (CAIQ) remain useful for cloud security fundamentals, but were not written for foundation model risk. Your addendum should require, at minimum: The identity of every AI model used and its provider; confirmation of whether customer data is used for model training; a current sub-processor list specific to AI functionality; and evidence of ISO/IEC 42001 certification or a roadmap to it. For vendors handling export-controlled data, add a question on where model inference occurs geographically. Vendors who cannot answer have not assessed their own exposure.
- Change who sits in the room. TPRM governance committees in manufacturing were built around legal, procurement, and IT security. AI risk requires your CISO, your data privacy officer, and — where OT infrastructure is involved — your plant operations lead. A committee without the depth to interrogate model architecture and training data provenance will approve risks it cannot see. This is not a resource constraint. It is a governance design failure.
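The first change above, a trigger decoupled from purchasing events, can be sketched as a keyword scan over vendor release notes. The trigger terms and example changelog strings below are illustrative assumptions, not a vetted list; a real program would maintain and periodically review its own term set.

```python
import re

# Illustrative trigger terms; maintain and review this list in practice.
AI_TRIGGER = re.compile(
    r"\b(AI|artificial intelligence|LLM|large language model|"
    r"generative|machine learning|copilot|assistant)\b",
    re.IGNORECASE,
)

def needs_reassessment(changelog_entry: str) -> bool:
    """True if a release note mentions AI functionality.
    The trigger fires on the changelog itself, not on any commercial event."""
    return bool(AI_TRIGGER.search(changelog_entry))

entries = [
    "v12.3: performance fixes for batch scheduling",
    "v12.4: new generative AI assistant for supplier pricing analysis",
]
flagged = [e for e in entries if needs_reassessment(e)]
print(flagged)  # only v12.4 is routed to TPRM re-assessment
```

Keyword matching will produce false positives; that is acceptable here, since the cost of an unnecessary review is far lower than the cost of a missed activation.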
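The questionnaire addendum reduces to a completeness check over the minimum fields listed above. The field names and the example response here are hypothetical, not drawn from SIG or CAIQ; an empty or missing answer is treated as a gap, consistent with the principle that vendors who cannot answer have not assessed their own exposure.

```python
# Minimum addendum fields from the article; key names are illustrative.
REQUIRED_AI_FIELDS = [
    "model_identity_and_provider",
    "customer_data_used_for_training",
    "ai_subprocessor_list",
    "iso_42001_status",  # certification or a roadmap to it
]
EXPORT_CONTROL_FIELD = "inference_geography"  # only for export-controlled data

def addendum_gaps(responses: dict, handles_export_controlled: bool = False) -> list[str]:
    """Return the addendum questions a vendor left unanswered.
    A blank answer and a missing key both count as a gap."""
    required = list(REQUIRED_AI_FIELDS)
    if handles_export_controlled:
        required.append(EXPORT_CONTROL_FIELD)
    return [f for f in required if not responses.get(f)]

response = {
    "model_identity_and_provider": "hypothetical-llm-provider, GPT-class model",
    "customer_data_used_for_training": "No",
    "ai_subprocessor_list": "",  # blank answer: gap
}
print(addendum_gaps(response, handles_export_controlled=True))
# -> ['ai_subprocessor_list', 'iso_42001_status', 'inference_geography']
```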
The perimeter has already moved
Manufacturing has been ransomware’s primary target sector for five consecutive years, accounting for more than two-thirds of all industrial victims in 2025 alone, according to the Dragos 2026 OT Cybersecurity Year in Review. AI-enabled tools embedded in operational and commercial software represent the next iteration of that pattern: capabilities that expand organizational data exposure, delivered through channels that existing risk controls were not built to detect.
AI Act deployer obligations are live now for high-risk systems and will extend progressively. The organizations closing the TPRM gap are not waiting for an enforcement action — they are rewriting intake logic, updating questionnaires, and changing committee composition. It is the most consequential risk management work a manufacturing compliance function can be doing right now.
That gap will not close itself.


