As AI adoption accelerates, organizations are quietly deprioritizing the very safeguards that keep them compliant — creating governance blind spots, regulatory exposure, and stakeholder trust gaps that compound faster than most leaders realize. Compliance teams don’t have to wait for the consequences to hit: implement the following concrete steps to put technoethics back at the center of AI strategy.
While business leaders remain fixated on external threats, many are overlooking the risks festering within their own organizations. According to BDO’s 2025 Techtonic States Report: Turn Risk Into Resilience, 59 percent of business leaders view technoethics and data ownership as critical to their organization’s growth today. But that figure is predicted to drop to 52 percent by 2028 and continue to trend downward.
This development highlights a concerning trend: Many organizations are increasingly neglecting the essential safeguards required as artificial intelligence (AI) becomes more deeply integrated into everyday business operations.
About the Authors

Ric Opal is the Global Digital Leader, Principal, and U.S. National Leader for Cyber and IT Solutions at BDO Digital. He oversees BDO Digital’s global stream and leads BDO USA’s Cyber and IT Solutions segment, driving a unified digital strategy that integrates people, process, and technology to deliver measurable client value.
What’s driving this deprioritization?
The decline in prioritizing technoethics stems from a fundamental misunderstanding of AI, compounded by internal misalignment around how to govern its capabilities and risks. Newer members of the workforce, for example, may be more eager to adopt AI tools without recognizing their security implications. Seasoned executives, on the other hand, may be more cautious but underestimate the scale, pervasiveness, and impact of AI use across the organization.
This misalignment inevitably leads to governance blind spots or missteps. For example, software development teams and chief technology officers may become de facto AI governance committee leaders, designing systems and policies without meaningful input from legal, internal audit, or compliance functions, or the board's perspective. As a result, organizations may assume that they can use AI tools and agents under traditional privacy controls (they cannot) and that their existing incident response protocols are sufficient for AI-related incidents (they are not).
Without proper governance, business leaders are also less likely to thoroughly vet vendors and their associated risks. Most organizations buy pre-trained AI models rather than training their own, and have little to no visibility into a vendor’s training data, decision logic, or model evaluation. When it comes to these insights, companies — particularly small and mid-sized businesses — have limited negotiating power, leaving them contractually bound to opacity.
Karen Schuler is the global head of Privacy, Data, and AI Governance at BDO USA. She and her team advise Global 500 and mid-market companies on implementing privacy and data protection at the onset of design, investigating data leaks and the mishandling of personal data, managing 24/7 data subject request contact centers, and implementing and administering an array of privacy technologies.
The dangers of deprioritizing technoethics
When tech teams make governance decisions in silos, and opaque vendor contracts create information blind spots, organizations risk noncompliance and jeopardize stakeholder trust.
Organizations must operate across jurisdictions with incompatible and often conflicting requirements. Failure to track and adhere to these requirements can lead to compliance risks, potential penalties for algorithmic bias, and intensified scrutiny of data practices. California’s Automated Decision-Making opt-out rights, for instance, function entirely differently from approaches in other states, forcing companies to build multiple compliance frameworks simultaneously.
Meanwhile, on a global level, the EU AI Act’s GPAI regulations introduce further complications for multinational organizations. In some cases, this regulatory fragmentation has become so severe that legal and compliance teams are bottlenecking AI procurement, requiring extensive contract review before any technical decisions can be locked in.
But even as legal teams scrutinize contracts, some discover they cannot answer the questions that matter most, and stakeholder trust is eroding faster than organizations realize. Internal audit and compliance teams are finding they cannot answer basic client questions about how AI systems make decisions, while employees simultaneously deploy shadow AI tools without IT oversight, creating risks and ethical questions that leadership does not know exist. This visibility gap sets off a vicious cycle: Companies cannot govern what they cannot see, and therefore cannot provide the full transparency that stakeholders increasingly demand.
This opacity on technology use can compound into other data liabilities for the organization. When a company does not have full visibility into its data foundation, it is less likely to maintain clean, complete, and accurate data, reducing quality and compounding risk over time. The principle of “garbage in, garbage out” is especially salient for AI systems, where poor data quality produces flawed decisions at scale. Without clean data and robust security measures, the entire AI infrastructure sits on a shaky foundation — similar to a house built without proper groundwork or alarm systems.
Five steps for compliance teams to implement today
Instead of waiting for regulatory clarity or other business priorities to subside, legal and compliance teams must actively work toward prioritizing technoethics now:
1. Start with a risk reality check. While privacy impact assessments have existed for decades, most have not been updated to include detailed AI-specific questions beyond, “Are you using AI?” Organizations need to implement multiple discovery methods to learn where employees are actually using AI, so they can anticipate issues rather than react only once incidents fully materialize.
2. Embed compliance into AI deployment from the start. Legal and compliance teams must be involved in every AI-related decision, so they can scrutinize data ownership terms, model transparency requirements, and liability provisions that will bind the organization for years.
3. Rethink data governance for jurisdictional requirements. Effective governance builds on visibility: knowing what data exists, where it lives, who owns it, who accesses it, and how it is protected. Organizations should map their value chains to capture dependencies and address cloud, data, and AI sovereignty early to avoid costly rebuilds when regulations shift.
4. Invest in people and break down silos. Technology has become fundamental to virtually every aspect of a business; therefore, organizations should not expect employees to use technology without also requiring them to understand its ethical use. Organizations should hire for multidisciplinary capabilities and build cross-functional governance teams from the start, not after problems emerge.
5. Stay ahead with proactive scenario planning. Even if an organization is not currently subject to the EU AI Act or China’s AI regulations, it should use these frameworks as guidelines for future risk assessments. To build a strong governance foundation, legal and compliance teams should proactively question how AI will affect company operations so they can avoid disruption.
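To make the discovery methods in step one concrete, here is a minimal sketch of one approach: flagging traffic to known AI services in web proxy logs. The domain watchlist, the log format, and the function names are all illustrative assumptions, not a reference to any particular tool; a real deployment would read the proxy's actual log format and maintain the watchlist centrally.

```python
from collections import Counter

# Hypothetical watchlist of AI service domains. A real program would
# load this from a maintained, regularly updated inventory.
AI_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}

def shadow_ai_hits(proxy_log_lines):
    """Count requests to known AI services, per user.

    Each log line is assumed to be a simple 'user domain' pair, a
    stand-in for whatever format the organization's proxy emits.
    """
    hits = Counter()
    for line in proxy_log_lines:
        user, domain = line.split()
        if domain in AI_DOMAINS:
            hits[user] += 1
    return hits

sample = [
    "alice api.openai.com",
    "alice intranet.example.com",
    "bob claude.ai",
    "bob claude.ai",
]
print(shadow_ai_hits(sample))  # Counter({'bob': 2, 'alice': 1})
```

Even a rough tally like this gives compliance teams a starting map of shadow AI use, which can then guide interviews and formal assessments rather than replacing them.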
Technoethics are not fading from the corporate agenda. Rather, technoethics must move from the periphery to the center as AI adoption accelerates. Legal and risk teams must become embedded in AI strategy at the outset of any technical decision. Organizations that build strong data foundations and mandate responsible governance will be able to innovate not only safely but also more confidently. Treating ethics as an afterthought, however, leads to contractual opacity, regulatory exposure, and stakeholder distrust, problems that are far more expensive to fix than to prevent.