The global landscape for artificial intelligence (AI) policy and regulation is still emerging. While some regulations and standards exist, governments, industry, and security leaders have critical gaps to close.
The European Union Artificial Intelligence Act is one of the most significant regulations organizations must comply with today. As the world’s first comprehensive legal framework for AI, the EU AI Act aims to ensure that AI in Europe is safe, encourages trustworthy innovation, and respects the rights of EU citizens. It categorizes AI systems by risk level, banning those that pose unacceptable risk, tightening rules for high-risk applications such as healthcare, and mandating transparency for lower-risk ones. This regulate-first approach to responsible AI development is a strong start, but it addresses security only at a high level.
In contrast, the U.S. administration introduced America’s AI Action Plan in July 2025. The plan prescribes an AI adoption strategy focused on deregulation and global leadership through innovation. The plan is organized around three strategic pillars: innovation, infrastructure, and international diplomacy and security.
Under this framework, the government aims to accelerate AI research (especially in explainability, robustness, and open models), streamline permitting for data centers, and establish shared benchmarks for industry adoption. It also elevates national security as a core concern, emphasizing protections against misuse, export control of U.S. AI systems, and coordination with allies to shape global norms. Additionally, this plan shifts the focus from “AI safety” to “AI security,” recognizing the evolving risks, but stops short of prescribing actual security best practices and requirements.
Beyond the White House, NIST has been the most active U.S. body shaping AI guidance. Its voluntary AI Risk Management Framework (AI RMF) continues to be a key resource, with an update mandated by the AI Action Plan on the way. NIST has gone even further, though, launching two major ongoing efforts beyond the AI RMF:
● New Cyber AI Community Profile tied to NIST CSF 2.0
● Expanded work on agentic AI overlays for NIST SP 800-53 Rev 5
With these most recent updates, NIST has begun incorporating agentic AI into its community profiles and cybersecurity framework work, marking one of the first explicit federal acknowledgments of AI agents. Yet while this activity shows policy moving faster than ever, it still lags behind what organizations need today.
The Agentic AI Blind Spot
As organizations work to comply with the EU AI Act and select frameworks or standards that best align with their needs, a blind spot has emerged: AI agents. To harness the power of agentic AI securely, organizations require a nuanced approach that treats security as a business enabler rather than a blocker.
Currently, there is no structured regulatory guidance that specifically addresses agentic AI. This gap creates a “no man’s land” for organizations trying to push ahead with innovation without clear rules of the road. Left unchecked, it will lead to stalled AI initiatives blocked by internal security teams or, worse, to headline-making breaches. Policymakers and standards bodies recognize this omission.
For example, as mentioned above, NIST is now explicitly discussing agentic AI in its community profiles and detection pillar updates. However, organizations cannot rely on regulations arriving in time.
Don’t wait for policy; prepare now
While new agentic AI guidelines are in development, security leaders should prepare now so that innovative AI projects can move forward without being derailed. The only sustainable way to innovate is to treat security as a core design principle. Any standard or regulation introduced for agentic AI will almost certainly incorporate well-known security best practices, so even without strict rules from regulators today, the key is to start with what you know will be included.
Where should you start?
● Master the basics: Effective preparation starts with visibility. Build a comprehensive inventory of every AI system, along with its purpose, capabilities, and risks, and align this documentation with current and emerging requirements (a sample inventory record is sketched after this list). Without visibility, both compliance and security will fall short.
● Integrate threat modeling: Make threat analysis a core part of the development lifecycle from the start. Use frameworks like the OWASP Agentic AI Threats and Mitigations guide to identify potential threats and strengthen defenses before deployment (see the threat-model sketch after this list).
● Data and model security still matter: Model security is critical, whether you are building on a proprietary foundation model (e.g., from Google or Microsoft) or assembling a custom model from open-source components. Focus on the data protection layer: document and properly label where sensitive and critical data resides and how it flows.
● Plan for continuous monitoring: Security cannot rely on a single safeguard; it must be built into every layer of the system. Threats like prompt injection cannot be stopped at the perimeter alone, so every step of the agent chain must be monitored for anomalies (a minimal monitoring sketch follows this list).
● Include agents in insider-threat programs: Agents have all the autonomy of employees, and in some cases, more. Monitor them for unusual activity that could indicate compromise or nefarious actions.
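To make the inventory step concrete, here is a minimal sketch of what a single agent inventory record might look like in Python. Every field name, value, and risk label below is illustrative; none comes from the EU AI Act, NIST, or any other framework mentioned above.

```python
from dataclasses import dataclass

# Illustrative risk tiers loosely echoing a risk-based approach;
# these labels are assumptions, not the EU AI Act's legal categories.
RISK_LEVELS = {"minimal", "limited", "high"}

@dataclass
class AgentInventoryRecord:
    """One entry in an agentic AI inventory. All fields are hypothetical."""
    name: str                # e.g., "invoice-triage-agent"
    owner: str               # accountable team or person
    purpose: str             # business function the agent serves
    capabilities: list[str]  # tools/actions the agent may invoke
    data_sensitivity: str    # label for the most sensitive data touched
    data_flows: list[str]    # where data comes from and goes to
    risk_level: str = "limited"

    def __post_init__(self) -> None:
        # Reject records that skip risk classification.
        if self.risk_level not in RISK_LEVELS:
            raise ValueError(f"unknown risk level: {self.risk_level}")

# Example entry: purpose, capabilities, and data handling documented in one place.
record = AgentInventoryRecord(
    name="invoice-triage-agent",
    owner="finance-automation-team",
    purpose="Classify and route incoming invoices",
    capabilities=["read_email", "query_erp", "create_ticket"],
    data_sensitivity="confidential",
    data_flows=["email -> agent", "agent -> ERP", "agent -> ticketing"],
    risk_level="high",
)
```

However simple, a record like this gives security and compliance teams a shared starting point for mapping agents to current and emerging requirements.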
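Threat modeling can start just as lightweight. The sketch below shows one way to capture threats as structured records per agent; the threat names and mitigations are generic examples in the spirit of the OWASP guide, not its official taxonomy.

```python
from dataclasses import dataclass

@dataclass
class ThreatEntry:
    """A lightweight threat-model record for one agent. Fields are illustrative."""
    threat: str       # what could go wrong
    entry_point: str  # where an attacker influences the agent
    impact: str       # what is at stake if the threat is realized
    mitigation: str   # planned or implemented control

# Generic examples; consult the OWASP Agentic AI Threats and Mitigations
# guide for an authoritative catalog of agentic threats.
threat_model = [
    ThreatEntry(
        threat="Prompt injection via untrusted email content",
        entry_point="read_email tool",
        impact="Agent is steered into exfiltrating invoice data",
        mitigation="Sanitize retrieved content; strip embedded instructions",
    ),
    ThreatEntry(
        threat="Tool misuse after credential compromise",
        entry_point="query_erp tool",
        impact="Unauthorized reads of financial records",
        mitigation="Least-privilege scopes; per-tool authorization checks",
    ),
]
```

Reviewing a handful of records like these before each deployment is far cheaper than discovering the same weaknesses in production.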
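Continuous monitoring, including insider-threat coverage for agents, can begin with instrumenting every tool call. The wrapper below applies two simple checks, an allow-list and a rate limit; the thresholds and all names are assumptions for illustration, not a complete anomaly-detection system.

```python
import logging
import time
from collections import defaultdict

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-monitor")

class ToolCallMonitor:
    """Logs every agent tool call and flags simple anomalies."""

    def __init__(self, allowed_tools: set[str], max_calls_per_minute: int = 30):
        self.allowed_tools = allowed_tools          # from the inventory record
        self.max_calls_per_minute = max_calls_per_minute
        self.call_times: dict[str, list[float]] = defaultdict(list)

    def record(self, agent: str, tool: str) -> bool:
        """Return True if the call looks normal, False if it was flagged."""
        now = time.monotonic()
        log.info("agent=%s tool=%s", agent, tool)

        # Flag tools outside the agent's documented capabilities.
        if tool not in self.allowed_tools:
            log.warning("FLAG: %s invoked undocumented tool %s", agent, tool)
            return False

        # Flag bursts of activity, a crude insider-threat-style signal.
        recent = [t for t in self.call_times[agent] if now - t < 60]
        recent.append(now)
        self.call_times[agent] = recent
        if len(recent) > self.max_calls_per_minute:
            log.warning("FLAG: %s exceeded %d calls/minute",
                        agent, self.max_calls_per_minute)
            return False
        return True

# Usage: call before each step of the agent chain executes a tool.
monitor = ToolCallMonitor(allowed_tools={"read_email", "query_erp", "create_ticket"})
monitor.record("invoice-triage-agent", "read_email")   # normal
monitor.record("invoice-triage-agent", "delete_user")  # flagged
```

In practice this telemetry would feed the same detection pipeline that watches human insiders, which is exactly the point of extending insider-threat programs to agents.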
Security is key to agentic AI’s role as an innovation enabler
When we step back and compare regulatory approaches, the EU’s strict framework may hinder innovation, while the U.S.’s deregulatory approach risks leaving organizations without guidance. Neither is perfect, and neither fully addresses agentic AI.
This leaves the responsibility with organizations themselves. Those who act now, implementing security basics, integrating threat modeling, and continuously monitoring agents, will not only avoid being blocked by internal security teams but also unlock AI’s potential as a competitive advantage.
So, don’t wait for the perfect regulation. Instead, pick a best practice and get started today. Security shouldn’t be a brake on AI innovation. It should be the seatbelt that allows organizations to accelerate confidently.
Companies don’t have to choose between security and innovation. They should embrace both.
About the Author

Kayla is the director of AI security and policy advocacy at Zenity, a vendor in agentic AI trust, risk, and security management (TRiSM). Throughout her career, she has served as a practitioner in vulnerability management, security operations, and crowdsourced security, and as an advocate for the advancement of agentic AI security and governance.