Client Alert By Reiko Feaver: Agentic AI: New Risks, New Strategies

Authored by CM Law Partner Reiko Feaver, this article outlines practical steps organizations should take now, including risk assessments, technical controls, governance structures, and contract updates.

Agentic AI, meaning AI systems that autonomously take actions across an organization's systems, is rapidly moving from concept to deployment. Unlike conventional AI tools that generate text or images in response to a single prompt, agentic AI systems can act on their own: reading files, querying databases, sending communications, and interacting across multiple applications with little or no human review.

This autonomy creates tremendous opportunity, but it also introduces risks that existing governance and contracting frameworks were not designed to address. Because agents act autonomously, coordinate across systems and with one another, and find unintended pathways to their goals, failures may go undetected until the damage is done, and tracing the point of failure can be extremely difficult. This alert summarizes key risk categories and practical steps organizations should consider before and during deployment of agentic AI.

Key Risk Categories

Behavioral risk. Agents can be compromised by malicious inputs (such as prompt injection attacks), human configuration errors, the actions of other agents, or misaligned optimization, in which the agent pursues its assigned goal through unintended and harmful means. Unlike a cyberattack that breaks through security protections, an agent compromise exploits the agent's own legitimate capabilities.

Supply chain risk. Agentic systems depend on models, APIs, integrations, and infrastructure from multiple third parties. Each component is a potential vulnerability, and changes by any provider — even silent model updates — can alter the agent’s behavior without warning.

Governance gaps. The speed and autonomy of agents challenge traditional oversight. “Shadow AI” — unauthorized deployment of AI agents by employees — poses a particularly acute and often invisible risk.

Legal and regulatory exposure. Emerging frameworks, including the EU AI Act, NIST guidance, and state-level AI legislation, are establishing new compliance obligations targeted at autonomous systems. Compliance programs built for predictable, deterministic software are not equipped to govern agentic AI.

Practical Steps for Organizations

    • Conduct an Agentic AI Risk Assessment

Before deploying any agent, map the systems it will access, the data it will process, and the actions it will take. Identify where human-in-the-loop controls are needed — particularly for irreversible actions, communications on behalf of the organization, financial transactions, and access to regulated data. This assessment also forms the foundation for procurement and governance decisions.

    • Implement Targeted Technical Controls

Key measures include data classification and minimization, least-privilege access configurations, comprehensive logging of all agent actions, and continuous monitoring for anomalous behavior. Organizations should also consider adversarial testing and red-teaming to proactively identify vulnerabilities before they are exploited.
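As a purely illustrative sketch of two of these measures, least-privilege access and comprehensive action logging, the pattern might look like the following (all agent and tool names are hypothetical and not tied to any particular product or framework):

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

# Hypothetical least-privilege policy: each agent may invoke only the
# tools explicitly granted to it; anything not listed is denied.
ALLOWED_TOOLS = {
    "invoice-agent": {"read_invoice", "draft_email"},   # no send, no payment
    "support-agent": {"read_ticket", "draft_reply"},
}

def invoke_tool(agent_id: str, tool: str, **kwargs):
    """Gate and log every agent action before it executes."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if tool not in ALLOWED_TOOLS.get(agent_id, set()):
        # Denied attempts are logged too, so anomalous behavior is visible.
        log.warning("%s DENIED %s -> %s %s", timestamp, agent_id, tool, kwargs)
        raise PermissionError(f"{agent_id} is not permitted to call {tool}")
    log.info("%s ALLOWED %s -> %s %s", timestamp, agent_id, tool, kwargs)
    # ... dispatch to the real tool implementation here ...
    return {"tool": tool, "status": "executed"}
```

The point of the sketch is the default-deny posture: the agent's capabilities are enumerated up front, and every action, permitted or not, leaves an audit trail that monitoring can review.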

    • Build Governance Structures

Establish a cross-functional AI governance council with authority over agent deployments. Maintain a central inventory of all deployed agents. Assign a named individual accountable for each agent’s configuration and actions. Develop and regularly practice an incident response plan, and ensure training at all levels of the organization so that personnel understand the risks and the importance of governance policies.

    • Update Contracting Practices

Existing technology contracts are unlikely to adequately address agentic AI risks. Key areas to evaluate include:

      • Clearly defining the agent’s permitted actions and required human oversight points
      • Allocating liability for autonomous agent actions and specifying who bears responsibility when oversight controls fail
      • Requiring vendor notification and approval rights for model or system changes
      • Addressing IP ownership of inputs, outputs, and agent customizations
      • Evaluating AI-specific insurance requirements, as general liability policies increasingly exclude AI-related incidents
      • Including meaningful audit rights and operational safeguards such as rollback and kill-switch capabilities

In Sum

Agentic AI offers meaningful advantages, but the governance, technical, and legal frameworks that have served organizations well for traditional technology are not sufficient for systems that act autonomously. Organizations that assess their exposure now, implement targeted controls, and adapt their contracting practices will be far better positioned to capture the benefits while managing the risks. Those that wait for a regulatory mandate or an incident to force action face a far more costly and disruptive path.

If you have questions about the benefits and exposures of agentic AI for your organization, please feel free to reach out to Reiko directly at rfeaver@cm.law.


About CM Law

CM Law (cm.law) – formerly Culhane Meadows – is the largest national, full-service, women-owned & managed (WBE) law firm in the United States. Designed to provide experienced attorneys with an optimal way to practice sophisticated law while maintaining a superior work/life balance, the firm offers fully remote work options; a transparent, merit- and math-based compensation structure; and a collaborative culture. Serving a diverse clientele, from individuals and small businesses to over 40 Fortune-ranked companies, CM Law is committed to delivering exceptional legal services across a broad spectrum of industries.


The foregoing content is for informational purposes only and should not be relied upon as legal advice. Federal, state, and local laws can change rapidly and, therefore, this content may become obsolete or outdated. Please consult with an attorney of your choice to ensure you obtain the most current and accurate counsel about your particular situation.