← Content
AI · 8 min read · April 28, 2026

Agentic AI Security Requires Layered Defense, Not Just Prompt Guards

A new framework maps AI agent vulnerabilities across seven architectural layers and four time horizons, revealing that 93% of research ignores the slowest, most dangerous threats.

Source: arxiv/cs.LG · Kexin Chu · open original ↗ ↗
Share: X LinkedIn

Agentic AI systems need security models that account for persistent memory, tool use, and multi-agent coordination across extended time horizons.

  • Layered Attack Surface Model (LASM) maps seven distinct architectural components vulnerable to different threat classes.
  • Attack temporality spans four classes: instantaneous, session-persistent, cross-session cumulative, and sub-session non-bounded.
  • Most dangerous threats cluster at high layers (governance, multi-agent, ecosystem) with slow-burn temporality.
  • Only 7% of 120 reviewed threat-defense pairs address the high-layer, slow-burn zone.
  • Covert agent collusion, long-term memory poisoning, and supply-chain compromise represent emerging high-risk vectors.
  • Existing defenses focus on low-layer, fast attacks (prompt injection, jailbreaking) leaving systemic gaps.
  • Agentic security requires distributed systems thinking, not stateless LLM security models.

Frequently asked

  • Agentic systems maintain persistent memory, invoke external tools, coordinate with other agents, and operate over extended time horizons. These capabilities introduce new threat vectors—memory poisoning, supply-chain compromise, and covert collusion—that stateless LLM security models do not address. Traditional defenses like prompt injection filters are insufficient.

Related