From Text to Actions
Large language models began as text generators. Security meant filtering harmful outputs—profanity, misinformation, toxic content. If the model said something wrong, you could discard it before displaying. That model is obsolete. Today’s AI systems do things:- Query databases and modify records
- Send emails and Slack messages
- Create, edit, and delete files
- Execute code and shell commands
- Call external APIs and webhooks
- Manage cloud infrastructure
- Process financial transactions
DROP TABLE customers, no output filter helps. The data is gone.
The Runtime Security Gap
AI-driven actions have five characteristics that break traditional security:Irreversibility
Irreversibility
Unlike text generation—which can be filtered before display—tool executions produce immediate, permanent effects. Database mutations, sent emails, financial transfers, credential changes.Once executed, the damage is done.
Speed
Speed
Agents execute hundreds of actions per minute. No human can review them in real time.
Security decisions must be automated and instantaneous.
| Actor | Actions per minute |
|---|---|
| Human operator | 2–5 |
| AI agent | 100–500 |
| Human reviewer capacity | 5–10 |
Compositional Risk
Compositional Risk
Individual actions may each satisfy policy. Their composition violates it.Traditional security evaluates actions in isolation. Context-aware evaluation catches what isolated checks miss.
Untrusted Orchestration
Untrusted Orchestration
Prompt injection, jailbreaks, and indirect attacks mean the model’s “intent” cannot be trusted.The agent might be:
- Following malicious instructions embedded in a document
- Manipulated by a crafted error message
- Deceived about what action it’s actually taking
Privilege Amplification
Privilege Amplification
Autonomous agents routinely operate under static, high-privilege identities misaligned with the principle of least privilege. A calendar integration granted full Google Workspace admin access. A database helper with
DROP permissions.Small reasoning failures produce large-scale impact when the agent’s execution identity is too powerful. A single prompt injection can exploit every permission the agent holds—not just the ones relevant to the current task.Why Existing Security Fails
SIEM
Built for: Event analysis and correlationFailure mode: Observes after execution. By the time SIEM alerts, the database is dropped.SIEM answers “what happened?”—but can’t prevent it.
API Gateways
Built for: Authentication, rate limiting, routingFailure mode: Verifies who is calling, not what the action means. A valid token making a destructive call passes through.Gateway sees credentials, not intent.
Firewalls
Built for: Network perimeter defenseFailure mode: Agents operate inside the perimeter with legitimate credentials. The call comes from an authorized service.The threat is already inside.
AI Guardrails
Built for: Filtering model inputs and outputsFailure mode: Filters text, not actions. Easily bypassed. Can’t evaluate whether
db.execute(query) is safe—only whether the text describing it looks harmful.Guardrails see words, not operations.IAM / RBAC
Built for: Identity and access managementFailure mode: Evaluates permissions in isolation. User can read customers. User can send email. System doesn’t know doing both in sequence is exfiltration.Permissions don’t understand composition.
Human-in-the-Loop
Built for: Manual approval of sensitive actionsFailure mode: Doesn’t scale to agent speed. Leads to rubber-stamping. Can itself be exploited through forged approval dialogs.Humans become the bottleneck—or the vulnerability.
Automated vs. Autonomous Response
These existing systems share a fundamental limitation: they support automated response—deterministic enforcement of pre-specified rules. A firewall blocks a port. An API gateway rejects an expired token. A SIEM triggers an alert when a pattern matches. What they do not support is autonomous response—reasoning over accumulated context to decide whether an action should be permitted, modified, delayed, or blocked. Determining whether “send email to external recipient” is safe requires knowing what data the agent accessed, what the user originally asked for, and whether the action aligns with that intent. No static rule captures this. The gap lies at the intersection of prevention and context-awareness: systems that can block actions before execution based on accumulated session context, not just static policy.The Missing Layer
There’s a gap in the security stack:What’s Needed
A security system that:| Requirement | Why |
|---|---|
| Intercepts before execution | Prevention, not just detection |
| Evaluates action semantics | Understands what is happening, not just who |
| Tracks accumulated context | Detects multi-action violations and intent drift |
| Enforces policy inline | Allow, deny, modify, defer, or escalate in real time |
| Handles uncertainty | Defers actions when context is insufficient for a confident decision |
| Treats agent as untrusted | Assumes orchestration may be compromised |
| Enforces least privilege | Scopes credentials to the specific operation, not standing permissions |
| Creates forensic trail | Tamper-evident record of every decision |
| Scales to agent speed | Automated, millisecond decisions |
Next
What is AARM?
How AARM fills the runtime security gap