From Text to Actions
Large language models began as text generators. Security meant filtering harmful outputs—profanity, misinformation, toxic content. If the model said something wrong, you could discard it before displaying. That model is obsolete. Today’s AI systems do things:- Query databases and modify records
- Send emails and Slack messages
- Create, edit, and delete files
- Execute code and shell commands
- Call external APIs and webhooks
- Manage cloud infrastructure
- Process financial transactions
DROP TABLE customers, no output filter helps. The data is gone.
The Runtime Security Gap
AI-driven actions have four characteristics that break traditional security:Irreversibility
Irreversibility
Unlike text generation—which can be filtered before display—tool executions produce immediate, permanent effects. Database mutations, sent emails, financial transfers, credential changes.Once executed, the damage is done.
Speed
Speed
Agents execute hundreds of actions per minute. No human can review them in real time.
Security decisions must be automated and instantaneous.
| Actor | Actions per minute |
|---|---|
| Human operator | 2-5 |
| AI agent | 100-500 |
| Human reviewer capacity | 5-10 |
Compositional Risk
Compositional Risk
Individual actions may each satisfy policy. Their composition violates it.Traditional security evaluates actions in isolation.
Untrusted Orchestration
Untrusted Orchestration
Prompt injection, jailbreaks, and indirect attacks mean the model’s “intent” cannot be trusted.The agent might be:
- Following malicious instructions embedded in a document
- Manipulated by a crafted error message
- Deceived about what action it’s actually taking
Why Existing Security Fails
SIEM
Built for: Event analysis and correlationFailure mode: Observes after execution. By the time SIEM alerts, the database is dropped.SIEM answers “what happened?” — but can’t prevent it.
API Gateways
Built for: Authentication, rate limiting, routingFailure mode: Verifies who is calling, not what the action means. A valid token making a destructive call passes through.Gateway sees credentials, not intent.
Firewalls
Built for: Network perimeter defenseFailure mode: Agents operate inside the perimeter with legitimate credentials. The call comes from an authorized service.The threat is already inside.
Prompt Guardrails
Built for: Filtering model outputsFailure mode: Filters text, not actions. Easily bypassed. Can’t evaluate whether
db.execute(query) is safe—only whether the text describing it looks harmful.Guardrails see words, not operations.IAM / RBAC
Built for: Identity and access managementFailure mode: Evaluates permissions in isolation. User can read customers. User can send email. System doesn’t know doing both in sequence is exfiltration.Permissions don’t understand composition.
Human-in-the-Loop
Built for: Manual approval of sensitive actionsFailure mode: Doesn’t scale to agent speed. Leads to rubber-stamping. Can itself be exploited through forged approval dialogs.Humans become the bottleneck—or the vulnerability.
The Missing Layer
There’s a gap in the security stack:What’s Needed
A security system that:| Requirement | Why |
|---|---|
| Intercepts before execution | Prevention, not just detection |
| Evaluates action semantics | Understands what is happening, not just who |
| Enforces policy inline | Allow, deny, modify, or escalate in real time |
| Handles composition | Detects multi-action violations |
| Treats agent as untrusted | Assumes orchestration may be compromised |
| Creates forensic trail | Tamper-evident record of every decision |
| Scales to agent speed | Automated, millisecond decisions |
Next
What is AARM?
How AARM fills the runtime security gap