Current version: v0.1 — Initial public release of the AARM specification. The specification is open and will evolve as the agent ecosystem matures. Contribute on GitHub.

What is AARM?

Autonomous Action Runtime Management (AARM) is an open system specification for securing AI-driven actions at runtime. It defines what a runtime security system must do, not how to build it. An AARM system intercepts AI-driven actions before execution, accumulates session context (including prior actions and data accessed), evaluates actions against organizational policy and contextual intent alignment, enforces authorization decisions (allow, deny, modify, defer, or require approval), and records tamper-evident receipts binding action, context, decision, and outcome for forensic reconstruction.

AARM is not a product, library, or service you install. It is a specification that describes the components, behaviors, and conformance requirements for systems that secure AI agents. You use AARM to design and build your own runtime security system, or to evaluate whether existing solutions meet the specification.
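The vocabulary above can be sketched as a small set of types. This is a minimal illustration in Python; the names (`Decision`, `Action`, `Receipt`) are assumptions for this sketch, not identifiers mandated by the specification:

```python
# Illustrative types for the AARM vocabulary: the five authorization
# decisions and a receipt binding action, context, decision, and outcome.
# All names here are assumptions, not spec-mandated identifiers.
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    """The five authorization decisions an AARM system can enforce."""
    ALLOW = "allow"
    DENY = "deny"
    MODIFY = "modify"
    DEFER = "defer"
    STEP_UP = "step_up"  # require human approval


@dataclass
class Action:
    """An AI-driven action intercepted before it reaches a target system."""
    tool: str        # e.g. "send_email"
    arguments: dict  # tool parameters as proposed by the agent


@dataclass
class Receipt:
    """Tamper-evident record binding action, context, decision, outcome."""
    action: Action
    context_digest: str  # digest of the accumulated session context
    decision: Decision
    outcome: str
```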

The Runtime Security Gap

The security posture of AI systems is increasingly determined not by what models say but by what they do. Traditional security paradigms fail to address five characteristics of AI-driven actions:
Characteristic | Why It Matters
Irreversibility | Tool executions produce permanent effects. Once a database is dropped or data exfiltrated, the damage is done.
Speed | Agents execute hundreds of actions per minute, far beyond human review capacity.
Compositional risk | Individual actions may satisfy policy while their composition constitutes a breach.
Untrusted orchestration | Prompt injection and indirect attacks mean the AI layer cannot be trusted as a security boundary.
Privilege amplification | Agents operate under static, high-privilege identities misaligned with least privilege; small reasoning failures produce large-scale impact.
Existing tools don’t solve this:
  • SIEM observes events after execution — too late to prevent harm
  • API gateways verify who is calling, not what the action means
  • Firewalls protect perimeters — but agents operate inside with legitimate credentials
  • Prompt guardrails filter text, not actions — and are easily bypassed
  • Human-in-the-loop doesn’t scale, and can itself be exploited
  • IAM / RBAC evaluates permissions in isolation — cannot detect compositional threats
These systems primarily support automated response: deterministic enforcement of pre-specified rules. They do not support autonomous response, in which a system reasons over accumulated context to decide whether an action should be permitted, modified, delayed, or blocked. The gap lies at the intersection of prevention and context-awareness. AARM fills this gap.

Action Classification

AARM recognizes that security decisions aren’t binary. Actions fall into four categories:

Forbidden

Always blocked regardless of context. Hard policy limits defined by the organization.

Context-Dependent Deny

Allowed by policy, but blocked when context reveals inconsistency with the user’s stated intent.

Context-Dependent Allow

Denied by default, but permitted when context confirms alignment with legitimate intent.

Context-Dependent Defer

Temporarily suspended when available context is insufficient, ambiguous, or conflicting for a confident decision.
Category | Example | Evaluation
Forbidden | DROP DATABASE production; send to known malicious domains | Static policy → DENY
Context-Dependent Deny | Agent can send emails, but has just read sensitive data and the recipient is external | Policy ALLOW + context mismatch → DENY
Context-Dependent Allow | Agent wants to delete records; context shows the user explicitly requested cleanup of their own test data | Policy DENY + context match → STEP_UP or ALLOW
Context-Dependent Defer | Agent initiates credential rotation outside the maintenance window; context is ambiguous | Policy indeterminate → DEFER until resolved
This is why AARM requires both static policy evaluation and context accumulation. An action that looks fine in isolation might be a breach in context. An action that looks dangerous might be exactly what the user asked for. And some actions simply cannot be resolved into a binary allow/deny without additional assurance.

Full Action Classification

Detailed classification framework with examples and evaluation logic

What an AARM System Does

A system conforming to AARM:
1. Intercepts: captures AI-driven actions before they reach target systems.
2. Accumulates context: tracks session state (the user's original request, prior actions, data accessed, and tool outputs) in a tamper-evident, append-only log.
3. Evaluates: assesses the action against static policy and contextual alignment with stated intent.
4. Enforces: implements one of five authorization decisions: allow, deny, modify, defer, or require human approval.
5. Records: generates tamper-evident receipts capturing action, context, decision, and outcome.
┌─────────────────┐         ┌─────────────────────────────────┐         ┌─────────────────┐
│                 │         │          AARM SYSTEM            │         │                 │
│  Agent / LLM    │ ──────► │  ┌─────────────────────────┐   │ ──────► │  Tools / APIs   │
│                 │  action │  │    Context Accumulator  │   │  allow  │                 │
│                 │         │  └────────────┬────────────┘   │   or    │                 │
│                 │ ◄────── │               ▼                │ ◄────── │                 │
│                 │  result │  ┌─────────────────────────┐   │  result │                 │
└─────────────────┘         │  │     Policy Engine +     │   │         └─────────────────┘
                            │  │   Intent Evaluation     │   │
                            │  └────────────┬────────────┘   │
                            │               ▼                │
                            │  ┌─────────────────────────┐   │
                            │  │   Receipts (+ context)  │   │
                            │  └─────────────────────────┘   │
                            └─────────────────────────────────┘
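The flow in the diagram can be sketched as a minimal mediation loop, where evaluation happens strictly before execution and every action leaves a receipt. The `Mediator` interface is an assumption for illustration, not a spec-defined API:

```python
# Minimal mediation loop: intercept the action, evaluate it against
# accumulated session context, enforce the decision, record a receipt.
# Receipts here are plain dicts; a real system would sign them.
from typing import Any, Callable


class Mediator:
    def __init__(self, evaluate: Callable[[dict, list], str]):
        self.evaluate = evaluate        # policy engine + intent evaluation
        self.context: list[dict] = []   # append-only session context
        self.receipts: list[dict] = []  # tamper-evident in a real system

    def execute(self, action: dict, tool: Callable[..., Any]) -> Any:
        decision = self.evaluate(action, self.context)  # before execution
        result = tool(**action["arguments"]) if decision == "allow" else None
        self.context.append({"action": action, "result": result})
        self.receipts.append({
            "action": action,
            "decision": decision,
            "outcome": "executed" if decision == "allow" else "blocked",
        })
        return result
```

A denied action never reaches the tool: the callable is simply not invoked, and the denial itself still enters the context log, so later evaluations can reason over it.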

Threat Model

AARM addresses eleven attack vectors specific to AI-driven actions:

Threat Model Overview

Full threat summary table, attack lifecycle, and trust assumptions

System Components

An AARM-compliant system implements these components:

Components Overview

Full component architecture with data flow diagrams

Implementation Architectures

AARM defines four implementation architectures, each with distinct trust properties:
Architecture | You Control | Bypass Resistance | Context Richness | Defer Support | AARM-Conformant Alone
Protocol Gateway | Network | High | Limited | Partial | Yes
SDK / Instrumentation | Code | Medium | Full | Full | Yes
Kernel / eBPF | Host | Very High | None | Limited | No
Vendor Integration | Policy only | Vendor-dependent | Vendor-dependent | Limited–Moderate | If hooks sufficient
Kernel-level (eBPF/LSM) implementations alone cannot satisfy AARM conformance for context-dependent classifications. eBPF must be deployed as a defense-in-depth backstop alongside a semantic-aware architecture.
Vendor-side governance hooks must execute synchronously and prior to any side-effectful tool execution. Asynchronous or best-effort hooks do not satisfy AARM requirements.
For defense-in-depth, organizations should deploy multiple architectures in layers.
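The synchronous-hook requirement for vendor integrations can be expressed as a blocking pre-execution callback: the hook must return a decision before any side-effectful call proceeds. The names below (`Hook`, `guarded_call`) are illustrative assumptions:

```python
# Sketch of a synchronous pre-execution governance hook. The hook blocks
# until the AARM system decides; a fire-and-forget (async, best-effort)
# hook cannot satisfy this contract. Names are illustrative assumptions.
from typing import Any, Callable

Hook = Callable[[dict], str]  # action -> "allow" / "deny" / ...


def guarded_call(hook: Hook, action: dict, tool: Callable[..., Any]) -> Any:
    decision = hook(action)  # blocks until a decision is returned
    if decision != "allow":
        raise PermissionError(f"action blocked: {decision}")
    return tool(**action["arguments"])  # side effects only after approval
```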

Conformance Requirements

To claim AARM compliance, a system must satisfy these requirements:
ID | Level | Requirement
R1 | MUST | Pre-execution interception: block or defer actions before execution
R2 | MUST | Context accumulation: track prior actions, data classifications, original request
R3 | MUST | Policy evaluation with intent alignment: forbidden, context-dependent deny/allow/defer
R4 | MUST | Five authorization decisions: ALLOW, DENY, MODIFY, STEP_UP, DEFER
R5 | MUST | Tamper-evident receipts: cryptographically signed with full context
R6 | MUST | Identity binding: human, service, agent, session, and role/privilege scope
R7 | SHOULD | Semantic distance tracking: detect intent drift via embedding similarity
R8 | SHOULD | Telemetry export: structured events to SIEM/SOAR platforms
R9 | SHOULD | Least privilege enforcement: scoped, just-in-time credentials
AARM Core (R1–R6): Baseline runtime security guarantees. AARM Extended (R1–R9): Comprehensive runtime security with operational maturity features.
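One way to satisfy the tamper-evidence property in R5 is a signed hash chain, where each receipt commits to its predecessor and any edit breaks verification. The sketch below is an assumption-laden illustration (a real system would use managed keys and a canonical serialization, both out of scope here):

```python
# Sketch of R5: tamper-evident receipts as an HMAC-signed hash chain.
# Each receipt records the previous receipt's hash, so modifying or
# reordering any receipt invalidates the chain.
import hashlib
import hmac
import json

KEY = b"demo-signing-key"  # assumption: real systems use managed keys


def make_receipt(prev_hash: str, action: dict, decision: str, outcome: str) -> dict:
    body = {"prev": prev_hash, "action": action,
            "decision": decision, "outcome": outcome}
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    body["hash"] = hashlib.sha256(payload).hexdigest()  # links next receipt
    return body


def verify_chain(receipts: list[dict]) -> bool:
    prev = "genesis"
    for r in receipts:
        body = {k: r[k] for k in ("prev", "action", "decision", "outcome")}
        payload = json.dumps(body, sort_keys=True).encode()
        sig = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
        if (r["prev"] != prev
                or not hmac.compare_digest(r["sig"], sig)
                or r["hash"] != hashlib.sha256(payload).hexdigest()):
            return False
        prev = r["hash"]
    return True
```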

Research Directions

AARM addresses runtime action security, but several challenges remain open:

Intent Inference

Can we build reliable intent classifiers for agent actions?

Data Flow Tracking

How to track data lineage through non-deterministic LLM transformations?

Multi-Agent Coordination

Coherent authorization across distributed agent boundaries

Approval & Deferral Fatigue

Balancing safety against operational usability

Vendor Standardization

Industry-wide governance hook standards for SaaS agents

AARM System Security

Protecting the security system itself as a high-value target

Open Challenges

Full research directions with open questions

Building an AARM-Compliant System

1. Understand the threats: study the threat model to understand what attacks your system must defend against, across all eleven threat categories.
2. Implement the components: build the core system components: action mediation, context accumulator, policy engine with intent evaluation, approval service, deferral service, receipt generator, and telemetry exporter.
3. Choose an architecture: select an implementation architecture based on your control level: protocol gateway, SDK instrumentation, kernel eBPF (as a backstop), or vendor integration.
4. Verify conformance: test your implementation against the conformance requirements (R1–R9) using the testing protocol.
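A conformance check for R1, for example, can be as simple as asserting that a forbidden action never reaches the target system. The mediator interface below is a hypothetical stand-in for whatever your implementation exposes:

```python
# Sketch of a conformance check for R1 (pre-execution interception):
# a forbidden action must be blocked before the target system is touched.
# The mediate() function is an illustrative stand-in for your system.
calls: list[str] = []


def dangerous_tool() -> None:
    calls.append("executed")  # side effect we must never observe


def mediate(action: str, tool) -> str:
    if action == "DROP DATABASE production":  # forbidden by static policy
        return "deny"
    tool()
    return "allow"
```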

Guides and Patterns

Once you understand the specification, the Guides tab provides practical implementation help.

Why an Open Specification?

The market for AI agent security is emerging rapidly, with multiple vendors building proprietary solutions. AARM aims to:

Establish Baseline

Define requirements before fragmentation forecloses interoperability

Enable Evaluation

Let buyers objectively assess vendor claims against defined criteria

Preserve Choice

Specify what systems must do, not how they must be built

Accelerate Adoption

Provide implementation guidance, not just principles
The goal is not to build AARM, but to define what an AARM-conformant system must do — enabling the market to compete on implementation quality rather than category definition.

Contribute

AARM is an open specification. We welcome contributions from security researchers, agent framework developers, and enterprise practitioners.

GitHub Repository

Specification source, issues, and discussions

Citation

If you reference AARM in academic or professional work:
Errico, H. "Autonomous Action Runtime Management (AARM): 
A System Specification for Securing AI-Driven Actions at Runtime." 
AARM Specification v0.1, 2025. https://aarm.dev