Definition

Autonomous Action Runtime Management (AARM) is a runtime security system that:

1. Intercepts: Captures AI-driven actions before they reach target systems
2. Evaluates: Assesses actions against organizational policy using identity, parameters, and context
3. Enforces: Implements authorization decisions (allow, deny, modify, or require human approval)
4. Records: Generates tamper-evident receipts binding action, decision, identity, and outcome
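
The four stages compose into a single enforcement loop around every tool call. The sketch below is a minimal illustration in Python, not a reference implementation; every name in it (Decision, evaluate, execute_with_aarm, the sample business-hours rule) is invented for this example.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional
import hashlib
import json

@dataclass
class Decision:
    verdict: str                          # "allow" | "deny" | "modify" | "require_approval"
    reason: str
    modified_params: Optional[dict] = None

def evaluate(action: str, params: dict, identity: dict, context: dict) -> Decision:
    # Placeholder policy: deny destructive actions outside business hours.
    if action == "delete_records" and not context.get("business_hours", False):
        return Decision("deny", "destructive action outside business hours")
    return Decision("allow", "no rule matched; default shown for illustration only")

def execute_with_aarm(action, params, identity, context, tool_fn):
    # 1. Intercept: the wrapper sees the call before any target system does.
    decision = evaluate(action, params, identity, context)      # 2. Evaluate
    outcome = None
    if decision.verdict == "allow":                              # 3. Enforce
        outcome = tool_fn(**params)
    elif decision.verdict == "modify":
        outcome = tool_fn(**decision.modified_params)
    # "deny" and "require_approval" never reach the tool.

    receipt = {                                                  # 4. Record
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "params": params,
        "identity": identity,
        "decision": decision.verdict,
        "reason": decision.reason,
        "outcome": repr(outcome),
    }
    receipt["digest"] = hashlib.sha256(
        json.dumps(receipt, sort_keys=True).encode()).hexdigest()
    return decision, receipt

In this arrangement the agent never calls tool_fn directly; the wrapper is the only path to the target system, which is what makes the action boundary enforceable.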

Core Principle

The action boundary is the security boundary. Not the model. Not the prompt. Not the orchestration layer. Security must be enforced at the moment an AI system attempts to execute a tool.

What AARM Is

Property                  Description
Inline enforcement        Decisions made and enforced before execution, not after
Semantic evaluation       Policies express meaning (what the action does), not just syntax
Compositional awareness   Evaluates action sequences, not just individual calls
Forensic completeness     Every action produces a signed, verifiable receipt
Agent agnostic            Works with any agent framework, model, or orchestration layer
Fail-secure               Denies actions when policy cannot be evaluated
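
The fail-secure property deserves particular care: if the policy engine is unreachable, times out, or returns something unexpected, the safe default is to deny rather than fall through to execution. A minimal sketch, assuming a hypothetical policy_engine interface:

# Hypothetical fail-secure wrapper: any failure during policy evaluation is
# treated as a denial, so the system never "fails open" into execution.
def authorize(action: str, params: dict, context: dict, policy_engine) -> str:
    try:
        verdict = policy_engine.evaluate(action, params, context)
    except Exception:                 # engine unreachable, malformed policy, timeout, ...
        return "deny"
    # Unknown or malformed verdicts are also treated as denials.
    return verdict if verdict in {"allow", "deny", "modify", "require_approval"} else "deny"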

What AARM Is Not

Not a prompt filter

AARM operates on actions, not text. Prompt guardrails are complementary but insufficient.

Not an alignment technique

AARM does not attempt to make models safer. It constrains what they can do, regardless of intent.

Not an observability tool

AARM enforces policy rather than merely monitoring it. Logging without blocking is insufficient for irreversible actions.

Not an IAM replacement

AARM complements identity systems. It answers “should this action execute?” not “who is this?”

Trust Model

AARM’s security model treats different components with different trust levels:
Component           Trust Level         Rationale
AARM system         Trusted             Must be trusted to enforce policy correctly
Policy              Trusted             Defines organizational security requirements
Agent / Model       Untrusted           May be compromised via prompt injection or manipulation
User inputs         Untrusted           Primary vector for injection attacks
Retrieved content   Untrusted           Documents, emails, web pages may contain malicious instructions
Tool outputs        Untrusted           Responses may attempt to influence subsequent actions
Tools / APIs        Partially trusted   Assumed to execute as documented, but effects must be verified

The critical insight: the AI orchestration layer cannot be trusted as a security boundary. Prompt injection is a fundamental property of current LLM architectures, not a bug to be fixed. Security must be enforced at a layer the model cannot influence.
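
A practical consequence of this trust model is that everything the agent supplies, including tool parameters, is validated against trusted policy rather than taken at face value. The sketch below shows one way such a parameter check could look; the payments tool, limits, and allowlist are invented for illustration.

# Hypothetical parameter constraints for a payments tool. The agent's arguments
# are untrusted, so they are checked against limits defined in trusted policy.
APPROVED_RECIPIENTS = {"acct-payroll", "acct-vendor-001"}    # from policy, not from the agent
MAX_TRANSFER_USD = 10_000                                    # from policy, not from the agent

def check_transfer(params: dict) -> str:
    amount = params.get("amount_usd")
    recipient = params.get("recipient")
    if not isinstance(amount, (int, float)) or amount <= 0:
        return "deny"                     # malformed or missing amount
    if recipient not in APPROVED_RECIPIENTS:
        return "deny"                     # recipient is not on the trusted allowlist
    if amount > MAX_TRANSFER_USD:
        return "require_approval"         # high-value transfer escalates to a human
    return "allow"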

Scope

In Scope

AARM addresses runtime authorization and audit for AI-driven actions:
  • Runtime action authorization (allow/deny/modify/step-up)
  • Parameter validation and constraint enforcement
  • Human approval workflows for high-risk actions
  • Cryptographically signed action receipts
  • Identity binding (human → service → agent → action)
  • Telemetry export for SIEM/SOAR integration
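
To make the receipt and identity-binding items above concrete, the sketch below signs a receipt over the identity chain, action, decision, and outcome. It uses a symmetric HMAC key only for brevity; a production deployment would more likely use asymmetric signatures and managed keys, and all field names here are illustrative.

import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"replace-with-a-managed-secret"    # illustration only; never hard-code keys

def issue_receipt(identity_chain, action, params, decision, outcome):
    body = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "identity": identity_chain,   # e.g. ["alice@example.com", "billing-service", "agent-7"]
        "action": action,
        "params": params,
        "decision": decision,
        "outcome": outcome,
    }
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    body["sig"] = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return body

def verify_receipt(receipt: dict) -> bool:
    body = {k: v for k, v in receipt.items() if k != "sig"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    expected = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt.get("sig", ""))

Because the signature covers the canonical JSON body, any later modification of the receipt fails verification, which is what makes it tamper-evident and safe to export to downstream SIEM/SOAR tooling.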

Out of Scope

AARM does not address (but may complement):
Area                      Why Out of Scope                              Complementary Control
Model training            AARM operates at runtime, not training time   RLHF, constitutional AI
Prompt engineering        AARM secures actions, not text generation     System prompts, guardrails
Agent internals           AARM treats agents as black boxes             Agent-specific safety measures
Tool implementation       AARM mediates access, doesn't secure tools    Tool-level security controls
Infrastructure security   AARM assumes secure deployment                Network security, container hardening

Relationship to Existing Security

AARM fills a gap in the security stack—it does not replace existing controls:
┌─────────────────────────────────────────────────────────────────┐
│                    Existing Security Stack                       │
├─────────────────────────────────────────────────────────────────┤
│  Identity (IAM)        → Who is making the request?             │
│  Network (Firewall)    → Can they reach this endpoint?          │
│  Application (WAF)     → Is the request well-formed?            │
│  Data (DLP)            → Is sensitive data leaving?             │
│  Monitoring (SIEM)     → What happened? (after the fact)        │
├─────────────────────────────────────────────────────────────────┤
│                         ⚠️  GAP  ⚠️                              │
│     Should THIS action, with THESE parameters, by THIS agent,   │
│     in THIS context, be allowed to execute RIGHT NOW?           │
├─────────────────────────────────────────────────────────────────┤
│  AARM                  → Inline action authorization + audit    │
└─────────────────────────────────────────────────────────────────┘

Next Steps