arXiv Preprint · 2025

AARM: Autonomous Action Runtime Management — A Category Specification for Securing AI Agent Actions at Runtime

Herman Erlich  ·  arXiv:2602.09433

Introduces AARM as a security category for AI agent runtime governance. Defines the action-intercept lifecycle, Core conformance model, and trust-tier framework. The canonical reference for all downstream specification work and conformance evaluation.

arxiv.org/abs/2602.09433
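
The abstract names an action-intercept lifecycle and a trust-tier framework; the sketch below shows one way a runtime might wire them together. Every name and shape here (ActionRequest, TrustTier, Decision, interceptAction) is an illustrative assumption, not the specification's actual interface.

```ts
// Illustrative only: these types are assumptions, not AARM's interfaces.

// An agent's proposed action, captured before it reaches a tool or external API.
interface ActionRequest {
  agentId: string;
  tool: string;                    // e.g. "fs.write", "http.post"
  args: Record<string, unknown>;
}

// Hypothetical trust tiers, ordered from least to most trusted.
type TrustTier = "untrusted" | "supervised" | "trusted";

// A policy decision made at intercept time.
type Decision =
  | { kind: "allow" }
  | { kind: "deny"; reason: string }
  | { kind: "escalate"; approver: string }; // defer to a human or higher-tier agent

type Policy = (tier: TrustTier, action: ActionRequest) => Decision;

// The intercept lifecycle in miniature: capture -> evaluate -> audit -> enforce.
async function interceptAction(
  tier: TrustTier,
  action: ActionRequest,
  policy: Policy,
  execute: (a: ActionRequest) => Promise<unknown>,
): Promise<unknown> {
  const decision = policy(tier, action);   // evaluate against the agent's tier
  audit(action, decision);                 // record the decision regardless of outcome
  if (decision.kind === "deny") {
    throw new Error(`Action denied: ${decision.reason}`);
  }
  if (decision.kind === "escalate") {
    throw new Error(`Action requires approval from ${decision.approver}`);
  }
  return execute(action);                  // enforce: only allowed actions run
}

function audit(action: ActionRequest, decision: Decision): void {
  console.log(JSON.stringify({ at: new Date().toISOString(), action, decision }));
}
```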

Open research problems

01 · Cross-agent trust propagation

How should AARM trust tiers propagate across multi-agent pipelines where agents spawn sub-agents dynamically?
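
To make the question concrete, one naive rule, shown only as an assumption rather than an AARM answer, is to cap a spawned sub-agent at its parent's tier so that trust never increases down the pipeline. The tier names and the min-rule below are invented for illustration.

```ts
// Illustrative assumption: tiers as ordered numbers, and a "no escalation" rule where
// a sub-agent's effective tier is the lower of its parent's tier and its assigned tier.
const TIER_ORDER = { untrusted: 0, supervised: 1, trusted: 2 } as const;
type TrustTier = keyof typeof TIER_ORDER;

function propagateTier(parent: TrustTier, assigned: TrustTier): TrustTier {
  return TIER_ORDER[assigned] < TIER_ORDER[parent] ? assigned : parent;
}

// A trusted agent spawning a trusted sub-agent through an untrusted intermediary
// ends up with an untrusted leaf, which may be too conservative: hence the open question.
const leaf = propagateTier(propagateTier("trusted", "untrusted"), "trusted");
console.log(leaf); // "untrusted"
```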

02 · Policy drift under fine-tuning

Runtime policies may become inconsistent with model behavior after fine-tuning. Defining invariant policy surfaces is an open problem.
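
One coarse way to make drift observable, sketched here with assumed names (the recorded-action format and the "declared policy surface" are not from the spec): replay action proposals recorded from the fine-tuned model against the unchanged policy's declared surface and report what now falls outside it.

```ts
// Illustrative sketch: detect policy drift by replaying recorded post-fine-tune
// action proposals against the tool surface the policy was written to cover.
interface RecordedAction {
  tool: string;
  args: Record<string, unknown>;
}

// The policy's declared surface: the only tools it anticipates.
const POLICY_SURFACE = new Set(["fs.read", "fs.write", "http.get"]);

function driftReport(recorded: RecordedAction[]): { covered: number; uncovered: string[] } {
  const uncovered = recorded.filter((a) => !POLICY_SURFACE.has(a.tool)).map((a) => a.tool);
  return { covered: recorded.length - uncovered.length, uncovered };
}

// After fine-tuning, the model starts proposing a tool the policy never anticipated.
const report = driftReport([
  { tool: "fs.read", args: { path: "/tmp/a" } },
  { tool: "shell.exec", args: { cmd: "make deploy" } }, // outside the declared surface
]);
console.log(report); // { covered: 1, uncovered: ["shell.exec"] }
```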

03 · Latency-preserving intercept

Conformant action interception adds latency to every agent action. Research into zero-latency or speculative intercept architectures is needed.
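
One possible shape for a speculative intercept, sketched under heavy assumptions (a sandbox that can buffer and roll back side effects is hypothetical): evaluate the policy concurrently with sandboxed execution and commit effects only on an allow decision.

```ts
// Illustrative sketch of speculative intercept: policy evaluation and sandboxed
// execution overlap, and side effects are committed only if the decision is allow.
type Verdict = "allow" | "deny";

interface Speculative<T> {
  result: T;
  commit: () => Promise<void>;    // apply the buffered side effects
  rollback: () => Promise<void>;  // discard them
}

async function speculativeIntercept<T>(
  evaluatePolicy: () => Promise<Verdict>,
  runInSandbox: () => Promise<Speculative<T>>,
): Promise<T> {
  // Start both at once so policy latency overlaps with execution latency.
  const [verdict, speculative] = await Promise.all([evaluatePolicy(), runInSandbox()]);
  if (verdict === "allow") {
    await speculative.commit();
    return speculative.result;
  }
  await speculative.rollback();
  throw new Error("Action denied; speculative effects rolled back");
}
```

The catch the item points at remains: this only helps for effects the sandbox can actually buffer and reverse.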

04 · Formal conformance verification

Current conformance is evaluator-driven. Automated, reproducible verification methods for AARM Core conformance remain unsolved.
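
As a rough illustration of what automated, reproducible verification could mean at its simplest (the trace format and expected decisions below are assumptions, not part of AARM Core): replay a versioned trace of action requests through a candidate runtime and diff its decisions against an expected record.

```ts
// Illustrative sketch: replay-based conformance check against a recorded trace.
interface TraceEntry {
  action: { tool: string; args: Record<string, unknown> };
  expected: "allow" | "deny";
}

// A candidate runtime reduced to its decision function for the purpose of the check.
type Runtime = (action: TraceEntry["action"]) => Promise<"allow" | "deny">;

async function checkConformance(runtime: Runtime, trace: TraceEntry[]): Promise<string[]> {
  const failures: string[] = [];
  for (const entry of trace) {
    const got = await runtime(entry.action);
    if (got !== entry.expected) {
      failures.push(`${entry.action.tool}: expected ${entry.expected}, got ${got}`);
    }
  }
  return failures; // an empty array means the trace passed
}
```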

05 · Regulatory alignment

Mapping AARM controls to emerging AI regulations (EU AI Act, NIST AI RMF, ISO 42001) requires ongoing harmonization work.

Publish your research here

Research aligned with the AARM specification is welcome. Fork the repo, add your paper using the template below, and open a pull request. Published papers live at:

/research/aligned/your-paper-title
  • Fork aarm-dev/aarm.dev on GitHub
  • Copy the template from /research/aligned/_template.mdx
  • Add your file to /research/aligned/your-paper-title.mdx
  • Open a pull request for review by the TWG