About the author

Hi — I’m Herman Errico. I work at the intersection of security engineering, compliance frameworks, and AI governance, with a focus on turning abstract requirements into runtime controls that teams can actually implement and operate. Over the last few years, I’ve spent a lot of time on:
  • Security control design (policy, identity, enforcement, auditability)
  • Mapping practical controls to ISO 27001/27002, SOC 2, NIST, and emerging AI governance expectations
  • Building programs and tooling that make security measurable, automatable, and repeatable
  • Thinking about how modern systems fail when autonomy increases (humans → software → AI-driven systems)

Why I created AARM

As AI systems shift from “assistant” to “actor,” the main risk stops being what the model says and becomes what the system does: tool calls, API actions, state changes, and data movement.
AARM (Autonomous Action Runtime Management) is my attempt to name and define that missing layer: a runtime control plane for authorizing, constraining, and auditing autonomous actions.
This is not a product pitch — it’s a blueprint meant to be useful whether you’re building with MCP, function calling, plugins, internal tool servers, or whatever comes next.
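One way to picture that runtime layer is a gate that sits between an agent and its tools: every action request is authorized against policy, logged for audit, and only then executed. The sketch below is purely illustrative, not an AARM specification; names like `ActionRequest` and `Gateway` are my own hypothetical placeholders for the concepts above.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# Hypothetical sketch of a runtime control plane for tool calls.
# None of these names come from an AARM spec; they only illustrate
# the authorize → audit → execute flow described in the text.

@dataclass
class ActionRequest:
    agent_id: str          # which agent is acting
    tool: str              # which capability it wants to invoke
    args: dict[str, Any]   # the proposed arguments

@dataclass
class Gateway:
    """Authorize, constrain, and audit autonomous actions at runtime."""
    allowed_tools: set[str]
    audit_log: list[dict[str, Any]] = field(default_factory=list)

    def execute(self, req: ActionRequest,
                tools: dict[str, Callable[..., Any]]) -> Any:
        decision = "allow" if req.tool in self.allowed_tools else "deny"
        # Record every decision before acting, so denied attempts
        # are auditable too.
        self.audit_log.append(
            {"agent": req.agent_id, "tool": req.tool, "decision": decision}
        )
        if decision == "deny":
            raise PermissionError(
                f"tool '{req.tool}' not authorized for {req.agent_id}"
            )
        return tools[req.tool](**req.args)

# Example: one permitted tool, one denied attempt.
tools = {"search": lambda query: f"results for {query}"}
gw = Gateway(allowed_tools={"search"})
print(gw.execute(ActionRequest("agent-1", "search", {"query": "aarm"}), tools))
# → results for aarm
```

In a real system the allow/deny check would be a policy engine evaluating identity, arguments, and context rather than a static set, but the shape is the same: the gate, not the model, is where authorization and audit live.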

Open by default

This work is published openly on purpose. AARM is here for:
  • builders shipping agentic workflows in production
  • security teams trying to govern tool-enabled AI
  • researchers and standards folks who want a clean vocabulary
  • anyone who needs a practical way to reason about runtime AI actions
You can:
  • reuse the terminology and definitions
  • implement the architecture patterns
  • copy/paste the checklists into your own policies
  • propose edits, extensions, and improvements
If you build on it, attribution is appreciated — but the priority is adoption and shared progress.

Contribute to AARM

Propose edits, add patterns, and share real-world implementation notes.