Core Concepts

AEGIS is built on a set of interlocking concepts that together provide deterministic governance over AI system actions. This page introduces each concept and how they relate.

Governance Runtime

The governance runtime is the central enforcement layer in the AEGIS architecture. It receives action proposals from AI systems, evaluates them against registered capabilities and active policies, and returns a deterministic decision.

The runtime is not a model — it does not learn, predict, or use probabilistic reasoning. It executes explicit rules deterministically: the same proposal, capabilities, and policies always produce the same decision.

The runtime can be self-hosted or accessed as a managed service via the AEGIS platform. (The managed platform at aegissystems.live is coming soon and not yet available. The governance runtime exists today as a working Python implementation in the aegis-governance repository.)
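To make the deterministic evaluation concrete, here is a minimal sketch of the runtime's decision loop. The function and data shapes are illustrative assumptions, not the actual aegis-governance API:

```python
# Hypothetical sketch of the runtime's evaluation loop. Names and
# structures are illustrative, not the real aegis-governance API.

def evaluate(proposal: dict, capabilities: set, policies: list) -> str:
    """Return a deterministic decision for an action proposal."""
    # 1. Capability authorization (default-deny).
    if proposal["capability"] not in capabilities:
        return "DENY"
    # 2. Policy evaluation in registered order; first verdict wins.
    for policy in policies:
        decision = policy(proposal)
        if decision is not None:
            return decision
    # 3. No policy matched: fall through to deny.
    return "DENY"
```

Because there is no randomness and no learned component, calling `evaluate` twice with the same inputs always yields the same decision.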

Capabilities

A capability is a named operation that an AI system may be authorized to perform. Capabilities are registered in a structured registry before they can be used.

Examples of capabilities:

Capabilities follow a default-deny model: if a capability is not explicitly registered and granted to an actor, any action referencing it will be denied.
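The default-deny model can be sketched as a small registry. The class, method names, and capability strings below are assumptions for illustration, not the real AEGIS registry:

```python
# Illustrative default-deny capability registry. Class, method, and
# capability names are assumptions, not the actual AEGIS API.

class CapabilityRegistry:
    def __init__(self):
        self._grants = {}  # actor_id -> set of granted capability names

    def register(self, actor_id: str, capability: str) -> None:
        """Grant a named capability to an actor."""
        self._grants.setdefault(actor_id, set()).add(capability)

    def is_authorized(self, actor_id: str, capability: str) -> bool:
        # Default-deny: an unknown actor or unregistered capability
        # is simply not authorized -- there is no implicit grant.
        return capability in self._grants.get(actor_id, set())
```

Note that the check never raises for unknown actors or capabilities; absence of a grant is itself the deny.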

Policies

Policies are the rules that determine when and how capabilities may be exercised. They are written in an unambiguous, deterministic policy language — not natural language, not learned models.

A policy might specify:

Policies are evaluated in a deterministic order with explicit precedence rules. There are no hidden decision paths.
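One way to picture deterministic ordering with explicit precedence is the sketch below: policies carry a precedence number, evaluation proceeds in ascending precedence, and the first matching policy's effect wins. The dict-based policy shape is an assumption, not the AEGIS policy language:

```python
# Sketch of ordered policy evaluation with explicit precedence.
# The policy representation here is an assumption for illustration.

def evaluate_policies(action: dict, policies: list) -> str:
    # Sort by explicit precedence so evaluation order is never ambiguous.
    for policy in sorted(policies, key=lambda p: p["precedence"]):
        if policy["matches"](action):
            return policy["effect"]  # e.g. "ALLOW" or "DENY"
    # No policy matched: default-deny.
    return "DENY"
```

Because precedence is explicit and the fallback is a fixed default-deny, there are no hidden decision paths: every outcome is attributable to one named policy or to the default.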

Risk Scoring

Every action proposal receives a risk score computed from deterministic algorithms. The risk score considers:

Risk thresholds are configured per policy. An action whose score falls within the acceptable range passes the risk check; an action that exceeds the threshold is denied or escalated regardless of what the policies would otherwise allow.

Risk scoring uses deterministic algorithms — no randomness, no learned models. The exact computation path is captured in the audit trail.
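A minimal sketch of what a deterministic score might look like: a weighted sum of explicit factors. The factor names and weights below are assumptions for illustration, not the actual AEGIS scoring model:

```python
# Hypothetical deterministic risk score: a weighted sum of explicit,
# named factors. Factor names and weights are assumptions.

WEIGHTS = {"irreversible": 40, "external_effect": 30, "bulk_operation": 20}

def risk_score(factors: dict) -> int:
    # No randomness, no learned model: the same factors always
    # produce the same score, so it can be replayed from the audit trail.
    return sum(WEIGHTS[name] for name, present in factors.items() if present)

def within_threshold(factors: dict, threshold: int) -> bool:
    """True if the action passes the configured risk check."""
    return risk_score(factors) <= threshold
```

Because every input factor and weight is explicit, the exact computation path can be recorded verbatim in the audit trail and recomputed later.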

Actors

An actor is any entity that proposes actions through AEGIS. Actors are authenticated and typed:

Actor Type    Description
ai-agent      An autonomous AI system
ai-copilot    An AI assistant operating alongside a human
automation    A non-AI automated process
human         A human operator (for governance-wrapped manual actions)

Every request must include an authenticated actor identity. This enables complete attribution — every governance decision can be traced back to the entity that requested it.
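A typed actor identity can be modeled as in the sketch below. The `Actor` class itself is an assumption; only the four actor-type strings come from the table above:

```python
# Illustrative typed actor identity. The Actor/ActorType classes are
# assumptions; the four type strings match the table above.

from dataclasses import dataclass
from enum import Enum

class ActorType(Enum):
    AI_AGENT = "ai-agent"
    AI_COPILOT = "ai-copilot"
    AUTOMATION = "automation"
    HUMAN = "human"

@dataclass(frozen=True)
class Actor:
    actor_id: str          # authenticated identity, required on every request
    actor_type: ActorType  # one of the four registered actor types
```

Making the identity frozen and typed means attribution data cannot be mutated after authentication, which is what makes full traceability of decisions possible.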

Audit Trail

AEGIS maintains an immutable audit log of every governance decision. The audit trail captures:

Audit logs are append-only and hash-chained to prevent tampering. They are designed to satisfy compliance requirements (SOX, HIPAA, and similar frameworks) and support forensic analysis of incidents.
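The append-only, hash-chained structure can be sketched with standard hashing: each entry embeds the hash of the previous entry, so altering any earlier record breaks every later link. Field names are illustrative, not the AEGIS log format:

```python
# Minimal sketch of an append-only, hash-chained audit log. Field
# names are illustrative; only the chaining technique is the point.

import hashlib
import json

class AuditLog:
    def __init__(self):
        self._entries = []
        self._last_hash = "0" * 64  # genesis value before any entries

    def append(self, record: dict) -> None:
        entry = {"prev_hash": self._last_hash, "record": record}
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._entries.append(entry)
        self._last_hash = entry["hash"]

    def verify(self) -> bool:
        """Recompute the chain; any tampering breaks a link."""
        prev = "0" * 64
        for entry in self._entries:
            body = {"prev_hash": entry["prev_hash"], "record": entry["record"]}
            payload = json.dumps(body, sort_keys=True).encode()
            if entry["prev_hash"] != prev:
                return False
            if entry["hash"] != hashlib.sha256(payload).hexdigest():
                return False
            prev = entry["hash"]
        return True
```

Changing any historical record invalidates its own hash and, transitively, every `prev_hash` after it, which is what makes retroactive tampering detectable.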

The Governance Protocol (AGP)

The AEGIS Governance Protocol (AGP-1) is the wire protocol that standardizes communication between AI systems and the governance runtime. It defines:

For the full protocol specification, see the aegis-governance repository.
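As a rough illustration only, an ACTION_PROPOSE message might be serialized as JSON along these lines. Every field name here is an assumption; the aegis-governance repository holds the authoritative AGP-1 specification:

```python
# A plausible shape for an AGP-1 ACTION_PROPOSE message. All field
# names are assumptions for illustration; consult the aegis-governance
# repository for the authoritative specification.

import json

proposal = {
    "type": "ACTION_PROPOSE",
    "actor": {"id": "agent-7", "actor_type": "ai-agent"},
    "capability": "example.capability",   # hypothetical capability name
    "context": {"reason": "illustrative example"},
}

# Messages travel as serialized payloads between the AI system
# and the governance runtime.
wire_message = json.dumps(proposal)
```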

How It All Fits Together

AI Agent
   |
   v
ACTION_PROPOSE (via AGP-1)
   |
   v
AEGIS Governance Runtime
   |-- Capability Authorization: Is this capability registered? Does this actor have it?
   |-- Policy Evaluation: Do active policies allow this action in this context?
   |-- Risk Scoring: Is the computed risk within acceptable thresholds?
   |
   v
DECISION_RESPONSE (ALLOW / DENY / ESCALATE / REQUIRE_CONFIRMATION)
   |
   v
Audit Log (immutable, hash-chained)

Next Steps