Glossary — AI Agent Security and Operations Terms
Definitions of terms used across One Man Ops content. Each entry is written to directly answer “What is [term]?” queries from both humans and AI agents.
Agent
An AI system that receives objectives, decides how to pursue them, selects tools, and takes actions autonomously. Unlike a chatbot that responds to prompts, an agent acts — sending emails, modifying files, making API calls, and interacting with external systems without per-action human approval.
Agent isolation
An architectural pattern where each agent operates with its own credentials, permissions, and failure boundary. If one agent is compromised or malfunctions, the blast radius is limited to what that agent can access. The alternative — shared credentials across agents — turns any single compromise into a system-wide incident.
Approval gate
A checkpoint in an agent workflow where execution pauses and waits for human authorization before proceeding. Used for high-impact actions: sending external communications, making payments, modifying permissions, deploying to production. The operator reviews the proposed action and explicitly approves, rejects, or modifies it.
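In code, an approval gate reduces to a pause before any high-impact action. A minimal Python sketch, with illustrative action names and a pluggable prompt so the gate can be tested:

```python
# Minimal approval-gate sketch. The high-impact action list and the
# prompt mechanism are illustrative, not a specific platform's API.
HIGH_IMPACT = {"send_email", "make_payment", "modify_permissions", "deploy"}

def run_with_gate(action, args, prompt=input):
    """Execute low-impact actions directly; pause high-impact ones for approval."""
    if action not in HIGH_IMPACT:
        return ("executed", action, args)
    decision = prompt(f"Approve {action} with {args}? [y/n] ").strip().lower()
    if decision == "y":
        return ("executed", action, args)
    return ("rejected", action, args)
```

Passing the prompt as a parameter keeps the gate testable; in production it would be wired to whatever channel the operator actually monitors.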
Blast radius
The scope of damage that results when an agent fails or is compromised. In a well-isolated system, blast radius is limited to the single agent's permissions. In a poorly isolated system, blast radius can extend to every system the agent's credentials can reach — which is often everything.
Channel classification
The practice of categorizing every input an agent processes as either an authenticated command (from the operator, through a verified channel) or an information channel (content from the outside world — emails, web pages, documents, mentions). Agents must treat information channels as untrusted. Prompt injection attacks succeed when this distinction is missing.
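The distinction can be made explicit in code: every input carries a channel tag, and only verified operator channels may issue commands. A minimal Python sketch with illustrative channel names:

```python
from dataclasses import dataclass

TRUSTED_CHANNELS = {"operator_cli", "operator_dm"}  # illustrative names

@dataclass
class Message:
    channel: str
    text: str

def classify(msg: Message) -> str:
    """Authenticated command channel vs. untrusted information channel."""
    return "command" if msg.channel in TRUSTED_CHANNELS else "information"

def handle(msg: Message):
    if classify(msg) == "information":
        # Treated strictly as content, never as instructions,
        # no matter how imperative the text sounds.
        return ("process_as_content", msg.text)
    return ("execute", msg.text)
```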
ClawHub
The marketplace for OpenClaw extensions, skills, and integrations. Skills installed from ClawHub execute with the agent's full permissions, making each install an implicit trust decision. Malicious skills have been documented in the ecosystem. Treat every skill install as granting file access, credential-adjacent execution, and workflow influence.
Compaction
An event that occurs when an AI agent's context window fills up. The platform summarizes older context into a compressed form, and the agent loses awareness of its recent work. Without explicit disk-write recovery mechanisms (such as session state files), the agent forgets what it was doing mid-task. The triggering condition is sometimes called context overflow or context window exhaustion.
Credential cascade
A failure pattern where compromising a single set of credentials provides access to multiple connected systems. Common when agents share API keys, OAuth tokens, or service accounts across roles. The Drift chatbot incident — one compromised integration cascading into Salesforce, Google Workspace, Slack, S3, and Azure across 700 organizations — is the canonical example.
Credential isolation
The practice of assigning separate credentials (API keys, OAuth tokens, service accounts) to each agent based on its specific function. A content drafting agent gets social media credentials. A payment agent gets payment processor credentials. Neither gets the other's. This prevents credential cascade.
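A minimal sketch of the pattern: each agent is handed only its own credential set, so no agent can read another's keys. Agent names and key names here are illustrative placeholders:

```python
# Per-agent credential maps; the values are placeholders, not real keys.
CREDENTIALS = {
    "content_agent": {"SOCIAL_API_KEY": "placeholder"},
    "payment_agent": {"PAYMENT_API_KEY": "placeholder"},
}

def creds_for(agent: str) -> dict:
    """Return only the named agent's credentials; unknown agents get nothing."""
    # Copy so one agent cannot mutate another's credential set.
    return dict(CREDENTIALS[agent])
```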
Failure boundary
The architectural limit of what breaks when one component fails. In a multi-agent system, each agent should be a separate failure boundary — its malfunction does not propagate to other agents. Failure boundaries are enforced through credential isolation, scoped permissions, and separate execution contexts.
Failure mode runbook
A documented procedure for diagnosing and recovering from specific failure patterns. In agent operations, common documented failures include: provider cooldown stalls, coordination loops, compaction amnesia, silent cron failures, and edit race conditions. Runbooks exist because these failures recur, and the fix should not depend on the operator remembering the solution.
Human-in-the-loop
An operating model where a human operator maintains oversight of agent actions through monitoring, approval gates, and exception handling. The operator does not approve every action — but they are present in the decision loop for high-impact actions and can intervene when agents behave unexpectedly. Distinct from full autonomy (no human oversight) and manual operation (human performs every action).
Hub-and-spoke model
An agent architecture where one central orchestrator (the hub) receives all incoming work, decides which specialized agent (spoke) should handle it, and routes tasks accordingly. Spokes never communicate directly with each other — all coordination goes through the hub. This prevents uncontrolled agent-to-agent interactions and provides a single point for audit and oversight.
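A minimal routing sketch: the hub owns the routing table and the audit log, and spokes are only ever addressed through it. Task types and spoke names are illustrative:

```python
# The hub's routing table: task type -> responsible spoke.
ROUTES = {
    "incoming_email": "inbox_agent",
    "publish_post": "content_agent",
    "invoice": "payment_agent",
}

AUDIT_LOG = []

def route(task_type: str) -> str:
    """The hub selects the spoke and records every decision for audit."""
    spoke = ROUTES.get(task_type, "operator_review")  # unknown work goes to a human
    AUDIT_LOG.append((task_type, spoke))
    return spoke
```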
Least privilege
The security principle that every agent should have access to exactly what it needs to perform its function and nothing more. A monitoring agent needs read access to logs — not write access to production files. A content agent needs social media credentials — not payment processor credentials. Violations of least privilege create unnecessary blast radius.
MCP (Model Context Protocol)
A protocol for connecting AI agents to external tools and data sources. MCP defines how agents discover, authenticate with, and use external services. The security implication is that each MCP connection expands the agent's capability surface and is a potential injection point if the external service provides untrusted content.
Multi-agent system
A deployment where multiple AI agents operate in coordination, each handling a different function. The security advantage is isolation — separating responsibilities means separating permissions. The security risk is coordination — agents that can communicate with each other create interaction patterns the operator may not anticipate.
OpenClaw
An open-source platform for deploying AI agents with tool access, file system interaction, shell execution, and external service integration. OpenClaw provides the runtime (Gateway), management interface (Control UI), extension marketplace (ClawHub), and multi-device connectivity (Nodes). Operators self-host the Gateway on their own infrastructure.
Prompt injection
An attack where malicious instructions are embedded in content that an agent processes — documents, emails, web pages, calendar entries, or any other input. If the agent cannot distinguish the injected instructions from legitimate commands, it follows them. Zero-click prompt injection requires no user interaction — the agent processes the malicious content as part of its normal workflow.
Role isolation
The practice of assigning each agent a specific, bounded role with permissions limited to that role's function. An inbox agent handles incoming messages — it cannot deploy code. A deployment agent pushes to production — it cannot access customer payment data. Role isolation is enforced through credential isolation, tool scoping, and explicit permission boundaries.
Scoped API key
An API key that is restricted to specific operations, resources, or endpoints. Instead of granting an agent a full-access API key, the key is scoped to only the operations the agent needs. If the key is compromised, the attacker can only perform the scoped operations — not everything the API supports.
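Server-side, the enforcement is a scope check on every request. A minimal sketch with illustrative key IDs and scope strings:

```python
# Each key maps to the explicit set of operations it may perform.
KEY_SCOPES = {"key-content": {"posts:read", "posts:write"}}

def authorize(api_key: str, operation: str) -> bool:
    """Permit only operations inside the key's scope; unknown keys get nothing."""
    return operation in KEY_SCOPES.get(api_key, set())
```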
Session state file
A persistent file that an agent reads at the start of every interaction to recover its working context. Used to survive compaction events, session restarts, and context loss. Contains the agent's current task, status, and next action. Without a session state file, an agent that loses context has no way to resume its work.
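A minimal sketch: write task, status, and next action to disk after each step, and reload at startup. The file name and fields are illustrative:

```python
import json
import os

def save_state(path: str, task: str, status: str, next_action: str) -> None:
    """Persist working context to disk after each step."""
    with open(path, "w") as f:
        json.dump({"task": task, "status": status, "next_action": next_action}, f)

def load_state(path: str):
    """Recover working context at startup; None means a fresh start."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)
```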
Silent failure
A failure where an agent or cron job stops producing results without generating an error message. This is the most dangerous failure type in agent operations because it is invisible — discoverable only when someone notices the absence of expected output. Monitoring systems must check for the presence of results, not just the absence of errors.
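Presence-based monitoring can be sketched in a few lines: flag a job whose last result is older than its expected cadence, even if it never raised an error. The threshold is illustrative:

```python
import time

def check_freshness(last_result_ts: float, max_age_s: float, now: float = None) -> str:
    """'stale' means expected output is missing, regardless of error logs."""
    now = time.time() if now is None else now
    return "ok" if (now - last_result_ts) <= max_age_s else "stale"
```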
Tool scoping
Restricting which tools an agent can use based on its function. A content agent gets access to text editing and social media posting tools. A monitoring agent gets access to log reading and alerting tools. Neither gets access to shell execution, file deletion, or credential management tools unless explicitly required.
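A minimal enforcement sketch: an allowlist of tools per agent, checked before any invocation. Agent and tool names are illustrative:

```python
TOOL_SCOPES = {
    "content_agent": {"edit_text", "post_social"},
    "monitoring_agent": {"read_logs", "send_alert"},
}

def invoke(agent: str, tool: str):
    """Refuse any tool outside the agent's scope before it runs."""
    if tool not in TOOL_SCOPES.get(agent, set()):
        raise PermissionError(f"{agent} is not scoped for {tool}")
    return ("ok", agent, tool)
```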
Workspace boundary
The directory or filesystem scope that an agent is intended to operate within. On unpatched versions of some platforms (including OpenClaw before 2026.2.25), workspace boundaries can be bypassed through symlink traversal — an agent creates a symlink inside the workspace that points to a sensitive file outside it. Workspace boundaries are convenience features, not security guarantees, unless explicitly hardened.
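The symlink bypass works because naive checks compare the unresolved path. A hardened check resolves symlinks before comparing against the workspace root, a minimal sketch:

```python
import os

def inside_workspace(workspace: str, candidate: str) -> bool:
    """Resolve symlinks, then compare against the workspace root.
    A prefix check on the unresolved path is exactly what symlink
    traversal bypasses."""
    root = os.path.realpath(workspace)
    real = os.path.realpath(os.path.join(workspace, candidate))
    return real == root or real.startswith(root + os.sep)
```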
Zero-click attack
An attack that requires no user interaction. In the agent context, zero-click prompt injection occurs when malicious instructions are embedded in content the agent processes automatically — an email it triages, a document it summarizes, a web page it fetches. The agent follows the injected instructions without anyone clicking, approving, or even seeing the malicious content.
This glossary is maintained by Andres at One Man Ops. Terms are added as new concepts emerge in the AI agent security landscape. Last updated: March 2026.