
Glossary of Agentic AI
Intelligence-Led Enterprise & Orchestrated Multi-Agentic AI
Intelligence-Led Enterprise
An organisation where intelligence, not process or static systems, is the primary organising principle. It reflects a shift from systems of record to systems of knowledge, enabling decisions to be supported in real time.
Systems of Record
Traditional enterprise systems that capture transactions, store data, and provide audit and compliance. They explain what happened, not what it means.
Systems of Knowledge
Capabilities that interpret data in context, maintain shared understanding, learn from patterns, and support decisions as they are being made.
Orchestrated Intelligence
The coordinated operation of multiple AI agents and human judgement across the enterprise, acting as a single, coherent intelligence layer.
Multi-Agentic AI
An approach where multiple specialised AI agents perform defined roles, collaborate with each other, and interact with humans within clear boundaries.
AI Agent
A software entity designed to observe signals, reason within scope, take or recommend actions, and learn over time.
Enterprise Intelligence Layer
A conceptual capability layer that observes work as it happens, interprets signals across domains, and coordinates decisions dynamically.
Process
A structured sequence of steps created to compensate for missing intelligence, fragmented information, or coordination challenges.
Process Dissolution
The managed removal of process as intelligence improves visibility, reduces risk, and simplifies coordination.
Decision Domain
A group of related decisions that materially affect outcomes such as risk, performance, or capital.
Decision State
How a decision is made at a point in time, ranging from fully human to agent-led with human oversight.
Human–Agentic Balance
The intentional distribution of responsibility between humans and AI agents, evolving as trust and capability grow.
Human-in-the-Loop
A design choice where humans remain directly involved in decisions while intelligence supports or recommends.
Autonomy
The degree to which an AI agent can act without human approval, earned progressively as confidence grows.
Guardrails
Explicit boundaries defining what intelligence may do, what requires human judgement, and what must never be automated.
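A minimal sketch of how such boundaries might be expressed in code. The action names, categories, and enforcement function are illustrative assumptions, not part of this glossary:

```python
# Illustrative guardrail definition: three explicit categories of action.
# Action names and the check function are hypothetical examples.

GUARDRAILS = {
    "allowed": {"summarise_report", "draft_reply"},           # agent may act
    "human_required": {"approve_refund", "change_schedule"},  # needs human judgement
    "never_automated": {"terminate_contract"},                # must never be automated
}

def check_guardrail(action: str) -> str:
    """Return how an action must be handled under the guardrails."""
    if action in GUARDRAILS["never_automated"]:
        return "blocked"
    if action in GUARDRAILS["human_required"]:
        return "needs_human_approval"
    if action in GUARDRAILS["allowed"]:
        return "permitted"
    return "needs_human_approval"  # default to caution for unknown actions

print(check_guardrail("draft_reply"))         # permitted
print(check_guardrail("terminate_contract"))  # blocked
```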
Ambient Intelligence
Intelligence that operates continuously in the background, supporting decisions in real time.
Right-to-Left Design
A method that starts with the desired future state and works backwards to identify what must be true.
Interim State
A temporary operating state designed to build confidence, prove capability, and maintain control.
Non-Negotiable Outcomes
Outcomes that leadership agrees should never come as a surprise; they anchor the design of intelligence.
Intelligence Maturity
The organisation’s ability to trust intelligence, act on insight, and govern autonomy responsibly.
LLM (Large Language Model)
A general-purpose language model trained on broad data. Powerful, but can be harder to govern for safety, cost, and predictability in regulated operations.
SLM (Small Language Model)
A smaller, more controllable model designed for specific tasks. Often preferred for deterministic behaviour, cost control, and data boundary requirements.
Data Sovereignty
Keeping data and processing within defined legal, geographic, and organisational boundaries, with clear control over who can access it and where it can go.
Security Boundary
The enforced perimeter within which data, models, and services operate (for example inside a tenant, VPC, or controlled environment).
Explainability
The ability to justify why an output or recommendation was produced, in a way that humans, auditors, and regulators can understand.
Auditability
The ability to trace inputs, actions, decisions, and outputs end-to-end, including who/what acted, when, and based on which evidence.
Model Governance
Policies, controls, and oversight for how models are selected, tested, deployed, monitored, and changed, including approval and rollback.
Deterministic Behaviour
Consistent outputs given the same inputs and context. Often required for high-control operational decisions.
Hallucination
When a model produces plausible but incorrect content. Mitigated through grounding, guardrails, and constrained outputs.
Grounding
Constraining model outputs to approved sources or facts (for example via retrieval) to reduce hallucination and improve trust.
RAG (Retrieval-Augmented Generation)
A pattern where the model retrieves relevant approved content at runtime and uses it to produce grounded answers.
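A minimal sketch of the retrieve-then-generate pattern. The knowledge base, the deliberately simple keyword-overlap retriever, and the call_model stand-in are illustrative assumptions, not a specific product's API:

```python
# Minimal RAG sketch: retrieve approved content, then build a grounded prompt.

KNOWLEDGE_BASE = [
    "Refunds over 500 GBP require manager approval.",
    "Customer data must stay within the EU tenant.",
]

def retrieve(question: str, top_k: int = 1) -> list[str]:
    """Rank approved passages by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def call_model(prompt: str) -> str:
    """Stand-in for a governed LLM/SLM call; a real system would invoke a model here."""
    return f"[model answer grounded in a prompt of {len(prompt)} characters]"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using only the approved context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_model(prompt)

print(answer("Do refunds need approval?"))
```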
Vector Database
A datastore optimised to search embeddings (semantic representations) used for retrieval in RAG and similarity search.
Embedding
A numeric representation of text (or other data) that captures meaning, enabling semantic search and matching.
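A small sketch of semantic-style matching over embeddings, using a toy bag-of-words vectoriser in place of a trained embedding model and a plain Python list in place of a vector database; both substitutions are assumptions made only for illustration:

```python
import math

# Toy embedding: counts over a tiny fixed vocabulary. Real systems use a trained
# embedding model producing dense vectors, and a vector database for search.
VOCAB = ["refund", "approval", "customer", "data", "eu", "tenant"]

def embed(text: str) -> list[float]:
    words = text.lower().split()
    return [float(words.count(term)) for term in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

documents = [
    "Refund approval policy",
    "Customer data stays in the EU tenant",
]
index = [(doc, embed(doc)) for doc in documents]  # stand-in for a vector database

query_vec = embed("where is customer data stored")
best = max(index, key=lambda item: cosine(query_vec, item[1]))
print(best[0])  # -> "Customer data stays in the EU tenant"
```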
Knowledge Base
A curated set of approved documents, policies, and references used to ground intelligence and ensure consistent answers.
Policy-as-Code
Encoding rules, controls, and decision logic into executable policies so they are consistent, testable, and auditable.
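A minimal sketch of a written control expressed as an executable, testable policy. The threshold, field names, and rule are illustrative assumptions:

```python
# Policy-as-code sketch: a control expressed as a function that can be
# version-controlled, unit-tested, and audited like any other change.

from dataclasses import dataclass

@dataclass
class Decision:
    allowed: bool
    reason: str

def refund_policy(amount_gbp: float, requested_by_agent: bool) -> Decision:
    """Encode a written rule: agent-initiated refunds above 500 GBP need a human."""
    if requested_by_agent and amount_gbp > 500:
        return Decision(False, "Requires human approval above 500 GBP")
    return Decision(True, "Within delegated authority")

# Because the policy is code, it is consistent and testable.
assert refund_policy(200, requested_by_agent=True).allowed
assert not refund_policy(900, requested_by_agent=True).allowed
```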
IAM (Identity and Access Management)
The controls that define who/what can access systems and data, under what conditions, with logging and enforcement.
Encryption
Protecting data so it cannot be read without keys, typically both in transit and at rest.
PII (Personally Identifiable Information)
Information that can identify a person (directly or indirectly). Requires strict handling, minimisation, and access controls.
Data Minimisation
Collecting and using only the data needed for the purpose, reducing risk and improving compliance.
Human Approval Gate
A deliberate control where an agent can recommend or prepare an action, but a human must approve before execution.
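A minimal sketch of this control, assuming an illustrative action object and approver identity; the structure is hypothetical, not a prescribed implementation:

```python
# Human approval gate sketch: the agent prepares an action, but execution is
# deferred until a named human approves it.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    prepared_payload: dict
    approved_by: str | None = None

    def approve(self, approver: str) -> None:
        self.approved_by = approver  # recorded for auditability

    def execute(self) -> str:
        if self.approved_by is None:
            raise PermissionError("Blocked: human approval required before execution")
        return f"Executed '{self.description}' (approved by {self.approved_by})"

action = ProposedAction("Issue refund", {"amount_gbp": 900})
# action.execute()  # would raise PermissionError: no approval yet
action.approve("ops.manager@example.com")
print(action.execute())
```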
Confidence Bands
A way to express where intelligence is trusted (high confidence), bounded (medium), or observational only (low), guiding autonomy progression.
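A small sketch of mapping a confidence score to the autonomy it earns. The thresholds are illustrative, not prescriptive:

```python
# Confidence bands sketch: translate a model or agent confidence score into
# the level of autonomy granted.

def autonomy_for(confidence: float) -> str:
    if confidence >= 0.9:
        return "act"        # high confidence: agent may act within guardrails
    if confidence >= 0.6:
        return "recommend"  # medium confidence: agent recommends, human decides
    return "observe"        # low confidence: observational only, flag for review

for score in (0.95, 0.72, 0.40):
    print(score, "->", autonomy_for(score))
```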
Signals
Operational indicators (events, patterns, thresholds, anomalies) that humans and agents use to inform decisions.
Orchestration
Coordinating multiple agents, tools, and humans so work happens in the right order, with shared context and governed handoffs.
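A minimal sketch of orchestration: specialised agents run in order over shared context, with a governed handoff to a human when required. The agent names, context fields, and handoff rule are illustrative assumptions:

```python
# Orchestration sketch: run agents in sequence, pass shared context between
# them, and hand off to a human when the policy requires it.

from typing import Callable

Context = dict
Agent = Callable[[Context], Context]

def triage_agent(ctx: Context) -> Context:
    ctx["category"] = "refund_request"
    return ctx

def policy_agent(ctx: Context) -> Context:
    ctx["needs_human"] = ctx.get("amount_gbp", 0) > 500
    return ctx

def orchestrate(agents: list[Agent], ctx: Context) -> Context:
    for agent in agents:            # right order, shared context
        ctx = agent(ctx)
    if ctx.get("needs_human"):      # governed handoff to a person
        ctx["status"] = "awaiting_human_approval"
    else:
        ctx["status"] = "auto_completed"
    return ctx

print(orchestrate([triage_agent, policy_agent], {"amount_gbp": 900}))
```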