
Five layers of enterprise intelligence (And why the order matters)

Mar 13, 2026

·

10 Mins

AI Transparency

Five terms. Overlapping definitions. Different jobs. One operating system.

Ask five CTOs to define "knowledge graph" and you will get five answers. Ask them how it differs from an ontology and you will get silence. Ask where a semantic layer fits and the conversation shifts to whatever their vendor told them last quarter.

This is not a vocabulary problem. It is an architecture problem.

These five concepts (ontology, knowledge graph, semantic layer, context graph, trust layer) have been well-defined in academic and semantic web literature for decades. The current AI hype cycle has collapsed them into interchangeable marketing language, and the cost of that confusion is real. Enterprises are building knowledge graphs without ontologies, which is the equivalent of populating a database without defining a schema. The data goes in. Nothing coherent comes out. Even with a NoSQL database, there was always application logic that supplied semantic understanding to the raw data.

On the vocabulary question, consider your laptop. It has a kernel, a filesystem, a user interface, and a permissions model. Nobody confuses these components because the industry learned long ago that each layer serves a distinct purpose and depends on the one beneath it. Enterprise AI has an equivalent stack. Confusing its layers is just as architecturally dangerous, and considerably more expensive.

This article disambiguates the five layers of that stack: what each one does, how it relates to the others, and why the sequence matters.

These layers are not interchangeable. They are sequential. Each one answers a question that the previous layer cannot, and each one depends on the answers already provided beneath it. Skip a layer, and the architecture does not degrade gracefully. It fails structurally.

1. The ontology: What the world looks like

The first layer answers the most fundamental question: what exists?

The term originates in philosophy, where ontology is the study of being, of what things are and how they relate. In computer science, Tom Gruber formalised it in 1993 as "a specification of a conceptualisation." In enterprise terms, it is simpler than either of those definitions suggests. An ontology defines the nouns and verbs of your business. The nouns are the entity types: Customer, Account, Product, Contract, Obligation. The verbs are the relationships between them: a Customer holds an Account, an Account contains Positions, Positions reference Instruments. The ontology also defines the constraints: a retail Customer cannot hold an institutional Product; a Contract must reference at least one counterparty.
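To make this concrete, here is a minimal sketch of what such an ontology might look like as a data structure. Everything here is illustrative (class names, relation names, the shape of the dictionary); real ontologies are typically expressed in OWL or RDF Schema, not Python, but the idea of a closed set of typed relationships survives the translation.

```python
# Hypothetical, minimal ontology: entity types and typed relationships.
# Note that no instance data lives here -- only structure.
ONTOLOGY = {
    "classes": {"Customer", "Account", "Product", "Contract",
                "Counterparty", "Position", "Instrument"},
    "relations": {
        # (subject class, relation name, object class)
        ("Customer", "holds", "Account"),
        ("Account", "contains", "Position"),
        ("Position", "references", "Instrument"),
        ("Contract", "references", "Counterparty"),
    },
}

def relation_allowed(subject_cls: str, relation: str, object_cls: str) -> bool:
    """Check whether a typed relationship is permitted by the ontology."""
    return (subject_cls, relation, object_cls) in ONTOLOGY["relations"]
```

The useful property is the closed world of relations: anything not explicitly permitted is rejected, which is exactly the discipline a schema imposes on a database.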

This is not a taxonomy. A taxonomy classifies things into hierarchies—Animal, Mammal, Dog. An ontology goes further. It defines what those things are, what they can do, what they relate to, and what rules govern their behaviour.

Critically, the ontology contains no data. It is the empty blueprint. No customer names, no account numbers, no transaction records. It defines what can exist in the world, not what does exist. Think of it as the schema of the world model.

Yann LeCun’s new venture AMI Labs is pursuing this concept at planetary scale: world models as the basis for intelligence, rather than statistical language patterns. Their thesis is that AI needs structured models of how reality works, not just distributions over tokens. AMI raised over a billion dollars on the conviction that understanding the structure of the world is a prerequisite to reasoning about it. The enterprise ontology is the same principle applied to a narrower domain. It is the world model of your organisation: a formal representation of what your business is, what it does, and how its parts relate.

Why does this layer have to come first? Because without it, every downstream system invents its own definitions. "Customer" in the CRM means anyone with a record. "Customer" in finance means anyone who has paid an invoice. "Customer" in compliance means anyone subject to KYC obligations. These are three different concepts sharing one word, and until the ontology forces a canonical definition, every system that touches customer data is silently disagreeing with every other system. This is a governance act disguised as a technical one.

It is also the layer most enterprises skip. Ontology design is slow. It requires cross-functional agreement on definitions that people have been quietly disagreeing about for years. It produces no visible output. You cannot demo an ontology to a board.

That said, "ontology first" is not the only viable sequence. A pragmatic alternative is to start with the data that already exists, craft the ontology over it as an interpretive layer, and let the knowledge graph emerge from the combination. This is less architecturally pure but often faster to value, because it works with the enterprise as it is rather than as it should be. The ontology still gets built. It simply gets built inductively, from observed data patterns, rather than deductively from first principles. The important thing is that it gets built at all. Without it, everything downstream is guesswork.

2. The knowledge graph: Populating the world with facts

If the ontology answers "what exists?" then the knowledge graph answers the next question: what is true?

A common misconception is that the knowledge graph populates the ontology. It does not. It populates itself, using the ontology as its schema. The ontology is the empty structure. The knowledge graph is that structure filled with real entities, real relationships, real data. Where the ontology says "a Customer holds an Account," the knowledge graph says "Acme Corp holds Account 4471-B, opened 14 March 2019, managed by Sarah McAccountmanager." This is where raw data becomes organisational truth. The knowledge graph is the set of facts (and sometimes axioms) from which the rest of the system reasons.
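The relationship between the two layers can be sketched in a few lines: instance data typed against a schema, with validation rejecting any fact whose shape the ontology does not permit. Entity identifiers and field names below are illustrative.

```python
# A subset of an ontology: the only typed relationship we allow here.
ALLOWED = {("Customer", "holds", "Account")}

# The knowledge graph: typed entities plus typed relationships (facts).
entities = {
    "acme": {"type": "Customer", "name": "Acme Corp"},
    "acct_4471b": {"type": "Account", "opened": "2019-03-14"},
}
facts = [("acme", "holds", "acct_4471b")]

def invalid_facts(facts, entities, allowed):
    """Return every fact whose typed shape the ontology does not allow."""
    bad = []
    for subj, rel, obj in facts:
        shape = (entities[subj]["type"], rel, entities[obj]["type"])
        if shape not in allowed:
            bad.append((subj, rel, obj))
    return bad
```

A graph that passes this kind of validation is conformant data; a graph without the schema is just edges.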

It is worth distinguishing this from the technology that often gets conflated with it. A graph database (Neo4j, Amazon Neptune) is a storage engine. It stores nodes and edges efficiently. A knowledge graph is a semantic construct: data that conforms to an ontology, with typed entities and typed relationships that carry meaning. You can build a knowledge graph on top of a relational database. The graph database is an implementation choice, not a defining characteristic. That said, you cannot have the knowledge without some storage underneath it.

The construction process is where the real work happens. Consider a financial services firm where the CRM stores "John Smith," the ERP stores "J. Smith," and the HRIS stores "John D. Smith." These are three records in three systems referring to one person. The knowledge graph construction process resolves them into a single canonical entity, using the ontology's class definitions as the basis for what constitutes a match. This is entity resolution, and it is the first genuinely difficult problem in the stack. Without canonical class definitions from the ontology, there is no principled basis for deciding whether two records refer to the same thing.
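A toy version of the John Smith example shows the shape of the problem. This sketch normalises names and clusters records that match afterwards; real entity resolution uses many more signals (addresses, identifiers, fuzzy scores, the ontology's class constraints), so treat the matching rule here as a deliberately crude illustration.

```python
import re

# Three records, three systems, one person (the example from the text).
records = [
    {"source": "CRM",  "name": "John Smith"},
    {"source": "ERP",  "name": "J. Smith"},
    {"source": "HRIS", "name": "John D. Smith"},
]

def match_key(name: str) -> str:
    """Crude canonical key: first initial plus surname, lowercased."""
    parts = re.sub(r"[.\s]+", " ", name).strip().split(" ")
    return (parts[0][0] + "|" + parts[-1]).lower()

# Cluster records sharing a key into candidate canonical entities.
clusters: dict[str, list] = {}
for rec in records:
    clusters.setdefault(match_key(rec["name"]), []).append(rec)
```

All three records collapse into a single cluster, which is the outcome the construction process is after; the hard part in practice is deciding the matching rule, and that decision leans on the ontology's definitions.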

Entity resolution is only half the challenge. Source systems store relationships implicitly, as foreign keys or join tables, or not at all. The relationship between a support ticket and the contract it relates to may exist only in someone's head. The knowledge graph makes these relationships explicit and typed. A foreign key linking a customer ID to an orders table becomes a formally defined "Customer places Order" relationship that can be traversed, queried, and reasoned about. The tacit and the explicit alike become first-class constructs in the knowledge graph, backed by the ontology.
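The foreign-key-to-typed-edge step is mechanical once the relationship has been named. A sketch, with illustrative table and column names:

```python
# An orders table as a source system might expose it: the relationship
# to the customer exists only as a foreign key column.
orders_table = [
    {"order_id": 1, "customer_id": "acme"},
    {"order_id": 2, "customer_id": "acme"},
]

def lift_to_edges(rows):
    """Turn implicit customer_id foreign keys into explicit, typed
    'places' relationships ready for the knowledge graph."""
    return [(row["customer_id"], "places", f"order_{row['order_id']}")
            for row in rows]
```

The code is trivial; the non-trivial act was deciding that this column means "Customer places Order" rather than, say, "Customer is billed for Order."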

The deeper difficulty is semantic reconciliation. Source systems do not merely store data in different formats. They encode different assumptions about the domain.

Take "Revenue." In sales, revenue includes pipeline. In finance, revenue means recognised revenue. In operations, revenue means billable hours delivered. Same word. Three different numbers. Three different truths.

Mapping all three into the knowledge graph is not a formatting exercise. It is a reconciliation of competing worldviews. Someone has to decide which definition is authoritative, whether the others should coexist as distinct concepts, and how conflicts resolve when the numbers disagree. Every one of these mapping decisions is a business decision, not a technical one.
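One defensible pattern is to let the three numbers coexist as distinct, explicitly named concepts and designate one as authoritative. The metric names and figures below are invented for illustration; the point is that the designation is encoded once, not re-decided by every consumer.

```python
# Each source's "revenue" mapped to a distinct, explicitly named concept.
source_figures = {
    "sales":      {"metric": "pipeline_revenue",   "value": 5_000_000},
    "finance":    {"metric": "recognised_revenue", "value": 3_200_000},
    "operations": {"metric": "delivered_revenue",  "value": 2_900_000},
}

# Which concept is authoritative is a business decision, recorded once.
AUTHORITATIVE = "recognised_revenue"

def canonical_revenue(figures):
    """Return the single number the organisation reports as 'revenue'."""
    for fig in figures.values():
        if fig["metric"] == AUTHORITATIVE:
            return fig["value"]
    raise KeyError("no authoritative revenue figure present")
```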

The plumbing of moving data between systems is a solved problem. Agreeing on what the data means is not.

This is why knowledge graph construction is iterative. You define a minimal viable ontology, populate the graph, discover that your class hierarchy does not accommodate a real-world pattern you had not anticipated, revise the ontology, re-map. The graph is the primary mechanism by which ontology gaps surface. Google's Knowledge Graph, which powers search enrichment and entity disambiguation across billions of queries, went through years of this iterative refinement. LinkedIn's Economic Graph, which models the professional world as entities and relationships between people, companies, skills, and jobs, is another example of a knowledge graph that evolved its ontology continuously as new patterns emerged from the data. Enterprise knowledge graphs follow the same pattern at smaller scale. The added complexity is that the data is messier, the politics are louder, and the stakes of getting a definition wrong are commercial rather than informational.

3. The semantic layer: Making facts speak business

The knowledge graph now contains organisational truth: real entities, real relationships, resolved and reconciled. But truth expressed in ontological terms is not accessible to the people and systems that need to act on it. The semantic layer answers the next question: what does that mean to us?

The semantic layer is a translation interface. It maps the formal structures of the knowledge graph into business language: metrics, definitions, and governed calculations that the organisation agrees on. Where the knowledge graph knows that Acme Corp has three accounts with contract values of £1.2m, £1.8m, and £1.2m, the semantic layer knows that this makes Acme Corp an "Enterprise" customer, because Enterprise is defined as any customer with total contract value exceeding £3m. The graph stores facts. The semantic layer applies business logic to those facts and gives the results names that people recognise.
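The Acme example can be written down as a governed metric: one sanctioned calculation behind the "Enterprise" label, defined once and queried by everyone. The threshold and account values come from the example above; the function names are illustrative.

```python
# The one place the organisation defines "Enterprise": total contract
# value above three million pounds (in pence-free integer pounds here).
ENTERPRISE_TCV_THRESHOLD = 3_000_000

# Facts from the knowledge graph: Acme's three account contract values.
accounts = {"acme": [1_200_000, 1_800_000, 1_200_000]}

def total_contract_value(customer: str) -> int:
    return sum(accounts[customer])

def segment(customer: str) -> str:
    """The sanctioned calculation behind the 'segment' label."""
    if total_contract_value(customer) > ENTERPRISE_TCV_THRESHOLD:
        return "Enterprise"
    return "Mid-market"
```

Every dashboard, report, and agent calls `segment`; nobody reimplements the threshold.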

This is not a dashboard. It is not a reporting tool. It is the abstraction layer that dashboards and reporting tools query against. The dashboard is the presentation. The semantic layer is the meaning. Tools like dbt's metrics layer and Looker's LookML popularised this concept in the analytics world by giving organisations a single place to define what "revenue" or "active customer" or "churn rate" actually means. One definition, one calculation, one truth. Everyone queries against it. Nobody reinvents it.

The strategic importance of this layer has shifted dramatically in the context of agentic AI. Without a semantic layer, every AI agent that queries organisational data has to interpret the knowledge graph on its own terms. It has to decide what "revenue" means, how to aggregate it, which accounts to include. The result is divergence: different agents producing different answers to the same question, with no governing authority over which is correct. The semantic layer eliminates this by providing a single, governed interface between any consumer of organisational intelligence, human or machine, and the facts in the knowledge graph.

This is where LLMs become relevant. When an AI agent needs to answer a business question, the semantic layer is where it should go for definitions rather than generating its own. A well-constructed semantic layer gives an LLM the canonical meaning of every business term, the sanctioned calculation behind every metric, and the constraints on how data should be aggregated. Without it, the agent hallucinates definitions. With it, the agent reasons from governed truth. The semantic layer is no longer a BI convenience. In an agentic enterprise, it is the guardrail that prevents autonomous systems from confidently producing different answers to the same question.

4. The context graph: How decisions get made

The first three layers establish what exists, what is true, and what it means. The context graph answers the next question: what did we do about it?

This is the newest of the five concepts and the least settled in definition. Two framings are circulating in the market.

The first, popularised by Foundation Capital, treats the context graph as a decision-trace layer: a living record of how decisions were made against organisational facts, including what data was consulted, what rules applied, what precedents existed, who approved, and what the outcome was.

The second, more common in technical literature, treats it as an AI-optimised subgraph: a contextual window into the knowledge graph, scoped and shaped for a specific task or query.

Both are valid descriptions of real architectural patterns. For the purposes of this stack, the decision trace framing is the more useful one, because it addresses a problem the first three layers cannot: institutional memory.

Most organisations have no structured record of how decisions were made. The reasoning lives in email threads, Slack messages, meeting notes, and people's heads. When those people leave, the reasoning leaves with them. The context graph captures it as a traversable, queryable structure. "The last four times this exception type was requested with these conditions, it was approved by the risk committee with these caveats." That is not an audit trail. An audit trail records that a decision was made. The context graph records why.
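The exception-approval example suggests what a queryable decision trace might look like. This is a sketch under assumed field names, not a reference schema; the essential distinction from an audit trail is that inputs, rules, approver, and caveats are captured alongside the outcome.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionTrace:
    """One decision, with the context it was made in -- not merely
    the fact that it was made."""
    exception_type: str
    conditions: frozenset
    approver: str
    outcome: str
    caveats: tuple = ()

traces = [
    DecisionTrace("limit_breach", frozenset({"collateral_posted"}),
                  "risk_committee", "approved", ("review_in_30_days",)),
    DecisionTrace("limit_breach", frozenset({"collateral_posted"}),
                  "risk_committee", "approved"),
]

def precedents(exception_type: str, conditions: frozenset):
    """Traverse the context graph for prior matching decisions."""
    return [t for t in traces
            if t.exception_type == exception_type
            and t.conditions == conditions]
```

A query like "how were limit breaches with posted collateral handled before?" becomes a function call rather than an archaeology project.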

This is what gives AI agents memory. Without a context graph, an agent making a decision today has no knowledge of how similar decisions were made last week. It operates statelessly, reasoning from first principles every time. With a context graph, the agent can consult organisational precedent. It can recognise patterns across decision traces and apply them. This is where the context graph becomes genuinely powerful: it turns a reactive agent into one that learns from the accumulated judgement of the organisation.

It is also where it becomes genuinely dangerous. Automating decisions based on historical precedent can encode bias. If past decisions were systematically unfair, the context graph will faithfully represent that unfairness as precedent and serve it up as guidance. This is where evaluation frameworks become essential. If the context graph shows that 90% of past approvals went to one category of applicant, is that because the policy intended it, or because the decision-makers carried an unconscious bias that is now baked into the precedent? Continuous evaluation of context graph outputs is essential: testing whether decision patterns reflect policy intent or merely inherited habit. Any system that uses historical decisions to inform future ones needs this discipline built in, not bolted on.
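A first, deliberately simple check for the 90% scenario described above might flag any applicant category that dominates approvals beyond a threshold. Real evaluation needs proper statistical tests and a comparison against documented policy intent; this sketch (with invented category labels) only shows where such a check would sit.

```python
from collections import Counter

def approval_skew(decisions, threshold=0.9):
    """Return the dominant category among approvals if its share meets
    the threshold, else None. A flag, not a verdict: skew may reflect
    policy intent rather than bias, which is for humans to determine."""
    approved = [d["category"] for d in decisions
                if d["outcome"] == "approved"]
    if not approved:
        return None
    category, count = Counter(approved).most_common(1)[0]
    return category if count / len(approved) >= threshold else None

# Illustrative precedent base: 9 of 10 approvals go to category "A".
decisions = ([{"category": "A", "outcome": "approved"}] * 9
             + [{"category": "B", "outcome": "approved"}])
```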

It is worth addressing the relationship to Retrieval-Augmented Generation. RAG is, in a simplified sense, a primitive context graph. It retrieves relevant context to inform a generation task. But the quality of that retrieval depends entirely on how the context was encoded.

Standard RAG pipelines work by converting documents into embeddings: numerical representations that capture the meaning of text as the model understands it. When an agent searches for relevant context, it compares the meaning of its query against the meaning encoded in those embeddings and retrieves the closest matches. The problem is that standard embeddings reflect what words mean to a foundation model trained on the open internet. "Revenue" is encoded as a generic financial concept. "Customer" is encoded as a general commercial term. The model has no knowledge of what these words mean to your organisation specifically.

A context graph built on top of the semantic layer inverts this. Its retrieval is grounded in organisational semantics. When it retrieves context about "revenue" or "risk" or "customer," those terms carry the canonical definitions established in the semantic layer, not the model's generic interpretation. This is a fundamentally different quality of retrieval, and it is the difference between an agent that sounds informed and one that actually is.
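The grounding idea can be caricatured without any embeddings at all: expand the query with the organisation's canonical definitions before matching, so "revenue" retrieves documents about recognised revenue rather than any generic mention. Everything here is illustrative, and a production system would ground embeddings themselves through the semantic layer rather than use word overlap.

```python
# Canonical expansions drawn from a (hypothetical) semantic layer.
SEMANTIC_LAYER = {"revenue": {"recognised", "invoiced"}}

# Two candidate documents, represented as bags of words.
docs = {
    "d1": {"pipeline", "forecast", "revenue"},
    "d2": {"recognised", "revenue", "quarterly"},
}

def retrieve(query_terms: set) -> str:
    """Expand the query with organisational definitions, then return
    the document with the largest overlap with the grounded query."""
    expanded = set(query_terms)
    for term in query_terms:
        expanded |= SEMANTIC_LAYER.get(term, set())
    return max(docs, key=lambda d: len(docs[d] & expanded))
```

Ungrounded, "revenue" matches both documents equally; grounded, the recognised-revenue document wins, which is the retrieval the organisation actually wants.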

5. The trust layer: Governing it all

The final question in the stack is the one that applies retroactively to every layer beneath it: should we have?

The trust layer is not a product. It is not a single technical component that sits on top of the stack. It is an architectural discipline: the set of policies, constraints, and enforcement mechanisms that govern how every other layer operates. Gartner's AI TRiSM framework, AWS Bedrock's guardrails, and the compliance tooling emerging around the EU AI Act all treat trust as a cross-cutting concern rather than a standalone category. This is the correct framing.

Consider what requires governance at each layer. The ontology needs access control: who can modify the canonical definitions that everything else depends on? The knowledge graph needs provenance: which source system is authoritative when two sources disagree? The semantic layer needs auditability: how was this metric derived, and can we explain it to a regulator? The context graph needs policy enforcement: should this historical precedent actually be followed, or has the policy changed since the decision was recorded? The trust layer wraps around all of these. It does not generate intelligence. It governs how intelligence flows, who can access it, and what can be done with it.
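As a cross-cutting concern, the trust layer reduces architecturally to a single enforcement point that every layer's operations pass through. A deny-by-default sketch, with role and layer names invented for illustration:

```python
# Policy table: which roles may perform which action on which layer.
# Anything not listed is denied -- deny by default, allow by exception.
POLICIES = {
    ("ontology", "modify"):       {"ontology_steward"},
    ("knowledge_graph", "write"): {"data_engineer", "ontology_steward"},
    ("semantic_layer", "modify"): {"metrics_owner"},
    ("context_graph", "read"):    {"agent", "analyst"},
}

def allowed(role: str, layer: str, action: str) -> bool:
    """Central enforcement: the one gate every operation goes through."""
    return role in POLICIES.get((layer, action), set())
```

The table is small; the discipline is that no layer accepts an operation that has not passed through `allowed`, whether the caller is a person or an agent.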

Return to the laptop analogy from the opening. The trust layer is the permissions model of the enterprise AI operating system. Every operating system has one. Without it, any process can read any file, modify any setting, and execute any command. The system is powerful and ungoverned. This is precisely the state of most enterprise AI deployments today.

The regulatory pressure is converging on this gap. The EU AI Act, emerging US executive orders on AI, and sector-specific regulations in financial services and healthcare are all arriving at the same requirement: AI systems must be explainable, auditable, and governable. The trust layer is where enterprises meet that requirement architecturally, through embedded constraints and enforcement, rather than through retroactive compliance exercises bolted on after the system is already in production.

In an agentic context, the stakes sharpen further. An AI agent that has access to the knowledge graph, understands the semantic layer, and has learned from the context graph is powerful. It can traverse organisational knowledge, interpret it in business terms, consult precedent, and act. Without a trust layer, there is nothing constraining what it does with that power. The trust layer is what ensures the agent operates within the boundaries the organisation has set, not just the boundaries the technology permits.

There is one further dimension that most trust layer thinking overlooks. The trust layer needs to govern not just what the system does, but how it changes. When the ontology is modified, that change cascades through every downstream layer. When the semantic layer is updated, every agent consulting it now reasons from different definitions. When the context graph accumulates new decision traces, the precedent base shifts. Each of these changes is individually reasonable and collectively unpredictable. The trust layer's role expands from governance of AI outputs to governance of system evolution itself.

The Stack, not the glossary

Five layers. Five questions. Each one dependent on the answers provided by the one beneath it.

What exists? What is true? What does that mean to us? What did we do about it? Should we have?

Remove the ontology, and the knowledge graph has no schema. It is a pile of data with no shared definitions. Remove the knowledge graph, and the semantic layer has nothing to translate. It is business language with no underlying facts. Remove the semantic layer, and every agent interprets the graph on its own terms. Ten agents, ten answers, no authority. Remove the context graph, and the organisation has no memory. Every decision is made from scratch, with no awareness of precedent. Remove the trust layer, and the system is powerful and ungoverned.

This is not a glossary. It is an architecture. The five layers are load-bearing components of a single system, and confusing them is not an academic mistake. It is a structural one. The enterprise that gets this right builds intelligence that compounds. The enterprise that conflates these layers builds complexity that compounds instead.

The stack also forms a cycle, not just a ladder. Consider a credit approval decision. An agent consults the knowledge graph for the applicant's data, references the semantic layer for the definition of "creditworthy," checks the context graph for how similar applications were handled, and operates within the trust layer's policy constraints. It approves the application. That approval is now a new fact in the knowledge graph. It creates a new decision trace in the context graph. It may shift the semantic layer's metrics: the organisation's average approval rate has changed, its exposure has increased. The next agent that handles a similar application is now operating against a slightly different version of the truth. The five layers are not a one-time construction project. They are an operating loop.
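The credit example above can be sketched as code to make the feedback explicit: a decision reads from the layers, then writes a new fact and a new trace back into them, so the next decision sees a changed world. All names, figures, and the one-rule definition of "creditworthy" are illustrative.

```python
# The layers, radically simplified.
knowledge_graph = {"applicant_42": {"income": 85_000, "debt": 10_000}}
context_graph = []                      # decision traces accumulate here
CREDITWORTHY_MIN_INCOME = 50_000        # semantic-layer style definition

def decide(applicant_id: str) -> str:
    """One turn of the operating loop: consult, decide, write back."""
    facts = knowledge_graph[applicant_id]
    outcome = ("approved" if facts["income"] >= CREDITWORTHY_MIN_INCOME
               else "declined")
    # Feedback: the decision becomes a new fact and a new trace.
    knowledge_graph[applicant_id]["last_decision"] = outcome
    context_graph.append({"applicant": applicant_id, "outcome": outcome})
    return outcome
```

After one call, both the graph and the precedent base have changed, which is precisely why the stack is a loop rather than a ladder.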

What comes next

This article has described the five layers of a knowledge architecture: the knowledge aspects of the enterprise operating system. But a knowledge architecture on its own leaves the enterprise OS incomplete. It describes how intelligence is structured, populated, translated, contextualised, and governed. It says nothing about where the raw material comes from, what happens with the intelligence once it exists, or how the system improves over time.

Three concerns remain. The data substrate: the operational databases, data warehouses, streaming platforms, and SaaS APIs that hydrate the knowledge graph. The action layer: the workflow engines, agent orchestration platforms, and human interfaces where intelligence becomes execution. And the learning layer: the mechanisms by which the system evolves, from ontology refinement to context graph pattern recognition to trust layer anomaly detection.

Add these three, and the five-layer knowledge architecture becomes something else entirely. But that is a different article.


AI Transparency

Five terms. Overlapping definitions. Different jobs. One operating system.

Ask five CTOs to define "knowledge graph" and you will get five answers. Ask them how it differs from an ontology and you will get silence. Ask where a semantic layer fits and the conversation shifts to whatever their vendor told them last quarter.

This is not a vocabulary problem. It is an architecture problem.

These five concepts (ontology, knowledge graph, semantic layer, context graph, trust layer) have been well-defined in academic and semantic web literature for decades. The current AI hype cycle has collapsed them into interchangeable marketing language, and the cost of that confusion is causing trouble. Enterprises are building knowledge graphs without ontologies, which is the equivalent of populating a database without defining a schema. The data goes in. Nothing coherent comes out. Even with a NoSQL database, there was always an application with logic to provide semantic understanding to the raw data.

On the vocabulary question, consider your laptop. It has a kernel, a filesystem, a user interface, and a permissions model. Nobody confuses these components because the industry learned long ago that each layer serves a distinct purpose and depends on the one beneath it. Enterprise AI has an equivalent stack. Confusing its layers is just as architecturally dangerous, and considerably more expensive.

This article disambiguates the five layers of that stack: what each one does, how it relates to the others, and why the sequence matters.

These layers are not interchangeable. They are sequential. Each one answers a question that the previous layer cannot, and each one depends on the answers already provided beneath it. Skip a layer, and the architecture does not degrade gracefully. It fails structurally.

1. The ontology: What the world looks like

The first layer answers the most fundamental question: what exists?

The term originates in philosophy, where ontology is the study of being, of what things are and how they relate. In computer science, Tom Gruber formalised it in 1993 as "a specification of a conceptualisation." In enterprise terms, it is simpler than either of those definitions suggest. An ontology defines the nouns and verbs of your business. The nouns are the entity types: Customer, Account, Product, Contract, Obligation. The verbs are the relationships between them: a Customer holds an Account, an Account contains Positions, Positions reference Instruments. The ontology also defines the constraints: a retail Customer cannot hold an institutional Product; a Contract must reference at least one counterparty.

This is not a taxonomy. A taxonomy classifies things into hierarchies—Animal, Mammal, Dog. An ontology goes further. It defines what those things are, what they can do, what they relate to, and what rules govern their behaviour.

Critically, the ontology contains no data. It is the empty blueprint. No customer names, no account numbers, no transaction records. It defines what can exist in the world, not what does exist. Think of it as the schema of the world model.

Yann LeCun’s new venture AMI Labs is pursuing this concept at planetary scale: world models as the basis for intelligence, rather than statistical language patterns. Their thesis is that AI needs structured models of how reality works, not just distributions over tokens. AMI raised over a billion dollars on the conviction that understanding the structure of the world is a prerequisite to reasoning about it. The enterprise ontology is the same principle applied to a narrower domain. It is the world model of your organisation: a formal representation of what your business is, what it does, and how its parts relate.

Why does this layer have to come first? Because without it, every downstream system invents its own definitions. "Customer" in the CRM means anyone with a record. "Customer" in finance means anyone who has paid an invoice. "Customer" in compliance means anyone subject to KYC obligations. These are three different concepts sharing one word, and until the ontology forces a canonical definition, every system that touches customer data is silently disagreeing with every other system. This is a governance act disguised as a technical one.

It is also the layer most enterprises skip. Ontology design is slow. It requires cross-functional agreement on definitions that people have been quietly disagreeing about for years. It produces no visible output. You cannot demo an ontology to a board.

That said, "ontology first" is not the only viable sequence. A pragmatic alternative is to start with the data that already exists, craft the ontology over it as an interpretive layer, and let the knowledge graph emerge from the combination. This is less architecturally pure but often faster to value, because it works with the enterprise as it is rather than as it should be. The ontology still gets built. It simply gets built inductively, from observed data patterns, rather than deductively from first principles. The important thing is that it gets built at all. Without it, everything downstream is guesswork.

2. The knowledge graph: Populating the world with facts

If the ontology answers "what exists?" then the knowledge graph answers the next question: what is true?

A common misconception is that the knowledge graph populates the ontology. It does not. It populates itself, using the ontology as its schema. The ontology is the empty structure. The knowledge graph is that structure filled with real entities, real relationships, real data. Where the ontology says "a Customer holds an Account," the knowledge graph says "Acme Corp holds Account 4471-B, opened 14 March 2019, managed by Sarah McAccountmanager." This is where raw data becomes organisational truth. The knowledge graph is the set of facts ( and sometimes axioms) from which the rest of the system reasons.

It is worth distinguishing this from the technology that often gets conflated with it. A graph database (Neo4j, Amazon Neptune) is a storage engine. It stores nodes and edges efficiently. A knowledge graph is a semantic construct: data that conforms to an ontology, with typed entities and typed relationships that carry meaning. You can build a knowledge graph on top of a relational database. The graph database is an implementation choice, not a defining characteristic. Of course, it almost goes without saying that you can’t have the knowledge without the storage.

The construction process is where the real work happens. Consider a financial services firm where the CRM stores "John Smith," the ERP stores "J. Smith," and the HRIS stores "John D. Smith." These are three records in three systems referring to one person. The knowledge graph construction process resolves them into a single canonical entity, using the ontology's class definitions as the basis for what constitutes a match. This is entity resolution, and it is the first genuinely difficult problem in the stack. Without canonical class definitions from the ontology, there is no principled basis for deciding whether two records refer to the same thing.

Entity resolution is only half the challenge. Source systems store relationships implicitly, as foreign keys or join tables, or not at all. The relationship between a support ticket and the contract it relates to may exist only in someone's head. The knowledge graph makes these relationships explicit and typed. A foreign key linking a customer ID to an orders table becomes a formally defined "Customer places Order" relationship that can be traversed, queried, and reasoned about. The tacit and the explicit are both first class constructs in the knowledge graph backed by the ontology.

The deeper difficulty is semantic reconciliation. Source systems do not merely store data in different formats. They encode different assumptions about the domain.

Take "Revenue." In sales, revenue includes pipeline. In finance, revenue means recognised revenue. In operations, revenue means billable hours delivered. Same word. Three different numbers. Three different truths.

Mapping all three into the knowledge graph is not a formatting exercise. It is a reconciliation of competing worldviews. Someone has to decide which definition is authoritative, whether the others should coexist as distinct concepts, and how conflicts resolve when the numbers disagree. Every one of these mapping decisions is a business decision, not a technical one.

The plumbing of moving data between systems is a solved problem. Agreeing on what the data means is not.

This is why knowledge graph construction is iterative. You define a minimal viable ontology, populate the graph, discover that your class hierarchy does not accommodate a real-world pattern you had not anticipated, revise the ontology, re-map. The graph is the primary mechanism by which ontology gaps surface. Google's Knowledge Graph, which powers search enrichment and entity disambiguation across billions of queries, went through years of this iterative refinement. LinkedIn's Economic Graph, which models the professional world as entities and relationships between people, companies, skills, and jobs, is another example of a knowledge graph that evolved its ontology continuously as new patterns emerged from the data. Enterprise knowledge graphs follow the same pattern at smaller scale. The added complexity is that the data is messier, the politics are louder, and the stakes of getting a definition wrong are commercial rather than informational.

3. The semantic layer: Making facts speak business

The knowledge graph now contains organisational truth: real entities, real relationships, resolved and reconciled. But truth expressed in ontological terms is not accessible to the people and systems that need to act on it. The semantic layer answers the next question: what does that mean to us?

The semantic layer is a translation interface. It maps the formal structures of the knowledge graph into business language: metrics, definitions, and governed calculations that the organisation agrees on. Where the knowledge graph knows that Acme Corp has three accounts with contract values of £1.2m, £1.8m, and £1.2m, the semantic layer knows that this makes Acme Corp an "Enterprise" customer, because Enterprise is defined as any customer with total contract value exceeding £3m. The graph stores facts. The semantic layer applies business logic to those facts and gives the results names that people recognise.
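The "Enterprise" rule above can be sketched as a single governed calculation. The contract values are the example figures from the text; the segment names and threshold logic are illustrative, not a real semantic-layer product:

```python
# One governed definition of "Enterprise", applied to facts in the graph.
# Contract values are the example figures from the text, in pounds.
contracts = {"Acme Corp": [1_200_000, 1_800_000, 1_200_000]}

def total_contract_value(customer: str) -> int:
    return sum(contracts[customer])

def segment(customer: str) -> str:
    """Enterprise is defined once: total contract value above £3m."""
    return "Enterprise" if total_contract_value(customer) > 3_000_000 else "Standard"

print(segment("Acme Corp"))  # Enterprise
```

Everything downstream (dashboards, reports, agents) calls `segment`; nothing reimplements the threshold.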

This is not a dashboard. It is not a reporting tool. It is the abstraction layer that dashboards and reporting tools query against. The dashboard is the presentation. The semantic layer is the meaning. Tools like dbt's metrics layer and Looker's LookML popularised this concept in the analytics world by giving organisations a single place to define what "revenue" or "active customer" or "churn rate" actually means. One definition, one calculation, one truth. Everyone queries against it. Nobody reinvents it.

The strategic importance of this layer has shifted dramatically in the context of agentic AI. Without a semantic layer, every AI agent that queries organisational data has to interpret the knowledge graph on its own terms. It has to decide what "revenue" means, how to aggregate it, which accounts to include. The result is divergence: different agents producing different answers to the same question, with no governing authority over which is correct. The semantic layer eliminates this by providing a single, governed interface between any consumer of organisational intelligence, human or machine, and the facts in the knowledge graph.

This is where LLMs become relevant. When an AI agent needs to answer a business question, the semantic layer is where it should go for definitions rather than generating its own. A well-constructed semantic layer gives an LLM the canonical meaning of every business term, the sanctioned calculation behind every metric, and the constraints on how data should be aggregated. Without it, the agent hallucinates definitions. With it, the agent reasons from governed truth. The semantic layer is no longer a BI convenience. In an agentic enterprise, it is the guardrail that prevents autonomous systems from confidently producing different answers to the same question.

4. The context graph: How decisions get made

The first three layers establish what exists, what is true, and what it means. The context graph answers the next question: what did we do about it?

This is the newest of the five concepts and the least settled in definition. Two framings are circulating in the market.

The first, popularised by Foundation Capital, treats the context graph as a decision-trace layer: a living record of how decisions were made against organisational facts including what data was consulted, what rules applied, what precedents existed, who approved, and what the outcome was.

The second, more common in technical literature, treats it as an AI-optimised subgraph: a contextual window into the knowledge graph, scoped and shaped for a specific task or query.

Both are valid descriptions of real architectural patterns. For the purposes of this stack, the decision trace framing is the more useful one, because it addresses a problem the first three layers cannot: institutional memory.

Most organisations have no structured record of how decisions were made. The reasoning lives in email threads, Slack messages, meeting notes, and people's heads. When those people leave, the reasoning leaves with them. The context graph captures it as a traversable, queryable structure. "The last four times this exception type was requested with these conditions, it was approved by the risk committee with these caveats." That is not an audit trail. An audit trail records that a decision was made. The context graph records why.
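A decision trace of this kind might be sketched as a structured record. The field names and example values below are hypothetical, chosen to mirror the exception-approval example above:

```python
from dataclasses import dataclass, field

# Hypothetical shape of a decision trace: not just that a decision was
# made (the audit trail), but the reasoning context around it.
@dataclass
class DecisionTrace:
    decision: str
    outcome: str
    data_consulted: list
    rules_applied: list
    precedents: list          # links back to earlier traces
    approved_by: str
    caveats: list = field(default_factory=list)

trace = DecisionTrace(
    decision="exception:late-payment-waiver",
    outcome="approved",
    data_consulted=["payment-history", "contract-terms"],
    rules_applied=["waiver-policy-v3"],
    precedents=["trace-0117", "trace-0142"],
    approved_by="risk-committee",
    caveats=["one-time only"],
)
print(trace.outcome)  # approved
```

Because each trace links to its precedents, "the last four times this exception was requested" becomes a graph traversal rather than an archaeology exercise.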

This is what gives AI agents memory. Without a context graph, an agent making a decision today has no knowledge of how similar decisions were made last week. It operates statelessly, reasoning from first principles every time. With a context graph, the agent can consult organisational precedent. It can recognise patterns across decision traces and apply them. This is where the context graph becomes genuinely powerful: it turns a reactive agent into one that learns from the accumulated judgement of the organisation.

It is also where it becomes genuinely dangerous. Automating decisions based on historical precedent can encode bias. If past decisions were systematically unfair, the context graph will faithfully represent that unfairness as precedent and serve it up as guidance. If the context graph shows that 90% of past approvals went to one category of applicant, is that because the policy intended it, or because the decision-makers carried an unconscious bias that is now baked into the precedent? This is why continuous evaluation of context graph outputs is essential: testing whether decision patterns reflect policy intent or merely inherited habit. Any system that uses historical decisions to inform future ones needs this discipline built in, not bolted on.

It is worth addressing the relationship to Retrieval-Augmented Generation. RAG is, in a simplified sense, a primitive context graph. It retrieves relevant context to inform a generation task. But the quality of that retrieval depends entirely on how the context was encoded.

Standard RAG pipelines work by converting documents into embeddings: numerical representations that capture the meaning of text as the model understands it. When an agent searches for relevant context, it compares the meaning of its query against the meaning encoded in those embeddings and retrieves the closest matches. The problem is that standard embeddings reflect what words mean to a foundation model trained on the open internet. "Revenue" is encoded as a generic financial concept. "Customer" is encoded as a general commercial term. The model has no knowledge of what these words mean to your organisation specifically.

A context graph built on top of the semantic layer inverts this. Its retrieval is grounded in organisational semantics. When it retrieves context about "revenue" or "risk" or "customer," those terms carry the canonical definitions established in the semantic layer, not the model's generic interpretation. This is a fundamentally different quality of retrieval, and it is the difference between an agent that sounds informed and one that actually is.
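One simple way to picture this grounding, stripped of any real embedding model: expand the business terms in a query with their canonical definitions before retrieval runs. The glossary entries here are invented for illustration:

```python
# Illustrative sketch: before retrieval, expand business terms with
# canonical definitions from the semantic layer, so the query carries
# organisational meaning rather than the model's generic reading.
# The glossary entries are invented for illustration.
glossary = {
    "revenue": "recognised revenue per finance policy, excluding pipeline",
    "customer": "an account with an active signed contract",
}

def ground_query(query: str) -> str:
    expansions = [defn for term, defn in glossary.items() if term in query.lower()]
    return query + " [" + "; ".join(expansions) + "]" if expansions else query

print(ground_query("How did revenue trend last quarter?"))
# How did revenue trend last quarter? [recognised revenue per finance policy, excluding pipeline]
```

In a production pipeline the same principle would apply at embedding time, but the mechanism is the same: the organisation's definitions travel with the query.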

5. The trust layer: Governing it all

The final question in the stack is the one that applies retroactively to every layer beneath it: should we have?

The trust layer is not a product. It is not a single technical component that sits on top of the stack. It is an architectural discipline: the set of policies, constraints, and enforcement mechanisms that govern how every other layer operates. Gartner's AI TRiSM framework, AWS Bedrock's guardrails, and the compliance tooling emerging around the EU AI Act all treat trust as a cross-cutting concern rather than a standalone category. This is the correct framing.

Consider what requires governance at each layer. The ontology needs access control: who can modify the canonical definitions that everything else depends on? The knowledge graph needs provenance: which source system is authoritative when two sources disagree? The semantic layer needs auditability: how was this metric derived, and can we explain it to a regulator? The context graph needs policy enforcement: should this historical precedent actually be followed, or has the policy changed since the decision was recorded? The trust layer wraps around all of these. It does not generate intelligence. It governs how intelligence flows, who can access it, and what can be done with it.
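Two of those governance questions can be sketched as explicit, enforceable rules rather than conventions. The role names, source names, and precedence order below are all hypothetical:

```python
# Hypothetical sketch: per-layer governance questions expressed as
# explicit, enforceable rules. Role and source names are illustrative.

def may_modify_ontology(user_roles: set) -> bool:
    """Ontology access control: only stewards change canonical definitions."""
    return "ontology-steward" in user_roles

def authoritative_value(values_by_source: dict, precedence: list):
    """Knowledge-graph provenance: when sources disagree, precedence decides."""
    for source in precedence:
        if source in values_by_source:
            return values_by_source[source]
    raise LookupError("no authoritative source available")

print(may_modify_ontology({"analyst"}))                             # False
print(authoritative_value({"crm": 90, "erp": 87}, ["erp", "crm"]))  # 87
```

The value of writing these down as code is that they are enforced on every access, not merely documented.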

Return to the laptop analogy from the opening. The trust layer is the permissions model of the enterprise AI operating system. Every operating system has one. Without it, any process can read any file, modify any setting, and execute any command. The system is powerful and ungoverned. This is precisely the state of most enterprise AI deployments today.

The regulatory pressure is converging on this gap. The EU AI Act, emerging US executive orders on AI, and sector-specific regulations in financial services and healthcare are all arriving at the same requirement: AI systems must be explainable, auditable, and governable. The trust layer is where enterprises meet that requirement architecturally, through embedded constraints and enforcement, rather than through retroactive compliance exercises bolted on after the system is already in production.

In an agentic context, the stakes sharpen further. An AI agent that has access to the knowledge graph, understands the semantic layer, and has learned from the context graph is powerful. It can traverse organisational knowledge, interpret it in business terms, consult precedent, and act. Without a trust layer, there is nothing constraining what it does with that power. The trust layer is what ensures the agent operates within the boundaries the organisation has set, not just the boundaries the technology permits.

There is one further dimension that most trust layer thinking overlooks. The trust layer needs to govern not just what the system does, but how it changes. When the ontology is modified, that change cascades through every downstream layer. When the semantic layer is updated, every agent consulting it now reasons from different definitions. When the context graph accumulates new decision traces, the precedent base shifts. Each of these changes is individually reasonable and collectively unpredictable. The trust layer's role expands from governance of AI outputs to governance of system evolution itself.

The stack, not the glossary

Five layers. Five questions. Each one dependent on the answers provided by the one beneath it.

What exists? What is true? What does that mean to us? What did we do about it? Should we have?

Remove the ontology, and the knowledge graph has no schema. It is a pile of data with no shared definitions. Remove the knowledge graph, and the semantic layer has nothing to translate. It is business language with no underlying facts. Remove the semantic layer, and every agent interprets the graph on its own terms. Ten agents, ten answers, no authority. Remove the context graph, and the organisation has no memory. Every decision is made from scratch, with no awareness of precedent. Remove the trust layer, and the system is powerful and ungoverned.

This is not a glossary. It is an architecture. The five layers are load-bearing components of a single system, and confusing them is not an academic mistake. It is a structural one. The enterprise that gets this right builds intelligence that compounds. The enterprise that conflates these layers builds complexity that compounds instead.

The stack also forms a cycle, not just a ladder. Consider a credit approval decision. An agent consults the knowledge graph for the applicant's data, references the semantic layer for the definition of "creditworthy," checks the context graph for how similar applications were handled, and operates within the trust layer's policy constraints. It approves the application. That approval is now a new fact in the knowledge graph. It creates a new decision trace in the context graph. It may shift the semantic layer's metrics: the organisation's average approval rate has changed, its exposure has increased. The next agent that handles a similar application is now operating against a slightly different version of the truth. The five layers are not a one-time construction project. They are an operating loop.
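The credit-approval loop can be reduced to a few lines, with each layer collapsed into a plain data structure. The income threshold and policy limit are invented for illustration:

```python
# Illustrative sketch of the operating loop: one decision consumes the
# stack, then writes new facts and precedent back into it.
facts = {"applicant:A1": {"income": 52_000, "defaults": 0}}   # knowledge graph
traces = []                                                    # context graph
MAX_AUTO_DECISIONS = 100                                       # trust-layer policy

def creditworthy(applicant: str) -> bool:
    """Semantic layer: one governed definition of 'creditworthy'."""
    a = facts[applicant]
    return a["income"] > 30_000 and a["defaults"] == 0

def decide(applicant: str) -> str:
    if len(traces) >= MAX_AUTO_DECISIONS:                        # trust-layer constraint
        return "referred-to-human"
    outcome = "approved" if creditworthy(applicant) else "declined"
    facts[f"decision:{applicant}"] = {"outcome": outcome}        # new fact in the graph
    traces.append({"applicant": applicant, "outcome": outcome})  # new precedent
    return outcome

print(decide("applicant:A1"))  # approved
```

Note that after `decide` runs, both `facts` and `traces` have grown: the next decision operates against a slightly different version of the truth, which is the loop the text describes.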

What comes next

This article has described the five layers of a knowledge architecture: the knowledge aspects of the enterprise operating system. But a knowledge architecture, on its own, leaves the enterprise OS incomplete. It describes how intelligence is structured, populated, translated, contextualised, and governed. It says nothing about where the raw material comes from, what happens with the intelligence once it exists, or how the system improves over time.

Three concerns remain. The data substrate: the operational databases, data warehouses, streaming platforms, and SaaS APIs that hydrate the knowledge graph. The action layer: the workflow engines, agent orchestration platforms, and human interfaces where intelligence becomes execution. And the learning layer: the mechanisms by which the system evolves, from ontology refinement to context graph pattern recognition to trust layer anomaly detection.

Add these three, and the five-layer knowledge architecture becomes something else entirely. But that is a different article.


Are you ready to shape the future enterprise?

Get in touch, and let's talk about what's next.


Valliance Newsletter

Insights and thinking direct to your inbox.
