
During the concept validation phase of Corvenia, I spoke to a CFO who was excited. His board had just approved a budget for an AI agent that would run across their financial data. Autonomous analysis. Real-time variance commentary. Alerts before problems surface in the numbers.
Six months later he called back. The agent was live. The outputs were confident. And they were wrong often enough that nobody trusted them.
The AI was not the problem. The data underneath it was.
Agentic AI is the most talked-about development in enterprise software right now. AI agents that do not just answer questions but take actions. Reason across data. Flag anomalies. Initiate workflows. Book entries. Generate board packs.
The demos are impressive. The vendor promises are specific. And the underlying assumption in almost every pitch is that the data your agent will reason on is clean, consistent, and structured well enough to support that kind of reasoning.
For most finance teams running multiple entities across multiple ERPs, that assumption is wrong.

Every ERP has its own data model. Its own account structures. Its own way of representing a sales transaction, a cost centre, an intercompany loan.
Tripletex handles revenue recognition differently to Business Central. Fortnox structures its chart of accounts differently to Microsoft Finance & Operations. Xledger has its own dimension model.
None of them were designed to talk to each other, and none of them were designed with AI reasoning in mind. They were designed to close books accurately within one system.
When a group runs four ERPs across ten entities, the underlying data is not one dataset with four sources. It is four different representations of financial reality, each internally consistent, each structurally incompatible with the others.
An AI agent trying to reason across that data is not working with a clear picture of the group. It is working with four different languages, without a translator in between.
The outputs will be confident. They will reference real numbers. And they will be wrong in ways that are difficult to detect, because the errors come from misaligned structure rather than incorrect values.

Before Agentic AI can do anything useful on financial data, someone has to solve the normalization problem.
The trial balance and general ledger data from each ERP is stored differently. Not just named differently, structured differently. Some store cost centre dimensions as account attributes, others as separate hierarchies. When you pull time series data across four ERPs to ask a simple question like "how did group revenue trend over the last twelve months," you are not comparing the same thing four times. You are comparing four different representations of revenue, each internally correct, none of them directly comparable.
That means taking the data model from each ERP and mapping it into a single canonical structure. Every account code from every system mapped to a consistent group-level equivalent. Every transaction classified the same way regardless of which ERP it originated in. Every intercompany flow identified and tagged so an agent does not double-count it as external revenue.
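To make the idea concrete, here is a minimal sketch of that mapping step. The ERP names are real, but the account codes, canonical labels, entity IDs, and mapping table are invented for illustration — this is not Corvenia's actual schema, just the shape of the problem.

```python
# Per-ERP mapping tables: local account code -> canonical group account.
# Tripletex 4010 and Business Central 3000 represent the same thing,
# so both map to one canonical account. (Codes are illustrative.)
ACCOUNT_MAP = {
    "tripletex":        {"4010": "REV-PRODUCT"},
    "business_central": {"3000": "REV-PRODUCT"},
}

# Entities that belong to the group, used to tag intercompany flows.
GROUP_ENTITIES = {"ENT-NO-01", "ENT-SE-02", "ENT-DK-03"}

def normalize(txn: dict) -> dict:
    """Map a raw ERP transaction into the canonical structure."""
    canonical_account = ACCOUNT_MAP[txn["erp"]][txn["account"]]
    return {
        "entity": txn["entity"],
        "account": canonical_account,
        "amount": txn["amount"],
        # Tag flows whose counterparty is another group entity, so a
        # downstream agent never counts them as external revenue.
        "intercompany": txn.get("counterparty") in GROUP_ENTITIES,
    }

raw = {"erp": "tripletex", "entity": "ENT-NO-01", "account": "4010",
       "amount": 1200.0, "counterparty": "ENT-SE-02"}
print(normalize(raw))
# {'entity': 'ENT-NO-01', 'account': 'REV-PRODUCT', 'amount': 1200.0, 'intercompany': True}
```

The point of the sketch is the shape of the output: whichever ERP a transaction came from, it leaves this step in one structure, with one account scheme, with intercompany flows already flagged.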
This is not glamorous work. It does not make for an impressive demo. But it is the prerequisite for everything that comes after.
I have spent twenty years building AI systems for regulated industries. Insurance, banking, financial services. The pattern is always the same. The teams that get AI working in production are the ones who spent serious time on the data layer before they touched the model layer. The teams that skip that step get confident AI producing unreliable outputs, and then spend twice as long trying to understand why.
With financial data the stakes are higher. A hallucination in a customer service chatbot is embarrassing. A hallucination in a variance analysis that influences a board decision is a different order of problem.
Once the canonical layer exists, three things become possible that were not possible before.
First, an agent can reason consistently across the whole group. It is working from one data model, not four. Account 4010 in Tripletex and account 3000 in Business Central are the same thing in the canonical layer. The agent does not have to guess.
Second, the noise drops out. Intercompany transactions that would appear as revenue in one entity and cost in another are already eliminated before the agent sees the data. It is reasoning on the group's actual financial position, not on an inflated version of it.
Third, operational data can sit alongside financial data in the same structure. CRM pipeline, headcount costs, project margins. Not as a separate dataset that requires a separate query, but as part of the same canonical model that the agent is already working with. The question "why did margin drop" becomes answerable because the financial signal and the operational context are in the same place.
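Once transactions sit in that canonical form, the group-level question from earlier — "how did group revenue trend" — collapses into a simple aggregation. A minimal sketch, with field names and figures invented for illustration:

```python
from collections import defaultdict

# Canonical transactions from any entity, any ERP: one account scheme,
# intercompany rows already tagged. (Data is illustrative.)
canonical_txns = [
    {"month": "2024-01", "account": "REV-PRODUCT", "amount": 900.0,  "intercompany": False},
    {"month": "2024-01", "account": "REV-PRODUCT", "amount": 300.0,  "intercompany": True},
    {"month": "2024-02", "account": "REV-PRODUCT", "amount": 1100.0, "intercompany": False},
]

def group_revenue_by_month(txns):
    """Sum external revenue per month, eliminating intercompany rows."""
    totals = defaultdict(float)
    for t in txns:
        if t["account"].startswith("REV-") and not t["intercompany"]:
            totals[t["month"]] += t["amount"]
    return dict(totals)

print(group_revenue_by_month(canonical_txns))
# {'2024-01': 900.0, '2024-02': 1100.0}
```

Note what the elimination does: January's 300.0 of intercompany revenue never reaches the total, so the agent reasons on 900.0 of external revenue rather than an inflated 1200.0.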
This is what we mean when we talk about making data AI-ready. Not just clean data. Not just faster reporting. Structured in a way that an AI agent can reason across without introducing errors at the seams.
We built the canonical data layer at Corvenia because we had to. You cannot do real-time consolidation across multiple ERPs without it. Every entity connects its ERP, and the data normalizes into a single model automatically. Account mapping is a classification problem. AI handles it, a controller approves it, and it is done in minutes.
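The classification-plus-approval pattern can be sketched in a few lines. Here a string-similarity heuristic stands in for the AI classifier, and the canonical account names and confidence threshold are invented for illustration — the real system would use a proper model, but the workflow is the same: confident suggestions apply automatically, the rest go to a controller.

```python
import difflib

# Canonical group accounts a local account can be mapped to. (Illustrative.)
CANONICAL_ACCOUNTS = ["Product revenue", "Service revenue", "Materials cost"]

def suggest_mapping(local_account_name: str):
    """Propose the best canonical account plus a confidence score."""
    scores = [
        (difflib.SequenceMatcher(None, local_account_name.lower(), c.lower()).ratio(), c)
        for c in CANONICAL_ACCOUNTS
    ]
    score, best = max(scores)
    return best, score

def map_with_approval(local_account_name, approve, threshold=0.6):
    """Auto-apply confident suggestions; route the rest to a controller."""
    suggestion, score = suggest_mapping(local_account_name)
    if score >= threshold:
        return suggestion
    # Below threshold: a controller reviews the suggestion and decides.
    return approve(local_account_name, suggestion)

best, score = suggest_mapping("Product revenues")
print(best, round(score, 2))
```

The design choice worth noting is the threshold: it is what turns "AI handles it" into "AI handles most of it, and a human approves the rest in minutes" rather than a controller re-verifying every line.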
What we are seeing now is that the value of that layer extends beyond consolidation. Groups that have it are in a position to put AI agents on top of their financial data and get reliable outputs. Groups that do not have it will get agents that sound authoritative and produce numbers that require a controller to verify every line before anyone trusts them.
Agentic AI in finance is coming. The firms that benefit from it are the ones that build the foundation first. The canonical data layer is not a step you can skip. It is the step everything else depends on.