When Complexity Becomes the Product
Why simplicity is the secret ingredient of survival in the coming age of Agentic AI
1. The Substitution
In 1997, I filed a patent titled Intelligent Transaction. The core idea was simple: if you embed application logic and data synchronization in the same atomic execution context, they can never drift apart. You eliminate reconciliation by eliminating the gap between the decision and the state change. The patent addressed the architecture of the emerging internet — a method for decentralized systems to reach deterministic outcomes without a central coordinator. Banking was one possible application among many.
I spent the following decade trying to get the internet infrastructure industry excited about this. Three continents. A million miles of flying. Hundreds of meetings with technology vendors, platform builders, and enterprise architects. The idea was sound. The timing was catastrophic. The internet was making a different choice — one that would prove enormously consequential and almost impossible to reverse.
The choice was centralization.
The deeper question — the one I didn’t fully understand at the time — is why centralization won. The answer has nothing to do with transaction mechanics and everything to do with a missing prerequisite. An atomic transaction between independent parties requires three things before the first byte of business logic can execute: verified identity (who am I dealing with?), verified authority (are they empowered to commit?), and verified state (do they actually have what they claim?). The internet provided none of these. TCP/IP moves bits. HTTP serves pages. Neither protocol carries any concept of who you are, what you’re authorized to do, or whether your commitments are real.
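The three prerequisites can be made concrete with a small sketch. This is purely illustrative — the names and structures are invented here, not part of any real protocol — but it shows the gate that must close before any business logic runs:

```python
# Hypothetical sketch: the three checks an atomic transaction between
# independent parties needs before the first byte of business logic runs.
# All names and structures here are illustrative, not a real protocol.

from dataclasses import dataclass

@dataclass
class Party:
    identity_verified: bool   # who am I dealing with?
    authority_verified: bool  # are they empowered to commit?
    state_verified: bool      # do they actually have what they claim?

def may_execute(parties: list[Party]) -> bool:
    """Business logic may run only if every prerequisite holds for every party."""
    return all(
        p.identity_verified and p.authority_verified and p.state_verified
        for p in parties
    )

buyer = Party(identity_verified=True, authority_verified=True, state_verified=True)
seller = Party(identity_verified=True, authority_verified=False, state_verified=True)

print(may_execute([buyer, seller]))  # False: authority is unverified
```

The internet of 1997 offered no way to make any of these three booleans true between strangers — which is why the gate was internalized by platforms instead.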
The hyperscalers understood this gap intuitively. They solved it by internalizing the trust problem. When you log into Amazon, Amazon knows both buyer and seller. It verifies identity through its own account system, authority through its own payment gateway, and state through its own inventory database. The atomic transaction works beautifully — inside Amazon’s walls. The cost is that you become Amazon’s client rather than a sovereign peer.
The entire internet organized itself around this bargain. Amazon became the source of truth for commerce. Google became the source of truth for information. Facebook became the source of truth for identity. Each silo achieved internal consistency by making everyone a client of the same centralized infrastructure. Decentralized transaction management — the ability for independent systems to reach shared, deterministic outcomes without a central authority — lost the battle because the trust substrate it required simply didn’t exist yet.
The financial industry inherited this architectural error and amplified it. Banks had always been silos, but they had also always needed to transact with each other. Without a shared identity and trust layer between them, they fell back on the only tool available: messaging. A SWIFT MT103 is a message. A FIX protocol order is a message. A SEPA instruction is a message. The entire intellectual framework of modern banking technology rests on a single assumption: if we can guarantee that the message arrives, the system will eventually be consistent.
This is true. It is also the most expensive, and increasingly outdated, truth in the global economy.
“Eventually consistent” is an engineering euphemism for “we will reconcile later.” And reconciliation — the act of comparing Bank A’s private ledger with Bank B’s private ledger to find and fix the mismatches — is now a multi-billion-dollar annual industry, and it is still growing. That growth tells you everything: the problem is not being solved. It is being serviced, year after year, with increasing enthusiasm.
The substitution happened so long ago that most banking technologists don’t even recognize it as a substitution. They think transaction management is reliable messaging. They hear “transaction” and think “a message that triggers a state change in my database.” They do not mean what I mean: a bounded execution context where all participants verify conditions and reach a deterministic outcome before any state becomes visible to the outside world.
These are fundamentally different things. The vocabulary collision makes the conceptual gap invisible, and the invisibility makes it permanent.
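The collision between the two meanings can be shown in a few lines. This is an illustrative contrast using hypothetical dict ledgers, not any real banking system: a message mutates each side independently, so the ledgers can drift; an atomic transaction makes both state changes visible together or not at all.

```python
# Two meanings of "transaction", side by side (hypothetical ledgers).

def settle_by_message(ledger_a, ledger_b, amount, delivered=True):
    ledger_a["balance"] -= amount       # sender's view changes now
    if delivered:
        ledger_b["balance"] += amount   # receiver's view changes later, maybe
    # If the message is lost, the two ledgers disagree: a settlement
    # break that someone must find and reconcile.

def settle_atomically(ledger_a, ledger_b, amount):
    if ledger_a["balance"] < amount:
        raise ValueError("conditions not met")  # no state change at all
    ledger_a["balance"] -= amount
    ledger_b["balance"] += amount       # both changes, or neither

a, b = {"balance": 100}, {"balance": 0}
settle_by_message(a, b, 30, delivered=False)
print(a["balance"], b["balance"])  # 70 0 -- the ledgers have drifted
```

In the first function, reconciliation is the unavoidable cleanup step; in the second, there is nothing to reconcile because the gap between decision and state change never existed.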
2. The Immune System
If the substitution is so costly, why hasn’t the industry corrected it? Because the industry experiences complexity as a business model, and business models are defended fiercely.
The financial system has built an immune system that actively rejects architectural simplification. Three mechanisms keep it running.
The Vendor Lock. Reconciliation middleware, clearing platforms, correspondent banking chains, settlement risk management tools — these are multi-billion-dollar business lines. Every vendor in this stack has a revenue model predicated on the continuation of the gap between trade and settlement. When you walk into a room and say “I can eliminate reconciliation,” you are trying to change an entire ecosystem.
The Abstraction Barrier. The generation of architects who understood transaction semantics — two-phase commit, compensating transactions, the actual theory of deterministic execution — is retiring. The current generation builds microservices and assumes that Kafka plus retry logic equals transactional integrity. They have been trained on messaging middleware. The concept of a transaction as a shared execution context for an atomic event is foreign to their mental model, and because the word “transaction” is overloaded to mean “message,” they never discover the gap in their own understanding.
This is a genuine loss of institutional knowledge. The people who built CICS and IMS understood what a transaction is. Their successors think they understand, which is worse than not knowing at all.
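The semantics being lost are worth restating in code. Here is a minimal sketch of two-phase commit: participants are in-process objects, and a real implementation would add durable logs and crash recovery, but the shape of the protocol — everyone promises, then everyone commits, or everyone aborts — is exactly what CICS-era architects carried in their heads.

```python
# Minimal two-phase commit sketch (in-process, no durability or recovery).

class Participant:
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.committed = False
        self._prepared = False

    def prepare(self) -> bool:
        # Phase 1: verify locally that commit will succeed, and promise it.
        self._prepared = self.can_commit
        return self._prepared

    def commit(self):
        # Phase 2: runs only if every participant voted yes in phase 1.
        assert self._prepared
        self.committed = True

    def abort(self):
        self._prepared = False

def two_phase_commit(participants) -> bool:
    if all(p.prepare() for p in participants):   # phase 1: prepare everywhere
        for p in participants:
            p.commit()                           # phase 2: commit everywhere
        return True
    for p in participants:
        p.abort()                                # or abort everywhere
    return False

bank_a, bank_b = Participant("A"), Participant("B", can_commit=False)
print(two_phase_commit([bank_a, bank_b]))  # False: one no-vote aborts both
```

Note what Kafka-plus-retries cannot give you: the no-vote of a single participant deterministically prevents any participant from committing.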
The Accountability Dodge. A messaging architecture distributes blame beautifully. If the settlement fails, where does the fault lie? Was it the sender? The receiver? The intermediary? The format? The timing? Nobody knows with certainty, and that ambiguity is comfortable.
An atomic transaction model is merciless. Either the conditions were met and the state changed, or they weren’t and it didn’t. The finite state machine is deterministic. The receipts are signed. The audit trail is perfect. You can reconstruct exactly what happened, who presented what evidence, and where the logic failed. This level of accountability is genuinely threatening to organizations that have spent decades managing risk through ambiguity.
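The merciless model is easy to sketch. In this hypothetical example, a fixed transition table makes outcomes deterministic, and every transition emits a tamper-evident receipt — HMAC stands in here for a real signature scheme, and all names are illustrative:

```python
# Deterministic settlement state machine with tamper-evident receipts.

import hashlib
import hmac
import json

TRANSITIONS = {
    ("proposed", "all_conditions_met"): "settled",
    ("proposed", "condition_failed"):   "rejected",
}

def step(state, event, key, audit_log):
    next_state = TRANSITIONS.get((state, event))
    if next_state is None:
        raise ValueError(f"no transition from {state!r} on {event!r}")
    record = json.dumps({"from": state, "event": event, "to": next_state})
    receipt = hmac.new(key, record.encode(), hashlib.sha256).hexdigest()
    audit_log.append((record, receipt))   # replayable: exactly what happened
    return next_state

log, key = [], b"demo-key"
print(step("proposed", "all_conditions_met", key, log))  # settled
```

Either a transition exists in the table or it does not; either the receipt verifies or it does not. There is no state in which blame can be distributed.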
3. The Load-Bearing Wall
The standard critique of legacy banking is that complexity is incidental — accumulated technical debt that could be cleaned up with enough budget and willpower. This misdiagnoses the situation. Complexity in banking is structural and load-bearing. Remove it and you don’t just simplify the technology stack. You alter the organizational structures built on top of it.
Consider what happens if you make reconciliation instantaneous. The operations teams that manage settlement breaks — gone. The middleware vendors that sell reconciliation platforms — obsolete. The clearinghouses that guarantee trades during the T+2 settlement window — unnecessary if the transaction settles atomically.
Each of these represents headcount, vendor contracts, regulatory relationships, and accumulated political capital inside the institution. The total cost of the reconciliation infrastructure is enormous, but it is distributed across dozens of P&L lines. No single team owns the aggregate cost, so no single decision-maker can act on it. The trading desk sees its latency. The operations team sees its breaks. The compliance team sees its reporting burden. The technology team sees its integration backlog. All of these are symptoms of the same architectural deficiency — we separated the agreement from its execution — but the symptoms are managed by different people who never connect them.
This is why complexity persists. The industry prices complexity, monetizes it, and builds organizational charts around it. The complexity has become the product.
4. The Breaking Point
Every load-bearing wall has a maximum load. The messaging architecture held together at human speed because humans are slow. A CFO reviews a settlement report once a day. A compliance officer checks a transaction log once a week. An auditor examines the books once a quarter. The gaps, mismatches, and ambiguities that accumulate in a messaging-based system can be manually cleaned up between these checkpoints. The system leaks, but humans mop up the leaks fast enough to prevent flooding.
Agentic AI breaks this equilibrium.
An autonomous AI agent operating a corporate treasury does not check once a day. It operates continuously, at millisecond intervals, executing trades, hedging currency exposure, managing liquidity across accounts. It cannot wait for eventual consistency. It cannot parse ambiguity. When it sends a payment instruction and the settlement state is “pending reconciliation,” it has no meaningful basis for its next decision. Should it count that cash as available or unavailable? The answer depends on whether the reconciliation will succeed, which depends on whether Bank B’s internal systems will match the instruction within two business days, which depends on factors that the agent cannot observe or predict.
The messaging paradigm forces the agent into a state of permanent uncertainty about its own financial position. This is workable for humans, who intuitively manage ambiguity and can pick up a phone. It is catastrophic for software that needs deterministic state to make deterministic decisions.
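The agent's dilemma can be reduced to a single function. The states and names below are illustrative: under atomic settlement every reported state maps to a decision, while "pending reconciliation" maps to nothing the agent can act on.

```python
# Sketch: a treasury agent deciding whether cash is spendable.

class AmbiguousState(Exception):
    """Raised when the rails give the agent no deterministic answer."""

def spendable(amount, settlement_state):
    if settlement_state == "settled":
        return amount   # deterministic: the cash is available
    if settlement_state == "failed":
        return 0        # deterministic: it is not
    # "pending reconciliation": the honest answer is that no decision is
    # justified -- the outcome depends on systems the agent cannot observe.
    raise AmbiguousState(settlement_state)

print(spendable(1_000_000, "settled"))
```

On atomic rails the exception branch is unreachable, because the rails only ever report terminal states. On messaging rails the agent lives in that branch.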
The breaking point arrives when the cost of complexity — measured not just in reconciliation fees but in the inability to deploy autonomous agents — exceeds the revenue that the complexity infrastructure generates. Agentic AI makes this inevitable, because the institutions that can deploy agents on clean, deterministic rails will operate at a structural cost advantage over those still mopping up messaging leaks by hand.
The financial industry has spent decades building tolerance for ambiguity into its operating model. Agentic AI has zero tolerance for ambiguity. One of them will have to change.
5. The Missing Foundation
Here is where the argument turns from diagnosis to prescription. And it requires honesty about what actually failed in 1997.
The Intelligent Transaction patent had the right transaction mechanics. The atomic execution model was sound. What was missing — what I did not fully see at the time — was the trust infrastructure underneath. You can design the most elegant atomic execution context in the world, and it remains useless if the participants have no way to verify each other’s identity, authority, and commitments without routing through a centralized intermediary.
The hyperscalers bypassed this problem by becoming the intermediary themselves. The financial industry bypassed it by substituting messaging for transactions. Both were rational responses to the same root cause: the internet had no native protocol for trust.
The technology primitives that would have been necessary in 1997 — decentralized identifiers, verifiable credentials, cryptographic proofs of authority, hardware-rooted key management — did not mature until the late 2010s. The engineering is now ready. The foundational protocols exist. For the first time, it is possible to build a shared identity and trust layer that allows independent parties to verify each other directly, without surrendering their sovereignty to a platform.
But here the question becomes: to what standard should this trust layer be built?
The crypto industry answered this question by building for anonymous peer-to-peer exchange first, with compliance and institutional requirements treated as later additions. That choice permanently limited what their infrastructure can carry. You cannot retrofit accountability into a system designed for anonymity. You cannot bolt banking onto an architecture that was built to avoid banks.
My principle has been the opposite: if it is good enough for banks, it is good enough for everything.
Banking is the most demanding transactional environment in the economy. It requires eight properties.

Accountability to a liable legal entity. Every action traceable to an identifiable, legally accountable party, because if something goes wrong, someone must answer.

Hardware-rooted identity. Keys generated inside secure enclaves and hardware security modules, non-extractable, impossible to clone.

Deterministic finality. A transaction either completes atomically or doesn’t happen, with no probabilistic window where the outcome is uncertain.

Delegated authority with instant revocation. When the CFO is dismissed at 9:01 AM, their signing power evaporates at 9:01 AM, regardless of what keys they still hold.

Recourse and arbitration. Real finance demands the ability to unwind a transaction through a pre-agreed, deterministic process.

Regulatory observability. Supervisors can monitor in real time without accessing raw data, turning compliance from a quarterly reporting burden into a mathematical side effect of doing business.

Privacy by architecture. Ephemeral contexts, selective disclosure, nested verification for sensitive data — because banks face GDPR, bank secrecy laws, and fiduciary obligations that make broadcasting transactions a legal impossibility.

Money inside the regulatory perimeter. Deposits remain bank liabilities, the prudential framework survives digitization, and the credit creation capacity of the banking system is preserved.
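One of these properties — delegated authority with instant revocation — is worth a sketch, because it inverts the usual key-centric model. In this hypothetical design, holding a key never suffices; every action is checked against a live grant, so deleting the grant revokes the power at once, whatever keys the former holder still possesses. All names are illustrative.

```python
# Hypothetical registry: authority lives in grants, not in keys.

class AuthorityRegistry:
    def __init__(self):
        self._grants = {}                 # key_id -> set of delegated powers

    def delegate(self, key_id, powers):
        self._grants[key_id] = set(powers)

    def revoke(self, key_id):
        self._grants.pop(key_id, None)    # dismissed at 9:01, powerless at 9:01

    def authorize(self, key_id, action):
        return action in self._grants.get(key_id, set())

registry = AuthorityRegistry()
registry.delegate("cfo-key-01", {"sign_payment", "approve_hedge"})
registry.revoke("cfo-key-01")
print(registry.authorize("cfo-key-01", "sign_payment"))  # False: key intact, authority gone
```

Contrast this with a bearer-token or raw-keypair model, where revocation requires hunting down and rotating every credential the dismissed officer ever touched.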
These eight properties are challenging to build into a foundation. A simple e-commerce transaction doesn’t need all of them. A supply chain handshake might only need three or four. The temptation is always to start simple and add the hard properties later.
That temptation is a trap. Every system that started simple and promised to add banking-grade properties later has failed to do so. The architecture calcifies. The workarounds accumulate. The bolt-on compliance layer becomes its own source of complexity. The crypto industry has spent fifteen years demonstrating this failure mode in public.
Building to banking-grade from the beginning means over-engineering the infrastructure for the general case. And that over-engineering unlocks something extraordinary: banking dissolves into the fabric of every transaction. A supply chain purchase can embed a payment natively, because the settlement capability is already present in the protocol. An IoT device ordering a replacement part can settle atomically with its bank’s agent, because the trust layer already carries hardware-rooted identity, delegated authority, and deterministic finality. A mortgage can close in a single execution context — identity verification, credit assessment, title transfer, and settlement all happening in the same atomic event — because the infrastructure was designed from the start to carry all of them simultaneously.
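The single-execution-context closing reduces to an all-or-nothing composition. The step names below are invented for illustration; the point is structural: every step runs inside one context, and no effect becomes visible unless every step succeeds.

```python
# Illustrative all-or-nothing composition of closing steps.

def run_in_one_context(steps):
    effects = {}
    for name, check in steps:
        outcome = check()
        if outcome is None:
            return None            # one failure: nothing becomes visible
        effects[name] = outcome
    return effects                 # all effects become visible together

closing = [
    ("identity",   lambda: "verified"),
    ("credit",     lambda: "approved"),
    ("title",      lambda: "transferred"),
    ("settlement", lambda: "funds moved"),
]
print(run_in_one_context(closing))
```

A failed credit check does not leave a transferred title waiting to be unwound; the partial effects were never published in the first place.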
Banking becomes ubiquitous because the trust layer was built to banking’s standard. The bank doesn’t need to be invited into the transaction through a separate payment rail. The bank is already there, embedded in the infrastructure, ready to settle the moment the conditions are met.
This changes the architecture of transactions fundamentally. When participants can prove who they are, what they are authorized to do, and that their commitments are backed by verifiable evidence — all through open cryptographic protocols built to the highest institutional standard — the centralized intermediaries, all of them, lose their structural advantage. Independent systems can finally do what the 1997 patent envisioned: meet in a shared execution context, verify conditions deterministically, and reach an atomic outcome whose consistency is guaranteed. The deal and the settlement become a single event. And the settlement carries full banking-grade assurance, because the foundation was designed to provide it from day one.
6. Simplicity as Survival
The financial industry is about to encounter a selection pressure it has never faced. For decades, the competitive advantage went to the institution with the most sophisticated middleware, the most elaborate risk management apparatus, the most comprehensive reconciliation infrastructure. Scale was measured in the ability to manage complexity.
This logic is inverting. In a world where autonomous agents route around friction the way water routes around obstacles, the competitive advantage will go to the institution with the least complexity between intent and settlement. The agent doesn’t care about your middleware stack. It asks one question: can I get a deterministic answer in milliseconds? If your answer is “eventually,” the agent moves to the institution that answers “now.”
The reconciliation industry will be made irrelevant by an architecture that eliminates the need to reconcile. The correspondent banking chain will be bypassed by atomic settlement occurring in a digital meeting room between the participants. The clearinghouse will be rendered unnecessary by execution contexts where the transaction mathematically guarantees the settlement.
And all of it rests on a foundation that the internet should have had from the beginning: a shared, open, cryptographic layer for identity, authority, and trust — built to the most demanding standard in the economy, so that every other use case is covered by default.
The internet made the wrong architectural choice in the early 2000s, and the financial industry compounded the error by building an entire economy of complexity on top of it. Agentic AI is the force that finally makes that accumulated complexity unsustainable — because autonomous software has no patience for ambiguity, no tolerance for eventual consistency, and no interest in paying the reconciliation tax.
Simplicity is the secret ingredient of survival. The courage to stop selling the problem is the hardest part.
