Agentic AI ✦ Human-written

The Open Standard Problem

Enterprise AI buyers are signing contracts right now without asking the question that will most determine whether their investment ages well. That question is: what standard does this run on?

⚡ 60-Second Summary

The Model Context Protocol (MCP) is an open standard defining how AI agents connect to enterprise tools and data. Most enterprise buyers evaluating AI agent platforms aren't asking about it — and they should be. The case for MCP isn't idealism: it's audit trails you own, integrations that survive vendor changes, and the decades of enterprise software history that show proprietary infrastructure always costs more in the long run. One question to add to every AI vendor evaluation: what is your position on MCP?

Earlier this year I sat in a room at the Linux Foundation with infrastructure engineers, enterprise architects, and AI platform leads from about a dozen organisations. We were there to work on governance for the Model Context Protocol — MCP — a standard that defines how AI agents connect to the tools, data sources, and systems they need to do work.

I was there representing the practitioner's side of this conversation. The product marketing manager who has been in the room when enterprises decide to buy, expand, or abandon AI investments. The person who has spent the last three years watching the agentic AI market develop from inside one of its major vendors.

And sitting in that room, I kept thinking: the enterprise IT leaders I talk to every week do not know this conversation is happening. They are evaluating AI agent platforms right now — comparing vendors, approving budgets, signing contracts — and almost none of them are asking the question that will most determine whether their AI investment ages well or becomes a liability.

That question is: what standard does this run on?

The infrastructure question nobody is asking

Enterprise technology buying has a long memory. The CIOs I work with have been in this industry for twenty, sometimes thirty years. They remember what happened to organisations that standardised on proprietary integration middleware in the 1990s, only to spend the next decade paying whatever the vendor charged for a migration they couldn't afford to execute. They remember every category where a dominant early player established proprietary lock-in, and the long, expensive decade that followed for customers who had to extract themselves.

They have also watched what happens when a category standardises early on open protocols. Email. The web. REST APIs. Cloud infrastructure. In every case, standardisation on open protocols expanded the market, drove down costs, increased competition, and made the technology more durable for the organisations that adopted it.

The AI agent category is at exactly this inflection point right now. In 2026. Before the proprietary lock-in has fully set.

"The enterprises making AI investments today are deciding whether they will own their AI infrastructure or rent it indefinitely from whoever owns the protocol."

What MCP actually is and why it matters

MCP — the Model Context Protocol — is an open standard that defines how AI agents connect to external tools and data sources. Instead of each AI platform building its own proprietary integration layer with its own authentication methods, its own data formats, its own tool-calling conventions, MCP provides a shared language.

An agent that speaks MCP can connect to any MCP-compatible tool. A tool that exposes an MCP interface works with any MCP-compatible agent. The integration logic — the wiring between the AI and the thing it needs to interact with — becomes interoperable instead of locked to a specific vendor.
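
Under the hood, MCP is JSON-RPC 2.0. Here is a minimal sketch of what a tool invocation looks like on the wire, written as a Python dict. The method name comes from the MCP spec; the tool name and arguments are purely illustrative:

    # The shared language in practice: every MCP-compatible agent invokes
    # every MCP-compatible tool with this same message shape. "tools/call"
    # is defined by the MCP spec; the tool and arguments are illustrative.
    tool_call = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "lookup_customer",               # illustrative tool name
            "arguments": {"customer_id": "C-1042"},  # illustrative arguments
        },
    }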

A shared wire format can sound like a minor detail until you look at what enterprise AI deployment actually involves.

A production agentic workflow in 2026 typically connects to five to fifteen enterprise systems. CRM. ERP. ITSM. Document management. Communication platforms. In some industries, highly specific systems — clinical record systems in healthcare, trading platforms in financial services, case management in legal and compliance.

Without a shared standard, each AI agent platform has to build and maintain its own integration to each of these systems. The enterprise has to maintain the mappings between whatever format their AI vendor uses and whatever format their enterprise systems expect. When they want to switch AI vendors — or add a second one for a different use case — they rebuild all of those integrations.

With MCP, an enterprise that has built MCP-compatible connectors to its systems owns those connectors. They work with any MCP-compatible AI agent, from any vendor. The investment in integration does not belong to the AI vendor. It belongs to the enterprise.
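
What ownership means in practice: a connector is just a small program that speaks the standard. A minimal sketch, assuming the official MCP Python SDK (pip install mcp) and a placeholder in place of a real CRM call:

    # An enterprise-owned MCP connector, sketched with the official MCP
    # Python SDK's FastMCP helper. The CRM lookup is a placeholder; the
    # point is that any MCP-compatible agent, from any vendor, can use it.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("crm-connector")

    @mcp.tool()
    def lookup_customer(customer_id: str) -> dict:
        """Fetch a customer record from the CRM by ID."""
        # Placeholder: in production this would call your CRM's API.
        return {"customer_id": customer_id, "status": "active"}

    if __name__ == "__main__":
        mcp.run()  # serves the connector over stdio by default

Swap the AI vendor and this file does not change. That is the whole argument in a dozen lines.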

The compliance argument enterprise buyers are not hearing

Here is the argument I do not hear AI vendors making clearly enough, and that enterprise buyers are not demanding loudly enough.

Regulated industries — financial services, healthcare, pharmaceuticals, government, utilities — are not just buying AI for efficiency. They are buying AI that must be auditable, governable, and explicable to regulators, auditors, and their own risk and compliance functions.

What does auditability require? A clear record of every decision an agent made and every tool call it executed to arrive at that decision. What system did the agent query? What data did it receive? What did it do with that data? At what point was a human involved? These are not optional capabilities for regulated enterprises. They are table stakes for deployment.

A proprietary integration layer makes this harder, not easier. The audit trail for an agent operating through proprietary tooling is partially owned by the vendor. The vendor decides what gets logged, what format the logs are in, how long they are retained, and who has access to them. The enterprise is auditing through a lens that someone else controls.

An open standard makes this tractable. When every tool interaction goes through a defined protocol with a defined schema, the audit trail is implementation-independent. The enterprise can own it, export it, format it for their compliance systems, and defend it to regulators without negotiating with a vendor about data access.
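
To sketch what implementation-independence buys you: because every tool interaction has a defined shape, the enterprise can capture each one in a schema it controls. The fields below are assumptions for illustration, not part of the MCP spec:

    # An illustrative, enterprise-owned audit record for agent tool calls.
    # The schema is an assumption for this sketch, not defined by MCP; the
    # point is that the format, retention, and access belong to you.
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class ToolCallAuditRecord:
        timestamp: str               # when the call happened (ISO 8601)
        agent_id: str                # which agent acted
        tool_name: str               # which MCP tool it invoked
        arguments: dict              # what the agent asked for
        result_summary: str          # what came back
        human_approver: str | None   # who, if anyone, approved the step

    def log_tool_call(record: ToolCallAuditRecord, sink) -> None:
        """Append one record as a line of JSON to a sink the enterprise owns."""
        sink.write(json.dumps(asdict(record)) + "\n")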

"An enterprise that cannot independently audit its AI agent's decisions does not have AI governance. It has a vendor relationship."

The practitioner's case against the proprietary stack

I want to be specific about what I am not arguing. I am not saying every AI vendor should open-source everything. Proprietary model weights, proprietary training approaches, proprietary fine-tuning — these are legitimate areas of competitive differentiation. The intelligence of an AI system can and should be a competitive moat.

The integration layer is different. The plumbing through which the AI connects to the rest of your enterprise is not a reasonable area of vendor lock-in. It is infrastructure. And infrastructure should be owned by the organisation running on it.

The software industry learned this lesson with databases in the 1980s, with web protocols in the 1990s, with cloud APIs in the 2000s. The organisations that built on proprietary infrastructure had faster initial deployment but less long-term agility. The organisations that pushed for open standards had slower initial deployment and dramatically better long-term positioning.

We are at the same inflection point with AI agent infrastructure. The vendors building proprietary tool-calling protocols, proprietary agent-to-agent communication standards, proprietary orchestration formats — they are betting that lock-in today is worth the customer resentment tomorrow. Some will win that bet in the short term.

The enterprises most sophisticated about long-term infrastructure strategy are the ones asking for MCP compatibility before they sign. Not because MCP is perfect — it is an early standard and there will be iterations — but because the commitment to an open standard signals something about the vendor's long-term relationship with their customers.

What I am watching for in 2026

Three things will determine whether MCP becomes the durable standard for enterprise AI integration or gets displaced by something else.

First: enterprise adoption velocity. MCP needs to be in production deployments at enough large enterprises that the network effects of the standard become self-reinforcing. Every enterprise system that ships an MCP-compatible interface makes the standard more valuable. The Agentic AI Infrastructure Foundation's role at the Linux Foundation is to accelerate this — working with enterprise tool vendors, system integrators, and AI platforms to drive MCP compatibility as a baseline expectation, not a differentiator.

Second: security and governance extensions. The base MCP spec defines tool-calling semantics. What it does not yet define is the full governance layer — granular permissions, audit log format, human approval workflows, rate limiting, and the controls that regulated enterprises require before putting an MCP-compatible agent into a production process that touches sensitive data. This work is in progress. Its quality will determine whether MCP is viable for the industries where governance requirements are most demanding.
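
To be concrete about the gap, here is a purely hypothetical sketch of the kind of control that governance layer needs to define. Nothing below exists in the MCP spec today; every name is invented for illustration:

    # Hypothetical only: none of this is defined by the MCP spec today.
    # It sketches the governance layer described above: a permission check
    # and a human approval gate wrapped around an agent's tool call.
    ALLOWED_TOOLS = {
        "analyst": {"lookup_customer"},
        "admin": {"lookup_customer", "close_account"},
    }
    SENSITIVE_TOOLS = {"close_account"}  # require human sign-off

    def governed_call(role, tool_name, arguments, call_tool, request_approval):
        """Enforce permissions and approvals before executing a tool call."""
        if tool_name not in ALLOWED_TOOLS.get(role, set()):
            raise PermissionError(f"role {role!r} may not call {tool_name!r}")
        if tool_name in SENSITIVE_TOOLS and not request_approval(tool_name, arguments):
            raise RuntimeError(f"human approval denied for {tool_name!r}")
        return call_tool(tool_name, arguments)  # the underlying MCP call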

Third: the browser moment. There was a point in web history where browsers competing on proprietary HTML extensions gave way to serious standards work that made the web durable. Something similar will happen in the AI agent space. The question is whether it happens before or after a significant portion of enterprise investment is stranded in proprietary integrations.

I would rather be in the room making the case for open standards before that stranding happens.

What to ask before you sign

If you are an enterprise technology leader evaluating AI agent platforms in 2026, one question that should be in every vendor conversation: what is your position on the Model Context Protocol?

A vendor that has shipped MCP compatibility or has it on a dated roadmap is signalling something. They have accepted that the integration layer is not their competitive moat. They are betting that their model quality, their product experience, and their enterprise support are sufficient competitive differentiation — which is a healthier basis for a long-term vendor relationship.

A vendor that does not have a position on MCP, or that argues their proprietary integration approach is superior, is also signalling something. They may be right that their current integration approach is better in some dimension. But they are betting that the value of lock-in to them is worth the cost to you in lost portability.

That is a bet that enterprise software customers have repeatedly made and repeatedly regretted. The enterprises that move fastest in any technology category are the ones that invest in infrastructure they own. Open standards are how you own infrastructure in a world where the intelligence layer is someone else's IP.

MCP is not a magic answer. It is an early standard that will evolve. But the commitment to an open integration layer — from vendors, from enterprises, from the infrastructure organisations doing the governance work — is the commitment that separates a durable AI investment from a very expensive pilot.


Kuber Sharma leads platform product marketing at UiPath and is a representative to the Agentic AI Infrastructure Foundation at the Linux Foundation, where he contributes to MCP governance and enterprise adoption. He writes Positioned, a newsletter on AI-era product marketing for enterprise PMMs.
