Why is the MCP Standard So Important for AI Agents? It’s All About Openness and Interoperability

Understanding the MCP Protocol for Enterprises and Regulated Industries

Published Nov 11, 2025

Executive Summary: Why MCP Matters for Your Enterprise, Especially in Regulated Industries

The Model Context Protocol (MCP) addresses a critical business problem: until now, enterprises seeking to utilize AI with their private data have been limited to using only one vendor's tools and AI models. This "walled garden" approach meant enterprises couldn't freely choose the best AI solutions for their needs or easily switch between providers. MCP changes this by creating an open standard that works across all major AI platforms—think of it as the equivalent of HTTP for the early internet, which replaced incompatible proprietary systems and enabled universal connectivity. Within just six months of release, MCP has been adopted by industry giants including OpenAI, Microsoft, Google, and dozens of major technology companies, signaling a fundamental shift in how AI will integrate with enterprise systems.

For enterprises, this means freedom, flexibility, and future-proofing. You can now connect your AI applications to real-time data sources, proprietary databases, and specialized tools without being tied to a single AI vendor. More importantly, you can make your enterprise systems accessible to non-technical employees through natural language—managers and analysts can query complex systems using plain English rather than requiring specialized technical skills. This dramatically lowers the barrier to AI adoption, reduces vendor lock-in risk, and allows you to choose the best AI models and tools for each specific business need rather than being constrained by what one vendor offers.

Introducing the Model Context Protocol (MCP)

It’s remarkable to see a single standard for creating and developing AI agents emerge in a matter of months. Of course, we're all pretty familiar with the core reason why AI agents exist in the first place: tools such as ChatGPT, Anthropic Claude, and other LLMs are really great at telling you about (or creating things from) information that is publicly available, like everything on the Internet.

However, since the public release of ChatGPT a few years ago, businesses have figured out that what they REALLY want is to use AI with their own private data repositories. This is why the term "AI in the enterprise" was coined.

As a result, each of the major players in the industry, of course, decided to create their own AI agent libraries and APIs that conveniently worked with their OWN language models. OpenAI leads the market with its Assistants and Responses APIs, featuring a few built-in tools, including Code Interpreter and File Search. Google offers its Vertex AI Agent Builder, and Amazon offers Bedrock Agents with native AWS integration.

Of course, this is the typical "walled-garden" business strategy, where you only offer tools that are compatible with YOUR technologies. Therefore, you couldn't (heaven forbid) use the OpenAI Agents SDK for Python to call a model from Anthropic Claude or Google Gemini — sorry, but the walled garden business strategy won't allow for those kinds of things.

Regulated enterprises deploying AI face a critical constraint: vendor lock-in combined with compliance risk.

So, late last year, the team at Anthropic created an open standard for AI agents known as the Model Context Protocol (MCP). One of the reasons why MCP is constantly appearing in the headlines today is how quickly the industry and the AI community adopted this new standard. On GitHub (and you can go check this out for yourself), the project passed 50K stars within six months of release. This, of course, places it among the most-starred projects on GitHub, which is an impressive feat, considering that many popular projects take over a decade to achieve similar results. So the real questions are, "why" and "how" did this happen?

Well, for some of us, we've seen this movie before

I can honestly say that many of us in the industry have "seen this movie before" when it comes to major players creating walled gardens of tools that work exclusively with their own technologies. Do you remember the early days of the Internet and the World Wide Web, before the HTTP and HTML standards existed?

In those days, the Internet service providers (ISPs) had the brilliant idea of creating walled gardens of content and compatibility. I'm talking about companies such as America Online (AOL), CompuServe, and Prodigy. Guess what all these companies had and did in common?

  • They all provided Internet access

  • They all created their own client apps (web browsers)

  • They all hosted a "mini-Internet" on their own servers

  • They all created their own hyperlinking system and network addressing

And they all failed.

None of their systems were compatible with their competitors', and once the open standards of HTTP and HTML were released, the whole world embraced them. All of the previously mentioned companies were multimillion-dollar businesses, yet none of them survive in their original form today. The open standards of HTTP and HTML were not only technology disruptors; they were business model killers.

Therefore, since many of us have "seen this movie before," we're pretty well aware that open standards that allow for interoperability will be the dominant force compared to any proprietary closed standard within a walled garden.

So how does MCP work?

Well, it's actually a lot easier to show you than to tell you. The image below shows me attempting to find the weather forecast for Seattle, Washington, using Anthropic Claude Desktop, an MCP client app.

I'm not using any built-in tools or integrations because, for obvious reasons, I want to prove that the AI model (in this case, Claude Sonnet 4) only provides information up to its training cutoff date, which it states is January 2025.

Now, I’m going to ask the exact same question, but in this case, I'm going to enable my own MCP server that returns the weather. Here's what it looks like to activate an MCP server.

All you need to do is click the “settings” button, and you can see all the readily available “integrations”, such as web search and pre-made Google integrations for Google Drive, Gmail, and Calendar. Since the term “Model Context Protocol” is quite technical, Anthropic decided to refer to external agent services as “integrations,” which is a more consumer-friendly name. As you examine things further, you’ll also see that “weather” is included in the list, but separated from the others because it’s a custom MCP server that I created. So, let's ask the same question again, now that my Weather MCP server has been enabled.

Great, it’s 85°F today, so it’s going to be a nice day. But what does this mean? First of all, we already knew that the AI model was trained on information up to January 2025. Therefore, my question about the weather (this time) prompted the model to ask my MCP Server for the current temperature in Seattle.

MCP is Very Similar to HTTP, the Protocol We Know and Love

So, let's get back to the original question. How does MCP work? Just as the HTTP protocol has clients and servers, the MCP specification has a similar structure. An MCP client (like Claude Desktop, which is what I’ve been using) can discover and call any compatible MCP server.
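To make the client/server analogy concrete: MCP messages are exchanged as JSON-RPC 2.0, with the client sending requests like `tools/call` to the server (per the MCP specification). The sketch below, using only Python's standard library, shows roughly what such a request looks like on the wire; the tool name `get_forecast` and its arguments are invented for illustration, not part of any real server.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP-style tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",          # JSON-RPC protocol version
        "id": request_id,          # lets the client match the response
        "method": "tools/call",    # MCP request type for invoking a tool
        "params": {"name": tool, "arguments": arguments},
    })

# Example: a client asking a weather server for Seattle's forecast
request = make_tool_call(1, "get_forecast", {"city": "Seattle"})
print(json.loads(request)["method"])  # tools/call
```

The server replies with a matching JSON-RPC response carrying the tool's result, which the client hands back to the model as additional context.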

At a high level, MCP servers function like browser plug-ins for regular web browsers. Browser plug-ins extend the capabilities of your default, out-of-the-box web browser. For example, most web browsers have a basic spellchecker for common dictionary words, and if you spell something the wrong way, you'll see a red squiggly line under the misspelled word. However, they don't check your grammar or offer tools to help improve your writing. Now, if you install and enable, for example, the Grammarly browser plug-in, you can type into any form on the internet and have your grammar checked instantly before saving or submitting anything.

Well, MCP servers work similarly; they extend the capabilities of an AI model beyond what it already knows, based on its training data. Therefore, you can add real-time information, such as weather, stocks, live traffic, sports, or any other relevant data. The whole point here is that you're able to use these Large Language Models and converse with them beyond their knowledge cutoff date.
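The pattern on the server side is simple: the server advertises a set of named tools, and the client calls them by name with arguments. The stdlib-only toy below illustrates just that dispatch idea; a real MCP server would speak JSON-RPC via an MCP SDK, and the `get_forecast` tool with its canned reply is made up for this example.

```python
# Toy illustration of the MCP server pattern: a registry of named tools
# that a client can list and call. A real server would use an MCP SDK
# and live data; this sketch only shows the dispatch idea.
TOOLS = {}

def tool(fn):
    """Register a function as a callable tool, keyed by its name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_forecast(city: str) -> str:
    # A real weather server would query a live weather API here.
    return f"Forecast for {city}: 85°F and sunny (canned demo data)"

def list_tools():
    """What a client sees when it asks the server which tools exist."""
    return sorted(TOOLS)

def call_tool(name: str, arguments: dict) -> str:
    """Dispatch a tools/call-style request to the matching handler."""
    return TOOLS[name](**arguments)

print(list_tools())                                   # ['get_forecast']
print(call_tool("get_forecast", {"city": "Seattle"}))
```

When Claude Desktop sees a question its training data can't answer, it picks a suitable tool from this advertised list, calls it, and folds the result into its reply.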

Who is Supporting MCP Right Now?

First of all, you may be surprised to know that after Anthropic released the MCP specification last year, it was quickly adopted by OpenAI, Microsoft, and Google. So, why would these companies, which are normally competitors in the AI arena, all decide to support this open standard and open protocol? Needless to say, some smart people at these companies have also "seen this movie before" and understand that proprietary systems within walled gardens lose every time against open standards.

However, industry support has become even more astounding, as major technology companies, including Stripe, PayPal, Square, Slack, HubSpot, Cloudflare, GitHub, and Figma, have already released products that support the MCP standard. Again, we’re talking about a spec that is less than a year old.

Interestingly enough, at the Build 2025 conference, Microsoft announced comprehensive MCP integration into Windows 11 itself. This means that literally billions of devices will be able to run MCP servers once the rollout is fully complete worldwide.

How Is Axoniq Leveraging MCP?

The MCP protocol is embedded within the Axoniq Platform and plays a vital role: it exposes our foundational and intelligence layers, enabled by Event Sourcing, as agentic services that our customers can use to build intelligent applications and services.

As an example, we recently released an early access preview of the Axoniq Insights product. Using Axoniq Insights, your managers, data analysts, and data engineers can efficiently execute analytical queries using SQL against their entire event store. Yes, that’s right - SQL in your event store. But there’s even more you can do, because Axoniq Insights allows users of any skill set to run natural language queries against the events stored in Axon Server. We’ve completely lowered the barrier to entry for Event Sourcing: now any authorized user can ask any question about what’s going on in their system. There are no stupid questions anymore when it comes to your own applications and services.

These capabilities become even more powerful when they are leveraged externally by custom agents and workflows. Axoniq Insights accomplishes this because of our support for MCP. This means that:

  • Third-party applications, such as Claude Desktop, can invoke the query MCP tool with a user’s natural language question and receive a set of results that Claude can further analyze and visualize.

  • An AI-powered workflow can similarly register recurring SQL queries and invoke them via an MCP tool as it runs.

And this is just the beginning.

We’re adding MCP-powered interfaces for system ops, application health, development tools, and more.

The Bottom Line

MCP isn’t just another integration layer. It’s the gateway to real AI interoperability, where agents work together, across platforms, without silos.

And for companies like Axoniq, it unlocks powerful new ways to make enterprise systems explainable, traceable, and safe to automate.

Join the Thousands of Developers

Already Building with Axon in Open Source
