
Model Context Protocol — The Quiet Standard That Could Reshape AI Tooling

Osmond van Hemert

While the tech press has been focused on model sizes and benchmark scores, something potentially more important has been quietly gaining momentum: Anthropic’s Model Context Protocol (MCP). Announced late last year as an open standard, MCP is starting to see real adoption — and it could fundamentally change how we build AI-powered applications.

What MCP Actually Is

At its core, the Model Context Protocol defines a standardised way for AI models to interact with external tools and data sources. Think of it as a universal adapter layer between an LLM and the world outside its training data.

Before MCP, every AI integration was bespoke. Want your AI assistant to query a database? Write a custom function. Want it to search your codebase? Build another integration. Want it to interact with your project management tool? Yet another custom adapter. Each AI platform has its own approach: OpenAI has function calling, Anthropic has tool use, Google has function declarations — all similar in concept but different in implementation.

MCP proposes a different model: define your tools and data sources once, using a standard protocol, and any MCP-compatible AI client can use them. It’s the same pattern we’ve seen succeed in other domains — LSP (Language Server Protocol) standardised how editors talk to language tooling, and it transformed the developer tools landscape. MCP aims to do the same for AI integrations.

The Architecture

MCP follows a client-server architecture. MCP servers expose capabilities — tools, resources (data), and prompts — through a standardised JSON-RPC interface. MCP clients (typically AI applications or agents) connect to these servers and can discover and invoke their capabilities.
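
To make the JSON-RPC framing concrete, here is roughly what the exchange looks like on the wire, written out as Python literals. The method names (`tools/list`, `tools/call`) come from the protocol; the `query_database` tool and its arguments are made-up placeholders.

```python
# Illustrative message shapes only; the tool and its arguments are hypothetical.

# The client asks a server what it offers:
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# The server answers with tool descriptions the model can read:
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "query_database",
                "description": "Run a read-only SQL query",
                "inputSchema": {
                    "type": "object",
                    "properties": {"sql": {"type": "string"}},
                },
            }
        ]
    },
}

# When the model decides to use that tool, the client sends:
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}
```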

The protocol supports multiple transport mechanisms, including stdio (for local integrations) and HTTP with Server-Sent Events (for remote services). A typical setup might look like:

AI Application (MCP Client)
    ├── MCP Server: File System Access
    ├── MCP Server: Database Queries
    ├── MCP Server: Git Operations
    └── MCP Server: API Integration

Each server is a relatively simple program that exposes its capabilities in a structured format. The AI model receives descriptions of available tools and can decide when and how to use them based on the user’s request. This is where it differs from traditional API integration — the AI has agency in choosing which tools to invoke and how to compose them.
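
Here is a rough client-side sketch of that flow using the Python SDK (the `mcp` package): launch a local server over stdio, discover its tools, and invoke one. The `server.py` script and the `search_code` tool are hypothetical; the calls follow the SDK's documented quickstart, so treat it as a sketch rather than a reference implementation.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch a local MCP server as a subprocess and talk to it over stdio.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the server's tools; these descriptions are what the
            # model sees when it decides which tool to invoke.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

            # Call a tool by name (a hypothetical codebase-search tool here).
            result = await session.call_tool("search_code", {"query": "TODO"})
            print(result.content)


asyncio.run(main())
```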

The SDKs are available in TypeScript and Python, making it straightforward to build both servers and clients. I’ve been experimenting with building a few MCP servers, and the developer experience is genuinely good — you can have a working server exposing custom tools in under an hour.
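
As a taste of that developer experience, here is a minimal server sketch using the Python SDK's FastMCP helper. The `search_code` tool is only an illustration of exposing a custom capability; error handling and pagination are left out.

```python
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("codebase-tools")


@mcp.tool()
def search_code(query: str, root: str = ".") -> str:
    """Return up to 50 lines under `root` that contain `query`."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if query in line:
                hits.append(f"{path}:{lineno}: {line.strip()}")
    return "\n".join(hits[:50]) or "no matches"


if __name__ == "__main__":
    # Defaults to the stdio transport, so any MCP client can launch this
    # script as a subprocess and start calling the tool.
    mcp.run()
```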

Why Adoption Is Picking Up

Several factors are driving MCP adoption right now. First, Anthropic open-sourced the specification and reference implementations under a permissive license, removing the vendor lock-in concern that so often kills standards pushed by a single company.

Second, developer tool makers are starting to integrate MCP natively. Cursor, the AI-powered code editor, added MCP support, which means you can extend its AI capabilities with custom tools without waiting for the Cursor team to build specific integrations. Other development tools are following suit.

Third, the community has been prolific. There are already MCP servers for databases (PostgreSQL, SQLite), cloud platforms (AWS), version control (Git, GitHub), file systems, and dozens of other tools. The ecosystem is growing in that organic, bottom-up way that characterises successful open standards.

Implications for Developers

If MCP succeeds as a standard, it changes the calculus for AI integration in several ways.

Build once, use everywhere. Instead of building separate integrations for each AI platform, you build an MCP server for your tool or service, and it works with any MCP-compatible client. This is especially valuable for internal tools — instead of building a ChatGPT plugin AND a Claude integration AND a custom solution, you build one MCP server.

Composability. Because MCP servers are independent processes, you can mix and match them. Need an AI agent that can search your codebase, query your monitoring system, and create Jira tickets? Connect three MCP servers and the AI can orchestrate across all of them. This composability is powerful for building complex workflows without monolithic integration code.
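
A sketch of what that orchestration layer might look like from the client side, again with the Python SDK: connect to several servers, collect their tool names into one catalogue, and hand that catalogue (with each tool's description and schema) to the model so it can pick the right tool regardless of which server hosts it. The three server commands below are hypothetical stand-ins.

```python
import asyncio
from contextlib import AsyncExitStack

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical local servers; swap in whichever servers you actually run.
SERVERS = {
    "code": StdioServerParameters(command="python", args=["code_search_server.py"]),
    "metrics": StdioServerParameters(command="python", args=["metrics_server.py"]),
    "jira": StdioServerParameters(command="python", args=["jira_server.py"]),
}


async def build_tool_catalogue() -> dict[str, list[str]]:
    """Connect to every server and collect its tool names into one catalogue."""
    catalogue: dict[str, list[str]] = {}
    async with AsyncExitStack() as stack:
        for name, params in SERVERS.items():
            read, write = await stack.enter_async_context(stdio_client(params))
            session = await stack.enter_async_context(ClientSession(read, write))
            await session.initialize()
            tools = await session.list_tools()
            catalogue[name] = [t.name for t in tools.tools]
    return catalogue


print(asyncio.run(build_tool_catalogue()))
```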

Security boundaries. Each MCP server runs in its own process with its own permissions. This is architecturally cleaner than giving an AI model direct access to everything — you can control what each server exposes and audit its usage independently. The protocol includes capability negotiation, so clients and servers can agree on what operations are permitted.
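
For example, a server can enforce its own boundary regardless of what the client or the model asks for. A minimal sketch, assuming a hypothetical read-only documentation server whose only permitted root is a single directory:

```python
from pathlib import Path

from mcp.server.fastmcp import FastMCP

# The only directory this server is permitted to read from (illustrative).
ALLOWED_ROOT = Path("/srv/docs").resolve()

mcp = FastMCP("readonly-docs")


@mcp.tool()
def read_doc(relative_path: str) -> str:
    """Read a file, but only from inside ALLOWED_ROOT."""
    target = (ALLOWED_ROOT / relative_path).resolve()
    if not target.is_relative_to(ALLOWED_ROOT):
        raise ValueError("path escapes the allowed directory")
    return target.read_text()


if __name__ == "__main__":
    mcp.run()
```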

Local-first development. The stdio transport means MCP servers can run entirely on your local machine with no cloud dependency. This is important for sensitive codebases and development environments where sending data to external services isn’t acceptable.

Challenges and Open Questions

MCP isn’t without challenges. The security model, while better than “give the AI your API key,” still needs maturation. When an AI agent can dynamically discover and invoke tools, the attack surface is broad. Malicious MCP servers, prompt injection through tool outputs, and privilege escalation through tool composition are all concerns that the community is actively working on.

There’s also the adoption chicken-and-egg problem. MCP is most valuable when there’s a rich ecosystem of servers and clients, but developers won’t build servers until there are enough clients, and vice versa. Anthropic’s integration in Claude Desktop and the Cursor adoption help bootstrap this, but it needs more momentum.

Performance is another consideration. Each tool invocation adds latency — the AI decides to use a tool, sends a request to the MCP server, waits for a response, and then incorporates the result. For interactive applications, this round-trip overhead can affect the user experience. Server implementations need to be fast, and clients need to handle async tool calls gracefully.
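
One mitigation on the client side is to overlap independent round-trips rather than serialising them. A small sketch, assuming two already-initialised `ClientSession` objects (as in the earlier examples) and two hypothetical tools:

```python
import asyncio

from mcp import ClientSession


async def fetch_context(code_session: ClientSession, metrics_session: ClientSession):
    """Issue independent tool calls concurrently so their round-trips overlap."""
    code_task = code_session.call_tool("search_code", {"query": "checkout"})
    metrics_task = metrics_session.call_tool("error_rate", {"service": "checkout"})
    return await asyncio.gather(code_task, metrics_task)
```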

My Take

I’ve seen enough technology cycles to be cautious about “universal standards” — for every LSP success story, there are a dozen standards that never achieved critical mass. But MCP has several things going for it: a clear problem statement, a well-designed protocol, good reference implementations, and backing from a major AI company that’s committed to keeping it open.

What excites me most is the potential to make AI integration a first-class part of the developer experience rather than an afterthought. Right now, connecting AI to your specific tools and data is still too much friction. If MCP can reduce that friction to “install an MCP server and it just works,” it’ll unlock a lot of practical AI applications that are currently too expensive to build.

Whether Anthropic’s protocol becomes the standard or merely inspires a better one, the direction is right. We need standardised ways for AI models to interact with the world, and MCP is the most credible attempt I’ve seen so far.

Part of my Developer Landscape series, exploring the tools and trends shaping how we build software.
