Model Context Protocol (MCP) Explained: The USB-C of AI Agents
The Model Context Protocol (MCP) is an open standard that gives AI agents a universal way to connect to tools, APIs, and data sources. Think of it as USB-C for AI — one protocol to replace dozens of custom integrations.
The Model Context Protocol — MCP for short — is quietly becoming the most important infrastructure standard in the AI agent ecosystem. Introduced by Anthropic in late 2024 and now adopted by OpenAI, Microsoft, Cursor, and dozens of other platforms, MCP solves a problem every engineering team building with AI agents has encountered: how do you connect a language model to the tools and data it needs without writing bespoke integration code for every single combination?
If you have built an AI agent that needs to read from a database, call an API, search files, or interact with a SaaS tool, you know the pain. Each integration is a one-off. Each requires custom authentication handling, schema definitions, error mapping, and maintenance. MCP eliminates that fragmentation with a single open protocol. This article breaks down what MCP is, how it works under the hood, and how to start using it in production.
The Problem MCP Solves: Fragmented Tool Access
Before MCP, connecting an AI agent to external tools meant building point-to-point integrations. Want your agent to query PostgreSQL? Write a custom tool function. Need it to also search Jira? Another custom function. Slack notifications? One more. Each tool integration required:
- A unique schema definition for the LLM to understand the tool
- Custom authentication and credential management
- Error handling specific to that tool's failure modes
- Ongoing maintenance as APIs change
For a single agent with three tools, this is manageable. For an organization running dozens of agents across multiple teams, it becomes an integration nightmare. The M×N problem emerges: M agents times N tools means M×N custom integrations.
MCP collapses this to M+N. Each agent implements the MCP client protocol once. Each tool implements the MCP server protocol once. Any client can connect to any server. This is why people call it the "USB-C of AI" — before USB-C, every device had its own charger. MCP does for AI tool access what USB-C did for hardware connectivity.
How MCP Works: Architecture and Protocol
MCP is built on JSON-RPC 2.0, a lightweight remote procedure call protocol that has been battle-tested for over a decade. The architecture has three components:
MCP Hosts
The host is the application the user interacts with — Claude Desktop, VS Code with Copilot, Cursor, or your own custom AI application. The host manages the lifecycle of MCP connections and provides the interface between the user and the AI model.
MCP Clients
Each host contains one or more MCP clients. A client maintains a 1:1 connection with a single MCP server. The client handles protocol negotiation, capability exchange, and message routing. When the AI model decides it needs to use a tool, the client translates that request into an MCP message and sends it to the appropriate server.
MCP Servers
Servers are where the actual tool logic lives. An MCP server exposes capabilities — tools, resources, and prompts — over the protocol. A server might wrap a database, a REST API, a file system, or any other data source. Servers can run locally as a subprocess (using stdio transport) or remotely over HTTP with Server-Sent Events (SSE) for streaming.
The communication flow looks like this:
User -> Host -> AI Model -> MCP Client -> MCP Server -> External Tool

with results flowing back along the same path in reverse. When the model determines it needs external data or wants to perform an action, it requests a tool call. The client routes this to the correct server, the server executes the operation, and the result flows back to the model for incorporation into its response.
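On the wire, that round trip is a pair of JSON-RPC 2.0 messages. As a sketch of a `tools/call` exchange (method and field names follow the MCP specification; the `id`, tool name, and payload values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "city": "Berlin" }
  }
}
```

And the server's reply, carrying the tool result back to the client:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "result": {
    "content": [{ "type": "text", "text": "Berlin: 21C, Partly cloudy" }]
  }
}
```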
Key Concepts: Tools, Resources, and Prompts
MCP servers expose three types of capabilities, each serving a different purpose:
Tools
Tools are executable functions the AI model can invoke. They represent actions — querying a database, sending a message, creating a file, calling an API. Tools are "model-controlled," meaning the AI decides when and how to use them based on the conversation context.
A tool definition includes a name, description, and an input schema (JSON Schema format). For example, a GitHub MCP server might expose a `create_issue` tool with parameters for repository, title, body, and labels.
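As a sketch, the wire-format definition for such a tool might look like this (the `name`/`description`/`inputSchema` structure follows the MCP tool schema; the specific parameters are illustrative):

```json
{
  "name": "create_issue",
  "description": "Create a new issue in a GitHub repository",
  "inputSchema": {
    "type": "object",
    "properties": {
      "repo": { "type": "string", "description": "Repository in owner/name form" },
      "title": { "type": "string" },
      "body": { "type": "string" },
      "labels": { "type": "array", "items": { "type": "string" } }
    },
    "required": ["repo", "title"]
  }
}
```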
Resources
Resources are data the server makes available for the model to read. Unlike tools, resources are typically "application-controlled" — the host application decides when to include them in context. Think of resources as files, database records, API responses, or any structured data the model might need as background context.
Resources are identified by URIs (e.g., `file:///path/to/doc.md` or `postgres://db/users/schema`). They can be static or dynamic, and servers can notify clients when resources change.
Prompts
Prompts are reusable templates that servers can offer to guide specific workflows. A database MCP server might provide an `analyze_table` prompt template that structures how the model should approach data analysis. Prompts are "user-controlled" — typically selected explicitly by the user from a menu or slash command.
Adoption: Who Supports MCP Today
MCP's adoption curve has been remarkably steep. Within 18 months of Anthropic's initial release, the protocol has become a de facto standard:
- **Anthropic** — Claude Desktop and the Claude API natively support MCP. Claude Code uses MCP servers extensively for file system access, code search, and tool integration.
- **OpenAI** — Added MCP support to the Agents SDK and ChatGPT desktop app in early 2025, signaling that MCP is not just an Anthropic play but an industry standard.
- **Microsoft** — VS Code, GitHub Copilot, and Azure AI Foundry all support MCP connections, making it the default way to extend Copilot's capabilities.
- **Cursor** — One of the earliest adopters, Cursor's MCP support lets developers connect their editor-based AI to any MCP server.
- **Community ecosystem** — Thousands of open-source MCP servers exist on GitHub and npm, covering everything from databases (PostgreSQL, MongoDB, SQLite) to SaaS tools (Jira, Slack, GitHub, Linear) to infrastructure (AWS, Kubernetes, Docker).
The network effect is real. As more servers are built, the protocol becomes more valuable for client implementers, which drives more server development. This flywheel is what separates MCP from previous attempts at standardizing AI tool access.
MCP vs. A2A: Complementary Protocols
Google introduced the Agent2Agent (A2A) protocol around the same time MCP was gaining traction, which caused some confusion. The two protocols solve different problems and are complementary, not competing.
**MCP** governs the connection between an AI agent and its tools. It is about giving a single agent access to external capabilities — databases, APIs, file systems. Think of MCP as how an agent uses tools.
**A2A** governs communication between multiple AI agents. It handles agent discovery, task delegation, progress reporting, and result aggregation across a multi-agent system. Think of A2A as how agents talk to each other.
In a production multi-agent system, you would use both: A2A for inter-agent coordination and MCP for each agent's connection to its tools. They operate at different layers of the stack.
Building an MCP Server: A Practical Example
Building an MCP server is straightforward. The official SDKs (TypeScript and Python are the most mature) handle protocol details so you can focus on tool logic. Here is a minimal TypeScript example of an MCP server that exposes a weather lookup tool:
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Supply your own weatherapi.com key via the environment
const API_KEY = process.env.WEATHER_API_KEY;

const server = new McpServer({
  name: "weather-server",
  version: "1.0.0",
});

server.tool(
  "get_weather",
  "Get current weather for a city",
  { city: z.string().describe("City name") },
  async ({ city }) => {
    const response = await fetch(
      `https://api.weatherapi.com/v1/current.json?key=${API_KEY}&q=${encodeURIComponent(city)}`
    );
    const data = await response.json();
    return {
      content: [{
        type: "text",
        text: `${city}: ${data.current.temp_c}C, ${data.current.condition.text}`,
      }],
    };
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);
```

To use this server from Claude Desktop, you add it to the configuration file:
```json
{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["./weather-server.js"]
    }
  }
}
```

That is it. The AI model can now check the weather in any city by calling the `get_weather` tool. No custom function calling setup, no manual prompt engineering to describe the tool — MCP handles capability discovery automatically.
For production servers, the Python SDK offers similar simplicity:
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("database-server")

@mcp.tool()
async def query_users(department: str) -> str:
    """Look up employees by department."""
    # `db` stands in for an existing async connection pool;
    # format_results renders the rows as text for the model
    results = await db.fetch("SELECT name, role FROM users WHERE dept = $1", department)
    return format_results(results)

mcp.run()
```

The `FastMCP` class uses type hints and docstrings to automatically generate the tool schema that the AI model sees. This means the tool definition stays in sync with the implementation — no separate schema file to maintain.
Security Considerations for Production MCP Deployments
MCP's power comes with real security implications. When you give an AI agent access to databases, APIs, and file systems, you need to think carefully about permissions, data exposure, and attack surfaces.
Principle of Least Privilege
Each MCP server should expose the minimum set of capabilities needed. A database server for an analytics agent should provide read-only query access, not full DDL permissions. Scope tools tightly — instead of a generic `run_sql` tool, expose specific query tools like `get_sales_by_region` with parameterized inputs.
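The scoped-tool pattern can be sketched in plain Python with the standard library's `sqlite3`: the model supplies only a value, never SQL text. The table name, columns, and data here are hypothetical.

```python
import sqlite3

def get_sales_by_region(conn: sqlite3.Connection, region: str) -> list[tuple]:
    """Read-only, parameterized query: the model controls only the region value."""
    return conn.execute(
        "SELECT product, total FROM sales WHERE region = ?", (region,)
    ).fetchall()

# Demo with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, product TEXT, total REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("EMEA", "widgets", 1200.0), ("APAC", "widgets", 800.0)],
)
print(get_sales_by_region(conn, "EMEA"))  # [('widgets', 1200.0)]
```

Because the SQL is fixed at authoring time, a compromised or confused model can at worst query the wrong region, never alter data or read other tables.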
Input Validation and Injection Prevention
MCP tools receive input from AI models, which in turn receive input from users. This creates an indirect injection path. If your MCP server executes SQL, shell commands, or API calls based on model-provided input, you must validate and sanitize rigorously. Use parameterized queries, avoid shell interpolation, and validate inputs against strict schemas.
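Two common validation styles can be sketched as follows: exact allowlist matching when values are enumerable, and strict pattern matching for free-form names. The parameter names and allowlist here are hypothetical.

```python
import re

ALLOWED_DEPARTMENTS = {"engineering", "sales", "support"}

def validate_department(value: str) -> str:
    """Exact match against an allowlist: safest when values are enumerable."""
    if value not in ALLOWED_DEPARTMENTS:
        raise ValueError(f"unknown department: {value!r}")
    return value

def validate_identifier(value: str) -> str:
    """Pattern check for free-form names: rejects quotes, semicolons, spaces."""
    if not re.fullmatch(r"[A-Za-z0-9_\-]{1,64}", value):
        raise ValueError(f"invalid identifier: {value!r}")
    return value
```

Run validation at the server boundary, before the value touches a query, shell, or API call, so every client connecting to the server gets the same protection.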
Authentication and Transport Security
For remote MCP servers (HTTP+SSE transport), use OAuth 2.0 or API key authentication. The MCP specification includes an authorization framework, but implementation is your responsibility. Always use TLS for remote connections. For local stdio servers, the security boundary is the host machine — ensure the server process runs with appropriate OS-level permissions.
Audit Logging
Log every tool invocation with the full request and response. In regulated industries, you need a complete audit trail of what the AI agent did, when, and with what parameters. MCP's structured request/response format makes this straightforward to implement at the client or server level.
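A minimal sketch of that pattern: a decorator that records every invocation, including failures, as a structured JSON line. The in-memory list stands in for a real sink (file, SIEM, log pipeline), and the tool stub is hypothetical.

```python
import functools
import json
import time

audit_log: list[str] = []  # stand-in for a real log sink

def audited(tool_name: str):
    """Record every call to the wrapped tool as a structured JSON line."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(**kwargs):
            entry = {"ts": time.time(), "tool": tool_name, "request": kwargs}
            try:
                result = fn(**kwargs)
                entry["response"] = result
                return result
            except Exception as exc:
                entry["error"] = repr(exc)
                raise
            finally:
                audit_log.append(json.dumps(entry))
        return inner
    return wrap

@audited("get_weather")
def get_weather(city: str) -> str:
    return f"{city}: 21C"  # hypothetical stub standing in for real tool logic

get_weather(city="Berlin")
```

Because MCP requests and responses are already structured JSON, the same wrapper works for any tool the server exposes.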
Human-in-the-Loop Controls
For high-stakes operations — deploying code, modifying production data, sending communications — implement approval gates. The MCP protocol supports this pattern: the server can return a confirmation prompt that the host displays to the user before executing the action.
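The gating logic itself is simple and can be sketched independently of any SDK: route calls to designated high-stakes tools through an approval callback before executing. The tool names and callbacks here are hypothetical.

```python
HIGH_STAKES = {"deploy_service", "delete_records", "send_email"}

def call_tool(name: str, arguments: dict, execute, approve) -> dict:
    """`execute` performs the tool call; `approve` asks a human and returns a bool."""
    if name in HIGH_STAKES and not approve(name, arguments):
        return {"status": "rejected", "tool": name}
    return {"status": "ok", "result": execute(name, arguments)}

# Low-stakes calls run immediately; high-stakes calls consult the approver first
print(call_tool("get_weather", {"city": "Berlin"},
                execute=lambda n, a: "21C", approve=lambda n, a: False))
```

In practice the `approve` callback would surface a confirmation dialog in the host application and block until the user responds.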
Getting Started: Practical Next Steps
If you are evaluating MCP for your engineering organization, here is a concrete path forward:
- **Start with existing servers.** Before building custom servers, explore the ecosystem. There are production-quality MCP servers for PostgreSQL, GitHub, Slack, Jira, filesystem access, and dozens more. Install one and connect it to Claude Desktop or Cursor to see the protocol in action.
- **Identify your highest-value integration.** What tool does your team waste the most time context-switching to? A database you constantly query? A ticketing system? An internal API? Build an MCP server for that first.
- **Use the official SDKs.** The TypeScript and Python SDKs handle protocol negotiation, capability exchange, error formatting, and transport management. Do not implement the JSON-RPC layer from scratch.
- **Plan your security model early.** Decide which tools need human approval, which data sources should be read-only, and how you will handle authentication before you ship to production.
- **Consider remote servers for shared infrastructure.** Local stdio servers are great for development, but teams benefit from shared remote MCP servers that centralize access to databases, internal APIs, and SaaS tools with consistent authentication and logging.
The Model Context Protocol is still early in its lifecycle, but the fundamentals are solid and the adoption trajectory is clear. For organizations building AI agents — whether for internal productivity, customer-facing products, or automated workflows — MCP eliminates the integration tax that has historically slowed agent development to a crawl.
At A001.AI, we build AI agent systems with MCP at the integration layer. If your team is planning an agent architecture or struggling with tool integration sprawl, we can help you design and implement MCP servers tailored to your infrastructure. Reach out to discuss your use case.
Ready to Put AI Agents to Work?
Get a free AI audit of your codebase and discover what can be automated today.