MCP — Model Context Protocol
The Model Context Protocol (MCP) is a protocol for connecting LLM clients to external data, tools, and prompts in a standardized way. Anthropic released it in November 2024 and other AI tools and IDEs adopted it quickly afterward.
1. About MCP
MCP is an open protocol Anthropic released on November 25, 2024. The official site is modelcontextprotocol.io and the spec and SDKs live at the github.com/modelcontextprotocol organization. It began as an effort to standardize these integrations: instead of every client building its own connector for every external service (an M × N problem), one shared protocol lets M clients and N servers interoperate with roughly M + N implementations.
A common analogy is "an interface like USB-C" — one integration on the AI client side connects to many servers, and one implementation on the server side gets used by many clients.
2. Components
| Role | Meaning |
|---|---|
| Host | The application that actually runs the LLM (Claude Desktop · Cursor). |
| Client | Component inside the Host that maintains a 1:1 connection to one server. |
| Server | An external process that exposes resources, tools, and prompts. |
A single Host can have multiple Clients, and each Client connects to one Server.
3. JSON-RPC 2.0 based
MCP messages follow the JSON-RPC 2.0 spec. There are three message types — request, response, and notification — with standardized fields (id · method · params · result · error).
Handshake flow:
Client → initialize (protocol version · capabilities)
Server → initializeResult (supported features)
Client → notifications/initialized
…then normal request/response exchange
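On the wire, the handshake above might look roughly like this (the protocol version string, capability sets, and client/server names are illustrative, not normative values):

```json
{"jsonrpc": "2.0", "id": 1, "method": "initialize",
 "params": {"protocolVersion": "2025-03-26",
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1.0"}}}

{"jsonrpc": "2.0", "id": 1,
 "result": {"protocolVersion": "2025-03-26",
            "capabilities": {"tools": {}},
            "serverInfo": {"name": "example-server", "version": "0.1.0"}}}

{"jsonrpc": "2.0", "method": "notifications/initialized"}
```

Note that the final message is a notification — it has no id and expects no response.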
4. The three primitives
MCP defines three kinds of primitives a server can expose to a host.
Resources — read-only data (files · DB rows · documents). Identified by URI and fetched via resources/list · resources/read. Material the model can put directly in its context.
Tools — callable functions. Input shape is declared as JSON Schema, the model produces the call arguments, and the host executes via tools/call. Includes side effects (file writes, API calls).
Prompts — reusable prompt templates. Take arguments and produce a sequence of messages. Workflow shortcuts.
| Primitive | Control | Note |
|---|---|---|
| Resources | Application | Host decides how to use as context. |
| Tools | Model | Model decides to call, host approves and executes. |
| Prompts | User | A person picks them explicitly. |
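As an illustration of how a tool's input shape is declared via JSON Schema, here is a hypothetical tool definition plus a deliberately minimal argument check. The tool name and the `check_args` helper are invented for this sketch; real hosts use a full JSON Schema validator.

```python
# Hypothetical tool definition, shaped like what a server returns from tools/list.
TOOL_DEF = {
    "name": "get_weather",                    # illustrative name, not a real server's tool
    "description": "Return current weather for a city.",
    "inputSchema": {                          # input shape declared as JSON Schema
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def check_args(schema: dict, args: dict) -> bool:
    """Minimal required-field and string-type check on model-produced arguments.
    A real host would run a complete JSON Schema validator instead."""
    for key in schema.get("required", []):
        if key not in args:
            return False
    for key, value in args.items():
        expected = schema["properties"].get(key, {}).get("type")
        if expected == "string" and not isinstance(value, str):
            return False
    return True
```

The host would run a check like this before executing tools/call, since the arguments come from the model, not the user.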
The host can also offer capabilities back to the server, such as sampling (letting the server request an LLM completion through the host) and roots (the filesystem roots the server is allowed to see).
5. Transports
| Transport | Where | Note |
|---|---|---|
| STDIO | Between local processes | Server is spawned as a child process; JSON-RPC over stdin/stdout. |
| Streamable HTTP | Remote | HTTP + server-side streaming. Introduced in 2025; gradually replacing the older SSE approach. |
The older spec had a separate SSE transport, which was later folded into Streamable HTTP. The message format is the same regardless of transport.
6. Other paths
Before MCP took hold, tool integration was already handled in several shapes:
- Function calling — model-specific formats from OpenAI, Anthropic, Google. Standards differ.
- OpenAPI / REST — connecting general HTTP APIs directly to the LLM.
- LangChain Tools — library-side abstraction.
- Plugins (ChatGPT plugins, retired) — bound to a single host.
MCP layers a host-neutral standard on top of this space.
7. Server registration · SDKs
Most MCP clients register servers via an mcp.json file or an mcpServers section.
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
    }
  }
}
Official SDKs — TypeScript · Python · Java · C# · Go · Rust · Kotlin · Swift. Writing a server typically starts at a few dozen lines (tool definition + handler + entry point).
The modelcontextprotocol/servers repository hosts a variety of reference servers (filesystem · git · Postgres · SQLite · fetch · time).
8. Security model
User consent — the spec strongly emphasizes consent and control. Hosts should provide a user approval interface for tool calls and resource access, and shouldn't blindly execute based on model output alone.
Isolation — servers usually run isolated in a separate process (STDIO) or separate host (HTTP). The server's own privileges and system access are kept separate from the host.
Authentication — remote transports use standard mechanisms such as OAuth 2.0 for authentication. This part of the spec is still evolving, so check for updates periodically.
Known threats:
- An untrusted server might expose malicious tools or data.
- Hidden prompt injection inside tool results.
- Environment variables and credentials passed straight through to the server.
- Overly broad filesystem permissions.
The main mitigations are the host's permission UI, a server allow-list, and per-tool approval.
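A host-side mitigation can be sketched as an allow-list combined with a user-approval callback. The allow-list contents and function names here are illustrative, not part of the spec:

```python
# Hypothetical host-side gate run before every tools/call.
ALLOWED_TOOLS = {"filesystem": {"read_file"}}   # server name -> permitted tool names

def approve_call(server: str, tool: str, ask_user) -> bool:
    """Reject tools outside the allow-list; ask the user about everything else."""
    if tool not in ALLOWED_TOOLS.get(server, set()):
        return False                            # never reaches the user
    return ask_user(f"Allow {server}/{tool}?")  # per-tool approval prompt
```

The ordering matters: the allow-list runs first so a malicious server cannot even put a prompt in front of the user for a tool the operator never sanctioned.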
9. Common pitfalls
Version compatibility — the spec is evolving actively. If protocol versions of host, server, and SDK get out of sync, some features may not work.
STDIO and environment variables — STDIO servers start in the environment the host hands them. Design credential passing and log locations carefully.
Tool result injection — text returned from an external tool may contain hidden model instructions. Make trust boundaries explicit in the host system prompt.
Too many tools — when context fills up with tool definitions the model's performance drops. Group servers per task.
HTTP authentication and CORS — remote servers inherit ordinary web security concerns.
Logging and observability — dumping JSON-RPC messages straight to logs can leak sensitive data.
Error boundaries — design the client to reconnect smoothly when a server dies.
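The last point — reconnecting smoothly when a server dies — can be sketched as a retry wrapper. The retry count and backoff values are illustrative defaults, not anything the spec prescribes:

```python
import time

def call_with_reconnect(send, reconnect, retries=3, backoff=0.5):
    """Resend a request after reconnecting when the server connection drops.

    send      -- callable that performs the request, raising ConnectionError on failure
    reconnect -- callable that re-establishes the server connection
    """
    last_error = None
    for attempt in range(retries):
        try:
            return send()
        except ConnectionError as exc:
            last_error = exc
            time.sleep(backoff * (2 ** attempt))  # exponential backoff between attempts
            reconnect()
    raise last_error                              # give up after the final attempt
```

A real client would also re-run the initialize handshake inside `reconnect`, since a restarted server starts from a clean state.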
Closing thoughts
MCP is settling in quickly as a standard interface between LLM clients and external tools. The USB-C analogy is appealing — a single integration usable across many hosts. Since the spec is changing actively, putting protocol-version compatibility of host, server, and SDK at the top of the operations checklist is wise.
Next
- mcp-clients
- mcp-context7
References — Model Context Protocol · MCP Specification · MCP GitHub · Anthropic — Introducing MCP (2024-11) · Official server collection · TypeScript SDK · Python SDK.