MCP for the REST of Us

If you’ve been anywhere near AI tooling lately, you’ve probably heard about MCP. But I see a lot of questions like “isn’t MCP just a fancy REST API?” and I get it. On the surface, they look similar. Both involve clients and servers exchanging data. But comparing MCP to REST is a bit like comparing a phone conversation to sending a fax (yes, I’m old). They both communicate information, but they work in fundamentally different ways.
Let me break down what MCP actually is, why it exists, and how it differs from the RESTful paradigm we’ve all grown comfortable with.
What is MCP?
Anthropic launched the Model Context Protocol back in November 2024 as an open standard for connecting LLMs and AI assistants to different data sources. We’re talking APIs, databases, content repositories, development tools, you name it.
At its core, the protocol is just a standardized way for a client and server to communicate, providing secure, real-time two-way communication between AI systems and external tools. Before MCP, every AI tool needed custom connectors for each data source. Anthropic called this the “M×N integration problem,” and it was a maintenance nightmare.
I won’t go through the full specification here since that’s not what this post is about. If you want the details, the full spec lives at modelcontextprotocol.io.
But the basic architecture for an MCP integration looks more or less like this:
Host (Claude, Cursor, etc.)
  └── Client ←→ MCP Protocol ←→ Server ←→ Your Data/APIs
- Hosts are applications like Claude Desktop, Claude Code, Cursor, or Windsurf
- Clients live inside the host, each handling direct communication with one MCP server
- MCP Protocol is the standardized interface used for communication between client and server
- MCP Servers are applications that expose specific capabilities to clients through the protocol layer
This modular approach makes MCP maintainable and provides a high level of separation of concerns, with each piece having a clear responsibility.
A Stateful Protocol
Here’s where things get interesting, and where MCP starts diverging from what you’re used to with RESTful APIs.
REST is stateless by design. Every request stands alone. You send your auth token, your parameters, your context, everything, every single time. The server forgets you exist between calls.
MCP maintains session state. Previous requests shape how future requests get handled. When an AI debugs your code, it can open files, run tests, see errors, and suggest fixes without amnesia between each step. The session remembers. This isn’t just convenience. It fundamentally changes what’s possible.
With REST, if you want an AI to “look at my recent commits, understand the patterns, and suggest where the bug might be,” you’re orchestrating multiple independent calls and manually stitching context together. With MCP, that context flows naturally because the session maintains it.
Is this always better? No. Statelessness scales beautifully. State introduces complexity, memory overhead, session management headaches. There are real tradeoffs here, and if someone tells you MCP is strictly superior, they’re selling something.
MCP consists of two layers:
Data Layer - defines the JSON-RPC 2.0 protocol for client-server communication:
- Lifecycle management
- Server features
- Client features
- Utility features
Transport Layer - defines how communication and data exchange actually happens:
- Stdio transport (for local integrations)
- Streamable HTTP transport (for remote servers)
The lifecycle management piece is what we will focus on in this blog post. It handles capability negotiation between client and server during initialization. Both sides declare what they support, and this shapes the entire session. This is a fundamental difference from REST, where every request is independent.
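To make the handshake concrete, here is a sketch of the client’s `initialize` message as it appears on the wire. The `initialize` method name and the shape of `capabilities`/`clientInfo` follow the spec; the protocol version string and client name are illustrative placeholders.

```python
import json

# Sketch of the JSON-RPC 2.0 message a client sends to open an MCP session.
# The client declares its capabilities up front; the server's response does
# the same, and the negotiated set shapes the rest of the session.

def initialize_request(request_id: int, protocol_version: str) -> str:
    """Build the client's initialize request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": protocol_version,      # illustrative value below
            "capabilities": {"sampling": {}},         # client declares what it supports
            "clientInfo": {"name": "example-client", "version": "0.1.0"},
        },
    })

msg = json.loads(initialize_request(1, "2025-06-18"))
print(msg["method"])   # initialize
```

Contrast this with REST: there is no equivalent opening move, because there is no session to negotiate.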
The Three Primitives (Plus Some Extras)
MCP servers can expose these three core primitives:
Tools - Executable functions that AI agents can invoke to perform actions. When you ask Claude to search your codebase or create a file, that’s a tool call.
Resources - Data sources providing contextual information to AI tools and agents. These work somewhat like RESTful APIs, exposing data the AI can read.
Prompts - Templates that help structure communication with an LLM. They provide tested, reusable instructions for specific tasks.
But here’s where it gets interesting. Because MCP is stateful, servers can also make requests back to the client:
Sampling - Allows servers to request LLM completions through the client. The server can say “I need the model to analyze this code” and the client orchestrates that completion, returning the result to the server.
Elicitation - Allows servers to request additional information from users. If the AI is filling out a form and needs clarification, it can ask.
Logging - Allows servers to send log messages back to clients for debugging.
This two-way communication is what makes MCP feel like an ongoing conversation rather than isolated request-response cycles.
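A server-initiated sampling request illustrates that reversal: it is the same JSON-RPC 2.0 framing, just flowing from server to client. The `sampling/createMessage` method name follows the spec; the parameters here are abbreviated for illustration.

```python
import json

# Sketch of a server asking the client for an LLM completion. The roles
# reverse, but the message shape is ordinary JSON-RPC 2.0.

sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [{"role": "user",
                      "content": {"type": "text",
                                  "text": "Analyze this code for bugs."}}],
        "maxTokens": 200,   # abbreviated parameter set, for illustration
    },
}
print(json.dumps(sampling_request, indent=2))
```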
Here’s what a simple interaction looks like. The client first discovers what the server can do, then invokes a tool:
Client → Server: initialize (protocol version, capabilities)
Server → Client: initialize response (server capabilities)
Client → Server: tools/list
Server → Client: [{name: "search_files", description: "...", inputSchema: {...}},
                  {name: "read_file", description: "...", inputSchema: {...}},
                  {name: "run_tests", description: "...", inputSchema: {...}}]
Client → Server: tools/call {name: "search_files", arguments: {query: "auth"}}
Server → Client: {content: [{type: "text", text: "Found 3 matches..."}]}
Notice the discovery step. The client doesn’t need to know what tools exist ahead of time. It asks, and the server tells it. This is fundamentally different from REST, where you hardcode endpoint paths.
MCP vs REST: The Real Differences
I understand why people compare MCP to REST APIs. They’re both integration technologies connecting clients to resources. But they approach the problem from completely different angles.
| Feature | MCP | RESTful API |
|---|---|---|
| State Management | Stateful sessions with maintained context | Stateless - each request is independent |
| Connection Type | Persistent, session-based | Request-response, connection-per-call |
| Communication | Bidirectional (server can request from client) | Unidirectional (client requests, server responds) |
| Protocol | JSON-RPC 2.0 | HTTP methods (GET, POST, PUT, DELETE) |
| Discovery | Dynamic - client discovers available tools at runtime | Static - endpoints defined in documentation |
| Context Handling | Built-in context awareness across requests | Context must be passed with every request |
| Target Consumer | AI agents and LLMs | Traditional software applications |
| Integration Pattern | Capability negotiation, then ongoing dialogue | Document endpoints, implement, maintain |
The discovery piece is underrated by people who dismiss MCP. REST APIs expose endpoints. You read documentation, you learn what’s available, you write code to call specific paths. MCP servers expose capabilities that clients discover dynamically. The AI learns what’s available and figures out how to use it. This matters because it inverts who needs to understand the integration. With REST, your developers need to know the API. With MCP, the AI figures it out (with appropriate guardrails).
The stateful nature is also a big differentiator. REST APIs are stateless by design. Each request stands alone and must include all necessary context. That works great for predictable, idempotent operations, but it requires a lot of work if you need conversational memory. With MCP, previous requests influence how future requests are handled. When an AI debugs your codebase, it opens a file, runs tests, identifies errors, and suggests fixes without losing track of what it just did. The session maintains awareness throughout.
When This Matters and When It Doesn’t
If you’re building traditional web services, keep using REST. Seriously. It’s battle-tested, scales horizontally, has decades of tooling, and every developer on earth knows how it works. MCP doesn’t obsolete any of that.
If you’re building AI-native applications where agents need to interact with multiple services while maintaining context, MCP solves real problems. The alternative is building custom orchestration for every model-service combination, which is exactly the “M×N integration problem” that motivated MCP in the first place.
Most interesting case: you probably want both. Your MCP servers will likely wrap your existing REST APIs. REST handles the actual operations. MCP adds the AI-friendly layer on top. The underlying service doesn’t change. You’re adding an interface optimized for a different kind of consumer.
The Security Model
The authorization layer follows established patterns. OAuth 2.1, audience-bound tokens, PKCE. If you’ve implemented OAuth before, nothing here will surprise you.
That said, I see a few anti-patterns in the wild worth calling out.
The spec explicitly forbids token passthrough: an anti-pattern where your MCP server accepts tokens from a client and forwards them to downstream APIs without validating they were issued for your server.
Say you build an MCP server that wraps GitHub’s API. A lazy implementation might just take whatever token the client sends and pass it along to GitHub. This breaks down fast: your server can’t distinguish between clients, your audit logs show the wrong identity, and if someone steals a token they can use your server as a proxy to exfiltrate data.
The fix: MCP servers must only accept tokens explicitly issued for them. Your server authenticates clients, then uses its own credentials with downstream APIs.
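A minimal sketch of the audience check, assuming the incoming bearer token is a JWT whose signature and expiry have already been verified, leaving only the decoded claims. The server identifier is a hypothetical value.

```python
# Sketch: reject tokens not issued for this MCP server. Assumes `claims`
# is the already-verified payload of a bearer token; signature and expiry
# validation (e.g. via a JWT library) are elided.

SERVER_AUDIENCE = "https://mcp.example.com"   # hypothetical server identifier

def check_audience(claims: dict) -> None:
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    if SERVER_AUDIENCE not in audiences:
        # Token was issued for some other service: do NOT forward it downstream.
        raise PermissionError("token not issued for this server")

check_audience({"aud": "https://mcp.example.com", "sub": "user-1"})      # passes
try:
    check_audience({"aud": "https://api.github.com", "sub": "user-1"})   # rejected
except PermissionError as e:
    print("rejected:", e)
```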
Local MCP servers run on your machine with your privileges. When you install one, you’re downloading and executing code. The spec recommends sandboxing and explicit consent flows before execution, but enforcement depends entirely on your client. If your client lets you one-click install servers from untrusted sources without reviewing what commands they run, that’s a problem. The mitigation here isn’t protocol-level. It’s being deliberate about what you install and understanding that an MCP server has the same access to your system that you do.
Session management matters more than you’d expect. MCP sessions are stateful, which means session IDs become valuable targets. The spec calls out binding session IDs to user-specific information and using secure random generation. If you’re building an MCP server that handles multiple users, treating session IDs as authentication is explicitly called out as something you must not do. Sessions identify connections. Authorization validates requests. Conflating them opens you up to hijacking.
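A sketch of what that guidance can look like in practice: securely random session IDs, bound to the user they were issued for, with the binding checkable later. The secret key and ID format are illustrative choices, not anything the spec mandates.

```python
import hashlib
import hmac
import secrets

# Sketch of session-ID handling per the spec's guidance: secure random
# generation, bound to user-specific information, and never treated as
# authorization by itself. SECRET_KEY is a hypothetical server-side secret.

SECRET_KEY = b"server-side-secret"

def new_session_id(user_id: str) -> str:
    """<user_id>.<random>.<mac>: the MAC binds the id to the user."""
    rand = secrets.token_urlsafe(32)
    mac = hmac.new(SECRET_KEY, f"{user_id}.{rand}".encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{rand}.{mac}"

def session_belongs_to(session_id: str, user_id: str) -> bool:
    """Verify the binding; each request is still authorized separately."""
    try:
        uid, rand, mac = session_id.rsplit(".", 2)
    except ValueError:
        return False
    expected = hmac.new(SECRET_KEY, f"{uid}.{rand}".encode(), hashlib.sha256).hexdigest()
    return uid == user_id and hmac.compare_digest(mac, expected)

sid = new_session_id("alice")
print(session_belongs_to(sid, "alice"))    # True
print(session_belongs_to(sid, "mallory"))  # False
```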
Scope design is where most implementations will make mistakes. The temptation is to request broad permissions upfront to avoid repeated consent prompts. The spec recommends the opposite: start with minimal scopes, elevate progressively when privileged operations are attempted. A stolen token with narrow scope limits blast radius. A stolen token with files:* and admin:* is a different problem entirely.
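The minimal-scope idea is simple enough to sketch in a few lines. The scope names here are hypothetical, and a real server would trigger a consent or re-authorization flow instead of raising.

```python
# Sketch of progressive scope checking: a session starts with a minimal
# grant, and privileged operations demand explicit elevation first.
# Scope names are hypothetical.

GRANTED = {"files:read"}   # minimal starting grant for this session

def require_scope(scope: str) -> None:
    if scope not in GRANTED:
        # In a real server this would kick off a consent/elevation flow.
        raise PermissionError(f"scope {scope!r} not granted; elevation required")

require_scope("files:read")         # fine under the minimal grant
try:
    require_scope("files:write")    # privileged: must be elevated explicitly
except PermissionError as e:
    print(e)
```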
The harder questions aren’t in the spec. Some security researchers have raised concerns beyond the auth layer: tool poisoning (malicious servers exposing dangerous capabilities), lookalike tools replacing trusted ones, and the broader attack surface of AI agents with tool access. These are real discussions, but they’re about the agentic paradigm generally. So which servers do you trust? How do you evaluate a server before connecting it? What approval flows make sense for your users? The protocol gives you the primitives. The trust model is yours to build.
The Bottom Line
MCP isn’t REST, and treating it like REST will lead you to make bad decisions. It’s a protocol designed for AI agents that need stateful, bidirectional communication with tools and data sources.
Does every system need this? Absolutely not. If your integration is straightforward request-response, REST remains the right choice.
But if you’re building systems where AI agents need to maintain context across complex multi-step workflows, or where you want AI to dynamically discover and use capabilities without hardcoded integrations, MCP is solving the right problem.
The protocol is open, the ecosystem is growing, and major players are investing. Whether that matters to you depends entirely on what you’re building.
If you want to dig in, the full spec is very readable and well-documented. Start there rather than relying on secondhand explanations. Including this one.