MCP Dev Summit 2026: The Enterprise Era Has Arrived
What I learned from the premier gathering of MCP builders, contributors, and enterprise adopters
The Model Context Protocol (MCP) has crossed a threshold. At the MCP Dev Summit North America 2026, the conversation shifted from "Will this work?" to "How do we deploy at scale?" What emerged was a clear picture: MCP is no longer experimental—it's becoming the standard integration layer for enterprise AI agents.
Here's what I learned.
The Adoption Numbers Don't Lie
David Soria Parra, co-creator of MCP at Anthropic, opened with metrics that would make any protocol designer take notice:
100M+ downloads per month across the Python, TypeScript, Java, and other SDKs. That's a milestone React took three years to reach; MCP did it in 16 months.
But the more interesting number came from the enterprises you'd least expect to be early adopters. James Hood, a 16-year Amazon veteran and Principal Engineer, revealed that MCP is "the most popular way to connect agents to internal systems" at AWS. They have tens of thousands of builders in their internal AI community.
Uber's scale is even more striking: 1,500+ internal agents running 60,000+ executions per week, with more than 90% of their engineering organization using AI weekly. These aren't pilot programs; they're production deployments.
The message is clear: behind every corporate firewall, teams are quietly wiring MCP to systems of record. The enterprise era has arrived.
The Architecture That Actually Works
If there was one recurring theme across every enterprise speaker, it was this: You need a Gateway and Registry.
Sheng Liang, CEO of Obot (and previously founder of Rancher Labs), put it bluntly:
"A good third to half of vendors at this conference are providing variations of gateway/registry solutions. It's the essential starting point."
Why You Need Both
Gateway: The control point for agent traffic. It gives IT and admins visibility into agent behavior, enforces access control, and provides audit trails.
Registry: The single source of truth for discovery. Teams can find and reuse MCP servers, versions are managed, and service owners stay in control.
Uber built exactly this:
- Gateway Orchestrator automatically translates their 10,000+ services into MCP tools
- Registry enables discovery and reuse across 5,000+ engineers
- Security layers handle PII redaction, authorization, and auditing
Duolingo took a simpler approach: one AI Slackbot with 180+ MCP tools serving 30% of the company (250+ weekly users). They open-sourced it at github.com/duolingo/slack-ai-agents.
The Decision Framework
Do you need Gateway + Registry?
├─ <10 agents → Not yet (overhead exceeds benefit)
├─ 10-100 agents → Start with Registry (discovery, reuse)
└─ 100+ agents → Full Gateway + Registry (compliance, visibility)
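The decision tree above can be sketched as a small helper function. The thresholds are the rules of thumb quoted at the summit; the function name and return labels are my own, not from any MCP SDK:

```python
def deployment_recommendation(agent_count: int) -> str:
    """Map agent-fleet size to the infrastructure worth deploying.

    Thresholds follow the summit's rule of thumb; adjust them for
    your own compliance and visibility requirements.
    """
    if agent_count < 10:
        return "none yet"           # overhead exceeds benefit
    if agent_count < 100:
        return "registry"           # discovery and reuse first
    return "gateway + registry"     # compliance, visibility, audit
```

In practice the boundaries are fuzzy; a ten-agent fleet in a regulated industry may justify the full stack immediately.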
Security Is Not Optional
James Hood from AWS highlighted a concept originally coined by Simon Willison that stuck with me: The Lethal Trifecta.
If an agent has all three capabilities simultaneously:
- Access to private data
- Exposure to untrusted content (like emails or documents)
- Ability to communicate externally
...you have a data exfiltration risk. Even with trustworthy servers, composition creates vulnerabilities.
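The trifecta check is simple enough to encode directly. A minimal sketch, with capability names that are purely illustrative (not part of any MCP SDK):

```python
# Capability flags an agent might hold; the names are illustrative.
PRIVATE_DATA = "private_data"          # access to private data
UNTRUSTED_CONTENT = "untrusted_content"  # reads emails, docs, web pages
EXTERNAL_COMMS = "external_comms"      # can send data out

LETHAL_TRIFECTA = {PRIVATE_DATA, UNTRUSTED_CONTENT, EXTERNAL_COMMS}

def exfiltration_risk(capabilities: set[str]) -> bool:
    """True when an agent holds all three trifecta capabilities at once.

    Any two are tolerable; the combination of all three means untrusted
    input can steer private data to an external channel.
    """
    return LETHAL_TRIFECTA <= capabilities
```

A gateway could run a check like this at tool-grant time and refuse the third capability rather than audit after the fact.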
The Solution: Defense in Depth
| Layer | What It Does | Why You Need It |
|---|---|---|
| MCP Gateway | Access control, tool filtering | Prevent unauthorized access |
| LLM Gateway | Visibility into model prompts | See what's being sent |
| Supply Chain Scanning | Detect compromised MCP servers | Prevent supply chain attacks |
| Isolated Runtime | Container-based execution | Contain breaches |
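The first layer, gateway-level tool filtering, reduces to an allow-list over tool descriptors. A minimal sketch, where the dict shape loosely mimics an MCP tool descriptor and the policy is a plain set (a real gateway would evaluate per-role policies):

```python
def filter_tools(tools: list[dict], allowed: set[str]) -> list[dict]:
    """Gateway-style tool filtering: expose only allow-listed tools.

    `tools` entries mimic MCP tool descriptors ({"name": ..., ...});
    the allow-list stands in for a real per-caller policy engine.
    """
    return [t for t in tools if t["name"] in allowed]
```

The point is that the agent never sees the denied tools at all, which is stronger than rejecting calls after the model has already tried to use them.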
Mike Schwartz, founder of Gluu, framed it memorably with his "Golem to Murderbot" talk: AI agents are like golems—powerful but potentially dangerous. You need a "governor module" to keep them in check.
"Governance is not about preventing bad things from happening. It's about holding people, software, business units, and third parties accountable when bad things happen."
The Context Window Problem (And the Solution)
One of the most practical insights came from Apify's founder, Jan:
"Most MCP clients load ALL tools into context at startup. If you have 10 servers × 10 tools = 100 tools loaded from the start. Context rots. Accuracy drops."
The solution isn't to change the protocol—it's to change client behavior:
Progressive Disclosure: Load tools only when needed. Jan's MCPC (MCP CLI Client) demonstrates 61% context savings by deferring tool loading.
CLI Wrapping: Tools available via shell, not direct injection. Models already know the shell well from 50+ years of accumulated CLI usage in their training data.
Vendor Config: Let MCP server owners specify which tools to inject directly vs. defer to CLI.
David Soria Parra emphasized: "Context window is not an MCP problem. It's a client problem. Solvable."
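Progressive disclosure can be sketched as a lazy toolset: names are cheap to list, full schemas load only on first use. This is my own illustration of the pattern, not MCPC's actual implementation:

```python
class LazyToolset:
    """Progressive disclosure: list tool names cheaply, defer loading
    full tool schemas until a tool is actually requested.

    Illustrative sketch only; `loaders` maps a tool name to a callable
    that fetches or builds the full schema.
    """

    def __init__(self, loaders):
        self._loaders = loaders   # name -> callable returning schema
        self._loaded = {}         # cache of schemas loaded so far

    def list_names(self):
        # Cheap: names only, no schemas enter the context window.
        return sorted(self._loaders)

    def get(self, name):
        if name not in self._loaded:
            # Load lazily, exactly once, only when the agent needs it.
            self._loaded[name] = self._loaders[name]()
        return self._loaded[name]
```

Ten servers with ten tools each then cost a hundred short names up front, rather than a hundred full schemas.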
Skills: The New Playbooks
A new primitive is emerging: Skills—domain knowledge bundled with MCP tools.
Think of them as playbooks for AI agents. Instead of giving an agent raw tools and hoping it figures out how to use them, Skills provide:
- Domain knowledge
- Instructions
- Tool preferences
- Example workflows
Rush Tehrani from Uber explained their roadmap: "Skills as shareable recipes. You don't just see MCP registry anymore; you see skills too."
Sheng Liang observed: "Agents I see today look like skills—some markdown files and generated code. The framework layer is collapsing."
What This Means for You: Start documenting how your tools should be used. The agents are getting smarter, but they still need the playbook.
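One lightweight way to think about a Skill's shape is as a plain data structure bundling those four ingredients. The field names here are illustrative; the summit described the Skill primitive as still emerging, so treat this as a sketch, not a spec:

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """A playbook bundling domain knowledge with tool guidance.

    Field names are illustrative, mirroring the four ingredients
    listed above rather than any finalized protocol schema.
    """
    name: str
    instructions: str                                   # domain knowledge / how-to
    preferred_tools: list[str] = field(default_factory=list)
    example_workflows: list[str] = field(default_factory=list)
```

In the "markdown files plus generated code" framing from the summit, the `instructions` field is exactly the markdown you'd start writing today.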
Running MCP at the Edge
One of the most technically dense talks covered a use case I hadn't considered: running MCP on edge devices with limited compute and intermittent connectivity.
The reality: billions are being invested in edge AI for manufacturing, automotive, and healthcare. But MCP was designed for cloud environments with stable networks and abundant compute.
The solutions proposed:
- Binary MCP: Protocol Buffers instead of JSON-RPC → 10-25x payload reduction
- MQTT Transport: Store-and-forward for intermittent connectivity
- Context-Aware Discovery: Return only relevant tools based on task
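The payload-reduction claim is easy to sanity-check. The sketch below uses plain `struct` packing rather than actual Protocol Buffers, and a made-up sensor reading, but it shows why a fixed binary layout shrinks a payload so dramatically compared to JSON:

```python
import json
import struct

# A toy reading, as an edge MCP tool might report it (fields invented).
reading = {"device_id": 42, "temperature_c": 21.5, "humidity_pct": 40.0}

# Text encoding: every key name and digit travels on the wire.
json_bytes = json.dumps(reading).encode()

# Binary encoding: u32 id + two f32 values = 12 bytes, schema implied.
bin_bytes = struct.pack("<Iff", reading["device_id"],
                        reading["temperature_c"], reading["humidity_pct"])

print(len(json_bytes), len(bin_bytes))  # binary is a fraction of the JSON size
```

Real Protocol Buffers add varint encoding and field tags, but the principle is the same: the schema lives in code, not in every message.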
The insight that stuck: "The edge is not going to wait for consensus. There's already billions of dollars being invested into it." If MCP can't adapt, something else will.
Evals: The Non-Negotiable
Diamond Bishop from Datadog delivered the strongest warning of the conference:
"If you're launching an agent and don't know how you're doing eval, please don't launch that agent."
Evaluation isn't a nice-to-have. It's how you know your agent is improving—or degrading. The recommendation: make your eval system available via MCP so agents can help improve themselves.
Datadog's approach:
- Living eval system that evolves with your agents
- Offline, online, and in-the-loop evals
- Agent self-improvement by exposing eval via MCP
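At its core, an offline eval is just a scored loop over golden cases. A minimal sketch (not Datadog's system; `agent` is any callable from prompt to answer):

```python
def run_evals(agent, cases):
    """Minimal offline eval: score an agent against golden cases.

    `agent` is any callable prompt -> answer; `cases` is a list of
    (prompt, expected) pairs. Returns the pass rate in [0, 1].
    A living eval system would track this score across versions
    and could itself be exposed as an MCP tool.
    """
    passed = sum(1 for prompt, expected in cases if agent(prompt) == expected)
    return passed / len(cases)
```

Exact-match scoring is the crudest possible grader; real systems swap in rubric-based or model-graded checks, but the loop stays the same.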
The Vendor Landscape
If you're evaluating vendors, here's the landscape:
| Category | Players | Notes |
|---|---|---|
| Gateway/Registry | Obot, Workato, Docker | Essential for enterprise |
| MCP Platform | Anthropic, OpenAI | Protocol creators |
| CLI Tools | Apify (MCPC) | For developers |
| Security Tools | mcp-deep, Open Shell | Emerging ecosystem |
Key Takeaways
- MCP is enterprise-ready. Uber (1,500+ agents, 60K executions/week) and AWS (tens of thousands of builders) prove it.
- Gateway + Registry is table stakes. Don't deploy agents at scale without them.
- Security requires new thinking. Simon Willison's Lethal Trifecta (private data + untrusted content + external communication) creates data exfiltration risk even with trustworthy servers.
- Context window is solvable. Progressive disclosure and CLI wrapping achieve 61%+ savings.
- Skills are the new playbooks. Document how tools should be used; the agents are ready.
- Evals are non-negotiable. "If you don't know how you're doing eval, don't launch that agent."
What's Next
The protocol roadmap from Anthropic:
- Stateless transport for hyperscale (1-2 months)
- Long-running tasks for capable models
- Cross-app access without manual auth
- Triggers (webhooks) for proactive agents
- Skills as a first-class primitive
The enterprise era of MCP has arrived. The question is no longer "Will this work?" but "How do we deploy responsibly at scale?"
This post synthesizes insights from the MCP Dev Summit North America 2026, including speakers from Anthropic, AWS, Uber, Duolingo, Workato, Datadog, Apify, Gluu, Obot, Docker, Nordstrom, and others. For the full conference documentation with speaker pages and topic deep-dives, see the MCP Dev Summit 2026 docs.
Further Reading:
- MCP Dev Summit 2026 Documentation — Full speaker and topic documentation
- Presentation Notes — All 13 session summaries
- Duolingo's open-source Slackbot: github.com/duolingo/slack-ai-agents
