Code Mode is Best Served in the Shell – Jan Čurn

Track: Sessions – Day 1 | Ref: 1.4 | Speaker: Jan Čurn, Apify

Overview

Jan Čurn, founder and CEO of Apify, presented a compelling argument: LLMs are far better at generating CLI commands than at function calling. Function calling is a synthetic construct with limited training data, while shell commands have decades of training data from Unix history. This insight led him to build MCPC — a universal MCP command-line client.

The Problem with Function Calling

The MCP ecosystem has thousands of servers of varying quality and ~105 clients, but only about 5 clients support the core protocol features (resources, prompts, tools, discovery, instructions). Most clients simply stuff all tools into the context window, leading to:

  • Token waste — 22% of a 200K context consumed by tool definitions alone
  • Reduced accuracy — more tools in context means more room for error
  • Unfair criticism — people blame MCP when the real problem is client implementation

Code Mode: The Alternative

Cloudflare prototyped "code mode" — instead of function calling, the LLM generates code that calls MCP tools directly. Results:

  • Dramatically reduced token usage compared to function calling
  • Increased accuracy, especially when chaining multiple tool calls
  • Better composability — results from one call pipe naturally into the next

Anthropic independently confirmed these results. But neither implementation was practical for general use — Cloudflare's was tied to their platform, and Anthropic provided no code.
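In shell terms, code mode means the model emits one short script instead of a sequence of function-call round trips. The sketch below illustrates the composability point: the tool name `fetch_users`, the output shape, and the `call` subcommand are all hypothetical, and a stub stands in for a real MCP client so the pattern is runnable as-is.

```shell
# Code-mode sketch: the LLM generates this script rather than two separate
# function calls. `fetch_users`, its JSON shape, and the `call` subcommand
# are assumptions; the stub below stands in for a real client invocation.
mcpc() {
  echo '[{"name":"ada","active":true},{"name":"bob","active":false}]'
}

# One tool call's output feeds the next step directly in the shell,
# instead of round-tripping the full payload through the model's context.
users=$(mcpc users call fetch_users --json)
active=$(printf '%s' "$users" | grep -c '"active":true')
echo "active users: $active"
```

Only the final `echo` line needs to reach the model; the intermediate JSON never enters the context window.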

MCPC: Universal MCP CLI Client

Over Christmas 2025, Jan built MCPC — a thin client that maps MCP protocol operations to intuitive CLI commands.

Key features:

  • Live sessions — persistent connections to MCP servers
  • JSON mode — output as JSON for scripting and piping with jq
  • Secure credential storage — OAuth credentials stored securely, shared across agents
  • Cross-agent config — same configuration works in Claude Code, Codex CLI, OpenCode, and others
  • Human and LLM readable — colored markdown output for people, structured output for machines

How it works:

# Connect to a server
mcpc fs connect

# List available tools
mcpc fs tools

# Get tool schema (human-readable)
mcpc fs tools get read_file

# Get tool schema (JSON for scripting)
mcpc fs tools get read_file --json

The LLM can then generate shell scripts that chain MCPC commands — running in a sub-context and returning only the result to the main context window.
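A generated script in that style might chain the commands shown above with standard text tools. The schema shape below is a guess, and a stub stands in for a live MCP session so the sketch runs standalone; with a real server connected, the stub would simply be deleted.

```shell
# Sketch of a script an LLM might generate around MCPC. The stub fakes the
# `--json` output of `mcpc fs tools get read_file` (the schema shape is an
# assumption) so the chaining pattern is runnable without a live server.
mcpc() {
  echo '{"name":"read_file","inputSchema":{"required":["path"]}}'
}

# Fetch the tool schema as JSON, then extract only the field the agent
# needs; the raw schema stays in this sub-shell, out of the main context.
schema=$(mcpc fs tools get read_file --json)
name=$(printf '%s' "$schema" | sed -n 's/.*"name":"\([^"]*\)".*/\1/p')
echo "tool: $name"
```

In practice the talk's JSON mode pairs naturally with `jq` for this kind of extraction; `sed` is used here only to keep the sketch dependency-free.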

GitHub: MCPC is open source and accepting contributions, with the goal of becoming the standard CLI client for MCP (as ssh and ftp are for their protocols).

Key Takeaways

  • LLMs generate CLI commands more accurately than function calls due to training data volume
  • Code mode dramatically reduces token usage and increases accuracy vs. function calling
  • MCPC provides a universal CLI interface to any MCP server
  • Credential sharing across agents eliminates per-tool configuration overhead
  • Shell scripting with MCPC enables composable MCP tool chains outside the context window