Every local MCP build runs on STDIO. The stdout rule is the one that catches people.
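The whole contract fits in a toy loop. A minimal sketch in plain Python stdlib -- not a real MCP server; the `ping` method, the `handle`/`serve` names, and the one-message-per-line framing are simplifications for illustration -- showing requests read on stdin, answers written on stdout, and logs routed to stderr:

```python
import json
import logging
import sys

# The rule in code form: logs go to stderr, NEVER stdout.
logging.basicConfig(stream=sys.stderr, level=logging.INFO)
log = logging.getLogger("toy-server")

def handle(request: dict) -> dict:
    """Answer one JSON-RPC 2.0 request (toy 'ping' method only)."""
    log.info("handling %s", request.get("method"))  # stderr: safe to log
    return {"jsonrpc": "2.0", "id": request["id"], "result": "pong"}

def serve(inp=sys.stdin, out=sys.stdout):
    """One JSON-RPC message per line: read stdin, write stdout."""
    for line in inp:
        if not line.strip():
            continue
        out.write(json.dumps(handle(json.loads(line))) + "\n")
        out.flush()  # the client is blocked waiting on this pipe

# When the client launches this as a subprocess, call serve() --
# it is wired to the real stdin/stdout pipes automatically.
```

A single stray `print("debugging...")` inside `handle` would interleave non-JSON text into `out`, which is exactly the corruption the stdout rule exists to prevent.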
Your AI client launches the MCP server as a subprocess. Three streams, three jobs:

- stdin: the client writes JSON-RPC 2.0 messages in
- stdout: the server writes responses back -- valid MCP messages ONLY
- stderr: logs and debug output. Nothing else touches this stream

That last rule trips people up. Route debug logs to stdout and you corrupt the data stream. The spec is explicit: stderr for logs, always.

Claude Code is the client in this setup. Add your server to ~/.claude/settings.json:

```json
{
  "mcpServers": {
    "my-server": {
      "command": "python",
      "args": ["server.py"]
    }
  }
}
```

Claude Code launches it as a subprocess, wires the STDIO pipes, and handles the protocol. No transport config needed -- STDIO is the default for local servers.

Your server side is one line with FastMCP:

```python
server.run(transport="stdio")  # 'server' is your FastMCP instance
```

Why STDIO works for local builds: no network stack, no open ports, no firewall configuration. Just pipes, which every major language handles natively -- Python, TypeScript, Go.

When to use STDIO vs. Streamable HTTP:

- STDIO: Claude Code, Cursor, VS Code, personal CLI tools, single-machine setups.
- Streamable HTTP: multiple concurrent clients, remote access, cross-machine communication.

Start with STDIO. Switch when your use case outgrows a single machine.

---

Building production MCP systems: skool.com/ai-for-life-3967

ELI5 explanation:

Imagine your AI app (Cursor, VS Code) wants to talk to a little helper program you built. It opens that helper as a child process and they pass notes through three pipes:

- stdin = AI sends questions in
- stdout = helper sends answers back
- stderr = helper writes its diary (logs, debug stuff)

The trap everyone falls into: stdout is only for clean answers in the agreed format (JSON-RPC). The second you print("debugging...") to stdout, you’ve dumped your diary into the answer pipe and the AI chokes. Logs go to stderr. Always. No exceptions.

Why this setup is great for stuff running on your own machine: