How Claude Code calls tools, proxies to backend servers, and executes code across a sandboxed MCP infrastructure
ai-servicers.com
Model Context Protocol (MCP) is an open standard that lets AI models talk to external tools and data sources in a structured way.
Instead of Claude writing a bash script and hoping it runs, MCP gives Claude a typed function call interface to databases, browsers, file systems, APIs, and more — with structured responses it can reason about.
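Under the hood, MCP messages are JSON-RPC 2.0; a tool invocation is a `tools/call` request carrying the tool name and typed arguments. A minimal sketch (the tool name and arguments here are illustrative, not a specific server's schema):

```javascript
// Build the JSON-RPC 2.0 message an MCP client sends to invoke a tool.
// "read_file" and its arguments are illustrative examples.
function buildToolCall(id, name, args) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const req = buildToolCall(1, "read_file", { path: "/workspace/config.json" });
console.log(JSON.stringify(req, null, 2));
```

Because the request and response are structured data rather than free-form shell output, the model can reason about results field by field.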
On this server, 10 active MCP servers exposing 66+ tools sit behind a single proxy endpoint; Claude reaches all of them through one MCP client.
Claude Code CLI → MCP Code Executor → MCP Proxy → 10 backend servers
All containers on mcp-net Docker network • Code executor also on traefik-net for chat access
Also: memory (KG store, 9 tools), tradingview (8 tools), gemini-image (1 tool) • 10 active servers • 66+ total tools
The execute_code Tool

Claude Code does not call MCP servers directly. It calls one tool on the code executor, which handles everything else.
mcp__code-executor__execute_code({
  "code": "const { read_file } = await import('/workspace/servers/filesystem/read_file.js'); console.log(await read_file({ path: '/workspace/config.json' }));"
})
mcp-proxy:9090/filesystem/mcp

One container serves both admin and developer users. Access is controlled by API key, not by running separate containers.
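Each backend server is reachable at its own path on the proxy, following the `mcp-proxy:9090/<server>/mcp` pattern shown above. A hypothetical helper that derives the endpoint from a server name (the base URL and path layout are taken from the diagram; the helper itself is illustrative):

```javascript
// Hypothetical endpoint helper: one path per backend server behind
// the single proxy, per the mcp-proxy:9090/<server>/mcp pattern.
const PROXY_BASE = "http://mcp-proxy:9090";

function serverEndpoint(server) {
  return `${PROXY_BASE}/${server}/mcp`;
}

console.log(serverEndpoint("filesystem")); // http://mcp-proxy:9090/filesystem/mcp
console.log(serverEndpoint("postgres"));   // http://mcp-proxy:9090/postgres/mcp
```

Routing by path keeps the client configuration to a single base URL while still letting the proxy dispatch to all 10 backends.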
mcp-wrapper.sh (host) → detects Linux group → reads key file → docker exec with key → server filters tools by role
secrets/code-executor-admin.key (administrators group) • secrets/code-executor-developer.key (developers group)
roles.json mounted read-only in container • Rotate keys: edit roles.json → restart container (no rebuild)
Claude calls execute_code on a single container. All 10 MCP servers are accessible through that one tool.
Code runs in an isolated tmpfs container. 1 CPU, 1 GB RAM, 5-minute timeout, no internet access, no privilege escalation.
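The sandbox constraints above can be expressed declaratively. A hypothetical docker-compose fragment matching the stated limits (the actual deployment's configuration is not shown in this document, so service and network names are assumptions; the 5-minute timeout would be enforced by the executor itself rather than by Docker):

```yaml
# Hypothetical compose fragment mirroring the stated sandbox limits.
services:
  mcp-code-executor:
    cpus: "1.0"                      # 1 CPU
    mem_limit: 1g                    # 1 GB RAM
    tmpfs:
      - /workspace                   # code runs on tmpfs, wiped with the container
    security_opt:
      - no-new-privileges:true       # no privilege escalation
    networks:
      - mcp-net                      # internal-only network: no internet egress
```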
Admins get all servers. Developers get 6. One container, two access levels, controlled by API key and Linux group membership.
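The role check can be sketched as a lookup against the mounted roles.json: API key → role, role → allowed servers. The structure below is illustrative (the actual roles.json schema is not shown in this document, and which six servers developers see is an assumption):

```javascript
// Sketch of API-key-based role filtering, assuming a roles.json-like
// mapping of key -> role and role -> allowed servers. Keys, schema,
// and the developer server subset are illustrative assumptions.
const roles = {
  keys: { "admin-key-123": "administrators", "dev-key-456": "developers" },
  servers: {
    administrators: ["postgres", "filesystem", "playwright", "minio", "openmemory",
                     "n8n", "arangodb", "timescaledb", "ib", "tradingview"],
    developers: ["postgres", "filesystem", "playwright", "minio", "openmemory", "n8n"],
  },
};

function allowedServers(apiKey) {
  const role = roles.keys[apiKey];
  return role ? roles.servers[role] : []; // unknown key -> no access
}

console.log(allowedServers("admin-key-123").length); // 10
console.log(allowedServers("dev-key-456").length);   // 6
```

Filtering at lookup time is what lets one container serve both roles: the tool list each caller sees is derived per request from the key, not baked into the image.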
Progressive disclosure: load tool names only (245 tokens) or full details on demand. 97% token reduction vs loading all tools upfront.
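Progressive disclosure can be sketched as two listing modes: bare names by default, full definitions only when asked. The tool entries below are illustrative placeholders, not the actual registry:

```javascript
// Sketch of progressive disclosure: tool names only by default,
// full schemas on demand. Entries are illustrative placeholders.
const tools = [
  { name: "read_file",  description: "Read a file",  inputSchema: { type: "object" } },
  { name: "write_file", description: "Write a file", inputSchema: { type: "object" } },
];

function listTools({ detail = false } = {}) {
  return detail ? tools : tools.map((t) => t.name); // names only unless asked
}

console.log(listTools());                 // names only
console.log(listTools({ detail: true })); // full entries with schemas
```

Sending only names keeps the model's context small (the 245 tokens cited above); the full schema for a tool is fetched only when the model decides to use it.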
Built-in chat_send, chat_read, chat_who tools route through the AI Agent Chat gateway to Matrix.
postgres, filesystem, playwright, minio, openmemory, n8n, arangodb, timescaledb, ib, tradingview — 66+ tools total.