Infrastructure

MCP Code Executor

How Claude Code calls tools, routes requests through a proxy to backend servers, and executes code in a sandboxed MCP infrastructure

What is MCP?

Model Context Protocol (MCP) is an open standard that lets AI models talk to external tools and data sources in a structured way.

Instead of Claude writing a bash script and hoping it runs, MCP gives Claude a typed function call interface to databases, browsers, file systems, APIs, and more — with structured responses it can reason about.

On this server: 10 active MCP servers exposing 66+ tools are accessible through a single proxy endpoint. Claude reaches all of them through one MCP client.
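Under the hood, every MCP tool call is a JSON-RPC 2.0 request with method `tools/call`. A minimal sketch of the envelope an MCP client sends (the tool name and arguments below are illustrative):

```typescript
// Sketch: the JSON-RPC 2.0 envelope behind an MCP tool call.
// The tool name and arguments below are illustrative.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>,
): ToolCallRequest {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

const req = buildToolCall(1, "read_file", { path: "/workspace/config.json" });
console.log(JSON.stringify(req));
```

The structured `params` object is what gives Claude a typed interface instead of free-form shell commands.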

Claude Code CLI → MCP Code Executor → MCP Proxy → 10 backend servers

Architecture Overview

Claude Code CLI
Your terminal session
↓ stdio
mcp-code-executor
container · port 9091
TypeScript sandbox · RBAC · chat tools
↓ HTTP · mcp-net
mcp-proxy (TBXark)
port 9090 · /[server]/mcp
↓ stdio / HTTP
postgres
filesystem
playwright
minio
openmemory
+ 5 more

All containers on mcp-net Docker network • Code executor also on traefik-net for chat access
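The container wiring described above might look roughly like this in Docker Compose. Service names, ports, and network names come from this page; every other key is an assumption:

```yaml
# Sketch of the container wiring described above. Service names, ports, and
# network names are from this page; every other key is an assumption.
services:
  mcp-code-executor:
    ports: ["9091:9091"]
    networks: [mcp-net, traefik-net]   # traefik-net only for chat access
  mcp-proxy:
    ports: ["9090:9090"]
    networks: [mcp-net]

networks:
  mcp-net:
  traefik-net:
    external: true   # assumed to be created by the Traefik stack
```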

Available MCP Servers

filesystem
9 tools
read, write, list, search, move files • /workspace (r/w) • /tmp (write-only)
postgres
1 tool
execute_sql • primary database • postgres:5432
playwright
6 tools
navigate, screenshot, click, extract_text, fill_form, get_page_info
minio
9 tools
S3-compatible object storage • upload, download, list, metadata
openmemory
4 tools
semantic AI memory • add, search, list, delete_memory via mem0
n8n
6 tools
workflow automation • 400+ integrations • n8n:5678
arangodb
7 tools
multi-model DB • query, insert, update, collections • ai_memory DB
timescaledb
6 tools
time-series queries • hypertables, stats, describe_table
ib (paper)
10 tools
Interactive Brokers • market data, portfolio, contracts

Also: memory (knowledge-graph store, 9 tools), tradingview (8 tools), gemini-image (1 tool) • 10 active servers • 66+ total tools

The execute_code Tool

Claude Code does not call MCP servers directly. It calls one tool on the code executor, which handles everything else.

Step 1 — Claude Calls MCP Tool

mcp__code-executor__execute_code({
  "code": "const { read_file } = await import('/workspace/servers/filesystem/read_file.js');
           console.log(await read_file({ path: '/workspace/config.json' }));"
})

Step 2 — Sandbox Executes TypeScript

  • Runs in isolated tmpfs container
  • 1 CPU, 1 GB RAM, 5 min timeout
  • Imports generated tool wrapper
  • Wrapper calls MCP proxy via HTTP

Step 3 — Proxy Routes to Backend

  • HTTP POST to mcp-proxy:9090/filesystem/mcp
  • Proxy translates HTTP → MCP stdio
  • Backend server processes request
  • Response returns as JSON
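The wrapper-to-proxy hop can be sketched as a small HTTP client. The `/[server]/mcp` URL pattern and the `tools/call` body follow what this page describes; the helper names are assumptions:

```typescript
// Sketch of a generated tool wrapper calling the proxy over HTTP.
// The /[server]/mcp URL pattern is from this page; helper names are assumptions.
const PROXY_BASE = "http://mcp-proxy:9090";

function proxyUrl(server: string): string {
  return `${PROXY_BASE}/${server}/mcp`; // e.g. http://mcp-proxy:9090/filesystem/mcp
}

async function callTool(
  server: string,
  tool: string,
  args: Record<string, unknown>,
): Promise<unknown> {
  const res = await fetch(proxyUrl(server), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "tools/call",
      params: { name: tool, arguments: args },
    }),
  });
  return res.json(); // the proxy relays the backend's JSON-RPC response
}
```

On the other side of this POST, the proxy translates the HTTP body into stdio for the backend server and streams the response back.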

Step 4 — Result Returns to Claude

  • Sandbox captures stdout
  • 100KB output limit enforced
  • JSON response back to Claude CLI
  • Claude reasons over result
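Enforcing the 100 KB stdout cap might look like the helper below. The limit value is from this page; the function name and return shape are assumptions:

```typescript
// Sketch: capping sandbox stdout at the 100 KB limit before returning it.
// The limit is from this page; the helper name and return shape are assumptions.
const OUTPUT_LIMIT = 100 * 1024; // bytes

function capOutput(stdout: string): { output: string; truncated: boolean } {
  if (Buffer.byteLength(stdout, "utf8") <= OUTPUT_LIMIT) {
    return { output: stdout, truncated: false };
  }
  const capped = Buffer.from(stdout, "utf8").subarray(0, OUTPUT_LIMIT).toString("utf8");
  return { output: capped, truncated: true };
}
```

Measuring in bytes rather than characters matters here, since multi-byte UTF-8 output would otherwise slip past the limit.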

Role-Based Access Control

One container serves both admin and developer users. Access is controlled by API key, not by running separate containers.

mcp-wrapper.sh (host) → detects Linux group → reads key file → docker exec with key → server filters tools by role

Admin Role

  • All 10 MCP tools
  • All MCP servers (wildcard)
  • filesystem, postgres, playwright
  • minio, n8n, arangodb, memory
  • openmemory, ib, timescaledb
  • Full chat access (send/read/who)

Developer Role

  • All 10 MCP tools
  • 6 MCP servers only
  • postgres, playwright, openmemory
  • minio, ib, timescaledb
  • No filesystem, n8n, arangodb, memory
  • Chat access included

Key files: secrets/code-executor-admin.key (administrators group) • secrets/code-executor-developer.key (developers group)
Role config: roles.json mounted read-only in container • Rotate keys: edit roles.json → restart container (no rebuild)
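The role-to-server filtering could be modeled as below. The key file paths and the per-role server lists come from this page; the roles.json shape and the function names are assumptions:

```typescript
// Sketch of role-based server filtering. The key file paths and server lists
// come from this page; the roles.json shape and function names are assumptions.
type RoleConfig = { apiKeyFile: string; servers: string[] }; // "*" = all servers

const roles: Record<string, RoleConfig> = {
  admin: {
    apiKeyFile: "secrets/code-executor-admin.key",
    servers: ["*"],
  },
  developer: {
    apiKeyFile: "secrets/code-executor-developer.key",
    servers: ["postgres", "playwright", "openmemory", "minio", "ib", "timescaledb"],
  },
};

function allowedServers(role: string, allServers: string[]): string[] {
  const cfg = roles[role];
  if (!cfg) return []; // unrecognized key → no access
  return cfg.servers.includes("*")
    ? allServers
    : allServers.filter((s) => cfg.servers.includes(s));
}
```

Because the wildcard and the explicit lists live in one read-only config, rotating a key or narrowing a role is a config edit plus a container restart, not a rebuild.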

Summary

One Entry Point

Claude calls execute_code on a single container. All 10 MCP servers are accessible through that one tool.

Sandboxed Execution

Code runs in an isolated tmpfs container. 1 CPU, 1 GB RAM, 5-minute timeout, no internet access, no privilege escalation.

Role-Based Access

Admins get all servers. Developers get 6. One container, two access levels, controlled by API key and Linux group membership.

Token Efficient

Progressive disclosure: load tool names only (245 tokens) or full details on demand. 97% token reduction vs loading all tools upfront.
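Progressive disclosure can be sketched as a two-level listing: names only by default, full schemas on demand. The shapes below are illustrative, not the executor's actual API:

```typescript
// Sketch of progressive disclosure: surface tool names by default (cheap in
// tokens), return full schemas only when requested. Shapes are illustrative.
interface ToolInfo {
  name: string;
  description: string;
  inputSchema: Record<string, unknown>;
}

function listTools(tools: ToolInfo[], detail: "names" | "full"): string[] | ToolInfo[] {
  return detail === "names" ? tools.map((t) => t.name) : tools;
}

const tools: ToolInfo[] = [
  { name: "read_file", description: "Read a file", inputSchema: { path: "string" } },
  { name: "execute_sql", description: "Run SQL", inputSchema: { query: "string" } },
];
console.log(listTools(tools, "names")); // names only: a fraction of the tokens
```

A name list is a few tokens per tool, while each full schema can run to hundreds, which is where the headline reduction comes from.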

Chat Integration

Built-in chat_send, chat_read, chat_who tools route through the AI Agent Chat gateway to Matrix.

10 Active Servers

postgres, filesystem, playwright, minio, openmemory, n8n, arangodb, timescaledb, ib, tradingview — 66+ tools total.