“RIP OpenClaw” has been making the rounds in the agent-builder crowd lately—less as an obituary, more as a signal: people want agents that are actually usable day-to-day without handing them the keys to everything.
What follows is a sysadmin-friendly reference design that stitches together:
- Claude Opus 4.6 as the reasoning model
- n8n as the workflow/runtime/orchestrator
- DesktopCommanderMCP (Docker) to give the agent controlled local capabilities (terminal + file operations)
- mcp-proxy + Cloudflare Tunnel to expose only what you intend, over an outbound-only tunnel (no inbound ports, no public IP on your laptop)
The goal: an agent that can be “always reachable” from your phone (Telegram/Slack), while keeping hard guardrails around secrets, filesystem access, and destructive actions.
The architecture in one diagram
Phone (Telegram/Slack)
      |
      v
n8n (VPS) -----> Claude Opus 4.6 (API)
      |
      |  (tool call over HTTPS)
      v
Cloudflare Tunnel (outbound)
      |
      v
mcp-proxy ---> DesktopCommanderMCP (Docker on laptop/desktop)
                     |
                     +-- Mounted folders only
                     +-- Terminal inside constrained environment
Why this setup is appealing to sysadmins
1) Least privilege is enforced by design (not by “please behave” prompts)
DesktopCommanderMCP is explicitly intended to provide terminal control, filesystem search, and diff-based editing through an MCP server.
If you only mount ~/SharedWithAgent (or a specific repo), then that’s the universe of files the agent can see.
2) You can keep secrets out of the model context
n8n can hold credentials (Gmail/Drive/Notion/Stripe/etc.) and only expose high-level tool functions to the model. Telegram triggers and integrations are a first-class n8n pattern.
The model never needs to see API keys; n8n executes the authenticated calls.
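Concretely, the only thing the model sees is a functional tool definition. A hypothetical send_email tool, written in Anthropic's tool-definition format, could look like this; nothing credential-shaped appears in it, because n8n's stored Gmail credential does the authenticated send:

{
  "name": "send_email",
  "description": "Send an email through the n8n Gmail integration. Destructive: requires approval.",
  "input_schema": {
    "type": "object",
    "properties": {
      "to": { "type": "string", "description": "Recipient address" },
      "subject": { "type": "string" },
      "body": { "type": "string" }
    },
    "required": ["to", "subject", "body"]
  }
}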
3) Remote access without opening your laptop to the internet
Cloudflare Tunnel is designed to publish services through Cloudflare using outbound connectivity from cloudflared, avoiding direct inbound exposure.
Step 1: Run DesktopCommanderMCP in Docker
DesktopCommanderMCP is typically run as a containerized MCP server so you can control what it can touch: the mounts and port bindings you give the container define the limits of its terminal and file operations.
Minimal Docker Compose example (laptop/desktop):
services:
  desktopcommander:
    image: ghcr.io/wonderwhy-er/desktopcommander-mcp:latest
    container_name: desktopcommander-mcp
    restart: unless-stopped
    volumes:
      - /Users/you/SharedWithAgent:/workspace:rw
    # Recommended: don’t run privileged; keep defaults tight.
    # Consider read-only mounts for anything you don't want modified.
    # read_only: true   # optional, but limits writes (you may want selective writes)
    ports:
      - "127.0.0.1:8765:8765"   # bind locally only
Key sysadmin move: mount a single working directory. If the agent doesn’t need your home directory, don’t mount it.
Step 2: Permission model = mounted folders + explicit tools
Treat DesktopCommander like a “capability appliance”:
- Filesystem scope: only what you mount (/workspace)
- Write access: allow writes only where you actually want changes
- Command execution: assume anything runnable can be destructive; constrain where possible (container user, dropped caps, AppArmor/SELinux if you’re on Linux); a compose hardening sketch follows this list
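Those constraints translate into a handful of container options. A sketch of what you might add to the Step 1 compose service (the exact capabilities the DesktopCommanderMCP image needs may vary, so test before locking it down this hard):

services:
  desktopcommander:
    # image/volumes/ports as in Step 1, plus:
    user: "1000:1000"            # run as an unprivileged user inside the container
    cap_drop:
      - ALL                      # drop every Linux capability the tools don't need
    security_opt:
      - no-new-privileges:true   # block privilege escalation via setuid binaries
    pids_limit: 256              # cap runaway process spawning
    mem_limit: 1g                # bound memory used by agent-launched commands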
A useful pattern is “one writable workdir + read-only reference mounts” (sketched in compose form after the list):
- /workspace (rw) for outputs, patches, generated files
- /repos (ro) for reference code you don’t want mutated
- secrets: not mounted at all
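In compose terms (paths illustrative):

    volumes:
      - /Users/you/SharedWithAgent:/workspace:rw   # the only place the agent can write
      - /Users/you/reference-repo:/repos:ro        # readable for context, never modified
      # ~/.ssh, ~/.aws, password stores: simply not mounted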
Step 3: Expose the MCP server securely (mcp-proxy + Cloudflare Tunnel)
Option A: Cloudflare Tunnel directly to a local-only endpoint
With cloudflared, you can publish a local service over Cloudflare without opening inbound ports on your router.
High level approach:
- Keep DesktopCommander listening on 127.0.0.1
- Run cloudflared locally
- Create a tunnel + hostname mapping to your local port (example config below)
- Lock it down:
  - Cloudflare Access (SSO / OTP)
  - IP allowlists
  - Service tokens (machine-to-machine)
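A minimal config sketch (hostname and tunnel name are placeholders; you create the tunnel with cloudflared tunnel create and start it with cloudflared tunnel run):

# ~/.cloudflared/config.yml
tunnel: agent-tunnel
credentials-file: /Users/you/.cloudflared/agent-tunnel.json
ingress:
  - hostname: mcp.example.com
    service: http://127.0.0.1:8765   # the local-only port from Step 1
  - service: http_status:404         # catch-all: anything else gets a 404

Then attach a Cloudflare Access application (SSO for you, a service token for n8n) to that hostname so the URL alone is never enough to reach the MCP server.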
Option B: Put mcp-proxy in front (recommended when clients/tools need a specific interface)
People add an MCP proxy to normalize how “tool calls” reach the MCP server: it smooths over transport/protocol differences and gives you one place to wrap auth and logging. (Project specifics vary; treat it as a thin “gateway” layer.)
Operational win: you can add:
- request logging
- rate limiting
- auth checks
- allow/deny rules per tool
Step 4: Build the n8n “agent” workflow (Opus 4.6 + tools)
Claude Opus 4.6 is the reasoning model in this design; n8n calls it through the Anthropic API and passes it the high-level tool definitions.
In n8n, you’re basically building an orchestrator that:
- Receives a message (Telegram/Slack)
- Builds context (state + memory)
- Asks the model what to do
- Executes tools outside the model
- Requires confirmation for risky actions
- Replies with results
Workflow outline (Telegram example)
Nodes:
- Telegram Trigger (incoming messages)
- Pre-processor (normalize text, parse commands, detect intent)
- Policy Gate (rules engine: what’s allowed without confirmation)
- LLM Call (Claude Opus 4.6)
- Tool Router (decide which tool to call: DesktopCommander, Gmail, Drive, Notion…; see the response sketch after this list)
- Confirm Step (Telegram “Approve / Deny” buttons for destructive operations)
- Executor (runs the tool call)
- Memory Write-back (store outcomes + summaries)
- Telegram Reply
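The Tool Router’s job is small: if Claude’s reply contains a tool_use content block, map the tool name to the right n8n branch and pass the input along as untrusted data. A trimmed response in Anthropic’s tool-use format looks roughly like:

{
  "stop_reason": "tool_use",
  "content": [
    { "type": "text", "text": "I'll summarize the workspace and send it." },
    {
      "type": "tool_use",
      "id": "toolu_01AbCdEf",
      "name": "send_email",
      "input": {
        "to": "you@example.com",
        "subject": "Workspace summary",
        "body": "..."
      }
    }
  ]
}

The Policy Gate and Confirm Step decide whether that input ever gets executed.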
The guardrails that make it “feel” secure
Hard constraints (enforced technically):
- Tools only exist if you wired them into n8n
- Files only exist if you mounted them
- Remote reachability only exists if your tunnel + access policy allows it
Soft constraints (still useful):
- The model is instructed to request confirmation for:
  - sending email
  - deleting files
  - running package installs
  - modifying infra state
“Ralph Wiggum loop” (translated for engineers)
People use that phrase to mean a forced self-check loop:
- generate a plan
- sanity-check the plan against policy + allowed tools
- run one step
- re-check state
- continue
In practice, add a node that re-validates (a Code-node sketch follows the list):
- “Is this action allowed?”
- “Is the target path inside /workspace?”
- “Is this command in a denylist?”
- “Does this require human approval?”
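A minimal version of that re-validation, written the way it might appear in an n8n Code node (JavaScript, “Run Once for Each Item” mode). The input shape is this workflow’s own convention, set by the Tool Router, not an n8n built-in:

// Expected input item: { toolName: "run_command", args: { path: "/workspace/x", command: "ls -la" } }
const DENYLIST = [/rm\s+-rf/i, /curl[^|]*\|\s*(ba)?sh/i, /mkfs/i, /dd\s+if=/i];
const MUTATING_TOOLS = ['send_email', 'delete_file', 'run_command'];

const { toolName, args = {} } = $json;
const issues = [];

// 1) Paths must stay inside the mounted workspace; reject traversal attempts outright.
if (args.path && (!args.path.startsWith('/workspace/') || args.path.includes('..'))) {
  issues.push(`path outside workspace: ${args.path}`);
}

// 2) Commands are screened against a denylist of obviously destructive patterns.
if (args.command && DENYLIST.some((re) => re.test(args.command))) {
  issues.push(`denylisted command: ${args.command}`);
}

// 3) Anything that mutates state is routed to the Telegram approval step.
const needsApproval = MUTATING_TOOLS.includes(toolName);

return { json: { ...$json, allowed: issues.length === 0, needsApproval, issues } };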
Security checklist sysadmins will care about
Lock down n8n like it’s production infrastructure
- Put it behind TLS
- Require auth (at minimum)
- Don’t expose admin UI broadly
- Store credentials encrypted and rotate them
- Back up the n8n database and encryption key (compose sketch below)
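A few of those items map directly to deployment settings. A minimal compose sketch for the VPS, assuming a reverse proxy (Caddy, Traefik, nginx) in front handling TLS and auth:

services:
  n8n:
    image: docker.n8n.io/n8nio/n8n:latest
    restart: unless-stopped
    ports:
      - "127.0.0.1:5678:5678"   # reachable only via the local reverse proxy that terminates TLS + auth
    environment:
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}   # pin it explicitly; back it up alongside the database
    volumes:
      - n8n_data:/home/node/.n8n   # holds the database and encrypted credentials
volumes:
  n8n_data: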
Make tunnels boring (that’s the point)
Cloudflare Tunnel makes it easy to connect services without directly exposing your origin, but you still need an Access policy (SSO/service tokens) if you don’t want your endpoint to rely on security by obscurity.
Assume the model will be tricked
Treat every inbound message as untrusted input:
- prompt-injection attempts
- data exfiltration
- “run this command, it’s safe” social engineering
Make your workflow resilient:
- denylist dangerous commands (rm -rf, curl | sh, credential dumping)
- require explicit approval for anything that mutates state
- keep execution in a sandbox when possible (a separate Docker container on the VPS; sketch below)
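For the sandbox point, a disposable execution container on the VPS can be as simple as the following sketch (the image, paths, and run.py are placeholders for whatever your jobs actually need):

services:
  sandbox-runner:
    image: python:3.12-slim      # placeholder runtime
    network_mode: none           # agent-run code gets no network at all
    read_only: true              # root filesystem is immutable
    tmpfs:
      - /tmp
    volumes:
      - ./jobs:/jobs:ro          # scripts staged by n8n, read-only
      - ./out:/out:rw            # the only writable output path
    command: ["python", "/jobs/run.py"]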
What you get at the end
If you implement this sanely, your agent can:
- respond to Telegram/Slack
- search and edit files in a specific mounted workspace
- run controlled terminal actions (ideally scoped to that workspace)
- call SaaS APIs through n8n integrations (without exposing secrets to the model)
- operate in longer loops (multi-step workflows) while still pausing for approvals
And importantly: you’ve moved from “agent as a magic bot” to “agent as an audited automation system with an LLM brain.”
