Sandboxed AI Agents.
Two Layers Deep.
Run NVIDIA NemoClaw on Mac Mini with dual-layer sandboxing. macOS user isolation + container security.
Most NemoClaw guides only cover the container sandbox. This guide adds an outer layer — a dedicated, non-admin macOS user account — for true defense-in-depth. Fully self-contained. An AI agent given just the GitHub URL can follow it end-to-end.
Why This Guide Exists
Standard NemoClaw Setup
- Single layer of protection — container sandbox only
- Agent runs under your main user account with full home directory access
- Assumes Docker Desktop — heavy, GUI-dependent, not ideal for headless
- No hardware-to-model guidance — guessing which Nemotron fits your machine
- One inference path — typically locked to a single backend
This Guide
- Dual-layer security — macOS user isolation + container sandbox
- Dedicated non-admin user via dscl — no sudo, no cross-user access
- Colima instead of Docker Desktop — lightweight, CLI-only, open source
- Hardware decision matrix — exact model for your specific RAM/VRAM config
- Three inference backends — Ollama Cloud, Ollama Local, or LM Studio
Dual-Layer Security Model
macOS User Isolation
A dedicated non-admin macOS user account created via dscl. The agent cannot sudo, cannot access other users’ files, and cannot modify system settings — even if it escapes the container.
NemoClaw Container Sandbox
NemoClaw’s OpenShell runtime wraps every agent session in a hardened container with Landlock filesystem restrictions, seccomp syscall filtering, and network namespace isolation.
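Whether the syscall-filtering layer is active can be spot-checked from inside a running session. This is a generic Linux `/proc` check, not a NemoClaw-specific command, so treat it as a sketch:

```shell
# Run inside the sandboxed container session (Linux).
# The Seccomp field in /proc/self/status reads 2 when a seccomp
# filter is active and 0 when no filter is applied.
grep Seccomp /proc/self/status
```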
Flexible Inference Backends
Ollama Cloud
Easiest path — no local model downloads. Inference is routed through Ollama’s cloud. Good for getting started fast.
Cost: Free / $20 Pro / $100 Max per month
Inference: Cloud-routed
Hardware: Any (no GPU needed)
Ollama Local
Full privacy — all inference happens on your machine. Requires sufficient RAM for your chosen model.
Cost: Free
Inference: Fully on-device
Hardware: 16–96+ GB RAM depending on model
LM Studio
GUI-based local inference with an OpenAI-compatible API. Visual model management with the same privacy as Ollama Local.
Cost: Free
Inference: Fully on-device
Hardware: Same as Ollama Local
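With LM Studio, any OpenAI-style client can talk to its local server, which listens on port 1234 by default. A minimal sketch; the model id below is a placeholder, so substitute the name shown in LM Studio's own model list:

```shell
# Chat request against LM Studio's OpenAI-compatible endpoint.
# "nemotron-3-nano" is a placeholder model id.
payload='{"model": "nemotron-3-nano", "messages": [{"role": "user", "content": "Say hello"}]}'

curl -s http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$payload" || echo "LM Studio server is not running"
```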
Hardware-to-Model Matrix
Instead of assuming everyone has 96 GB of VRAM, this guide maps specific hardware configurations to the right Nemotron model.
| Hardware | Recommended Model | Download |
|---|---|---|
| Any Mac Mini (16 GB+) | Nemotron 3 Super via cloud | None |
| Mac Mini M4 (16–32 GB) | Nemotron 3 Nano 4B | 2.8 GB |
| Mac Mini M4 (32 GB) | Nemotron 3 Nano 30B (MoE, 3.5B active) | 24 GB |
| Mac Mini M4 Pro (48–64 GB) | Nemotron 3 Nano 30B with headroom | 24 GB |
| Mac Studio / 96+ GB | Nemotron 3 Super 120B (MoE, 12B active) | 87 GB |
| VPS with NVIDIA GPU (24+ GB VRAM) | Nemotron 3 Nano 30B | 24 GB |
| VPS with NVIDIA GPU (96+ GB VRAM) | Nemotron 3 Super 120B | 87 GB |
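The matrix above can be collapsed into a small shell helper. The thresholds simply mirror the table; this is an illustration, not part of the official setup:

```shell
# Map unified memory (GB) to the Nemotron model from the matrix above.
pick_model() {
  if [ "$1" -ge 96 ]; then
    echo "Nemotron 3 Super 120B"
  elif [ "$1" -ge 32 ]; then
    echo "Nemotron 3 Nano 30B"
  else
    echo "Nemotron 3 Nano 4B"
  fi
}

pick_model 16    # → Nemotron 3 Nano 4B
pick_model 64    # → Nemotron 3 Nano 30B
pick_model 128   # → Nemotron 3 Super 120B
```

On a Mac you can feed it the detected memory directly: `pick_model $(($(sysctl -n hw.memsize) / 1073741824))`.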
Setup Flow
Choose Inference Backend
Ollama Cloud (free, easiest), Ollama Local (private), or LM Studio (GUI). The hardware decision matrix tells you exactly which Nemotron model fits your machine.
# Check your hardware
system_profiler SPHardwareDataType | grep "Memory"
Install Host Prerequisites
Homebrew, Colima (lightweight Docker runtime), Docker CLI, and your chosen inference backend. Colima is CLI-only, open source, and better suited for headless setups than Docker Desktop.
brew install colima docker
colima start --cpu 4 --memory 8
Create Sandboxed macOS User
Layer 1: A dedicated, non-admin macOS user account via dscl. This user cannot sudo, cannot access other users’ files, and cannot modify system settings.
# Create a locked-down agent user. Pick an unused UniqueID
# (list existing ones with: dscl . -list /Users UniqueID).
sudo dscl . -create /Users/eve
sudo dscl . -create /Users/eve UserShell /bin/zsh
sudo dscl . -create /Users/eve UniqueID 1001
sudo dscl . -create /Users/eve PrimaryGroupID 20
sudo dscl . -create /Users/eve NFSHomeDirectory /Users/eve
sudo createhomedir -c -u eve
Install NemoClaw as Agent User
Layer 2: NemoClaw’s installer runs as the agent user, setting up the container sandbox with OpenShell’s Landlock, seccomp, and network namespace isolation.
# Switch to agent user and install
su - eve
curl -fsSL https://install.nemoclaw.com | bash
Verify & Monitor
Connect to the sandbox, test inference, and monitor with OpenShell’s built-in TUI. Confirm both security layers are active and the agent can reach only approved endpoints.
# Check NemoClaw status
nemoclaw status
# Connect to sandbox
nemoclaw shell
# Monitor in real-time
nemoclaw monitor
Agent-Executable
Paste the repo URL and your machine specs into any AI agent. The AGENT-SETUP.md is designed so an agent can read it and execute every step sequentially.
Set up a NemoClaw agent on my Mac Mini M4 Pro with 64 GB RAM
following https://github.com/bcharleson/nemoclaw-macmini-setup
— I want to run Nemotron locally for full privacy.
Agent name: EVE, username: eve.
Six Claude Skills Included
Prompt-engineering skills that can be loaded into any LLM — not just Claude. Each one is a standalone file in the repo.
Setup Assistant
Interactive NemoClaw setup with troubleshooting. Walks through every step based on your hardware.
Upgrade Assistant
Guided updates for NemoClaw, Ollama, and Colima. Handles version checks and rollbacks.
Sales Outreach
Personalized cold email and LinkedIn sequences using AIDA, PAS, and BAB frameworks.
Content Marketing
SEO blog posts, social media, and newsletters with templates. Optimized for search and engagement.
Customer Research
Buyer personas, ICP development, and Jobs-To-Be-Done analysis from market signals.
Competitive Analysis
Competitor profiles, battlecards, SWOT, and win/loss analysis for positioning.
Key Technologies
NemoClaw
NVIDIA’s open-source OpenClaw wrapper with security guardrails (Apache 2.0)
OpenShell
Sandboxed runtime: Landlock FS, seccomp, network namespace isolation
Nemotron 3 Super
120B param MoE (12B active), #1 on PinchBench, 1M context window
Ollama
Local and cloud inference — free cloud tier, fully private local mode
LM Studio
GUI-based local inference with OpenAI-compatible API
Colima
Lightweight Docker runtime for macOS — CLI-only, open source, headless-ready
Repo Structure
nemoclaw-macmini-setup/
├── README.md — Architecture diagram, prerequisites, quick start
├── AGENT-SETUP.md — Full walkthrough (7 parts, ~30-45 min)
├── SECURITY.md — Dual-layer security model with threat scenarios
├── claude-skill.md — Claude skill: interactive setup assistant
├── claude-upgrade-skill.md — Claude skill: upgrade/update assistant
├── claude-sales-outreach-skill.md — Claude skill: sales outreach
├── claude-content-marketing-skill.md — Claude skill: SEO content creation
├── claude-customer-research-skill.md — Claude skill: buyer personas & JTBD
├── claude-competitive-analysis-skill.md — Claude skill: competitive intel & battlecards
├── LICENSE — MIT
└── .gitignore
Want us to set this up for your business?
We build and manage sandboxed AI agents for B2B teams — NemoClaw, OpenClaw, and custom agent infrastructure wired into your stack.
Dual-layer security, hardware-matched models, and inference backends configured for your use case. Book a Discovery Call and we’ll scope it for your team.
Dual-layer sandbox setup
macOS isolation + container security
Model & inference config
Hardware-matched Nemotron deployment
Agent skill wiring
Sales, marketing, and research skills live
Free 30-minute call · No commitment · We’ll scope the build for your specific infrastructure