Back to the Terminal
I've built MCP servers used by thousands of agents worldwide. I'm still bullish on MCP. But the destination was never the protocol — it was always the terminal.
MCP evolved into connectors. Mini-apps emerged inside chat. Autonomous agents exploded. And all of it is converging on one place: the CLI — where compute started, and where AI agents do their best work.

I Built MCPs Used by Thousands. Here's What I've Learned.
I've built many MCP servers and integrations over the past two years, including some that are widely used, like the Instantly.ai MCP server, which serves thousands of users and agents around the world. I've seen firsthand how these protocols let AI reach into external systems, pull context, and take action.
I'm still very bullish on MCP. But MCP as we knew it — a novel protocol for tool calling — has evolved into something broader. And where it's heading tells us something important about the future of how AI agents actually operate.
From MCP to Connectors to the CLI
MCP didn't die — it grew up. What started as a protocol for AI tool-calling has evolved through distinct phases, each one bringing us closer to where agents actually belong.
MCP Evolved into “Connectors”
The core idea behind MCP — giving AI models structured access to external systems — didn't go away. It just got a more honest name. Connectors are what MCP was always becoming: the ability for an AI agent or model to plug into your CRM, your database, your file system, your email platform, your analytics — anything external.
This is valuable. It's necessary. And every major platform is building it — Claude, ChatGPT, Gemini, Cursor, you name it. But connectors are the plumbing, not the destination.
Mini-Apps: Everything Inside the Chat
We also saw the rise of “mini-apps” within AI chat interfaces. ChatGPT's “back to the chat” pattern, Claude's Artifacts, Gemini's Canvas: all built on the idea that you should be able to do anything you want inside the chatbot workflow without ever leaving the conversation.
This is genuinely powerful for humans. You keep your full context. You don't lose your train of thought. You can build spreadsheets, write code, generate images, browse the web — all without switching tabs. The use cases are endless, and it enables users to do dramatically more within a single session.
“Mini-apps are a human-centric innovation. They make AI accessible. But agents don't need accessibility; they need efficiency.”
Mini-apps optimize for the human operator. But when the operator is no longer human — when it's an autonomous agent executing a multi-step workflow — the chat window becomes a bottleneck, not a benefit.
Then OpenClaw Came Out
When OpenClaw launched, and fully agentic autonomous workflows started to boom, the landscape shifted. Swarms of agents executing complex multi-step tasks. Research agents feeding into writing agents feeding into deployment agents. Entire workflows running end-to-end without human intervention.
And where do all of these agents live? Where do they do their best work?
The terminal. The CLI.
$ openclaw run --agent research-agent --task "analyze competitor landscape"
$ openclaw run --agent writer-agent --input ./research-output.json
$ openclaw run --agent deploy-agent --target production
# No browser. No GUI. No chat window.
# Just the terminal — where compute lives.
The Terminal Is Where It All Started — and Where It's All Going
This is the part that most people miss. The command line interface isn't a step backward — it's the original foundation of compute. Before the visual UI came along to make software accessible to humans, this was how all software was operated. The terminal is the core.
The GUI was a translation layer — a way to take the raw power of compute and present it visually so that humans could interact with it. Buttons, windows, icons, menus — all invented because humans needed a visual map to navigate digital systems.
“AI doesn't need the visual map. AI is going back to the raw compute layer — and that layer is the terminal.”
No Context Window Bloat
CLI tools execute and return results without stuffing schemas and tool definitions into the prompt. The agent stays lean.
Proven Infrastructure
REST APIs and CLI tools have been battle-tested for decades. No new protocol to adopt — just the digital freeway that already exists.
Composable by Nature
Pipe output from one tool to another. Chain commands. Build workflows. The Unix philosophy was designed for this — decades before AI.
No UI Overhead
Agents don't need buttons, dropdowns, or visual interfaces. They need inputs and outputs. The terminal is pure signal, zero noise.
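The composability point is easiest to see in an actual pipeline. Here's a minimal sketch: the `research` and `writer` stages below are hypothetical stand-ins implemented as shell functions, but any real agent tool that reads stdin and writes stdout could take their place.

```shell
# Hypothetical two-stage agent pipeline. Each stage is a plain CLI
# filter that reads stdin and writes stdout, so stages compose with a pipe.
research() {
  # Stand-in for a real research tool emitting CSV findings.
  printf 'competitor_a,raised a round\ncompetitor_b,launched a product\n'
}

writer() {
  # Stand-in for a writing tool: turns CSV findings into bullet points.
  awk -F, '{ print "- " $1 ": " $2 }'
}

research | writer
```

No schemas in the prompt, no tool registry: the contract between stages is just text on stdin and stdout, which is the whole Unix philosophy at work.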
Agents Are No Different Than Humans
Think about it this way: a human has internal memory and intelligence. But to understand the world, we rely on external tools: the internet, books, magazines, databases. The brain is the processor, but it needs inputs from outside to do useful work.
AI agents are the same. The model is the brain — the intelligence, the reasoning engine. But it needs external systems and tools to pull in context for whatever task it's executing. APIs, databases, file systems, web scrapers — these are the agent's “internet and books.”
Even OpenClaw's Creator Agrees
Peter Steinberger, the creator of OpenClaw, has said he deliberately chose not to implement MCP, for a number of reasons. Despite the hype around MCP as a protocol, the reality is clear: the CLI and REST APIs have been around for decades, proven as the digital freeway over which software connects.
You don't need a new protocol when the existing infrastructure already works. REST APIs are battle-tested, universally supported, and understood by every system on the planet. The CLI is the native interface for programmatic execution. Together, they form the backbone that agents need.
REST APIs
The digital freeway. Proven for 20+ years. Every major service exposes one. No new protocol needed.
CLI Tools
Execute, pipe, chain, compose. The Unix philosophy predated AI by 50 years — and it's more relevant than ever.
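One way to picture the pairing: a REST response is just another stream a CLI pipeline can consume. In this sketch, the `$response` variable stands in for the output of a `curl` call against a hypothetical endpoint; a real workflow would pipe `curl` straight into the filter.

```shell
# $response stands in for: curl -s https://api.example.com/v1/leads
# (hypothetical endpoint); a real workflow pipes curl output straight in.
response='{"leads":[{"email":"b@example.com"},{"email":"a@example.com"}]}'

# Parse the JSON with python3 acting as a stdin/stdout filter,
# then hand the result to the next tool in the chain:
printf '%s' "$response" \
  | python3 -c 'import json,sys
for lead in json.load(sys.stdin)["leads"]:
    print(lead["email"])' \
  | sort
```

The REST call supplies the data; the shell supplies the composition. Neither side needed a new protocol to cooperate.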
The Full Cycle of Compute Interfaces
When you zoom out, the pattern is clear. We started with the terminal, built visual interfaces for humans, created conversational interfaces with AI — and now autonomous agents are going back to the terminal. The circle is complete.
The Terminal Era
Command line was the only interface. Programmers communicated directly with the machine. Pure, efficient, powerful — but limited to those who spoke the language.
The GUI Revolution
Visual interfaces made software accessible to everyone. Windows, Mac, browsers, smartphones — all designed to translate compute into something humans could see and click.
The Chat Interface
AI brought natural language as an interface. ChatGPT, Claude, Gemini — chat became the new browser. Mini-apps emerged inside the conversation.
Back to the Terminal
AI agents don't need GUIs or chat windows. They need direct access to compute. The terminal — the original interface — is the optimal runtime for autonomous AI.
What Comes After Programming Languages?
This also raises a bigger question. Humans have only recently settled on universal programming languages: Python, JavaScript, C, and others. These are frameworks we all agree to communicate and build on; they define the programmatic logic of how we want software to interact with data, fire triggers, and execute actions.
But if these are the languages AI models are universally trained on — what does the next generation of software actually look like?
It's possible that AI models start to develop their own methods for executing at the operating system level — communicating with each other, coordinating tasks, and returning results to the main operator (a human, or another agent) in ways we haven't imagined yet.
But we do know one thing for certain: the CLI and REST APIs will be around for a long while. And for now, they represent the easiest, most efficient way to enable AI agents to execute on tools — without overwhelming the context window, without unnecessary overhead, and without reinventing the wheel.
What This Means for Builders
If you're building tools for agents
Build a CLI first. Expose a REST API. Don't make them go through a UI.
If you're building autonomous workflows
Your agents should live in the terminal. Pipe outputs, chain tools, compose workflows in the shell.
If you're evaluating MCP vs. REST
MCP as connectors is valuable. But don't ignore decades of proven REST API infrastructure. Use both.
If you're thinking about the future
Programming languages are agreed-upon frameworks. What comes after them is unknown — but the CLI is the bridge.
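For the "build a CLI first" advice, here's a minimal sketch of what agent-friendly means in practice: read stdin, take simple arguments, write plain output, and signal failure through the exit code. The `summarize` name and its placeholder logic are hypothetical.

```shell
# Hypothetical agent-friendly tool: stdin in, stdout out, exit code for status.
summarize() {
  local max="${1:-3}"
  case "$max" in
    ''|*[!0-9]*) echo "usage: summarize [max-lines]" >&2; return 1 ;;
  esac
  head -n "$max"   # placeholder "summary": first N lines of input
}

printf 'alpha\nbeta\ngamma\ndelta\n' | summarize 2
```

An agent can discover this tool's contract from its usage message, compose it into a pipeline, and branch on its exit code — no button, dropdown, or chat window required.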
Ready to Build for the Agent Economy?
We help companies build agent-ready infrastructure — from CLI tools and REST APIs to fully autonomous AI workflows powered by OpenClaw.