The Shrinking Moat
In 2019, Rich Sutton published “The Bitter Lesson,” arguing that general methods that leverage computation consistently beat solutions built on hand-crafted human knowledge. Chess engines that search deeply beat engines encoding grandmaster knowledge. Speech recognition systems using statistical methods beat systems encoding linguistic rules.
The lesson is bitter because it means your clever engineering gets steamrolled by scale.
The Bitter Lesson Applied to Agents
The same logic applies to the current wave of AI agents. If you’re building agentic workflows today—chains of prompts, tool-calling orchestration, retrieval pipelines—you’re building something that will likely be obsolete. Agentic behavior will get baked into the next generation of models. The scaffolding you’re proud of will become unnecessary.
This doesn’t mean you shouldn’t build. It means you should be clear-eyed about what’s defensible and what isn’t.
The Shrinking Moat
Here’s what’s not defensible:
- Prompt engineering - Models get better at understanding intent; elaborate prompts become unnecessary
- Orchestration frameworks - Models learn to plan and decompose tasks themselves
- Fine-tuning on public data - If you can access it, so can everyone else
- Clever tool-calling patterns - This is the next thing to get absorbed into the model
The moat keeps shrinking. What seemed like valuable IP six months ago is now a commodity.
What’s Left: Proprietary Data
The only durable moat is data that only you have.
Not “data” in the abstract sense of “we have a data strategy.” Specific, proprietary data that no one else can access:
- Internal systems and their schemas
- Institutional knowledge captured in tickets, docs, Slack threads
- Domain-specific workflows that exist only inside your organization
- The messy reality of how your business actually operates
This is why an MCP (Model Context Protocol) strategy should focus on servers, not clients. Clients will proliferate and improve—that’s the commoditized layer. But only you can expose your internal systems. Only you can make your data legible to an LLM.
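To make the server-side point concrete, here is a minimal sketch using the Python MCP SDK’s FastMCP interface. The ticket-search tool and the SQLite database it wraps are hypothetical stand-ins for whatever internal system you actually own; the point is that the server is where your proprietary data gets exposed.

```python
# server.py - a minimal MCP server exposing a hypothetical internal ticket system.
# Assumes the Python MCP SDK is installed: pip install "mcp[cli]"
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-tickets")  # name shown to connecting clients

# Stand-in for a real system of record (Jira, Zendesk, a warehouse, ...).
DB_PATH = "tickets.db"  # hypothetical; swap in your actual data source


@mcp.tool()
def search_tickets(keyword: str, limit: int = 10) -> list[dict]:
    """Search internal support tickets by keyword; returns id, title, and status."""
    conn = sqlite3.connect(DB_PATH)
    try:
        rows = conn.execute(
            "SELECT id, title, status FROM tickets WHERE title LIKE ? LIMIT ?",
            (f"%{keyword}%", limit),
        ).fetchall()
    finally:
        conn.close()
    return [{"id": r[0], "title": r[1], "status": r[2]} for r in rows]


if __name__ == "__main__":
    # stdio transport by default: any MCP-capable client can connect.
    mcp.run()
```

Any MCP-capable client can connect to a server like this; the durable work is the mapping from your internal schema to tools and resources an LLM can actually use.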
Implications
If you’re building AI capabilities for an organization:
- Don’t invest in client-side tooling - Let Cursor, VS Code, and the rest fight that battle
- Don’t build elaborate agentic frameworks - They’ll be obsolete
- Do invest in exposing your proprietary data - Build the servers, the APIs, the integrations that make your unique data accessible
- Do invest in data quality - The better your metadata, the more legible your systems are to LLMs; a sketch of what that looks like follows below
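As a sketch of what “data quality” can mean in practice (again using the Python MCP SDK, with a hypothetical orders tool): FastMCP derives the tool description from the docstring and the input schema from the type hints, so the documentation you write is essentially what the model sees.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("orders")


@mcp.tool()
def orders_by_region(region_code: str, since: str) -> list[dict]:
    """Return orders placed on or after `since` for one sales region.

    `region_code` is a two-letter internal code (e.g. "NA", "EU").
    `since` is an ISO 8601 date, "YYYY-MM-DD".
    Each result has `order_id`, `amount_usd` (gross, in US dollars),
    and `placed_on` (ISO 8601 date). Cancelled orders are excluded.
    """
    # Hypothetical stand-in for a warehouse query; the docstring above is the
    # metadata the model relies on to call this tool correctly.
    return [{"order_id": "A-1001", "amount_usd": 249.00, "placed_on": since}]


if __name__ == "__main__":
    mcp.run()
```

Contrast that with an undocumented get_data(q) endpoint: same underlying data, but the model has nothing to reason with.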
The bitter pill: most of the “AI engineering” work being done today is temporary scaffolding. The permanent value is in the data layer.