What does it actually look like to work with a personal AI agent day-to-day? Real workflows combining MCP servers and OpenClaw skills, drawn from 4 weeks of production usage. The good, the unexpected, and the genuinely useful.
After building both OpenClaw skills and MCP servers, what did I learn about the right abstraction layer? And where are personal AI agents heading in the next 5 years? A reflection on tooling, standards, and the shift from AI-as-service to AI-as-infrastructure.
How to build Model Context Protocol servers that actually work in production. A deep dive into auto-generating tools from API routes, permission models, and why 13 small tools beat one smart tool.
Why I moved my AI agent stack from Kubernetes to a bare-metal laptop, what broke along the way, and the 98% success rate that followed. A technical journey through OpenClaw, Ollama, and production AI infrastructure.