TL;DR
API veteran Kevin Swiber unpacks why LLMs are changing how we interact with the web, why Anthropic's MCP protocol could be a game-changer for AI integration, and why the best infrastructure should be so reliable it becomes boring.
AI won't kill APIs
Kevin kicks off with a reality check: LLMs aren't replacing traditional web browsing, they're changing it. Many users now go straight to ChatGPT instead of Google, and that shift creates new challenges for content creators and businesses.
We're losing the entire experience of the web. People are asking ChatGPT instead of actually browsing websites.
This shift means APIs become even more critical as the backbone connecting AI systems to real-time data and services that LLMs can't generate on their own.
MCP: The protocol you haven't heard of yet
Anthropic's Model Context Protocol (MCP) caught Kevin's attention as a potential standardization layer for AI-to-system integration. Think of it as a universal adapter that lets AI models connect to any data source or service.
MCP is interesting because it's trying to solve the integration problem at the protocol level, not just the application level.
Kevin sees parallels to early API standards: the potential is huge, but adoption will determine whether MCP becomes the standard or just another protocol in the pile.
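To make the "universal adapter" idea concrete, here is a minimal sketch of what an MCP-style exchange looks like on the wire. MCP is built on JSON-RPC 2.0, and method names like `tools/list` and `tools/call` come from the MCP spec; the tool name `get_weather` and its argument schema are hypothetical, invented here for illustration.

```python
import json

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request envelope, the shape MCP clients send."""
    req = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        req["params"] = params
    return req

# 1. Ask the server which tools it exposes.
list_tools = make_request(1, "tools/list")

# 2. Invoke one of them with arguments.
#    "get_weather" and its arguments are made up for this sketch.
call_tool = make_request(2, "tools/call", {
    "name": "get_weather",
    "arguments": {"city": "Lisbon"},
})

print(json.dumps(call_tool, indent=2))
```

The point of the protocol-level approach Kevin highlights is visible even in this toy: any model that can emit this envelope can talk to any server that understands it, with no per-integration glue code.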
Infrastructure should be boring (in a good way)
One of Kevin's strongest opinions: the best infrastructure disappears into the background. Whether it's APIs, databases, or AI agents, the goal is reliability so complete that developers stop thinking about it.
The goal is to make it so boring that you don't have to think about it. That's when you know you've built something great.
This philosophy drives his work at Layered, where the focus is making complex API infrastructure feel simple and predictable for development teams.
Too many tools, not enough context
The conversation touches on tool proliferation in modern development stacks. Kevin argues that while having options is good, context switching between tools kills productivity.
The challenge isn't finding good tools—it's integrating them in a way that doesn't break your flow.
His solution? Platforms that provide opinionated, well-integrated tool chains rather than forcing teams to become integration experts.
Looking ahead: AI agents and API governance
Kevin predicts that AI agents will drive the next wave of API innovation, but warns that governance and security models need to catch up. The question isn't whether AI will use more APIs; it's whether we'll build the right controls around that access.
AI agents are going to be making API calls at scale, and we're not ready for what that means for rate limiting, security, and governance.
The episode wraps with Kevin's advice for API builders: focus on developer experience, plan for AI consumers, and remember that boring infrastructure is often the best infrastructure.
This episode was directed by Jacob Javor. Transcript lightly edited for clarity and flow.