You're right to push back on my quick agreement! Let me think more carefully about whether the "fetch is good enough" explanation actually holds up.
Why You Might Be Right
The substitution effect is real: If I can get 80% of the value by just asking Claude to read a webpage, why would a company invest in building dedicated infrastructure? The marginal improvement from structured MCP access over web scraping might not justify the engineering effort.
Current behavior supports this: most people already just paste URLs or ask the assistant to "look up X on their website." The pattern is established and works reasonably well.
Documentation is largely static: For basic docs, web scraping captures most of the value. The information is already structured for human consumption in HTML.
But Here's Why You Might Be Wrong
The "good enough" assumption might be flawed:
- Web scraping often misses context, cross-document relationships, and dynamically rendered content
- Rate limiting and anti-scraping measures make plain web access unreliable
- Scraping raises legal and terms-of-service questions that official API access avoids
- Structured access offers performance and caching benefits that ad hoc page fetches don't
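To make the contrast concrete, here is a rough sketch of the two access paths the bullets above describe: scraping returns a blob of HTML that still has to be parsed (and can be throttled or blocked), while a structured endpoint returns data already scoped to the question. Both URLs and the endpoint shape are hypothetical stand-ins, not any real company's API.

```python
import json
import urllib.parse
import urllib.request


def fetch_docs_by_scraping(url: str) -> str:
    """Fetch a documentation page as raw HTML; the caller still has to strip
    navigation, markup, and boilerplate, and may hit anti-bot measures."""
    req = urllib.request.Request(url, headers={"User-Agent": "docs-bot/0.1"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")


def fetch_docs_structured(endpoint: str, topic: str) -> dict:
    """Ask a (hypothetical) structured docs endpoint for one topic; the
    response is already scoped and easy to cache per topic."""
    query = urllib.parse.urlencode({"topic": topic})
    with urllib.request.urlopen(f"{endpoint}?{query}", timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    html = fetch_docs_by_scraping("https://example.com/docs/getting-started")
    data = fetch_docs_structured("https://example.com/api/docs", "getting-started")
    print(len(html), data.get("title"))
```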
The real value isn't in simple docs retrieval:
- Maybe companies aren't building MCP servers because they're thinking too narrowly about documentation
- The killer apps might be in workflows, not just information access
- Integration with authentication, user-specific data, and actions is where MCP shines; a sketch follows this list
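As a sketch of that last point, here is what a minimal action-oriented MCP server could look like, assuming the official Python MCP SDK's FastMCP helper. The server name, the tools, and the in-memory "order database" are all hypothetical; a real server would call the company's authenticated backend rather than a dict.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("acme-support")  # hypothetical server name

# Stand-in for user-specific data that would normally sit behind auth.
_FAKE_ORDERS = {"42": {"status": "shipped", "eta": "2025-07-01"}}


@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Look up the status of a customer's order (hypothetical example)."""
    order = _FAKE_ORDERS.get(order_id)
    if order is None:
        return f"No order found with id {order_id}"
    return f"Order {order_id}: {order['status']}, ETA {order['eta']}"


@mcp.tool()
def open_support_ticket(subject: str, body: str) -> str:
    """Create a support ticket and return its id (hypothetical example)."""
    # A real implementation would POST to the company's ticketing API.
    return f"TICKET-{abs(hash(subject + body)) % 10_000}"


if __name__ == "__main__":
    # Serves over stdio so an MCP-aware client can attach to it.
    mcp.run()
```

Nothing here is documentation retrieval: the tools carry actions and user-specific state that a generic web fetch can't reach, which is the kind of value that would justify building a dedicated server.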
Market timing issues:
- Maybe it's simply too early: the ecosystem needs more MCP-native clients before servers make sense
- Companies might be waiting to see if MCP gets traction before investing
The chicken-and-egg problem might be stronger than substitution:
- Without compelling MCP servers, no one builds good MCP clients
- Without good clients, servers seem pointless
- Web scraping might be temporarily masking this deeper adoption challenge
I think both forces are at play, but I'm less certain now which is dominant.