GraphQL Is the API Layer AI Agents Actually Need

Jens Neuse
TL;DR
AI agents don't consume APIs the way human developers do. They can't read docs, install SDKs, or maintain integrations. They need to search for capabilities, understand what's available, and request exactly the data they need, all within a limited context window. GraphQL's typed, navigable schema was built for this. The federated supergraph takes it further by unifying an entire organization's capabilities into one searchable graph.
For years, we've designed APIs for human developers. A person reads the docs, writes an integration, ships it, and maintains it. REST with OpenAPI specs and SDKs is built for this workflow.
But the consumer is changing.
Increasingly, the client calling your API is an agent, not a person: one that explores APIs, selects the right endpoints, and executes requests autonomously.
This changes what "good API design" means. And it turns out GraphQL, and more specifically the federated supergraph, is what agents actually need.
Introducing WunderGraph Hub: Rethinking How Teams Build APIs
WunderGraph Hub is our new collaborative platform for designing, evolving, and shipping APIs together. It's a design-first workspace that brings schema design, mocks, and workflows into one place.
A human developer integrating a third-party API follows a predictable path:
- Read the documentation
- Generate or install an SDK
- Write integration code against typed methods
- Handle errors, edge cases, pagination
- Maintain the integration as the API evolves
It's slow, but it produces reliable, maintainable integrations. The developer understands what they're building, can reason about edge cases, and writes code that's specific to their use case.
REST + OpenAPI + SDKs is built for this workflow. The specification describes the API. The SDK provides typed access. The developer fills in the logic.
An agent doesn't read docs the way a human does. It receives a task ("find the user's shipping status"), explores available APIs to figure out how to accomplish it, selects the right calls, executes them, and returns the result.
The workflow is:
- Understand the task
- Discover available capabilities
- Select the right API calls
- Execute with minimal data transfer
- Return the result
Steps 2 through 4 are fundamentally different from human development. The agent doesn't want an SDK. It doesn't want to read pages of documentation. It needs to search for capabilities, understand what's available, and request exactly the data it needs.
Agents run on LLMs, and LLMs have context windows. Every piece of data you put into the context window costs tokens, and every token costs money, slows processing, and raises the chance of errors.
When an agent calls a REST endpoint, it gets what the endpoint returns. GET /users/1 gives you the entire user object: name, email, address, preferences, metadata, timestamps, nested objects. The agent gets everything, even though it only needs the user's name and shipping address, for example.
GraphQL solved this over-fetching problem for frontend apps. It's even more critical for agents, because wasted data pollutes the context window and degrades the agent's reasoning.
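As a minimal sketch, a selective query for that task might look like this (the `user` field and its sub-fields are hypothetical names, not a real schema):

```graphql
# Request only the fields the task requires;
# nothing else on the user object enters the context window.
query {
  user(id: "1") {
    name
    shippingAddress {
      street
      city
      postalCode
    }
  }
}
```

The equivalent REST call would return the full user object regardless.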
The agent gets exactly the data it needs without wasting tokens.
But selective querying is only half the story. Before an agent can write a query, it needs to know what's available.
With REST, discovery means processing an OpenAPI specification, which can describe thousands of endpoints, each with its own request/response schemas. As Cloudflare noted, it doesn't work to expose thousands of API endpoints to an agent. The agent's context window fills up with endpoint descriptions before it can even start reasoning about which ones to use.
MCP helps by providing a protocol for agents to discover tools, but the underlying problem remains: if your API surface is a long list of independent endpoints, the agent has to scan through all of them to find what it needs.
GraphQL changes this fundamentally.
A GraphQL schema is a graph. It is a structured, navigable hierarchy of types and relationships, so the agent doesn't need to scan a long, flat list. It can navigate the type system:
- "I need user data" β look at the
Usertype - "What's connected to a user?" β follow the relationships:
orders,shippingAddress,preferences - "Can I get shipping status?" β navigate from
UsertoShippingEligibility
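In schema terms, that navigation could look like the following sketch (hypothetical types mirroring the examples above, not an actual WunderGraph schema):

```graphql
# Each field is an edge the agent can follow
# without reading any external documentation.
type User {
  name: String!
  orders: [Order!]!
  shippingAddress: Address
  preferences: Preferences
  shippingEligibility: ShippingEligibility
}

type ShippingEligibility {
  status: String!
  estimatedDelivery: String
}
```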
Every type, field, and relationship is introspectable. The agent can explore the API surface by following connections.
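For example, an agent can ask the schema itself what is reachable from a type, using the introspection fields defined in the GraphQL specification:

```graphql
# Standard introspection: list the fields on User and their types.
query {
  __type(name: "User") {
    fields {
      name
      type {
        name
        kind
      }
    }
  }
}
```

Because introspection is part of the spec, this works against any compliant GraphQL server, no SDK or docs required.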
A single GraphQL API is already better for agents than REST. But a federated supergraph takes this further.
In a large organization, capabilities are spread across many teams and services. Without federation, an agent would need to discover and integrate with dozens of separate APIs.
A supergraph unifies all of this into a single, coherent graph. The agent sees one API surface that covers the entire organization's capabilities.
The agent writes one query. The router handles the distributed execution. Five backend services are involved, but the agent just sees a graph.
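Sketched with hypothetical subgraph ownership, such a query could look like this, with the router fanning each part out to the owning service:

```graphql
query {
  user(id: "1") {
    name                        # users subgraph
    orders {                    # orders subgraph
      id
      status
    }
    shippingAddress {           # shipping subgraph
      city
    }
    preferences {               # preferences subgraph
      locale
    }
    shippingEligibility {       # fulfillment subgraph
      status
    }
  }
}
```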
MCP (Model Context Protocol) is an important step forward for agent-API interaction. It provides a standard way for agents to discover and invoke tools.
But MCP alone can't solve the fundamental problem.
When you expose REST endpoints through MCP, each endpoint becomes a separate "tool" the agent can call. For a small API, this works fine. For an enterprise with hundreds of services and thousands of endpoints, the agent is back to scanning a flat list.
Even with good descriptions on each tool, the agent can't navigate the API surface. It can't follow relationships. It can't say "show me everything connected to this user"; it has to know upfront which endpoints to call.
GraphQL through MCP is different. The agent can ask: "Is there a query that gives me the user's shipping status?" The schema answers that question structurally. The agent constructs a precise query and gets exactly what it needs.
This is why we're building an MCP integration into Hub. An agent will be able to ask the supergraph what capabilities exist, search for the data it needs, and construct queries.
There's a catch.
Not every GraphQL schema works well for agents. A schema with cryptic field names, inconsistent naming conventions, and types organized by team rather than use case is just as confusing to an agent as it is to a new developer.
For agents to consume your API effectively, the supergraph needs to be:
- Consumer-first: designed around use cases, not backend structure
- Well-named: field names that describe the business capability, not the implementation
- Discoverable: consistent patterns that an agent can learn and navigate
- Governed: maintained at a quality level where the schema is trustworthy
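As a contrived illustration of the naming point, compare an implementation-leaking type with a consumer-first one (both hypothetical):

```graphql
# Leaks backend structure: service name, version, abbreviations.
# An agent (or a new developer) has to guess what this means.
type UsrSvcV2Record {
  shp_stat_cd: Int
}

# Describes the business capability directly.
type User {
  shippingStatus: ShippingStatus!
}
```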
This is what Fission enables. By designing the supergraph top-down from consumer needs rather than assembling it bottom-up from service internals, you get an API surface that's naturally suited for both human and agent consumption.
The shift to agentic API consumption is happening right now.
Every organization building internal tools with LLMs, deploying copilots for customer support, or automating workflows with agents is already dealing with the question: how do my agents access our business capabilities?
The current answer, hand-wiring each agent to specific REST endpoints, works for prototypes, but it doesn't scale.
The supergraph gives you a unified, searchable, structured API surface that both humans and agents can consume. With federation, that single schema spans your entire organization. Hundreds of services, dozens of teams, but one coherent graph. An agent doesn't need to know which team owns what or how many backend services are involved. It sees one API.
Not because GraphQL is trendy, but because agents need what GraphQL was designed to provide: selective data access over a typed, navigable schema that unifies capabilities across service boundaries.
We're not building APIs for the sake of APIs. We're building them so that applications, whether built by humans or agents, can access business capabilities efficiently and reliably.
The supergraph is the API layer agents need. It just happens to be the same layer your frontend teams have been asking for.
If you're thinking about how agents will consume your APIs, we'd love to talk.
Frequently Asked Questions (FAQ)
Why is GraphQL better for AI agents than REST?
GraphQL lets agents query exactly the fields they need, reducing token usage and context window pollution. The schema is self-describing, so agents can explore available capabilities without external documentation. REST requires agents to process entire endpoint responses and rely on separate OpenAPI specs to understand what is available.

How do agents discover capabilities in a GraphQL supergraph?
Agents can introspect the schema to see all available types and fields, search for specific capabilities, and construct queries that return only the data they need. With Hub and MCP integration, agents can ask the graph directly whether a capability exists before making any API calls.

Why do context windows make API design matter for agents?
Agents have limited context windows. Exposing thousands of REST endpoints overwhelms the agent with information, leading to higher costs, slower processing, and more hallucinations. A structured graph lets agents search and select precisely what they need.
Jens Neuse
CEO & Co-Founder at WunderGraph
Jens Neuse is the CEO and one of the co-founders of WunderGraph, where he builds scalable API infrastructure with a focus on federation and AI-native workflows. Formerly an engineer at Tyk Technologies, he created graphql-go-tools, now widely used in the open source community. Jens designed the original WunderGraph SDK and led its evolution into Cosmo, an open-source federation platform adopted by global enterprises. He writes about systems design, organizational structure, and how Conway's Law shapes API architecture.

