Persisted Queries

Persisted Queries are the concept of storing the contents of a GraphQL Operation on the server and using a reference, alongside the variables, to tell the server which Operation a client wants to invoke.
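For illustration, such a request might look like the sketch below, where only a hash of the Operation and the variables are sent. The exact wire format varies between implementations; the payload shape here follows the Apollo-style Automatic Persisted Queries convention, and the endpoint and hash are placeholders.

```typescript
// Illustrative only: the endpoint, hash placeholder, and payload shape are
// assumptions. The client sends a reference (a sha256 hash) plus variables
// instead of the query text itself.
async function callPersistedOperation(variables: Record<string, unknown>) {
  const response = await fetch("https://api.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      extensions: {
        persistedQuery: { version: 1, sha256Hash: "<sha256-of-the-operation>" },
      },
      variables,
    }),
  });
  return response.json();
}
```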

Persisted Queries come with a few advantages and disadvantages:

First, there's usually a build step required where the Operations must be stored or registered on the server. This makes the setup a bit more complex and can cause issues if you forget to persist a Query.
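As a rough sketch (not tied to any particular framework), such a build step could hash every operation file and write a manifest that the server later loads as its registry. The `./operations` directory and the `persisted-operations.json` filename are assumptions.

```typescript
// Rough sketch of a build step: hash each .graphql file and write a manifest
// the server can load as its registry of allowed Operations.
import { createHash } from "node:crypto";
import { readdirSync, readFileSync, writeFileSync } from "node:fs";
import { join } from "node:path";

const operationsDir = "./operations"; // assumed location of *.graphql files
const manifest: Record<string, string> = {};

for (const file of readdirSync(operationsDir)) {
  if (!file.endsWith(".graphql")) continue;
  const query = readFileSync(join(operationsDir, file), "utf8");
  const hash = createHash("sha256").update(query).digest("hex");
  manifest[hash] = query; // hash -> Operation text
}

writeFileSync("persisted-operations.json", JSON.stringify(manifest, null, 2));
```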

On the upside, Persisted Queries make a GraphQL server more secure and performant.

With Persisted Queries it's no longer possible for the client to send arbitrary Queries at runtime. This reduces the attack surface of a GraphQL server.

A related concept is Query Whitelisting. It doesn't help with performance, but it restricts the server to Queries that were previously seen during development.
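A server-side allowlist check can be as simple as a lookup against that registry. The sketch below reuses the hypothetical `persisted-operations.json` manifest from the build-step example above.

```typescript
// Sketch of an allowlist check: only Operations registered at build time are
// executable; unknown hashes are rejected before any GraphQL work happens.
import { readFileSync } from "node:fs";

const allowedOperations: Record<string, string> = JSON.parse(
  readFileSync("persisted-operations.json", "utf8")
);

function resolveOperation(hash: string): string {
  const query = allowedOperations[hash];
  if (!query) {
    // Arbitrary, unregistered Queries never reach the executor.
    throw new Error(`Unknown persisted operation: ${hash}`);
  }
  return query;
}
```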

Additionally, Persisted Queries can increase performance. Because the GraphQL server knows all Operations ahead of time, it can prepare them at startup: it can lex, parse, normalize, and validate each Query once instead of on every request. This makes each Operation execute faster, reduces latency, and saves CPU time.
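With graphql-js, this startup preparation could look roughly like the following sketch; the schema and the registered Operations are placeholders.

```typescript
// Sketch of startup-time preparation with graphql-js: parse and validate every
// registered Operation once, so per-request work is reduced to execution.
import { buildSchema, parse, validate, type DocumentNode } from "graphql";

const schema = buildSchema(`type Query { hello: String }`); // placeholder schema

const registeredOperations: Record<string, string> = {
  "<hash>": "{ hello }", // normally loaded from the build-time manifest
};

const prepared = new Map<string, DocumentNode>();

for (const [hash, query] of Object.entries(registeredOperations)) {
  const document = parse(query);             // lex + parse once
  const errors = validate(schema, document); // validate once
  if (errors.length > 0) {
    throw new Error(`Invalid persisted operation ${hash}: ${errors[0].message}`);
  }
  prepared.set(hash, document);
}

// At request time, executing the prepared DocumentNode for a given hash
// skips lexing, parsing, and validation entirely.
```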

WunderGraph takes this one step further. As you might have already learned, WunderGraph only allows persisted Queries. Once you have configured your Operations using the WunderGraph console, they get "deployed" onto the configured WunderGraph Node.

Deploying an Operation in this case means more than just storing it: the WunderGraph Node compiles an optimized execution tree in memory. At runtime, this execution tree is used to fetch and merge data from the various upstreams and to resolve each field.
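Conceptually (this is only an illustration, not WunderGraph's actual internals), such a precompiled plan can be thought of as a list of fetch nodes that know where to get their data and how to merge it into the response.

```typescript
// Conceptual illustration only: a precompiled plan captures which upstreams to
// call and how to merge their responses, so the runtime just executes the plan.
interface FetchNode {
  upstreamUrl: string;
  buildRequest: (variables: Record<string, unknown>) => RequestInit;
  mergeInto: (upstreamResponse: unknown, result: Record<string, unknown>) => void;
}

interface ExecutionPlan {
  nodes: FetchNode[]; // built once when the Operation is deployed
}

async function executePlan(
  plan: ExecutionPlan,
  variables: Record<string, unknown>
): Promise<Record<string, unknown>> {
  const result: Record<string, unknown> = {};
  // Fetch all upstreams in parallel and merge their data into one JSON response.
  await Promise.all(
    plan.nodes.map(async (node) => {
      const res = await fetch(node.upstreamUrl, node.buildRequest(variables));
      node.mergeInto(await res.json(), result);
    })
  );
  return result;
}
```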

This means a WunderGraph Node only fetches data and resolves fields at runtime. There's no GraphQL-related work during execution other than building the JSON response.

This makes the WunderGraph Node very resource-efficient and fast.

During execution, we use mechanisms like field caching and single flight to speed up execution, and we parallelize data fetching. When used in front of any GraphQL server, this usually reduces the requests per second hitting the upstream, lowers latency, and increases throughput.
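The single-flight idea can be sketched in a few lines: concurrent requests for the same key share one in-flight call instead of each hitting the upstream. The helper below is illustrative, not WunderGraph's implementation.

```typescript
// Sketch of single flight: deduplicate concurrent calls that share a key.
const inFlight = new Map<string, Promise<unknown>>();

async function singleFlight<T>(key: string, fn: () => Promise<T>): Promise<T> {
  const existing = inFlight.get(key);
  if (existing) return existing as Promise<T>;

  const promise = fn().finally(() => inFlight.delete(key));
  inFlight.set(key, promise);
  return promise;
}

// Usage: many concurrent callers with the same key trigger only one upstream fetch.
// const user = await singleFlight(`user:${id}`, () => fetchUserFromUpstream(id));
```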