If you've followed me for a while, you know that one of my pet peeves is comparisons between GraphQL, REST, tRPC, and other technologies that fail to mention Relay and Fragments. In this post, I'll explain why I think Relay is a game-changer, how we make it 100x easier to use and adopt, and why you should give it another look.
What makes Relay so special?
Stop thinking about APIs from the server's perspective for a moment. Instead, think about consuming APIs from the frontend and how you can build scalable, maintainable applications. That's where Relay shines, and where you can see a significant difference between Relay and REST, tRPC, and other technologies.
If you haven't used Relay before, you probably never realized how powerful GraphQL can be when combined with Relay. The next section will explain why.
At the same time, a lot of people are scared of Relay because of its steep learning curve. There's a sentiment that Relay is hard to set up and use, and that's true to some extent. But it shouldn't take a PhD to use it.
That's exactly why we built a first-class Relay integration into WunderGraph that works with both NextJS and pure React (e.g. using Vite). We want to make Relay more accessible and easier to use. Essential features like Server-Side Rendering (SSR), Static Site Generation (SSG), persisted Queries (persisted Operations), and Render-as-you-fetch (aka Suspense) are built-in and work out of the box.
Before we dive into how we've made Relay easier to use, let's first take a look at what makes Relay so special.
Collocation of Data Requirements using Fragments
The typical data fetching pattern in applications like NextJS is to fetch data in the root component and pass it down to the child components. With a framework like tRPC, you define a procedure that fetches all the data you need for one page and pass it down to the children. Doing so, you implicitly define the data requirements for the component.
Let's say you've got a page that displays a list of blog posts, and each blog post has a list of comments.
In the root component, you'd fetch the blog posts with comments and pass the data down to the blog post component, which in turn passes the comments down to the comment component.
Let's illustrate this with some code:
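The following dependency-free TypeScript sketch illustrates the pattern; the types, components, and the `getPostsWithComments` procedure are hypothetical stand-ins, not a real tRPC or React setup:

```typescript
// Hypothetical top-down data fetching: the root "page" fetches everything,
// then drills the data down through props (types are illustrative).
type Comment = { id: string; content: string };
type Post = { id: string; title: string; comments: Comment[] };

// One procedure fetches all the data for the page, tRPC-style.
async function getPostsWithComments(): Promise<Post[]> {
  return [{ id: '1', title: 'Hello', comments: [{ id: 'c1', content: 'Nice post!' }] }];
}

// Leaf component: its data requirements are defined implicitly,
// by whatever the root happened to fetch and pass down.
const CommentView = (comment: Comment) => `<li>${comment.content}</li>`;

const PostView = (post: Post) =>
  `<article>${post.title}<ul>${post.comments.map(CommentView).join('')}</ul></article>`;

// Root component: tightly coupled to the data needs of every descendant.
async function HomePage(): Promise<string> {
  const posts = await getPostsWithComments();
  return posts.map(PostView).join('');
}
```

Note how `getPostsWithComments` has to know about every field that any descendant component renders.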
As an example, the `Comment` component has two data dependencies, one of them being the `content` field. Let's say we're using this component in 10 different places in our application. If we want to add a new field to the `Comment` component, e.g. `author`, we have to figure out all the places where we're using the `Comment` component, navigate to the root component, find the procedure that fetches the data, and add the new field to it.
You can see how this quickly becomes a huge maintenance burden. The root of the problem is that we're fetching data top-down, which tightly couples the data fetching logic to the components.
With Relay and Fragments, we're able to collocate the data requirements with the component, while simultaneously decoupling the data fetching logic from the component. Together with data masking (next section), this is a game-changer, because it allows us to build re-usable components that are decoupled from the data fetching logic.
It's worth noting that GraphQL itself doesn't solve this problem. Moreover, most GraphQL clients don't encourage this pattern, leading to the same problems we've seen with REST APIs.
So-called "God Queries" that fetch all the data for a page are a common pattern with GraphQL clients. Without Fragments, it's really just the same problem as with REST APIs or tRPC, just with a different syntax and the added overhead of GraphQL.
Let's take a look at how we can achieve this with Relay and Fragments.
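To keep the idea self-contained, here's a conceptual, dependency-free sketch of Fragment collocation — the `Fragment` type and `composeQuery` helper are illustrative, not Relay's real API (which uses the `graphql` tag and `useFragment`):

```typescript
// Conceptual sketch of Fragment collocation: each component declares its own
// data requirements next to its code, and the root composes them into one
// query without ever listing the fields itself.
type Fragment = { on: string; fields: string[] };

// The Comment component collocates its Fragment with its render logic.
const CommentFragment: Fragment = { on: 'Comment', fields: ['id', 'content'] };
const CommentView = (data: Record<string, string>) => `<li>${data.content}</li>`;

// The root only references the Fragment, it never spells out Comment's fields.
function composeQuery(fragments: Fragment[]): string {
  return `query Home { posts { title comments { ${fragments
    .filter((f) => f.on === 'Comment')
    .flatMap((f) => f.fields)
    .join(' ')} } } }`;
}

console.log(composeQuery([CommentFragment]));
// → query Home { posts { title comments { id content } } }
```

Adding `author` to `CommentFragment.fields` changes the composed query without touching the root component at all.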
In this example, the `Comment` component is completely decoupled from the data fetching logic. It defines its data requirements in a Fragment that's collocated with the component. We can use the `Comment` component in as many places as we want; it stays decoupled from the data fetching logic.
If we want to add a new field to the `Comment` component, like the `author` field, we can simply add it to the Fragment, and the `Comment` component will automatically receive the new field.
Turning our perspective around to the data fetching logic, we can see that the `Home` component doesn't care which exact fields the `Comment` component needs. This logic is completely decoupled from the `Home` component through the use of Fragments.
That said, there's one more thing needed to make truly decoupled components possible: Data Masking.
Re-Usable Components through Data Masking
Let's say we've got two sibling components that both use comment data, and both define their data requirements in a separate Fragment. One component only needs the `title` field, while the other component requires a different set of fields.
If we were to pass the comment data directly to both components, we might accidentally use the `title` field in the component that didn't declare it in its Fragment. Doing so, we'd introduce a hidden dependency between the two components.
To prevent this, Relay allows us to mask the data before passing it to the component. If a component didn't define a field in its Fragment, it won't be able to access it, although it's theoretically available in the data.
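Conceptually, masking works like the following sketch — a simplified illustration of the idea, not Relay's actual implementation (Relay enforces this through generated types and opaque fragment references):

```typescript
// Conceptual sketch of data masking: each component only receives the
// fields it declared in its Fragment, even if more data is available.
type CommentData = Record<string, unknown>;

function mask(data: CommentData, declaredFields: string[]): CommentData {
  return Object.fromEntries(declaredFields.map((f) => [f, data[f]]));
}

const comment = { title: 'First!', content: 'Great read', author: 'Jens' };

// Sibling A declared only `title`; sibling B declared only `content`.
const forA = mask(comment, ['title']);
const forB = mask(comment, ['content']);

console.log(forA.content); // undefined: the field A didn't declare is hidden
console.log(forB.content); // 'Great read'
```

Because component A never sees `content`, it can't silently start depending on a field that component B's Fragment owns.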
To my knowledge, no other API client has this feature, which is why you shouldn't dismiss GraphQL without having tried Relay. GraphQL and Relay come at a cost compared to, e.g., tRPC, so it's important to understand the benefits to make an informed decision on whether they're worth it.
A lot of people think that GraphQL and Relay are only useful for huge applications. I think that's a misconception. Building re-usable components is a huge benefit for any application, no matter the size. If you've wrapped your head around Fragments and Data Masking, you really don't want to go back to the old way of doing things.
After the next section, we'll take a look at how easy we've made it to get started with Relay and Fragments.
Compile-Time GraphQL Validation & Security
Another benefit of using Relay is that the "Relay Compiler" (recently rewritten in Rust) compiles, validates, optimizes and stores all GraphQL Operations at build time. With the right setup, we're able to completely "strip" the GraphQL API from the production environment. This is a huge benefit for security, because it's impossible to access the GraphQL API from the outside.
Moreover, we're able to validate all GraphQL Operations at build time. Expensive operations like normalization and validation are done at build time, reducing the overhead at runtime.
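The build-time flow can be sketched like this — a conceptual illustration of persisted Operations, not WunderGraph's actual implementation:

```typescript
import { createHash } from 'crypto';

// Conceptual sketch of persisted Operations: at build time every GraphQL
// document is hashed and registered; at runtime the client sends only the
// hash, and the server never accepts raw GraphQL from the outside.
const registry = new Map<string, string>();

// Build time: hash and register each Operation.
function persistOperation(document: string): string {
  const id = createHash('sha256').update(document).digest('hex');
  registry.set(id, document);
  return id;
}

// Runtime: anything that isn't a pre-registered Operation ID is rejected.
function executeOperation(id: string): string {
  const document = registry.get(id);
  if (!document) throw new Error('Unknown operation: raw GraphQL is not accepted');
  return document; // a real server would now execute the stored document
}

const id = persistOperation('query Posts { posts { id title } }');
console.log(executeOperation(id)); // prints the registered document
```

Since only hashes travel over the wire, validation and normalization of the document itself can happen once, at build time.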
How does WunderGraph make using Relay easier?
You might not yet be convinced of the benefits of Relay, but I hope you're at least curious to try it out.
Let's see how the integration with WunderGraph makes it easier to get started with Relay.
Setting up Relay + NextJS/Vite with WunderGraph is easy
We've tried to set up Relay with NextJS and Vite ourselves. It's not easy; in fact, it's rather complicated. We found npm packages that try to bridge the gap between Relay and NextJS, but they were not well maintained, their documentation was outdated, and, most importantly, we felt they were too opinionated, e.g. by forcing the use of `getInitialProps`, which is deprecated in NextJS.
So we've taken a step back and built a solution that works with Vanilla React and frontend frameworks like NextJS and Vite without being too opinionated. We've built the necessary tooling to make Server-Side Rendering (SSR), Static Site Generation (SSG), and Render-as-you-fetch easy to use with any frontend framework.
Additionally, we've made sure to choose some reasonable defaults, like enforcing persisted Operations by default with zero setup, giving the user a secure-by-default experience without having to think about it.
So, what does a simple setup look like?
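For NextJS, the wrapper can be sketched as a minimal `pages/_app.tsx`; the import path is an assumption about where your generated WunderGraph client lives:

```tsx
// pages/_app.tsx — minimal sketch; the import path is an assumption.
import type { AppProps } from 'next/app';
import { WunderGraphRelayProvider } from '../lib/wundergraph';

export default function App({ Component, pageProps }: AppProps) {
  return (
    <WunderGraphRelayProvider initialRecords={pageProps.initialRecords}>
      <Component {...pageProps} />
    </WunderGraphRelayProvider>
  );
}
```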
The setup is minimal: all you need to do is wrap your app with the `WunderGraphRelayProvider` and pass the `initialRecords` prop. This works with NextJS 12 and 13, Vite, and others, as it doesn't rely on any framework-specific APIs.
Next, we need to configure the Relay Compiler to work with WunderGraph. As you'll see, WunderGraph and Relay are a match made in heaven. Both are built with the same principles in mind: Declarative, Type-Safe, Secure-by-default, Local-first.
Relay is the frontend counterpart to WunderGraph's backend. WunderGraph ingests one or more GraphQL & REST APIs and exposes them as a single GraphQL Schema, which we call the virtual Graph. Virtual, because we're not actually exposing this GraphQL Schema to the outside world. Instead, we print it into a file to enable auto-completion in the IDE and to make it available to the Relay Compiler.
At runtime, we're not exposing the GraphQL API to the outside world. Instead, we only expose an RPC API that allows the client to execute pre-registered GraphQL Operations. The architecture of both WunderGraph and Relay make the integration seamless.
It feels like WunderGraph is the missing server-side counterpart to Relay.
Relay Compiler Configuration with out-of-the-box support for persisted Operations
So, how do we wire up the Relay Compiler to work with WunderGraph?
As mentioned above, WunderGraph automatically persists all GraphQL Operations at build time. For this to work, we need to tell the Relay Compiler where to "store" the persisted Operations. Conversely, Relay needs to know where to find the GraphQL Schema. As WunderGraph stores the generated GraphQL Schema in a file, all we need to do is wire up the two via the Relay Compiler's `relay` configuration section.
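A configuration along these lines wires the two together (the `schema` path is an assumption about where WunderGraph prints the generated Schema; `src` and `persistConfig.file` follow the paths used in this setup):

```json
{
  "relay": {
    "src": "./src",
    "language": "typescript",
    "schema": "./.wundergraph/generated/wundergraph.app.schema.graphql",
    "persistConfig": {
      "file": "./.wundergraph/operations/relay/persisted.json"
    }
  }
}
```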
With this config, the Relay Compiler will assemble all GraphQL Operations in the `./src` directory, generate the TypeScript types, and store the persisted Operations in `./.wundergraph/operations/relay/persisted.json`. Each stored Operation is a pair of a unique ID (hash) and the GraphQL Operation. WunderGraph will automatically read this file, expand it into `.graphql` files, and store them in `./.wundergraph/operations/relay/`, which automatically registers them as JSON-RPC endpoints.
Additionally, the WunderGraph code generator will generate a `WunderGraphRelayEnvironment` for you, which internally implements `fetch` to make the RPC calls to the WunderGraph API.
Here's an abbreviated version of the internals:
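The sketch below is illustrative, not the exact generated code; the endpoint shape is an assumption:

```typescript
// Abbreviated, illustrative sketch of the generated fetch function: the
// request carries only the persisted Operation ID and the variables.
type PersistedRequest = { id?: string | null };

function fetchQuery(request: PersistedRequest, variables: Record<string, unknown>) {
  const operationId = request.id ?? '';
  return {
    url: `/operations/relay/${operationId}`, // one JSON-RPC endpoint per Operation
    method: 'POST' as const,
    body: JSON.stringify(variables), // no GraphQL document is sent over the wire
  };
}

console.log(fetchQuery({ id: 'abc123' }, { first: 10 }).url);
// → /operations/relay/abc123
```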
The generated `fetchQuery` function creates JSON-RPC requests from the Operation ID and the variables; no GraphQL document is involved at this point.
Server-Side Rendering (SSR) with NextJS, Relay and WunderGraph
Now that we've configured the Relay Compiler, we can start integrating Relay into our NextJS app, e.g. with Server-Side Rendering (SSR).
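The server-side flow can be sketched like this; `fetchPersistedOperation` is a hypothetical stand-in, not the real API:

```typescript
// Hypothetical sketch of SSR with Relay + WunderGraph: fetch on the server,
// then hand the serialized Relay store records to the client for hydration.
type SerializedRecords = Record<string, unknown>;

async function fetchPersistedOperation(
  operationId: string,
  variables: Record<string, unknown>
): Promise<SerializedRecords> {
  // In reality this would POST to the WunderGraph JSON-RPC endpoint and
  // return the normalized Relay store records for the response.
  return { 'client:root': { operationId, variables } };
}

// NextJS-style getServerSideProps: the records end up in the
// `initialRecords` prop that WunderGraphRelayProvider consumes.
export async function getServerSideProps() {
  const initialRecords = await fetchPersistedOperation('PostsQuery', { first: 10 });
  return { props: { initialRecords } };
}
```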
Render as you fetch with Vite, Relay and WunderGraph
Here's another example using Vite with Render-as-you-fetch:
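Conceptually, Render-as-you-fetch boils down to the following dependency-free sketch; `loadQuery` and `renderPost` here are illustrative, not Relay's real API:

```typescript
// Conceptual sketch of Render-as-you-fetch: the query is kicked off eagerly,
// before rendering starts, so the network request and the render pipeline
// overlap instead of running in sequence.
function loadQuery<T>(fetcher: () => Promise<T>) {
  const promise = fetcher(); // fetch starts NOW, not on first render
  return { read: () => promise };
}

// Started at module load / route transition time:
const preloadedQuery = loadQuery(async () => ({ title: 'Hello Relay' }));

// Later, during render, the component reads the in-flight result
// (with Suspense, an unresolved read would suspend the component).
async function renderPost() {
  const data = await preloadedQuery.read();
  return `<h1>${data.title}</h1>`;
}
```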
Your key takeaway should be that GraphQL and Relay bring a lot of value to the table. Together with WunderGraph, you can build modern full-stack applications on top of three solid pillars:
- Collocation of components and data requirements
- Decoupled re-usable components using Data Masking
- Compile-time Validation & Security
What's more, with this stack, you're not really limited to just GraphQL APIs and React. It's possible to use Relay with REST APIs, or even SOAP, and we're also not limited to React, as Relay is just a data-fetching library.
If you want to learn more about WunderGraph, check out the documentation.
Want to try out some examples?
One more thing. This is really just the beginning of our journey to make the power of GraphQL and Relay available to everyone. Stay in touch on Twitter or join our Discord Community to stay up to date, as we're soon going to launch something really exciting that will take this to the next level.