WunderGraph Cloud Waitlist
Before we get into the blog post: WunderGraph Cloud is being released very soon, and we're looking for Alpha and Beta testers.
Testers will receive access to WunderGraph Cloud and 3 months of Cloud Pro for free.
I'm a big fan of tRPC. The idea of exporting types from the server and importing them in the client to have a type-safe contract between both, even without a compile-time step, is simply brilliant. To all the people who are involved in tRPC, you're doing amazing work.
That said, when I'm looking at comparisons between tRPC and GraphQL, it seems like we're comparing apples and oranges.
This becomes especially apparent when you look at the public discourse around GraphQL and tRPC. Look at this diagram by Theo, for example:
Theo explained this diagram in depth and at first glance, it makes a lot of sense. tRPC doesn't require a compile-time step, the developer experience is incredible, and it's a lot simpler than GraphQL.
But is that really the full picture, or is this simplicity achieved at the cost of something else? Let's find out by building a simple app with both tRPC and GraphQL.
Let's build a Facebook clone with tRPC
Let's imagine a file tree with a page for the news feed, a component for the feed list and a component for the feed item.
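As a sketch, that file tree might look something like this (the file names are my assumptions, not from an actual project):

```
pages/
  Feed.tsx          // the news feed page
components/
  UserHeader.tsx    // user info, notifications, unread messages
  Avatar.tsx
  FeedList.tsx      // the list of feed items
  FeedItem.tsx      // a single feed item
```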
At the very top of the feed page, we need some information about the user, notifications, unread messages, etc.
When rendering the feed list, we need to know the number of feed items, if there's another page, and how to fetch it.
For the feed item, we need to know the author, the content, the number of likes, and if the user has liked it.
If we were to use tRPC, we would create a "procedure" to load all this data in one go. We'd call this procedure at the top of the page and then propagate the data down to the components.
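A sketch of what such a procedure could look like, using tRPC v10's `initTRPC` API (all names and the shape of the returned data are assumptions for illustration):

```typescript
// server/router.ts -- one "feed" procedure returns everything
// the page needs in a single call.
import { initTRPC } from '@trpc/server';

const t = initTRPC.create();

export const appRouter = t.router({
  feed: t.procedure.query(async () => {
    // In a real app this would aggregate data from the database;
    // here we return a static shape for illustration.
    return {
      user: { id: '1', name: 'Jens', avatarUrl: '/avatar.png' },
      unreadMessages: 3,
      notifications: 7,
      feedItems: [
        {
          id: 'post-1',
          author: { id: '2', name: 'Ada' },
          content: 'Hello world',
          likes: 42,
          likedByUser: true,
        },
      ],
      totalItems: 1,
      nextCursor: null as string | null,
    };
  }),
});

// The client imports this type to get the type-safe contract.
export type AppRouter = typeof appRouter;
```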
Our feed component would look something like this:
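A sketch using tRPC's React hooks; the component names and the `trpc` client path are assumptions:

```tsx
// pages/Feed.tsx -- one procedure call at the top of the page
// loads everything, then props carry the data down.
import { trpc } from '../utils/trpc';
import { UserHeader } from '../components/UserHeader';
import { FeedList } from '../components/FeedList';

export default function FeedPage() {
  const { data, isLoading } = trpc.feed.useQuery();
  if (isLoading || !data) return <div>Loading...</div>;
  return (
    <main>
      <UserHeader
        user={data.user}
        unreadMessages={data.unreadMessages}
        notifications={data.notifications}
      />
      {/* All feed data is propagated down via props. */}
      <FeedList
        items={data.feedItems}
        totalItems={data.totalItems}
        nextCursor={data.nextCursor}
      />
    </main>
  );
}
```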
Next, let's look at the feed list component:
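A possible sketch; note that the prop types simply mirror whatever the procedure happens to return:

```tsx
// components/FeedList.tsx
import { FeedItem, type FeedItemData } from './FeedItem';

interface FeedListProps {
  items: FeedItemData[];
  totalItems: number;
  nextCursor: string | null;
}

export function FeedList({ items, totalItems, nextCursor }: FeedListProps) {
  return (
    <section>
      <p>{totalItems} posts</p>
      <ul>
        {items.map((item) => (
          <li key={item.id}>
            <FeedItem item={item} />
          </li>
        ))}
      </ul>
      {/* Pagination would call back into the procedure with nextCursor. */}
      {nextCursor && <button>Load more</button>}
    </section>
  );
}
```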
And finally, the feed item component:
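Again a sketch; the `FeedItemData` shape is dictated by the procedure, not declared by the component itself:

```tsx
// components/FeedItem.tsx
import { Avatar } from './Avatar';

export interface FeedItemData {
  id: string;
  author: { id: string; name: string };
  content: string;
  likes: number;
  likedByUser: boolean;
}

export function FeedItem({ item }: { item: FeedItemData }) {
  return (
    <article>
      <Avatar userId={item.author.id} />
      <strong>{item.author.name}</strong>
      <p>{item.content}</p>
      <span>
        {item.likes} likes{item.likedByUser ? ' · You liked this' : ''}
      </span>
    </article>
  );
}
```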
Keep in mind, we're still a single team, it's all TypeScript, one single codebase, and we're still using tRPC.
Let's figure out what data we actually need to render the page. We need the user, the unread messages, the notifications, the feed items, the number of feed items, the next page, the author, the content, the number of likes, and if the user has liked it.
Where can we find detailed information about all of this? To understand the data requirements for the avatar, we need to look at the Avatar component. There are components for unread messages and notifications, so we need to look at those as well. The feed list component needs the number of items, the next page, and the feed items. The feed item component contains the requirements for each list item.
In total, if we want to understand the data requirements for this page, we need to look at 6 different components. At the same time, we don't really know what data is actually needed for each component. There's no way for each component to declare what data it needs, as tRPC has no such concept.
Keep in mind that this is just one single page. What happens if we add similar but slightly different pages?
Let's say we're building a variant of the news feed, but instead of showing the latest posts, we're showing the most popular posts.
We could more or less use the same components, with just a few changes. Let's say that popular posts have special badges which require extra data.
Should we create a new procedure for this? Or maybe we could just add a few more fields to the existing procedure?
Does this approach scale well if we're adding more and more pages? Does this not sound like the problem we've had with REST APIs? We've even got famous names for these problems, like Overfetching and Underfetching, and we haven't even gotten to the point where we're talking about the N+1 problem.
At some point we might decide to split the procedure into one root procedure and multiple sub-procedures. What if we're fetching an array at the root level, and then for each item in the array, we have to call another procedure to fetch more data?
Another option would be to introduce arguments to the initial version of our procedure, e.g.:
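A sketch of what those arguments could look like, using tRPC's `input` with a zod schema (the field names are assumptions):

```typescript
// Hypothetical: extend the same procedure with arguments so one RPC
// can serve both the "latest" and the "popular" variant of the page.
import { initTRPC } from '@trpc/server';
import { z } from 'zod';

const t = initTRPC.create();

export const appRouter = t.router({
  feed: t.procedure
    .input(
      z.object({
        sortBy: z.enum(['latest', 'popular']).default('latest'),
        // Only the "popular" page needs badge data, so it's opt-in.
        includeBadges: z.boolean().default(false),
        cursor: z.string().optional(),
      })
    )
    .query(async ({ input }) => {
      // ...load the feed according to input.sortBy, and attach
      // badge data only when input.includeBadges is true
    }),
});
```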
This would work, but it feels like we're starting to re-invent the features of GraphQL.
Let's build a Facebook clone with GraphQL
Now, let's contrast this with GraphQL. GraphQL has the concept of Fragments, which allows us to declare the data requirements for each component. Clients like Relay allow you to declare a single GraphQL query at the top of the page, and include fragments from the child components into the query.
This way, we're still making a single fetch at the top of the page, but the framework actually supports us in declaring and gathering the data requirements for each component.
Let's look at the same example using GraphQL, Fragments, and Relay. Out of laziness, the code might not be 100% correct (I used Copilot to help write it), but it should be very close to what it would look like in a real app.
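In that spirit, here's my own sketch of the feed page: one query at the top, with the child components' fragments spread into it (schema and fragment names are assumptions):

```tsx
// pages/Feed.tsx -- Relay hoists the fragments below into this
// single query, so the page still makes one fetch.
import { graphql, useLazyLoadQuery } from 'react-relay';
import { UserHeader } from '../components/UserHeader';
import { FeedList } from '../components/FeedList';

const FeedPageQuery = graphql`
  query FeedPageQuery {
    viewer {
      ...UserHeader_viewer
    }
    feed(first: 10) {
      ...FeedList_feed
    }
  }
`;

export default function FeedPage() {
  const data = useLazyLoadQuery(FeedPageQuery, {});
  return (
    <main>
      <UserHeader viewer={data.viewer} />
      <FeedList feed={data.feed} />
    </main>
  );
}
```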
Next, let's look at the feed list component. The feed list component declares a fragment for itself, and includes the fragment for the feed item component.
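A sketch, assuming a Relay-style connection type on the server:

```tsx
// components/FeedList.tsx -- the component declares its own data
// requirements as a fragment, and spreads the FeedItem fragment.
import { graphql, useFragment } from 'react-relay';
import { FeedItem } from './FeedItem';

const feedFragment = graphql`
  fragment FeedList_feed on FeedConnection {
    totalCount
    pageInfo {
      hasNextPage
      endCursor
    }
    edges {
      node {
        id
        ...FeedItem_item
      }
    }
  }
`;

export function FeedList({ feed }) {
  const data = useFragment(feedFragment, feed);
  return (
    <section>
      <p>{data.totalCount} posts</p>
      {data.edges.map(({ node }) => (
        <FeedItem key={node.id} item={node} />
      ))}
      {data.pageInfo.hasNextPage && <button>Load more</button>}
    </section>
  );
}
```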
And finally, the feed item component:
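Another sketch; here the component itself declares exactly which fields it renders, instead of inheriting a shape from a procedure:

```tsx
// components/FeedItem.tsx
import { graphql, useFragment } from 'react-relay';
import { Avatar } from './Avatar';

const itemFragment = graphql`
  fragment FeedItem_item on Post {
    content
    likeCount
    viewerHasLiked
    author {
      name
      ...Avatar_user
    }
  }
`;

export function FeedItem({ item }) {
  const data = useFragment(itemFragment, item);
  return (
    <article>
      <Avatar user={data.author} />
      <strong>{data.author.name}</strong>
      <p>{data.content}</p>
      <span>
        {data.likeCount} likes{data.viewerHasLiked ? ' · You liked this' : ''}
      </span>
    </article>
  );
}
```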
Next, let's create a variation of the news feed with popular badges on feed items. We can reuse the same components, as we're able to use the @include directive to conditionally include the popular badge fragment.
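The page-level query for this variant could look like the following sketch, reusing the same fragments and adding one variable to toggle the badge data (argument and enum names are assumptions):

```tsx
// pages/PopularFeed.tsx -- same fragments as the latest feed,
// plus a boolean variable that the item fragment reacts to.
const PopularFeedPageQuery = graphql`
  query PopularFeedPageQuery($includeBadges: Boolean!) {
    feed(first: 10, sortBy: POPULAR) {
      ...FeedList_feed
    }
  }
`;
```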
Next, let's look at what the updated feed item could look like:
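A sketch of the updated fragment; the badge fragment is only included when the page-level query sets the variable:

```tsx
// components/FeedItem.tsx -- updated fragment with a conditional
// spread for the popular badge.
const itemFragment = graphql`
  fragment FeedItem_item on Post {
    content
    likeCount
    viewerHasLiked
    author {
      name
      ...Avatar_user
    }
    # Only fetched when $includeBadges is true.
    ...PopularBadge_post @include(if: $includeBadges)
  }
`;
```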
As you can see, GraphQL is quite flexible and allows us to build complex web applications, including variations of the same page, without having to duplicate too much code.
GraphQL Fragments allow us to declare data requirements at the component level
Moreover, GraphQL Fragments allow us to explicitly declare the data requirements for each component, which are then hoisted up to the top of the page and fetched in a single request.
GraphQL separates API implementation from data fetching
The great developer experience of tRPC is achieved by merging two very different concerns into one concept: API implementation and data consumption.
It's important to understand that this is a trade-off. There's no free lunch. The simplicity of tRPC comes at the cost of flexibility.
With GraphQL, you have to invest a lot more into schema design, but this investment pays off the moment you have to scale your application to many related pages.
By separating API implementation from data fetching it becomes much easier to re-use the same API implementation for different use cases.
The purpose of APIs is to separate the internal implementation from the external interface
There's another important aspect to consider when building APIs. You might be starting with an internal API that's exclusively used by your own frontend, and tRPC might be a great fit for this use case.
But what about the future of your endeavor? What's the likelihood that you'll be growing your team? Is it possible that other teams, or even 3rd parties will want to consume your APIs?
Both REST and GraphQL are built with collaboration in mind. Not all teams will be using TypeScript, and if you're crossing company boundaries, you'll want to expose APIs in a way that's easy to understand and consume.
There's a lot of tooling to expose and document REST and GraphQL APIs, while tRPC is clearly not designed for this use case.
So, while it's great to start with tRPC, you're very likely to outgrow it at some point, which I think Theo also mentioned in one of his videos.
It's certainly possible to generate an OpenAPI specification from a tRPC API, the tooling exists, but if you're building a business that will eventually rely on exposing APIs to 3rd parties, your RPCs will not be able to compete against well-designed REST and GraphQL APIs.
As stated in the beginning, I'm a big fan of the ideas behind tRPC. It's a great step in the right direction, making data fetching simpler and more developer-friendly.
GraphQL, Fragments, and Relay, on the other hand, are powerful tools that help you build complex web applications. At the same time, the setup is quite complex, and there are many concepts to learn before you get the hang of it.
While tRPC gets you started quickly, it's very likely that you'll outgrow its architecture at some point.
If you're making a decision today to bet on either GraphQL or tRPC, you should take into account where you see your project going in the future. How complex will the data fetching requirements be? Will there be multiple teams consuming your APIs? Will you be exposing your APIs to 3rd parties?
With all that said, what if we could combine the best of both worlds? What would an API client look like that combines the simplicity of tRPC with the power of GraphQL? Could we build a pure TypeScript API client that gives us the power of Fragments and Relay, combined with the simplicity of tRPC?
Imagine we take the ideas of tRPC and combine them with what we've learned from GraphQL and Relay.
Here's a little preview:
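What follows is purely my own illustrative sketch of the idea, not WunderGraph's actual API: imagine a tRPC-style typed client where each component declares its data dependencies in plain TypeScript, and the framework hoists them into a single request.

```tsx
// Hypothetical API, for illustration only: every identifier here
// is invented to convey the idea, not a real library.

// A component declares exactly the fields it needs...
const feedItemFragment = fragment('FeedItem', (post) =>
  post.select('content', 'likes', 'likedByUser')
);

// ...and the page composes fragments into one type-safe request,
// just like a tRPC call, but with component-level data requirements.
const { data } = useQuery.feed({ select: [feedItemFragment] });
```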
What do you think? Would you use something like this? Do you see the value in defining data dependencies at the component level, or do you prefer to stick with defining remote procedures at the page level? I'd love to hear your thoughts...
We're currently in the design phase to build the best data fetching experience for React, NextJS and all other frameworks. If you're interested in this topic, follow me on Twitter to stay up to date.
If you'd like to join the discussion and discuss RFCs with us, feel free to join our Discord server.