Full-stack development just reached a whole new level of productivity. Isomorphic TypeScript APIs, as I call them, blur the lines between client and server. Without a separate code generation step, the developer receives immediate feedback when they make a change to the API. You can easily jump between client and server code because, after all, it's the same code.
Here's a short video to illustrate the experience:
The video shows the API definition of a mutation in the left tab and the client implementation that calls the mutation in the right tab. When we change an input field's name or type in the mutation, the client code immediately reflects the change, and the IDE shows the errors. This immediate feedback loop is a game changer for full-stack development.
WunderGraph Cloud Waitlist
Before we get into the blog post: WunderGraph Cloud is being released very soon, and we're looking for Alpha and Beta testers.
Testers will receive access to WunderGraph Cloud and three months of Cloud Pro for free.
Let's talk about the history of Isomorphic TypeScript APIs, how they work, the benefits and the drawbacks, and how you can get started with them.
Isomorphic TypeScript APIs: What exactly does the term mean and where does it come from?
There are a few frameworks that allow you to define your API in TypeScript and share the code between the client and the server; the most popular one being tRPC.
There are different approaches to achieve this kind of feedback loop, code generation, and type inference. We'll talk about the differences and the pros and cons of each approach. You'll see that type inference is much better during development but has drawbacks when it comes to sharing types across repositories.
Isomorphic TypeScript APIs allow you to define the API contract on the server and infer the client code from it through type inference. Consequently, we don't have to go through a code generation step to get a type-safe client; we use the TypeScript compiler to infer the client code immediately during development instead.
The magic behind Isomorphic TypeScript APIs
Let's take a look at how Isomorphic TypeScript APIs work. How is it possible to share the same code between the client and the server? Wouldn't that mean that the client would have to import the server code?
The magic behind Isomorphic TypeScript APIs is TypeScript's import type statement. TypeScript 3.8 introduced Type-Only Imports and Exports:
Type-only imports and exports are a new form of import and export. They can be used to import or export types from a module without importing or exporting any values.
This means that we're able to import types from a module and, therefore, share types between the client and the server without importing the server code itself. That's the crucial part of Isomorphic TypeScript APIs.
Now, let's take a deep dive into how we can put this knowledge into practice.
1. Define the API contract on the server
WunderGraph uses file-based routing, similar to Next.js. By creating the file get.ts in the .wundergraph/operations/users folder, we're registering this operation under the users/get path.
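The operation file might look like this — a sketch following WunderGraph's createOperation API, where createOperation and z are imported from the generated factory; the handler body and response fields are illustrative:

```typescript
// .wundergraph/operations/users/get.ts
import { createOperation, z } from '../../generated/wundergraph.factory';

export default createOperation.query({
  // the input definition doubles as a runtime validator and a type source
  input: z.object({
    id: z.number(),
  }),
  handler: async ({ input }) => {
    // illustrative response; a real handler would load the user from a database
    return {
      userName: 'Jens',
      userID: input.id,
    };
  },
});
```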
We could now call this operation using curl:
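Assuming the WunderGraph node runs locally on its default port, the call might look like this (URL and port are illustrative; query operations accept their input as query parameters):

```shell
curl "http://localhost:9991/operations/users/get?id=1"
```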
That's great if we're not using TypeScript, but the whole point of this post is to use TypeScript. So let's take a look at how createOperation.query is defined.
2. Exposing types from the API definition
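The real createOperation.query implementation is much longer; here's a simplified, self-contained sketch of the factory pattern it uses. The zod-based input definition is replaced with a plain value so the example runs without dependencies, and the NodeJSOperation name follows the article:

```typescript
// A stand-in for WunderGraph's NodeJSOperation type: it carries the
// Input and Response types as generic arguments.
type NodeJSOperation<Input, Response> = {
  type: 'query';
  handler: (ctx: { input: Input }) => Promise<Response>;
};

// The factory captures I from the input definition and R from the
// handler's return type — the user never writes either type explicitly.
const createQuery = <I, R>(config: {
  input: I;
  handler: (ctx: { input: I }) => Promise<R>;
}): NodeJSOperation<I, R> => ({
  type: 'query',
  handler: config.handler,
});

// I is inferred as { id: number }, R as { userName: string; userID: number }.
const usersGet = createQuery({
  input: { id: 0 },
  handler: async ({ input }) => ({ userName: 'Jens', userID: input.id }),
});
```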
There's a lot of code to unpack, so let's go through it step by step. The createQuery function is a factory that returns the createOperation.query function. By wrapping the actual function in a factory, we're able to pass generic types like InternalClient (IC) and UserRole to the function. This allows us to inject generated types without complicating the API for the user.
What's important to note are the two generic arguments of the createQuery function: I extends z.AnyZodObject, and R. I is the input type, and R is the response type.
The user can pass an input definition to the createOperation.query function, as seen in step 1. Once this value is passed to the createQuery function, the I generic type is inferred from the input definition. This enables the following:
- We can use z.infer<I> to infer the input type from the input definition.
- This inferred type is used to make the input argument of the handler function type-safe.
- Additionally, we set the inferred type as the Input generic type of the NodeJSOperation type.

We can later use import type to import the Input type from the operations file.
What's missing is R, which is actually less complicated than the Input type. The second generic argument of the createQuery function is R (the response type). If you look closely at the handler argument definition, you'll see that it's a function that returns a Promise<R>. So, whatever we're returning from the handler function is the Response type. We simply pass R as the second generic argument to the NodeJSOperation type, and we're done.
Now we've got a NodeJSOperation type with two generic arguments, Input and Response. The rest of the code ensures that the internal client and user object are type-safe but ergonomic; for example, omitting the input property if the user didn't pass an input definition.
3. Exposing the API contract on the client
Finally, we need a way to import the API contract on the client. We're using a bit of code generation when creating the models for the client to make this a pleasant developer experience.
Keep in mind that the NodeJSOperation type is a generic with the Input and Response types as generic arguments, so we need a way to extract them to make our client models type-safe.
Here's a helper to achieve this using TypeScript's infer keyword. The infer keyword allows us to extract a generic argument from a generic type at a specific position. In this case, we're extracting the Input and Response types from the NodeJSOperation type.
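A minimal, self-contained sketch of such helpers (the NodeJSOperation stand-in is simplified, and the helper names ExtractInput/ExtractResponse are illustrative):

```typescript
// Simplified stand-in with the two generic slots we care about.
type NodeJSOperation<Input, Response> = {
  handler: (ctx: { input: Input }) => Promise<Response>;
};

// `infer` pulls out whatever type occupies a given generic position.
type ExtractInput<T> = T extends NodeJSOperation<infer I, any> ? I : never;
type ExtractResponse<T> = T extends NodeJSOperation<any, infer R> ? R : never;

// Example: given the operation's type export...
type UsersGet = NodeJSOperation<{ id: number }, { userName: string }>;

// ...we can recover both generic arguments:
type UsersGetInput = ExtractInput<UsersGet>;       // { id: number }
type UsersGetResponse = ExtractResponse<UsersGet>; // { userName: string }
```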
Here's an excerpt from the client models file that uses this helper function:
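The excerpt might look roughly like this — the file paths and the ExtractInput/ExtractResponse helper names are illustrative:

```typescript
// models.ts (generated) — note the type-only imports:
import type { function_UsersGet } from '../operations/users/get';
import type { ExtractInput, ExtractResponse } from './helpers';

export type UsersGetInput = ExtractInput<function_UsersGet>;
export type UsersGetResponseData = ExtractResponse<function_UsersGet>;
```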
Notice how we're only importing the function_UsersGet type from the operations file, not the actual implementation. At compile time, all type imports are removed.
There's one more nugget here that you might easily miss: the generated client models export the UsersGetInput type, which is inferred from the function_UsersGet type, which is the type export of the NodeJSOperation type, which in turn infers its Input type from the input definition on the server.
This means that there's a chain of type inference happening here. It doesn't just make the client models type-safe but also enables another powerful feature that I think is important to highlight.
Inferring clients from the server API contract definition enables refactoring of the server API contract without breaking the client.
Let's add some client code to illustrate this:
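A sketch of what calling the operation through the generated client might look like (the exact client API surface is illustrative):

```typescript
const { data, error } = await client.query({
  // 'users/get' is a type-safe string: only registered operations are accepted
  operationName: 'users/get',
  // the input must match the UsersGetInput type inferred from the server
  input: { id: 1 },
});
```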
This is the generated client code for the users/get query. If we set the operation name to users/get (which, by the way, is a type-safe string), we're forced to pass an input object that matches the UsersGetInput type.
If we now refactor the id property to userId in the server API contract, the client code will also be refactored to userId because the UsersGetInput type is inferred from the server API contract. If, instead, we change the type of the id property from string to number, the IDE will immediately show an error because the inferred type of the id field (number) won't match the string anymore.
This kind of immediate feedback loop is what makes this approach so powerful. If you've previously worked with REST or GraphQL APIs, you'll know that refactoring the API contract would involve many more steps.
The different types of Operations available in WunderGraph
WunderGraph supports three different types of TypeScript operations: queries, mutations, and subscriptions. Let's have a look at how you can define them.
Isomorphic TypeScript APIs: Queries
We've seen a Query Operation above, but I still want to list all three types of operations here for completeness.
A query operation will be registered as a GET request handler on the server. By defining an input definition, the input argument of the handler function will be type-safe. Furthermore, we're also creating a JSON-Schema validation middleware for the endpoint.
Other options we'd be able to configure are:
- rbac, for role-based access control
- requireAuthentication, to require authentication for the endpoint
- live, to configure live queries (enabled by default)
- internal, to make this endpoint only available to other operations, not the client
Once you enable authentication, you'll also be able to use the user property of the handler function argument:
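Such an operation might look like this — db and its findUserByEmail call are hypothetical stand-ins for your data layer:

```typescript
export default createOperation.query({
  requireAuthentication: true,
  handler: async ({ user }) => {
    // `user` is only available because authentication is required;
    // the email claim comes from the JWT token / auth cookie
    return db.findUserByEmail(user.email);
  },
});
```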
This operation will return the user object from the database, using the email claim from the JWT token / cookie auth header as the identifier.
Isomorphic TypeScript APIs: Mutations
Next, let's take a look at a mutation operation:
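A sketch of a mutation operation — the three input properties and the file path are illustrative:

```typescript
// .wundergraph/operations/users/update.ts (illustrative path)
import { createOperation, z } from '../../generated/wundergraph.factory';

export default createOperation.mutation({
  input: z.object({
    id: z.number(),
    name: z.string(),
    bio: z.string(),
  }),
  handler: async ({ input }) => {
    // a real handler would write to a database; here we echo the input
    return input;
  },
});
```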
A mutation operation will be registered as a POST request handler on the server. We're accepting three properties and returning them as-is; in a real handler, this is where we'd usually do some database operations.
Isomorphic TypeScript APIs: Subscriptions
Finally, let's define a subscription operation:
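A sketch of a subscription operation — the interval and payload are illustrative:

```typescript
import { createOperation, z } from '../../generated/wundergraph.factory';

export default createOperation.subscription({
  input: z.object({ id: z.number() }),
  handler: async function* ({ input }) {
    try {
      // yield a new value every second
      while (true) {
        yield { id: input.id, time: new Date().toISOString() };
        await new Promise((resolve) => setTimeout(resolve, 1000));
      }
    } finally {
      // runs when the client disconnects — clean up resources here
    }
  },
});
```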
A subscription operation will be registered as a GET request handler on the server, which you can curl from the command line, or consume via SSE (Server-Sent Events) from the client by appending the query parameter wg_sse=true. The handler function looks a bit different from the other two operations because it's an async generator function.
Instead of returning a single value, we use the yield keyword to return a stream of values. Async generators allow us to create streams without having to deal with callbacks or promises.
One thing you might have wondered about is how to handle the client disconnecting. Async generators allow you to wrap your handler logic in a try ... finally block. Once the client disconnects from the subscription, we're internally calling the return function of the generator, which will execute the finally block. Consequently, you can start your subscription and clean it up in the same function without using callbacks or promises. I think the async generator syntax is an incredibly ergonomic way to create asynchronous streams of data.
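Stripped of the framework, the pattern looks like this: a plain async generator whose finally block runs when the consumer calls return(), which is exactly what happens internally on client disconnect.

```typescript
const cleanupLog: string[] = [];

async function* counter(): AsyncGenerator<number> {
  try {
    let i = 0;
    while (true) {
      yield i++; // suspends here until the consumer asks for the next value
    }
  } finally {
    // executed when the consumer calls return(), i.e. on disconnect
    cleanupLog.push('cleaned up');
  }
}

(async () => {
  const gen = counter();
  console.log((await gen.next()).value); // 0
  console.log((await gen.next()).value); // 1
  await gen.return(undefined); // simulate the client disconnecting
  console.log(cleanupLog); // ['cleaned up']
})();
```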
Bridging the gap between GraphQL, REST and TypeScript Operations
If you're familiar with GraphQL, you might have noticed that there's some overlap in terminology between GraphQL and Isomorphic TypeScript APIs. This is no coincidence.
First of all, we're calling everything an Operation, which is a common term in GraphQL. Secondly, we're calling read operations Queries, write operations Mutations, and streaming operations Subscriptions.
All of this is intentional because WunderGraph offers interoperability between GraphQL, REST, and Isomorphic TypeScript APIs. Instead of creating a .wundergraph/operations/users/get.ts file, we could also have created a get.graphql file in the same folder.
Given that we've added a users GraphQL API to our Virtual Graph, this GraphQL query would be callable from the client as if it were a TypeScript Operation. Both GraphQL and TypeScript Operations are exposed to the client in exactly the same way. For the client, it makes no difference whether the implementation of an operation is written in TypeScript or GraphQL.
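Such a GraphQL operation file might look like this — the users_user field is a hypothetical field exposed by the Virtual Graph:

```graphql
# .wundergraph/operations/users/get.graphql (illustrative)
query ($id: Int!) {
  users_user(id: $id) {
    id
    name
  }
}
```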
You can mix and match GraphQL and TypeScript Operations as you see fit. If a simple GraphQL Query is enough for your use case, you can use that. If you need more complex logic, like mapping a response, or calling multiple APIs, you can use a TypeScript Operation.
Additionally, we're not just registering GraphQL and TypeScript Operations as RPC endpoints; we're also allowing you to use the file system to give your operations a structure. And as we're generating a Postman Collection for your API, you can easily share this API with your team or another company.
Calling other Operations from an Operation
It's important to note that you get type-safe access to other operations from within your TypeScript Operations handlers through the context object:
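A sketch of such a handler — the Weather operation name, its input, and the exact method names on the internal client are illustrative, as the generated client depends on your operations:

```typescript
export default createOperation.query({
  input: z.object({ city: z.string() }),
  handler: async ({ input, internalClient }) => {
    // call another operation in a type-safe way via the internal client
    const weather = await internalClient.queries.Weather({
      input: { city: input.city },
    });
    return { city: input.city, weather };
  },
});
```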
In this example, we're using the internalClient to call the Weather operations and combine the results. You might remember how we passed IC extends InternalClient to the createOperation factory at the beginning of this article. That's how we're making the internalClient type-safe.
Learning from the Past: A Summary of Preceding Work in the Field
We're not the first ones to use these techniques, so I think it's important to give credit where credit is due and explain where and why we're taking a different approach.
tRPC: The framework that started a new wave of TypeScript APIs
tRPC is probably the most-hyped framework in the TypeScript API space right now, as it popularized the import type approach to type-safe APIs.
I was chatting with Alex/KATT, the creator of tRPC, the other day, and he asked me why we're not directly using tRPC in WunderGraph as we could leverage the whole ecosystem of the framework. It's a great question that I'd like to answer here.
First of all, I think tRPC is a great framework, and I'm impressed by the work Alex and the community have done. That being said, there were a few things that didn't quite fit our use case.
One core feature of WunderGraph was, and is, to compose and integrate APIs through a virtual GraphQL layer. I discussed this earlier, but it's essential for us to allow users to define Operations in the .wundergraph/operations folder by creating .graphql files. That's how WunderGraph works, and it's a great way to connect different APIs together.
We've introduced the ability to create TypeScript Operations to give our users more flexibility. Pure TypeScript Operations allow you to directly talk to a database, or to compose multiple other APIs together in ways that are not possible with GraphQL. For example, the data manipulation and transformation capabilities of TypeScript are much more powerful than what you can do with GraphQL—even if you're introducing custom directives.
For us, TypeScript Operations are an extension of the existing functionality of WunderGraph. What was important to us was to make sure that we don't have to deal with two different ways of consuming APIs. So, by inheriting the structure, shape, and configuration options of the GraphQL layer, we're able to consume TypeScript Operations in the exact same way as GraphQL Operations. The only difference is that instead of calling one or more GraphQL APIs, we're calling a TypeScript Operation.
Furthermore, WunderGraph already has a plethora of existing features and middlewares like JSON-Schema validation, authentication, authorization, etc., which we're able to re-use for TypeScript Operations. All of these are already implemented in Golang, our language of choice for building the API Gateway of WunderGraph. As you might know, WunderGraph is divided into two parts: the API Gateway written in Golang; and the WunderGraph Server written in TypeScript, which builds upon fastify. As such, it was a clear choice for us to leverage our existing API Gateway and implement a lightweight TypeScript API server on top of it.
With that being said, I'd like to highlight a few things where we're taking a different approach to tRPC.
tRPC is framework-agnostic, WunderGraph is opinionated
One of the great things about tRPC is that it's both framework and transport layer-agnostic. This can be a double-edged sword, however: while it's great that you can use tRPC with any framework you want, there's the drawback that the user is forced to make a lot of decisions.
For example, there's a dedicated guide on using tRPC with WebSocket Subscriptions that walks you through setting up and handling the WebSocket connections yourself. There's no equivalent guide in WunderGraph because you never have to manage those connections. Our goal with WunderGraph is that the developer can focus on the business logic of their API, which leads us to the next point.
tRPC vs. WunderGraph - Observables vs. Async Generators
While tRPC is using Observables to handle Subscriptions, WunderGraph is using Async Generators.
Here's an example of the tRPC API for Subscriptions:
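Roughly, a tRPC subscription built on Observables looks like this — a sketch based on the tRPC v10 observable API, where t is the usual initTRPC instance:

```typescript
import { observable } from '@trpc/server/observable';

export const randomNumbers = t.procedure.subscription(() => {
  return observable<number>((emit) => {
    // emit a new value every second
    const interval = setInterval(() => emit.next(Math.random()), 1000);
    // teardown callback: runs when the client unsubscribes
    return () => clearInterval(interval);
  });
});
```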
And here's the equivalent in WunderGraph:
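A sketch of the same stream as a WunderGraph async generator (payload and interval are illustrative):

```typescript
export default createOperation.subscription({
  handler: async function* () {
    try {
      while (true) {
        yield { value: Math.random() };
        await new Promise((resolve) => setTimeout(resolve, 1000));
      }
    } finally {
      // runs when the client disconnects
    }
  },
});
```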
What's the difference? It might be personal preference as I mostly develop in Golang, but I think Async Generators are easier to read because the flow is more linear. You can more or less read the code from top to bottom—the same way it's being executed.
Observables, on the other hand, use callbacks and are not as straightforward to read. I prefer to register the event listener and then yield events, instead of emitting events and then registering a callback.
tRPC vs. WunderGraph - Code as Router vs. Filesystem as Router
tRPC is using a code-based router, while WunderGraph is using a filesystem-based router. Using the filesystem as a router has many advantages. It's easier to understand the context and reasoning behind code, as you can see the structure of your API in the filesystem. It's also easier to navigate, as you can use your IDE to jump directly to the file you wish to edit. And last but not least, it's easier to share and reuse code.
Conversely, a code-based router is much more flexible because you're not limited to the filesystem.
tRPC vs. WunderGraph - When you're scaling beyond just TypeScript
It's amazing when you're able to build your entire stack in TypeScript, but there are certain limitations to this approach. You'll eventually run into the situation where you want to write a service in a different language than TypeScript, or you want to integrate with 3rd party services.
In this case, you'll end up manually managing your API dependencies with a pure TypeScript approach. This is where I believe WunderGraph shines. You can start with a pure TypeScript approach and then gradually transition to a more complex setup by integrating more and more internal and external services. We're not just thinking about day one but also offer a solution that scales beyond a small team that's working on a single codebase.
The future of Isomorphic TypeScript APIs
That said, I believe that Isomorphic TypeScript APIs will have a great future ahead of them as they provide an amazing developer experience. After all, that's why we added them to WunderGraph in the first place.
I'm also excited to share some ideas we've got for the future of Isomorphic TypeScript APIs. The current approach is to define single procedures/operations that are independent of each other.
What if we could adopt a pattern similar to GraphQL, where we define relationships between procedures and allow them to be composed? For example, we could define a User procedure at the root and then nest a Posts procedure inside it.
Here's an example of how this might look:
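A purely speculative sketch — none of this API exists today; the nested and parent properties and the postsByAuthor helper are hypothetical:

```typescript
const user = createOperation.query({
  input: z.object({ id: z.number() }),
  handler: async ({ input }) => ({ id: input.id, name: 'Jens' }),
  // hypothetical: procedures composed under their parent
  nested: {
    posts: createOperation.query({
      handler: async ({ parent }) => postsByAuthor(parent.id),
    }),
  },
});
```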
Now, we could query the User procedure and get the Posts procedure as a nested field by specifying the posts field in the operation.
I'm not yet sure about the ergonomics and implementation details of this approach, but it would allow us to have a more GraphQL-like experience while still enjoying the benefits of type inference.
Do we really need selection sets down to the field level? Or could some way of nesting procedures/resolvers be enough?
On the other hand, not having this kind of functionality will eventually lead to a lot of duplication. As you're scaling your RPC APIs, you'll end up with a lot of procedures that are very similar to each other but with a few small differences because they're solving a slightly different use case.
I hope you enjoyed this article and learned something new about TypeScript and building APIs. I'm excited to see what the future holds for Isomorphic TypeScript APIs, and how they'll evolve. I think that this new style of building APIs will heavily influence how we think about full stack development in the future.
However, one thing to bear in mind is that there's no one-size-fits-all solution. TypeScript RPC APIs are great when both frontend and backend are written in TypeScript. As you're scaling your teams and organizations, you might outgrow this approach and need something more flexible.
WunderGraph allows you to move extremely quickly in the early days of your project with a pure TypeScript approach. Once you hit a certain product market fit, you can gradually transition from a pure TypeScript approach to a more complex setup by integrating more and more internal and external services. That's what we call "from idea to IPO". A framework should be able to support you best in the different stages of your project.
Similarly to how aircraft use flap systems to adjust to different flight conditions, WunderGraph allows you to adjust to the different stages of your project. During takeoff, you can use the pure TypeScript approach to get off the ground quickly. Once you're in the air, full flaps would create too much drag and slow you down. That's when you can gradually transition to leveraging the Virtual Graph and split your APIs into smaller services.
At some point, you might even want to allow other developers and companies to integrate with your system through APIs. That's when a generated Postman Collection for all your Operations comes in handy. Your APIs cannot create value if nobody knows about them.