
Three Handlers for Event-Driven GraphQL Subscriptions

Dominik Korittki

TLDR

Cosmo Streams extends EDFS by moving subscription authorization, per-subscriber event filtering, and mutation-side validation into the router itself. Instead of building custom subscription services, Streams adds three lifecycle handlers that run when a subscription starts, while events are in flight, and when mutations publish events.

This gives teams control over who can subscribe, what data each subscriber receives, and what events are allowed into the system, without changing schemas or event infrastructure. The result is a policy-aware GraphQL subscription system that scales without extra services.


Why Cosmo Streams Exists

EDFS solved a basic problem: connecting your internal event-driven architecture to GraphQL subscriptions without building a separate translation layer. The router connects to your message broker, listens for events, and forwards them to clients as subscription updates. That flow works and remains the foundation of how Cosmo Streams operates.

Cosmo Streams extends that model by letting you run subscription authorization, per-subscriber event filtering, and mutation-side validation directly inside the router. Instead of pushing this logic into custom subscription services or resolver layers, Streams introduces controlled extension points in the event flow itself.

As customers started using EDFS, though, they ran into limitations. Three problems in particular stood out, all revolving around the need for more flexibility when dealing with events on the router. Cosmo Streams addresses them by adding three new handlers to the custom module system.

You can think of these handlers as three stages in the event lifecycle: logic that runs when a subscription starts, logic that runs while events are in flight to subscribers, and logic that runs when mutations publish new events.

Problem 1: "You Can Subscribe to Everything or Nothing"

With EDFS, subscription access was all-or-nothing. A user who passed authentication could subscribe to any available topic, with no way to scope or restrict access to individual subscriptions or events.

The problem is that many teams need finer-grained control. Different users should see different streams, even if they're authenticated in the same way. An admin might need access to all order updates; a regular user should only see their own orders. With EDFS, that kind of role-based access control on subscriptions wasn't possible.

You could try to solve this outside the router by introducing custom subscription services, resolver-level checks, or post-processing layers. In practice, this means managing connection state, duplicating authorization logic, and fanning out events manually, all while keeping performance under control.

Cosmo Streams moves this logic into the router itself, where subscription context, connection details, and event flow already exist.

The Solution: SubscriptionOnStart Handler

With Cosmo Streams, the router can run custom logic when a client subscribes using the SubscriptionOnStart handler.

This means you can grant access to specific streams based on custom logic. You have access to the token data, HTTP headers, connection details, and GraphQL operation variables. Basically, all the context you need to make authorization decisions.

In the demo example, users can create orders and subscribe to order updates. When a client subscribes, the SubscriptionOnStart handler validates the JWT and its issuer to decide whether the subscription should be allowed.
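The authorization decision such a handler makes can be sketched like this. Note that the Claims shape, the issuer URL, and the function name are illustrative assumptions, not the router's actual module API:

```go
package main

import "fmt"

// Claims is a simplified stand-in for the JWT claims a SubscriptionOnStart
// handler can inspect (the real handler API is defined by the router's
// custom module system; this shape is assumed for illustration).
type Claims struct {
	Issuer string
	Role   string
}

// allowSubscription decides whether a subscription may start: the token
// must come from a trusted issuer, and only admins may watch all orders.
func allowSubscription(c Claims, wantsAllOrders bool) bool {
	if c.Issuer != "https://auth.example.com" {
		return false
	}
	if wantsAllOrders && c.Role != "admin" {
		return false
	}
	return true
}

func main() {
	admin := Claims{Issuer: "https://auth.example.com", Role: "admin"}
	user := Claims{Issuer: "https://auth.example.com", Role: "user"}
	fmt.Println(allowSubscription(admin, true)) // admin may watch all orders
	fmt.Println(allowSubscription(user, true))  // regular user may not
}
```

Rejecting the subscription here means the client never opens a stream it isn't entitled to, instead of receiving events and filtering them later.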

Once the subscription is established, the OnReceiveEvent handler determines which events each subscriber actually receives. Two users can subscribe to the same GraphQL subscription but receive different events based on their permissions. This kind of per-subscriber event filtering was not possible with EDFS alone.

Problem 2: "New Subscribers See Nothing Until the Next Event"

EDFS is purely event driven. A client subscribes, then receives future events as they occur. This is efficient and straightforward, but in some scenarios it can create a UX problem.

Imagine a user joining a live soccer match stream. The game is already 1-0. With the old EDFS approach, they see nothing until the next goal drops, which could be 30 minutes away. The client is connected, but they have no context about the current state of the game.

The Solution: SubscriptionOnStart Handler (Again)

The same SubscriptionOnStart handler that handles authorization can also send data immediately when a subscription starts.

When a client subscribes to a soccer match that is already in progress with a score of 1-0, the router can query the current score and update the client immediately. The client sees the correct score, then continues receiving updates as future goals occur. They get the initial state first, then the event stream.
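The snapshot-then-stream behavior can be sketched as prepending the current state to the live event flow. The channel-based shape below illustrates the pattern only; it is not the router's API:

```go
package main

import "fmt"

// withInitialState delivers the current state first, then relays live
// events: a snapshot on subscribe followed by the ongoing event stream,
// which is the behavior SubscriptionOnStart enables.
func withInitialState(current string, live <-chan string) <-chan string {
	out := make(chan string)
	go func() {
		defer close(out)
		out <- current // snapshot, e.g. the score when the client joins
		for e := range live {
			out <- e // future events as they occur
		}
	}()
	return out
}

func main() {
	live := make(chan string, 2)
	live <- "GOAL! 2-0"
	live <- "GOAL! 2-1"
	close(live)
	for update := range withInitialState("1-0", live) {
		fmt.Println(update)
	}
}
```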

That same handler solves two different problems. It runs when someone subscribes, and you can use it for access control, for sending initial data, or both.

Problem 3: "All Subscribers Get Identical Events"

With EDFS, when an event arrived from the broker, it was fanned out to all subscribers in the same way. Everyone got the same event with the same data. This kept the system simple and performant, but some teams needed to filter or modify events based on individual subscriber permissions.

The challenge appears when you have two users subscribed to the same topic, but they should receive different data. Maybe one is an admin who should see everything. The other is a regular user who should only see events they're allowed to see.

The Solution: OnReceiveEvent Handler

The OnReceiveEvent handler runs whenever a new event is received from the message broker, before the router resolves it. It executes once per subscriber, which means you can make decisions per subscriber about what gets delivered.

In the demo, when a user subscribes to order updates, they only receive events from orders they created. The handler filters events based on the user's token. It matches the customer ID from the token against the customer ID in the order event. Two users subscribe to the same subscription but receive different events.
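The per-subscriber decision from the demo can be sketched as a small predicate; the event shape and helper name are assumptions for illustration:

```go
package main

import "fmt"

// OrderEvent is an assumed shape for an order update arriving from the broker.
type OrderEvent struct {
	OrderID    string
	CustomerID string
}

// shouldDeliver models OnReceiveEvent's per-subscriber filtering: admins
// see every order update, while regular users only see their own orders,
// matched by the customer ID carried in their token.
func shouldDeliver(tokenCustomerID, role string, ev OrderEvent) bool {
	if role == "admin" {
		return true
	}
	return ev.CustomerID == tokenCustomerID
}

func main() {
	ev := OrderEvent{OrderID: "o-1", CustomerID: "c-42"}
	fmt.Println(shouldDeliver("c-42", "user", ev)) // own order: delivered
	fmt.Println(shouldDeliver("c-7", "user", ev))  // someone else's: filtered out
	fmt.Println(shouldDeliver("c-7", "admin", ev)) // admin: delivered
}
```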

This filtering happens for each subscriber. If you have 30,000 subscribers, the handler runs 30,000 times, once for each subscriber. The router executes these handlers asynchronously by default, with configurable concurrency limits to avoid memory spikes when processing large numbers of subscribers.

The Mutation Side: OnPublishEvent Handler

Subscription filtering controls what leaves the system. Mutation-side validation controls what enters it. OnPublishEvent completes the loop by enforcing rules before events are emitted into your event-driven architecture.

The OnPublishEvent handler runs when a mutation explicitly publishes an event through Cosmo Streams, before the router emits that event to the message system. This lets you validate or enrich data before it enters your internal event flow.

In the demo, a user can create an order through a mutation. The handler inspects the mutation data and validates that the user is creating an order for themselves, not for another user. It matches the customer ID from the token against the customer ID in the mutation input.
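The check described above can be sketched as a validation step before publishing; the input type and error text are assumptions for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

// CreateOrderInput is an assumed shape for the mutation's input.
type CreateOrderInput struct {
	CustomerID string
	Item       string
}

var errWrongCustomer = errors.New("cannot create an order for another customer")

// validatePublish models OnPublishEvent's check: the customer ID in the
// mutation input must match the customer ID from the caller's token,
// otherwise the event is rejected before it reaches the broker.
func validatePublish(tokenCustomerID string, in CreateOrderInput) error {
	if in.CustomerID != tokenCustomerID {
		return errWrongCustomer
	}
	return nil
}

func main() {
	fmt.Println(validatePublish("c-42", CreateOrderInput{CustomerID: "c-42", Item: "book"})) // <nil>
	fmt.Println(validatePublish("c-42", CreateOrderInput{CustomerID: "c-7", Item: "book"}))  // rejected
}
```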

Without this handler, it was very hard to do these kinds of checks at the router level. EDFS just took the mutation data, transformed it into an event, and sent it out. Now you have an additional layer of validation before events get emitted into your system.

How the Handlers Work

These three handlers are implemented through the Custom Module system. Custom modules aren't new to the router; they already exist for other purposes. What Cosmo Streams adds is three new handler types to help tackle customization and compliance needs.

All handlers are optional. If a handler is not defined, the router falls back to standard EDFS behavior with no additional logic applied.

You write this logic in Go as custom modules that compile with the router itself. The router processes your custom code at three extension points in the event flow:

  • When subscriptions start: SubscriptionOnStart
  • When events are received from the broker: OnReceiveEvent
  • When mutations publish events: OnPublishEvent

The handlers run inside the router. You're not building a separate service; you're shaping the router's behavior at specific points where you need custom logic.

Building on EDFS

Streams does not require changes to your GraphQL schema or subscription configuration. Existing EDFS setups continue to work as before, with handlers applied only where additional control is needed.

What Streams adds is flexibility. Instead of requiring teams to move logic into separate services, Streams allows custom logic to run inside the router itself at these three extension points.

Performance Considerations

Every subscriber is processed individually by the OnReceiveEvent handler. So if you have 30,000 subscribers, the handler runs 30,000 times for each event received from the broker.

If you make an API call on every handler run, that's 30,000 API calls per event with 30,000 subscribers. Because Cosmo Streams handlers are Custom Modules, which are essentially Go functions, they can use in-process or external caches. This lets you run asynchronous routines and look up pre-computed data instead of calling upstream systems for every subscriber.
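One way to keep the per-subscriber work cheap is a small read-mostly in-process cache that a background routine refreshes, so the hot path is just a map lookup. A sketch of that pattern (names assumed):

```go
package main

import (
	"fmt"
	"sync"
)

// permissionCache holds pre-computed per-customer permissions so the
// OnReceiveEvent hot path does a map lookup instead of an upstream call.
// A background refresh routine (not shown) would keep it current.
type permissionCache struct {
	mu    sync.RWMutex
	perms map[string]bool
}

func newPermissionCache() *permissionCache {
	return &permissionCache{perms: make(map[string]bool)}
}

// Set is called by the refresh routine, outside the per-subscriber hot path.
func (c *permissionCache) Set(customerID string, allowed bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.perms[customerID] = allowed
}

// Allowed is the cheap per-subscriber lookup; ok reports a cache hit.
func (c *permissionCache) Allowed(customerID string) (allowed, ok bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	allowed, ok = c.perms[customerID]
	return
}

func main() {
	cache := newPermissionCache()
	cache.Set("c-42", true)
	fmt.Println(cache.Allowed("c-42")) // hit
	fmt.Println(cache.Allowed("c-7"))  // miss: fall back or deny
}
```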

Your handlers therefore need to stay performant. Don't do expensive or potentially blocking operations per subscriber. We already handle concurrency and memory management, but the handler logic itself must be efficient. Where additional data is required, use caching or move API calls out of the hot path.

For example, performing external network calls inside OnReceiveEvent can quickly become expensive at scale. With thousands of subscribers, a single event could trigger thousands of outbound requests.

Getting Started

The demo repository shows all three handlers in action with a working example of the order system I mentioned. You can see how SubscriptionOnStart validates JWTs, how OnPublishEvent ensures users can only create orders for themselves, and how OnReceiveEvent filters events so each customer only sees their own orders.

The documentation explains how to configure Streams and highlights patterns for authentication, filtering, and initial data handling.

For teams with existing event-driven architectures, Cosmo Streams offers a way to expose those events through GraphQL subscriptions without building translation infrastructure around the router. You connect the router directly to your message broker and add custom logic where you need it.

Cosmo Streams turns GraphQL subscriptions from a broadcast pipe into a policy-aware event system, with control at subscribe time, in-flight, and at publish time.


Frequently Asked Questions (FAQ)

What does Cosmo Streams add on top of EDFS?

EDFS connects a message broker to GraphQL subscriptions but does not support per-subscription authorization, initial state delivery, or per-subscriber event filtering. Cosmo Streams adds these capabilities by introducing router-level handlers.

How does Cosmo Streams authorize subscriptions?

Cosmo Streams uses the SubscriptionOnStart handler, which runs when a client subscribes. The handler has access to token data, HTTP headers, connection details, and GraphQL operation variables and can decide whether the subscription should be allowed.

Can the router send initial data when a subscription starts?

Yes. The SubscriptionOnStart handler can send data as soon as the subscription starts, allowing the router to deliver the current state before continuing to stream future events.

How does per-subscriber event filtering work?

The OnReceiveEvent handler runs once per subscriber whenever an event is received from the message broker. This allows the router to filter events per subscriber based on context, such as token data.

Where do Cosmo Streams handlers run?

All Cosmo Streams handlers run inside the router as Custom Modules written in Go and compiled with the router. No separate service is required.


Dominik Korittki

Software Engineer at WunderGraph

Dominik Korittki is a software engineer at WunderGraph, specializing in backend systems and APIs. He has twelve years of experience across software engineering, distributed systems, cloud, and data centers, where he has built SaaS applications, Kubernetes integrations, and hosting platforms.