Cosmo Streams: GraphQL Subscriptions Without the Hurdle

Dominik Korittki
Cosmo Streams lets teams expose events from their event driven systems through GraphQL subscriptions without building or operating a separate translation layer. The Cosmo Router connects directly to the message broker. When an event arrives, it fetches the required data from subgraphs, handles subscription delivery, and provides router level hooks for authorization, filtering, and event handling.
Many teams already run event driven systems internally. When something happens, an event is emitted into that system, and other parts of the infrastructure react to it.
The challenge appears when the same internal events need to be exposed to clients over GraphQL.
In these setups, you're using a message broker or event store (like Kafka, NATS, or Redis) to collect and redistribute events. Cosmo Streams is designed to integrate with the same brokers you're already using.
In GraphQL, streaming updates to clients is most commonly handled with subscriptions. A client subscribes to an entity in the schema and selects one or more fields on it, then receives updates when relevant events occur.
When those events already exist internally, subscriptions are an obvious way to expose them. In practice, however, teams often introduce an additional layer to handle subscriptions.
Classic GraphQL subscriptions require a translation layer to manage connections and fan out. Cosmo Streams handles this at the router.
The root issue is that classic GraphQL subscriptions introduce a GraphQL specific translation layer between event driven systems and consumers. This layer must manage connections, execution, and fan out, forcing teams to adapt their event streams to GraphQL’s subscription model rather than letting GraphQL integrate directly with existing event infrastructure.
To work around this mismatch, teams often build a separate service whose role is to consume internal events, translate them into GraphQL subscription payloads, and push updates to connected clients. This bridge service becomes its own codebase that you have to build, deploy, scale, and maintain.
In a classic GraphQL federation setup, a subgraph must implement the subscription itself. That means the subgraph service is responsible for managing stateful, long running client connections, dealing with WebSockets and subprotocols, fetching and merging data from various sources to form correct responses, handling fan out, and managing timeouts and network issues. All of this makes GraphQL subscriptions notoriously difficult to build and operate at scale.
Cosmo Streams removes this responsibility from subgraphs entirely by handling connections, subscription state, data fetching, merging, and delivery at the router level. Subgraphs do not need to implement subscriptions, WebSockets, or long running connections.
Cosmo Streams runs in the Cosmo Router.
Instead of placing a separate service between the event system and GraphQL subscriptions, the router connects directly to the message broker. It listens for events and publishes them through the GraphQL API as subscriptions.
In addition to consuming events, Cosmo Streams can also publish them. Events can be emitted through GraphQL mutations backed by Cosmo Streams, allowing GraphQL to act as both a consumer and a producer in an event driven architecture.
The router resolves client GraphQL queries by fetching the required data from subgraphs via GraphQL HTTP requests. Subgraphs remain completely stateless while the router owns connection management, subscription state, and delivery to clients.
To improve efficiency, the router deduplicates client connections, broker connections, and subgraph fetches. It also relies on the operating system's event notification mechanisms, epoll on Linux and kqueue on BSD and macOS, to handle many concurrent connections efficiently.
Cosmo Streams can be customized using router hooks implemented as Custom Modules. These modules are written in Go and compiled directly into the Cosmo Router, allowing teams to add custom behavior without introducing a separate service.
The first hook runs custom logic when a client starts a subscription. It is primarily used for authorization, ensuring the client is allowed to open a long-running stream before any events are delivered. The handler can reject a subscription during connection initialization based on information such as HTTP request details or authentication token data.
It can also send the initial state. For example, when a user joins the stream for a live football match that is already 1–0, the router can immediately send the current score without waiting for the next event.
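A minimal sketch of that subscription-start logic follows. The types and function names here (`SubscriptionRequest`, `StreamState`, `OnSubscribe`) are illustrative stand-ins, not the actual Cosmo Streams module API; the sketch only shows the shape of the logic: reject unauthenticated clients, otherwise seed the stream with the current state.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical types standing in for what a router module would receive;
// the real Cosmo Streams hook signatures differ.
type SubscriptionRequest struct {
	AuthToken string            // e.g. taken from the HTTP upgrade request
	Headers   map[string]string
}

type StreamState struct {
	InitialPayload []byte // optional snapshot sent before any broker events
}

// OnSubscribe rejects unauthorized clients during connection initialization,
// and seeds accepted subscriptions with the current state (e.g. a live
// score) so clients do not have to wait for the next event.
func OnSubscribe(req SubscriptionRequest, currentScore string) (*StreamState, error) {
	if req.AuthToken == "" {
		return nil, errors.New("unauthorized: missing auth token")
	}
	payload := fmt.Sprintf(`{"score":%q}`, currentScore)
	return &StreamState{InitialPayload: []byte(payload)}, nil
}

func main() {
	state, err := OnSubscribe(SubscriptionRequest{AuthToken: "valid"}, "1-0")
	fmt.Println(err, string(state.InitialPayload))

	_, err = OnSubscribe(SubscriptionRequest{}, "1-0")
	fmt.Println(err)
}
```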
The second hook runs when an event arrives from the broker, once per subscriber, to filter or modify what each client receives. Two users subscribing to the same stream can therefore receive different data based on their permissions: an admin might see all order updates while a regular user sees only their own orders.
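The per-subscriber filtering described above can be sketched like this. Again, `Subscriber`, `OrderEvent`, and `OnEvent` are illustrative names, not the real Cosmo Streams API; the point is that the handler sees the subscriber's identity alongside each event and decides whether (and in what form) to deliver it.

```go
package main

import "fmt"

// Hypothetical event-delivery hook: called once per subscriber for each
// broker event, so different clients can receive different views of the
// same stream. Names are illustrative, not the Cosmo Streams module API.
type Subscriber struct {
	UserID  string
	IsAdmin bool
}

type OrderEvent struct {
	OrderID string
	UserID  string
}

// OnEvent returns the event to deliver to this subscriber, or nil to drop
// it: admins see all order updates, regular users only their own.
func OnEvent(sub Subscriber, ev OrderEvent) *OrderEvent {
	if sub.IsAdmin || sub.UserID == ev.UserID {
		return &ev
	}
	return nil
}

func main() {
	ev := OrderEvent{OrderID: "o-1", UserID: "alice"}
	fmt.Println(OnEvent(Subscriber{UserID: "root", IsAdmin: true}, ev) != nil) // admin receives it
	fmt.Println(OnEvent(Subscriber{UserID: "alice"}, ev) != nil)               // owner receives it
	fmt.Println(OnEvent(Subscriber{UserID: "bob"}, ev) != nil)                 // other user does not
}
```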
The third hook runs custom logic when a Cosmo Streams-backed mutation publishes an event. It executes before the event is emitted to the underlying message system, allowing validation or enrichment as part of the publish flow.
For example, you can validate that a user is creating an order for themselves, not on behalf of another user, before the event enters your internal event stream.
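That publish-side validation might look like the following sketch. `PublishContext`, `CreateOrderEvent`, and `OnPublish` are hypothetical names for illustration, not the actual hook API; the idea is that the handler can veto an event before it ever reaches the broker.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical publish hook: runs before a mutation-backed event is
// emitted to the message system, allowing validation or enrichment.
// Names are illustrative, not the Cosmo Streams module API.
type PublishContext struct {
	AuthenticatedUserID string // identity of the caller of the mutation
}

type CreateOrderEvent struct {
	UserID string
	Amount int
}

// OnPublish rejects events where a user tries to create an order on
// behalf of someone else, keeping invalid events out of the stream.
func OnPublish(ctx PublishContext, ev CreateOrderEvent) (CreateOrderEvent, error) {
	if ev.UserID != ctx.AuthenticatedUserID {
		return ev, errors.New("forbidden: cannot create orders for another user")
	}
	return ev, nil
}

func main() {
	_, err := OnPublish(PublishContext{AuthenticatedUserID: "alice"},
		CreateOrderEvent{UserID: "alice", Amount: 10})
	fmt.Println(err) // own order: allowed

	_, err = OnPublish(PublishContext{AuthenticatedUserID: "alice"},
		CreateOrderEvent{UserID: "bob", Amount: 10})
	fmt.Println(err) // someone else's order: rejected
}
```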
For platform teams, the event system is already in place and is a natural fit for exposing events via GraphQL subscriptions. But adding a service whose sole job is to translate events into GraphQL subscriptions means owning another codebase, managing long-running stateful connections with all their drawbacks, and handling everything else required to bridge the gap between an internal event and a working GraphQL subscription.
Cosmo Streams uses the router, which already hosts the GraphQL API, as the integration point. Instead of building and maintaining a separate bridge, you connect the router directly to the event system and add custom logic where you need it.
When subscriptions run at scale, concurrency, performance optimization, GraphQL over WebSockets, and event translation all become your responsibility. These are hard problems that Cosmo Streams already accounts for, so you do not have to build and operate that layer yourself. And because subgraphs stay stateless, Cosmo Streams also works cleanly with serverless and short-lived runtimes where long-running connections are impractical.
Cosmo Streams builds on Event Driven Federated Subscriptions (EDFS). EDFS established the core flow: the router connects to the broker, listens for events, and delivers them via GraphQL subscriptions. Its main limitation was coarse-grained control: the options for customizing authorization and filtering were limited.
That same broker-to-router-to-subscription flow remains the basis of how Streams works. What Cosmo Streams adds is the three handlers described above. This gives you control over subscription behavior without moving logic into separate services.
For teams already using EDFS, Streams preserves the same schema and EDFS directive model and extends it with handlers at the router level, so existing users can upgrade to Streams and add handlers without changing their GraphQL schema.
The demo repository shows how the three handlers work together in a simplified order management system with authentication and filtering.
The documentation explains how to configure Streams and highlights patterns for authentication, filtering, and initial data handling.
For teams with existing event driven architectures, Cosmo Streams offers a way to expose those events through GraphQL subscriptions without needing to deal with the technical hurdles that come with them.

Dominik Korittki
Software Engineer at WunderGraph
Dominik Korittki is a software engineer at WunderGraph, specializing in backend systems and APIs. He has twelve years of experience across software engineering, distributed systems, cloud, and data centers, where he has built SaaS applications, Kubernetes integrations, and hosting platforms.
