Go vs. Rust, AI Blackmail, and OpenAI’s IPO Unpacked
TL;DR
In this episode of The Good Thing, Stefan and Jens unpack the Go vs Rust debate in the era of AI-generated code, weigh in on OpenAI’s IPO hype, and reveal WunderGraph’s new plugin system. The conversation moves from syntax wars to business trade-offs, ending with a look at how LLMs could reshape backend frameworks.
Go vs Rust in the AI Era
The episode kicks off with Jens’ viral LinkedIn post claiming that Go will “wipe the floor” with Rust in an AI-generated code world. The argument? Go is easier to read, faster to compile, and more accessible to average developers.
We need a simple language that can be easily read by average developers, not just the top 5%.
Both hosts agreed Rust shines in safety-critical systems, like avionics or embedded software. But for most backend services, they argued readability and hiring ease tip the scales toward Go.
Readability vs Performance
Jens pushed back on claims that performance should always outweigh readability. In his view, AI-generated backends make clarity more valuable than micro-optimizations.
Readability will be the number one factor… performance is not going to be that important.
Rust still has a place for mission-critical code — but Jens stressed those cases will be written by hand, not generated by LLMs.
The Business Lens on Language Choice
The debate soon shifted from syntax to strategy. Hiring Rust developers is harder and more expensive, and startups can’t always justify that cost.
What you need to think about in a business context is what kind of human resources do I have available on the market? Who can I hire, and what will it cost me?
For many teams, Go offers a lower barrier to entry: developers are easier and cheaper to hire, which also makes long-term maintenance more straightforward.
OpenAI’s IPO and the “Fear Game”
Midway through, the hosts dissected headlines about OpenAI’s nonprofit-to-PBC shift and IPO speculation. Stefan noted how fear and hype are used to push narratives:
Fear is a very powerful motivator… say it might IPO, and suddenly everyone feels they need to get in.
Both agreed the cycle echoed the dot-com era, with lofty visions attracting capital long before products mature.
Monopolies, Margins, and Distribution
The conversation broadened into the power dynamics of AI infrastructure. Jens warned that OpenAI, backed by massive GPU farms, could outcompete startups reselling API calls.
Sam Altman will copy your product and run it on his hardware, which is far cheaper than your solution. At that point, you’re dead.
Stefan likened it to AWS in the 2000s: controlling the hardware creates unbeatable margins, while distribution wins over product quality.
The Future of Backend Plugins
The episode closed with an inside look at WunderGraph’s new plugin system. Jens explained how it turns GraphQL SDL into gRPC contracts, letting teams plug in REST or legacy services without running separate subgraphs.
We accidentally created a backend framework for LLMs… you define a schema, run it through a compiler, and you’re essentially done.
For the hosts, the plugin system showed how LLMs could generate adapters and proxy logic automatically, making it easier to use federation and supergraphs.
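To make the SDL-to-gRPC idea concrete, here is a rough, hypothetical sketch of the kind of mapping Jens describes. The type, field, and service names below are invented for illustration and are not WunderGraph’s actual generated output; the general pattern is that a federated GraphQL entity compiles to a gRPC service contract a plugin can implement in any language.

```protobuf
// A GraphQL SDL entity such as:
//
//   type Product @key(fields: "id") {
//     id: ID!
//     name: String!
//   }
//
// might compile to a gRPC contract along these lines
// (names are hypothetical):

syntax = "proto3";

package products.v1;

// Request to resolve a Product entity by its key field.
message LookupProductByIdRequest {
  string id = 1;
}

// Mirrors the GraphQL Product type.
message Product {
  string id = 1;
  string name = 2;
}

// The plugin implements this service; the router calls it
// instead of federating to a separately deployed subgraph.
service ProductService {
  rpc LookupProductById(LookupProductByIdRequest) returns (Product);
}
```

Under this model, the adapter behind `LookupProductById` could wrap a REST call or a legacy system, which is exactly the glue code the hosts suggest LLMs are well suited to generate.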
This episode was directed by Jacob Javor. Transcript lightly edited for clarity and flow.
Note: This episode references reports of Claude “blackmailing” engineers. For context published after recording, see Anthropic's official comments.
