WunderGraph Cloud Waitlist
Before we get into the blog post: WunderGraph Cloud is being released very soon, and we're looking for Alpha and Beta testers.
Testers will receive access to WunderGraph Cloud and 3 months Cloud Pro for free.
*"Within three to eight years, we will have a machine with the general intelligence of an average human being…if we're lucky, they might decide to keep us as pets."* — **Marvin Minsky, 1970**, co-founder of the MIT AI Lab, Turing Award winner, and advisor to Stanley Kubrick for *2001: A Space Odyssey*.
Coming up on 53 years since that quote, I’d say we’ve done okay in terms of averting a robot uprising.
However, we have reached a point where we can teach machines to learn, and to generate text and images based on what they learn. Memorizing statistical patterns isn't intelligence, of course, but with the advent of Large Language Models (LLMs) — like BERT, OpenAI's GPT-3, and the new ChatGPT — that approach near-human understanding of language, we can even use AI to aid human accessibility!
Think chatbots/helpers that don't need to tell you "Press 3 if you're having trouble connecting to our servers", but understand that you want to solve a connectivity issue when you tell it, "My game freezes at the login screen! Pls help!!1"
It follows, then, that we can use these LLMs — in an "ask a question, get an answer" capacity — to provide personalized, accessible, on-demand assistance to students, helping them learn at their own pace and according to their own needs and abilities. For e-learning, instant feedback and support that doesn't require interrupting the current lecture (and indeed, isn't limited to regular class hours) only complements a course, making it more accessible and more engaging for students.
But enough with these thought experiments. Let’s try our hand at building just such a frontend integration — a chat helper that can use OpenAI to answer a potential student’s questions, without them having to tab out of the course!
What would the tech stack look like?
Something like this.
- A Next.js frontend that models an e-learning platform, with a Chat Helper/Assistant/Chatbot/whatever-you-want-to-call-it component that students can type questions into, and receive answers from.
- A Node.js/Express API that receives the questions from the frontend, proxies them on to the OpenAI ChatGPT servers, and serves the answers in response. The ChatGPT API is not public yet, but we can use the unofficial `chatgpt` package for our purposes.
- A backend-for-frontend (BFF) using WunderGraph, a free and open-source dev tool that uses GraphQL at build time only, serving data via secure JSON-over-RPC. The WunderGraph server is a service layer (or API gateway, whatever you wish to call it) that serves as the only 'backend' your frontend can see.
Why use a BFF pattern in the first place? Why not simply deal in GET/POST calls to your API from the frontend? Okay, let’s indulge this hypothetical for a second. This is what your architecture might look like in that case.
What's wrong with this picture?
Your frontend is now tightly coupled with your backend. You will have to commit hundreds of lines of code to your frontend repo just to orchestrate two-way communication between your frontend and the many microservices and APIs that your app uses. If any of them are nascent or fluid technology — ChatGPT being the prime example — you’re going to frequently end up diving into your frontend code to make the necessary changes to the underlying wiring to make sure everything keeps working. Not ideal.
Using WunderGraph as a backend-for-frontend decouples the frontend (for any set of clients) from the backend, simplifying both maintenance and the two-way communication between the two. GraphQL is used at build time only, turning this whole operation into simple queries and mutations with complete end-to-end type safety, and consolidating data from all your data sources into a single, unified virtual graph, served as JSON-over-RPC.
This way, you can parallelize all of your microservices/API calls, fetching the exact data each client needs in one go, with reduced waterfalls for nested data, and autocomplete for all your data fetching…all without the typical pain points of GraphQL, i.e. large client bundles and caching/security headaches.
Your app doesn’t even need to offer a GraphQL endpoint; you’re only harnessing its power for massive DX wins.
Part 1: Express, and the ChatGPT library
Step 0: Dependencies
Within the project root, create a directory for your API (`./backend` works fine), `cd` into it, and then type these in.
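A minimal setup sketch, assuming the Express + unofficial `chatgpt` + dotenv stack described above (package names as published on npm at the time of writing):

```shell
# Initialize the API package and install its dependencies.
# `chatgpt` is the unofficial ChatGPT client; dotenv loads your OpenAI credentials.
npm init -y
npm install express chatgpt dotenv
```

Your `.env` file then holds your OpenAI credentials; the exact variable names (e.g. `OPENAI_EMAIL`, `OPENAI_PASSWORD`) depend on the library version, so check its README.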
OpenAI has recently added Cloudflare protections to ChatGPT, making the unofficial API harder to use. This library (optionally) uses Puppeteer under the hood to automate bypassing these protections; all you have to do is provide your OpenAI email and password in an `.env` file.
Step 1: The Server
The API server code itself is pretty self-explanatory. It receives a question in the request body, uses ChatGPT to send it to OpenAI servers, awaits an answer, and serves both in this response format:
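A minimal sketch of such a server, assuming the unofficial `chatgpt` package's browser-based auth; the class and method names (`ChatGPTAPIBrowser`, `initSession`, `sendMessage`), the `/api` route, and port 4000 are assumptions that may differ between library versions, so verify against its README:

```typescript
// server.ts -- a sketch, not a definitive implementation.
import express from "express";
import { ChatGPTAPIBrowser } from "chatgpt";
import "dotenv/config";

const app = express();
app.use(express.json());

// Credentials come from .env (variable names are assumptions).
const api = new ChatGPTAPIBrowser({
  email: process.env.OPENAI_EMAIL!,
  password: process.env.OPENAI_PASSWORD!,
});

app.post("/api", async (req, res) => {
  const { question } = req.body as { question?: string };
  if (!question) {
    return res.status(400).json({ error: "A question is required." });
  }
  // Proxy the question to OpenAI's servers and await the answer.
  // Note: some versions of the library return a richer result object here.
  const answer = await api.sendMessage(question);
  // Serve both the original question and the answer back to the caller.
  res.json({ question, answer });
});

app.listen(4000, async () => {
  await api.initSession(); // spins up Puppeteer and logs in
  console.log("ChatGPT proxy listening on http://localhost:4000");
});
```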
Step 2: The OpenAPI Spec
WunderGraph works by introspecting your data sources and consolidating them into a single, unified virtual graph that you can then define operations on, serving the results via JSON-over-RPC. For this introspection to work on a REST API, you'll need an OpenAPI (you might also know it as Swagger) specification for it.
An OpenAPI/Swagger specification is a human-readable description of your RESTful API. This is just a JSON or YAML file describing the servers an API uses, its authentication methods, what each endpoint does, the format for the params/request body each needs, and the schema for the response each returns.
Fortunately, writing this isn’t too difficult once you know what to do, and there are several libraries that can automate it.
Here’s the OpenAPI V3 spec for our API, in JSON.
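A trimmed-down sketch of what that spec might look like. The server URL/port is an assumption, and the `postApi` operationId is what will name the introspected field later on:

```json
{
  "openapi": "3.0.0",
  "info": { "title": "ChatGPT Proxy API", "version": "1.0.0" },
  "servers": [{ "url": "http://localhost:4000" }],
  "paths": {
    "/api": {
      "post": {
        "operationId": "postApi",
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "type": "object",
                "properties": { "question": { "type": "string" } },
                "required": ["question"]
              }
            }
          }
        },
        "responses": {
          "200": {
            "description": "The question and ChatGPT's answer",
            "content": {
              "application/json": {
                "schema": {
                  "type": "object",
                  "properties": {
                    "question": { "type": "string" },
                    "answer": { "type": "string" }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
```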
Part 2: Next.js + WunderGraph
Step 0: The Quickstart
We can set up our Next.js client and the WunderGraph BFF using the `create-wundergraph-app` CLI. `cd` into the project root (and out of your Express backend directory), and type in:
Then, `cd` into the directory you just asked the CLI to create.
Install dependencies, and start:
That'll boot up the WunderGraph AND Next.js servers (leveraging the `npm-run-all` package), giving you a Next.js splash page at `localhost:3000` with an example query. If you see that, everything's working.
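For reference, the quickstart commands might look like this (the project name is a placeholder):

```shell
# Scaffold a WunderGraph + Next.js project, then boot both dev servers.
npx create-wundergraph-app my-elearning-app -E nextjs
cd my-elearning-app
npm install && npm start
```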
Step 1: Setting up WunderGraph
WunderGraph can introspect pretty much any data source you can think of — microservices, databases, APIs — into a secure, typesafe JSON-over-RPC API: OpenAPI REST, GraphQL, PlanetScale, Fauna, MongoDB, and more, plus any Postgres/SQLite/MySQL database.
So let's get right to it. Open `wundergraph.config.ts` in the `.wundergraph` directory, and add our REST endpoint as one of the data sources our app depends on, for WunderGraph to introspect.
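A sketch of that introspection config; the spec's file path and the `chatgpt` namespace are assumptions (the namespace is what prefixes generated fields like `chatgpt_postApi`):

```typescript
// .wundergraph/wundergraph.config.ts (excerpt)
import { configureWunderGraphApplication, introspect } from "@wundergraph/sdk";

// Introspect our Express API via the OpenAPI spec we wrote earlier.
const chatgpt = introspect.openApi({
  apiNamespace: "chatgpt",
  source: {
    kind: "file",
    filePath: "../backend/openapi.json", // assumed location of the spec
  },
});

configureWunderGraphApplication({
  apis: [chatgpt],
  // ...keep the server/operations/codeGenerators config as scaffolded
});
```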
A real-world app will of course have more than just one REST endpoint, and you’d define them just like this. Check out the different types of data sources WunderGraph can introspect here, then define them accordingly in your config.
Once you've run `npm start`, WunderGraph monitors the relevant files in your project directory automatically, so just hitting save here will kick off the code generator, producing a schema you can inspect (if you want): the `wundergraph.app.schema.graphql` file in the generated output directory.
Step 2: Defining your Operations using GraphQL
This is the part where we write queries/mutations in GraphQL to operate on WunderGraph’s generated virtual graph layer, and get us the data we want.
So go to `.wundergraph/operations` and create a new GraphQL file for our mutation, which sends a question to our Express API (as a String) and receives an answer (with the original question included, for either your UI or just logging).
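A sketch of what that operation file might contain. The filename, the `answer` alias, and the exact input-argument name are assumptions; check the generated `wundergraph.app.schema.graphql` for the real field signature:

```graphql
# .wundergraph/operations/AskChatGPT.graphql (filename is a placeholder)
mutation ($question: String!) {
  # `chatgpt_` is the namespace; the input argument name comes from
  # the generated schema, so adjust it to match yours.
  answer: chatgpt_postApi(postApiInput: { question: $question }) {
    question
    answer
  }
}
```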
Mind the namespacing! Also, notice that we've aliased the `chatgpt_postApi` field to something friendlier.
You’d also define your data fetching operations for other data sources in .graphql files just like this one.
Each time you've hit save throughout this process, WunderGraph's code generation has been working in the background (and it will, as long as its server is running), generating typesafe, client-specific data-fetching React hooks (`useQuery`, `useMutation`, etc.) on the fly for you, using Vercel's SWR under the hood. These are what we'll be using in our Next.js frontend.
Step 3: Building the UI
Our UI really needs just two things for a minimum viable product: a content area where you'd show your courses, tutorials, or any kind of content that you offer, and a collapsible Chat Assistant/Chatbot interface that uses one of the hooks we just talked about: `useMutation`.
Note that I'm using Tailwind for this app. Tailwind is a fantastic utility-first CSS framework, easily incorporated into any app via its Play CDN (though you'll probably want to switch to the PostCSS implementation for production).
The NavBar isn't really necessary for this example; it's just a go-to component I throw into all of my projects while laying things out, to make them prettier 😅.
The `useMutation` hook fires only when you call `trigger` on form submit with an input (i.e. the question, which ends up as the request body in the Express backend). This is pretty intuitive, but for further questions regarding `trigger`, check out SWR's documentation.
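A bare-bones sketch of such a component, assuming an `AskChatGPT` operation and WunderGraph's generated SWR-based hooks; the import path and operation name are placeholders from this tutorial's setup, so match them to your project:

```typescript
// components/ChatHelper.tsx -- a minimal sketch, not the post's exact code.
import { useState } from "react";
// The generated-hooks path depends on your scaffold; adjust as needed.
import { useMutation } from "../components/generated/nextjs";

export default function ChatHelper() {
  const [question, setQuestion] = useState("");
  const { data, trigger, isMutating } = useMutation({
    operationName: "AskChatGPT", // assumed name of our .graphql operation
  });

  return (
    <form
      className="fixed bottom-4 right-4 w-80 rounded-lg bg-white p-4 shadow"
      onSubmit={(e) => {
        e.preventDefault();
        if (question) trigger({ question }); // fires the mutation
      }}
    >
      {/* `answer` is the alias from our mutation */}
      <p className="min-h-[4rem] text-sm">
        {isMutating ? "Thinking…" : data?.answer?.answer}
      </p>
      <input
        className="w-full border p-2"
        value={question}
        onChange={(e) => setQuestion(e.target.value)}
        placeholder="Ask a question about this course…"
      />
      <button className="mt-2 w-full bg-indigo-600 p-2 text-white">Ask</button>
    </form>
  );
}
```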
And you’re done! Provided the ChatGPT servers aren’t under heavy load, you should be able to type in a question, hit the button, and see an answer stream in.
Where to go from here?
Hopefully, this tutorial has given you an insight into how you can use ChatGPT for your own use cases, writing APIs and generating OpenAPI documentation for it so you can use it with WunderGraph as a BFF to make querying a cinch.
Going forward, you'll probably want to add a `<ul>` list of canned/pre-selected questions (based on the current course) that, when clicked, are passed to the `<ChatHelper>` component as questions, so your students have a list of suggestions for where to start.
Other than that, you could also take the `messageId` from each result object and pass it as `parentMessageId` on the next request, tracking the conversation with the bot and giving it an awareness of the questions asked immediately before — so your students can ask follow-up questions, get more relevant information, and have the conversation flow more naturally.
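A sketch of that threading, assuming the option and field names (`conversationId`, `parentMessageId`, `messageId`) that the unofficial `chatgpt` package used at the time of writing; verify against its README:

```typescript
// Keep the last response's ids and pass them back on the next request,
// so the bot sees the prior turns as context.
type ChatResult = { response: string; conversationId: string; messageId: string };
type ChatClient = {
  sendMessage(
    question: string,
    opts?: { conversationId?: string; parentMessageId?: string }
  ): Promise<ChatResult>;
};

let last: { conversationId?: string; messageId?: string } = {};

async function askFollowUp(api: ChatClient, question: string): Promise<ChatResult> {
  const res = await api.sendMessage(question, {
    conversationId: last.conversationId, // thread id from the previous turn
    parentMessageId: last.messageId,     // the message we're replying to
  });
  last = { conversationId: res.conversationId, messageId: res.messageId };
  return res;
}
```

Here `api` would be your `chatgpt` client instance; the types are local stand-ins for the library's own.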
Additionally, keep an eye on the `chatgpt` library itself: OpenAI frequently changes how ChatGPT's research preview works, so you'll want to make sure your code keeps up as the unofficial API is updated accordingly.
Finally, if you want to know more about WunderGraph's many use cases, check out their Discord community here!