The Impact of MCP and LLMs on Software Development - A Practical Example

Jens Neuse

TL;DR

Model Context Protocol sounds fancy, but what can you actually do with it? In this post, I'll show you how to use MCP to one-shot real tasks like exploring an API schema, writing GraphQL queries, or configuring a router. No deep domain knowledge required, no hype, just practical examples.

How MCP and LLMs Actually Help

There's a lot of talk about Model Context Protocol (MCP). A lot of people are excited about it, others are concerned about security, and then there's a whole lot of confusion.

Why another protocol? Why not just OpenAPI? Who actually needs this? And what problems does it solve?

To answer these questions, let's be pragmatic and look at three scenarios and how MCP impacts them; no specific knowledge of our domain is required.

Scenario 1: Exploring GraphQL API Schemas with MCP

We're the creators of an open source Schema Registry for GraphQL Federation called Cosmo. Imagine you're a frontend developer working with the GraphQL API of a company that uses Cosmo as its Schema Registry. You could also be using REST APIs, SOAP, gRPC, or anything else.

Your task is to build a new feature that shows all of the company's employees and their availability, and lets users make changes to their information.

You don't want to waste time on the boring parts of software development, so you're using Cursor to handle them: making the API calls, mapping the data to frontend components, handling errors, and so on.

But speaking of "API calls", which APIs should we use to solve this task? We're not exactly sure what the GraphQL API Schema looks like, so we ask Cursor. The problem? Cursor doesn't know exactly what the Schema looks like either, and it could be hallucinating when creating the API calls.

This is where MCP comes in. We've built an extension to our existing CLI that starts an MCP server locally.
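
Hooking this up in an MCP client is a small piece of configuration. As a rough sketch, a Cursor-style mcp.json entry might look like the following (the exact command and flags may differ; the API key is configured just like for the regular CLI):

    {
      "mcpServers": {
        "cosmo": {
          "command": "npx",
          "args": ["wgc", "mcp"]
        }
      }
    }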

With the "fetch_supergraph" tool, the LLM can fetch the most recent Schema of our GraphQL API. In addition, the model can use the "verify_query_against_remote_schema" tool to verify that a generated query is valid.

This is a game changer, because we can implement the complete feature in a single prompt. There is no need to switch between different tools, or manually copy and paste information.

The frontend developer can now write a prompt like this:

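An illustrative sketch of such a prompt (the exact wording is up to you; the tool names are the ones described above):

    Fetch the latest supergraph schema using the Cosmo MCP tools.
    Then build the employee overview feature:
    - a query that lists all employees and their availability
    - a mutation that updates an employee's information
    Verify every operation with verify_query_against_remote_schema.
    Wire the operations into our frontend components, including loading
    and error states.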

We might have to iterate a bit on our prompts, but then we can implement features like this in a single prompt, or at least get close.

It's important to note that you can achieve the same result with plain CLI commands. You can download the latest version of the GraphQL API Schema with "npx wgc federated-graph fetch mygraph", let Cursor generate GraphQL Operations for you, and then validate them against the downloaded schema yourself.

The "big deal" about MCP in this scenario is that you can model your tools in a way that supports an LLM to one-shot entire tasks like this. Build one powerful prompt that leverages all the tools available, instead of having to go step by step.

In terms of security, we've just automated the steps you'd usually do manually with the CLI. The client must configure an API key in the same way as if they were using the CLI without the mcp subcommand.

Scenario 2: Using the Dream Query Workflow and MCP to Design GraphQL APIs

Another scenario is the dream query workflow, a tool that allows developers to describe their desired GraphQL Query, and the LLM will then propose schema changes to make the query work.

In this scenario, you're a frontend developer whose job is to build a new feature, e.g., to show each employee's department.

The problem? The GraphQL API Schema currently doesn't support this field, so we have to come up with a proposal on how to change the Schema.

Without MCP, the steps look like this:

  1. Go to the Schema Registry UI (Studio) and search for the type "Employee"
  2. Search through all Subgraph Schemas to find which one implements Employee type fields
  3. Think about the best Subgraph to add this new field to
  4. Create an updated Subgraph Schema with the new field and Department type (sketched below)
  5. Run the "check" CLI command to verify that the proposed changes will work (compose) with all other Subgraph Schemas
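
The proposal from step 4 might end up looking roughly like this in the chosen Subgraph (the type and field names here are assumptions for illustration):

    type Employee @key(fields: "id") {
      id: ID!
      department: Department
    }

    type Department {
      id: ID!
      name: String!
    }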

With MCP, you can one-shot this with the following prompt:

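Again, an illustrative sketch rather than the exact wording:

    My dream query is:

      query {
        employees {
          id
          name
          department { name }
        }
      }

    This doesn't work against the current supergraph. Read the existing
    Subgraph Schemas, pick the Subgraph that should own the new field,
    propose a schema change that makes the query valid, and run a check
    to confirm the proposal composes with all other Subgraphs.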

Similar to the previous scenario, there's not much magic involved. But instead of having to manually go through all the steps, we can have the LLM read through our existing Schema and propose a solution.

Now you might ask: what if the LLM proposes a schema change that we're not happy with? In that case, you can continue the conversation and have the LLM iterate on the proposal or come up with alternative solutions, or just create another "AI-Tab" and try a different prompt.

Scenario 3: Using MCP to Configure Cosmo Router

The last scenario is about Cosmo Router, or more precisely, its configuration.

The Router can also be understood as the API Gateway for GraphQL APIs. It's a very complex piece of software with countless configuration options, covering simple use cases like authentication as well as really hard ones like limiting Prometheus attributes to reduce the cardinality of the metrics.

Enough domain knowledge... let's talk about the problem of configuring a complex piece of software. To make this problem solvable by an LLM, we've put some important foundations in place.

As you've seen in the previous scenarios, we've been pretty successful in creating feedback loops that allow the LLM to iterate on a solution. As such, we're doing the same here.

As a foundation, we're using JSON Schema to describe the configuration. An LLM can use this schema to understand the configuration and propose changes, and we can expose a tool to verify a proposed configuration against this schema.
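
A tiny, simplified fragment of such a schema might look like this (the option name shown is a placeholder, not necessarily the Router's actual key):

    {
      "type": "object",
      "properties": {
        "listen_addr": {
          "type": "string",
          "description": "The address and port the Router listens on"
        }
      }
    }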

In addition, we're exposing another tool so that the LLM can make full text search queries through the documentation of Cosmo, the Router, and all other available documentation. This allows the model to come up with a configuration based on a meaningful prompt.

Let's look at an example.

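An illustrative prompt for this scenario (again a sketch, not the exact wording):

    Enable GraphQL Subscriptions over WebSockets in my Router configuration.
    Search the Cosmo docs for the relevant options, propose the YAML changes,
    and validate the resulting configuration against the Router's JSON Schema.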

Why is this powerful? You might know that the Router supports Subscriptions over WebSockets, but you're not exactly sure what the configuration syntax looks like. We can search through the docs, copy-paste, and then fight with YAML indentation. Or we just ask the LLM to propose and validate our desired configuration.

Model Context Protocol: Frequently Asked Questions

We've looked at three scenarios to demonstrate the impact of MCP on software development. Before we wrap up, let's take a look at some of the most frequently asked questions about MCP.

Why another protocol?

MCP is inspired by LSP, the Language Server Protocol. The goal is to enable a generic way to extend LLMs with custom tools, similar to how LSP enables a generic way to extend text editors with custom language features.

Who actually needs this?

MCP is model and tool agnostic. It's designed to enhance the experience of tools like Cursor, Windsurf, ChatGPT, VSCode, and more. While this blog post mainly focused on how MCP can be used to enhance the developer experience, the protocol can be used by any LLM and tool, e.g., to enable business users to generate dashboards and reports.

Why not just OpenAPI?

OpenAPI is a great way to describe REST APIs, or HTTP APIs in general.

For MCP, as mentioned before, the authors have modeled the protocol after LSP, and one of their goals was to allow calling an MCP server via stdio, not just via HTTP.

As such, OpenAPI isn't a natural fit for MCP. In addition, MCP tools are much more lightweight and don't need the complexity of OpenAPI: each MCP tool is a simple RPC endpoint that exposes a single operation, while OpenAPI is typically used to describe resources, their relationships, and the operations that can be performed on them.
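
Concretely, calling a tool is a single JSON-RPC request, roughly like this (the tool name and arguments are taken from the first scenario; the exact payload depends on the server):

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "tools/call",
      "params": {
        "name": "verify_query_against_remote_schema",
        "arguments": { "query": "query { employees { id } }" }
      }
    }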

What about security?

Running MCP servers locally comes with risks. It's important to understand the implications of running third party code locally and allowing LLMs to interact with it.

The alternative is to run MCP servers over SSE, which comes with a different risk profile. And if an MCP server can modify files on your local machine and opens ports on the local Wi-Fi network, attackers can potentially exploit this.

Conclusion

We can look at MCP as just another protocol to expose an API and get stuck in details like its syntax, but then we'd be missing the bigger picture.

Agent Mode combined with MCP completely redefines how we build software. With carefully crafted workflows, combined with powerful tools and the right prompts, we're enabling developers to one-shot tasks that would otherwise take hours or days.

Developers can be architects, rather than code monkeys. You can only put so much context into a single prompt, but with MCP, we can compose much more complex workflows, combine tools that use other tools, and so on.

Just ask yourself: why are you following a five-step process to build a feature? Could you write down the steps in a markdown document? And if so, why don't you automate them by turning them into an MCP tool?
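
If you can write the steps down, wrapping them in a tool isn't much more work. Here's a minimal sketch using the TypeScript MCP SDK, assuming a hypothetical five-step release checklist (all names and logic below are made up for illustration):

    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { z } from "zod";

    // A hypothetical server that wraps an internal five-step checklist as one tool.
    const server = new McpServer({ name: "release-checklist", version: "0.1.0" });

    server.tool(
      "run_release_checklist",
      "Runs our five-step release checklist and reports the result of each step",
      { version: z.string().describe("The version you want to release") },
      async ({ version }) => {
        // In a real tool, each step would call your existing scripts or APIs.
        const steps = ["lint", "test", "build", "changelog", "tag"];
        const report = steps.map((step) => `${step}: ok for ${version}`).join("\n");
        return { content: [{ type: "text", text: report }] };
      }
    );

    // Expose the tool over stdio so clients like Cursor can start it locally.
    await server.connect(new StdioServerTransport());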