How a Holiday Weekend Project Turned Into My First MCP

Abstract

Over the holiday weekend, I took a detour from my main SaaS project to experiment with something I knew I would eventually need: Model Context Protocols (MCPs). MCPs are a method for large language models to connect securely to external tools, APIs, and data, improving their usability in real-world applications. A proof-of-concept project with a public API turned into a crash course in MCPs, from SDKs and repeatable parsing with Python scripts to running a local MCP server and eventually moving from STDIO to HTTP. This is part one of that journey, with part two coming next week, where I will cover containerization, optimization, and deployment.

What is an MCP, Anyway?

Before diving in, a quick definition: an MCP (Model Context Protocol) is a way for large language models (LLMs) to securely connect to and use external tools, APIs, and data. Think of it as a bridge: instead of expecting an LLM to “guess” how an API works, the MCP provides structured instructions and context so the model can interact more effectively. For developers, MCPs are quickly becoming a standard way to extend what AI systems can do in real-world applications.

Starting With Curiosity

A friend pointed me toward a publicly available API, and I was curious what I could build against it using Cursor. At first, I pointed the Cursor agent at the API’s website documentation. The results were predictably frustrating: the agent struggled to parse the site and could not follow links correctly.

Digging deeper, I realized why: the documentation was built as a single-page application (SPA) with JavaScript handling navigation. The URLs use a hash (#) fragment for routing, and everything after the # is handled client-side, which confused the parser. To the Cursor agent, much of the content was effectively hidden.
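The fragment behavior is easy to demonstrate with Python's standard library: everything after the # in a URL stays client-side, so a plain HTTP fetch of the docs site never sees the part the SPA uses for routing. The URL below is illustrative, not the actual API's docs:

```python
from urllib.parse import urlsplit

# Illustrative SPA docs URL: the part after "#" is the fragment,
# which the browser keeps client-side and never sends to the server.
url = "https://docs.example.com/index.html#/api/v2/orders"
parts = urlsplit(url)

print(parts.path)      # what a plain HTTP fetch actually requests
print(parts.fragment)  # the route the SPA's JavaScript handles
```

A scraper that requests the path gets the same empty app shell no matter which fragment it follows, which is exactly why the agent could not navigate the links.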

Pivoting to SDKs

Fortunately, the API provider also had downloadable SDKs. I started with the Python SDK and was able to steer Cursor toward those resources. From there I built a very basic proof of concept: a Python backend with a Next.js frontend that could display endpoints and give me a sense of what a user-facing website might look like. Rudimentary, yes, but a good start.

Thinking in MCP Terms

That is when it clicked: instead of trying to wrestle with the SPA documentation, why not build an MCP to surface the documentation to Cursor in a more structured way?

I turned to FastMCP to spin up a basic server. To make it useful, I wrote Python scripts to parse the SDKs in a repeatable way and separated the content into logical JSON files: controllers, models, documents, and so on. I also started thinking ahead by adding versioning to the MCP design so future API changes could be handled gracefully.

Here are a few examples of the parsing scripts from my core directory:

  • enhanced_model_parser.py – pulled detailed schema information from models.
  • exceptions_parser.py – handled API error definitions.
  • package_metadata_parser.py – extracted package-level metadata.
  • run_tutorial_pipeline.py – stitched tutorials into a structured pipeline.
  • scrape_docs.py – handled scraping fallback when needed.
  • source_code_parser.py – parsed code examples in the SDK.
  • unified_docs_parser.py – brought together multiple document sources.
  • utilities_parser.py – helper functions used across scripts.
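
To give a flavor of the kind of work these scripts do, here is a simplified sketch in the spirit of enhanced_model_parser.py: walk SDK source files with Python's ast module and collect class docstrings and annotated fields into a JSON-ready structure. The logic is a hypothetical stand-in, not the real parser:

```python
# Hypothetical stand-in for one parsing pass (not the actual script).
import ast
import json
from pathlib import Path


def parse_models(sdk_dir: str) -> dict:
    """Collect class docstrings and annotated fields from SDK source files."""
    models = {}
    for path in Path(sdk_dir).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef):
                fields = [
                    item.target.id
                    for item in node.body
                    if isinstance(item, ast.AnnAssign)
                    and isinstance(item.target, ast.Name)
                ]
                models[node.name] = {
                    "doc": ast.get_docstring(node) or "",
                    "fields": fields,
                }
    return models


# Example: dump everything into one of the logical JSON files.
# Path("docs/models.json").write_text(json.dumps(parse_models("sdk"), indent=2))
```

Because the output is deterministic JSON, re-running the pipeline after an SDK update regenerates the files in a repeatable way, which is what makes the versioning idea workable.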

This was an iterative process: discovering what was hidden in the SDKs, adjusting the scripts, and fine-tuning how the information should be split. Over time, the MCP began to feel like a structured window into a set of resources that was messy from an AI perspective, even though the API site is already well structured for a human reader.

From STDIO to HTTP

Once I had the MCP up and running locally, I initially used STDIO (the default communication method for FastMCP). This worked fine for development, but it highlighted a limitation: every environment would need the MCP installed locally. That did not seem like a long-term solution, and it prompted me to dig deeper into some other MCPs I was already using to understand more about their install and usage methods.

At this point, another friend pointed out that if I wanted this to scale, I needed to move beyond STDIO. HTTP would be more flexible, especially if I ever wanted to run the MCP centrally and make it accessible across environments.

So I dug into FastMCP’s configuration and worked through the changes needed to support HTTP transport. After a bit of trial and error, I had it running locally over HTTP. That felt like a breakthrough: I now had something that was not tied to one machine and could eventually be hosted.

Why This Matters

This exercise started as simple curiosity but quickly became a valuable learning experience. MCPs are becoming an integral part of how large language models and agents interact with structured information. Even this early experiment showed me how an MCP could bridge the gap between complex API documentation and the developer tools that need to use it.

For my SaaS project, that is a big deal. We are already thinking about where MCPs could help, from making documentation more accessible to enabling agentic actions that actually use an API, not just describe it.

Closing

That is where I will pause for now. I ended this first phase with a working MCP running locally over HTTP. Next week’s post will cover how I packaged this into a Docker container, the surprises I ran into (yes, including FastMCP trying to drag me back to STDIO), and what it took to finally deploy to Google Cloud Run. I will also reflect on where AWS services like App Runner or ECS/Fargate might fit in the future.

Stay tuned for Part 2!

Matt Pitts, Sr Architect
