Extending LLM Capabilities with Model Context Protocol, Part 2

In Part 1, I showed how MCPs can solve fundamental LLM limitations – and how the interaction between LLMs and MCPs can still fail in subtle ways. Now I want to look at the practical reality: What’s it actually like to use MCPs today? What can they do? And what should you expect when you try them? 

TL;DR

There are three themes that I will touch on consistently throughout this post:

  1. LLMs are pretty new, but MCPs are even newer. They are currently difficult to find, configure, and use if you are not fairly tech-savvy. They mostly lack the polish you have come to expect from existing applications.
  2. Despite the rough edges, MCPs are already very useful. They will add increasing value to LLMs (and thus to you) over time, extending the functionality and abilities of LLMs in many areas.
  3. Because of point #2, MCPs will grow more ubiquitous, polished, functional, corporate, and monetized over time. In the future MCPs, or something like them, will be a standard part of LLM functionality.

Think about the App Store for iPhone apps circa 2008. At first apps were funky, cool, and spread by word of mouth, but now they are standard, commonplace, and a bit boring.

An MCP Usage Example–The Good

You might have read the first installment of this blog and thought that MCPs were an interesting technology, but not something you would ever use. Let me provide an example using health data from smartwatches. Despite the vast amount of data these devices gather, you usually have limited access to it – but MCPs can change that. 

If you are in the Apple ecosystem you can follow this link for an MCP that works with the Apple Health information. You can install this MCP and start using it!  Scroll down on the page to see a good demonstration video.

These days I’m more into the Garmin ecosystem, so let me show some examples I generated from the (unofficial) Garmin MCP.

Garmin Walking Info

Figure: Asking Claude how far I’ve walked this year, based on my Garmin data. You can see the analysis it provides right away. If this were critical data I would find a way to double check my LLM!

Garmin Adventures

Figure: For my next query I asked Claude to find any interesting activities in the past couple months. It parsed through the data and chose some more unusual events to highlight, rather than my daily workouts.

With the same easy conversational tone that I use to query Claude on other topics, I can now ask it to check and analyze my Garmin health data. Previously I only had a few Garmin-sanctioned ways of reviewing it. Now I can try any question or analysis I can think of.

Consider any of the data you work with on a regular basis: Jira, Salesforce, Spreadsheets, Google Docs, GitHub, databases–whatever. What would you ask if you could?  What do you want to know about it? How would you like to analyze, display, summarize, or review it?

The MCP Multiplier

MCPs provide your LLM with a wide range of access and functionality it would otherwise not have. You can interact with the resources you use in the same way you use LLMs: casually, conversationally, digging deeper where you would like, summarizing when needed. LLMs are great for research and learning–now that same power can be expanded with additional access to your particular data and functionality. 

Beyond data access MCPs can also be used to provide computing and logic functionality that LLMs currently lack. There are already general and specialized calculators along with memory and structured thinking assistance. I expect that for any limitation LLMs exhibit, someone will try to create an MCP to address it!
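To make that concrete, here is a toy sketch of the kind of logic a calculator MCP wraps: a tool function that evaluates arithmetic deterministically instead of letting the LLM guess at the math. This is illustrative standard-library Python, not the interface of any real MCP – an actual server would expose `calculate` to the LLM through the MCP protocol.

```python
import ast
import operator

# Binary operations the hypothetical calculator tool will allow
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
}

def _eval_node(node):
    """Recursively evaluate a restricted arithmetic AST."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval_node(node.left), _eval_node(node.right))
    if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
        return -_eval_node(node.operand)
    raise ValueError("unsupported expression")

def calculate(expression: str) -> float:
    """Tool entry point: evaluate an arithmetic expression string."""
    tree = ast.parse(expression, mode="eval")
    return _eval_node(tree.body)

print(calculate("2 ** 10 + 3.5"))  # 1027.5
```

Parsing with `ast` rather than calling `eval` matters here: a tool fed strings composed by an unpredictable LLM should only ever execute the narrow set of operations it advertises.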

MCPs dramatically extend what LLMs can access and do, but they don’t fix the fundamental probabilistic nature of LLM reasoning. You still need to verify important conclusions, double-check critical data, and remember that the LLM might occasionally ignore MCP results or misinterpret them. MCPs give LLMs new powers—but they’re still the same unpredictable LLMs underneath. 

Before you rush to try one, though, you should know what you’re getting into.

An MCP Usage Example–The Bad

Hopefully the example I provided gives you an inkling as to how useful MCPs will become. Unfortunately, they can be extremely difficult to use right now. I’ll walk you through the process I followed to get the Garmin MCP up and running to give you an idea of how complex this can be. 

To be fair, I had some advantages going in. I’d been experimenting with MCPs for months, so I already had:

  1. Paid for a pro version of Claude, Anthropic’s LLM. I believe there are some open source LLMs that don’t require a subscription to use MCPs, but I have not yet set one up.
  2. Installed the (required) Claude Desktop on my various computers, and enabled Developer mode.
  3. Learned where the tools, config files, and log files are, and how to use them to setup and debug MCPs.
  4. Gained reasonably good experience with Python and JavaScript – the languages most MCPs use.

Even with all of those advantages the Garmin MCP process was not trivial. I am presenting the main steps, rather than boring you with the details:

  • Cloned the code repository from GitHub – this particular MCP is designed to run on your own machine.
  • The Garmin MCP did not require me to get a separate API key from Garmin–the developer already did this. This is great as many MCPs require you to sign up directly with the services they access.
  • Set the config up as specified in their Readme.
  • Had to add my Garmin email and password in clear text–this is a security issue.
  • It did not work: Claude could not find the MCP. I checked the logs and tried the MCP Inspector.
  • I noted on GitHub that they had changed their tech stack, but had not updated the Readme.
  • Surfed the net to understand the ‘uv’ ecosystem they had switched to (it turns out that this is a pretty cool system for working with Python!)
  • Using my new knowledge I ran local tests–success!
  • Figured out the likely way to launch the server locally, given the success with the tests.
  • Reloaded the config in Claude–now Claude could see and access the server.
  • Total time: about an hour, after several months of previous MCP experience. A newcomer should expect to take significantly longer.
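For reference, Claude Desktop discovers local MCPs through a JSON config file (claude_desktop_config.json). My final working entry looked roughly like the sketch below – the directory path and server filename here are placeholders, not the actual values from the Garmin MCP's Readme:

```json
{
  "mcpServers": {
    "garmin": {
      "command": "uv",
      "args": ["--directory", "/path/to/garmin-mcp", "run", "server.py"],
      "env": {
        "GARMIN_EMAIL": "you@example.com",
        "GARMIN_PASSWORD": "your-password"
      }
    }
  }
}
```

Note the credentials sitting in plain text in the env block – the security issue I mentioned above.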

A New Failure Mode

Once past these technical hurdles, I thought I could have some fun with this. The first thing I did was ask Claude to determine how far I’ve walked. And I immediately received the dreaded warning: “Claude hit the maximum length for this conversation. Please start a new conversation to continue chatting with Claude.”

This usually occurs when you have spent so long in a single chat session that the context has built up beyond the maximum context window. So I began a new chat session with the same prompt and, to my surprise, immediately encountered the same message.

I had to investigate a bit to figure out what was going on. It turns out this is a new failure mode for me: In some scenarios, such as unlimited date ranges, the Garmin MCP is returning so much data that it immediately exceeds Claude’s context window when the LLM tries to process the results! 

Some LLMs use a floating context window, forgetting early information as you exceed the token limit. Claude uses a fixed window, currently 200K tokens for Sonnet 4.5. The response from the Garmin MCP was a huge data dump, containing not only the relevant, useful data but also vast amounts of irrelevant information.

Lesson learned: Ensure that requests to the Garmin MCP will not result in large returns by limiting what you ask for – for instance, by specifying restricted date ranges.
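One way to sanity-check this is to estimate the token count of an MCP payload before it enters the chat. The sketch below uses the rough ~4-characters-per-token rule of thumb for English text; that heuristic and the reserve value are my own assumptions, not anything official – the exact tokenizer varies by model.

```python
def rough_token_count(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English text."""
    return len(text) // 4

def fits_in_context(payload: str, window: int = 200_000, reserve: int = 50_000) -> bool:
    """Check whether an MCP response leaves room for the rest of the chat.

    window:  model context size in tokens (200K for Claude Sonnet 4.5)
    reserve: tokens to keep free for the conversation and the model's reply
    """
    return rough_token_count(payload) <= window - reserve

# A year's worth of raw activity JSON might easily reach a megabyte
huge_dump = "x" * 1_000_000
print(fits_in_context(huge_dump))  # False: ~250,000 estimated tokens
```

A well-behaved MCP could run a check like this itself and return a summary (or an error asking for a narrower query) instead of the full dump.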

Current Reality Vs. The Future

Hopefully this section has shown you that MCPs can be really useful and even fun to use, but are not yet accessible for most people. However, this will change. MCPs are simply too valuable to remain hidden behind a tech-knowledge curtain. As time goes on they will become far easier to install, configure, and use. Within a few years, you’ll likely have dozens of MCPs installed – some you sought out, others required by services you use, just like apps on your phone today.

Finding and Using MCPs

If you haven’t been turned off by the hurdles to running MCPs, you may wonder where these amazing things are and what they can do for you. Good for you – get ahead of the curve and learn about this technology now!

Arbitrary Lists of MCPs

Currently, lists of MCPs resemble the early days of the internet, when various people and companies gathered up examples that they found useful.

Fair warning: Lists of links tend to become out-of-date so rapidly that I’ve had to revise this list while editing due to link rot. Consider this an overview of what is available as of mid-2025, but don’t hesitate to search for whatever functionality you think might be useful, regardless of whether it is listed here.

Try these sites to see some good examples. Hopefully you will gain insight into how much is going on in this space, and may well find some that you want to try out:

Because of the newness and low barrier to entry, most of the servers out there at this point are being created by hobbyists. However, many companies are starting to come around and produce official MCPs for their applications and services. I am seeing more company websites that tout their MCP implementations on their corporate pages.

In the future there are likely to be app store equivalents for you to browse to find appropriate MCPs. You will likely need to pay for some of them outright, through subscriptions, or for additional features. Enjoy the golden age of (mostly) free MCPs while it lasts!

MCPs in the Wild

The MCP ecosystem has grown rapidly, with developers creating tools that address the most common LLM limitations. Rather than just describing general categories of MCP, let me briefly describe some useful MCPs that are available today.

File Systems and Local Data Access

FileSystem MCP provides LLMs with secure access to local file operations—reading, writing, and managing files within specified directories. This is fundamental for any application where LLMs need to work with documents, code, or data files on your machine.

Google Drive MCP extends this to cloud storage, letting LLMs search, read, and manage documents in your Google Drive. Essential for business applications where important data lives in cloud storage rather than local files.

Amazon S3 MCP and Box MCP (a rare company-created server) serve similar roles for their respective cloud platforms, providing secure access to enterprise storage systems.

Development and System Operations

GitHub MCP gives LLMs the ability to interact with repositories—reading code, checking issues, reviewing pull requests, and understanding project structure. Invaluable for development workflows where LLMs need to understand existing codebases.

iTerm MCP bridges the gap between LLMs and system administration, allowing controlled terminal access for system operations, script execution, and development tasks. This represents the more powerful (and potentially dangerous) end of system interaction.

Information Gathering and Research

Brave Search MCP, Web Research MCP, and DuckDuckGo MCP help address the knowledge cutoff problem by providing LLMs with access to current web information. While many LLM developers are building web search functionality into their products, these MCPs provide more search options.

Database and Structured Data

PostgreSQL MCP represents a crucial category—giving LLMs access to live database systems. This enables applications that need to query real business data, not just generate plausible-sounding responses based on training patterns.
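To illustrate the pattern (not the actual PostgreSQL MCP’s code), here is a sketch of a read-only query tool using SQLite from Python’s standard library. A real server would use a Postgres driver and expose this function through the MCP protocol; the table and data are invented for the demo.

```python
import sqlite3

def run_readonly_query(conn: sqlite3.Connection, sql: str, limit: int = 50):
    """Execute a SELECT and return rows; refuse anything that mutates data."""
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("only SELECT statements are allowed")
    cursor = conn.execute(sql)
    columns = [col[0] for col in cursor.description]
    rows = cursor.fetchmany(limit)  # cap the result size sent back to the LLM
    return {"columns": columns, "rows": rows}

# Demo against an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.99), (2, 24.50)])
result = run_readonly_query(conn, "SELECT id, total FROM orders ORDER BY id")
print(result)  # {'columns': ['id', 'total'], 'rows': [(1, 9.99), (2, 24.5)]}
```

The two guardrails here – rejecting non-SELECT statements and capping the row count – address exactly the problems described elsewhere in this post: destructive operations and oversized data dumps.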

Communication and Collaboration

Slack MCP allows LLMs to interact with team communication systems—reading messages, posting updates, and integrating with collaborative workflows. This opens up possibilities for AI assistants that can participate in actual business processes.

Specialized Problem Solving

Sequential Thinking MCP addresses LLMs’ reasoning limitations by providing structured thinking frameworks.

wcgw MCP (What Could Go Wrong) focuses on risk analysis and failure mode thinking—exactly the kind of systematic analysis that LLMs struggle with due to their optimistic, pattern-matching nature.

Integration and API Management

APIWeaver MCP provides a meta-solution, allowing LLMs to interact with arbitrary APIs through a standardized interface. This could potentially eliminate the need for specific MCPs for every service, though it may sacrifice some of the specialized handling that makes individual MCPs effective.

Getting Started

As you saw in the Garmin example above, the process of using MCPs can require quite a bit of tech-savvy. Once you’ve found an MCP you want to try, you will need to follow a similar setup pattern: download or clone the repository, install dependencies, configure any required credentials or settings, then connect your LLM client to the MCP server. The specifics of the installation and configuration will vary based upon the particular MCP and its functionality, the hosting of the MCP (local or cloud), and the LLM you use. You will need to do a bit of research to figure this out for your setup. Most MCPs provide some instructions and examples in their Readme files.

Given the overall utility of MCPs, I expect that the process of installing them will become significantly easier in the near future, to ensure that even non-technical LLM users will be able to access them. Don’t be surprised to see lists of MCPs (like lists of plugins) available in your favorite LLM, and wizards to help you configure them. 

In corporate environments, companies will likely control which MCPs are available, what settings/permissions they have, and who can use them. IT departments will be able to push (or pull) MCPs to their users as they see fit.

Security Warning

Security is a key area that needs careful consideration in the rush to create and use MCPs. Keep in mind that MCPs running locally, such as those that can manipulate your OS or filesystem, may have dangerously high levels of access to your system. Likewise, MCPs that interact with applications, from Azure to Zoho, will be able to make changes to your systems and data. You need to carefully consider the access levels you give to the MCPs. LLMs can be unpredictable – you don’t want your LLM to misinterpret your request for information as a request to permanently delete something…

The Garmin MCP I am using has a couple potential security issues:

  1. It requires your unencrypted email and password to access the Garmin system.
  2. It, and your LLM, can access all of the health data gathered by Garmin.

After reviewing the code, I’m comfortable using this MCP for fitness data. But I wouldn’t use a similar approach for financial or medical records – the security model isn’t mature enough yet.
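One mitigation worth looking for (or adding, if you tinker): reading credentials from environment variables instead of a plain-text config file. A minimal sketch – the variable names here are my own assumption, not ones the Garmin MCP actually uses:

```python
import os

def load_garmin_credentials():
    """Pull credentials from the environment; fail loudly if they're missing."""
    email = os.environ.get("GARMIN_EMAIL")
    password = os.environ.get("GARMIN_PASSWORD")
    if not email or not password:
        raise RuntimeError(
            "Set GARMIN_EMAIL and GARMIN_PASSWORD in the environment "
            "instead of writing them into a config file."
        )
    return email, password

# Demo values only -- never hard-code real credentials
os.environ["GARMIN_EMAIL"] = "you@example.com"
os.environ["GARMIN_PASSWORD"] = "not-a-real-password"
print(load_garmin_credentials()[0])  # you@example.com
```

Environment variables aren’t encrypted either, but they at least keep secrets out of files that get copied, synced, or committed. An OS keychain or a secrets manager would be better still.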

Using Them Safely

At some point safety will be a key part of all MCPs. For now, carefully consider the access you give them and the private information you share with them. You will want to consider the source of the MCP before you grant it more access and information: Is it from a trusted company with which you already share the information?

The sharing of private or company information with an LLM is already a hot topic. MCPs increase the exposure risk.

Consider the capabilities of both the MCP and the LLM:

  1. Does the MCP have access to private information that you would not want leaked? Can it perform destructive operations, such as changing, overwriting, or deleting data? Or does it just fetch data and/or perform calculations? 
  2. Are you asking the LLM to do things it is good at, such as summarizing and extrapolating information? Do you have a way to double-check the results from the LLM to ensure that they are true and accurate? Is it OK if the LLM generally, but not necessarily exactly, follows your instructions? You can get into more danger if you try to force the LLM to follow very specific instructions where any deviation can be harmful!

MCP producers will eventually add safeguards. Well-written MCPs should have settings for roles, access, and sharing of information. They should require confirmation before performing potentially destructive actions. These safeguards will require the assistance of LLM producers to ensure that the models don’t simply bypass them!

What to Expect

Be prepared for some trial and error. MCPs are still relatively new, documentation varies in quality, and you may encounter some of the same reliability issues I faced with my guitar tab generator. Even if an MCP is working perfectly, the LLM may misuse it or mangle the results.

Start with simpler MCPs like the Garmin example or web search and give them easy tasks before moving to more complex integrations. This will give you a feel for how your LLM interacts with external tools before tackling specialized use cases.

Many MCPs are currently created by hobbyists, so in addition to the security warnings I provided above, you should currently expect that they will have bugs, lack options you might expect, could change suddenly (or disappear), and could be slow or unreliable. 

As time goes by they will become as reliable as apps on your phone–for better and for worse. Like apps they will be regularly updated, changed, split, etc.–and they will become more standardized, smarter, and offer more options.

Moving Forward

MCPs are at an inflection point that mirrors the early iPhone App Store – rough, exciting, and full of possibility.

Today, they’re challenging but rewarding. If you have technical expertise, this is an ideal time to experiment. You’ll encounter bugs, cryptic documentation, and security concerns, but you’ll also gain early understanding of technology that’s about to become fundamental. If you lack that tech background, waiting some months will make things considerably easier.

They’re already solving real problems. By the time you’ve read this far, you should see how MCPs can address the LLM limitations you’ve been working around. Access to your data, computational tools, structured thinking – the variety of MCPs emerging each week shows developers finding genuine value.

They’ll rapidly evolve into something bigger. I’m comfortable predicting that MCPs, or something like them, will become ubiquitous, polished, monetized, and powerful. The specification is young – the technology we use in a few years probably won’t even be called ‘MCP’, and better alternatives may emerge. But the core concept of letting LLMs interact with external tools through standard protocols is here to stay.

This has all the characteristics of a platform play. Just as the App Store transformed what phones could do, MCPs will become the ‘apps’ that make LLM platforms truly useful and widespread. And profitable.

Whether you experiment now or wait for maturity, start thinking about what data you’d want to access and what questions you’d ask. The tools defining the next generation of LLMs are being built right now – many by hobbyists with GitHub accounts and good ideas. Understanding what’s possible now positions you to use these tools effectively when they become mainstream.

Stay Tuned

In Part 3, I’ll take you behind the scenes of building an MCP from scratch. I’ll walk through developing and hosting my guitar tab generator, share what went right (and very wrong) when using LLMs to help with the development process itself, and show you how to actually use it. If you’ve been curious about creating your own MCP, or want to see the other side of the hobbyist ecosystem I’ve been describing, that’s where we’re headed next.