MCP is dead; long live MCP

(chrlschn.dev)

94 points | by CharlieDigital 4 hours ago

85 comments

  • 0xbadcafebee 2 hours ago

    MCP is a fixed specification/protocol for AI app communication (built on top of an HTTP CRUD app). This is absolutely the right way to go for anything that wants to interoperate with an AI app.

    For a long time now, SWEs seem to have been bamboozled into thinking the only way you can connect different applications together is "integrations" (tightly coupling your app to the bespoke API of another app). I'm very happy somebody finally remembered what protocols are for: reusable communications abstractions that are application-agnostic.

    The point of MCP is to be a common communications language, in the same way that HTTP, FTP, SMTP, IMAP, etc. are. This is absolutely necessary since you can (and will) use AI for a million different things, but AI has specific kinds of things it might want to communicate, with specific considerations. If you haven't yet, read the spec: https://modelcontextprotocol.io/specification/2025-11-25

    • tptacek 43 minutes ago

      Why is this the right way to go? It's not solving the problem it looks like it's solving. If your challenge is that you need to communicate with a foreign API, the obvious solution to that is a progressively discoverable CLI or API specification --- the normal tool developers use.

      The reason we have MCP is because early agent designs couldn't run arbitrary CLIs. Once you can run commands, MCP becomes silly.

      There is a clear problem that you'd like an "automatic" solution for, but it's not "we don't have a standard protocol that captures every possible API shape", it's "we need a good way to simulate what a CLI does for agents that can't run bash".

      • harrall a minute ago

        CLIs don't work for the coworkers at work who aren't technical.

        Have you tried to use a random API before? It’s a process of trial and error.

      • isbvhodnvemrwvn 37 minutes ago

        It's significantly more difficult to secure random CLIs than those APIs. All LLM tools today bypass their ignore files by running commands their harness can't control.

    • simianwords 2 hours ago

      > This is absolutely necessary since you can (and will) use AI for a million different things

      the point is, is it necessary to create a new protocol?

      • hannasanarion 2 hours ago

        Exactly this. I've made some MCP servers and attached tons of other people's MCP servers to my LLMs, and I still don't understand why we can't just use OpenAPI.

        Why did we have to invent an entire new transport protocol for this, when the only stated purpose is documentation?

      • CharlieDigital an hour ago

        By and large, it is a very simple protocol, and if you build something with it, you will see that it is just a series of defined flows and message patterns. When running over streamable HTTP, it is more or less just a simple REST API over HTTP with a JSON-RPC payload format and a known schema.

        Even the auth is just OAuth.
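
        For a sense of how thin that layer is, here is a sketch of the JSON-RPC request/response pair behind a single `tools/call`, using only the Python standard library. The tool name, arguments, and result text are made up for illustration; only the envelope and method follow the MCP spec:

```python
import json

# JSON-RPC 2.0 request an MCP client would POST to a streamable-HTTP server.
# The "get_weather" tool and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Amsterdam"},
    },
}

# A matching response: same envelope; the result carries content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "12C, light rain"}],
        "isError": False,
    },
}

# On the wire, both are plain JSON bodies.
wire = json.dumps(request)
print(json.loads(wire)["method"])
```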

      • paulddraper an hour ago

        It’s not a new protocol.

        It’s JSON-RPC plus OAuth.

        (Plus a couple bits around managing a local server lifecycle.)

        • drdaeman 14 minutes ago

          The world would surely be a saner place if, instead of “MCP vs CLI”, people talked about “JSON-RPC vs execlp(3)”.

          Not accurate, but at least it makes one think of the underlying semantics. Because, really, what matters is some DSL to discover and describe action invocations.

    • ambicapter 2 hours ago

      If AI is AI, why does it need a protocol to figure out how to interact with HTTP, FTP, etc.? MCP is a way to quickly get those integrations up and running, but purely because the underlying technology has not lived up to its hyped abilities so far. That's why people think of MCP as a band-aid fix.

      • 8note an hour ago

        Why the desire to reinvent the wheel every time? Agents can do it accurately, but you have to wait for them to figure it out every time, and waste tokens on non-differentiated work.

        The agents are writing the MCPs, so they can figure out those HTTP and FTP calls. MCP makes it so they don't have to every time they want to do something.

        I wouldn't hire a new person to read a manual and then make a bespoke JSON to call an HTTP server every single time I want to make a call, and that's not a knock on the person's intelligence. It's just a waste of time doing the same work over and over again. I want the results of calling the API, not to spend all my time figuring out how to call the API.

        • theptip an hour ago

          It’s simply about making standard, centralized plugins available. Right now Claude benefits from a “link GitHub Connector” button with a clear manifest of actions.

          Obviously if the self-modifying, Clawd-native development thing catches on, any old API will work. (Preferably documented but that’s not a hard requirement.)

          For now though, Anthropic doesn’t host a clawd for you, so there isn’t yet a good way for it to persist custom integrations.

      • avereveard an hour ago

        Each AI needs context management per conversation, and this is something that would be very clunky to replicate on top of HTTP or FTP (as in requiring side-channel information for session and conversation management).

        Everyone looks at APIs and, sure, MCP seems redundant there. But look at an agent driving a browser: the get-DOM method depends on all the actions performed since the window opened, and it needs to be per agent, per conversation.

        Can you do that as REST? Sure, sneak a session and conversation into a parameter or a cookie. But then the protocol is not really just HTTP anymore, is it? It's all this clunky coupling that comes with a side of unknowns: when is a conversation finished? Did the client terminate, or are we just between messages? As you solve these for the hundredth time, you'd start itching for standardization.

        • superturkey650 23 minutes ago

          All MCP adds is a session token. How is that not already a solved problem?

      • CharlieDigital 2 hours ago

        Because protocols provide structure that increases correctness.

        It is not a guarantee (as we see with structured output schemas), but it significantly increases compliance.

        • ambicapter 2 hours ago

          You're interacting with an LLM, so correctness is already out the window. So model-makers train LLMs to work better with MCP to increase correctness. So the only reason correctness is increased with MCP is because LLMs are specifically trained against it.

          So why MCP? Are there other protocols that will provide more correctness when trained? Have we tried? Maybe a protocol that offers more compression of commands will overall take up more context, thus offering better correctness.

          MCP seems arbitrary as a protocol, because it kinda is. It doesn't >>cause<< the increase in correctness in and of itself; the fact that it >>is<< a protocol is the reason it may increase correctness. Thus, any other protocol would do the same thing.

          • fartfeatures 2 hours ago

            > You're interacting with an LLM, so correctness is already out the window.

            With all due respect, if you are prompting correctly and following approaches such as TDD / extensive testing, then correctness is not out the window. That is a misunderstanding likely caused by older versions of these models.

            Correctness can be as complete as with any other new code. I've used AI to port algorithms from Python to Rust, which I've then tested against math oracles and published examples. Not only can I check my code mathematically, but in several instances I've found and fixed subtle bugs upstream, even in well-reviewed code that has been around for many years and is widely used. It is simply a tool.

          • CharlieDigital 2 hours ago

                > So why MCP? ...  MCP seems arbitrary as a protocol
            
            You're right, it is an arbitrary protocol, but it's one that is supported by the industry.

            See the screencaps at the end of the post that show why this protocol. Maybe one day, we will get a better protocol. But that day is not today; today we have MCP.

      • nonethewiser 2 hours ago

        If AI is AI why does it need me to prompt it?

  • codemog 2 hours ago

    As soon as MCP came out I thought it was over engineered crud and didn’t invest any time in it. I have yet to regret this decision. Same thing with LangChain.

    This is one key difference between experienced and inexperienced devs; if something looks like crud, it probably is crud. Don’t follow or do something because it’s popular at the time.

    • fartfeatures 2 hours ago

      All the code I work on now has an MCP interface so that the LLM can debug more easily. I'd argue it is as important as the UI these days. The amount of time it has saved me is unreal. It might be worth investing a very small amount of your time in it to see if it is a good fit. Even a poor protocol can provide useful functionality.

      • kybernetikos an hour ago

        I've just been discovering this pattern too. It's made a huge difference. Trying to get Claude to remote control an app for testing via the various other means was miserable and unreliable.

        I got it to build an MCP server into the app that supported sending commands to allow Claude to interact with it as if it was a user, including keypresses and grabbing screenshots, and the difference was immediate and really beneficial.

        Visual issues were previously one of the things it would tend to struggle with.

      • moralestapia 2 hours ago

        Our workflows must be massively different.

        I code in 8 languages, regularly, for several open source and industry projects.

        I use AI a lot nowadays, but have never ever interacted with an MCP server.

        I have no idea what I'm missing. I am very interested in learning more about what you use it for.

        • Kaliboy an hour ago

          I've managed to ignore MCP servers for a long time as well, but recently I found myself creating one to help the LLM agents with my local language (Papiamentu) in the dialect I want.

          I made a Prolog program that knows the valid words and spelling, along with sentence composition rules.

          Via the MCP server, a translated text can be verified. If it's not faultless, the agent enters a feedback loop until it is.

          The nice thing is that it's implemented once and I can use it in opencode and Claude without having to explain how to run the Prolog program, etc.

        • CharlieDigital 2 hours ago

              > I have no idea what I'm missing.
          
          The questions I'd ask:

              - Do you work in a team context of 10+ engineers?
              - Do you all use different agent harnesses?
              - Do you need to support the same behavior in ephemeral runtimes (GH Agents in Actions)?
              - Do you need to share common "canonical" docs across multiple repos?
              - Is it your objective to ensure a higher baseline of quality and output across the eng org?
              - Would your workload benefit from telemetry and visibility into tool activation?
          
          If none of those apply, then it's not for you. Server-hosted MCP over streamable HTTP benefits orgs and teams and has virtually no benefit for individuals.

          • fartfeatures 2 hours ago

            MCP is useful for the above, but I work on my own more often than not, and the utility of MCP goes far beyond that list (see my other comment above).

        • fartfeatures 2 hours ago

          I can't go into specifics about exactly what I'm doing but I can speak generically:

          I have been working on a system using a Fjall datastore in Rust. I haven't found any tools that directly integrate with Fjall, so even getting insight into what data is there, being able to remove it, etc. is hard. So I used https://github.com/modelcontextprotocol/rust-sdk to create a thin CRUD MCP. The AI can use this to create fixtures, check if things are working how they should, or debug things; e.g., if a query is returning incorrect results, I tell the AI and it can quickly check whether it is a datastore issue or a query-layer issue.
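
          That "thin CRUD MCP" idea is small enough to sketch. Below is a hedged Python stand-in (not the Rust SDK above): an in-memory dict plays the role of Fjall, the initialize handshake and tool input schemas are omitted, and only the method names and message shapes follow the MCP spec:

```python
import json

# In-memory dict as a stand-in for the real datastore (Fjall in the comment).
store = {}

def handle(req):
    """Dispatch one JSON-RPC request to a thin CRUD tool surface."""
    method = req.get("method")
    if method == "tools/list":
        result = {"tools": [
            {"name": "put", "description": "Store a value under a key"},
            {"name": "get", "description": "Fetch the value for a key"},
            {"name": "delete", "description": "Remove a key"},
        ]}
    elif method == "tools/call":
        name = req["params"]["name"]
        args = req["params"].get("arguments", {})
        if name == "put":
            store[args["key"]] = args["value"]
            text = "ok"
        elif name == "get":
            text = json.dumps(store.get(args["key"]))
        elif name == "delete":
            store.pop(args["key"], None)
            text = "ok"
        else:
            text = f"unknown tool: {name}"
        result = {"content": [{"type": "text", "text": text}]}
    else:
        result = {}
    return {"jsonrpc": "2.0", "id": req.get("id"), "result": result}

# A stdio transport would read one JSON message per line; simulate one call.
line = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
print(json.dumps(handle(json.loads(line))))
```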

          Another example: I have a simulator that lets me create test entities and exercise my system. The AI with an MCP server is very good at exercising the platform this way. It also lets me interact with it using plain English even when the API surface isn't directly designed for human use: "Create a scenario that lets us exercise the bug we think we have just fixed and prove it is fixed; create other scenarios you think might trigger other bugs or prove our fix is only partial."

          One more example: I have an Overmind-style task runner that reads a file, starts up every service in a microservice architecture, can restart them, can see their log output, can check if they can communicate with the other services, etc. Not dissimilar to how the AI can use Docker, but without Docker, to get max performance both during compilation and usage.

          A last example is using off-the-shelf MCPs for VCS servers like GitHub or GitLab. It can look at issues, update descriptions, comment, and code review. This is very useful for your own projects but even more useful for other people's: "Use the MCP tool to see if anyone else is encountering similar bugs to what we just encountered."

        • 8note an hour ago

          It's very similar to the switch from a text editor + command line to an IDE with a debugger.

          The AI gets to do two things:

          - expose hidden state

          - do interactions with the app, and see before/after/errors

          It gives more time where the LLM can verify its own work without you needing to step in. It's also a bit more integration-test-y than unit.

          If you were to add one MCP, make it Playwright or some similar browser automation MCP. Very little else has value-add over just being able to control a browser.

          • CPLX 43 minutes ago

            I’ve been using Chrome DevTools MCP a lot for this purpose and have been very happy with it.

        • winrid 2 hours ago

          Many products provide MCP servers to connect LLMs. For example, I can have Claude examine things through my Ahrefs account without me using the UI, etc.

          • 8n4vidtmkvmk an hour ago

            That's also one of the things that worries me the most. What kind of data is being sent to these random endpoints? What if they go rogue or change their behavior?

            A static set of tools is safer and more reliable.

            • 8note an hour ago

              MCP is generally a static set of tools, where auth is handled by deterministic code and not exposed to the agent.

              The agent sees tools as allowed or not by the harness/your MCP config.

              For the most part, the same company that you're connecting to is providing the MCP, so it's not sending your data to random places, but you can also just write your own. It's a fairly thin wrapper: a bit of code to call the remote service, and a bit of documentation on when/what/why to do so.

      • mlnj 2 hours ago

        You are right.

        Although I have been a skeptic of MCPs, it has been an immense help with agents. I do not have an alternative at the moment.

    • ph4rsikal 2 hours ago

      LangChain is not over-engineered; it's not engineered at all. Pure Chaos.

      • embedding-shape 2 hours ago

        Much like how "literally" doesn't literally mean "literally" anymore, "over-engineered" in most cases doesn't mean "too much engineering happened" but "wrong design/abstractions", which of course translates to "designs/abstractions I don't like".

        • fartfeatures an hour ago

          Under-engineered is a much better term.

    • jamesrom 37 minutes ago

      What part of MCP do you think is over-engineered?

      This is quite literally the opposite opinion I and many others had when first exploring MCP. It's so _obviously_ simple, which is why it gained traction in the first place.

    • tptacek 43 minutes ago

      I still don't really understand what LangChain even is.

    • whattheheckheck 2 hours ago

      So let's say you have a rag llm chat api connected to an enterprises document corpus.

      Do you not expose an MCP endpoint? Literally every VS Code or opencode node gets it for free (a small JSON snippet in their mcp.json config), if you do auth right.
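
      The exact keys vary by client, but the "small JSON snippet" is roughly this shape. The server name, URL, and auth header below are placeholders, not a real endpoint:

```python
import json

# Approximate shape of an mcp.json entry for a remote (streamable HTTP)
# MCP server. "docs-rag" and the URL are placeholders; real clients may
# use slightly different key names for the server map and auth headers.
mcp_config = {
    "mcpServers": {
        "docs-rag": {
            "url": "https://mcp.example.internal/mcp",
            "headers": {"Authorization": "Bearer ${TOKEN}"},
        }
    }
}

print(json.dumps(mcp_config, indent=2))
```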

      • CharlieDigital 2 hours ago

        Not only editors, but also different runtime contexts like GitHub Agents running in Actions.

        We can plug in MCP almost anywhere with just a small snippet of JSON, and because we're serving it from a server, we get very clear telemetry regardless of tooling and environment.

        • chatmasta 2 hours ago

          What are you using for hosting and deploying the MCP servers? I’d like something low friction for enterprise teams to be able to push their MCP definitions as easily as pushing a Git repo (or ideally, as part of a Git repo, kinda like GitHub pages). It’s obviously not sustainable for every team to host their own MCP servers in their own way.

          So what’s the best centralized gateway available today, with telemetry and auth and all the goodness espoused in this blog post?

          • CharlieDigital 2 hours ago

            We built our own (may open source eventually).

            MCP is effectively "just another HTTP REST API", OAuth and everything. The key parts of the protocol are the communication shape and sequence with the client, which most SDKs abstract for you.

            The SDKs for MCPs make it very straightforward to do so now and I would recommend experimenting with them. It is as easy to deploy as any REST API.

          • whattheheckheck an hour ago

            ROSA

            https://docs.aws.amazon.com/whitepapers/latest/overview-depl...

            It should be part of your app and coordinated in a way that everyone in the enterprise can find all the available MCPs, like Backstage or something.

    • kubanczyk 2 hours ago

      > if something looks like crud, it probably is crud

      Yes, technically, but you've probably meant cruft here.

  • Frannky 13 minutes ago

    I don't know. Skills + HTTP endpoints feel way safer, more powerful, and more robust. The problem is usually that the entity offering the endpoint, if the endpoint is AI-powered, incurs the LLM costs, while via MCP the coding agent is eating that cost, unless you are also the one running the API and so can use the coding-plan endpoint to do the AI thing.

    • monsieurbanana 8 minutes ago

      If I haven't misunderstood you, it doesn't really matter whether it's an endpoint or a (remote) MCP: either someone else wants to run LLMs to provide a service for you or they don't.

      A local MCP doesn't come into play because it just couldn't offer the same features in this case.

      • Frannky a minute ago

        The MCP server usually provides some functions you can run, possibly with some database interaction.

        So when you run it, your coding agent is using AI to run that code (what to call, what parameters to pass, and so on). Via MCP, they don't pay any LLM cost; they just offer the code and the endpoint.

        But this is usually messy for the coding agent since it fills up the context. While if you use skill + API, it's easier for the agent since there's no code in the context, just how to call the API and what to pass.

        With something like this, you can then have very complex things happening in the endpoint without the agent worrying about context rot or being able to deal with that functionality.

        But to have that difficult functionality, you also need to call an LLM inside the endpoint, which is problematic if the person offering the MCP service does not want to cover LLM costs.

        So it does matter if it's an endpoint or an MCP because the agent is able to do more complex and robust stuff if it uses skill and HTTP.

  • jamesrom 25 minutes ago

    The problem with MCP isn't MCP. It's the way it's invoked by your agent.

    IMO, by default MCP tools should run in forked context. Only a compacted version of the tool response should be returned to the main context. This costs tokens yes, but doesn't blow out your entire context.

    If other information is required post-hoc, the full response can be explored on disk.

  • s0ulf3re 2 hours ago

    I’ve always felt like MCP is way better suited towards consumer usage rather than development environments. Like, yeah, MCP uses a lot of a context window, is more complex than it should be in structure, and it isn’t nearly as easy for models to call upon as a command line tool would be. But I believe that it’s also the most consumer friendly option available right now.

    It's much easier for users to find out exactly what a model can do with your app over MCP than with a skill built for it, since clients can display every tool available to the user. There's also no need for the model to set up any environment, since it's essentially just writing out a function, which saves time because there's no need to set up as many virtual machine instructions.

    It obviously isn’t as useful in development environments where a higher level of risk can be accepted since changes can always be rolled back in the repository.

    If I recall correctly, there’s even a whole system for MCP being built, so it can actually show responses in a GUI much like Siri and the Google Assistant can.

  • jswny 2 hours ago

    MCP is fine, particularly remote MCP, which is the lowest-friction way to get access to some hosted service with auth handled for you.

    However, MCP is context bloat and, mechanically, not very good compared to CLIs + skills. With a CLI you get the ability to filter/pipe (regular Unix bash) without having to expand the entire tool call in context every single time.

    CLIs also let you use heredoc for complex inputs that are otherwise hard to escape.

    CLIs can easily generate skills from the --help output, and add agent-specific instructions on top. That means you can give the agent all the instructions it needs to know how to use the tools and what tools exist, lazy-loaded, and without bloating the context window with all the tools upfront (yes, I know tool search in Claude partially solves this).

    CLIs also don't have to run persistent processes like MCP, but can if needed.
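
    The "generate skills from --help" step can be scripted in a few lines. In this sketch, Python's own -h output stands in for an arbitrary tool, and the skill file name and layout are invented for illustration:

```python
import os
import subprocess
import sys
import tempfile

# Capture a CLI's help text; the interpreter's own -h stands in for any tool.
help_text = subprocess.run(
    [sys.executable, "-h"], capture_output=True, text=True
).stdout

# Wrap it into a lazy-loaded "skill": agent-specific guidance up top,
# the raw usage reference below. The header format here is made up.
skill = "\n".join([
    "# Skill: run-python",
    "Use this tool for ad-hoc scripting; prefer -c for one-liners.",
    "",
    "## Usage reference (from --help)",
    help_text,
])

path = os.path.join(tempfile.gettempdir(), "run-python.skill.md")
with open(path, "w") as f:
    f.write(skill)

print(path)
```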

    • simianwords 2 hours ago

      but you need to _install_ a CLI. with MCP, you just configure!

      • charcircuit an hour ago

        You just paste in a web link to a skill. Your agent is smart enough to know how to use it or save it.

  • MaxLeiter 2 hours ago

    MCPs are great for some use cases

    In v0, people can add e.g. Supabase, Neon, or Stripe to their projects with one click. We then auto-connect and auth to the integration’s remote MCP server on behalf of the user.

    v0 can then use the tools the integration provider wants users to have, on behalf of the user, with no additional configuration. Query tables, run migrations, whatever. Zero maintenance burden on the team to manage the tools. And if users want to bring their own remote MCPs, that works via the same code path.

    We also use various optimizations like a search_tools tool to avoid overfilling context

    • tptacek 42 minutes ago

      I can add Supabase or Stripe to my project with zero clicks just by setting up a .envrc.

      • MaxLeiter 39 minutes ago

        But then the LLM needs to write its own tools/code for interacting with said service. Which is fine, but slower, and it can make mistakes vs. officially provided tools.

  • antirez 2 hours ago

    Ask yourself: what kind of tool would I love to have to accomplish the work I'm asking the LLM agent to do? Often, what is practical for humans to use is practical for LLMs too. And the answer is almost never the kind of thing MCP exports.

    • CharlieDigital 2 hours ago

      You interact with REST APIs (analogue of MCP tools) and web pages (analogue of MCP resources) every day.

      I'd recommend that you take a peek at MCP prompts and resources spec and understand the purpose that these two serve and how they plug into agent harnesses.

      • antirez an hour ago

        So you love interacting with web sites by sending requests with curl? And if you need the price of an AWS service, you love guessing the service name (querying some other endpoint), then asking some tool for its price, getting JSON back, and so forth? Or are you better served by a small .md file you pre-compiled with the services you use the most, reading a couple of lines from it?

        > I'd recommend that you take a peek at MCP prompts and resources spec

        Don't assume that if somebody does not like something they don't know what it is. MCP makes happy developers that need the illusion of "hooking" things into the agent, but it does not make LLMs happy.

  • AznHisoka 2 hours ago

    I am not sure where the OP is hearing that the hype cycle is dissipating; MCP adoption is actually accelerating, not decreasing [1]

    [1] More than 200% growth in official MCP servers in the past 6 months: https://bloomberry.com/blog/we-analyzed-1400-mcp-servers-her...

    • esafak 2 hours ago

      He's talking about the vanguard; early adopters. Growth is in the bigger, later stages of the funnel.

  • jwilliams 2 hours ago

    I have moved towards super-specific scripts (so I guess "CLI"?) for a few reasons:

    1. You can make the script very specific for the skill and permission appropriately.

    2. You can have the output of the script make clear to the LLM what to do. Lint fails? "Lint rules have failed. This is important for reasons blah blah, and you should do X before proceeding." Otherwise the agent is too focused on smashing out the overall task and might opt to route around the error. Note you can use this for successful cases too.

    3. The output and token usage can be very specific to what the agent needs. Saves context. My GitHub comments script really just gives the comments + the necessary metadata, not much else.

    The downsides of MCP all focus on (3), but 1 and 2 can be really important too.

  • jollyllama 2 hours ago

    > Centralization is Key

    > (I preface that this is primarily relevant for orgs and enterprises; it really has no relevance for individual vibe-coders)

    The thing about tools that "democratize" software development, whether it is Visual Studio/Delphi/QT or LLMs, is that you wind up with people in organizations building internal tools on which business processes will depend who do not understand that centralization is key. They will build these tools in ignorance of the necessity of centralization-centric approaches (APIs, MCP, etc.) and create Byzantine architectures revolving around file transfers, with increasing epicycles to try to overcome the pitfalls of such an approach.

    • CharlieDigital 2 hours ago

      There's a distinction between individual devs and organizations like Amazons or even a medium sized startup.

      Once you have 10-20 people using agents in wildly different ways getting wildly different results, the question of "how do I baseline the capabilities across my team?" becomes very real.

      In our team, we want to let every dev use the agent harness that they are comfortable with and that means we need a standard mechanism of delivering standard capabilities, config, and content across the org.

      I don't see it as democratization versus corporate fascism so much as "can we get consistent output from developers of varying degrees of skill using these agents in different ways?"

    • grensley 2 hours ago

      On the other hand, I've seen over-centralization completely crush the hopes and dreams of people with good ideas.

  • skybrian 2 hours ago

    If it's a remote API, I suppose the argument is that you might as well fetch the documentation from the remote server, rather than using a skill that might go out of date. You're trusting the API provider anyway.

    But it's putting a lot of trust in the remote server not to prompt-inject you, perhaps accidentally. Also, what if the remote docs don't suit local conditions? You could make local edits to a skill if needed.

    Better to avoid depending on a remote API when a local tool will do.

    • CharlieDigital 2 hours ago

      Or just build your own remote MCP server for docs? It's easy enough now that the protocol and supporting SDKs have stabilized.

      Most folks are familiar with MCP tools but not so much MCP resources[0] and MCP prompts[1]. I'd make the case that these latter two are way more powerful and significant because (most) tools support them (to varying degrees at the moment, to be fair).

      For teams/orgs, these are really powerful because they simplify delivery of skills and docs and move them out of the repo (yes, there are benefits to this, especially when the content is applicable across multiple repos), on top of surfacing telemetry that informs usage and efficacy.

      Why would you do it? One reason is that now you can index your docs with more powerful tools. Postgres FTS, graph databases to build a knowledge base, extract code snippets and build a best practices snippet repo, automatically link related documents by using search, etc.

      [0] https://modelcontextprotocol.io/specification/2025-06-18/ser...

      [1] https://modelcontextprotocol.io/specification/2025-06-18/ser...

      • skybrian an hour ago

        I think that might make sense for teams or people working in multiple repos. Maybe less so for individuals working on a side project.

  • Jayakumark an hour ago

    Can you please share source code for the Resources/Prompts example ?

  • kburman 2 hours ago

    I’m struggling to understand the recent wave of backlash against MCP. As a standard, it elegantly solves a very real set of integration problems without forcing you to buy into a massive framework.

    It provides a unified way to connect tools (whether local via stdio or remote via HTTP), handles bidirectional JSON-RPC communication natively, and forces tools to be explicit about their capabilities, which is exactly what you want for managing LLM context and agentic workflows.

    This current anti-MCP hype train feels highly reminiscent of the recent phase where people started badmouthing JSON in favor of the latest niche markup language. It's just hype-driven contrarianism trying to reinvent the wheel.

  • twapi 2 hours ago

    > Influencer Driven Hype Cycle

  • lostdog 2 hours ago

    In MCP setups you do give the agent the full description of what the tool can do, but I don't see why you couldn't do the same for executables. Something like injecting `tool_exe --agent-usage` into the prompt at startup.

    Great article otherwise. I've been wondering why people are so zealous about MCP vs executable tools, and it looks like it's just tradeoffs between implementation differences to me.

  • menix 2 hours ago

    One aspect I think is often overlooked in the CLI vs. MCP debate: MCP's support for structured output and output schema (introduced in the 2025-06-18 spec). This is a genuinely underrated feature that has practical implications far beyond just "schema bloat."

    Why? Because when you pair output schema with CodeAct agents (agents that reason and act by writing executable code rather than natural language, like smolagents by Hugging Face), you solve some of the most painful problems in agentic tool use:

    1. Context window waste: Without output schema, agents have to call a tool, dump the raw output (often massive JSON blobs) into the context window, inspect it, and only then write code to handle it. That "print-and-inspect" pattern burns tokens and attention on data the agent shouldn't need to explore in the first place.

    2. Roundtrip overhead: Writing large payloads back into tools has the same problem in reverse. Structured schemas on both input and output let the agent plan a precise, single-step program instead of fumbling through multiple exploratory turns.
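A toy illustration of point 1, with an invented tool and schema (nothing here comes from a real MCP server): because the output schema declares the result's shape up front, a CodeAct agent can emit a single-step program against known fields instead of dumping the raw payload into context to discover them.

```python
# Hypothetical output schema a server would advertise for an
# "orders" tool under the 2025-06-18 structured-output feature.
OUTPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "orders": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "id": {"type": "string"},
                    "total": {"type": "number"},
                },
            },
        }
    },
}

def agent_generated_step(result: dict) -> float:
    # A CodeAct agent can write this one-shot program directly, because
    # the schema guarantees result["orders"][i]["total"] exists.
    # No print-and-inspect round trip, no raw JSON in the context window.
    return sum(order["total"] for order in result["orders"])

sample = {"orders": [{"id": "a1", "total": 19.5}, {"id": "a2", "total": 5.0}]}
print(agent_generated_step(sample))
```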

    There's a blog post on Hugging Face that demonstrates this concretely using smolagents: https://huggingface.co/blog/llchahn/ai-agents-output-schema

    And the industry is clearly converging on this pattern. Cloudflare built their "Code Mode" around the same idea (https://blog.cloudflare.com/code-mode/), converting MCP tools into a TypeScript API and having the LLM write code against it rather than calling tools directly. Their core finding: LLMs are better at writing code to call MCP than at calling MCP directly. Anthropic followed with "Programmatic tool calling" (https://www.anthropic.com/engineering/code-execution-with-mc..., https://platform.claude.com/docs/en/agents-and-tools/tool-us...), where Claude writes Python code that calls tools inside a code execution container. Tool results from programmatic calls are not added to Claude's context window, only the final code output is. They report up to 98.7% token savings in some workflows.

    So the point here is: MCP isn't just valuable for the centralization, auth, and telemetry story the author laid out (which I fully agree with). The protocol itself, specifically its structured schema capabilities, directly enables more efficient and reliable agentic workflows. That's a concrete technical advantage that CLIs simply don't offer, and it's one more reason MCP will stick around.

    Long live MCP indeed.

  • charcircuit an hour ago

    >The LLM has no way of knowing which CLI to use and how it should use it…unless each tool is listed with a description somewhere either in AGENTS|CLAUDE.md or a README.md

    This is what the skill file is for.
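For readers unfamiliar with skills: a hedged sketch of what such a file looks like, assuming the Agent Skills layout (a SKILL.md with YAML frontmatter; the name, paths, and scripts below are illustrative):

```markdown
---
name: deploy-helper
description: Use when the user asks to deploy or roll back a service.
---

# Deploy helper

Run `./scripts/deploy.sh <env>` to deploy. Valid environments: staging, prod.
Before deploying to prod, run `./scripts/smoke-test.sh` and check the exit code.
```

The context-control property being claimed here is that only the frontmatter (name and description) is loaded up front; the body is pulled in on demand when the skill is relevant.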

    >Centralizing this behind MCP allows each developer to authenticate via OAuth to the MCP server and sensitive API keys and secrets can be controlled behind the server

    This doesn't require MCP. Nothing is stopping you from creating a service to proxy requests from a CLI.

    The problem with this article is that it doesn't recognize that skills are a more general superset of MCP. Anything done with MCP could have an equivalent done with a skill.


  • SilverElfin 2 hours ago

    This came up in recent discussions about the Google apps CLI that was recently released. Google initially included an MCP server but then removed it silently - and some people believe this is because of how many different things the Google Workspace CLI exposes, which would flood the context. And it seemed like in social media, suddenly a lot of people were talking about how MCP is dead.

    But fundamentally that doesn’t make sense. If an AI needs to be fed instructions or schemas (context) to understand how to use something via MCP, wouldn’t it need the same things via CLI? How could it not? This article points that out, to be clear. But what I’m calling out is how simple it is to determine for yourself that this isn’t an MCP versus CLI battle. However, most people seem to be falling for this narrative just because it’s the new hot thing to claim (“MCP is dead, Long Live CLI”).

    As for Google - they previously said they are going to support MCP. And they’ve rolled out that support even recently (example from a quick search: https://cloud.google.com/blog/products/ai-machine-learning/a...). But now with the Google Workspace CLI and the existence of “Gemini CLI Extensions” (https://geminicli.com/extensions/about/), it seems like they may be trying to diminish MCP and push their own CLI-centric extension strategy. The fact that Gemini CLI Extensions can also reference MCP feels a lot like Microsoft’s Embrace, Extend, Extinguish play.

    • jswny 2 hours ago

      MCP loads all tool definitions immediately. A CLI does not, because it's not auto-exposed to the agent; you have more control over the context about which tools exist and how that context is delivered.

      • CharlieDigital 2 hours ago

        You can solve the same problem by giving subsets of MCP tools to subagents so each subagent is responsible for only a subset of tools.

        Or...just don't slam 100 tools into your agent in the first place.

        • simianwords 2 hours ago

          >Or...just don't slam 100 tools into your agent in the first place.

          But I can do that with CLIs, so isn't that a negative for MCP?

          • CharlieDigital 2 hours ago

            You've missed the point and hyperfocused on the story around context rather than on why an org would want centralized servers exposing MCP endpoints instead of CLIs.

            • simianwords 2 hours ago

              I would want to know what point I missed. I can have 100 CLIs but not 100 MCP tools.

              100 MCP tools will bloat the context whereas 100 CLIs won't. Which part do you disagree with?

              • CharlieDigital an hour ago

                1. The part where you are providing 100 tools instead of a few really flexible tools

                2. The part where you think your agent is going to know how to use 100 CLI tools that are not already in its training dataset without using extra turns walking the help content to dump out command names and schemas

                3. The part where, without a schema defining the inputs, the LLM wastes iterations trying to correct the input format.

                4. The part where, without the full picture of the tools, your odds of it picking the same tools (or the right tools) come down to gambling that it outputs the right keywords to trigger the tool.

                5. The part where you forgot to mention that for your agent to know that your 100 CLI tools exist, you had to either provide it in context directly, provide it in context in a README.md, or have it output the directory listing and send that off to the LLM to evaluate before picking the tool and then possibly expanding the man pages for several tools and sub commands using several turns.

                Don't get me wrong, CLIs are great if the tool is already in the LLM's training set (`git`, for example). Not so great if it isn't, because the LLM will need to walk the man pages anyway.
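Point 3 can be illustrated with a minimal sketch of what an MCP `tools/list` entry looks like (the tool name and fields here are illustrative, not from a real server). The `inputSchema` is what lets the model emit a correct call on the first try, instead of guessing a CLI's flag syntax and burning turns on error messages:

```python
# Illustrative MCP tool declaration with a JSON Schema for its inputs.
CREATE_TICKET_TOOL = {
    "name": "create_ticket",
    "description": "Create an issue in the team tracker.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "priority": {"type": "string", "enum": ["low", "medium", "high"]},
        },
        "required": ["title"],
    },
}

def validate_call(args: dict, schema: dict) -> list:
    """Toy check of required keys and enum values; real servers would
    run a full JSON Schema validator and reject the call up front."""
    errors = [f"missing: {k}" for k in schema.get("required", []) if k not in args]
    for key, spec in schema.get("properties", {}).items():
        if key in args and "enum" in spec and args[key] not in spec["enum"]:
            errors.append(f"bad value for {key}: {args[key]}")
    return errors

print(validate_call({"title": "Fix login"}, CREATE_TICKET_TOOL["inputSchema"]))
print(validate_call({"priority": "urgent"}, CREATE_TICKET_TOOL["inputSchema"]))
```

With a CLI, the equivalent feedback only arrives after a failed invocation, i.e. after a wasted turn.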

                • simianwords an hour ago

                  > The part where you are providing 100 tools instead of a few really flexible tools

                  I'm not sure how that solves the issue. The shape of each individual tool will be different enough that you will need a different schema for each, something you pass on every turn with MCP and can avoid with a CLI. Also, CLIs can be flexible too.

                  > The part where you think your agent is going to know how to use 100 CLI tools that are not already in its training dataset without using extra turns walking the help content to dump out command names and schemas

                  By CLI's we mean SKILLS.md so it won't require this hop.

                  > The part where, without a schema defining the inputs, the LLM wastes iterations trying to correct the input format.

                  What do we lose by one iteration? We lose a lot by passing all the tool shapes on each turn.

                  > The part where, not having the full picture of the tools, your odds of it picking the same tools or the right tools is completely gambling that it outputs the right keywords to trigger the tool to be used.

                  we will use skills

                  > The part where you forgot to mention that for your agent to know that your 100 CLI tools exist, you had to either provide it in context directly, provide it in context in a README.md, or have it output the directory listing and send that off to the LLM to evaluate before picking the tool and then possibly expanding the man pages for several tools and sub commands using several turns.

                  skills

      • climike 2 hours ago
  • rvz an hour ago

    Great article, and what I would expect from someone inspecting the hype rather than jumping in head first just because influencers (paid or unpaid) are screaming for engagement after a large X account posted their opinions.

    This is one of the first posts I've seen that cuts through the hype around both MCPs and CLIs with nuanced findings.

    There were cases where using MCP didn't make sense (such as connecting it to a database), and suddenly generating CLIs for everything doesn't make sense either. The latter seems like a solution in search of a problem, built on top of a bad standard.

    But no-one could answer "who" was the customer of each of these, which is why the hype was unjustified.