Introducing The Model Context Protocol

(anthropic.com)

127 points | by benocodes 3 hours ago

41 comments

  • ianbutler an hour ago

    I’m glad they're pushing for standards here; literally everyone has been writing their own integrations, and (as they also mention) the level of fragmentation and repetition going into building the infra around agents is super high.

    We’re building an in-terminal coding agent, and our next step was to connect to external services like Sentry and GitHub, where we would also have been making a bespoke integration or using a closed-source provider. We appreciate that they already have MCP integrations for those services. Thanks Anthropic!

    • bbor an hour ago

      I've been implementing a lot of this exact stuff over the past month, and couldn't agree more. And they even typed the Python SDK -- with Pydantic!! An exciting day to be an LLM dev, that's for sure. Will be immediately switching all my stuff to this (assuming it's easy to use without their Starlette `server` component...)

  • WhatIsDukkha 12 minutes ago

    I don't understand the value of this abstraction.

    I can see the value of something like DSPy, where there are some higher-level abstractions for wiring together a system of LLMs.

    But this seems like an abstraction that doesn't really offer much besides "function calling, but you use our Python code".

    I see the value of language server protocol but I don't see the mapping to this piece of code.

    That's actually negative value if you're integrating into an existing software system, or just, you know, exposing functions you've defined, versus remapping functions you've defined into this intermediate abstraction.

    • resters 3 minutes ago

      The secret sauce part is the useful part -- the local vector store. Anthropic is probably not going to release that without competitive pressure. Meanwhile this helps Anthropic build an ecosystem.

  • somnium_sn 2 hours ago

    @jspahrsummers and I have been working on this for the last few months at Anthropic. I am happy to answer any questions people might have.

    • quibono 16 minutes ago

      Please pass this on to someone if at all possible: LaTeX parsing & rendering breaks ALL the time in Claude.

    • kseifried 2 hours ago

      For additional context, the PyPI package: https://pypi.org/project/mcp/

      And the GitHub repo: https://github.com/modelcontextprotocol

    • startupsfail 6 minutes ago

      Is it at least somewhat in sync with plans from Microsoft, OpenAI, and Meta? And is it compatible with the current tool use API and computer use API that you’ve released?

      From what I’ve seen, OpenAI attempted to solve the problem by partnering with an existing company that API-fies everything. This looks like a more viable approach compared to effectively starting from scratch.

    • slalani304 2 hours ago

      Super cool and much-needed open standard. Wondering how this will work for websites/platforms that don't have exposed APIs (LinkedIn, for example)

    • throwup238 an hour ago

      Are there any resources for building the LLM side of MCP so we can use the servers with our own integration? Is there a specific schema for exposing MCP information to tool or computer use?

    • tcdent an hour ago

      Do you have a roadmap for the future of the protocol?

      Is it versioned? I.e., does this release constitute an immutable protocol for the time being?

    • s3tt3mbr1n1 2 hours ago

      First, thank you for working on this.

      Second, a question. Computer Use and JSON mode are great for creating a quasi-API for legacy software which offers no integration possibilities. Can MCP better help with legacy software interactions, and if so, in what ways?

      • jspahrsummers 2 hours ago

        Probably, yes! You could imagine building an MCP server (integration) for a particular piece of legacy software, and inside that server, you could employ Computer Use to actually use and automate it.

        The benefit would be that to the application connecting to your MCP server, it just looks like any other integration, and you can encapsulate a lot of the complexity of Computer Use under the hood.

        If you explore this, we'd love to see what you come up with!
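        A minimal sketch of what that might look like on the wire, assuming the MCP `tools/call` method from the spec; the legacy system, tool name, and arguments here are hypothetical, and the Computer Use automation would live behind the server's tool handler:

```python
import json

# Hypothetical sketch: the JSON-RPC 2.0 request an MCP client would send to
# invoke a tool on such a legacy-software server. "tools/call" is the MCP
# method name; "read_invoice_screen" and its arguments are made up for
# illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_invoice_screen",
        "arguments": {"invoice_id": "INV-1042"},
    },
}

wire = json.dumps(request)          # what actually crosses the transport
decoded = json.loads(wire)
print(decoded["method"])            # tools/call
print(decoded["params"]["name"])    # read_invoice_screen
```

        To the connecting client this is indistinguishable from any other tool call; whether the server answers by querying an API or by driving a legacy UI is encapsulated behind the handler.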

    • benocodes 2 hours ago

      Seems from the demo videos like Claude desktop app will soon support MCP. Can you share any info on when it will be rolled out?

    • instagary 2 hours ago

      What is a practical use case for this protocol?

      • somnium_sn an hour ago

        One of the common use cases I've been relying on is connecting a development database in a local Docker container to Claude Desktop or any other MCP client (e.g. an IDE assistant panel). I visualize the database layout in Claude Desktop and then create a Django ORM layer in my editor (which has MCP integration).

        Internally we have seen people experiment with a wide variety of integrations, from reading data files to managing their GitHub repositories through Claude using MCP. Alex's post https://x.com/alexalbert__/status/1861079762506252723 has some good examples. Alternatively, please take a look at https://github.com/modelcontextprotocol/servers for a set of servers we found useful.
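        For the database example above, a sketch of how such a server might be wired into Claude Desktop, assuming the documented `mcpServers` config format and the pre-built Postgres server from the servers repo; the connection string and database name are illustrative:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost:5432/devdb"
      ]
    }
  }
}
```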

  • jascha_eng 29 minutes ago

    Hmm, I like the idea of providing a unified interface for all LLMs to interact with outside data. But I don't really understand why this is local-only. It would be a lot more interesting if I could connect this to my GitHub in the web app and Claude automatically had access to my code repositories.

    I guess I can do this for my local file system now?

    I also wonder: if I build an LLM-powered app and currently simply do RAG and then inject the retrieved data into my prompts, should this replace that? Can I even integrate this in a useful way?

    The use case of "on your machine with your specific data" seems very narrow to me right now, considering how many different context sources and use cases there are.

    • jspahrsummers 3 minutes ago

      We're definitely interested in extending MCP to cover remote connections as well. Both SDKs already support an SSE transport with that in mind: https://modelcontextprotocol.io/docs/concepts/transports#ser...

      However, it's not quite a complete story yet. Remote connections introduce a lot more questions and complexity—related to deployment, auth, security, etc. We'll be working through these in the coming weeks, and would love any and all input!

    • bryant 24 minutes ago

      > It would be a lot more interesting if I could connect this to my github in the web app and claude automatically has access to my code repositories.

      From the link:

      > To help developers start exploring, we’re sharing pre-built MCP servers for popular enterprise systems like Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer.

      • jascha_eng 22 minutes ago

        Yes but you need to run those servers locally on your own machine. And use the desktop client. That just seems... weird?

        I guess the reason for this local focus is that it's otherwise hard to provide access to local files, which is a decently large use case.

        Still it feels a bit complicated to me.

  • orliesaurus 8 minutes ago

    How is this different from function calling libraries that frameworks like Langchain or Llamaindex have built?

    • quantadev 2 minutes ago

      After a quick look, it seemed to me like they're trying to standardize how clients call servers, which nobody needs and nobody is going to use. However, if they have new tools that can be plugged into my LangChain stuff, that will be great and I can use that, but I have no place for any new client/server models.

  • orliesaurus 9 minutes ago

    Are there any other Desktop apps other than Claude's supporting this?

  • ado__dev an hour ago

    You can use MCP with Sourcegraph's Cody as well

    https://sourcegraph.com/blog/cody-supports-anthropic-model-c...

  • outlore 2 hours ago

    I am curious: why this instead of feeding your LLM an OpenAPI spec?

    • jasonjmcghee 42 minutes ago

      It's not about the interface to make a request to a server, it's about how the client and server can interact.

      For example:

      When and how should notifications be sent and how should they be handled?

      ---

      It's a lot more like LSP.
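      A rough sketch of that distinction in JSON-RPC 2.0 terms (which both LSP and MCP build on); `tools/list` and `notifications/tools/list_changed` are MCP method names, and the helper function is illustrative:

```python
# A request carries an "id" and expects a response; a notification has no
# "id" and gets none. This stateful, two-way session is what distinguishes
# MCP (like LSP) from one-shot function calling against a REST API.
request = {"jsonrpc": "2.0", "id": 7, "method": "tools/list"}
notification = {
    "jsonrpc": "2.0",
    "method": "notifications/tools/list_changed",
}

def is_notification(message: dict) -> bool:
    """Per JSON-RPC 2.0, a message without an 'id' is a notification."""
    return "id" not in message

print(is_notification(request))       # False
print(is_notification(notification))  # True
```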

      • outlore 26 minutes ago

        makes sense, thanks for the explanation!

    • pizza 38 minutes ago

      I think OpenAI-spec function calls are to this what raw bytes are to Unix file descriptors

    • quotemstr 38 minutes ago

      Same reason in Emacs we use lsp-mode and eglot these days instead of ad-hoc flymake and comint integrations. Plug and play.

  • recsv-heredoc an hour ago

    Thank you for creating this.

  • benocodes 2 hours ago

    Good thread showing how this works: https://x.com/alexalbert__/status/1861079762506252723