No need for Ctrl+C when you have MCP

March 03, 2026
5 min read


Ryan sits down with Member of the Technical Staff at Anthropic and Model Context Protocol co-creator David Soria Parra to talk about the evolution of MCP from local-only to remote connectivity, how security and privacy fit into their work with OAuth2 for authentication and authorization, and how they’re keeping MCP completely open-source and widely available by moving it to the Linux Foundation.

Credit: Alexandra Francis

The Model Context Protocol (MCP) is an open-source standard, created by Anthropic, for connecting AI applications to external systems. You can keep up with—or join—the work the MCP community is doing at their Discord server.

Connect with David on Twitter.

Today’s shoutout goes to Populist badge winner competent_tech for their answer to 'How do I review a PR assigned to me in VS 2022?'

TRANSCRIPT

[Intro Music]

Ryan Donovan: Calling all roboticists and engineers interested in AI robotics. Intrinsic, along with Open Robotics, Nvidia, and Google DeepMind have announced a competition with a prize pool of $180,000 to solve tasks like cable management and insertion with the latest AI and Open-Source tools. Register by the 17th of April and take part as an individual, or form a team. Go to intrinsic.ai/stack. That's I-N-T-R-I-N-S-I-C dot AI forward slash S-T-A-C-K.

Ryan Donovan: Hello everyone, and welcome to the Stack Overflow Podcast, a place to talk all things software and technology. I am your host, Ryan Donovan. Today, as we have for about a year, we're gonna be talking about MCP, but I wanted to go right to the source. So, today we are talking with the co-creator of the Model Context Protocol. My guest is that co-creator David Soria Parra, who's also a member of the technical staff at Anthropic. So, welcome to the show, David.

David Soria Parra: Nice being on the show. Really appreciate it.

Ryan Donovan: Yeah, of course. I appreciate you being here. So, before we get into Model Context Protocol, can you tell us a little bit of your origin story in software and technology?

David Soria Parra: Yeah, my origin story goes back to most people that are deep in tech. I started programming quite early, maybe like 13-14, and then worked very early on at a small company in Munich, Germany, where I'm originally from. And through that, got really involved very early—maybe at 16-17—with a lot of Open-Source communities, for example, the PHP community, where I was an active contributor and Release Manager for a while. I then went on and worked on a version control system called Mercurial, which was a competitor to Git. And then, from there on, I joined Facebook in 2013 to work on source control systems. And so, I've always been a developer tooling and systems kind of engineer, and really love developers, developer tooling, developer experience. I worked originally 10 years at Facebook on developer tooling and experience, and then moved on to Anthropic with a short stint at a venture capital firm called Sutter Hill. And then, at Anthropic, even there, I started originally on developer tooling-related work.

Ryan Donovan: Right. Well, you know, MCP is kind of a developer tooling.

David Soria Parra: Yeah.

Ryan Donovan: If you consider the AI to be a developer. So, I think I first heard about it a year ago, January, and then everybody was talking about it almost immediately. It seemed like this was a real need that it filled in the space. But I'm wondering where this came from with you all. What was the problem that you all run into?

David Soria Parra: So, when I joined Anthropic, I was originally tasked to look into, how can we make most of our engineers and researchers increasingly use AI? And I think when I looked at the existing tools that we had at the time, we had a strong model, but the applications we had around it were quite limited in terms of the ability to take this model that you have and connect it to the outside world. And so, my early observations and my early frustrations with a lot of software we had at the time was that I have to go and copy code snippets or documents in and out of the application, and then get the answer out and copy it out. And what I was really thinking about was that I have this amazing AI system, this brain, but it's kind of put into a jar and can't reach out to the world. And so, it was very clear, very early on for me that we need some form of connectivity. And I originally had this idea around something that I called, in the first iteration, 'Claude Connect,' which was this little application sitting next to Claude Desktop that allows you to connect to different sources. And then, talking it through with Justin Spahr-Summers, the other co-creator of MCP, we realized this is really a need for a protocol. This is a very classic N times M problem, where you have multiple clients, be it your IDE, be it something like Claude Desktop, be it something like Claude Code, and multiple sources, be it the file system at the time, web search, be it connectivity to something like Sentry or databases, and so on. And so, that's really where this originated, basically out of my own frustration of having to copy things in and out of the prompt.

Ryan Donovan: Right. I mean, I think some of the best software is to solve your own problems, right?

David Soria Parra: Of course.

Ryan Donovan: You know, I've had these conversations where we're talking about the AI agent landscape in terms of the protocols, right? And comparing it to the early web protocols. MCP has seemed to have evolved from this thing that's solving your copy and paste problem to one of the new protocols. I mean, it sounds like that was a conscious decision on your part.

David Soria Parra: I think it was conscious that it's a protocol.

Ryan Donovan: Mm-hmm.

David Soria Parra: It was because, again, protocols solve really well for these N times M problems where you have multiple clients and multiple servers. I think it was very intentional that we wanted to create an open ecosystem. And as such, I think it was always intentional that we take this idea that we have, and help the whole ecosystem to really make sure that they have the ability to connect their systems to AI models in the best possible way. And so, in that regard, I think it was always intentional to have a protocol there. I think with all Open-Source projects (or with all projects in general) there's an aspect of evolution: there are things you have not foreseen, and there are things you have seen ahead. And I do think we always came at it from: we wanna really build something that people use to take these models and connect them to the things that matter the most to them.

Ryan Donovan: It seems like it's grown more, like you said, to a protocol from a piece of middleware, but it's also, as I understand it, a spec that people can implement themselves. Is that right?

David Soria Parra: Yes. Yeah, because it's a protocol. So, basically, it just defines how does one software piece, like a client, talk to a server. It is effectively a specification that says, these are the primitives, this is how they should talk, this is the life cycle of when they talk to each other. Similar to how, if you think about HTTP, it is a protocol, but it's specified in a document of how do systems talk to each other? And similarly, MCP defines how an AI application talks to a source of data for it. And of course, we have on top of that, implementations in SDKs for popular programming languages, we have implementations in client applications like Claude Desktop and Claude Code. But in principle, because it's a protocol, it's just a specification on a website that people must follow if they wanna implement MCP.
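To make the "specification on a website" concrete: MCP messages are JSON-RPC 2.0 objects, and a session begins with an `initialize` request from the client. The sketch below shows roughly what such a request could look like; the capability fields and version string are illustrative placeholders, not the normative shape from the spec.

```python
import json

# An illustrative MCP "initialize" request. MCP is built on JSON-RPC 2.0;
# the exact params shown here are a sketch, not the normative schema.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",  # a spec revision string (example)
        "capabilities": {},               # what this client supports
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# On the wire, this is just serialized JSON exchanged between client
# and server, whatever the transport underneath happens to be.
wire_message = json.dumps(initialize_request)
print(wire_message)
```

The point of the protocol framing is exactly what David describes: any client that emits messages like this can talk to any server that understands them, without either side knowing the other in advance.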

Ryan Donovan: Mm-hmm. And with most of the ones we know about, you know, when you talk about the primitives, you have things that you can expect it to call to hook into those primitives; but with AI, obviously it's plain text, non-deterministic, you're not sure what you're gonna get. How did you think about and design the primitives that would actually hook into the protocol?

David Soria Parra: Yeah, the first interesting aspect is that when we made the protocol, we thought about it as a protocol between an application that uses an AI model, and some form of data source. And so, there's a distinct difference between it being a protocol between the model and a data source, or a protocol between the application and a data source. And so, what this means is that the original three primitives we had were designed around different interaction patterns within the application. So, that'd be prompts, things that the user itself can use to obtain a prompt from the data source, from the server, and put it directly into the prompt. We had resources, which the application can use to either add to the prompt by itself, or ingest into a RAG pipeline, or something very similar. And then, we have tools that the models really interact with. The interesting aspect there was that when we looked at this, particularly when it comes to tools, we got a lot of leeway of not having to be very precise, of having a lot of variety of different actions that you can take, because there is a somewhat intelligent model on the other side that can do a lot of the hard work for you. [It] can tell you when to call. You can be very flexible with the different parameters. So, for example, traditional protocols often have like, these are the parameters or these are the things you must provide. And we are fairly simple [and] open in that regard. We're like, 'here [is] some form of tool definition you provide,' and then, the model just calls you in the right possible way. And that's because you have this AI model on the other end. But for prompts and resources, they are still driven by a deterministic piece of software, like an application, so they're a little bit more deterministic in that regard.
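The "some form of tool definition you provide" can be pictured as a name, a description, and a JSON Schema for the inputs; the model reads all three and decides when and how to call the tool. The tool below is invented purely for illustration, a sketch of the general shape rather than the spec's exact field list.

```python
# A hypothetical MCP tool definition. The server advertises a name, a
# description the model can read, and a JSON Schema for the arguments.
# Nothing here hard-codes a call sequence: the model on the other end
# decides when to call the tool and with what arguments.
search_tickets_tool = {
    "name": "search_tickets",  # invented example tool
    "description": "Search the issue tracker for tickets matching a query.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Free-text search query"},
            "limit": {"type": "integer", "description": "Max results", "default": 10},
        },
        "required": ["query"],
    },
},
```

Contrast this with prompts and resources, which (as David notes) are selected by the user or the application deterministically; only tools lean on the model's judgment to fill in the parameters.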

Ryan Donovan: Yeah. So, it's not that you had to look for some sort of keyword, some sort of magic word.

David Soria Parra: No, the magic happens in the model, right?

Ryan Donovan: Yeah. I mean, that's always the AI way – that the magic always happens in the model. So, in the initial state of it, what were the things that you left out that you've had to sort of come across and develop along the way?

David Soria Parra: So, the very first iteration of MCP was a very small set of primitives, and it was local only at the time. So, you would have to call a program, talk over standard in/standard out with it, and then speak MCP with it. And that worked well in many ways, but it was clear to us that eventually we want to make that connectivity something where you can talk to an internet service somewhere. These were things that we had anticipated, but then there were a lot of things you need to discover: things you know to some degree and have an idea that you will require, but to the extent of how you need to do it, that is something you need to learn along the way. So, for example, authorization and authentication is an area that, if you are assuming a local application, like a local server, does not become very important to you, because you have the same security model. You're executing a program on your computer with full access, and so you already have full access and you can provide it with any information in different ways. But for remote services, you need to have a strong authentication model. And so, that's when we leaned really into OAuth, and OAuth 2. But what we didn't know is that what we are effectively trying is building a plug-and-play system like the internet has very rarely seen before, where you can take any client and any server and connect them freely with each other. And it turned out that OAuth is actually not very great at some of these aspects, because OAuth mostly assumes that the server and the authenticating resources know each other in advance, and that's not true for MCP. And so, we had to do a lot of discovery work, and actually change or add to the OAuth specification in order to make it work really well for MCP. So, this was one of these.
I think there were additional things where, later on, as people used MCP more, there were richer interactions where we looked at: you really do want to have a way to force a client to ask the user a question and not have the model answer that question. These are primitives (in this case, the primitive that we call elicitation) that we just did not foresee we would require. But having an open community and having an open ecosystem that really pushed the boundaries and engaged with us in an open discussion really helped us there to drive the standard forward in a way that I think is beneficial for everyone.
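The local-only transport David describes is simple to picture: the client launches the server as a subprocess and the two exchange JSON-RPC messages over stdin/stdout, one JSON object per line. The sketch below shows the framing; the server command in the comment is a placeholder, not a real program.

```python
import json

def frame(message: dict) -> bytes:
    """Serialize a JSON-RPC message for a stdio transport: one JSON
    object per line, newline-terminated, UTF-8 encoded."""
    return (json.dumps(message) + "\n").encode("utf-8")

# Conceptually, a client spawns the server and speaks MCP over its pipes.
# "my-mcp-server" below is a hypothetical command, shown only to sketch
# the shape of the interaction:
#
#   proc = subprocess.Popen(["my-mcp-server"],
#                           stdin=subprocess.PIPE, stdout=subprocess.PIPE)
#   proc.stdin.write(frame({"jsonrpc": "2.0", "id": 1, "method": "ping"}))
#   proc.stdin.flush()
#   response = json.loads(proc.stdout.readline())

ping = frame({"jsonrpc": "2.0", "id": 1, "method": "ping"})
```

Note how this explains the security model he mentions: the client runs the server as a local process under its own user, so there is no authentication step at all, which is exactly what had to change for remote servers.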

Ryan Donovan: Did you ever consider doing a sort of central proxy MCP server to get around that clients-and-servers-having-to-know-each-other problem?

David Soria Parra: We never– there was a pattern about proxies that existed quite early, even internally; people inherently were drawn to proxies. But they mostly were drawn to proxies to combine abilities of servers together, or make configuration a little bit easier. I think we always wanted an open ecosystem where clients and servers don't have to know each other, and we always designed around this kind of work.

Ryan Donovan: Mm-hmm.

David Soria Parra: A lot of the additional proxy work was around combining multiple services, and authentication keys, and this type of handling that just makes infrastructure handling a little bit easier. And I think the pattern later became what's now known as 'MCP Gateways,' which are doing a lot of that grunt work of authentication. The hard parts. MCP is difficult in implementing at times, mostly because of OAuth, and so things like gateways help you to do it once centrally against a central entity, like an IDP, and then you just handle in the backend on authenticated servers that are way easier to implement. And so, that's a common pattern. But I think in Open-Source, you often can rely on some of them being inherently coming out of some of the problems being solved by virtue of building infrastructure that people will build for you. And you need to understand what are the premises to really care about in the protocol versus what is software on top and infrastructure on top that other people will build for you. And I think proxies and gateways are part of the latter

Ryan Donovan: Mm-hmm. Yeah. That's not necessarily something you as the protocol designer, have to worry about.

David Soria Parra: No. That's the same like in HTTP. HTTP doesn't have to worry about application-level firewalls and load balancing. These sort of things that naturally happen out of infrastructure components that people build for the ecosystem.

Ryan Donovan: Yeah. And I think the OAuth and the security parts of MCP are what a lot of people are thinking about. We have an article about it that, as of publication, will probably be up on the blog. You said you're using OAuth, OAuth2 – how much of that security, that OAuth, are you putting into the protocol, and how much are you foisting on the user?

David Soria Parra: This is an interesting problem, right? I think one thing that we, of course, at Anthropic deeply care about, is safety of AI systems.

Ryan Donovan: Mm-hmm.

David Soria Parra: And I think one thing we are very aware of is that the interface to a model is just pure text. You just put any type of text together in a massive prompt, and there's no distinction of what are trustworthy sources and what aren't trustworthy sources within a model, right? Within a prompt. And MCP amplifies this problem of untrustworthiness because you now allow applications to connect to external aspects. And so, the question there is: [what] of all that [are] inherent problems for the model providers in making these inputs safe, and what are aspects that we can help with on the protocol side? At the moment, I think on the protocol side, we give a lot of guidance for client implementers [on] how to handle certain aspects. We are working with some people in working groups to make stricter definitions for areas where this is necessary. So, for example, when you are dealing with healthcare data, you must guarantee that the client only has one MCP server that provides that healthcare data, and no other MCP server, so that you avoid exfiltration.

Ryan Donovan: Right.

David Soria Parra: Because these are inherently hard, because you eventually end up with a model that takes this data and potentially changes it around, and then inserts it into a different tool call. And so, there are just fundamental problems that I think MCP amplifies, and MCP, as a result, gets a lot of stories around security. But I think a good chunk of them are inherent problems to LLMs. And I think that's where we at Anthropic deeply, deeply care, and deeply make sure that the model, how you use it, and the safety mechanisms are in place for you to use these things in safe ways. Right. And then, aligned with the model side, on the protocol side we wanna look into additional ways to help categorize some of the data so that clients can make better decisions about what is trustworthy and what is not trustworthy.

Ryan Donovan: Yeah. On the other side, I think there's some issues around this that are sort of inherent in dependency chains.

David Soria Parra: Of course.

Ryan Donovan: I heard a story about a third-party MCP server that connected to a CRM that inserted a BCC field at one point. So, people were emailing BCC, emailing something that they didn't know about. That sort of data–

David Soria Parra: Yeah. This is a classic problem, right? There are two things to that. There are actually, interesting enough, problems that have always existed, right?

Ryan Donovan: Right.

David Soria Parra: So, local MCP servers have the exact problem that any type of registry has. If you download an NPM package today, you do not know what this NPM package does. It has code, it can be executed, it's fully true and complete, and can do whatever it wants. There is no permission or sandboxing, usually. And so, people do– you know, that's why you have supply chain attacks, and this is what classic supply chain attacks look like. And you can circumvent it to some degree, and you can have clients that implement sandboxing in other approaches to reduce the blast radius. But the fundamental problem is, you have a supply chain attack that you need to trust what you download, and as long as you cannot verify this, it is difficult. That applies to package management systems, and that applies to all sorts of registries in general. And so, this is a fundamental problem that has always existed, and I think you need to treat it the same way, right? If you are IT, you do want to have a list of trusted MCP servers and you hash-verify them against your device management system, and you only trust those. Then, on the remote side, I think it's a similar approach to websites, right? There's inherent trust that we have going to, I don't know, Paypal.com versus Givemeyourmoney.com, right? There's a fundamental difference in how we perceive this. Although the protocols make no distinction. HTTP doesn't tell you what is trustworthy and what's not, because it cannot. And I think, similarly, MCP, without starting to have a central authority, which is a whole different story and problematic piece, would not be able to pro provide. And even central authorities have problems. And we have seen this with SSL and other things of, what is trustworthy? What does an SSL certificate tell you? It tells you this person has verified who they are, but it doesn't [tell you] if they're shady or not.

Ryan Donovan: Right. Yeah. They verified it with somebody else, right?

David Soria Parra: Yeah, exactly.

Ryan Donovan: So, with the sort of open nature of 'any client can talk to any server,' did you have to consider what happens if you have multiple servers that kind of do similar things, and how do you finely slice the routing for MCP servers?

David Soria Parra: You always have this issue of like, you will have multiple things that do the same thing, and that's just very natural in the Open-Source ecosystem. I think it's even considered something good; you want this to be the case so people have alternatives. People can hack on different things. And so, I'm not too worried. I think there's a natural aspect that with the increasing amount of remote servers that companies provide, you will naturally have a bit more authoritative servers, because the GitHubs of the world obviously provide a GitHub server that's remote, and then that's obviously the authoritative server for GitHub. And similar for Linear and others that provide these types of servers. And for local servers, I think that hackability is actually beautiful and good, and I'm not too worried about it. The routing, [at] the end of the day, boils down to discoverability for the user, and since the user just has to install MCP servers themselves, there are things like marketplaces, registries, these types of things that currently exist, and there are some curated ones: Anthropic has a marketplace for MCP servers that you can use with Claude.ai. There are other people that provide marketplaces. Some have different forms of curation, and that helps you to understand. You hopefully trust Anthropic to curate that these servers are useful and good. And we of course make sure that there's no overlap between, like– you will not find three different Google servers. You will find one Google server, right?

Ryan Donovan: Right, right. When I first heard about MCP, somebody compared it to GraphQL. What do you think about that comparison?

David Soria Parra: It depends how they mean it. I think there are different meanings to GraphQL. You can look at it as like, 'oh, it's a fairly generalized query mechanism.' You can look at the less well-meaning case of: GraphQL got very popular and then became very unpopular over time. I'm not sure I care much about the specific comparison; what I care about is valid criticism, and valid benefits. I like when people say positive things about the protocol that they like, and I like when people have very meaningful written criticism, so that we can all learn. Because at the end of the day, the goal that we have is really just building a standard for people to connect models or AI applications to the data sources that they care about. And I think that's a need that doesn't go away. And I want the industry to work more towards standardization. And so, how we get there is only by having hard and honest conversations about what's best for everybody out there in terms of the user base and what the users actually require. And that's what we wanna build with MCP. And I hope, of course, that MCP is that standard, and if not, then I hope there will be another standard. But I worry that the worst outcome could be that if the current standard goes away, there will be no other standard, and I think that would be a net negative for the industry.

Ryan Donovan: Mm-hmm. Yeah. And you know, on the path to that standardization, Anthropic recently donated MCP to the Linux Foundation, right? How did that come about, and how do you see that as guaranteeing or improving the standardization of MCP?

David Soria Parra: So, if you think about an Open-Source project that is company run, which I think is very common these days: effectively, we put open governance into place very early. And so actually, MCP is not steered by Anthropic, but by a combination of engineers from Anthropic, Google, OpenAI, Amazon, Microsoft.

Ryan Donovan: Mm-hmm.

David Soria Parra: But part of the problem still remains that even though the way the decisions are made are now more open, like an open governance model, technically Anthropic still owns the trademarks and the logos. We had seen in the past in Open-Source examples where some of these actors around Open-Source have then taken that position that they have, and eventually close down some of the access to some of the Open-Source project. And I don't want to name names, but if you look back, there's plenty of examples of where this happened. And what is very important to me and as an Open-Source person, but also to Anthropic in general, is this notion that this will be truly open.

Ryan Donovan: Mm-hmm.

David Soria Parra: And for that, finding a foundation that is trustworthy, that has a long track record of having very successfully run big Open-Source projects and giving all their trademarks, all the logo pieces to them so that everyone in the industry knows now nobody can take away MCP anymore.

Ryan Donovan: Right.

David Soria Parra: That's really the goal, right? Nobody can go and make this proprietor anymore. Anthropic can't run around anymore and start suing people for it, which we would never do, but that's not what other companies know. So, for us, it's finding partners to make this next step to truly do what we actually always meant [to do], like build an open standard everybody can implement, everybody can use, and give people in the industry, and particularly companies here, the security that they can lean into this, and don't have to fear that there's any change to the situation. In fact, on the governance side, little has changed. We're still operating very similarly. The models haven't changed. The move to Linux Foundation and the Agentic AI Foundation that we've created with it is primarily ensuring that MCP will always stay open, and will always– that everybody can lean on it to its fullest.

Ryan Donovan: Yeah. So, looking towards the future, you're still the chief maintainer. Where do you see the gaps in the standard? What do you hope to implement in the future?

David Soria Parra: So, the standard is actually quite big at this point, and I think there's a risk that it can become too big. There are a few things that we do wanna improve. The number one thing that we're currently focusing on, and I think will take a lot of the time in the next six months, is that we are lucky enough that a lot of the big hyperscalers, the Googles of the world, the Microsofts of the world, really lean into MCP and work on and with MCP; but because of the scale that they're operating at, they found that some of what we're currently using—our transport protocols for remote servers—is very limiting for them to really scale horizontally, and I think they're very right about this. And so, they have put forward proposals around changing the transport protocols slightly to really allow this horizontal scaling. And that's something we really wanna enable. We wanna make sure that this works for people who really need to horizontally scale to thousands and millions of servers and users. That's a big focus. Other focuses are around adjacent things, like making it easy to discover MCP servers: on one side, making our currently experimental Open-Source registry GA; on the other, providing discoverability endpoints, like dot-well-known URLs that MCP servers can provide, so that a browser or a client can discover MCP servers as it browses the web. Very simple: you can imagine an agent going to a website, looking up information, realizing there's an MCP server, and now using that, right? So, discoverability is another aspect there. It's really important to us. And then, we do think that for a lot of areas, be it healthcare, be it finance, be it big enterprises, there is additional work required that does not belong in the core specification, so as not to create more surface area for the core specification, because it's only useful for specific domains.
And so, we're working very hard on a more official extension mechanism to MCP, so that an enterprise can do additional authorization authentication work, that other clients don't have to worry about; or that a finance client can make assumptions about [how] to, for example, adhere to certain standards around how to deal with certain data points, how to quote certain data that has financial requirements from servers that others wouldn't have to consider. And so, we really wanna make sure that MCP works in all different domains by using extensions. And then, stepping into using MCP as a transportation mechanism for more advanced interaction mechanisms like MCP apps, which is this interactive pattern where MCP servers not just provide tools, but fully HTML React components that clients can render, that they can use then to have more interactive patterns, like seed selection in an opera, or in a flight, or something like that, are classic examples of MCP apps that you could wanna deliver. So, we are looking into this extensibility part. We're looking into improving transport for horizontal scalability. We're looking into discoverability. Then of course, we're always looking into growing the community and truly, truly, truly build an amazing long living Open-Source project that does not rely on Anthropic's development resources at all times, which doesn't mean that we want to step back. We still fully committed to it and we still do a lot of the work there, but we really want to make this an open project that everybody feels welcomed and can contribute to.

Ryan Donovan: All right, everybody. It is that time of the show again where we shout out somebody who came onto Stack Overflow, dropped some knowledge, shared some curiosity, and earned themselves a badge. Today, we're shouting out a Populist badge winner, somebody who dropped an answer that was so good, it outscored the accepted answer. So, congrats to @competent_tech for answering, 'How do I review a PR assigned to me in VS 2022?' If you're curious about that, we'll have an answer for you in the show notes. I am Ryan Donovan. I host the podcast and edit the blog here at Stack Overflow. If you have questions, concerns, comments, topics to cover, please email me at [email protected], and if you wanna reach out to me directly, you can find me on LinkedIn.

David Soria Parra: I'm David, member of Technical Staff at Anthropic, still the co-creator of MCP, and you can find me mostly on twitter slash x.com, under the handle @dsp_ and of course, on the Model Context Protocol Discord.

Ryan Donovan: All right, thank you for listening, everyone, and we'll talk to you next time.
