MCP Everywhere -- How Model Context Protocol Is Reshaping M365 Development

If you have been building anything in the Microsoft 365 ecosystem this year, you have probably noticed a pattern. Every major announcement – Teams SDK, Copilot Studio, declarative agents – keeps mentioning the same three letters: MCP.

I first ran into Model Context Protocol back when Anthropic released the spec in late 2024. Interesting idea, I thought. A standard way for AI models to talk to external systems. Cool. Filed it under “things to watch.” Simon Doy wrote a great deep dive into MCP fundamentals for M365 Copilot back in August that’s worth reading if you want the protocol basics covered in an M365 context. Fast forward nine months, and Microsoft has gone all in. MCP is not a nice-to-have anymore. It is becoming the standard integration protocol for everything agent-related in Microsoft 365.

Here is what happened, where MCP shows up now, and what it means if you are building on M365.

What MCP actually is

Model Context Protocol is an open standard – originally from Anthropic – that defines how AI applications communicate with external data sources and tools. The full MCP specification is worth a read if you want the protocol-level details. Think of it as a universal adapter between an AI model and the outside world.

Before MCP, every integration was custom. Want your Copilot agent to talk to your ERP system? Build a custom connector. Want it to query a database? Another custom integration. Every new data source meant new plumbing. Anthropic called this the “N x M problem” – N AI applications times M data sources, each needing its own bespoke integration.

MCP fixes this with a standardized protocol built on JSON-RPC 2.0. An MCP server exposes capabilities through three primitives:

  • Tools – functions the AI model can call (model-controlled)
  • Resources – data the application can read, like files or API responses (application-controlled)
  • Prompts – reusable prompt templates (user-controlled)

The transport layer supports both local connections (via stdio) and remote connections (via HTTP with Server-Sent Events, or the newer Streamable HTTP transport). For M365 scenarios, you will almost always use the remote HTTP approach.

The point is: build one MCP server for your business system, and every MCP-compatible client – Copilot Studio, a declarative agent, a Teams bot – can use it. No additional integration work.
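Under the hood, all of this is plain JSON-RPC 2.0. Sketched as TypeScript literals (the tool name and SKU are made up for illustration), a discovery-and-call round trip looks like this:

```typescript
// Client asks the server what it can do (JSON-RPC 2.0 envelope)
const toolsListRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// The server replies with names, descriptions, and input schemas
const toolsListResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "getProductStock",
        description: "Check current stock level for a product",
        inputSchema: {
          type: "object",
          properties: { productId: { type: "string" } },
          required: ["productId"],
        },
      },
    ],
  },
};

// The model then invokes a tool via tools/call
const toolsCallRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: { name: "getProductStock", arguments: { productId: "SKU-42" } },
};
```

The inputSchema is standard JSON Schema; the model uses it to construct valid arguments for the tools/call request.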

MCP in the Teams SDK

The updated Teams AI Library (headed for a rename to “Teams SDK” later this year) shipped as GA for JavaScript and C# in September, with Python in developer preview. Microsoft’s official announcement covers the full scope, but the key thing is that MCP support is baked right in.

The SDK ships optional packages that let your Teams agent act as an MCP client, an MCP server, or both. As an MCP client, your agent can connect to external MCP servers and dynamically discover their tools. As an MCP server, your agent can expose its own capabilities for other agents to consume.

Here is the interesting part: tool definitions load dynamically. When an MCP server adds a new tool, your agent picks it up automatically. No redeployment. No manifest update. The server declares what it can do, and the client adapts.

That matters more than it sounds. Imagine a Teams agent that connects to three different MCP servers – your CRM, your ticketing system, internal docs. Each server team maintains their own tools independently, and your agent stays current without you touching a line of code.

Add the SDK’s support for Agent-to-Agent (A2A) communication and agentic memory (persistent context across conversations), and the whole thing starts to look very different from the chatbot frameworks we were using a year ago.

MCP in Copilot Studio

Copilot Studio picked up MCP support in May 2025, and the feature hit general availability at the end of that month. But the September updates really round it out.

The setup is simple. In your agent, go to Tools, then Add Tool, then New Tool, and select MCP. Paste in your MCP server URL. That is it. Copilot Studio discovers the available tools from your server and makes them available to your agent. You need Generative Orchestration enabled, but that is the default for new agents anyway.

What landed in September is MCP resource support in public preview. Previously, Copilot Studio could only call MCP tools – basically, invoke functions. Now agents can also read MCP resources: files, API responses, database records. That matters because your MCP server can now serve both actions and data through a single endpoint.

The tracing and analytics also got better. The activity map now shows exactly which MCP server and which tool was invoked at runtime. When you are debugging why an agent gave a weird answer, being able to see which external tool it called (and what came back) saves a lot of guesswork.

One more thing worth noting: Copilot Studio dynamically reflects changes to your MCP server. Add a tool, remove a tool, update a description – the agent picks up the changes automatically. No manual sync required.

MCP in declarative agents

This one is coming at Ignite in November 2025 as a public preview, but it is worth understanding now because it fills in the last gap.

Declarative agents are how you customize Microsoft 365 Copilot for specific business scenarios. They sit inside the Copilot experience and can access M365 data, but until now, connecting them to external systems meant building API plugins with OpenAPI specs. It worked, but it was verbose and required a lot of manual configuration.

With MCP support, most of that goes away. The Microsoft 365 Agents Toolkit in VS Code will offer a guided flow:

  1. Create a new declarative agent project
  2. Select “Add Action” and then “Start with an MCP server”
  3. Enter your MCP server URL
  4. The toolkit fetches available tools and lets you pick which ones to include
  5. Configure authentication (SSO and OAuth 2.0 supported)
  6. The toolkit generates all the scaffolding – manifest.json, ai-plugin.json, declarativeAgent.json – automatically

You do not have to hand-edit JSON manifests or manually write function definitions. The toolkit reads the MCP schema and generates everything.

This is where the “build once, use everywhere” promise of MCP really clicks. The same MCP server that powers your Copilot Studio agent can also power your declarative agent in M365 Copilot. Same server, same tools, different surface.

Building an MCP server for M365

So what does an MCP server actually look like? It is an HTTP endpoint that implements the MCP specification. There are official SDKs for TypeScript, Python, C#, Java, and more – all available from the Model Context Protocol GitHub organization.

Here is a minimal conceptual example in TypeScript:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import { z } from "zod";

const server = new McpServer({
  name: "contoso-inventory",
  version: "1.0.0",
});

// Define a tool (inventoryDb stands in for your data access layer).
// The SDK takes Zod schemas and derives the JSON Schema it advertises.
server.tool(
  "getProductStock",
  "Check current stock level for a product",
  { productId: z.string().describe("The product SKU") },
  async ({ productId }) => {
    const stock = await inventoryDb.getStock(productId);
    return {
      content: [
        { type: "text", text: `Product ${productId}: ${stock.quantity} units in stock` }
      ],
    };
  }
);

// Define a resource
server.resource(
  "product-catalog",
  "products://catalog",
  async (uri) => ({
    contents: [
      { uri: uri.href, text: JSON.stringify(await inventoryDb.getCatalog()) }
    ],
  })
);

// Wire up the transport; in a real app an HTTP framework routes
// incoming requests to transport.handleRequest()
const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
await server.connect(transport);

The server declares its tools with names, descriptions, and input schemas. The AI model uses the descriptions to decide when to call a tool, and the schema to construct the right parameters. Good descriptions matter – they are essentially your prompt engineering for tool selection.

For M365 Copilot specifically, you will need to register your MCP server as an agent connector in the Microsoft 365 app manifest. This is how the platform discovers your server and handles authentication. The Agents Toolkit handles this wiring for you, but it is good to know what is happening under the hood.

Authentication is the part that takes the most thought. For enterprise scenarios, you will typically use OAuth 2.0 with Azure AD tokens. Your MCP server validates the token, extracts the user identity, and returns data scoped to that user’s permissions. The platform supports SSO out of the box, so users do not have to sign in separately.

The A2A companion – agents calling agents

MCP handles the connection between an AI agent and a tool or data source. But what about agents talking to each other?

That is where the Agent2Agent (A2A) protocol comes in. Google introduced A2A in April 2025, and Microsoft backed it almost immediately – announcing support in both Azure AI Foundry and Copilot Studio in May. The protocol is now under the Linux Foundation, with over 50 technology partners on board.

Think of MCP and A2A as complementary layers:

  • MCP = agent to tool (vertical integration)
  • A2A = agent to agent (horizontal collaboration)

A2A lets agents discover each other through “Agent Cards” (JSON metadata describing capabilities), exchange tasks with defined lifecycle states, and share context. Think procurement and inventory agents that need to collaborate on a purchase order without a human wiring them together.
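For a feel of the shape, here is an illustrative Agent Card for an inventory agent – the field names follow the A2A spec's general structure, but the endpoint and skills are hypothetical and the spec is the authoritative schema:

```typescript
// Illustrative Agent Card: the JSON metadata an A2A agent publishes
// so other agents can discover what it can do
const inventoryAgentCard = {
  name: "Contoso Inventory Agent",
  description: "Answers stock questions and files reorder requests",
  url: "https://inventory.contoso.com/a2a", // hypothetical endpoint
  version: "1.0.0",
  capabilities: { streaming: true },
  skills: [
    {
      id: "check-stock",
      name: "Check stock",
      description: "Look up the current stock level for a product SKU",
    },
  ],
};
```

A procurement agent reads this card, sees the check-stock skill, and can send a task without anyone wiring the two systems together.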

The Teams SDK already supports A2A communication patterns. Pair that with MCP for tool access, and you can wire up some pretty involved multi-agent workflows inside M365.

Practical scenario – connecting your business system

Let me make this concrete. Say you have an internal inventory management system with a REST API. Today, if you want M365 Copilot users to check stock levels or place reorder requests through natural language, you would build an API plugin with an OpenAPI specification, configure authentication, write detailed function descriptions, and deploy it as a declarative agent.

With MCP, the approach shifts:

  1. Build one MCP server that wraps your inventory API. Define tools like getProductStock, searchProducts, createReorderRequest. Define resources like the product catalog.

  2. Connect it to Copilot Studio by pasting the URL. Your low-code team can build agents on top of it right away.

  3. Connect it to a declarative agent using the Agents Toolkit. Your pro-dev team gets it into M365 Copilot with a few clicks.

  4. Connect it to a Teams bot via the Teams SDK’s MCP client. Same server, third surface.

  5. Expose it to other agents via A2A, so a procurement agent from a different team can query your inventory as part of its workflow.

One server, multiple consumers. When your inventory API adds a new endpoint, you add a tool to the MCP server, and every connected agent picks it up. You do not redeploy clients, you do not update manifests, you do not coordinate across teams.

That is what makes this interesting. Your business system becomes something any agent in the ecosystem can discover and use on its own.

Where this is heading

The way M365 integrations work is changing. The old model – build a specific connector for a specific surface – is giving way to something looser. Build a capability once, expose it through a standard protocol, let any agent use it.

The September 2025 wave (Teams SDK GA, Copilot Studio MCP resources, A2A support) gets most of the pieces in place. The Ignite announcements in November (declarative agents with MCP) will fill in the rest. By early 2026, I expect MCP to be the default recommendation for any new M365 integration project.

If you are starting a new integration today, build an MCP server. Even if you only connect it to one surface right now, you will thank yourself later when a second team asks for access.
