Microsoft Build 2025: Infrastructure Meets Intelligence

Microsoft's Build 2025 positioned the company as the platform for autonomous AI agents, spanning everything from GitHub Copilot to Azure infrastructure.

Sean Michael Kerner, Contributor

May 23, 2025

Microsoft CEO Satya Nadella delivers a keynote address at Build 2025. (Image: Microsoft)

Microsoft's Build conference has always been about developers, and in 2025 that very specifically means AI.

Microsoft CEO Satya Nadella kicked off the company's Build 2025 developer conference held in Seattle this week with announcements designed to position Microsoft as the central platform for what he called the open agentic web.

Microsoft's Build 2025 conference showcased the company as both the infrastructure backbone and the development platform for the AI era. The announcements span the entire technology stack, from silicon-level optimizations to high-level agent orchestration frameworks.

Key Announcements from Build 2025

The following are key announcements Microsoft made at its Build 2025 conference:

  • Open-sourcing GitHub Copilot in VS Code: AI capabilities integrated directly into the core VS Code repository.

  • Autonomous coding agents: GitHub Copilot can now handle complete software engineering tasks independently through agent workflows.

  • Azure AI Foundry general availability: Production platform supporting 1,900+ models with enterprise-grade multi-agent orchestration.

  • Windows AI Foundry with native MCP support: Local AI development platform bringing cloud capabilities to edge computing.

  • Nvidia GB200 deployment leadership: First cloud provider to deploy GB200 GPUs at scale, achieving 865,000 tokens/second throughput.

  • Copilot Studio multi-agent orchestration: Enterprise workflow automation with fine-tuned models and deterministic logic integration.

  • NLWeb protocol introduction: HTML-equivalent protocol for agentic web applications enabling natural language interfaces.


"We're just about getting into these middle innings of another platform shift, and these middle innings are where all things happen, all things scale," Nadella explained during his keynote. "We're going from these few apps with vertically integrated stacks to more of a platform that enables this open, scalable agentic web. More importantly, it's all about expanding that opportunity for developers across every layer of the stack so that you all can build the apps, the agents that can empower every person and every organization on the planet."

Development Platform Evolution: From Pair Programmer to Peer Programmer

At Build 2025, Microsoft announced a fundamental shift in GitHub Copilot's capabilities, transitioning from code assistance to autonomous software engineering. 

"Over the past few years, we've gone from code completions to chat to multi-file edits and now agents," Nadella said. "And this same pattern is emerging more broadly across the agentic web."


The autonomous capabilities represent a significant technological leap. Nadella announced that there is now a full coding agent built right into GitHub, taking Copilot from being a pair programmer to a peer programmer. 

"You can assign issues to Copilot, bug fixes, new features, code maintenance, and it will complete these tasks autonomously," he said.

Beyond general coding, Microsoft introduced specialized autonomous agents, including an SRE (Site Reliability Engineering) agent. The agent pulls information from the platform, automatically triages the incident, determines the root cause, and mitigates the issue, then logs an incident management report as a GitHub issue with all the outstanding repair items.

Azure AI Foundry Goes GA

Azure AI Foundry reached general availability as Microsoft's comprehensive platform for building production-grade AI applications. Nadella positioned it as a production line for intelligence. 

"It takes more than a great model to build these agents and applications. The system around the model, whether they are evals, this orchestration layer, or RAG, all really, really matter," Nadella said. "Foundry is that complete app platform for the AI age."

Nadella claimed that more than 70,000 organizations are already using it across industries. The technical scope is also comprehensive, with support for over 1,900 models.


Windows AI Foundry and Local Development

Microsoft also introduced Windows AI Foundry, extending cloud AI capabilities to local development environments.

The platform's capabilities are broad, and Microsoft has already used them internally to build its own Windows features.

"Windows AI Foundry is what we used, in fact, ourselves internally to build features on Copilot+ PCs for things like Recall or even Click-to-Do. All of these now are built using the same runtime and the SDK," Nadella said. "And now we're extending this platform to support the full dev life cycle, not just on Copilot PCs, but across CPUs, GPUs, NPUs, all, and in the cloud."

A significant technical development is native MCP support across Windows: "Today, we are taking the next step, modernizing Windows for the agentic web," he said.
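MCP (Model Context Protocol) is built on JSON-RPC 2.0: agents discover and invoke tools a server exposes through methods such as `tools/list`. The sketch below shows the shape of such an exchange with plain dictionaries; the `search_files` tool and its schema are hypothetical, invented only to illustrate the message format.

```python
# Illustrative MCP-style JSON-RPC exchange. The protocol methods shown
# (tools/list) exist in MCP; the tool itself is hypothetical.
import json

# A client asks an MCP server which tools it exposes.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A local app acting as an MCP server might answer with its tool list.
response = {
    "jsonrpc": "2.0",
    "id": 1,  # matches the request id, per JSON-RPC 2.0
    "result": {
        "tools": [
            {
                "name": "search_files",  # hypothetical tool
                "description": "Search local files by keyword",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ]
    },
}

# An agent pairs responses to requests by id, then reads the tool list
# to learn what capabilities it can call.
assert response["id"] == request["id"]
print(json.dumps([t["name"] for t in response["result"]["tools"]]))
```

Native OS-level support means local applications can act as MCP servers like this one, making their capabilities discoverable to any agent on the machine.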

Infrastructure Architecture: Engineering AI at Unprecedented Scale

Build is also an event where Microsoft has always talked about how its own infrastructure helps enable developers. 

Microsoft's infrastructure strategy centers on aggressive silicon adoption, positioning Azure as the first cloud provider to deploy Nvidia's top-end Blackwell GB200 GPUs at production scale. 

"Microsoft was the first cloud provider to bring online the first server, the first rack, and the first data center running Nvidia GB200s," Scott Guthrie, Microsoft executive vice president of Cloud and AI, said during his Build 2025 keynote. "We have multiple customers, including OpenAI, already running production workloads on this infrastructure today, and there are some cloud providers like AWS that still haven't launched a GB200 offering."

The technical implementation leverages rack-scale architecture. Guthrie noted that there are 72 GB200 GPUs in a single rack, all interconnected in a single NVLink domain.

"This allows you to train and run a much larger AI model than previous generations of hardware," he said.

Read more about:

ITPro Today

About the Author

Sean Michael Kerner

Contributor

Sean Michael Kerner is an IT consultant, technology enthusiast and tinkerer. He consults to industry and media organizations on technology issues.

https://www.linkedin.com/in/seanmkerner/