Microsoft Looks to Build Its Own AI Chips, Reducing Dependence on NVIDIA and AMD

By Aayush

Microsoft appears to be charting a new course in AI, focusing on developing its own data centre chips rather than relying heavily on NVIDIA or AMD. This move reflects the company’s broader push for self-sufficiency in artificial intelligence.

Over the past few months, Microsoft has doubled down on AI, backing OpenAI with a $13 billion investment and integrating its technology into a wide range of products and services. CEO Satya Nadella has framed this shift as a departure from Bill Gates' traditional "software factory" vision toward a company built around intelligence, integration, and AI.

Earlier this year, OpenAI unveiled the $500 billion Stargate project, designed to build advanced AI-focused data centres across the U.S. While Microsoft retains a right of first refusal, it has lost its status as OpenAI's exclusive cloud provider.

Microsoft AI CEO Mustafa Suleyman confirmed that the company is developing its own AI models, though they may lag OpenAI by three to six months. “Our strategy is to really play a very tight second,” Suleyman said.

Moving Toward In-House Chips

Microsoft CTO Kevin Scott revealed that the company intends to rely more on its own AI chips across data centres. “We’re not religious about what the chips are,” he said, noting that NVIDIA has historically offered the best price-performance solution. “We will literally entertain anything to ensure we’ve got enough capacity to meet demand.”

The AI boom has created intense demand for GPUs, helping push NVIDIA’s market value close to $4 trillion. By developing its own chips, Microsoft aims to control system design end-to-end—including networks, cooling, and workload optimisation—allowing greater flexibility and efficiency in its data centres.

Suleyman elaborated on the vision for independence:

“We should have the capacity to build world-class frontier models in-house of all sizes, but we should be pragmatic and use other models where we need to. For a company of our size, it’s critical that we are self-sufficient in AI if we choose to be.”

Microsoft’s move builds on the Azure Maia AI Accelerator, launched in 2023, and signals a push toward creating fully integrated, next-generation AI systems.

At the same time, Microsoft's partnership with OpenAI has shown signs of strain, with the company reportedly backing out of two major data centre deals amid disagreements over additional training support. OpenAI CEO Sam Altman, for his part, has indicated that the company is no longer compute-constrained.
