According to a recent report by CNBC Tech, Intel, AMD, and Micron shares have surged by double digits this week as investors bet on CPU makers and memory companies powering the next stage of AI development. The rally is seen as a ‘changing of the guard in AI’ as investors shift their focus from Nvidia, which has led the AI chip market for years, to the newer players.
Key Takeaways
- Intel, AMD, and Micron shares have surged by double digits this week.
- The rally is seen as a ‘changing of the guard in AI’ as investors shift their focus from Nvidia.
- Investors are betting on CPU makers and memory companies to power the next stage of AI development.
- Nvidia’s market share in AI chips is expected to decline as investors turn to newer players.
Historical Context: The Road to AI Chip Dominance
The AI chip race didn’t start with the current investor surge. For over a decade, Nvidia built its dominance through a combination of technical foresight and ecosystem lock-in. In 2012, the breakthrough of AlexNet — a deep neural network that crushed image recognition benchmarks — ran on Nvidia GPUs. That moment lit the spark. Researchers realized parallel processing, something GPUs were designed for, was perfect for training large models.
By 2016, Nvidia had doubled down. It launched the Tesla P100, its first datacenter-focused AI accelerator, followed by the V100 in 2017. These weren’t just repurposed gaming chips — they were built with AI in mind, featuring Tensor Cores for mixed-precision math that sped up training by orders of magnitude.
At the same time, Intel and AMD were slow to pivot. Intel had acquired Altera in 2015 for its FPGA tech and later bought Mobileye and Movidius, but its AI strategy was fragmented. AMD lagged further, focusing on regaining ground in the PC and server CPU markets after years of losing share to Intel.
Nvidia’s CUDA platform deepened its moat. Developers wrote code for CUDA, companies built software stacks around it, and cloud providers stocked their datacenters with A100s and H100s. By 2023, Nvidia controlled over 80% of the AI accelerator market.
But cracks began to show. Supply constraints plagued H100 deliveries. Prices soared. Big customers like Microsoft, Meta, and Google started exploring alternatives. Internal teams began designing their own AI chips. AWS launched Trainium and Inferentia. Google had its TPUs. The message was clear: reliance on a single supplier, even one as capable as Nvidia, was a risk.
That’s when investors started looking again at Intel and AMD.
Nvidia’s Market Share in AI Chips Under Threat?
Nvidia has been the leading player in AI chips for years, with its graphics processing units (GPUs) dominating the market. However, according to a report by CNBC Tech, the company’s market share is expected to decline as investors turn to newer players like Intel and AMD.
The shift isn’t sudden. It’s the result of several converging pressures: supply chain strain, pricing, and the rise of workloads that don’t require massive GPU clusters. Inference — running trained models — often doesn’t need the raw power of an H100. Many real-world applications, like voice assistants or recommendation engines in retail, can run efficiently on CPUs with optimized instruction sets.
Intel’s Gaudi chips, for example, claim 40% better price-performance per watt for training than Nvidia’s A100. AMD’s MI300X competes directly with the H100, offering comparable memory bandwidth at a lower cost. Cloud providers are testing both at scale.
It’s not just about raw specs. It’s about options. When Nvidia was the only game in town, customers had no leverage. Now, even the perception of competition changes the game. Negotiations shift. Pricing models adjust. Deliveries get prioritized.
And while Nvidia still leads in ecosystem strength, Intel and AMD are closing the gap. OneCloud, a mid-tier cloud provider, recently switched 30% of its inference workloads to AMD-powered servers after benchmarking MI250s. The savings in power and licensing fees paid for the migration in under six months.
Intel and AMD’s Rise to Prominence
Intel and AMD have been investing heavily in AI research and development, and their efforts are starting to pay off. Both companies have developed new CPU architectures better suited to AI workloads, and memory suppliers such as Micron have grown alongside them.
Intel’s Sapphire Rapids CPUs include AI acceleration through AMX (Advanced Matrix Extensions), allowing them to handle AI inference without offloading to a GPU. Gaudi2 and the upcoming Gaudi3 aim to make Intel a serious player in training clusters. The company’s partnership with enterprise software vendors has led to optimized versions of TensorFlow and PyTorch that run natively on Gaudi, reducing dependency on CUDA.
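The idea behind AMX-style acceleration is multiplying small tiles of low-precision matrices (int8 or bf16) in hardware and accumulating into wider integers. A toy sketch of that tiled int8 accumulation pattern, in plain Python (illustrative only — real AMX operates on hardware tile registers driven by compiler intrinsics, not loops like these):

```python
# Toy sketch of the tiled int8 matrix multiply that AMX-style units
# accelerate in hardware. Illustrative only: real AMX uses 16-row
# hardware tile registers and compiler intrinsics, not Python loops.

def tiled_int8_matmul(a, b, tile=2):
    """Multiply int8 matrices a (m x k) and b (k x n), accumulating
    into Python ints (standing in for int32 accumulators)."""
    m, k, n = len(a), len(a[0]), len(b[0])
    c = [[0] * n for _ in range(m)]
    # Walk the shared k dimension tile by tile, as a hardware unit would,
    # accumulating partial dot products into the output.
    for k0 in range(0, k, tile):
        for i in range(m):
            for j in range(n):
                c[i][j] += sum(a[i][kk] * b[kk][j]
                               for kk in range(k0, min(k0 + tile, k)))
    return c

a = [[1, 2, 3, 4],
     [5, 6, 7, 8]]
b = [[1, 0],
     [0, 1],
     [1, 1],
     [2, 2]]
print(tiled_int8_matmul(a, b))  # [[12, 13], [28, 29]]
```

The tiling changes nothing about the result, only the access pattern — which is exactly what lets hardware keep small operand tiles resident while streaming through the rest.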
AMD’s strategy is twofold. First, its EPYC server CPUs now include AI-focused instructions, making them viable for lightweight models. Second, the Instinct MI300 series — a hybrid of CPU and GPU logic on a single package — targets the same high-end market as Nvidia’s H100. Microsoft has already adopted MI300A for select Azure workloads, and AMD claims customers are seeing 25% faster time-to-train on certain LLMs.
Micron’s role is quieter but critical. AI models are memory-hungry. GPT-3 has 175 billion parameters — each needing to be stored and accessed quickly. Micron’s HBM3E memory chips offer higher bandwidth and lower latency, directly feeding the demands of next-gen accelerators. As AI chips demand more memory per watt, Micron’s R&D in 3D stacking and power efficiency has made it a key enabler.
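The bandwidth pressure is easy to see with back-of-envelope numbers: generating one token from a large dense model means streaming essentially every weight from memory, so decode speed is roughly bandwidth divided by model size. The bandwidth figures below are illustrative assumptions, not vendor specs:

```python
# Back-of-envelope sketch of why large-model inference is memory-bound.
# Bandwidth numbers are illustrative assumptions, not vendor specs.

PARAMS = 175e9          # GPT-3-class model: 175 billion parameters
BYTES_PER_PARAM = 2     # fp16/bf16 weights

model_bytes = PARAMS * BYTES_PER_PARAM
print(f"Weights alone: {model_bytes / 1e9:.0f} GB")  # 350 GB

# Decoding one token streams (roughly) every weight once, so an upper
# bound on single-stream decode speed is bandwidth / model size.
hbm_bandwidth = 3.35e12   # ~3.35 TB/s, HBM-class stack (assumed)
ddr_bandwidth = 0.3e12    # ~300 GB/s, server DDR5 (assumed)

print(f"HBM-bound ceiling: {hbm_bandwidth / model_bytes:.1f} tokens/s")
print(f"DDR-bound ceiling: {ddr_bandwidth / model_bytes:.2f} tokens/s")
```

An order-of-magnitude gap like that is why high-bandwidth memory, not just compute, decides which chips can serve the largest models.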
CPU makers are expected to hold 65% of the AI chip market by 2027, up from 40% in 2020.
This projection includes not just standalone CPUs, but hybrid architectures where CPU logic plays a larger role in AI processing. It also reflects the expansion of AI into edge devices, IoT systems, and embedded applications — places where power and cost matter more than peak performance.
The Changing Landscape of AI Development
The shift in investor focus from Nvidia to Intel and AMD is a reflection of the changing landscape of AI development. As AI becomes increasingly integrated into various industries, the demand for specialized AI chips is growing exponentially.
It’s no longer just about training massive models in remote datacenters. AI is moving into factories, hospitals, and stores. A retail chain might use AI for inventory forecasting on-premise, avoiding cloud costs and latency. A medical imaging system might run diagnostics locally for privacy and speed.
These applications don’t need H100s. They need chips that are efficient, reliable, and easy to integrate. CPUs and hybrid processors fit that role.
Cloud providers are also rethinking their strategies. Instead of building homogeneous GPU clusters, they’re creating heterogeneous environments — mixing GPUs, CPUs, FPGAs, and custom ASICs. This allows them to assign workloads based on cost, performance, and availability.
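A heterogeneous fleet needs a placement policy. The sketch below picks the cheapest available device class that satisfies a workload's latency target — a simplified version of the cost/performance/availability trade-off described above. Device names, prices, and latencies are hypothetical:

```python
# Minimal sketch of cost-aware workload placement across a mixed fleet.
# Device names, hourly rates, and latencies are hypothetical.

DEVICES = [
    # (name, dollars_per_hour, typical_latency_ms, available_units)
    ("gpu-h100",  6.00,  5, 2),
    ("gpu-mi300", 4.50,  6, 4),
    ("cpu-amx",   0.90, 40, 32),
]

def place(workload_latency_ms, fleet=DEVICES):
    """Return the cheapest available device meeting the latency target."""
    candidates = [d for d in fleet
                  if d[2] <= workload_latency_ms and d[3] > 0]
    if not candidates:
        raise RuntimeError("no device satisfies the latency target")
    return min(candidates, key=lambda d: d[1])[0]

print(place(50))   # batch inference tolerates 50 ms -> cheapest CPU class
print(place(10))   # interactive path needs <= 10 ms -> cheaper of the GPUs
```

Real schedulers weigh queue depth, data locality, and spot pricing too, but the core logic — match the workload to the cheapest sufficient hardware — is the same.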
That diversification benefits Intel and AMD. They’re not just selling chips — they’re selling compatibility with existing infrastructure. Most datacenters already run on x86 architecture. Swapping in an AMD Instinct card or an Intel Gaudi module is easier than adopting an entirely new stack.
Nvidia’s CUDA advantage remains, but it’s no longer insurmountable. Open standards like Apache TVM, SYCL, and ONNX are making it easier to port models across platforms. Intel’s oneAPI aims to unify programming across CPUs, GPUs, and FPGAs. While it hasn’t replaced CUDA, it’s giving developers an alternative path.
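ONNX Runtime, for instance, lets an application list execution providers in order of preference and falls back automatically to whatever the host actually has. The toy registry below mimics that pattern in plain Python — the provider names echo ONNX Runtime's real ones, but the code itself is a hypothetical sketch, not the actual API:

```python
# Toy sketch of the execution-provider fallback pattern that makes
# models portable across vendors. Hypothetical code, not the real
# ONNX Runtime API, though the provider names echo its conventions.

INSTALLED = {"OpenVINOExecutionProvider", "CPUExecutionProvider"}  # assumed host

def pick_provider(preferences, installed=INSTALLED):
    """Return the first preferred provider present on this host."""
    for provider in preferences:
        if provider in installed:
            return provider
    raise RuntimeError("no usable execution provider")

# An app can ask for CUDA first yet still run anywhere: on this
# (assumed) Intel host it falls through to the OpenVINO backend.
chain = ["CUDAExecutionProvider",
         "ROCMExecutionProvider",
         "OpenVINOExecutionProvider",
         "CPUExecutionProvider"]
print(pick_provider(chain))
```

The point is the shape of the abstraction: once workloads target a preference list instead of a vendor API, swapping silicon becomes a deployment detail rather than a rewrite.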
What This Means For You
The shift in investor focus from Nvidia to Intel and AMD has significant implications for the AI chip market. As the demand for specialized AI chips grows, companies that are well-positioned to capitalize on this trend will be rewarded.
Developers and builders will need to adapt to this changing landscape, while investors may want to watch the companies at the forefront of AI research and development.
For software teams at startups, this could mean choosing cloud providers that offer AMD or Intel-based instances for cost-sensitive workloads. A pre-seed AI company building a customer support chatbot might save 40% on monthly compute by using CPU-optimized instances instead of GPU-heavy ones. That cash can go toward hiring or product development.
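The 40% figure is illustrative, but the arithmetic behind a decision like that is easy to sanity-check with assumed instance prices:

```python
# Illustrative compute-budget arithmetic. Hourly rates are assumptions,
# not actual cloud list prices.

HOURS_PER_MONTH = 730

gpu_rate = 3.00   # $/hr, GPU instance (assumed)
cpu_rate = 1.80   # $/hr, CPU-optimized instance (assumed)

gpu_monthly = gpu_rate * HOURS_PER_MONTH
cpu_monthly = cpu_rate * HOURS_PER_MONTH
savings = 1 - cpu_monthly / gpu_monthly

print(f"GPU: ${gpu_monthly:,.0f}/mo, CPU: ${cpu_monthly:,.0f}/mo")
print(f"Savings: {savings:.0%}")
```

The caveat, of course, is throughput: the comparison only holds if the CPU instances actually keep up with the workload's latency and volume requirements.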
For enterprise architects, it’s about flexibility. A global bank running fraud detection models might deploy Intel-based servers at regional branches for real-time inference while reserving GPU clusters in the core datacenter for model retraining. This hybrid approach reduces latency and avoids expensive bandwidth usage.
For hardware founders, the opening is clear. With multiple players now supporting alternative AI platforms, there’s room for specialized middleware, debugging tools, and optimization suites that work across vendors. A startup building AI performance monitoring tools could support both CUDA and oneAPI, giving customers vendor neutrality.
The competition also forces faster innovation. When one company releases a new memory technology, others follow. When AMD improves interconnect bandwidth, Intel responds. That pace benefits everyone — even Nvidia, which now has to move faster than before.
What Happens Next?
Nvidia won’t vanish from the AI scene. Its ecosystem, performance, and partnerships are too deep. But it will no longer set the pace unchallenged.
The next 18 months will be critical. Intel plans to launch Gaudi3 in late 2026, promising a 2x leap in training efficiency. AMD’s MI350, expected in 2027, could close the gap in memory capacity — a current weak spot. Micron is preparing HBM4, which doubles bandwidth again while cutting power use.
Adoption will depend on more than specs. It’ll come down to software support, developer experience, and real-world reliability. A chip that benchmarks well but crashes under load won’t last.
There’s also the question of customization. Nvidia works with major cloud providers to fine-tune chips for their needs. Intel and AMD must do the same. Microsoft’s involvement with AMD is a sign that this is already happening.
Another wild card: the rise of open-source AI. As models become smaller and more efficient, they can run on less powerful hardware. If a 7B-parameter model can match the performance of a 70B model with fine-tuning, the need for top-tier GPUs drops. That scenario favors CPU-based inference even more.
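The hardware implication of smaller models is stark: quantized 7B-class weights fit comfortably in ordinary server RAM, no HBM required. The precision math below is standard; the model sizes are nominal parameter counts:

```python
# Why smaller, quantized models favor CPU inference: weight footprint
# at common precisions. Model sizes are nominal parameter counts.

def weight_gb(params_billion, bits):
    """Raw weight storage in GB for a model at a given precision."""
    return params_billion * 1e9 * bits / 8 / 1e9

for params in (70, 7):
    for bits in (16, 8, 4):
        print(f"{params}B @ {bits}-bit: {weight_gb(params, bits):.1f} GB")
```

A 70B model at fp16 needs 140 GB for weights alone; a 7B model at 4-bit needs about 3.5 GB — small enough that a commodity x86 server can hold it entirely in DRAM.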
Finally, investors aren’t just betting on chips — they’re betting on memory, packaging, and interconnects. Micron’s surge shows that the supply chain matters. AI doesn’t run on processors alone. It runs on systems.
The ‘changing of the guard’ isn’t a one-day event. It’s a slow transfer of momentum. Nvidia built its lead over ten years. Intel and AMD won’t overturn it in one quarter. But the trend is clear: the AI chip market is no longer a one-horse race.
A New Era for AI Development?
The changing of the guard in AI is a significant development that could reshape the industry. As Intel and AMD continue to invest in AI research and development, it will be interesting to see how Nvidia responds to the challenge.
Will Nvidia be able to maintain its market share in AI chips, or will it fall behind its competitors? Only time will tell, but one thing is certain: the AI chip market is entering a new era, and it will be exciting to see how it unfolds.
Sources: CNBC Tech, original report

