Today we have a neat one. NVIDIA and Intel both announced a new partnership. NVIDIA is looking to take a $5 billion stake in Intel while Intel builds custom chips on both the desktop and server sides, featuring NVIDIA technology.
NVIDIA to Take $5 Billion Stake in Intel and Intel to Make Custom x86 NVIDIA Chips
First, the companies say that NVIDIA will invest $5 billion in Intel stock at a purchase price of $23.28 per share. As one might imagine, the investment is subject to regulatory approvals, but it has already sent Intel’s share price higher today.
On the data center CPU side, the companies say that “Intel will build NVIDIA-custom x86 CPUs that NVIDIA will integrate into its AI infrastructure platforms and offer to the market.” (Source: NVIDIA-Intel.) That is certainly a net positive for Intel and likely a net negative for AMD. NVIDIA has been pushing ahead with Arm architectures with chips like the NVIDIA Grace, but now it has a clear x86 story.
The companies did not announce how custom the parts are. For example, Intel has “custom” SKUs that are effectively just a different power and frequency from standard SKUs. STH readers may remember that IBM Power chips once had NVLink integrated. If this is just a power and frequency optimization, then it is not a big moat; AMD has already crossed that one in Hopper and Blackwell systems. If it is more of an IBM Power-style NVLink integration, or an NVLink Fusion integration, then it will be a bigger deal.
Also, there will be a question of how these will be sold. If NVIDIA sells the Intel parts directly, even outside of DGX systems, as it does with Grace CPUs, then OEMs and customers will effectively need to use Intel x86 CPUs rather than AMD ones. NVIDIA can also use its pricing power to buy custom chips from Intel and sell them at NVIDIA margins, not Intel or AMD margins.
For PCs, the companies say that “Intel will build and offer to the market x86 system-on-chips (SOCs) that integrate NVIDIA RTX GPU chiplets. These new x86 RTX SOCs will power a wide range of PCs that demand integration of world-class CPUs and GPUs.” (Source: NVIDIA-Intel.) Intel has its own GPU IP on the consumer side, so this is a big deal if it starts to integrate RTX chiplets. The AMD Ryzen AI Max+ 395 is a very good part, so perhaps Intel is looking at a future where it needs to compete in that arena on both the graphics and AI sides. The NVIDIA GB10 found in the NVIDIA Spark is considered a consumer product on the Arm side, not a data center product. As such, we have been expecting a consumer PC GB10 variant for some time, bringing an Arm CPU and an NVIDIA Blackwell GPU to PCs. This gives NVIDIA a path to launch not just an Arm-based CPU, but also an x86 offering.
Updates: On the call, Jensen said that the goal is NVLink integration in the data center so that x86 could be used in future NVL rack scale designs. That is a net negative for Arm.
Jensen also said that something like 150 million notebook units ship per year. NVIDIA sees this as an opportunity to fuse an Intel x86 CPU with NVIDIA GPU IP to go after markets like that. NVIDIA will sell a GPU chiplet to Intel.
Jim Cramer of CNBC asked if that means that Intel Foundry will be used. Jensen said that there was no direct Intel Foundry announcement, but insofar as Intel will be producing these x86 chips even if they are integrating NVIDIA silicon, it will drive volume for Intel.
Jensen said NVIDIA will resell the CPUs as they go into NVL72-class systems. He said it will be like NVIDIA buying Arm Grace CPUs from TSMC and selling those: the CPUs will be bought by NVIDIA, integrated into Superchips, and then sold in big systems.
Jensen said that this Intel partnership is targeting $30-50B per year of market opportunity.
Jensen said that the Arm roadmap will continue, including the architecture in the GB10 going into other products.
Final Words
For Intel, this deal makes a lot of sense, as it gets to offer its x86 CPUs to NVIDIA’s market. Folks in the industry probably remember that Intel pushed NVIDIA out of x86 desktop chipsets back in the day, but a few Intel regime changes later, NVIDIA has come to help Intel in a big way. The big question is whether this means more NVIDIA use of Intel fabs, though in some ways it does answer that question, since these are Intel-made CPUs for NVIDIA GPU systems.
Intel needs the investment, and $5B in stock is frankly not a huge number for NVIDIA these days. An outright acquisition would likely face a lot of regulatory scrutiny, so this is a much lower bar to cross.
For AMD, this is not a great announcement as many have found AMD EPYC chips in NVIDIA AI servers to be a winning combination. On the other hand, almost everyone we talk to says that NVIDIA is much more concerned with competition from AMD than from Intel, so it might also indicate that AMD has a good roadmap.
The companies are set to have a webcast later today, so we will update this article if there are more details.




So does this imply that Intel’s GPU IP is dead, or at the very least may be on its way out? They have made some real progress, and while they aren’t the king of AI, their consumer GPUs are really competitive and the business ones (B50 and probably B60) seem to be the best value. I mean, if they are building x86 CPUs with integrated NVIDIA GPUs, it would make it easier for someone to cut the GPU division despite it making some amazing progress.
Intel must compete with the AMD 395 Max – future AI absolutely WILL be doing inference at the edge. That edge may not be a “desktop” part like the 395, but there is a market for non-cloud-based, private AI inference running on an edge device. The GB10 is not that product; the AMD 395 is the hobbyist version of this. That means big, fast onboard RAM and decent dGPU-scale AI. Those dGPU solutions don’t have the big RAM to hold a 70B or bigger LLM. The 395 is not quite fast enough on the GPU side for the solution and has just enough RAM to get it running locally.
The Arm cores limit the GB10; they shouldn’t, but they do – x86 dominates software development for these heavy lifts. Adding AI extensions to Arm is prohibitive because you need all-new compilers to take advantage of “custom” NVIDIA cores, which is why NVDA tried to buy Arm.
Now they just buy a little bit of Intel to avoid a bigger antitrust block, get their x86 extensions for AI/CUDA code into Linux and Windows easily, and the iGPU (it will be co-packaged; that’s what Intel is really good at right now) will suddenly be capable, easy to code to, and off to the races with on-prem AI inference. Training is still going to be done on big farms; it’s a heavier lift. Unfortunately for AMD, they proved the case for this with the 395, and they have struggled to make a replacement for CUDA.
I don’t know if we will ever get HBM memory on an x86 server chip, but the desire for this will be there.
Intel already builds custom parts for hyperscalers; NVIDIA doesn’t have to buy $5B in Intel stock to get that. There might be some quick x86-specific CPUs out the door like that, but I can’t believe NVIDIA doesn’t want something more profound. Intel always beat its competitors with x86 extensions like MMX and SSE… the coup this time is that NVIDIA is going to do it for them as they flounder.
The Aurora system had a bunch of SPR Xeon chips with HBM.
I don’t recall where I saw them compared, but the Xeon 6 chips with MRDIMMs outperformed those SPR Xeon Max chips by a large margin.
I’m making this comment from a Xeon system with HBM in 2LM mode.
@JayN
If “raw” memory bandwidth numbers are considered, the HBM Xeons still outperform the 6900P-series Xeons by a fairly wide margin; in 1LM mode with the right numactl controls, I’m getting 1.96 TB/s STREAM-triad-like memory bandwidth on a dual-socket system. However, it is really easy to get bad memory performance if you don’t explicitly manage your memory domains correctly.
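As a quick aside on where a figure like 1.96 TB/s comes from (a minimal sketch, not the commenter's actual benchmark code; the element count and timing below are made-up illustrative numbers): the STREAM triad kernel `a[i] = b[i] + s * c[i]` moves three 8-byte doubles per element (two reads plus one write), so the reported bandwidth is simply total bytes moved over elapsed time.

```python
# Sketch of how a STREAM triad bandwidth number is derived.
# Triad is a[i] = b[i] + s * c[i]: two 8-byte reads and one 8-byte
# write per element, i.e. 24 bytes of memory traffic per element.

def triad_bandwidth_tbps(n_elements: int, seconds: float) -> float:
    """Return effective memory bandwidth in TB/s for one triad pass."""
    bytes_moved = 3 * 8 * n_elements  # 2 reads + 1 write of doubles
    return bytes_moved / seconds / 1e12

# Example (hypothetical numbers): 10 billion elements in ~0.1224 s
# works out to roughly 1.96 TB/s.
```

The same arithmetic is why "raw" bandwidth figures can look better than real workloads: STREAM counts only the ideal traffic pattern, not the NUMA-domain misses the commenter warns about.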
@Paul
It doesn’t really matter when Aurora itself has performance problems and bad efficiency (#3 on the Top500 yet #83 on the Green500, while El Capitan is #1 and #25), and was delivered years late and over budget.
What was supposed to be a big win for Intel turned out to be a disaster, one among many recently.
HBM Xeons were significantly more performant in some aspects, yes, but the idea didn’t sell well, so subsequent generations do not offer that solution. It wasn’t even refreshed with Emerald Rapids.
@MDF
While Intel ships custom SKUs to hyperscalers, they aren’t really that different from “public” ones.
What NVIDIA is doing here is integrating NVLink directly, and that is a significant modification which might require a different platform/socket from normal Intel offerings. I don’t know if it means switching the I/O dies on Xeons or actual custom CPU chiplets. They did it before with IBM POWER.
@Kyle
I won’t argue that Xeon Max performance was all it was hyped up to be; it doesn’t get nearly the performance its memory bandwidth numbers imply it should in many real-world workloads. But I’d blame Aurora’s somewhat lackluster showing on the GPU accelerators it used, since they are responsible for the vast majority of the FLOPS computed in the TOP500 benchmark.
The benefit of Aurora having so much trouble is that a glut of Xeon Max chips was dumped onto liquidators and sold at rock-bottom prices to us normal folks.
So let’s get this straight. It appears that nVidia will NOT be using Intel Foundry. It is stated in the updated answers, and it becomes clear from the chiplet wording: no on-die integration, but die-to-die in a single package.
nVidia gets semi-custom x86 CPUs. First, integration with NVLink, and second, it may suggest that nVidia could more easily influence future extensions to the x86 roadmap for its own sake. nVidia earns hugely here, since up to this point it could influence neither the x86 nor the Arm architecture roadmaps. This is big for nVidia.
nVidia also gets really fat, high-performance CPUs for its systems without relying on Intel’s cadence, aspirations, system architecture, etc. It commands what will be in there – at least indirectly. NVLink is a first step at this moment, but we all understand that this simply opens the floodgates.
nVidia also seems to get some exclusivity on the products made. So on one hand it helps Intel sell processors (e.g. in the laptop market), but I would expect that the custom nVidia chips will not be sold on the open market by Intel. The real question: are these chips going to squeeze Intel’s margins and enlarge nVidia’s? I would bet that Intel will not be charging the prices it would want, so nVidia gets a good price, and as soon as the sticker on the chip says nVidia, it will carry the outrageous margins that nVidia currently enjoys.
Intel, in my opinion, is definitely writing off its GPU department. Discrete Intel GPUs are not sustainable without the integrated ones, and who on earth is going to buy a laptop with an Intel CPU/GPU when he/she can buy one with an Intel CPU/nVidia GPU? So to my eyes, this marks the gradual end of Intel’s GPU aspirations. A very cheap way to throw away all the billions already spent. Intel is desperate to increase its sales, and this indeed will help counter the AMD offensive. Intel CPU + nVidia GPU may well become a better proposition than an AMD SoC. But… with Intel still fabricating at TSMC (hope this changes as fast as possible) AND paying for the nVidia chiplet within the package, at what margin will it be competitive with AMD’s own offerings? This may increase sales but squeeze profit per sale. nVidia wins big here, Intel not so much. The real question for coming to a positive or negative conclusion here is whether there is a chance that nVidia will manufacture those chiplets at Intel Foundry. From the current announcement, it seems not, but maybe there is a future promise.
nVidia this week also became the leader in Ethernet switching in the data center through its Mellanox acquisition. NVLink is specifically mentioned in the current announcement; the real question is whether, after NVLink, we will eventually see other networking components integrated into those custom chips. Could this also hurt Intel’s networking business?
Overall, I see this as a clear win for nVidia, but a very moderate one for Intel – if not negative. I think Intel gets some money (which, for its size and expenses, is not really that much – and it is stock that is being bought) and some potential increased sales. But these sales come at a cost, and it is early to cheer or get disappointed. I would be more pessimistic than optimistic. The Intel GPU folks should be enraged. As one commenter mentioned, this comes at a point when they started to really have something more competitive. With this announcement, I would not touch an Intel GPU product, not even from miles away.
> It is stated in the updated answers, and it becomes clear from the chiplet wording: no on-die integration, but die-to-die in a single package.
Meteor Lake is using this implementation, isn’t it?
Intel says that it is tile-to-tile; however, that is die-to-die in old terms.
I doubt Intel will cancel Arc anytime soon because they won’t want to rely on nVidia for ALL graphics. If they did, it would also raise immediate antitrust concerns.
That said, this can’t be good for the future of Arc. At the very least, it will reduce the political prestige of the project at Intel HQ. At worst, they will explicitly lower the performance targets for next-gen parts, because it would be spending extra money just to compete with a lucrative business partner.