Qualcomm made its splash in the AI space today with the announcement of two new designs and a 200MW deal. One of the designs is the Qualcomm AI200 with 768GB of LPDDR memory per card. The other is a future AI250 design. There is also mention of a new processor in the mix, though details are light.
Qualcomm Announces New Integrated AI Racks with 768GB Cards and a 200MW AI Deal
At the heart of the announcement are two AI designs that update the Qualcomm Cloud AI 100 inference card lines we have seen for years. The AI200 is said to use LPDDR memory to hit high capacity points for AI inference workloads. The AI250 uses a future memory technology. Here is Qualcomm’s blurb about the memory on the AI250:
The Qualcomm AI250 solution will debut with an innovative memory architecture based on near-memory computing, providing a generational leap in efficiency and performance for AI inference workloads by delivering greater than 10x higher effective memory bandwidth and much lower power consumption. (Source: Qualcomm)
Both of the new designs will come as integrated rack solutions. These will also feature PCIe for scale-up and Ethernet for scale-out. Ethernet has become the clear winner in the scale-out space. PCIe is interesting for the scale-up side, but we will wait to see what that entails. The racks will target 160kW of power consumption per rack using direct liquid cooling, putting them in the ballpark of an NVIDIA NVL72 rack. Qualcomm also said that these will support confidential computing, which we plan to have a piece on at STH soon, albeit not using this Qualcomm setup.

The 200MW deal with HUMAIN will start seeing deployments in 2026. Today's announcements did not cover using a Qualcomm processor, but earlier in 2025 HUMAIN and Qualcomm signed an MOU around using Qualcomm data center CPUs.

Also at Computex 2025, we had a teaser for a Qualcomm data center CPU and AI chips. What was not mentioned was the inclusion of NVLink Fusion, despite that being another announcement from Computex.
Final Words
It will be interesting to see how aggressively Qualcomm pushes these. Selling racks of AI compute is an important milestone. Getting a deal signed to sell 200MW of compute starting in 2026 means that Qualcomm likely now has a sizable data center business brewing. From a software perspective, Qualcomm needs to get more chips out there so it can also work on the software side needed to compete in this type of large-scale infrastructure. There is also more to being a big player in larger-scale AI clusters than being able to put together cards and servers with PCIe and Ethernet into racks. At the same time, this is at a much smaller scale than recent NVIDIA and AMD announcements. We will see how this develops, but starting off with a few billion in potential revenue from a signed deal is usually a good place to be.
