We now have the launch of 3rd generation Intel Xeon Scalable servers, codenamed “Cooper Lake.” The new chips utilize a new socket (Socket P+) that supports up to 6x UPI links per CPU (up from 3), which means 4-socket systems get roughly the same inter-socket bandwidth per CPU that the 2nd Gen Intel Xeon Scalable Refresh SKUs offer in 2-socket systems. Although core counts remain at 28 per socket, the maximum TDP rises from 205W to 250W with the move to Socket P+. The new chips support the Intel Optane DC Persistent Memory Module 200 series, the company’s second-generation Optane persistent memory. A big catch is that Cedar Island supports it only in App Direct mode. Memory is still limited to 4.5TB per socket in 6-channel configurations, but speeds on top SKUs can reach DDR4-3200 in 1DPC configurations. Perhaps the other big new feature is support for bfloat16 operations, which brings enhanced AI training capabilities to Xeon CPUs.
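To give a sense of why bfloat16 is attractive for AI training, here is a minimal Python sketch (our own illustration, not Intel code) that emulates bfloat16 by truncating a float32. bfloat16 keeps float32’s sign bit and full 8-bit exponent but only 7 mantissa bits, so it preserves float32’s dynamic range while halving storage and bandwidth; conversion is effectively just dropping the low 16 bits.

```python
import struct

def to_bfloat16(x: float) -> float:
    """Emulate bfloat16 by zeroing the low 16 bits of a float32.

    Real hardware may round rather than truncate; this sketch truncates
    for simplicity to show the precision/range trade-off.
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

print(to_bfloat16(3.14159265))  # ~2-3 significant decimal digits survive
print(to_bfloat16(1.0e38))     # huge values still representable, unlike fp16
```

The key design point is that the exponent field is unchanged from float32, so values that would overflow IEEE half-precision (fp16) remain representable, which is why bfloat16 tends to train deep networks without the loss-scaling tricks fp16 requires.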
That is a ton to go into. We are going to have breakout articles on the Optane DCPMM 200 series as well as the Lewisburg Refresh PCHs, with plenty of juicy details, so check them out. In this article, we are going to focus on the chips and the platform features they provide. We are also going to discuss what is next for Intel so you can plan your IT purchasing accordingly.
The Cooper Lake Xeon Saga
Before we get too far here, the Cooper Lake we are seeing today is not the mainstream Cooper Lake. That part would have used a multi-die approach to put more cores in the next-gen Whitley socket, a socket it would have shared with the Ice Lake Xeons arriving, as the word on the street goes, later in 2020 (XCC dies) and in 2021 (HCC dies). Earlier in 2020, we broke the news that Intel had rationalized the Cooper Lake line, removing those mainstream parts. This has been a long journey, but with schedule slips, Cooper Lake did not make 2019 as Intel showed on its Q3 2018 roadmap, so the mainstream part was rationalized.
Effectively, the strategy was to have Cooper Lake be the leading Xeon CPU for Whitley, offering higher core counts to combat AMD EPYC. Intel could then offer what is launching today as a 4-8 socket platform for scale-up customers and Facebook. On the topic of Facebook, while most of our readers will experience Cooper Lake in 4-socket and larger platforms, Facebook has the OCP Sonora Pass dual-socket system for mainstream compute nodes.
This is also being used in a single-socket configuration in Yosemite V3’s Delta Lake server platform.
Using Cooper Lake allows Facebook to introduce bfloat16 systems both in mainstream compute as well as for its single NUMA node front-end platforms. This is likely the reason we do not have a Cascade Lake-D. We go into details on these platforms and the implications in Facebook Introduces Next-Gen Cooper Lake Intel Xeon Platforms.
Since Ice Lake Xeons / Whitley will be a 1 and 2-socket only platform, Cooper Lake will be the four-socket and larger option until the Sapphire Rapids Xeons launch with an enormous technology refresh in late 2021. It also positions Cooper Lake to have the highest core counts and memory footprint per physical server, with a big caveat.
The big (XCC die) Ice Lake Xeons are expected to top out at 38 cores. This may change, but that means 76 cores and 152 threads per server for Ice Lake by the end of the year, along with up to 6TB of memory (8 channels with 256GB DDR4 DIMMs + 512GB Intel Optane DCPMM 200 series “Barlow Pass” modules) per socket for 12TB of memory per 2-socket server. In contrast, Cooper Lake 4P buyers get 28 cores per socket across 4 sockets for 112 cores and 224 threads per server. One also gets 4.5TB of memory per socket for a total of 18TB of memory per 4P server. That per-socket capacity is the same as the current-generation Cascade Lake CPUs.
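The back-of-the-envelope math above can be checked with a quick sketch. These are the figures quoted in this article; note the 38-core Ice Lake XCC count is an expectation, not a confirmed SKU.

```python
# Sanity check of the per-server core, thread, and memory figures.
ICL_CORES, ICL_SOCKETS = 38, 2   # Ice Lake expected top XCC die, 2P max
CPL_CORES, CPL_SOCKETS = 28, 4   # Cooper Lake in a 4P configuration

icl_threads = ICL_CORES * ICL_SOCKETS * 2   # Hyper-Threading: 2 threads/core
cpl_threads = CPL_CORES * CPL_SOCKETS * 2

# Ice Lake memory: 8 channels x (256GB DDR4 DIMM + 512GB Optane DCPMM 200)
icl_mem_tb = 8 * (256 + 512) / 1024         # per socket, in TB

print(f"Ice Lake 2P:    {ICL_CORES * ICL_SOCKETS} cores / {icl_threads} threads, "
      f"{icl_mem_tb * ICL_SOCKETS:.0f}TB memory")
print(f"Cooper Lake 4P: {CPL_CORES * CPL_SOCKETS} cores / {cpl_threads} threads, "
      f"{4.5 * CPL_SOCKETS:.0f}TB memory")
```

This is how the 4P Cooper Lake box ends up ahead on raw cores (112 vs. 76) and total memory (18TB vs. 12TB) despite the per-socket core deficit.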
While Intel is expected to drop the “high-memory capacity” SKUs and associated premiums with Ice Lake, the current Cooper Lake platform still has Cascade Lake Xeon-like high-memory SKUs that carry premiums.
As a result of all of this, a 4P Cascade Lake Xeon server will be obsolete once Ice Lake launches, except for fairly specific use cases. A 4P Cooper Lake Xeon server, in contrast, will still have a place in the market after Ice Lake launches and until we get to the Sapphire Rapids Xeons targeted for 2021.
Perhaps the best way to think about the Cooper Lake platform as we go through this article is that it is a refreshed Cascade Lake line, rather than being more similar to the Ice Lake Xeon line. This is another 14nm part, and it shares a lot of features with Cascade Lake, but Intel did a fairly large tweak to the underlying hardware.
Going forward, let us be clear: Intel has discussed the Ice Lake cores, and there are going to be major gains. Intel will also have the Whitley platform with higher TDPs, 8-channel memory, PCIe Gen4, new instructions, and a host of other upgrades. The reason we are not seeing vendors push for mainstream 2P adoption of this version of Cooper Lake is that Intel committed to having Ice Lake Xeons (even if just high-performance variants) out in 2020. Most vendors have their Whitley platforms ready because of the Q3 2018 roadmap that had Cooper Lake in that platform, so there was likely little appetite to launch a Cooper Lake 1P/2P platform that would be completely eclipsed by Ice Lake within two quarters. It takes most large vendors more than two quarters to roll out a line of servers for a new platform.
As a quick note on the above: technically, Intel released non-R CPUs such as the Intel Xeon Gold 6250 that were 3x UPI and 4-socket capable during the Refresh window, but we are treating them as updates to the original Cascade Lake line rather than part of the Refresh, since that view aligns with features and pricing rather than with timing.
We are going to talk about what all this means for server buyers in 2020 in a subsequent piece. It is important to understand the context of the “Cooper Lake Saga” before even scratching the details. Without this context, the launch may not make sense. With the nuanced positioning of Cascade Lake Refresh, Cooper Lake, Ice Lake, and Sapphire Rapids, we get the full story.
With the Saga recounted, let us move on to the new platform.