At SC24, we saw the Kaytus KR2190V3, which was a really neat design, so we wanted to show it. Our first take on the server was very different from what it actually was once we looked closer. We also saw the K22V2, which is similar to the server running a major gaming service.
Kaytus KR2190V3 at SC24
Here is the server at SC24. It looks like many dual socket Intel Xeon servers, until you look a bit closer.
If you look at this casually, you might think it is a dual socket 2U server, but if you count the PCIe lanes, you can see what is going on.
The overhead shot shows us a clearer picture. These are two single socket server motherboards with very tight tolerances, so much so that they look like a single motherboard with an M.2 boot drive between the DIMM slots.
In the rear, we have sets of expansion slots, I/O, and more. While there are redundant power supplies, the power distribution board, which also has connectivity for GPU power, is a single board.
This is a really interesting design that is very different from what one might initially assume when looking at the server.
Kaytus K22V2 at SC24
Sitting next to that server we saw an AMD 2U 2-node.
This one used a more traditional sled design, which was a stark contrast to the server sitting next to it.
We can see that this is an AMD EPYC SP5 design.
Rumor has it that this is a platform used by a popular gaming company that supports user-created and shared games.
Final Words
We see a lot of servers that look like standard 2U dual socket designs, and it makes sense why: in the industry, single socket servers keep gaining market share. What is quite unique here is that we have a modern Intel Xeon server and an AMD EPYC server sitting next to each other with two very different designs. It just seemed neat, so we wanted to cover this.
Pretty cool. I wonder if the separated single-socket CPU and I/O boards approach will be adapted to low-profile and edge server layouts as well. It looks quite well suited to that application. And maybe larger high-capacity multinode mainframe-ish systems with 8 or more nodes can be built this way, too. I could see this also working in a scaled up fashion with 48 DIMMs per node, the nodes going to a midplane board and stacked vertically one above the other. Yet another potential solution to the 48-DIMM problem with upcoming server designs.
Oops… I meant 24 DIMMs per node. Minor error, though 48 could fit well too.
That is a lot of MCIO on that Intel board. Of course, there are fewer physical places to put cards than total I/O across both boards, but the flexibility is definitely interesting. I worry a bit about the reliability with all those "jumper" cables, but I have never played with MCIO specifically; presumably there are sufficient signal tolerances.
Kaytus = Chinese Inspur