Gigabyte H174-A80-LAS1 Liquid Cooled 4x 500W CPU Server
The Gigabyte H174-A80-LAS1 is a 1U 2-node server built around Intel Xeon 6900 series CPUs, and it is certainly a different design.

The front has two 3.2kW PSUs.

Each node also gets a front management NIC and two 2.5″ NVMe storage bays.

With liquid cooling, this server can handle up to four 128-core 500W parts, for a total of 512 performance cores and 1024 threads, half in each node.
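To put those figures in context, here is a minimal Python sketch tallying the per-node and per-chassis core, thread, and CPU power totals implied by the numbers above (two CPUs per node, 128 cores and 500W per CPU, two threads per core). The variable names are ours, purely for illustration.

```python
# Illustrative arithmetic based on the figures quoted above.
CPUS_PER_NODE = 2        # two Xeon 6900 sockets per node
NODES_PER_CHASSIS = 2    # 1U 2-node design
CORES_PER_CPU = 128      # top-bin 128-core parts
THREADS_PER_CORE = 2     # two threads per performance core
CPU_TDP_W = 500          # 500W per CPU

cores_per_node = CPUS_PER_NODE * CORES_PER_CPU            # 256
threads_per_node = cores_per_node * THREADS_PER_CORE      # 512
cpu_power_per_node_w = CPUS_PER_NODE * CPU_TDP_W          # 1000

print(f"Per node:    {cores_per_node} cores / {threads_per_node} threads, "
      f"{cpu_power_per_node_w}W of CPU TDP")
print(f"Per chassis: {cores_per_node * NODES_PER_CHASSIS} cores / "
      f"{threads_per_node * NODES_PER_CHASSIS} threads, "
      f"{cpu_power_per_node_w * NODES_PER_CHASSIS}W of CPU TDP")
```

The 2kW of chassis-level CPU power alone shows why the two front 3.2kW PSUs and liquid cooling are needed here.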

A big challenge with 500W parts in half-width chassis is that the second CPU often ingests air already warmed by the first. Liquid cooling from CoolIT fixes this.

Even with the liquid cooling, just fitting twelve DDR5 RDIMM slots per CPU is a challenge, to the point that some slots need to be offset in the chassis to accommodate other components.

Each node also gets M.2 storage as well as two PCIe card slots.

This is something really different: we see many 1U single-node servers and 2U 4-node servers, but the 1U 2-node layout is less common.
Gigabyte B343-X40-AAS1 Multi-Node
One other fun one is this 3U 10-node server. This is not a traditional blade server since the networking is external to the chassis.

Each node supports one Intel Xeon E-2400 or Xeon 6300P series processor, four DIMMs, and an M.2 (PCIe Gen3 x2) for boot. There is also a removable I/O card and a PCIe Gen5 x16 slot.

You might wonder what that metal box is on the end of the node. Inside, there is room for dual 2.5″ SATA or PCIe Gen4 NVMe SSDs.

If you just need lots of nodes, this is a pretty neat design and the internal 2.5″ storage was not something we expected.
Final Words
Overall, there was a lot to see at Computex 2025. These were a few of the servers we saw that offered something beyond traditional compute platforms. We have a number of server reviews in the works, but it was fun to see several new and unique designs at Computex this year, ranging from liquid-cooled servers and AI servers to high-density servers and even CXL servers.
Apropos of the B343-X40-AAS1: how are blade servers doing these days? In principle it seems like they could be a lot more capable than they used to be (or a lot less dependent on proprietary black-box glue logic to be as capable; you can’t necessarily just plug a multi-host NIC or one of the more complicated PCIe topologies into a random motherboard and expect it to work, but those are now standards that you can make work), yet I can’t remember the last time I heard one mentioned as an object of any interest.
Are they still out there, plugging away reliably in some niche? Or did the economics of being wholly at the vendor’s mercy for network modules and theoretically more elegant chassis-level management just not survive the squeeze between ‘just use a big VM host’ (if you want a bunch of hosts that look like they have fast networking and flexible resource allocation between them) and ‘commodity 2U4N or other high density’ (slightly less elegant, but cheaper and faster moving if you just want a bunch of nodes)?