ASUS RS720-E12-RS8G Block Diagram and Topology
Here is the block diagram for the system. We can see the relatively even distribution of PCIe lanes between the two CPUs, with the NVMe drive bays attached to CPU1.

Taking a look at the test configuration with two NVIDIA H100 NVL GPUs, two Intel Xeon 6740P CPUs, and 1TB of memory, here is the topology.

This is very similar to what we see on other 2U Intel Xeon 6 servers.
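For those who want to generate a similar device-to-socket mapping on their own hardware, here is a minimal sketch that reads Linux sysfs to show which NUMA node (and therefore which CPU) each NVMe drive and GPU hangs off of. It assumes a Linux host, and the PCI class codes checked are just the common ones for illustration rather than an exhaustive list.

```python
#!/usr/bin/env python3
# Minimal sketch: map PCIe devices to NUMA nodes (CPU sockets) via Linux sysfs.
# Assumes a Linux host; class 0x0108 (NVMe) and 0x0302 (3D controller, e.g. H100)
# are the usual codes, but this is illustrative rather than exhaustive.
from pathlib import Path

CLASSES = {"0x0108": "NVMe", "0x0302": "GPU/3D"}

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    cls = (dev / "class").read_text().strip()[:6]  # e.g. "0x010802" -> "0x0108"
    label = CLASSES.get(cls)
    if not label:
        continue
    numa = (dev / "numa_node").read_text().strip()  # -1 if the kernel has no NUMA info
    print(f"{dev.name}  {label:7s}  NUMA node {numa}")
```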
ASUS RS720-E12-RS8G Management
In this system, we have the ASMB12-iKVM solution implemented on an OCP DC-SCM form factor module. That puts the ASPEED AST2600 BMC on a removable board at the rear of the chassis instead of on the motherboard.

We have looked at several ASUS ASMB generations before, and ASUS uses the AMI MegaRAC SP-X design with features like HTML5 iKVM functionality. We are not going to go into it deeply here since this is fairly standard.
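Because this is a standard MegaRAC SP-X stack, the usual out-of-band management paths apply. As a hedged example, assuming the BMC exposes the typical Redfish service (which SP-X builds generally do), here is a minimal sketch that pulls the system's power state and chassis thermal readings; the BMC address and credentials are placeholders, and exact resource names can vary slightly between firmware builds.

```python
#!/usr/bin/env python3
# Sketch: query a MegaRAC-style BMC over Redfish. The BMC address and
# credentials below are placeholders; endpoints follow the DMTF Redfish
# standard, though exact member names vary by firmware build.
import requests

BMC = "https://192.0.2.10"      # placeholder BMC address
AUTH = ("admin", "password")    # placeholder credentials
VERIFY = False                  # BMCs usually ship with self-signed certificates

def get(path):
    r = requests.get(f"{BMC}{path}", auth=AUTH, verify=VERIFY, timeout=10)
    r.raise_for_status()
    return r.json()

# Enumerate systems and print model / power state
for member in get("/redfish/v1/Systems")["Members"]:
    system = get(member["@odata.id"])
    print(system.get("Model"), "-", system.get("PowerState"))

# Pull chassis thermal readings (temperatures reported by the BMC)
for member in get("/redfish/v1/Chassis")["Members"]:
    thermal = get(member["@odata.id"] + "/Thermal")
    for t in thermal.get("Temperatures", []):
        print(t.get("Name"), t.get("ReadingCelsius"), "C")
```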
ASUS RS720-E12-RS8G Performance
For the CPUs, we had two Intel Xeon 6740P processors, which are 48-core P-core parts.

With these, the big question is how those coolers perform. Are they able to keep these CPUs cool, and therefore running at full clock speeds?

Performance came in very close to our baseline reference system numbers, so the answer seems to be that the server is doing a great job keeping the CPUs at full clocks.
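If you want to verify sustained clock speeds on your own system, a simple approach is to sample per-core frequencies while a benchmark runs in another terminal. Here is a minimal sketch against the Linux cpufreq sysfs interface; it assumes scaling_cur_freq is exposed, which it is with the usual intel_pstate driver.

```python
#!/usr/bin/env python3
# Sketch: sample per-core frequencies from Linux cpufreq sysfs while a load
# is running elsewhere. Assumes scaling_cur_freq is exposed (intel_pstate
# and acpi-cpufreq both provide it).
import glob
import statistics
import time

def sample_mhz():
    freqs = []
    for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq"):
        with open(path) as f:
            freqs.append(int(f.read()) / 1000)  # kHz -> MHz
    return freqs

for _ in range(10):  # ten one-second samples
    freqs = sample_mhz()
    print(f"cores: {len(freqs)}  min: {min(freqs):.0f} MHz  "
          f"mean: {statistics.mean(freqs):.0f} MHz  max: {max(freqs):.0f} MHz")
    time.sleep(1)
```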
We also had two NVIDIA H100 NVL PCIe GPUs installed.

The NVIDIA H100 NVL is the HBM3-equipped PCIe GPU that NVIDIA targets at systems running AI acceleration rather than graphics-related tasks. We again wanted to test the GPUs in this system to see how they perform and whether the cooling for the GPUs is effective.

As we can see here, the NVIDIA H100 NVL GPUs with their 400W TDP limits are performing as we would expect. That is important. Some 2U servers can run the higher-TDP PCIe GPUs, but only at lower power levels like 300W. Getting the full 400W GPUs cooled well here is great.
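A quick way to confirm the enforced power limit and actual draw on the cards is to ask the driver directly. Here is a minimal sketch using the NVIDIA management library Python bindings (the nvidia-ml-py package); nothing in it is specific to this server.

```python
#!/usr/bin/env python3
# Sketch: report each GPU's enforced power limit, current draw, and temperature
# via NVML (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        h = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(h)
        if isinstance(name, bytes):  # older bindings return bytes
            name = name.decode()
        limit = pynvml.nvmlDeviceGetEnforcedPowerLimit(h) / 1000  # mW -> W
        draw = pynvml.nvmlDeviceGetPowerUsage(h) / 1000           # mW -> W
        temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
        print(f"GPU{i} {name}: limit {limit:.0f} W, draw {draw:.0f} W, {temp} C")
finally:
    pynvml.nvmlShutdown()
```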
Next, let us get to the power consumption.
Seeing the design of the CPU coolers makes me wonder how well the memory modules sitting right behind the extended portions will tolerate the extra heat being blown over them long-term. Those parts don't run cool, and the RAM is already running at speeds that make it hot enough to need its own cooling. Anyway, neat-looking dual LGA-4710 system.
The design language of the PCBs is the classic ASUS mainstream one, down to the fonts used and, of course, the black solder mask. Even the RAM slots follow it: half are black and the other half are blue. There are a lot of matching blue components, from jumpers to SSD latches, which contrast with the black components.
It looks like they actually care about how the server looks both externally and internally; it's quite refreshing.
With Dell using DC-MHS, this is almost the same, except it's got another OCP NIC instead of the BOSS, ASUS is using 2.5″ instead of E3.S, and they've got standard IPMI instead of iDRAC. If ASUS can build something that's almost the same as Dell now, then what's the point of paying more for Dell's "engineering"? I don't understand why the motherboards are almost the same.
3.2kW PSUs… wow. And I thought the 2kW PSUs in the GPU server I'm building for my homelab were a lot.