Supermicro AS-1123US-TR4 Server Review 1U Dual AMD EPYC


Supermicro AS-1123US-TR4 Power Consumption

Although the dual AMD EPYC designs can offer a lot of performance, they do so while sipping surprisingly little power. We swapped the system into rackmount mode and put it in our data center using 208V 30A Schneider Electric APC PDUs. We measured power using dual AMD EPYC 7601 CPUs and saw some great figures:

  • Idle: 110W
  • STH 70% Load: 383W
  • 100% Load AVX2 (GROMACS): 479W
  • Maximum: 498W
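To put the figures above in perspective, here is a quick per-core breakdown. The wattages and the 64-core count are from this review; the per-core derivation is our own illustrative arithmetic, not a vendor specification:

```python
# Sanity arithmetic on the measured figures above. All wattages and the
# core count come from the review; the per-core split is our derivation.
CORES = 64  # 2x AMD EPYC 7601, 32 cores each


def watts_per_core(total_watts: float, cores: int = CORES) -> float:
    """Average platform watts attributable to each core at a given load."""
    return round(total_watts / cores, 1)


idle = watts_per_core(110)   # ~1.7 W/core at idle
avx2 = watts_per_core(479)   # ~7.5 W/core under AVX2 (GROMACS)
peak = watts_per_core(498)   # ~7.8 W/core at maximum
```

Well under 8W of platform power per core at peak is a useful mental benchmark when comparing against other dual-socket configurations.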

Compared to two dual-socket Intel Xeon E5-2698 V3 systems, this single system draws less power. This is certainly a case where one can consolidate 2:1 and come out ahead on performance, capacity, and operating costs. Beyond that, these figures compare well to the dozens of dual Intel Xeon Scalable configurations we have tested.

Note that these results were taken using a 208V Schneider Electric / APC PDU at 17.7C and 71% RH. Our testing window shown here had a +/- 0.3C and +/- 2% RH variance. The ambient temperature and humidity factors are important as they greatly influence server power consumption, especially in densely populated 1U servers such as this. We do not see many of these servers destined for 110V or 120V racks, as common 15A 120V circuits do not have enough power for even a quarter cabinet's worth of these servers.
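The circuit math behind that 120V observation is straightforward. A sketch, assuming the standard US NEC 80% continuous-load derating and the 498W maximum draw measured in this review:

```python
# Back-of-the-envelope branch-circuit math. Assumes the US NEC 80%
# continuous-load derating; per-server draw uses the 498W maximum
# measured in this review.
SERVER_MAX_W = 498


def usable_watts(volts: float, amps: float, derate: float = 0.8) -> float:
    """Continuous power available on a branch circuit after derating."""
    return volts * amps * derate


def servers_per_circuit(volts: float, amps: float) -> int:
    """Whole servers a circuit can carry at the measured maximum draw."""
    return int(usable_watts(volts, amps) // SERVER_MAX_W)


# Common 120V 15A circuit: 1440W usable, or only 2 of these servers --
# well short of a quarter cabinet (roughly 10x 1U).
# The 208V 30A PDUs used in our testing: 4992W usable, or about 10 servers.
```

This is why 208V 30A (or higher) distribution is the natural home for dense 1U dual-socket nodes like this one.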

Final Words

Overall, the Supermicro AS-1123US-TR4 is an excellent machine. If you are looking for a 1U server with the maximum number of CPU cores and RAM in a single node, this is the platform that will get you there. Having 64 x86 cores across two sockets with up to 4TB of RAM is awesome. Using the custom motherboard, Supermicro offers its customers and its VARs/channel partners the ability to customize the platform for specific applications.

There are a few items that fall into the category of things we understand, and that are completely functional, but that we wish Supermicro would change. We wish the fan design incorporated easily hot-swappable fans. Fans rarely fail, and the system is designed with redundancy, so there will be few if any cases where this matters. The fan/shroud design is functional, but we still prefer some of the higher-end setups Supermicro uses in its 2U servers. We also wish the default front panel was 10x 2.5″ NVMe. Given the choice these days, it makes sense to buy NVMe drives, and the AMD EPYC platform can handle 10x NVMe drives thanks to the plethora of PCIe lanes on the platform. Most likely that desire will be addressed by another model down the road, so we understand Supermicro's product decisions.

This server performed extremely well in our testing. Our total setup time for the base system, including adding a 100GbE NIC, 4x SSDs, 2x EPYC CPUs, and 16x DDR4 DIMMs, was around 7 minutes from having the system on the table at the data center to being installed in the rack, which means this system is extremely easy to service. After reviewing this unit, we would have no hesitation installing it in the STH production hosting cluster.

For those reading this review, either as customers or as VARs evaluating EPYC, and wondering how this experience compares to Intel Xeon Scalable: at this point, it is nearly identical. EPYC is supported out of the box by every major Linux distribution, the latest VMware releases, and Microsoft Windows Server 2016. Management of the Supermicro AS-1123US-TR4 is identical to the Intel Xeon experience. These systems can certainly be deployed in greenfield applications today without concern.

Here is a link to the product page.

Patrick has been running STH since 2009 and covers a wide variety of SME, SMB, and SOHO IT topics. Patrick is a consultant in the technology industry and has worked with numerous large hardware and storage vendors in Silicon Valley. The goal of STH is simply to help users find information about server, storage, and networking building blocks. If you have any helpful information, please feel free to post on the forums.


  1. A criticism of STH reviews is that they don’t put price in. These barebones are like $1600 and available from channel partners. For a barebones that is about right but if you’re loading with RAM, CPUs, a 25/100Gb NIC, and 10 drives the $1600 is a small cost overall.

    When are we gonna see 10x nvme? That’s really the sweet spot for EPYC.

    Good lookin’ system though and really thorough review. You guys have kicked it up a notch on the server reviews.

  2. Too bad this isn’t ten NVMe like the Dell R6415. It looks really nice and since we’re doing NVMe-oF attached storage these days with less local it’s fine for us. Something to talk to our reseller about. Price is really reasonable here Tyrone.

  3. KILLS me that they didn’t do 4 NVMe.

    Why did they do a x8 internal on the riser not an x16? They’ve got risers with that. Since 8 SAS3 isn’t going to do us much good an x16 internal slot filled with 4 M.2’s I’d say is ideal.

    I’m with these guys. I want one. If you could get the 7401’s at 7401P price I’d have a stack of these already.

  4. Have you also noticed that you have to disable “above 4G decoding” in the BIOS in order for the 100G ConnectX-4 to initialize properly?

    Also, it is worth mentioning that if you fully populate the DIMM slots, memory frequency goes down to 2133MHz.

  5. For those looking for NVMe, the SMC site shows two versions of the AS-1123US.
    AS-1123US-TR4 = 10 x 2.5 SATA + 2 x NVMe
    AS-1123US-TN10RT = 10 x U.2 NVMe

    Really great review STH. Looking forward to getting a few of these.

  6. @Jure: please check the manual on page 34. Pick the right DIMMs and in most situations you will have 2666MHz.

  7. @tyrone saddleman

    Yes it is rather unfortunate that NAND and RAM aren’t going to be cheaper any time soon. Which means the Intel / AMD cost is becoming a much smaller % of overall TCO. Although right now AMD is selling as much EPYC as it can.
