Gigabyte B343-C40-AAJ1 Review: 10-Node AMD EPYC 4005 Goes High-Density


STH Server Spider: Gigabyte B343-C40-AAJ1

In the second half of 2018, we introduced the STH Server Spider as a quick reference for where a server system's aptitude lies. The goal is to give a quick visual depiction of the types of parameters a server is targeted at.

STH Server Spider Gigabyte B343 C40 AAJ1

This is a really interesting server. On the one hand, we get a lot of node density. On the other, we do not get the maximum density per U, since 1U dual-socket EPYC 9005 servers are technically denser. That means we do not get the most networking, storage, cores, or memory per U. What we get instead is a lot of nodes.

Some might compare this to traditional blade servers. Perhaps the biggest difference is that it does not have the built-in network switches usually found in traditional blade chassis. Many of those chassis, however, have pass-through modules, so this system behaves more like those types of blade deployments.

Key Lessons Learned

Traditional blades are not really the market here, however. We did a Gigabyte R113-C10 Review earlier this year, and in many ways that review and this one are related. The R113-C10 is a 1U server designed for the dedicated hosting market.

What this server does is pack ten of these nodes into a single 3U space. That may seem insignificant, but it matters.

Gigabyte B343 C40 AAJ1 Node 2

Just as a recap, if you had ten 1U servers with redundant power supplies, that would take:

  • 10U of rack space
  • 20x power supplies
  • 20x PDU ports
  • 10x management ports on a management switch
  • 10x high-speed NICs
  • 10x high-speed NIC ports on switches

While that may not seem like a lot, compare it to this solution (a quick scaling sketch follows the list):

  • 3U of rack space
  • 4x power supplies
  • 4x PDU ports
  • 1x management port
  • 3x high-speed NICs
  • 3-6 high-speed NIC ports on switches
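
To put that in rack-scale terms, here is a minimal sketch of ours (not anything from Gigabyte) that scales the two lists above. The per-10-node figures come straight from the lists; the 130-node target is simply our assumption of roughly what thirteen of these 3U chassis would hold in a 42U rack.

```python
# Minimal sketch: scale the article's per-10-node figures to a hypothetical
# rack's worth of nodes. The 42U rack / 130-node target is an assumption for
# illustration, not a configuration from the review.

def footprint(nodes, per_group):
    """Scale a per-group resource dict to the requested node count."""
    groups = -(-nodes // per_group["nodes"])  # ceiling division
    return {k: v * groups for k, v in per_group.items() if k != "nodes"}

# Ten redundant 1U servers, per the recap list above.
ten_1u = {
    "nodes": 10,
    "rack_u": 10,
    "power_supplies": 20,
    "pdu_ports": 20,
    "mgmt_ports": 10,
    "high_speed_nics": 10,
}

# One 3U B343-C40-AAJ1 chassis with ten nodes, per the second list
# (switch-port count for the multi-host NICs ranges from 3-6 and is omitted).
b343_3u = {
    "nodes": 10,
    "rack_u": 3,
    "power_supplies": 4,
    "pdu_ports": 4,
    "mgmt_ports": 1,
    "high_speed_nics": 3,
}

if __name__ == "__main__":
    nodes = 130  # assumed: ~13 chassis, roughly a 42U rack's worth
    for name, cfg in (("1U servers", ten_1u), ("B343 3U chassis", b343_3u)):
        print(name, footprint(nodes, cfg))
```

At 130 nodes, that works out to 130U versus 39U of rack space and 260 versus 52 PDU ports, which is where the consolidation really shows.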

There is an added benefit: the compute sleds are easier to pop out of the chassis than 1U servers are to pull from a rack, since the cabling is, at most, on one side of the rack. If you only used the CMC ports and the OCP multi-host adapters, everything would be cabled from the rear, making service easy.

That is really the point of a server like this: making higher-density dedicated servers available at lower cost.

Final Words

This is one of those systems that was particularly neat to review since we looked at the Gigabyte R113-C10 earlier this year. It really put into focus the difference between a lower-density, low-cost 1U server and this shared-chassis design. There is a reason this type of system has become very popular in the dedicated hosting market.

Gigabyte B343 C40 AAJ1 Node 6

The AMD EPYC 4005 is the top CPU for these single-socket dedicated hosting nodes right now, and it is easy to see why. With up to 16 cores and 3D V-Cache, these nodes offer a lot of performance while really maximizing performance per core.

Gigabyte B343 C40 AAJ1 AMD EPYC™ 4005 Processor

Overall, the Gigabyte B343-C40-AAJ1 makes a lot of sense for organizations looking to provide high-performance cores in dedicated hosting nodes. We are looking at the 1GbE onboard networking version here since we can also use the multi-host adapters in the rear for high-speed connectivity. Gigabyte also has designs with higher-speed onboard networking, although we did not look at those models. Still, this is a really neat system for those looking to drive down the cost of providing AMD EPYC 4005 nodes while increasing rack density.

6 COMMENTS

  1. How do the rear OCP slots work shared with the nodes?

    I am wondering both physically and logically. If each one of those needs 4x x4 PCIe cables for full bandwidth, how do these get connected?

  2. @George these will require an OCP 3.0 NIC with multi-host capabilities like a ConnectX-6. You get one or two physical connections into the NIC, and then the NIC has vSwitch/vRouter capabilities that present a separate NIC to the internal hosts, typically 4 per card (that’s why you need 3 for 10 nodes).

  3. Notably, each node seems to only have a Gen4 x4 connection to the multi-host NICs, and it’s from the B650 chipset, not direct from the CPU. That means you’re capped at roughly 60 Gbps, and it’s shared with all the other chipset functions. I also wonder if it will impact things like SR-IOV and VFIO (because of IOMMU groups).

  4. Ten individual separate PCs in one box. Each PC can be leased (rented?) to a user who logs in to an online cloud hosting service and remotes into their own online cloud PC. Ten users at once, each to their own respective PC. The one box requires less power to operate than ten separate boxes, each with its own PSU and corresponding PDU socket to do the same job. Makes sense to me. I like it!

  5. Also, it’s a physical hardware PC, so there are fewer issues with setting up virtualization (although that’s possible here too). Very neat server design. I can see this being useful for businesses running their own federated cloud PC services for users who need a separate work computer but maybe can’t afford to buy a whole new system on their own dime.
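
As a quick sanity check on the roughly 60 Gbps figure in comment 3 above, here is a minimal back-of-the-envelope sketch. The PCIe Gen4 signaling rate and 128b/130b encoding are standard; the 5% protocol-overhead factor is only our rough assumption, not anything measured on this system.

```python
# Back-of-the-envelope check of the Gen4 x4 bandwidth cap from the comment above.
# The 0.95 protocol-overhead factor is a rough assumption, not a measurement.

GEN4_GT_PER_LANE = 16.0      # PCIe Gen4: 16 GT/s per lane
ENCODING = 128 / 130         # 128b/130b line encoding efficiency
LANES = 4                    # x4 link from the B650 chipset, per the comment

raw = GEN4_GT_PER_LANE * LANES       # 64 Gb/s on the wire
usable = raw * ENCODING              # ~63 Gb/s after encoding
practical = usable * 0.95            # ~60 Gb/s after assumed TLP/DLLP overhead

print(f"raw {raw:.0f} Gb/s -> usable {usable:.1f} Gb/s -> ~{practical:.0f} Gb/s practical")
```

That lines up with the roughly 60 Gbps ceiling described in the comment, shared with everything else hanging off the chipset.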
