Gigabyte ME33-AR0 AMD EPYC 8004 Motherboard Review

Gigabyte ME33 AR0 Overview With AMD EPYC 8324PN

The Gigabyte ME33-AR0 is the company’s single-socket AMD EPYC 8004 motherboard. We have started our EPYC 8004 “Siena” series coverage, and Siena is an understated, even under-hyped, platform that a lot of folks could be taking advantage of. Gigabyte’s platform for a Siena server is unique compared to others on the market. Let us get into it.

Gigabyte ME33-AR0 Hardware Overview

The motherboard is going to look very different from most, but the size is standard E-ATX at 12″ x 13″. That means it will integrate into many different chassis. If you are reading this and doing a double-take on the layout, we will go through why it looks the way it does in this article.

Gigabyte ME33 AR0 Overview

The new motherboard uses the AMD SP6 socket. One could be excused for thinking it looks a lot like the SP3 socket used from Naples through Milan (and on Threadripper). An easy way to tell them apart is the memory configuration: six memory channels with up to two DIMMs per channel, for twelve DIMM slots total.

Gigabyte ME33 AR0 Socket SP6 And DIMMs

In the socket go the AMD EPYC 8004 CPUs, which scale up to 64 cores using the same Zen 4c cores found in “Bergamo”. One way to think about the platform is that it is like half a Bergamo in terms of maximum core count and DDR5 memory channels.

Gigabyte ME33 AR0 With AMD EPYC 8324PN And DIMM Slots

Putting the CPU at the front of the motherboard allows the system to direct airflow over the CPU heatsink. It also allows all of the memory channels to be present alongside a number of PCIe slots.

Gigabyte ME33 AR0 Overview With AMD EPYC 8324PN

To show an example of this, the Gigabyte G242-Z10 and the Gigabyte MZ32-AR0 are previous-generation examples of how this layout works. Now, Gigabyte is bringing a similar style of platform to a newer technology.

Gigabyte G242 Z10 AMD EPYC Heatsink And RAM

Next to the DIMM slots at the top of the motherboard are two PCIe Gen5 x4 M.2 slots.

Gigabyte ME33 AR0 M.2 Slots

Gigabyte also has three x8 MCIO connectors on the leading edge of the motherboard. Two are on the top side.

Gigabyte ME33 AR0 MCIO Connectors 1

The other is on the bottom. Together, these provide a total of 24 lanes, or enough for six PCIe x4 NVMe drives. Gigabyte also includes an MCIO-to-SATA cable for those who want to use SATA drives. Two of the MCIO connectors can be used for SATA, giving a total of 16 SATA drives.
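The lane math above is easy to lose track of, so here is a minimal sketch of it. The constants come straight from the review (three x8 MCIO connectors, two of which can switch to SATA mode); the assumption that each NVMe drive takes a x4 link and each SATA port takes one lane is how the article's drive counts work out.

```python
# Sketch of the ME33-AR0 MCIO lane math described above.
# Assumptions: each NVMe drive uses a PCIe x4 link, and each
# SATA port consumes one lane when an MCIO connector runs in SATA mode.
MCIO_CONNECTORS = 3
LANES_PER_MCIO = 8
LANES_PER_NVME = 4
SATA_CAPABLE_MCIO = 2  # two of the three connectors can run SATA

total_lanes = MCIO_CONNECTORS * LANES_PER_MCIO   # 3 * 8 = 24 lanes
nvme_drives = total_lanes // LANES_PER_NVME      # 24 / 4 = 6 NVMe drives
sata_drives = SATA_CAPABLE_MCIO * LANES_PER_MCIO # 2 * 8 = 16 SATA drives

print(total_lanes, nvme_drives, sata_drives)  # 24 6 16
```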

Gigabyte ME33 AR0 MCIO Connectors 2

There are four PCIe x16 slots. Three are PCIe Gen5 x16. The fourth, the top slot in this photo, is a PCIe Gen4 x16 slot.

Gigabyte ME33 AR0 PCIe Gen5 Slots 1

For management, there is an ASPEED AST2600 BMC.

Gigabyte ME33 AR0 ASPEED BMC

Networking is handled by a Broadcom BCM5720, a dual-port 1GbE NIC chip.

Gigabyte ME33 AR0 Broadcom BCM5720

The top of the motherboard has the ATX power connector and the CPU power connectors.

Gigabyte ME33 AR0 Power Input

Next, let us get to how this is all connected.

13 COMMENTS

  1. With the CPU & RAM placement like this, what can one do with these PCIe slots? Is it only suitable for NICs?

  2. Goodness, such an I/O-heavy board with only 1GbE? I fail to understand why mid-range or higher-end boards don’t have at least one 10G port and a 2.5G port.

  3. Hello,

    I am running HPC servers for the FEA consulting that I do. Could you comment on the idle power of this board? I am actively looking for HPC server solutions with low idle power consumption to replace an older 4-socket Xeon system. My current dual-socket EPYC 9554 system pulls around 475W at idle, and my older 4-socket Xeon system pulls close to 750W at idle.

  4. Indeed Eric, a word about the placement of the PCIe slots would be interesting. What’s the idea here? Always use risers? Did you talk to Gigabyte about it?

  5. These boards aren’t usually bought by those who use GPU accelerators.
    I figure mostly NICs, HBAs, or U.2 breakout cards.

  6. That PCIe placement is a Gigabyte trademark, I guess. It is the most terrible design that I have seen.

    You can’t use any card that is bigger than the x16 slot. Quick reminder: almost any RAID controller has cables that go out the side, not the top.

    You want to install something like a PERC H755? Hah, shame on you, you only have one (!) slot where that works; otherwise the cables will be on top of the memory DIMMs.

    And the spacing between the PCIe slots is more suited for GPUs (since they are two slots wide), but nope, can’t use it for that.

    So I guess it’s just for HBA/retimer cards, but those are ALSO bigger than an x16 slot, so the cables from them will go into the CPU heatsink, yikes.

    I just don’t understand why Gigabyte continues to do this..

  7. In 2024, at least 10GbE should be the bare minimum standard.

    If I were the designer of this board, I would have swapped the placement of the M.2 slots with the DIMM/CPU area.

    That at least would render the PCIe slots usable….

  8. That is the stupidest board layout I have ever seen. They should have left out the PCIe slots and sold it cheaper, given that the slots are blocked by the CPU & RAM.
