The AMD EPYC 4004 is Finally Here and Intel Xeon E Needs an Overhaul

AMD EPYC 4564P to Intel Xeon E-2488 Performance

Given that we were doing this pre-launch, we only had two SKUs: the 6-core AMD EPYC 4344P and the 16-core AMD EPYC 4564P. Probably the most interesting parts for folks would be the AMD EPYC 4464P with 12 cores at a 65W TDP, the 3D V-Cache parts, and the 8-core SKUs that match Intel’s top-end Xeon E-2488. Since what we have is a 16-core part, we will instead take a look at per-socket performance.

AMD EPYC 4564P lscpu Output

Since folks like to see this, here is what the topology looks like. We can see both 32MB L3 clusters, one for each 8-core compute die.

ASRock Rack With AMD EPYC 4564P Topology
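
If you want to check the same split on your own system, lscpu and sysfs show it directly. A quick sketch, assuming a standard Linux cache-topology layout:

    # Map each CPU to its caches; the EPYC 4564P should show two L3 instances,
    # one per 8-core CCD
    lscpu --extended=CPU,CORE,SOCKET,CACHE

    # Or list which CPUs share each L3 directly from sysfs
    cat /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list
    cat /sys/devices/system/cpu/cpu8/cache/index3/shared_cpu_list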

Taking a look at the performance, the AMD EPYC 4564P is just a lot faster.

AMD EPYC 4564P To Intel Xeon E-2488 Performance

AMD and Intel are closer on a per-core basis at a similar TDP. The challenge is really scale: AMD can go to 16 cores and 32 threads in a socket, whereas Intel tops out at 8C/16T, which makes the parts look like they are on different performance scales. On one hand, this is an unfair comparison just based on core count. On the other hand, at $606 for the Xeon E-2488 and $699 for the EPYC 4564P, the parts are fairly close on price. The Xeon E may use less power for fewer compute resources, but AMD has a 12-core 65W TDP part that would likely change that narrative based on what we have seen with the Ryzen parts.
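
To put that pricing in per-core terms (a rough sketch using the list prices quoted above; integer math):

    # List price per core at the $606 / $699 figures above
    echo "Xeon E-2488: $((606 / 8)) USD/core"     # ~75 USD/core across 8 cores
    echo "EPYC 4564P: $((699 / 16)) USD/core"     # ~43 USD/core across 16 cores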

At the end of the day, AMD’s socket just scales much higher in performance. That matters. If you need more cores or more performance than the Xeon E-2488, then you are buying a bigger and more expensive platform. With AMD, you can stay in the entry level platform.

AMD EPYC 4004 And Intel Xeon E SPEC CPU2017 Int Rate Estimates

Since there are many SKUs that we do not have, here are the SPECrate2017_int_base estimates that AMD is using for the rest of the lineup.

Next, let us get to our key lessons learned.

26 COMMENTS

  1. I was looking for a CPU like this for quite some time to replace an old system at work. The 16-core one will probably be the CPU we’ll end up using.

  2. Now we just need 64GB ECC UDIMMs at a reasonable price. Preferably with a 4x64GB kit option from Crucial/Micron.

  3. Fantastic. As Brian S mentions, we just need 64GB ECC UDIMMs and this will be perfect to replace my aging ZFS server. 256GB of RAM without having to go to large and power-hungry sockets.

  4. I was hoping they would use 3D V-Cache on both chiplets so we could avoid heterogeneous performance issues, but alas, we get the same cache setup as desktop Ryzen.

    As noted in the article, this makes the absence of an 8-core 3D V-Cache 4364PX model all the more glaring.

  5. Maximum memory supported is 192GB, as indicated in the first AMD slide. So unless this is an artificial limit like on some Intel processors, this is the max.

  6. I have been doing essentially this for a while with my colo – using a Ryzen 7900 + 128GB of ECC in an ASRock Rack 1U – it’s pretty much unbeatable for performance at the price, and it draws around 0.5A at 240V (roughly 120W), so efficiency is excellent – and you get superb single-core performance with turbo going over 5GHz.

    Never could understand why AMD didn’t embrace the segment earlier – because Intel have no answer with their current ranges.

  7. Looks like these CPUs will be sold at Newegg for consumer retail, hopefully not an OEM-only launch…

    The real question is how much the server motherboards will cost (the main difference between the consumer and server boards in this case will be IPMI). I’m guessing a couple hundred dollars for the cheapest server motherboards. So if these server boards still cost an arm and a leg, it might be worthwhile for homelab users to get a consumer AM5 motherboard that is known to work with ECC and just forgo IPMI (see the sketch below for verifying ECC).
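
    If you go that route, a minimal way to verify ECC is actually active under Linux; this assumes the amd64_edac driver is loaded and the standard EDAC sysfs layout:

        # ECC UDIMMs report a 72-bit total width (64 data + 8 ECC)
        sudo dmidecode --type memory | grep -i width

        # If a memory controller is registered with EDAC, ECC is in use;
        # ce_count/ue_count track corrected/uncorrected errors
        grep . /sys/devices/system/edac/mc/mc*/ce_count 2>/dev/null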

  8. I don’t see the appeal of 3D V-Cache in a server processor at this point. I’d rather take a more efficient CPU instead. I would love to see newer motherboards with integrated 10GbE, SAS, and/or U.2 ports that would be useful for home use.

  9. @TLN I agree. AM4 had Ryzen Pro APUs which came in 35W and 65W variants, which are basically the closest thing to this current EPYC launch (ECC memory works, OEM variant). No 35W launch kinda sucks, but I bet there is some BIOS setting to limit power usage (see the sketch below).
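
    There is no universal CLI for the limit itself (Eco Mode/PPT normally lives in BIOS), but if your kernel exposes RAPL counters for the part (recent kernels register AMD packages under the powercap tree), you can at least verify the package draw after changing it. A rough sketch:

        # Sample package energy twice, one second apart; the delta in
        # microjoules over one second approximates watts
        E1=$(cat /sys/class/powercap/intel-rapl:0/energy_uj)
        sleep 1
        E2=$(cat /sys/class/powercap/intel-rapl:0/energy_uj)
        echo "package power: $(( (E2 - E1) / 1000000 )) W"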

  10. I don’t understand the point of these tiny Xeons and EPYCs. Each one is thousands of dollars and their performance is laughable. I get they can be scaled, but still, they’re absolutely horrible. Is there something else I’m missing?

  11. @Sussy, what you’re missing is in the article, and it’s that they are NOT thousands of dollars.

    “AMD’s pricing is also very strong. Its 65W TDP SKUs are priced between $36-41 per core”

    $40 x 4 cores…you do the math

  12. I wonder how much it would cost Intel to bring at least a bit (e.g., at least 1x 10GbE and enough QAT punch to handle it) of the networking features they use on Xeon D and their server/appliance-oriented Atoms over to Xeon E.

    It wouldn’t change the fact that the compute competition is not pretty for them; but if there are savings from tighter integration versus a motherboard NIC or a discrete card, that could make the system-level price look more attractive using IP blocks they already have.

    The other thing I’d be curious about (not sure if ASPEED’s pricing is just too aggressive, or if too many large customers are either really skittish about proprietary BMCs or very enthusiastic about their own) is the viability of using the hardware they already have for AMT to do IPMI/Redfish instead. It wouldn’t change the world, but it could shave a little board space and BoM, which certainly wouldn’t hurt when people are making platform-level comparisons.

  13. @Joeri, I doubt it’s a technical limitation. Rather, the biggest DDR5 UDIMMs are 48GB at the moment (4 x 48GB = 192GB). When we have 64GB modules available, it will be 256GB. But AMD has not been able to validate this yet. For a server platform, that matters.

  14. If this leads to AM5 mobos with dual x8 PCIe slots (or x8/x4/x4) becoming more common and reasonably priced, then bring it on. If it just means more mobos with a single x16 yawning chasm of a bandwidth slot, then it is stupid. I do think servers need a bit more flexibility than a typical gamer rig.

  15. @emerth: it will be interesting to see what layouts formally endorsing Ryzen as a server CPU leads to, especially with the increase in systems running higher PCIe speeds with fewer slots and more cabled risers.

    Server use cases are definitely much less likely than gamer ones to be “your GPU, an M.2 or two, and that’s basically it”; but 16-core/192GB servers are presumably going to mostly be 1Us, so it wouldn’t be a huge surprise to see fewer ‘ATX/mATX but with 2×8 or x8, 2×4’ boards and more ‘2×16 (mechanical), one cabled and one right-angle riser’ or ‘single riser with a mechanical x8 on one side and two single-width mechanical x16s on the other’, just because these motherboards are less likely than the Ryzen ones to be slated for 4Us or pedestal servers.

    Probably a nonzero number of more desktop-style motherboards too; this will be used as a low-end ‘workstation’ part for people who want desktops but are more paranoid about validation, and that’s still typically tower-case stuff; but boards aimed specifically at low-cost physical servers will likely be heavy on risers (whether cabled or right-angle) just because of the chassis they are intended to pair with.

  16. @fff, risers or cables are fine. Breaking out the PCIe to allow more devices to be attached is the important thing.

  17. I don’t think PCIe 5.0 is going to do me any good without a storage controller that can utilize that extra bandwidth.

    I can’t fan out PCIe 5.0 to twice as many 4.0 NVMe drives without a bleeding-edge retimer.

    If I could get a 48-port SATA HBA at PCIe 5.0 that cost less than the total of the rest of the platform, then I could understand what to do with this thing.

    Looks almost ideal for my cheap designs if I could connect those dots.

  18. As well as Boyd’s remarks and fuzzy’s and mine, there is this: a server does not really need a chipset or a collection of 10/20/40Gb/s USB ports. There is no need to spend eight PCIe lanes on that stuff like in a desktop board. Vendors should use them for more slots, cabled connections, or a very large number of SATA drives.

  19. @MSQ: NVMe RAID has been supported since RAIDXpert 2.0 (2020). Supermicro explicitly mentions RAID 1 & 0 on their 4004 mainboards with 2x M.2 slots.

  20. A couple of questions for anyone who uses these systems:

    1. Where did you buy them? In Eastern Europe there are a lot of resellers for Dell/HPE, but I have not seen any for ASRock Rack.
    2. What about RAID? Do you use it on such systems? If so, do you use the motherboard implementation, or maybe a software one? (See the sketch below for the software route.)
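
    For the software route, a minimal mdadm mirror sketch; the device names /dev/nvme0n1 and /dev/nvme1n1 are placeholders for whatever your drives enumerate as:

        # Create a RAID 1 mirror across two NVMe drives (destroys existing data)
        sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 \
            /dev/nvme0n1 /dev/nvme1n1

        # Watch the initial sync, then persist the array config (Debian/Ubuntu path)
        cat /proc/mdstat
        sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf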

  21. Glaringly missing from this article is the number of PCIe lanes. With the standard Ryzen count of 28 lanes, this is no competition for Intel’s Xeon W processors.

  22. Hrmm, now if I could buy these in a small platform, similar to the HPE MicroServer, that would be awesome. I don’t have time anymore for manual builds, but otherwise these are perfect for a home custom NAS/firewall/etc.

  23. This is an interesting lineup… And even if Intel decides to go with P-cores, AMD can overwhelm them with 4c or 5c cores up to 32 cores easily…
