The AMD EPYC 4004 is Finally Here and Intel Xeon E Needs an Overhaul

AMD EPYC 4564P Front 4

Back in 2018, I distinctly remember attending the AMD EPYC Embedded 3000 Series Launch in London and sitting at dinner with Scott Aylor, then the head of EPYC, asking when we would get a Ryzen-based EPYC. Other AMD execs are probably sick of me asking the same question over the last six years. Finally, I can stop asking, as we now have the AMD EPYC 4004 series for entry-level servers. With this launch, something we have discussed in a few pieces over the years is coming true: Intel needs to re-think its Xeon E series, since it is now getting walloped in this segment.

As one might expect, we have a video for this one that you can find here:

As always, we suggest opening this in its own tab, browser, or app for the best viewing experience.

AMD EPYC 4004 Series Overview

Let us get to the new CPUs, and here is the overview. AMD is leveraging its Ryzen 7000 series architecture, which scales up to 16 cores and includes 3D V-Cache models, to disrupt the entry-level server market.

AMD EPYC 4004 Summary

We are not going to go deep into the architecture because it is well-known from the Ryzen side. We get Zen 4 cores, a 65-170W TDP range, ECC UDIMM support, and even onboard graphics.

AMD EPYC 4004 Architecture And Features Large

Indeed, these CPUs even look like someone etched something different onto a Ryzen 7000 series heat spreader and called it a day. That is not too far off from what is happening here, but as with the Xeon E series, the EPYC branding adds real value through validation of a number of features and validation with server operating systems and software. As an example, AMD will lean into the fact that it has two 16-core SKUs, and Windows Server 2022 is licensed in 16-core increments, making these the de-facto entry server SKUs for those who want to use Windows Server.
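To see why the 16-core count matters, here is a minimal sketch of the core-licensing math, assuming (per the article's claim) that Windows Server 2022 cores are licensed in 16-core increments with a 16-core minimum per server; the function name is our own illustration, not a Microsoft tool.

```python
import math

def licensed_cores(physical_cores: int, increment: int = 16) -> int:
    """Cores that must be licensed, assuming 16-core increments
    with a 16-core minimum (as the article describes)."""
    return max(increment, math.ceil(physical_cores / increment) * increment)

# A 6-core EPYC 4224P still pays for 16 cores of licensing,
# while a 16-core SKU uses its full allotment.
print(licensed_cores(6))   # 16
print(licensed_cores(16))  # 16
print(licensed_cores(24))  # 32
```

Under that model, a 16-core part wastes none of the minimum license, which is exactly the pitch AMD is making.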

AMD EPYC 4224P Front 1

When we say that these parts are similar to Ryzen, consider this: we put our CPUs into the ASRock Rack 2U1G-B650 2U AMD Ryzen GPU server we reviewed. For the lower-power 6-core part, it was a straight swap.

ASRock Rack With AMD EPYC 4564P

The 16-core SKU required us to add a bigger heatsink. We will have a follow-up with the bigger heatsink version, but this is all the same socket.

AMD EPYC 4004 Back 1

Prior to this, AMD had its big-socket AMD EPYC 9004 series with Genoa, Genoa-X, and Bergamo. Then there was the AMD EPYC 8004 “Siena” with up to 64 cores. Still, AMD did not have a real entry-level server product, so companies were simply building Ryzen-based servers. We have probably looked at around a dozen Ryzen server platforms over the years. At the same time, it was always a bit strange to have a Ryzen server platform running hardware and software that it was not validated for.

AMD EPYC Family Including EPYC 4004

Here is the SKU stack, ranging from the AMD EPYC 4124P at $149 to the 16-core SKUs at $699. Historically, the 65W Ryzen CPUs have been absolute monsters of efficiency, so the AMD EPYC 4464P is the one that we would focus on.

AMD EPYC 4004 SKU Stack

AMD uses the “P” to show these are single-socket-only SKUs, as it does with the rest of the EPYC range. The “X” signifies 3D V-Cache. We wish there were an 8-core 3D V-Cache SKU, as the AMD Ryzen 7800X3D has been one of our favorite chips to use for Ryzen-based servers. That is a glaring omission in the SKU stack.

AMD EPYC 4004 Naming Convention

AMD’s pricing is also very strong. Its 65W TDP SKUs are priced between $36 and $41 per core, at least 20% lower than Intel’s offerings. Its higher-TDP SKUs all sit under the $/core of even Intel’s best Xeon E-2400 series parts. The 6-core Intel Xeon E-2486 has a 30W higher TDP than the EPYC 4224P, yet offers less performance and costs twice as much.
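The per-core figures above are easy to sanity-check from the quoted price points. In this quick sketch, the article only states the $149 and $699 prices directly; the core counts (4 for the 4124P, 16 for the 4564P) are assumptions inferred from AMD's naming convention.

```python
# SKU price points quoted in the article; core counts are assumed
# from the naming convention, not stated outright for every part.
skus = {
    "EPYC 4124P": {"price": 149, "cores": 4},   # assumed 4-core entry SKU
    "EPYC 4564P": {"price": 699, "cores": 16},  # 16-core SKU at $699
}

for name, s in skus.items():
    per_core = s["price"] / s["cores"]
    print(f"{name}: ${per_core:.2f}/core")
```

This prints $37.25/core for the entry part, squarely inside the $36-41 band AMD is advertising for its 65W SKUs, and $43.69/core for the 16-core flagship.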

AMD EPYC 4004 Dollar Per Core

Intel needs to adjust its pricing massively to stay competitive even on a core-for-core basis. The days of charging a premium for the Xeon E-2400 line are gone, and it now needs to adjust pricing to the reality of AMD entering the market. With those adjustments, Intel can be more competitive, but what it really needs is an all-E-core Xeon E series that sits somewhere between an Intel Core i3-N305 and a Xeon 6 Sierra Forest part.

Next, let us get to the performance of the new parts to see the socket-level performance gap.


  1. I was looking for a CPU like this for quite some time to replace an old system at work. The 16c one will probably be the CPU we’ll end up using

  2. Now we just need 64GB ECC UDIMMs at a reasonable price. Preferably with a 4x64GB kit option from Crucial/Micron.

  3. Fantastic. As Brian S mentions, we just need 64GB ECC UDIMMs and this will be perfect to replace my aging ZFS server. 256GB of RAM without having to go to large and power-hungry sockets.

  4. I was hoping they would use 3DVcache on both chiplets so we could avoid heterogeneous performance issues, but alas, we get the same cache setup as desktop Ryzen.

    As noted in the article, this makes the absence of an 8 core 3DVcache 4364PX model all the more glaring.

  5. Maximum memory supported is 192GB, as indicated in the first AMD slide. So unless this is a fake limit like with some Intel processors, this is the max.

  6. I have been doing essentially this for a while with my colo – using a Ryzen 7900 + 128GB of ECC in an ASRock Rack 1U – it’s pretty much unbeatable for performance at the price, and it draws around 0.5A at 240V, so efficiency is excellent – and you get superb single-core performance with turbo going over 5GHz.

    Never could understand why AMD didn’t embrace the segment earlier – because Intel have no answer with their current ranges.

  7. Looks like these CPUs will be sold at Newegg for consumer retail, hopefully not an OEM only launch…

    The real cost is how much the server motherboards will cost (the only difference between the consumer and server boards in this case will be IPMI). I’m guessing the cheapest server motherboards will run a couple hundred dollars. So if these server boards still cost an arm and a leg, it might be worthwhile for homelab users to get a consumer AM5 motherboard that is known to work with ECC and just forgo IPMI.

  8. I don’t see the appeal of 3D V-Cache in a server processor at this point. I’d rather take a more efficient CPU instead. I would love to see newer motherboards with integrated 10GbE, SAS, and/or U.2 ports that would be useful for home use.

  9. @TLN I agree. AM4 had Ryzen Pro APUs which came in 35W and 65W, which are basically the closest thing to this current EPYC launch (ECC memory works, OEM variant). No 35W launch kinda sucks, but I bet there is some BIOS setting to limit power usage.

  10. I don’t understand the point of these tiny Xeons and EPYCs. Each one is thousands of dollars and their performance is laughable. I get they can be scaled, but still, they’re absolutely horrible. Is there something else I’m missing?

  11. @Sussy, what you’re missing is in the article, and it’s that they are NOT thousands of dollars.

    “AMD’s pricing is also very strong. Its 65W TDP SKUs are priced between $36-41 per core”

    $40 x 4 cores…you do the math

  12. I wonder how much it would cost Intel to bring at least a bit (e.g. at least 1x 10GbE and enough QAT punch to handle it) of the networking features they use on Xeon Ds and their server/appliance-oriented Atoms to the Xeon E line.

    It wouldn’t change the fact that the competition as a compute part is not pretty for them; but if there are savings from tighter integration vs. a motherboard NIC or a discrete card that could probably make the system-level price look more attractive using IP blocks that they already have.

    The other thing I’d be curious about is the viability of using the hardware they already have for AMT to do IPMI/Redfish instead – I’m not sure if Aspeed’s pricing is just too aggressive, or if too many large customers are either really skittish about proprietary BMCs or very enthusiastic about their own proprietary BMCs. It wouldn’t change the world; but it could shave a little board space and BoM, which certainly wouldn’t hurt when people are making platform-level comparisons.

  13. @Joeri, I doubt it’s a technical limitation. Rather, the biggest DDR5 UDIMMs are 48GB at the moment. When we have 64GB modules available, it will be 256GB. But AMD has not been able to validate this. For a server platform, that matters.

  14. If this leads to AM5 mobos with dual x8 PCIe slots (or x8 x4 x4) becoming more common and reasonably priced then bring it on. If it just means more mobos with a single x16 yawning chasm of bandwidth slot then it is stupid. I do think servers need a bit more flexibility than a typical gamer rig.

  15. @emerth: it will be interesting to see what layouts formally endorsing Ryzen as a server CPU leads to, especially with the increase in systems running higher PCIe speeds doing fewer slots and more cabled risers.

    Server use cases are definitely much less likely than gamer ones to be “your GPU, an m.2 or two; that’s basically it”; but 16 core/192GB size servers are presumably going to mostly be 1Us, so it wouldn’t be a huge surprise if there are fewer ‘ATX/mATX but with 2×8 or x8, 2×4’ and more ‘2×16(mechanical), one cabled one right-angle riser’ or ‘single riser with a mechanical x8 on one side and two single-width mechanical x16s on the other’ just because these motherboards are less likely than the Ryzen ones to be slated for 4Us or pedestal servers.

    Probably a nonzero number of more desktop-style motherboards; this will be used as a low end ‘workstation’ part for people who want desktops but are more paranoid about validation; and that’s still typically tower case stuff; but boards aimed specifically at low cost physical servers will likely be heavy on risers(whether cabled or right-angle) just because of the chassis they are intended to pair with.

  16. @fff, risers or cabled connections are fine. Breaking out the PCIe to allow more devices attached is the important thing.

  17. I don’t think PCIe 5.0 is going to do me any good without a storage controller that can utilize that extra bandwidth.

    I can’t fan out PCIe 5.0 to twice as many 4.0 NVMe drives without a bleeding edge retimer.

    If I could get a 48-port SATA HBA at PCIe 5.0 that cost less than the total of the rest of the platform, then I could understand what to do with this thing.

    Looks almost ideal for my cheap designs if I could connect those dots.

  18. As well as Boyd’s remarks and fuzzy’s and mine, there is this: a server does not really need a chipset or a collection of 10/20/40 Gb/s USB ports. There is no need to spend eight PCIe lanes on that stuff like on a desktop board. Vendors should use them for more slots, cabled connections, or a very large number of SATA drives.

  19. @MSQ: NVMe RAID has been supported since RAIDXpert2 (2020). Supermicro explicitly mentions RAID 1 and 0 on their 4004 mainboards with 2x M.2 slots.

  20. A couple of questions to anyone that uses these systems:

    1. Where did you buy them? In Eastern Europe there are a lot of resellers for Dell/HPE, but I have not seen any for ASRock Rack.
    2. What about RAID? Do you use it on such systems? If so, do you use the motherboard implementation, or maybe a software one?

  21. Glaringly missing from this article is the number of PCIe lanes. With the standard Ryzen count of 28, these are no competition for Intel’s Xeon W processors.

  22. Hrmm, now if I could buy these in a small platform, similar to the HPE MicroServer, that would be awesome. I don’t have time anymore for manual builds, but these are otherwise perfect for a home custom NAS/firewall/etc.

  23. This is an interesting lineup… And even if Intel decides to go with P-cores, AMD can easily overwhelm them with Zen 4c or Zen 5c cores up to 32 cores…

