AMD EPYC 7002 Series Rome Delivers a Knockout


AMD EPYC 7002 Topology Impact

We wanted to show a few views of why this matters from a system topology perspective. You may have heard Intel and other commentators mention how AMD needed multiple NUMA nodes to hit high core counts due to its EPYC 7001 chiplet design. That has changed. Now it is Intel that needs more NUMA nodes to hit a given core count.

AMD EPYC 24-core Topology Changes

In the first-generation AMD EPYC 7001 CPUs, there are four dies packaged into each socket. Each die has two channels of local memory. This creates four NUMA nodes per socket, and there is a major latency penalty going from one die’s memory to another. Off-socket access was even worse. Here is a look at an AMD EPYC 7401P in one of the HPE ProLiant DL325 Gen10 systems that we have been deploying in droves.

AMD EPYC 7401P In HPE ProLiant DL325 Gen10 Topology

As you can see, there are four NUMA nodes. Also, PCIe devices are attached to different NUMA nodes, so a lot of traffic may need to cross the first-generation Infinity Fabric.
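For readers who want to check this on their own hardware, here is a minimal sketch, assuming a Linux host with sysfs mounted, that reads the NUMA node each PCIe device reports, mirroring what the topology views above visualize:

```python
# Minimal sketch (assumption: a Linux host with sysfs mounted) that reads the
# NUMA node each PCIe device reports.
import glob
import os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    try:
        with open(os.path.join(dev, "numa_node")) as f:
            node = f.read().strip()
    except OSError:
        node = "unknown"
    # -1 means the kernel recorded no NUMA affinity for this device.
    print(f"{os.path.basename(dev)} -> NUMA node {node}")
```

On a first-generation EPYC 7001 system this typically shows devices spread across the four nodes; on EPYC 7002 at default settings everything reports the same node.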

Here is a different example from a Gigabyte R272-Z32 server where we have NVMe SSDs populated in different slots across the 24x NVMe SSD front bays.

Gigabyte R272 Z32 2U 24x NVMe Server PCIe Gen4 Slots And Cabling

Even with all of that PCIe Gen4, here is what the topology looks like:

AMD EPYC 7402P In Gigabyte R272 Z32 NVMe Topology

As you can see, the package has a single memory domain since all of the DDR4 memory sits off of the I/O die. Also, all of the PCIe devices go to that NUMA node.
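One can verify that single memory domain from the OS side as well. Here is a minimal sketch, assuming a Linux host, that counts the NUMA nodes and the memory attached to each:

```python
# Minimal sketch (assumption: Linux sysfs) that counts NUMA nodes and the
# memory attached to each. On an EPYC 7002 box at default settings this
# should report a single node holding all of the DDR4.
import glob
import re

nodes = sorted(glob.glob("/sys/devices/system/node/node[0-9]*"))
for node_dir in nodes:
    with open(f"{node_dir}/meminfo") as f:
        meminfo = f.read()
    # Line looks like: "Node 0 MemTotal:  263921876 kB"
    match = re.search(r"MemTotal:\s+(\d+)\s+kB", meminfo)
    total_gib = int(match.group(1)) / (1024 * 1024) if match else 0.0
    print(f"{node_dir.rsplit('/', 1)[-1]}: {total_gib:.1f} GiB")
print(f"{len(nodes)} NUMA node(s) total")
```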

AMD EPYC 64-core Topology Changes

Scaling up, here is what 64 cores look like using an AMD EPYC 7702P 64-core CPU in a Supermicro WIO 1U platform:

AMD EPYC 7702P In Supermicro WIO 1U Topology

That massive 256MB L3 cache is split across the cores. All 256GB of memory is attached to this one large NUMA node. Likewise, PCIe is attached to a single NUMA node. Again, this would have been spread across four NUMA nodes in the previous generation.
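The L3 split is also visible from the OS. Each Rome CCX exposes its own 16MB L3 slice, which is where the 256MB headline figure comes from. Here is a minimal sketch, assuming Linux sysfs and that cache index3 is the L3 (as on typical x86 systems), that groups CPUs by the slice they share:

```python
# Minimal sketch (assumptions: Linux sysfs; cache index3 is the L3, as it is
# on typical x86 systems) grouping CPUs by the L3 slice they share. On Rome,
# each 4-core CCX has its own 16MB slice.
import glob

l3_slices = {}
for idx in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cache/index3"):
    with open(f"{idx}/shared_cpu_list") as f:
        cpus = f.read().strip()
    with open(f"{idx}/size") as f:
        size = f.read().strip()
    l3_slices[cpus] = size  # one entry per distinct slice

for cpus, size in sorted(l3_slices.items()):
    print(f"L3 of {size} shared by CPUs {cpus}")
print(f"{len(l3_slices)} distinct L3 slices")
```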

Supermicro H12SSW NT Motherboard

Intel cannot field 64 cores even in a mainstream dual-socket platform, which tops out at 56 cores. Instead, one needs to span four sockets and therefore four NUMA nodes. The example below uses only 4x 12-core CPUs, but with a 56-core maximum in dual-socket, Intel needs at least four NUMA nodes (e.g. 4x 16 cores) to hit what AMD can now do in one.

Supermicro SYS 2049U TR4 Topology

The old talking point of disparaging AMD’s four NUMA nodes against Intel’s one has the tables turned at 64 cores: now Intel needs four NUMA nodes where AMD needs only one.

Getting Big: 128-Core/ 256-Thread Topology

That scales up as well. With 64 cores/ 128 threads per socket, AMD can now do this in only two sockets:

Dual AMD EPYC 7742 Topology

Intel can only get to 112 cores/ 224 threads with four sockets. If you wanted this many cores with Intel Xeon Platinum 8200 series parts, you would need to move to an exotic (and costly) 8-socket design.

Impact: Memory Bandwidth and NPS

With the EPYC 7002 generation, AMD now has a setting to present multiple NUMA domains on a system. Although AMD did not want to make a direct comparison, the feature is similar to Intel Xeon Scalable Sub-NUMA Clustering (SNC). One can effectively partition an EPYC 7002 CPU to behave like four NUMA nodes, each with up to two compute dies and one quarter of the I/O die. This keeps data flowing through the shortest paths in the system. Indeed, with NPS=4 (four NUMA nodes) one has a topology that looks not dissimilar to the AMD Ryzen 3000 topology: a quarter of the I/O die and up to two compute dies.
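For software that wants to exploit those shorter paths, the usual approach is to keep a worker’s threads and memory on one node. Here is a minimal sketch, assuming Linux and its default first-touch page placement; target_node is just an illustrative choice:

```python
# Minimal sketch (assumptions: Linux, default first-touch page placement) that
# pins the current process to the CPUs of one NUMA node so its allocations
# land on that node's local memory controllers. target_node is a hypothetical
# choice; in practice, pick the node closest to your NIC or NVMe drives.
import os

def cpus_of_node(node):
    # Parse a cpulist such as "0-15,128-143" from sysfs into a set of CPU ids.
    with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
        cpulist = f.read().strip()
    cpus = set()
    for part in cpulist.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

target_node = 0
os.sched_setaffinity(0, cpus_of_node(target_node))
print(f"Pinned to node {target_node}: CPUs {sorted(os.sched_getaffinity(0))}")
```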

For this test, we are using the industry-standard STREAM benchmark. STREAM needs virtually no introduction; it is considered by many to be the de facto memory performance benchmark. Authored by John D. McCalpin, Ph.D., it can be found here.
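The real benchmark is compiled C/Fortran with OpenMP threads spread across every memory channel, so nothing simpler will match the numbers in the chart below. Purely as an illustration of what the triad kernel measures, here is a rough single-process numpy approximation:

```python
# Rough single-process approximation of the STREAM "triad" kernel
# (a[i] = b[i] + scalar * c[i]) using numpy. This is only an illustration;
# the real STREAM is compiled C/Fortran with OpenMP threads pinned across
# every memory channel and will report far higher numbers than one process.
import time
import numpy as np

N = 80_000_000          # ~640MB per array of 8-byte doubles, well past the caches
scalar = 3.0
b = np.random.rand(N)
c = np.random.rand(N)
a = np.empty_like(b)

start = time.perf_counter()
np.multiply(c, scalar, out=a)   # a = scalar * c
np.add(a, b, out=a)             # a = b + scalar * c
elapsed = time.perf_counter() - start

# Triad touches three arrays per pass: read b, read c, write a.
bytes_moved = 3 * N * 8
print(f"Approximate triad bandwidth: {bytes_moved / elapsed / 1e9:.1f} GB/s")
```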

AMD EPYC 7002 NPS Impact On Stream Bandwidth

The default for AMD EPYC 7002 systems will be NPS=1. That is what we showed in the charts above, and what we use in our benchmarks. In most of the tests we run, NPS=2 or NPS=4 does not get you much more performance, but for those optimizing hardware and software platforms for peak performance, the option is available. Since NPS changes topology and results, we wanted it on this page rather than in our main benchmark runs.
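If you do experiment with NPS=2 or NPS=4, a simple way to see the per-node behavior is to run the same memory-bound workload bound to each node in turn. Here is a sketch, assuming numactl is installed and with "./stream" standing in as a placeholder for your compiled STREAM binary:

```python
# Sketch (assumptions: numactl is installed; "./stream" is a placeholder for
# your compiled STREAM binary) that runs the same memory-bound workload bound
# to each NUMA node's CPUs and memory in turn, so NPS settings can be compared
# node by node.
import glob
import subprocess

for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    node = node_dir.rsplit("node", 1)[-1]
    cmd = ["numactl", f"--cpunodebind={node}", f"--membind={node}", "./stream"]
    print(f"--- node {node}: {' '.join(cmd)}")
    subprocess.run(cmd, check=False)  # STREAM prints its own bandwidth figures
```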

All of this architecture background is great, but we know our readers want to see the performance. We are going to cover that on the next page.

58 COMMENTS

  1. Absolutely amazing. I still can’t believe the comeback AMD has made in just a few years. From a joke to toppling over the competitor for the top position in what, 3 odd years?

    Definitely going to get this for our next server build. Major props to AMD.

  2. “Intel does not have a competitive product on tap until 2020.”
    Cooper Lake is not remotely competitive with Rome, much less its actual 2020 competitor Milan.

    Highly unlikely Intel will be close to competitive until its Zen-equivalent architecture on its 7nm node.

  3. Wow! I’ve been holding off on upgrading my E5 v3-generation server, workstations, and render farms in my post-production studio because what has been available as upgrades seemed so incremental, it was underwhelming. And now here comes Rome and the top SKU is performing 5-6X faster than an E5-2697 v3! Maybe a weird comparison, but specific to me. I’m thinking back to some painfully long renders on recent jobs and imagining those done 5x faster…

    I would really, really love to see some V-Ray or even Cinebench benchmarks. I know I’m not the target market, but I’m not alone in wanting this for media & entertainment rendering and workstation use. Any chance you could run some for us?

    Also, what Rome chip would you need for a 24x NVMe server to make sure the CPU isn’t the bottleneck?

    Great work, as always. Thank you!

  4. Intel’s got Ice Lake too. I’d also wager that Patrick and STH know more about Intel’s roadmap than most.

    Ya’ll did a great job. Using CPU 2017 base instead of peak was good. I thought it was shady of AMD to use peak in their presentations.

    I’d like to see sysbench come back.

  5. Most OEMs will have no problems with moving to Rome, but Apple is in a tough situation with their Intel partnership, aren’t they? How can they market Xeon generational improvements when others will be talking about multiplying performance and a substantial relative price decrease?

  6. Take a look at the top of dual socket systems in the SPECrate2017_int_base benchmark here:
    Supermicro already posted a 655 base with 7742’s to top the charts.

  7. Wizard W0wy – we applied patches, however:
    1. We left Hyper-threading on. I know some have a harder-line stance on if they consider HT on a fully-mitigated setup.
    2. We did not patch for SWAPGSAttack. AMD says they are already patched or not vulnerable here. Realistically, SWAPGSAttack came out the day before our review and there was no way to re-run everything in a day.

    Tyler Hawes – we have the Gigabyte R272-Z32 shown on the topology page. That will handle 24x U.2 NVMe and will be a common 2U form factor in this generation. CPU selection will depend on the NIC used, software stack, etc., but that is a good place to investigate.

  8. Awesome article STH

    I would love to see some more latency tests. Naples had some issues with latency-sensitive workloads, in part due to the chiplet design. So, will you guys test it out in the future?

    And more database tests?

  9. You did mention you would talk more about 3rd Gen EPYC? I don’t think I saw it anywhere in the article. Will it be out to compete with Ice Lake? What are the claims so far?

    Thanks for the great article! Best I’ve read so far.

  10. I’m also disappointed in the lack of a second gen 7371 SKU. Our aging HP GL380p G8 MSSQL server is due for a replacement, and I don’t want to have to license any more cores. Per-core performance really shines considering $7k/core. It would feel wrong to deploy without PCIe Gen 4; I might drop a 7371 into one of the new boards (if I can get any vendor support) and swap it when the time comes.

  11. I appreciate the amount of work you have done in compiling all this information. Thank you, and well done.

    Also, well done to AMD! What an amazing product they have delivered. Truly one of the greatest leaps in performance-per-dollar we have seen in recent years.

  12. Hello Patrick,
    There was a Gigabyte converged motherboard layout (H262-Z66) floating around that showed 4 Gen-Z 4C slots coming from the CPU. There were rumors of Gen-Z in Rome going back to the summer of 2018; is there anything you can tell us about that?

  13. Hi guys, taking my wife to the hospital in 30 minutes for surgery. Will try to get a few more answered later today, but apologies for the delay. She broke her elbow (badly). Thank you for the kind comments.

    Jesper – it is a bit different in this generation. When you are consolidating multiple sockets, or multiple servers, into a single socket, your latency comparison point becomes different as well. We have data but tried to manage scope for the initial review. We will have more coming.

    Luke – Milan is coming, design complete, 7nm+ and the same socket. AMD said the Rome socket is the Milan socket.

    Billy – I think AMD’s problem is that there is so much demand for their current stack, some of those SKUs did not make the launch. I am strongly implying something here.

    Michael Benjamins – 2P 7742 was 27005 without doing thread pinning. There is a lot more performance there. Also, Microsoft Windows Server 2019 needed a patch (being mainlined now) to get 256 threads to boot. I am not sure if I want to show this before we get a better tuned result. Even with this, R20 hits black screen to fully rendered in ~12 seconds. Cores were under 40-98% load for <10 seconds with R20. I actually think R20 needs a bigger test for a 256 thread system.

  14. I’m not sure I understand the paragraph about Intel putting pressure on OEMs. What exactly should not be named/disclosed? Can someone please explain the meaning to me?

    Sounds like the typical and shady anti-competitive measures Intel is known for.

    p.s. I hope this is not a double post, but I got no indication if my previous submit worked or not.

  15. Quick question on the successor to Snowy Owl? Have we got an ETA, or will AMD simply pop Ryzen in its place, like ASRock have done?

  16. This is f@#$ing great work. You’ve covered high-level, deep technical, business and market impact, with numbers and practical examples like your load gen servers that are great. I’ve read a few of the other big sites but you’re now on a different level.

  17. To anyone that’s new I’ll reiterate what I said on the jellyfish-fryer article

    Patrick’s the Server Jesus these days.

    He’s done all the server releases and they’re reviewing all the servers

  18. Okay. My criticism was this looked really long. I started reading yesterday. Finished today. Why’d AMD have to launch so late????

    After I was done reading I was totally onboard with your format. You’ve got a lot of context interjected. I’d say this isn’t as sterile as a white paper, but it’s ultra valuable.

    Now get to your reviews on CPUs and servers.

  19. @Youri and another Epyc system from Gigabyte already beat the SuperMicro one at your link 😉

    R282-Z90 (AMD EPYC 7742, 2.25GHz)

  20. I’m thinking you should submit this to some third tier school and call it a doctoral thesis for a PhD. That was a dense long read. I’ve been reading STH since Haswell and I’ll say that I really like how you’ve moved away from ultra clinical to giving more anecdotes. I can tell the difference reading STH over other pubs. This is deep and thorough.

  21. Which vendor can accept the first orders for systems with AMD EPYC 7002 (configurator ready) and is able to ship, let’s say, within the next 2-3 weeks?

  22. I am so glad I waited until today to read this, when I could sit down and read at my leisure. Thank you Patrick and team. This is why I read STH.

  23. “2. Customers to change behavior”

    This is likely not what AMD can do since there is no medicine or medical operation available to fix stupidity!

    Stupidity can’t be fixed by others except people themselves!

  24. Mike Palantir,
    During the event, I thought I recalled the HP rep stating they had systems available for order today.

  25. FYI: rumour rag WCCF claimed to fact-check your statistics!

    “Warning: some of the numbers below are simply absurd.

    ServeTheHome reviewed the top-end 64 core dual socket and found that “AMD now has a massive power consumption per core or performance advantage over Intel Xeon, to the tune of 2x or more in many cases.”

    The new EPYC parts have a massive I/O advantage with 300% the memory capacity versus Xeon, 33% more memory channels (8 versus 6), and finally 233% more PCIe Gen3 lanes. But what about actual performance?”

  26. This is probably a dumb question but are there any vendors that will be selling individual chips (not systems) within the next quarter or two? And who would the best vendor be?

    Thanks

  27. guys.. remember that both AMD and we as customers owe TSMC a lot. Without TSMC, all this would probably not be possible today.

  28. Never mind my previous comment. Newegg is selling the processors and is already on back order to the end of August for most of the desirable SKUs.

  29. Patrick thank you for the informative article and all the great work you and your team do. Also would like to thank the STH readers for their article comments and posts in the forums. This is one of very few sites where I actually enjoy reading what other people think and say…

    And thank you for the nudge nudge wink wink information with regards to the 7371-style SKUs. I have an application that processes in a very serial fashion and it benefits from higher megahertz vs. core count, though 16 cores is perfect for the SQL and other tasks on the machine. I’m excited about the new NUMA architecture and I’m looking forward to whatever is next.

    Best wishes and a speedy recovery to your wife!

  30. @Billy
    Epyc 7542 would probably match or beat the 7371 in most lightly threaded tasks.
    @lejeczek
    What can TSMC make that Samsung couldn’t?

  31. Amazing writeup Patrick, once again! Beamr is proud to be a Day 1 application partner as the only company focused on video encoding. As a result of this amazing achievement by AMD, at the launch event we were demonstrating 8Kp60 HDR live HEVC video encoding on a single socket of a Gen 2 EPYC 7742.

    And as a result of having 64 high performance cores, because we are heavily optimized for parallel operation, all cores were utilized at 95% or above! Beamr is super excited to have this level of performance available to our first tier OTT streaming customers and large pay TV operators.

    AMD has broken through on so many levels with this new processor generation that I understand why you feel the need to even go deeper with your analysis and review after writing an “epic” 11k word article.

  32. Great look at the next big thing… After it all, I can only ask if with FINALLY a 1 node socket is there any talk of 4P or 8P…
    The thought of 512C/1024T in a 4U is like a dream come true… And if the rumors of SMT4 turn out to be true (EUV does give 20% more density and power-savings) 512C/2048T could do most heavy jobs in one box…
    And it does change the landscape since the progression from 8C to 64C covers basically 100% of the market.. The market doesn’t care if they need 1P or 8P, they only care about the areas where AMD is excelling…
    Another interesting area I’m not seeing a lot of is Edge Computing… This should seal the deal with an 8 or 16C that can have 6 NICs and an Instinct for AI inferencing…

    Love the site… Looking at bare metal in the future…

  33. So what they’ve figured out that other sites haven’t yet is the whole consolidation story. That 4x Xeon E5-2630 V4 to 1 EPYC really resonates.

  34. It will be interesting to see how long it is before VMware and other companies start adjusting their licensing to reflect future market trends. Software companies have investors to please too. If Intel doesn’t have anything to compete by the time prices start going up, then it could cause a huge wave of companies switching to AMD for the simple fact that their licensing would be too expensive otherwise. The other thing they could do is switch everything to per-core licensing, which would give Intel a slight advantage or possibly just a tie once you factor in the total cost. I bet you big changes are coming though. No company could survive having their revenue cut to 1/6 its original value in a couple of years.

  35. So this is me just thinking about this some more. It will also be interesting to see the impact this could have on interest in open source alternatives. Costs jumping 2-6x are the kind of events that get people to start looking into alternatives.

  36. Colby, VMware changing its licensing to per-core after appearing on stage with AMD and praising Rome would be one of the top 3 stupidest moves this industry has ever seen. Not impossible, but highly improbable.

  37. Yeah, but in my experience, when it comes to choosing between looking like an idiot and having to explain to your investors and Wall Street analysts why your revenue stream has been cut in half, most CEOs would prefer to look like an idiot. After all, the CEO owns a good portion of the company as well. I don’t necessarily think it will be all at once, but instead of a 3% annual increase we may start seeing 10-15%. They also may be hoping that due to the cost reduction allowed by Rome they will see more customers coming in looking to virtualize since it will be cheaper. Another thing that could potentially go VMware’s way would be if customers just started giving more resources to each VM since they aren’t as constrained by their licensing anymore. Instead of dual-core VMs with 4GB of RAM now everyone gets

  38. …everyone gets 4 cores and 8GB with the benefit to the company being added productivity. Nothing happens in a vacuum in business but the question is what factors are going to prevail the most.

  39. Just joining in for the thanks. The most thorough and in-depth review on the net I’ve found so far.

    Also, Patrick, I wish your wife quick and full recovery. So you can get back to benchmarking, that is 😉

  40. I’m surprised at how inexpensive the lower core count 1P processors are. Are these practical in a high-end CFD workstation or for other compute-intensive workstations?

    Someone needs to compare the Ryzen 9 3950X ($750) with the soon to be released 16 core Zen2 with the 7302P ($825). Can’t believe a 16 core Rome EPYC is only $75 more than the R9! The 16 core Zen 2 has to be priced between these 2 devices, maybe $800?

    With the 7502P (32 cores) selling for $2300, I guess we know the upper end of the price on the Zen2 32 core Threadripper.

    Another thing to keep in mind is that Zen3 products will be shipping in 15 months or so. They will surely push down the price/performance curve even further. Zen 3 will be 7nm EUV, which should be 20% higher density, lower power consumption and faster clock speeds. Zen 3 Ryzen should be 32 core, TR should be 128 core, EPYC should be 128 or even 256 core !

  41. @Nobody I’m also really curious about the suitability of these chips for a workstation and how they compare to threadripper. Patrick thought the clock speeds on gen 1 EPYC chips were too slow before the 7371 was released.

  42. Devastating. Adding the fact that the second generation is compatible with SP3 and vendors already have v2-enabled BIOSes out there is a serious hit. Good job, AMD

  43. Followed the link back from your article on the 7 and 10nm Intel woes. When you wrote this, you expected Intel to be competitive in 2020. Instead Intel’s process woes have messed up the other parts of the company, and they are considering contracting out CPU and GPU production!

    I never thought I’d see Intel mess up so badly on process, and I know I’m not alone on this. It has given AMD a really big doorway, and curiously enough also seems to have opened the doorway further for ARM vendors, due I think to AMD being limited in production capabilities.
