AMD Radeon RX 6800 16GB GPU Review


Power Tests

For our power testing, we used AIDA64 to stress the AMD Radeon RX 6800, then HWiNFO to monitor power use and temperatures.
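
For readers who want to replicate something similar on Linux rather than our AIDA64 plus HWiNFO setup, a rough sketch of polling the amdgpu sysfs sensors follows. The hwmon paths and card index are assumptions that vary by kernel version and slot; treat this as illustrative, not as our actual logging tooling.

```python
# Hypothetical Linux-side equivalent of our HWiNFO logging: poll the amdgpu
# hwmon sensors exposed under sysfs. Paths and the card index are assumptions
# and can differ by kernel version and slot; we tested with AIDA64 + HWiNFO.
import glob
import time

def read_sensor(path):
    with open(path) as f:
        return int(f.read().strip())

# Find the hwmon directory for the first amdgpu card (assumes card0).
hwmon = glob.glob("/sys/class/drm/card0/device/hwmon/hwmon*")[0]

for _ in range(10):
    watts = read_sensor(f"{hwmon}/power1_average") / 1_000_000  # microwatts -> W
    temp_c = read_sensor(f"{hwmon}/temp1_input") / 1000         # millidegrees -> C
    print(f"GPU power: {watts:.1f} W  edge temp: {temp_c:.1f} C")
    time.sleep(1)
```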

AMD Radeon RX 6800 Power

After the stress test ramps up the AMD Radeon RX 6800, we see it top out at 203 Watts under full load and 7 Watts at idle. That is very low power draw given the high power consumption of past-generation AMD GPUs, and it puts the RX 6800 in a very competitive position against the GeForce RTX 3070 / RTX 2070 series.

Cooling Performance

A key reason we started this series was to answer the cooling question. Blower-style coolers have different capabilities than some of the large dual- and triple-fan gaming cards.

AMD Radeon RX 6800 Temperatures

The AMD Radeon RX 6800 ran at 68C under full load, which shows that AMD’s cooling solution performs very well while making very little noise. Idle temperatures were 33C, which is also excellent for a GPU of this size.

Final Words

Overall, the AMD Radeon RX 6800 performed well across the workloads that we could run. Power consumption and thermals were very reasonable compared to the NVIDIA GeForce RTX 3070.

It seems like AMD has an edge in double-precision (FP64) performance. We have received feedback that there are students out there who view this as a real need. At some point, a data center card is going to be faster, but in the consumer space this is an area where AMD leads. By the same token, we did not get to show off Tensor core performance, now that NVIDIA is building AI acceleration into its GeForce parts.
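
As a rough illustration of where that FP64 edge comes from, here is some back-of-the-envelope math. The shader counts, boost clocks, and FP64:FP32 rate ratios below are approximate public specs we are assuming, not measured results.

```python
# Back-of-the-envelope peak throughput: shaders * 2 ops/clock (FMA) * clock,
# with FP64 at a fixed fraction of FP32. Spec numbers here are assumptions
# taken from approximate public listings, not measurements from our testing.
def peak_tflops(shaders, boost_ghz, fp64_ratio):
    fp32 = shaders * 2 * boost_ghz / 1000  # TFLOPS
    return fp32, fp32 * fp64_ratio

rx6800_fp32, rx6800_fp64 = peak_tflops(3840, 2.105, 1 / 16)    # RDNA2: FP64 at 1/16 rate
rtx3070_fp32, rtx3070_fp64 = peak_tflops(5888, 1.725, 1 / 64)  # GA104: FP64 at 1/64 rate

print(f"RX 6800  ~{rx6800_fp32:.1f} TFLOPS FP32, ~{rx6800_fp64:.2f} TFLOPS FP64")
print(f"RTX 3070 ~{rtx3070_fp32:.1f} TFLOPS FP32, ~{rtx3070_fp64:.2f} TFLOPS FP64")
```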

AMD Radeon RX 6800 Angle View

From a software perspective, this is AMD’s big challenge. ROCm is getting better, but it is an extra step. For example, for our deep learning containers, we have to rebuild the containers with ROCm, troubleshoot why things are not working, and then ensure we are getting something comparable to what we had on the NVIDIA side. Realistically, that last step is specific to us, but the first two are more universal. If you are writing net-new code from scratch, this is likely not an issue, but CUDA acceleration is present in many other applications as well, and not all of them are open source.
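
For anyone attempting the same container rebuild, a minimal sanity check that a ROCm build of PyTorch actually sees the GPU might look like the sketch below. It assumes a ROCm PyTorch wheel is already installed in the container; since ROCm builds expose the device through the regular torch.cuda API, the same probe works on the NVIDIA side too.

```python
# Quick sanity check inside a rebuilt ROCm container. Assumes a ROCm build of
# PyTorch is installed; it reports the GPU through the standard torch.cuda API.
import torch

print("PyTorch:", torch.__version__)
print("HIP/ROCm build:", torch.version.hip)   # None on CUDA builds
print("GPU visible:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    # Tiny matmul to confirm kernels actually launch, not just enumerate.
    x = torch.randn(1024, 1024, device="cuda")
    print("Matmul OK, result norm:", (x @ x).norm().item())
```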

The good news is that software tends to iterate faster than hardware, and AMD now has a competitive hardware platform. Having a competitive hardware platform means that software developers are more willing to work on optimization, so we think AMD is on the right side of a cycle here. Frankly, the 16GB of memory on a card of this class is a great feature. Moving data over the PCIe bus when you run out of on-card memory is a relatively slow operation. That is why we want to see NVIDIA cards like the GeForce RTX 3060 12GB with more memory than the two-generation-old $650 GeForce GTX 1080 Ti. AMD has delivered on that, while NVIDIA has not necessarily done so in this price bracket (yet).
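
To make the point about spilling over PCIe concrete, here is the kind of quick comparison we mean, sketched in PyTorch. The exact bandwidth figures depend entirely on the platform; this is illustrative and not part of our benchmark suite.

```python
# Rough illustration of why running out of on-card memory hurts: copying a
# buffer within VRAM vs. pulling it across PCIe from host memory. Numbers are
# platform-dependent; this is a sketch, not part of our benchmark suite.
import time
import torch

assert torch.cuda.is_available()
size_mb = 1024
host = torch.empty(size_mb * 1024 * 1024, dtype=torch.uint8, pin_memory=True)
device = host.to("cuda")  # also warms up the CUDA/HIP context

def timed(fn):
    torch.cuda.synchronize()
    start = time.perf_counter()
    fn()
    torch.cuda.synchronize()
    return (size_mb / 1024) / (time.perf_counter() - start)  # GiB moved / seconds

print(f"VRAM -> VRAM : {timed(lambda: device.clone()):6.1f} GB/s")
print(f"Host -> VRAM : {timed(lambda: host.to('cuda')):6.1f} GB/s")
```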

Overall, this is a very nice card. AMD clearly has a new GPU platform that performs well.

10 COMMENTS

  1. Will you include video encoding benchmarks? Given how popular Plex is in the home lab space, I would expect video encoding to be very relevant.

  2. Wanna know the real dirty little secret with your new card, William? ROCm does not support RDNA GPUs. The last AMD consumer cards that ROCm supported were the Vega 56/64 and their 7nm die shrink, the Radeon VII.

    Got an RX 5000 or RX 6000 series card, or any version of APU? Well, you get to use OpenCL. Aren’t you lucky?

    NVIDIA has supported CUDA on every GPU since at least the 8800 GT. I can’t imagine how AMD expects to get ROCm adopted outside the upcoming national labs when the only modern card it works on is the MI100. Ever try to buy an MI100 (or MI50)? It is basically impossible to find an AMD reseller that will even condescend to speak to a small ISV.

  3. I find all these reviews and release news for both AMD and NVIDIA cards a joke at the moment; as an end user, I can’t ever find any in stock no matter how deep my pockets are!!

    I’m not just talking about STH

  4. Pure junk selling and USA warranty evading. AMD still owes me a video board, since I did not even get half a year of use out of the 3 very poorly designed and QA’d Vega Frontier Editions.
    AMD wants all the selfish benefits of consumer sales, but none of the mature responsibilities.

  5. The replacement warranty boards from AMD were all junk. The first did not last more than 2 days without crashing (BSOD); the second, after a wait of about a month, lasted just 1 day before crashing. I have the impression that no one is watching AMD: they simply return “defective boards” as replacements, then wash their hands of it, honoring neither their word nor their customers.
    Worse, this company then sought to abuse USA Consumer Protection Laws by expecting their customers in the USA to send the defective product OUT OF COUNTRY, having no USA depot.

  6. Park McGraw above ^^ had a faulty system (likely PSU or motherboard) that was making graphics cards either not work or actually break/fail, and then decided to blame AMD for it… 🤦‍♂️

    You didn’t get 3x faulty graphics cards in a row, you freaking imbecile… Basic silicon engineering says that getting 3x GPU duds in a row is practically an impossibility (unless the product itself had a fundamental device-killing flaw… which Vega 10 did not). AKA, it was YOUR SYSTEM that was killing the cards!

  7. And to emerth, I wouldn’t expect ROCm to EVER come to RDNA personally. API translation seriously isn’t easy, so keeping things limited to just two instruction sets (modern CUDA to GCN [+ CDNA which uses the GCN ISA & is basically just GCN w/ the “graphics” cut out]) likely cuts down the work & difficulty DRAMATICALLY!

    Not to mention that even IF RDNA DID support ROCm, performance vs Nvidia would still be total crap because of the stark lack of raw FP compute! (AMD prioritized pixel pushing DRAMATICALLY over raw compute w/ RDNA 1 & 2 to get competitive gaming performance & perf/W, with only RDNA 3 starting to eveeeeer so slightly reverse course on that front).

    AMD just doesn’t give a crap, whatsoever, about the hobbyist AI/machine learning market. Nvidia’s just got way, WAAAAAY too much dominance there for it to be worth AMD spending basically ANY time & effort to try and assault it. Especially when CDNA is absolutely beating the everliving SH!T out of Nvidia in the HPC & supercomputer market!
