For our power testing, we used AIDA64 to stress the dual NVIDIA GeForce RTX 3090 NVLink setup, then HWiNFO to monitor power consumption and temperatures.
Once the stress test had ramped up both GeForce RTX 3090 GPUs, power consumption topped out at 738W under full load and 59W at idle. Keep in mind that this is for the GPUs alone; adding the rest of the system, such as the Threadripper platform, we closed in on 1,500W at the wall.
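To make the at-the-wall figure concrete, here is a minimal sketch of the arithmetic. Only the 738W GPU load figure comes from our measurement; the platform draws and the 92% PSU efficiency are illustrative assumptions, not measured values:

```python
# Hedged sketch: estimating AC wall draw from component-level DC draws.
# Only gpu_load is a measured figure; the rest are assumptions.

def wall_power(dc_watts: float, psu_efficiency: float = 0.92) -> float:
    """Convert an estimated DC-side draw to an AC wall draw,
    given an assumed PSU efficiency."""
    return dc_watts / psu_efficiency

gpu_load = 738        # measured: both RTX 3090s under full load (W)
cpu_platform = 280    # assumed: Threadripper CPU package power (W)
rest_of_system = 330  # assumed: motherboard, RAM, storage, fans (W)

dc_total = gpu_load + cpu_platform + rest_of_system
print(f"Estimated wall draw: {wall_power(dc_total):.0f} W")  # ~1465 W
```

With those assumptions, the estimate lands in the same ~1,500W range we observed, which is why PSU headroom matters for a build like this.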
Just for comparison purposes, this is in line with what we see from complete servers, at the wall, with minimal storage and ~200-225W TDP CPUs.
A key reason that we started this series was to answer the cooling question. Blower-style coolers have different capabilities than some of the large dual and triple fan gaming cards.
Temperatures for the two GeForce RTX 3090 GPUs peaked at 74C under full load, and that figure is per GPU. The bigger challenge is going to be getting enough airflow into a chassis to keep these GPUs cool. We would not suggest trying to squeeze this setup into a small case with poor airflow.
There are effectively two large buckets of performance we are seeing here. First, there are applications that are not designed to use multiple GPUs. These tend not to be the workloads we see in the deep learning and scientific fields. We can understand why NVIDIA would start an SLI phase-out. In rare instances, we saw slightly worse performance with two cards than with one. Our sense is that one purchases two GPUs because doing so meets the needs of their most demanding use case. If you have a workload that occupies 80-95% of your time and runs 80-95% faster, and another workload that occupies 5-20% of your time but runs 5-20% slower, that is still a net win.
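The "net win" argument above can be sketched as a weighted-time calculation. The workload mix and the speedup/slowdown multipliers below are hypothetical values drawn from the ranges in the text, not benchmark results:

```python
# Hedged sketch of the net-win arithmetic: an overall speedup from a
# mix of workloads, some faster and some slower on two GPUs.

def expected_speedup(mix):
    """mix: list of (fraction_of_time, speedup_multiplier) pairs.
    Returns the overall speedup as old_time / new_time."""
    assert abs(sum(f for f, _ in mix) - 1.0) < 1e-9
    # Each workload's share of time scales by 1/speedup on the new setup.
    new_time = sum(f / s for f, s in mix)
    return 1.0 / new_time

# Assumed mix: 90% of time in a workload that is 1.9x faster,
# 10% of time in one that is 10% slower (0.9x).
mix = [(0.90, 1.9), (0.10, 0.9)]
print(f"Overall: {expected_speedup(mix):.2f}x")  # ~1.71x
```

Even with the occasional slowdown, the time-weighted result comes out well ahead, which is the sense in which a second GPU is still a net win.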
For applications such as our deep learning training and inference benchmarks, we get great results. This is directly due to those domains focusing on multi-GPU support, and to NVIDIA creating NVLink to address that market. Since these are GeForce cards, double-precision math is not great, and we also cannot use scale-out features such as GPUDirect RDMA. Still, moving from a GeForce RTX 2080 Ti to dual GeForce RTX 3090s can yield a 2-4x performance gain in many instances, which can absolutely be career-changing and yield a better end work product when one is limited by deadlines.
Still, the key challenge remains availability. Indeed, that is part of the reason we have two cards of different sizes. We wanted to highlight that just getting two cards, let alone a matched pair to recreate our experience, is difficult many months after these cards hit the market. We validated that even if one has to get different cards from different brands, one can still get a performance benefit, even if it creates physical connection challenges.
Darn son that’s the bossliest beast I ever did see!!!
DirectX 12 supports multi-GPU, but it has to be enabled by the developers.
NVLink was only available on the 2080-tier Turing cards – so only the high-end SKU having it is nothing new. AMD’s solution is what again? Nothing.
in DX11 games – dual 2080Ti were a viable 4K 120fps setup – which I ran until I replaced them with a single 3090. 4K 144Hz all day in DX11.
I would imagine someone will put out a hack that fools the system into enabling 2 cards – even if not expressly enabled by the devs
2 different cards is about as ghetto as it gets and shows the (sub)standards of this site – Patrick’s AMD fanboyism is the hindrance to this site – used to check every day – but now check once a week – and still little new… even the jankiest of yootoob talking heads gets hardware to review.
As an aside, I hope y’all get a 3060 or 3080 Ti to review.
The possibility of the crypto throttler affecting other compute workloads has me very worried… and STH’s testing is very compute focused.
Good review Will, ignore the fanboy whimpers – any regular knows how false his claims are.
Next up A6000?
Curious how close the 3090 is.
Nice review. I wonder how well the temperature can be controlled with a GPU water cooler.
Thanks for the review. It would be awesome to see how much the NVLink matters. I’m particularly interested for ML training – does the extra bandwidth help significantly vs. going through PCIe?
One huge issue is the pricing.
AFAIK only Supermicro sells servers equipped with the RTX 3080… why are they allowed to do that? IDK… considering it is Supermicro, they might just not care.
Here comes the pricing issue though. If you are offering your customers the bigger brands such as HPE and Dell EMC, you are stuck equipping your servers with high-end datacenter GPUs such as the V100S or A100, which cost 6-8 times as much as an RTX 3080 with similar ML performance… on paper.
Nvidia seems to be shooting themselves in the foot with this. It also makes my job annoying, trying to convince customers that putting an RTX 3080 into their towers should be considered a bad idea.
I’ve got exactly the same 2 cards!
What specific riser did you use? I’d like to hear your recommendation before I purchase something random ;).
I have two 3090, same brand and connected with the original NVLink.
We acquired these for a heavy weight VR application done with Unreal Engine 4.26
We tested all the possible combinations but we couldn’t make them work together in VR. Only one GPU is running the app. We checked with the Epic guys and they don’t have a clue. We contacted Nvidia technical support, and the call center guys literally don’t have any documentation for this extreme configuration. We want to render one eye per GPU, but it is not working. Does anyone have an idea or know something? Any help is more than welcome!!!!!
Dual gpu LOL Can’t believe people keep doing this hahaha
One of the problems I have run into with multiple cards is that they do not seem to increase the overall GPU memory available. I have configurations with 2-4 cards in the computers, and when I run applications, they only seem to see 12 GB of GPU memory, even when two are NVLinked. I see the processes spread out amongst the cards, but for large data files, my GPU memory footprint increases to around 11.5-11.7 GB and things slow down when this happens. Thus, GPU memory seems to be the bottleneck I have been running into (12 GB on the 3080 Ti and the 2080 Ti).
While getting cards has been a little difficult, it isn’t that hard to source a pair of the same cards. I currently have 3x RTX 3090 FTW3 Ultra cards and one 3090 from an Alienware.
I learned long ago, while running a pair of GTX 1080 Tis, that very few devs did the work necessary to benefit from SLI. One card just sat silently while the other worked. Perhaps they’ve improved. Only time will tell.
I have Asus Strix 3090s (x2), and with the NVLink bridge (4-slot) I can’t get Nvidia Control Panel to see that they are connected – no option to enable SLI/NVLink. I am using the latest driver, 512.59.