NVIDIA GeForce RTX 3090 NVLink Rendering Related Benchmarks
Next, we wanted to get a sense of the rendering performance of the dual GeForce RTX 3090s.
Arion Benchmark is a standalone render benchmark based on the commercially available Arion render software from RandomControl. The benchmark is GPU-accelerated using NVIDIA CUDA. However, it is unique in that it can run on both NVIDIA GPUs and CPUs.
Download the Arion Benchmark from here. First-time users will have to register to download the benchmark.
As with our first set of benchmarks, the GeForce RTX 3090 NVLink setup shows impressive dual-GPU results, to the point that we are seeing close to 3x the GeForce RTX 2080 Ti result here.
MAXON Cinema4D 3D
ProRender is an OpenCL-based GPU renderer that is available in MAXON’s Cinema4D 3D animation software. A fully functional 42-day trial version is available for download from the MAXON website here. Note: Even after expiration, the trial can still be used to measure render times.
While Cinema4D could see both cards, we saw something different here. Unlike a few of the benchmarks we are not highlighting because there was no discernible difference moving from one card to two, this is a different case: with two of these GPUs we actually saw lower performance than with a single card.
Redshift is a production-quality, GPU-accelerated renderer. A demo version of this benchmark can be found here.
With Redshift, the NVIDIA RTX 3090 NVLink configuration crushes our Redshift demo benchmark at 1 minute and 27 seconds, achieving the fastest render time we have seen to date. We did not get perfect 2x scaling, but one can easily see that this is an enormous performance gain.
Next, we will have 3DMark and Deep Learning results before moving on to power consumption, thermals, and our final thoughts.
Darn son that’s the bossliest beast I ever did see!!!
DirectX 12 supports multi GPU but has to be enabled by the developers
NVLink was only available on the 2080 Turing cards – so only the high end SKU having it – nothing new. AMD’s solution is what again? Nothing.
in DX11 games – dual 2080Ti were a viable 4K 120fps setup – which I ran until I replaced them with a single 3090. 4K 144Hz all day in DX11.
I would imagine someone will put out a hack that fools the system into enabling 2 cards – even if not expressly enabled by the devs
2 different cards is about as ghetto as it gets and shows the (sub)standards of this site – Patrick’s AMD fanboyism is the hindrance to this site – used to check every day – but now check once a week – and still little new… even the jankiest of yootoob talking heads gets hardware to review.
As an aside, I hope y’all get a 3060 or 3080 Ti to review.
The possibility of the crypto throttler affecting other compute workloads has me very worried… and STH’s testing is very compute focused.
Good review Will, ignore the fanboy whimpers – any regular knows how false his claims are.
Next up A6000?
Curious how close the 3090 is.
Nice review. I wonder how well the temperature can be controlled with a GPU water cooler.
Thanks for the review. It would be awesome to see how much the NVLink matters. I’m particularly interested for ML training – does the extra bandwidth help significantly, v.s. going through PCIe?
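One crude first-order answer is to time a device-to-device tensor copy: with an NVLink bridge active and peer access enabled, that copy bypasses PCIe. Below is a minimal sketch, assuming PyTorch is installed and two CUDA devices are visible; it returns None otherwise rather than failing.

```python
import importlib.util
import time

def p2p_bandwidth_gbs(num_mb: int = 256):
    """Time a GPU0 -> GPU1 tensor copy and return GB/s, or None if
    PyTorch or a second CUDA device is unavailable."""
    if importlib.util.find_spec("torch") is None:
        return None
    import torch
    if not torch.cuda.is_available() or torch.cuda.device_count() < 2:
        return None
    src = torch.empty(num_mb * 1024 * 1024, dtype=torch.uint8, device="cuda:0")
    dst = torch.empty_like(src, device="cuda:1")
    dst.copy_(src)              # warm-up copy before timing
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    dst.copy_(src)
    torch.cuda.synchronize()
    return (num_mb / 1024) / (time.perf_counter() - t0)

if __name__ == "__main__":
    bw = p2p_bandwidth_gbs()
    print("fewer than two CUDA GPUs visible" if bw is None else f"{bw:.1f} GB/s")
```

Comparing the number with and without the bridge installed gives a rough sense of the NVLink uplift for raw transfers, though training speedups also depend on how often the framework actually moves data between cards.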
One huge issue is the pricing.
AFAIK only Supermicro sells servers equipped with the RTX 3080… why are they allowed to do that? IDK… considering it is Supermicro, they might just not care.
Here comes the pricing issue though. If you are offering your customers the bigger brands such as HPE and Dell EMC, you are stuck equipping your servers with high-end datacenter GPUs such as the V100S or A100, which cost 6-8 times as much as an RTX 3080 with similar ML performance… on paper.
Nvidia seems to be shooting themselves in the foot with this. It also makes my job annoying, trying to convince customers that putting an RTX 3080 into their towers should be considered a bad idea.
I’ve got exactly the same 2 cards!
What specific riser did you use? I’d like to hear your recommendation before I purchase something random ;).
I have two 3090, same brand and connected with the original NVLink.
We acquired these for a heavyweight VR application done with Unreal Engine 4.26.
We tested all the possible combinations but we couldn’t make them work together in VR. Only one GPU is running the app. We checked with the Epic guys and they don’t have a clue. We contacted Nvidia technical support and the call center guys literally don’t have any documentation covering this extreme configuration. We want to use one eye per GPU but it is not working. Does anyone have an idea or know something? Any help is more than welcome!!!!!
Dual gpu LOL Can’t believe people keep doing this hahaha
One of the problems I have run into with multiple cards is that they do not seem to increase the overall GPU memory available. I have configurations with 2-4 cards in the computers, and when I run applications, they only seem to think that I have 12 GB of GPU memory – even when two are NVLinked. I see the processes spread out amongst the cards, but for large data files my GPU footprint increases to around 11.5 – 11.7 GB and things slow down when this happens. Thus, GPU memory seems to be the bottleneck I have been running into (12 GB on the 3080 Ti and the 2080 Ti).
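For what it's worth, this matches how most frameworks behave: each card's VRAM is exposed as a separate pool, and NVLink does not automatically merge them into one address space for typical workloads. A guarded sketch (assuming PyTorch is installed; it returns an empty list otherwise) that lists per-device memory, showing the pools are reported individually:

```python
import importlib.util

def per_device_memory_gb():
    """Return each visible CUDA device's total memory in GiB,
    or an empty list when PyTorch/CUDA is unavailable."""
    if importlib.util.find_spec("torch") is None:
        return []
    import torch
    if not torch.cuda.is_available():
        return []
    return [
        torch.cuda.get_device_properties(i).total_memory / 2**30
        for i in range(torch.cuda.device_count())
    ]

if __name__ == "__main__":
    # Each entry is one card's pool; they are not summed into a single space.
    print(per_device_memory_gb())
```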
While getting cards has been a little difficult, it isn’t that hard to source a pair of the same cards. I currently have 3 x rtx3090 ftw3 ultra cards and 1 3090 from an Alienware.
I learned long ago, while running a pair of GTX 1080 Tis, that very few devs did the work needed to benefit from SLI. One card just sat silently while the other worked. Perhaps they’ve improved. Only time will tell.
I have Asus Strix 3090s (x2) and with an NVLink bridge (4-slot) I can’t get the Nvidia Control Panel to see that they are connected – there is no option to enable SLI/NVLink. Using latest driver 512.59.
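Before trusting the control panel, it can help to confirm what the driver itself reports. Here is a small sketch that shells out to `nvidia-smi nvlink --status` (a real nvidia-smi subcommand) and returns the raw output, or None when the tool is missing or the query fails:

```python
import shutil
import subprocess

def nvlink_status():
    """Return raw `nvidia-smi nvlink --status` output, or None if
    nvidia-smi is missing or the query fails."""
    if shutil.which("nvidia-smi") is None:
        return None  # driver tooling not installed on this machine
    result = subprocess.run(
        ["nvidia-smi", "nvlink", "--status"],
        capture_output=True, text=True,
    )
    return result.stdout if result.returncode == 0 else None

if __name__ == "__main__":
    out = nvlink_status()
    print(out if out else "nvidia-smi unavailable or NVLink not reported")
```

If the links show as inactive here, the problem is likely bridge seating or slot spacing rather than a control-panel setting.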