Dual NVIDIA GeForce RTX 3090 NVLink Performance Review


NVIDIA GeForce RTX 3090 NVLink Rendering Related Benchmarks

Next, we wanted to get a sense of the rendering performance of the dual GeForce RTX 3090s.

Arion v2.5

Arion Benchmark is a standalone render benchmark based on the commercially available Arion render software from RandomControl. The benchmark is GPU-accelerated using NVIDIA CUDA. However, it is unique in that it can run on both NVIDIA GPUs and CPUs.

Download the Arion Benchmark from here. First-time users will have to register to download the benchmark.

NVIDIA RTX 3090 NVLink Arion

As with our first set of benchmarks, the GeForce RTX 3090 NVLink pair posts impressive dual-GPU results, to the point that we are seeing close to 3x the performance of a single GeForce RTX 2080 Ti here.

MAXON Cinema4D 3D

ProRender is an OpenCL-based GPU renderer that is available in MAXON’s Cinema4D 3D animation software. A fully functional 42-day trial version is available for download from the MAXON website here. Note: Even after expiration, the trial can still be used to measure render times.

NVIDIA RTX 3090 NVLink Cinema4D

While Cinema4D could see both cards, we saw something different here. Unlike the handful of benchmarks we are not highlighting because they showed no discernible difference moving from one card to two, this is a different case: with two of these GPUs, performance was actually lower than with a single card.

Redshift v3.0.31

Redshift is a production-quality, GPU-accelerated renderer. A demo version of this benchmark can be found here.

NVIDIA RTX 3090 NVLink Redshift

With Redshift, the NVIDIA RTX 3090 NVLink configuration crushes our Redshift demo benchmark at 1 minute and 27 seconds, the fastest render time we have seen to date. We did not get perfect 2x scaling, but one can easily see what an enormous performance gain this is.
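For readers who want to put numbers on "not perfect 2x scaling," a small helper can convert single- and dual-GPU render times into a speedup and a scaling efficiency. This is a generic illustration with hypothetical times, not our measured results:

```python
def scaling(single_s: float, dual_s: float, gpus: int = 2) -> tuple[float, float]:
    """Return (speedup, efficiency) for a render that took `single_s`
    seconds on one GPU and `dual_s` seconds on `gpus` GPUs."""
    speedup = single_s / dual_s      # >1.0 means the extra GPU helped
    efficiency = speedup / gpus      # 1.0 would be perfect linear scaling
    return speedup, efficiency

# Hypothetical example: 160 s on one card, 87 s on two.
speedup, eff = scaling(160.0, 87.0)
print(f"{speedup:.2f}x speedup, {eff:.0%} scaling efficiency")
# → 1.84x speedup, 92% scaling efficiency
```

Anything above roughly 90% efficiency is excellent for a real render workload, since some per-frame work cannot be split across GPUs.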

Next, we will have 3DMark and Deep Learning results before moving on to power consumption, thermals, and our final thoughts.


  1. DirectX 12 supports multi-GPU, but it has to be enabled by the developers

    NVLink was only available on the 2080 Turing cards – so only the high-end SKUs had it – nothing new. AMD’s solution is what again? Nothing.

    In DX11 games, dual 2080 Tis were a viable 4K 120fps setup – which I ran until I replaced them with a single 3090. 4K 144Hz all day in DX11.

    I would imagine someone will put out a hack that fools the system into enabling 2 cards – even if not expressly enabled by the devs.

    2 different cards is about as ghetto as it gets and shows the (sub)standards of this site – Patrick’s AMD fanboyism is the hindrance to this site – used to check every day – but now check once a week – and still little new… even the jankiest of yootoob talking heads gets hardware to review.

  2. As an aside, I hope y’all get a 3060 or 3080 Ti to review.

    The possibility of the crypto throttler affecting other compute workloads has me very worried… and STH’s testing is very compute focused.

  3. Good review Will, ignore the fanboy whimpers; any regular knows how false his claims are.
    Next up A6000?
    Curious how close the 3090 is.

  4. Thanks for the review. It would be awesome to see how much the NVLink matters. I’m particularly interested for ML training – does the extra bandwidth help significantly vs. going through PCIe?

  5. One huge issue is the pricing.

    Many see the potential ML/DL applications of the 3080, and their first idea is to stick them in servers for professional use. The issue with that is that, in theory, this is datacenter use of the GPU and thus violates the Nvidia Terms of Use…

    AFAIK only Supermicro sells servers equipped with the RTX 3080… why are they allowed to do that? IDK… considering it is Supermicro, they might just not care.

    Here comes the pricing issue though. If you are offering your customers the bigger brands such as HPE and Dell EMC, you are stuck with equipping your servers with high-end datacenter GPUs such as the V100S or A100, which cost 6-8 times as much as an RTX 3080 with similar ML performance… on paper.

    Nvidia seems to be shooting themselves in the foot with this, in addition to making my job annoying: trying to convince customers that putting an RTX 3080 into their towers should be considered a bad idea.

  6. I’ve got exactly the same 2 cards!
    What specific riser did you use? I’d like to hear your recommendation before I purchase something random ;).

  7. I have two 3090s, same brand, connected with the original NVLink.
    We acquired these for a heavyweight VR application built with Unreal Engine 4.26.
    We tested all the possible combinations, but we couldn’t make them work together in VR; only one GPU is running the app. We checked with the Epic guys and they don’t have a clue. We contacted Nvidia technical support, and the call center guys literally don’t have any page to consult for this extreme configuration. We want to render one eye per GPU, but it is not working. Does anyone have an idea or know something? Any help is more than welcome!

  8. One of the problems I have run into with multiple cards is that they do not seem to increase the overall GPU memory available. I have configurations with 2-4 cards in the computer, and when I run applications, they seem to think that I have only 12 GB of GPU memory, even when 2 are NVLinked. I see the processes spread out among the cards, but for large data files, my GPU footprint increases to around 11.5 – 11.7 GB and things slow down when this happens. Thus, GPU memory seems to be the bottleneck I keep running into (12 GB on the 3080 Ti and the 2080 Ti).
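    The behavior described above matches how most frameworks treat NVLinked consumer GPUs: each device still exposes its own separate memory pool, and VRAM is not automatically combined into one large allocation. A quick way to confirm what each card actually reports is to query `nvidia-smi`. This is a sketch assuming `nvidia-smi` is on the PATH; the parsing helper is our own illustration, not part of any NVIDIA tool:

    ```python
    import csv
    import io
    import subprocess

    def parse_memory_mib(csv_text: str) -> list[int]:
        """Parse the output of:
        nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits
        One line per GPU, each line being that card's total memory in MiB."""
        return [int(row[0]) for row in csv.reader(io.StringIO(csv_text)) if row]

    def query_memory_mib() -> list[int]:
        """Ask nvidia-smi for the total memory of every visible GPU."""
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.total",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
        return parse_memory_mib(out)

    if __name__ == "__main__":
        # Two NVLinked RTX 3090s would print two separate 24 GiB pools,
        # e.g. [24576, 24576] -- not one combined 48 GiB pool.
        print(query_memory_mib())
    ```

    If each card reports its full capacity but applications still cap out at one card's worth, the application itself is not distributing its working set across devices; NVLink alone does not change that.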

