Dual NVIDIA Titan RTX Review Compute with NVLink Performance


NVIDIA Titan RTX 3DMark Suite Testing

Here we will run the dual NVIDIA Titan RTX through graphics-related benchmarks. At their heart, these are still GPUs, so we wanted to give at least some perspective on them.

2x NVIDIA Titan RTX NVLink Port Royal

Port Royal exercises the real-time ray tracing capability of the RTX cards, and two NVIDIA Titan RTX GPUs with NVLink smash these scores.

2x NVIDIA Titan RTX NVLink Time Spy

With Time Spy we saw a jump, but nowhere near as big as with Port Royal.

2x NVIDIA Titan RTX NVLink Fire Strike

With Fire Strike, we are sorting on the standard results, which makes the dual NVIDIA Titan RTX cards look only slightly faster than a single card. When you dig into the data and look at the Ultra and Extreme results, you see much better scaling.

Dual NVIDIA Titan RTX with NVLink Unigine Testing

Unigine is extremely popular, so we wanted to show performance here as well.

2x NVIDIA Titan RTX NVLink Superposition
2x NVIDIA Titan RTX NVLink Heaven
2x NVIDIA Titan RTX NVLink Valley

Unigine does not show nearly as impressive results, and Unigine Valley actually regresses. Our sense is that this is simply a case of having too much hardware for the benchmarks we are running on the Unigine side. We are simply presenting the output from our standard test suite here, not modifying it to get better results.

Next, we are going to look at the dual NVIDIA Titan RTX with NVLink setup with several deep learning benchmarks.

10 COMMENTS

  1. Incredible! The tandem operates at 10x the performance of the best K5200! This is a must-have for every computer laboratory that wishes to be up to date, allowing team members or students to render in minutes what would take hours or days! I can hear Dr. Cray saying, “Yes, more speed!”

  2. This test would make more sense if the benchmarks were also run with two Titan RTX cards but WITHOUT NVLink connected. Then you’d better understand whether your app is actually getting any benefit from it. NVLink can degrade performance in applications that are not tuned to take advantage of it (meaning two GPUs will be better than two plus NVLink in some situations).

  3. Great review, yes – thanks!
    2x 2080 Ti would be nice for a comparison. Benchmarks not constrained by memory size would show similar performance to 2x Titan at half the cost.
    It would also be interesting to see CPU usage for some of the benchmarks. I have seen GPUs being held back by single-threaded Python performance for some ML workloads on occasion. Have you checked for CPU bottlenecks during testing? This is a potential explanation for some benchmarks not scaling as expected.
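The single-threaded Python bottleneck this commenter describes can be spotted without a full profiler: compare process CPU time to wall-clock time while the workload runs. A ratio near 1.0 on a multi-core machine suggests one Python thread is pegged (e.g. an input pipeline starving the GPUs). A minimal standard-library sketch; the two workload lambdas below are invented stand-ins, not anything from the review:

```python
import time

def cpu_utilization(fn):
    """Run fn and return process CPU time divided by wall-clock time.

    On a multi-core box, a result near 1.0 means the work is pinned to
    roughly one CPU thread -- a hint that a single-threaded feeder
    (e.g. a Python data pipeline) may be the bottleneck, not the GPUs.
    """
    t0_wall, t0_cpu = time.perf_counter(), time.process_time()
    fn()
    wall = time.perf_counter() - t0_wall
    cpu = time.process_time() - t0_cpu
    return cpu / wall

# Fully CPU-bound busy loop: ratio should be close to 1.0.
busy = cpu_utilization(lambda: sum(i * i for i in range(2_000_000)))

# Pure waiting (like a GPU-bound step): ratio should be close to 0.0.
idle = cpu_utilization(lambda: time.sleep(0.2))

print(f"busy ratio ~{busy:.1f}, idle ratio ~{idle:.1f}")
```

Sampling this ratio around a training step is a quick first check before reaching for heavier tools like `py-spy` or Nsight Systems.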

  4. Literally any AMD GPU loses even compared to the slowest RTX card in 90% of the tests… In INT32/INT64 they don’t even deserve to be on the chart.

  5. @Lucusta
    Yep, the Radeon VII really shines in this test. The $700 Radeon VII is only 10% faster than the $4,000 Quadro RTX 6000 in programs like DaVinci Resolve. It’s a horrible card.

  6. @Misha
    A useless comparison: a pro card vs. a non-pro card in a generic GPGPU program (no viewport, so why don’t you say RTX 2080?)… The new Vega VII is comparable to the $1,000 single-slot Quadro RTX 4000 (Puget review)… In compute Vega 2 wins; in viewport/SPECviewperf it loses…

  7. @Lucusta
    MI50 ~ Radeon VII, and there is also an MI60.
    The Radeon VII (15 fps) beats the Quadro RTX 8000 (10 fps) by 50% at 8K in Resolve when doing noise reduction (the Quadro RTX 4000 does 8 fps).
    Most if not all benchmarking programs for CPUs and GPUs are more or less useless; test real programs.
    That’s how Puget does it, and Tom’s Hardware is also pretty good at testing with real programs.
    Benchmark programs are for gamers, or for just being the highest on the internet in some kind of benchmark.

  8. You critique that many benchmarks did not show the power of NVLink and pooled memory when using the two cards in tandem. But why did you not choose benchmarks that do, and even more importantly, why did you not set up your TensorFlow and PyTorch test bench to actually showcase the difference between NVLink and no NVLink?

    It’s a disappointing review in my opinion because you set out a premise and did not even test that premise; hence the test was quite useless.

    Here is my suggestion: set up a deep learning training and inference test bench that displays actual GPU memory usage, the difference in performance with and without NVLink bridges, and performance when two cards are used in parallel (equally distributed workloads) vs. allocating one part of a model to one GPU and another part of the same model to the other GPU by utilizing pooled memory.

    This is a very lazy review in that you just ran a few canned benchmark suites over different GPUs; hence the test results are equally boring. It’s a fine review for rendering folks, but it’s a very disappointing review for deep learning people.

    I think you can do better than that. PyTorch and TensorFlow have some very simple ways to allocate workloads to specific GPUs. It’s not that hard and does not require a PhD.
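The manual placement this commenter mentions really is only a few lines in PyTorch. A hedged sketch of the split-model setup they describe: the network and layer sizes are invented for illustration, and the code falls back to CPU when fewer than two GPUs are present so it runs anywhere. The inter-device copy in `forward` is the transfer that would cross the NVLink bridge on a dual Titan RTX system:

```python
import torch
import torch.nn as nn

# Pick two devices; degrade gracefully when <2 GPUs are available.
dev0 = torch.device("cuda:0" if torch.cuda.device_count() >= 1 else "cpu")
dev1 = torch.device("cuda:1" if torch.cuda.device_count() >= 2 else dev0)

class TwoStageNet(nn.Module):
    """Toy model-parallel network: stage 1 on dev0, stage 2 on dev1."""

    def __init__(self):
        super().__init__()
        self.stage1 = nn.Linear(1024, 2048).to(dev0)
        self.stage2 = nn.Linear(2048, 10).to(dev1)

    def forward(self, x):
        x = torch.relu(self.stage1(x.to(dev0)))
        # On a dual-GPU box, this .to(dev1) is the activation transfer
        # that NVLink (vs. PCIe) accelerates.
        return self.stage2(x.to(dev1))

model = TwoStageNet()
out = model(torch.randn(32, 1024))
print(out.shape)  # torch.Size([32, 10])
```

Timing this forward pass with the NVLink bridge installed and removed would directly test the review's premise, which is essentially what commenters 2 and 8 are asking for.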
