ZOTAC GeForce RTX 2070 Super Twin Fan Review


Zotac GeForce RTX 2070 Super Twin Fan Deep Learning Benchmarks

As we continue to innovate on our review format, we are now adding deep learning benchmarks. In future reviews, we will add more results to this data set.

ResNet-50 Inferencing Using Tensor Cores

ImageNet is an image classification database, started in 2007, designed for use in visual object recognition research. It is organized according to the WordNet hierarchy, with hundreds of example images representing each node (or category of specific nouns).

In our inferencing benchmarks, we run a ResNet-50 model trained in Caffe from the command line as follows:

```
nvidia-docker run --shm-size=1g --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --rm -v ~/Downloads/models/:/models -w /opt/tensorrt/bin nvcr.io/nvidia/tensorrt:18.11-py3 giexec --deploy=/models/ResNet-50-deploy.prototxt --model=/models/ResNet-50-model.caffemodel --output=prob --batch=16 --iterations=500 --fp16
```

Options are:
--deploy: Path to the Caffe deploy (.prototxt) file used for training the model
--model: Path to the model (.caffemodel)
--output: Output blob name
--batch: Batch size to use for inferencing
--iterations: The number of iterations to run
--int8: Use INT8 precision
--fp16: Use FP16 precision (for Volta or Turing GPUs); if neither flag is specified, FP32 is used

We vary the batch size across 16, 32, 64, and 128, and the precision across INT8, FP16, and FP32.
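
The full sweep can be sketched as a pair of nested loops. Here "run_giexec" is a hypothetical stand-in for the full giexec command above, not a real binary:

```shell
# Hypothetical sketch of the benchmark sweep: every batch size at every
# precision. run_giexec stands in for the full giexec command shown above.
run_giexec() { echo "batch=$1 precision=$2"; }

for batch in 16 32 64 128; do
  for prec in INT8 FP16 FP32; do
    run_giexec "$batch" "$prec"
  done
done
```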

The results are reported as inference latency (in seconds). Dividing the batch size by the latency gives the throughput (in images/sec), which is what we plot on our charts.
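
As a quick worked example of that conversion (the 0.010 s latency here is a made-up illustrative number, not a measured result):

```shell
# Throughput = batch size / latency.
# A batch of 16 at a hypothetical 0.010 s latency works out to 1600 images/sec.
batch=16
latency=0.010
awk -v b="$batch" -v l="$latency" 'BEGIN { printf "%.0f images/sec\n", b / l }'
```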

We also found that this benchmark does not use two GPUs; it only runs on a single GPU. You can, however, run separate instances on each GPU using commands like:
```
NV_GPUS=0 nvidia-docker run ... &
NV_GPUS=1 nvidia-docker run ... &
```

With these commands, a user can scale workloads across multiple GPUs. Our graphs show combined totals.
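
The pattern above can be sketched in a runnable form. In this simulation, "launch" is a hypothetical stand-in for one full nvidia-docker invocation; the point is the background-and-wait structure:

```shell
# Simulated sketch of the per-GPU launch pattern. Each launch call stands in
# for one nvidia-docker run command, pinned to a single GPU via NV_GPUS.
launch() { echo "instance on GPU $1 finished"; }

NV_GPUS=0 launch 0 &
NV_GPUS=1 launch 1 &
wait   # block until both background instances complete
```

Because each instance runs independently, the combined throughput is simply the sum of the per-GPU results, which is how our charts report multi-GPU totals.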

We start with Turing’s new INT8 mode which is one of the benefits of using the NVIDIA RTX cards.

ZOTAC RTX 2070 SUPER ResNet 50 Inferencing INT8

Using INT8 precision is by far the fastest inferencing method; if at all possible, converting code to INT8 will yield faster runs.

We see the Zotac GeForce RTX 2070 Super Twin Fan just passing the ASUS Turbo RTX 2080 here.

Let us look at FP16 and FP32 results.

ZOTAC RTX 2070 SUPER ResNet 50 Inferencing FP16
ZOTAC RTX 2070 SUPER ResNet 50 Inferencing FP32

Again, the NVIDIA GeForce RTX 2070 Super is no match for the RTX 2080 Ti, but it does beat the RTX 2080 here.

ResNet-50 Training Using Tensor Cores and TensorFlow

We also wanted to train the venerable ResNet-50 using TensorFlow. During training, the neural network learns features of images (e.g. objects, animals, etc.) and determines which features are important. Periodically (every 1000 iterations), the neural network tests itself against the test set to determine training loss, which affects the accuracy of the trained network. Accuracy can be increased through repetition (running a higher number of epochs).

The command line we will use is:

```
nvidia-docker run --shm-size=1g --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 -v ~/Downloads/imagenet12tf:/imagenet --rm -w /workspace/nvidia-examples/cnn/ nvcr.io/nvidia/tensorflow:18.11-py3 python resnet.py --data_dir=/imagenet --layers=50 --batch_size=128 --iter_unit=batch --num_iter=500 --display_every=20 --precision=fp16
```

Parameters for resnet.py:
--layers: The number of neural network layers to use, i.e. 50.
--batch_size or -b: The number of ImageNet sample images to use for training the network per iteration. Increasing the batch size will typically increase training performance.
--iter_unit or -u: Specify whether to run batches or epochs.
--num_iter or -i: The number of batches or iterations to run, i.e. 500.
--display_every: How frequently training performance will be displayed, i.e. every 20 batches.
--precision: Specify FP32 or FP16 precision, which also enables Tensor Core math for Volta and Turing GPUs.

While this TensorFlow script cannot specify individual GPUs to use, they can be selected by setting export CUDA_VISIBLE_DEVICES= to a comma-separated list of GPU IDs (i.e. 0,1,2,3) within the Docker container workspace.
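
For example, inside the container workspace (the GPU IDs here are illustrative):

```shell
# Expose only GPUs 0 and 1 to TensorFlow; all other GPUs become invisible
# to the training script.
export CUDA_VISIBLE_DEVICES=0,1
echo "TensorFlow will see GPUs: $CUDA_VISIBLE_DEVICES"
```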

We will run batch sizes of 16, 32, 64, and 128, in both FP16 and FP32 precision. Our graphs show combined totals.

Some GPUs, such as the new Super cards as well as the GeForce RTX 2060, RTX 2070, RTX 2080, and RTX 2080 Ti, will not show results at the higher batch sizes because of limited memory.

ZOTAC RTX 2070 SUPER ResNet 50 Training FP16
ZOTAC RTX 2070 SUPER ResNet 50 Training FP32

Here, we see the performance of the Zotac GeForce RTX 2070 Super Twin Fan well above that of the GeForce RTX 2070 it replaces.

Next, we are going to look at the Zotac GeForce RTX 2070 Super Twin Fan power and temperature tests and then give our final words.

3 COMMENTS

  1. Can I downclock this card’s GPU, for example by limiting the maximum GPU frequency to 1000MHz? In this case, I will use it for neural network training that can last several days, running 24 hours a day.

  2. This is really a good graphic card. It was much awaited. I have already ordered one and it’s on the way.
    Thank you for providing the detailed review. You’ve covered every aspect, much appreciated.

  3. Where are the gaming benchmarks, and how loud is the card?

    This review answers none of the questions gamers want to see 🙁
