AMD Radeon Vega Frontier Edition Compute Related Benchmarks
The 16GB of HBM2 is undoubtedly a great feature, making the GPU well suited to workloads dominated by larger problem sizes. We wanted to look at the compute performance to see where the card falls in our growing GPU compute performance database.
The next GPUs in our cycle are going to be the NVIDIA RTX and NVIDIA Tesla cards, but we wanted to get some of these numbers out before we discuss those.
Geekbench 4 measures the compute performance of your GPU using workloads ranging from image processing to computer vision to number crunching.
While the Vega FE, as an AMD card, has no CUDA score, it posts a good OpenCL score, slightly higher than that of the AMD Radeon Pro WX 8200.
LuxMark is an OpenCL benchmark tool based on LuxRender.
The Vega FE takes the lead here with a considerable performance jump over the Radeon Pro WX 8200, easily posting the highest OpenCL-based LuxMark score in our set.
These benchmarks are designed to measure GPGPU computing performance via different OpenCL workloads.
Single-Precision FLOPS: Measures the classic MAD (Multiply-Addition) performance of the GPU, otherwise known as FLOPS (Floating-Point Operations Per Second), with single-precision (32-bit, “float”) floating-point data.
Double-Precision FLOPS: Measures the classic MAD (Multiply-Addition) performance of the GPU, otherwise known as FLOPS (Floating-Point Operations Per Second), with double-precision (64-bit, “double”) floating-point data.
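As a rough illustration of how peak FLOPS figures like these relate to the hardware, here is a minimal Python sketch. A MAD counts as two floating-point operations per instruction, so peak throughput is roughly cores × clock × 2. The numbers below are illustrative assumptions (4096 stream processors, a ~1.6 GHz clock, and a 1/16-rate FP64 path are typical published specs for Vega 10), not values measured by these benchmarks.

```python
# Sketch: deriving theoretical peak GFLOPS from a MAD-capable GPU's specs.
# A fused multiply-add d = a*b + c counts as two floating-point operations.

def peak_gflops(shader_cores: int, clock_ghz: float,
                ops_per_core_per_clock: int = 2) -> float:
    """Theoretical peak GFLOPS assuming one MAD (2 ops) per core per clock."""
    return shader_cores * clock_ghz * ops_per_core_per_clock

# Illustrative assumptions for a Vega-class part, not measured results:
fp32 = peak_gflops(4096, 1.6)   # single precision
fp64 = fp32 / 16                # FP64 runs at 1/16 the FP32 rate on Vega 10
print(f"FP32 ~ {fp32:.1f} GFLOPS, FP64 ~ {fp64:.1f} GFLOPS")
```

Measured AIDA64 numbers land below this theoretical ceiling, but the ratio between the FP32 and FP64 scores tracks the hardware's execution-rate ratio closely.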
Here, single-precision GFLOPS come in just below what we see from our GTX 1080 Ti and well above the AMD Radeon Pro WX 8200. Again, the AMD Radeon Vega Frontier Edition 16GB card is less expensive than the NVIDIA GTX and newer NVIDIA RTX cards at this level of performance. In terms of raw double-precision performance, the Vega FE delivers almost double that of the GTX 1080 Ti.
The next set of benchmarks from AIDA64 are focused on IOPS.
24-bit Integer IOPS: Measures the classic MAD (Multiply-Addition) performance of the GPU, otherwise known as IOPS (Integer Operations Per Second), with 24-bit integer (“int24”) data. This particular data type is defined in OpenCL on the basis that many GPUs are capable of executing int24 operations via their floating-point units.
32-bit Integer IOPS: Measures the classic MAD (Multiply-Addition) performance of the GPU, otherwise known as IOPS (Integer Operations Per Second), with 32-bit integer (“int”) data.
64-bit Integer IOPS: Measures the classic MAD (Multiply-Addition) performance of the GPU, otherwise known as IOPS (Integer Operations Per Second), with 64-bit integer (“long”) data. Most GPUs do not have dedicated execution resources for 64-bit integer operations, so instead, they emulate the 64-bit integer operations via existing 32-bit integer execution units.
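Both of these points can be shown concretely. A 32-bit float has a 24-bit significand, which is exactly why int24 math can ride on the floating-point units without losing precision, and a 64-bit add can be emulated with two 32-bit adds plus a carry, which is why 64-bit IOPS scores are typically a fraction of the 32-bit scores. A small Python sketch of both ideas:

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python float to IEEE-754 single precision and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Every integer up to 2**24 survives a float32 round-trip exactly,
# so int24 arithmetic maps cleanly onto the FP32 units...
assert all(to_f32(float(n)) == n for n in (0, 1, 2**24 - 1, 2**24))
# ...but 2**24 + 1 does not: the 24-bit significand runs out of room.
assert to_f32(float(2**24 + 1)) != 2**24 + 1

MASK32 = (1 << 32) - 1

def add64_via_32(a: int, b: int) -> int:
    """Emulate a 64-bit add with two 32-bit adds plus carry propagation,
    the way GPUs without native 64-bit integer units do it."""
    lo = (a & MASK32) + (b & MASK32)
    carry = lo >> 32
    hi = ((a >> 32) + (b >> 32) + carry) & MASK32
    return (hi << 32) | (lo & MASK32)

assert add64_via_32(0xFFFFFFFF, 1) == 0x1_0000_0000  # carry propagates
assert add64_via_32(2**64 - 1, 1) == 0               # wraps mod 2**64
```

Because each emulated 64-bit operation consumes multiple 32-bit instructions, the 64-bit IOPS results below should be read with that overhead in mind.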
The Vega FE smashes the 24-bit integer IOPS test with impressive results. This is a case where the workload will dictate which card is the best solution.
SPECviewperf 12 measures the 3D graphics performance of systems running under the OpenGL and DirectX application programming interfaces.
As you can see, the performance is solid, and on a current price/performance basis it is extremely competitive. When NVIDIA raised prices on its latest GeForce RTX series, the AMD Radeon Vega FE became a better value.
We have only started using the new SPECworkstation 3 benchmark, so we do not have a full set of graphics cards to compare. Still, we wanted to provide results for comparison.
SPECworkstation 3 output using the Professional Drivers.
SPECworkstation 3 output using the Gaming Drivers.
Graphics-related benchmarks
Here we will run the Vega FE through all of our graphics-related benchmarks.
The Vega FE is not strictly a gaming video card; it is targeted at professionals who want to switch between Pro and Gaming modes to test applications and even grab some game time at the end of the day.
Next, we are going to look at power and temperature tests before giving our final thoughts.