With the sheer number of CPUs and platforms I test for ServeTheHome.com, I have been keeping my own records on performance and scaling. Benchmarks not only yield very different results under Windows and Linux, but also scale differently with core count and clock speed. It turns out that Stanford’s Folding@home client, when proper optimizations have been applied, scales very well with both increased cores and increased clock speed. On Windows installations, CPU scaling is somewhat unpredictable. On Linux installations, with proper thread-to-core affinity settings, scaling is very close to linear: moving from 2P dodeca-core (twelve-core) Opteron 6100 series CPUs to quad dodeca-core Opteron 6100 series CPUs has yielded approximately 97% scaling. The same ~97% scaling factor applies to clock speed increases.

After finding this scaling factor, it became obvious that the community needed not just a standardized benchmark, but also a standardized way of reporting results. User self-reporting has been tried in the past, but it is too sporadic and too error-prone to use on a large scale.
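As a rough illustration of what that scaling figure means (this is not part of the benchmark itself, and the numbers below are hypothetical), scaling efficiency is just the observed speedup divided by the ideal linear speedup:

```python
def scaling_efficiency(cores_a, score_a, cores_b, score_b):
    """Ratio of observed speedup to ideal (linear) speedup.

    score_a/score_b can be any throughput metric, e.g. points per day.
    A result of 1.0 means perfectly linear scaling; ~0.97 matches the
    scaling described above.
    """
    ideal_speedup = cores_b / cores_a
    observed_speedup = score_b / score_a
    return observed_speedup / ideal_speedup

# Hypothetical numbers: doubling from 24 to 48 cores while
# throughput rises 1.94x gives 97% scaling efficiency.
print(scaling_efficiency(24, 1000, 48, 1940))  # -> 0.97
```

The same formula works for clock speed comparisons by substituting clock frequencies for core counts.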
The original FAH benchmark was created by musky, a top Folding@home contributor. It used a command-line interface with captured work units; by standardizing on specific work units, users could self-report comparable results for their hardware setups.
The breakthrough came when another contributor and forum member, Haitch, made musky’s benchmark report results to a MySQL database. Going a step further, Haitch made the benchmark analyze not just captured units, but also live work unit performance. His work has made it possible to build a database of performance results alongside the configurations used to achieve them. This is a real standout community contribution.
For those wondering: beyond a personal interest in the Folding@home project, one of the big reasons I wanted to host this benchmark on ServeTheHome is that it does a great job of showing the differences between server-class CPUs using a free benchmarking tool. Many benchmarks depend heavily on either clock speed or cores/threads, but few scale well with both. This benchmark does exactly that.
This is still very much a work in progress, but it may be one of the best free “real-world” HPC-style benchmarks available. Many applications one can run as benchmarks tell you little, and many useful benchmarks cost tens of thousands of dollars to run; for most users trying to get a general idea of performance, neither is practical. I have seen approximately 97% scaling with both clock speed and core count under Linux (up to 48 cores thus far).
Follow this link to the Folding@home Benchmark and feel free to contribute results to the community. We are still working on much of the functionality, but hopefully this benchmark will help show performance differences between CPUs in terms of clock speed, memory bandwidth, and core count. As a side effect, one can not only run a benchmark, but also assist in medical research.
Feel free to discuss this on the ServeTheHome.com Folding@home forum if you have questions or find a bug.