Major Linux-Bench September 2014 Update – Bug Fix Edition

Linux-Bench Web Parser

Linux-Bench is a project we sponsor here. The idea is to have a quick and dirty benchmark suite that anyone can run on a major Linux distribution such as Ubuntu or CentOS. Since the first version was released in July, there have been a number of improvements, but a few nagging issues have also come up on the forums.

One example is the web-based log parser we started working on. The premise was simple: reading through thousands of lines of log files is no fun, so we wanted a feature that outputs the logs in a usable format, preferably on a web page.

The first version of the Linux-Bench web parser came out in August 2014 and was a major upgrade, parsing much of the information automatically. We then found a set of results that could not be parsed due to an output issue; we fixed that output in the script and the parser worked, usually. There remained an extremely annoying intermittent “500” error where the curl command that sends the log file to the parser would randomly fail. Even on the same machine, one run of Linux-Bench would produce the error and the next would not. This week, that error was fixed.
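For anyone hitting a similar intermittent server error, a common workaround is simply to retry the upload a few times with a short backoff. The sketch below is illustrative only: the endpoint URL and form field name are assumptions rather than the actual Linux-Bench parser API, and it uses Python's requests library instead of curl.

```python
# Minimal sketch of retrying an upload that intermittently returns HTTP 500.
# The endpoint URL and form field name are assumptions for illustration,
# not the actual Linux-Bench parser API.
import time
import requests

UPLOAD_URL = "http://linux-bench.com/upload"  # hypothetical endpoint


def upload_log(path, retries=5, delay=2.0):
    """Upload a Linux-Bench log, retrying on transient server errors."""
    for attempt in range(1, retries + 1):
        with open(path, "rb") as fh:
            resp = requests.post(UPLOAD_URL, files={"logfile": fh}, timeout=30)
        if resp.status_code < 500:
            return resp  # success, or a client-side error not worth retrying
        print(f"Attempt {attempt}: server returned {resp.status_code}, retrying...")
        time.sleep(delay * attempt)  # simple linear backoff
    raise RuntimeError("Upload kept failing with server errors")
```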

Linux-Bench Web Parser

The next plan for the web parser is to stand up a RESTful API so that you can share links to results directly instead of having to say, for example, “go to http://linux-bench.com/parser.html and use result number 21031409750541.” For those wondering, that result ties to a dual Intel Xeon L5638 run in a Dell C6100. Right now, the primary place to share and discuss results is on the Linux-Bench related forums.
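Once such an API exists, retrieving a parsed result could be as simple as one GET request. The sketch below is purely hypothetical since the API has not been built yet; the endpoint path and JSON response shape are assumptions, with only the result number taken from the example above.

```python
# Hypothetical sketch of fetching a parsed Linux-Bench result through a
# future RESTful API. The endpoint path and response format are assumptions.
import requests

API_BASE = "http://linux-bench.com/api/results"  # hypothetical REST endpoint


def fetch_result(result_id):
    """Retrieve a parsed Linux-Bench result as JSON."""
    resp = requests.get(f"{API_BASE}/{result_id}", timeout=30)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # Result number from the example above (dual Intel Xeon L5638 in a Dell C6100)
    result = fetch_result("21031409750541")
    print(result)
```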

We found another issue recently while using Linux-Bench for the Tom’s Hardware Intel Xeon E5-2600 V3 “Haswell-EP” launch: results were being parsed incorrectly. A few values were wrong, so for the Tom’s Hardware piece I had to manually comb through the log files, copy the results into a spreadsheet, and then create all of the charts. That was an extremely time-consuming process. The positive is that it provided a good data set showing where the errors were being produced in the web view.
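For reference, the manual step described above roughly amounts to scanning each log for score lines and collecting them in a CSV that a spreadsheet can chart. The sketch below shows that idea under an assumed “Test name: score” log line format, which is not necessarily how Linux-Bench logs are actually laid out.

```python
# Rough sketch of pulling benchmark scores out of raw log files into a CSV
# for charting. The regular expression and log layout are assumptions made
# for illustration, not the real Linux-Bench log format.
import csv
import glob
import re

# Assumed line format: "Test name: 123.4"
SCORE_RE = re.compile(r"^(?P<test>[\w .-]+):\s*(?P<score>[\d.]+)\s*$")


def collect_scores(log_dir, out_csv):
    """Scan *.log files in log_dir and write matched scores to out_csv."""
    rows = []
    for path in glob.glob(f"{log_dir}/*.log"):
        with open(path) as fh:
            for line in fh:
                match = SCORE_RE.match(line.strip())
                if match:
                    rows.append([path, match.group("test"), match.group("score")])
    with open(out_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["logfile", "test", "score"])
        writer.writerows(rows)
```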

At this point, it seems as though the Linux-Bench test script is working across major distributions.
