We are still a few days away from launching the full data set at Linux-Bench.com. Thus far, about 42 different physical and cloud platforms have been benchmarked. We decided to take the script for a spin on the new Amazon AWS T2 instances to see whether the new Linux benchmark test script could detect signs of a slowdown on the burstable AWS EC2 T2 instances.
The AWS T2 instances are relatively lightweight. As a result, they sit at or below the minimum performance threshold we looked for when profiling platforms for Linux-Bench. The challenge is scaling from a small VM to 4-way and larger systems across multiple tests. If the platform is decently fast, the script can now be run from an Ubuntu 14.04 LTS LiveCD in three commands, with a total run time of about an hour on most platforms.
The new Amazon AWS T2 instance type is burstable. The original t1.micro instance was also burstable for short periods of time. Amazon has a new concept with the EC2 T2 instances where one accrues "credits" for every hour of usage. Here is a quick breakdown of the three new EC2 T2 instance types:
|Name|vCPUs|Baseline Performance|RAM (GiB)|CPU Credits / Hour|Price / Hour|Price / Month|
|---|---|---|---|---|---|---|
|t2.micro|1|10%|1|6|$0.013|$9.50|
|t2.small|1|20%|2|12|$0.026|$19|
|t2.medium|2|40%|4|24|$0.052|$38|
The basic concept is that one can run burstable workloads at higher levels of performance using less expensive instances. Here is a link to the announcement for those who have not seen it. The quick summary of how this burstable performance works is:
[quote]As listed in the table above, each T2 instance receives CPU Credits at a rate that is determined by the size of the instance. A CPU Credit provides the performance of a full CPU core for one minute.
For example, a t2.micro instance receives credits continuously at a rate of 6 CPU Credits per hour. This capability provides baseline performance equivalent to 10% of a CPU core. If at any moment the instance does not need the credits it receives, it stores them in its CPU Credit balance for up to 24 hours. If your instance doesn’t use its baseline CPU for 10 hours (let’s say), the t2.micro instance will have accumulated enough credits to run for almost an hour with full core performance (10 hours * 6 CPU Credits / hour = 60 CPU Credits).[/quote]
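The quoted example works out neatly in shell arithmetic. A quick sketch using the t2.micro rates from the quote above (the 144-credit ceiling is derived from the 24-hour storage limit, not stated directly by Amazon):

```shell
#!/bin/bash
# CPU credit math for a t2.micro, using the rates from the quote above:
# credits accrue at 6 per hour, and 1 credit buys 1 full-core minute.
CREDITS_PER_HOUR=6
IDLE_HOURS=10

banked=$((CREDITS_PER_HOUR * IDLE_HOURS))   # 60 credits after 10 idle hours
echo "Banked credits: $banked"
echo "Full-core burst: $banked minutes (~$((banked / 60)) hour)"

# Credits are stored for up to 24 hours, so the balance tops out at:
max_banked=$((CREDITS_PER_HOUR * 24))       # 144 credits
echo "Maximum balance: $max_banked credits"
```

That matches Amazon's worked example: 10 idle hours bank 60 credits, which is almost an hour of full-core performance.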
Naturally, it was time to take Linux-Bench and try the new instances out. I recently started a small thread in the forums on the topic of benchmarking other AWS instances. We also fixed a bug where the default AWS Ubuntu AMIs would not run Linux-Bench. Once that was fixed, Linux-Bench ran perfectly from a single curl command.
Also, and somewhat coincidentally, I just received the new ASUS P9A-I/C2550/SAS/4L, which is going to be a killer cold storage motherboard in the lab. That motherboard uses a quad-core Intel Atom C2550 processor, which sits at roughly the lower end of what we were targeting Linux-Bench for. So here I had three ultra-inexpensive instances and a bare metal platform that could be built with 16GB of RAM for about the same cost as a t2.medium instance.
For the test I did something very simple. Of course, more formal testing will ensue, but for now consider this the "quick and dirty" method of testing. One t2.micro, one t2.small, and one t2.medium instance were set up using the Ubuntu 14.04 LTS default distribution. The ASUS P9A-I/C2550/SAS/4L was set up with 4GB of 1600MHz 1.35V RAM (2x 2GB sticks) and the Ubuntu 14.04 LiveCD was mounted via the ASUS iKVM solution. On the ASUS platform I installed curl and openssh-server via apt-get so I could run the tests on all four systems using the same base command:
[slider][pane]time (bash <(curl -sk https://raw.githubusercontent.com/STH-Dev/linux-bench/master/STHbench-Dev012.11.sh))[/pane][/slider]
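For reference, the only prep the bare metal box needed before that one-liner was curl and an SSH server (package names as on Ubuntu 14.04; the default AWS Ubuntu AMIs already ship with curl):

```shell
# Prep for the ASUS box booted from the Ubuntu 14.04 LiveCD; the AWS
# instances skip this since their AMIs already include curl.
sudo apt-get update
sudo apt-get install -y curl openssh-server

# Sanity check before kicking off the hour-plus benchmark run:
curl --version | head -n 1
```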
Note – we are moving this file to http://linux-bench.com/ once the uploader/parser is finished, but for now we are using the development version.
Amazon AWS T2 instance benchmarks
I extracted the first two c-ray benchmark results, colloquially named "easy" and "medium". The "hard" run is a destroyer of low-end processors, while Haswell-EP (Xeon E5 V3) chips would have zipped through the "medium" run too fast.
c-ray is the fourth test in the Linux-Bench CPU script, preceded by hardinfo, UnixBench single-threaded, and UnixBench multi-threaded. The single-vCPU instances skip the UnixBench multi-threaded tests since they only have one thread to work with.
Since I had all four environments running simultaneously, it seemed like the new T2 instances were going to give the Atom C2550 a rough day. After stepping away to send a few e-mails, I came back to find the Intel Atom C2550 had finished the benchmark while the AWS T2 instances were all still working. Luckily, I had the time command running so I could see the results:
For the Atom C2550 and the AWS t2.small and t2.medium, I copied the log files off the server and then restarted the script. In theory there are fewer things to update, so the script should run slightly faster the second time. The t2.micro took about 410 minutes. That is 10 minutes shy of 7 hours, and it was not running either UnixBench multi-threaded or the 7-zip benchmark (a bug on sub-4GB memory systems that is being fixed). The t2.small likewise skipped those two benchmarks.
One can see that on the c-ray test, the t2.medium looked like it was going to wipe the floor with the 14W TDP Intel Atom C2550. By the end of the first run, however, it took the t2.medium about 57% longer to do the same amount of work, and by the end of the second run almost twice as long.
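The slowdown figures are simple wall-clock ratios. A quick sketch of the arithmetic with placeholder run times (the real numbers are in the logs above; these values are purely illustrative):

```shell
# slowdown = (slow - fast) / fast, expressed as a percentage.
# The run times below are hypothetical stand-ins, not the measured values.
atom_minutes=100
t2medium_minutes=157

awk -v fast="$atom_minutes" -v slow="$t2medium_minutes" \
    'BEGIN { printf "t2.medium took %.0f%% longer\n", (slow - fast) / fast * 100 }'
```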
For those wondering, the t2.micro was spared a second run. It was slow enough to clock with a sundial; however, in the almost seven hours it took to run, the sun had long set.
On one hand, I really like the idea of burstable performance instances. This would be great for smaller WordPress websites and what have you. On the other hand, on a “sustained” workload the $400 bare metal Intel Atom server was much faster. The Linux-Bench benchmark script runs a variety of tests and we will have detailed results added to the online viewer at http://Linux-Bench.com once it is formally launched. If you do testing, please feel free to send logs in since we are validating the parser right now and can always use extra test data.