As part of our Going Colo series, Part 2 centers on picking the site for our second colo facility. In Part 1 we re-evaluated the notion of moving to a colo versus using AWS for a second STH site, and AWS was still more expensive two years after our initial analysis. Realistically, site selection should be part of the colo versus cloud decision, but it does require some time. Las Vegas is a low disaster-risk location and home to our first colocation site at Fiberhub. For the second site, we therefore wanted to be at least 150 miles from Las Vegas. Here is a look at the factors behind our choice of a colocation site.
Looking for a local colocation
One of the key items we wanted with our colocation was a local data center. Believe it or not, we could not find another review site that regularly reviews server hardware in data centers. Instead, most reviews are done in homes and converted offices. Certainly, lighting is much better in those environments for photos, but dealing with equipment in a data center environment is very different from working in the same room or the room next door.
Cutting to the big answer, we ended up getting a full cabinet at Hurricane Electric in Fremont, CA. This data center sits practically on top of the Hayward Fault, but it has moderate pricing (with a good move-in promotion usually available for a month or three) and is only 15 miles and 15 minutes from Mountain View, California. As a primary data center this is probably not an optimal location, but proximity was a key driver in getting the cabinet in the first place. Fremont is in the San Francisco Bay Area at the northern end of Silicon Valley, and it is also home to the local headquarters of companies such as ASUS USA.
At the end of the day, the HE.net folks have been around for quite some time. Some hosting companies, like Linode, use it as a primary data center. The operation is much larger than Fiberhub in Las Vegas (much larger).
We shortlisted local vendors based on price, features, and other criteria using a template we will be making publicly available soon (stay tuned). In the end, Hurricane Electric was the winning provider.
Taking a look at the HE rack
There are a number of caveats. Right now, we are still on a 120V/15A circuit. Frankly, we did not need that much power, but we do want to move to a 220V setup soon as it is more efficient. Also, Hurricane Electric supplies cabinets with round-hole posts. For some servers that is great, but we get rail kits from Dell, Intel, Supermicro, and others that all use square holes. There was a $200 charge for square-hole posts (which I was not happy about).
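As a rough sketch of why moving up from a 120V/15A circuit is attractive: a circuit's usable power is voltage times amperage, conventionally derated to 80% for continuous loads (the common NEC rule of thumb), and power supplies also tend to run slightly more efficiently at higher input voltages. The 208V/30A figures below are an assumed upgrade target for illustration, not the exact circuit HE offers:

```python
def usable_watts(volts, amps, derate=0.8):
    """Usable continuous power on a circuit, after the common 80% derating."""
    return volts * amps * derate

# Our current circuit vs. a hypothetical higher-voltage upgrade.
current = usable_watts(120, 15)   # 1440 W usable
upgrade = usable_watts(208, 30)   # 4992 W usable

print(f"120V/15A: {current:.0f} W usable")
print(f"208V/30A: {upgrade:.0f} W usable")
```

In other words, a single higher-voltage circuit can carry several times the continuous load of our current one, which matters as the cabinet fills up.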
The cabinet at HE did come extremely bare. To give you an idea of what move in looked like:
That is a bare cabinet! Not even the posts were screwed down, which does provide for mounting flexibility. A few hours of work later, we had a basic setup running. HE did provide us with a Tripp Lite Zero U PDU. Our first unit showed 0 amps with six machines plugged in, so HE replaced the malfunctioning unit. One other gripe: there were no U markings on the posts. That made for a much longer installation, even with a sparsely populated 15A/120V rack. Just for comparison, our Las Vegas colo offered free rack and stack at move-in and when we upgraded cabinets; that is a very nice service which is worth quite a bit in itself. Here is a shot of the cabinet closed up and locked while work was still in progress.
HE has onsite support and 24×7 access. Also, unlike some other premium providers, HE allows you to work on equipment unsupervised. Other data centers require that you be escorted for the duration of your work, and some even charge for this service. Since we will actively be at HE, we needed easy physical access, and they had it. We will have more on the HE experience in a later piece.
We have already had vendor reps tour the facility and, as of this week, have become the first hardware review site to test hardware in an actual data center. Granted, this is not a consideration for many sites, but we needed access to the rack for hardware reviews and wanted to have local vendors stop by. Here is a quick look at the Supermicro system we had in the data center for two weeks as part of its review.
Aside from shelf space, we also have rackmount server space available. One can see the Dell PowerEdge R220 in the rack; it will have its review time soon. The big issue we are having right now is that the lighting is not great. While great for conserving power, it is not ideal for photos. We have been working on a portable lighting setup to remedy this. Of course, we are among the few folks who actually take photos in data centers.
One positive point of being located in Silicon Valley is that, despite the expense, a high portion of the companies we work with have a presence here, so convenience is a real factor.
Why local matters
Our first colocation site was chosen based on price and the stability of the location. When hardware fails there, it still means getting on a plane for an hour to go fix it; alternatively, we can use remote hands, which has a relatively low cost associated with it. This new colo facility has been great in that regard. We had a platform slated to be one of the Linux-Bench hosting nodes that became extremely unstable after installation. We were able to drive 15 minutes on a Saturday to swap out the entire box, and re-install a new ASRock Xeon E5 V3 system (review coming soon) in its place two days later. Certainly, the convenience factor cannot be overstated, especially if you need to make frequent adjustments to your equipment. Even for STH, where hardware refresh possibilities are constant, the Las Vegas site sits static for a year at a time, while the Fremont facility gets upgraded every time we find a great deal on enterprise SSDs, for example.
More to come very soon. Thank you to the folks at Hurricane Electric for being friendly and helping get our second colocation site set up.
Feel free to follow the colocation #2 build log in the forums.